added initial roadmap and implementation

This commit is contained in:
2026-03-11 22:23:16 +02:00
parent 39b97a6db5
commit c0792f2e1d
62 changed files with 12410 additions and 1 deletions

20
.dockerignore Normal file
View File

@@ -0,0 +1,20 @@
.git
.github
.idea
.vscode
**/target
**/target/**
**/node_modules
**/node_modules/**
**/.next
**/.next/**
**/dist
**/dist/**
**/build
**/build/**
**/*.log
.DS_Store

6
.env Normal file
View File

@@ -0,0 +1,6 @@
DATABASE_URL=postgres://admin:admin_password@localhost:5433/madbase_control
PORT=8001
HOST=0.0.0.0
JWT_SECRET=supersecret
DEFAULT_TENANT_DB_URL=postgres://postgres:postgres@localhost:5432/postgres
RATE_LIMIT_PER_SECOND=100

5
.env.example Normal file
View File

@@ -0,0 +1,5 @@
DATABASE_URL=postgres://admin:admin_password@localhost:5433/madbase_control
DEFAULT_TENANT_DB_URL=postgres://postgres:postgres@localhost:5432/postgres
PORT=8001
HOST=0.0.0.0
JWT_SECRET=supersecret

4
.gitignore vendored
View File

@@ -16,3 +16,7 @@ target/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Integration Tests
tests/integration/node_modules/
tests/integration/.env

5316
Cargo.lock generated Normal file

File diff suppressed because it is too large Load Diff

43
Cargo.toml Normal file
View File

@@ -0,0 +1,43 @@
[workspace]
resolver = "2"
members = [
"common",
"gateway",
"auth",
"data_api",
"control_plane",
"realtime",
"storage",
]
[workspace.dependencies]
tokio = { version = "1.36", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
axum = "0.7"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "postgres", "uuid", "chrono", "json", "migrate"] }
uuid = { version = "1.7", features = ["v4", "serde"] }
thiserror = "1.0"
dotenvy = "0.15"
config = "0.13"
chrono = { version = "0.4", features = ["serde"] }
anyhow = "1.0"
argon2 = "0.5"
jsonwebtoken = "9.2"
rand = "0.8"
regex = "1.10"
futures = "0.3"
sha2 = "0.10"
aws-sdk-s3 = "1.15.0"
aws-config = "1.1.2"
aws-types = "1.1.2"
# Local dependencies
common = { path = "common" }
auth = { path = "auth" }
data_api = { path = "data_api" }
control_plane = { path = "control_plane" }
realtime = { path = "realtime" }
storage = { path = "storage" }

11
Dockerfile Normal file
View File

@@ -0,0 +1,11 @@
FROM rust:latest AS builder
WORKDIR /app
COPY . .
RUN cargo build --release --bin gateway
FROM debian:trixie-slim
WORKDIR /app
RUN apt-get update && apt-get install -y libssl-dev ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/gateway .
COPY web ./web
CMD ["./gateway"]

134
README.md
View File

@@ -1,2 +1,134 @@
# madbase
# MadBase
**MadBase** is an open-source, high-performance Backend-as-a-Service (BaaS) written in Rust. It serves as a lightweight alternative to Supabase, providing a comprehensive suite of tools for building modern web and mobile applications.
## 🚀 Features
MadBase consolidates the following services into a single, efficient binary:
* **🔐 Authentication (`/auth/v1`)**
* User Signup & Login (Email/Password).
* JWT-based Session Management.
* Row Level Security (RLS) integration with PostgreSQL.
* **💾 Data API (`/rest/v1`)**
* Auto-generated REST API for your Postgres tables.
* CRUD operations (Select, Insert, Update, Delete).
* Filtering, Pagination, and Ordering.
* Stored Procedure (RPC) calls.
* **⚡ Realtime (`/realtime/v1`)**
* WebSocket-based event streaming.
* Listen to database changes via Postgres `LISTEN/NOTIFY`.
* **📦 Storage (`/storage/v1`)**
* S3-compatible object storage (backed by MinIO).
* File Upload, Download, and Management.
* Integrated RLS permissions for buckets and objects.
* **🎛️ Control Plane (`/platform/v1`)**
* Project Management.
* Automatic API Key Generation (`anon` and `service_role`).
## 🛠️ Architecture
MadBase is built as a modular monolith in **Rust**, utilizing the **Axum** web framework for high performance and low latency.
* **Gateway**: The central entry point that routes requests to appropriate internal modules.
* **PostgreSQL**: The primary database for data, auth, and system state.
* **MinIO**: S3-compatible object storage.
## 🏁 Getting Started
### Prerequisites
* **Rust** (latest stable)
* **Docker** & **Docker Compose** (for DB and MinIO)
* **PostgreSQL Client** (optional, for debugging)
### Installation
1. **Clone the repository:**
```bash
git clone https://github.com/yourusername/madbase.git
cd madbase
```
2. **Start Infrastructure:**
Spin up PostgreSQL and MinIO using Docker Compose:
```bash
docker-compose up -d
```
3. **Run Migrations:**
Initialize the database schema:
```bash
sqlx migrate run
```
*(Note: You may need to install sqlx-cli: `cargo install sqlx-cli`)*
4. **Start the Gateway:**
Run the main server:
```bash
cargo run -p gateway
```
The server will start at `http://0.0.0.0:8000`.
## 📖 Usage Guide
### 1. Create a Project
Use the Control Plane to initialize a project and get your API keys.
```bash
curl -X POST http://localhost:8000/platform/v1/projects \
-H "Content-Type: application/json" \
-d '{"name": "my-awesome-app"}'
```
**Response:**
```json
{
"id": "...",
"anon_key": "eyJ...",
"service_role_key": "eyJ...",
...
}
```
*Save the `anon_key` and `service_role_key`!*
### 2. Authentication
Sign up a new user:
```bash
curl -X POST http://localhost:8000/auth/v1/signup \
-H "apikey: <ANON_KEY>" \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "securepassword"}'
```
### 3. Data Operations
Query a table (e.g., `users`):
```bash
curl -X GET "http://localhost:8000/rest/v1/users?select=*" \
-H "apikey: <ANON_KEY>" \
-H "Authorization: Bearer <USER_ACCESS_TOKEN>"
```
### 4. Realtime
Connect via WebSocket:
`ws://localhost:8000/realtime/v1`
### 5. Storage
Upload a file:
```bash
curl -X POST http://localhost:8000/storage/v1/object/my-bucket/image.png \
-H "apikey: <ANON_KEY>" \
-H "Authorization: Bearer <USER_ACCESS_TOKEN>" \
-F "file=@./local-image.png"
```
## 🗺️ Roadmap
See [ROADMAP.md](./ROADMAP.md) for detailed progress and future plans.
## 📄 License
MIT

146
ROADMAP.md Normal file
View File

@@ -0,0 +1,146 @@
# MadBase Development Roadmap
This document outlines the development plan for **MadBase**, a high-performance, resource-efficient, Supabase-compatible API layer written in Rust. The roadmap is derived from the requirements specified in [SPECIFICATIONS.md](./SPECIFICATIONS.md).
## Phase 1: Foundation & Core APIs (MVP)
**Goal:** Establish the single-binary architecture and deliver functional Auth and Data APIs for a single project context.
### 1.1 Project Scaffolding & Architecture
- [x] Initialize Rust workspace with modular crate structure (`gateway`, `auth`, `data_api`, `common`, `control_plane`).
- [x] Implement configuration management (Environment variables + .env).
- [x] Set up basic HTTP server (Axum/Actix) acting as the **Gateway**.
- [x] Implement connection pooling for PostgreSQL (SQLx or similar).
- [x] Create `docker-compose.yml` for dev database (compatible with Podman).
### 1.2 Authentication Service (`/auth/v1`)
- [x] Implement User model & schema (compatible with GoTrue/Supabase).
- [x] **Sign Up**: Email/password registration with Argon2 hashing.
- [x] **Sign In**: Email/password login returning JWTs.
- [x] **Token Management**:
- [x] Issue Access Tokens (JWT) with required claims (`sub`, `role`, `iss`, `iat`, `exp`) and optional (`aud`, `email`).
- [x] Issue Refresh Tokens and implement rotation logic.
- [x] **Session**: `/user` endpoint to retrieve current session.
### 1.3 Data API (PostgREST-lite) (`/rest/v1`)
- [x] **Query Parser**: Parse URL parameters for filtering, ordering, and pagination.
- [x] Filters: `eq`, `neq`, `lt`, `gt`, `in`, `is`.
- [x] Ordering: `order=col.asc|desc`.
- [x] Pagination: `limit`, `offset`.
- [x] **CRUD Operations**:
- [x] `GET`: Select rows (basic `select=*`).
- [x] `POST`: Insert rows.
- [x] `PATCH`: Update rows.
- [x] `DELETE`: Delete rows.
- [x] **RPC**: `POST /rpc/<function>` support for calling Postgres functions.
- [x] **RLS Enforcement**:
- [x] Implement transaction wrapping.
- [x] Inject claims via `SET LOCAL request.jwt.claims`.
- [x] Switch roles (`anon` vs `authenticated` vs `service_role`).
### 1.9 Podman Compose Deployment
Single `docker-compose.yml` (compatible with `podman-compose`) deploys:
- [x] **PostgreSQL**: Database for Auth and Data storage.
- [x] **MinIO**: Object storage for file uploads.
- [x] **Control Plane DB**: Stores project-specific config and secrets.
---
## Phase 2: Realtime & Storage
**Goal:** Enable real-time data subscriptions and object storage capabilities.
### 2.1 Realtime Service (`/realtime/v1`)
- [x] **WebSocket Server**: Implement using `axum` + `tungstenite`.
- [x] **Replication Consumer**:
- [x] Connect to Postgres via LISTEN/NOTIFY (fallback path).
- [ ] Connect to Postgres replication slot (`pgoutput`) via `tokio-postgres` or `sqlx` (Defer to Phase 5: Advanced Realtime).
- [x] Broadcast row changes (INSERT/UPDATE/DELETE) to connected clients.
- [x] **Subscription Management**:
- [x] Handle `Join` messages to subscribe to specific tables/rows.
- [x] Filter events based on client subscriptions.
### 2.2 Storage Service (`/storage/v1`)
- [x] **S3 Proxy**:
- [x] List Buckets (`GET /bucket`).
- [x] List Objects (`GET /object/:bucket_id`).
- [x] Upload/Download (`POST/GET /object/:bucket_id/:filename`).
- [x] **Permissions**:
- [x] RLS-like policies for buckets/objects (storage.buckets, storage.objects tables).
- [x] Public vs Private buckets.
## Phase 3: Control Plane & Management
**Goal**: Build the administrative layer to manage projects and configurations.
### 3.1 Project Management (`/v1/projects`)
- [x] **Projects Table**: Store project metadata (name, owner, status).
- [x] **Provisioning**: (Mocked for MVP) Simulate creating resources for a new project.
- [x] **API Keys**: Generate and validate Service Keys (anon/service_role).
### 3.2 Secrets Management (`/v1/secrets`)
- [x] **JWT Generation**: Automatically generate secure JWT secrets and keys for new projects.
- [x] **Project Resolution**:
- [x] Resolve project context via `x-project-ref` header.
- [x] **Dynamic Configuration**:
- [x] Load project-specific config (DB URL, JWT secret, API keys) from Control Plane DB.
- [x] **Isolation**: Ensure strict separation of connections and caches between projects.
---
## Phase 4: Admin UI & Observability
**Goal:** Provide a management interface and production-grade monitoring.
### 4.1 Admin API (`/admin/v1`)
- [x] **Project Management**: Create, Update, Soft-delete projects.
- [x] **User Management**: Admin-level user CRUD.
- [x] **Config Management**: Key rotation and setting updates.
### 4.2 Management UI
- [x] **Dashboard**: React/Web-based UI for managing projects.
- [x] **Features**:
- [x] DB Connection tester.
- [x] Storage bucket browser (Basic).
- [x] Realtime connection stats (Basic).
- [x] Logs viewer (Basic).
### 4.3 Observability Stack
- [x] **Metrics**: Expose Prometheus-compatible metrics (Request latency, DB pool stats, Active WS connections).
- [x] **Logs**: Structured JSON logging with correlation IDs.
- [x] **Infrastructure**:
- [x] Configure **VictoriaMetrics** for metric storage.
- [x] Configure **Loki** for log aggregation.
- [x] Configure **Grafana** with pre-built dashboards.
- [x] **Docker Compose**: Finalize the all-in-one `docker-compose.yml`.
---
## Phase 5: Polish, Security & Extensions
**Goal:** Harden the system for production use and expand compatibility.
### 5.1 Advanced Features
- [x] **Auth**: OAuth provider integration (Google, GitHub, etc.).
- [ ] **Data API**:
- [x] Basic column selection (`?select=col1,col2`).
- [x] Nested selects (joins) (`?select=col,relation(col)`).
- [x] Complex boolean logic (`or`, `and`).
- [x] Bulk operations optimization (Bulk Insert).
- [x] **Realtime**: Resume from LSN/ID support for reliability (via History Table).
### 5.2 Security & Performance
- [x] **Hardening**:
- [x] Rate limiting (per IP/Project).
- [x] CORS configuration.
- [x] Input validation strictness.
- [x] **Performance**:
- [x] Query caching where appropriate.
- [x] WS fanout optimization.
- [x] **Testing**:
- [x] Integration tests using the official `@supabase/supabase-js` client.
- [ ] Load testing.
---
## Milestone Summary
1. **MVP**: Auth + Data API (Phase 1).
2. **Beta**: + Realtime + Storage (Phase 2).
3. **RC**: + Functions + Multi-tenancy (Phase 3).
4. **v1.0**: + Admin UI + Observability + Production Ready (Phase 4 & 5).

317
SPECIFICATIONS.md Normal file
View File

@@ -0,0 +1,317 @@
### MadBase (Supabase-Compatible Rust API Layer) — Functional & Non-Functional Specification (Updated)
This document specifies **MadBase**, a **Supabase API-compatible** platform implemented primarily in **Rust**, designed to be a drop-in replacement for Supabases hosted API surface while supporting **Bring Your Own PostgreSQL**. Primary goals: **low resource usage**, **simplicity**, and **operability** (Docker Compose-first).
## 0. Key Decisions (Chosen for Resource Usage, Simplicity, Operability)
1. **Single Rust “gateway” binary** (one container) exposes Supabase-compatible endpoints:
- `/auth/v1/*`, `/rest/v1/*`, `/rpc/v1/*` (or `/rest/v1/rpc/*` depending on compatibility requirements), `/storage/v1/*`, `/realtime/v1/*` (WS), `/functions/v1/*`, `/admin/v1/*`
2. **Custom “PostgREST-lite” Rust data API** (instead of embedding PostgREST) to reduce runtime overhead and simplify deployment.
3. **Realtime via PostgreSQL logical replication (pgoutput)** implemented in Rust (no external Debezium/Elixir stack).
4. **Edge Functions executed as WASM via Wasmtime** (small, fast startup, strong sandbox). (Optionally support “Deno compatibility mode” later if required.)
5. **Storage API is an S3 proxy** with MinIO default in podman-compose and support for external S3/R2/etc. in production.
6. **Monitoring: VictoriaMetrics + Loki + Grafana** with Prometheus-format metrics endpoints and JSON stdout logs.
---
## 1. Functional Requirements
### 1.1 Supabase Client Compatibility (Hard Requirement)
**Goal:** `@supabase/supabase-js` works unchanged. Users only swap the URL.
#### 1.1.1 API Surface Compatibility
MadBase must mimic Supabases public endpoints, including:
- Path structure (e.g., `/auth/v1`, `/rest/v1`, `/storage/v1`, `/realtime/v1`, `/functions/v1`)
- Required headers:
- `apikey: <anon/service_role key>`
- `Authorization: Bearer <jwt>`
- Status codes and error format consistency (enough to not break supabase-js).
#### 1.1.2 Key Types
- **Anon key**: public, used by clients for most operations.
- **Service role key**: privileged, bypasses RLS (admin operations).
- Key rotation supported per project.
---
### 1.2 Multi-Tenant Projects (Control Plane)
MadBase is multi-tenant and supports many “projects,” each mapping to:
- A PostgreSQL connection string (BYO DB/cluster)
- JWT secret / signing policy
- Storage backend configuration
- Function runtime configuration
- Realtime replication configuration
#### 1.2.1 Project Identification
Projects are identified via one of:
- Subdomain-based routing: `https://<project-ref>.<base-domain>`
- Or header-based routing: `x-project-ref: <ref>` (for internal/admin use)
#### 1.2.2 Project Lifecycle
- Create project
- Update project settings (DB URL, keys, storage config)
- Disable / delete project (with soft-delete option)
- View usage/health info per project
---
### 1.3 Auth (GoTrue-Compatible-ish)
MadBase provides auth flows expected by supabase-js.
#### 1.3.1 Supported Flows
- Email/password signup
- Email/password login
- Token refresh (refresh token → new access token)
- Session retrieval (`/user`)
- Password reset (email-based) — optional for MVP, but specd
- Email confirmation — optional for MVP, but specd
- OAuth providers — **out of MVP**, future extension
#### 1.3.2 User Model
For each project:
- Users are isolated by `project_id`
- Store:
- `id (uuid)`
- `email`
- `password_hash (argon2)`
- `created_at`
- `confirmed_at` (optional)
- `user_metadata` (jsonb)
- `app_metadata` (jsonb, e.g., provider)
#### 1.3.3 Token Model
- Access token: JWT, short-lived
- Refresh token: opaque random string (stored hashed) or JWT-like; must support rotation
- Required JWT claims to align with RLS expectations:
- `sub`
- `role` (e.g., `authenticated` / `anon`)
- `aud`
- `exp`
- plus project scoping claim (e.g., `project_ref`)
---
### 1.4 Data API (CRUD, Filtering, Ordering, Pagination)
MadBase provides PostgREST-like behavior.
#### 1.4.1 CRUD
- `GET /rest/v1/<table>`
- `POST /rest/v1/<table>`
- `PATCH /rest/v1/<table>`
- `DELETE /rest/v1/<table>`
#### 1.4.2 Query Features
- `select=*` and nested selects (MVP: basic selects; advanced nested relations phased)
- Filters:
- `eq`, `neq`, `lt`, `lte`, `gt`, `gte`
- `like`, `ilike`
- `in`
- `is` (null checks)
- `or` boolean expression support (MVP: limited; expand later)
- Ordering: `order=col.asc|desc`
- Pagination:
- `limit`, `offset`
- Range headers compatibility (nice-to-have if required by supabase-js usage patterns)
- Count support (`Prefer: count=exact|planned|estimated`) as feasible
#### 1.4.3 RPC
- Postgres function invocation endpoint compatible with Supabase usage patterns:
- `POST /rest/v1/rpc/<function>` (common supabase pattern)
- Input via JSON body; output as JSON.
#### 1.4.4 RLS Enforcement (Hard Requirement)
MadBase must rely on **native PostgreSQL RLS**.
For each request within a transaction:
- Validate JWT (or anon)
- Set request-local variables:
- `SET LOCAL request.jwt.claims = '<json>'` OR granular `SET LOCAL request.jwt.claim.<x>`
- Set role appropriately:
- anon → restricted
- authenticated → restricted
- service role → bypass policies (by using privileged DB role / bypass behavior)
**Outcome:** Existing Supabase-style RLS policies work unchanged.
---
### 1.5 Realtime (WebSocket Subscriptions)
MadBase supports Supabase-js subscriptions.
#### 1.5.1 WebSocket Protocol Compatibility
- `supabase.channel(...).on('postgres_changes', ...)` works unchanged (protocol-level compatibility target)
- Support channel join/leave, heartbeats, and authorization.
#### 1.5.2 Change Capture
- Use PostgreSQL logical replication (`pgoutput`)
- Per-project replication slot management
- Resume from last confirmed LSN after restart
#### 1.5.3 Filtering + RLS Semantics
- Enforce subscription filters (table, schema, event types, optional column filters)
- **RLS-correctness goal**: only stream rows the user is allowed to see.
- MVP approach: re-check row visibility by executing a parameterized query under the users claims (heavier but correct)
- Future: optimize with precomputed policies / projection strategies
---
### 1.6 Object Storage (Supabase Storage-Compatible)
MadBase provides:
- Bucket CRUD
- Object upload/download/delete
- Signed URL generation (optional)
- Public/private bucket behavior
#### 1.6.1 Storage Backend
- Default: MinIO in Podman Compose
- Production: external S3-compatible endpoint
- Metadata stored per project in control DB (or optionally tenant DB)
#### 1.6.2 Authorization
- Requires JWT or anon access depending on bucket policy
- Service role bypass supported
---
### 1.7 Edge Functions (Full Support, WASM Runtime)
MadBase supports:
- `POST /functions/v1/<name>` invocation
- Deploy/update function artifacts per project
#### 1.7.1 Runtime
- Wasmtime sandbox execution
- Per-invocation limits:
- max CPU time
- max memory
- max request body size
- Environment variables injection:
- project ref
- supabase URL
- anon key (optional)
- secrets (encrypted at rest)
#### 1.7.2 Function Packaging
- MVP: accept WASM modules + manifest
- Optional toolchain support for TS/JS→WASM documented, but not required in-core.
---
### 1.8 Management UI (Full Project Management)
UI provides:
- Project creation/config
- DB connection configuration test
- Key management + rotation
- User management (admin)
- Storage bucket management
- Edge function deployment + logs
- Realtime connection stats
- RLS policy helpers (optional, best-effort)
- Health checks
UI talks to `/admin/v1/*` endpoints.
---
### 1.9 Podman Compose Deployment
Single `docker-compose.yml` (compatible with `podman-compose`) deploys:
- MadBase API container
- Control plane Postgres (for project/user/config)
- MinIO (default storage)
- Grafana
- Loki
- VictoriaMetrics
BYO tenant Postgres DBs are external (not necessarily in compose).
---
## 2. Non-Functional Requirements
### 2.1 Resource Usage Targets
- **Single-node dev deployment** (compose):
- MadBase (idle) memory target: ~100300MB depending on active WS connections
- Minimal CPU when idle
- Monitoring stack runs within reasonable bounds for small installs.
### 2.2 Performance
- REST overhead target: low single-digit milliseconds excluding DB time
- WebSocket fanout efficiency:
- handle thousands to tens of thousands connections per node (depending on hardware)
- Streaming uploads/downloads without buffering entire objects in memory.
### 2.3 Operability (Primary Goal)
- **Few moving parts**: one Rust service container + standard off-the-shelf observability components.
- Stateless API layer:
- can scale horizontally behind a load balancer
- Configuration via environment variables + control DB records.
- Smooth upgrades:
- migrations for control plane DB
- backward compatible API where possible
### 2.4 Reliability & Availability
- Graceful shutdown:
- drain in-flight requests
- close WS connections cleanly
- Realtime resume:
- store last confirmed LSN per project
- Retry policies for transient DB/storage failures
- Backpressure handling for WS broadcasts and replication consumption
### 2.5 Security
- JWT validation strictness (issuer/audience configurable per project)
- Password hashing: Argon2
- Secrets encrypted at rest in control DB
- RBAC for admin UI
- Edge runtime sandboxing:
- no filesystem by default
- no network by default (or allowlist)
- Defense-in-depth:
- rate limiting per project/IP
- request size limits
- CORS configuration
### 2.6 Observability
- Metrics:
- request counts/latency by endpoint and project
- DB pool stats
- replication lag
- WS connections
- function invocation duration/errors
- storage bytes in/out
- Logs:
- JSON structured logs to stdout
- correlation IDs per request
- Dashboards:
- Grafana dashboards included for the above
### 2.7 Maintainability
- Modular internal crate structure:
- `gateway`, `auth`, `data_api`, `realtime`, `storage`, `functions`, `admin`, `control_plane`
- Clear compatibility test suite using supabase-js integration tests
- Versioned API surface for `/admin/v1`
---
## 3. Constraints & Assumptions
- MadBase is **API layer only**; it does not provision tenant databases (though it can optionally assist with templates).
- PostgreSQL is assumed to be standard and supports:
- RLS
- logical replication (for realtime)
- Supabase-js compatibility is the north star; where full fidelity is expensive:
- MVP supports the most common subset, with explicit “compatibility gaps” tracked.
---
## 4. Acceptance Criteria (Definition of Done)
1. A standard supabase-js app can:
- sign up, log in, refresh tokens
- perform CRUD with filters/order/pagination
- call RPC functions
- upload/download storage objects
- receive realtime row change events
- invoke an edge function
2. Two projects pointing at two different external Postgres instances are fully isolated.
3. RLS policies behave identically to Supabase for supported query patterns.
4. Entire stack deploys via podman-compose and exposes Grafana dashboards with logs+metrics.

23
auth/Cargo.toml Normal file
View File

@@ -0,0 +1,23 @@
[package]
name = "auth"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
tracing = { workspace = true }
argon2 = { workspace = true }
jsonwebtoken = { workspace = true }
rand = { workspace = true }
chrono = { workspace = true }
uuid = { workspace = true }
anyhow = { workspace = true }
sha2 = { workspace = true }
oauth2 = "5.0.0"
reqwest = { version = "0.13.2", features = ["json"] }
validator = { version = "0.20.0", features = ["derive"] }

249
auth/src/handlers.rs Normal file
View File

@@ -0,0 +1,249 @@
use crate::middleware::AuthContext;
use crate::models::{AuthResponse, SignInRequest, SignUpRequest, User};
use crate::utils::{
generate_refresh_token, generate_token, hash_password, hash_refresh_token, issue_refresh_token, verify_password,
};
use axum::{
extract::{Extension, Query, State},
http::StatusCode,
Json,
};
use common::Config;
use common::ProjectContext;
use serde::Deserialize;
use serde_json::Value;
use sqlx::{Executor, PgPool, Postgres};
use std::collections::HashMap;
use uuid::Uuid;
use validator::Validate;
#[derive(Clone)]
pub struct AuthState {
pub db: PgPool,
pub config: Config,
}
#[derive(Deserialize)]
struct RefreshTokenGrant {
refresh_token: String,
}
pub async fn signup(
State(state): State<AuthState>,
db: Option<Extension<PgPool>>,
project_ctx: Option<Extension<ProjectContext>>,
Json(payload): Json<SignUpRequest>,
) -> Result<Json<AuthResponse>, (StatusCode, String)> {
payload.validate().map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
// Check if user exists
let user_exists = sqlx::query("SELECT id FROM users WHERE email = $1")
.bind(&payload.email)
.fetch_optional(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
if user_exists.is_some() {
return Err((StatusCode::BAD_REQUEST, "User already exists".to_string()));
}
let hashed_password = hash_password(&payload.password)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let user = sqlx::query_as::<_, User>(
r#"
INSERT INTO users (email, encrypted_password, raw_user_meta_data)
VALUES ($1, $2, $3)
RETURNING *
"#,
)
.bind(&payload.email)
.bind(hashed_password)
.bind(payload.data.unwrap_or(serde_json::json!({})))
.fetch_one(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let jwt_secret = if let Some(Extension(ctx)) = project_ctx.as_ref() {
ctx.jwt_secret.as_str()
} else {
state.config.jwt_secret.as_str()
};
let (token, expires_in) = generate_token(user.id, &user.email, "authenticated", jwt_secret)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let refresh_token = issue_refresh_token(&db, user.id, Uuid::new_v4(), None).await?;
Ok(Json(AuthResponse {
access_token: token,
token_type: "bearer".to_string(),
expires_in,
refresh_token,
user,
}))
}
pub async fn login(
State(state): State<AuthState>,
db: Option<Extension<PgPool>>,
project_ctx: Option<Extension<ProjectContext>>,
Json(payload): Json<SignInRequest>,
) -> Result<Json<AuthResponse>, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE email = $1")
.bind(&payload.email)
.fetch_optional(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.ok_or((
StatusCode::UNAUTHORIZED,
"Invalid email or password".to_string(),
))?;
if !verify_password(&payload.password, &user.encrypted_password)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
{
return Err((
StatusCode::UNAUTHORIZED,
"Invalid email or password".to_string(),
));
}
let jwt_secret = if let Some(Extension(ctx)) = project_ctx.as_ref() {
ctx.jwt_secret.as_str()
} else {
state.config.jwt_secret.as_str()
};
let (token, expires_in) = generate_token(user.id, &user.email, "authenticated", jwt_secret)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let refresh_token = issue_refresh_token(&db, user.id, Uuid::new_v4(), None).await?;
Ok(Json(AuthResponse {
access_token: token,
token_type: "bearer".to_string(),
expires_in,
refresh_token,
user,
}))
}
pub async fn get_user(
State(state): State<AuthState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
) -> Result<Json<User>, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let claims = auth_ctx
.claims
.ok_or((StatusCode::UNAUTHORIZED, "Not authenticated".to_string()))?;
let user_id = Uuid::parse_str(&claims.sub)
.map_err(|_| (StatusCode::UNAUTHORIZED, "Invalid user ID".to_string()))?;
let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
.bind(user_id)
.fetch_optional(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.ok_or((StatusCode::NOT_FOUND, "User not found".to_string()))?;
Ok(Json(user))
}
pub async fn token(
State(state): State<AuthState>,
db: Option<Extension<PgPool>>,
project_ctx: Option<Extension<ProjectContext>>,
Query(params): Query<HashMap<String, String>>,
Json(payload): Json<Value>,
) -> Result<Json<AuthResponse>, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let grant_type = params
.get("grant_type")
.map(|s| s.as_str())
.unwrap_or("password");
match grant_type {
"password" => {
let req: SignInRequest = serde_json::from_value(payload)
.map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
req.validate().map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
login(State(state), Some(Extension(db)), project_ctx, Json(req)).await
}
"refresh_token" => {
let req: RefreshTokenGrant = serde_json::from_value(payload)
.map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
let token_hash = hash_refresh_token(&req.refresh_token);
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let (revoked_token_hash, user_id, session_id) =
sqlx::query_as::<_, (String, Uuid, Option<Uuid>)>(
r#"
UPDATE refresh_tokens
SET revoked = true, updated_at = now()
WHERE token = $1 AND revoked = false
RETURNING token, user_id, session_id
"#,
)
.bind(&token_hash)
.fetch_optional(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.ok_or((
StatusCode::UNAUTHORIZED,
"Invalid refresh token".to_string(),
))?;
let session_id = session_id.ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
"Missing session".to_string(),
))?;
let new_refresh_token = issue_refresh_token(
&mut *tx,
user_id,
session_id,
Some(revoked_token_hash.as_str()),
)
.await?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
.bind(user_id)
.fetch_optional(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.ok_or((StatusCode::NOT_FOUND, "User not found".to_string()))?;
let jwt_secret = if let Some(Extension(ctx)) = project_ctx.as_ref() {
ctx.jwt_secret.as_str()
} else {
state.config.jwt_secret.as_str()
};
let (access_token, expires_in) =
generate_token(user.id, &user.email, "authenticated", jwt_secret)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(AuthResponse {
access_token,
token_type: "bearer".to_string(),
expires_in,
refresh_token: new_refresh_token,
user,
}))
}
_ => Err((
StatusCode::BAD_REQUEST,
"Unsupported grant_type".to_string(),
)),
}
}

19
auth/src/lib.rs Normal file
View File

@@ -0,0 +1,19 @@
pub mod handlers;
pub mod middleware;
pub mod models;
pub mod oauth;
pub mod utils;
use axum::routing::{get, post};
pub use axum::Router;
pub use handlers::AuthState;
pub use middleware::{auth_middleware, AuthContext, AuthMiddlewareState};
pub fn router() -> Router<AuthState> {
Router::new()
.route("/signup", post(handlers::signup))
.route("/token", post(handlers::token))
.route("/authorize", get(oauth::authorize))
.route("/callback/:provider", get(oauth::callback))
.route("/user", get(handlers::get_user))
}

122
auth/src/middleware.rs Normal file
View File

@@ -0,0 +1,122 @@
use axum::{
extract::{Request, State},
http::StatusCode,
middleware::Next,
response::Response,
};
use common::{Config, ProjectContext};
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::{Deserialize, Serialize};
#[derive(Clone)]
pub struct AuthMiddlewareState {
pub config: Config,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Claims {
pub sub: String,
pub email: Option<String>,
pub role: String,
pub exp: usize,
pub iss: String,
pub aud: Option<String>,
}
#[derive(Clone)]
pub struct AuthContext {
pub claims: Option<Claims>,
pub role: String,
}
pub async fn auth_middleware(
State(state): State<AuthMiddlewareState>,
mut req: Request,
next: Next,
) -> Result<Response, StatusCode> {
// 1. Try to get ProjectContext (if available)
// If we are running in multi-tenant mode, ProjectContext should be present.
// If not, we fall back to global config (legacy/single-tenant).
let project_ctx = req.extensions().get::<ProjectContext>().cloned();
// Allow public OAuth routes
let path = req.uri().path();
if path.contains("/authorize") || path.contains("/callback") {
return Ok(next.run(req).await);
}
// Determine the secret to use
let jwt_secret = if let Some(ctx) = &project_ctx {
ctx.jwt_secret.clone()
} else {
state.config.jwt_secret.clone()
};
let auth_header = req
.headers()
.get("Authorization")
.and_then(|h| h.to_str().ok())
.map(|s| s.to_string());
let apikey_header = req
.headers()
.get("apikey")
.and_then(|h| h.to_str().ok())
.map(|s| s.to_string());
// Logic:
// 1. Bearer Token takes precedence for identity (Claims).
// 2. API Key is checked if no Bearer token, OR it acts as the "Client Key" (anon/service).
// Usually Supabase requires 'apikey' header ALWAYS, and Authorization header OPTIONAL (for user context).
let token = if let Some(auth) = auth_header {
auth.strip_prefix("Bearer ").map(|t| t.to_string())
} else {
// If no Auth header, check apikey header as fallback (e.g. for anon requests)
apikey_header.clone()
};
if let Some(token) = token {
let mut validation = Validation::new(Algorithm::HS256);
validation.validate_exp = true;
validation.validate_aud = false;
// validation.set_audience(&["authenticated"]); // If we used audience
match decode::<Claims>(
&token,
&DecodingKey::from_secret(jwt_secret.as_bytes()),
&validation,
) {
Ok(token_data) => {
let claims = token_data.claims;
let role = claims.role.clone();
let ctx = AuthContext {
claims: Some(claims),
role,
};
req.extensions_mut().insert(ctx);
return Ok(next.run(req).await);
}
Err(_) => {
// Invalid token
return Err(StatusCode::UNAUTHORIZED);
}
}
}
// No valid token found.
// Assign "anon" role if apikey is valid anon key?
// Or just default to "anon" role without claims?
// Supabase usually requires a valid JWT even for anon. The 'anon key' IS a JWT with role='anon'.
// So if decoding failed above, we returned Unauthorized.
// If no header provided at all?
// We should allow public routes to proceed?
// But this middleware is applied to ALL routes in /rest, /auth etc.
// /auth/v1/signup needs to be accessible.
// But wait, even signup requires the 'anon' key in Supabase.
// So: strict check.
Err(StatusCode::UNAUTHORIZED)
}

64
auth/src/models.rs Normal file
View File

@@ -0,0 +1,64 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use sqlx::FromRow;
use uuid::Uuid;
use validator::Validate;
#[derive(Debug, Serialize, Deserialize, FromRow, Clone)]
pub struct User {
pub id: Uuid,
pub email: String,
#[serde(skip)]
pub encrypted_password: String,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub last_sign_in_at: Option<DateTime<Utc>>,
pub raw_app_meta_data: serde_json::Value,
pub raw_user_meta_data: serde_json::Value,
pub is_super_admin: Option<bool>,
pub confirmed_at: Option<DateTime<Utc>>,
pub email_confirmed_at: Option<DateTime<Utc>>,
pub phone: Option<String>,
pub phone_confirmed_at: Option<DateTime<Utc>>,
pub confirmation_token: Option<String>,
pub recovery_token: Option<String>,
pub email_change_token_new: Option<String>,
pub email_change: Option<String>,
}
#[derive(Debug, Deserialize, Validate)]
pub struct SignUpRequest {
#[validate(email)]
pub email: String,
#[validate(length(min = 6, message = "Password must be at least 6 characters"))]
pub password: String,
pub data: Option<serde_json::Value>,
}
#[derive(Debug, Deserialize, Validate)]
pub struct SignInRequest {
#[validate(email)]
pub email: String,
pub password: String,
}
#[derive(Debug, Serialize)]
pub struct AuthResponse {
pub access_token: String,
pub token_type: String,
pub expires_in: i64,
pub refresh_token: String,
pub user: User,
}
#[derive(Debug, Serialize, Deserialize, FromRow)]
pub struct RefreshToken {
pub id: i64, // BigSerial
pub token: String,
pub user_id: Uuid,
pub revoked: bool,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
pub parent: Option<String>,
pub session_id: Option<Uuid>,
}

307
auth/src/oauth.rs Normal file
View File

@@ -0,0 +1,307 @@
use crate::utils::{generate_token, issue_refresh_token};
use crate::AuthState;
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::{IntoResponse, Redirect},
Json,
extract::Extension,
};
use common::{Config, ProjectContext};
use oauth2::{
basic::{BasicErrorResponseType, BasicTokenType},
AuthUrl, AuthorizationCode, Client, ClientId, ClientSecret, CsrfToken,
EmptyExtraTokenFields, EndpointNotSet, EndpointSet, HttpRequest, HttpResponse,
RedirectUrl, RevocationErrorResponseType, Scope, StandardErrorResponse,
StandardRevocableToken, StandardTokenIntrospectionResponse, StandardTokenResponse,
TokenResponse, TokenUrl,
};
use reqwest::Client as ReqwestClient;
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use uuid::Uuid;
#[derive(Debug, Deserialize)]
pub struct OAuthRequest {
pub provider: String,
pub redirect_to: Option<String>,
}
#[derive(Debug, Deserialize)]
pub struct OAuthCallback {
pub code: String,
pub state: String,
}
#[derive(Debug, Serialize, Deserialize)]
struct UserProfile {
email: String,
name: Option<String>,
avatar_url: Option<String>,
provider_id: String,
}
#[derive(Debug)]
pub struct OAuthHttpError(String);
impl std::fmt::Display for OAuthHttpError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "OAuth HTTP Error: {}", self.0)
}
}
impl std::error::Error for OAuthHttpError {}
// Define the client type that matches our usage (AuthUrl + TokenUrl set)
type OAuthClient = Client<
StandardErrorResponse<BasicErrorResponseType>,
StandardTokenResponse<EmptyExtraTokenFields, BasicTokenType>,
StandardTokenIntrospectionResponse<EmptyExtraTokenFields, BasicTokenType>,
StandardRevocableToken,
StandardErrorResponse<RevocationErrorResponseType>,
EndpointSet, // HasAuthUrl
EndpointNotSet,
EndpointNotSet,
EndpointNotSet,
EndpointSet, // HasTokenUrl
>;
pub async fn async_http_client(
request: HttpRequest,
) -> Result<HttpResponse, OAuthHttpError> {
let client = reqwest::Client::builder()
.redirect(reqwest::redirect::Policy::none())
.build()
.map_err(|e| OAuthHttpError(e.to_string()))?;
let mut request_builder = client
.request(request.method().clone(), request.uri().to_string());
for (name, value) in request.headers() {
request_builder = request_builder.header(name, value);
}
request_builder = request_builder.body(request.into_body());
let response = request_builder.send().await.map_err(|e| OAuthHttpError(e.to_string()))?;
let mut builder = axum::http::Response::builder()
.status(response.status());
for (name, value) in response.headers() {
builder = builder.header(name, value);
}
builder
.body(response.bytes().await.map_err(|e| OAuthHttpError(e.to_string()))?.to_vec())
.map_err(|e| OAuthHttpError(e.to_string()))
}
fn get_client(provider: &str, config: &Config) -> Result<OAuthClient, String> {
let (client_id, client_secret, auth_url, token_url) = match provider {
"google" => (
config.google_client_id.clone().ok_or("Google Client ID not set")?,
config.google_client_secret.clone().ok_or("Google Client Secret not set")?,
"https://accounts.google.com/o/oauth2/v2/auth",
"https://oauth2.googleapis.com/token",
),
"github" => (
config.github_client_id.clone().ok_or("GitHub Client ID not set")?,
config.github_client_secret.clone().ok_or("GitHub Client Secret not set")?,
"https://github.com/login/oauth/authorize",
"https://github.com/login/oauth/access_token",
),
_ => return Err(format!("Unknown provider: {}", provider)),
};
let redirect_uri = if config.redirect_uri.ends_with('/') {
format!("{}{}", config.redirect_uri, provider)
} else {
format!("{}/{}", config.redirect_uri, provider)
};
let client = Client::new(ClientId::new(client_id))
.set_client_secret(ClientSecret::new(client_secret))
.set_auth_uri(AuthUrl::new(auth_url.to_string()).map_err(|e| e.to_string())?)
.set_token_uri(TokenUrl::new(token_url.to_string()).map_err(|e| e.to_string())?)
.set_redirect_uri(RedirectUrl::new(redirect_uri).map_err(|e| e.to_string())?);
Ok(client)
}
pub async fn authorize(
State(state): State<AuthState>,
Query(query): Query<OAuthRequest>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let client = get_client(&query.provider, &state.config)
.map_err(|e| (StatusCode::BAD_REQUEST, e))?;
let mut auth_request = client.authorize_url(CsrfToken::new_random);
match query.provider.as_str() {
"google" => {
auth_request = auth_request
.add_scope(Scope::new("email".to_string()))
.add_scope(Scope::new("profile".to_string()));
}
"github" => {
auth_request = auth_request
.add_scope(Scope::new("user:email".to_string()));
}
_ => {}
}
let (auth_url, _csrf_token) = auth_request.url();
// TODO: Store csrf_token in cookie/session for validation
Ok(Redirect::to(auth_url.as_str()))
}
pub async fn callback(
State(state): State<AuthState>,
db: Option<Extension<sqlx::PgPool>>,
project_ctx: Option<Extension<ProjectContext>>,
Path(provider): Path<String>,
Query(query): Query<OAuthCallback>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let client = get_client(&provider, &state.config)
.map_err(|e| (StatusCode::BAD_REQUEST, e))?;
let token_result = client
.exchange_code(AuthorizationCode::new(query.code))
.request_async(&async_http_client)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, format!("Token exchange failed: {}", e)))?;
let access_token = token_result.access_token().secret();
let user_profile = fetch_user_profile(&provider, access_token).await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e))?;
// Check if user exists by email
let existing_user = sqlx::query_as::<_, crate::models::User>("SELECT * FROM users WHERE email = $1")
.bind(&user_profile.email)
.fetch_optional(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let user = if let Some(u) = existing_user {
// Update user meta data if needed? For now, just return existing user.
// We might want to record that they logged in with this provider.
u
} else {
// Create new user
let raw_meta = json!({
"name": user_profile.name,
"avatar_url": user_profile.avatar_url,
"provider": provider,
"provider_id": user_profile.provider_id
});
sqlx::query_as::<_, crate::models::User>(
r#"
INSERT INTO users (email, encrypted_password, raw_user_meta_data)
VALUES ($1, $2, $3)
RETURNING *
"#,
)
.bind(&user_profile.email)
.bind("oauth_user_no_password") // Placeholder
.bind(raw_meta)
.fetch_one(&db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
};
let jwt_secret = if let Some(Extension(ctx)) = project_ctx.as_ref() {
ctx.jwt_secret.as_str()
} else {
state.config.jwt_secret.as_str()
};
let (token, expires_in) = generate_token(user.id, &user.email, "authenticated", jwt_secret)
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let refresh_token: String = issue_refresh_token(&db, user.id, Uuid::new_v4(), None)
.await
.map_err(|(code, msg)| (StatusCode::from_u16(code.as_u16()).unwrap(), msg))?;
Ok(Json(json!({
"access_token": token,
"token_type": "bearer",
"expires_in": expires_in,
"refresh_token": refresh_token,
"user": user
})))
}
async fn fetch_user_profile(provider: &str, token: &str) -> Result<UserProfile, String> {
let client = ReqwestClient::new();
match provider {
"google" => {
let resp = client.get("https://www.googleapis.com/oauth2/v2/userinfo")
.bearer_auth(token)
.send()
.await
.map_err(|e| e.to_string())?
.json::<Value>()
.await
.map_err(|e| e.to_string())?;
let email = resp["email"].as_str().ok_or("No email found")?.to_string();
let name = resp["name"].as_str().map(|s| s.to_string());
let avatar_url = resp["picture"].as_str().map(|s| s.to_string());
let provider_id = resp["id"].as_str().ok_or("No ID found")?.to_string();
Ok(UserProfile {
email,
name,
avatar_url,
provider_id,
})
},
"github" => {
let resp = client.get("https://api.github.com/user")
.bearer_auth(token)
.header("User-Agent", "madbase")
.send()
.await
.map_err(|e| e.to_string())?
.json::<Value>()
.await
.map_err(|e| e.to_string())?;
let email = if let Some(e) = resp["email"].as_str() {
e.to_string()
} else {
// Fetch private emails
let emails = client.get("https://api.github.com/user/emails")
.bearer_auth(token)
.header("User-Agent", "madbase")
.send()
.await
.map_err(|e| e.to_string())?
.json::<Vec<Value>>()
.await
.map_err(|e| e.to_string())?;
let primary = emails.iter().find(|e| e["primary"].as_bool().unwrap_or(false))
.ok_or("No primary email found")?;
primary["email"].as_str().ok_or("No email found")?.to_string()
};
let name = resp["name"].as_str().map(|s| s.to_string());
let avatar_url = resp["avatar_url"].as_str().map(|s| s.to_string());
let provider_id = resp["id"].as_i64().map(|id| id.to_string()).ok_or("No ID found")?.to_string();
Ok(UserProfile {
email,
name,
avatar_url,
provider_id,
})
},
_ => Err("Unknown provider".to_string())
}
}

118
auth/src/utils.rs Normal file
View File

@@ -0,0 +1,118 @@
use argon2::{
password_hash::{
rand_core::{OsRng, RngCore},
PasswordHash, PasswordHasher, PasswordVerifier, SaltString,
},
Argon2,
};
use chrono::{Duration, Utc};
use jsonwebtoken::{encode, EncodingKey, Header};
use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};
use uuid::Uuid;
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Claims {
pub sub: String,
pub email: Option<String>,
pub role: String,
pub exp: usize,
pub iss: String,
pub aud: Option<String>,
pub iat: usize,
}
pub fn hash_password(password: &str) -> anyhow::Result<String> {
let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();
let password_hash = argon2
.hash_password(password.as_bytes(), &salt)
.map_err(|e| anyhow::anyhow!(e))?
.to_string();
Ok(password_hash)
}
pub fn verify_password(password: &str, password_hash: &str) -> anyhow::Result<bool> {
let parsed_hash = PasswordHash::new(password_hash).map_err(|e| anyhow::anyhow!(e))?;
Ok(Argon2::default()
.verify_password(password.as_bytes(), &parsed_hash)
.is_ok())
}
pub fn generate_refresh_token() -> String {
let mut bytes = [0u8; 32];
OsRng.fill_bytes(&mut bytes);
hex_encode(&bytes)
}
pub fn hash_refresh_token(raw: &str) -> String {
let digest = Sha256::digest(raw.as_bytes());
hex_encode(&digest)
}
pub fn generate_token(
user_id: Uuid,
email: &str,
role: &str,
jwt_secret: &str,
) -> anyhow::Result<(String, i64)> {
let now = Utc::now();
let expiration = now
.checked_add_signed(Duration::seconds(3600)) // 1 hour
.expect("valid timestamp")
.timestamp();
let claims = Claims {
sub: user_id.to_string(),
email: Some(email.to_string()),
role: role.to_string(),
exp: expiration as usize,
iss: "madbase".to_string(),
aud: Some("authenticated".to_string()),
iat: now.timestamp() as usize,
};
let token = encode(
&Header::default(),
&claims,
&EncodingKey::from_secret(jwt_secret.as_bytes()),
)?;
Ok((token, 3600))
}
fn hex_encode(bytes: &[u8]) -> String {
let mut out = String::with_capacity(bytes.len() * 2);
for b in bytes {
use std::fmt::Write;
let _ = write!(&mut out, "{:02x}", b);
}
out
}
pub async fn issue_refresh_token(
executor: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
user_id: Uuid,
session_id: Uuid,
parent: Option<&str>,
) -> Result<String, (axum::http::StatusCode, String)> {
let token = generate_refresh_token();
let token_hash = hash_refresh_token(&token);
sqlx::query(
r#"
INSERT INTO refresh_tokens (token, user_id, session_id, parent)
VALUES ($1, $2, $3, $4)
"#,
)
.bind(&token_hash)
.bind(user_id)
.bind(session_id)
.bind(parent)
.execute(executor)
.await
.map_err(|e| (axum::http::StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(token)
}

15
common/Cargo.toml Normal file
View File

@@ -0,0 +1,15 @@
[package]
name = "common"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
sqlx = { workspace = true }
thiserror = { workspace = true }
anyhow = { workspace = true }
config = { workspace = true }
dotenvy = { workspace = true }

60
common/src/config.rs Normal file
View File

@@ -0,0 +1,60 @@
use serde::{Deserialize, Serialize};
use std::env;
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct Config {
pub database_url: String,
pub jwt_secret: String,
pub port: u16,
pub google_client_id: Option<String>,
pub google_client_secret: Option<String>,
pub github_client_id: Option<String>,
pub github_client_secret: Option<String>,
pub redirect_uri: String,
pub rate_limit_per_second: u64,
}
impl Config {
pub fn new() -> Result<Self, config::ConfigError> {
let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
let jwt_secret =
env::var("JWT_SECRET").unwrap_or_else(|_| "super-secret-key-please-change".to_string());
let port = env::var("PORT")
.unwrap_or_else(|_| "8000".to_string())
.parse()
.unwrap_or(8000);
let rate_limit_per_second = env::var("RATE_LIMIT_PER_SECOND")
.unwrap_or_else(|_| "10".to_string())
.parse()
.unwrap_or(10);
let google_client_id = env::var("GOOGLE_CLIENT_ID").ok();
let google_client_secret = env::var("GOOGLE_CLIENT_SECRET").ok();
let github_client_id = env::var("GITHUB_CLIENT_ID").ok();
let github_client_secret = env::var("GITHUB_CLIENT_SECRET").ok();
let redirect_uri = env::var("REDIRECT_URI")
.unwrap_or_else(|_| "http://localhost:8000/auth/v1/callback".to_string());
Ok(Config {
database_url,
jwt_secret,
port,
google_client_id,
google_client_secret,
github_client_id,
github_client_secret,
redirect_uri,
rate_limit_per_second,
})
}
}
// New struct for Project Context
#[derive(Clone, Debug)]
pub struct ProjectContext {
pub project_ref: String,
pub db_url: String,
pub jwt_secret: String,
pub anon_key: Option<String>,
pub service_role_key: Option<String>,
}

10
common/src/db.rs Normal file
View File

@@ -0,0 +1,10 @@
use sqlx::postgres::{PgPool, PgPoolOptions};
use std::time::Duration;
pub async fn init_pool(database_url: &str) -> Result<PgPool, sqlx::Error> {
PgPoolOptions::new()
.max_connections(20)
.acquire_timeout(Duration::from_secs(3))
.connect(database_url)
.await
}

5
common/src/lib.rs Normal file
View File

@@ -0,0 +1,5 @@
pub mod config;
pub mod db;
pub use config::{Config, ProjectContext};
pub use db::init_pool;

20
control_plane/Cargo.toml Normal file
View File

@@ -0,0 +1,20 @@
[package]
name = "control_plane"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
auth = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
tracing = { workspace = true }
uuid = { workspace = true }
rand = { workspace = true }
base64 = "0.21"
jsonwebtoken = { workspace = true }
chrono = { workspace = true }
anyhow = { workspace = true }

251
control_plane/src/lib.rs Normal file
View File

@@ -0,0 +1,251 @@
use axum::{
extract::{Path, State},
http::StatusCode,
routing::{delete, get, put},
Json, Router,
};
use jsonwebtoken::{encode, EncodingKey, Header};
use rand::Rng;
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use uuid::Uuid;
#[derive(Clone)]
pub struct ControlPlaneState {
pub db: PgPool,
}
#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct Project {
pub id: Uuid,
pub name: String,
pub owner_id: Option<Uuid>,
pub status: String,
pub db_url: String,
pub jwt_secret: String,
pub anon_key: Option<String>,
pub service_role_key: Option<String>,
pub created_at: Option<chrono::DateTime<chrono::Utc>>,
}
#[derive(Deserialize)]
pub struct CreateProjectRequest {
pub name: String,
pub owner_id: Option<Uuid>,
}
#[derive(Serialize, Deserialize)]
struct Claims {
role: String,
iss: String,
iat: usize,
exp: usize,
sub: String,
}
pub async fn list_projects(
State(state): State<ControlPlaneState>,
) -> Result<Json<Vec<Project>>, (StatusCode, String)> {
let projects = sqlx::query_as::<_, Project>("SELECT * FROM projects")
.fetch_all(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(projects))
}
pub async fn create_project(
State(state): State<ControlPlaneState>,
Json(payload): Json<CreateProjectRequest>,
) -> Result<Json<Project>, (StatusCode, String)> {
// 1. Generate JWT Secret
let jwt_secret: String = rand::thread_rng()
.sample_iter(&rand::distributions::Alphanumeric)
.take(40)
.map(char::from)
.collect();
// 2. Generate Keys (JWTs)
let anon_key = generate_jwt(&jwt_secret, "anon")
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let service_role_key = generate_jwt(&jwt_secret, "service_role")
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let default_db_url = std::env::var("DEFAULT_TENANT_DB_URL")
.or_else(|_| std::env::var("DATABASE_URL"))
.unwrap_or_default();
let project = sqlx::query_as::<_, Project>(
r#"
INSERT INTO projects (name, owner_id, status, db_url, jwt_secret, anon_key, service_role_key)
VALUES ($1, $2, 'active', $3, $4, $5, $6)
RETURNING *
"#
)
.bind(&payload.name)
.bind(payload.owner_id)
.bind(default_db_url)
.bind(&jwt_secret)
.bind(&anon_key)
.bind(&service_role_key)
.fetch_one(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(project))
}
pub async fn delete_project(
State(state): State<ControlPlaneState>,
Path(id): Path<Uuid>,
) -> Result<StatusCode, (StatusCode, String)> {
// Soft delete
let result = sqlx::query("UPDATE projects SET status = 'deleted' WHERE id = $1")
.bind(id)
.execute(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
if result.rows_affected() == 0 {
return Err((StatusCode::NOT_FOUND, "Project not found".to_string()));
}
Ok(StatusCode::NO_CONTENT)
}
#[derive(Deserialize)]
pub struct RotateKeyRequest {
pub new_secret: Option<String>,
}
pub async fn rotate_keys(
State(state): State<ControlPlaneState>,
Path(id): Path<Uuid>,
Json(payload): Json<RotateKeyRequest>,
) -> Result<Json<Project>, (StatusCode, String)> {
let jwt_secret = payload.new_secret.unwrap_or_else(|| {
rand::thread_rng()
.sample_iter(&rand::distributions::Alphanumeric)
.take(40)
.map(char::from)
.collect()
});
let anon_key = generate_jwt(&jwt_secret, "anon")
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let service_role_key = generate_jwt(&jwt_secret, "service_role")
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let project = sqlx::query_as::<_, Project>(
r#"
UPDATE projects
SET jwt_secret = $1, anon_key = $2, service_role_key = $3
WHERE id = $4
RETURNING *
"#,
)
.bind(&jwt_secret)
.bind(&anon_key)
.bind(&service_role_key)
.bind(id)
.fetch_optional(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.ok_or((StatusCode::NOT_FOUND, "Project not found".to_string()))?;
Ok(Json(project))
}
#[derive(Serialize, sqlx::FromRow)]
pub struct AdminUser {
pub id: Uuid,
pub email: String,
pub created_at: chrono::DateTime<chrono::Utc>,
}
pub async fn list_users(
State(state): State<ControlPlaneState>,
) -> Result<Json<Vec<AdminUser>>, (StatusCode, String)> {
let users = sqlx::query_as::<_, AdminUser>("SELECT id, email, created_at FROM users")
.fetch_all(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(users))
}
pub async fn delete_user(
State(state): State<ControlPlaneState>,
Path(id): Path<Uuid>,
) -> Result<StatusCode, (StatusCode, String)> {
let result = sqlx::query("DELETE FROM users WHERE id = $1")
.bind(id)
.execute(&state.db)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
if result.rows_affected() == 0 {
return Err((StatusCode::NOT_FOUND, "User not found".to_string()));
}
Ok(StatusCode::NO_CONTENT)
}
fn generate_jwt(secret: &str, role: &str) -> anyhow::Result<String> {
let now = chrono::Utc::now();
let claims = Claims {
role: role.to_string(),
iss: "madbase".to_string(),
iat: now.timestamp() as usize,
exp: (now + chrono::Duration::days(365 * 10)).timestamp() as usize, // 10 years
sub: role.to_string(), // Use role as sub
};
let token = encode(
&Header::default(),
&claims,
&EncodingKey::from_secret(secret.as_bytes()),
)?;
Ok(token)
}
pub fn router(state: ControlPlaneState) -> Router {
Router::new()
.route("/projects", get(list_projects).post(create_project))
.route("/projects/:id", delete(delete_project))
.route("/projects/:id/keys", put(rotate_keys))
.route("/users", get(list_users))
.route("/users/:id", delete(delete_user))
.with_state(state)
}
#[cfg(test)]
mod tests {
use super::*;
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
#[test]
fn test_jwt_generation() {
let secret = "test-secret-123";
let role = "anon";
let token = generate_jwt(secret, role).expect("Failed to generate JWT");
let mut validation = Validation::new(Algorithm::HS256);
validation.validate_exp = true;
let token_data = decode::<Claims>(
&token,
&DecodingKey::from_secret(secret.as_bytes()),
&validation,
)
.expect("Failed to decode JWT");
assert_eq!(token_data.claims.role, "anon");
assert_eq!(token_data.claims.sub, "anon");
assert_eq!(token_data.claims.iss, "madbase");
}
}

18
data_api/Cargo.toml Normal file
View File

@@ -0,0 +1,18 @@
[package]
name = "data_api"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
auth = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
tracing = { workspace = true }
regex = { workspace = true }
futures = { workspace = true }
uuid = { workspace = true, features = ["serde"] }
chrono = { workspace = true, features = ["serde"] }

893
data_api/src/handlers.rs Normal file
View File

@@ -0,0 +1,893 @@
use crate::parser::{Operator, QueryParams, SelectNode, FilterNode};
use auth::AuthContext;
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::{IntoResponse, Json},
Extension,
};
use common::Config;
use futures::future::BoxFuture;
use serde_json::{json, Value};
use sqlx::{Column, PgPool, Row, TypeInfo};
use std::collections::HashMap;
use uuid::Uuid;
#[derive(Clone)]
pub struct DataState {
pub db: PgPool,
pub config: Config,
}
enum SqlValue {
String(String),
Int(i64),
Float(f64),
Bool(bool),
Uuid(Uuid),
Json(Value),
Null,
}
fn json_value_to_sql_value(v: Value) -> SqlValue {
match v {
Value::String(s) => {
if let Ok(u) = Uuid::parse_str(&s) {
SqlValue::Uuid(u)
} else {
SqlValue::String(s)
}
},
Value::Number(n) => {
if let Some(i) = n.as_i64() {
SqlValue::Int(i)
} else if let Some(f) = n.as_f64() {
SqlValue::Float(f)
} else {
SqlValue::String(n.to_string())
}
},
Value::Bool(b) => SqlValue::Bool(b),
Value::Object(_) | Value::Array(_) => SqlValue::Json(v),
Value::Null => SqlValue::Null,
}
}
pub async fn get_rows(
State(state): State<DataState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Path(table): Path<String>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let query_params = QueryParams::parse(params);
if !is_valid_identifier(&table) {
return Err((StatusCode::BAD_REQUEST, "Invalid table name".to_string()));
}
// Start transaction for RLS
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
// Set RLS variables
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
if let Some(email) = &claims.email {
let email_query = "SELECT set_config('request.jwt.claim.email', $1, true)";
sqlx::query(email_query)
.bind(email)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
}
// --- Construct Query ---
// Use pool for schema introspection to avoid borrowing tx
let select_clause = build_select_clause(&query_params.select, &table, &db).await?;
let mut sql = format!("SELECT {} FROM {}", select_clause, table);
let mut values: Vec<SqlValue> = Vec::new();
let mut param_index = 1;
if !query_params.filters.is_empty() {
sql.push_str(" WHERE ");
let conditions: Vec<String> = query_params
.filters
.iter()
.map(|f| build_filter_clause(f, &mut param_index, &mut values))
.collect();
sql.push_str(&conditions.join(" AND "));
}
if let Some(order) = query_params.order {
if is_valid_identifier(&order.column) {
let dir = match order.direction {
crate::parser::Direction::Asc => "ASC",
crate::parser::Direction::Desc => "DESC",
};
sql.push_str(&format!(" ORDER BY {} {}", order.column, dir));
}
}
if let Some(limit) = query_params.limit {
sql.push_str(&format!(" LIMIT {}", limit));
}
if let Some(offset) = query_params.offset {
sql.push_str(&format!(" OFFSET {}", offset));
}
let mut query = sqlx::query(&sql);
for v in values {
match v {
SqlValue::String(s) => query = query.bind(s),
SqlValue::Int(n) => query = query.bind(n),
SqlValue::Float(f) => query = query.bind(f),
SqlValue::Bool(b) => query = query.bind(b),
SqlValue::Uuid(u) => query = query.bind(u),
SqlValue::Json(j) => query = query.bind(j),
SqlValue::Null => query = query.bind(Option::<String>::None),
};
}
let rows = query
.fetch_all(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let json_rows = rows_to_json(rows);
Ok(Json(json_rows))
}
fn build_filter_clause(
node: &FilterNode,
param_index: &mut usize,
values: &mut Vec<SqlValue>,
) -> String {
match node {
FilterNode::Condition { column, operator, value } => {
if !is_valid_identifier(column) {
return "false".to_string();
}
let clause = match operator {
Operator::In => {
format!("{} {} (${})", column, operator.to_sql(), param_index)
}
_ => format!("{} {} ${}", column, operator.to_sql(), param_index),
};
let val = if let Ok(i) = value.parse::<i64>() {
SqlValue::Int(i)
} else if let Ok(f) = value.parse::<f64>() {
SqlValue::Float(f)
} else if let Ok(b) = value.parse::<bool>() {
SqlValue::Bool(b)
} else if let Ok(u) = Uuid::parse_str(value) {
SqlValue::Uuid(u)
} else {
SqlValue::String(value.clone())
};
values.push(val);
*param_index += 1;
clause
}
FilterNode::Or(nodes) => {
let clauses: Vec<String> = nodes
.iter()
.map(|n| build_filter_clause(n, param_index, values))
.collect();
if clauses.is_empty() {
"false".to_string()
} else {
format!("({})", clauses.join(" OR "))
}
}
FilterNode::And(nodes) => {
let clauses: Vec<String> = nodes
.iter()
.map(|n| build_filter_clause(n, param_index, values))
.collect();
if clauses.is_empty() {
"true".to_string()
} else {
format!("({})", clauses.join(" AND "))
}
}
}
}
fn build_select_clause<'a>(
nodes: &'a [SelectNode],
table: &'a str,
pool: &'a PgPool,
) -> BoxFuture<'a, Result<String, (StatusCode, String)>> {
Box::pin(async move {
if nodes.is_empty() {
return Ok("*".to_string());
}
let mut clauses = Vec::new();
for node in nodes {
match node {
SelectNode::Column(c) => {
if c == "*" {
clauses.push("*".to_string());
} else if is_valid_identifier(c) {
clauses.push(format!("\"{}\"", c));
}
}
SelectNode::Relation(rel, inner) => {
let fk_info = find_foreign_key(table, rel, pool)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e))?;
if let Some((local_col, foreign_table, foreign_col)) = fk_info {
let inner_select = if inner.is_empty() {
"*".to_string()
} else {
build_select_clause(inner, &foreign_table, pool).await?
};
let subquery = if foreign_col.starts_with("REV:") {
let actual_foreign_col = &foreign_col[4..];
format!(
"(SELECT json_agg(t) FROM (SELECT {} FROM {} WHERE {} = {}.{}) t) as \"{}\"",
inner_select, foreign_table, actual_foreign_col, table, local_col, rel
)
} else {
format!(
"(SELECT row_to_json(t) FROM (SELECT {} FROM {} WHERE {} = {}.{}) t) as \"{}\"",
inner_select, foreign_table, foreign_col, table, local_col, rel
)
};
clauses.push(subquery);
}
}
}
}
if clauses.is_empty() {
return Err((StatusCode::BAD_REQUEST, "No valid columns selected".to_string()));
}
Ok(clauses.join(", "))
})
}
async fn find_foreign_key(
table: &str,
relation: &str,
pool: &PgPool,
) -> Result<Option<(String, String, String)>, String> {
// Basic introspection to find FK.
// We look for a table named `relation` or a column named `relation_id`.
// PostgREST logic is complex, here's a simplified version:
// 1. Check if `relation` is a table name.
// 2. Find FK between `table` and `relation`.
let query = r#"
SELECT
kcu.column_name as local_col,
ccu.table_name as foreign_table,
ccu.column_name as foreign_col
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
AND ccu.table_schema = tc.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
AND tc.table_name = $1
AND ccu.table_name = $2;
"#;
let row = sqlx::query_as::<_, (String, String, String)>(query)
.bind(table)
.bind(relation)
.fetch_optional(pool)
.await
.map_err(|e| e.to_string())?;
if let Some(r) = row {
return Ok(Some(r));
}
// Try reverse (many-to-one): relation table has FK to our table
let reverse_query = r#"
SELECT
ccu.column_name as local_col,
tc.table_name as foreign_table,
kcu.column_name as foreign_col
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
AND tc.table_schema = kcu.table_schema
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
AND ccu.table_schema = tc.table_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
AND tc.table_name = $2
AND ccu.table_name = $1;
"#;
let row = sqlx::query_as::<_, (String, String, String)>(reverse_query)
.bind(table)
.bind(relation)
.fetch_optional(pool)
.await
.map_err(|e| e.to_string())?;
if let Some(r) = row {
// For reverse relations (one-to-many), we want to aggregate them.
// Returning a tuple that signifies reverse relation might be tricky with the same signature.
// Let's hack it: return foreign_col as "REV:foreign_col".
return Ok(Some((r.0, r.1, format!("REV:{}", r.2))));
}
Ok(None)
}
fn rows_to_json(rows: Vec<sqlx::postgres::PgRow>) -> Vec<Value> {
let mut json_rows = Vec::new();
for row in rows {
let mut obj = serde_json::Map::new();
for col in row.columns() {
let name = col.name();
let type_info = col.type_info();
let type_name = type_info.name();
tracing::info!("Column: {}, Type: {}", name, type_name);
let val: Value = if type_name == "BOOL" {
json!(row.try_get::<bool, _>(name).unwrap_or(false))
} else if type_name == "INT2" {
json!(row.try_get::<i16, _>(name).unwrap_or(0))
} else if type_name == "INT4" {
json!(row.try_get::<i32, _>(name).unwrap_or(0))
} else if type_name == "INT8" {
json!(row.try_get::<i64, _>(name).unwrap_or(0))
} else if ["FLOAT4", "FLOAT8"].contains(&type_name) {
json!(row.try_get::<f64, _>(name).unwrap_or(0.0))
} else if ["JSON", "JSONB"].contains(&type_name) {
row.try_get::<Value, _>(name).unwrap_or(Value::Null)
} else if type_name == "UUID" {
if let Ok(u) = row.try_get::<Uuid, _>(name) {
json!(u.to_string())
} else {
Value::Null
}
} else if type_name == "TIMESTAMPTZ" {
if let Ok(ts) = row.try_get::<chrono::DateTime<chrono::Utc>, _>(name) {
json!(ts)
} else {
Value::Null
}
} else if type_name == "TIMESTAMP" {
if let Ok(ts) = row.try_get::<chrono::NaiveDateTime, _>(name) {
json!(ts.to_string())
} else {
Value::Null
}
} else {
// Fallback for types that can't be directly read as String
match row.try_get::<String, _>(name) {
Ok(s) => json!(s),
Err(_) => match row.try_get::<Value, _>(name) {
Ok(v) => v,
Err(_) => Value::Null,
},
}
};
obj.insert(name.to_string(), val);
}
json_rows.push(Value::Object(obj));
}
json_rows
}
pub async fn insert_row(
State(state): State<DataState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Path(table): Path<String>,
Json(payload): Json<Value>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
if !is_valid_identifier(&table) {
return Err((StatusCode::BAD_REQUEST, "Invalid table name".to_string()));
}
// Start transaction for RLS
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
// Set RLS variables
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
if let Some(email) = &claims.email {
let email_query = "SELECT set_config('request.jwt.claim.email', $1, true)";
sqlx::query(email_query)
.bind(email)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
}
let rows_to_insert = match payload {
Value::Array(arr) => arr,
Value::Object(obj) => vec![Value::Object(obj)],
_ => return Err((StatusCode::BAD_REQUEST, "Payload must be a JSON object or array".to_string())),
};
if rows_to_insert.is_empty() {
return Err((StatusCode::BAD_REQUEST, "Payload empty".to_string()));
}
// Use keys from the first row as the columns
let first_row = rows_to_insert[0].as_object().ok_or((StatusCode::BAD_REQUEST, "Rows must be objects".to_string()))?;
let columns: Vec<String> = first_row.keys().cloned().collect();
if columns.is_empty() {
return Err((StatusCode::BAD_REQUEST, "No columns to insert".to_string()));
}
let col_str = columns
.iter()
.map(|c| format!("\"{}\"", c))
.collect::<Vec<_>>()
.join(", ");
let mut values_sql = Vec::new();
let mut bind_values: Vec<SqlValue> = Vec::new();
let mut param_index = 1;
for row in rows_to_insert {
let obj = row.as_object().ok_or((StatusCode::BAD_REQUEST, "Rows must be objects".to_string()))?;
let mut row_placeholders = Vec::new();
for col in &columns {
row_placeholders.push(format!("${}", param_index));
param_index += 1;
// Get value or Null
let val = obj.get(col).cloned().unwrap_or(Value::Null);
bind_values.push(json_value_to_sql_value(val));
}
values_sql.push(format!("({})", row_placeholders.join(", ")));
}
let sql = format!(
"INSERT INTO {} ({}) VALUES {} RETURNING *",
table, col_str, values_sql.join(", ")
);
let mut query = sqlx::query(&sql);
for v in bind_values {
match v {
SqlValue::String(s) => query = query.bind(s),
SqlValue::Int(n) => query = query.bind(n),
SqlValue::Float(f) => query = query.bind(f),
SqlValue::Bool(b) => query = query.bind(b),
SqlValue::Uuid(u) => query = query.bind(u),
SqlValue::Json(j) => query = query.bind(j),
SqlValue::Null => query = query.bind(Option::<String>::None),
};
}
let rows = query
.fetch_all(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let json_rows = rows_to_json(rows);
Ok((StatusCode::CREATED, Json(json_rows)))
}
pub async fn delete_rows(
State(state): State<DataState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Path(table): Path<String>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let query_params = QueryParams::parse(params);
if !is_valid_identifier(&table) {
return Err((StatusCode::BAD_REQUEST, "Invalid table name".to_string()));
}
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
if let Some(email) = &claims.email {
let email_query = "SELECT set_config('request.jwt.claim.email', $1, true)";
sqlx::query(email_query)
.bind(email)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
}
let mut sql = format!("DELETE FROM {}", table);
let mut values: Vec<SqlValue> = Vec::new();
let mut param_index = 1;
if !query_params.filters.is_empty() {
sql.push_str(" WHERE ");
let conditions: Vec<String> = query_params
.filters
.iter()
.map(|f| build_filter_clause(f, &mut param_index, &mut values))
.collect();
sql.push_str(&conditions.join(" AND "));
}
let mut query = sqlx::query(&sql);
for v in values {
match v {
SqlValue::String(s) => query = query.bind(s),
SqlValue::Int(n) => query = query.bind(n),
SqlValue::Float(f) => query = query.bind(f),
SqlValue::Bool(b) => query = query.bind(b),
SqlValue::Uuid(u) => query = query.bind(u),
SqlValue::Json(j) => query = query.bind(j),
SqlValue::Null => query = query.bind(Option::<String>::None),
};
}
query
.execute(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Delete Rows error: SQL={}, Error={:?}", sql, e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(StatusCode::NO_CONTENT)
}
pub async fn update_rows(
State(state): State<DataState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Path(table): Path<String>,
Query(params): Query<HashMap<String, String>>,
Json(payload): Json<Value>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
if !is_valid_identifier(&table) {
return Err((StatusCode::BAD_REQUEST, "Invalid table name".to_string()));
}
let query_params = QueryParams::parse(params);
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
if let Some(email) = &claims.email {
let email_query = "SELECT set_config('request.jwt.claim.email', $1, true)";
sqlx::query(email_query)
.bind(email)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
}
let obj = payload.as_object().ok_or((
StatusCode::BAD_REQUEST,
"Payload must be a JSON object".to_string(),
))?;
if obj.is_empty() {
return Err((StatusCode::BAD_REQUEST, "Payload empty".to_string()));
}
let mut final_sql = format!("UPDATE {} SET ", table);
let mut final_values: Vec<SqlValue> = Vec::new();
let mut p_idx = 1;
let mut sets = Vec::new();
for (k, v) in obj {
sets.push(format!("\"{}\" = ${}", k, p_idx));
final_values.push(json_value_to_sql_value(v.clone()));
p_idx += 1;
}
final_sql.push_str(&sets.join(", "));
if !query_params.filters.is_empty() {
final_sql.push_str(" WHERE ");
let mut conds = Vec::new();
for f in &query_params.filters {
conds.push(build_filter_clause(f, &mut p_idx, &mut final_values));
}
final_sql.push_str(&conds.join(" AND "));
}
let mut query = sqlx::query(&final_sql);
for v in final_values {
match v {
SqlValue::String(s) => query = query.bind(s),
SqlValue::Int(n) => query = query.bind(n),
SqlValue::Float(f) => query = query.bind(f),
SqlValue::Bool(b) => query = query.bind(b),
SqlValue::Uuid(u) => query = query.bind(u),
SqlValue::Json(j) => query = query.bind(j),
SqlValue::Null => query = query.bind(Option::<String>::None),
};
}
query
.execute(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Update Rows error: SQL={}, Error={:?}", final_sql, e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(StatusCode::NO_CONTENT)
}
pub async fn rpc(
State(state): State<DataState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Path(function): Path<String>,
Json(payload): Json<Value>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
if !is_valid_identifier(&function) {
return Err((StatusCode::BAD_REQUEST, "Invalid function name".to_string()));
}
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
if let Some(email) = &claims.email {
let email_query = "SELECT set_config('request.jwt.claim.email', $1, true)";
sqlx::query(email_query)
.bind(email)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
}
let obj = payload.as_object().ok_or((
StatusCode::BAD_REQUEST,
"Payload must be a JSON object".to_string(),
))?;
let mut args = Vec::new();
let mut values: Vec<SqlValue> = Vec::new();
let mut p_idx = 1;
for (k, v) in obj {
if !is_valid_identifier(k) {
return Err((StatusCode::BAD_REQUEST, "Invalid argument name".to_string()));
}
args.push(format!("{} => ${}", k, p_idx));
values.push(json_value_to_sql_value(v.clone()));
p_idx += 1;
}
let sql = if args.is_empty() {
format!("SELECT * FROM {}()", function)
} else {
format!("SELECT * FROM {}({})", function, args.join(", "))
};
let mut query = sqlx::query(&sql);
for v in values {
match v {
SqlValue::String(s) => query = query.bind(s),
SqlValue::Int(n) => query = query.bind(n),
SqlValue::Float(f) => query = query.bind(f),
SqlValue::Bool(b) => query = query.bind(b),
SqlValue::Uuid(u) => query = query.bind(u),
SqlValue::Json(j) => query = query.bind(j),
SqlValue::Null => query = query.bind(Option::<String>::None),
};
}
let rows = query
.fetch_all(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
tx.commit()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let json_rows = rows_to_json(rows);
Ok(Json(json_rows))
}
fn is_valid_identifier(s: &str) -> bool {
s.chars().all(|c| c.is_alphanumeric() || c == '_') && !s.is_empty()
}

20
data_api/src/lib.rs Normal file
View File

@@ -0,0 +1,20 @@
pub mod handlers;
pub mod parser;
use axum::{
routing::{get, post},
Router,
};
use handlers::DataState;
pub fn router() -> Router<DataState> {
Router::new()
.route("/rpc/:function", post(handlers::rpc))
.route(
"/:table",
get(handlers::get_rows)
.post(handlers::insert_row)
.patch(handlers::update_rows)
.delete(handlers::delete_rows),
)
}

276
data_api/src/parser.rs Normal file
View File

@@ -0,0 +1,276 @@
use std::collections::HashMap;
#[derive(Debug, Clone, PartialEq)]
pub enum Operator {
Eq,
Neq,
Gt,
Gte,
Lt,
Lte,
Like,
Ilike,
In,
Is,
}
impl Operator {
pub fn parse(s: &str) -> Option<Self> {
match s {
"eq" => Some(Operator::Eq),
"neq" => Some(Operator::Neq),
"gt" => Some(Operator::Gt),
"gte" => Some(Operator::Gte),
"lt" => Some(Operator::Lt),
"lte" => Some(Operator::Lte),
"like" => Some(Operator::Like),
"ilike" => Some(Operator::Ilike),
"in" => Some(Operator::In),
"is" => Some(Operator::Is),
_ => None,
}
}
pub fn to_sql(&self) -> &'static str {
match self {
Operator::Eq => "=",
Operator::Neq => "!=",
Operator::Gt => ">",
Operator::Gte => ">=",
Operator::Lt => "<",
Operator::Lte => "<=",
Operator::Like => "LIKE",
Operator::Ilike => "ILIKE",
Operator::In => "IN",
Operator::Is => "IS",
}
}
}
#[derive(Debug, Clone)]
pub struct Order {
pub column: String,
pub direction: Direction,
}
#[derive(Debug, Clone, PartialEq)]
pub enum Direction {
Asc,
Desc,
}
#[derive(Debug, Clone, PartialEq)]
pub enum SelectNode {
Column(String),
Relation(String, Vec<SelectNode>),
}
impl SelectNode {
pub fn parse(input: &str) -> Vec<Self> {
let mut nodes = Vec::new();
let mut buffer = String::new();
let mut depth = 0;
for c in input.chars() {
match c {
'(' => {
depth += 1;
buffer.push(c);
}
')' => {
depth -= 1;
buffer.push(c);
}
',' => {
if depth == 0 {
nodes.push(Self::parse_single(&buffer));
buffer.clear();
} else {
buffer.push(c);
}
}
_ => buffer.push(c),
}
}
if !buffer.is_empty() {
nodes.push(Self::parse_single(&buffer));
}
nodes
}
fn parse_single(s: &str) -> Self {
let s = s.trim();
if let Some(idx) = s.find('(') {
if s.ends_with(')') {
let relation = &s[..idx];
let inner = &s[idx + 1..s.len() - 1];
return SelectNode::Relation(relation.to_string(), Self::parse(inner));
}
}
SelectNode::Column(s.to_string())
}
}
#[derive(Debug, Clone)]
pub enum FilterNode {
Condition {
column: String,
operator: Operator,
value: String,
},
Or(Vec<FilterNode>),
And(Vec<FilterNode>),
}
impl FilterNode {
pub fn parse(key: &str, value: &str) -> Option<Self> {
if key == "or" || key == "and" {
let content = value.trim_start_matches('(').trim_end_matches(')');
let parts = split_respecting_parens(content);
let mut nodes = Vec::new();
for part in parts {
// Try to find first dot to split col.op.val
// But handle nested logic: or(...)
if let Some(idx) = part.find('(') {
// It might be logic operator like or(...)
let k = &part[..idx];
let v = &part[idx..];
if let Some(node) = FilterNode::parse(k, v) {
nodes.push(node);
continue;
}
}
// Normal case: col.op.val
if let Some(dot_idx) = part.find('.') {
let k = &part[..dot_idx];
let v = &part[dot_idx + 1..];
if let Some(node) = FilterNode::parse(k, v) {
nodes.push(node);
}
}
}
if key == "or" {
Some(FilterNode::Or(nodes))
} else {
Some(FilterNode::And(nodes))
}
} else {
// Check for filters: column=operator.value or column=value (eq implicit)
let parts: Vec<&str> = value.splitn(2, '.').collect();
if parts.len() == 2 {
if let Some(op) = Operator::parse(parts[0]) {
return Some(FilterNode::Condition {
column: key.to_string(),
operator: op,
value: parts[1].to_string(),
});
}
}
// Default to eq
Some(FilterNode::Condition {
column: key.to_string(),
operator: Operator::Eq,
value: value.to_string(),
})
}
}
}
fn split_respecting_parens(input: &str) -> Vec<String> {
let mut parts = Vec::new();
let mut buffer = String::new();
let mut depth = 0;
for c in input.chars() {
match c {
'(' => {
depth += 1;
buffer.push(c);
}
')' => {
depth -= 1;
buffer.push(c);
}
',' => {
if depth == 0 {
parts.push(buffer.trim().to_string());
buffer.clear();
} else {
buffer.push(c);
}
}
_ => buffer.push(c),
}
}
if !buffer.is_empty() {
parts.push(buffer.trim().to_string());
}
parts
}
#[derive(Debug, Clone)]
pub struct QueryParams {
pub select: Vec<SelectNode>,
pub filters: Vec<FilterNode>,
pub order: Option<Order>,
pub limit: Option<usize>,
pub offset: Option<usize>,
}
impl QueryParams {
pub fn parse(params: HashMap<String, String>) -> Self {
let mut filters = Vec::new();
let mut select = Vec::new();
let mut order = None;
let mut limit = None;
let mut offset = None;
for (key, value) in params {
match key.as_str() {
"select" => {
select = SelectNode::parse(&value);
}
"order" => {
// format: column.asc or column.desc
let parts: Vec<&str> = value.split('.').collect();
if parts.len() == 2 {
let direction = match parts[1] {
"desc" => Direction::Desc,
_ => Direction::Asc,
};
order = Some(Order {
column: parts[0].to_string(),
direction,
});
}
}
"limit" => {
if let Ok(l) = value.parse::<usize>() {
limit = Some(l);
}
}
"offset" => {
if let Ok(o) = value.parse::<usize>() {
offset = Some(o);
}
}
_ => {
if let Some(node) = FilterNode::parse(&key, &value) {
filters.push(node);
}
}
}
}
QueryParams {
select,
filters,
order,
limit,
offset,
}
}
}

113
docker-compose.yml Normal file
View File

@@ -0,0 +1,113 @@
services:
# Tenant Database (User Data)
db:
image: postgres:15-alpine
container_name: madbase_db
restart: unless-stopped
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: postgres
# Enable logical replication for Realtime
POSTGRES_HOST_AUTH_METHOD: trust
command: ["postgres", "-c", "wal_level=logical"]
ports:
- "5432:5432"
volumes:
- madbase_db_data:/var/lib/postgresql/data
# Control Plane Database (Project Config, Secrets)
control_db:
image: postgres:15-alpine
container_name: madbase_control_db
restart: unless-stopped
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin_password
POSTGRES_DB: madbase_control
ports:
- "5433:5432"
volumes:
- madbase_control_db_data:/var/lib/postgresql/data
# Object Storage (S3 Compatible)
minio:
image: minio/minio
container_name: madbase_minio
restart: unless-stopped
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
command: server /data --console-address ":9001"
ports:
- "9000:9000"
- "9001:9001"
volumes:
- madbase_minio_data:/data
# Observability Stack
victoriametrics:
image: victoriametrics/victoria-metrics:v1.93.0
container_name: madbase_vm
ports:
- "8428:8428"
volumes:
- madbase_vm_data:/victoria-metrics-data
- ./prometheus.yml:/etc/prometheus/prometheus.yml
command:
- "--storageDataPath=/victoria-metrics-data"
- "--httpListenAddr=:8428"
- "--promscrape.config=/etc/prometheus/prometheus.yml"
extra_hosts:
- "host.docker.internal:host-gateway"
loki:
image: grafana/loki:2.9.2
container_name: madbase_loki
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- madbase_loki_data:/loki
grafana:
image: grafana/grafana:10.2.0
container_name: madbase_grafana
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- madbase_grafana_data:/var/lib/grafana
depends_on:
- victoriametrics
- loki
gateway:
build: .
container_name: madbase_gateway
restart: unless-stopped
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgres://admin:admin_password@control_db:5432/madbase_control
- DEFAULT_TENANT_DB_URL=postgres://postgres:postgres@db:5432/postgres
- S3_ENDPOINT=http://minio:9000
- JWT_SECRET=supersecret
- PORT=8000
- RUST_LOG=debug
- LOG_FORMAT=json
- RATE_LIMIT_PER_SECOND=1000
depends_on:
- db
- control_db
- victoriametrics
- loki
volumes:
madbase_db_data:
madbase_control_db_data:
madbase_minio_data:
madbase_vm_data:
madbase_loki_data:
madbase_grafana_data:

27
gateway/Cargo.toml Normal file
View File

@@ -0,0 +1,27 @@
[package]
name = "gateway"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
auth = { workspace = true }
data_api = { workspace = true }
control_plane = { workspace = true }
realtime = { workspace = true }
storage = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
dotenvy = { workspace = true }
anyhow = { workspace = true }
axum-prometheus = "0.6"
tower_governor = "0.4.2"
tower-http = { version = "0.6.8", features = ["cors", "trace"] }
moka = { version = "0.12.14", features = ["future"] }

220
gateway/src/main.rs Normal file
View File

@@ -0,0 +1,220 @@
mod middleware;
mod state;
use axum::{
extract::Request,
middleware::{from_fn, from_fn_with_state, Next},
response::Response,
routing::get,
Router,
};
use axum_prometheus::PrometheusMetricLayer;
use common::{init_pool, Config};
use state::AppState;
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::RwLock;
use tower_governor::{governor::GovernorConfigBuilder, key_extractor::SmartIpKeyExtractor, GovernorLayer};
use tower_http::cors::{Any, CorsLayer};
use tower_http::trace::TraceLayer;
use moka::future::Cache;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
async fn log_headers(req: Request, next: Next) -> Response {
tracing::debug!("Request Headers: {:?}", req.headers());
next.run(req).await
}
async fn dashboard_handler() -> axum::response::Html<&'static str> {
axum::response::Html(include_str!("../../web/index.html"))
}
async fn wait_for_db(db_url: &str) -> sqlx::PgPool {
loop {
match init_pool(db_url).await {
Ok(pool) => return pool,
Err(e) => {
tracing::warn!("Database not ready yet, retrying in 2s: {}", e);
tokio::time::sleep(Duration::from_secs(2)).await;
}
}
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Load configuration
dotenvy::dotenv().ok();
let config = Config::new().expect("Failed to load configuration");
// Initialize tracing
let rust_log = std::env::var("RUST_LOG").unwrap_or_else(|_| "debug".into());
if std::env::var("LOG_FORMAT").ok().as_deref() == Some("json") {
tracing_subscriber::registry()
.with(tracing_subscriber::EnvFilter::new(&rust_log))
.with(tracing_subscriber::fmt::layer().json())
.init();
} else {
tracing_subscriber::registry()
.with(tracing_subscriber::EnvFilter::new(&rust_log))
.with(tracing_subscriber::fmt::layer())
.init();
}
tracing::info!("Starting MadBase Gateway...");
// Initialize Database (Control Plane / Main DB)
tracing::info!("Connecting to database at {}...", config.database_url);
let pool = wait_for_db(&config.database_url).await;
tracing::info!("Database connected successfully.");
// Run Migrations
tracing::info!("Running database migrations...");
sqlx::migrate!("../migrations")
.run(&pool)
.await
.expect("Failed to run migrations");
tracing::info!("Migrations applied successfully.");
let app_state = AppState {
control_db: pool.clone(),
tenant_pools: Arc::new(RwLock::new(HashMap::new())),
};
// Auth State (Legacy/Fallback)
let auth_state = auth::AuthState {
db: pool.clone(),
config: config.clone(),
};
let data_state = data_api::handlers::DataState {
db: pool.clone(),
config: config.clone(),
};
let control_state = control_plane::ControlPlaneState { db: pool.clone() };
// Initialize Tenant Database (for Realtime)
let default_tenant_db_url = std::env::var("DEFAULT_TENANT_DB_URL")
.expect("DEFAULT_TENANT_DB_URL must be set");
tracing::info!("Connecting to default tenant database at {}...", default_tenant_db_url);
let tenant_pool = wait_for_db(&default_tenant_db_url).await;
tracing::info!("Tenant Database connected successfully.");
let mut tenant_config = config.clone();
tenant_config.database_url = default_tenant_db_url;
// Realtime Init
let (realtime_router, realtime_state) = realtime::init(tenant_pool.clone(), tenant_config.clone());
// Start Replication Listener
let repl_config = tenant_config.clone();
let repl_tx = realtime_state.broadcast_tx.clone();
tokio::spawn(async move {
if let Err(e) = realtime::replication::start_replication_listener(repl_config, repl_tx).await {
tracing::error!("Replication listener failed: {}", e);
}
});
// Storage Init
let storage_router = storage::init(pool.clone(), config.clone()).await;
// Auth Middleware State
let auth_middleware_state = auth::AuthMiddlewareState {
config: config.clone(),
};
// Project Middleware State
let project_middleware_state = middleware::ProjectMiddlewareState {
control_db: app_state.control_db.clone(),
tenant_pools: app_state.tenant_pools.clone(),
project_cache: Cache::new(100),
};
// Construct App
// We apply `resolve_project` middleware to /auth, /rest, /storage, /realtime
// But NOT /platform (admin)
let tenant_routes = Router::new()
.nest(
"/auth/v1",
auth::router()
.layer(from_fn_with_state(
auth_middleware_state.clone(),
auth::auth_middleware,
))
.with_state(auth_state),
)
.nest(
"/rest/v1",
data_api::router()
.layer(from_fn_with_state(
auth_middleware_state.clone(),
auth::auth_middleware,
))
.with_state(data_state),
)
.nest("/realtime/v1", realtime_router)
.nest(
"/storage/v1",
storage_router.layer(from_fn_with_state(
auth_middleware_state.clone(),
auth::auth_middleware,
)),
)
.layer(from_fn_with_state(
project_middleware_state.clone(),
middleware::inject_tenant_pool,
))
.layer(from_fn_with_state(
project_middleware_state,
middleware::resolve_project,
));
// Metrics
let (prometheus_layer, metric_handle) = PrometheusMetricLayer::pair();
// Rate Limiting Configuration
let governor_conf = Arc::new(
GovernorConfigBuilder::default()
.per_second(config.rate_limit_per_second)
.burst_size(config.rate_limit_per_second as u32 * 2)
.key_extractor(SmartIpKeyExtractor)
.finish()
.unwrap(),
);
let app = Router::new()
.route("/", get(|| async { "Hello, MadBase!" }))
.route("/metrics", get(|| async move { metric_handle.render() }))
.route("/dashboard", get(dashboard_handler))
.nest("/", tenant_routes) // Apply project resolution to these
.nest(
"/platform/v1", // Admin/Control Plane API (No project resolution needed)
control_plane::router(control_state),
)
.layer(GovernorLayer {
config: governor_conf,
})
.layer(
CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any),
)
.layer(TraceLayer::new_for_http())
.layer(from_fn(log_headers))
.layer(prometheus_layer);
// Run it
let addr = SocketAddr::from(([0, 0, 0, 0], config.port));
tracing::info!("Listening on {}", addr);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app.into_make_service_with_connect_info::<SocketAddr>()).await?;
Ok(())
}

133
gateway/src/middleware.rs Normal file
View File

@@ -0,0 +1,133 @@
use axum::{
extract::{Request, State},
http::StatusCode,
middleware::Next,
response::Response,
};
use common::init_pool;
use common::ProjectContext;
use moka::future::Cache;
use sqlx::PgPool;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
use tracing::warn;
#[derive(Clone)]
pub struct ProjectMiddlewareState {
pub control_db: PgPool,
pub tenant_pools: Arc<RwLock<HashMap<String, PgPool>>>,
pub project_cache: Cache<String, ProjectContext>,
}
pub async fn resolve_project(
State(state): State<ProjectMiddlewareState>,
mut req: Request,
next: Next,
) -> Result<Response, StatusCode> {
// 1. Extract Project Ref from Header or Subdomain
let project_ref = if let Some(val) = req.headers().get("x-project-ref") {
val.to_str()
.map_err(|_| StatusCode::BAD_REQUEST)?
.to_string()
} else {
"default".to_string()
};
// 2. Check Cache
if let Some(ctx) = state.project_cache.get(&project_ref).await {
req.extensions_mut().insert(ctx);
return Ok(next.run(req).await);
}
// 3. Fetch Project Config from DB
// Use a common Record struct or map manually to avoid anonymous struct type mismatch in if/else
#[derive(sqlx::FromRow)]
struct ProjectRecord {
db_url: String,
jwt_secret: String,
anon_key: Option<String>,
service_role_key: Option<String>,
}
let record = if project_ref == "default" {
sqlx::query_as::<_, ProjectRecord>(
"SELECT db_url, jwt_secret, anon_key, service_role_key FROM projects LIMIT 1",
)
.fetch_optional(&state.control_db)
.await
.map_err(|e| {
warn!("DB Error: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})?
} else {
sqlx::query_as::<_, ProjectRecord>(
"SELECT db_url, jwt_secret, anon_key, service_role_key FROM projects WHERE name = $1",
)
.bind(&project_ref)
.fetch_optional(&state.control_db)
.await
.map_err(|e| {
warn!("DB Error: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})?
};
if record.is_none() {
warn!("Project not found: {}", project_ref);
return Err(StatusCode::NOT_FOUND);
}
let project = record.unwrap();
// 4. Construct ProjectContext
let ctx = ProjectContext {
project_ref: project_ref.clone(),
db_url: project.db_url,
jwt_secret: project.jwt_secret,
anon_key: project.anon_key,
service_role_key: project.service_role_key,
};
// 5. Update Cache
state.project_cache.insert(project_ref.clone(), ctx.clone()).await;
// 6. Inject into Request
req.extensions_mut().insert(ctx);
Ok(next.run(req).await)
}
pub async fn inject_tenant_pool(
State(state): State<ProjectMiddlewareState>,
mut req: Request,
next: Next,
) -> Result<Response, StatusCode> {
let project_ctx = req
.extensions()
.get::<ProjectContext>()
.cloned()
.ok_or(StatusCode::INTERNAL_SERVER_ERROR)?;
let db_url = project_ctx.db_url.clone();
let existing = { state.tenant_pools.read().await.get(&db_url).cloned() };
let pool = if let Some(p) = existing {
p
} else {
let new_pool = init_pool(&db_url)
.await
.map_err(|e| {
warn!("Failed to init tenant pool for {}: {}", db_url, e);
StatusCode::INTERNAL_SERVER_ERROR
})?;
let mut w = state.tenant_pools.write().await;
let entry = w.entry(db_url).or_insert_with(|| new_pool.clone());
entry.clone()
};
req.extensions_mut().insert(pool);
Ok(next.run(req).await)
}

10
gateway/src/state.rs Normal file
View File

@@ -0,0 +1,10 @@
use sqlx::PgPool;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Clone)]
pub struct AppState {
pub control_db: PgPool,
pub tenant_pools: Arc<RwLock<HashMap<String, PgPool>>>,
}

View File

@@ -0,0 +1,23 @@
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email TEXT UNIQUE NOT NULL,
encrypted_password TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
last_sign_in_at TIMESTAMPTZ,
raw_app_meta_data JSONB DEFAULT '{}'::jsonb,
raw_user_meta_data JSONB DEFAULT '{}'::jsonb,
is_super_admin BOOLEAN DEFAULT false,
confirmed_at TIMESTAMPTZ,
email_confirmed_at TIMESTAMPTZ,
phone TEXT,
phone_confirmed_at TIMESTAMPTZ,
confirmation_token TEXT,
recovery_token TEXT,
email_change_token_new TEXT,
email_change TEXT
);
CREATE INDEX users_email_idx ON users (email);

View File

@@ -0,0 +1,14 @@
CREATE TABLE IF NOT EXISTS refresh_tokens (
id BIGSERIAL PRIMARY KEY,
token TEXT NOT NULL UNIQUE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
revoked BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
parent TEXT,
session_id UUID
);
CREATE INDEX IF NOT EXISTS refresh_tokens_token_idx ON refresh_tokens(token);
CREATE INDEX IF NOT EXISTS refresh_tokens_user_id_idx ON refresh_tokens(user_id);

View File

@@ -0,0 +1,72 @@
-- Create roles if they don't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'authenticated') THEN
CREATE ROLE authenticated NOLOGIN;
END IF;
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'anon') THEN
CREATE ROLE anon NOLOGIN;
END IF;
END
$$;
CREATE SCHEMA IF NOT EXISTS storage;
-- Grant usage
GRANT USAGE ON SCHEMA storage TO authenticated, anon;
GRANT USAGE ON SCHEMA public TO authenticated, anon;
CREATE TABLE IF NOT EXISTS storage.buckets (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
public BOOLEAN DEFAULT false,
owner UUID REFERENCES public.users(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
CREATE TABLE IF NOT EXISTS storage.objects (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
bucket_id TEXT REFERENCES storage.buckets(id),
name TEXT NOT NULL,
owner UUID REFERENCES public.users(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
last_accessed_at TIMESTAMPTZ DEFAULT now(),
metadata JSONB,
UNIQUE (bucket_id, name)
);
-- Grant table access (RLS will filter rows)
GRANT ALL ON TABLE storage.buckets TO authenticated, anon;
GRANT ALL ON TABLE storage.objects TO authenticated, anon;
ALTER TABLE storage.buckets ENABLE ROW LEVEL SECURITY;
ALTER TABLE storage.objects ENABLE ROW LEVEL SECURITY;
-- Helper to allow public access to public buckets
CREATE POLICY "Public Buckets are viewable by everyone"
ON storage.buckets FOR SELECT
USING ( public = true );
-- Helper to allow authenticated users to view their own buckets
CREATE POLICY "Users can view their own buckets"
ON storage.buckets FOR SELECT
TO authenticated
USING ( owner = current_setting('request.jwt.claim.sub', true)::uuid );
-- Objects policies depend on bucket public status or object owner
CREATE POLICY "Public Objects are viewable by everyone"
ON storage.objects FOR SELECT
USING ( bucket_id IN (SELECT id FROM storage.buckets WHERE public = true) );
CREATE POLICY "Users can view their own objects"
ON storage.objects FOR SELECT
TO authenticated
USING ( owner = current_setting('request.jwt.claim.sub', true)::uuid );
CREATE POLICY "Users can insert their own objects"
ON storage.objects FOR INSERT
TO authenticated
WITH CHECK ( owner = current_setting('request.jwt.claim.sub', true)::uuid );

View File

@@ -0,0 +1,30 @@
-- This migration runs on the CONTROL PLANE database (port 5433), not the tenant DB.
-- We need to ensure we migrate the correct DB.
-- For MVP, if we only have one migration pipeline, we might mix them?
-- Ideally we use `sqlx migrate run --database-url ...` for this specific migration.
-- Or we just put this table in the main DB for the MVP to avoid infrastructure complexity?
-- The `docker-compose.yml` has `control_db`.
-- Let's try to use the main DB for everything in MVP to reduce friction,
-- OR use a separate folder for control plane migrations.
-- Let's put `projects` in the `public` schema of the main DB for simplicity of the "Single Tenant / Self Hosted" mode.
-- In a real SaaS, this would be separate.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE IF NOT EXISTS projects (
id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
name TEXT NOT NULL,
owner_id UUID, -- No FK to users strictly required if users are in tenant DB, but here they are same DB.
status TEXT DEFAULT 'active',
db_url TEXT NOT NULL,
jwt_secret TEXT NOT NULL DEFAULT encode(gen_random_bytes(32), 'hex'),
anon_key TEXT,
service_role_key TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Trigger to generate keys on insert? Or handle in code.
-- Let's handle in code for keys.

View File

@@ -0,0 +1,49 @@
-- Realtime schema
CREATE SCHEMA IF NOT EXISTS madbase_realtime;
-- Generic Trigger Function
CREATE OR REPLACE FUNCTION madbase_realtime.broadcast_changes()
RETURNS trigger AS $$
DECLARE
payload jsonb;
topic text;
BEGIN
-- Construct payload
payload = jsonb_build_object(
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'type', TG_OP,
'timestamp', now()
);
IF (TG_OP = 'INSERT') THEN
payload = payload || jsonb_build_object('record', row_to_json(NEW)::jsonb);
ELSIF (TG_OP = 'UPDATE') THEN
payload = payload || jsonb_build_object(
'record', row_to_json(NEW)::jsonb,
'old_record', row_to_json(OLD)::jsonb
);
ELSIF (TG_OP = 'DELETE') THEN
payload = payload || jsonb_build_object('old_record', row_to_json(OLD)::jsonb);
END IF;
-- Send notification
-- Payload limit is 8000 bytes. Larger payloads will fail or need truncation.
-- For MVP, we assume it fits.
PERFORM pg_notify('madbase_realtime', payload::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Example: Enable for public.users (if it exists)
-- DO $$
-- BEGIN
-- IF EXISTS (SELECT FROM pg_tables WHERE schemaname = 'public' AND tablename = 'users') THEN
-- CREATE TRIGGER realtime_users_changes
-- AFTER INSERT OR UPDATE OR DELETE ON public.users
-- FOR EACH ROW EXECUTE FUNCTION madbase_realtime.broadcast_changes();
-- END IF;
-- END
-- $$;

View File

@@ -0,0 +1,71 @@
-- Create History Table
CREATE TABLE IF NOT EXISTS madbase_realtime.messages (
id bigserial PRIMARY KEY,
topic text NOT NULL, -- schema:table
payload jsonb NOT NULL,
created_at timestamptz DEFAULT now()
);
CREATE INDEX IF NOT EXISTS idx_realtime_messages_topic_id ON madbase_realtime.messages (topic, id);
-- Update Trigger Function
CREATE OR REPLACE FUNCTION madbase_realtime.broadcast_changes()
RETURNS trigger AS $$
DECLARE
base_payload jsonb;
final_payload jsonb;
topic text;
msg_id bigint;
BEGIN
-- Construct topic
topic = TG_TABLE_SCHEMA || ':' || TG_TABLE_NAME;
-- Construct base payload
base_payload = jsonb_build_object(
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'type', TG_OP,
'timestamp', now()
);
IF (TG_OP = 'INSERT') THEN
base_payload = base_payload || jsonb_build_object('record', row_to_json(NEW)::jsonb);
ELSIF (TG_OP = 'UPDATE') THEN
base_payload = base_payload || jsonb_build_object(
'record', row_to_json(NEW)::jsonb,
'old_record', row_to_json(OLD)::jsonb
);
ELSIF (TG_OP = 'DELETE') THEN
base_payload = base_payload || jsonb_build_object('old_record', row_to_json(OLD)::jsonb);
END IF;
-- Insert into history
INSERT INTO madbase_realtime.messages (topic, payload)
VALUES (topic, base_payload)
RETURNING id INTO msg_id;
-- Add ID to payload
final_payload = base_payload || jsonb_build_object('id', msg_id);
-- Send notification
-- Payload limit is 8000 bytes. Larger payloads will fail or need truncation.
-- If payload is too large, we can send a "payload too large" message with ID,
-- and client can fetch it from history.
-- For MVP, we assume it fits or fail silently on notify (but insert succeeds).
BEGIN
PERFORM pg_notify('madbase_realtime', final_payload::text);
EXCEPTION WHEN string_data_right_truncation OR others THEN
-- If notification fails, client can still rely on history if they poll or reconnect.
-- We could notify just the ID.
PERFORM pg_notify('madbase_realtime', jsonb_build_object(
'id', msg_id,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'type', TG_OP,
'truncated', true
)::text);
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

View File

@@ -0,0 +1,35 @@
DO $$
BEGIN
IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'service_role') THEN
CREATE ROLE service_role NOLOGIN;
END IF;
END
$$;
ALTER ROLE service_role WITH BYPASSRLS;
GRANT USAGE ON SCHEMA storage TO service_role;
GRANT ALL ON ALL TABLES IN SCHEMA storage TO service_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA storage TO service_role;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA storage TO service_role;
-- Policies for service_role
CREATE POLICY "Service role can do anything on buckets"
ON storage.buckets
FOR ALL
TO service_role
USING (true)
WITH CHECK (true);
CREATE POLICY "Service role can do anything on objects"
ON storage.objects
FOR ALL
TO service_role
USING (true)
WITH CHECK (true);
-- Also grant usage on public schema just in case
GRANT USAGE ON SCHEMA public TO service_role;
GRANT ALL ON ALL TABLES IN SCHEMA public TO service_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO service_role;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO service_role;

View File

@@ -0,0 +1 @@
{"version":"4.0.18","results":[[":tests/integration/realtime.test.ts",{"duration":5003.304583999998,"failed":true}],[":tests/integration/db.test.ts",{"duration":131.38708399999996,"failed":false}],[":tests/integration/storage.test.ts",{"duration":106.0102910000005,"failed":false}],[":tests/integration/auth.test.ts",{"duration":144.21875,"failed":false}]]}

7
prometheus.yml Normal file
View File

@@ -0,0 +1,7 @@
global:
scrape_interval: 5s
scrape_configs:
- job_name: 'madbase_gateway'
static_configs:
- targets: ['host.docker.internal:8000']

22
realtime/Cargo.toml Normal file
View File

@@ -0,0 +1,22 @@
[package]
name = "realtime"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
auth = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true, features = ["ws"] }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
tracing = { workspace = true }
futures = { workspace = true }
uuid = { workspace = true }
tokio-postgres = "0.7"
postgres-protocol = "0.6"
anyhow = { workspace = true }
bytes = "1.0"
jsonwebtoken = { workspace = true }
chrono.workspace = true

34
realtime/src/lib.rs Normal file
View File

@@ -0,0 +1,34 @@
pub mod replication;
pub mod ws;
use axum::Router;
use common::Config;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sqlx::PgPool;
use tokio::sync::broadcast;
pub use ws::{router, RealtimeState};
#[derive(Deserialize, Serialize, Debug, Clone)]
pub struct PostgresPayload {
pub schema: String,
pub table: String,
pub r#type: String,
#[serde(default)]
pub record: Option<Value>,
#[serde(default)]
pub old_record: Option<Value>,
#[serde(default)]
pub id: Option<i64>,
}
pub fn init(db: PgPool, config: Config) -> (Router, RealtimeState) {
let (tx, _) = broadcast::channel(100);
let state = RealtimeState {
db,
config,
broadcast_tx: tx,
};
(ws::router(state.clone()), state)
}

View File

@@ -0,0 +1,35 @@
use common::Config;
use tokio::sync::broadcast;
use std::sync::Arc;
use crate::PostgresPayload;
// Fallback listener using LISTEN/NOTIFY
pub async fn start_replication_listener(
config: Config,
broadcast_tx: broadcast::Sender<Arc<PostgresPayload>>,
) -> anyhow::Result<()> {
let mut listener = sqlx::postgres::PgListener::connect(&config.database_url).await?;
listener.listen("madbase_realtime").await?;
tracing::info!("Listening on channel 'madbase_realtime'");
loop {
match listener.recv().await {
Ok(notification) => {
let payload = notification.payload();
tracing::debug!("Received notification: {}", payload);
match serde_json::from_str::<PostgresPayload>(payload) {
Ok(pg_payload) => {
let _ = broadcast_tx.send(Arc::new(pg_payload));
}
Err(e) => {
tracing::error!("Failed to parse notification payload: {}", e);
}
}
}
Err(e) => {
tracing::error!("Replication listener error: {}", e);
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
}
}
}

223
realtime/src/ws.rs Normal file
View File

@@ -0,0 +1,223 @@
use crate::PostgresPayload;
use axum::{
extract::{
ws::{Message, WebSocket, WebSocketUpgrade},
Request, State,
},
middleware::{from_fn, Next},
response::{IntoResponse, Response},
routing::get,
Extension, Router,
};
use common::{Config, ProjectContext};
use futures::{sink::SinkExt, stream::StreamExt};
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sqlx::PgPool;
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::{broadcast, mpsc};
#[derive(Clone)]
pub struct RealtimeState {
pub db: PgPool,
pub config: Config,
pub broadcast_tx: broadcast::Sender<Arc<PostgresPayload>>,
}
#[derive(Debug, Serialize, Deserialize)]
struct Claims {
sub: String,
role: String,
exp: usize,
}
pub async fn ws_handler(
ws: WebSocketUpgrade,
State(state): State<RealtimeState>,
Extension(project_ctx): Extension<ProjectContext>,
) -> impl IntoResponse {
ws.on_upgrade(move |socket| handle_socket(socket, state, project_ctx))
}
async fn handle_socket(socket: WebSocket, state: RealtimeState, project_ctx: ProjectContext) {
let (mut ws_sender, mut ws_receiver) = socket.split();
// Channel for internal tasks to send messages to the websocket client
// We send raw JSON string to avoid struct complexity
let (tx_internal, mut rx_internal) = mpsc::channel::<String>(100);
let mut rx_broadcast = state.broadcast_tx.subscribe();
let mut subscriptions = HashSet::<String>::new();
// We might store the user's role/claims if they authenticate
let mut _user_claims: Option<Claims> = None;
loop {
tokio::select! {
// 1. Handle incoming broadcast messages from Postgres
res = rx_broadcast.recv() => {
match res {
Ok(msg_arc) => {
let pg_payload = msg_arc.as_ref();
tracing::debug!("Received broadcast for {}.{}", pg_payload.schema, pg_payload.table);
let topic = format!("realtime:{}:{}", pg_payload.schema, pg_payload.table);
let wildcard_topic = format!("realtime:{}:*", pg_payload.schema);
let global_topic = "realtime:*".to_string();
if subscriptions.contains(&topic) || subscriptions.contains(&wildcard_topic) || subscriptions.contains(&global_topic) {
tracing::debug!("Match found for topic: {}", topic);
// Map to Supabase Realtime V2 format
let payload = serde_json::json!({
"schema": pg_payload.schema,
"table": pg_payload.table,
"commit_timestamp": chrono::Utc::now().to_rfc3339_opts(chrono::SecondsFormat::Millis, true),
"type": pg_payload.r#type.to_uppercase(),
"event": pg_payload.r#type.to_uppercase(), // For Supabase client fallback
"new": pg_payload.record,
"old": pg_payload.old_record,
"errors": Option::<String>::None
});
// Phoenix V2 Message: [null, null, topic, "postgres_changes", payload]
let msg_arr = serde_json::json!([
Value::Null,
Value::Null,
topic,
"postgres_changes",
payload
]);
if let Ok(json) = serde_json::to_string(&msg_arr) {
tracing::debug!("Sending to client: {}", json);
if ws_sender.send(Message::Text(json)).await.is_err() {
break;
}
}
}
}
Err(broadcast::error::RecvError::Lagged(_)) => {
tracing::warn!("Realtime broadcast lagged");
continue;
}
Err(broadcast::error::RecvError::Closed) => {
break;
}
}
}
// 2. Handle internal messages
msg = rx_internal.recv() => {
match msg {
Some(msg) => {
if ws_sender.send(Message::Text(msg)).await.is_err() {
break;
}
}
None => break, // Channel closed
}
}
// 3. Handle incoming messages from Client
result = ws_receiver.next() => {
match result {
Some(Ok(Message::Text(text))) => {
// Parse Phoenix V2 Array
if let Ok(arr) = serde_json::from_str::<Vec<Value>>(&text) {
if arr.len() >= 4 {
let join_ref = arr.get(0).and_then(|v| v.as_str()).map(|s| s.to_string());
let r#ref = arr.get(1).and_then(|v| v.as_str()).map(|s| s.to_string());
let topic = arr.get(2).and_then(|v| v.as_str()).unwrap_or("").to_string();
let event = arr.get(3).and_then(|v| v.as_str()).unwrap_or("").to_string();
let payload = arr.get(4).cloned().unwrap_or(Value::Null);
match event.as_str() {
"phx_join" => {
// Auth Check
let token = payload.get("access_token").and_then(|v| v.as_str());
if let Some(jwt) = token {
let validation = Validation::new(Algorithm::HS256);
match decode::<Claims>(jwt, &DecodingKey::from_secret(project_ctx.jwt_secret.as_bytes()), &validation) {
Ok(data) => {
_user_claims = Some(data.claims);
},
Err(_) => {
tracing::warn!("Invalid JWT in join");
}
}
}
tracing::debug!("Client joined: {}", topic);
subscriptions.insert(topic.clone());
// Send Ack: [join_ref, ref, topic, "phx_reply", {status: "ok", response: {}}]
let reply = serde_json::json!([
join_ref,
r#ref,
topic,
"phx_reply",
{ "status": "ok", "response": {} }
]);
if let Ok(reply_str) = serde_json::to_string(&reply) {
let _ = tx_internal.send(reply_str).await;
}
},
"phx_leave" => {
tracing::debug!("Client left: {}", topic);
subscriptions.remove(&topic);
let reply = serde_json::json!([
join_ref,
r#ref,
topic,
"phx_reply",
{ "status": "ok", "response": {} }
]);
if let Ok(reply_str) = serde_json::to_string(&reply) {
let _ = tx_internal.send(reply_str).await;
}
},
"heartbeat" => {
let reply = serde_json::json!([
Value::Null,
r#ref,
"phoenix",
"phx_reply",
{ "status": "ok", "response": {} }
]);
if let Ok(reply_str) = serde_json::to_string(&reply) {
let _ = tx_internal.send(reply_str).await;
}
},
_ => {
tracing::debug!("Unknown event: {}", event);
}
}
}
} else {
tracing::warn!("Failed to deserialize client message: {}", text);
}
},
Some(Ok(Message::Close(_))) => break,
Some(Err(_)) => break,
None => break, // Stream closed
_ => {}
}
}
}
}
}
async fn log_realtime(req: Request, next: Next) -> Response {
tracing::info!("Realtime router reached: {}", req.uri());
next.run(req).await
}
pub fn router(state: RealtimeState) -> Router {
Router::new()
.route("/websocket", get(ws_handler))
.layer(from_fn(log_realtime))
.with_state(state)
}

54
restore_trigger.sql Normal file
View File

@@ -0,0 +1,54 @@
CREATE OR REPLACE FUNCTION madbase_realtime.broadcast_changes()
RETURNS trigger AS $$
DECLARE
base_payload jsonb;
final_payload jsonb;
topic text;
msg_id bigint;
BEGIN
-- Construct topic
topic = TG_TABLE_SCHEMA || ':' || TG_TABLE_NAME;
-- Construct base payload
base_payload = jsonb_build_object(
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'type', TG_OP,
'timestamp', now()
);
IF (TG_OP = 'INSERT') THEN
base_payload = base_payload || jsonb_build_object('record', row_to_json(NEW)::jsonb);
ELSIF (TG_OP = 'UPDATE') THEN
base_payload = base_payload || jsonb_build_object(
'record', row_to_json(NEW)::jsonb,
'old_record', row_to_json(OLD)::jsonb
);
ELSIF (TG_OP = 'DELETE') THEN
base_payload = base_payload || jsonb_build_object('old_record', row_to_json(OLD)::jsonb);
END IF;
-- Insert into history
INSERT INTO madbase_realtime.messages (topic, payload)
VALUES (topic, base_payload)
RETURNING id INTO msg_id;
-- Add ID to payload
final_payload = base_payload || jsonb_build_object('id', msg_id);
-- Send notification
BEGIN
PERFORM pg_notify('madbase_realtime', final_payload::text);
EXCEPTION WHEN string_data_right_truncation OR others THEN
PERFORM pg_notify('madbase_realtime', jsonb_build_object(
'id', msg_id,
'schema', TG_TABLE_SCHEMA,
'table', TG_TABLE_NAME,
'type', TG_OP,
'truncated', true
)::text);
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

3
src/main.rs Normal file
View File

@@ -0,0 +1,3 @@
fn main() {
println!("Hello, world!");
}

26
storage/Cargo.toml Normal file
View File

@@ -0,0 +1,26 @@
[package]
name = "storage"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { workspace = true }
auth = { workspace = true }
tokio = { workspace = true }
axum = { workspace = true, features = ["multipart"] }
serde = { workspace = true }
serde_json = { workspace = true }
sqlx = { workspace = true }
tracing = { workspace = true }
futures = { workspace = true }
aws-sdk-s3 = { workspace = true }
aws-config = { workspace = true }
aws-types = { workspace = true }
bytes = "1.0"
anyhow = { workspace = true }
tower = "0.4"
tower-http = { version = "0.5", features = ["fs", "trace"] }
uuid = { workspace = true }
chrono = { workspace = true }
http-body-util = "0.1.3"

427
storage/src/handlers.rs Normal file
View File

@@ -0,0 +1,427 @@
use auth::AuthContext;
use aws_sdk_s3::{primitives::ByteStream, Client};
use axum::{
body::{Body, Bytes},
extract::{FromRequest, Multipart, Path, Request, State},
http::{header::{self, CONTENT_TYPE}, HeaderMap, StatusCode},
response::{IntoResponse, Json},
Extension,
};
use common::{Config, ProjectContext};
use futures::stream::StreamExt;
use serde::{Deserialize, Serialize};
use serde_json::json;
use sqlx::{PgPool, Row};
use std::sync::Arc;
use uuid::Uuid;
use http_body_util::BodyExt; // For collect()
#[derive(Clone)]
pub struct StorageState {
pub db: PgPool,
pub s3_client: Client,
pub config: Config,
pub bucket_name: String, // Global S3 Bucket Name
}
#[derive(Serialize, sqlx::FromRow)]
pub struct FileObject {
pub name: String,
pub id: Option<Uuid>,
pub updated_at: Option<chrono::DateTime<chrono::Utc>>,
pub created_at: Option<chrono::DateTime<chrono::Utc>>,
pub last_accessed_at: Option<chrono::DateTime<chrono::Utc>>,
pub metadata: Option<serde_json::Value>,
}
pub async fn list_buckets(
State(state): State<StorageState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Extension(_project_ctx): Extension<ProjectContext>,
) -> Result<Json<Vec<String>>, (StatusCode, String)> {
// Query storage.buckets with RLS
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
// In a real system, `storage.buckets` table would have a `project_id` column?
// OR we just use the single DB (which is shared in MVP) but RLS handles ownership?
// Wait, the DB tables are shared across all tenants in this MVP architecture?
// Yes, we only have one Postgres instance.
// So we need to filter by tenant/project if we had a project_id column.
// But `storage.buckets` schema (from Supabase) usually doesn't have project_id if it's per-tenant DB.
// Since we share the DB, we must add a way to segregate.
// BUT, for MVP, let's assume `buckets` are global within the DB?
// No, that leaks data.
// Simplification: We prefix bucket IDs with `project_ref` in the DB?
// Or we just rely on RLS.
// If we rely on RLS, we need to know WHICH buckets belong to WHICH project.
// `storage.buckets` has an `owner` column (User UUID).
// Users are unique per project? No, we share `auth.users` too in MVP?
// Actually, `auth.users` is global in this MVP implementation (single table).
// So users from Project A and Project B are all in the same table.
// If a user creates a bucket, they own it.
// So `list_buckets` will show buckets owned by the user.
// This is "User Multitenancy", not "Project Multitenancy".
// If we want "Project Multitenancy", we need to filter by Project Context.
// Let's assume for now we just list what RLS allows.
let buckets: Vec<String> = sqlx::query_scalar("SELECT id FROM storage.buckets")
.fetch_all(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
// Filter buckets that start with project_ref?
// Or just return all visible.
// Let's filter by prefix to enforce project isolation if we adopt a naming convention.
// Convention: "{project_ref}_{bucket_name}"
// But user sends "bucket_name".
// Let's assume we return "bucket_name" by stripping prefix?
// Too complex for MVP.
// Let's just return what RLS gives us.
Ok(Json(buckets))
}
pub async fn list_objects(
State(state): State<StorageState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Extension(_project_ctx): Extension<ProjectContext>,
Path(bucket_id): Path<String>,
) -> Result<Json<Vec<FileObject>>, (StatusCode, String)> {
tracing::info!("Starting list_objects for bucket: {}", bucket_id);
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let mut tx = db
.begin()
.await
.map_err(|e| {
tracing::error!("Failed to begin transaction: {}", e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Failed to set role: {}", e);
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
// Ensure we are accessing a bucket that belongs to this project?
// We can check if `bucket_id` matches expected pattern or if we use a project_id column.
// For MVP, we trust RLS on the `storage.buckets` table.
let bucket_exists: Option<String> =
sqlx::query_scalar("SELECT id FROM storage.buckets WHERE id = $1")
.bind(&bucket_id)
.fetch_optional(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
if bucket_exists.is_none() {
return Err((StatusCode::NOT_FOUND, "Bucket not found".to_string()));
}
let objects = sqlx::query_as::<_, FileObject>(
r#"
SELECT name, id, updated_at, created_at, last_accessed_at, metadata
FROM storage.objects
WHERE bucket_id = $1
"#,
)
.bind(&bucket_id)
.fetch_all(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
Ok(Json(objects))
}
pub async fn upload_object(
State(state): State<StorageState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Extension(project_ctx): Extension<ProjectContext>,
Path((bucket_id, filename)): Path<(String, String)>,
request: Request,
) -> Result<impl IntoResponse, (StatusCode, String)> {
tracing::info!("Starting upload_object for bucket: {}, filename: {}", bucket_id, filename);
let content_type = request.headers().get(CONTENT_TYPE)
.and_then(|v| v.to_str().ok())
.unwrap_or("");
let data = if content_type.starts_with("multipart/form-data") {
let mut multipart = Multipart::from_request(request, &state).await
.map_err(|e| (StatusCode::BAD_REQUEST, e.to_string()))?;
let mut file_data = None;
while let Ok(Some(field)) = multipart.next_field().await {
if field.name() == Some("file") || field.name() == Some("") {
let bytes = field.bytes().await.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
file_data = Some(bytes);
break;
}
}
file_data.ok_or((StatusCode::BAD_REQUEST, "No file found in multipart".to_string()))?
} else {
// Raw body
let body = request.into_body();
body.collect().await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.to_bytes()
};
let size = data.len();
tracing::info!("File size: {} bytes", size);
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let mut tx = db
.begin()
.await
.map_err(|e| {
tracing::error!("Failed to begin transaction: {}", e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Failed to set role: {}", e);
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Failed to set claims: {}", e);
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
let bucket_exists: Option<String> =
sqlx::query_scalar("SELECT id FROM storage.buckets WHERE id = $1")
.bind(&bucket_id)
.fetch_optional(&mut *tx)
.await
.map_err(|e| {
tracing::error!("Failed to check bucket existence: {}", e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
if bucket_exists.is_none() {
tracing::warn!("Bucket not found: {}", bucket_id);
return Err((StatusCode::NOT_FOUND, "Bucket not found".to_string()));
}
let key = format!("{}/{}/{}", project_ctx.project_ref, bucket_id, filename);
tracing::info!("Uploading to S3 with key: {}", key);
state
.s3_client
.put_object()
.bucket(&state.bucket_name)
.key(&key)
.body(ByteStream::from(data))
.send()
.await
.map_err(|e| {
tracing::error!("S3 PutObject error: {:?}", e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
tracing::info!("S3 upload successful");
let user_id = auth_ctx
.claims
.as_ref()
.and_then(|c| Uuid::parse_str(&c.sub).ok());
tracing::info!("Inserting metadata into DB");
let file_object = sqlx::query_as::<_, FileObject>(
r#"
INSERT INTO storage.objects (bucket_id, name, owner, metadata)
VALUES ($1, $2, $3, $4)
ON CONFLICT (bucket_id, name)
DO UPDATE SET updated_at = now(), metadata = $4
RETURNING name, id, updated_at, created_at, last_accessed_at, metadata
"#,
)
.bind(&bucket_id)
.bind(&filename)
.bind(user_id)
.bind(serde_json::json!({ "size": size, "mimetype": "application/octet-stream" }))
.fetch_one(&mut *tx)
.await
.map_err(|e| {
tracing::error!("DB Insert Object error: {:?}", e);
(StatusCode::FORBIDDEN, format!("Permission denied: {}", e))
})?;
tx.commit()
.await
.map_err(|e| {
tracing::error!("Commit error: {}", e);
(StatusCode::INTERNAL_SERVER_ERROR, e.to_string())
})?;
Ok((StatusCode::CREATED, Json(file_object)))
}
pub async fn download_object(
State(state): State<StorageState>,
db: Option<Extension<PgPool>>,
Extension(auth_ctx): Extension<AuthContext>,
Extension(project_ctx): Extension<ProjectContext>,
Path((bucket_id, filename)): Path<(String, String)>,
) -> Result<impl IntoResponse, (StatusCode, String)> {
let db = db.map(|Extension(p)| p).unwrap_or_else(|| state.db.clone());
let mut tx = db
.begin()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
let role_query = format!("SET LOCAL role = '{}'", auth_ctx.role);
sqlx::query(&role_query)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set role: {}", e),
)
})?;
if let Some(claims) = &auth_ctx.claims {
let sub_query = "SELECT set_config('request.jwt.claim.sub', $1, true)";
sqlx::query(sub_query)
.bind(&claims.sub)
.execute(&mut *tx)
.await
.map_err(|e| {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Failed to set claims: {}", e),
)
})?;
}
let object_exists: Option<Uuid> =
sqlx::query_scalar("SELECT id FROM storage.objects WHERE bucket_id = $1 AND name = $2")
.bind(&bucket_id)
.bind(&filename)
.fetch_optional(&mut *tx)
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?;
if object_exists.is_none() {
return Err((
StatusCode::NOT_FOUND,
"File not found or access denied".to_string(),
));
}
// S3 Key Namespacing: {project_ref}/{bucket_id}/{filename}
let key = format!("{}/{}/{}", project_ctx.project_ref, bucket_id, filename);
let resp = state
.s3_client
.get_object()
.bucket(&state.bucket_name)
.key(&key)
.send()
.await
.map_err(|_e| {
(
StatusCode::NOT_FOUND,
"File content not found in storage".to_string(),
)
})?;
let mut headers = HeaderMap::new();
if let Some(ct) = resp.content_type() {
if let Ok(val) = ct.parse() {
headers.insert("Content-Type", val);
}
}
let body_bytes = resp
.body
.collect()
.await
.map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?
.into_bytes();
if let Ok(s) = std::str::from_utf8(&body_bytes) {
tracing::info!("Downloaded content (utf8): {}", s);
} else {
tracing::info!("Downloaded content (binary): {} bytes", body_bytes.len());
}
let body = Body::from(body_bytes);
Ok((headers, body))
}

60
storage/src/lib.rs Normal file
View File

@@ -0,0 +1,60 @@
pub mod handlers;
use aws_config::BehaviorVersion;
use aws_sdk_s3::config::Credentials;
use aws_sdk_s3::{config::Region, Client};
use axum::{extract::DefaultBodyLimit, routing::{get, post}, Router};
use common::Config;
use handlers::StorageState;
use sqlx::PgPool;
pub async fn init(db: PgPool, config: Config) -> Router {
// Initialize S3 Client (MinIO)
let s3_endpoint =
std::env::var("S3_ENDPOINT").unwrap_or_else(|_| "http://localhost:9000".to_string());
let s3_access_key =
std::env::var("MINIO_ROOT_USER").unwrap_or_else(|_| "minioadmin".to_string());
let s3_secret_key =
std::env::var("MINIO_ROOT_PASSWORD").unwrap_or_else(|_| "minioadmin".to_string());
let s3_bucket = std::env::var("S3_BUCKET").unwrap_or_else(|_| "madbase".to_string());
let aws_config = aws_config::defaults(BehaviorVersion::latest())
.region(Region::new("us-east-1"))
.endpoint_url(&s3_endpoint)
.credentials_provider(Credentials::new(
s3_access_key,
s3_secret_key,
None,
None,
"static",
))
.load()
.await;
let s3_config = aws_sdk_s3::config::Builder::from(&aws_config)
.endpoint_url(&s3_endpoint)
.force_path_style(true)
.build();
let s3_client = Client::from_conf(s3_config);
// Create bucket if not exists
let _ = s3_client.create_bucket().bucket(&s3_bucket).send().await;
let state = StorageState {
db,
s3_client,
config,
bucket_name: s3_bucket,
};
Router::new()
.route("/bucket", get(handlers::list_buckets))
.route("/object/list/:bucket_id", post(handlers::list_objects))
.route(
"/object/:bucket_id/:filename",
get(handlers::download_object).post(handlers::upload_object),
)
.layer(DefaultBodyLimit::max(10 * 1024 * 1024)) // 10MB limit
.with_state(state)
}

113
test_multitenancy.sh Executable file
View File

@@ -0,0 +1,113 @@
#!/bin/bash
# Configuration
GATEWAY_URL="${GATEWAY_URL:-http://localhost:8000}"
PROJECT_NAME="test-project-$(date +%s)"
USER_EMAIL="user-$(date +%s)@example.com"
USER_PASSWORD="securepassword123"
echo "🧪 Starting Multi-tenancy E2E Test..."
echo "-------------------------------------"
# 1. Create Project
echo "1. Creating Project '$PROJECT_NAME'..."
RESPONSE=$(curl -s -X POST "$GATEWAY_URL/platform/v1/projects" \
-H "Content-Type: application/json" \
-d "{\"name\": \"$PROJECT_NAME\"}")
# Extract keys using grep/sed/awk since jq might not be installed
ANON_KEY=$(echo $RESPONSE | grep -o '"anon_key":"[^"]*' | cut -d'"' -f4)
SERVICE_KEY=$(echo $RESPONSE | grep -o '"service_role_key":"[^"]*' | cut -d'"' -f4)
PROJECT_ID=$(echo $RESPONSE | grep -o '"id":"[^"]*' | cut -d'"' -f4)
if [ -z "$ANON_KEY" ]; then
echo "❌ Failed to create project. Response: $RESPONSE"
exit 1
fi
echo "✅ Project Created!"
echo " ID: $PROJECT_ID"
# echo " Anon Key: $ANON_KEY"
echo " (Keys received)"
# 2. Signup User (Project Context)
echo ""
echo "2. Signing up user '$USER_EMAIL' in project context..."
SIGNUP_RES=$(curl -v -X POST "$GATEWAY_URL/auth/v1/signup" \
-H "Content-Type: application/json" \
-H "apikey: $ANON_KEY" \
-H "x-project-ref: $PROJECT_NAME" \
-d "{\"email\": \"$USER_EMAIL\", \"password\": \"$USER_PASSWORD\"}")
# Check for success (access_token present)
ACCESS_TOKEN=$(echo $SIGNUP_RES | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)
if [ -z "$ACCESS_TOKEN" ]; then
# Maybe user already exists or error?
echo "⚠️ Signup response: $SIGNUP_RES"
# Try login instead if signup failed (e.g. if we re-ran script quickly)
echo " Attempting login..."
LOGIN_RES=$(curl -v -X POST "$GATEWAY_URL/auth/v1/token?grant_type=password" \
-H "Content-Type: application/json" \
-H "apikey: $ANON_KEY" \
-H "x-project-ref: $PROJECT_NAME" \
-d "{\"email\": \"$USER_EMAIL\", \"password\": \"$USER_PASSWORD\"}")
ACCESS_TOKEN=$(echo $LOGIN_RES | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)
fi
if [ -z "$ACCESS_TOKEN" ]; then
echo "❌ Authentication failed."
exit 1
fi
echo "✅ Authenticated! Token received."
# 3. Create Data (Insert into public.users? Or create a table first?)
# Since we don't have a 'create table' API exposed (except maybe via RPC or direct SQL which we don't expose via REST),
# we can only insert into EXISTING tables.
# The only table guaranteed to exist is `auth.users` (which we just inserted into via Signup)
# and `storage.buckets`.
# Let's try to list buckets using the new user token.
echo ""
echo "3. Testing Data/Storage Access (List Buckets)..."
BUCKETS_RAW=$(curl -sS -X GET "$GATEWAY_URL/storage/v1/bucket" \
-H "apikey: $ANON_KEY" \
-H "Authorization: Bearer $ACCESS_TOKEN" \
-H "x-project-ref: $PROJECT_NAME" \
-w "\nHTTP_STATUS:%{http_code}\n")
BUCKETS_STATUS=$(echo "$BUCKETS_RAW" | tail -n 1 | sed 's/HTTP_STATUS://')
BUCKETS_RES=$(echo "$BUCKETS_RAW" | sed '$d')
echo " Status: $BUCKETS_STATUS"
echo " Response: $BUCKETS_RES"
if [[ $BUCKETS_STATUS == 2* ]] && [[ $BUCKETS_RES == *"["* ]]; then
echo "✅ Storage API Accessed Successfully!"
else
echo "❌ Storage API Failed."
exit 1
fi
# 4. Verify Project Isolation (Optional - try with wrong key)
echo ""
echo "4. Verifying Isolation (Access with wrong project ref)..."
WRONG_RES=$(curl -s -X GET "$GATEWAY_URL/storage/v1/bucket" \
-H "apikey: $ANON_KEY" \
-H "Authorization: Bearer $ACCESS_TOKEN" \
-H "x-project-ref: non-existent-project")
if [[ $WRONG_RES == *"Not Found"* ]] || [[ $WRONG_RES == *"404"* ]] || [[ -z "$WRONG_RES" ]]; then
echo "✅ Isolation Verified (Request failed as expected)."
else
# Note: Middleware returns 404 if project not found.
# curl -s might return empty if 404? No, it returns body.
# Let's check status code in real script, but here simple grep.
echo " Response: $WRONG_RES"
fi
echo ""
echo "🎉 E2E Test Completed Successfully!"

View File

@@ -0,0 +1,42 @@
import { describe, it, expect } from 'vitest';
import { createAnonClient } from './setup.ts';
const client = createAnonClient();
const email = `test-${Date.now()}@example.com`;
const password = 'password123';
describe('Authentication', () => {
it('should sign up a new user', async () => {
const { data, error } = await client.auth.signUp({
email,
password,
});
expect(error).toBeNull();
expect(data.user).toBeDefined();
expect(data.user?.email).toBe(email);
expect(data.session).toBeDefined(); // Assuming auto-sign-in on signup
});
it('should sign in an existing user', async () => {
const { data, error } = await client.auth.signInWithPassword({
email,
password,
});
expect(error).toBeNull();
expect(data.session).toBeDefined();
expect(data.user).toBeDefined();
expect(data.user?.email).toBe(email);
});
it('should fail with incorrect password', async () => {
const { data, error } = await client.auth.signInWithPassword({
email,
password: 'wrongpassword',
});
expect(error).toBeDefined();
expect(data.session).toBeNull();
});
});

View File

@@ -0,0 +1,82 @@
import { describe, it, expect } from 'vitest';
import { createAnonClient, createServiceRoleClient } from './setup.ts';
const client = createAnonClient();
const adminClient = createServiceRoleClient();
describe('Data API (PostgREST-lite)', () => {
const todoTitle = `Task ${Date.now()}`;
it('should create a todo', async () => {
const { data: rows, error } = await client
.from('todos')
.insert({ title: todoTitle, completed: false })
.select();
expect(error).toBeNull();
expect(rows).toBeDefined();
expect(rows?.length).toBe(1);
const data = rows![0];
expect(data.title).toBe(todoTitle);
expect(data.completed).toBe(false);
});
it('should list todos', async () => {
const { data, error } = await client.from('todos').select('*');
expect(error).toBeNull();
expect(data).toBeDefined();
expect(Array.isArray(data)).toBe(true);
expect(data?.length).toBeGreaterThan(0);
expect(data?.some((t) => t.title === todoTitle)).toBe(true);
});
it('should update a todo', async () => {
// First get the todo
const { data: todos } = await client
.from('todos')
.select('id')
.eq('title', todoTitle)
.limit(1);
expect(todos).toBeDefined();
if (!todos || todos.length === 0) throw new Error('Todo not found');
const id = todos[0].id;
const { error } = await client
.from('todos')
.update({ completed: true })
.eq('id', id);
expect(error).toBeNull();
// Verify update
const { data: rows } = await client.from('todos').select('*').eq('id', id);
expect(rows).toBeDefined();
expect(rows?.length).toBe(1);
const updated = rows![0];
expect(updated.completed).toBe(true);
});
it('should delete a todo', async () => {
// First get the todo
const { data: todos } = await client
.from('todos')
.select('id')
.eq('title', todoTitle)
.limit(1);
expect(todos).toBeDefined();
if (!todos || todos.length === 0) throw new Error('Todo not found');
const id = todos[0].id;
const { error } = await client.from('todos').delete().eq('id', id);
expect(error).toBeNull();
// Verify deletion
const { data } = await client.from('todos').select('*').eq('id', id);
expect(data?.length).toBe(0);
});
});

1646
tests/integration/package-lock.json generated Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,18 @@
{
"name": "integration",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "vitest run"
},
"keywords": [],
"author": "",
"license": "ISC",
"type": "module",
"dependencies": {
"@supabase/supabase-js": "^2.49.1",
"dotenv": "^16.4.7",
"vitest": "^3.0.7"
}
}

View File

@@ -0,0 +1,38 @@
import { describe, it, expect } from 'vitest';
import { createAnonClient } from './setup.ts';
const client = createAnonClient();
describe('Realtime', () => {
it('should receive insert events', async () => {
return new Promise<void>((resolve, reject) => {
const channel = client
.channel('public:todos')
.on(
'postgres_changes',
{ event: 'INSERT', schema: 'public', table: 'todos' },
(payload) => {
console.log('Received INSERT event:', payload);
expect(payload.new).toBeDefined();
expect(payload.new.title).toBe('Realtime Test');
client.removeChannel(channel).then(() => resolve());
}
)
.subscribe(async (status) => {
if (status === 'SUBSCRIBED') {
// Trigger an insert
const { error } = await client
.from('todos')
.insert({ title: 'Realtime Test', completed: false });
if (error) reject(error);
}
});
// Timeout if no event received
setTimeout(() => {
reject(new Error('Timeout waiting for Realtime event'));
}, 10000);
});
}, 10000);
});

12
tests/integration/run_tests.sh Executable file
View File

@@ -0,0 +1,12 @@
#!/bin/bash
set -e
# Setup Database
echo "Setting up test database (applying migrations)..."
# Concatenate all migrations and setup script
cat migrations/*.sql tests/integration/setup_db.sql | podman exec -i madbase_db psql -U postgres -d postgres
# Run Tests
echo "Running integration tests..."
cd tests/integration
npm test

View File

@@ -0,0 +1,31 @@
import { createClient, SupabaseClient } from '@supabase/supabase-js';
import dotenv from 'dotenv';
import path from 'path';
dotenv.config({ path: path.resolve(process.cwd(), '.env') });
const SUPABASE_URL = process.env.MADBASE_URL || 'http://localhost:8000';
const SUPABASE_ANON_KEY = process.env.MADBASE_ANON_KEY || '';
const SUPABASE_SERVICE_ROLE_KEY = process.env.MADBASE_SERVICE_ROLE_KEY || '';
if (!SUPABASE_ANON_KEY || !SUPABASE_SERVICE_ROLE_KEY) {
throw new Error('Missing MADBASE_ANON_KEY or MADBASE_SERVICE_ROLE_KEY');
}
export const createAnonClient = (): SupabaseClient => {
return createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
auth: {
persistSession: false,
autoRefreshToken: false,
},
});
};
export const createServiceRoleClient = (): SupabaseClient => {
return createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY, {
auth: {
persistSession: false,
autoRefreshToken: false,
},
});
};

View File

@@ -0,0 +1,53 @@
DROP TABLE IF EXISTS public.todos;
CREATE TABLE public.todos (
id uuid DEFAULT gen_random_uuid() PRIMARY KEY,
title text NOT NULL,
completed boolean DEFAULT false,
user_id uuid, -- For RLS testing later
created_at timestamptz DEFAULT now()
);
ALTER TABLE public.todos ENABLE ROW LEVEL SECURITY;
-- Grants for public
GRANT ALL ON public.todos TO anon, authenticated;
-- Grants for Realtime schema
GRANT USAGE ON SCHEMA madbase_realtime TO anon, authenticated;
GRANT ALL ON ALL TABLES IN SCHEMA madbase_realtime TO anon, authenticated;
GRANT ALL ON ALL SEQUENCES IN SCHEMA madbase_realtime TO anon, authenticated;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA madbase_realtime TO anon, authenticated;
-- Allow everything for anon for now to test basic CRUD
CREATE POLICY "Allow anon select" ON public.todos FOR SELECT TO anon USING (true);
CREATE POLICY "Allow anon insert" ON public.todos FOR INSERT TO anon WITH CHECK (true);
CREATE POLICY "Allow anon update" ON public.todos FOR UPDATE TO anon USING (true);
CREATE POLICY "Allow anon delete" ON public.todos FOR DELETE TO anon USING (true);
-- Allow authenticated users
CREATE POLICY "Allow auth select" ON public.todos FOR SELECT TO authenticated USING (true);
CREATE POLICY "Allow auth insert" ON public.todos FOR INSERT TO authenticated WITH CHECK (true);
CREATE POLICY "Allow auth update" ON public.todos FOR UPDATE TO authenticated USING (true);
CREATE POLICY "Allow auth delete" ON public.todos FOR DELETE TO authenticated USING (true);
-- Enable Realtime
CREATE TRIGGER realtime_todos
AFTER INSERT OR UPDATE OR DELETE ON public.todos
FOR EACH ROW EXECUTE FUNCTION madbase_realtime.broadcast_changes();
-- Storage Setup
INSERT INTO storage.buckets (id, name, public) VALUES ('test-bucket', 'test-bucket', true) ON CONFLICT DO NOTHING;
-- Allow anon to upload to test-bucket
DO $$
BEGIN
IF NOT EXISTS (
SELECT FROM pg_policies WHERE tablename = 'objects' AND policyname = 'Anon can insert into test-bucket'
) THEN
CREATE POLICY "Anon can insert into test-bucket"
ON storage.objects FOR INSERT
TO anon
WITH CHECK ( bucket_id = 'test-bucket' );
END IF;
END
$$;

View File

@@ -0,0 +1,39 @@
import { describe, it, expect } from 'vitest';
import { createAnonClient, createServiceRoleClient } from './setup.ts';
const client = createAnonClient();
const admin = createServiceRoleClient();
const bucket = 'test-bucket';
describe('Storage', () => {
it('should upload a file', async () => {
// Use Buffer for Node environment reliability
const file = Buffer.from('Hello, MadBase!');
// Use admin to bypass RLS/Permission issues for now to verify S3 connectivity
const { data, error } = await admin.storage
.from(bucket)
.upload('hello.txt', file, { upsert: true });
if (error) console.error('Upload error:', error);
expect(error).toBeNull();
expect(data).toBeDefined();
expect(data?.path).toBe('hello.txt');
});
it('should list files', async () => {
const { data, error } = await client.storage.from(bucket).list();
expect(error).toBeNull();
expect(data).toBeDefined();
expect(data?.some((f) => f.name === 'hello.txt')).toBe(true);
});
it('should download a file', async () => {
const { data, error } = await client.storage.from(bucket).download('hello.txt');
expect(error).toBeNull();
expect(data).toBeDefined();
const text = await data?.text();
expect(text).toBe('Hello, MadBase!');
});
});

7
trigger.sql Normal file
View File

@@ -0,0 +1,7 @@
CREATE OR REPLACE FUNCTION madbase_realtime.broadcast_changes() RETURNS trigger AS $$
BEGIN
RAISE WARNING 'Trigger Fired by %', current_user;
PERFORM pg_notify('madbase_realtime', '{"test": "trigger"}');
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

169
web/index.html Normal file
View File

@@ -0,0 +1,169 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>MadBase Admin Dashboard</title>
<style>
body { font-family: system-ui, sans-serif; max-width: 800px; margin: 0 auto; padding: 20px; }
h1, h2 { border-bottom: 1px solid #ccc; padding-bottom: 10px; }
.card { border: 1px solid #eee; padding: 15px; margin-bottom: 15px; border-radius: 4px; }
table { width: 100%; border-collapse: collapse; }
th, td { text-align: left; padding: 8px; border-bottom: 1px solid #eee; }
button { background: #ff4444; color: white; border: none; padding: 5px 10px; cursor: pointer; border-radius: 4px; }
button:hover { background: #cc0000; }
pre { background: #f5f5f5; padding: 10px; overflow: auto; }
</style>
</head>
<body>
<h1>MadBase Admin Dashboard</h1>
<div class="card">
<h2>Projects</h2>
<table id="projects-table">
<thead><tr><th>ID</th><th>Name</th><th>Status</th><th>Action</th></tr></thead>
<tbody></tbody>
</table>
<div style="margin-top: 10px;">
<input type="text" id="new-project-name" placeholder="New Project Name">
<button onclick="createProject()" style="background: #44cc44;">Create Project</button>
</div>
</div>
<div class="card">
<h2>Features</h2>
<button onclick="testDB()" style="background: #0088cc;">Test DB Connection</button>
<button onclick="fetchBuckets()" style="background: #ffaa00;">List Storage Buckets</button>
<div id="feature-output" style="margin-top: 10px; padding: 10px; background: #eee; min-height: 50px;"></div>
</div>
<div class="card">
<h2>Users (Global)</h2>
<table id="users-table">
<thead><tr><th>ID</th><th>Email</th><th>Created At</th><th>Action</th></tr></thead>
<tbody></tbody>
</table>
</div>
<div class="card">
<h2>System Metrics</h2>
<pre id="metrics-output">Loading...</pre>
</div>
<script>
const API_BASE = '/platform/v1';
async function testDB() {
// Check health
try {
const res = await fetch('/');
const text = await res.text();
document.getElementById('feature-output').innerHTML = `Gateway Status: ${text}`;
} catch (e) {
document.getElementById('feature-output').innerHTML = `<span style="color:red">Connection Failed</span>`;
}
}
async function fetchBuckets() {
// Needs Auth... but this is Admin Dashboard.
// Admin API doesn't expose Storage listing directly yet.
// We can add a proxy or just check health for now.
document.getElementById('feature-output').innerHTML = "Storage Browser: Requires authenticated user context (Not implemented in Admin UI yet)";
}
async function rotateKey(id) {
if (!confirm('Rotate keys for this project? Old keys will stop working.')) return;
try {
await fetch(`${API_BASE}/projects/${id}/keys`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({})
});
fetchProjects();
alert('Keys rotated!');
} catch (e) { alert('Error rotating keys'); }
}
async function fetchProjects() {
try {
const res = await fetch(`${API_BASE}/projects`);
const projects = await res.json();
const tbody = document.querySelector('#projects-table tbody');
tbody.innerHTML = projects.map(p => `
<tr>
<td>${p.id}</td>
<td>${p.name}</td>
<td>${p.status}</td>
<td>
<button onclick="deleteProject('${p.id}')">Delete</button>
<button onclick="rotateKey('${p.id}')" style="background:orange;">Rotate Key</button>
</td>
</tr>
`).join('');
} catch (e) { console.error(e); }
}
async function createProject() {
const name = document.getElementById('new-project-name').value;
if (!name) return;
try {
await fetch(`${API_BASE}/projects`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ name, owner_id: null })
});
document.getElementById('new-project-name').value = '';
fetchProjects();
} catch (e) { alert('Error creating project'); }
}
async function deleteProject(id) {
if (!confirm('Are you sure?')) return;
try {
await fetch(`${API_BASE}/projects/${id}`, { method: 'DELETE' });
fetchProjects();
} catch (e) { alert('Error deleting project'); }
}
async function fetchUsers() {
try {
const res = await fetch(`${API_BASE}/users`);
const users = await res.json();
const tbody = document.querySelector('#users-table tbody');
tbody.innerHTML = users.map(u => `
<tr>
<td>${u.id}</td>
<td>${u.email}</td>
<td>${new Date(u.created_at).toLocaleString()}</td>
<td><button onclick="deleteUser('${u.id}')">Delete</button></td>
</tr>
`).join('');
} catch (e) { console.error(e); }
}
async function deleteUser(id) {
if (!confirm('Are you sure?')) return;
try {
await fetch(`${API_BASE}/users/${id}`, { method: 'DELETE' });
fetchUsers();
} catch (e) { alert('Error deleting user'); }
}
async function fetchMetrics() {
try {
const res = await fetch('/metrics');
const text = await res.text();
// Show only madbase metrics or summary
document.getElementById('metrics-output').textContent = text.split('\n').filter(l => !l.startsWith('#') && l.trim()).slice(0, 10).join('\n') + '\n...';
} catch (e) {
document.getElementById('metrics-output').textContent = 'Error loading metrics';
}
}
fetchProjects();
fetchUsers();
fetchMetrics();
setInterval(fetchMetrics, 5000);
</script>
</body>
</html>