added more support for supabase-js

This commit is contained in:
2026-03-12 10:18:52 +02:00
parent c0792f2e1d
commit 6708cf28a7
62 changed files with 6563 additions and 526 deletions

View File

@@ -0,0 +1,26 @@
I will implement extensive integration tests for Storage and Authentication, and add a placeholder for Edge Functions (as they are not yet implemented in the core).
### 1. Database Setup Update
I will update `tests/integration/setup_db.sql` to create specific buckets for testing:
- `public-bucket`: Publicly accessible.
- `private-bucket`: Private (requires authentication).
### 2. Authentication Tests (`auth.test.ts`)
I will expand the authentication tests to cover:
- **Session Persistence**: Verify `getUser()` returns the logged-in user.
- **Token Refresh**: Verify `refreshSession()` issues a new token.
- **Error Handling**: Verify robust handling of invalid credentials.
### 3. Storage Tests (`storage.test.ts`)
I will expand storage tests to verify Row Level Security (RLS) and permissions:
- **Public Access**: Verify `anon` users can download from `public-bucket`.
- **Private Access**: Verify `anon` users **cannot** download from `private-bucket`.
- **Upload Permissions**: Verify `authenticated` users can upload to `private-bucket`.
- **Deletion**: Verify `authenticated` users can delete their own files (if delete is implemented, otherwise note it).
### 4. Edge Functions Tests (`functions.test.ts`)
I will create a new test file `tests/integration/functions.test.ts` that:
- Marks the test suite as `skipped`.
- Explains that Edge Functions are specified but not yet implemented in the current codebase.
This ensures we have a comprehensive regression suite for the implemented features while tracking future requirements.

View File

@@ -0,0 +1,30 @@
I will update `ROADMAP.md` to reflect the current state of the project by removing implemented features and focusing on the remaining tasks.
**Summary of Changes:**
* **Remove Implemented Items**:
* **Auth**: Signup, Login, Token Management, Session, OAuth (Google/GitHub).
* **Data API**: CRUD, Filters, Nested Selects, RPC, RLS.
* **Realtime**: WebSocket Server, Basic Replication (LISTEN/NOTIFY), Subscriptions.
* **Storage**: S3 Proxy, RLS Policies, Public/Private Buckets.
* **Control Plane**: Project Management, Secrets, Admin API.
* **Deployment**: Docker Compose.
* **Add Missing Features (from `SPECIFICATIONS.md`):**
* **Auth**: Password Reset & Email Confirmation.
* **Realtime**: Advanced Replication (`pgoutput`) & Resume from LSN.
* **Storage**: Signed URLs.
* **Edge Functions**: Complete implementation (Runtime, API, Deployment).
* **Admin UI**: Advanced features (Storage

View File

@@ -0,0 +1,28 @@
I will implement the missing Authentication features (Phase 1 of the Roadmap): Password Reset and Email Confirmation.
### 1. Update `auth` Crate
I will modify `auth/src/handlers.rs` and `auth/src/lib.rs` to add the following endpoints:
- `POST /recover`: Initiates the password reset flow.
- Generates a recovery token.
- (Mock) Sends a recovery email.
- `POST /verify`: Verifies email confirmation or recovery tokens.
- Supports `type` parameter (`signup`, `recovery`, `magiclink`, `invite`).
- Updates `email_confirmed_at` or returns a session for recovery.
- `PUT /user`: Allows authenticated users to update their password and metadata.
### 2. Update `auth` Models
I will update `auth/src/models.rs` to include:
- `RecoverRequest` struct.
- `VerifyRequest` struct.
- `UserUpdateRequest` struct.
### 3. Update `auth` Utilities
I will update `auth/src/utils.rs` to include helper functions for:
- Generating random tokens (for recovery/confirmation).
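The token helpers live in Rust (`auth/src/utils.rs`), but the flow can be sketched in TypeScript, the language the integration tests already use. This is illustrative only: the `RecoveryToken` shape and the one-hour TTL are assumptions, not the service's actual schema or policy.

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical shape of a stored recovery token; the real columns live in auth.users.
interface RecoveryToken {
  token: string;
  expiresAt: number; // unix ms
}

// Generate a URL-safe random token with a TTL (assumed: 1 hour).
function issueRecoveryToken(ttlMs = 60 * 60 * 1000): RecoveryToken {
  return {
    token: randomBytes(32).toString("base64url"),
    expiresAt: Date.now() + ttlMs,
  };
}

// Check performed by POST /verify: token must match and be unexpired.
function verifyRecoveryToken(
  stored: RecoveryToken,
  presented: string,
  now = Date.now(),
): boolean {
  return stored.token === presented && now < stored.expiresAt;
}
```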
### 4. Integration Tests
I will add new tests to `tests/integration/auth.test.ts` to verify:
- Password recovery flow (request -> verify -> update password).
- Email confirmation flow (signup -> verify).
This completes Phase 1 of the roadmap. Future phases (Realtime, Storage, Edge Functions) will be addressed in subsequent steps.

View File

@@ -0,0 +1,26 @@
I will implement **Phase 2: Realtime & Storage Enhancements**, specifically focusing on **Signed URLs** for Storage and **Resume Support** for Realtime.
### 1. Storage: Signed URLs
I will implement a mechanism to generate and verify time-limited access tokens for private objects.
- **`storage/src/handlers.rs`**:
- Add `sign_object` handler (`POST /object/sign/:bucket/:wildcard`):
- Checks user permissions via RLS.
- Generates a JWT containing `bucket`, `key`, and expiration.
- Returns a signed URL.
- Add `get_signed_object` handler (`GET /object/sign/:bucket/:wildcard`):
- Validates the token from the query string.
- Bypasses RLS (since the token itself proves a prior permission check) and streams the file from S3.
- **`storage/src/lib.rs`**: Register the new endpoints.
- **Tests**: Add integration tests to verify signed URL generation and access.
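The sign/verify round trip above can be sketched in TypeScript with a minimal HMAC-signed token. This is a simplified stand-in: the Rust handlers would use the service's real JWT library and signing secret, and the `SECRET` constant and URL shape here are assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "service-signing-secret"; // assumption: the service's signing key

// POST /object/sign/:bucket/:wildcard -- token is base64url(payload).base64url(hmac).
function signObjectUrl(bucket: string, key: string, expiresInSec: number): string {
  const payload = Buffer.from(
    JSON.stringify({ bucket, key, exp: Math.floor(Date.now() / 1000) + expiresInSec }),
  ).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `/object/sign/${bucket}/${key}?token=${payload}.${sig}`;
}

// GET handler side: validate signature and expiry before streaming from S3.
function verifySignedToken(token: string): { bucket: string; key: string } | null {
  const [payload, sig] = token.split(".");
  if (!payload || !sig) return null;
  const expected = createHmac("sha256", SECRET).update(payload).digest();
  const given = Buffer.from(sig, "base64url");
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp < Math.floor(Date.now() / 1000)) return null; // expired
  return { bucket: claims.bucket, key: claims.key };
}
```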
### 2. Realtime: Resume Support
I will leverage the existing `madbase_realtime.messages` table to allow clients to catch up on missed events.
- **`realtime/src/ws.rs`**:
- Update the `phx_join` handler to accept a `last_event_id` in the payload.
- If provided, query `madbase_realtime.messages` for events on that topic with IDs greater than `last_event_id`.
- Send these missed events to the client immediately upon joining.
- **Tests**: Add integration tests to verify that a client reconnecting with a `last_event_id` receives missed messages.
*Note: Full replacement of `LISTEN/NOTIFY` with `pgoutput` is deferred to a later step within Phase 2 due to its complexity, but Resume Support is a critical prerequisite for reliability.*
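In the service the catch-up step is a SQL query (`WHERE topic = $1 AND id > $2 ORDER BY id`); a TypeScript sketch of the intended semantics, with an assumed row shape:

```typescript
// Assumed shape of a persisted madbase_realtime.messages row.
interface StoredMessage {
  id: number;
  topic: string;
  payload: unknown;
}

// Events the reconnecting client missed, replayed in commit order.
function missedEvents(
  messages: StoredMessage[],
  topic: string,
  lastEventId: number,
): StoredMessage[] {
  return messages
    .filter((m) => m.topic === topic && m.id > lastEventId)
    .sort((a, b) => a.id - b.id);
}
```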

View File

@@ -0,0 +1,33 @@
I will implement **Phase 4.1: Management UI** by upgrading the current single-file dashboard into a feature-rich Admin UI using Vue 3 (via CDN) and adding necessary backend proxy support.
### 1. Gateway Enhancements
- **Add Dependencies**: Add `reqwest` to `gateway/Cargo.toml` to enable proxying requests to internal services (Loki).
- **Loki Proxy**: Implement a `/platform/v1/logs` endpoint in `gateway/src/main.rs` that forwards LogQL queries to the internal Loki instance (`http://loki:3100`). This allows the frontend to securely query logs without exposing Loki directly.
- **Storage & Realtime Access**: Ensure existing APIs (`/storage/v1`, `/realtime/v1`) are accessible from the dashboard (using the service role key for admin actions).
### 2. Admin UI Implementation (`web/index.html`)
Refactor the existing HTML file into a **Vue 3 Single Page Application** with the following features:
- **Tabbed Interface**: Clean navigation between Dashboard, Storage, Realtime, and Logs.
- **Dashboard Tab**:
- List and manage Projects and Users (improving the existing functionality).
- **Storage Browser Tab**:
- List all S3 buckets.
- Browse objects within buckets.
- Upload files directly via the UI.
- Preview/Download links for objects.
- **Realtime Inspector Tab**:
- WebSocket client to connect to `ws://localhost:8000/realtime/v1/websocket`.
- UI to subscribe to specific channels (e.g., `room:lobby`).
- Live log of sent/received messages.
- **Logs Viewer Tab**:
- Input field for LogQL queries (e.g., `{app="gateway"}`).
- Time range selector.
- Display formatted log results fetched via the new proxy endpoint.
### 3. Verification
- Rebuild and run the Gateway.
- Verify the Admin UI at `http://localhost:8000/dashboard`.
- Test each tab:
- **Storage**: Upload a test file and verify it appears in the list.
- **Realtime**: Connect and send a test message.
- **Logs**: Query logs and verify output from Loki.

View File

@@ -0,0 +1,34 @@
# Implement Missing Roadmap Features (Phase 2)
I will implement the key missing features from **Phase 2** of the roadmap to improve compatibility with the Supabase client SDK.
## 1. Realtime Presence (`realtime` crate)
**Goal**: Enable user state tracking (online/offline, custom status) compatible with `supabase-js`.
- **Dependencies**: Add `dashmap` for thread-safe concurrent state management.
- **State Management**: Update `RealtimeState` to store presence data in memory: `Arc<DashMap<Topic, DashMap<ClientID, PresenceData>>>`.
- **WebSocket Logic**:
- Handle `presence` events (join, leave, sync).
- Implement `track` (user joins/updates state) and `untrack` (user leaves).
- Broadcast `presence_diff` events to all subscribers on a topic when state changes.
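The diff computation can be sketched in TypeScript. Note this is a simplified shape: the real Phoenix presence protocol nests each entry under a `metas` array, which the Rust implementation would need to match for `supabase-js` compatibility.

```typescript
// Simplified presence map: clientID -> tracked payload.
type PresenceState = Record<string, unknown>;

// Compare the topic's presence map before and after a change and emit the
// joins/leaves that go into a presence_diff broadcast.
function presenceDiff(before: PresenceState, after: PresenceState) {
  const joins: PresenceState = {};
  const leaves: PresenceState = {};
  for (const id of Object.keys(after)) if (!(id in before)) joins[id] = after[id];
  for (const id of Object.keys(before)) if (!(id in after)) leaves[id] = before[id];
  return { joins, leaves };
}
```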
## 2. Storage Image Transformations (`storage` crate)
**Goal**: Support on-the-fly image resizing and formatting via query parameters.
- **Dependencies**: Add `image` crate (with `jpeg`, `png`, `webp` support).
- **Handler Update**: Modify `download_object` to parse query parameters:
- `w` / `width`: Target width.
- `h` / `height`: Target height.
- `q` / `quality`: Compression quality.
- `f` / `format`: Output format (e.g., `webp`, `png`).
- **Processing Logic**:
- If parameters are present, decode the downloaded image bytes.
- Apply resizing (using `Lanczos3` filter for quality).
- Encode to the target format/quality.
- Return the processed image with correct `Content-Type`.
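The parameter handling can be sketched as follows; the clamping bounds and the 80 default quality are assumptions, not documented defaults.

```typescript
interface TransformOptions {
  width?: number;
  height?: number;
  quality: number;
  format?: string;
}

// Parse w/width, h/height, q/quality, f/format from the query string.
// Returns null when no transform is requested (plain passthrough download).
function parseTransform(params: URLSearchParams): TransformOptions | null {
  const num = (...keys: string[]): number | undefined => {
    for (const k of keys) {
      const v = params.get(k);
      if (v !== null) return Number(v);
    }
    return undefined;
  };
  const width = num("w", "width");
  const height = num("h", "height");
  const format = params.get("f") ?? params.get("format") ?? undefined;
  if (width === undefined && height === undefined && format === undefined) return null;
  // Clamp quality into 1..100 (assumed bounds); default to 80.
  const quality = Math.min(100, Math.max(1, num("q", "quality") ?? 80));
  return { width, height, quality, format };
}
```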
## Execution Steps
1. **Update Dependencies**: Add `dashmap` to `realtime/Cargo.toml` and `image` to `storage/Cargo.toml`.
2. **Refactor Realtime**: Modify `RealtimeState` and `ws.rs` to implement the Presence protocol.
3. **Refactor Storage**: Modify `handlers.rs` to implement the Image Transformation pipeline.
4. **Verification**: Verify compilation and basic functionality (via `cargo check` and manual review of the logic).

View File

@@ -0,0 +1,34 @@
# Implement Missing Phase 2 Features
I will implement the remaining features for Phase 2: **Advanced Replication** (Realtime) and **Resumable Uploads** (Storage).
## 1. Advanced Realtime Replication (`pgoutput`)
**Goal**: Replace the `LISTEN/NOTIFY` fallback with robust logical replication using the `pgoutput` protocol.
- **Dependencies**: Add `pgoutput` crate and enable `replication` feature for `tokio-postgres`.
- **Implementation**:
- Update `realtime/src/replication.rs` to connect to Postgres in **replication mode**.
- Create a replication slot (`madbase_slot`) and start streaming from publication (`madbase_pub`).
- Use `pgoutput::Decoder` to parse binary replication messages (`Relation`, `Insert`, `Update`, `Delete`).
- Maintain an in-memory cache of `Relation` metadata (schema, table, columns) to map relation IDs to names.
- Construct `PostgresPayload` from change events and broadcast to WebSocket clients.
## 2. Resumable Uploads (TUS Protocol)
**Goal**: Implement the TUS protocol for reliable large file uploads in the Storage service.
- **Dependencies**: Add `base64` to `storage/Cargo.toml`.
- **New Module**: Create `storage/src/tus.rs`.
- **Endpoints**:
- `POST /storage/v1/upload/resumable`: Initialize upload. Creates a local tracking file.
- `PATCH /storage/v1/upload/resumable/:id`: Append data chunk to the local file.
- `HEAD /storage/v1/upload/resumable/:id`: Return current upload offset.
- **Completion Logic**:
- When `offset == size`, stream the complete file to S3.
- Insert metadata into `storage.objects`.
- Clean up local temporary files.
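The TUS bookkeeping can be sketched with an in-memory store (the real handlers append to local tracking files, then stream to S3 on completion):

```typescript
// In-memory sketch of TUS state: POST creates an upload with a declared size,
// PATCH appends at the current offset, HEAD reports the offset.
class TusStore {
  private uploads = new Map<string, { size: number; data: Buffer }>();

  create(id: string, size: number): void {
    this.uploads.set(id, { size, data: Buffer.alloc(0) });
  }

  // Returns the new offset; rejects out-of-order chunks as TUS requires (409).
  patch(id: string, offset: number, chunk: Buffer): number {
    const u = this.uploads.get(id);
    if (!u) throw new Error("unknown upload");
    if (offset !== u.data.length) throw new Error("409: offset mismatch");
    u.data = Buffer.concat([u.data, chunk]);
    return u.data.length;
  }

  head(id: string): number {
    return this.uploads.get(id)?.data.length ?? 0;
  }

  // Completion check: when offset == size the file is ready to stream to S3.
  isComplete(id: string): boolean {
    const u = this.uploads.get(id);
    return !!u && u.data.length === u.size;
  }
}
```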
## Execution Steps
1. **Update Dependencies**: Modify `realtime/Cargo.toml` and `storage/Cargo.toml`.
2. **Implement Realtime Replication**: Rewrite `realtime/src/replication.rs` with `pgoutput` logic.
3. **Implement TUS Handlers**: Create `storage/src/tus.rs` and register routes in `storage/src/lib.rs`.
4. **Verify**: Ensure compilation and check for basic logic correctness.

View File

@@ -0,0 +1,28 @@
# Implement Missing Features (Phase 4 & 5)
I will implement **Advanced Metrics** for Edge Functions and **pgvector Support** for the Data API.
## 1. pgvector Support (`data_api`)
**Goal**: Ensure `vector` columns are returned as native JSON arrays instead of strings in the Data API.
- **Modification**: Update `rows_to_json` in `data_api/src/handlers.rs`.
- **Logic**:
- Check if column type is "VECTOR".
- If yes, parse the string representation (e.g., `"[1.0,2.0,3.0]"`) into a `serde_json::Value::Array`.
- This provides seamless integration for clients using embeddings.
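The conversion itself is small; a TypeScript sketch of the logic the Rust `rows_to_json` change would implement:

```typescript
// pgvector's text form "[1.0,2.0,3.0]" becomes a JSON number array;
// anything unparsable falls back to the raw string unchanged.
function vectorToJson(raw: string): number[] | string {
  const m = raw.trim().match(/^\[(.*)\]$/);
  if (!m) return raw;
  const parts =
    m[1].length === 0 ? [] : m[1].split(",").map((s) => Number(s.trim()));
  return parts.some(Number.isNaN) ? raw : parts;
}
```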
## 2. Advanced Metrics (`functions`)
**Goal**: Capture and log detailed execution metrics for Edge Functions.
- **Modification**: Update `functions/src/runtime.rs` and `functions/src/handlers.rs`.
- **Metrics**:
- `execution_time_ms`: Duration of the WASM execution.
- `memory_usage_bytes`: Approximate memory usage (if obtainable) or payload size.
- **Implementation**:
- Use `tracing` with structured fields (e.g., `tracing::info!(target: "function_metrics", duration_ms = 123, ...)`).
- This allows the existing **Logs Viewer** (Loki-based) to aggregate and visualize these metrics in the future.
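Reduced to its essentials, the instrumentation is a timing wrapper that emits a structured log line. In the Rust service this is a `tracing` span with structured fields; the JSON field names below mirror the plan but are otherwise assumptions.

```typescript
// Wrap an execution, measure wall-clock duration, and emit a structured log
// line the Loki-based Logs Viewer could aggregate on.
function withMetrics<T>(name: string, fn: () => T): { result: T; executionTimeMs: number } {
  const start = performance.now();
  const result = fn();
  const executionTimeMs = performance.now() - start;
  console.log(
    JSON.stringify({ target: "function_metrics", name, execution_time_ms: executionTimeMs }),
  );
  return { result, executionTimeMs };
}
```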
## Execution Steps
1. **Refactor Data API**: Modify `rows_to_json` to handle `VECTOR` type.
2. **Instrument Functions**: Add timing and logging to `WasmRuntime::execute`.
3. **Verify**: Ensure compilation and correct logic.

View File

@@ -0,0 +1,25 @@
# Implement MFA (TOTP) Support
I will implement **Time-based One-Time Password (TOTP)** multi-factor authentication, moving further into **Phase 5** of the roadmap.
## 1. Schema Changes
- **New Table**: `auth.mfa_factors` to store MFA secrets and status.
- Columns: `id`, `user_id`, `factor_type` (e.g., 'totp'), `secret`, `status` ('unverified', 'verified'), `created_at`, `updated_at`.
- **Migration**: Create a new SQL migration file for this table.
## 2. Dependencies
- **Crate**: Add `totp-rs` to `auth/Cargo.toml` with the `qr` feature enabled for generating QR codes.
## 3. Implementation (`auth` service)
- **New Module**: `auth/src/mfa.rs`.
- **Endpoints**:
- `POST /auth/v1/mfa/enroll`: Generates a new TOTP secret and returns it (plus QR code). Creates an `unverified` factor.
- `POST /auth/v1/mfa/verify`: Accepts a code and the factor ID. Verifies the code. If correct, marks factor as `verified`.
- `POST /auth/v1/mfa/challenge`: (Optional for MVP) Verifies a code for a verified factor to grant access.
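The verification step reduces to standard TOTP (RFC 6238: HMAC-SHA-1, 30-second steps, 6 digits). The service itself would delegate this to `totp-rs`; the TypeScript sketch below just shows what is being checked.

```typescript
import { createHmac } from "node:crypto";

// Compute the TOTP code for a shared secret at a given unix time.
function totp(secret: Buffer, unixSeconds: number, step = 30, digits = 6): string {
  // 8-byte big-endian counter = floor(time / step).
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(unixSeconds / step)));
  const mac = createHmac("sha1", secret).update(counter).digest();
  // Dynamic truncation per RFC 4226.
  const offset = mac[mac.length - 1] & 0x0f;
  return ((mac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits)
    .toString()
    .padStart(digits, "0");
}
```

The `/mfa/verify` handler would compare the submitted code against `totp(secret, now)`, typically also accepting the adjacent time step to tolerate clock skew.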
## Execution Steps
1. **Add Dependency**: Update `auth/Cargo.toml`.
2. **Create Migration**: Add the SQL file in `migrations/`.
3. **Implement Logic**: Create `auth/src/mfa.rs` with enrollment and verification logic.
4. **Register Routes**: Update `auth/src/lib.rs` to include the new MFA endpoints.
5. **Update Roadmap**: Mark MFA as completed.

View File

@@ -0,0 +1,33 @@
# Implement Phase 5.1: Advanced Authentication
I will implement **Extended OAuth Providers** and **Enterprise SSO (OIDC)**.
## 1. Extended OAuth Providers
**Goal**: Add support for Azure (Microsoft), GitLab, Bitbucket, and Discord.
- **Config**: Update `common/src/config.rs` to read new env vars:
- `AZURE_CLIENT_ID` / `_SECRET`
- `GITLAB_CLIENT_ID` / `_SECRET`
- `BITBUCKET_CLIENT_ID` / `_SECRET`
- `DISCORD_CLIENT_ID` / `_SECRET`
- **Implementation**: Update `auth/src/oauth.rs`:
- Extend `get_client` with new provider URLs.
- Extend `fetch_user_profile` with new user info endpoints and parsing logic.
## 2. Enterprise SSO (OIDC)
**Goal**: Implement OIDC support for enterprise identity providers (e.g., Okta, Auth0, Google Workspace).
- **Dependencies**: Add `openidconnect` to `auth/Cargo.toml`.
- **Schema**: Create `auth.sso_providers` table to store OIDC config per domain/project.
- Columns: `id`, `resource_id`, `domain`, `oidc_issuer_url`, `oidc_client_id`, `oidc_client_secret`, `created_at`, `updated_at`.
- **Implementation**: Create `auth/src/sso.rs`.
- `POST /auth/v1/sso`: Accepts `domain` or `provider_id`. Discovers OIDC config, generates authorization URL.
- `GET /auth/v1/sso/callback`: Handles the code exchange, fetches user info, creates/links user.
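After OIDC discovery, `POST /auth/v1/sso` mostly assembles an authorization URL; a sketch of that step, where the endpoint, client, and scope values are illustrative assumptions:

```typescript
// Build the OIDC authorization-code-flow URL from a discovered config.
function buildAuthorizationUrl(
  authorizationEndpoint: string,
  clientId: string,
  redirectUri: string,
  state: string,
): string {
  const url = new URL(authorizationEndpoint);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("redirect_uri", redirectUri);
  url.searchParams.set("scope", "openid email profile"); // assumed scope set
  url.searchParams.set("state", state); // CSRF protection, echoed at the callback
  return url.toString();
}
```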
## Execution Steps
1. **Update Config**: Modify `common/src/config.rs`.
2. **Add Dependencies**: Update `auth/Cargo.toml`.
3. **Schema Migration**: Create `migrations/20260312000001_add_sso.sql`.
4. **Implement OAuth**: Update `auth/src/oauth.rs`.
5. **Implement SSO**: Create `auth/src/sso.rs`.
6. **Register Routes**: Update `auth/src/lib.rs`.

View File

@@ -0,0 +1,15 @@
# Bump Postgres Version to 17
I will upgrade the Postgres version from `15-alpine` to `17-alpine` in `docker-compose.yml`.
## Changes
- **File**: `docker-compose.yml`
- **Action**: Update `image: postgres:15-alpine` to `image: postgres:17-alpine` for both the `db` (Tenant DB) and `control_db` (Control Plane DB) services.
## Important Note on Data
- **Volume Incompatibility**: Postgres data directories are version-specific. Upgrading the major version (15 -> 17) means the existing data in `madbase_db_data` and `madbase_control_db_data` volumes will **not be compatible**.
- **Resolution**: You will likely need to delete the existing volumes (`docker-compose down -v`) and let them be recreated, or perform a manual dump/restore if you have critical data.
## Execution Steps
1. **Update `docker-compose.yml`**: Change the image tag.
2. **Verify**: I will check the file content after the update.

View File

@@ -0,0 +1,21 @@
# Verify Postgres Bump & System Health
I will verify the system works after the Postgres version bump by resetting the environment and running the integration test suite.
## Execution Steps
1. **Reset Docker Environment**:
- `docker-compose down -v` to remove existing containers and incompatible volumes.
- `docker-compose up -d` to start fresh with Postgres 17.
- Wait for services to initialize.
2. **Initialize Database**:
- Apply all migrations from `migrations/*.sql` to `madbase_db`.
- Apply test schema from `tests/integration/setup_db.sql` to `madbase_db`.
- This will be done by piping the SQL files into `docker exec -i madbase_db psql -U postgres -d postgres`.
3. **Run Integration Tests**:
- Execute `npm test` inside `tests/integration`.
- This will run the Vitest suite covering Auth, Storage, Realtime, Functions, and Data API.
## Expected Outcome
- All tests should pass if the Postgres 17 upgrade is compatible and the system is healthy.

View File

@@ -0,0 +1,48 @@
I will implement the Deno runtime compatibility plan as described in `WASI_DENO.md`.
**Plan:**
1. **Database Migration**:
* Create a new migration `20260312000003_add_function_runtime.sql`.
* Add `runtime` column to `functions.functions` table (TEXT, DEFAULT 'wasm').
* Update existing functions to have `runtime = 'wasm'`.
2. **Update Dependencies**:
* Add `deno_core` to `functions/Cargo.toml`.
* *Note*: I will start with `deno_core` to avoid potential build OOM issues with the full `deno_runtime`. I will implement a minimal JS runtime capable of executing scripts and returning results.
3. **Update Data Models**:
* Modify `Function` struct in `functions/src/models.rs` to include the `runtime` field.
* Update `DeployRequest` struct in `functions/src/models.rs` to accept an optional `runtime` field.
4. **Implement Deno Runtime**:
* Create `functions/src/deno_runtime.rs`.
* Implement `DenoRuntime` struct using `deno_core::JsRuntime`.
* Implement `execute` method that initializes the runtime, executes the provided code, and captures output.
5. **Update Handlers**:
* Modify `deploy_function` in `functions/src/handlers.rs` to handle the `runtime` field.
* Modify `invoke_function` in `functions/src/handlers.rs` to switch between `WasmRuntime` and `DenoRuntime` based on the function's `runtime` column.
6. **Integration Testing**:
* Update `tests/integration/functions.test.ts` to include a test case for deploying and invoking a JavaScript/TypeScript function.
7. **Verification**:
* Run `cargo build` to ensure dependencies compile.
* Run `npm test functions.test.ts` to verify functionality.