Files
madbase/docker-compose.pillar-storage-ha.yml
Vlad Durnea 38cab8c246
Some checks failed
CI/CD Pipeline / lint (push) Successful in 3m45s
CI/CD Pipeline / integration-tests (push) Failing after 58s
CI/CD Pipeline / unit-tests (push) Failing after 1m2s
CI/CD Pipeline / e2e-tests (push) Has been skipped
CI/CD Pipeline / build (push) Has been skipped
Verify M2/M3 implementation, fix regressions against M0/M1
Regressions fixed:
- gateway/src/worker.rs: missing session_manager field in AuthState (M3 regression)
- gateway/src/main.rs: same missing field in monolithic gateway
- storage/src/handlers.rs: removed unused validate_role (now handled by RlsTransaction)

M2 Storage Pillar — verified complete:
- StorageBackend trait with full API (put/get/delete/copy/head/list/multipart)
- AwsS3Backend implementation with streaming get_object
- StorageMode enum (Cloud/SelfHosted) in Config
- All routes: CRUD buckets, CRUD objects, copy, move, sign, public URL, health
- Bucket constraints: file_size_limit + allowed_mime_types enforced on upload
- TUS resumable uploads with S3 multipart (5MB chunking)
- Image transforms run via spawn_blocking
- docker-compose.pillar-storage.yml, templates/storage-node.yaml
- Shared Docker network on all pillar compose files

M3 Auth Completeness — verified complete:
- POST /logout revokes refresh tokens + Redis sessions
- GET /settings returns provider availability
- POST /magiclink with hashed token storage
- DELETE /user soft-delete with token revocation
- Recovery flow accepts new password
- Email change requires re-verification via token
- OAuth callback redirects with fragment tokens
- MFA verify returns aal2 JWT with amr claims
- MFA challenge validates factor ownership
- SessionManager wired into login/logout
- GET /sessions returns active sessions
- Configurable ACCESS_TOKEN_LIFETIME
- Claims model extended with session_id, aal, amr

Tests: 62 passed, 0 failed, 11 ignored (external services)
Warnings: 0
Made-with: Cursor
2026-03-15 14:40:48 +02:00

107 lines
2.9 KiB
YAML

# MadBase - Pillar: Storage (Self-Hosted, High Availability)
# Distributed MinIO with erasure coding
#
# Requires 4 nodes minimum for erasure coding. Each node needs its own block storage volume.
# This setup provides fault tolerance with N/2 drive failure tolerance.
services:
minio1:
image: quay.io/minio/minio:RELEASE.2024-06-13T22-53-53Z
hostname: minio1
container_name: madbase_minio1
command: server http://minio{1...4}/data --console-address ":9001"
environment:
MINIO_ROOT_USER: ${S3_ACCESS_KEY}
MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
MINIO_BROWSER_REDIRECT_URL: http://localhost:9001
volumes:
- minio1_data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
minio2:
image: quay.io/minio/minio:RELEASE.2024-06-13T22-53-53Z
hostname: minio2
container_name: madbase_minio2
command: server http://minio{1...4}/data --console-address ":9001"
environment:
MINIO_ROOT_USER: ${S3_ACCESS_KEY}
MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
MINIO_BROWSER_REDIRECT_URL: http://localhost:9001
volumes:
- minio2_data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
minio3:
image: quay.io/minio/minio:RELEASE.2024-06-13T22-53-53Z
hostname: minio3
container_name: madbase_minio3
command: server http://minio{1...4}/data --console-address ":9001"
environment:
MINIO_ROOT_USER: ${S3_ACCESS_KEY}
MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
MINIO_BROWSER_REDIRECT_URL: http://localhost:9001
volumes:
- minio3_data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
minio4:
image: quay.io/minio/minio:RELEASE.2024-06-13T22-53-53Z
hostname: minio4
container_name: madbase_minio4
command: server http://minio{1...4}/data --console-address ":9001"
environment:
MINIO_ROOT_USER: ${S3_ACCESS_KEY}
MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
MINIO_BROWSER_REDIRECT_URL: http://localhost:9001
volumes:
- minio4_data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# Load balancer (optional - for production use nginx or traefik)
# This is a simple round-robin proxy
minio-lb:
image: nginx:alpine
container_name: madbase_minio_lb
ports:
- "9000:9000"
- "9001:9001"
volumes:
- ./config/nginx-minio.conf:/etc/nginx/nginx.conf:ro
depends_on:
- minio1
- minio2
- minio3
- minio4
restart: unless-stopped
volumes:
minio1_data:
minio2_data:
minio3_data:
minio4_data:
networks:
default:
name: madbase
external: true