madbase/storage/src/lib.rs
Vlad Durnea 38cab8c246
Verify M2/M3 implementation, fix regressions against M0/M1
Regressions fixed:
- gateway/src/worker.rs: missing session_manager field in AuthState (M3 regression)
- gateway/src/main.rs: same missing field in monolithic gateway
- storage/src/handlers.rs: removed unused validate_role (now handled by RlsTransaction)

M2 Storage Pillar — verified complete:
- StorageBackend trait with full API (put/get/delete/copy/head/list/multipart)
- AwsS3Backend implementation with streaming get_object
- StorageMode enum (Cloud/SelfHosted) in Config
- All routes: CRUD buckets, CRUD objects, copy, move, sign, public URL, health
- Bucket constraints: file_size_limit + allowed_mime_types enforced on upload
- TUS resumable uploads with S3 multipart (5MB chunking)
- Image transforms run via spawn_blocking
- docker-compose.pillar-storage.yml, templates/storage-node.yaml
- Shared Docker network on all pillar compose files
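
The `StorageBackend` API listed above can be sketched in miniature. This is a simplified, synchronous, in-memory illustration only: the real trait in `backend.rs` is async, S3-backed, and also covers head/list/multipart, and the `InMemoryBackend` type here is hypothetical.

```rust
use std::collections::HashMap;

// Hypothetical, simplified sketch of the StorageBackend surface;
// the real trait is async and implemented by AwsS3Backend.
trait StorageBackend {
    fn put(&mut self, key: &str, data: Vec<u8>) -> Result<(), String>;
    fn get(&self, key: &str) -> Option<&[u8]>;
    fn delete(&mut self, key: &str) -> bool;
    fn copy(&mut self, from: &str, to: &str) -> Result<(), String>;
}

// In-memory mock, used here only to exercise the trait shape.
struct InMemoryBackend {
    objects: HashMap<String, Vec<u8>>,
}

impl StorageBackend for InMemoryBackend {
    fn put(&mut self, key: &str, data: Vec<u8>) -> Result<(), String> {
        self.objects.insert(key.to_string(), data);
        Ok(())
    }

    fn get(&self, key: &str) -> Option<&[u8]> {
        self.objects.get(key).map(|v| v.as_slice())
    }

    fn delete(&mut self, key: &str) -> bool {
        self.objects.remove(key).is_some()
    }

    fn copy(&mut self, from: &str, to: &str) -> Result<(), String> {
        // Read the source object, then write it under the new key.
        let data = self
            .objects
            .get(from)
            .cloned()
            .ok_or("source not found".to_string())?;
        self.objects.insert(to.to_string(), data);
        Ok(())
    }
}

fn main() {
    let mut backend = InMemoryBackend { objects: HashMap::new() };
    backend.put("avatars/me.png", b"bytes".to_vec()).unwrap();
    backend.copy("avatars/me.png", "avatars/copy.png").unwrap();
    assert!(backend.delete("avatars/me.png"));
    assert_eq!(backend.get("avatars/copy.png"), Some(&b"bytes"[..]));
    println!("ok");
}
```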

M3 Auth Completeness — verified complete:
- POST /logout revokes refresh tokens + Redis sessions
- GET /settings returns provider availability
- POST /magiclink with hashed token storage
- DELETE /user soft-delete with token revocation
- Recovery flow accepts new password
- Email change requires re-verification via token
- OAuth callback redirects with fragment tokens
- MFA verify returns aal2 JWT with amr claims
- MFA challenge validates factor ownership
- SessionManager wired into login/logout
- GET /sessions returns active sessions
- Configurable ACCESS_TOKEN_LIFETIME
- Claims model extended with session_id, aal, amr
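
The extended claims model can be sketched as follows. The `session_id`, `aal`, and `amr` fields come from the list above; the remaining fields and the `step_up` helper are illustrative assumptions, not the actual struct in the auth crate.

```rust
// Sketch of the extended JWT claims. session_id, aal, and amr are the
// M3 additions; sub and exp are typical JWT claims and are assumptions
// about the real struct.
#[derive(Debug, Clone, PartialEq)]
struct Claims {
    sub: String,        // user id
    exp: u64,           // expiry (unix seconds), driven by ACCESS_TOKEN_LIFETIME
    session_id: String, // ties the token to a SessionManager session
    aal: String,        // authenticator assurance level: "aal1" or "aal2"
    amr: Vec<String>,   // authentication methods, e.g. ["password", "totp"]
}

impl Claims {
    // Hypothetical helper: after a successful MFA verify, step the token
    // up to aal2 and record the second factor in amr.
    fn step_up(mut self, method: &str) -> Claims {
        self.aal = "aal2".to_string();
        self.amr.push(method.to_string());
        self
    }
}

fn main() {
    let claims = Claims {
        sub: "user-123".into(),
        exp: 1_900_000_000,
        session_id: "sess-abc".into(),
        aal: "aal1".into(),
        amr: vec!["password".into()],
    };
    let upgraded = claims.step_up("totp");
    assert_eq!(upgraded.aal, "aal2");
    assert_eq!(upgraded.amr, vec!["password", "totp"]);
    println!("ok");
}
```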

Tests: 62 passed, 0 failed, 11 ignored (external services)
Warnings: 0
Made-with: Cursor
2026-03-15 14:40:48 +02:00


pub mod backend;
pub mod handlers;
pub mod tus;

use std::sync::Arc;

use axum::{
    extract::DefaultBodyLimit,
    routing::{delete, get, patch, post},
    Router,
};
use common::Config;
use sqlx::PgPool;

use crate::backend::{AwsS3Backend, StorageBackend};
use handlers::StorageState;
pub async fn init(db: PgPool, config: Config) -> Router {
    // Initialize the S3 backend
    let backend: Arc<dyn StorageBackend> = Arc::new(
        AwsS3Backend::new(&config)
            .await
            .expect("Failed to init storage backend"),
    );
    let bucket_name = config.s3_bucket.clone();

    // Create the bucket if it does not exist (errors ignored, e.g. "already exists")
    let _ = backend.create_bucket(&bucket_name).await;

    let state = StorageState { db, backend, config, bucket_name };

    Router::new()
        // Health check
        .route("/health", get(handlers::health_check))
        // Bucket operations
        .route("/bucket", get(handlers::list_buckets).post(handlers::create_bucket))
        .route("/bucket/:bucket_id", delete(handlers::delete_bucket))
        // Object operations
        .route("/object/list/:bucket_id", post(handlers::list_objects))
        .route(
            "/object/sign/:bucket_id/*filename",
            post(handlers::sign_object).get(handlers::get_signed_object),
        )
        .route(
            "/object/public/:bucket_id/*filename",
            get(handlers::get_public_url),
        )
        .route(
            "/object/:bucket_id/*filename",
            get(handlers::download_object)
                .post(handlers::upload_object)
                .delete(handlers::delete_object),
        )
        // Copy and move operations
        .route("/object/copy", post(handlers::copy_object))
        .route("/object/move", post(handlers::move_object))
        // TUS resumable uploads
        .route("/upload/resumable", post(tus::tus_create_upload).options(tus::tus_options))
        .route(
            "/upload/resumable/:upload_id",
            patch(tus::tus_patch_upload)
                .head(tus::tus_head_upload)
                .options(tus::tus_options),
        )
        .layer(DefaultBodyLimit::max(1024 * 1024 * 1024)) // 1 GiB limit for TUS uploads
        .with_state(state)
}
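
The TUS handlers registered above split uploads into 5 MB S3 multipart parts (per the M2 list). The offset bookkeeping behind each PATCH can be sketched with std only; the `next_part` helper and its exact return shape are illustrative assumptions, not the actual code in `tus.rs`.

```rust
const CHUNK_SIZE: u64 = 5 * 1024 * 1024; // 5 MiB, the S3 multipart minimum part size

// Hypothetical helper: given the current TUS Upload-Offset and the total
// upload length, report which multipart part the next PATCH writes and
// whether all bytes have been received.
fn next_part(offset: u64, total: u64) -> (u64, bool) {
    let part_number = offset / CHUNK_SIZE + 1; // S3 part numbers start at 1
    (part_number, offset >= total)
}

fn main() {
    // A 12 MiB upload arrives as parts 1..=3 (5 MiB, 5 MiB, 2 MiB).
    let total = 12 * 1024 * 1024;
    assert_eq!(next_part(0, total), (1, false));
    assert_eq!(next_part(5 * 1024 * 1024, total), (2, false));
    assert_eq!(next_part(10 * 1024 * 1024, total), (3, false));
    assert_eq!(next_part(total, total), (3, true)); // all bytes received
    println!("ok");
}
```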