# MadBase State Pillar (Autobase + Redis)

## Architecture
The State Pillar (Pillar 3) is the centralized data layer of MadBase, hosting both durable and ephemeral state:
- PostgreSQL: Persistent relational data (users, projects, storage metadata)
- Autobase: HA and quorum management for PostgreSQL
- Redis: High-performance caching and distributed state
- HAProxy: Unified entry point for both databases
## Components

### PostgreSQL (Persistent State)
- Port: 5432 (direct), 5433 (via HAProxy)
- Purpose: ACID-compliant data storage
- Features:
- Automatic failover via Patroni
- etcd for leader election
- Replication for high availability
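Because Patroni can promote a different node to primary at any time, clients benefit from a failover-aware connection string. The sketch below builds a multi-host libpq URL with `target_session_attrs=read-write`, which makes the client try each node until it finds the current primary; the host names and credentials are illustrative placeholders, not values defined by MadBase.

```python
# Sketch: build a failover-aware libpq connection URL for the Patroni cluster.
# Host names and credentials are placeholders; 5432 is the direct PostgreSQL
# port (HAProxy on 5433 is the single-endpoint alternative).

def build_dsn(hosts, user, password, dbname):
    """Return a multi-host libpq URL; the client tries hosts in order, and
    target_session_attrs=read-write ensures it lands on the current primary."""
    host_list = ",".join(hosts)
    return (
        f"postgresql://{user}:{password}@{host_list}/{dbname}"
        "?target_session_attrs=read-write"
    )

dsn = build_dsn(
    ["db-node-1:5432", "db-node-2:5432", "db-node-3:5432"],
    "user", "pass", "madbase",
)
print(dsn)
# The resulting URL can be passed directly to e.g. psycopg2.connect(dsn).
```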
### Redis (Ephemeral State)
- Port: 6379 (via HAProxy)
- Purpose: Shared caching and distributed coordination
- Features:
- In-memory data structures
- TTL-based auto-expiration
- Pub/Sub messaging
- Atomic operations
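Atomic operations and TTL expiration combine naturally for rate limiting. The sketch below implements a fixed-window counter: the pure `window_key` helper buckets requests into time windows, and the hedged main block shows the corresponding `INCR` + `EXPIRE` calls. The key naming and the limit of 100 are illustrative conventions, not part of MadBase.

```python
import time

# Sketch: fixed-window rate limiting built on Redis atomic INCR plus
# TTL-based auto-expiration. Key naming convention is illustrative.

def window_key(user_id: str, now: float, window_s: int = 60) -> str:
    """All requests in the same fixed window share one counter key."""
    return f"ratelimit:{user_id}:{int(now) // window_s}"

if __name__ == "__main__":
    import redis  # lazy import: the helper above needs no server
    r = redis.Redis(host="db-node", port=6379)
    key = window_key("alice", time.time())
    count = r.incr(key)   # atomic increment, safe across proxies
    r.expire(key, 60)     # TTL cleans the bucket up automatically
    if count > 100:
        print("rate limit exceeded")
```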
## Autobase Integration
MadBase uses Autobase (PostgreSQL + Patroni + etcd) to provide a high-availability, self-healing database layer.
### High Availability
A minimum of 3 nodes is required for quorum:
- If the primary PostgreSQL fails, Patroni promotes a standby in under 30 seconds
- HAProxy automatically redirects traffic to the new leader
- Redis uses Sentinel or Cluster for automatic failover
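During the failover window, in-flight connections drop and new ones may briefly fail. A client that retries with capped exponential backoff rides out the switchover; the sketch below is one minimal way to do that, with illustrative delay values and a generic `connect` callable rather than any MadBase-provided API.

```python
# Sketch: client-side retry across the (< 30 s) failover window.
# Delay schedule and max_wait are illustrative choices.

def backoff_delays(base: float = 1.0, cap: float = 8.0):
    """Yield 1, 2, 4, 8, 8, 8, ... seconds between reconnect attempts."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

def connect_with_retry(connect, max_wait: float = 30.0, sleep=None):
    """Call `connect` until it succeeds or max_wait seconds of backoff have
    been spent; `connect` is any callable that raises OSError on failure."""
    import time
    sleep = sleep or time.sleep
    waited = 0.0
    for delay in backoff_delays():
        try:
            return connect()
        except OSError:
            if waited + delay > max_wait:
                raise
            sleep(delay)
            waited += delay
```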
## Scaling

### Initial Setup
- 1 node (non-HA, development)

### Production
- 3 or 5 nodes (HA with quorum)
### Scaling Command

```bash
curl -X POST http://localhost:8001/api/v1/cluster/scale \
  -d '{ "target_db_count": 3, "min_ha_nodes": true }'
```
## Use Cases

### PostgreSQL (Persistent Data)
- User accounts and authentication
- Project configurations
- Storage metadata
- Function deployments
- Audit logs
### Redis (Ephemeral Data)
- User sessions (shared across proxies)
- Realtime presence tracking
- Rate limiting counters
- Distributed locks
- API response caching
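The distributed-lock use case above is commonly implemented with the `SET NX PX` pattern plus a compare-and-delete release, so an expired holder cannot delete a lock someone else has since acquired. The sketch below follows that pattern; the key name, TTL, and release script are illustrative, not a MadBase-provided API.

```python
import uuid

# Sketch of the distributed-lock pattern: acquire with SET NX PX,
# release only if we still hold our own token.

RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def make_token() -> str:
    """Random token so only the lock holder can release the lock."""
    return uuid.uuid4().hex

if __name__ == "__main__":
    import redis  # lazy import: make_token() needs no server
    r = redis.Redis(host="db-node", port=6379)
    token = make_token()
    # Acquire: set only if the key does not exist, auto-expire after 10 s
    if r.set("lock:migration", token, nx=True, px=10_000):
        try:
            pass  # ... critical section ...
        finally:
            r.eval(RELEASE_SCRIPT, 1, "lock:migration", token)
```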
## Monitoring
Database health is monitored via the System Node:
- Check Patroni status: `curl http://db-node:8008/health`
- Check Redis: `redis-cli -h db-node ping`
- HAProxy stats: http://db-node:7000
- Metrics available in "State Pillar Performance" Grafana dashboard
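The individual checks above can be folded into one overall status. The sketch below does that with a small pure classifier plus a hedged Patroni probe over HTTP; the `healthy`/`degraded`/`down` labels are an illustrative convention, and `db-node` is a placeholder host.

```python
# Sketch: aggregate the Patroni and Redis checks into a single status.
# Status labels are an illustrative convention, not a MadBase API.

def classify(patroni_ok: bool, redis_ok: bool) -> str:
    """Both checks pass -> healthy; exactly one -> degraded; none -> down."""
    if patroni_ok and redis_ok:
        return "healthy"
    if patroni_ok or redis_ok:
        return "degraded"
    return "down"

if __name__ == "__main__":
    import urllib.request
    try:
        # Patroni answers HTTP 200 on /health when the member is running
        resp = urllib.request.urlopen("http://db-node:8008/health", timeout=2)
        patroni_ok = resp.status == 200
    except OSError:
        patroni_ok = False
    # A Redis check would issue a PING here (e.g. via redis-cli or a client)
    print(classify(patroni_ok, redis_ok=False))
```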
## Backup Strategy
- PostgreSQL: Daily automated backups to S3
- Redis: Periodic RDB snapshots (configured via Redis config)
- HAProxy: Configuration managed via Infrastructure as Code
## Configuration

### Environment Variables
```bash
DATABASE_URL="postgres://user:pass@db:5432/madbase"
REDIS_URL="redis://db:6379/0"
PATRONI_SCOPE=madbase-cluster
```
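Services often need the individual parts of these URLs (host, port, database) rather than the whole string, e.g. for health checks. A minimal sketch using the standard library, applied to the sample values above:

```python
from urllib.parse import urlparse

# Sketch: split a connection URL into its parts. Works for both the
# postgres:// and redis:// URLs shown in this section.

def parse_db_url(url: str) -> dict:
    p = urlparse(url)
    return {
        "scheme": p.scheme,
        "host": p.hostname,
        "port": p.port,
        "database": (p.path or "/").lstrip("/"),
    }

print(parse_db_url("postgres://user:pass@db:5432/madbase"))
print(parse_db_url("redis://db:6379/0"))
```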
### Resource Requirements
| Plan | RAM | CPU | Max Concurrent Connections |
|---|---|---|---|
| CX21 | 8GB | 3 | 100 |
| CX31 | 16GB | 4 | 200 |
| CX41 | 32GB | 8 | 500 |
See CACHING_STRATEGY.md for detailed caching information.