# MadBase Kernel Architecture

This document defines the core organization and security model of the MadBase infrastructure.

## Documentation Map

- [**Deployment Guide**](DEPLOYMENT_GUIDE.md): Setup, Scaling, and Provider configuration.
- [**Storage & Persistence**](STORAGE.md): DB, S3, and Backups.
- [**State Pillar (Autobase + Redis)**](AUTOBASE.md): High-Availability State Node details.
- [**Caching Strategy**](CACHING_STRATEGY.md): Two-tier caching architecture.
- [**Node Templates**](NODE_TEMPLATES.md): Reference for server plans and services.

---

The "Kernel" architecture is the simplified, core organizational model for MadBase deployments. It collapses complex node roles into four manageable pillars, each with specific scaling characteristics and duties.
## 0. System Pillar (The Foundation)

A horizontally static but **vertically scalable** "seed" node that provides the cluster's base services.

- **Components**:
  - **Control Plane API**: Cluster management and orchestration.
  - **Observability**: VictoriaMetrics, Loki, Grafana.
- **Scaling**: Static horizontally (1 node). Supports **Vertical Scaling** via VPS plan upgrades (e.g., CX21 to CX41).

## 1. Proxy / Public API (The Face)

This pillar handles external communication and the public-facing API layer.

- **Components**:
  - **Gateway Proxy**: Ingress, SSL, and request routing.
  - **Public API**: The core platform API (Auth, Storage metadata, etc.).
  - **L1 Cache**: In-memory caching (moka) for ultra-low latency.
- **Scaling**:
  - **Range**: 1 to 100 nodes.
  - **Constraints**: Horizontally scalable via Anycast or Floating IP.
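For the L1 layer, a minimal sketch using the `moka` crate's synchronous cache is shown below; the key/value types, capacity bound, and TTL are assumptions for illustration rather than MadBase's actual settings.

```rust
use std::time::Duration;

use moka::sync::Cache;

fn main() {
    // Bounded in-process L1 cache: entries expire quickly so stale data
    // cannot outlive the shared L2 (Redis) state for long.
    let l1: Cache<String, String> = Cache::builder()
        .max_capacity(10_000)                  // illustrative bound
        .time_to_live(Duration::from_secs(30)) // illustrative TTL
        .build();

    l1.insert("session:abc".to_string(), "user-42".to_string());

    if let Some(user) = l1.get("session:abc") {
        println!("L1 hit: {user}");
    }
}
```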
## 2. Worker (The Muscle)

This pillar executes business logic and Edge Functions.

- **Components**:
  - **Compute**: Deno/Wasm runners.
  - **Realtime**: WebSocket managers with presence tracking.
  - **L1 Cache**: In-memory caching for function results.
- **Scaling**: 1+ nodes.
- **Constraints**: Unlimited horizontal scaling.
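Presence tracking has to be visible across all worker nodes, which the State Pillar's Redis provides. The sketch below is a hedged illustration of one way to do that with the `redis` crate and a per-channel set; the key scheme, TTL, and URL are assumptions, not MadBase's actual implementation.

```rust
use redis::Commands;

/// Record that `user_id` is present in `channel`, then list current members.
/// A minimal sketch: key names, TTLs, and the Redis URL are assumptions.
fn track_presence(
    redis_url: &str,
    channel: &str,
    user_id: &str,
) -> redis::RedisResult<Vec<String>> {
    let client = redis::Client::open(redis_url)?;
    let mut con = client.get_connection()?;

    let key = format!("presence:{channel}");

    // Presence is a Redis set shared by every worker node.
    let _: i64 = con.sadd(&key, user_id)?;
    // Expire the set so crashed workers do not leave ghost members behind.
    let _: i64 = con.expire(&key, 60)?;

    con.smembers(&key)
}

fn main() -> redis::RedisResult<()> {
    let members = track_presence("redis://127.0.0.1/", "room:lobby", "user-42")?;
    println!("present: {members:?}");
    Ok(())
}
```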
## 3. State Pillar (The Memory)

Ensures data persistence, consistency, and distributed coordination.

- **Components**:
  - **PostgreSQL**: Primary data store (via Autobase).
  - **Redis**: High-performance distributed cache.
  - **HAProxy**: Unified entry point for both databases.
- **Scaling**: 1, 3, or 5 nodes (must be odd for quorum).
- **Features**:
  - Shared auth sessions across proxies
  - Realtime presence tracking across workers
  - Distributed locking for migrations
  - Cluster-wide rate limiting
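For example, cluster-wide rate limiting can lean on the shared Redis instance. The fixed-window counter below is a hedged sketch: the key layout, window size, and limit are illustrative assumptions rather than MadBase's actual policy.

```rust
use redis::Commands;

/// Fixed-window rate limiter shared by every proxy and worker node.
/// Key layout, window size, and limit are illustrative assumptions.
fn allow_request(
    con: &mut redis::Connection,
    client_id: &str,
    limit: u64,
) -> redis::RedisResult<bool> {
    // One counter per client per 60-second window, shared cluster-wide.
    let window = 60;
    let key = format!("ratelimit:{client_id}");

    let count: u64 = con.incr(&key, 1)?;
    if count == 1 {
        // First hit in this window: start the expiry clock.
        let _: i64 = con.expire(&key, window)?;
    }

    Ok(count <= limit)
}
```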
---

## Observability Strategy (Metrics & Logs)

To maintain the performance of the four pillars, we implement a dedicated **System Pillar** for observability (co-located with the control plane or on its own node, depending on scale).

- **VictoriaMetrics (VM)**: Fast, cost-effective time-series database for metrics.
- **Loki**: Distributed log aggregation.
- **Placement**:
  - Small Clusters: Embedded in the **Control** nodes.
  - High Throughput: Dedicated `system-node` to prevent observability overhead from impacting the application pillars.
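VictoriaMetrics ingests the standard Prometheus exposition format, so each application pillar can simply expose a metrics endpoint for it to scrape. The snippet below is a generic sketch using the `prometheus` crate to render that format; the metric name and registry wiring are illustrative assumptions, not taken from the MadBase services.

```rust
use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Register an illustrative request counter; in practice VictoriaMetrics
    // scrapes the rendered text from an HTTP /metrics endpoint.
    let registry = Registry::new();
    let requests = IntCounter::new("madbase_http_requests_total", "Total HTTP requests")?;
    registry.register(Box::new(requests.clone()))?;

    requests.inc();

    // Render the Prometheus exposition format that a scraper would read.
    let mut buf = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buf)?;
    println!("{}", String::from_utf8(buf)?);
    Ok(())
}
```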
---

## Network Isolation & Security Zones

To ensure "Defense in Depth," the kernel is divided into two distinct network zones:

### 1. Public Zone (The DMZ)

- **Deployment**: Nodes have a Public IP and are attached to the Cluster VPC.
- **Pillars**:
  - **System Node**: For cluster administration and dashboard access.
  - **Proxy / Public API**: For handling all incoming internet traffic.
- **Access**: Restricted to HTTPS (443) and SSH (via safe-list).

### 2. Private Zone (The Core)

- **Deployment**: Nodes have **No Public IP**. They are accessible ONLY via the Cluster VPC (Private Network).
- **Pillars**:
  - **Worker Pillar**: Executes application code.
  - **State Pillar**: Stores sensitive project data (PostgreSQL + Redis).
- **Access**: No direct internet access. All ingress must pass through the Proxy/API pillar. Egress is managed via a NAT Gateway (optional) or limited to OS updates.
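The zone assignment reduces to a simple rule: System and Proxy nodes sit in the public zone with only HTTPS and safe-listed SSH exposed, while Worker and State nodes get no public IP at all. The sketch below encodes that rule with hypothetical helper types; it is not MadBase code.

```rust
/// Which network zone a node belongs to (hypothetical helper types).
#[derive(Debug, PartialEq)]
enum Zone {
    Public,  // Public IP + Cluster VPC
    Private, // Cluster VPC only, no public IP
}

#[derive(Debug)]
enum Pillar {
    System,
    Proxy,
    Worker,
    State,
}

/// Zone and allowed public ingress ports for each pillar.
fn zone_for(pillar: &Pillar) -> (Zone, &'static [u16]) {
    match pillar {
        // DMZ: HTTPS plus safe-listed SSH only.
        Pillar::System | Pillar::Proxy => (Zone::Public, &[443, 22]),
        // Core: no public ingress; reached only over the VPC.
        Pillar::Worker | Pillar::State => (Zone::Private, &[]),
    }
}

fn main() {
    let (zone, ports) = zone_for(&Pillar::State);
    assert_eq!(zone, Zone::Private);
    assert!(ports.is_empty());
}
```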
---

## State Pillar: The "Memory" of the Cluster

The State Pillar combines **persistent storage** (PostgreSQL) and **ephemeral state** (Redis) into a single, highly available unit.

### Why Combine Them?

1. **Resource Symmetry**: Both PostgreSQL and Redis are memory-intensive and benefit from the same VPS plans (High-RAM nodes).
2. **HA Piggybacking**: Pillar 3 already manages HA via Patroni and etcd. Redis leverages the same infrastructure.
3. **Centralized Coordination**: Having all state (durable and ephemeral) in one place simplifies the architecture.
4. **Zero Complexity**: No new pillar needed; we just enhanced the existing "Database" pillar.

### Cache Distribution

- **L1 Cache** (moka): Runs on each Proxy/Worker node for ultra-low latency.
- **L2 Cache** (Redis): Runs on State Pillar nodes for shared state.
```
┌─────────────┐         ┌─────────────┐
│   Proxy 1   │         │  Worker 1   │
│ (L1 Cache)  │         │ (L1 Cache)  │
└──────┬──────┘         └──────┬──────┘
       │                       │
       └───────────┬───────────┘
                   │
          ┌────────▼────────┐
          │  State Pillar   │
          │  ┌──────────┐   │
          │  │PostgreSQL│   │
          │  └──────────┘   │
          │  ┌──────────┐   │
          │  │  Redis   │   │
          │  │(L2 Cache)│   │
          │  └──────────┘   │
          │  ┌──────────┐   │
          │  │ HAProxy  │   │
          │  └──────────┘   │
          └─────────────────┘
```
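To make the two tiers concrete, here is a hedged read-through sketch combining a per-node `moka` cache with the shared Redis tier. The key scheme, TTLs, and miss-handling policy are assumptions for illustration only.

```rust
use std::time::Duration;

use moka::sync::Cache;
use redis::Commands;

/// Read-through lookup: check the per-node L1 first, then the shared L2.
/// Key scheme and TTLs are illustrative assumptions.
fn get_cached(
    l1: &Cache<String, String>,
    l2: &mut redis::Connection,
    key: &str,
) -> redis::RedisResult<Option<String>> {
    // 1. Per-node L1 (moka): ultra-low latency, no network hop.
    if let Some(hit) = l1.get(key) {
        return Ok(Some(hit));
    }

    // 2. Shared L2 (Redis on the State Pillar): one hop over the VPC.
    let hit: Option<String> = l2.get(key)?;
    if let Some(ref value) = hit {
        // Populate L1 so the next read on this node stays local.
        l1.insert(key.to_string(), value.clone());
    }

    Ok(hit)
}

fn main() -> redis::RedisResult<()> {
    let l1 = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(30))
        .build();
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut l2 = client.get_connection()?;

    println!("{:?}", get_cached(&l1, &mut l2, "profile:user-42")?);
    Ok(())
}
```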
For detailed caching architecture, see [CACHING_STRATEGY.md](CACHING_STRATEGY.md).