# MadBase Deployment Guide
This guide covers everything from initial setup to high-availability scaling on Hetzner Cloud and other providers.
## 1. Prerequisites
- A Hetzner Cloud account with an API token (or another supported provider).
- An SSH key added to your provider.
- A PostgreSQL database for the Control Plane state.
- Docker installed for local development and service deployment.
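You can sanity-check the local pieces before moving on (these commands are illustrative, not part of MadBase itself):

```bash
docker --version   # Docker is installed and on PATH
ssh-add -l         # an SSH key is loaded in the agent (optional check)
```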
## 2. Setting Up the Control Plane
### Step 1: Environment Configuration
```bash
export HETZNER_API_KEY="your_token"
export DATABASE_URL="postgresql://user:pass@localhost/madbase_control_plane"
# Use $HOME instead of ~ so the path expands inside double quotes
export HETZNER_SSH_KEY_PATH="$HOME/.ssh/id_rsa"
```
### Step 2: Run the API
```bash
docker run -p 8001:8001 \
  -e DATABASE_URL="$DATABASE_URL" \
  -e HETZNER_API_KEY="$HETZNER_API_KEY" \
  madbase/control-plane
```
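Note that `localhost` in `DATABASE_URL` refers to the container itself, not the machine running Docker. If PostgreSQL runs on the Docker host, one option (a sketch, assuming Docker 20.10+ on Linux) is to map the host gateway into the container:

```bash
# Hypothetical variant: reach the host's PostgreSQL via host.docker.internal
docker run -p 8001:8001 \
  --add-host=host.docker.internal:host-gateway \
  -e DATABASE_URL="postgresql://user:pass@host.docker.internal/madbase_control_plane" \
  -e HETZNER_API_KEY="$HETZNER_API_KEY" \
  madbase/control-plane
```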
## 3. Provisioning a Cluster
### Adding a Node
To add a node, send a POST request to the Control Plane API:
```bash
curl -X POST http://localhost:8001/api/v1/servers \
  -H "Content-Type: application/json" \
  -d '{
    "name": "worker-1",
    "template": "worker-node",
    "hetzner_plan": "CX11",
    "region": "fsn1"
  }'
```
Refer to NODE_TEMPLATES.md for available templates.
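To confirm the node was registered, you can query the same collection (an assumption based on common REST conventions; the guide only documents the POST):

```bash
# Assumed endpoint: list registered servers; jq is optional pretty-printing
curl -s http://localhost:8001/api/v1/servers | jq .
```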
## 4. Scaling Strategies
### Horizontal Scaling
The Proxy/API and Worker pillars are designed for horizontal expansion.
- Use `POST /api/v1/cluster/scale` to target a specific node count (see the sketch after this list).
- The system handles drain-and-remove logic for safe scale-down.
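A minimal sketch of a scale request; the body shape (`target_count`) is an assumption, since the guide does not document the payload:

```bash
# Hypothetical payload: converge the Worker pillar on 5 nodes
curl -X POST http://localhost:8001/api/v1/cluster/scale \
  -H "Content-Type: application/json" \
  -d '{"target_count": 5}'
```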
### Vertical Scaling (System Node)
The System Node cannot be scaled horizontally. To scale it vertically:
- Upgrade the VPS plan in the Hetzner console.
- The Control Plane will detect the resource change on restart (see the example below).
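For example, assuming the Control Plane container was started with `--name madbase-control-plane` (the name is hypothetical; the `docker run` above does not set one):

```bash
# Restart so the Control Plane re-reads the System Node's resources
docker restart madbase-control-plane
```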
## 5. Security Hardening
Use the `/fortify` endpoint to secure your nodes (a sample call follows this list):
- Configures Hetzner Cloud Firewalls.
- Disables root/password SSH access.
- Installs `fail2ban`.
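A sketch of invoking it; the exact path is an assumption extrapolated from the `/api/v1` prefix used elsewhere in this guide:

```bash
# Assumed endpoint: apply hardening across the cluster's nodes
curl -X POST http://localhost:8001/api/v1/cluster/fortify
```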
## 6. High Availability (HA)
For production deployments, always aim for:
- 3+ database nodes (for quorum).
- 2+ proxy nodes (for ingress HA).
- Distributed regions (e.g., `fsn1`, `nbg1`); a provisioning sketch follows.
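As an illustration, the provisioning endpoint from section 3 can spread workers across regions (node names and plan here are just examples):

```bash
# One worker per region, so a single-region outage cannot take out all workers
for region in fsn1 nbg1; do
  curl -X POST http://localhost:8001/api/v1/servers \
    -H "Content-Type: application/json" \
    -d "{\"name\": \"worker-$region\", \"template\": \"worker-node\", \"hetzner_plan\": \"CX11\", \"region\": \"$region\"}"
done
```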
For more details on multiple providers, see the specialized MULTI_PROVIDER_VPS.md implementation notes.