deploy-cli (0.1.3)
Deploy CLI
Production-ready deployment CLI with blue-green deployments, auto-scaling, and Podman/Podman Compose orchestration.
Features
- Zero-Downtime Deployments: Blue-green deployment strategy with instant rollback
- Containerized Gateway: Nginx runs as 'gateway' container for consistent environment
- Built-in Monitoring: Grafana + VictoriaMetrics + Loki automatically deployed on sentinel server
- PostgreSQL (Standard Database): pg_auto_failover for automatic failover (an opinionated default)
- S3/MinIO Backups: Automated backups to S3 or self-hosted MinIO with WAL archiving
- Local Image Caching: Images cached locally for instant rollback without registry pulls
- Podman/Podman Compose: Rootless containers for better security
- SSH Hardening: Built-in `deploy harden` command for production security
- Flexible SSH Configuration: Each server can have its own SSH port or domain name
- Smart Scaling: Intelligent recommendations for adding/removing servers
- Simple Commands: Deploy, rollback, and scale with single commands
- CI/CD Ready: Non-interactive mode for automation
- Multi-Server: Scale from 1 to 100+ servers
- Cost Effective: typically 70-80% cheaper than managed platforms
Quick Start
Installation
# Install dependencies
npm install
# Build the CLI
npm run build
# Link for global use (optional)
npm link
Initial Setup
# 1. Initialize local environment
deploy init --local
# 2. Add your sentinel server
deploy init 192.168.1.10 # Standard SSH port 22
# OR with custom SSH port:
deploy init 192.168.1.10:50022 # Custom SSH port
# 3. Deploy your application
deploy push --apply
# Done! Your application is live with blue-green deployment
Prerequisites
- Node.js 18+
- SSH access to target servers (with SSH key authentication)
- VPS servers (Ubuntu, Debian, CentOS, RHEL, or Fedora)
- A `podman-compose.yaml` file in your project
Firewall Ports
The following ports are automatically configured during deploy init:
Required:
- `22` (or custom) - SSH access
- `80` - HTTP traffic (gateway container)
- `443` - HTTPS traffic (optional, for SSL)
Monitoring (sentinel server only):
- `3000` - Grafana dashboard
- `8428` - VictoriaMetrics API
- `3100` - Loki API
- `9100` - Node Exporter metrics
- `8080` - cAdvisor metrics
Database (if using PostgreSQL template):
- `5432` - PostgreSQL primary
- `5433` - pg_auto_failover monitor
Core Commands
deploy init
Initialize the deploy environment and/or add servers.
# Setup local .deploy directory only
deploy init --local
# Add sentinel server with PostgreSQL (default)
deploy init 192.168.1.10
# Add server with custom SSH port and database
deploy init 192.168.1.10:50022 --db-name myapp --db-password MySecure123
# Skip PostgreSQL initialization
deploy init 192.168.1.10:50022 --no-db
# Use custom SSH user
deploy init 192.168.1.10:50022 --user ubuntu
What it does:
- Creates `.deploy/` directory structure in the current directory
- Installs Podman, Podman Compose, and network utilities on the server
- Configures containerized gateway (Nginx) setup
- Sets up firewall rules and systemd services
- On sentinel server: Deploys monitoring stack (Grafana, VictoriaMetrics, Loki, Promtail)
- On sentinel server: Deploys PostgreSQL with pg_auto_failover (unless --no-db)
- Auto-generates secure database password if not provided
- Prepares image cache directory
- Configures blue-green deployment infrastructure
Sentinel server gets (by default):
- Monitoring: Grafana at `http://<server-ip>:3000` (admin/admin)
- Database: PostgreSQL at `<server-ip>:5432` with auto-failover
- VictoriaMetrics, Loki, Node Exporter, cAdvisor
- Ready for production deployments immediately
deploy push
Deploy your application with zero-downtime blue-green deployment.
# Deploy with default settings
deploy push
# Specify compose file
deploy push -f ./custom-compose.yaml
# Apply deployment (skip dry-run)
deploy push --apply
# Custom version tag
deploy push --version v1.2.3
How it works:
- Detects currently active environment (blue or green)
- Deploys to inactive environment
- Waits for health checks to pass
- Caches images locally to `/opt/deploy/images/` for fast rollback
- Switches Nginx to route traffic to the new environment
- Keeps old environment running for instant rollback
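Conceptually, the traffic switch is just the gateway's upstream flipping between the two environments. A rough sketch of what the generated Nginx config might look like (illustrative only; the actual upstream names and ports are decided by the CLI):

```nginx
# Gateway config sketch: traffic goes to whichever environment is active.
upstream app_active {
    server 127.0.0.1:8080;   # blue; a switch rewrites this to the green port
}

server {
    listen 80;
    location / {
        proxy_pass http://app_active;
    }
}
```

Because only this upstream definition changes, the switch is atomic from the client's point of view.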
deploy rollback
Instantly rollback to the previous environment.
# Rollback with confirmation
deploy rollback
# Rollback without confirmation (CI/CD)
deploy rollback --auto
How it works:
- Loads cached images from previous deployment (< 10 seconds)
- Falls back to registry if cache miss
- Switches Nginx configuration to previous environment
Fast rollback: < 10 seconds with cached images, no registry pulls needed.
deploy status
Show current deployment status and cluster information.
# Human-readable output
deploy status
# Machine-readable JSON (for CI/CD)
deploy status --output json
Shows:
- Active environment (blue/green)
- Server list with roles
- Running containers per server
- Recent deployment history
- Health status
deploy grow
Add servers to the cluster (always adds 2 to maintain odd numbers).
# Add 2 servers (maintains odd cluster size, default port 22)
deploy grow 192.168.1.20 192.168.1.21
# Add 2 servers with custom SSH ports (each can have different port)
deploy grow 192.168.1.20:50022 192.168.1.21:50022
# Mix different ports per server
deploy grow 192.168.1.20:50022 192.168.1.21:22
# Add sentinel server only
deploy grow 192.168.1.10:50022 --initial
What it does:
- Adds 2 servers to maintain odd cluster size (1, 3, 5, 7, 9...)
- Auto-assigns optimal roles using ScalingStrategy
- Shows layout transitions (colocated ↔ dedicated at 5 servers)
- Assigns database roles (primary, sync, async, read_replica)
- Rebalances workloads across cluster
Note: The system maintains odd numbers for proper quorum in distributed systems.
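The quorum argument in one line: a majority of n nodes is n/2 + 1 (integer division), so the number of tolerable failures is n minus that majority, and an even n never tolerates more failures than the odd size below it. A quick shell check (a sketch, not part of the CLI):

```shell
# For each cluster size, print the majority quorum and how many
# node failures the cluster can survive while keeping quorum.
for n in 1 2 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))
  echo "n=$n quorum=$quorum tolerates=$(( n - quorum ))"
done
```

Note that n=4 tolerates one failure, the same as n=3: the extra even node adds cost without adding resilience.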
deploy shrink
Remove servers from the cluster (always removes 2 to maintain odd numbers).
# Remove 2 servers
deploy shrink 192.168.1.30 192.168.1.31
# Get recommendation on which to remove
deploy doctor # Shows safe servers to remove
What it does:
- Safely removes 2 servers
- Validates servers are not critical (never removes sentinel, primary, sync DB)
- Drains containers before removal
- Rebalances remaining cluster
- Handles layout transitions (dedicated → colocated below 5 servers)
Safety checks:
- ❌ Never removes sentinel server
- ❌ Never removes primary database
- ❌ Never removes sync_secondary database
- ✅ Prefers removing read_replicas, async_secondary, app-only nodes
deploy harden
Harden SSH security on a server (recommended before production use).
# Harden with defaults (creates 'deploy' user, changes SSH to port 22022)
deploy harden 192.168.1.10
# Specify all options
deploy harden 192.168.1.10:22 -u myuser -p mypassword -s 50022 -k ~/.ssh/id_ed25519.pub
# Use current server with custom SSH port
deploy harden 136.244.106.105:22 -s 50022
# Will generate password if not provided
deploy harden 192.168.1.10 -u deploy -s 22022
What it does:
- Creates a new non-root user (default: `deploy`) with sudo access
- Sets up SSH key authentication
- Changes SSH port (default: `22022`, or use `-s`)
- Disables root SSH login (the root account stays active; use `sudo su -`)
- Disables password authentication (if a key is provided, an SSH key is then required)
- Updates firewall rules
- Transfers `/opt/deploy` ownership to the new user
- Automatically updates the server config in `.deploy/` if the server exists
Important:
- Root account is NOT disabled, only SSH login as root
- You can still become root: `ssh -p 50022 deploy@server`, then `sudo su -`
- The deploy user has full sudo access
IMPORTANT: Test the new connection before closing your current session!
# After running deploy harden, test from another terminal:
ssh -p 50022 deploy@192.168.1.10
# If successful, you can close old firewall port:
sudo ufw delete allow 22/tcp
deploy doctor
Get comprehensive recommendations for improving your cluster.
deploy doctor
Analyzes your entire cluster and provides actionable recommendations across:
📊 Scaling:
- Should you grow to more servers?
- Layout transition recommendations (colocated → dedicated)
- Cost/benefit analysis
🔒 Security:
- Unhardened servers detected
- Root login warnings
- SSH port recommendations
🗄️ Database:
- PostgreSQL deployment status
- Replication health
- Group 0/1 configuration
📦 Deployment:
- Application deployment status
- Image cache status
- Rollback readiness
📈 Monitoring:
- Grafana dashboard access
- Default password warnings
Overall Health Score: 0-100 based on cluster state
Example output:
📊 SCALING: ⚠ GROW to 3 servers for production
🔒 SECURITY: ⚠ 1 server(s) using root user
🗄️ DATABASE: ✓ PostgreSQL with auto-failover deployed
📦 DEPLOYMENT: ℹ No deployments yet
📈 MONITORING: ✓ Monitoring stack on sentinel
🔴 OVERALL HEALTH SCORE: 60/100
🎯 TOP PRIORITIES:
1. ⚠️ GROW to 3 servers for production
→ deploy grow <ip1> <ip2>
2. ⚠️ Harden SSH security
→ deploy harden 136.244.106.105:50022 -s 22022
Database Note:
PostgreSQL is automatically initialized during deploy init (use --no-db to skip).
Database status is shown in deploy status command.
Monitoring Stack
The monitoring stack is automatically deployed on your sentinel server.
What Gets Deployed
When you run deploy init <sentinel-server>, these services are automatically started:
Observability Stack:
- Grafana (port 3000) - Dashboards and visualization
- Default credentials: `admin` / `admin`
- Pre-configured datasources for VictoriaMetrics and Loki
- VictoriaMetrics (port 8428) - Metrics storage (Prometheus-compatible)
- 30-day retention
- Efficient time-series database
- Loki (port 3100) - Log aggregation
- 31-day retention
- Query logs from all containers
- Promtail - Log shipper (sends logs to Loki)
- Monitors system logs and container logs
- Node Exporter (port 9100) - System metrics
- CPU, memory, disk, network stats
- cAdvisor (port 8080) - Container metrics
- Per-container resource usage
Accessing Monitoring
# After deploy init completes:
# Access Grafana dashboard
http://<your-server-ip>:3000
# Login with:
# Username: admin
# Password: admin
# IMPORTANT: Change password on first login!
Using the Monitoring Stack
View Logs:
- Grafana → Explore → Select "Loki" datasource
- Query: `{job="containers"}` for container logs
- Query: `{job="varlogs"}` for system logs
View Metrics:
- Grafana → Explore → Select "VictoriaMetrics" datasource
- Browse available metrics
- Pre-configured dashboards for nodes and containers
Monitor Deployments:
- Track deployment success/failure rates
- Monitor container health
- View application logs in real-time
- Alert on errors
PostgreSQL Database (Integrated)
PostgreSQL is automatically deployed during deploy init - opinionated database choice with auto-failover.
Initialization (part of deploy init):
# PostgreSQL deployed by default
deploy init 192.168.1.10
# ✓ Deploys pg_auto_failover monitor
# ✓ Deploys PostgreSQL primary
# ✓ Auto-generates secure password
# ✓ Creates 'app' database
# Custom database settings
deploy init 192.168.1.10 --db-name myapp --db-password MySecure123
# Skip database if not needed
deploy init 192.168.1.10 --no-db
Database status (integrated into deploy status):
deploy status
# Shows:
Database:
✓ PostgreSQL with auto-failover
● Monitor: Up 2h
● Primary: Up 2h
✓ Accepting connections
• Databases: app, postgres
• Group 0 (HA): server1
What you get automatically:
- ✅ pg_auto_failover from Citus Data for automatic failover
- ✅ Monitor + Primary node on sentinel server
- ✅ Automatic failover on primary failure
- ✅ Auto-generated secure password (32 chars)
- ✅ Production-grade high availability
- ✅ Connection string displayed on init
Connection string format:
postgresql://postgres:<password>@<server-host>:5432/<database>
Use in your application:
# podman-compose.yaml
services:
api:
environment:
DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@host.containers.internal:5432/app
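For a quick connection test outside the compose file, you can assemble the URL in the shell. The values below are placeholders for illustration; substitute your server's address and the password printed by `deploy init`:

```shell
# Placeholder credentials for illustration only.
DB_PASSWORD='example-password'
DATABASE_URL="postgresql://postgres:${DB_PASSWORD}@192.168.1.10:5432/app"
echo "$DATABASE_URL"
```

With `psql` installed you could then verify connectivity via `psql "$DATABASE_URL" -c 'SELECT 1'`.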
Why PostgreSQL + pg_auto_failover:
- Production-grade automatic failover
- No manual intervention needed
- ACID compliance, rich features
- Used by: Instagram, Reddit, Spotify
Backup & Recovery
Deploy CLI supports automated backups to S3-compatible storage (AWS S3, MinIO, Backblaze B2, DigitalOcean Spaces, etc.).
Backup to AWS S3
# Enable automated daily backups
deploy backup enable \
--s3-bucket my-prod-backups \
--s3-key AKIAIOSFODNN7EXAMPLE \
--s3-secret wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Backup operations
deploy backup create # Manual backup now
deploy backup list # List available backups
deploy backup restore <id> # Restore from backup
deploy backup disable # Turn off automated backups
Backup to MinIO (Self-Hosted)
MinIO is S3-compatible object storage you can self-host - perfect for keeping backups on your own infrastructure.
Setup MinIO server (one-time):
# On a dedicated server (or any server with storage):
podman run -d \
--name minio \
-p 9000:9000 \
-p 9001:9001 \
-v minio-data:/data \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=YourSecurePassword123 \
--restart unless-stopped \
minio/minio server /data --console-address ":9001"
# Access MinIO Console: http://your-minio-server:9001
# Login: minioadmin / YourSecurePassword123
Enable backups to MinIO:
# Remote MinIO server with HTTPS
deploy backup enable \
--s3-bucket postgres-backups \
--s3-endpoint minio.example.com:9000 \
--s3-key minioadmin \
--s3-secret YourSecurePassword123
# Local MinIO (development, no SSL)
deploy backup enable \
--s3-bucket backups \
--s3-endpoint localhost:9000 \
--no-ssl \
--s3-key minioadmin \
--s3-secret minioadmin
Why MinIO:
- ✅ Self-hosted (no AWS costs, full control)
- ✅ S3-compatible (works with all S3 tools)
- ✅ Fast (local network = faster backups/restores)
- ✅ Simple (single container deployment)
- ✅ Free and open source
- ✅ Perfect for on-premise deployments
Backup Features
What you get:
- ✅ Daily automated backups (pg_basebackup)
- ✅ Continuous WAL archiving (< 1 minute recovery point)
- ✅ Point-in-time recovery (restore to any second)
- ✅ 30-day retention (configurable)
- ✅ S3/MinIO compatible (works with any S3-compatible storage)
Data loss window:
- Single server failure (3+ nodes): ZERO (streaming replication)
- Catastrophic failure: < 1 minute (WAL archiving)
Example Workflow
Development (1 Server)
# Initialize project and server
deploy init --local
deploy init 192.168.1.10
# ✓ Monitoring automatically deployed
# ✓ PostgreSQL automatically deployed with auto-failover
# ✓ Secure password auto-generated
# ✓ Ready for deployments!
# Check what you got
deploy status
# Shows:
# - Sentinel server
# - Monitoring stack (Grafana at :3000)
# - Database (PostgreSQL with auto-failover)
# - Connection string
# Get recommendations
deploy doctor
# Shows health score and improvement suggestions
# Deploy your application
deploy push --apply
# ✓ Images cached locally for fast rollback
# View logs and metrics in Grafana
open http://192.168.1.10:3000
# Fast rollback if needed
deploy rollback
# ✓ Loads from cache (< 10 seconds)
Production (3-5 Servers)
# Start with 1 server (custom SSH port)
deploy init 192.168.1.10:50022
# Grow to 3 servers (each can have different port)
deploy grow 192.168.1.20:50022 192.168.1.21:50022
# Deploy application
deploy push --apply
# Grow to 5 servers later
deploy grow 192.168.1.30:50022 192.168.1.31:22
# Deploy again (automatically distributes across all servers)
deploy push --apply
Rollback Scenario
# Deploy new version
deploy push --apply --version v2.0.0
# Something went wrong!
# Instant rollback (< 10 seconds)
deploy rollback
# Traffic now back to previous version
deploy status
Configuration
The CLI stores all configuration in .deploy/ directory in your project root:
.deploy/
config.yaml # Main configuration
clusters/
production/
state.yaml # Server and deployment state
staging/
development/
metrics/ # Historical metrics
logs/
deploy.log # All operations
audit.log # State-changing operations
scripts/ # Generated scripts
Version Control:
- ✅ Commit: `.deploy/config.yaml`, `.deploy/clusters/*/state.yaml` (share with team)
- ❌ Ignore: `.deploy/logs/`, `.deploy/metrics/`, `.deploy/cache/` (already in .gitignore)
Each project has its own .deploy/ directory - perfect for multi-project setups!
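If you manage the ignore rules yourself, the equivalent `.gitignore` fragment would be (paths assumed from the list above):

```
.deploy/logs/
.deploy/metrics/
.deploy/cache/
```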
Configuration File
Edit .deploy/config.yaml in your project:
cluster: production
active_environment: blue
deployment:
canary:
stages: [10, 50, 100]
duration: 5m
rollback:
error_threshold: 5%
response_time: 2000ms
auto_approve: false
dry_run: true
runtime: podman
monitoring:
enabled: true
metrics_retention_days: 30
ci_mode: false
output_format: text
Podman Compose File
Create a podman-compose.yaml in your project:
version: '3.8'
services:
web:
image: nginx:alpine
ports:
- "8080:80"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3
restart: unless-stopped
labels:
- "deploy.service=web"
api:
image: myapp/api:latest
ports:
- "3000:3000"
environment:
- NODE_ENV=production
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
restart: unless-stopped
labels:
- "deploy.service=api"
The CLI automatically:
- Adds environment suffixes (`-blue`, `-green`) to service names
- Adjusts ports to avoid conflicts (green uses a +1000 offset)
- Manages container lifecycle
- Monitors health checks
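For example, the `web` service from the compose file above might be materialized for the green environment roughly as follows (an illustrative sketch; the exact generated names and port offsets are decided by the CLI):

```yaml
services:
  web-green:           # "-green" suffix appended by the CLI
    image: nginx:alpine
    ports:
      - "9080:80"      # original host port 8080 + 1000 green offset
```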
CI/CD Integration
GitHub Actions
name: Deploy
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install Deploy CLI
run: |
git clone https://github.com/your/deploy-cli
cd deploy-cli
npm install
npm run build
npm link
- name: Restore Deploy Config
env:
DEPLOY_CONFIG: ${{ secrets.DEPLOY_CONFIG_BASE64 }}
run: echo "$DEPLOY_CONFIG" | base64 -d | tar -xz -C ~/
- name: Deploy
env:
DEPLOY_CI: true
run: |
deploy push --apply
deploy status
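The `DEPLOY_CONFIG_BASE64` secret used above can be produced from a working checkout. A sketch (run from the project root, using a dummy `.deploy/` here for the demo; assumes GNU `base64` with `-w0` for single-line output):

```shell
# Demo in a scratch directory: a dummy .deploy/ stands in for your real config.
cd "$(mktemp -d)"
mkdir -p .deploy && echo "cluster: production" > .deploy/config.yaml

# Pack .deploy/ and encode it as one line for pasting into the secret.
tar -czf - .deploy | base64 -w0 > deploy-config.b64
```

To verify, `base64 -d deploy-config.b64 | tar -tzf -` should list `.deploy/config.yaml`.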
Environment Variables
- `DEPLOY_CI=true` - Enables CI mode (no confirmations, JSON output)
- `DEPLOY_AUTO_APPROVE=true` - Skip confirmations
- `DEPLOY_OUTPUT=json` - Force JSON output
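In a CI script these are simply exported before invoking the CLI (sketch):

```shell
# Non-interactive CI invocation: no prompts, machine-readable output.
export DEPLOY_CI=true
export DEPLOY_AUTO_APPROVE=true
export DEPLOY_OUTPUT=json
# deploy push --apply   # would now run without confirmation prompts
```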
Architecture
Blue-Green Deployment
Nginx (Gateway) - traffic routing
  ├─→ Blue Environment (Active)
  │     web-blue
  │     api-blue
  └─→ Green Environment (Inactive)
        web-green
        api-green
During deployment:
- Deploy to Green (inactive)
- Health check Green
- Switch Nginx → Green
- Blue becomes inactive (ready for rollback)
Server Roles
The CLI automatically assigns roles based on cluster size:
- 1 server: All-in-one (gateway + services + data)
- 3 servers: Gateway, Data, Monitoring
- 5 servers: 2× Gateway, 2× Data, 1× Monitoring
- 7+ servers: Dedicated roles per tier
Troubleshooting
"SSH connection failed"
- Ensure your SSH key is in `~/.ssh/id_rsa` (or `id_ed25519`, `id_ecdsa`)
- Verify you can manually SSH: `ssh root@<server-ip>`
- Check that the firewall allows SSH (port 22)
"Podman not installed"
The deploy init command should install Podman automatically. If it fails:
# Manually SSH to server and run:
sudo apt update && sudo apt install podman # Ubuntu/Debian
sudo dnf install podman # Fedora/RHEL
"Container failed health check"
- Check your healthcheck configuration in `podman-compose.yaml`
- Verify the service is actually healthy: `curl http://localhost:8080/health`
- Check logs on the server: `podman logs <container-name>`
"Nginx config test failed"
The CLI tests Nginx config before applying. If it fails:
# SSH to server
ssh root@<server-ip>
# Check Nginx config
nginx -t
# View logs
journalctl -u nginx
Development
# Install dependencies
npm install
# Run in development mode
npm run dev
# Build
npm run build
# Run built CLI
npm start -- init --local
# Or use directly
node dist/index.js --help
Testing
Deploy CLI has comprehensive test coverage for critical business logic.
Running Tests
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with UI
npm run test:ui
# Run tests once (CI mode)
npm run test:run
Test Coverage
Current Coverage:
- ✅ 107 tests across 3 test suites
- ✅ 100% passing
- ✅ Core business logic thoroughly tested
What's Tested:
- `src/utils/validation.ts` - Input validation (33 tests)
  - IP address validation
  - Domain name validation
  - Port number validation
  - Server count validation (odd numbers)
- `src/utils/parser.ts` - Address parsing (18 tests)
  - host:port format parsing
  - Default port handling
  - IP and domain support
  - Error cases
- `src/core/scaling.ts` - Scaling strategy (56 tests)
  - Sentinel allocation (1 vs 2 sentinels)
  - Layout transitions (colocated ↔ dedicated)
  - Database node distribution (Groups 0 & 1)
  - Safe removal validation
  - 40/60 resource split formula
Test Structure
tests/
├── unit/
│ ├── core/
│ │ └── scaling.test.ts # ScalingStrategy logic
│ └── utils/
│ ├── validation.test.ts # Input validation
│ └── parser.test.ts # Address parsing
├── integration/
│ └── (future: state persistence, SSH operations)
└── fixtures/
└── (future: mock states, sample configs)
Running Specific Tests
# Run only validation tests
npm test validation
# Run only scaling tests
npm test scaling
# Run with verbose output
npm test -- --reporter=verbose
Bugs Found by Tests
Tests have already found and helped fix:
- ✅ Domain validation accepting invalid IP patterns (fixed)
- ✅ Scaling formula edge cases verified
- ✅ Safe removal logic validated
Roadmap
MVP (Current):
- Blue-green deployments
- Instant rollback
- Multi-server support
- Smart scaling advice
- CI/CD integration
Phase 2:
- Metrics collection and storage
- Advanced --advise with historical analysis
- Database migration support
- SSL/TLS certificate management
- Monitoring dashboard integration
- Slack/email notifications
- Log aggregation
- Backup/restore commands
Phase 3:
- Multi-registry support
- Container image caching
- Network mesh configuration
- Advanced load balancing
- Auto-scaling triggers
License
MIT
Ready to deploy?
deploy init 192.168.1.10
deploy push --apply
Your application is now live with zero-downtime deployments!
Dependencies
Dependencies
| ID | Version |
|---|---|
| axios | ^1.6.2 |
| chalk | ^4.1.2 |
| commander | ^11.1.0 |
| dayjs | ^1.11.10 |
| js-yaml | ^4.1.0 |
| ora | ^5.4.1 |
| ssh2 | ^1.15.0 |
Development Dependencies
| ID | Version |
|---|---|
| @types/js-yaml | ^4.0.9 |
| @types/node | ^20.10.5 |
| @types/ssh2 | ^1.11.19 |
| @vitest/coverage-v8 | 3.2.4 |
| @vitest/ui | ^3.2.4 |
| typescript | ^5.3.3 |
| vitest | ^3.2.4 |