# Docker

## Local Dev (Compose)

```sh
docker compose up -d --build
docker compose ps
docker compose down -v
```
To include the observability stack (Grafana/Loki/Tempo/VictoriaMetrics) in the local compose setup:

```sh
docker compose --profile observability up -d --build
docker compose --profile observability down -v
```
To use S3-compatible object storage (MinIO) for Loki + Tempo locally:

```sh
docker compose -f docker-compose.yml -f observability/docker-compose.s3.yml --profile observability up -d --build
docker compose -f docker-compose.yml -f observability/docker-compose.s3.yml --profile observability down -v
```
Service ports in the default compose:

- Gateway HTTP: `http://localhost:8080`
- Gateway gRPC: `localhost:8081`
- Aggregate gRPC: `localhost:50051`
- Aggregate HTTP: `http://localhost:18080`
- Runner HTTP: `http://localhost:28080`
- Control API: `http://localhost:38080`
- Control UI: `http://localhost:8082`
- MailHog SMTP: `smtp://localhost:1025`
- MailHog UI: `http://localhost:8025`
- MinIO S3 API: `http://localhost:9000`
- MinIO console: `http://localhost:9001`
- NATS: `nats://localhost:4222`, monitoring `http://localhost:8222`
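After `docker compose up`, the published ports above can be sanity-checked from the host. The script below only attempts a plain TCP connect per port; specific health-check endpoints are not defined in this document, so none are assumed.

```python
import socket

# Host ports from the default compose file (see the list above).
SERVICE_PORTS = {
    "gateway_http": 8080,
    "gateway_grpc": 8081,
    "aggregate_grpc": 50051,
    "aggregate_http": 18080,
    "runner_http": 28080,
    "control_api": 38080,
    "control_ui": 8082,
    "mailhog_ui": 8025,
    "minio_s3": 9000,
    "minio_console": 9001,
    "nats": 4222,
}

def is_listening(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICE_PORTS.items():
        status = "up" if is_listening(port) else "DOWN"
        print(f"{name:15} localhost:{port:<6} {status}")
```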
MinIO defaults:

- Credentials: `minioadmin` / `minioadmin`
- Buckets: `cloudlysis-docs-0`, `cloudlysis-docs-1`, `cloudlysis-docs-2` (comma-separated docs bucket set)
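The three-bucket docs set implies documents are distributed across buckets. How the control plane actually picks a bucket is not described here; the sketch below shows one deterministic approach (hashing the document ID), purely for illustration.

```python
import hashlib

# Bucket set from the local MinIO defaults above.
DOCS_BUCKETS = ["cloudlysis-docs-0", "cloudlysis-docs-1", "cloudlysis-docs-2"]

def bucket_for(doc_id: str, buckets: list[str] = DOCS_BUCKETS) -> str:
    """Pick a bucket deterministically from the document ID (illustrative only).

    Uses SHA-256 rather than Python's built-in hash() so the mapping is
    stable across processes and runs.
    """
    digest = hashlib.sha256(doc_id.encode("utf-8")).digest()
    return buckets[int.from_bytes(digest[:8], "big") % len(buckets)]
```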
Email defaults (local):

- The runner uses the SMTP backend via `RUNNER_SMTP_URL=smtp://mailhog:1025`
- Inspect emails in the MailHog UI at `http://localhost:8025`
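To confirm the mail path end to end, a test message can be sent to MailHog from the host (where its SMTP port is published as `localhost:1025`, per the port list above). The addresses and subject are placeholders; MailHog accepts anything.

```python
import smtplib
from email.message import EmailMessage

def build_test_message() -> EmailMessage:
    """Build a throwaway message; all addresses are placeholders."""
    msg = EmailMessage()
    msg["From"] = "dev@example.test"
    msg["To"] = "inbox@example.test"
    msg["Subject"] = "cloudlysis local smoke test"
    msg.set_content("Hello from the local compose stack.")
    return msg

if __name__ == "__main__":
    try:
        # From the host, MailHog's SMTP port is published on localhost:1025.
        with smtplib.SMTP("localhost", 1025, timeout=5) as smtp:
            smtp.send_message(build_test_message())
        print("sent - check http://localhost:8025")
    except OSError as exc:
        print(f"MailHog not reachable: {exc}")
```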
## Swarm (Dev)

Build images:

```sh
sh docker/scripts/build_images.sh all
```
Build images for the Gitea container registry:

```sh
export IMAGE_PREFIX=git.madapes.com/madapes/cloudlysis
export IMAGE_TAG=dev
sh docker/scripts/build_images.sh all
```
Push images to the Gitea container registry:

```sh
docker login git.madapes.com
export IMAGE_PREFIX=git.madapes.com/madapes/cloudlysis
export IMAGE_TAG=dev
sh docker/scripts/push_images.sh all
```
Create the dev secrets required by the observability stack:

```sh
sh docker/scripts/swarm_dev_secrets.sh
```
This also creates the dev secrets used by the control plane for S3 document storage: `control_s3_access_key_id` and `control_s3_secret_access_key`.
Deploy:

```sh
export IMAGE_PREFIX=cloudlysis
export IMAGE_TAG=dev
docker stack deploy -c swarm/stacks/platform.yml cloudlysis
docker stack deploy -c swarm/stacks/control-plane.yml cloudlysis_control
docker stack deploy -c swarm/stacks/observability.yml cloudlysis_obs
```
Production-style control plane (no MinIO in the stack; S3 is external):

```sh
# create secrets (set CONTROL_S3_ACCESS_KEY_ID / CONTROL_S3_SECRET_ACCESS_KEY first)
sh docker/scripts/swarm_dev_secrets.sh

# required env for the stack
export CONTROL_S3_ENDPOINT="https://<hetzner-endpoint>"
export CONTROL_S3_REGION="eu-central-1"
export CONTROL_S3_BUCKET_DOCS="cloudlysis-docs"
docker stack deploy -c swarm/stacks/control-plane-prod.yml cloudlysis_control
```
Verify production S3 bucket/prefix permissions with the AWS CLI (env-gated):

```sh
# install aws cli v2, then export creds and target
export S3_ENDPOINT="https://<hetzner-endpoint>"
export S3_REGION="eu-central-1"
export S3_BUCKET_DOCS="cloudlysis-docs"
export S3_PREFIX_DOCS="docs/"
# optionally set S3_FORCE_PATH_STYLE=true for some S3-compatible endpoints
sh docker/scripts/s3_verify_docs.sh
```
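The `S3_*` variables above map naturally onto S3 client settings. As an illustration only — the verify script's internals are not shown here, and the boto3-style kwarg names are an assumption — a client configuration could be assembled like this:

```python
import os

def s3_client_kwargs(env=os.environ) -> dict:
    """Assemble S3 client settings from the S3_* variables above.

    The kwarg names follow boto3 conventions as an illustration; the
    actual tooling may configure its client differently.
    """
    kwargs = {
        "endpoint_url": env["S3_ENDPOINT"],
        "region_name": env.get("S3_REGION", "eu-central-1"),
    }
    if env.get("S3_FORCE_PATH_STYLE", "").lower() == "true":
        # Some S3-compatible endpoints require path-style addressing.
        kwargs["addressing_style"] = "path"
    return kwargs
```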
Create/provision the docs bucket (idempotent; CI/CD-friendly):

```sh
export S3_ENDPOINT="https://<hetzner-endpoint>"
export S3_REGION="eu-central-1"
export S3_BUCKET_DOCS="cloudlysis-docs"
# optional
# export S3_ENABLE_VERSIONING=true
sh docker/scripts/s3_create_docs_bucket.sh
```
Apply a lifecycle policy to the docs bucket (operator; automated):

```sh
export S3_ENDPOINT="https://<hetzner-endpoint>"
export S3_REGION="eu-central-1"
export S3_BUCKET_DOCS="cloudlysis-docs"
# optional: provide your own lifecycle JSON file
# export S3_LIFECYCLE_JSON="path/to/lifecycle.json"
sh docker/scripts/s3_apply_lifecycle_docs.sh
```
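The default lifecycle JSON shipped by the script is not reproduced in this document. If you supply your own via `S3_LIFECYCLE_JSON`, it must follow the AWS S3 lifecycle-configuration format; the rules below are a hypothetical example, not the project default:

```python
import json

# Hypothetical lifecycle: expire old noncurrent versions and abort stale
# multipart uploads under the docs/ prefix. Tune to your retention needs.
lifecycle = {
    "Rules": [
        {
            "ID": "docs-cleanup",
            "Status": "Enabled",
            "Filter": {"Prefix": "docs/"},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)
```

Point `S3_LIFECYCLE_JSON` at the generated file before running the apply script.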
Remove:

```sh
docker stack rm cloudlysis_obs
docker stack rm cloudlysis_control
docker stack rm cloudlysis
```