Docker Compose Advanced Patterns: Multi-Environment, Profiles and Extensions
Most Docker Compose users stop at defining services, volumes, and networks. That gets you surprisingly far for simple deployments, but real-world infrastructure demands more: different configurations per environment, selective service startup, shared configuration fragments, and services that wait for genuine readiness rather than just container startup. Docker Compose has evolved substantially over the past two years, and many of its most powerful features remain underused.
This guide covers the advanced Compose patterns that separate a production-grade stack from a tutorial example. Every pattern here works with Docker Compose V2 (the docker compose plugin), which is the standard as of 2025.
Compose Profiles: Selective Service Startup
Profiles let you define services that only start when explicitly requested. This is invaluable for development tools, debug utilities, and optional infrastructure components that you do not want running in every environment.
```yaml
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  # Only starts with --profile debug
  pgadmin:
    image: dpage/pgadmin4
    profiles: ["debug"]
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: admin

  # Only starts with --profile monitoring
  prometheus:
    image: prom/prometheus
    profiles: ["monitoring"]
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    profiles: ["monitoring"]
    ports:
      - "3000:3000"

  # Only starts with --profile testing
  test-runner:
    image: myapp:latest
    profiles: ["testing"]
    command: ["pytest", "--tb=short"]
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  pgdata:
```
Usage patterns for profiles:
```bash
# Start only core services (app + postgres)
docker compose up -d

# Start core + monitoring stack
docker compose --profile monitoring up -d

# Start core + debug tools + monitoring
docker compose --profile debug --profile monitoring up -d

# Run tests; explicitly naming a profiled service activates its
# profile, so test-runner and its dependencies start
docker compose run --rm test-runner

# List services for a specific profile
docker compose --profile monitoring config --services
```
Services without a `profiles` key always start. Services with `profiles` start only when one of their profiles is activated (via `--profile` or the `COMPOSE_PROFILES` environment variable), or when the service itself is targeted by name, which activates its profiles automatically. A service can belong to multiple profiles. Avoid having a non-profiled service depend on a profiled one: when the profile is inactive the dependency cannot be started, so either give the dependent services a shared profile or leave the dependency unprofiled.
Extends and Include: Composable Configuration
The extends keyword lets a service inherit configuration from another service, either within the same file or from an external file. The include directive (introduced in Compose V2.20) lets you compose multiple Compose files together without merging them.
Using extends for Base Services
```yaml
# base-services.yml - Shared service definitions
services:
  base-app:
    image: myapp:${APP_VERSION:-latest}
    restart: unless-stopped
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 512M
        reservations:
          cpus: "0.5"
          memory: 128M

  base-worker:
    extends:
      service: base-app
    command: ["worker", "--concurrency=4"]
```
```yaml
# docker-compose.yml
services:
  web:
    extends:
      file: base-services.yml
      service: base-app
    ports:
      - "8080:8080"
    environment:
      - APP_MODE=web
    depends_on:
      redis:
        condition: service_healthy

  background-worker:
    extends:
      file: base-services.yml
      service: base-worker
    environment:
      - QUEUE_NAME=default
    depends_on:
      redis:
        condition: service_healthy

  email-worker:
    extends:
      file: base-services.yml
      service: base-worker
    environment:
      - QUEUE_NAME=email
      - SMTP_HOST=${SMTP_HOST}

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
```
Using include for Multi-File Composition
The include directive is cleaner than the older -f flag approach because each included file is self-contained with its own networks and volumes:
```yaml
# docker-compose.yml
include:
  - path: ./monitoring/docker-compose.yml
    project_directory: ./monitoring
    env_file: ./monitoring/.env
  - path: ./logging/docker-compose.yml
    project_directory: ./logging

services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    networks:
      - default
      - monitoring_default
```
Environment-Specific Overrides
The classic pattern for multi-environment deployments uses override files. Docker Compose automatically merges docker-compose.yml with docker-compose.override.yml if both exist.
```yaml
# docker-compose.yml (base - shared across all environments)
services:
  app:
    image: myapp:${APP_VERSION:-latest}
    restart: unless-stopped
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@postgres:5432/mydb
      REDIS_URL: redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  pgdata:
```
```yaml
# docker-compose.override.yml (development - auto-loaded)
services:
  app:
    build:
      context: .
      target: development
    ports:
      - "8080:8080"
      - "5005:5005" # Debugger port
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      LOG_LEVEL: debug
      DEBUG: "true"

  postgres:
    ports:
      - "5432:5432" # Expose for local tools

  redis:
    ports:
      - "6379:6379"
```
```yaml
# docker-compose.prod.yml (production - explicit merge)
services:
  app:
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
    environment:
      LOG_LEVEL: warn
      DEBUG: "false"
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"

  postgres:
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 4G
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./postgres/postgresql.conf:/etc/postgresql/postgresql.conf
    command: postgres -c config_file=/etc/postgresql/postgresql.conf
```
Deployment commands per environment:
```bash
# Development (auto-loads override)
docker compose up -d

# Production (explicit merge, skips override)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# Staging (chain multiple overrides)
docker compose -f docker-compose.yml \
  -f docker-compose.staging.yml up -d

# Validate the merged result
docker compose -f docker-compose.yml \
  -f docker-compose.prod.yml config
```
Variable Interpolation and .env Files
Compose supports variable interpolation with defaults, required variables, and error messages:
```yaml
services:
  app:
    image: myapp:${APP_VERSION:-latest} # Default value
    environment:
      SECRET_KEY: ${SECRET_KEY:?SECRET_KEY is required} # Error if unset
      API_URL: ${API_URL:-http://localhost:8080}
      DB_POOL: ${DB_POOL:-10}
      WORKERS: ${WORKERS:-4}
    labels:
      - "deploy.version=${APP_VERSION:-unknown}"
      - "deploy.env=${ENVIRONMENT:-development}"
```
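Compose's `${VAR:-default}` and `${VAR:?message}` operators mirror POSIX shell parameter expansion, so you can try the semantics in any `sh` before relying on them in a Compose file (a quick sketch; the variable names are illustrative):

```shell
#!/bin/sh
# ${VAR:-default}: substitute a default when VAR is unset or empty
unset API_URL
echo "${API_URL:-http://localhost:8080}"    # -> http://localhost:8080

API_URL=https://api.internal
echo "${API_URL:-http://localhost:8080}"    # -> https://api.internal

# ${VAR:?message}: fail loudly when VAR is unset or empty
unset SECRET_KEY
( : "${SECRET_KEY:?SECRET_KEY is required}" ) 2>/dev/null || echo "failed as expected"
```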
Compose resolves variables in this priority order:

1. Shell environment variables (highest priority)
2. Variables from the `--env-file` flag
3. The `.env` file in the project directory
4. Default values in the Compose file
```bash
# .env.example (committed to version control)
APP_VERSION=latest
DB_PASSWORD=
SECRET_KEY=
ENVIRONMENT=development
DB_POOL=10
```
```bash
# .env (gitignored, created per environment)
APP_VERSION=2.4.1
DB_PASSWORD=supersecretpassword
SECRET_KEY=a1b2c3d4e5f6g7h8
ENVIRONMENT=production
DB_POOL=25
```
Never commit `.env` files containing secrets to version control. Commit an `.env.example` with placeholder values instead. For production, consider Docker secrets or a dedicated secrets manager rather than environment variables.
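As a minimal sketch of the file-based secrets alternative (the service, variable, and file names here are illustrative), Compose can mount a secret at `/run/secrets/<name>` instead of passing it through the environment:

```yaml
services:
  app:
    image: myapp:latest
    secrets:
      - db_password
    environment:
      # Many official images support *_FILE variants that read the
      # value from a mounted file instead of the environment
      DB_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # gitignored, like .env
```

The secret never appears in `docker inspect` output or in the process environment, which is the main advantage over `environment:` entries.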
YAML Anchors and Aliases
YAML anchors (&) and aliases (*) reduce duplication within a single Compose file. Combined with merge keys (<<), they create reusable configuration blocks:
```yaml
x-common-env: &common-env
  TZ: UTC
  LOG_FORMAT: json
  OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317

x-common-labels: &common-labels
  com.company.project: myproject
  com.company.team: platform

x-healthcheck-defaults: &healthcheck-defaults
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 30s

x-logging: &default-logging
  driver: json-file
  options:
    max-size: "25m"
    max-file: "3"
    tag: "{{.Name}}"

x-deploy-defaults: &deploy-defaults
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s

services:
  api:
    image: myapp-api:latest
    environment:
      <<: *common-env
      SERVICE_NAME: api
      PORT: "8080"
    labels:
      <<: *common-labels
      com.company.service: api
    logging: *default-logging
    healthcheck:
      <<: *healthcheck-defaults
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
    deploy:
      <<: *deploy-defaults
      replicas: 2

  worker:
    image: myapp-worker:latest
    environment:
      <<: *common-env
      SERVICE_NAME: worker
      CONCURRENCY: "8"
    labels:
      <<: *common-labels
      com.company.service: worker
    logging: *default-logging
    deploy:
      <<: *deploy-defaults
      replicas: 1

  scheduler:
    image: myapp-scheduler:latest
    environment:
      <<: *common-env
      SERVICE_NAME: scheduler
    labels:
      <<: *common-labels
      com.company.service: scheduler
    logging: *default-logging
    deploy: *deploy-defaults
```
The x- prefix denotes extension fields that Compose ignores during processing. They exist solely as anchor targets for reuse within the file.
depends_on with Service Conditions
The basic depends_on only ensures container start order, not readiness. With conditions, you can wait for a service to be genuinely healthy or to complete its task:
```yaml
services:
  migrate:
    image: myapp:latest
    command: ["migrate", "up"]
    depends_on:
      postgres:
        condition: service_healthy
    restart: "no" # Run once and exit

  app:
    image: myapp:latest
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      migrate:
        condition: service_completed_successfully # Wait for migration
      kafka:
        condition: service_healthy
        restart: true # Restart app if kafka restarts

  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 10s

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  kafka:
    image: confluentinc/cp-kafka:7.6
    healthcheck:
      test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
      interval: 10s
      timeout: 10s
      retries: 15
      start_period: 30s
```
The three conditions available are:
| Condition | Behavior | Use Case |
|---|---|---|
| `service_started` | Wait for container to start (default) | Services with no health check |
| `service_healthy` | Wait for health check to pass | Databases, message brokers |
| `service_completed_successfully` | Wait for container to exit with code 0 | Migrations, seed data, init tasks |
Health Checks: Beyond the Basics
Well-designed health checks are the foundation of reliable container orchestration. Here are production-tested patterns for common services:
```yaml
services:
  # PostgreSQL: check actual connectivity, not just the process
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 15s

  # MySQL: use mysqladmin ping
  mysql:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "--silent"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 30s

  # Redis: verify the server actually answers commands
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep -q PONG"]
      interval: 5s
      timeout: 3s
      retries: 5

  # Elasticsearch: check cluster health
  elasticsearch:
    image: elasticsearch:8.13.0
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"'"]
      interval: 15s
      timeout: 10s
      retries: 10
      start_period: 60s

  # Custom app: HTTP endpoint
  app:
    image: myapp:latest
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 20s
```
Always set `start_period` to give services time to initialize before health check failures start counting. Without it, slow-starting services like Elasticsearch may be marked unhealthy before they finish bootstrapping.
Compose Watch: Hot Reload for Development
Docker Compose Watch (introduced in V2.22) provides file-watching capabilities that automatically sync changes or rebuild containers during development:
```yaml
services:
  frontend:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
    develop:
      watch:
        # Sync source files without rebuild
        - action: sync
          path: ./frontend/src
          target: /app/src
          ignore:
            - "**/*.test.js"
            - "**/__snapshots__"
        # Sync and restart on config changes
        - action: sync+restart
          path: ./frontend/config
          target: /app/config
        # Full rebuild on dependency changes
        - action: rebuild
          path: ./frontend/package.json
        # Full rebuild on Dockerfile changes
        - action: rebuild
          path: ./frontend/Dockerfile

  backend:
    build:
      context: ./backend
    ports:
      - "8080:8080"
    develop:
      watch:
        # Go source: rebuild (a compile step is needed)
        - action: rebuild
          path: ./backend
          ignore:
            - "**/*_test.go"
            - "./backend/tmp"
        # Config files: sync and restart
        - action: sync+restart
          path: ./backend/config
          target: /app/config
        # Dependency changes: full rebuild
        - action: rebuild
          path: ./backend/go.mod
```
```bash
# Start with file watching
docker compose watch

# Or combine with up
docker compose up --watch

# Watch specific services only
docker compose watch frontend
```
The three watch actions behave differently:
| Action | What Happens | Best For |
|---|---|---|
| `sync` | Files copied into the running container | Interpreted languages with hot reload (JS, Python) |
| `sync+restart` | Files synced, then container restarted | Configuration files, templates |
| `rebuild` | Image rebuilt and container recreated | Compiled languages, dependency changes |
Init Containers Pattern
Docker Compose does not have a native init container concept like Kubernetes, but you can achieve the same effect using depends_on with service_completed_successfully:
```yaml
services:
  # Init container 1: wait for the database and run migrations
  db-migrate:
    image: myapp:latest
    command: ["./migrate", "up"]
    depends_on:
      postgres:
        condition: service_healthy
    restart: "no"
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@postgres:5432/mydb

  # Init container 2: seed required data
  db-seed:
    image: myapp:latest
    command: ["./seed", "--only-required"]
    depends_on:
      db-migrate:
        condition: service_completed_successfully
    restart: "no"
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@postgres:5432/mydb

  # Init container 3: generate TLS certificates
  cert-init:
    image: alpine/openssl
    command: >
      sh -c "if [ ! -f /certs/server.crt ]; then
        openssl req -x509 -nodes -days 365
        -subj '/CN=myapp.local'
        -newkey rsa:2048
        -keyout /certs/server.key
        -out /certs/server.crt;
      fi"
    volumes:
      - certs:/certs
    restart: "no"

  # Main application - waits for all init containers
  app:
    image: myapp:latest
    depends_on:
      db-seed:
        condition: service_completed_successfully
      cert-init:
        condition: service_completed_successfully
      redis:
        condition: service_healthy
    volumes:
      - certs:/app/certs:ro
    ports:
      - "8443:8443"

  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  pgdata:
  certs:
```
Key insight: init containers should always set `restart: "no"` so they run once and exit. The main service uses `service_completed_successfully` to ensure they finished without errors before starting.
Advanced Networking Patterns
For complex multi-service architectures, explicit network segmentation improves both security and clarity:
```yaml
services:
  # Public-facing reverse proxy
  traefik:
    image: traefik:v3.0
    networks:
      - frontend
      - backend
    ports:
      - "80:80"
      - "443:443"

  # Web application
  app:
    image: myapp:latest
    networks:
      - backend
      - db-network

  # API service
  api:
    image: myapi:latest
    networks:
      - backend
      - db-network
      - cache-network

  # Database (isolated from frontend)
  postgres:
    image: postgres:16
    networks:
      - db-network

  # Cache (isolated from frontend)
  redis:
    image: redis:7-alpine
    networks:
      - cache-network

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: false
  db-network:
    driver: bridge
    internal: true # No external access
  cache-network:
    driver: bridge
    internal: true
```
The `internal: true` flag prevents containers on that network from reaching the internet, adding a layer of network isolation for sensitive services like databases.
Putting It All Together: A Production-Ready Template
Here is a Compose file that combines most of these patterns into a cohesive production setup:
```yaml
# Extension fields for reuse
x-app-env: &app-env
  DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@postgres:5432/${DB_NAME:-mydb}
  REDIS_URL: redis://redis:6379
  LOG_LEVEL: ${LOG_LEVEL:-info}
  TZ: ${TZ:-UTC}

x-logging: &default-logging
  driver: json-file
  options:
    max-size: "25m"
    max-file: "3"

services:
  # Init: run migrations
  migrate:
    image: ${APP_IMAGE:-myapp}:${APP_VERSION:-latest}
    command: ["migrate", "up"]
    environment:
      <<: *app-env
    depends_on:
      postgres:
        condition: service_healthy
    restart: "no"
    profiles: ["init"]

  # Main application
  app:
    image: ${APP_IMAGE:-myapp}:${APP_VERSION:-latest}
    environment:
      <<: *app-env
      PORT: "8080"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
    logging: *default-logging
    deploy:
      resources:
        limits:
          cpus: "${APP_CPU_LIMIT:-2.0}"
          memory: ${APP_MEM_LIMIT:-512M}
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d:ro
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD is required}
      POSTGRES_DB: ${DB_NAME:-mydb}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 10s
    logging: *default-logging
    deploy:
      resources:
        limits:
          cpus: "${PG_CPU_LIMIT:-2.0}"
          memory: ${PG_MEM_LIMIT:-1G}
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    logging: *default-logging
    restart: unless-stopped

  # Backup sidecar
  backup:
    image: prodrigestivill/postgres-backup-local
    profiles: ["backup"]
    volumes:
      - ./backups:/backups
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: ${DB_NAME:-mydb}
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      SCHEDULE: "@daily"
      BACKUP_KEEP_DAYS: 7
      BACKUP_KEEP_WEEKS: 4
      BACKUP_KEEP_MONTHS: 6
    restart: unless-stopped

volumes:
  pgdata:
  redisdata:
```
This template gives you environment-specific behavior through variables, selective services through profiles, proper startup ordering through health checks and conditions, and resource governance through deploy limits. Tools like usulnet can visualize the resulting service graph and monitor the health check status of each container, making it easier to diagnose startup issues in complex stacks.
Common Pitfalls
- Forgetting `start_period` in health checks causes false negatives during container initialization, especially for JVM-based applications and databases with large datasets.
- Using `depends_on` without conditions gives you start ordering but not readiness guarantees. Always use `service_healthy` for databases.
- Anchor merge conflicts: keys set explicitly in a mapping override keys pulled in via a YAML merge key (`<<`), and when you merge a list of anchors (`<<: [*a, *b]`), overlapping keys resolve in favor of the first anchor in the list, not the last.
- Variable interpolation in `command`: Compose substitutes `${VAR}` before the container starts, and neither the array form nor the plain string form runs a shell by itself. Wrap the command in `sh -c` if you need runtime shell expansion, and escape runtime variables as `$$VAR` so Compose passes the `$` through.
- Override files are silently loaded: `docker-compose.override.yml` is applied automatically. This catches people off guard in CI/CD pipelines where dev overrides should not apply; pass explicit `-f` flags to opt out.
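The `command` interpolation pitfall is easiest to see side by side. A minimal sketch (the `busybox` image and `GREETING` variable are illustrative; `$$` escapes a literal `$` so it survives Compose's own interpolation):

```yaml
services:
  array-form:
    image: busybox
    # Compose substitutes ${GREETING} before the container starts;
    # no shell runs, so there is no runtime expansion at all.
    command: ["echo", "${GREETING:-hello}"]

  string-form:
    image: busybox
    # Explicitly wrapping in sh -c gives you a shell, so $$HOSTNAME
    # is expanded at runtime inside the container.
    command: sh -c 'echo "${GREETING:-hello} from $$HOSTNAME"'
```

Running `docker compose config` on a file like this shows exactly what survives Compose's interpolation pass before any container starts.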
Docker Compose has grown far beyond a simple container orchestration tool. With profiles, extends, health check conditions, watch mode, and YAML extension fields, it can handle surprisingly complex deployments while keeping configuration readable and maintainable. Master these patterns, and you will spend less time fighting your infrastructure and more time building on it.