Container Orchestration Compared: Swarm vs Kubernetes vs Nomad vs Docker Compose
Container orchestration is the automation of deploying, scaling, networking, and managing containerized applications across one or more hosts. When you have a single server running Docker Compose, "orchestration" might feel like overkill. But the moment you add a second server, need automatic restarts, require zero-downtime deployments, or want to scale services based on load, you need some form of orchestration.
The challenge is choosing the right tool. The container ecosystem offers options ranging from dead-simple to extraordinarily complex. This guide provides an honest comparison of the four most relevant orchestration approaches: Docker Compose, Docker Swarm, Kubernetes, and HashiCorp Nomad.
What is Container Orchestration?
At its core, orchestration handles the following concerns:
- Scheduling: Deciding which host runs which container, based on available resources.
- Networking: Enabling containers across different hosts to communicate.
- Service discovery: Allowing containers to find each other by name rather than IP address.
- Scaling: Running multiple instances of a service and load balancing between them.
- Health management: Detecting failed containers and restarting or replacing them automatically.
- Rolling updates: Deploying new versions without downtime.
- Secret management: Distributing sensitive configuration to containers securely.
Docker Compose: Single-Node Simplicity
Docker Compose is not an orchestrator in the traditional sense. It defines and runs multi-container applications on a single host. But it is the starting point for most Docker deployments, and its simplicity is its greatest strength.
# docker-compose.yml - A typical self-hosted stack
services:
  traefik:
    image: traefik:v3.0
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro

  app:
    image: myapp:latest
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
    depends_on:
      db:
        condition: service_healthy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"

  db:
    image: postgres:16
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  pgdata:
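The stack above mounts a `./traefik.yml` file that is not shown. A minimal static configuration might look like the following sketch; the entry point names and settings here are illustrative assumptions, not part of the original stack:

```yaml
# traefik.yml - minimal static configuration (illustrative sketch)
entryPoints:
  web:
    address: ":80"             # plain HTTP
  websecure:
    address: ":443"            # HTTPS
providers:
  docker:
    exposedByDefault: false    # only route containers labeled traefik.enable=true
```

Setting `exposedByDefault: false` is why the `app` service needs the `traefik.enable=true` label before Traefik will route to it.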
| Pros | Cons |
|---|---|
| Extremely simple to learn and use | Single node only (no multi-host) |
| Standard YAML format understood by everyone | No automatic scaling |
| Built into Docker (no extra installation) | No rolling updates (stop then start) |
| Excellent for development and small deployments | No cross-host networking |
| Health checks and restart policies | Limited load balancing |
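Day-to-day Compose usage comes down to a handful of commands. These mirror the command lists in the sections below (Compose v2 plugin syntax; requires a running Docker daemon):

```shell
# Common Docker Compose commands (run in the directory with docker-compose.yml)
docker compose up -d          # Start the stack in the background
docker compose ps             # List containers in this project
docker compose logs -f app    # Follow logs for one service
docker compose pull           # Fetch newer images
docker compose up -d          # Recreate only the containers whose images changed
docker compose down           # Stop and remove the stack (volumes are kept)
```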
Docker Swarm: Simple Multi-Node
Docker Swarm is Docker's built-in orchestration mode. It extends Docker Compose concepts across multiple hosts with minimal additional complexity:
# Initialize a Swarm cluster
docker swarm init --advertise-addr 192.168.1.101
# Join additional nodes
docker swarm join --token SWMTKN-1-xxxxx 192.168.1.101:2377
# Deploy a stack (uses the same Compose format)
docker stack deploy -c docker-compose.yml mystack
# docker-compose.yml for Swarm
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      rollback_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
      placement:
        constraints:
          - node.role == worker
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
    ports:
      - "80:80"
    networks:
      - frontend

  api:
    image: myapi:latest
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 30s
    secrets:
      - db_password
    networks:
      - frontend
      - backend

secrets:
  db_password:
    external: true

networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay
# Swarm management commands
docker node ls # List cluster nodes
docker service ls # List running services
docker service ps mystack_web # Show service tasks
docker service scale mystack_web=5 # Scale a service
docker service update --image nginx:1.25 mystack_web # Rolling update
docker stack rm mystack # Remove a stack
| Pros | Cons |
|---|---|
| Built into Docker (no extra installation) | Limited ecosystem compared to Kubernetes |
| Uses familiar Compose syntax | Effectively in maintenance mode (minimal new development since the Mirantis acquisition) |
| Easy to set up (one command to initialize) | Fewer third-party integrations |
| Built-in overlay networking | No auto-scaling based on metrics |
| Rolling updates and rollbacks | Smaller community support |
| Built-in secrets management | Limited storage orchestration |
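The stack above declares `db_password` as an external secret, which must exist before the stack is deployed. It can be created from stdin on a manager node (the password value here is a placeholder):

```shell
# Create the external secret referenced by the stack (run on a manager node)
printf 'supersecret' | docker secret create db_password -

# Verify; inside containers, secrets appear as files under /run/secrets/<name>
docker secret ls
```

Using `printf` piped to stdin avoids leaving the secret value in a file on disk, though it may still land in your shell history.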
Kubernetes: The Industry Standard
Kubernetes (K8s) is the most powerful and complex container orchestration platform. It was designed by Google to manage containers at planet scale, and it is the de facto standard for production container deployments in enterprise environments.
# A Kubernetes Deployment (equivalent to a Compose service with replicas)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v1.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: myapp-tls
# k3s: Lightweight Kubernetes for homelabs and edge
# Install k3s (single node, takes about 30 seconds)
curl -sfL https://get.k3s.io | sh -
# Add a worker node. Get the join token from the server node first:
#   sudo cat /var/lib/rancher/k3s/server/node-token
# then run on the worker:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.101:6443 \
    K3S_TOKEN=<token-from-server> sh -
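k3s bundles its own kubectl and writes the cluster kubeconfig to a fixed path. To use a separately installed kubectl (or tools like k9s) on the same host:

```shell
# k3s stores the admin kubeconfig here (root-readable by default)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```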
# kubectl basics
kubectl get nodes
kubectl get pods -A
kubectl apply -f deployment.yml
kubectl get services
kubectl logs myapp-pod-xxxxx
kubectl scale deployment myapp --replicas=5
| Pros | Cons |
|---|---|
| Industry standard with massive ecosystem | Extremely complex (steep learning curve) |
| Auto-scaling (HPA, VPA, cluster auto-scaler) | Resource-hungry (control plane needs 2+ GB RAM) |
| Advanced networking (service mesh, network policies) | YAML configuration is verbose and error-prone |
| Extensive storage orchestration (CSI drivers) | Overkill for small deployments |
| Self-healing with probes and restart policies | Operational overhead for self-hosted clusters |
| Huge community and job market | Abstractions hide Docker (debugging is harder) |
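The metrics-based auto-scaling mentioned above is driven by the HorizontalPodAutoscaler resource. A sketch targeting the Deployment from the earlier example (the thresholds are illustrative, and the cluster needs the metrics-server add-on for CPU metrics):

```yaml
# HPA: scale the myapp Deployment between 3 and 10 replicas on CPU load
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU across pods
```

The HPA compares observed utilization against the pods' CPU requests, which is one reason the Deployment example sets `resources.requests` explicitly.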
HashiCorp Nomad: The Flexible Alternative
Nomad is HashiCorp's workload orchestrator. Unlike Kubernetes, which is container-focused, Nomad can orchestrate containers, VMs, Java applications, and raw executables. It is simpler than Kubernetes while being more capable than Swarm:
# myapp.nomad.hcl - Nomad job specification
job "myapp" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3

    network {
      port "http" {
        to = 8080
      }
    }

    service {
      name = "myapp"
      port = "http"
      tags = ["traefik.enable=true"]

      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "5s"
      }
    }

    task "app" {
      driver = "docker"

      config {
        image = "myapp:v1.0"
        ports = ["http"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }

      env {
        DATABASE_URL = "postgres://user:pass@db:5432/myapp"
      }

      template {
        data = <<EOF
{{ with secret "secret/data/myapp" }}
API_KEY={{ .Data.data.api_key }}
{{ end }}
EOF
        destination = "secrets/env"
        env         = true
      }
    }

    update {
      max_parallel     = 1
      health_check     = "checks"
      min_healthy_time = "30s"
      healthy_deadline = "5m"
      auto_revert      = true
    }
  }
}
# Install Nomad
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install -y nomad
# Start Nomad in dev mode (single node)
nomad agent -dev
# Deploy a job
nomad job run myapp.nomad.hcl
# Check status
nomad status myapp
nomad alloc logs <alloc-id>
| Pros | Cons |
|---|---|
| Simpler than Kubernetes, more capable than Swarm | Smaller ecosystem than Kubernetes |
| Orchestrates containers AND non-container workloads | Built-in service discovery is basic; Consul (a separate tool) is needed for advanced cases |
| Single binary, easy to deploy | Fewer learning resources available |
| Native Vault integration for secrets | Less job market demand than Kubernetes |
| Multi-region and federation support | HCL syntax has a learning curve |
| Excellent upgrade and canary deployment support | Networking is less feature-rich than K8s |
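The canary support noted above is configured in the same update stanza shown in the job example. A hedged sketch: setting canary runs new allocations alongside the old version and waits for promotion before replacing the rest:

```hcl
# Canary variant of the update stanza (group level)
update {
  canary       = 1        # run 1 allocation of the new version next to the old ones
  max_parallel = 1
  auto_promote = false    # promote manually: nomad deployment promote <deployment-id>
  auto_revert  = true     # roll back automatically if the canary fails health checks
}
```

With `auto_promote = false`, you can test the canary allocation before committing; promoting it then rolls the remaining allocations forward.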
Comparison Table
| Feature | Compose | Swarm | Kubernetes | Nomad |
|---|---|---|---|---|
| Multi-host | No | Yes | Yes | Yes |
| Auto-scaling | No | Manual | Yes (HPA) | Yes (separate Nomad Autoscaler agent) |
| Rolling updates | No | Yes | Yes | Yes |
| Service discovery | DNS (same host) | DNS + VIP | DNS + Services | Consul |
| Secrets management | Env files | Docker Secrets | K8s Secrets | Vault |
| Learning curve | Low | Low-Medium | High | Medium |
| Resource overhead | Minimal | Low | High | Low-Medium |
| Community size | Huge | Medium | Huge | Medium |
| Best for | Single server | 2-10 nodes | 10+ nodes, enterprise | Mixed workloads |
| Minimum nodes | 1 | 1 (3 for HA) | 1 (3 for HA) | 1 (3 for HA) |
When to Use Each Tool
Use Docker Compose When
- You have a single server (homelab, VPS, small business)
- You run fewer than 50 containers
- You do not need automatic failover across hosts
- Simplicity is a priority
Use Docker Swarm When
- You have 2-10 servers and want multi-host deployment
- You want the simplest path from single-node to multi-node
- Your team already knows Docker Compose
- You need basic rolling updates and high availability
Use Kubernetes When
- You have 10+ nodes or expect to grow to that scale
- You need auto-scaling based on metrics
- You need advanced networking (service mesh, network policies)
- Your organization already invests in the Kubernetes ecosystem
- You want to learn skills directly applicable to most tech companies
Use Nomad When
- You need to orchestrate mixed workloads (containers + non-containers)
- You already use other HashiCorp tools (Consul, Vault, Terraform)
- You want Kubernetes-like capabilities with less complexity
- You need multi-region federation
Migration Paths
Your orchestration needs will evolve. Here are practical migration paths:
- Compose to Swarm: The easiest migration. Swarm uses the same Compose file format. Add a deploy section to your existing Compose files, initialize Swarm mode, and deploy as a stack. Most Compose files work in Swarm with minimal changes.
- Compose to Kubernetes: Use kompose to convert Compose files to Kubernetes manifests. The conversion is imperfect and requires manual tuning, but it provides a starting point. Consider k3s for a lightweight Kubernetes that works well in homelabs.
- Swarm to Kubernetes: No direct migration tool. Rewrite your stack definitions as Kubernetes manifests. The concepts map roughly: Swarm services become Deployments, overlay networks become Services and Ingress.
- Any to Nomad: Rewrite in HCL. Nomad's Docker driver understands Docker images directly, so the container side is straightforward. The main work is translating networking and service discovery.
# Convert Docker Compose to Kubernetes with kompose
kompose convert -f docker-compose.yml
# This generates:
# - deployment.yaml for each service
# - service.yaml for exposed ports
# - persistentvolumeclaim.yaml for volumes
# Review and adjust before applying
The usulnet Approach: Multi-Node Without the Complexity
usulnet takes a different approach to multi-node Docker management. Instead of replacing Docker with an orchestration layer, it works with native Docker on each host. An agent running on each server communicates with a central master, giving you unified management, monitoring, and control across all your Docker hosts without requiring you to change how you run containers.
This approach sits between Docker Compose (single-node) and full orchestration (Swarm/Kubernetes). You keep the simplicity of Compose files on each host while gaining cross-host visibility, centralized monitoring, backup management, and security scanning. For self-hosted environments with 2-5 servers that do not need auto-scaling or cross-host container scheduling, this provides the management capabilities you actually need without the operational overhead of a full orchestrator.
The right tool is the simplest one that meets your requirements. Do not adopt Kubernetes because it is popular. Do not dismiss Docker Compose because it is simple. Start with Compose, grow to Swarm or a management tool like usulnet when you add servers, and consider Kubernetes only when you genuinely need its capabilities.