# Kubernetes Basics for Docker Users: Making the Transition
You know Docker. You can write Compose files, build images, debug networking issues, and manage volumes. Now everyone is talking about Kubernetes, and the learning curve looks vertical. The good news is that most Docker concepts have direct Kubernetes equivalents. The challenge is not learning new ideas but learning new vocabulary and a different operational model.
This guide translates your Docker knowledge into Kubernetes terms, walks through the core primitives, and helps you decide whether you actually need Kubernetes at all.
## Docker Concepts to Kubernetes Equivalents

| Docker Concept | Kubernetes Equivalent | Key Difference |
|---|---|---|
| Container | Container (inside a Pod) | K8s containers always run inside a Pod wrapper |
| `docker run` | Pod / Deployment | Declarative instead of imperative |
| `docker-compose.yml` | Multiple YAML manifests | Each resource is a separate object |
| Docker network | Service / NetworkPolicy | All pods can communicate by default; Services provide stable endpoints |
| Docker volume | PersistentVolume / PersistentVolumeClaim | Storage lifecycle is decoupled from pods |
| Environment variables | ConfigMap / Secret | Centralized, versioned, shareable |
| `docker build` | Same (build happens outside K8s) | K8s does not build images; it only runs them |
| `docker compose up --scale` | Deployment replicas | K8s handles scheduling across nodes |
| Docker Swarm | Kubernetes cluster | K8s is far more complex but also more capable |
| Docker healthcheck | Liveness / Readiness / Startup probes | Three types of probes for different purposes |
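The manifests in this guide show liveness and readiness probes; the third type, a startup probe, holds the other two off until a slow-starting container has come up. A minimal sketch, assuming the same `/healthz` endpoint used elsewhere in this guide:

```yaml
# Startup probe: allows up to 30 checks, 10s apart (300s total),
# before liveness and readiness probes take over
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
```

This is the Kubernetes answer to `start_period` in a Docker healthcheck.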
## Pods: The Basic Unit
A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers that share a network namespace and storage. In practice, most Pods contain a single container, making a Pod roughly analogous to a single `docker run` invocation:
```yaml
# pod.yaml - Equivalent of: docker run -p 8080:8080 myapp:latest
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    ports:
    - containerPort: 8080
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```
```bash
# Apply the manifest
kubectl apply -f pod.yaml

# Equivalent Docker commands as kubectl
kubectl get pods                     # docker ps
kubectl logs myapp                   # docker logs myapp
kubectl exec -it myapp -- /bin/sh    # docker exec -it myapp /bin/sh
kubectl describe pod myapp           # docker inspect myapp
kubectl delete pod myapp             # docker rm -f myapp
```
## Deployments: Managing Pod Lifecycle
A Deployment is the Kubernetes equivalent of defining a service in Docker Compose, with built-in scaling and rolling updates:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.2.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: myapp-secrets
              key: database-url
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: myapp-config
              key: log-level
        resources:
          requests:
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
```
```bash
# Deployment operations
kubectl apply -f deployment.yaml

# Scale (like docker compose up --scale myapp=5)
kubectl scale deployment myapp --replicas=5

# Update the image (triggers a rolling update)
kubectl set image deployment/myapp myapp=myapp:1.3.0

# Check rollout status
kubectl rollout status deployment/myapp

# Roll back to the previous version
kubectl rollout undo deployment/myapp

# View rollout history
kubectl rollout history deployment/myapp
```
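Beyond manual scaling, a HorizontalPodAutoscaler can adjust the replica count based on metrics. A sketch targeting the Deployment above, assuming metrics-server is installed in the cluster:

```yaml
# hpa.yaml - scale myapp between 2 and 10 replicas based on CPU usage
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

There is no Docker Compose equivalent for this; metric-driven autoscaling is one of the features that justifies the extra YAML.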
## Services: Stable Network Endpoints
In Docker Compose, containers communicate using service names. In Kubernetes, a Service provides a stable DNS name and virtual IP that route traffic to the matching Pods, even as they are created and destroyed. Within the same namespace the short name is enough (`http://myapp`); from other namespaces, use the fully qualified form `myapp.<namespace>.svc.cluster.local`:
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp        # Routes to Pods with this label
  ports:
  - port: 80          # Service port
    targetPort: 8080  # Container port
  type: ClusterIP     # Internal only (default)
```
Service types and their Docker equivalents:

| Service Type | Docker Equivalent | Use Case |
|---|---|---|
| `ClusterIP` | Docker internal network | Internal service-to-service communication |
| `NodePort` | `ports: "30080:8080"` | Expose on a static port on each node |
| `LoadBalancer` | External load balancer | Cloud provider load balancer integration |
| `ExternalName` | Docker DNS alias | DNS CNAME to external service |
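For example, exposing the same app on a static node port is a small change to the Service manifest. A sketch (note that `nodePort` values must fall in the 30000-32767 range by default):

```yaml
# nodeport-service.yaml - like ports: "30080:8080" in Compose
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  selector:
    app: myapp
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 8080  # Container port
    nodePort: 30080   # Static port opened on every node
  type: NodePort
```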
## Namespaces: Logical Isolation
Namespaces are like Docker Compose project names but with actual resource isolation and access control:
```bash
# Create namespaces
kubectl create namespace production
kubectl create namespace staging

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n production

# Set the default namespace for kubectl
kubectl config set-context --current --namespace=production

# List resources across all namespaces
kubectl get pods --all-namespaces
kubectl get pods -A   # shorthand
```
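Namespaces can also be created declaratively, which fits better with the manifest-driven workflow used for everything else. A minimal sketch:

```yaml
# namespace.yaml - declarative equivalent of: kubectl create namespace production
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

Namespaces also extend Service DNS: a Service named `myapp` in the `production` namespace is reachable from other namespaces as `myapp.production.svc.cluster.local`.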
## ConfigMaps and Secrets
In Docker, you use .env files or environment variables directly. Kubernetes centralizes configuration in ConfigMaps and sensitive data in Secrets:
```yaml
# ConfigMap (like a Compose .env file, but versioned and shareable)
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  log-level: "info"
  max-connections: "100"
  feature-flags: |
    enable_dark_mode=true
    enable_beta=false
---
# Secret (base64 encoded; NOT encrypted by default, so enable encryption
# at rest on the API server and restrict access via RBAC)
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
data:
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0BkYjo1NDMyL2FwcA==
  api-key: c3VwZXJzZWNyZXRrZXk=
```
```bash
# Create a Secret from the command line
kubectl create secret generic myapp-secrets \
  --from-literal=database-url='postgresql://user:pass@db:5432/app' \
  --from-literal=api-key='supersecretkey'

# Create a ConfigMap from a file
kubectl create configmap nginx-config --from-file=nginx.conf
```
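Remember that the `data` values in a Secret manifest are only base64 encoded, not encrypted; anyone who can read the manifest can decode them:

```shell
# Encode a value for a Secret manifest (printf avoids a trailing newline,
# which would otherwise end up inside the encoded value)
printf 'secret' | base64       # c2VjcmV0

# Decode a value copied out of an existing Secret
printf 'c2VjcmV0' | base64 -d  # secret
```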
## kubectl Basics: The Essential Commands
```bash
# Get resources
kubectl get pods                  # List pods
kubectl get pods -o wide          # With node info and IP
kubectl get deployments          # List deployments
kubectl get services              # List services
kubectl get all                   # List common resources

# Describe (detailed info, events, conditions)
kubectl describe pod myapp-xyz

# Logs
kubectl logs myapp-xyz                        # Current logs
kubectl logs myapp-xyz -f                     # Follow logs (like tail -f)
kubectl logs myapp-xyz --previous             # Previous container's logs
kubectl logs -l app=myapp --all-containers    # All pods with a label

# Execute commands
kubectl exec -it myapp-xyz -- /bin/sh         # Interactive shell
kubectl exec myapp-xyz -- env                 # Run a single command

# Port forwarding (like docker -p, but temporary)
kubectl port-forward pod/myapp-xyz 8080:8080
kubectl port-forward service/myapp 8080:80

# Copy files
kubectl cp myapp-xyz:/app/config.yml ./config.yml

# Apply and delete
kubectl apply -f manifest.yaml    # Create or update
kubectl delete -f manifest.yaml   # Delete
kubectl delete pod myapp-xyz      # Delete a specific pod

# Debugging
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl top pods                  # Resource usage (requires metrics-server)
kubectl top nodes
```
## Learning Environments: minikube and k3s
### minikube
```bash
# Install and start minikube
# Creates a single-node K8s cluster in a VM or container
minikube start --driver=docker --memory=4096 --cpus=2

# Use the minikube Docker daemon (build images directly)
eval $(minikube docker-env)

# Access the dashboard
minikube dashboard

# Enable addons
minikube addons enable ingress
minikube addons enable metrics-server

# Stop and delete
minikube stop
minikube delete
```
### k3s
```bash
# Install k3s (lightweight Kubernetes) with a single command
curl -sfL https://get.k3s.io | sh -

# Check the cluster
sudo k3s kubectl get nodes

# Use with standard kubectl
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes

# Add a worker node. First, read the join token on the SERVER:
sudo cat /var/lib/rancher/k3s/server/node-token
# Then run the installer on the worker, substituting the server address and token:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 \
  K3S_TOKEN=<token> sh -
```
## A Complete Example: Docker Compose to Kubernetes
Here is a Docker Compose file and its Kubernetes equivalent:
```yaml
# docker-compose.yml
services:
  app:
    image: myapp:1.0
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgresql://postgres:secret@postgres:5432/mydb
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 5s
      timeout: 3s
      retries: 5
volumes:
  pgdata:
```
The equivalent in Kubernetes requires multiple manifest files. Note that there is no direct `depends_on` equivalent: the Postgres readiness probe keeps it out of its Service until it is up, and the app is expected to retry its database connection at startup:
```yaml
# k8s/postgres-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# k8s/postgres-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  password: c2VjcmV0  # base64 of "secret"
---
# k8s/postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
        readinessProbe:
          exec:
            command: ["pg_isready", "-U", "postgres"]
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: pgdata
---
# k8s/postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
---
# k8s/app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          # Inlined to mirror the Compose file; prefer a secretKeyRef in production
          value: "postgresql://postgres:secret@postgres:5432/mydb"
---
# k8s/app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
```
Yes, that is significantly more YAML, although it can all be applied at once with a single `kubectl apply -f k8s/`. This is the fundamental trade-off: Kubernetes provides more control, more features, and more resilience, at the cost of more configuration.
## When Docker Compose Is Enough
Not every workload needs Kubernetes. Docker Compose is likely sufficient when:
- Single server: Your entire stack fits on one machine with room to spare.
- Small team: 1-5 developers/operators who all have SSH access.
- Fewer than 15 services: Compose handles this scale well.
- Simple scaling needs: You need 2-3 replicas, not auto-scaling based on metrics.
- No multi-region requirements: Everything runs in one data center.
- Development and staging environments: Even K8s shops often use Compose for local development.
Key insight: Kubernetes solves the problems of scale and multi-team operations. If you do not have those problems, you are paying the complexity tax without getting the benefits. Tools like usulnet extend Docker Compose's operational capabilities to multi-node setups without requiring the jump to full Kubernetes, which fills the gap for many small-to-medium deployments.
## Migration Strategy: Compose to Kubernetes
If you decide Kubernetes is the right move, migrate incrementally:
- Start with a learning cluster: Set up k3s or minikube. Deploy one service.
- Use Kompose: The `kompose` tool converts Docker Compose files to Kubernetes manifests. The output needs refinement, but it is a useful starting point.
- Migrate stateless services first: Web servers and API services are easy. Databases should be the last things you move.
- Set up CI/CD: Kubernetes shines with GitOps workflows (ArgoCD, Flux). Set this up early.
- Invest in observability: Prometheus, Grafana, and a log aggregator are non-negotiable in K8s.
```bash
# Convert Compose to K8s manifests (a starting point)
kompose convert -f docker-compose.yml

# The output will need manual refinement for:
# - Resource limits
# - Health checks
# - Persistent volume claims
# - Secrets management
# - Ingress configuration
```
Kubernetes is a powerful platform, but it is not a mandatory upgrade from Docker Compose. Evaluate your actual needs, not the industry hype. If single-server Docker with proper monitoring and backup covers your requirements, stay with it. If multi-node orchestration, auto-scaling, and declarative infrastructure become genuine needs, Kubernetes is ready when you are.