# Docker for Development: Setting Up a Perfect Local Environment
Docker in development should make you faster, not slower. Too many teams struggle with Docker dev environments that require full image rebuilds on every code change, have broken volume mounts, and take longer to start than the old "install everything locally" approach. This guide shows you how to set up a Docker development workflow that is genuinely better than running without Docker—with instant feedback, proper debugging, and reliable multi-service orchestration.
## Separating Dev and Prod Configurations
The first mistake teams make is using the same Dockerfile for development and production. Development needs are fundamentally different:
| Requirement | Development | Production |
|---|---|---|
| Image size | Does not matter | Minimize |
| Build speed | Critical (fast iteration) | Less critical |
| Dev dependencies | Need everything (linters, debuggers, test frameworks) | Production deps only |
| Code changes | Instant reload (no rebuild) | Full rebuild and deploy |
| Debugging tools | Essential | Removed for security |
| Source code | Bind-mounted from host | Baked into image |
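In practice these differences usually mean two Dockerfiles. As a point of contrast with the development image built later in this guide, a production Dockerfile for a Node.js app might look roughly like this multi-stage sketch (the `build` script and `dist/` output directory are assumptions about your project, not requirements):

```dockerfile
# Dockerfile (production) - a minimal multi-stage sketch
# Stage 1: build with the full toolchain and dev dependencies
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # assumes a build script that emits dist/

# Stage 2: ship only production dependencies and build output
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/server.js"]
```

Compare this against the table: small image, production dependencies only, no debug tooling, source baked in.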
### The Override Pattern
Use Docker Compose overrides to layer dev-specific configuration on top of a base file:
```yaml
# docker-compose.yml (base - shared between dev and prod)
services:
  app:
    image: myapp:latest
    environment:
      DATABASE_URL: postgres://app:password@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine

volumes:
  pgdata:
```
```yaml
# docker-compose.override.yml (dev - automatically loaded)
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - ./src:/app/src
      - ./package.json:/app/package.json
      - /app/node_modules # Anonymous volume prevents host overwrite
    ports:
      - "3000:3000"
      - "9229:9229" # Node.js debugger port
    environment:
      NODE_ENV: development
      DEBUG: "app:*"
    command: npm run dev
  db:
    ports:
      - "5432:5432" # Expose DB port for local tools
  cache:
    ports:
      - "6379:6379"
```
```yaml
# docker-compose.prod.yml (production override)
services:
  app:
    image: registry.example.com/myapp:${VERSION}
    deploy:
      replicas: 3
      resources:
        limits:
          memory: 256M
    # No volume mounts, no debug ports
```
```bash
# Development (override is auto-loaded)
docker compose up

# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
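If typing both `-f` flags for every production command gets tedious, Compose also honors the `COMPOSE_FILE` environment variable, a colon-separated list of files (semicolon on Windows). The file names here match the examples above:

```shell
# Equivalent to passing -f docker-compose.yml -f docker-compose.prod.yml each time
export COMPOSE_FILE="docker-compose.yml:docker-compose.prod.yml"

# Plain commands now target the production stack, e.g.:
#   docker compose up -d
#   docker compose ps
echo "$COMPOSE_FILE"
```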
## Bind Mounts for Live Reload
The key to a fast development workflow is bind-mounting your source code so changes are reflected instantly without rebuilding the image:
```dockerfile
# Dockerfile.dev
FROM node:20-alpine
WORKDIR /app

# Install dependencies (cached as long as the lockfile doesn't change)
COPY package.json package-lock.json ./
RUN npm install

# Don't COPY source code - it comes from the bind mount.
# The CMD uses a file watcher for auto-restart.
EXPOSE 3000 9229
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "src/server.js"]
```
The bind mount in docker-compose.override.yml maps ./src from your host directly into the container. When you save a file in your editor, nodemon detects the change and restarts the application automatically.
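If nodemon restarts too eagerly or watches too much, a small `nodemon.json` next to package.json narrows its scope. The paths, extensions, and delay below are illustrative, not requirements:

```json
{
  "watch": ["src"],
  "ext": "js,json",
  "ignore": ["src/**/*.test.js"],
  "delay": 300
}
```

Keeping the watch configuration in this file means the CMD in Dockerfile.dev stays simple and the restart behavior lives in one place.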
### The node_modules Problem
A common pitfall with Node.js bind mounts:
```yaml
# Problem: bind mount overwrites container's node_modules
volumes:
  - .:/app # Host's empty/different node_modules replaces container's

# Solution: use an anonymous volume for node_modules
volumes:
  - .:/app
  - /app/node_modules # Preserves container's node_modules

# Alternative: mount only the src directory
volumes:
  - ./src:/app/src
  - ./package.json:/app/package.json
```
## Docker Compose Watch
Docker Compose 2.22+ introduced the watch command, which provides smarter file synchronization than bind mounts:
```yaml
# docker-compose.yml with watch configuration
services:
  app:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        # Sync source files without rebuild
        - action: sync
          path: ./src
          target: /app/src
          ignore:
            - node_modules/
            - "*.test.js"
        # Rebuild when dependencies change
        - action: rebuild
          path: package.json
        # Sync and restart when config changes
        - action: sync+restart
          path: ./config
          target: /app/config
```
```bash
# Start with watch mode
docker compose watch

# Or combine with up
docker compose up --watch
```
The three watch actions:
- `sync` — Copy changed files into the running container (like live reload)
- `rebuild` — Rebuild the image and recreate the container (for dependency changes)
- `sync+restart` — Sync files and restart the container process (for config changes)
## VS Code Dev Containers
Dev Containers move your entire development environment—editor extensions, linters, debuggers, and all—into the container. Every team member gets an identical setup regardless of their host OS:
```jsonc
// .devcontainer/devcontainer.json
{
  "name": "My App Dev",
  "dockerComposeFile": ["../docker-compose.yml", "docker-compose.devcontainer.yml"],
  "service": "app",
  "workspaceFolder": "/app",
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "ms-python.python",
        "golang.go",
        "bradlc.vscode-tailwindcss"
      ],
      "settings": {
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "esbenp.prettier-vscode"
      }
    }
  },
  "forwardPorts": [3000, 5432, 6379],
  "postCreateCommand": "npm install",
  "postStartCommand": "npm run dev",
  "remoteUser": "node"
}
```
```yaml
# .devcontainer/docker-compose.devcontainer.yml
services:
  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    volumes:
      - ..:/app:cached
      - node-modules:/app/node_modules
    command: sleep infinity # VS Code manages the process

volumes:
  node-modules:
```
```dockerfile
# .devcontainer/Dockerfile
FROM node:20

# Install dev tools
RUN apt-get update && apt-get install -y \
    git \
    curl \
    zsh \
    && rm -rf /var/lib/apt/lists/*

# Install global dev tools
RUN npm install -g nodemon eslint prettier

WORKDIR /app
```
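One wrinkle with bind mounts on Linux hosts: files created inside the container (by `npm install`, code generators, and so on) land on the host owned by the container user's UID. A common sketch, appended to a Debian-based dev image like the one above, remaps the `node` user to your host UID via build args (the arg names and defaults here are a convention, not a requirement):

```dockerfile
# Align the container user with the host user so files written
# through the bind mount aren't owned by a foreign UID
ARG UID=1000
ARG GID=1000
RUN if [ "$UID" != "1000" ]; then \
      groupmod -g "$GID" node && \
      usermod -u "$UID" -g "$GID" node && \
      chown -R node:node /home/node; \
    fi
USER node
```

Pass the values in from your compose file via the `args` key under `build`, or leave the defaults if your host UID is already 1000.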
## Debugging in Containers

### Node.js Debugging
```bash
# One-off run with the debugger listening and the debug port published
docker compose run --rm --service-ports app node --inspect=0.0.0.0:9229 src/server.js
```

Or configure it permanently in docker-compose.override.yml:

```yaml
services:
  app:
    command: node --inspect=0.0.0.0:9229 src/server.js
    ports:
      - "9229:9229"
```
```jsonc
// VS Code launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}/src",
      "remoteRoot": "/app/src",
      "restart": true
    }
  ]
}
```
### Python Debugging
```dockerfile
# Install debugpy in the container
RUN pip install debugpy

# Start under the debugger; --wait-for-client pauses until the IDE attaches
CMD ["python", "-m", "debugpy", "--listen", "0.0.0.0:5678", "--wait-for-client", "-m", "flask", "run", "--host=0.0.0.0"]
```
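On the editor side, attaching to debugpy looks much like the Node configuration shown earlier. This launch.json entry assumes the app lives at /app in the container and that port 5678 is published in your dev override:

```jsonc
// launch.json entry for attaching to debugpy in a container
{
  "name": "Docker: Attach to Python",
  "type": "debugpy",
  "request": "attach",
  "connect": { "host": "localhost", "port": 5678 },
  "pathMappings": [
    { "localRoot": "${workspaceFolder}", "remoteRoot": "/app" }
  ]
}
```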
### Go Debugging
```dockerfile
# Install the Delve debugger
FROM golang:1.22
RUN go install github.com/go-delve/delve/cmd/dlv@latest

# Start with Delve
CMD ["dlv", "debug", "--headless", "--listen=:2345", "--api-version=2", "--accept-multiclient", "./cmd/server"]
```
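Delve controls the target process with ptrace, which Docker's default seccomp profile blocks, so the dev override needs to grant that in addition to publishing the port. A sketch, following the override pattern used earlier:

```yaml
# docker-compose.override.yml additions for Go debugging
services:
  app:
    ports:
      - "2345:2345" # Delve API port
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
```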
## Database Seeding and Migrations
A good dev environment includes pre-populated databases:
```yaml
# docker-compose.override.yml
services:
  db:
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d # Auto-run on first start
```
```sql
-- db/init/01-schema.sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title VARCHAR(255) NOT NULL,
    body TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);
```
```sql
-- db/init/02-seed.sql
INSERT INTO users (email, name) VALUES
    ('[email protected]', 'Alice Developer'),
    ('[email protected]', 'Bob Tester'),
    ('[email protected]', 'Carol Admin');

INSERT INTO posts (user_id, title, body) VALUES
    (1, 'Getting Started with Docker', 'Docker makes development...'),
    (1, 'Advanced Compose Patterns', 'When your project grows...');
```
### Reset Database During Development
```bash
# Quick database reset
docker compose down -v   # Removes volumes
docker compose up -d     # Recreates with seed data
```

Or add a reset target to your Makefile:

```makefile
reset-db:
	docker compose exec db psql -U app -d myapp -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
	docker compose exec db psql -U app -d myapp -f /docker-entrypoint-initdb.d/01-schema.sql
	docker compose exec db psql -U app -d myapp -f /docker-entrypoint-initdb.d/02-seed.sql
```
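If you find yourself sprinkling `sleep` calls to wait for Postgres, a sturdier variant lets Compose do the waiting: a one-shot seeding service (the name `db-seed` is illustrative) that depends on the database healthcheck and reuses the same SQL files:

```yaml
# One-shot seeding service for the dev override
services:
  db-seed:
    image: postgres:16-alpine
    depends_on:
      db:
        condition: service_healthy
    environment:
      PGPASSWORD: password
    volumes:
      - ./db/init:/init:ro
    entrypoint:
      - sh
      - -c
      - psql -h db -U app -d myapp -f /init/01-schema.sql &&
        psql -h db -U app -d myapp -f /init/02-seed.sql
```

`docker compose run --rm db-seed` then reseeds on demand, starting only after Postgres reports healthy.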
## Multi-Service Development
Modern applications often involve multiple services. Structure your dev environment to support working on one service while others run in the background:
```yaml
# docker-compose.yml - full stack
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src
    environment:
      API_URL: http://api:8080
  api:
    build: ./api
    ports:
      - "8080:8080"
    volumes:
      - ./api:/app
    environment:
      DATABASE_URL: postgres://app:password@db:5432/myapp
      REDIS_URL: redis://cache:6379
  worker:
    build: ./worker
    volumes:
      - ./worker:/app
    environment:
      DATABASE_URL: postgres://app:password@db:5432/myapp
      REDIS_URL: redis://cache:6379
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 5s
  cache:
    image: redis:7-alpine
  mailhog:
    image: mailhog/mailhog
    ports:
      - "8025:8025" # Web UI for testing emails

volumes:
  pgdata:
```
```bash
# Work on just the API (other services run in the background)
docker compose up -d db cache   # Start infrastructure
docker compose up api           # Run the API in the foreground with logs

# Or run the API locally and everything else in Docker
docker compose up -d db cache frontend worker
# Then run your API directly: go run ./cmd/server
```
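One gotcha with the hybrid approach: the hostnames `db` and `cache` only resolve inside the Compose network. A process running on the host has to go through the published ports on localhost instead, so override the connection URLs before starting it (the URLs mirror the compose file above):

```shell
# In-container hostnames -> localhost + published ports
export DATABASE_URL="postgres://app:password@localhost:5432/myapp"
export REDIS_URL="redis://localhost:6379"

# then e.g.: go run ./cmd/server
echo "$DATABASE_URL"
```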
## Performance Tips for Docker Development

### macOS and Windows Volume Performance
```yaml
# Use :cached or :delegated mount options (macOS)
volumes:
  - ./src:/app/src:cached # Host writes are immediately visible;
                          # container writes may lag slightly

# Use named volumes for generated files (node_modules, vendor)
volumes:
  - .:/app:cached
  - node-modules:/app/node_modules # Much faster than a bind mount
```

Note that recent Docker Desktop releases treat `:cached` and `:delegated` as no-ops (the VirtioFS backend makes them unnecessary), but they are harmless to keep for older installs. On Windows and macOS, also consider Docker Compose Watch instead of bind mounts for better file-system performance.
### Speed Up Compose Start
```bash
# Pull images in parallel before starting
docker compose pull

# Build all images up front (Compose v2 builds services in parallel by default)
docker compose build

# Only start what you need
docker compose up -d db cache   # Infrastructure only
docker compose up api           # Just the service you're working on
```
For teams using Docker in development, usulnet provides visibility into all running dev containers across the team, making it easy to understand what services are running and diagnose cross-service issues without navigating multiple terminal windows.
## Makefile for Common Operations
A Makefile wraps common Docker operations into memorable commands:
```makefile
# Makefile
.PHONY: dev dev-up dev-down reset logs test shell help

dev: dev-up ## Start development environment
	docker compose up --watch

dev-up: ## Start infrastructure services
	docker compose up -d db cache

dev-down: ## Stop everything and clean up
	docker compose down

# down -v wipes the volume, so the scripts in db/init re-run on the next start
reset: ## Reset database with fresh seed data
	docker compose down -v
	docker compose up -d db cache

logs: ## Follow logs for all services
	docker compose logs -f

test: ## Run tests in a container
	docker compose exec app npm test

shell: ## Open a shell in the app container
	docker compose exec app sh

help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-15s\033[0m %s\n", $$1, $$2}'
```
## Conclusion
A well-configured Docker development environment eliminates "works on my machine" problems, provides instant feedback on code changes, and accurately mirrors production infrastructure. The key is separating dev and prod configurations, using bind mounts or Compose Watch for live reloading, and maintaining a frictionless workflow through Makefiles or scripts. Invest the time upfront to get your dev setup right—it pays for itself every day in faster iteration cycles and fewer environment-related bugs.