Docker Cleanup and Maintenance: Reclaiming Disk Space and Resources
Docker is a silent disk space consumer. Every docker pull downloads layers. Every docker build creates cached layers. Every stopped container retains its writable layer. Every unnamed volume persists indefinitely. Over weeks and months, Docker can consume tens or hundreds of gigabytes without any active effort on your part. On a production server, this leads to the dreaded "no space left on device" error at the worst possible time.
This guide provides a complete strategy for understanding, controlling, and automating Docker disk usage management.
Understanding Docker Disk Usage
Start by understanding what is consuming space:
# High-level overview
docker system df
# TYPE TOTAL ACTIVE SIZE RECLAIMABLE
# Images 47 5 12.45GB 9.87GB (79%)
# Containers 23 5 1.34GB 890MB (66%)
# Local Volumes 15 6 8.92GB 3.41GB (38%)
# Build Cache 89 0 5.67GB 5.67GB (100%)
# Detailed breakdown (verbose)
docker system df -v
The four categories of Docker disk usage:
| Category | What It Stores | Common Space Hog |
|---|---|---|
| Images | Downloaded and built image layers | Old image versions that have been superseded |
| Containers | Writable layer of each container (including stopped) | Stopped containers with large log files or temp data |
| Volumes | Persistent data directories | Orphaned volumes from deleted containers |
| Build Cache | Intermediate build layers from docker build | Accumulated cache from many builds |
The Nuclear Option: docker system prune
When you need space immediately:
# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune
# WARNING! This will remove:
# - all stopped containers
# - all networks not used by at least one container
# - all dangling images
# - unused build cache
# Add -a to also remove unused images (not just dangling ones)
docker system prune -a
# Also removes images not referenced by any container
# Add --volumes to also remove unused volumes
docker system prune -a --volumes
# WARNING: This destroys data in orphaned volumes!
# Remove items older than 24 hours
docker system prune -a --filter "until=24h"
docker system prune -a --volumes is destructive. It removes the data in any volume not referenced by at least one container; stopped containers still protect their volumes, but removed containers do not. On a production server this could wipe a database volume left behind by a deleted container. Always check what will be removed first.
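Since the prune commands generally offer no dry-run mode, the safest habit is to list the candidates yourself before pruning. A read-only sketch: the DAYS value is illustrative, and the docker calls are guarded so the snippet is harmless where no daemon is present.

```shell
# Read-only preview of what the prune commands would target.
# DAYS is illustrative; FILTER uses the same syntax the prune commands accept.
DAYS=7
FILTER="until=$((DAYS * 24))h"

if command -v docker >/dev/null 2>&1; then
  echo "Stopped containers:"
  docker ps -a --filter status=exited --format '{{.Names}}\t{{.Status}}\t{{.Size}}'

  echo "Dangling images:"
  docker images --filter dangling=true --format '{{.ID}}\t{{.Size}}'

  echo "Dangling volumes (prune --volumes would delete their data):"
  docker volume ls --filter dangling=true --format '{{.Name}}'
fi

echo "Age filter to pass to prune: $FILTER"
```

Only after reviewing these lists run the corresponding prune commands, ideally with the same --filter.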
Targeted Cleanup
Removing Unused Images
# List all images sorted by size
docker images --format "{{.Size}}\t{{.Repository}}:{{.Tag}}\t{{.ID}}" | sort -rh
# Remove dangling images (layers not tagged and not referenced)
docker image prune
# Reclaimed: 2.3GB
# Remove ALL unused images (not just dangling)
docker image prune -a
# Remove specific images by pattern
docker images | grep "myapp" | awk '{print $3}' | xargs docker rmi
# Remove images older than 30 days
docker image prune -a --filter "until=720h"
# Keep only the latest 3 versions of each image
# This requires a script:
docker images --format "{{.Repository}}:{{.Tag}}" | \
grep "myapp" | \
sort -V | \
head -n -3 | \
xargs -r docker rmi
Removing Stopped Containers
# List stopped containers with size
docker ps -a --filter "status=exited" --format "table {{.Names}}\t{{.Status}}\t{{.Size}}"
# Remove all stopped containers
docker container prune
# Remove containers stopped more than 24 hours ago
docker container prune --filter "until=24h"
# Remove specific containers by pattern
docker ps -a --filter "name=temp-" -q | xargs docker rm
Removing Unused Volumes
# List volumes
docker volume ls
# Identify dangling (orphaned) volumes
docker volume ls --filter "dangling=true"
# Check volume usage (which containers use which volumes)
for vol in $(docker volume ls -q); do
echo "=== $vol ==="
docker ps -a --filter "volume=$vol" --format " {{.Names}} ({{.Status}})"
done
# Remove orphaned volumes (on Docker 23+ this removes only anonymous
# volumes by default; add -a to include named volumes)
docker volume prune
# Remove a specific volume
docker volume rm my-old-volume
# Check volume size (requires inspecting the mount point)
docker volume inspect my-volume --format '{{.Mountpoint}}'
sudo du -sh /var/lib/docker/volumes/my-volume/_data
Removing Unused Networks
# List networks
docker network ls
# Remove unused networks
docker network prune
# Remove specific networks
docker network rm my-old-network
Cleaning Build Cache
# Show build cache usage (per-entry breakdown)
docker buildx du
# Remove all build cache
docker builder prune -a
# Remove cache older than 7 days
docker builder prune --filter "until=168h"
# Keep only 5GB of cache (remove oldest first)
docker builder prune --keep-storage 5GB
Container Log Management
Container logs are one of the sneakiest disk space consumers. A busy application can generate gigabytes of logs:
# Find large log files
sudo find /var/lib/docker/containers -name "*.log" -exec du -sh {} \; | sort -rh | head
# Truncate a specific container's log (immediate relief)
CONTAINER_ID=$(docker inspect --format='{{.Id}}' mycontainer)
sudo truncate -s 0 /var/lib/docker/containers/$CONTAINER_ID/$CONTAINER_ID-json.log
Configuring Log Rotation
# Global log rotation (all containers)
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
# Restart Docker to apply
sudo systemctl restart docker
# NOTE: Only applies to NEW containers. Existing containers keep their config.
# Per-container log rotation in Compose
services:
  app:
    image: myapp
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
# Alternative: Use local driver (better performance)
services:
  app:
    logging:
      driver: local
      options:
        max-size: "10m"
        max-file: "5"
The local log driver compresses log files and is faster than json-file. The trade-off is that the on-disk format is not JSON, so external log shipping tools may need different parsers.
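Before switching drivers, it helps to confirm what a container is actually using; the effective configuration is visible via docker inspect. A sketch, where mycontainer is a placeholder name and the "unknown" fallback only exists so the snippet degrades gracefully when the container is missing:

```shell
# Report the effective log driver for one container ('mycontainer' is a
# placeholder). Falls back to "unknown" if the container or daemon is absent.
DRIVER=$(docker inspect --format '{{.HostConfig.LogConfig.Type}}' mycontainer 2>/dev/null || echo unknown)
echo "log driver: $DRIVER"

# Rotation options, if any were set (empty when the daemon defaults apply):
docker inspect --format '{{json .HostConfig.LogConfig.Config}}' mycontainer 2>/dev/null || true
```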
Automated Cleanup Script
Create a maintenance script that runs regularly:
#!/bin/bash
# docker-cleanup.sh - Automated Docker maintenance
set -euo pipefail
LOG_FILE="/var/log/docker-cleanup.log"
DAYS_OLD=7
CACHE_KEEP="10GB"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Report current usage
log "=== Docker Cleanup Starting ==="
log "Current disk usage:"
docker system df | tee -a "$LOG_FILE"
# Remove stopped containers older than DAYS_OLD days
HOURS=$((DAYS_OLD * 24))
PRUNED=$(docker container prune -f --filter "until=${HOURS}h" 2>&1)
log "Containers pruned: $PRUNED"
# Remove dangling images
PRUNED=$(docker image prune -f 2>&1)
log "Dangling images pruned: $PRUNED"
# Remove unused images older than DAYS_OLD days
PRUNED=$(docker image prune -a -f --filter "until=${HOURS}h" 2>&1)
log "Old unused images pruned: $PRUNED"
# Remove unused volumes (be careful!)
# Only remove volumes not attached to ANY container (including stopped)
PRUNED=$(docker volume prune -f 2>&1)
log "Orphaned volumes pruned: $PRUNED"
# Clean unused networks
PRUNED=$(docker network prune -f 2>&1)
log "Unused networks pruned: $PRUNED"
# Clean build cache (keep CACHE_KEEP)
PRUNED=$(docker builder prune -f --keep-storage "$CACHE_KEEP" 2>&1)
log "Build cache pruned (keeping $CACHE_KEEP): $PRUNED"
# Report final usage
log "Final disk usage:"
docker system df | tee -a "$LOG_FILE"
log "=== Docker Cleanup Complete ==="
# Schedule with cron (weekly at 3 AM on Sundays)
sudo crontab -e
0 3 * * 0 /opt/scripts/docker-cleanup.sh >> /var/log/docker-cleanup.log 2>&1
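A systemd timer is an equivalent alternative to the cron entry above. A sketch of the two unit files, written to /tmp here so they can be reviewed before installing; the ExecStart path matches the cleanup script location used above.

```shell
# Write the unit files to /tmp for review, then install manually (see below).
cat > /tmp/docker-cleanup.service <<'EOF'
[Unit]
Description=Docker cleanup

[Service]
Type=oneshot
ExecStart=/opt/scripts/docker-cleanup.sh
EOF

cat > /tmp/docker-cleanup.timer <<'EOF'
[Unit]
Description=Weekly Docker cleanup (Sundays 03:00)

[Timer]
OnCalendar=Sun *-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

# To install (requires root):
#   sudo mv /tmp/docker-cleanup.service /tmp/docker-cleanup.timer /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now docker-cleanup.timer
```

Persistent=true makes systemd run the job on boot if the machine was off at the scheduled time, which plain cron does not do.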
Monitoring Disk Usage Over Time
Proactive monitoring prevents disk emergencies:
#!/bin/bash
# docker-disk-alert.sh - Alert when Docker disk usage exceeds threshold
THRESHOLD_GB=50
DOCKER_DIR="/var/lib/docker"
USAGE_KB=$(sudo du -sk "$DOCKER_DIR" | awk '{print $1}')
USAGE_GB=$((USAGE_KB / 1024 / 1024))
if [ "$USAGE_GB" -gt "$THRESHOLD_GB" ]; then
echo "ALERT: Docker disk usage is ${USAGE_GB}GB (threshold: ${THRESHOLD_GB}GB)"
docker system df
echo ""
echo "Top 10 images by size:"
docker images --format "{{.Size}}\t{{.Repository}}:{{.Tag}}" | sort -rh | head -10
echo ""
echo "Reclaimable space by category:"
docker system df --format '{{.Type}}: {{.Reclaimable}}'
fi
# Run every hour via cron
0 * * * * /opt/scripts/docker-disk-alert.sh | mail -s "Docker Disk Alert" [email protected]
Docker Desktop Cleanup (macOS/Windows)
Docker Desktop stores data in a virtual disk image that grows but does not automatically shrink:
# Check Docker Desktop disk usage
# macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
# Windows: %LOCALAPPDATA%\Docker\wsl\data\ext4.vhdx
# On macOS, check the file size
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
# Reclaim space: first prune inside Docker
docker system prune -a --volumes
# Then in Docker Desktop:
# Settings > Resources > Disk image size > Apply & Restart
# Or Settings > General > "Reclaim disk space" (if available)
# Nuclear option: Reset Docker Desktop
# This removes ALL Docker data
# Settings > Reset > Reset to factory defaults
Advanced: Analyzing Overlay2 Storage
If docker system df does not tell you enough, dig into the storage driver directly:
# Find the largest overlay2 layers
sudo du -sh /var/lib/docker/overlay2/* | sort -rh | head -20
# Map a large layer to its image or container
LAYER="abc123def456"
# Check if it belongs to an image
docker image inspect $(docker images -q) | jq -r --arg layer "$LAYER" \
  '.[] | select(.GraphDriver.Data.UpperDir // "" | contains($layer)) | .RepoTags[]'
# Check if it belongs to a container
docker inspect $(docker ps -aq) | jq -r --arg layer "$LAYER" \
  '.[] | select(.GraphDriver.Data.UpperDir // "" | contains($layer)) | .Name'
# Find what's consuming space inside a specific container
docker exec mycontainer du -sh /* 2>/dev/null | sort -rh | head
Prevention: Best Practices
- Use multi-stage builds to keep production images small
- Configure log rotation in daemon.json for all containers
- Use specific image tags instead of pulling :latest repeatedly
- Remove temporary containers with docker run --rm
- Use .dockerignore to keep build contexts small
- Schedule automated cleanup via cron or systemd timers
- Monitor disk usage with alerts before it becomes critical
- Use named volumes deliberately and remove them when no longer needed
- Set build cache limits: docker builder prune --keep-storage 10GB
- Clean up in CI/CD pipelines after each build
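The CI/CD point can be made concrete with a post-build step that runs on the build runner itself. A sketch; MAX_AGE_H and KEEP_CACHE are illustrative thresholds to tune per runner, not recommendations.

```shell
# Post-build cleanup for a CI runner that hosts its own Docker daemon.
# MAX_AGE_H and KEEP_CACHE are illustrative thresholds.
MAX_AGE_H=48
KEEP_CACHE="5GB"
FILTER="until=${MAX_AGE_H}h"

# Guarded so the snippet is a no-op where no daemon is present.
if command -v docker >/dev/null 2>&1; then
  docker container prune -f --filter "$FILTER"       # leftover job containers
  docker image prune -f --filter "$FILTER"           # dangling layers from builds
  docker builder prune -f --keep-storage "$KEEP_CACHE"
fi

echo "pruned artifacts older than ${MAX_AGE_H}h; cache capped at $KEEP_CACHE"
```

Running this after every pipeline keeps runner disks bounded without ever touching images or cache still fresh enough to speed up the next build.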
Platforms like usulnet provide a dashboard view of disk usage across all your Docker hosts, making it straightforward to identify which nodes need cleanup before disk pressure becomes a problem. This visibility is especially valuable in multi-node environments where you cannot easily check each server individually.
Quick Reference
| Task | Command |
|---|---|
| See overall usage | docker system df |
| Remove everything unused | docker system prune -a --volumes |
| Remove stopped containers | docker container prune |
| Remove unused images | docker image prune -a |
| Remove orphaned volumes | docker volume prune |
| Remove build cache | docker builder prune -a |
| Remove unused networks | docker network prune |
| Remove items older than 7d | docker system prune -a --filter "until=168h" |
| Truncate container log | truncate -s 0 /var/lib/docker/containers/ID/ID-json.log |
Conclusion
Docker disk management is not optional. Every production Docker host needs a cleanup strategy, whether it is a weekly cron job, monitoring alerts, or both. Start with docker system df to understand your current situation, set up log rotation immediately if you have not already, and schedule automated cleanup for at least stopped containers and dangling images. The goal is never to see "no space left on device" in production.