Docker Storage Drivers Explained: overlay2, btrfs, zfs and More
Every time you run docker pull or docker build, Docker is silently managing a complex layered filesystem behind the scenes. The storage driver is the component responsible for how image layers are stacked, how containers get their writable layer, and how efficiently disk space is used. Choosing the wrong driver—or misconfiguring the right one—can lead to poor performance, excessive disk usage, and stability problems.
This article explains how Docker storage drivers work, compares the available options, and helps you choose the right one for your specific workload and infrastructure.
The Copy-on-Write Concept
Docker images are built from layers. Each layer represents a set of filesystem changes (files added, modified, or deleted). When you run a container from an image, Docker does not copy all the image layers into a new directory. Instead, it uses a copy-on-write (CoW) strategy:
- All image layers are stacked together as read-only layers
- A thin writable layer is placed on top for the running container
- When a container reads a file, it reads from the highest layer that contains it
- When a container modifies a file, the file is copied up from the read-only layer to the writable layer, and the modification happens there
- When a container deletes a file, a whiteout marker is created in the writable layer
This is why 100 containers from the same image barely use more disk space than one—they all share the same read-only layers and only store their individual changes in thin writable layers.
Key insight: The storage driver determines how the copy-on-write mechanism is implemented. Different drivers use different filesystem features (union mounts, snapshots, thin provisioning) to achieve the same logical result with different performance characteristics.
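The read, copy-up, and whiteout rules above can be sketched with plain directories and no kernel support. This is only an illustration of the logic, not how OverlayFS is implemented; the `/tmp/cow-demo` path and the `.wh.` marker name (borrowed from the OverlayFS convention) are purely for demonstration:

```shell
# Simulate the copy-on-write rules with two plain directories.
# Everything here (paths, the .wh. whiteout naming) is illustrative only.
set -eu
demo=/tmp/cow-demo
rm -rf "$demo"
mkdir -p "$demo/lower" "$demo/upper"   # lower = read-only image layer, upper = writable layer
echo "from image" > "$demo/lower/config.txt"
echo "scratch"    > "$demo/lower/notes.txt"

# Read: the highest layer containing the file wins; whiteouts hide lower copies.
read_file() {
  [ -e "$demo/upper/.wh.$1" ] && { echo "no such file: $1" >&2; return 1; }
  if [ -f "$demo/upper/$1" ]; then cat "$demo/upper/$1"; else cat "$demo/lower/$1"; fi
}

# Write: copy the file up into the writable layer first, then modify it there.
write_file() {
  [ -f "$demo/upper/$1" ] || cp "$demo/lower/$1" "$demo/upper/$1"
  echo "$2" > "$demo/upper/$1"
}

# Delete: never touch the lower layer; leave a whiteout marker instead.
delete_file() {
  rm -f "$demo/upper/$1"
  : > "$demo/upper/.wh.$1"
}

read_file config.txt                  # served from the image layer
write_file config.txt "changed"       # triggers the copy-up
read_file config.txt                  # now served from the writable layer
cat "$demo/lower/config.txt"          # the image layer is untouched
delete_file notes.txt                 # whiteout, not a real deletion
read_file notes.txt || true           # the file now appears gone
```

Note that the lower directory never changes: the "modification" and the "deletion" both live entirely in the upper layer, which is why many containers can safely share one image.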
overlay2: The Default Driver
overlay2 is the default and recommended storage driver for Docker on all modern Linux distributions. It uses OverlayFS, which has been part of the mainline kernel since 3.18; the overlay2 driver requires kernel 4.0 or newer.
How overlay2 Works
OverlayFS merges one or more read-only lowerdir trees with a single writable upperdir, presenting them as one unified merged view:
# Each layer is stored as a directory
# Docker's overlay2 structure:
/var/lib/docker/overlay2/
├── l/                # Shortened symlinks for layer IDs
├── abc123.../        # Layer 1 (base image)
│   ├── diff/         # Actual filesystem content
│   ├── link          # Shortened identifier
│   └── lower         # Reference to parent layer
├── def456.../        # Layer 2
│   ├── diff/
│   ├── link
│   ├── lower
│   ├── merged/       # Union mount (only for running containers)
│   └── work/         # OverlayFS work directory
└── ...
# Check your current storage driver
docker info | grep "Storage Driver"
# Storage Driver: overlay2
# See the layers for a specific image
docker inspect nginx:alpine | jq '.[0].GraphDriver'
# View the overlay mount of a running container (here, the first one listed)
docker inspect $(docker ps -q | head -n 1) | jq '.[0].GraphDriver.Data'
# {
#   "LowerDir": "/var/lib/docker/overlay2/.../diff:...",
#   "MergedDir": "/var/lib/docker/overlay2/.../merged",
#   "UpperDir": "/var/lib/docker/overlay2/.../diff",
#   "WorkDir": "/var/lib/docker/overlay2/.../work"
# }
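The lookup order behind that merged view can be sketched with plain directories instead of a real overlay mount. The layer names and paths below are illustrative only; the point is that the upper layer is searched first, then the lower layers from newest to oldest, mirroring the order in which LowerDir lists them:

```shell
# Sketch of the OverlayFS lookup order using plain directories instead of a
# real mount; the layer names and paths are illustrative only.
set -eu
base=/tmp/ovl-demo
rm -rf "$base"
mkdir -p "$base/layer1" "$base/layer2" "$base/upper"
echo "v1" > "$base/layer1/app.conf"    # base image layer
echo "v2" > "$base/layer2/app.conf"    # a later layer shadows the base copy

# Search the upper (writable) layer first, then the lower layers newest-first.
lookup() {
  for dir in "$base/upper" "$base/layer2" "$base/layer1"; do
    if [ -f "$dir/$1" ]; then cat "$dir/$1"; return 0; fi
  done
  echo "ENOENT: $1" >&2
  return 1
}

lookup app.conf | tee "$base/result.txt"   # the layer2 copy wins
```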
overlay2 Performance Characteristics
| Operation | Performance | Notes |
|---|---|---|
| Container start | Very fast | Just creates a new thin layer |
| Read (file in lower layer) | Near native | Direct read from underlying filesystem |
| First write (copy-up) | Slower | Entire file must be copied to upper layer |
| Subsequent writes | Near native | File already in upper layer |
| Many layers (>128) | Degraded | Kernel limit on number of lower directories |
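The "first write" penalty in the table exists because the whole file must be copied up before even a one-byte change lands. A rough, driver-agnostic illustration with plain files (the 50 MB size and `/tmp` paths are arbitrary):

```shell
# Illustrate why the first write to a large file is slow under overlay2:
# a one-byte change first pays for copying the entire file up.
set -eu
dir=/tmp/copyup-demo
rm -rf "$dir"
mkdir -p "$dir/lower" "$dir/upper"

# A 50 MB file in the read-only layer.
dd if=/dev/zero of="$dir/lower/big.bin" bs=1M count=50 status=none

# "Modify one byte": the full-file copy-up dominates, not the write itself.
cp "$dir/lower/big.bin" "$dir/upper/big.bin"              # 50 MB copy-up
printf 'x' | dd of="$dir/upper/big.bin" bs=1 count=1 conv=notrunc status=none

ls -l "$dir/upper/big.bin"   # the upper copy is the full 50 MB, for a 1-byte change
```

Subsequent writes to the same file hit the upper-layer copy directly, which is why only the first write pays this cost.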
btrfs
The btrfs storage driver uses Btrfs subvolumes and snapshots instead of overlay mounts. Each image layer and container writable layer is a Btrfs subvolume.
# Prerequisites: Btrfs filesystem for /var/lib/docker
# Check if /var/lib/docker is on Btrfs
df -T /var/lib/docker | grep btrfs
# Configure Docker to use btrfs
# /etc/docker/daemon.json
{
  "storage-driver": "btrfs"
}
sudo systemctl restart docker
btrfs Advantages
- Native snapshots: Layer creation is instantaneous (O(1) operation)
- No copy-up overhead: Uses native CoW at the block level, not file level
- Compression: Btrfs transparent compression reduces disk usage
- Checksumming: Data integrity verification built into the filesystem
- No layer limit: Unlike overlay2's ~128 layer limit
btrfs Disadvantages
- Requires Btrfs filesystem (not commonly the default)
- Btrfs has had stability concerns historically (much improved in recent kernels)
- Higher memory usage than overlay2
- Requires careful subvolume management for cleanup
zfs
The ZFS storage driver uses ZFS datasets and snapshots. ZFS is known for its data integrity features, compression, and snapshot capabilities.
# Prerequisites: ZFS filesystem
sudo apt-get install zfsutils-linux # Ubuntu/Debian
sudo modprobe zfs
# Create a ZFS pool for Docker
sudo zpool create -f docker-pool /dev/sdX
sudo zfs create -o mountpoint=/var/lib/docker docker-pool/docker
# Configure Docker
# /etc/docker/daemon.json
{
  "storage-driver": "zfs"
}
sudo systemctl restart docker
# Monitor ZFS pool usage
zpool list
zfs list -r docker-pool
ZFS Advantages
- Best data integrity: Checksums on all data and metadata
- Excellent compression: lz4 or zstd compression saves significant space
- Instant snapshots: Like Btrfs, snapshots are O(1)
- Block-level CoW: Efficient for write-heavy workloads
- Quotas: Per-container storage quotas via ZFS dataset quotas
ZFS Disadvantages
- Higher memory usage (ARC cache, typically wants 1GB+ RAM)
- Not in the mainline Linux kernel (DKMS module)
- More complex setup and management
- CDDL license creates distribution complications
# Useful ZFS commands for Docker management
# Check compression ratio
zfs get compressratio docker-pool/docker
# Set compression
zfs set compression=lz4 docker-pool/docker
# Check space usage per dataset (per container/image)
zfs list -r docker-pool/docker -o name,used,referenced,compressratio
devicemapper (Deprecated)
devicemapper uses Linux's device-mapper framework with thin provisioning. It was the default on CentOS/RHEL before those distributions gained overlay2 support, and it has since been deprecated and removed from recent Docker Engine releases.
# If you must migrate from devicemapper to overlay2:
# 1. Back up all important container data and images
# 2. Stop Docker
sudo systemctl stop docker
# 3. Change the storage driver
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
# 4. Remove old storage data (WARNING: destroys all images and containers)
sudo rm -rf /var/lib/docker/devicemapper /var/lib/docker/image/devicemapper
# 5. Start Docker
sudo systemctl start docker
# 6. Re-pull images and recreate containers
Choosing the Right Driver
| Scenario | Recommended Driver | Reason |
|---|---|---|
| General purpose (most users) | overlay2 | Default, stable, excellent performance, no setup needed |
| Data integrity is critical | zfs | Checksums on all data, self-healing capabilities |
| Already using Btrfs | btrfs | Native integration, block-level CoW |
| Write-heavy workloads | zfs or btrfs | Block-level CoW avoids full file copy-up |
| Very deep layer stacks | zfs or btrfs | No layer count limit |
| Minimal memory/overhead | overlay2 | Lowest resource usage of all drivers |
| Rootless Docker | overlay2 or fuse-overlayfs | Best supported in rootless mode |
Performance Benchmarks
Rough benchmarks for common operations (relative to overlay2 baseline):
| Operation | overlay2 | btrfs | zfs |
|---|---|---|---|
| Container start | 1.0x | 1.0x | 1.0x |
| Sequential read | 1.0x | 0.95x | 0.90x |
| Random read | 1.0x | 0.90x | 0.85x (with ARC: 1.2x) |
| Sequential write | 1.0x | 1.1x | 1.0x |
| First write (copy-up) | 1.0x | ~2x faster | ~2x faster |
| Disk space efficiency | 1.0x | 1.3x (with compression) | 1.5x (with compression) |
| Memory overhead | Low | Medium | High (ARC cache) |
Note: These numbers are approximate and vary significantly based on hardware, kernel version, and workload. Always benchmark with your actual workload before making a decision.
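In that spirit, here is a crude sequential-write probe you could run inside a container whose writable layer sits on each candidate driver. It is a sketch only: a serious comparison should use a dedicated tool such as fio, and the 100 MB size and `/tmp` paths are arbitrary choices:

```shell
# Crude sequential-write probe. Run it inside a container on the driver under
# test; a serious benchmark should use fio instead.
set -eu
out=/tmp/bench.bin
t0=$(date +%s%N)                    # GNU date: nanoseconds since the epoch
dd if=/dev/zero of="$out" bs=1M count=100 conv=fsync status=none
t1=$(date +%s%N)
echo "sequential write: 100 MB in $(( (t1 - t0) / 1000000 )) ms" \
  | tee /tmp/bench-result.txt
rm -f "$out"
```

Run it several times and discard the first result, since page cache and lazy allocation can skew a single cold run.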
Disk Usage Management
Regardless of storage driver, managing Docker's disk usage is critical for production systems:
# Overall disk usage summary
docker system df
# TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
# Images          25      5        8.345GB   6.12GB (73%)
# Containers      8       5        234.5MB   120MB (51%)
# Local Volumes   12      6        3.456GB   1.2GB (34%)
# Build Cache     45               2.1GB     2.1GB
# Detailed breakdown
docker system df -v
# Per-container disk usage (writable layer size)
docker ps -s
# The SIZE column shows writable layer size vs virtual (total) size
# Inspect specific image layers
docker history nginx:alpine
# IMAGE     CREATED      SIZE    COMMENT
# abc123    2 days ago   7.1MB
# def456    2 days ago   1.2kB
# ...
# Check the actual storage on disk
sudo du -sh /var/lib/docker/
sudo du -sh /var/lib/docker/overlay2/
Controlling Container Writable Layer Size
# Limit writable layer size (overlay2 with xfs)
# Requires the Docker data root on an XFS filesystem mounted with project quotas (pquota)
# /etc/docker/daemon.json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=10G"
  ]
}
# For ZFS, use dataset quotas
zfs set quota=10G docker-pool/docker
Changing Storage Drivers
Changing the storage driver requires removing all existing Docker data. Plan carefully:
# 1. Save important images
docker save myapp:latest -o myapp.tar
# 2. Document running containers and their configurations
docker ps --format "{{.Names}}: {{.Image}}" > container-list.txt
for c in $(docker ps -q); do
  name=$(docker inspect --format '{{.Name}}' "$c" | tr -d '/')  # strip the leading slash Docker adds
  docker inspect "$c" > "inspect_${name}.json"
done
# 3. Back up all volumes
mkdir -p "$(pwd)/backups"
for v in $(docker volume ls -q); do
  docker run --rm -v "$v":/data -v "$(pwd)/backups":/backup \
    alpine tar czf "/backup/vol_${v}.tar.gz" -C /data .
done
# 4. Stop Docker
sudo systemctl stop docker
# 5. Back up Docker directory (safety net)
sudo cp -a /var/lib/docker /var/lib/docker.backup
# 6. Change the driver
sudo vi /etc/docker/daemon.json
# 7. Remove old data
sudo rm -rf /var/lib/docker
# 8. Start Docker
sudo systemctl start docker
# 9. Restore images and volumes
docker load -i myapp.tar
Platforms like usulnet make this process easier by providing visibility into exactly which images, containers, and volumes exist on each node, helping you ensure nothing is missed during a storage driver migration.
Conclusion
For the vast majority of Docker users, overlay2 is the correct choice. It is the default, well-tested, performant, and requires zero configuration. Only consider alternatives if you have specific requirements: ZFS for data integrity and advanced compression, Btrfs if your infrastructure already uses it. The most important thing is not which driver you choose, but that you understand how it works and actively manage disk usage with regular cleanup and monitoring.