# Docker Backup Tools Compared: Restic, Borg, Duplicati and Velero
Choosing the right backup tool for your Docker infrastructure is one of those decisions that seems unimportant until the day you need to restore from backup. Each tool in this comparison represents a different philosophy: Restic prioritizes simplicity and cloud-native storage, BorgBackup maximizes compression and deduplication efficiency, Duplicati provides a GUI for less technical users, and Velero operates in the Kubernetes ecosystem.
This comparison is based on practical experience running each tool against Docker volume data, including databases, application files, and configuration directories. The goal is to help you pick the right tool for your infrastructure, not to declare a universal winner.
## Quick Comparison
| Feature | Restic | BorgBackup | Duplicati | Velero |
|---|---|---|---|---|
| Language | Go | Python/C | C# | Go |
| Deduplication | Content-defined chunking | Content-defined chunking | Block-level | Snapshot-based |
| Encryption | AES-256 (always on) | AES-256 (optional) | AES-256 (optional) | Provider-dependent |
| Compression | zstd (since 0.14) | lz4, zstd, zlib, lzma | zip, 7z, SharpCompress | N/A (raw snapshots) |
| Cloud storage | S3, B2, Azure, GCS, SFTP, rest-server | SSH/SFTP only (native) | 25+ backends | S3, Azure, GCS |
| GUI | No (CLI only) | No (CLI only) | Yes (web UI) | CLI + K8s dashboard |
| Docker integration | Manual (scripts/containers) | Manual (scripts/containers) | Docker image available | K8s native (volumes) |
| Target audience | Sysadmins, cloud users | Sysadmins, power users | Non-technical users, SMBs | Kubernetes operators |
| License | BSD-2-Clause | BSD-3-Clause | LGPL | Apache 2.0 |
## Restic: Cloud-Native Simplicity
Restic is the most popular choice for backing up Docker volumes to cloud storage. It ships as a single binary with no dependencies, always encrypts data, and supports a wide range of storage backends natively.
### Key Strengths
- Single static binary, trivial to install and containerize
- Encryption is always on (no option to disable)
- Native support for S3, B2, Azure, GCS, SFTP, and REST
- Fast incremental backups with content-defined chunking
- Built-in snapshot management and pruning
### Docker Volume Backup with Restic

```bash
# Initialize a Restic repository on S3
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-backups/docker"
export RESTIC_PASSWORD="your-encryption-password"
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
restic init

# Back up Docker volumes using a helper container
docker run --rm \
  -v my_postgres_data:/data:ro \
  -e RESTIC_REPOSITORY="$RESTIC_REPOSITORY" \
  -e RESTIC_PASSWORD="$RESTIC_PASSWORD" \
  -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  restic/restic backup /data --tag postgres --tag production

# List snapshots
restic snapshots

# Restore the latest snapshot matching a tag
restic restore latest --target /restore/path --tag postgres

# Prune old snapshots (keep 7 daily, 4 weekly, 6 monthly)
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Check repository integrity
restic check
```
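One caveat worth noting: copying a running database's data directory gives you, at best, a crash-consistent snapshot. For databases, a safer pattern is to stream a logical dump through Restic's stdin support instead of backing up raw files. A minimal sketch, assuming a container named `my_postgres` and a database `mydb` (both placeholders for your own setup):

```shell
# Hypothetical names: "my_postgres" (container) and "mydb" (database).
# pg_dump produces a transactionally consistent dump; restic reads it
# from stdin and stores it under the given filename in the snapshot.
docker exec my_postgres pg_dump -U postgres mydb \
  | restic backup --stdin --stdin-filename mydb.sql \
      --tag postgres-dump --tag production
```

Restoring is the reverse: `restic dump latest mydb.sql | psql ...` feeds the stored dump back into a fresh database.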
### Restic as a Docker Compose Sidecar

```yaml
services:
  restic-backup:
    image: restic/restic
    volumes:
      - postgres_data:/data/postgres:ro
      - app_uploads:/data/uploads:ro
      - ./restic-backup.sh:/backup.sh:ro
    environment:
      RESTIC_REPOSITORY: s3:s3.amazonaws.com/my-backups/docker
      RESTIC_PASSWORD: ${RESTIC_PASSWORD}
      AWS_ACCESS_KEY_ID: ${AWS_KEY}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET}
    entrypoint: /bin/sh
    command: >
      -c "while true; do
      /backup.sh;
      sleep 86400;
      done"
```
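The sidecar mounts a `restic-backup.sh` helper that is not shown above. One possible version, assuming the repository credentials are supplied through the container environment and using illustrative retention values:

```shell
#!/bin/sh
# restic-backup.sh - sketch of the helper mounted into the sidecar.
# Assumes RESTIC_REPOSITORY, RESTIC_PASSWORD and AWS credentials are
# already set in the container environment (as in the compose file).
set -eu

# Back up every volume mounted under /data
restic backup /data --tag compose-sidecar

# Apply retention and reclaim space (values are illustrative)
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# Verify repository metadata on each run
restic check
```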
## BorgBackup: Compression Champion
BorgBackup (Borg) excels at compression efficiency and deduplication. It consistently produces the smallest backup sizes, making it ideal when storage costs or bandwidth are a concern.
### Key Strengths
- Best-in-class compression with multiple algorithm choices
- Highly efficient deduplication
- Append-only repository mode for ransomware protection
- Fastest local backup speed in most benchmarks
- Mature, battle-tested (since 2015)
### Docker Volume Backup with Borg

```bash
# Initialize a Borg repository
export BORG_REPO="/backups/borg-repo"
export BORG_PASSPHRASE="your-encryption-passphrase"
borg init --encryption=repokey "$BORG_REPO"

# Back up Docker volumes
borg create \
  --compression zstd,6 \
  --stats \
  --progress \
  "$BORG_REPO"::docker-{now:%Y%m%d-%H%M%S} \
  /var/lib/docker/volumes/

# List archives
borg list "$BORG_REPO"

# Restore from an archive
# (borg extract writes into the current directory; it has no --target flag)
cd /restore/path
borg extract "$BORG_REPO"::docker-20250514-020000

# Prune old archives
borg prune \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 12 \
  "$BORG_REPO"

# Compact the repository (reclaim space after prune)
borg compact "$BORG_REPO"

# Verify repository integrity
borg check "$BORG_REPO"
```
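When the commands above run from cron, overlapping invocations are a real risk: Borg locks its repository and a second run will fail noisily. A cron-safe wrapper can fail fast with `flock(1)` instead. Paths, the lock file, and the passphrase file below are all illustrative:

```shell
#!/bin/bash
# borg-backup.sh - cron-safe wrapper (all paths are illustrative).
# flock(1) ensures two runs never race for the repository; Borg also
# locks the repo itself, but exiting quietly here keeps cron logs clean.
set -euo pipefail

export BORG_REPO="/backups/borg-repo"
export BORG_PASSPHRASE="$(cat /etc/borg/passphrase)"  # assumed secret file

exec 9>/var/lock/borg-backup.lock
flock -n 9 || { echo "another backup is already running"; exit 0; }

# With BORG_REPO set, the bare "::" prefix addresses the repository
borg create --compression zstd,6 --stats \
  ::docker-{now:%Y%m%d-%H%M%S} /var/lib/docker/volumes/
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12
borg compact
```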
### Remote Backup with Borg (via SSH)

```bash
# Borg's native remote storage uses SSH and requires borg to be
# installed on the remote server -- its main limitation vs. Restic
# when targeting cloud object storage.
export BORG_REPO="ssh://backup-user@backup-server:22/path/to/repo"

# Workaround for S3-style storage: back up to a local repository,
# then sync it to the cloud with rclone
borg create "$LOCAL_REPO"::backup-{now} /data
rclone sync "$LOCAL_REPO" remote:borg-backups/
```
Borg's append-only mode prevents a compromised client from deleting or rewriting old backups, which makes it a powerful defense against ransomware. Enable it when creating the repository with `borg init --append-only`, or enforce it server-side by restricting the client's SSH key to `borg serve --append-only` in the `authorized_keys` options.
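The server-side variant pins a specific client key to an append-only `borg serve` process, so even a fully compromised client cannot delete history. A sketch of the relevant `authorized_keys` entry (paths and key name are illustrative; the options must be a single line):

```
# ~/.ssh/authorized_keys on the backup server (one line; paths illustrative)
command="borg serve --append-only --restrict-to-path /backups/client1",restrict ssh-ed25519 AAAA... client1-backup-key
```

The `restrict` option additionally disables port forwarding and other SSH features the backup client has no business using.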
## Duplicati: GUI-Driven Backup
Duplicati provides a web-based GUI for configuring and monitoring backups. It supports the widest range of storage backends and is the most accessible option for users who prefer not to write scripts.
### Key Strengths
- Web-based GUI for configuration and monitoring
- 25+ storage backends (S3, B2, Google Drive, OneDrive, WebDAV, FTP, etc.)
- Built-in scheduling
- Email notifications
- No command-line knowledge required
### Duplicati Docker Deployment

```yaml
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati
    container_name: duplicati
    restart: unless-stopped
    ports:
      - "8200:8200"
    volumes:
      - duplicati_config:/config
      - /var/lib/docker/volumes:/source:ro
      - /backups/duplicati:/backups
    environment:
      PUID: 0
      PGID: 0
      TZ: UTC

volumes:
  duplicati_config:
```
After deployment, access the web UI at port 8200 to configure backup jobs through a wizard interface. You can schedule backups, configure retention, set up email alerts, and restore individual files through the browser.
## Velero: Kubernetes-Native Backup
Velero is purpose-built for Kubernetes. It backs up cluster resources (deployments, services, configmaps) and persistent volumes. If you run Docker workloads on Kubernetes, Velero is the natural choice.
### Key Strengths
- Backs up Kubernetes resources and persistent volumes together
- Disaster recovery for entire namespaces or clusters
- Cluster migration (backup from cluster A, restore to cluster B)
- Schedule-based and on-demand backups
- Plugin architecture for storage providers
### Velero Setup and Usage

```bash
# Install the Velero CLI
curl -fsSL -o velero.tar.gz \
  https://github.com/vmware-tanzu/velero/releases/download/v1.13.0/velero-v1.13.0-linux-amd64.tar.gz
tar xzf velero.tar.gz
sudo mv velero-v1.13.0-linux-amd64/velero /usr/local/bin/

# Install Velero in the cluster (AWS example)
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket my-velero-backups \
  --backup-location-config region=us-east-1 \
  --snapshot-location-config region=us-east-1 \
  --secret-file ./credentials-velero

# Create a backup of a namespace
velero backup create production-backup \
  --include-namespaces production

# Create a scheduled backup (720h TTL = 30 days retention)
velero schedule create daily-production \
  --schedule="0 2 * * *" \
  --include-namespaces production \
  --ttl 720h

# Restore from backup
velero restore create --from-backup production-backup

# Migrate: restore to a different namespace
velero restore create --from-backup production-backup \
  --namespace-mappings production:staging

# Check backup status
velero backup get
velero backup describe production-backup --details
```
## Performance Benchmarks
Tested against a 10GB Docker volume directory containing a mix of PostgreSQL data files, application uploads (images, documents), and configuration files:
| Metric | Restic | BorgBackup | Duplicati |
|---|---|---|---|
| Initial backup time | 4m 12s | 3m 45s | 6m 30s |
| Incremental backup (1% change) | 18s | 12s | 45s |
| Repository size (initial) | 4.2 GB | 3.1 GB (zstd) | 4.8 GB |
| Repository size (30 daily snapshots) | 5.8 GB | 4.2 GB | 6.9 GB |
| Full restore time | 3m 30s | 2m 50s | 8m 15s |
| Single file restore time | 2s | 1s | 12s |
| Memory usage during backup | ~200 MB | ~350 MB | ~500 MB |
| CPU usage during backup | Moderate | High (compression) | Moderate |
Note: These benchmarks are indicative and will vary based on your data characteristics, storage backend, and hardware. Run your own benchmarks with representative data before making a decision. Borg with zstd compression consistently wins on size; Restic wins on simplicity and cloud integration.
## Cloud Storage Integration
| Storage Backend | Restic | Borg | Duplicati |
|---|---|---|---|
| Amazon S3 | Native | Via rclone | Native |
| Backblaze B2 | Native | Via rclone | Native |
| Azure Blob | Native | Via rclone | Native |
| Google Cloud Storage | Native | Via rclone | Native |
| Wasabi | Native (S3) | Via rclone | Native (S3) |
| SSH/SFTP | Native | Native (requires borg on server) | Native |
| Local filesystem | Native | Native | Native |
| MinIO (self-hosted S3) | Native (S3) | Via rclone | Native (S3) |
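One option the table understates: Restic can also drive rclone itself through its `rclone:` repository scheme, which opens up any backend rclone supports (Google Drive, OneDrive, and so on). A brief sketch, where `mydrive` is an illustrative rclone remote name you would have configured beforehand:

```shell
# Use any configured rclone remote as a Restic backend
# ("mydrive" is an illustrative remote name from `rclone config`)
export RESTIC_REPOSITORY="rclone:mydrive:restic-backups"
export RESTIC_PASSWORD="your-encryption-password"
restic init
restic backup /data
```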
## Restore Testing Automation
No backup comparison is complete without discussing restore testing. An untested backup is not a backup.
```bash
#!/bin/bash
# restore-test.sh - Automated restore verification
set -euo pipefail

TOOL="${1:-restic}"
TIMESTAMP=$(date +%Y%m%d)
RESTORE_DIR="/tmp/restore-test-$TIMESTAMP"

log() { echo "[$(date '+%H:%M:%S')] $1"; }
cleanup() { rm -rf "$RESTORE_DIR"; }
trap cleanup EXIT

mkdir -p "$RESTORE_DIR"

case "$TOOL" in
  restic)
    log "Testing Restic restore..."
    restic restore latest --target "$RESTORE_DIR" --tag postgres
    ;;
  borg)
    log "Testing Borg restore..."
    LATEST=$(borg list --last 1 --format '{archive}' "$BORG_REPO")
    # borg extract writes into the current directory (no --target flag)
    (cd "$RESTORE_DIR" && borg extract "$BORG_REPO::$LATEST")
    ;;
esac

# Verify restored data
FILE_COUNT=$(find "$RESTORE_DIR" -type f | wc -l)
TOTAL_SIZE=$(du -sh "$RESTORE_DIR" | cut -f1)

if [ "$FILE_COUNT" -gt 0 ]; then
  log "PASS: Restored $FILE_COUNT files ($TOTAL_SIZE)"
  # Sanity-check the PostgreSQL cluster's control file if present
  # (pg_controldata exits nonzero on an unreadable/invalid cluster)
  if [ -d "$RESTORE_DIR/pgdata" ]; then
    docker run --rm \
      -v "$RESTORE_DIR/pgdata":/var/lib/postgresql/data \
      postgres:16 pg_controldata /var/lib/postgresql/data
    log "PASS: PostgreSQL control data readable"
  fi
else
  log "FAIL: No files restored"
  exit 1
fi
```
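A restore test only counts if it runs without a human remembering to trigger it. One way to schedule the script above is a system cron entry (install path and log location are illustrative):

```
# /etc/cron.d/restore-test -- weekly automated restore drill
# (script path and log file are illustrative)
0 4 * * 0  root  /usr/local/bin/restore-test.sh restic >> /var/log/restore-test.log 2>&1
```

Pair this with log monitoring or an alert on a missing "PASS" line so a silently failing drill gets noticed.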
## Choosing the Right Tool
- Choose Restic if you need cloud-native storage support, want a simple CLI tool, and value the security of always-on encryption. Best for backing up Docker volumes to S3, B2, or any S3-compatible storage.
- Choose BorgBackup if storage efficiency is your priority, you have SSH access to your backup server, and you want the best compression ratios. Best for local or SSH-based backups where bandwidth or storage costs matter.
- Choose Duplicati if you want a GUI, need to support non-technical operators, or require an unusual storage backend. Best for small teams and homelab setups where ease of use matters more than performance.
- Choose Velero if you run Kubernetes. It is the only tool in this comparison that understands Kubernetes resources natively. Not applicable for Docker Compose or Swarm deployments.
The best backup tool is the one you actually use consistently. Pick a tool, automate it, test restores monthly, and monitor for failures. The specific tool matters less than having a tested, automated, and monitored backup process in place.