Every system administrator has a backup horror story. A deleted production database with no recent dump. A ransomware attack that encrypted the backup server along with everything else. A restore that failed because nobody had tested it in two years. The 3-2-1 backup strategy exists to prevent these stories from becoming yours.

The rule is simple: maintain 3 copies of your data, on 2 different types of storage media, with 1 copy stored off-site. This article goes deep into implementing this strategy with modern tools and real-world automation.

The 3-2-1 Rule Explained

The 3-2-1 rule was originally coined by photographer Peter Krogh. It is not a specific technology or product. It is a framework for thinking about data resilience:

  • 3 copies: Your primary data plus two backups. Any single failure should not result in data loss.
  • 2 different media types: Do not store all copies on the same type of hardware. SSD + external HDD, or local disk + cloud storage. This protects against media-specific failures (a firmware bug affecting all drives of the same model, for example).
  • 1 off-site copy: At least one backup must be physically separated from the others. This protects against fire, flood, theft, and ransomware that spreads across your local network.

Modern extension: 3-2-1-1-0. Some organizations add two more rules: 1 copy should be air-gapped or immutable (it cannot be modified or deleted, even by an administrator), and 0 errors should be found when backups are verified through regular restore testing. This is the gold standard.
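The immutable copy can be approximated with S3 Object Lock, which works with any S3-compatible tool. A sketch using the AWS CLI (the bucket name and retention window are examples; check whether your provider supports Object Lock first):

```shell
# Object Lock must be enabled at bucket creation time
aws s3api create-bucket \
  --bucket my-immutable-backups \
  --object-lock-enabled-for-bucket

# Default retention: objects cannot be modified or deleted for 30 days,
# even by the account owner (COMPLIANCE mode). This blunts ransomware
# that manages to steal your backup credentials.
aws s3api put-object-lock-configuration \
  --bucket my-immutable-backups \
  --object-lock-configuration \
  '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'
```

One caveat: objects under retention cannot be pruned, so align the lock window with your retention policy, or prune only after the lock expires.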

Understanding Backup Types

Before choosing tools, understand the three fundamental backup types:

Backup Type    What It Copies                         Storage Used   Restore Speed              Backup Speed
Full           Everything, every time                 High           Fast (single file)         Slow
Incremental    Changes since last backup (any type)   Low            Slower (chain of backups)  Fast
Differential   Changes since last full backup         Medium         Medium (full + one diff)   Medium

Most modern backup tools like restic and borg use deduplication, which makes the distinction between these types less relevant. They store data in content-addressed chunks, so only new or modified data uses additional storage regardless of how you label the backup.
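The idea behind content-addressed storage can be sketched with nothing but coreutils. This toy version uses fixed 1 MiB chunks (real tools use content-defined chunk boundaries so that insertions do not shift every subsequent chunk), but it shows why backing up two near-identical files barely costs extra space:

```shell
# Toy content-addressed chunk store: illustrative only, not a backup tool
set -euo pipefail

workdir=$(mktemp -d)
store="$workdir/chunks"   # the "repository": one file per unique chunk hash
mkdir -p "$store"

# Two files that share most of their content
head -c 4M /dev/urandom > "$workdir/file-v1"
cp "$workdir/file-v1" "$workdir/file-v2"
printf 'one changed byte' >> "$workdir/file-v2"   # append a small change

for f in "$workdir/file-v1" "$workdir/file-v2"; do
  split -b 1M "$f" "$workdir/part-"
  for part in "$workdir"/part-*; do
    hash=$(sha256sum "$part" | cut -d' ' -f1)
    # Store the chunk only if its hash is new: this is the deduplication
    [ -e "$store/$hash" ] || cp "$part" "$store/$hash"
    rm "$part"
  done
done

# 9 chunks were produced (4 for v1, 5 for v2), but only 5 unique ones
# are stored: the second file cost one small chunk, not 4 MiB
unique=$(ls "$store" | wc -l)
echo "unique chunks stored: $unique"
rm -rf "$workdir"
```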

Backup Tools Comparison

Restic: The Modern Standard

Restic is a fast, secure, cross-platform backup tool. Encryption is built in and always on, deduplication happens at the chunk level, and it supports over a dozen storage backends, including S3, B2, SFTP, and local filesystems.

# Install restic
sudo apt install -y restic

# Initialize a local backup repository
restic init --repo /mnt/backups/restic-repo

# Initialize an S3 backup repository
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
restic init --repo s3:s3.amazonaws.com/my-bucket/restic

# Create a backup
restic backup /opt/docker \
  --repo /mnt/backups/restic-repo \
  --exclude="*.log" \
  --exclude="*.tmp" \
  --tag docker-configs

# List snapshots
restic snapshots --repo /mnt/backups/restic-repo

# Restore a specific snapshot
restic restore latest \
  --repo /mnt/backups/restic-repo \
  --target /tmp/restore-test

# Apply retention policy (keep 7 daily, 4 weekly, 6 monthly)
restic forget \
  --repo /mnt/backups/restic-repo \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune

BorgBackup: Maximum Efficiency

Borg excels at deduplication efficiency and compression. It is particularly well-suited for backing up large datasets with many similar files:

# Install borg
sudo apt install -y borgbackup

# Initialize a repository with encryption
borg init --encryption=repokey /mnt/backups/borg-repo

# Create a backup with compression
borg create \
  --stats --progress \
  --compression zstd,3 \
  /mnt/backups/borg-repo::'{hostname}-{now:%Y-%m-%d_%H:%M}' \
  /opt/docker \
  /etc \
  /home \
  --exclude '*.log' \
  --exclude '*/cache/*'

# List archives
borg list /mnt/backups/borg-repo

# Mount a backup for browsing (FUSE)
mkdir /tmp/borg-mount
borg mount /mnt/backups/borg-repo /tmp/borg-mount
ls /tmp/borg-mount/
fusermount -u /tmp/borg-mount

# Prune old backups
borg prune \
  --keep-daily=7 \
  --keep-weekly=4 \
  --keep-monthly=12 \
  /mnt/backups/borg-repo

Duplicati: GUI-Friendly Backups

Duplicati provides a web-based interface for managing backups, making it accessible to users who prefer not to work exclusively on the command line. It runs well as a Docker container:

# Run Duplicati as a Docker container
docker run -d \
  --name duplicati \
  --restart unless-stopped \
  -p 8200:8200 \
  -v duplicati_config:/data \
  -v /opt/docker:/source/docker:ro \
  -v /home:/source/home:ro \
  -v /mnt/backups:/backups \
  lscr.io/linuxserver/duplicati:latest

Rclone: The Swiss Army Knife for Cloud Sync

Rclone is not a backup tool per se, but it is the most versatile tool for moving data between local and remote storage. It supports over 70 cloud storage providers:

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure a remote (interactive)
rclone config

# Sync local backups to Backblaze B2
rclone sync /mnt/backups/restic-repo b2:my-backup-bucket/restic \
  --transfers 4 \
  --checkers 8 \
  --bwlimit 50M \
  --progress

# Sync to multiple remotes for redundancy
rclone sync /mnt/backups b2:my-backup-bucket/
rclone sync /mnt/backups wasabi:my-backup-bucket/

# Verify integrity
rclone check /mnt/backups b2:my-backup-bucket/ --one-way

Cloud Storage Options for Off-Site Backups

Provider                         Cost per TB/month    Egress Fees                       Notes
Backblaze B2                     $6                   $0.01/GB (free via Cloudflare)    Best value, S3-compatible
Wasabi                           $7                   Free (90-day minimum retention)   No egress fees, S3-compatible
AWS S3 Glacier Deep Archive      $1                   $0.09/GB plus retrieval fees      Cheapest storage, slow retrieval
Hetzner Storage Box              ~$3.50               Free (within Hetzner network)     EU-based, SFTP/SMB/rsync
Self-hosted at a friend's house  $0 (hardware cost)   Free                              True off-site, you control it

Tip: For most homelab users, Backblaze B2 paired with Cloudflare (which has a bandwidth alliance with B2, eliminating egress fees) offers the best balance of cost and convenience. A typical homelab with 100GB of backed-up data costs about $0.60/month.

Automating Your Backup Pipeline

A backup strategy is only as reliable as its automation. Here is a complete backup script that implements the 3-2-1 strategy:

#!/bin/bash
# backup-321.sh - Implements the 3-2-1 backup strategy
set -euo pipefail

# Configuration
LOCAL_REPO="/mnt/backups/restic-local"
REMOTE_REPO="s3:s3.us-west-000.backblazeb2.com/my-homelab-backups"
BACKUP_PATHS="/opt/docker /etc /home"
EXCLUDE_FILE="/opt/scripts/backup-excludes.txt"
LOG_FILE="/var/log/backup-321.log"
HEALTHCHECK_URL="https://hc-ping.com/your-uuid-here"

export RESTIC_PASSWORD_FILE="/root/.restic-password"
export AWS_ACCESS_KEY_ID="your-b2-key-id"
export AWS_SECRET_ACCESS_KEY="your-b2-app-key"
MYSQL_PASSWORD="your-mariadb-root-password"  # used by dump_databases below

log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Pre-backup: dump databases
dump_databases() {
  log "Dumping databases..."
  mkdir -p /opt/docker/db-dumps

  # PostgreSQL
  docker exec postgres pg_dumpall -U postgres | \
    gzip > /opt/docker/db-dumps/postgres_$(date +%Y%m%d).sql.gz

  # MariaDB
  docker exec mariadb mysqldump --all-databases \
    --single-transaction -u root -p"${MYSQL_PASSWORD}" | \
    gzip > /opt/docker/db-dumps/mariadb_$(date +%Y%m%d).sql.gz

  log "Database dumps completed"
}

# Copy 1: Local restic backup
backup_local() {
  log "Starting local backup..."
  # $BACKUP_PATHS is intentionally unquoted so each path is a separate argument
  restic backup $BACKUP_PATHS \
    --repo "$LOCAL_REPO" \
    --exclude-file="$EXCLUDE_FILE" \
    --tag "automated,local" \
    --cleanup-cache

  restic forget \
    --repo "$LOCAL_REPO" \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    --prune

  log "Local backup completed"
}

# Copy 2: Remote restic backup (off-site)
backup_remote() {
  log "Starting remote backup..."
  restic backup $BACKUP_PATHS \
    --repo "$REMOTE_REPO" \
    --exclude-file="$EXCLUDE_FILE" \
    --tag "automated,remote" \
    --cleanup-cache

  restic forget \
    --repo "$REMOTE_REPO" \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    --prune

  log "Remote backup completed"
}

# Verify backup integrity
verify_backups() {
  log "Verifying backup integrity..."
  restic check --repo "$LOCAL_REPO" --read-data-subset=5%
  restic check --repo "$REMOTE_REPO" --read-data-subset=2%
  log "Verification completed"
}

# Main execution
main() {
  log "=== 3-2-1 Backup Starting ==="

  dump_databases
  backup_local
  backup_remote
  verify_backups

  # Signal success to health check service
  curl -fsS --retry 3 "$HEALTHCHECK_URL" > /dev/null

  log "=== 3-2-1 Backup Completed Successfully ==="
}

# Run with error handling. Wrapping main in `if main; then` would disable
# `set -e` inside the function, so rely on an ERR trap instead.
set -o errtrace  # make the ERR trap fire inside functions too
trap 'log "ERROR: Backup failed!"; curl -fsS --retry 3 "$HEALTHCHECK_URL/fail" > /dev/null' ERR
main

Schedule with cron and add the exclude file:

# /opt/scripts/backup-excludes.txt
*.log
*.tmp
*/cache/*
*/node_modules/*
*/.git/*
*/tmp/*
*.sock

# Add to crontab
# Run at 3 AM daily
0 3 * * * /opt/scripts/backup-321.sh >> /var/log/backup-321.log 2>&1

Testing Restores: The Most Neglected Step

A backup you have never tested is not a backup. It is a hope. Schedule monthly restore tests:

#!/bin/bash
# test-restore.sh - Monthly restore verification
set -euo pipefail

REPO="/mnt/backups/restic-local"
RESTORE_DIR="/tmp/restore-test-$(date +%Y%m%d)"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

log() { echo "[$(date '+%H:%M:%S')] $1"; }

# Test 1: Verify repository integrity
log "Checking repository integrity..."
restic check --repo "$REPO"

# Test 2: Restore latest snapshot
log "Restoring latest snapshot..."
mkdir -p "$RESTORE_DIR"
restic restore latest \
  --repo "$REPO" \
  --target "$RESTORE_DIR" \
  --include "/opt/docker"

# Test 3: Verify file counts roughly match (excluded files cause a small delta)
ORIGINAL_COUNT=$(find /opt/docker -type f | wc -l)
RESTORED_COUNT=$(find "$RESTORE_DIR/opt/docker" -type f | wc -l)

log "Original files: $ORIGINAL_COUNT"
log "Restored files: $RESTORED_COUNT"

if [ "$ORIGINAL_COUNT" -eq "$RESTORED_COUNT" ]; then
  log "PASS: File count matches"
else
  log "WARNING: File count mismatch (delta: $((ORIGINAL_COUNT - RESTORED_COUNT)))"
fi

# Test 4: Verify database dumps can be read
for dump in "$RESTORE_DIR"/opt/docker/db-dumps/*.sql.gz; do
  [ -e "$dump" ] || continue  # no dumps restored: skip the unexpanded glob
  if gunzip -t "$dump" 2>/dev/null; then
    log "PASS: $(basename "$dump") is valid gzip"
  else
    log "FAIL: $(basename "$dump") is corrupted"
  fi
done

# Test 5: Spin up a test database and restore
log "Testing PostgreSQL restore..."
docker run -d --name pg-restore-test \
  -e POSTGRES_PASSWORD=testpass \
  postgres:16

# Wait until PostgreSQL accepts connections (a fixed sleep is unreliable)
until docker exec pg-restore-test pg_isready -U postgres >/dev/null 2>&1; do
  sleep 1
done

gunzip -c "$RESTORE_DIR"/opt/docker/db-dumps/postgres_*.sql.gz | \
  docker exec -i pg-restore-test psql -U postgres 2>/dev/null

TABLE_COUNT=$(docker exec pg-restore-test \
  psql -U postgres -t -c \
  "SELECT count(*) FROM information_schema.tables WHERE table_schema='public'" 2>/dev/null)

log "PostgreSQL restore: $TABLE_COUNT tables recovered"

# Cleanup
docker rm -f pg-restore-test
rm -rf "$RESTORE_DIR"

log "Restore test completed"

Warning: The most common backup failure mode is not a tool failure -- it is a configuration error that silently produces empty or incomplete backups for months. Always verify that your backups contain the data you expect, not just that the backup command exits successfully.
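A quick way to catch that failure mode is to assert on what the latest snapshot actually contains, not just the backup command's exit code. A sketch assuming the repository layout from the scripts above (adjust the paths and the grep pattern to whatever you consider critical):

```shell
export RESTIC_PASSWORD_FILE="/root/.restic-password"
REPO="/mnt/backups/restic-local"

# The snapshot should actually contain the tree we think we are backing up
restic ls latest --repo "$REPO" /opt/docker | head

# Total size and file count should be in the expected ballpark
restic stats latest --repo "$REPO"

# Fail loudly if a critical path is missing from the snapshot
if ! restic ls latest --repo "$REPO" | grep -q "db-dumps"; then
  echo "ALERT: latest snapshot contains no database dumps" >&2
  exit 1
fi
```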

Disaster Recovery Planning

A 3-2-1 backup strategy is one component of a broader disaster recovery plan. Document the answers to these questions:

  1. RTO (Recovery Time Objective): How long can your services be down? 1 hour? 1 day?
  2. RPO (Recovery Point Objective): How much data can you afford to lose? The last hour? The last day?
  3. Recovery procedure: What are the exact steps to restore each service? Write a runbook.
  4. Recovery order: Which services must come up first? (DNS, then reverse proxy, then databases, then applications.)
  5. Communication plan: Who needs to know when services are down?

Write these down and store them alongside your backups. If your disaster recovery plan only exists in your head, it is not a plan.
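A runbook can start life as an executable script. The sketch below encodes the recovery order above; every service name and path is an example to replace with your own stack:

```shell
#!/bin/bash
# recover.sh - disaster recovery runbook sketch (all names and paths are examples)
set -euo pipefail

# 1. Restore data into a staging area first, then move it into place
#    once you have spot-checked it
restic restore latest \
  --repo /mnt/backups/restic-local \
  --target /srv/restore
rsync -a /srv/restore/opt/docker/ /opt/docker/

# 2. Bring services up in dependency order:
#    DNS, then reverse proxy, then databases, then applications
cd /opt/docker
for svc in dns proxy databases apps; do
  docker compose --project-directory "$svc" up -d
  sleep 5  # crude settle time; replace with real health checks
done

# 3. Verify before declaring recovery complete
dig @127.0.0.1 example.internal +short
curl -fsS https://status.example.internal/health
```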

Monitoring Your Backups

Use a dead man's switch service like Healthchecks.io (self-hostable) to detect when backups fail silently. The idea is simple: your backup script pings a URL on success. If the ping does not arrive on schedule, you get alerted:

# At the end of your backup script, ping healthchecks
curl -fsS --retry 3 https://hc-ping.com/your-uuid-here

# For more detail, send the log output
curl -fsS --retry 3 --data-raw "$(tail -50 /var/log/backup-321.log)" \
  https://hc-ping.com/your-uuid-here
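If you would rather keep the dead man's switch in-house, Healthchecks can be self-hosted. A sketch using the linuxserver image (the environment values are placeholders; see the image's documentation for the full variable list):

```shell
docker run -d \
  --name healthchecks \
  --restart unless-stopped \
  -p 8000:8000 \
  -v healthchecks_config:/config \
  -e SITE_ROOT="http://your-server:8000" \
  -e SUPERUSER_EMAIL="you@example.com" \
  -e SUPERUSER_PASSWORD="change-me" \
  lscr.io/linuxserver/healthchecks:latest
```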

With usulnet's built-in backup management, you can schedule and monitor backups across all your Docker hosts from a single interface, with automatic alerting when backup jobs fail or miss their schedule.

Complete 3-2-1 Implementation Summary

Copy                   Location               Media Type            Tool                 Schedule
Copy 1 (Primary)       Server SSD             NVMe/SSD              Live data            Always current
Copy 2 (Local backup)  External HDD / NAS     HDD / RAID            Restic / Borg        Daily at 3 AM
Copy 3 (Off-site)      Backblaze B2 / Wasabi  Cloud object storage  Restic (S3 backend)  Daily at 4 AM

Add monthly restore testing and monitoring, and you have a backup system that can survive hardware failure, ransomware, natural disaster, and human error. The cost is minimal: a few dollars per month for cloud storage and an hour of initial setup time. The cost of not having it is immeasurable.