Redis in Docker: Caching, Pub/Sub and Persistence Configuration
Redis is deceptively simple to run in Docker. A single docker run redis gives you a working in-memory data store. But that simplicity hides significant operational concerns: persistence modes that trade durability for performance, memory limits that trigger silent data eviction, and clustering topologies that require careful container networking. This guide covers production-grade Redis deployment in Docker, from single-instance caching to multi-node clusters with high availability.
Basic Docker Setup
Start with a properly configured single-instance deployment:
services:
  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis-data:/data
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
    command: redis-server /usr/local/etc/redis/redis.conf
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          memory: 2G
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3
    sysctls:
      net.core.somaxconn: 1024

volumes:
  redis-data:
The sysctls setting raises the kernel's cap on the TCP listen backlog. Redis requests a backlog of 511 by default (tcp-backlog), but the kernel silently clamps it to net.core.somaxconn, which is 128 on many distributions, and Redis logs a startup warning when that happens.
Persistence: RDB vs AOF
Redis offers two persistence mechanisms, and choosing the right one (or combination) depends on your durability requirements.
RDB (Redis Database Snapshots)
RDB creates point-in-time snapshots of the dataset at configured intervals:
# redis.conf - RDB configuration
save 900 1 # Save if at least 1 key changed in 900 seconds
save 300 10 # Save if at least 10 keys changed in 300 seconds
save 60 10000 # Save if at least 10000 keys changed in 60 seconds
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data
# Stop accepting writes if RDB save fails (safety measure)
stop-writes-on-bgsave-error yes
RDB advantages: compact files, faster restarts, efficient for backups. RDB disadvantage: a crash loses everything written since the last snapshot (up to 15 minutes with the save points above).
AOF (Append-Only File)
AOF logs every write operation, providing much stronger durability guarantees:
# redis.conf - AOF configuration
appendonly yes
appendfilename "appendonly.aof"
appenddirname "appendonlydir"
# fsync policy:
# - always: fsync after every write (safest, slowest)
# - everysec: fsync once per second (good compromise)
# - no: let the OS decide (fastest, least safe)
appendfsync everysec
# Rewrite AOF when it grows to 100% of the size after last rewrite
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Hybrid format: start rewritten AOF files with an RDB preamble for faster loading
aof-use-rdb-preamble yes
The aof-use-rdb-preamble yes setting (the default since Redis 5) creates a hybrid format where the AOF file starts with an RDB snapshot followed by incremental AOF entries. This gives faster loading times while maintaining AOF's durability.
| Feature | RDB Only | AOF Only | RDB + AOF |
|---|---|---|---|
| Max data loss | Up to save interval | ~1 second (everysec) | ~1 second |
| Restart speed | Fast | Slower (replays log) | Fast (RDB preamble) |
| Disk usage | Low (compressed) | Higher (grows until rewrite) | Moderate |
| Backup simplicity | Copy dump.rdb | More complex | Copy RDB for snapshots |
| Best for | Caching only | Data that must survive restarts | Production (recommended) |
For production, run RDB and AOF together with aof-use-rdb-preamble yes. RDB provides fast backups and recovery, while AOF keeps potential data loss to about a second. When both are enabled, Redis recovers from the AOF file, since it is guaranteed to be the most complete.
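Putting the pieces together, a combined persistence fragment might look like the following (a sketch only; tune the save points and rewrite thresholds to your workload):

```conf
# redis.conf - combined RDB + AOF persistence (sketch)
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
appendonly yes
appendfsync everysec
aof-use-rdb-preamble yes
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
dir /data
```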
Memory Management
Redis stores everything in memory. Without proper limits, it will consume all available memory and get killed by the OOM killer or, worse, cause the host to swap, degrading performance catastrophically.
# redis.conf - Memory management
maxmemory 3gb # Leave headroom below container limit
# Eviction policies (what happens when maxmemory is reached):
# allkeys-lru - Evict least recently used keys (good for caching)
# volatile-lru - Evict LRU keys with TTL set
# allkeys-lfu - Evict least frequently used (Redis 4+, best for caching)
# volatile-lfu - Evict LFU keys with TTL set
# volatile-ttl - Evict keys with shortest TTL
# noeviction - Return errors on writes (safe for databases)
maxmemory-policy allkeys-lfu
# LFU tuning
lfu-log-factor 10 # Frequency counter logarithmic factor
lfu-decay-time 1 # Frequency counter decay time in minutes
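To build intuition for LRU versus LFU, here is a toy Python model (an illustration only; Redis uses approximated, sampled eviction rather than exact bookkeeping, and the helper names are mine):

```python
from collections import Counter, OrderedDict

def evict_lru(access_log, capacity):
    """Return the key an exact LRU cache would evict on first overflow."""
    cache = OrderedDict()
    for key in access_log:
        if key in cache:
            cache.move_to_end(key)   # refresh recency
        cache[key] = True
        if len(cache) > capacity:
            return next(iter(cache)) # least recently used key
    return None

def evict_lfu(access_log, capacity):
    """Return the key an exact LFU cache would evict on first overflow."""
    freq = Counter()
    for key in access_log:
        freq[key] += 1
        if len(freq) > capacity:
            return min(freq, key=lambda k: freq[k])  # least frequently used
    return None

log = ["a", "b", "a", "a", "c", "b", "d"]  # "d" overflows a 3-key cache
print(evict_lru(log, 3))  # evicts "a": used often, but not recently
print(evict_lfu(log, 3))  # evicts "c": used only once
```

The contrast shows why allkeys-lfu often beats allkeys-lru for caching: LRU throws away a hot key just because it was not touched recently, while LFU keeps it.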
Set maxmemory to at most 75% of your container's memory limit. Redis needs extra memory for client output buffers, the replication backlog, the AOF rewrite buffer, and allocator fragmentation. A container with a 4G limit should have maxmemory 3gb at most.
Monitor memory usage with:
# Check memory usage details
docker exec redis redis-cli INFO memory
# Key metrics to watch:
# used_memory - Total bytes allocated
# used_memory_rss - Resident set size (actual RAM used)
# mem_fragmentation_ratio - RSS / used_memory (> 1.5 means high fragmentation)
# maxmemory - Configured limit
# evicted_keys - Count of keys evicted due to maxmemory
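Since INFO output is plain key:value lines, deriving metrics like the fragmentation ratio in a script is straightforward. A sketch (the sample payload is invented; in practice, pipe in real output from redis-cli INFO memory):

```python
def parse_info(payload: str) -> dict:
    """Parse redis-cli INFO output into a dict, skipping comments and blanks."""
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        metrics[key] = value
    return metrics

sample = """# Memory
used_memory:1073741824
used_memory_rss:1825361100
maxmemory:3221225472
evicted_keys:0"""

m = parse_info(sample)
ratio = int(m["used_memory_rss"]) / int(m["used_memory"])
print(f"fragmentation ratio: {ratio:.2f}")  # 1.70, worth investigating
```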
Redis Sentinel for High Availability
Redis Sentinel provides automatic failover for Redis primary-replica setups. When the primary fails, Sentinel promotes a replica and reconfigures the others.
services:
  redis-master:
    image: redis:7-alpine
    container_name: redis-master
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - redis-master-data:/data
      - ./redis/master.conf:/usr/local/etc/redis/redis.conf:ro
    ports:
      - "127.0.0.1:6379:6379"

  redis-replica-1:
    image: redis:7-alpine
    container_name: redis-replica-1
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - redis-replica1-data:/data
      - ./redis/replica.conf:/usr/local/etc/redis/redis.conf:ro
    depends_on:
      - redis-master

  redis-replica-2:
    image: redis:7-alpine
    container_name: redis-replica-2
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - redis-replica2-data:/data
      - ./redis/replica.conf:/usr/local/etc/redis/redis.conf:ro
    depends_on:
      - redis-master

  sentinel-1:
    image: redis:7-alpine
    container_name: sentinel-1
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./redis/sentinel.conf:/usr/local/etc/redis/sentinel.conf
    depends_on:
      - redis-master
      - redis-replica-1
      - redis-replica-2

  sentinel-2:
    image: redis:7-alpine
    container_name: sentinel-2
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./redis/sentinel.conf:/usr/local/etc/redis/sentinel.conf

  sentinel-3:
    image: redis:7-alpine
    container_name: sentinel-3
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./redis/sentinel.conf:/usr/local/etc/redis/sentinel.conf

volumes:
  redis-master-data:
  redis-replica1-data:
  redis-replica2-data:
The sentinel.conf configuration (note that Sentinel rewrites this file at runtime to record its unique ID and observed state, so in practice give each sentinel its own writable copy rather than sharing one bind mount across all three):
# sentinel.conf
port 26379
# Monitoring by container hostname requires these two settings (Redis 6.2+)
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster your-redis-password
The 2 in the monitor line is the quorum: at least two sentinels must agree that the master is down before a failover can be triggered. With three sentinels, the setup tolerates the failure of any single sentinel.
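Two separate thresholds gate a failover: the configured quorum (how many sentinels must agree the master is objectively down) and a strict majority of all known sentinels (needed to authorize the failover itself). A simplified sketch of that arithmetic (function name is mine):

```python
def failover_possible(total_sentinels: int, reachable: int, quorum: int) -> bool:
    """Failover needs `quorum` sentinels agreeing the master is down,
    AND a strict majority of all known sentinels to authorize it."""
    majority = total_sentinels // 2 + 1
    return reachable >= quorum and reachable >= majority

# With 3 sentinels and quorum 2:
print(failover_possible(3, 3, 2))  # True: all sentinels healthy
print(failover_possible(3, 2, 2))  # True: survives one sentinel failure
print(failover_possible(3, 1, 2))  # False: a lone sentinel cannot fail over
```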
Redis Cluster
For datasets larger than a single server's memory, Redis Cluster shards data across multiple nodes automatically:
# redis-cluster.conf (per node)
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
maxmemory 2gb
maxmemory-policy allkeys-lfu
# Create a 6-node cluster (3 masters + 3 replicas)
docker exec redis-node-1 redis-cli --cluster create \
redis-node-1:6379 redis-node-2:6379 redis-node-3:6379 \
redis-node-4:6379 redis-node-5:6379 redis-node-6:6379 \
--cluster-replicas 1 --cluster-yes
Note: Redis Cluster requires at least three master nodes; for production, run six nodes (3 masters + 3 replicas) so every shard survives a node failure. Each master owns a share of the 16384 hash slots. Client libraries must support cluster mode to route each command to the node that owns the key's slot.
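Slot assignment is deterministic: the cluster spec defines HASH_SLOT = CRC16(key) mod 16384 using the XMODEM CRC16 variant, with a hash tag (the substring between the first { and the following }) overriding the key when present. A sketch in Python:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key, honoring {hash tags} as in the cluster spec."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:              # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, enabling multi-key operations:
print(key_slot("{user42}:cart") == key_slot("{user42}:profile"))  # True
```

This is why multi-key commands (MGET, transactions, Lua scripts) across a cluster require all keys to share a hash tag: otherwise the keys may live on different nodes.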
Pub/Sub Patterns
Redis pub/sub provides fire-and-forget messaging between services. Messages are not persisted, so subscribers must be connected when messages are published.
# Publisher (from any Redis client)
docker exec redis redis-cli PUBLISH notifications '{"user_id": 42, "event": "order_placed"}'
# Subscriber
docker exec -it redis redis-cli SUBSCRIBE notifications
# Pattern-based subscription
docker exec -it redis redis-cli PSUBSCRIBE "events.*"
For reliable messaging where messages must not be lost, use Redis Streams instead:
# Add to a stream
docker exec redis redis-cli XADD orders '*' user_id 42 product laptop amount 999
# Read from stream (consumer group for reliable processing)
docker exec redis redis-cli XGROUP CREATE orders orderprocessor '$' MKSTREAM
docker exec redis redis-cli XREADGROUP GROUP orderprocessor worker1 COUNT 10 BLOCK 5000 STREAMS orders '>'
# Acknowledge processed messages
docker exec redis redis-cli XACK orders orderprocessor 1684234567890-0
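Stream entry IDs such as 1684234567890-0 are milliseconds-sequence pairs, and they sort in exactly that order, which is what makes XRANGE queries and consumer-group bookkeeping work. A small parsing sketch (helper name is mine):

```python
def parse_stream_id(raw: str) -> tuple:
    """Split a Redis stream ID into (milliseconds, sequence) for comparison."""
    ms, _, seq = raw.partition("-")
    return int(ms), int(seq)

a = parse_stream_id("1684234567890-0")
b = parse_stream_id("1684234567890-1")  # same millisecond, next sequence
c = parse_stream_id("1684234567999-0")
print(a < b < c)  # True: tuples compare element-wise, matching Redis ordering
```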
Security Configuration
Redis has minimal security by default. For any deployment beyond local development, configure authentication and network restrictions:
# redis.conf - Security
requirepass your-strong-password-here
# ACL (Redis 6+): Create users with specific permissions
user default off
user appuser on >app-password ~app:* +@all -@admin -@dangerous
user readonly on >read-password ~* +@read +ping +info
# Disable dangerous commands
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG ""
rename-command DEBUG ""
# Network security: binding all interfaces is acceptable inside a container,
# provided the port is published only on 127.0.0.1 (as in the compose file)
bind 0.0.0.0
protected-mode yes
# TLS (Redis 6+)
tls-port 6380
tls-cert-file /tls/redis.crt
tls-key-file /tls/redis.key
tls-ca-cert-file /tls/ca.crt
tls-auth-clients optional
Prefer ACLs over the blanket requirepass approach. ACLs let you create per-application users with minimal permissions, following the principle of least privilege. A caching application does not need access to FLUSHALL.
Monitoring Redis
Add redis_exporter for Prometheus metrics:
  redis-exporter:
    image: oliver006/redis_exporter:v1.58.0
    container_name: redis-exporter
    restart: unless-stopped
    environment:
      REDIS_ADDR: redis://redis:6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    ports:
      - "127.0.0.1:9121:9121"
    depends_on:
      - redis
Key metrics and what they mean:
| Metric | Healthy Range | Action When Unhealthy |
|---|---|---|
| used_memory / maxmemory | < 80% | Increase maxmemory or review key TTLs |
| evicted_keys (rate) | Near zero | Memory pressure; increase capacity |
| connected_clients | < maxclients * 0.8 | Connection leak; check application pools |
| keyspace_hit_ratio | > 90% | Review caching strategy and key TTLs; the cache is ineffective |
| instantaneous_ops_per_sec | Varies by workload | Sudden drops indicate issues |
| rdb_last_bgsave_status | ok | Check disk space and permissions |
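Note that keyspace_hit_ratio is not a raw INFO field; derive it from keyspace_hits and keyspace_misses in INFO stats, treating an idle instance (zero lookups) as a special case rather than a 0% hit rate:

```python
def hit_ratio(hits: int, misses: int):
    """Cache hit ratio from INFO stats; None when there have been no lookups."""
    total = hits + misses
    if total == 0:
        return None          # no traffic yet is not the same as a cold cache
    return hits / total

print(hit_ratio(942_117, 58_203))  # ~0.94, healthy for a cache
print(hit_ratio(0, 0))             # None
```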
For quick diagnostics, the built-in INFO command is invaluable:
# Quick health check
docker exec redis redis-cli INFO server | head -20
docker exec redis redis-cli INFO memory
docker exec redis redis-cli INFO replication
docker exec redis redis-cli INFO stats
# Slow log - find commands taking > 10ms
docker exec redis redis-cli SLOWLOG GET 10
# Monitor all commands in real-time (use briefly, impacts performance)
docker exec redis redis-cli MONITOR
Backup Strategies
Redis backup is straightforward when persistence is configured:
# Trigger a manual RDB snapshot
docker exec redis redis-cli BGSAVE
# Wait for completion
docker exec redis redis-cli LASTSAVE
# Copy the RDB file
docker cp redis:/data/dump.rdb ./backups/redis_$(date +%Y%m%d_%H%M%S).rdb
# Automated backup script
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backups/redis"
mkdir -p "$BACKUP_DIR"
# Record the last completed save time, then trigger a new background save
LAST_SAVE=$(docker exec redis redis-cli LASTSAVE)
docker exec redis redis-cli BGSAVE
# LASTSAVE advances only once the background save has completed
while [ "$(docker exec redis redis-cli LASTSAVE)" == "$LAST_SAVE" ]; do
  sleep 1
done
# Copy and compress
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
docker cp redis:/data/dump.rdb "$BACKUP_DIR/dump_${TIMESTAMP}.rdb"
gzip "$BACKUP_DIR/dump_${TIMESTAMP}.rdb"
# Retain last 30 days
find "$BACKUP_DIR" -name "dump_*.rdb.gz" -mtime +30 -delete
When managing Redis alongside other containerized databases, platforms like usulnet provide centralized volume monitoring and container health checks, making it easier to verify that your Redis persistence files are being written correctly and that backup processes are not silently failing.
Production Checklist
Before running Redis in production Docker environments, verify each of these:
- Persistence: AOF with `appendfsync everysec` plus RDB snapshots, both writing to a named volume
- Memory: `maxmemory` set to 75% of the container limit, appropriate eviction policy selected
- Security: AUTH enabled, dangerous commands renamed or disabled, network binding restricted
- High availability: Sentinel (3 nodes) for failover or Redis Cluster for sharding
- Monitoring: redis_exporter for Prometheus, alerts on memory usage and eviction rates
- Backups: Automated RDB snapshots copied off-host, restore procedure tested
- Kernel tuning: `vm.overcommit_memory=1` and `net.core.somaxconn=1024` on the Docker host
- Resource limits: Docker memory limit set, CPU limits appropriate for workload
Redis is fast, but only when configured correctly. The defaults assume a development machine with unlimited memory and no durability requirements. Every one of these settings needs deliberate configuration for production.