Synchronous request-response architectures work until they do not. When sending an email takes 3 seconds, processing a payment takes 5 seconds, or generating a report takes 30 seconds, you cannot make users wait for those operations to complete within an HTTP request. Message queues decouple the producer of work from the consumer of work, allowing your application to accept requests instantly while processing tasks asynchronously in the background.

This guide covers three message queue systems that run well in Docker: RabbitMQ (the full-featured enterprise option), Redis Streams and Pub/Sub (when you already have Redis), and NATS (the lightweight high-performance option). We compare them across messaging patterns, operational complexity, and Docker deployment.

Why Message Queues

Before diving into implementations, here are the problems message queues solve:

  • Decoupling - Services communicate without knowing about each other. The order service publishes "order created" without knowing which services consume it.
  • Load leveling - During traffic spikes, messages queue up instead of overwhelming downstream services.
  • Reliability - If a worker crashes, the message stays in the queue and is redelivered to another worker.
  • Scalability - Add more workers to process messages faster, without changing the producer.
  • Async processing - Offload slow operations (email, PDF generation, image processing) from the request path.

Comparison Table

Feature             RabbitMQ                      Redis Streams             NATS
Protocol            AMQP 0.9.1                    Redis protocol            NATS protocol (text-based)
Persistence         Durable queues on disk        RDB/AOF (if configured)   JetStream (optional)
Message ordering    Per-queue FIFO                Per-stream guaranteed     Per-subject (JetStream)
Acknowledgments     Manual or auto ACK            XACK per consumer group   JetStream ACK
Dead letter queue   Built-in DLX                  Manual implementation     JetStream max deliveries
Management UI       Built-in (port 15672)         RedisInsight / CLI        NATS Dashboard / nats CLI
Memory usage        ~150MB baseline               Depends on Redis config   ~20MB baseline
Throughput          ~50K msg/s                    ~100K+ msg/s              ~10M+ msg/s (core NATS)
Best for            Complex routing, enterprise   Already using Redis       Microservices, low latency

RabbitMQ Docker Setup

services:
  rabbitmq:
    image: rabbitmq:3.13-management-alpine
    container_name: rabbitmq
    restart: unless-stopped
    ports:
      - "127.0.0.1:5672:5672"    # AMQP
      - "127.0.0.1:15672:15672"  # Management UI
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-admin}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS}
      RABBITMQ_DEFAULT_VHOST: /
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
      - ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro
      - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json:ro
    deploy:
      resources:
        limits:
          memory: 2G
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "check_port_connectivity"]
      interval: 10s
      timeout: 10s
      retries: 5

volumes:
  rabbitmq-data:

Pre-define queues and exchanges with definitions.json. Note that mounting the file alone is not enough: add load_definitions = /etc/rabbitmq/definitions.json to rabbitmq.conf so the broker imports it on startup.

{
  "queues": [
    {
      "name": "email.send",
      "vhost": "/",
      "durable": true,
      "auto_delete": false,
      "arguments": {
        "x-dead-letter-exchange": "dlx",
        "x-dead-letter-routing-key": "email.failed",
        "x-message-ttl": 86400000
      }
    },
    {
      "name": "email.failed",
      "vhost": "/",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "events",
      "vhost": "/",
      "type": "topic",
      "durable": true,
      "auto_delete": false
    },
    {
      "name": "dlx",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false
    }
  ],
  "bindings": [
    {
      "source": "events",
      "vhost": "/",
      "destination": "email.send",
      "destination_type": "queue",
      "routing_key": "order.created"
    },
    {
      "source": "dlx",
      "vhost": "/",
      "destination": "email.failed",
      "destination_type": "queue",
      "routing_key": "email.failed"
    }
  ]
}

RabbitMQ Messaging Patterns

// Go: Publish to RabbitMQ
func (p *Publisher) PublishOrderCreated(ctx context.Context, order *Order) error {
    body, err := json.Marshal(order)
    if err != nil {
        return fmt.Errorf("marshal order: %w", err)
    }

    return p.channel.PublishWithContext(ctx,
        "events",          // exchange
        "order.created",   // routing key
        false,             // mandatory
        false,             // immediate
        amqp.Publishing{
            ContentType:  "application/json",
            DeliveryMode: amqp.Persistent,  // Survives broker restart
            Body:         body,
            MessageId:    uuid.New().String(),
            Timestamp:    time.Now(),
        },
    )
}

// Go: Consume from RabbitMQ
func (c *Consumer) ConsumeEmails(ctx context.Context) error {
    msgs, err := c.channel.Consume(
        "email.send",    // queue
        "email-worker",  // consumer tag
        false,           // auto-ack (false = manual ack)
        false,           // exclusive
        false,           // no-local
        false,           // no-wait
        nil,
    )
    if err != nil {
        return err
    }

    for msg := range msgs {
        if err := c.processEmail(ctx, msg.Body); err != nil {
            log.Error("Failed to process email", "error", err)
            msg.Nack(false, true)  // Requeue on failure (requeue=false would route via the DLX instead)
            continue
        }
        msg.Ack(false)  // Acknowledge success
    }
    return nil
}

Redis Streams

If you already run Redis, Streams provide reliable messaging without adding another service:

# Create a consumer group
docker exec redis redis-cli XGROUP CREATE orders orderprocessor $ MKSTREAM

# Produce a message
docker exec redis redis-cli XADD orders '*' \
  event "order.created" \
  order_id "12345" \
  user_id "67890" \
  total "99.99"

# Consume as part of a group (blocks for 5 seconds)
docker exec redis redis-cli XREADGROUP GROUP orderprocessor worker1 \
  COUNT 10 BLOCK 5000 STREAMS orders '>'

# Acknowledge processed message
docker exec redis redis-cli XACK orders orderprocessor 1684234567890-0

# Check pending (unacknowledged) messages
docker exec redis redis-cli XPENDING orders orderprocessor - + 10

# Claim abandoned messages (consumer died without ACK)
docker exec redis redis-cli XCLAIM orders orderprocessor worker2 \
  60000 1684234567890-0  # Claim if idle > 60 seconds

Redis Streams support consumer groups, which distribute messages across multiple consumers and track acknowledgments. This gives you reliable work queues using infrastructure you likely already have.

Tip: Redis Streams have a maximum length feature (MAXLEN or MINID) that automatically trims old messages. Use XADD orders MAXLEN ~ 100000 '*' ... to keep approximately 100,000 messages, preventing unbounded memory growth.

NATS

NATS is designed for high-throughput, low-latency messaging in cloud-native environments. It is remarkably lightweight compared to RabbitMQ:

services:
  nats:
    image: nats:2.10-alpine
    container_name: nats
    restart: unless-stopped
    ports:
      - "127.0.0.1:4222:4222"   # Client connections
      - "127.0.0.1:8222:8222"   # Monitoring
    volumes:
      - nats-data:/data
      - ./nats/nats-server.conf:/etc/nats/nats-server.conf:ro
    command: -c /etc/nats/nats-server.conf

volumes:
  nats-data:

# nats-server.conf
listen: 0.0.0.0:4222
http: 0.0.0.0:8222

# JetStream for persistence and reliable delivery
jetstream {
  store_dir: /data/jetstream
  max_mem: 1G
  max_file: 10G
}

# Logging
debug: false
trace: false
logtime: true

# Authentication
authorization {
  users = [
    { user: "app", password: "$APP_NATS_PASSWORD" }
    { user: "worker", password: "$WORKER_NATS_PASSWORD" }
  ]
}

NATS Core vs JetStream

NATS has two modes: core NATS (fire-and-forget, no persistence) and JetStream (persistent, acknowledgments, replay):

// Go: NATS Core - fire and forget pub/sub
nc, _ := nats.Connect("nats://localhost:4222")

// Publish
nc.Publish("events.order.created", orderJSON)

// Subscribe (all instances get the message)
nc.Subscribe("events.order.*", func(msg *nats.Msg) {
    log.Info("Received event", "subject", msg.Subject)
})

// Queue subscribe (load balanced across group members)
nc.QueueSubscribe("tasks.email", "email-workers", func(msg *nats.Msg) {
    processEmail(msg.Data)
})

// Go: NATS JetStream - persistent, acknowledged delivery
js, _ := nc.JetStream()

// Create a stream
js.AddStream(&nats.StreamConfig{
    Name:       "ORDERS",
    Subjects:   []string{"orders.>"},
    Storage:    nats.FileStorage,
    Retention:  nats.WorkQueuePolicy,
    MaxAge:     24 * time.Hour,
    Replicas:   1,
})

// Publish with acknowledgment
if _, err := js.Publish("orders.created", orderJSON); err != nil {
    log.Error("Failed to publish", "error", err)
}

// Durable consumer
sub, _ := js.PullSubscribe("orders.created", "order-processor",
    nats.ManualAck(),
    nats.AckWait(30*time.Second),
    nats.MaxDeliver(5),  // Stop redelivering after 5 attempts (the server publishes an advisory)
)

// Fetch and process messages
msgs, _ := sub.Fetch(10, nats.MaxWait(5*time.Second))
for _, msg := range msgs {
    if err := processOrder(msg.Data); err != nil {
        msg.Nak()  // Negative acknowledgment, will be redelivered
    } else {
        msg.Ack()
    }
}

Why NATS? NATS uses ~20MB of memory versus RabbitMQ's ~150MB. It handles millions of messages per second in core mode. If you are building microservices that need lightweight service-to-service communication, NATS is purpose-built for this. usulnet uses NATS for its master-agent communication protocol, demonstrating its suitability for distributed system coordination.

Messaging Patterns

Work Queue (Competing Consumers)

Multiple workers consume from the same queue. Each message is processed by exactly one worker.

Pub/Sub (Fan-Out)

One publisher sends a message; all subscribers receive a copy. Used for event notification.

Request/Reply

Synchronous-style communication over async infrastructure. The sender publishes a request and waits for a response on a unique reply subject.

// NATS Request/Reply pattern
// Responder (service that answers requests)
nc.Subscribe("services.user.get", func(msg *nats.Msg) {
    user := findUser(string(msg.Data))
    msg.Respond(marshal(user))
})

// Requester (sends request and waits for response)
resp, err := nc.Request("services.user.get", []byte("user-123"), 5*time.Second)
if err != nil {
    log.Error("Request timed out", "error", err)
    return
}
var user User
json.Unmarshal(resp.Data, &user)

Dead Letter Queues

Messages that fail repeatedly need to go somewhere instead of being retried forever:

# RabbitMQ: automatic DLQ via x-dead-letter-exchange
# (Configured in the queue definition above)

# NATS JetStream: max delivery then advisory
# Set MaxDeliver(5) in consumer config

# Redis Streams: manual DLQ implementation
# After N failed processing attempts, XADD to a dead-letter stream
docker exec redis redis-cli XADD dead-letters '*' \
  original_stream "orders" \
  original_id "1684234567890-0" \
  error "payment gateway timeout" \
  attempts "5"

Monitoring

Each system has different monitoring approaches:

# RabbitMQ - built-in Prometheus endpoint
# Enable: rabbitmq-plugins enable rabbitmq_prometheus
# Scrape: http://rabbitmq:15692/metrics

# NATS - built-in monitoring endpoint
# http://nats:8222/varz  (server stats)
# http://nats:8222/jsz   (JetStream stats)
# http://nats:8222/connz  (connections)

# Redis Streams monitoring
docker exec redis redis-cli XINFO STREAM orders
docker exec redis redis-cli XINFO GROUPS orders
docker exec redis redis-cli XLEN orders

Scaling Workers

Docker Compose makes scaling message consumers trivial:

services:
  email-worker:
    image: myapp:latest
    command: ["./app", "worker", "--queue", "email"]
    deploy:
      replicas: 3     # Run 3 instances
      resources:
        limits:
          memory: 512M
    depends_on:
      - rabbitmq

# Scale dynamically
docker compose up -d --scale email-worker=5

Warning: When scaling consumers, ensure your message processing is idempotent. Network failures and redeliveries mean a message may be processed more than once. Design your consumers so that processing the same message twice produces the same result (e.g., use upserts instead of inserts, check for existing records before creating).

Choosing the Right Queue

  1. Already using Redis? Start with Redis Streams. No new infrastructure required.
  2. Need complex routing? RabbitMQ's exchange types (direct, topic, fanout, headers) handle sophisticated routing without application logic.
  3. Building microservices? NATS with JetStream provides both fire-and-forget speed and persistent reliable delivery, with minimal operational overhead.
  4. Need enterprise features? RabbitMQ offers management UI, federation, shovel, and plugins that Redis and NATS do not match.