Containers produce logs. Lots of logs. And unlike traditional servers where logs land in /var/log and stay put, container logs are as ephemeral as the containers that created them. When a container is removed, its logs vanish with it. When you are running dozens or hundreds of containers across multiple hosts, finding the right log line becomes a needle-in-a-haystack problem.

Effective container logging is not about installing a tool. It is about designing a logging strategy that captures the right data, routes it to the right place, and keeps it long enough to be useful. This guide covers the full spectrum, from Docker's built-in logging mechanisms to production-grade centralized logging stacks.

How Docker Logging Works

Before configuring anything, it helps to understand Docker's logging architecture. When a containerized process writes to stdout or stderr, Docker captures that output and routes it through a logging driver. The default logging driver (json-file) writes each line as a JSON object to a file on the host:

# Default log location for a container
/var/lib/docker/containers/<container-id>/<container-id>-json.log

# Each line is a JSON object
{"log":"2025-02-10T10:30:15.123Z INFO Starting server on port 8080\n",
 "stream":"stdout",
 "time":"2025-02-10T10:30:15.123456789Z"}

The docker logs command reads from this local storage, which is why it only works with drivers that keep logs on the host (json-file, local, and journald) and not with drivers that ship logs elsewhere.
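
For day-to-day debugging, a handful of docker logs flags cover most needs; for example (my-app is a placeholder container name):

# Tail the last 100 lines, follow new output, and prefix each line with a timestamp
docker logs --tail 100 --follow --timestamps my-app

# Show only output from the last 15 minutes
docker logs --since 15m my-app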

Docker Log Drivers

Docker supports multiple logging drivers that determine where log output goes. You configure them globally in /etc/docker/daemon.json or per-container:

Driver     | Destination             | docker logs support | Best For
json-file  | Local JSON files        | Yes                 | Development, small deployments
local      | Optimized local files   | Yes                 | Better performance than json-file
journald   | systemd journal         | Yes                 | Linux systems using systemd
syslog     | Syslog daemon           | No                  | Existing syslog infrastructure
fluentd    | Fluentd collector       | No                  | Complex routing and transformation
gelf       | Graylog (GELF format)   | No                  | Graylog-based logging stacks
awslogs    | Amazon CloudWatch       | No                  | AWS-native workloads
gcplogs    | Google Cloud Logging    | No                  | GCP-native workloads
none       | Nowhere (disabled)      | No                  | High-throughput apps that log internally

# Set the default logging driver globally
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "service,environment"
  }
}

# Override per container
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="app.{{.Name}}" \
  my-app:latest

# Override in docker-compose.yml
services:
  web:
    image: my-app:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

Log Rotation: Preventing Disk Disasters

The single most common Docker logging problem is running out of disk space. By default, the json-file driver has no size limit — a busy container can fill a disk in hours.
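
Before changing anything, it is worth checking how much space the existing logs already occupy; a quick check against the default json-file locations:

# List the largest container log files on this host
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -rh | head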

Critical: Always configure log rotation. This is the single most impactful logging change you can make. Set it globally in /etc/docker/daemon.json and it applies to all new containers.
# /etc/docker/daemon.json - Essential log rotation config
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

# Restart Docker to apply
sudo systemctl restart docker

This limits each container to 3 log files of 10 MB each (30 MB total per container). Adjust based on your disk capacity and the number of containers. For a host running 50 containers, this configuration uses at most 1.5 GB for logs.

Note that changing daemon.json only affects newly created containers. Existing containers keep their original logging configuration. To apply to existing containers, you need to recreate them.
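
To see which driver and options an existing container actually uses, and to recreate a Compose-managed service so it picks up the new defaults (my-app and web are placeholder names):

# Inspect the log driver and options of a running container
docker inspect --format '{{.HostConfig.LogConfig.Type}} {{json .HostConfig.LogConfig.Config}}' my-app

# Recreate a Compose service so it adopts the new global defaults
docker compose up -d --force-recreate web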

Centralized Logging with the ELK Stack

The ELK stack (Elasticsearch, Logstash, Kibana) is the classic solution for centralized logging. For Docker environments, the architecture typically looks like this: containers ship logs to Logstash (or directly to Elasticsearch), Elasticsearch indexes them, and Kibana provides search and analysis on top.

# docker-compose.yml - ELK Stack for Docker logging
version: "3.8"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.12.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"

  logstash:
    image: docker.elastic.co/logstash/logstash:8.12.0
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"     # Beats input
      - "12201:12201/udp" # GELF input
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.12.0
    ports:
      - "5601:5601"
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    depends_on:
      - elasticsearch

volumes:
  esdata:
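
Once the stack is up, a quick sanity check confirms Elasticsearch is reachable before pointing containers at it (the port matches the Compose file above):

# Cluster status should be green or yellow for a single-node setup
curl -s 'http://localhost:9200/_cluster/health?pretty'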

The Logstash pipeline configuration for receiving Docker logs via GELF:

# logstash/pipeline/docker.conf
input {
  gelf {
    port => 12201
    type => "docker"
  }
}

filter {
  # Parse JSON logs if the application outputs JSON
  if [message] =~ /^\{/ {
    json {
      source => "message"
      target => "parsed"
    }
  }

  # Docker's GELF driver already sends container metadata as flat
  # fields (container_name, container_id); nest them under [docker]
  # so dashboards can filter on them consistently
  mutate {
    rename => {
      "container_name" => "[docker][name]"
      "container_id"   => "[docker][id]"
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "docker-logs-%{+YYYY.MM.dd}"
  }
}

Then configure your application containers to use the GELF log driver:

services:
  web:
    image: my-app:latest
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"
        tag: "web-app"
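
With the GELF driver in place, you can confirm documents are arriving in the daily indices created by the Logstash output above:

# Count documents in the docker-logs indices
curl -s 'http://localhost:9200/docker-logs-*/_count?pretty'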

Grafana Loki: The Lightweight Alternative

Grafana Loki is a horizontally-scalable log aggregation system inspired by Prometheus. Unlike Elasticsearch, Loki does not index the full text of logs — it only indexes labels (metadata). This makes it dramatically cheaper to operate while still providing fast log searching for most use cases.

# docker-compose.yml - Loki + Grafana + Promtail
version: "3.8"

services:
  loki:
    image: grafana/loki:2.9.4
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki-data:/loki
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:2.9.4
    volumes:
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}

volumes:
  loki-data:
  grafana-data:
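
Grafana will not query Loki until it is added as a data source. Rather than clicking through the UI, you can provision it with a small file mounted into the Grafana container at /etc/grafana/provisioning/datasources (a minimal sketch; the file name loki.yml is arbitrary):

# grafana/provisioning/datasources/loki.yml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true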

The Promtail configuration to scrape Docker container logs:

# promtail-config.yml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'stream'
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'service'

Loki uses LogQL for querying, which will feel familiar if you know PromQL:

# Find error logs from the web service
{service="web"} |= "error"

# Parse JSON logs and filter by status code
{service="api"} | json | status >= 500

# Count errors per minute
rate({service="web"} |= "error" [1m])

# Top 10 endpoints by average request duration over the last 5 minutes
topk(10,
  avg_over_time({service="api"} | json | __error__="" | unwrap duration [5m]) by (endpoint)
)

Fluentd and Fluent Bit

Fluentd is a flexible log processor that can receive logs from Docker, transform them, and route them to any destination. Fluent Bit is a lighter-weight alternative designed for resource-constrained environments:

# fluent-bit.conf for Docker log collection
[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name              forward
    Listen            0.0.0.0
    Port              24224

[FILTER]
    Name              parser
    Match             *
    Key_Name          log
    Parser            json
    Reserve_Data      On

[FILTER]
    Name              modify
    Match             *
    Add               hostname ${HOSTNAME}
    Add               environment production

[OUTPUT]
    Name              loki
    Match             *
    Host              loki
    Port              3100
    Labels            job=docker,host=${HOSTNAME}
    Label_keys        $container_name,$service

[OUTPUT]
    Name              stdout
    Match             *
    Format            json_lines
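
If you prefer full Fluentd over Fluent Bit, a roughly equivalent configuration accepts the same forward input and ships to Elasticsearch; this is a sketch that assumes the fluent-plugin-elasticsearch plugin is installed:

# fluent.conf - minimal Fluentd equivalent
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match docker.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix docker-logs
</match>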

Use the Fluentd log driver in your containers:

# Per container
docker run -d \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt fluentd-async=true \
  --log-opt tag="docker.{{.Name}}" \
  my-app:latest

# Global default in daemon.json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "fluentd-async": "true",
    "tag": "docker.{{.Name}}"
  }
}

Structured Logging

On the application side, the most impactful change you can make is to output structured (JSON) logs. Structured logs are machine-parseable, searchable, and can carry rich metadata:

// Bad: unstructured log
console.log(`User ${userId} placed order ${orderId} for $${amount}`);
// Output: User 12345 placed order abc-789 for $99.99

// Good: structured JSON log
const log = {
  level: "info",
  event: "order_placed",
  userId: 12345,
  orderId: "abc-789",
  amount: 99.99,
  currency: "USD",
  timestamp: new Date().toISOString()
};
console.log(JSON.stringify(log));
// Output: {"level":"info","event":"order_placed","userId":12345,...}

Popular logging libraries for structured output:

  • Node.js: pino, winston (with JSON format), bunyan
  • Python: structlog, python-json-logger
  • Go: zerolog, zap, slog (standard library)
  • Java: Logback with JSON encoder, Log4j2 JSON layout

# Python example with structlog configured for JSON output
import structlog

# Render events as JSON so every log line is machine-parseable
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)

logger = structlog.get_logger()

logger.info("order_placed",
    user_id=12345,
    order_id="abc-789",
    amount=99.99,
    currency="USD",
)
# Output: {"event": "order_placed", "user_id": 12345, "level": "info", ...}

Multi-Host Logging Architecture

When running containers across multiple Docker hosts, you need a strategy for aggregating logs centrally. Here is a recommended architecture:

  1. Collection tier: Run Promtail or Fluent Bit on each Docker host as a lightweight collector
  2. Aggregation tier: Loki or Elasticsearch receives and indexes logs from all collectors
  3. Visualization tier: Grafana or Kibana provides search, dashboards, and alerting
# Deploy Promtail as a global service in Docker Swarm
docker service create \
  --name promtail \
  --mode global \
  --mount type=bind,source=/var/lib/docker/containers,target=/var/lib/docker/containers,readonly \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,readonly \
  --mount type=bind,source=/opt/promtail/config.yml,target=/etc/promtail/config.yml \
  grafana/promtail:2.9.4 \
  -config.file=/etc/promtail/config.yml
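
On hosts that are not part of a Swarm, the same collector can run as a plain container; this is the equivalent docker run invocation under the same assumptions (config file at /opt/promtail/config.yml):

# Run Promtail as a standalone collector on a single host
docker run -d --name promtail --restart unless-stopped \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /opt/promtail/config.yml:/etc/promtail/config.yml:ro \
  grafana/promtail:2.9.4 \
  -config.file=/etc/promtail/config.yml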

Logging Pitfalls to Avoid

  • Logging to files inside containers — Logs written to files instead of stdout are invisible to Docker. Use stdout/stderr, mount a volume for the log files, or symlink the files to the standard streams (a sketch follows this list).
  • No log rotation — The json-file driver with no max-size will consume all disk space eventually. Always configure rotation.
  • Blocking log drivers — Some log drivers (like fluentd without async) block the container if the log destination is unavailable. Use async modes (fluentd-async=true) or non-blocking delivery (--log-opt mode=non-blocking with a max-buffer-size) so a slow destination cannot stall the application.
  • Logging sensitive data — Be careful not to log passwords, tokens, or PII. Use structured logging to control exactly what gets logged.
  • Ignoring container metadata — Always tag logs with the container name, image, and host. Without metadata, logs from 50 containers are useless noise.
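
For applications that insist on writing log files (the first pitfall above), the common workaround is the one the official nginx image uses: symlink the files to the container's standard streams at build time.

# Dockerfile snippet: redirect nginx's file logs to stdout/stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log
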
Tip: usulnet provides built-in log viewing for all managed containers, making it easy to quickly check logs without SSH-ing into individual hosts. For production monitoring, pair this with a centralized logging stack for historical analysis and alerting.

Recommended Stack by Scale

Scale              | Recommended Stack                            | Estimated Resource Cost
1-10 containers    | json-file driver with rotation + docker logs | Zero additional resources
10-50 containers   | Loki + Promtail + Grafana                    | 1 GB RAM, 10 GB disk
50-200 containers  | Loki cluster or ELK stack                    | 4-8 GB RAM, 100+ GB disk
200+ containers    | ELK with dedicated nodes or managed service  | Significant (scales with data)

Conclusion

Docker logging does not have to be complicated, but it does have to be intentional. Start with the basics: configure log rotation globally, use structured JSON output from your applications, and ensure container metadata is attached to every log line. As your infrastructure grows, add centralized logging with Loki or ELK to gain search, dashboards, and alerting.

The key insight is that logging infrastructure should scale with your needs. Do not deploy a full ELK stack for five containers, and do not rely on docker logs for fifty. Match the solution to the problem, and remember that the best logging setup is one your team actually uses to diagnose issues.