A reverse proxy is the front door to your self-hosted infrastructure. It terminates TLS, routes requests to the correct backend container, and provides security features like rate limiting, authentication, and header manipulation. For Docker environments, the reverse proxy also needs to discover new containers automatically and provision SSL certificates without manual intervention. Four solutions dominate this space, each with a fundamentally different philosophy.

The Contenders

| Feature | Nginx | Traefik | Caddy | HAProxy |
|---|---|---|---|---|
| Language | C | Go | Go | C |
| Primary philosophy | Performance, flexibility | Docker-native, auto-discovery | Simplicity, auto-HTTPS | Load balancing, reliability |
| Configuration | Config files (nginx.conf) | Labels, YAML, TOML | Caddyfile or JSON | Config files (haproxy.cfg) |
| Auto-HTTPS (Let's Encrypt) | Via certbot or companion | Built-in | Built-in (default) | Via acme.sh or external |
| Docker auto-discovery | Via nginx-proxy companion | Native (Docker provider) | Via plugins or Caddyfile | No (manual or templates) |
| Hot reload | nginx -s reload | Automatic | Automatic (API or signal) | haproxy -sf |
| Dashboard | Nginx Plus only (paid) | Built-in web dashboard | Admin API | Built-in stats page |
| RAM usage (idle) | ~5-10 MB | ~50-80 MB | ~20-30 MB | ~5-10 MB |
| Market share | Dominant (33%+ of web) | Growing (Docker ecosystem) | Growing (simplicity appeal) | Strong (enterprise LB) |
| License | BSD-2-Clause | MIT | Apache-2.0 | GPL-2.0 / HAProxy Community |

Docker Integration

Nginx: The Manual Approach

Nginx itself does not discover Docker containers. You either write config files by hand or run nginx-proxy, a companion container that generates Nginx configuration from environment variables (such as VIRTUAL_HOST) set on your other containers:

# docker-compose.yml with nginx-proxy
version: "3.8"
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - nginx_certs:/etc/nginx/certs
      - nginx_vhost:/etc/nginx/vhost.d
      - nginx_html:/usr/share/nginx/html
    environment:
      - DEFAULT_HOST=example.com

  # Auto Let's Encrypt companion
  acme-companion:
    image: nginxproxy/acme-companion:latest
    container_name: acme-companion
    restart: unless-stopped
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - acme_state:/etc/acme.sh
    environment:
      - DEFAULT_EMAIL=admin@example.com

  # Example service (auto-discovered)
  myapp:
    image: myapp:latest
    environment:
      - VIRTUAL_HOST=app.example.com
      - LETSENCRYPT_HOST=app.example.com
    expose:
      - "8080"

volumes:
  nginx_certs:
  nginx_vhost:
  nginx_html:
  acme_state:

Manual Nginx Configuration

# /etc/nginx/conf.d/myapp.conf
upstream myapp_backend {
    server myapp:8080;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;
    add_header X-XSS-Protection "1; mode=block";  # deprecated; modern browsers ignore this header

    # Rate limiting (the "api" and "addr" zones must be declared in the http context)
    limit_req zone=api burst=20 nodelay;
    limit_conn addr 100;

    location / {
        proxy_pass http://myapp_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$server_name$request_uri;
}
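
The limit_req and limit_conn directives above reference shared-memory zones ("api" and "addr") that must be declared once at the http level, or Nginx will refuse to start. A minimal sketch; the zone sizes and rate here are illustrative, not recommendations:

```nginx
# /etc/nginx/nginx.conf (inside the http { } block)

# Allow 10 req/s sustained per client IP, tracked in a 10 MB zone named "api"
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

# Concurrent-connection tracking used by "limit_conn addr 100"
limit_conn_zone $binary_remote_addr zone=addr:10m;
```

Using $binary_remote_addr (rather than $remote_addr) keeps per-client state small, so a 10 MB zone tracks roughly 160,000 addresses.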

Traefik: Docker-Native Auto-Discovery

Traefik was designed from the ground up for Docker. It watches the Docker socket and automatically creates routes based on container labels:

# docker-compose.yml for Traefik
version: "3.8"
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    command:
      - "--api.dashboard=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--certificatesresolvers.letsencrypt.acme.email=admin@example.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
      - "--metrics.prometheus=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_letsencrypt:/letsencrypt
    labels:
      # Dashboard
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.dashboard.middlewares=auth"
      - "traefik.http.middlewares.auth.basicauth.users=admin:$$apr1$$..."

  # Example service - just add labels
  myapp:
    image: myapp:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
      - "traefik.http.services.myapp.loadbalancer.server.port=8080"
      # Middleware chain
      - "traefik.http.routers.myapp.middlewares=ratelimit,headers"
      - "traefik.http.middlewares.ratelimit.ratelimit.average=100"
      - "traefik.http.middlewares.ratelimit.ratelimit.burst=50"
      - "traefik.http.middlewares.headers.headers.stsSeconds=63072000"

volumes:
  traefik_letsencrypt:
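
The admin:$$apr1$$... value in the dashboard labels is ordinary htpasswd output with every "$" doubled so Compose does not expand it as a variable. One way to generate it without installing Apache's htpasswd, assuming openssl is available (the user, salt, and password below are placeholders):

```shell
# Generate an MD5-APR1 password hash, then double each "$" for docker-compose.
HASH=$(openssl passwd -apr1 -salt Xs8kQp2L 'changeme')
echo "admin:${HASH}" | sed -e 's/\$/\$\$/g'
```

The same escaping applies to any label value containing "$"; in Traefik's file provider (or an env file) no doubling is needed.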

Caddy: Simplicity and Auto-HTTPS

Caddy's defining feature is automatic HTTPS. Every site gets a TLS certificate by default with zero configuration:

# Caddyfile - the simplest reverse proxy configuration
app.example.com {
    reverse_proxy myapp:8080
}

wiki.example.com {
    reverse_proxy bookstack:80
}

git.example.com {
    reverse_proxy gitea:3000
}

vault.example.com {
    reverse_proxy vaultwarden:80

    # WebSocket support
    @websocket {
        path /notifications/hub
    }
    reverse_proxy @websocket vaultwarden:3012
}

# That's it. Caddy automatically:
# - Obtains Let's Encrypt certificates
# - Redirects HTTP to HTTPS
# - Renews certificates before expiry
# - Uses modern TLS defaults

# docker-compose.yml for Caddy
version: "3.8"
services:
  caddy:
    image: caddy:2
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
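
Two global options are worth setting before pointing real DNS at Caddy: an ACME account email, and the Let's Encrypt staging CA while testing, so failed attempts don't burn production rate limits. A sketch (the email value is a placeholder):

```
# Caddyfile global options block
{
    email admin@example.com
    # Uncomment while testing to avoid Let's Encrypt rate limits:
    # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

app.example.com {
    reverse_proxy myapp:8080
}
```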

Caddy with Docker Labels (caddy-docker-proxy)

# docker-compose.yml for Caddy with auto-discovery
version: "3.8"
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - caddy_data:/data
    labels:
      - caddy.email=admin@example.com

  # Auto-discovered service
  myapp:
    image: myapp:latest
    labels:
      - caddy=app.example.com
      - caddy.reverse_proxy={{upstreams 8080}}

HAProxy: Enterprise Load Balancing

HAProxy offers no Docker auto-discovery and no built-in ACME client; its strengths are mature health checking, fine-grained load balancing, and predictable performance:

# haproxy.cfg
global
    maxconn 50000
    log stdout format raw local0
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384
    ssl-default-bind-options prefer-client-ciphers no-sslv3 no-tlsv10 no-tlsv11

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httplog
    option dontlognull
    option forwardfor

frontend https
    bind *:443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }

    # ACLs for routing
    acl host_app hdr(host) -i app.example.com
    acl host_wiki hdr(host) -i wiki.example.com
    acl host_git hdr(host) -i git.example.com

    use_backend app_backend if host_app
    use_backend wiki_backend if host_wiki
    use_backend git_backend if host_git

backend app_backend
    # Load balancing (if multiple instances)
    balance roundrobin
    # Health check
    option httpchk GET /health
    http-check expect status 200
    server app1 myapp:8080 check inter 5s fall 3 rise 2

backend wiki_backend
    server wiki1 bookstack:80 check

backend git_backend
    server git1 gitea:3000 check

# Stats dashboard
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST

# docker-compose.yml for HAProxy
version: "3.8"
services:
  haproxy:
    image: haproxy:2.9
    container_name: haproxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8404:8404"  # Stats
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - ./certs:/etc/haproxy/certs:ro

  # Certificate management with acme.sh
  acme:
    image: neilpang/acme.sh
    container_name: acme
    volumes:
      - acme_data:/acme.sh
      - ./certs:/certs
    command: daemon

Performance Comparison

Benchmarks with wrk (10 concurrent connections, 30 seconds, proxying to a static backend):

| Metric | Nginx | Traefik | Caddy | HAProxy |
|---|---|---|---|---|
| Requests/sec (HTTP) | ~45,000 | ~28,000 | ~32,000 | ~50,000 |
| Requests/sec (HTTPS) | ~38,000 | ~22,000 | ~27,000 | ~42,000 |
| P99 latency | ~2 ms | ~5 ms | ~3 ms | ~1.5 ms |
| Memory (idle) | ~5 MB | ~60 MB | ~25 MB | ~5 MB |
| Memory (under load) | ~20 MB | ~120 MB | ~60 MB | ~15 MB |

For the vast majority of self-hosted environments, the performance difference between these proxies is irrelevant. At typical self-hosted traffic levels (hundreds to thousands of requests per minute), all four handle the load effortlessly. Choose based on operational needs, not raw performance.

Let's Encrypt Support

| Feature | Nginx | Traefik | Caddy | HAProxy |
|---|---|---|---|---|
| Built-in ACME | No (external tooling) | Yes | Yes (automatic) | No (external tooling) |
| HTTP-01 challenge | Via certbot | Yes | Yes | Via acme.sh |
| DNS-01 challenge | Via certbot + plugin | Yes (many providers) | Yes (many providers) | Via acme.sh + plugin |
| Wildcard certificates | Via certbot DNS | Yes (DNS challenge) | Yes (DNS challenge) | Via acme.sh DNS |
| Auto-renewal | Cron + certbot renew | Automatic | Automatic | Cron + acme.sh |
| Zero-downtime renewal | Requires reload | Yes | Yes | Requires reload |
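
For the "external tooling" columns (Nginx and HAProxy), auto-renewal in practice means a scheduled job plus a reload hook. An illustrative crontab entry for the certbot path (the schedule and hook command are assumptions, not requirements):

```
# Attempt renewal twice daily; certbot only renews certificates near expiry,
# and the deploy hook reloads Nginx only when a certificate actually changed.
0 3,15 * * *  certbot renew --quiet --deploy-hook "nginx -s reload"
```

Modern certbot packages often install an equivalent systemd timer automatically; check before adding a duplicate cron job.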

Middleware and Features

Traefik Middleware (label-based)

# Rate limiting
- "traefik.http.middlewares.ratelimit.ratelimit.average=100"
- "traefik.http.middlewares.ratelimit.ratelimit.burst=50"

# Basic authentication
- "traefik.http.middlewares.auth.basicauth.users=admin:$$hashed$$"

# IP allowlist (Traefik v3 renamed ipWhiteList to ipAllowList)
- "traefik.http.middlewares.ipallowlist.ipallowlist.sourcerange=192.168.1.0/24"

# Redirect (www to non-www)
- "traefik.http.middlewares.redirect-www.redirectregex.regex=^https://www\\.(.*)"
- "traefik.http.middlewares.redirect-www.redirectregex.replacement=https://$${1}"

# Compress responses
- "traefik.http.middlewares.compress.compress=true"

# Chain multiple middlewares
- "traefik.http.routers.myapp.middlewares=ratelimit,headers,compress"
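
The same middlewares can live in a dynamic configuration file instead of labels, which keeps auth hashes out of docker-compose.yml and avoids the $$ escaping. A sketch using Traefik's file provider (the file name is an assumption):

```yaml
# dynamic.yml, loaded via --providers.file.filename=/etc/traefik/dynamic.yml
http:
  middlewares:
    ratelimit:
      rateLimit:
        average: 100
        burst: 50
    auth:
      basicAuth:
        users:
          - "admin:$apr1$..."   # plain htpasswd output; no $$ doubling needed here
```

Routers defined via labels can reference these by name with the @file suffix, e.g. ratelimit@file.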

Caddy Middleware (Caddyfile directives)

app.example.com {
    # Rate limiting (via plugin)
    rate_limit {
        zone dynamic_zone {
            key {remote_host}
            events 100
            window 1m
        }
    }

    # Basic auth
    basicauth /admin/* {
        admin $2a$14$hashed_password
    }

    # IP filtering
    @blocked {
        not remote_ip 192.168.1.0/24
    }
    respond @blocked 403

    # Compression
    encode gzip zstd

    # Security headers
    header {
        Strict-Transport-Security "max-age=63072000"
        X-Content-Type-Options nosniff
        X-Frame-Options DENY
        -Server
    }

    reverse_proxy myapp:8080
}
Tip: For Docker environments with many containers, Traefik's label-based configuration is a significant operational advantage. Adding a new service only requires adding labels to its Docker Compose definition; no reverse proxy configuration files need to be edited or reloaded. This is especially valuable for teams using tools like usulnet to manage containers, as the reverse proxy configuration stays with the container definition.

Configuration Complexity

| Task | Nginx | Traefik | Caddy | HAProxy |
|---|---|---|---|---|
| Simple reverse proxy | 10 lines | 5 labels | 2 lines | 15 lines |
| Add new service | New config file + reload | Add labels to container | Add to Caddyfile + reload | Edit config + reload |
| SSL certificate | Certbot + config | Automatic (1 label) | Automatic (0 config) | acme.sh + config |
| WebSocket proxy | Explicit upgrade headers | Automatic | Automatic | Explicit configuration |
| Learning curve | Moderate (well documented) | Steep (concepts are different) | Low (intuitive) | Steep (enterprise features) |
Warning: Both Traefik and nginx-proxy require access to the Docker socket (/var/run/docker.sock), which grants effective root access to the host. In production, use a Docker socket proxy (like Tecnativa/docker-socket-proxy) to limit API access to read-only operations the reverse proxy actually needs.
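
A minimal sketch of that setup with Traefik as the consumer; the environment flags follow tecnativa/docker-socket-proxy's documented defaults, and image tags are illustrative:

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      - CONTAINERS=1   # allow read access to the /containers endpoints
      - POST=0         # deny all mutating requests (the default, made explicit)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--providers.docker.endpoint=tcp://socket-proxy:2375"
    # Note: no docker.sock mount on the Traefik container itself.
```

Keep the socket proxy on an internal network that is not exposed to the internet; it speaks the unauthenticated Docker API on port 2375.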

Which One Should You Choose?

  • Choose Nginx if you already know Nginx, need maximum performance, want the most documentation and community resources available, or need advanced features like Lua scripting. Best for experienced admins who prefer explicit configuration over auto-discovery.
  • Choose Traefik if you run many Docker containers and want automatic service discovery, SSL provisioning, and label-based configuration. Best for dynamic Docker environments where containers are frequently added, removed, or scaled.
  • Choose Caddy if you want the simplest possible setup with automatic HTTPS and minimal configuration. Best for small to medium deployments where simplicity and developer productivity matter more than advanced features.
  • Choose HAProxy if you need advanced load balancing, health checking, and maximum reliability. Best for high-availability setups, TCP load balancing (databases, mail), and environments where every millisecond of latency matters.

For most self-hosters starting out, Caddy is the best default choice. Its automatic HTTPS and minimal configuration eliminate an entire class of common errors. As your infrastructure grows, Traefik's Docker-native auto-discovery becomes increasingly valuable. Nginx and HAProxy shine in high-performance and enterprise scenarios.

Regardless of which reverse proxy you choose, managing the proxy container alongside all the backend services it routes to is simplified by container management platforms like usulnet, which provides a unified view of your entire Docker infrastructure including health status, logs, and resource usage.