An API gateway is the front door to your microservices. Instead of exposing every service directly to the internet, you expose a single gateway that handles cross-cutting concerns: authentication, rate limiting, request routing, load balancing, SSL termination, and monitoring. Without a gateway, every service must implement these features independently, leading to inconsistency and duplicated effort.

This guide compares three API gateway approaches in Docker: Kong (full-featured, plugin-based), Traefik (Docker-native with automatic service discovery), and custom Nginx-based solutions (lightweight and transparent). Each includes complete Docker Compose examples you can deploy immediately.

The API Gateway Pattern

An API gateway sits between clients and backend services, providing a unified entry point:

| Concern | Without Gateway | With Gateway |
|---|---|---|
| SSL/TLS | Each service manages its own certificates | Gateway handles all SSL termination |
| Authentication | Each service validates tokens | Gateway validates once, forwards identity |
| Rate limiting | Implemented per-service or not at all | Centralized, consistent policies |
| Logging/Monitoring | Scattered across services | Centralized access logging |
| Service discovery | Clients must know every service URL | Gateway routes based on path/host |
| Load balancing | External load balancer needed | Built into the gateway |

Kong Gateway

Kong is the most feature-rich open-source API gateway. It is built on Nginx/OpenResty and supports hundreds of plugins for authentication, rate limiting, transformation, logging, and more.

services:
  kong-database:
    image: postgres:16-alpine
    container_name: kong-db
    environment:
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: ${KONG_DB_PASSWORD}
      POSTGRES_DB: kong
    volumes:
      - kong-db-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U kong"]
      interval: 10s
      timeout: 5s
      retries: 5

  kong-migration:
    image: kong:3.6
    command: kong migrations bootstrap
    depends_on:
      kong-database:
        condition: service_healthy
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: ${KONG_DB_PASSWORD}

  kong:
    image: kong:3.6
    container_name: kong
    restart: unless-stopped
    depends_on:
      kong-migration:
        condition: service_completed_successfully
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: ${KONG_DB_PASSWORD}
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
      KONG_ADMIN_GUI_URL: http://localhost:8002
    ports:
      - "80:8000"      # Proxy
      - "443:8443"     # Proxy SSL
      - "127.0.0.1:8001:8001"  # Admin API
      - "127.0.0.1:8002:8002"  # Admin GUI
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 10s
      retries: 5

volumes:
  kong-db-data:
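The compose file reads the database password from the environment. One quick way to provide it is an `.env` file next to the compose file (a convention Docker Compose picks up automatically); the filename and key are assumptions matching the `${KONG_DB_PASSWORD}` substitution above:

```shell
# Append a randomly generated password (48 hex chars) to .env;
# docker compose substitutes it into ${KONG_DB_PASSWORD} at deploy time
echo "KONG_DB_PASSWORD=$(openssl rand -hex 24)" >> .env
```

Keep `.env` out of version control so the credential never lands in your repository.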

Configuring Kong Routes and Services

# Add an upstream service
curl -i -X POST http://localhost:8001/services/ \
  --data name=user-service \
  --data url=http://user-api:8080

# Add a route to the service
curl -i -X POST http://localhost:8001/services/user-service/routes \
  --data 'paths[]=/api/users' \
  --data strip_path=true

# Enable rate limiting plugin
curl -i -X POST http://localhost:8001/services/user-service/plugins \
  --data name=rate-limiting \
  --data config.minute=100 \
  --data config.policy=local

# Enable JWT authentication
curl -i -X POST http://localhost:8001/services/user-service/plugins \
  --data name=jwt

# Enable request/response logging
curl -i -X POST http://localhost:8001/services/user-service/plugins \
  --data name=file-log \
  --data config.path=/tmp/api-access.log

# Enable CORS
curl -i -X POST http://localhost:8001/services/user-service/plugins \
  --data name=cors \
  --data config.origins=https://myapp.com \
  --data config.methods=GET,POST,PUT,DELETE \
  --data config.headers=Authorization,Content-Type

Tip: Kong also supports declarative configuration via YAML files (DB-less mode), which is better for version-controlled infrastructure. Set KONG_DATABASE=off and mount a kong.yml file instead of using the Admin API.

Kong DB-less Mode

# kong.yml - Declarative configuration
_format_version: "3.0"

services:
  - name: user-service
    url: http://user-api:8080
    routes:
      - name: user-routes
        paths: ["/api/users"]
        strip_path: true
    plugins:
      - name: rate-limiting
        config:
          minute: 100
          policy: local
      - name: key-auth
        config:
          key_names: [apikey]

  - name: product-service
    url: http://product-api:8080
    routes:
      - name: product-routes
        paths: ["/api/products"]
        strip_path: true

# Note: Kong does not expand ${VAR} placeholders in declarative config
# itself; render this file first (e.g. with envsubst) or use Kong's
# environment-variable vault references rather than committing real keys.
consumers:
  - username: mobile-app
    keyauth_credentials:
      - key: ${MOBILE_API_KEY}
  - username: web-frontend
    keyauth_credentials:
      - key: ${WEB_API_KEY}
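Running the file above in DB-less mode needs only a single container: no Postgres, no migration job. A minimal compose sketch (the mount path is an assumption; `KONG_DECLARATIVE_CONFIG` must point at wherever you mount the file):

```yaml
services:
  kong:
    image: kong:3.6
    restart: unless-stopped
    environment:
      KONG_DATABASE: "off"
      KONG_DECLARATIVE_CONFIG: /kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
    volumes:
      - ./kong.yml:/kong/declarative/kong.yml:ro
    ports:
      - "80:8000"
      - "443:8443"
```

In this mode the Admin API becomes read-only; all changes go through the YAML file and a restart or `/config` reload.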

Traefik (Docker-Native)

Traefik excels in Docker environments because it discovers services automatically via Docker labels. No manual route configuration needed:

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    command:
      - --api.dashboard=true
      - --api.insecure=true  # dashboard over plain HTTP; keep its port bound to localhost
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.httpchallenge=true
      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
      - --certificatesresolvers.letsencrypt.acme.email=admin@example.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:8080:8080"  # Dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt

  # Services declare their own routing via labels
  user-api:
    image: myorg/user-api:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.user-api.rule=Host(`api.example.com`) && PathPrefix(`/users`)
      - traefik.http.routers.user-api.entrypoints=websecure
      - traefik.http.routers.user-api.tls.certresolver=letsencrypt
      - traefik.http.services.user-api.loadbalancer.server.port=8080
      # Rate limiting middleware
      - traefik.http.middlewares.user-ratelimit.ratelimit.average=100
      - traefik.http.middlewares.user-ratelimit.ratelimit.burst=50
      - traefik.http.routers.user-api.middlewares=user-ratelimit

  product-api:
    image: myorg/product-api:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.product-api.rule=Host(`api.example.com`) && PathPrefix(`/products`)
      - traefik.http.routers.product-api.entrypoints=websecure
      - traefik.http.routers.product-api.tls.certresolver=letsencrypt
      - traefik.http.services.product-api.loadbalancer.server.port=8080

volumes:
  letsencrypt:

Traefik's Docker integration means new services are automatically discovered when they start. Add the right labels to any container and Traefik routes traffic to it within seconds, with automatic SSL via Let's Encrypt.
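To see that claim in action, here is a sketch of adding one more backend; the service name, image, and path prefix are hypothetical, and nothing in the Traefik service itself needs to change:

```yaml
  order-api:
    image: myorg/order-api:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.order-api.rule=Host(`api.example.com`) && PathPrefix(`/orders`)
      - traefik.http.routers.order-api.entrypoints=websecure
      - traefik.http.routers.order-api.tls.certresolver=letsencrypt
      - traefik.http.services.order-api.loadbalancer.server.port=8080
```

Scaling works the same way: `docker compose up -d --scale order-api=3` gives Traefik three replicas to load-balance across, with no extra configuration.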

Traefik Middleware Stack

# Compose labels for common middleware
labels:
  # Basic auth
  - traefik.http.middlewares.admin-auth.basicauth.users=admin:$$apr1$$xyz...

  # IP whitelist
  - traefik.http.middlewares.internal-only.ipallowlist.sourcerange=10.0.0.0/8,172.16.0.0/12

  # Headers
  - traefik.http.middlewares.security-headers.headers.stsSeconds=31536000
  - traefik.http.middlewares.security-headers.headers.contentTypeNosniff=true
  - traefik.http.middlewares.security-headers.headers.frameDeny=true

  # Circuit breaker
  - traefik.http.middlewares.cb.circuitbreaker.expression=LatencyAtQuantileMS(50.0) > 1000

  # Chain middleware
  - traefik.http.routers.myservice.middlewares=security-headers,user-ratelimit
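The doubled `$$` in the basicauth label above is Compose escaping, not part of the hash itself. A sketch of generating a correctly escaped entry (username and password are placeholders):

```shell
# openssl emits an htpasswd-style $apr1$ hash; sed doubles each "$"
# so Docker Compose does not treat it as variable interpolation
HASH=$(openssl passwd -apr1 's3cret' | sed 's/\$/$$/g')
echo "traefik.http.middlewares.admin-auth.basicauth.users=admin:${HASH}"
```

Paste the printed line into your service's labels; Compose strips the doubled dollars back out before Traefik sees the value.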

Custom Nginx API Gateway

For teams that want full control, do not need Kong's plugin ecosystem, and can live without Traefik's auto-discovery, Nginx works as a lightweight, transparent API gateway:

services:
  api-gateway:
    image: nginx:1.25-alpine
    container_name: api-gateway
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/gateway.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    depends_on:
      - user-api
      - product-api
      - order-api

# nginx/gateway.conf
worker_processes auto;
events { worker_connections 4096; }

http {
    # Rate limiting zones
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/m;
    limit_req_zone $http_x_api_key zone=key_limit:10m rate=1000r/m;

    # Upstream definitions with health checks
    upstream user_service {
        server user-api:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    upstream product_service {
        server product-api:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    upstream order_service {
        server order-api:8080 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }

    server {
        listen 80;
        server_name api.example.com;

        # stub_status for the Prometheus exporter (see Monitoring below);
        # a server-level "return" would redirect this path too, so the
        # redirect gets its own location block
        location = /nginx_status {
            stub_status;
            allow 172.16.0.0/12;  # Docker bridge networks only
            deny all;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        http2 on;  # "listen ... http2" is deprecated since nginx 1.25.1
        server_name api.example.com;

        ssl_certificate /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        # Security headers
        add_header X-Content-Type-Options nosniff always;
        add_header X-Frame-Options DENY always;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

        # API key validation: nginx routes each request to the longest
        # matching prefix, so a check in a bare "location /api/" block
        # would be bypassed by the more specific locations below.
        # Repeat the check inside each API location instead.

        # User service
        location /api/users {
            if ($http_x_api_key = "") {
                return 401 '{"error": "API key required"}';
            }
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://user_service/users;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 5s;
            proxy_read_timeout 30s;
        }

        # Product service
        location /api/products {
            if ($http_x_api_key = "") {
                return 401 '{"error": "API key required"}';
            }
            limit_req zone=api_limit burst=50 nodelay;
            proxy_pass http://product_service/products;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Order service (same zone, lower burst allowance)
        location /api/orders {
            if ($http_x_api_key = "") {
                return 401 '{"error": "API key required"}';
            }
            limit_req zone=api_limit burst=10 nodelay;
            proxy_pass http://order_service/orders;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Health check endpoint
        location /health {
            access_log off;
            default_type application/json;
            return 200 '{"status": "healthy"}';
        }
    }
}

Comparison: Which Gateway to Choose

| Criteria | Kong | Traefik | Custom Nginx |
|---|---|---|---|
| Setup complexity | High (needs database) | Low (Docker labels) | Medium (manual config) |
| Docker integration | Manual configuration | Automatic via labels | Manual upstream config |
| Plugin ecosystem | Extensive (100+ plugins) | Built-in middleware | Nginx modules |
| SSL management | Manual or plugin | Automatic Let's Encrypt | Manual or certbot |
| Performance | High (Nginx-based) | Good (Go-based) | Highest (pure Nginx) |
| Memory usage | ~200MB + database | ~50-100MB | ~10-30MB |
| Best for | Enterprise, API-first | Docker/Kubernetes native | Simple setups, full control |

Recommendation: Use Traefik if you are already running Docker and want automatic SSL and service discovery. Use Kong if you need advanced API management features (developer portal, analytics, plugin marketplace). Use custom Nginx if you want minimal overhead and full control over every aspect of the configuration.

Circuit Breaker Pattern

Circuit breakers stop routing traffic to an unhealthy backend, so failures in one service do not cascade into the gateway and the other services behind it:

# Traefik circuit breaker (via labels)
labels:
  - traefik.http.middlewares.cb.circuitbreaker.expression=LatencyAtQuantileMS(50.0) > 1000 || NetworkErrorRatio() > 0.30
  - traefik.http.middlewares.cb.circuitbreaker.checkperiod=10s
  - traefik.http.middlewares.cb.circuitbreaker.fallbackduration=30s
  - traefik.http.middlewares.cb.circuitbreaker.recoveryduration=60s

Monitoring API Gateways

# Kong: Built-in Prometheus plugin
curl -X POST http://localhost:8001/plugins \
  --data name=prometheus

# Metrics at: http://kong:8001/metrics

# Traefik: Built-in Prometheus metrics
# Add to Traefik command:
# --metrics.prometheus=true
# --metrics.prometheus.addEntryPointsLabels=true
# --metrics.prometheus.addServicesLabels=true
# Metrics at: http://traefik:8080/metrics

# Nginx: Use nginx-prometheus-exporter (the scrape URI requires a
# stub_status location in the gateway config at that path)
services:
  nginx-exporter:
    image: nginx/nginx-prometheus-exporter:1.1
    command: --nginx.scrape-uri=http://api-gateway:80/nginx_status
    ports:
      - "9113:9113"

Regardless of which API gateway you choose, monitoring it is essential. Platforms like usulnet can track the health and resource usage of your gateway container alongside the backend services it routes to, giving you a unified view of your entire request path from gateway to application to database.

Gateway Selection Checklist

  1. How many services? Under 5 services: Nginx or Traefik. Over 10: Kong or Traefik.
  2. Need auto-SSL? Traefik has the best Let's Encrypt integration.
  3. Running Docker Compose? Traefik's label-based discovery is purpose-built for this.
  4. Need an API developer portal? Kong Enterprise has this built in.
  5. Minimal resources? Custom Nginx uses the least memory.
  6. Planning Kubernetes migration? Both Kong and Traefik have Kubernetes Ingress controllers.