The traditional network security model is a castle with a moat: a hardened perimeter (firewall) protecting a trusted internal network. Once you are inside the perimeter, everything trusts everything. This model fails catastrophically when an attacker breaches the perimeter — through a phishing attack, a compromised VPN credential, or a vulnerable public-facing service — because lateral movement inside the "trusted" network is unrestricted.

Zero trust eliminates the concept of a trusted network. Every request, whether it originates from inside or outside the network, must be authenticated, authorized, and encrypted. The network location of a device or service is never a factor in access decisions. For self-hosted infrastructure running Docker containers across one or more servers, applying zero trust principles significantly reduces the blast radius of any single compromise.

Core Zero Trust Principles

  1. Never trust, always verify. Every access request is fully authenticated and authorized, regardless of network location.
  2. Least privilege access. Users and services get the minimum access needed, for the minimum time needed.
  3. Assume breach. Design systems assuming the attacker is already inside your network. Minimize blast radius through segmentation.
  4. Verify explicitly. Use all available data points (identity, device health, location, behavior) for access decisions.
  5. Encrypt everything. All communication is encrypted, even within the "internal" network.

Traditional VPN vs Zero Trust

Aspect                  Traditional VPN                      Zero Trust
Access model            Network-level (IP-based)             Identity-based (per-user, per-service)
After authentication    Full network access                  Access only to authorized services
Lateral movement        Unrestricted once connected          Blocked by default
Compromise impact       Full network exposure                Limited to authorized services
Granularity             Network/subnet level                 Per-service, per-user, per-request
User experience         VPN client, split-tunneling issues   Transparent (identity-based)

A VPN says: "You proved your identity at the front door, so you are trusted everywhere inside." Zero trust says: "You proved your identity at the front door, but you still need to prove it at every room you want to enter."

Identity-Based Access

The foundation of zero trust is strong identity. Every user and every service must have a verifiable identity that is checked on every request.

For Users

  • Single Sign-On (SSO) with an identity provider (Authentik, Keycloak, Authelia)
  • Multi-factor authentication (MFA) on every service, not just the VPN
  • Short-lived sessions that require re-authentication
  • Device trust verification (managed device, up-to-date OS, endpoint protection)

For Services

  • Mutual TLS (mTLS) certificates for service-to-service authentication
  • Service accounts with API keys or OAuth2 client credentials
  • Short-lived certificates rotated automatically

# Example: Authelia configuration for forward auth
# Traefik middleware that forwards all requests to Authelia for verification
services:
  authelia:
    image: authelia/authelia:latest
    volumes:
      - ./authelia/configuration.yml:/config/configuration.yml:ro
    environment:
      TZ: UTC

  traefik:
    image: traefik:v3.0
    labels:
      # Forward auth middleware
      - "traefik.http.middlewares.authelia.forwardAuth.address=http://authelia:9091/api/verify?rd=https://auth.example.com/"
      - "traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader=true"
      - "traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Email"

  # Apply to protected services
  my-app:
    labels:
      - "traefik.http.routers.my-app.middlewares=authelia@docker"

Mutual TLS (mTLS)

Standard TLS only authenticates the server to the client. Mutual TLS (mTLS) also authenticates the client to the server. Both parties present certificates, and both verify the other's identity. This is the gold standard for service-to-service communication in zero trust architectures.

# Generate a CA for your internal services
# Using step-ca (Smallstep) for automated certificate management
step ca init --name "Internal CA" --provisioner admin \
  --dns ca.internal --address :443

# Issue a certificate for a service
step ca certificate web.internal web.crt web.key

# Configure Nginx with mTLS
# nginx.conf
server {
    listen 443 ssl;
    server_name api.internal;

    # Server certificate
    ssl_certificate /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    # Client certificate verification (mTLS)
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    # Only allow connections from services with valid client certs
    location / {
        proxy_pass http://backend:3000;
        # Pass the verified client identity upstream (nginx exposes the full
        # subject DN; extract just the CN with a map block if needed)
        proxy_set_header X-Client-DN $ssl_client_s_dn;
    }
}

Automated Certificate Management with step-ca

services:
  step-ca:
    image: smallstep/step-ca:latest
    volumes:
      - step_ca_data:/home/step
    ports:
      - "9000:9000"
    environment:
      DOCKER_STEPCA_INIT_NAME: "Internal CA"
      DOCKER_STEPCA_INIT_DNS_NAMES: "ca.internal,localhost"
      DOCKER_STEPCA_INIT_REMOTE_MANAGEMENT: "true"

  # Services bootstrap trust with the CA and enroll for certificates
  api:
    image: my-api:latest
    environment:
      STEP_CA_URL: https://step-ca:9000
      STEP_CA_FINGERPRINT: "abc123..."
    # Bootstrap the CA root, issue a cert, then keep it renewed in the
    # background with `step ca renew --daemon`
    command: >
      sh -c "step ca bootstrap --ca-url $$STEP_CA_URL --fingerprint $$STEP_CA_FINGERPRINT
      && step ca certificate api.internal /certs/api.crt /certs/api.key
      --provisioner admin --provisioner-password-file /run/secrets/ca_password
      && step ca renew /certs/api.crt /certs/api.key --daemon"

volumes:
  step_ca_data:
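
A renewal daemon renews well before expiry rather than racing the deadline; step's `--daemon` mode, for instance, renews after roughly two-thirds of the certificate's lifetime has elapsed. The scheduling logic is simple to sketch (the two-thirds fraction here is an assumption you can tune):

```python
def renewal_time(not_before: float, not_after: float,
                 fraction: float = 2 / 3) -> float:
    """When to renew: a fixed fraction into the validity window."""
    return not_before + fraction * (not_after - not_before)

def needs_renewal(now: float, not_before: float, not_after: float) -> bool:
    return now >= renewal_time(not_before, not_after)

# A 24-hour cert issued at t=0 is renewed around the 16-hour mark,
# leaving an 8-hour buffer for retries if the CA is unreachable.
```

Short lifetimes plus early renewal mean a stolen certificate is useful for hours, not years, and a broken renewal pipeline surfaces long before anything actually expires.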

Network Segmentation with Docker

Docker's network model supports zero trust through network isolation. By default, all containers on the same Docker network can communicate freely. You can enforce segmentation using multiple isolated networks:

services:
  # Public-facing reverse proxy
  traefik:
    image: traefik:v3.0
    networks:
      - public
      - frontend

  # Web application (only talks to traefik and API)
  web:
    image: my-web:latest
    networks:
      - frontend

  # API service (only talks to web and database)
  api:
    image: my-api:latest
    networks:
      - frontend
      - backend

  # Database (only talks to API, never to public)
  postgres:
    image: postgres:16
    networks:
      - backend

  # Redis cache (only backend services)
  redis:
    image: redis:7-alpine
    networks:
      - backend

networks:
  public:
    # Accessible from outside (port mapping)
  frontend:
    internal: true
    # Web and API can talk, but no external access
  backend:
    internal: true
    # API, database, and cache only
    # Completely isolated from public traffic

Tip: Use Docker's internal: true network option to create networks that have no outbound internet access. Backend services like databases should never have a route to the internet. If they need to pull updates, use a pull-through cache on an allowed network.
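
It is worth sanity-checking the segmentation: two containers can reach each other only if they share at least one network. A small Python sketch of that rule against the topology above (service and network names mirror the compose example):

```python
# Network membership, copied from the compose file above
networks = {
    "traefik":  {"public", "frontend"},
    "web":      {"frontend"},
    "api":      {"frontend", "backend"},
    "postgres": {"backend"},
    "redis":    {"backend"},
}

def can_reach(a: str, b: str) -> bool:
    """Docker containers communicate only over a shared network."""
    return bool(networks[a] & networks[b])

print(can_reach("web", "api"))        # True: shared 'frontend'
print(can_reach("web", "postgres"))   # False: no shared network
print(can_reach("traefik", "redis"))  # False: the proxy can't touch the cache
```

A compromised web container can talk to the API, but it has no route to the database at all: that is the blast-radius reduction segmentation buys you.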

Tailscale: Zero Trust Networking Made Simple

Tailscale builds a WireGuard-based mesh VPN that implements zero trust principles with minimal configuration. Every device gets a stable IP on the tailnet, and access is controlled by ACLs based on user identity.

# Install Tailscale on your server
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --advertise-routes=10.0.0.0/24

# Docker containers can access Tailscale via the host
# Or use the Tailscale sidecar container:
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: my-server
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - tailscale_data:/var/lib/tailscale
    environment:
      TS_AUTHKEY: ${TAILSCALE_AUTHKEY}
      TS_STATE_DIR: /var/lib/tailscale
      TS_USERSPACE: "false"
    network_mode: host

  # Services are only accessible via Tailscale IP
  my-app:
    image: my-app:latest
    ports:
      - "100.x.y.z:8080:8080"  # Bind to Tailscale IP only

volumes:
  tailscale_data:

Tailscale ACLs

// Tailscale ACL policy (tailscale admin console)
{
  "acls": [
    // Admins can access everything
    {
      "action": "accept",
      "src": ["group:admin"],
      "dst": ["*:*"]
    },
    // Developers can access web apps and SSH
    {
      "action": "accept",
      "src": ["group:dev"],
      "dst": [
        "tag:webserver:80,443",
        "tag:webserver:22"
      ]
    },
    // Monitoring can access metrics endpoints
    {
      "action": "accept",
      "src": ["tag:monitoring"],
      "dst": ["*:9090,9100,8080"]
    }
  ],
  "tagOwners": {
    "tag:webserver": ["group:admin"],
    "tag:monitoring": ["group:admin"]
  },
  "groups": {
    "group:admin": ["[email protected]"],
    "group:dev": ["[email protected]", "[email protected]"]
  }
}
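
Tailscale evaluates these rules with an implicit deny: a connection is allowed only if some rule's src and dst both match. The matching can be sketched like this (a simplified model for illustration; real ACL evaluation also expands groups and tags to devices, supports port ranges, and more):

```python
def dst_matches(dst: str, target: str, port: int) -> bool:
    """Match a dst entry like 'tag:webserver:80,443' or '*:*'."""
    host, ports = dst.rsplit(":", 1)
    if host not in ("*", target):
        return False
    return ports == "*" or str(port) in ports.split(",")

def allowed(acls: list[dict], src: str, target: str, port: int) -> bool:
    """Implicit deny: no matching accept rule means the connection is dropped."""
    return any(src in rule["src"]
               and any(dst_matches(d, target, port) for d in rule["dst"])
               for rule in acls)

acls = [
    {"action": "accept", "src": ["group:admin"], "dst": ["*:*"]},
    {"action": "accept", "src": ["group:dev"],
     "dst": ["tag:webserver:80,443", "tag:webserver:22"]},
]

print(allowed(acls, "group:dev", "tag:webserver", 443))    # True
print(allowed(acls, "group:dev", "tag:webserver", 5432))   # False: no rule
print(allowed(acls, "group:admin", "tag:webserver", 5432)) # True: *:*
```

The key property: there is no "allow all" fallback. A developer reaching for port 5432 is denied not because a rule blocks it, but because no rule permits it.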

Headscale: Self-Hosted Tailscale Control Plane

Headscale is an open-source, self-hosted implementation of the Tailscale control plane. It gives you the same WireGuard mesh VPN with identity-based access, but without depending on Tailscale's hosted service.

services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    restart: unless-stopped
    volumes:
      - headscale_data:/var/lib/headscale
      - ./headscale/config.yaml:/etc/headscale/config.yaml:ro
    ports:
      - "8080:8080"  # gRPC/HTTP API
      - "443:443"    # HTTPS
    command: serve

  # Optional: Headscale UI
  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    restart: unless-stopped
    ports:
      - "9443:443"

volumes:
  headscale_data:

# headscale/config.yaml
server_url: https://headscale.example.com
listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090
private_key_path: /var/lib/headscale/private.key
noise:
  private_key_path: /var/lib/headscale/noise_private.key
ip_prefixes:
  - 100.64.0.0/10
  - fd7a:115c:a1e0::/48
db_type: sqlite3
db_path: /var/lib/headscale/db.sqlite
dns:
  magic_dns: true
  base_domain: internal.example.com
  nameservers:
    - 1.1.1.1

# Register nodes with Headscale
headscale users create myuser
headscale preauthkeys create --user myuser --reusable --expiration 24h

# On the node:
tailscale up --login-server https://headscale.example.com \
  --authkey YOUR_PREAUTH_KEY

Cloudflare Tunnels: Zero Trust Without Open Ports

Cloudflare Tunnels (formerly Argo Tunnel) create encrypted tunnels from your infrastructure to Cloudflare's edge, allowing you to expose services without opening any inbound ports on your firewall.

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${CLOUDFLARE_TUNNEL_TOKEN}
    networks:
      - internal

  # Your services don't need any port mappings
  web:
    image: my-web-app:latest
    networks:
      - internal
    # No ports exposed! Only accessible via Cloudflare Tunnel

  api:
    image: my-api:latest
    networks:
      - internal

networks:
  internal:

# Tunnel configuration (Cloudflare dashboard or config file)
# ~/.cloudflared/config.yml
tunnel: my-tunnel-id
credentials-file: /root/.cloudflared/credentials.json

ingress:
  - hostname: app.example.com
    service: http://web:3000
  - hostname: api.example.com
    service: http://api:8080
    originRequest:
      noTLSVerify: false
  # Cloudflare Access policy controls who can reach these services
  - service: http_status:404

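cloudflared evaluates the ingress list top to bottom and routes each request to the first rule whose hostname matches; the final rule with no hostname is the required catch-all. The first-match behavior is easy to model (a simplified sketch; real matching also supports wildcard hostnames and path rules):

```python
# Mirrors the ingress rules in the config above
INGRESS = [
    {"hostname": "app.example.com", "service": "http://web:3000"},
    {"hostname": "api.example.com", "service": "http://api:8080"},
    {"service": "http_status:404"},  # catch-all: unmatched hosts get a 404
]

def route(hostname: str) -> str:
    """First matching rule wins; a rule without a hostname matches everything."""
    for rule in INGRESS:
        if rule.get("hostname") in (None, hostname):
            return rule["service"]
    raise ValueError("ingress list must end with a catch-all rule")

print(route("app.example.com"))   # http://web:3000
print(route("evil.example.com"))  # http_status:404
```

Because the catch-all returns a 404 rather than a default service, probing unknown hostnames through the tunnel reveals nothing.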
With Cloudflare Access (Zero Trust), you can require authentication before anyone reaches your tunnel endpoints:

  • Identity provider integration (Google, GitHub, Okta, SAML)
  • Per-application access policies
  • Device posture checks
  • Session duration limits
  • Geo-restrictions

Implementation Steps

Implementing zero trust for self-hosted infrastructure is an incremental process:

  1. Inventory your services and map communication patterns. Which services talk to which? Who accesses what?
  2. Segment your Docker networks. Move from a single flat network to multiple purpose-specific networks with internal: true.
  3. Deploy an identity provider (Authentik, Keycloak, or Authelia) and enforce SSO + MFA for all user-facing services.
  4. Set up a mesh VPN (Tailscale or Headscale) for administrative access. Stop exposing SSH and management interfaces to the public internet.
  5. Implement mTLS for service-to-service communication, starting with the most sensitive services.
  6. Close inbound ports. Use Cloudflare Tunnels or similar to expose public services without opening firewall ports.
  7. Monitor and audit. Log all access decisions and review them regularly. Use tools like CrowdSec for threat detection.
  8. Iterate. Zero trust is a continuous process, not a one-time project. Tighten policies as you gain confidence.
Warning: Do not try to implement zero trust all at once. Start with the highest-risk services (databases, management interfaces, SSH) and expand outward. A partial zero trust implementation is significantly better than no implementation. The perfect is the enemy of the good.

Zero trust networking transforms your security posture from "defend the perimeter and trust everything inside" to "verify everything, everywhere, every time." For self-hosted infrastructure, this means your Docker containers, management interfaces, and administrative access all benefit from identity-based access control, encryption, and continuous verification. The tools are mature, the patterns are well-established, and the incremental approach makes adoption practical for any scale of deployment.