Docker Container Security Hardening: 15 Essential Practices

Containers provide process isolation, but isolation is not security. A default Docker container runs with more privileges than most applications need, and a compromised container can be a stepping stone to compromising the host system. Security hardening is the practice of reducing the attack surface by removing unnecessary capabilities, restricting access, and implementing defense-in-depth measures.

These 15 practices are ordered from the most fundamental (and easiest to implement) to the more advanced. Even implementing the first five will significantly improve your container security posture.

1. Run Containers as Non-Root Users

By default, containers run as root. If an attacker exploits a vulnerability in your application, they gain root inside the container, and because container root maps to host root unless user namespace remapping is enabled, certain misconfigurations can turn that into full host compromise.

# In your Dockerfile, create and switch to a non-root user
FROM node:20-alpine

# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Set ownership of the application directory
WORKDIR /app
COPY --chown=appuser:appgroup . .

# Switch to non-root user
USER appuser

CMD ["node", "server.js"]

At runtime, you can also enforce non-root execution:

# Force a specific user at runtime
docker run --user 1000:1000 myapp

# In Docker Compose
services:
  app:
    image: myapp
    user: "1000:1000"

2. Use Read-Only Filesystems

Making the container filesystem read-only prevents attackers from modifying binaries, installing malware, or creating persistence mechanisms.

# Run with read-only root filesystem
docker run --read-only myapp

# Allow specific writable directories using tmpfs
docker run --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=100m \
  --tmpfs /var/run:rw,noexec,nosuid \
  myapp

# In Docker Compose
services:
  app:
    image: myapp
    read_only: true
    tmpfs:
      - /tmp:size=100m
      - /var/run

Tip: Some applications write to unexpected locations. Test your application with --read-only first and identify which directories need to be writable. Add them as tmpfs mounts or volume mounts as needed.
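Putting the pieces together, a compose sketch that keeps the root filesystem read-only while still giving the app scratch space and one persistent directory (service, path, and volume names are illustrative):

```yaml
services:
  app:
    image: myapp
    read_only: true
    tmpfs:
      - /tmp:size=100m        # scratch space, wiped on restart
    volumes:
      - app-data:/app/data    # the only persistent, writable path

volumes:
  app-data:
```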

3. Drop All Linux Capabilities and Add Only What's Needed

Linux capabilities divide root privileges into distinct units. Docker grants a default set of capabilities to containers, many of which are unnecessary. The secure approach is to drop all capabilities and add back only what your application requires.

# Drop all capabilities, add only NET_BIND_SERVICE (for binding to ports below 1024)
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp

# Common capabilities and when you need them:
# NET_BIND_SERVICE — Bind to privileged ports (<1024)
# CHOWN           — Change file ownership
# SETUID/SETGID   — Change user/group IDs
# DAC_OVERRIDE    — Bypass file permission checks
# SYS_PTRACE      — Debug/trace processes (needed by some monitoring tools)

In Docker Compose:

services:
  web:
    image: nginx
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - CHOWN
      - SETUID
      - SETGID

4. Apply Seccomp Profiles

Seccomp (Secure Computing Mode) filters which system calls a container can make to the kernel. Docker applies a default seccomp profile that blocks about 44 of the 300+ syscalls, but you can create a more restrictive custom profile.

# Run with the default seccomp profile (applied automatically)
docker run myapp

# Run with a custom seccomp profile
docker run --security-opt seccomp=custom-profile.json myapp

# Confirm seccomp is among the daemon's security options
docker info --format '{{.SecurityOptions}}'

A minimal custom seccomp profile for a Node.js application:

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": [
        "accept", "accept4", "access", "bind", "brk", "clock_getres",
        "clock_gettime", "clone", "close", "connect", "dup", "dup2",
        "epoll_create1", "epoll_ctl", "epoll_wait", "eventfd2",
        "execve", "exit", "exit_group", "fchmod", "fchown", "fcntl",
        "fstat", "futex", "getcwd", "getdents64", "getegid", "geteuid",
        "getgid", "getpid", "getppid", "getuid", "ioctl", "listen",
        "lseek", "madvise", "mmap", "mprotect", "munmap", "nanosleep",
        "open", "openat", "pipe", "pipe2", "poll", "pread64", "pwrite64",
        "read", "readlink", "recvfrom", "recvmsg", "rename", "rt_sigaction",
        "rt_sigprocmask", "rt_sigreturn", "sendmsg", "sendto", "set_tid_address",
        "setgroups", "setsockopt", "shutdown", "socket", "stat", "uname",
        "unlink", "wait4", "write", "writev"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

Warning: Never run containers with --security-opt seccomp=unconfined in production. This disables all syscall filtering and significantly increases the attack surface.
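A malformed profile makes `docker run` fail at container start, so it is worth validating the JSON before deploying. A minimal sketch (the file path and syscall list are illustrative):

```shell
# Write a tiny deny-by-default seccomp profile and validate it.
cat > /tmp/seccomp-min.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    { "names": ["read", "write", "exit_group"], "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF

# Catch syntax errors before Docker does:
python3 -m json.tool /tmp/seccomp-min.json > /dev/null \
  && echo "profile JSON is valid"
```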

5. Enable AppArmor or SELinux Profiles

AppArmor and SELinux provide mandatory access control (MAC) that limits what files and resources a container process can access, even if running as root.

# Run with the default AppArmor profile
docker run myapp
# Docker automatically applies the "docker-default" AppArmor profile

# Run with a custom AppArmor profile
docker run --security-opt apparmor=my-custom-profile myapp

# Check which AppArmor profile is active
docker inspect --format='{{.AppArmorProfile}}' mycontainer

For SELinux-based systems (RHEL, CentOS, Fedora):

# Run with an SELinux type label (container_t is the standard confined type)
docker run --security-opt label=type:container_t myapp

# Disable SELinux label (not recommended)
docker run --security-opt label=disable myapp

6. Use Minimal Base Images

Every binary in your container image is a potential attack vector. Minimize the attack surface by using the smallest base image possible.

# Bad: Full Ubuntu image (~77 MB, includes apt, bash, coreutils, etc.)
FROM ubuntu:22.04

# Better: Alpine-based image (~5 MB)
FROM node:20-alpine

# Best: Distroless image (no shell, no package manager)
FROM gcr.io/distroless/nodejs20-debian12

# For compiled languages: scratch (an empty base image)
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]

Distroless images are particularly effective because they contain no shell, no package manager, and no debugging utilities. An attacker who gains code execution inside the container has no shell to spawn and no tools to download or run for further exploitation.
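A sketch of a distroless Node.js build (assumes a standard package.json/server.js layout; the distroless nodejs image's entrypoint is the node binary itself, so CMD is just the script):

```dockerfile
# Build stage: full toolchain for installing dependencies
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: no shell, no package manager
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=build /app /app
WORKDIR /app
CMD ["server.js"]
```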

7. Scan Images for Vulnerabilities

Regularly scan your container images for known vulnerabilities (CVEs). Integrate scanning into your CI/CD pipeline so vulnerable images never reach production.

# Using Docker Scout (built into Docker Desktop and CLI)
docker scout cves myapp:latest

# Using Trivy (popular open-source scanner)
trivy image myapp:latest

# Using Grype
grype myapp:latest

# Scan and fail CI if critical vulnerabilities found
trivy image --exit-code 1 --severity CRITICAL myapp:latest

Set a policy for your team: no images with critical or high-severity CVEs in production, and a 30-day remediation window for medium-severity issues.
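Wiring the scan into CI might look like this GitHub Actions sketch (job names and the image tag scheme are illustrative; aquasecurity/trivy-action is the maintained action for Trivy):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Block critical/high CVEs
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          exit-code: '1'
          severity: 'CRITICAL,HIGH'
```

The non-zero exit code fails the job, so a vulnerable image never gets pushed.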

8. Isolate Container Networks

By default, all containers on the default bridge network can communicate with each other. Create dedicated networks to isolate services that don't need to talk to each other.

# Create isolated networks
docker network create --driver bridge frontend
docker network create --driver bridge backend
docker network create --internal db-only  # No external access

# Attach containers to their networks
docker run -d --name web --network frontend nginx
docker run -d --name api --network backend myapi
docker run -d --name db  --network db-only postgres

# A container can join additional networks after creation
# (older Docker versions accept only one --network flag on docker run)
docker network connect frontend api
docker network connect backend db

In Docker Compose, network isolation is natural:

services:
  web:
    image: nginx
    networks:
      - frontend

  api:
    image: myapi
    networks:
      - frontend
      - backend

  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true  # No external access

9. Never Store Secrets in Images

Secrets like API keys, database passwords, and TLS certificates must never be baked into Docker images. They are visible in image layers and can be extracted by anyone with access to the image.

# BAD: Secret in environment variable in Dockerfile
ENV DATABASE_PASSWORD=mysecretpassword

# BAD: Secret copied into the image
COPY credentials.json /app/credentials.json

# GOOD: Use Docker secrets (Swarm mode)
docker secret create db_password secret.txt
docker service create --secret db_password myapp

# ACCEPTABLE: Pass via environment at runtime (still visible in docker inspect)
docker run -e DATABASE_PASSWORD="$(cat /path/to/secret)" myapp

# GOOD: Use BuildKit secrets for build-time secrets
docker build --secret id=npmrc,src=$HOME/.npmrc .

# In Dockerfile:
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install

10. Limit Resource Consumption

Unbounded resource usage is a denial-of-service vector. A compromised container or a resource leak can consume all available memory or CPU, affecting other containers and the host.

# Set memory and CPU limits
docker run --memory=512m --cpus=1.0 --pids-limit=256 myapp

# In Docker Compose
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'
    pids_limit: 256

The --pids-limit flag is often overlooked but important. It prevents fork bombs by limiting the number of processes a container can create.

11. Enable Content Trust and Image Signing

Docker Content Trust (DCT) uses digital signatures to verify that images haven't been tampered with between the publisher and your Docker host.

# Enable content trust globally
export DOCKER_CONTENT_TRUST=1

# Now docker pull will only fetch signed images
docker pull myregistry.com/myapp:latest

# Sign an image when pushing
docker push myregistry.com/myapp:latest
# You'll be prompted to create signing keys on first use

# Using Cosign (sigstore) for keyless signing
cosign sign --yes myregistry.com/myapp@sha256:abc123

# Verify a cosign signature
cosign verify myregistry.com/myapp@sha256:abc123

12. Use Multi-Stage Builds to Exclude Build Tools

Build tools, compilers, and development dependencies should never be in your production image. Multi-stage builds solve this cleanly.

# Multi-stage build: build tools stay in the first stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp .

# Production stage: only the binary, no Go toolchain
FROM alpine:3.19
RUN apk --no-cache add ca-certificates
RUN adduser -D appuser
USER appuser
COPY --from=builder /app/myapp /usr/local/bin/myapp
ENTRYPOINT ["myapp"]

This eliminates compilers, package managers, source code, and test files from the production image, drastically reducing the attack surface.

13. Prevent Privilege Escalation

Even when running as non-root, a process might try to escalate privileges through setuid binaries or other mechanisms. Block this explicitly:

# Prevent privilege escalation
docker run --security-opt=no-new-privileges:true myapp

# In Docker Compose
services:
  app:
    image: myapp
    security_opt:
      - no-new-privileges:true

This sets the no_new_privs kernel flag, which ensures that child processes cannot gain more privileges than their parent. This blocks setuid/setgid binaries from granting elevated privileges.
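The flag is easy to verify from inside the container, since the kernel exposes it per process. A quick sketch:

```shell
# NoNewPrivs appears in /proc/<pid>/status on any recent kernel:
# 1 means the no_new_privs flag is set, 0 means it is not.
nnp=$(awk '/^NoNewPrivs/ {print $2}' /proc/self/status)
echo "NoNewPrivs: ${nnp}"
```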

14. Implement Logging and Audit Trails

Security without visibility is incomplete. Ensure all container activity is logged and auditable.

# Configure logging driver with size limits
docker run --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  --log-opt labels=app,environment \
  myapp

# Send logs to a centralized system
docker run --log-driver syslog \
  --log-opt syslog-address=udp://logserver:514 \
  --log-opt tag="{{.Name}}" \
  myapp

# Set default logging options for all containers
# in /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}

On the host level, enable audit rules for Docker activities:

# /etc/audit/rules.d/docker.rules
-w /usr/bin/docker -p rwxa -k docker
-w /var/lib/docker -p rwxa -k docker
-w /etc/docker -p rwxa -k docker
-w /usr/lib/systemd/system/docker.service -p rwxa -k docker
-w /var/run/docker.sock -p rwxa -k docker

15. Protect the Docker Socket

The Docker socket (/var/run/docker.sock) is the most sensitive resource on a Docker host. Any container with access to the socket effectively has root access to the entire host.

# NEVER do this in production without understanding the implications
docker run -v /var/run/docker.sock:/var/run/docker.sock myapp

# If socket access is required (management tools, CI runners):
# 1. Use a socket proxy instead of mounting the socket into the app
#    container, and bind the proxy to localhost only
docker run -d --name docker-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -p 127.0.0.1:2375:2375 \
  tecnativa/docker-socket-proxy

# 2. Limit the API endpoints the proxy exposes
docker run -d --name docker-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CONTAINERS=1 \
  -e IMAGES=1 \
  -e NETWORKS=1 \
  -e VOLUMES=1 \
  -e POST=0 \
  tecnativa/docker-socket-proxy

Tip: Management tools like usulnet need Docker socket access to function. usulnet minimizes the security risk by running as a self-hosted solution on your infrastructure with no external data transmission, and you can use TLS-secured remote connections instead of direct socket mounting for multi-host setups.

Security Hardening Checklist

Use this checklist to audit your container deployments:

Practice                | Priority | Difficulty | Status
------------------------|----------|------------|-------
Non-root user           | Critical | Easy       | ---
Read-only filesystem    | High     | Medium     | ---
Drop capabilities       | High     | Easy       | ---
Seccomp profiles        | High     | Medium     | ---
AppArmor/SELinux        | Medium   | Medium     | ---
Minimal base images     | High     | Easy       | ---
Image scanning          | Critical | Easy       | ---
Network isolation       | High     | Easy       | ---
No secrets in images    | Critical | Easy       | ---
Resource limits         | High     | Easy       | ---
Image signing           | Medium   | Medium     | ---
Multi-stage builds      | High     | Easy       | ---
No privilege escalation | High     | Easy       | ---
Centralized logging     | Medium   | Medium     | ---
Socket protection       | Critical | Medium     | ---

Container security is not a one-time configuration; it is an ongoing process. Regularly review your container configurations, update base images, rescan for vulnerabilities, and audit access. With these 15 practices implemented, your container infrastructure will be significantly more resilient against common attack vectors. Tools like usulnet help by providing visibility into your container configurations, making it easier to spot containers that are running with excessive privileges or missing security controls.