Docker's default bridge networking works well for most use cases, but production environments often require more sophisticated network configurations: containers that need real IP addresses on the LAN, IPv6 connectivity, cross-host communication without Swarm, or fine-grained network performance tuning. This guide goes beyond the basics and explores the network drivers, configurations, and troubleshooting techniques that advanced Docker deployments demand.

Bridge Network Deep Dive

Before exploring advanced drivers, it is essential to understand exactly what happens when Docker creates a bridge network, because every other driver builds on or diverges from this model.

# Create a custom bridge network
docker network create --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  --opt com.docker.network.bridge.name=br-custom \
  --opt com.docker.network.bridge.enable_icc=true \
  --opt com.docker.network.driver.mtu=1500 \
  custom-net

# Examine the Linux bridge created
ip link show br-custom
bridge link show br-custom
iptables -t nat -L POSTROUTING -n | grep 172.20

When Docker creates a bridge network, it performs these kernel operations:

  1. Creates a Linux bridge device (e.g., br-custom)
  2. Assigns the gateway IP to the bridge interface
  3. Creates iptables NAT rules for outbound traffic (MASQUERADE)
  4. Creates iptables FORWARD rules to allow or deny inter-container communication (ICC)
  5. For each connected container: creates a veth pair, attaches one end to the bridge and the other to the container's network namespace

# Inspect the veth pair for a running container
# From the host:
PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
nsenter -t $PID -n ip addr show
# Shows eth0 inside the container, linked to a vethXXXXXX on the host

# See the bridge connections
bridge link show br-custom
# Shows all veth interfaces connected to the bridge
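
To make the model concrete, the same plumbing can be reproduced with plain iproute2. A minimal sketch, assuming root and made-up names (br-demo, ns-demo, veth-host, veth-ctr); it covers the bridge and veth steps only, not the iptables rules Docker layers on top:

```shell
# Reproduce Docker's per-container plumbing by hand (requires root;
# br-demo, ns-demo, and the veth names are made-up for illustration)
ip link add br-demo type bridge
ip addr add 172.30.0.1/24 dev br-demo
ip link set br-demo up

ip netns add ns-demo                      # stand-in for a container's netns
ip link add veth-host type veth peer name veth-ctr
ip link set veth-host master br-demo up   # host end onto the bridge
ip link set veth-ctr netns ns-demo        # "container" end into the namespace

ip netns exec ns-demo ip addr add 172.30.0.2/24 dev veth-ctr
ip netns exec ns-demo ip link set veth-ctr up
ip netns exec ns-demo ip route add default via 172.30.0.1
ip netns exec ns-demo ping -c 1 172.30.0.1   # namespace can reach the bridge
```

For outbound internet access you would still need the MASQUERADE and FORWARD rules from steps 3 and 4 above, which is exactly what Docker automates.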

Macvlan Networks

Macvlan allows containers to appear as physical devices on the network, each with its own MAC address and IP address on the host's LAN. This is essential for services that need to be directly accessible without port mapping, such as DHCP servers, network appliances, or services requiring multicast.

# Create a macvlan network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.192/26 \
  -o parent=eth0 \
  lan-net

# Run a container with a real LAN IP
docker run -d --name webserver \
  --network lan-net \
  --ip 192.168.1.200 \
  nginx:alpine

# The container is now accessible at 192.168.1.200 from any device on the LAN
# No port mapping needed - it has its own IP
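
The --ip-range above restricts Docker's IPAM to a 64-address slice of the /24. A quick sketch of the boundary arithmetic in plain bash (no Docker required):

```shell
# Compute the first and last address of 192.168.1.192/26 (the --ip-range above)
cidr="192.168.1.192/26"
ip="${cidr%/*}"; prefix="${cidr#*/}"
IFS=. read -r a b c d <<< "$ip"
ip_num=$(( (a << 24) + (b << 16) + (c << 8) + d ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
first=$(( ip_num & mask ))                 # network boundary of the slice
last=$(( first | (~mask & 0xFFFFFFFF) ))   # top of the slice
to_dotted() { printf '%d.%d.%d.%d' $(($1 >> 24 & 255)) $(($1 >> 16 & 255)) $(($1 >> 8 & 255)) $(($1 & 255)); }
echo "IPAM range: $(to_dotted $first) - $(to_dotted $last)"
# IPAM range: 192.168.1.192 - 192.168.1.255
```

Docker hands out container IPs only from this slice, leaving the rest of the /24 free for other devices on the LAN.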

Feature                      | Bridge         | Macvlan     | IPvlan L2   | IPvlan L3
-----------------------------|----------------|-------------|-------------|------------
Container gets real LAN IP   | No             | Yes         | Yes         | No (routed)
Unique MAC per container     | Yes (internal) | Yes         | No (shared) | No (shared)
Host can reach container     | Yes            | No *        | No *        | Yes
Multicast support            | Limited        | Yes         | Yes         | No
Performance overhead         | Medium (NAT)   | Low         | Lowest      | Low
Switch MAC table impact      | None           | High        | None        | None

* The host cannot communicate with macvlan/ipvlan L2 containers directly because the kernel prevents traffic from the parent interface to its own macvlan sub-interfaces. The workaround is to create a macvlan interface on the host itself:

# Allow host-to-container communication with macvlan
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.1.192/26 dev macvlan-shim

Warning: Macvlan creates additional MAC addresses on the network. Some switches limit the number of MAC addresses per port, and some cloud providers (AWS, GCP) do not allow arbitrary MAC addresses on instances. Always verify your network infrastructure supports macvlan before deploying.

IPvlan Networks

IPvlan is similar to macvlan but shares the parent interface's MAC address. This avoids the MAC address proliferation issue and works in environments that restrict MAC addresses.

# IPvlan L2 mode (containers on the same subnet as the host)
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 \
  ipvlan-l2-net

# IPvlan L3 mode (containers on a separate routed subnet)
docker network create -d ipvlan \
  --subnet=10.10.10.0/24 \
  -o parent=eth0 \
  -o ipvlan_mode=l3 \
  ipvlan-l3-net

# L3 mode requires adding routes on your router:
# ip route add 10.10.10.0/24 via 192.168.1.100
# (where 192.168.1.100 is the Docker host)

Overlay Network Internals

Overlay networks enable container-to-container communication across different Docker hosts. They use VXLAN encapsulation to tunnel Layer 2 frames through a Layer 3 network.

# Overlay networks require Docker Swarm or a key-value store
# With Swarm:
docker network create -d overlay \
  --subnet=10.0.9.0/24 \
  --opt encrypted \
  --attachable \
  my-overlay

# The --opt encrypted option enables IPsec encryption for all overlay traffic
# The --attachable flag allows standalone containers to join (not just services)

How Overlay Works

When a container on Host A sends a packet to a container on Host B:

  1. The packet leaves the container's eth0 into the local bridge
  2. The VXLAN tunnel endpoint (VTEP) on Host A encapsulates the frame in a UDP packet (port 4789)
  3. The UDP packet travels over the underlay network to Host B
  4. Host B's VTEP decapsulates the frame and delivers it to the destination container's bridge
  5. The original packet arrives at the destination container's eth0
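
The encapsulation in step 2 is what drives the MTU guidance later in this guide: relative to the inner IP packet, the tunnel costs 50 bytes per packet. The arithmetic, using the standard VXLAN-over-IPv4 header sizes:

```shell
# Overhead counted against the inner IP packet: the inner frame's Ethernet
# header (14) plus the outer IPv4 (20), UDP (8), and VXLAN (8) headers
inner_eth=14; outer_ipv4=20; udp=8; vxlan_hdr=8
overhead=$(( inner_eth + outer_ipv4 + udp + vxlan_hdr ))
underlay_mtu=1500
overlay_mtu=$(( underlay_mtu - overhead ))
echo "overhead=${overhead} overlay_mtu=${overlay_mtu}"
# overhead=50 overlay_mtu=1450
```

This is why an overlay network on a standard 1500-byte underlay should use an MTU of 1450.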

# Inspect overlay network internals
# On a Swarm manager:
docker network inspect my-overlay --format '{{json .Peers}}' | jq .

# View VXLAN interfaces on a node
ip -d link show type vxlan
# Shows vxlan interfaces with VNI (VXLAN Network Identifier)

IPv6 Configuration

Docker supports IPv6 networking, but it is not enabled by default and requires explicit configuration at both the daemon and network level.

# Enable IPv6 in Docker daemon
# /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/48",
  "ip6tables": true,
  "experimental": true
}

# Restart Docker after changing daemon.json
sudo systemctl restart docker

# Create a dual-stack network
docker network create --ipv6 \
  --subnet=172.28.0.0/16 \
  --subnet=fd00:db8:1::/64 \
  --gateway=172.28.0.1 \
  --gateway=fd00:db8:1::1 \
  dual-stack-net

# Run a container with both IPv4 and IPv6
docker run -d --name web --network dual-stack-net nginx:alpine

# Verify dual-stack connectivity
docker exec web ip addr show eth0
# Should show both 172.28.x.x and fd00:db8:1::x addresses

docker exec web ping -6 fd00:db8:1::1
# Should reach the gateway
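
The fd00:... prefixes used here are IPv6 Unique Local Addresses (RFC 4193), the rough IPv6 equivalent of private IPv4 ranges: anything in fc00::/7, with locally generated prefixes starting fd. A quick check that a prefix falls in that range:

```shell
# ULA test: the first 7 bits must be 1111110x, i.e. first byte 0xfc or 0xfd
prefix="fd00:db8:1::"
first_group="${prefix%%:*}"               # "fd00"
first_byte=$(( 0x${first_group} >> 8 ))   # 0xfd
if [ $(( first_byte & 0xfe )) -eq $(( 0xfc )) ]; then
  echo "${prefix} is a ULA prefix"
fi
```

ULAs are not routable on the public internet, which is another reason to front dual-stack containers with a proxy for external IPv6 traffic.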

Tip: For public-facing services needing IPv6, combine Docker's IPv6 support with a reverse proxy like Traefik or nginx that handles the public IPv6 addresses. This keeps your container networking simple while still offering IPv6 to end users.

Container DNS

Docker runs an embedded DNS server at 127.0.0.11 inside every container connected to a user-defined network. Understanding how this works is crucial for troubleshooting service discovery issues.

# DNS resolution order in Docker containers:
# 1. Container's /etc/hosts (container name, hostname, network aliases)
# 2. Docker's embedded DNS server (resolves other container names)
# 3. External DNS servers (from daemon config or host /etc/resolv.conf)

# Inspect DNS configuration inside a container
docker exec mycontainer cat /etc/resolv.conf
# nameserver 127.0.0.11
# options ndots:0

# Test DNS resolution
docker exec mycontainer nslookup othercontainer
# Should resolve to the other container's IP on the shared network

# Containers on the default bridge do NOT get DNS resolution
# Only user-defined networks provide automatic DNS

# Custom DNS configuration per container
docker run -d \
  --dns 1.1.1.1 \
  --dns 8.8.8.8 \
  --dns-search example.com \
  --hostname myservice \
  myapp

# Note: com.docker.network.bridge.host_binding_ipv4 is sometimes mistaken for a
# DNS option; it actually sets the default host IP that published ports bind to
docker network create \
  --opt com.docker.network.bridge.host_binding_ipv4=0.0.0.0 \
  my-net

Troubleshooting with tcpdump and nsenter

When container networking misbehaves, these tools let you see exactly what is happening at the packet level:

# Capture traffic on a container's network interface
# Method 1: nsenter into the container's network namespace
PID=$(docker inspect --format '{{.State.Pid}}' mycontainer)
sudo nsenter -t $PID -n tcpdump -i eth0 -nn -c 50

# Method 2: Find the container's veth peer on the host
# (eth0's iflink inside the container equals the host-side veth's ifindex)
IDX=$(docker exec mycontainer cat /sys/class/net/eth0/iflink)
VETH=$(ip -o link | awk -F': ' -v idx="$IDX" '$1 == idx {print $2}' | cut -d@ -f1)
sudo tcpdump -i $VETH -nn -c 50

# Method 3: Use a debug container
docker run --rm -it --network container:mycontainer \
  nicolaka/netshoot tcpdump -i eth0 -nn

# Common troubleshooting commands using netshoot
docker run --rm -it --network container:mycontainer nicolaka/netshoot

# Inside netshoot:
# Test DNS resolution
dig myservice.my-net +short
nslookup myservice

# Test connectivity
curl -v http://myservice:8080/health
nc -zv myservice 5432

# Trace route to another container
traceroute myservice

# Check iptables rules affecting this container
iptables -L -n -v

Network Performance Tuning

MTU Configuration

# Set MTU at the daemon level
# /etc/docker/daemon.json (note: JSON does not allow inline comments)
# 9000 enables jumbo frames - only if every hop in your network supports them
{
  "mtu": 9000
}

# Or per network
docker network create --opt com.docker.network.driver.mtu=9000 perf-net

# For overlay networks, account for VXLAN overhead (50 bytes)
# If underlay MTU is 1500, overlay MTU should be 1450
docker network create -d overlay --opt com.docker.network.driver.mtu=1450 my-overlay

Kernel Tuning for Container Networking

# /etc/sysctl.d/99-docker-network.conf

# Increase connection tracking table (essential for many containers)
net.netfilter.nf_conntrack_max = 1048576

# Increase local port range
net.ipv4.ip_local_port_range = 1024 65535

# Reduce TIME_WAIT duration
net.ipv4.tcp_fin_timeout = 15

# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1

# Increase socket buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Increase connection backlog
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000

# Apply
sysctl --system
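
The conntrack ceiling above is not free: each tracked connection consumes kernel memory. A rough sizing sketch (the ~320 bytes per entry is an assumption; check the nf_conntrack slab in /proc/slabinfo for your kernel's actual figure):

```shell
# Rough memory footprint of nf_conntrack_max = 1048576 at full capacity
entries=1048576
bytes_per_entry=320   # assumption - varies by kernel version and build
mem_mb=$(( entries * bytes_per_entry / 1024 / 1024 ))
echo "conntrack table at capacity: ~${mem_mb} MB"
# conntrack table at capacity: ~320 MB
```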

Performance note: For maximum throughput between containers on the same host, use --network host mode, which eliminates the network namespace boundary entirely. This trades isolation for performance and is appropriate for high-throughput services like reverse proxies and load balancers.

Advanced Docker networking requires understanding the Linux kernel's networking stack because Docker's network drivers are thin abstractions over kernel features. Mastering macvlan, ipvlan, overlay internals, and the troubleshooting tools gives you the ability to design container network topologies that meet any requirement, from simple development setups to complex multi-host production architectures.