Networking is one of the most powerful and most misunderstood parts of Docker. Containers need to communicate with each other, with the host, and with the outside world. Docker provides several network drivers to handle these scenarios, each designed for specific use cases. Choosing the wrong network type leads to connectivity issues, performance problems, or security gaps.

This guide takes you through every Docker network type with practical examples, explains the built-in DNS system, and provides troubleshooting techniques for when things go wrong.

Docker Network Fundamentals

When Docker is installed, it creates three default networks:

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
9876543210ab   none      null      local

Every container connects to a network. If you do not specify one, Docker attaches the container to the default bridge network. Understanding the differences between network drivers is essential for building reliable container infrastructure.
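You can confirm what Docker set up by inspecting the default bridge. A quick check (the 172.17.0.0/16 subnet shown is typical, but yours may differ):

```shell
# Print the subnet Docker assigned to the default bridge network
docker network inspect bridge \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# Typically prints something like 172.17.0.0/16
```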

Bridge Networks

Bridge networking is Docker's default and most commonly used network type. It creates a software bridge (virtual switch) on the host, and containers connected to the bridge can communicate with each other. The bridge is isolated from the host network by default.

The Default Bridge

When you run docker run without specifying a network, the container joins the default bridge network called bridge:

# These two containers are on the default bridge
docker run -d --name web nginx
docker run -d --name api my-api

# They can communicate via IP address
docker exec web ping 172.17.0.3

# But NOT via container name (no DNS on default bridge!)
docker exec web ping api  # FAILS

The default bridge has a significant limitation: it does not provide DNS resolution between containers. This is a legacy behavior. Containers on the default bridge can only reach each other by IP address, which is fragile because IPs are assigned dynamically.
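If you must work on the default bridge, look up a container's current IP at runtime instead of hard-coding it. A sketch using the web and api containers from the example above:

```shell
# Resolve the api container's IP dynamically, then ping it from web
API_IP=$(docker inspect \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' api)
docker exec web ping -c 2 "$API_IP"
```

This is still fragile across restarts, which is exactly why user-defined networks are the better answer.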

User-Defined Bridge Networks

User-defined bridge networks solve the DNS problem and offer better isolation:

# Create a user-defined bridge network
docker network create my-app-net

# Run containers on the custom network
docker run -d --name web --network my-app-net nginx
docker run -d --name api --network my-app-net my-api

# Now DNS works - containers can reach each other by name
docker exec web ping api       # WORKS
docker exec api ping web       # WORKS

# Containers on different networks are isolated
docker run -d --name other --network bridge nginx
docker exec other ping api     # FAILS (different network)

User-defined bridge networks provide:

  • Automatic DNS resolution — Containers resolve each other by name
  • Better isolation — Only containers on the same network can communicate
  • Live connection/disconnection — Attach and detach running containers without restart
  • Custom subnets — Define your own IP ranges

# Create a network with custom subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --gateway 172.28.0.1 \
  --ip-range 172.28.5.0/24 \
  my-custom-net

# Assign a static IP to a container
docker run -d \
  --name db \
  --network my-custom-net \
  --ip 172.28.5.10 \
  postgres:16

In Docker Compose, each project automatically gets its own bridge network:

# docker-compose.yml
services:
  web:
    image: nginx
    ports:
      - "80:80"
  api:
    image: my-api
  db:
    image: postgres:16

# Docker Compose creates a network called "projectname_default"
# All three services can reach each other by service name:
# web -> api, api -> db, etc.
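Compose also lets you declare your own networks and control which services join them. A sketch with illustrative service and network names:

```yaml
# docker-compose.yml with explicit user-defined networks
services:
  web:
    image: nginx
    ports:
      - "80:80"
    networks:
      - frontend
  api:
    image: my-api
    networks:
      - frontend
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  frontend:
  backend:
```

Here web can reach api but not db, because they share no network. This mirrors the tiered-isolation pattern covered later in this guide.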

Host Network

The host network driver removes network isolation between the container and the host. The container shares the host's network namespace directly — it uses the host's IP address and port space without any NAT layer:

# Container binds directly to host port 80
docker run -d --network host nginx

# No port mapping needed (or possible)
# nginx is accessible at host-ip:80
# The container sees the same network interfaces as the host

When to use host networking:

  • Maximum network performance — No NAT overhead, no bridge packet processing. Benchmarks often show 10-20% better throughput, though the gain depends on workload and packet size.
  • Applications that need to bind to many ports — Services like monitoring agents or network tools that use dynamic port ranges.
  • Legacy applications — Software that expects specific network interfaces or broadcasts.

Caveats:

  • Port conflicts — Two containers cannot both bind to port 80
  • No network isolation — The container can access all host network interfaces
  • Only works on Linux (on macOS/Windows, Docker runs in a VM, so "host" is the VM, not your machine)
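A quick way to see host networking in action (Linux only; this fails if anything on the host already occupies port 80):

```shell
# Run nginx on the host network and check it answers on the host's port 80
docker run -d --name host-nginx --network host nginx
curl -sI http://localhost:80 | head -n1
# Expect an HTTP status line such as "HTTP/1.1 200 OK" once nginx is up
```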

Overlay Networks

Overlay networks enable containers running on different Docker hosts to communicate as if they were on the same local network. They work by encapsulating container network traffic in VXLAN packets that travel over the host network:

# Overlay networks require Docker Swarm mode
docker swarm init

# Create an overlay network
docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  my-overlay

# Deploy services on the overlay
docker service create \
  --name web \
  --network my-overlay \
  --replicas 3 \
  nginx

docker service create \
  --name api \
  --network my-overlay \
  my-api:latest

# Containers can communicate across hosts using service names
# "web" containers on host-1 can reach "api" containers on host-2
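To verify cross-host service discovery, exec into one of the web tasks and resolve the api service. A sketch that assumes the task's image includes getent (Debian-based images do) and relies on Swarm's task naming convention:

```shell
# Grab one running "web" task on this node and resolve the "api" service VIP
WEB_TASK=$(docker ps --filter name=web. --format '{{.ID}}' | head -n1)
docker exec "$WEB_TASK" getent hosts api
# Shows the virtual IP that load balances across the api replicas
```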

Attachable Overlay Networks

By default, only Swarm services can use overlay networks. The --attachable flag allows standalone containers to join too:

# Create an attachable overlay
docker network create \
  --driver overlay \
  --attachable \
  shared-overlay

# Standalone container can now join
docker run -d \
  --name debug-tools \
  --network shared-overlay \
  nicolaka/netshoot

Encrypted Overlay Networks

By default, overlay data traffic is unencrypted (control plane traffic is always encrypted). Enable data encryption with:

docker network create \
  --driver overlay \
  --opt encrypted \
  secure-overlay

This enables IPsec encryption for all data traffic on the overlay. Note that this adds CPU overhead and may reduce throughput by 20-30%.

Macvlan Networks

Macvlan assigns a real MAC address to each container, making it appear as a physical device on the network. Containers get IP addresses from a range you carve out of the physical subnet; Docker's own IPAM assigns them, so it does not use your network's DHCP server (reserve the range there to avoid conflicts):

# Create a macvlan network
docker network create \
  --driver macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.128/25 \
  -o parent=eth0 \
  my-macvlan

# Container gets an IP on the physical network
docker run -d \
  --name webserver \
  --network my-macvlan \
  --ip 192.168.1.130 \
  nginx

# The container is directly accessible at 192.168.1.130
# from any device on the physical network
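Macvlan also works on 802.1Q VLAN trunks: pass a VLAN sub-interface as the parent and Docker creates it if it does not exist. A sketch (eth0, VLAN 10, and the addressing are examples; adjust to your network):

```shell
# Bind a macvlan network to VLAN 10 on eth0 (parent interface eth0.10)
docker network create \
  --driver macvlan \
  --subnet 192.168.10.0/24 \
  --gateway 192.168.10.1 \
  -o parent=eth0.10 \
  macvlan-vlan10
```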

When to use macvlan:

  • Migrating from VMs — Applications that expect to be on the physical network
  • Network appliances — Firewalls, DHCP servers, or monitoring tools that need L2 network access
  • IoT and homelab — Devices that need to be discovered on the local network (e.g., Home Assistant, Pi-hole)

Important limitation: By default, the host cannot communicate with macvlan containers. This is a kernel-level limitation. The workaround is to create a macvlan sub-interface on the host:

# Create a macvlan interface on the host for communication
ip link add mac0 link eth0 type macvlan mode bridge
ip addr add 192.168.1.200/32 dev mac0
ip link set mac0 up
ip route add 192.168.1.128/25 dev mac0

None Network

The none network completely disables networking for a container. The container only has a loopback interface:

docker run -d --network none --name isolated alpine sleep 3600

docker exec isolated ip addr
# Only shows lo (loopback) interface

Use cases: Security-sensitive batch processing, containers that only work with mounted volumes, or applications that handle their own networking.

Docker DNS Resolution

Docker runs an embedded DNS server at 127.0.0.11 inside every container on user-defined networks. This DNS server resolves:

  • Container names — ping my-container resolves to the container's IP
  • Service names (Swarm) — Resolves to a VIP that load balances across replicas
  • Network aliases — Custom DNS names for containers
  • External domains — Falls back to the host's DNS resolver

# Set network aliases for a container
docker run -d \
  --name postgres-primary \
  --network my-net \
  --network-alias db \
  --network-alias database \
  postgres:16

# All these resolve to the same container
docker exec app ping db
docker exec app ping database
docker exec app ping postgres-primary
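The same aliases can be declared per network in Compose. A sketch (service and network names are illustrative):

```yaml
# Per-network aliases in docker-compose.yml
services:
  postgres-primary:
    image: postgres:16
    networks:
      my-net:
        aliases:
          - db
          - database

networks:
  my-net:
```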

In Docker Compose, both the service name and container name are resolvable. Service names are more portable:

# In docker-compose.yml, use service names in connection strings
services:
  api:
    image: my-api
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb
      REDIS_URL: redis://cache:6379
  db:
    image: postgres:16
  cache:
    image: redis:7

Connecting Containers Across Networks

A container can be connected to multiple networks, acting as a bridge between them:

# Create two isolated networks
docker network create frontend
docker network create backend

# API server connects to both
docker run -d --name api --network frontend my-api
docker network connect backend api

# Web server only on frontend
docker run -d --name web --network frontend nginx

# Database only on backend
docker run -d --name db --network backend postgres:16

# api can reach both web and db
# web CANNOT reach db (different networks, no shared container)

This pattern is excellent for security: your database is never on the same network as your public-facing web server. Only the API tier bridges the two.
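You can tighten this further with an internal network: Docker adds no outbound route or NAT for it, so members can reach each other but nothing outside the host. The --internal flag is standard; the network name here is an example:

```shell
# Lock the database tier away from the outside world
docker network create --internal backend-internal
docker network connect backend-internal db
# db can now talk to other members of backend-internal,
# but has no route to external networks via this network
```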

Network Troubleshooting

When container networking goes wrong, these tools and techniques help diagnose the problem:

Inspect Network Configuration

# List all networks
docker network ls

# Inspect a network (shows connected containers, subnet, etc.)
docker network inspect my-app-net

# See a container's network settings
docker inspect --format '{{json .NetworkSettings.Networks}}' my-container | jq

# Check which ports are exposed and mapped
docker port my-container

Use a Network Debug Container

The nicolaka/netshoot image contains every network troubleshooting tool you could need:

# Attach a debug container to the same network
docker run -it --rm \
  --network my-app-net \
  nicolaka/netshoot

# Inside the container:
# DNS lookup
nslookup api
dig api

# Connectivity test
ping db
curl -v http://api:8080/health

# TCP connection test
nc -zv db 5432

# Trace the route
traceroute api

# Capture packets
tcpdump -i eth0 port 5432

# Check open ports on another container
nmap -sT api

Common Issues and Fixes

Problem                                   | Likely cause                        | Fix
Cannot resolve container name             | Using default bridge network        | Use a user-defined network
Connection refused                        | App binds to 127.0.0.1, not 0.0.0.0 | Configure app to listen on all interfaces
Port already in use                       | Host port conflict                  | Change host port mapping or use host mode selectively
Containers on different hosts cannot talk | No overlay network                  | Create an overlay network (requires Swarm)
Slow network performance                  | Unnecessary NAT layers              | Consider host network for high-throughput services
Cannot access macvlan container from host | Kernel limitation                   | Create macvlan sub-interface on host
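For the common "connection refused" case, confirm the bind address from inside the container. A sketch that assumes the image ships ss (if not, attach a netshoot container to the same network namespace):

```shell
# List listening TCP sockets inside the container
docker exec my-container ss -tln
# 127.0.0.1:8080 -> only reachable from inside the container itself
# 0.0.0.0:8080   -> reachable through published ports
```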

Checking iptables Rules

Docker manages iptables rules to handle port mapping and inter-container communication. When things break, check the rules:

# Show Docker-managed iptables rules
sudo iptables -L -n -v --line-numbers
sudo iptables -t nat -L -n -v

# Check if Docker's iptables integration is enabled
docker info | grep -i iptables

# If using firewalld, check for conflicts
sudo firewall-cmd --list-all

Network Performance Comparison

Driver            | Latency overhead | Throughput    | Isolation
Host              | None             | Native speed  | None
Macvlan           | Minimal          | Near-native   | L2 level
Bridge            | Low (~0.1ms)     | 90-95% native | Good
Overlay           | Medium (~0.5ms)  | 70-85% native | Good
Encrypted overlay | Higher (~1ms)    | 60-75% native | Excellent

Tip: When managing containers across multiple nodes with usulnet, network configuration is visualized in the dashboard. You can see which containers share networks, inspect their IP assignments, and troubleshoot connectivity without memorizing docker network inspect output.

Conclusion

Docker networking is more capable than most teams realize. The key is matching the right network driver to each use case: user-defined bridges for most single-host scenarios, overlay for multi-host communication, macvlan when containers need to appear on the physical network, and host when you need maximum performance.

Always use user-defined bridge networks instead of the default bridge. The DNS resolution alone makes them worth it. And remember that network isolation is a security feature: keep databases on backend networks, expose only what needs to be public, and use multi-network containers to bridge tiers when necessary.