VPN Mesh Networking: Connecting Your Infrastructure with Tailscale and Headscale
Traditional VPNs use a hub-and-spoke topology: all traffic routes through a central server, creating a bottleneck and a single point of failure. Mesh VPNs eliminate this by creating direct, encrypted connections between every pair of nodes. When your server in Frankfurt needs to talk to your server in Tokyo, the traffic goes directly between them, not through a VPN concentrator in New York.
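The latency cost of the extra hop is easy to see with back-of-the-envelope numbers. The round-trip times below are assumptions for illustration, not measurements:

```bash
# Illustrative only: hub-and-spoke pays for two legs, a mesh pays for one.
# All RTT values are assumed, round numbers.
fra_nyc=90   # ms, assumed Frankfurt <-> New York RTT
nyc_tyo=170  # ms, assumed New York <-> Tokyo RTT
fra_tyo=250  # ms, assumed direct Frankfurt <-> Tokyo RTT

echo "via hub in New York: $(( fra_nyc + nyc_tyo )) ms"   # 260 ms
echo "direct mesh path:    ${fra_tyo} ms"
```

Even with generous assumptions, the relayed path loses; in practice the hub also adds encryption/decryption overhead at the concentrator.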
Tailscale and its self-hosted alternative Headscale have made mesh networking accessible to everyone, from homelabbers connecting a few devices to organizations connecting hundreds of servers across multiple data centers. Both build on WireGuard, the modern VPN protocol that has been part of the mainline Linux kernel since version 5.6.
Mesh vs Hub-and-Spoke Topologies
| Characteristic | Hub-and-Spoke (Traditional VPN) | Mesh (Tailscale/WireGuard) |
|---|---|---|
| Traffic path | All through central server | Direct between nodes |
| Latency | High (double hop for node-to-node) | Low (direct connection) |
| Single point of failure | Yes (the hub) | No (control plane only, not data plane) |
| Bandwidth bottleneck | Hub server bandwidth limits all traffic | Each connection uses its own path |
| Configuration complexity | N configs (one per client) | Automatic (coordination server handles it) |
| NAT traversal | Requires port forwarding or relay | Automatic (STUN/DERP) |
| Adding a new node | Update hub + all clients | Install client, authenticate, done |
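The "configuration complexity" row is worth quantifying. A hub needs one tunnel per client, while a full mesh has one tunnel per *pair* of nodes, which is why automatic coordination matters (N is an assumed example size):

```bash
# Tunnel counts for an assumed network of N nodes
N=50
hub_tunnels=$N                        # one tunnel per client to the hub
mesh_tunnels=$(( N * (N - 1) / 2 ))   # one tunnel per pair of nodes

echo "hub-and-spoke tunnels: $hub_tunnels"   # 50
echo "full mesh tunnels:     $mesh_tunnels"  # 1225
```

Nobody wants to hand-maintain 1,225 tunnel configurations, which is exactly the gap the coordination server fills.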
WireGuard: The Foundation
Both Tailscale and Headscale use WireGuard as their underlying protocol. WireGuard is notable for its simplicity (roughly 4,000 lines of code versus OpenVPN's 100,000+), its performance, and its cryptographic design:
- Noise Protocol Framework for key exchange
- Curve25519 for Diffie-Hellman
- ChaCha20-Poly1305 for symmetric encryption
- BLAKE2s for hashing
- SipHash24 for hashtable keys
```bash
# Basic WireGuard setup (for comparison with Tailscale)
# This is what Tailscale automates for you

# Install WireGuard
sudo apt install wireguard       # Debian/Ubuntu
sudo pacman -S wireguard-tools   # Arch

# Generate keys
wg genkey | tee privatekey | wg pubkey > publickey
```

Server configuration (`/etc/wireguard/wg0.conf`):

```ini
[Interface]
PrivateKey = SERVER_PRIVATE_KEY
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = CLIENT_PUBLIC_KEY
AllowedIPs = 10.0.0.2/32
```

```bash
# Start the interface
sudo wg-quick up wg0

# Check status
sudo wg show
```
Managing WireGuard directly is feasible for 2-3 nodes but becomes tedious at scale. Each new node requires updating the configuration on every existing node. This is the problem Tailscale solves.
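The fan-out is easy to visualize: in a full mesh, every node's config needs a `[Peer]` stanza for every other node. The sketch below just generates placeholder stanzas for three made-up hosts (all names and key placeholders are invented for illustration):

```bash
# Hypothetical: the per-node [Peer] stanzas a 3-node full mesh requires.
# Every node added means touching every existing node's config.
nodes="frankfurt tokyo newyork"

for self in $nodes; do
  echo "# --- wg0.conf for $self ---"
  for peer in $nodes; do
    if [ "$peer" != "$self" ]; then
      printf '[Peer]\n# %s\nPublicKey = <%s-public-key>\nAllowedIPs = <%s-vpn-ip>/32\n\n' \
        "$peer" "$peer" "$peer"
    fi
  done
done
```

Three nodes already means six peer stanzas spread across three files; at thirty nodes it is 870, and Tailscale's coordination server maintains all of them for you.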
Tailscale: Managed Mesh Networking
Tailscale provides a coordination server that distributes WireGuard keys and connection information to all nodes. The coordination server never sees your traffic; it only helps nodes find each other.
```bash
# Install Tailscale
# Linux (one-liner)
curl -fsSL https://tailscale.com/install.sh | sh

# Arch Linux
sudo pacman -S tailscale

# Start the daemon
sudo systemctl enable --now tailscaled

# Authenticate
sudo tailscale up

# Check status
tailscale status

# See this node's Tailscale IP address
tailscale ip -4

# Ping another node by name
tailscale ping my-server

# SSH to a node (Tailscale SSH)
tailscale ssh user@my-server
```
Key Tailscale Features
Exit Nodes
An exit node routes all internet traffic from a device through another node in your network. Useful for accessing geo-restricted content or routing through a trusted network:
```bash
# On the exit node server
sudo tailscale up --advertise-exit-node

# Approve it in the Tailscale admin console, then on the client:
sudo tailscale up --exit-node=exit-server

# Or specify by IP
sudo tailscale up --exit-node=100.64.0.5

# Disable the exit node
sudo tailscale up --exit-node=
```
Subnet Routers
Subnet routers expose an entire local network to your Tailscale mesh without installing Tailscale on every device. This is essential for accessing IoT devices, printers, or legacy systems:
```bash
# Advertise local subnets
sudo tailscale up --advertise-routes=192.168.1.0/24,10.0.0.0/24

# On Linux, enable IP forwarding
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl --system

# Approve the routes in the admin console.
# Clients can then reach 192.168.1.x directly through Tailscale.
```
MagicDNS
MagicDNS provides automatic DNS for every node in your network. Instead of remembering IP addresses, you use the machine name:
```bash
# Access services by name
curl http://my-server:8080
ssh user@my-server
psql -h db-server -U postgres

# MagicDNS also supports search domains:
# my-server.your-tailnet.ts.net resolves automatically
```
Access Control Lists (ACLs)
Tailscale ACLs control which nodes can communicate with which services. They are written in HuJSON, a JSON superset that allows comments and trailing commas:
```jsonc
{
  "acls": [
    // Admins can access everything
    {"action": "accept", "src": ["group:admin"], "dst": ["*:*"]},

    // Developers can access dev servers and databases
    {"action": "accept", "src": ["group:dev"], "dst": [
      "tag:dev-server:*",
      "tag:database:5432",
      "tag:redis:6379"
    ]},

    // Web servers can reach databases
    {"action": "accept", "src": ["tag:web"], "dst": [
      "tag:database:5432",
      "tag:redis:6379",
      "tag:cache:11211"
    ]},

    // Everyone can use the exit node
    {"action": "accept", "src": ["autogroup:member"], "dst": [
      "autogroup:internet:*"
    ]}
  ],

  "groups": {
    "group:admin": ["[email protected]"],
    "group:dev": ["[email protected]", "[email protected]"]
  },

  "tagOwners": {
    "tag:web": ["group:admin"],
    "tag:database": ["group:admin"],
    "tag:dev-server": ["group:admin", "group:dev"]
  }
}
```
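Because HuJSON is not plain JSON, standard tooling will reject the comments. One quick local sanity check before pasting a policy into the admin console is to strip `//` comments and validate the remainder (a naive sketch: the `sed` would mangle any `//` inside a string, and the file path is our choice):

```bash
# Write a minimal HuJSON ACL fragment to a temp file
cat > /tmp/acl.hujson <<'EOF'
{
  // Admins can access everything
  "acls": [
    {"action": "accept", "src": ["group:admin"], "dst": ["*:*"]}
  ]
}
EOF

# Strip line comments, then confirm the rest parses as plain JSON
sed 's|//.*||' /tmp/acl.hujson | python3 -m json.tool > /dev/null \
  && echo "ACL parses as plain JSON"
```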
Headscale: Self-Hosted Tailscale
Headscale is an open-source, self-hosted implementation of the Tailscale coordination server. It provides most of Tailscale's features without relying on Tailscale's infrastructure. Your nodes still use the standard Tailscale client but point to your Headscale server for coordination.
Deploying Headscale with Docker Compose
```yaml
services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    restart: unless-stopped
    ports:
      - "8080:8080"   # HTTP/gRPC
      - "9090:9090"   # Metrics
    volumes:
      - headscale_data:/var/lib/headscale
      - ./config.yaml:/etc/headscale/config.yaml:ro
    command: serve

  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    ports:
      - "8443:443"
    restart: unless-stopped

volumes:
  headscale_data:
```
```yaml
# config.yaml for Headscale
server_url: https://headscale.example.com
listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090

private_key_path: /var/lib/headscale/private.key
noise:
  private_key_path: /var/lib/headscale/noise_private.key

prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48

derp:
  server:
    enabled: true
    region_id: 999
    region_code: "custom"
    region_name: "My DERP"
    stun_listen_addr: 0.0.0.0:3478
  urls: []
  paths: []
  auto_update_enabled: true
  update_frequency: 24h

dns:
  magic_dns: true
  base_domain: mesh.example.com
  nameservers:
    global:
      - 1.1.1.1
      - 8.8.8.8

database:
  type: sqlite
  sqlite:
    path: /var/lib/headscale/db.sqlite
```
Managing Headscale
```bash
# Create a user (namespace)
docker exec headscale headscale users create myuser

# Generate a pre-auth key
docker exec headscale headscale preauthkeys create \
  --user myuser --reusable --expiration 24h

# Register a node (run on the client machine)
sudo tailscale up --login-server=https://headscale.example.com \
  --authkey=YOUR_PREAUTH_KEY

# List nodes
docker exec headscale headscale nodes list

# Enable a route (subnet router)
docker exec headscale headscale routes list
docker exec headscale headscale routes enable -r ROUTE_ID

# Tag a node
docker exec headscale headscale nodes tag -i NODE_ID -t tag:web
```
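The repeated `docker exec headscale headscale` prefix gets tedious; a small wrapper function cuts it down (the name `hs` and the container name `headscale` are our assumptions from the compose file above):

```bash
# Hypothetical convenience wrapper around the Headscale CLI inside the container
hs() { docker exec headscale headscale "$@"; }

# Usage:
#   hs users create myuser
#   hs nodes list
#   hs routes enable -r ROUTE_ID
```

Drop it into your shell profile on the Headscale host and the management commands above shrink to a few words each.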
| Feature | Tailscale (Managed) | Headscale (Self-Hosted) |
|---|---|---|
| Setup effort | Minimal | Moderate |
| Coordination server | Hosted by Tailscale | Self-hosted |
| DERP relay servers | Global network provided | Self-hosted or use Tailscale's |
| ACLs | Full support + web UI | Full support (YAML/JSON config) |
| MagicDNS | Yes | Yes |
| SSO/OIDC | Built-in | Supported via OIDC |
| Cost | Free tier, paid for teams | Free (you host it) |
| Data sovereignty | Keys on Tailscale servers | Everything on your infrastructure |
Docker Container Access via Tailscale
There are several patterns for exposing Docker containers through your Tailscale network:
Pattern 1: Tailscale on the Host
```bash
# Install Tailscale on the Docker host.
# Containers are accessible via the host's Tailscale IP:
# the simplest approach, and it works with any container.

# Access via the host's Tailscale IP + mapped port
curl http://docker-host:8080   # MagicDNS name of the host
```
Pattern 2: Tailscale Sidecar Container
```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: my-service
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY}
      TS_STATE_DIR: /var/lib/tailscale
      TS_USERSPACE: "false"
    volumes:
      - tailscale_state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    restart: unless-stopped

  app:
    image: myapp:latest
    network_mode: service:tailscale   # Share Tailscale's network namespace
    depends_on:
      - tailscale

volumes:
  tailscale_state:
```
Pattern 3: Tailscale as Subnet Router for Docker Networks
```bash
# Advertise the Docker bridge network via Tailscale
sudo tailscale up --advertise-routes=172.17.0.0/16

# Any Tailscale node can now reach containers directly
# by their container IP (172.17.0.x)
```
Multi-Site Connectivity
The most powerful use of mesh networking is connecting infrastructure across multiple locations. A typical setup might include:
```bash
# Site A: Home lab (behind NAT)
# - Docker host running 10 services
# - Tailscale on the host as a subnet router
sudo tailscale up --advertise-routes=192.168.1.0/24 --hostname=homelab

# Site B: Cloud VPS (AWS/Hetzner)
# - Docker host running production services
# - Tailscale on the host as exit node + subnet router
sudo tailscale up --advertise-routes=10.0.0.0/24 \
  --advertise-exit-node --hostname=cloud-prod

# Site C: Office network
# - Tailscale on a small device (e.g. a Raspberry Pi) as a subnet router
sudo tailscale up --advertise-routes=10.10.0.0/24 --hostname=office-router

# Result: all three sites can communicate directly.
# - homelab services can reach cloud databases
# - office machines can reach homelab services
# - all without opening any inbound firewall ports
```
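Once the three routers are up, a quick sweep confirms the sites see each other. This is a sketch that assumes the `tailscale` CLI is available, so it is wrapped in a function rather than run directly:

```bash
# Hypothetical reachability sweep across the site routers defined above
check_sites() {
  for host in homelab cloud-prod office-router; do
    if tailscale ping -c 1 "$host" > /dev/null 2>&1; then
      echo "$host: reachable"
    else
      echo "$host: UNREACHABLE"
    fi
  done
}

# Usage: check_sites
```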
Key insight: Mesh VPNs like Tailscale solve the "how do I connect to my servers" problem elegantly. No port forwarding, no dynamic DNS, no firewall rule management. Install the client, authenticate, and your servers are accessible from anywhere on your mesh. This is especially valuable for managing Docker infrastructure across multiple locations, where tools like usulnet can use the Tailscale mesh to communicate with agent nodes on remote servers.
Troubleshooting Mesh Networks
```bash
# Check connection quality between nodes
tailscale ping --verbose other-node

# Check whether a peer uses a direct connection or a DERP relay
tailscale status
# "direct" = WireGuard peer-to-peer
# "relay"  = traffic goes through DERP (slower)

# Debug connectivity
tailscale netcheck
# Shows: UDP reachability, whether your NAT mapping varies by
# destination, the nearest DERP region, and latency to each DERP

# Check the firewall (Tailscale's WireGuard traffic uses UDP 41641 by default)
sudo iptables -L -n | grep 41641

# View Tailscale logs
journalctl -u tailscaled -f

# Reset Tailscale state
sudo tailscale down
sudo tailscale up --reset
```
Mesh networking has fundamentally simplified infrastructure connectivity. Whether you use Tailscale's managed service or self-host with Headscale, the result is the same: every device in your network can communicate securely with every other device, regardless of NAT, firewalls, or physical location. For distributed Docker infrastructure, this means your monitoring, backup, and management tools can operate as if all servers were on the same LAN.