Self-Hosted VPN Solutions: WireGuard, OpenVPN and Tailscale Compared
A VPN is the gateway to your self-hosted infrastructure. Whether you need secure remote access to your home lab, a private network between cloud servers, or a way to route traffic through your own infrastructure, choosing the right VPN protocol and deployment strategy matters significantly. The three dominant approaches are WireGuard (modern, fast, kernel-level), OpenVPN (established, widely compatible), and Tailscale/Headscale (mesh networking built on WireGuard).
Architecture Overview
| Feature | WireGuard | OpenVPN | Tailscale / Headscale |
|---|---|---|---|
| Protocol | UDP (kernel module) | TCP or UDP (userspace) | WireGuard (enhanced) |
| Topology | Point-to-point or hub-and-spoke | Client-server | Full mesh |
| Encryption | ChaCha20, Poly1305, Curve25519 | OpenSSL (configurable, AES-256-GCM typical) | Same as WireGuard |
| Codebase size | ~4,000 lines | ~100,000+ lines | WireGuard + control plane |
| Kernel support | Built into Linux 5.6+ | Userspace (tun/tap) | Uses kernel WireGuard |
| NAT traversal | Manual (port forward required) | Manual (port forward required) | Automatic (DERP relay + hole punching) |
| Configuration | Simple config files | Complex (PKI, certificates) | Automatic (control server) |
| Mobile support | iOS, Android (official apps) | iOS, Android (OpenVPN Connect) | iOS, Android (official apps) |
| License | GPL-2.0 | GPL-2.0 | Tailscale: proprietary / Headscale: BSD-3 |
Performance Benchmarks
VPN throughput tested on a 1 Gbps connection between two servers (same datacenter, AMD EPYC, Linux 6.1):
| Metric | WireGuard | OpenVPN (UDP) | OpenVPN (TCP) | Tailscale (direct) |
|---|---|---|---|---|
| Throughput | ~900 Mbps | ~400 Mbps | ~250 Mbps | ~880 Mbps |
| Latency overhead | ~0.5 ms | ~1.5 ms | ~3 ms | ~0.7 ms |
| CPU usage (1 Gbps) | ~5% | ~30% | ~40% | ~6% |
| Connection establishment | ~100 ms | ~2-5 seconds | ~3-8 seconds | ~200 ms |
| Handshake overhead | 1 RTT | 6-8 RTTs | 6-8 RTTs + TCP | 1 RTT |
WireGuard's performance advantage comes from running in kernel space and using modern, fixed cryptographic primitives. OpenVPN runs in userspace, requiring context switches and supporting legacy cipher negotiation that adds overhead. Tailscale inherits WireGuard's performance for direct connections, with slightly higher overhead due to its control plane.
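Numbers like these can be sanity-checked on your own hardware with `iperf3` over each tunnel. A sketch, assuming a peer reachable at the illustrative VPN address 10.0.0.1 (with `iperf3 -s` running on it) and an illustrative public IP for comparison:

```shell
# Throughput through the tunnel
iperf3 -c 10.0.0.1 -t 30

# Latency overhead: tunnel address vs. the peer's public address
ping -c 20 10.0.0.1
ping -c 20 203.0.113.10   # peer's public IP (illustrative)

# Watch CPU cost while the iperf3 run saturates the link
mpstat 1 10
```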
Docker Deployment: WireGuard
```yaml
# docker-compose.yml for WireGuard (using wg-easy for the web UI)
version: "3.8"
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy:latest
    container_name: wg-easy
    restart: unless-stopped
    environment:
      - WG_HOST=vpn.example.com
      - PASSWORD_HASH=${WG_ADMIN_PASSWORD_HASH}
      - WG_PORT=51820
      - WG_DEFAULT_DNS=1.1.1.1,8.8.8.8
      - WG_DEFAULT_ADDRESS=10.8.0.x
      - WG_ALLOWED_IPS=0.0.0.0/0,::/0
      - WG_PERSISTENT_KEEPALIVE=25
    volumes:
      - wg_easy_data:/etc/wireguard
    ports:
      - "51820:51820/udp"
      - "51821:51821/tcp" # Web UI
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1
volumes:
  wg_easy_data:
```
Manual WireGuard Configuration
```ini
# Server configuration (/etc/wireguard/wg0.conf)
[Interface]
PrivateKey = SERVER_PRIVATE_KEY
Address = 10.0.0.1/24
ListenPort = 51820
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# Client 1 - Laptop
PublicKey = CLIENT1_PUBLIC_KEY
AllowedIPs = 10.0.0.2/32

[Peer]
# Client 2 - Phone
PublicKey = CLIENT2_PUBLIC_KEY
AllowedIPs = 10.0.0.3/32
```

```ini
# Client configuration
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0               # Route all traffic
# AllowedIPs = 10.0.0.0/24, 192.168.1.0/24 # Split tunnel
PersistentKeepalive = 25
```
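The key placeholders in the configs above are generated with the `wg` tool, and peers can also be added to a running interface without editing the file. A sketch, assuming the interface is named `wg0` and the key file names are illustrative:

```shell
# Generate a keypair per machine (private key stays on that machine)
umask 077
wg genkey | tee server.key | wg pubkey > server.pub
wg genkey | tee client1.key | wg pubkey > client1.pub

# Add a peer to the running interface without restarting the tunnel
wg set wg0 peer "$(cat client1.pub)" allowed-ips 10.0.0.2/32

# Persist the runtime state back to /etc/wireguard/wg0.conf
wg-quick save wg0
```

Because there is no certificate authority, revocation is simply removing the peer (`wg set wg0 peer <pubkey> remove`) on the server.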
Docker Deployment: OpenVPN
```yaml
# docker-compose.yml for OpenVPN
version: "3.8"
services:
  openvpn:
    image: kylemanna/openvpn:latest
    container_name: openvpn
    restart: unless-stopped
    volumes:
      - openvpn_data:/etc/openvpn
    ports:
      - "1194:1194/udp"
    cap_add:
      - NET_ADMIN
volumes:
  openvpn_data:
```

```bash
# Initial setup (run once):
docker run -v openvpn_data:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://vpn.example.com
docker run -v openvpn_data:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki
docker run -v openvpn_data:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full client1 nopass
docker run -v openvpn_data:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient client1 > client1.ovpn
```
OpenVPN requires a PKI (Public Key Infrastructure) with a Certificate Authority, server certificate, and per-client certificates. This adds security (mutual TLS authentication) at the cost of complexity:
```bash
# Managing OpenVPN clients

# Generate a new client certificate
docker run -v openvpn_data:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full newclient nopass
docker run -v openvpn_data:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient newclient > newclient.ovpn

# Revoke a client
docker run -v openvpn_data:/etc/openvpn --rm -it kylemanna/openvpn ovpn_revokeclient clientname

# List active clients
docker run -v openvpn_data:/etc/openvpn --rm kylemanna/openvpn ovpn_listclients
```
Docker Deployment: Headscale (Self-Hosted Tailscale)
Headscale is an open-source implementation of the Tailscale coordination server. It lets you use Tailscale clients with your own control plane:
```yaml
# docker-compose.yml for Headscale
version: "3.8"
services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    restart: unless-stopped
    command: serve
    volumes:
      - headscale_data:/var/lib/headscale
      - ./headscale/config.yaml:/etc/headscale/config.yaml:ro
    ports:
      - "8080:8080" # HTTP API
      - "443:443"   # HTTPS + DERP
    environment:
      - TZ=America/New_York

  # Optional: Headscale UI
  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    container_name: headscale-ui
    restart: unless-stopped
    ports:
      - "8443:443"

volumes:
  headscale_data:
```
```yaml
# headscale/config.yaml
server_url: https://vpn.example.com
listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090
private_key_path: /var/lib/headscale/private.key
noise:
  private_key_path: /var/lib/headscale/noise_private.key
ip_prefixes:
  - 100.64.0.0/10
  - fd7a:115c:a1e0::/48
derp:
  server:
    enabled: true
    region_id: 999
    region_code: "self"
    region_name: "Self-hosted"
    stun_listen_addr: "0.0.0.0:3478"
  urls: [] # Disable Tailscale's DERP servers
  paths: []
  auto_update_enabled: false
database:
  type: sqlite3
  sqlite:
    path: /var/lib/headscale/db.sqlite
dns_config:
  nameservers:
    - 1.1.1.1
    - 8.8.8.8
  magic_dns: true
  base_domain: vpn.example.com
```
```bash
# Managing Headscale

# Create a user (namespace)
docker exec headscale headscale users create myuser

# Generate a pre-auth key
docker exec headscale headscale preauthkeys create --user myuser --reusable --expiration 24h

# Register a node (run on the client device):
tailscale up --login-server https://vpn.example.com --authkey YOUR_PREAUTH_KEY

# List registered nodes
docker exec headscale headscale nodes list

# Enable a route advertised by a node
docker exec headscale headscale routes enable -r 1
```
With MagicDNS enabled, every node gets a stable, resolvable hostname (e.g., server1.vpn.example.com) across all connected devices.
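The route-enabling command above is the server side of subnet routing; the other half happens on a client that advertises a LAN behind it. A sketch, assuming that client sits in front of an illustrative 192.168.1.0/24 network:

```shell
# On the client acting as a subnet router:
# enable forwarding, then advertise the LAN to the mesh
sysctl -w net.ipv4.ip_forward=1
tailscale up --login-server https://vpn.example.com \
  --advertise-routes=192.168.1.0/24

# On the Headscale host: list pending routes, then approve by ID
docker exec headscale headscale routes list
docker exec headscale headscale routes enable -r 2
```

Once approved, every node in the mesh can reach 192.168.1.0/24 through that client without any manual routing tables.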
Configuration Complexity
| Task | WireGuard | OpenVPN | Headscale |
|---|---|---|---|
| Initial server setup | Simple (one config file) | Complex (PKI + config) | Moderate (config + setup) |
| Adding a client | Generate keypair, add peer | Generate cert, create .ovpn | Install client, authenticate |
| Revoking a client | Remove peer from config | Revoke cert, update CRL | Remove node from admin |
| Routing changes | Manual iptables rules | Push routes via config | Advertise routes, approve |
| DNS configuration | Manual (per-client) | Push via server config | Automatic (MagicDNS) |
| Multi-site networking | Manual routing tables | Complex (multi-server) | Automatic (mesh) |
Use Cases
Use WireGuard When:
- You need maximum throughput and minimal latency
- You have a simple topology (home server, single VPS)
- You want a lightweight, always-on VPN on mobile devices
- You prefer manual control over all routing and configuration
Use OpenVPN When:
- You need to traverse restrictive firewalls (TCP mode on port 443)
- You require certificate-based mutual authentication
- You need compatibility with legacy systems and corporate networks
- You need per-client configuration pushed from the server
Use Headscale/Tailscale When:
- You manage services across multiple locations or cloud providers
- You need devices behind NAT to connect without port forwarding
- You want a mesh network where every node can reach every other node
- You want minimal configuration and automatic peer discovery
Security Considerations
VPN containers require the NET_ADMIN capability and often SYS_MODULE for loading kernel modules. These are privileged capabilities that effectively give the container control over host networking. Ensure your VPN container images come from trusted sources and are kept updated.
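When the host kernel already ships WireGuard (Linux 5.6+), SYS_MODULE can often be dropped, since there is no module to load. A hedged compose fragment; whether it works depends on the image (wg-easy, for example, typically runs with NET_ADMIN alone on such kernels):

```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy:latest
    cap_add:
      - NET_ADMIN   # interface creation and routing changes
      # SYS_MODULE omitted: wireguard is built into Linux 5.6+
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1
```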
Integrating VPN with Self-Hosted Services
A common pattern is to expose self-hosted services only through the VPN, eliminating the need for public-facing reverse proxies:
```yaml
# Docker network approach: bind services to the VPN network only
services:
  internal-app:
    # Only accessible via the VPN network
    networks:
      - vpn_internal
    # Do NOT expose ports publicly:
    # ports:
    #   - "8080:8080"   # Don't do this
  wireguard:
    networks:
      - vpn_internal
    ports:
      - "51820:51820/udp"   # Only the VPN port is public
networks:
  vpn_internal:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
```
Tools like usulnet benefit from VPN integration as well. By placing the management interface behind a VPN, you add a strong network-level security layer on top of application-level authentication, ensuring your Docker management dashboard is never directly exposed to the internet.
Summary
For most self-hosters, WireGuard provides the best combination of performance, simplicity, and security. If you need mesh networking across multiple locations without manual routing, Headscale gives you the Tailscale experience with full self-hosted control. OpenVPN remains relevant for firewall traversal and legacy compatibility but is increasingly being replaced by WireGuard-based solutions. Choose based on your topology needs, not just raw performance numbers.