Containers did not appear overnight when Docker launched in 2013. The ideas behind process isolation, filesystem separation, and resource control have been evolving for over four decades. Understanding this history provides essential context for why containers work the way they do, what problems each technology solved, and where the ecosystem is headed next.

1979: chroot - The Original Container

The history of containers begins with chroot, which first appeared in Version 7 Unix in 1979; Bill Joy later brought it to BSD in 1982. The chroot system call changes the apparent root directory for a process and its children, creating an isolated filesystem view.

# Basic chroot usage - still works identically today
mkdir -p /jail/{bin,lib,lib64,etc}
cp /bin/bash /jail/bin/
cp /lib/x86_64-linux-gnu/libtinfo.so.6 /jail/lib/
cp /lib/x86_64-linux-gnu/libdl.so.2 /jail/lib/
cp /lib/x86_64-linux-gnu/libc.so.6 /jail/lib/
cp /lib64/ld-linux-x86-64.so.2 /jail/lib64/

chroot /jail /bin/bash
# Now running in an isolated filesystem view

chroot was originally designed for build system isolation and testing, not security. It provides only filesystem isolation: a chrooted process still shares the network stack, process table, IPC, and user namespace with the host. A root user inside a chroot can escape trivially.

Legacy: chroot remains in every Unix-like operating system today and is still used for rescue operations, build environments, and as a foundational component in more sophisticated isolation systems.

2000: FreeBSD Jails

FreeBSD 4.0 introduced jails in March 2000, designed by Poul-Henning Kamp. Jails extended the chroot concept by adding process isolation, network stack separation, and restricted superuser capabilities. A jailed process could not see or interact with processes outside its jail, could not modify network configurations, and had a limited set of system calls available.

# Creating a FreeBSD jail (modern syntax)
jail -c name=webserver \
  path=/jails/webserver \
  host.hostname=webserver.local \
  ip4.addr=192.168.1.100 \
  mount.devfs \
  exec.start="/bin/sh /etc/rc" \
  exec.stop="/bin/sh /etc/rc.shutdown"

Jails were the first true operating-system-level virtualization technology. They solved real problems for hosting providers who needed to give customers isolated environments without the overhead of full virtual machines. FreeBSD jails were production-ready years before any Linux equivalent existed.

2004: Solaris Zones

Sun Microsystems introduced Solaris Zones (also called Solaris Containers) with Solaris 10, first available in a 2004 beta and generally released in early 2005. Zones provided a more comprehensive isolation model than FreeBSD jails, with resource management, network virtualization, and filesystem snapshotting built in from the start.

Feature                  chroot   FreeBSD Jails    Solaris Zones
Filesystem isolation     Yes      Yes              Yes (ZFS clones)
Process isolation        No       Yes              Yes
Network isolation        No       Yes (IP-based)   Yes (full stack)
Resource limits          No       Limited          Yes (resource pools)
Snapshotting             No       No               Yes (ZFS)
Migration between hosts  No       No               Yes (detach/attach)

Solaris Zones introduced the concept of "branded zones," which could run non-native binaries; the lx brand executed Linux binaries. This presaged the cross-platform container execution that Docker would later make mainstream.

2005: OpenVZ

OpenVZ brought OS-level virtualization to Linux in 2005, using a modified kernel to create isolated containers (called virtual environments or VEs). It was widely used by hosting providers to offer "VPS" services at much lower overhead than full virtual machines.

# Creating an OpenVZ container (legacy syntax)
vzctl create 101 --ostemplate debian-9.0-x86_64
vzctl set 101 --hostname container101.example.com --save
vzctl set 101 --ipadd 192.168.1.101 --save
vzctl set 101 --ram 512M --save
vzctl start 101

OpenVZ's critical limitation was that it required a patched kernel. It was never merged into the mainline Linux kernel, which meant administrators had to run a special kernel version and could not use the latest features. This constraint would eventually lead to OpenVZ being superseded by technologies built on mainline kernel features.

2006-2008: cgroups and Namespaces

The two foundational technologies that make modern Linux containers possible were developed during this period and merged into the mainline Linux kernel.

cgroups (Control Groups)

Originally developed by Google engineers Paul Menage and Rohit Seth in 2006 under the name "process containers," cgroups were merged into Linux 2.6.24 in January 2008. They provide resource limiting, prioritization, accounting, and control for groups of processes.

# Modern cgroups v2 usage
# Create a cgroup
mkdir /sys/fs/cgroup/mygroup

# Set resource limits
echo "500000 1000000" > /sys/fs/cgroup/mygroup/cpu.max    # 50% CPU
echo "536870912" > /sys/fs/cgroup/mygroup/memory.max       # 512MB RAM
echo "100" > /sys/fs/cgroup/mygroup/pids.max               # 100 processes

# Add a process to the cgroup
echo $PID > /sys/fs/cgroup/mygroup/cgroup.procs

Linux Namespaces

Namespaces provide isolation for various system resources, making processes inside a namespace believe they have their own instance of that resource. The namespace types were added to the kernel incrementally:

Namespace      Kernel Version   Year   Isolates
Mount (mnt)    2.4.19           2002   Mount points
UTS            2.6.19           2006   Hostname and domain name
IPC            2.6.19           2006   Inter-process communication
PID            2.6.24           2008   Process IDs
Network (net)  2.6.29           2009   Network stack
User           3.8              2013   User and group IDs
Cgroup         4.6              2016   Cgroup root directory
Time           5.6              2020   System clocks

# Creating an isolated namespace manually
unshare --mount --uts --ipc --net --pid --fork /bin/bash
# Now running in a new set of namespaces

# Verify namespace isolation
ls -la /proc/self/ns/
# Each namespace has a unique inode number

Together, cgroups and namespaces provide all the isolation primitives that Docker and every other Linux container runtime uses. Every container is fundamentally just a regular Linux process with its own set of namespaces and cgroup resource limits.
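
The claim that a container is just a process with namespaces is easy to verify on any Linux machine: procfs exposes each process's namespace membership as symlinks, no privileges required. A minimal sketch:

```shell
# Every process lists its namespaces under /proc/<pid>/ns/. Each entry is
# a symlink whose target encodes the namespace type and an inode number;
# two processes share a namespace exactly when the inode numbers match.
ls -l /proc/self/ns/

# Print this shell's PID namespace identity, e.g. pid:[4026531836]
readlink /proc/self/ns/pid
```

Run the same readlink inside a container and the inode differs from the host's; that difference in inode numbers is the entire "boundary" of the container.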

2008: LXC (Linux Containers)

LXC was the first complete container manager built on cgroups and namespaces. Created by IBM engineers, it provided a userspace interface for creating and managing containers using the mainline kernel's native features.

# LXC container management
lxc-create -t download -n mycontainer -- -d ubuntu -r jammy -a amd64
lxc-start -n mycontainer
lxc-attach -n mycontainer
lxc-stop -n mycontainer
lxc-destroy -n mycontainer

LXC focused on system containers, providing an experience similar to a lightweight virtual machine. Each container ran a full init system and could host multiple services. This approach was familiar to administrators coming from OpenVZ or virtual machines but was quite different from the application container model Docker would later popularize.

Tip: LXC and the higher-level LXD (along with its community fork, Incus) remain actively developed and are excellent choices for system-level containers where you need a full Linux environment rather than a single-process application container.

2013: Docker - The Container Revolution

Docker, initially released by Solomon Hykes at dotCloud in March 2013, did not invent containers. It made them usable. Docker's breakthrough was combining existing technologies into a cohesive developer experience with three key innovations:

  1. The Dockerfile: A declarative, reproducible way to build container images from a text file
  2. Image layering: A union filesystem approach where each instruction creates a cached layer, making builds fast and images shareable
  3. Docker Hub: A public registry where anyone could share and discover container images

# The Dockerfile format that changed everything
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python python-pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
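
Layer sharing works because layers are content-addressed. The sketch below imitates the idea with a plain tarball and sha256sum (the demo-layer directory is a hypothetical example; real tooling hashes layer tarballs as defined by the OCI image spec):

```shell
# Create a fake "layer" and compute its content digest, as image tools do
mkdir -p demo-layer/app
echo 'print("hello")' > demo-layer/app/main.py

# Deterministic tar, then sha256: the digest becomes the layer's identity
tar --sort=name -cf layer.tar -C demo-layer .
sha256sum layer.tar
```

Because identical content always produces an identical digest, registries and build caches can deduplicate layers across images, which is why an unchanged Dockerfile instruction rebuilds instantly from cache.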

Docker's initial implementation used LXC as its execution backend, but it quickly developed its own container runtime (libcontainer, later runc) to remove the LXC dependency and gain more control over the container lifecycle.

The adoption curve was unprecedented in infrastructure tooling. Within two years, Docker went from a demo at PyCon to being supported by every major cloud provider and used by millions of developers.

2015: The OCI Standard

Docker's rapid success created concern about vendor lock-in. In June 2015, Docker, CoreOS, Google, and other companies formed the Open Container Initiative (OCI) under the Linux Foundation to create open standards for container formats and runtimes.

The OCI defined two specifications:

  • Runtime Specification (runtime-spec): How to run a container from a filesystem bundle. The reference implementation is runc.
  • Image Specification (image-spec): The format for container images, including layers, manifests, and configuration.

These standards ensured that containers built with one tool could run with another, breaking the Docker monopoly on the container runtime layer and enabling an ecosystem of compatible tools.
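
Concretely, an OCI image manifest is a small JSON document that ties an image configuration to its content-addressed layers. A trimmed illustration (the digests and sizes here are invented placeholders):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:aaaa...",
    "size": 1469
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:bbbb...",
      "size": 2811969
    }
  ]
}
```

Any tool that can produce or consume this format interoperates with the others, regardless of which vendor built it.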

2015-2017: containerd and CRI-O

Docker's monolithic architecture became a liability as the ecosystem matured. In response, Docker extracted its core container runtime as containerd and donated it to the CNCF in 2017: a standalone daemon that manages the complete container lifecycle on a host.

# containerd architecture
# Docker CLI -> Docker daemon -> containerd -> runc
# Kubernetes -> CRI -> containerd -> runc
# nerdctl -> containerd -> runc

# Using containerd directly via nerdctl
nerdctl run -d --name web nginx:alpine
nerdctl ps
nerdctl logs web

Meanwhile, Red Hat developed CRI-O, a lightweight container runtime specifically designed for Kubernetes that implements the Container Runtime Interface (CRI) without any Docker dependency. This demonstrated that Docker itself was not necessary to run containers in production.

2014-2018: Kubernetes Ascendance

Google released Kubernetes in June 2014, drawing on 15 years of experience running containers internally with Borg and Omega. Kubernetes introduced declarative infrastructure management to the container world:

# Kubernetes deployment - declarative container orchestration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: myapp:v2.1.0
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080

Kubernetes won the container orchestration war against Docker Swarm and Apache Mesos by 2018, becoming the de facto standard for running containers at scale. In December 2020, Kubernetes deprecated its Docker-specific runtime shim (dockershim), removing it entirely in version 1.24 in 2022 and completing the decoupling from Docker in favor of containerd or CRI-O.

The Current Landscape (2025)

The container ecosystem has matured into a layered stack of specialized tools:

Layer          Tools                                Purpose
Build          Docker, Buildah, Kaniko, BuildKit    Creating container images
Runtime        runc, crun, gVisor, Kata Containers  Executing containers
Daemon         containerd, CRI-O, Podman            Managing container lifecycle
Orchestration  Kubernetes, Docker Swarm, Nomad      Scheduling and scaling
Management     usulnet, Portainer, Rancher, Lens    Visual management and operations
Security       Trivy, Falco, OPA/Gatekeeper         Scanning, runtime security, policy

Future Directions

Several trends are shaping the next phase of container technology:

  • WebAssembly (Wasm) containers: Running Wasm modules as containers, offering faster startup, smaller footprint, and true cross-platform portability. Containerd already supports Wasm via the runwasi shim.
  • Confidential containers: Using hardware Trusted Execution Environments (TEEs) like Intel TDX and AMD SEV to run containers with encrypted memory, protecting against compromised hypervisors and cloud operators.
  • eBPF-powered networking and security: Cilium and other eBPF-based tools are replacing iptables-based networking with programmable, high-performance alternatives.
  • Rootless and unprivileged containers: The push toward running the entire container stack without root privileges, eliminating an entire class of security vulnerabilities.
  • Unikernels and microVMs: Projects like Firecracker (used by AWS Lambda) and Kata Containers blend the isolation of VMs with the density of containers.

Perspective: From chroot's 50 lines of kernel code in 1979 to the hundreds of thousands of lines powering Kubernetes today, the trajectory has always been toward stronger isolation with lower overhead. Each generation solved the previous generation's limitations while maintaining backward compatibility with the fundamental Unix process model.

The container ecosystem continues to evolve, but the core concepts remain the same ones that chroot pioneered more than four decades ago: give a process an isolated view of the resources it needs, and nothing more.