Testing against mocks produces false confidence. Your unit tests pass, your mocked database returns the expected data, and then production fails because your SQL query has a syntax error that SQLite does not catch but PostgreSQL does. Docker eliminates this class of bugs: test against real databases, real message queues, and real service dependencies, with nearly the same ease as running unit tests.

This guide covers testing strategies at every level: unit tests in Docker for reproducible environments, integration tests with Testcontainers and Docker Compose, database testing with real database instances, API testing, end-to-end testing with Playwright and Selenium in containers, CI/CD test pipelines, test data management, and parallel testing for fast feedback.

Why Test in Docker

| Problem | Without Docker | With Docker |
|---|---|---|
| Database differences | Tests use SQLite, prod uses PostgreSQL | Tests use the same PostgreSQL version as prod |
| Environment setup | "Install Redis, PostgreSQL, and RabbitMQ locally" | docker compose up |
| Test isolation | Tests share a local database and interfere with each other | Each test run gets a fresh container |
| CI/CD parity | Tests pass locally but fail in CI | The same containers run everywhere |
| Cleanup | Leftover data between test runs | Containers are destroyed after tests |

Testcontainers

Testcontainers is a library that starts Docker containers programmatically from within your test code. Each test gets a fresh, isolated container that is automatically cleaned up:

Go (testcontainers-go)

package repository_test

import (
    "context"
    "database/sql"
    "testing"
    "time"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver for sql.Open
    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/modules/postgres"
    "github.com/testcontainers/testcontainers-go/wait"
)

func TestUserRepository(t *testing.T) {
    ctx := context.Background()

    // Start a PostgreSQL container
    pgContainer, err := postgres.Run(ctx, "postgres:16-alpine",
        postgres.WithDatabase("testdb"),
        postgres.WithUsername("testuser"),
        postgres.WithPassword("testpass"),
        testcontainers.WithWaitStrategy(
            wait.ForLog("database system is ready to accept connections").
                WithOccurrence(2).
                WithStartupTimeout(30*time.Second),
        ),
    )
    if err != nil {
        t.Fatal(err)
    }
    defer pgContainer.Terminate(ctx)

    // Get connection string
    connStr, err := pgContainer.ConnectionString(ctx, "sslmode=disable")
    if err != nil {
        t.Fatal(err)
    }

    // Run migrations
    db, err := sql.Open("pgx", connStr)
    if err != nil {
        t.Fatal(err)
    }

    // Run your actual migrations
    if err := runMigrations(db); err != nil {
        t.Fatal(err)
    }

    // Test your repository
    repo := NewUserRepository(db)

    t.Run("CreateUser", func(t *testing.T) {
        user := &User{
            Email: "test@example.com",
            Name:  "Test User",
        }
        err := repo.Create(ctx, user)
        if err != nil {
            t.Fatalf("expected no error, got %v", err)
        }
        if user.ID == "" {
            t.Fatal("expected user ID to be set")
        }
    })

    t.Run("FindByEmail", func(t *testing.T) {
        user, err := repo.FindByEmail(ctx, "test@example.com")
        if err != nil {
            t.Fatalf("expected no error, got %v", err)
        }
        if user.Name != "Test User" {
            t.Fatalf("expected 'Test User', got %q", user.Name)
        }
    })
}

Multiple Services with Testcontainers

func TestServiceIntegration(t *testing.T) {
    ctx := context.Background()

    // Start PostgreSQL
    // Start PostgreSQL
    pgContainer, err := postgres.Run(ctx, "postgres:16-alpine",
        postgres.WithDatabase("testdb"),
        postgres.WithUsername("test"),
        postgres.WithPassword("test"),
    )
    if err != nil {
        t.Fatal(err)
    }
    defer pgContainer.Terminate(ctx)

    // Start Redis
    redisContainer, err := testcontainers.GenericContainer(ctx,
        testcontainers.GenericContainerRequest{
            ContainerRequest: testcontainers.ContainerRequest{
                Image:        "redis:7-alpine",
                ExposedPorts: []string{"6379/tcp"},
                WaitingFor:   wait.ForLog("Ready to accept connections"),
            },
            Started: true,
        },
    )
    if err != nil {
        t.Fatal(err)
    }
    defer redisContainer.Terminate(ctx)

    // Get connection details
    pgConnStr, _ := pgContainer.ConnectionString(ctx, "sslmode=disable")
    redisHost, _ := redisContainer.Host(ctx)
    redisPort, _ := redisContainer.MappedPort(ctx, "6379")
    redisAddr := fmt.Sprintf("%s:%s", redisHost, redisPort.Port())

    // Initialize your service with real dependencies
    svc := NewUserService(pgConnStr, redisAddr)

    // Test caching behavior with real Redis
    t.Run("cached lookup", func(t *testing.T) {
        // First call: cache miss, queries database
        user1, _ := svc.GetUser(ctx, "user-1")

        // Second call: should hit cache
        user2, _ := svc.GetUser(ctx, "user-1")

        // Verify both return the same data
        if user1.ID != user2.ID {
            t.Fatal("cached result does not match")
        }
    })
}

Tip: For faster test suites, use a shared test container that is created once for the entire test package rather than per-test. Use TestMain in Go to start containers before all tests run and terminate them after. The trade-off is that tests must clean up their own data (truncate tables between tests) instead of relying on a fresh container.
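In outline, the TestMain pattern has the shape below. This is a sketch only: startSharedContainer stands in for the real testcontainers call (e.g. postgres.Run) so the lifecycle can be shown without Docker, and the names are illustrative.

```go
package main

import (
	"fmt"
	"os"
)

// startSharedContainer stands in for the once-per-package container start
// done inside TestMain; it returns a connection string and a terminate func.
func startSharedContainer() (string, func()) {
	fmt.Println("container started")
	return "postgres://shared-test-db", func() { fmt.Println("container terminated") }
}

// runAllTests stands in for m.Run(); every test reuses the shared container
// and truncates its own tables instead of getting a fresh instance.
func runAllTests(connStr string) int {
	fmt.Println("running tests against", connStr)
	return 0
}

func main() {
	// The same shape as TestMain(m *testing.M): start once, run everything,
	// terminate explicitly (a defer would be skipped by os.Exit), then exit
	// with the tests' status.
	connStr, terminate := startSharedContainer()
	code := runAllTests(connStr)
	terminate()
	os.Exit(code)
}
```

The explicit terminate-before-exit matters: TestMain must call os.Exit with the result of m.Run(), and deferred cleanup does not run past os.Exit.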

Docker Compose for Integration Tests

For tests that need multiple services, Docker Compose provides a declarative way to spin up the entire dependency graph:

# docker-compose.test.yml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: testdb
    ports:
      - "15432:5432"  # Offset port to avoid conflicts
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test -d testdb"]
      interval: 5s
      timeout: 3s
      retries: 5
    tmpfs:
      - /var/lib/postgresql/data  # RAM disk for faster tests

  redis:
    image: redis:7-alpine
    ports:
      - "16379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  nats:
    image: nats:2.10-alpine
    ports:
      - "14222:4222"
    command: -js  # Enable JetStream

#!/bin/bash
# scripts/test-integration.sh
set -e

echo "Starting test infrastructure..."
docker compose -f docker-compose.test.yml up -d --wait

echo "Running integration tests..."
set +e  # capture the test exit code instead of letting set -e abort before cleanup
DATABASE_URL="postgres://test:test@localhost:15432/testdb?sslmode=disable" \
REDIS_URL="redis://localhost:16379" \
NATS_URL="nats://localhost:14222" \
go test -v -race -tags=integration ./...
EXIT_CODE=$?
set -e

echo "Cleaning up..."
docker compose -f docker-compose.test.yml down -v

exit $EXIT_CODE
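The same guarantee can come from a trap instead of toggling set +e: the shell runs the trap handler on every exit path. The sketch below uses echo placeholders for the docker commands so only the control flow is shown.

```shell
#!/bin/sh
# cleanup runs on EXIT no matter how the script terminates (test failure,
# Ctrl-C, or an error under set -e), so the stack is always torn down.
cleanup() {
    echo "docker compose -f docker-compose.test.yml down -v"
}
trap cleanup EXIT

echo "docker compose -f docker-compose.test.yml up -d --wait"

# Stand-in for the go test invocation; "|| EXIT_CODE=$?" records a failure
# without tripping set -e, and the trap still handles teardown.
run_tests() { return 0; }
EXIT_CODE=0
run_tests || EXIT_CODE=$?

echo "tests exited with status $EXIT_CODE"
```

With a trap, cleanup also runs when the script is interrupted partway through, which the sequential version cannot guarantee.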

Use tmpfs for database data directories in test containers. This stores data in RAM instead of disk, dramatically speeding up database operations during tests. Since test data is disposable, this is safe.

Database Testing

Testing database interactions properly requires running your actual migrations and testing against the real database engine:

// test_helpers.go - Shared test database setup
package testutil

import (
    "database/sql"
    "fmt"
    "os"
    "sync"
    "testing"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver for sql.Open
)

var (
    testDB   *sql.DB
    initOnce sync.Once
)

// GetTestDB returns a shared test database connection
func GetTestDB(t *testing.T) *sql.DB {
    t.Helper()
    // Check the env var outside initOnce: t.Skip exits via runtime.Goexit,
    // which would still mark the Once as done and leave testDB nil for
    // every later caller.
    dbURL := os.Getenv("DATABASE_URL")
    if dbURL == "" {
        t.Skip("DATABASE_URL not set, skipping integration test")
    }
    initOnce.Do(func() {
        var err error
        testDB, err = sql.Open("pgx", dbURL)
        if err != nil {
            t.Fatalf("Failed to connect to test database: %v", err)
        }
    })
    if testDB == nil {
        t.Fatal("test database was not initialized")
    }
    return testDB
}

// CleanTables truncates specified tables between tests
func CleanTables(t *testing.T, db *sql.DB, tables ...string) {
    t.Helper()
    for _, table := range tables {
        _, err := db.Exec(fmt.Sprintf("TRUNCATE TABLE %s CASCADE", table))
        if err != nil {
            t.Fatalf("Failed to truncate %s: %v", table, err)
        }
    }
}
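CleanTables interpolates table names into the TRUNCATE statement because database/sql cannot bind identifiers as parameters. A small allow-list check keeps a typo or unexpected input from becoming arbitrary SQL; safeIdent below is a hypothetical helper, not part of any library.

```go
package main

import (
	"fmt"
	"regexp"
)

// identPattern matches plain, unquoted PostgreSQL identifiers: a letter
// or underscore followed by letters, digits, or underscores.
var identPattern = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

// safeIdent validates a table name before it is interpolated into a
// TRUNCATE statement, since identifiers cannot be passed as $1 params.
func safeIdent(name string) error {
	if !identPattern.MatchString(name) {
		return fmt.Errorf("unsafe identifier: %q", name)
	}
	return nil
}

func main() {
	for _, table := range []string{"users", "orders", "users; DROP TABLE users"} {
		if err := safeIdent(table); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("TRUNCATE TABLE %s CASCADE\n", table)
	}
}
```

In a test helper the check would run at the top of CleanTables, failing the test instead of printing.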

API Testing

Test your API endpoints against a running application with real dependencies:

// api_test.go
func TestAPIEndpoints(t *testing.T) {
    // Start the application with test config
    app := setupTestApp(t) // Connects to test containers
    defer app.Shutdown()

    server := httptest.NewServer(app.Handler())
    defer server.Close()

    t.Run("POST /api/users creates user", func(t *testing.T) {
        body := `{"email":"new@example.com","name":"New User"}`
        resp, err := http.Post(
            server.URL+"/api/users",
            "application/json",
            strings.NewReader(body),
        )
        if err != nil {
            t.Fatal(err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusCreated {
            t.Fatalf("expected 201, got %d", resp.StatusCode)
        }

        var result map[string]interface{}
        if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
            t.Fatalf("failed to decode response: %v", err)
        }
        if result["id"] == nil {
            t.Fatal("expected id in response")
        }
    })

    t.Run("GET /api/users/:id returns user", func(t *testing.T) {
        // ... test implementation
    })
}

End-to-End Testing with Playwright

Run end-to-end browser tests entirely in Docker with Playwright:

# docker-compose.e2e.yml
services:
  app:
    build: .
    environment:
      - DATABASE_URL=postgres://test:test@postgres:5432/testdb
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: testdb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      retries: 5

  playwright:
    image: mcr.microsoft.com/playwright:v1.42.0-jammy
    working_dir: /tests
    volumes:
      - ./e2e:/tests
    environment:
      - BASE_URL=http://app:8080
    command: npx playwright test
    depends_on:
      - app

// e2e/tests/login.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Authentication', () => {
  test('user can log in with valid credentials', async ({ page }) => {
    await page.goto('/login');

    await page.fill('[name="email"]', 'test@example.com');
    await page.fill('[name="password"]', 'testpassword');
    await page.click('button[type="submit"]');

    await expect(page).toHaveURL('/dashboard');
    await expect(page.locator('h1')).toContainText('Dashboard');
  });

  test('shows error for invalid credentials', async ({ page }) => {
    await page.goto('/login');

    await page.fill('[name="email"]', 'test@example.com');
    await page.fill('[name="password"]', 'wrongpassword');
    await page.click('button[type="submit"]');

    await expect(page.locator('.error-message')).toBeVisible();
    await expect(page).toHaveURL('/login');
  });
});

Selenium Grid in Docker

services:
  selenium-hub:
    image: selenium/hub:4.18
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"

  chrome:
    image: selenium/node-chrome:4.18
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
    shm_size: 2gb
    deploy:
      replicas: 3  # Run 3 Chrome instances in parallel

CI/CD Test Pipeline

Structure your CI pipeline to run tests in the correct order with proper isolation:

# .github/workflows/test.yml
name: Test Pipeline
on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with: { go-version: '1.22' }
      - run: go test -short -race ./...  # Skip integration tests

  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16-alpine
        env: { POSTGRES_PASSWORD: test, POSTGRES_DB: testdb }
        ports: ['5432:5432']
        options: --health-cmd pg_isready --health-interval 5s --health-retries 5
      redis:
        image: redis:7-alpine
        ports: ['6379:6379']
        options: --health-cmd "redis-cli ping" --health-interval 5s --health-retries 5
    env:
      DATABASE_URL: postgres://postgres:test@localhost:5432/testdb?sslmode=disable
      REDIS_URL: redis://localhost:6379
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with: { go-version: '1.22' }
      - run: go test -race -tags=integration ./...

  e2e-tests:
    needs: [unit-tests, integration-tests]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose -f docker-compose.e2e.yml up --build --abort-on-container-exit --exit-code-from playwright
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: e2e/playwright-report/

Test Data Management

Approaches for managing test data in Dockerized test environments:

  1. Migration + seed files - Run migrations, then apply SQL seed data. Repeatable and version-controlled.
  2. Factory functions - Create test data programmatically in test setup. Most flexible, but more code to maintain.
  3. Database snapshots - Create a Docker image with pre-populated data. Fastest startup but harder to update.
  4. Table truncation between tests - Fast cleanup that keeps the schema intact.

// Factory pattern for test data
func CreateTestUser(t *testing.T, db *sql.DB, overrides ...func(*User)) *User {
    t.Helper()
    user := &User{
        Email: fmt.Sprintf("user-%s@example.com", uuid.New().String()[:8]),
        Name:  "Test User",
        Role:  "viewer",
    }
    for _, override := range overrides {
        override(user)
    }
    err := db.QueryRow(
        "INSERT INTO users (email, name, role) VALUES ($1, $2, $3) RETURNING id",
        user.Email, user.Name, user.Role,
    ).Scan(&user.ID)
    if err != nil {
        t.Fatalf("failed to create test user: %v", err)
    }
    return user
}

// Usage in tests
func TestSomething(t *testing.T) {
    admin := CreateTestUser(t, db, func(u *User) {
        u.Role = "admin"
        u.Name = "Admin User"
    })
    // ... test with admin user
}
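For approach 3, a snapshot image can be as simple as seed SQL layered onto the official image: PostgreSQL runs everything in /docker-entrypoint-initdb.d when it initializes an empty data directory. A sketch, with illustrative file paths:

```dockerfile
# Dockerfile.testdb - PostgreSQL image with schema and seed data baked in
FROM postgres:16-alpine

# Init scripts run in lexical order on first startup, so the zz- prefix
# makes the seed data load after the schema migrations.
COPY migrations/ /docker-entrypoint-initdb.d/
COPY testdata/seed.sql /docker-entrypoint-initdb.d/zz-seed.sql
```

Strictly, this variant initializes at first container start rather than baking the populated data directory into the image; it costs a few seconds per startup but stays trivial to update when migrations change.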

Parallel Testing

Docker enables parallel test execution by providing isolated environments for each test batch:

# Run Go tests in parallel with separate databases
# Each test package gets its own database schema

# Makefile
test-parallel:
	@echo "Starting test databases..."
	docker compose -f docker-compose.test.yml up -d --wait
	@echo "Running tests (up to 8 parallel tests per package)..."
	go test -race -parallel 8 -tags=integration ./...  # -parallel caps t.Parallel tests within a package; -p caps concurrent packages
	docker compose -f docker-compose.test.yml down -v

Warning: Parallel tests that share the same database must use separate schemas or test-specific table prefixes to avoid data collisions. Alternatively, use Testcontainers to give each parallel test group its own database container, though this increases resource usage and startup time.
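One way to implement the separate-schema approach is to derive a deterministic schema name per test package, then run CREATE SCHEMA and SET search_path on that package's connections. deriveSchema below is a hypothetical helper, not part of any library:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// deriveSchema maps a Go package path to a short, valid PostgreSQL schema
// name so parallel packages sharing one database never touch each other's
// tables. The FNV hash keeps the name stable across runs while the suffix
// of the path keeps it human-readable.
func deriveSchema(pkgPath string) string {
	h := fnv.New32a()
	h.Write([]byte(pkgPath))
	base := strings.NewReplacer("/", "_", ".", "_", "-", "_").Replace(pkgPath)
	if len(base) > 20 {
		base = base[len(base)-20:]
	}
	return fmt.Sprintf("test_%s_%08x", base, h.Sum32())
}

func main() {
	// Each package would execute: CREATE SCHEMA IF NOT EXISTS <name>,
	// then SET search_path = <name> on its connection pool.
	fmt.Println(deriveSchema("github.com/acme/app/internal/repository"))
	fmt.Println(deriveSchema("github.com/acme/app/internal/billing"))
}
```

Because the name is a pure function of the package path, reruns reuse the same schema, and a teardown step can DROP SCHEMA ... CASCADE without coordinating with other packages.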

Testing Strategy Summary

| Test Type | Docker Approach | Speed | Confidence |
|---|---|---|---|
| Unit tests | Run natively (no Docker needed) | Fast (ms) | Low (mocks) |
| Integration tests | Testcontainers or Compose services | Medium (seconds) | High (real deps) |
| API tests | Full app + Compose dependencies | Medium (seconds) | High |
| E2E tests | Full stack in Compose + Playwright | Slow (minutes) | Highest |

The testing pyramid still applies: many unit tests (fast, cheap), fewer integration tests (medium, real dependencies), and a focused set of E2E tests (slow, full confidence). Docker shifts the cost equation by making integration tests nearly as easy to write as unit tests, which means you can write more of them without the traditional setup burden.

Container management tools like usulnet can help you inspect and debug test containers that fail to start or produce unexpected results. By providing a web interface to running containers, you can quickly check logs, inspect running processes, and verify database state during test development without memorizing Docker CLI commands.