---
title: Docker
linkTitle: Docker
weight: 10
description: Container runtime for running containerized applications
---

Overview

Docker is a container platform that packages applications and dependencies into standardized units called containers. In the Edge Developer Platform, Docker serves three primary functions: powering local development environments through Docker Desktop, building container images in CI/CD pipelines, and providing Docker-in-Docker (DinD) execution environments for Forgejo Actions runners.

Docker provides a consistent runtime environment across development, testing, and production, ensuring applications behave identically regardless of the underlying infrastructure.

Key Features

  • Container Runtime: Execute isolated application containers with process, network, and filesystem isolation
  • Image Building: Create container images using Dockerfile specifications and layer caching
  • Docker-in-Docker: Nested Docker execution for CI/CD runners and containerized build environments
  • Multi-stage Builds: Reduce final image size by separating build-time tooling from the runtime image
  • Volume Management: Persistent data storage with bind mounts and named volumes
  • Network Isolation: Software-defined networking with bridge, host, and overlay networks
  • Resource Control: CPU, memory, and I/O limits through Linux cgroups

Purpose in EDP

Docker plays several critical roles in the Edge Developer Platform:

Local Development: Docker Desktop provides developers with containerized development environments, enabling consistent tooling and dependencies across different machines. Developers can run entire application stacks locally using docker-compose configurations.

Image Building: CI/CD pipelines use Docker to build container images from source code. The docker build command transforms Dockerfiles into layered images that can be deployed to Kubernetes or other container orchestration platforms.

CI/CD Execution: Forgejo Actions runners use Docker-in-Docker to execute workflow steps in isolated containers. Each job runs in a fresh container environment, ensuring clean state and reproducible builds.

Repository

Code: Docker is an open-source project; the engine sources are maintained at https://github.com/moby/moby

Documentation: https://docs.docker.com

Getting Started

Prerequisites

  • Linux kernel 3.10+ with namespace and cgroup support
  • 64-bit processor architecture (x86_64, ARM64)
  • At least 4GB RAM for Docker Desktop
  • 20GB available disk space for images and containers

Quick Start

Install Docker on Linux:

# Install Docker using official script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group (optional, allows running docker without sudo)
sudo usermod -aG docker $USER

# Start Docker daemon
sudo systemctl start docker
sudo systemctl enable docker

Install Docker Desktop (macOS/Windows):

  1. Download Docker Desktop from docker.com
  2. Run the installer
  3. Launch Docker Desktop from Applications
  4. Verify installation in system tray/menu bar

Verification

Verify Docker installation:

# Check Docker version
docker --version

# Verify daemon is running
docker info

# Run test container
docker run hello-world

# Check running containers
docker ps

# View all containers (including stopped)
docker ps -a

# List downloaded images
docker images

Usage Examples

Running Containers

Basic Container Execution:

# Run container in foreground
docker run ubuntu:22.04 echo "Hello from container"

# Run container in background (detached)
docker run -d --name nginx-server nginx:latest

# Run interactive container with shell
docker run -it ubuntu:22.04 /bin/bash

# Run container with port mapping
docker run -d -p 8080:80 nginx:latest

# Run container with volume mount (a long-running command keeps the detached container alive)
docker run -d -v /host/data:/container/data ubuntu:22.04 sleep infinity

# Run container with environment variables
docker run -d -e POSTGRES_PASSWORD=secret postgres:15

# Run container with resource limits
docker run -d --memory=512m --cpus=1.5 nginx:latest

Container Management:

# Stop container
docker stop nginx-server

# Start stopped container
docker start nginx-server

# Restart container
docker restart nginx-server

# View container logs
docker logs nginx-server
docker logs -f nginx-server  # Follow logs

# Execute command in running container
docker exec -it nginx-server /bin/bash

# Copy files between host and container
docker cp myfile.txt nginx-server:/tmp/
docker cp nginx-server:/tmp/myfile.txt ./

# Inspect container details
docker inspect nginx-server

# View container resource usage
docker stats nginx-server

# Remove container
docker rm nginx-server
docker rm -f nginx-server  # Force remove running container

Building Images

Simple Dockerfile:

# Dockerfile
FROM ubuntu:22.04

# Install dependencies
RUN apt-get update && apt-get install -y \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Copy application files
COPY . /app

# Set environment variable
ENV APP_ENV=production

# Expose port
EXPOSE 8080

# Define default command
CMD ["./start.sh"]

Build and Tag Image:

# Build image from Dockerfile
docker build -t myapp:1.0 .

# Build with build arguments
docker build --build-arg VERSION=1.0 -t myapp:1.0 .

# Build without cache
docker build --no-cache -t myapp:1.0 .

# Tag image for registry
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Push to registry
docker push registry.example.com/myapp:1.0

# Pull from registry
docker pull registry.example.com/myapp:1.0

Multi-stage Build:

# Dockerfile with multi-stage build
FROM golang:1.21 AS builder

WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 go build -o /app/server

# Final stage
FROM alpine:3.19

RUN apk --no-cache add ca-certificates
COPY --from=builder /app/server /usr/local/bin/server

USER nobody
EXPOSE 8080
CMD ["server"]

Multi-stage builds reduce final image size by excluding build tools and intermediate artifacts.
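
Individual stages can also be built on their own, which is handy for debugging the build environment; a short sketch (image tags are illustrative):

# Build only the "builder" stage defined above
docker build --target builder -t myapp:build-env .

# Build the full multi-stage image and compare sizes
docker build -t myapp:1.0 .
docker images myapp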

Docker-in-Docker for CI/CD

DinD Container for Building Images:

# Start DinD daemon
docker run -d \
  --name dind \
  --privileged \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v docker-certs:/certs \
  docker:28.0.4-dind

# Run build container connected to DinD
docker run --rm \
  -e DOCKER_HOST=tcp://dind:2376 \
  -e DOCKER_TLS_VERIFY=1 \
  -e DOCKER_CERT_PATH=/certs/client \
  -v docker-certs:/certs:ro \
  -v $(pwd):/workspace \
  -w /workspace \
  --link dind:dind \
  docker:28.0.4-cli \
  docker build -t myapp:latest .

Kubernetes DinD Sidecar (Forgejo Runner Pattern):

apiVersion: v1
kind: Pod
metadata:
  name: forgejo-runner
spec:
  containers:
  - name: runner
    image: code.forgejo.org/forgejo/runner:6.4.0
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2376
    - name: DOCKER_TLS_VERIFY
      value: "1"
    - name: DOCKER_CERT_PATH
      value: /certs/client
    volumeMounts:
    - name: docker-certs
      mountPath: /certs
      readOnly: true
    - name: runner-data
      mountPath: /data

  - name: dind
    image: docker:28.0.4-dind
    securityContext:
      privileged: true
    env:
    - name: DOCKER_TLS_CERTDIR
      value: /certs
    volumeMounts:
    - name: docker-certs
      mountPath: /certs
    - name: docker-storage
      mountPath: /var/lib/docker

  volumes:
  - name: docker-certs
    emptyDir: {}
  - name: runner-data
    emptyDir: {}
  - name: docker-storage
    emptyDir: {}

This configuration runs a Forgejo Actions runner with a DinD sidecar, enabling containerized builds within Kubernetes pods.
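
To verify the wiring, apply the manifest and query the DinD daemon from the runner container. A minimal sketch, assuming the manifest is saved as forgejo-runner.yaml and the runner image bundles a docker CLI:

# Create the pod and wait for both containers to become ready
kubectl apply -f forgejo-runner.yaml
kubectl get pod forgejo-runner -w

# Query the DinD daemon from inside the runner container
kubectl exec forgejo-runner -c runner -- docker info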

Local Development with Docker Compose

Docker Compose Configuration:

# docker-compose.yml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/appdb
    depends_on:
      - db
      - redis
    volumes:
      - ./src:/app/src

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=appdb
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres-data:

Compose Commands:

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down

# Rebuild and restart
docker-compose up -d --build

# Run command in service
docker-compose exec app /bin/bash

# Scale service (requires removing the fixed host port mapping for "app" to avoid port conflicts)
docker-compose up -d --scale app=3

Image Management

Image Operations:

# List images
docker images

# Remove image
docker rmi myapp:1.0

# Remove unused images
docker image prune

# Remove all unused images
docker image prune -a

# Inspect image layers
docker history myapp:1.0

Registry Operations:

# Login to registry
docker login registry.example.com

# Login with credentials
echo $PASSWORD | docker login -u $USERNAME --password-stdin registry.example.com

# Push image
docker push registry.example.com/myapp:1.0

# Pull image
docker pull registry.example.com/myapp:1.0

# Search Docker Hub (docker search only queries Docker Hub, not private registries)
docker search nginx

Architecture

Docker Architecture Overview

Docker follows a client-server architecture with several key components:

Docker Client: Command-line tool (docker CLI) that sends API requests to the Docker daemon. Developers interact with Docker through client commands like docker run, docker build, and docker ps.

Docker Daemon (dockerd): Background service that manages containers, images, networks, and volumes. The daemon listens for Docker API requests and coordinates with lower-level runtime components.

Containerd: High-level container runtime that manages container lifecycle operations. Containerd handles image transfer and storage, container execution supervision, and low-level storage and network attachments.

runc: Low-level OCI (Open Container Initiative) runtime that creates and runs containers. Runc interfaces directly with Linux kernel features to spawn and manage container processes.
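
The client/daemon split is easy to observe on a Linux host; a quick sketch (output abbreviated):

# Client and server (engine) report separate versions and API levels
docker version

# dockerd and containerd run as distinct host processes
ps -e -o pid,comm | grep -E 'dockerd|containerd'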

Container Runtime Layers

Docker's layered architecture separates concerns across multiple components:

┌─────────────────────────────────┐
│      Docker Client (CLI)        │
│   docker run, build, push       │
└────────────┬────────────────────┘
             │ Docker API
┌────────────▼────────────────────┐
│      Docker Daemon (dockerd)    │
│   API, Image management, Auth   │
└────────────┬────────────────────┘
             │ Container API
┌────────────▼────────────────────┐
│         Containerd              │
│  Lifecycle, Image pull/push     │
└────────────┬────────────────────┘
             │ OCI Runtime API
┌────────────▼────────────────────┐
│           runc                  │
│  Namespace, cgroup creation     │
└────────────┬────────────────────┘
             │ System Calls
┌────────────▼────────────────────┐
│       Linux Kernel              │
│  namespaces, cgroups, seccomp   │
└─────────────────────────────────┘

Docker Daemon Layer: Handles high-level operations like image building, authentication, and API endpoint management. The daemon translates user commands into lower-level runtime operations.

Containerd Layer: Manages container lifecycle independent of Docker-specific features. Containerd can be used directly by Kubernetes and other orchestrators, providing a standard container runtime interface.

runc Layer: Implements the OCI runtime specification, creating container processes with Linux kernel isolation features. Runc configures namespaces, cgroups, and security policies before executing container entrypoints.
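
The layering is also visible in the host process tree: running containers are supervised by containerd shims rather than by dockerd itself. A quick sketch (container name is illustrative):

# Start a long-running container
docker run -d --name layer-demo nginx:latest

# The nginx process's parent (PPID) is a containerd-shim process, not dockerd
ps -eo pid,ppid,comm | grep -E 'dockerd|containerd|nginx'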

Linux Kernel Features

Docker leverages several Linux kernel capabilities for container isolation:

Namespaces: Provide process isolation by creating separate views of system resources:

  • PID namespace: Isolates process IDs so containers see only their own processes
  • Network namespace: Provides separate network stacks with unique IP addresses and routing tables
  • Mount namespace: Isolates filesystem mount points so containers have independent filesystem views
  • UTS namespace: Separates hostname and domain name
  • IPC namespace: Isolates inter-process communication resources like shared memory
  • User namespace: Maps container user IDs to different host user IDs for privilege separation

Control Groups (cgroups): Limit and account for resource usage:

  • CPU allocation and throttling
  • Memory limits and swap control
  • Block I/O bandwidth limits
  • Network bandwidth control (via tc integration)

Capabilities: Fine-grained privilege control that breaks down root privileges into discrete capabilities. Containers run with reduced capability sets, dropping dangerous privileges like CAP_SYS_ADMIN.

Seccomp: Filters system calls that containerized processes can make, reducing the kernel attack surface. Docker applies a default seccomp profile blocking ~44 dangerous syscalls.

AppArmor/SELinux: Mandatory access control systems that enforce security policies on container processes, restricting file access and operations.
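
These mechanisms can be observed from the host for a running container. A minimal sketch (container name and limits are illustrative; the cgroup path assumes cgroup v2):

# Start a container with memory and CPU limits
docker run -d --name isolation-demo --memory=256m --cpus=0.5 nginx:latest

# Find the container's main process ID as seen by the host
PID=$(docker inspect --format '{{.State.Pid}}' isolation-demo)

# Namespaces: the container process has its own namespace IDs
sudo ls -l /proc/$PID/ns/

# cgroups: the 256m memory limit is enforced by the kernel
cat "/sys/fs/cgroup$(cut -d: -f3 /proc/$PID/cgroup)/memory.max"

# Capabilities: show the effective capability bitmap of a container process
docker run --rm alpine:3.19 grep CapEff /proc/self/status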

Image Storage and OverlayFS

Docker uses storage drivers to manage image layers and container filesystems. The preferred storage driver is overlay2, which uses OverlayFS:

Image Layers: Docker images consist of read-only layers stacked on top of each other. Each Dockerfile instruction creates a new layer. Layers are shared between images, reducing storage requirements.

Copy-on-Write (CoW): When a container modifies a file from an image layer, OverlayFS copies the file to the container's writable layer. The original image layer remains unchanged, enabling efficient image reuse.

OverlayFS Structure:

Merged View (merged)
          │
          └─ Union of all layers, presented to the container
          │
Container Writable Layer (upperdir)
          │
          ├─ Modified files
          │
Image Layers (lowerdir)
          │
          ├─ Layer 3 (READ-ONLY)
          ├─ Layer 2 (READ-ONLY)
          └─ Layer 1 (READ-ONLY)

LowerDir: Read-only image layers containing base filesystem and application files

UpperDir: Writable container layer where all changes are stored

MergedDir: Union mount that presents a unified view of all layers to the container

WorkDir: Internal working directory used by OverlayFS for atomic operations

When a container reads a file, OverlayFS serves it from the topmost layer where it exists. Writes always go to the upperdir, leaving image layers immutable.
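
The overlay directories of a container can be inspected directly from the host. A short sketch, assuming the overlay2 driver and a running container named nginx-server (as in the earlier examples):

# Show the LowerDir, UpperDir, MergedDir, and WorkDir paths
docker inspect --format '{{ json .GraphDriver.Data }}' nginx-server

# A file created inside the container appears only in the UpperDir on the host
docker exec nginx-server touch /tmp/new-file
sudo find "$(docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' nginx-server)" -name new-file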

Networking Architecture

Docker provides several networking modes:

Bridge Network: Default network mode that creates a virtual bridge (docker0) on the host. Containers connect to the bridge and receive private IP addresses. Network Address Translation (NAT) enables outbound connectivity.

Host Network: Container shares the host's network namespace, using the host's IP address directly. Offers maximum network performance but reduces isolation.

Overlay Network: Multi-host networking for container communication across different Docker hosts. Used by Docker Swarm and can be integrated with Kubernetes.

None: Disables networking for maximum isolation.
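
The built-in networks and each mode can be exercised with a few commands; a brief sketch (network and container names are illustrative):

# List the default bridge, host, and none networks
docker network ls

# Create a user-defined bridge network and attach a container to it
docker network create app-net
docker run -d --name web --network app-net nginx:latest

# Run a container directly on the host network (shares the host IP and ports)
docker run -d --network host nginx:latest

# Inspect the subnet and connected containers of a network
docker network inspect app-net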

Docker-in-Docker (DinD) Architecture

Docker-in-Docker runs a nested Docker daemon inside a container. This is used in CI/CD runners to provide isolated build environments:

Privileged Container: DinD requires privileged mode to mount filesystems and create namespaces within the container. The --privileged flag grants extended capabilities.

Separate Daemon: A complete Docker daemon runs inside the container, managing its own containers, images, and networks independently of the host daemon.

Certificate Management: DinD uses mutual TLS authentication between the inner client and daemon. Certificates are shared through volumes mounted at /certs.

Storage Driver: The inner Docker daemon typically uses the vfs or overlay2 storage driver. vfs offers maximum compatibility but incurs much higher storage overhead, since every layer is stored as a full copy rather than shared via copy-on-write.

Use in Forgejo Runners:

containers:
  - name: runner
    image: code.forgejo.org/forgejo/runner:6.4.0
    env:
      - name: DOCKER_HOST
        value: tcp://localhost:2376
      - name: DOCKER_TLS_VERIFY
        value: "1"

  - name: dind
    image: docker:28.0.4-dind
    securityContext:
      privileged: true
    volumeMounts:
      - name: docker-certs
        mountPath: /certs

The runner container connects to the DinD container via TCP, allowing workflow steps to execute docker commands for building and testing.

Configuration

Docker Daemon Configuration

The Docker daemon reads configuration from /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "default-address-pools": [
    {
      "base": "172.17.0.0/16",
      "size": 24
    }
  ],
  "dns": ["8.8.8.8", "8.8.4.4"],
  "insecure-registries": [],
  "registry-mirrors": [],
  "features": {
    "buildkit": true
  },
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5
}

Key Configuration Options:

  • log-driver: Logging mechanism (json-file, syslog, journald, etc.)
  • storage-driver: Filesystem driver (overlay2, devicemapper, btrfs, zfs)
  • insecure-registries: Registries that don't require HTTPS
  • registry-mirrors: Mirror registries for faster pulls
  • buildkit: Enable BuildKit for improved build performance
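
Docker Engine 23.0 and later can check the file for syntax and unknown options before a restart; a minimal sketch:

# Validate /etc/docker/daemon.json without affecting the running daemon
sudo dockerd --validate

# Validate an alternative configuration file
sudo dockerd --validate --config-file=/tmp/daemon-test.json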

Apply Configuration Changes:

# Restart Docker daemon
sudo systemctl restart docker

# Verify configuration
docker info

Docker-in-Docker Configuration

Environment Variables:

  • DOCKER_TLS_CERTDIR: Directory for TLS certificates (typically /certs)
  • DOCKER_HOST: Docker daemon address (e.g., tcp://localhost:2376)
  • DOCKER_TLS_VERIFY: Enable TLS verification (1 or 0)
  • DOCKER_CERT_PATH: Path to client certificates

DinD Security Considerations:

DinD requires privileged mode, which grants extended capabilities. Use DinD only in trusted environments:

  • CI/CD runners in isolated namespaces
  • Development environments
  • Build systems with network isolation

Avoid using DinD for untrusted workloads or multi-tenant environments.

Integration Points

  • Kubernetes: Modern clusters use containerd directly as the container runtime; the Docker-specific dockershim was removed in Kubernetes 1.24, though images built with Docker remain fully compatible
  • Forgejo Actions: Uses Docker-in-Docker for isolated build execution in CI/CD pipelines
  • Container Registries: Pushes and pulls images to/from OCI-compliant registries
  • Development Environments: Docker Desktop provides local container runtime for development
  • Image Scanning Tools: Integrates with security scanners like Trivy and Clair
  • Monitoring Systems: Exports metrics via Prometheus exporters and logging drivers

Troubleshooting

Docker Daemon Won't Start

Problem: Docker daemon fails to start or crashes immediately

Solution:

  1. Check daemon logs:

    sudo journalctl -u docker.service
    sudo cat /var/log/docker.log
    
  2. Verify kernel support:

    docker info | grep -i kernel
    grep CONFIG_NAMESPACES /boot/config-$(uname -r)
    
  3. Test daemon in debug mode:

    sudo dockerd --debug
    
  4. Check for port conflicts:

    sudo netstat -tulpn | grep docker
    

Container Cannot Connect to Network

Problem: Container has no network connectivity or DNS resolution fails

Solution:

  1. Check container network mode:

    docker inspect container-name | grep -i network
    
  2. Verify DNS configuration:

    docker exec container-name cat /etc/resolv.conf
    docker exec container-name ping 8.8.8.8
    docker exec container-name ping google.com
    
  3. Check firewall rules:

    sudo iptables -L -n | grep DOCKER
    
  4. Restart Docker network:

    sudo systemctl restart docker
    
  5. Reset the default bridge (the built-in bridge network cannot be removed with docker network rm; delete the docker0 interface and let the daemon recreate it):

    sudo ip link delete docker0
    sudo systemctl restart docker


Out of Disk Space

Problem: Docker runs out of disk space for images or containers

Solution:

  1. Check disk usage:

    docker system df
    docker system df -v  # Verbose output
    
  2. Remove unused containers:

    docker container prune
    
  3. Remove unused images:

    docker image prune -a
    
  4. Remove unused volumes:

    docker volume prune
    
  5. Complete cleanup:

    docker system prune -a --volumes
    
  6. Configure log rotation in /etc/docker/daemon.json:

    {
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }
    

Docker Build Fails

Problem: Image build fails with errors or hangs

Solution:

  1. Build with verbose output:

    docker build --progress=plain --no-cache -t myapp .
    
  2. Check Dockerfile syntax:

    docker build --check -t myapp .
    
  3. Verify base image exists:

    docker pull ubuntu:22.04
    
  4. Increase build memory (Docker Desktop):

    • Open Docker Desktop settings
    • Increase memory allocation to 4GB+
  5. Check build context size:

    docker build --progress=plain -t myapp . 2>&1 | grep "transferring context"
    
  6. Use .dockerignore to exclude large files:

    # .dockerignore
    node_modules
    .git
    *.log
    

Docker-in-Docker Container Cannot Build Images

Problem: DinD container fails to build images or start daemon

Solution:

  1. Verify privileged mode:

    docker inspect dind | grep -i privileged
    
  2. Check DinD daemon logs:

    docker logs dind
    
  3. Verify certificate volumes:

    docker exec dind ls -la /certs
    
  4. Test Docker client connection:

    docker run --rm \
      -e DOCKER_HOST=tcp://dind:2376 \
      -e DOCKER_TLS_VERIFY=1 \
      -e DOCKER_CERT_PATH=/certs/client \
      -v docker-certs:/certs:ro \
      --link dind:dind \
      docker:28.0.4-cli \
      docker info
    
  5. Check storage driver:

    docker exec dind docker info | grep "Storage Driver"
    

Permission Denied When Running Docker

Problem: User cannot run Docker commands without sudo

Solution:

  1. Add user to docker group:

    sudo usermod -aG docker $USER
    
  2. Log out and back in to apply group changes

  3. Verify group membership:

    groups $USER
    
  4. Test Docker access:

    docker ps
    
  5. If the issue persists, check socket ownership and permissions:

    ls -l /var/run/docker.sock
    # Last resort on single-user machines only: a world-writable socket gives every local user root-equivalent access
    sudo chmod 666 /var/run/docker.sock


Note: Adding users to the docker group grants root-equivalent privileges. Only add trusted users.

Additional Resources