Docker is a container platform that packages applications and dependencies into standardized units called containers. In the Edge Developer Platform, Docker serves three primary functions: powering local development environments through Docker Desktop, building container images in CI/CD pipelines, and providing Docker-in-Docker (DinD) execution environments for Forgejo Actions runners.
Docker provides a consistent runtime environment across development, testing, and production, ensuring applications behave identically regardless of the underlying infrastructure.
Docker plays several critical roles in the Edge Developer Platform:
**Local Development**: Docker Desktop provides developers with containerized development environments, enabling consistent tooling and dependencies across different machines. Developers can run entire application stacks locally using docker-compose configurations.
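A minimal docker-compose sketch of such a local stack (service names, images, and ports here are illustrative, not part of the platform's actual configuration):

```yaml
# docker-compose.yml: hypothetical two-service development stack
services:
  app:
    build: .                  # build the app image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # development-only credential
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Running `docker compose up` then brings up the whole stack with one command on any developer machine.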
**Image Building**: CI/CD pipelines use Docker to build container images from source code. The `docker build` command transforms Dockerfiles into layered images that can be deployed to Kubernetes or other container orchestration platforms.
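A minimal multi-stage Dockerfile sketch illustrating how each instruction becomes a layer (the base images and build command are illustrative assumptions, not the platform's actual build definition):

```dockerfile
# Each instruction below produces one image layer.
FROM golang:1.22 AS build          # hypothetical build-stage base image
WORKDIR /src
COPY . .
RUN go build -o /app .

# Second stage: copy only the binary into a small runtime image,
# so the final image excludes the build toolchain layers.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

A pipeline would then run something like `docker build -t registry.example.com/team/app:v1 .` and push the resulting image to the platform registry.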
**CI/CD Execution**: Forgejo Actions runners use Docker-in-Docker to execute workflow steps in isolated containers. Each job runs in a fresh container environment, ensuring clean state and reproducible builds.
Docker follows a client-server architecture with several key components:
**Docker Client**: Command-line tool (docker CLI) that sends API requests to the Docker daemon. Developers interact with Docker through client commands like `docker run`, `docker build`, and `docker ps`.
**Docker Daemon (dockerd)**: Background service that manages containers, images, networks, and volumes. The daemon listens for Docker API requests and coordinates with lower-level runtime components.
**Containerd**: High-level container runtime that manages container lifecycle operations. Containerd handles image transfer and storage, container execution and supervision, and low-level storage and network attachments.
**runc**: Low-level OCI (Open Container Initiative) runtime that creates and runs containers. Runc interfaces directly with Linux kernel features to spawn and manage container processes.
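Because the CLI is only an API client, the client-daemon split can be seen directly by talking to the daemon's socket yourself. A sketch, assuming a local daemon on the default Unix socket (the API version segment may differ by Docker release):

```shell
# CLI form: the docker client serializes this into an HTTP request
docker ps

# Roughly equivalent raw API call against the daemon's Unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json
```

Both return the same container list; the CLI merely formats the daemon's JSON response as a table.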
### Container Runtime Layers
Docker's layered architecture separates concerns across multiple components:
```
┌─────────────────────────────────┐
│      Docker Client (CLI)        │
│     docker run, build, push     │
└────────────┬────────────────────┘
             │ Docker API
┌────────────▼────────────────────┐
│     Docker Daemon (dockerd)     │
│   API, Image management, Auth   │
└────────────┬────────────────────┘
             │ Container API
┌────────────▼────────────────────┐
│           Containerd            │
│    Lifecycle, Image pull/push   │
└────────────┬────────────────────┘
             │ OCI Runtime API
┌────────────▼────────────────────┐
│              runc               │
│   Namespace, cgroup creation    │
└────────────┬────────────────────┘
             │ System Calls
┌────────────▼────────────────────┐
│          Linux Kernel           │
│  namespaces, cgroups, seccomp   │
└─────────────────────────────────┘
```
**Docker Daemon Layer**: Handles high-level operations like image building, authentication, and API endpoint management. The daemon translates user commands into lower-level runtime operations.
**Containerd Layer**: Manages container lifecycle independent of Docker-specific features. Containerd can be used directly by Kubernetes and other orchestrators, providing a standard container runtime interface.
**runc Layer**: Implements the OCI runtime specification, creating container processes with Linux kernel isolation features. Runc configures namespaces, cgroups, and security policies before executing container entrypoints.
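What runc consumes is a `config.json` following the OCI runtime specification. A heavily trimmed sketch of such a file (a complete default can be generated with `runc spec`; the fields shown are a subset, not a full valid bundle):

```json
{
  "ociVersion": "1.0.2",
  "process": { "args": ["/bin/sh"], "cwd": "/" },
  "root": { "path": "rootfs", "readonly": false },
  "linux": {
    "namespaces": [
      { "type": "pid" }, { "type": "network" }, { "type": "mount" },
      { "type": "uts" }, { "type": "ipc" }
    ],
    "resources": { "memory": { "limit": 268435456 } }
  }
}
```

The `linux.namespaces` and `linux.resources` sections map directly onto the kernel isolation features described in the next section.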
### Linux Kernel Features
Docker leverages several Linux kernel capabilities for container isolation:
**Namespaces**: Provide process isolation by creating separate views of system resources:
- **PID namespace**: Isolates process IDs so containers see only their own processes
- **Network namespace**: Provides separate network stacks with unique IP addresses and routing tables
- **Mount namespace**: Isolates filesystem mount points so containers have independent filesystem views
- **UTS namespace**: Separates hostname and domain name
- **IPC namespace**: Isolates inter-process communication resources like shared memory
- **User namespace**: Maps container user IDs to different host user IDs for privilege separation
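The kernel exposes a process's namespace membership as symlinks under `/proc/<pid>/ns`, which makes the mechanism easy to observe. A quick sketch on any Linux host:

```shell
# Every process belongs to one namespace of each type; the kernel
# exposes them as symlinks. Processes in the same container point at
# the same namespace objects; host processes point at different ones.
ls -l /proc/self/ns

# Example (root-only) of entering a fresh PID namespace, much as
# `docker run` does via runc; not executed here:
#   sudo unshare --pid --fork --mount-proc ps aux   # sees only its own PID 1
```

Comparing `/proc/self/ns/pid` inside and outside a container shows two different inode numbers, i.e. two distinct PID namespaces.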
**Control Groups (cgroups)**: Limit and account for resource usage. cgroup controllers cap a container's CPU time, memory, process count, and block I/O so that no single container can starve the host or its neighbors.
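These limits surface directly as `docker run` flags; a sketch (the image and values are illustrative):

```shell
# Resource flags on `docker run` map onto cgroup controllers:
#   --memory     → memory controller (hard limit)
#   --cpus       → cpu controller (quota/period)
#   --pids-limit → pids controller (max process count)
docker run -d --memory=256m --cpus=1.5 --pids-limit=100 nginx:alpine

# Inspect the limit the daemon recorded (bytes):
docker inspect --format '{{.HostConfig.Memory}}' <container-id>
```

Exceeding the memory limit triggers the kernel OOM killer inside the container's cgroup rather than affecting the host.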
**Capabilities**: Fine-grained privilege control that breaks down root privileges into discrete capabilities. Containers run with reduced capability sets, dropping dangerous privileges like CAP_SYS_ADMIN.
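Capability sets can be tightened further per container; a sketch using standard `docker run` flags (image choice is illustrative):

```shell
# Drop every capability, then add back only what the workload needs.
# NET_BIND_SERVICE allows binding ports below 1024 without full root.
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx:alpine
```

Starting from `--cap-drop=ALL` and whitelisting is generally safer than dropping individual capabilities from the default set.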
**Seccomp**: Filters system calls that containerized processes can make, reducing the kernel attack surface. Docker applies a default seccomp profile blocking ~44 dangerous syscalls.
### Image Layers and Storage
**Image Layers**: Docker images consist of read-only layers stacked on top of each other. Each Dockerfile instruction creates a new layer. Layers are shared between images, reducing storage requirements.
**Copy-on-Write (CoW)**: When a container modifies a file from an image layer, OverlayFS copies the file to the container's writable layer. The original image layer remains unchanged, enabling efficient image reuse.
**OverlayFS Structure**:
```
Merged View (merged): union presented to the container
│
├─ Container Writable Layer (upperdir)
│    └─ Modified and newly created files
│
└─ Image Layers (lowerdir)
     ├─ Layer 3 (READ-ONLY)
     ├─ Layer 2 (READ-ONLY)
     └─ Layer 1 (READ-ONLY)
```
**LowerDir**: Read-only image layers containing base filesystem and application files
**UpperDir**: Writable container layer where all changes are stored
**MergedDir**: Union mount that presents a unified view of all layers to the container
**WorkDir**: Internal working directory used by OverlayFS for atomic operations
When a container reads a file, OverlayFS serves it from the topmost layer where it exists. Writes always go to the upperdir, leaving image layers immutable.
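The same union mount can be reproduced by hand to see copy-on-write in action. A sketch with hypothetical directory names (root required for the mount):

```shell
mkdir -p lower1 lower2 upper work merged
echo "from image" > lower1/a.txt

# lowerdir takes a colon-separated list, topmost layer first:
sudo mount -t overlay overlay \
  -o lowerdir=lower2:lower1,upperdir=upper,workdir=work merged

echo "changed" > merged/a.txt    # CoW: the copy lands in upper/
cat lower1/a.txt                 # the original layer is untouched
```

After the write, `upper/a.txt` holds the modified content while `lower1/a.txt` still reads "from image", which is exactly how a container's changes leave image layers immutable.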
### Networking Architecture
Docker provides several networking modes:
**Bridge Network**: Default network mode that creates a virtual bridge (docker0) on the host. Containers connect to the bridge and receive private IP addresses. Network Address Translation (NAT) enables outbound connectivity.
**Host Network**: Container shares the host's network namespace, using the host's IP address directly. Offers maximum network performance but reduces isolation.
**Overlay Network**: Multi-host networking for container communication across different Docker hosts. Used by Docker Swarm and can be integrated with Kubernetes.
**None**: Disables networking for maximum isolation.
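The mode is selected per container with the `--network` flag; a sketch (images are illustrative):

```shell
# Default: attach to the docker0 bridge, private IP behind NAT
docker run -d --name web nginx:alpine

# Share the host's network namespace; no port mapping needed
docker run -d --network host nginx:alpine

# No network interfaces at all (loopback only)
docker run -d --network none alpine sleep infinity

# Inspect the default bridge's subnet and attached containers
docker network inspect bridge
```

With `--network host` the container binds ports directly on the host, so two such containers cannot listen on the same port.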
### Docker-in-Docker (DinD) Architecture
Docker-in-Docker runs a nested Docker daemon inside a container. This is used in CI/CD runners to provide isolated build environments:
**Privileged Container**: DinD requires privileged mode to mount filesystems and create namespaces within the container. The `--privileged` flag grants extended capabilities.
**Separate Daemon**: A complete Docker daemon runs inside the container, managing its own containers, images, and networks independently of the host daemon.
**Certificate Management**: DinD uses mutual TLS authentication between the inner client and daemon. Certificates are shared through volumes mounted at `/certs`.
**Storage Driver**: The inner Docker daemon typically uses the overlay2 or vfs storage driver. vfs offers maximum compatibility because it needs no kernel overlay support, but it copies every layer in full and therefore has much higher storage overhead than overlay2.
**Use in Forgejo Runners**:
```yaml
containers:
  - name: runner
    image: code.forgejo.org/forgejo/runner:6.4.0
    env:
      - name: DOCKER_HOST
        value: tcp://localhost:2376
      - name: DOCKER_TLS_VERIFY
        value: "1"
  - name: dind
    image: docker:28.0.4-dind
    securityContext:
      privileged: true
    volumeMounts:
      - name: docker-certs
        mountPath: /certs
```
The runner container connects to the DinD container via TCP, allowing workflow steps to execute docker commands for building and testing.
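From a workflow author's perspective this is transparent: because `DOCKER_HOST` points at the DinD sidecar, plain docker commands work inside job steps. A hypothetical workflow sketch (the `runs-on` label and file path depend on the runner's registration and are assumptions here):

```yaml
# .forgejo/workflows/build.yml: illustrative workflow
on: [push]
jobs:
  build:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t app:${{ github.sha }} .
```

The `docker build` in the step executes against the nested daemon in the DinD container, so images built during the job never touch the host's Docker daemon.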