Compare commits


1 commit

414 changed files with 40423 additions and 35690 deletions

@@ -1,74 +0,0 @@
# Technical Documentation Guidelines
You are an expert technical writer with deep expertise in creating clear, concise, and well-structured documentation. Your goal is to produce documentation that flows naturally while maintaining technical accuracy.
## Core Principles
### 1. Conciseness and Clarity
- Use clear, direct language
- Eliminate unnecessary words and redundancy
- Make every sentence count
- Prefer active voice over passive voice
- Use short paragraphs (3-5 sentences maximum)
### 2. Structure and Organization
- Start with the most important information
- Use logical hierarchies with consistent heading levels
- Group related concepts together
- Provide clear navigation through table of contents when appropriate
- Use lists for sequential steps or related items
### 3. Flow and Readability
- Ensure smooth transitions between sections
- Connect ideas logically
- Build complexity gradually
- Use examples to illustrate concepts
- Maintain consistent terminology throughout
### 4. Technical Accuracy
- Be precise with technical terms
- Include relevant code examples that are tested and functional
- Document edge cases and limitations
- Provide accurate command syntax and parameters
- Link to related documentation when appropriate
## Documentation Structure
### Standard Document Layout
1. **Title** - Clear, descriptive heading
2. **Overview** - Brief introduction (2-3 sentences)
3. **Prerequisites** - What the reader needs to know or have
4. **Main Content** - Organized in logical sections
5. **Examples** - Practical, real-world use cases
6. **Troubleshooting** - Common issues and solutions (when applicable)
7. **Related Resources** - Links to additional documentation
### Code Examples
- Provide complete, runnable examples (see the sketch below)
- Include comments for complex logic
- Show expected output
- Use consistent formatting and syntax highlighting
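For instance, a documentation code example following these rules might look like this (the endpoint and port are illustrative placeholders, not from any specific service):
```bash
# Query the service health endpoint and pretty-print the JSON response.
curl -s http://localhost:8080/healthz | python3 -m json.tool

# Expected output:
# {
#     "status": "ok"
# }
```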
### Commands and APIs
- Show full syntax with all parameters (illustrated below)
- Indicate required vs optional parameters
- Provide parameter descriptions
- Include return values or output format
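A command reference written to these rules might look like the following sketch (`mytool` and its flags are hypothetical):
```bash
# Syntax: mytool deploy --env <name> [--timeout <seconds>] [--dry-run]
#
#   --env <name>         (required) target environment, e.g. "staging"
#   --timeout <seconds>  (optional) deploy timeout in seconds; default: 30
#   --dry-run            (optional) print the plan without applying it
#
# Output: prints the deployed revision ID; exit code 0 on success.
mytool deploy --env staging --timeout 60
```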
## Writing Style
- **Be direct**: "Configure the database" not "You should configure the database"
- **Be specific**: "Set timeout to 30 seconds" not "Set an appropriate timeout"
- **Be consistent**: Use the same terms for the same concepts
- **Be complete**: Don't assume implicit knowledge; explain as needed
## When Uncertain
**If you don't know something or need clarification:**
- Ask specific questions
- Request examples or use cases
- Clarify technical details or edge cases
- Verify terminology and naming conventions
- Confirm target audience and their expected knowledge level
Your expertise is in writing excellent documentation. Use your judgment to create documentation that serves the reader's needs effectively. When in doubt, ask rather than guess.

@@ -1,22 +0,0 @@
.github
.gitignore
.vscode
.devbox
.devcontainer
node_modules
public
resources
tmp
*.md
!content/**/*.md
TESTING.md
Taskfile.yml
devbox.json
devbox.lock
.hugo_build.lock
.htmltest.yml
.htmlvalidate.json
.markdownlint.json
# Ensure package-lock.json is included for npm ci
!package-lock.json

@@ -1,9 +0,0 @@
# Tool versions for development and CI/CD
# These versions are used in:
# - devbox.json (pinned versions)
# - Dockerfile (build arguments)
# - .github/workflows/ci.yaml (CI/CD pipeline)
NODE_VERSION=24.10.0
GO_VERSION=1.25.1
HUGO_VERSION=0.151.0
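These pinned versions are consumed by sourcing the file; a minimal sketch of doing so locally, mirroring what the CI workflows below do:
```bash
# Export every variable defined in .env.versions into the current shell.
set -a
source .env.versions
set +a
echo "Node ${NODE_VERSION}, Go ${GO_VERSION}, Hugo ${HUGO_VERSION}"
```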

@@ -1 +0,0 @@
use flake

@@ -1,79 +0,0 @@
name: ci
on:
  push:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Load versions from .env.versions
        id: versions
        run: |
          # Source the versions file
          set -a
          source .env.versions
          set +a
          echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
          echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "Node: ${NODE_VERSION}"
          echo "Go: ${GO_VERSION}"
          echo "Hugo: ${HUGO_VERSION}"
      - name: Repository meta
        id: repository
        run: |
          registry=${{ github.server_url }}
          registry=${registry##http*://}
          echo "registry=${registry}" >> "$GITHUB_OUTPUT"
          echo "registry=${registry}"
          repository="$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')"
          echo "repository=${repository}" >> "$GITHUB_OUTPUT"
          echo "repository=${repository}"
      - name: Docker meta
        uses: docker/metadata-action@v5
        id: docker
        with:
          images: ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}
          tags: |
            type=sha,prefix=
            type=ref,event=tag
      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.repository.outputs.registry }}
          username: "${{ secrets.PACKAGES_USER }}"
          password: "${{ secrets.PACKAGES_TOKEN }}"
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          buildkitd-flags: '--allow-insecure-entitlement network.host'
          driver-opts: network=host
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          allow: network.host
          network: host
          tags: ${{ steps.docker.outputs.tags }}
          labels: ${{ steps.docker.outputs.labels }}
          build-args: |
            NODE_VERSION=${{ steps.versions.outputs.node_version }}
            GO_VERSION=${{ steps.versions.outputs.go_version }}
            HUGO_VERSION=${{ steps.versions.outputs.hugo_version }}

@@ -1,32 +0,0 @@
name: delete-edge
on:
  workflow_run:
    workflows: [build]
    types:
      - completed
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Replace Image Version
        run: |
          sha="${{ github.sha }}"
          shortSha="${sha:0:7}"
          echo "Setting image version to: edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}"
          sed -i "s@###IMAGETAG###@edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}@g" ./k8s-deployment.yaml
      - name: Delete action
        uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action@main
        id: delete
        with:
          configFile: ./edgeconnectdeployment.yaml
          baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
          username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
          password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}

@@ -1,32 +0,0 @@
name: deploy-edge
on:
  workflow_run:
    workflows: [build]
    types:
      - completed
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Replace Image Version
        run: |
          sha="${{ github.sha }}"
          shortSha="${sha:0:7}"
          echo "Setting image version to: edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}"
          sed -i "s@###IMAGETAG###@edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}@g" ./k8s-deployment.yaml
      - name: Deploy action
        uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
        id: deploy
        with:
          configFile: ./edgeconnectdeployment.yaml
          baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
          username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
          password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}

@@ -1,149 +0,0 @@
name: release
on:
  push:
    tags:
      - 'v*.*.*' # Triggers on semantic versioning tags (v1.0.0, v2.1.3, etc.)
permissions:
  contents: write
  packages: write
jobs:
  release:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Load versions from .env.versions
        id: versions
        run: |
          set -a
          source .env.versions
          set +a
          echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
          echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "Node: ${NODE_VERSION}"
          echo "Go: ${GO_VERSION}"
          echo "Hugo: ${HUGO_VERSION}"
      - name: Extract version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
          echo "Version: ${VERSION}"
      - name: Repository meta
        id: repository
        run: |
          registry=${{ github.server_url }}
          registry=${registry##http*://}
          echo "registry=${registry}" >> "$GITHUB_OUTPUT"
          echo "registry=${registry}"
          repository="$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')"
          echo "repository=${repository}" >> "$GITHUB_OUTPUT"
          echo "repository=${repository}"
      - name: Docker meta
        uses: docker/metadata-action@v5
        id: docker
        with:
          images: ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=raw,value=latest
      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.repository.outputs.registry }}
          username: "${{ secrets.PACKAGES_USER }}"
          password: "${{ secrets.PACKAGES_TOKEN }}"
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          buildkitd-flags: '--allow-insecure-entitlement network.host'
          driver-opts: network=host
      - name: Build and push release images
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          allow: network.host
          network: host
          platforms: linux/amd64,linux/arm64
          tags: ${{ steps.docker.outputs.tags }}
          labels: ${{ steps.docker.outputs.labels }}
          build-args: |
            NODE_VERSION=${{ steps.versions.outputs.node_version }}
            GO_VERSION=${{ steps.versions.outputs.go_version }}
            HUGO_VERSION=${{ steps.versions.outputs.hugo_version }}
      - name: Generate changelog
        id: changelog
        run: |
          # Find the previous tag
          PREVIOUS_TAG=$(git describe --abbrev=0 --tags ${GITHUB_REF}^ 2>/dev/null || echo "")
          if [ -z "$PREVIOUS_TAG" ]; then
            echo "First release - generating the changelog from the beginning"
            CHANGELOG=$(git log --pretty=format:"- %s (%h)" --no-merges)
          else
            echo "Changelog since ${PREVIOUS_TAG}"
            CHANGELOG=$(git log ${PREVIOUS_TAG}..${GITHUB_REF} --pretty=format:"- %s (%h)" --no-merges)
          fi
          # Write to the output file (multiline)
          {
            echo 'changelog<<EOF'
            echo "$CHANGELOG"
            echo 'EOF'
          } >> "$GITHUB_OUTPUT"
      - name: Create Forgejo/Gitea Release
        uses: actions/forgejo-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          direction: upload
          release-dir: .
          title: "Release ${{ steps.version.outputs.version }}"
          tag: ${{ github.ref_name }}
          token: ${{ secrets.GITHUB_TOKEN }}
          release-notes: |
            # Release ${{ steps.version.outputs.version }}

            ## Docker Images

            Multi-platform images (linux/amd64, linux/arm64) are available:

            ```bash
            docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:${{ steps.version.outputs.version }}
            docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:latest
            ```

            ## Build Versions

            - Node.js: ${{ steps.versions.outputs.node_version }}
            - Go: ${{ steps.versions.outputs.go_version }}
            - Hugo: ${{ steps.versions.outputs.hugo_version }}

            ## Changes

            ${{ steps.changelog.outputs.changelog }}

@@ -1,51 +0,0 @@
name: Hugo Site Tests
on:
  # push:
  #   branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
          extended: true
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '24'
          cache: 'npm'
      - name: Install dependencies
        run: |
          npm ci
          go install github.com/wjdp/htmltest@latest
      - name: Run tests
        run: |
          npm run test:build
          npm run test:markdown
          npm run test:html
      - name: Run link checker
        run: htmltest
        continue-on-error: true
      - name: Upload htmltest results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: htmltest-report
          path: tmp/.htmltest/

.gitignore vendored

@@ -1,41 +1,7 @@
# Hugo
.hugo_build.lock
public/
resources/_gen/
public
# Node.js / NPM
node_modules/
# for npm devcontainer cli
package-lock.json
package.json
# Test outputs
tmp/
.htmltest/
# devbox
.devbox/
# Task cache
.task/
# Generated build data
data/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db
# Logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
### direnv ###
.direnv
.envrc

@@ -1,24 +0,0 @@
DirectoryPath: "public"
CheckExternal: true
CheckInternalHash: true
IgnoreURLs:
- "^https://example\\.docsy\\.dev"
- "^https://example\\.com"
- "^http://localhost"
- "^/livereload\\.js"
- "^https://cnoe\\.localtest\\.me"
- "^https://technologyconversations\\.com"
- "^https://developers\\.redhat\\.com"
- "^https://platformengineering\\.org"
- "^https://cnoe\\.io"
- "^https://console\\.otc\\.t-systems\\.com"
IgnoreInternalURLs:
- "/docs-old/"
- "/blog/"
- "/docs/v1/"
- "/docs/architecture/"
- "/docs/documentation/"
IgnoreInternalEmptyHashes: true
IgnoreDirectoryMissingTrailingSlash: true
IgnoreAltMissing: true
CheckDoctype: true

@@ -1,27 +0,0 @@
{
"extends": ["html-validate:recommended"],
"rules": {
"no-inline-style": "off",
"require-sri": "off",
"no-trailing-whitespace": "off",
"void-style": "off",
"wcag/h30": "off",
"wcag/h32": "off",
"wcag/h37": "off",
"no-redundant-role": "off",
"unique-landmark": "off",
"no-multiple-main": "off",
"no-dup-id": "off",
"element-permitted-content": "off",
"attr-quotes": "off",
"empty-heading": "off",
"element-required-content": "off",
"long-title": "off",
"no-raw-characters": "off",
"valid-id": "off",
"doctype-style": "off"
},
"elements": [
"html5"
]
}

@@ -1,8 +0,0 @@
{
"default": true,
"MD013": false,
"MD033": false,
"MD041": false,
"MD024": { "siblings_only": true },
"MD025": { "front_matter_title": "" }
}

@@ -1,4 +0,0 @@
# Ignore v1 documentation (legacy content with pre-existing lint issues)
content/en/docs/v1/**
content/en/blog/**
content/en/docs-old/**

@@ -1,69 +0,0 @@
# Build arguments for version pinning (matching devbox.json)
ARG NODE_VERSION=24.10.0
ARG GO_VERSION=1.25.1
ARG HUGO_VERSION=0.151.0
# Build stage - use same versions as local devbox environment
FROM node:${NODE_VERSION}-bookworm AS builder
# Get target architecture for multi-platform builds
ARG TARGETARCH
# Install Git (needed for Hugo's enableGitInfo)
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install Go (map TARGETARCH: amd64->amd64, arm64->arm64)
ARG GO_VERSION
RUN wget -q https://go.dev/dl/go${GO_VERSION}.linux-${TARGETARCH}.tar.gz && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${TARGETARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${TARGETARCH}.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
ENV GOPATH="/go"
ENV PATH="${GOPATH}/bin:${PATH}"
# Install Hugo extended (map TARGETARCH: amd64->amd64, arm64->arm64)
ARG HUGO_VERSION
RUN wget -q https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
tar -xzf hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
mv hugo /usr/local/bin/ && \
rm hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
hugo version
WORKDIR /src
# Copy package files and install npm dependencies
COPY package*.json ./
RUN npm ci
# Copy all source files
COPY . .
# Build Hugo site (Git info is taken from the current build context, not from .git)
# Hugo looks for .git, does not find it, and automatically skips the Git info
RUN hugo --gc --minify
# Runtime stage - nginx to serve static content
FROM nginx:1.27-alpine
# Copy built site from builder
COPY --from=builder /src/public /usr/share/nginx/html
# Copy custom nginx config
RUN echo 'server {' > /etc/nginx/conf.d/default.conf && \
echo ' listen 80;' >> /etc/nginx/conf.d/default.conf && \
echo ' server_name _;' >> /etc/nginx/conf.d/default.conf && \
echo ' root /usr/share/nginx/html;' >> /etc/nginx/conf.d/default.conf && \
echo ' index index.html;' >> /etc/nginx/conf.d/default.conf && \
echo '' >> /etc/nginx/conf.d/default.conf && \
echo ' location / {' >> /etc/nginx/conf.d/default.conf && \
echo ' try_files $uri $uri/ /index.html;' >> /etc/nginx/conf.d/default.conf && \
echo ' }' >> /etc/nginx/conf.d/default.conf && \
echo '' >> /etc/nginx/conf.d/default.conf && \
echo ' gzip on;' >> /etc/nginx/conf.d/default.conf && \
echo ' gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;' >> /etc/nginx/conf.d/default.conf && \
echo '}' >> /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

@@ -1,33 +1,88 @@
# EDP - Edge Developer Platform
Documentation for the edgeDeveloperFramework (eDF) project and the resulting Edge Developer Platform (EDP) product suite.
## Quick Start
```bash
# Install dependencies
task deps
# Start local development server
task serve
# Run tests
task test
# Build production site
task build
```
## Documentation
* [Developer Guide](doc/README-developer.md)
* [Technical Writer Guide](doc/README-technical-writer.md)
* [Release Notes](doc/RELEASE.md)
## Project
This is a Hugo-based documentation site for the Edge Developer Platform, built as part of the IPCEI-CIS project.
**Website:** Access the documentation at the deployed URL or run locally with `task serve`
For detailed information, see the documentation in the `doc/` folder.
# IPCEICIS-DeveloperFramework Documentation
This repo contains business and architectural design and documentation of the DeveloperFramework subproject of IPCEI-CIS.
## How to read and contribute to this documentation locally
The documentation is written in [Hugo](https://gohugo.io) format.
Hugo is a static site generator, so to view the rendered documentation site you need a running Hugo processor. Therefore there is
* either a Hugo [`.devcontainer` definition](https://containers.dev/) - just run a devcontainer-aware IDE or CLI, e.g. Visual Studio Code
* or a Hugo [`Devbox` definition](https://www.jetify.com/devbox/) - in this case just run a devbox shell
## Local installation of the Hugo documentation system
We describe two possible ways (one with devcontainer, one with devbox) to get the Hugo documentation system running locally.
For both, prepare with the following three steps:
1. open a terminal on your local box
2. clone this repo: `git clone https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/website-and-documentation`
3. change to the repo working dir: `cd website-and-documentation`
### Possibility 1: Hugo in a devcontainer
[`devcontainers`](https://containers.dev/) run containers as virtual systems on your local box. The definition is in the `.devcontainer` folder.
As a prerequisite, you need a container daemon running, e.g. Docker.
There are several options to create and run the devcontainer - we present two here:
#### Option 1: Run the container triggered by and connected to an IDE, e.g. VS Code
1. open the repo in a [devcontainer-aware tool/IDE](https://containers.dev/supporting) (e.g. `code .`)
1. start the `devcontainer` (in VSC it's `F1 + Reopen in Devcontainer`)
1. when the container is up & running just open your browser with `http://localhost:1313/`
#### Option 2: Run the container natively
An alternative way to run the container is the [devcontainer CLI](https://github.com/devcontainers/cli), which lets you run the devcontainer without VS Code.
As a prerequisite, complete the install steps for the devcontainer CLI.
1. start the devcontainer by running: `devcontainer up --workspace-folder .`
1. find out the IP address of the devcontainer by using `docker ps` and `docker inspect <id of container>` (see the one-liner below)
1. when the container is up & running just open your browser at `http://<DOCKER IP>:1313/`
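A minimal sketch for extracting the container IP directly (assumes the container is attached to a bridge-style network; substitute your container ID):
```bash
# List running containers, then print the devcontainer's IP address.
docker ps
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <id of container>
```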
### Possibility 2: Hugo in a devbox
[`Devboxes`](https://www.jetify.com/devbox/) are locally isolated environments, managed by the [Nix package manager](https://nix.dev/). So first [prepare the devbox](https://www.jetify.com/docs/devbox/installing_devbox/).
Then
1. `devbox shell`
1. In the shell: `hugo serve`
## Editing
### Documentation language
The documentation uses the [Docsy theme](https://www.docsy.dev/).
To edit content, go to the `content` folder and edit it according to the [Docsy documentation](https://www.docsy.dev/docs/adding-content/); a minimal page skeleton is sketched below.
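A minimal sketch of scaffolding a new page (the path and front-matter values are illustrative, not prescribed by this repo):
```bash
# Create a new docs page with minimal Docsy front matter.
cat > content/en/docs/my-new-page.md <<'EOF'
---
title: "My New Page"
weight: 10
description: One-line summary shown in section listings.
---
Page body in Markdown.
EOF
```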
## Committing
After finishing a unit of work, commit and push.
# Annex
## Installation steps illustrated
When you run the above installation, the outputs could typically look like this:
### In Visual Studio Code
#### Reopen in Container
![vsc-f1](./assets/images/vsc-f1.png)
#### Hugo server is running and (typically) listens on localhost:1313
After some installation time you have:
![vsc-hugo](./assets/images/vsc-hugo.png)
### Final result in a web browser
![browser](./assets/images/browser.png)

@@ -1,225 +0,0 @@
version: '3'
vars:
  HUGO_CMD: hugo
  NPM_CMD: npm
tasks:
  default:
    desc: Show available tasks
    cmds:
      - task --list

  # Build tasks
  build:
    desc: Build Hugo site
    deps:
      - deps:ensure-npm
      - build:generate-info
    cmds:
      - "{{.HUGO_CMD}} --gc --minify"
  build:dev:
    desc: Build Hugo site for development
    deps:
      - deps:ensure-npm
      - build:generate-info
    cmds:
      - "{{.HUGO_CMD}}"
  build:generate-info:
    desc: Generate build information (git commit, version, etc.)
    sources:
      - .git/HEAD
      - .git/refs/**/*
    generates:
      - data/build_info.json
    cmds:
      - ./scripts/generate-build-info.sh
  serve:
    desc: Start Hugo dev server
    deps:
      - deps:ensure-npm
      - build:generate-info
    cmds:
      - "{{.HUGO_CMD}} server --noHTTPCache"
  clean:
    desc: Clean build artifacts
    cmds:
      - rm -rf public resources/_gen .hugo_build.lock

  # Test tasks
  test:
    desc: Run all tests
    deps:
      - test:build
      - test:markdown
      - test:html
      - likec4:validate
  test:hugo:
    desc: Run Hugo-only tests (markdown, HTML, build)
    deps:
      - test:build
      - test:markdown
      - test:html
  test:full:
    desc: Run all tests including link check (may have errors in legacy content)
    deps:
      - test:build
      - test:markdown
      - test:html
      - test:links
      - likec4:validate
  test:quick:
    desc: Run quick tests (without link check)
    deps:
      - test:build
      - test:markdown
      - likec4:validate
  test:build:
    desc: Test Hugo build
    deps:
      - deps:ensure-npm
      - build:generate-info
    cmds:
      - "{{.HUGO_CMD}} --gc --minify --logLevel info"
  test:markdown:
    desc: Lint markdown files
    deps:
      - deps:ensure-npm
    cmds:
      - "{{.NPM_CMD}} run test:markdown"
  test:html:
    desc: Validate HTML
    deps:
      - deps:ensure-npm
    cmds:
      - "{{.NPM_CMD}} run test:html"
  test:links:
    desc: Check links (skips legacy content)
    cmds:
      - |
        # Move legacy dirs outside public temporarily
        mkdir -p /tmp/htmltest-backup-$$
        if [ -d "public/docs-old" ]; then mv public/docs-old /tmp/htmltest-backup-$$/; fi
        if [ -d "public/blog" ]; then mv public/blog /tmp/htmltest-backup-$$/; fi
        if [ -d "public/_print/docs-old" ]; then mv public/_print/docs-old /tmp/htmltest-backup-$$/docs-old-print; fi
        # Run htmltest
        htmltest || EXIT_CODE=$?
        # Restore directories
        if [ -d "/tmp/htmltest-backup-$$/docs-old" ]; then mv /tmp/htmltest-backup-$$/docs-old public/; fi
        if [ -d "/tmp/htmltest-backup-$$/blog" ]; then mv /tmp/htmltest-backup-$$/blog public/; fi
        if [ -d "/tmp/htmltest-backup-$$/docs-old-print" ]; then mv /tmp/htmltest-backup-$$/docs-old-print public/_print/docs-old; fi
        rm -rf /tmp/htmltest-backup-$$
        # Exit with the original exit code
        exit ${EXIT_CODE:-0}

  # LikeC4 tasks
  likec4:generate:
    desc: Generate LikeC4 webcomponent (includes all architecture projects)
    cmds:
      - npx likec4 codegen webcomponent --webcomponent-prefix likec4 --outfile static/js/likec4-webcomponent.js resources/edp-likec4 resources/doc-likec4
  likec4:validate:
    desc: Validate LikeC4 models
    cmds:
      - echo "Validating EDP architecture models..."
      - npx likec4 validate --ignore-layout resources/edp-likec4
      - echo "Validating Documentation platform models..."
      - npx likec4 validate --ignore-layout resources/doc-likec4
      - echo "✓ All LikeC4 models validated successfully"
  likec4:validate:layout:
    desc: Validate LikeC4 models including layout
    cmds:
      - echo "Validating EDP architecture models (including layout)..."
      - npx likec4 validate resources/edp-likec4
      - echo "Validating Documentation platform models (including layout)..."
      - npx likec4 validate resources/doc-likec4
      - echo "✓ All LikeC4 models and layouts validated successfully"
  likec4:update:
    desc: Update LikeC4 to latest version
    cmds:
      - npm update likec4 --prefix resources/edp-likec4
      - npm update likec4 --prefix resources/doc-likec4
      - echo "✓ LikeC4 updated in both projects"

  # Development tasks
  deps:ensure-npm:
    desc: Ensure npm dependencies are installed
    sources:
      - package.json
      - package-lock.json
    generates:
      - node_modules/.package-lock.json
    cmds:
      - "{{.NPM_CMD}} ci"
    status:
      - test -d node_modules
  deps:install:
    desc: Install all dependencies
    cmds:
      - "{{.NPM_CMD}} ci"
      - "{{.HUGO_CMD}} mod get -u"
      - "{{.HUGO_CMD}} mod tidy"
  deps:update:
    desc: Update dependencies
    cmds:
      - devbox update
      - "{{.NPM_CMD}} update"
      - "{{.HUGO_CMD}} mod get -u"

  # CI/CD
  ci:
    desc: Run CI pipeline locally
    deps:
      - test
  build:oci-image:
    desc: Build OCI/Docker image with versions from .env.versions
    cmds:
      - |
        set -a
        source .env.versions
        set +a
        echo "Building OCI image with versions:"
        echo "  NODE_VERSION=${NODE_VERSION}"
        echo "  GO_VERSION=${GO_VERSION}"
        echo "  HUGO_VERSION=${HUGO_VERSION}"
        docker build --network=host \
          --build-arg NODE_VERSION=${NODE_VERSION} \
          --build-arg GO_VERSION=${GO_VERSION} \
          --build-arg HUGO_VERSION=${HUGO_VERSION} \
          -t ipceicis-developerframework:latest \
          -t ipceicis-developerframework:$(git rev-parse --short HEAD) \
          .
  test:oci-image:
    desc: Test the built OCI image
    deps:
      - build:oci-image
    cmds:
      - |
        echo "Starting container on port 8080..."
        docker run -d -p 8080:80 --name hugo-test ipceicis-developerframework:latest
        sleep 2
        echo "Testing endpoint..."
        curl -f http://localhost:8080 > /dev/null && echo "✓ Container is running and responding" || echo "✗ Container test failed"
        echo "Cleaning up..."
        docker stop hugo-test
        docker rm hugo-test
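The `build:generate-info` task above invokes `scripts/generate-build-info.sh`, which is not shown in this diff. A minimal sketch of what such a script might contain, assuming it only records the Git commit, branch, and build time into `data/build_info.json` (field names hypothetical):
```bash
#!/usr/bin/env bash
# Hypothetical sketch - the real scripts/generate-build-info.sh is not part of this diff.
set -euo pipefail

mkdir -p data
cat > data/build_info.json <<EOF
{
  "commit": "$(git rev-parse --short HEAD)",
  "branch": "$(git rev-parse --abbrev-ref HEAD)",
  "build_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
```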

@@ -1,28 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: docs
  namespace: argocd
  labels:
    env: prod
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
  destination:
    name: in-cluster
    namespace: docs
  syncOptions:
    - CreateNamespace=true
  sources:
    - repoURL: https://edp.buildth.ing/DevFW-CICD/website-and-documentation
      targetRevision: HEAD
      path: argocd-stack/helm
      helm:
        parameters:
          - name: image.tag
            value: $ARGOCD_APP_REVISION_SHORT

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

@@ -1,24 +0,0 @@
apiVersion: v2
name: helm
description: Deploy documentation to edp.buildth.ing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: docs
  name: docs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs
  strategy: {}
  template:
    metadata:
      labels:
        app: docs
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          name: docs
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: docs
spec:
  selector:
    app: docs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docs
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: docs.edp.buildth.ing
      http:
        paths:
          - backend:
              service:
                name: docs
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - docs.edp.buildth.ing
      secretName: docs-edp-buildth-ing-tls

@@ -1,4 +0,0 @@
image:
  repository: edp.buildth.ing/devfw-cicd/website-and-documentation
  tag: "UNKNOWN_TAG"

@@ -1,524 +1,6 @@
/*
* Telekom-inspired Theme Variables
* Based on https://edp.buildth.ing Telekom Design System
*/
// Bootstrap/Docsy Variable Overrides (must be before imports)
$primary: #E20074 !default;
$secondary: #B6B6B6 !default;
$success: #00b367 !default;
$info: #0070ad !default;
$warning: #ffcc00 !default;
$danger: #d52b1e !default;
$dark: #000000 !default;
$light: #f9fafb !default;
// Link colors
$link-color: #E20074 !default;
$link-hover-color: #C2005E !default;
// Body
$body-bg: #ffffff !default;
$body-color: #000000 !default;
// Navbar
$navbar-light-color: #000000 !default;
$navbar-light-hover-color: #E20074 !default;
$navbar-light-active-color: #E20074 !default;
// Fonts
$font-family-sans-serif: 'TeleNeo', -apple-system, "Segoe UI", system-ui, Roboto, "Helvetica Neue", Arial, sans-serif !default;
$font-family-base: $font-family-sans-serif !default;
// Telekom TeleNeo Fonts
@font-face {
font-family: 'TeleNeo';
src: url('../fonts/TeleNeoOffice-Thin.53627df9.ttf') format('truetype');
font-weight: 300;
font-style: normal;
}
@font-face {
font-family: 'TeleNeo';
src: url('../fonts/TeleNeoOffice-Regular.b0a2cff1.ttf') format('truetype');
font-weight: 400;
font-style: normal;
}
@font-face {
font-family: 'TeleNeo';
src: url('../fonts/TeleNeoOffice-Medium.79fb426d.ttf') format('truetype');
font-weight: 500;
font-style: normal;
}
@font-face {
font-family: 'TeleNeo';
src: url('../fonts/TeleNeoOffice-Bold.a7bb592b.ttf') format('truetype');
font-weight: 600;
font-style: normal;
}
@font-face {
font-family: 'TeleNeo';
src: url('../fonts/TeleNeoOffice-ExtraBold.fbe9fe42.ttf') format('truetype');
font-weight: 700;
font-style: normal;
}
// Primary Colors - Telekom Magenta
:root {
// Telekom Primary Color (Magenta)
--color-primary: #E20074;
--color-primary-contrast: #ffffff;
--color-primary-dark-1: #C2005E;
--color-primary-dark-2: #A5004D;
--color-primary-dark-3: #87003D;
--color-primary-light-1: #E7338A;
--color-primary-light-2: #EC66A1;
--color-primary-light-3: #F299B8;
--color-primary-light-4: #F7CCCF;
--color-primary-light-5: #FCEFF6;
--color-primary-light-6: #FFF5FA;
// Secondary Colors
--color-secondary: #B6B6B6;
--color-secondary-dark: #6a7178;
--color-secondary-light: #f9fafb;
// Semantic Colors
--color-success: #00b367;
--color-warning: #ffcc00;
--color-error: #d52b1e;
--color-info: #0070ad;
// Text Colors
--color-text: #000000;
--color-text-light: #666666;
--color-text-dark: #000000;
// Background Colors
--color-body: #ffffff;
--color-card: #F1F1F1;
--color-hover: #F1F1F1;
--color-active: #F1F1F1;
// Navigation
--color-nav-bg: #ffffff;
--color-nav-text: #000000;
--nav-border-color: #B6B6B6;
// UI Elements
--color-input-background: #ffffff;
--color-input-border: #cccccc;
--color-input-text: #000000;
--color-box-body: #f2f2f2;
--color-box-header: #e6e6e6;
// Shadows & Overlays
--color-shadow: rgba(0, 0, 0, 0.15);
--color-overlay-backdrop: rgba(0, 0, 0, 0.5);
// Font Settings
--font-family-base: 'TeleNeo', -apple-system, "Segoe UI", system-ui, Roboto, "Helvetica Neue", Arial, sans-serif;
--nav-text-font-weight: 600;
}
// Apply TeleNeo font globally
body {
font-family: var(--font-family-base);
}
// Dark Mode Support
@media (prefers-color-scheme: dark) {
:root {
// Primary Colors remain same
--color-primary: #E20074;
--color-primary-contrast: #000000;
// Dark Mode Adjustments
--color-primary-dark-1: #E7338A;
--color-primary-dark-2: #EC66A1;
--color-primary-light-1: #C2005E;
--color-primary-light-2: #A5004D;
// Secondary Colors for Dark Mode
--color-secondary: #1c1c1e;
--color-secondary-dark: #4D4D4D;
--color-secondary-light: #0D0D0D;
// Text Colors
--color-text: #FFFFFF;
--color-text-light: #CCCCCC;
--color-text-dark: #FFFFFF;
// Background Colors
--color-body: #000000;
--color-card: #1c1c1e;
--color-hover: #1c1c1e;
--color-active: #0D0D0D;
// Navigation
--color-nav-bg: #000000;
--color-nav-text: #FFFFFF;
// UI Elements
--color-input-background: #1c1c1e;
--color-input-border: #4D4D4D;
--color-input-text: #FFFFFF;
--color-box-body: #000000;
--color-box-header: #1A1A1A;
// Semantic Colors for Dark Mode
--color-success: #00A94F;
--color-warning: #FFCC00;
--color-error: #D52B1E;
--color-info: #0070AD;
// Shadows
--color-shadow: rgba(0, 0, 0, 0.35);
}
}
// Telekom-inspired Component Styling
.td-navbar {
background-color: var(--color-nav-bg) !important;
border-bottom: 1px solid var(--nav-border-color);
.navbar-brand,
.nav-link {
color: var(--color-nav-text) !important;
font-weight: var(--nav-text-font-weight);
}
.nav-link:hover,
.nav-link.active {
color: var(--color-primary) !important;
background: transparent !important;
}
}
// Primary Buttons - Telekom Magenta
.btn-primary {
background-color: var(--color-primary) !important;
border-color: var(--color-primary) !important;
color: var(--color-primary-contrast) !important;
&:hover {
background-color: var(--color-primary-dark-1) !important;
border-color: var(--color-primary-dark-1) !important;
}
&:active,
&:focus {
background-color: var(--color-primary-dark-2) !important;
border-color: var(--color-primary-dark-2) !important;
}
}
// Links
a {
color: var(--color-primary);
&:hover {
color: var(--color-primary-dark-1);
}
}
// Cards with Telekom Style
.card {
background-color: var(--color-card);
border: 1px solid var(--nav-border-color);
&:hover {
background-color: var(--color-hover);
}
}
// Active/Selected States - REMOVED harsh black backgrounds
// Now using soft Telekom colors instead
// Sidebar Navigation
.td-sidebar-nav {
.td-sidebar-link {
&:hover {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
&.active {
background-color: var(--color-primary-light-6);
color: var(--color-primary);
font-weight: 500;
border-left: 3px solid var(--color-primary);
}
}
// All list items in sidebar
li a {
&:hover {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
}
}
// Main navigation tabs
.td-sidebar {
.td-sidebar-nav__section {
.ul-1 > li > a {
&.active,
&.td-sidebar-link--active {
background-color: var(--color-primary-light-6) !important;
color: var(--color-primary) !important;
font-weight: 500;
border-left: 3px solid var(--color-primary);
}
&:hover {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
}
// All nested levels
li a:hover {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
}
}
// Top navigation breadcrumb area
.td-sidebar__inner {
.td-sidebar-nav__section-title {
&.active {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
}
}
// Breadcrumb navigation in header
.breadcrumb {
.active {
color: var(--color-primary) !important;
}
a:hover {
color: var(--color-primary-dark-1) !important;
}
}
// Remove harsh black backgrounds globally
.active,
.selected {
background-color: var(--color-primary-light-6) !important;
color: var(--color-primary) !important;
}
// Softer hover states
*:hover {
transition: all 0.2s ease-in-out;
}
// Override any dark/black hover backgrounds in navigation
nav, .td-sidebar, .td-sidebar-nav {
a:hover,
li:hover > a,
.nav-link:hover {
background-color: var(--color-primary-light-5) !important;
color: var(--color-primary) !important;
}
}
// Code Blocks
pre,
code {
background-color: var(--color-box-body);
border: 1px solid var(--color-input-border);
}
// Inline code (backticks in text)
code {
background-color: var(--color-primary-light-6);
color: var(--color-primary-dark-2);
padding: 2px 6px;
border-radius: 3px;
border: 1px solid var(--color-primary-light-3);
font-size: 0.9em;
}
// Code blocks (fenced code)
pre {
background-color: var(--color-box-body);
border: 1px solid var(--color-input-border);
padding: 1rem;
border-radius: 4px;
code {
background-color: transparent;
border: none;
padding: 0;
color: inherit;
}
}
// Tables
table {
thead {
background-color: var(--color-box-header);
}
tbody tr:hover {
background-color: var(--color-hover);
}
}
// Alerts/Notifications
.alert-success {
background-color: var(--color-success);
border-color: var(--color-success);
}
.alert-warning {
background-color: var(--color-warning);
border-color: var(--color-warning);
color: #000000;
}
.alert-danger {
background-color: var(--color-error);
border-color: var(--color-error);
}
.alert-info {
background-color: var(--color-info);
border-color: var(--color-info);
}
// Docsy Homepage Components
.td-cover-block {
background-color: var(--color-primary) !important;
h1, h2, h3, h4, h5, h6, p {
color: var(--color-primary-contrast) !important;
}
}
// Lead blocks with primary color background
.td-block--primary,
section[class*="bg-primary"],
section[class*="color-primary"] {
background-color: var(--color-primary) !important;
* {
color: #FFFFFF !important;
}
h1, h2, h3, h4, h5, h6, p, a, .lead {
color: #FFFFFF !important;
text-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
}
// Prevent white background on hover
&:hover,
*:hover {
background-color: transparent !important;
color: #FFFFFF !important;
}
a:hover {
color: #FFFFFF !important;
text-decoration: underline;
}
}
.td-box {
background-color: var(--color-card);
border: 1px solid var(--nav-border-color);
&:hover {
background-color: var(--color-hover);
border-color: var(--color-primary);
}
&--primary {
background-color: var(--color-primary);
border-color: var(--color-primary);
color: var(--color-primary-contrast);
}
&--secondary {
background-color: var(--color-secondary);
border-color: var(--color-secondary);
}
}
// Hero/Cover sections
.td-cover {
background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-primary-dark-2) 100%);
.display-1, .display-2, .display-3, .display-4 {
color: var(--color-primary-contrast) !important;
}
}
// Section backgrounds
.td-section {
&--primary {
background-color: var(--color-primary-light-6);
}
&--secondary {
background-color: var(--color-secondary-light);
}
}
// Feature boxes
.td-feature {
border: 1px solid var(--nav-border-color);
background-color: var(--color-card);
&:hover {
border-color: var(--color-primary);
box-shadow: 0 4px 12px var(--color-shadow);
}
}
// Feature blocks on homepage (blocks/feature)
.td-box--dark,
.td-box--colored,
section[class*="bg-dark"] .td-box,
section[class*="color-dark"] .td-box {
.h2, .h3, .h4, .h5, h2, h3, h4, h5, p, a {
color: #FFFFFF !important;
}
&:hover {
background-color: rgba(0, 0, 0, 0.8) !important;
.h2, .h3, .h4, .h5, h2, h3, h4, h5, p, a {
color: #FFFFFF !important;
}
}
}
// Ensure text stays visible in dark sections
section[class*="bg-dark"],
section[class*="color-dark"] {
* {
color: #FFFFFF !important;
}
.td-box, .card {
&:hover {
background-color: rgba(0, 0, 0, 0.8) !important;
* {
color: #FFFFFF !important;
}
}
}
}
/*
Add styles or override variables from the theme here.
*/

@@ -3,84 +3,5 @@ title: IPCEI-CIS Developer Framework
---
{{< blocks/cover title="IPCEI-CIS Developer Framework" image_anchor="top" height="full" >}}
<p class="lead mt-5">
A comprehensive enterprise development platform enabling teams to build, deploy, and operate cloud-native applications with ease.
</p>
{{< blocks/link-down color="info" >}}
{{< /blocks/cover >}}
{{% blocks/lead color="primary" %}}
The IPCEI-CIS Developer Framework provides everything you need to deliver modern applications at scale.
Built on open standards and battle-tested technologies.
{{% /blocks/lead %}}
{{% blocks/section color="dark" type="row" %}}
{{% blocks/feature icon="fa-solid fa-diagram-project" title="Edge Developer Platform (EDP)" url="/docs/edp/" %}}
Understand EDP as the developer platform hub (Forgejo, CI/CD, deployment, operations) and how it connects inner loop and outer loop workflows.
**Dive into EDP docs →**
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-solid fa-cloud" title="EdgeConnect Cloud" url="/docs/edgeconnect/" %}}
Learn what EdgeConnect is, how it is consumed via stable entry points (CLI, SDK, Terraform), and how EDP integrates with it as a deployment target.
**Explore EdgeConnect →**
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-solid fa-scale-balanced" title="Governance" url="/docs/governance/" %}}
Read the project history, decision context, and audit-oriented traceability to primary sources and repository artifacts.
**Go to Governance →**
{{% /blocks/feature %}}
{{% /blocks/section %}}
{{% blocks/section color="white" %}}
## What's in the Platform?
<div class="row">
<div class="col-lg-4 mb-4">
### 🚀 Developer Experience
* **Backstage Portal** - Self-service platform
* **GitOps Workflows** - Automated deployments
* **Golden Paths** - Best practices built-in
</div>
<div class="col-lg-4 mb-4">
### 🛠️ Infrastructure as Code
* **Crossplane** - Cloud resource provisioning
* **ArgoCD** - Declarative GitOps
* **Terraform** - Infrastructure automation
</div>
<div class="col-lg-4 mb-4">
### 📊 Observability
* **Prometheus & Grafana** - Metrics & dashboards
* **Loki** - Log aggregation
* **OpenTelemetry** - Distributed tracing
</div>
</div>
{{% /blocks/section %}}
{{% blocks/section color="light" %}}
## Get Started
Whether you're a **platform engineer**, **application developer**, or **auditor**, we have resources for you:
* 📖 Start at [Documentation](/docs/)
* 🧭 Read [Edge Developer Platform (EDP)](/docs/edp/)
* ☁️ Read [EdgeConnect Cloud](/docs/edgeconnect/)
* 🧾 Read [Governance](/docs/governance/)
{{% /blocks/section %}}

@@ -0,0 +1,40 @@
---
title: Architecture session
weight: 20
---
## Platform Generics
* https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms
* https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
* https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas
## Reference architecture + Portfolio
* https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures
* https://humanitec.com/reference-architectures
* https://www.youtube.com/watch?v=AimSwK8Mw-U
## Platform Portfolio
### Viktor Farcic
* https://technologyconversations.com/
* https://technologyconversations.com/2024/01/08/the-best-devops-tools-platforms-and-services-in-2024/
### Internal developer platform
* https://internaldeveloperplatform.org/core-components/
### Workflow / CI/CD
* https://cnoe.io/blog/optimizing-data-quality-in-dev-portals

@@ -1,23 +0,0 @@
---
title: Legacy Documentation
linkTitle: Docs (Old)
menu:
  main:
    weight: 50
weight: 50
cascade:
  - type: docs
---
# Legacy Documentation
This section contains the previous version of the documentation for reference purposes.
**Note**: This documentation is archived and may be outdated. Please refer to the main [Documentation](../docs/) section for current information.
## Available Sections
* [Architecture](architecture/) - System architecture and diagrams
* [Documentation](documentation/) - Meta documentation about the documentation system
* [Platform Overview](platform-overview/) - Overview document
* [v1 (Legacy)](v1/) - Original v1 documentation

@@ -1,9 +0,0 @@
---
title: "Architecture"
linkTitle: "Architecture"
weight: 3
description: >
  System architecture documentation and interactive diagrams
---
This section contains architecture documentation for the IPCEI-CIS Developer Framework, including interactive C4 architecture diagrams.

@@ -1,79 +0,0 @@
---
title: "High Level Architecture"
linkTitle: "High Level Architecture"
weight: 1
description: >
  Interactive high-level architecture overview of the Enterprise Development Platform
---
This document describes the high-level architecture of our Enterprise Development Platform (EDP) system.
## Interactive Architecture Diagram
{{< likec4-view view="otc-faas" project="architecture" title="Enterprise Development Platform - OTC FaaS Deployment Architecture" >}}
{{< alert title="Interactive Diagram" >}}
The diagram above is interactive when viewed in a compatible browser.
You can click on components to explore the architecture details.
**Note:** The interactive diagram requires the LikeC4 webcomponent to be generated.
See the [setup instructions]({{< ref "/docs-old/architecture/setup" >}}) for details.
{{< /alert >}}
## Architecture Overview
The Enterprise Development Platform consists of several key components working together to provide a comprehensive development and deployment environment.
### Key Components
1. **OTC Foundry** - Central management and orchestration layer
2. **Per-Tenant EDP** - Isolated development environments for each tenant
3. **FaaS Environment** - Function-as-a-Service deployment targets on Open Telekom Cloud
4. **Cloud Services** - Managed services including databases, storage, and monitoring
### Deployment Environments
- **Development Environment** (`*.t09.de`) - For platform team development and testing
- **Production Environment** (`*.buildth.ing`) - For production workloads and tenant services
## Component Details
The interactive diagram above shows the relationships between different components and how they interact within the system architecture. You can explore the diagram by clicking on different elements to see more details.
### Infrastructure Components
- **Kubernetes Clusters** - Container orchestration using OTC CCE (Cloud Container Engine)
- **ArgoCD** - GitOps continuous deployment and application lifecycle management
- **Forgejo** - Git repository management and CI/CD pipelines
- **Observability Stack** - Monitoring (Prometheus, Grafana), logging (Loki), and alerting
### Security and Management
- **Keycloak** - Identity and access management (IAM)
- **OpenBao** - Secrets management (Hashicorp Vault fork)
- **External Secrets Operator** - Kubernetes secrets integration
- **Crossplane** - Infrastructure as Code and cloud resource provisioning
### Developer Experience
- **Backstage** - Internal developer portal and service catalog
- **Forgejo Actions** - CI/CD pipeline execution
- **Development Workflows** - GitOps-based inner and outer loop workflows
## Setup and Maintenance
To update or modify the architecture diagrams:
1. Edit the `.c4` files in `resources/edp-likec4/`
2. Regenerate the webcomponent:
```bash
cd resources/edp-likec4
npx likec4 codegen webcomponent \
--webcomponent-prefix likec4 \
--outfile ../../static/js/likec4-webcomponent.js
```
3. Commit both the model changes and the regenerated JavaScript file
For more information, see the [LikeC4 Integration Guide]({{< ref "/docs-old/architecture/setup" >}}).

@@ -1,297 +0,0 @@
---
title: "LikeC4 Setup Guide"
linkTitle: "Setup"
weight: 10
description: >
  How to set up and use LikeC4 interactive architecture diagrams
---
This guide explains how to set up and use LikeC4 interactive architecture diagrams in this documentation.
## Overview
LikeC4 enables you to create interactive C4 architecture diagrams as code. The diagrams are defined in `.c4` files and compiled into a web component that can be embedded in any HTML page.
## Prerequisites
- Node.js (v18 or later)
- npm or yarn
## Initial Setup
### 1. Install Dependencies
Navigate to the LikeC4 directory and install dependencies:
```bash
cd resources/edp-likec4
npm install
```
### 2. Generate the Web Component
Create the web component that Hugo will load:
```bash
npx likec4 codegen webcomponent \
--webcomponent-prefix likec4 \
--outfile ../../static/js/likec4-webcomponent.js
```
This command:
- Reads all `.c4` files from `models/` and `views/`
- Generates a single JavaScript file with all architecture views
- Outputs to `static/js/likec4-webcomponent.js`
### 3. Verify Integration
The integration should already be configured in:
- `hugo.toml` - Contains `params.likec4.enable = true`
- `layouts/partials/hooks/head-end.html` - Loads CSS and loader script
- `static/css/likec4-styles.css` - Diagram styling
- `static/js/likec4-loader.js` - Dynamic module loader
## Directory Structure
```plaintext
resources/edp-likec4/
├── models/ # C4 model definitions
│ ├── components/ # Component models
│ ├── containers/ # Container models
│ ├── context/ # System context
│ └── code/ # Code-level workflows
├── views/ # View definitions
│ ├── deployment/ # Deployment views
│ ├── edp/ # EDP views
│ ├── high-level-concept/ # Conceptual views
│ └── dynamic/ # Process flows
├── package.json # Dependencies
└── INTEGRATION.md # Integration docs
```
## Using in Documentation
### Basic Usage
Add this to any Markdown file:
```html
<div class="likec4-container">
<div class="likec4-header">
Your Diagram Title
</div>
<likec4-view view-id="YOUR-VIEW-ID" browser="true"></likec4-view>
<div class="likec4-loading" id="likec4-loading">
Loading architecture diagram...
</div>
</div>
```
### Available View IDs
To find available view IDs, search the `.c4` files:
```bash
cd resources/edp-likec4
grep -r "view\s\+\w" views/ models/ --include="*.c4"
```
Common views:
- `otc-faas` - OTC FaaS deployment
- `edp` - EDP overview
- `landscape` - Developer landscape
- `edpbuilderworkflow` - Builder workflow
- `keycloak` - Keycloak component
### With Hugo Alert
Combine with Docsy alerts for better UX:
```markdown
<div class="likec4-container">
<div class="likec4-header">
System Architecture
</div>
<likec4-view view-id="otc-faas" browser="true"></likec4-view>
<div class="likec4-loading" id="likec4-loading">
Loading...
</div>
</div>
{{</* alert title="Note" */>}}
Click on components in the diagram to explore the architecture.
{{</* /alert */>}}
```
## Workflow for Changes
### 1. Modify Architecture Models
Edit the `.c4` files in `resources/edp-likec4/`:
```bash
# Edit a model
vi resources/edp-likec4/models/containers/argocd.c4
# Or edit a view
vi resources/edp-likec4/views/deployment/otc/otc-faas.c4
```
### 2. Preview Changes Locally
Use the LikeC4 CLI to preview:
```bash
cd resources/edp-likec4
# Start preview server
npx likec4 start
# Opens browser at http://localhost:5173
```
### 3. Regenerate Web Component
After making changes:
```bash
cd resources/edp-likec4
npx likec4 codegen webcomponent \
--webcomponent-prefix likec4 \
--outfile ../../static/js/likec4-webcomponent.js
```
### 4. Test in Hugo
Start the Hugo development server:
```bash
# From repository root
hugo server -D
# Open http://localhost:1313
```
### 5. Commit Changes
Commit both the model files and the regenerated web component:
```bash
git add resources/edp-likec4/
git add static/js/likec4-webcomponent.js
git commit -m "feat: update architecture diagrams"
```
## Advanced Configuration
### Custom Styling
Modify `static/css/likec4-styles.css` to customize appearance:
```css
.likec4-container {
height: 800px; /* Adjust height */
border-radius: 8px; /* Rounder corners */
}
```
### Multiple Diagrams Per Page
You can include multiple diagrams on a single page:
```html
<!-- First diagram -->
<div class="likec4-container">
<div class="likec4-header">Deployment View</div>
<likec4-view view-id="otc-faas" browser="true"></likec4-view>
<div class="likec4-loading">Loading...</div>
</div>
<!-- Second diagram -->
<div class="likec4-container">
<div class="likec4-header">Component View</div>
<likec4-view view-id="edp" browser="true"></likec4-view>
<div class="likec4-loading">Loading...</div>
</div>
```
### Disable for Specific Pages
Add to page front matter:
```yaml
---
title: "My Page"
params:
  disable_likec4: true
---
```
Then update `layouts/partials/hooks/head-end.html`:
```html
{{ if and .Site.Params.likec4.enable (not .Params.disable_likec4) }}
<!-- LikeC4 scripts -->
{{ end }}
```
## Troubleshooting
### Diagram Not Loading
1. **Check browser console** (F12 → Console)
2. **Verify webcomponent exists:**
```bash
ls -lh static/js/likec4-webcomponent.js
```
3. **Regenerate if missing:**
```bash
cd resources/edp-likec4
npm install
npx likec4 codegen webcomponent \
--webcomponent-prefix likec4 \
--outfile ../../static/js/likec4-webcomponent.js
```
### View Not Found
- Check view ID matches exactly (case-sensitive)
- Search for the view in `.c4` files:
```bash
grep -r "view otc-faas" resources/edp-likec4/
```
### Styling Issues
- Clear browser cache (Ctrl+Shift+R)
- Check `static/css/likec4-styles.css` is loaded in browser DevTools → Network
### Build Errors
If LikeC4 codegen fails:
```bash
cd resources/edp-likec4
rm -rf node_modules package-lock.json
npm install
```
## Resources
- [LikeC4 Documentation](https://likec4.dev/)
- [C4 Model](https://c4model.com/)
- [Web Components](https://developer.mozilla.org/en-US/docs/Web/Web_Components)
- [Hugo Documentation](https://gohugo.io/documentation/)
## Migration Notes
This LikeC4 integration was migrated from the edp-doc repository. This repository (`ipceicis-developerframework`) is now the primary source for architecture models.
The edp-doc repository can reference these models via git submodule if needed.
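A minimal sketch of how such a submodule reference could be added from edp-doc (the URL and target path are illustrative assumptions, not taken from this diff):
```bash
# Hypothetical: reference this repository's architecture models from edp-doc.
git submodule add https://forgejo.example.org/DevFW/ipceicis-developerframework resources/ipceicis-developerframework
git submodule update --init --recursive
```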

@@ -1,126 +0,0 @@
# CI/CD pipeline tools for composable pipeline
## Context and Problem Statement
In order to build a composable pipeline that provides a golden path and reusable components, we need to define the tools that will be used to execute the pipeline.
ArgoCD is considered set in stone as the tool to manage the deployment of applications. However, the tools to compose and execute the pipeline are still up for debate.
> Note: The pipeline will use many other tools to perform certain actions such as testing, building, and deploying. This ADR is focused on the tools that will be used to compose and execute the pipeline itself.
In general, there are two decisions to make:
* What tools should we use to execute the pipeline?
* What tools should we use to compose the pipeline?
The following use-cases should be considered for this decision:
* **User who wants to manage their own runners (???)**
* User who only wants to use our golden path
* User who wants to use our golden path and add custom actions
* User who wants to use their own templates and import some of our actions
* User who wants to import an existing GitHub repository with a pipeline
## Considered Options
* Argo Workflows + Events
* Argo Workflows + Events + Additional Composition tool
* Forgejo Actions
* Forgejo Actions + Additional Composition tool
* Dagger (as Engine)
* Shuttle (as Engine)
## Decision Outcome
TBD
## Pros and Cons of the Options
### Argo Workflows + Events
#### Pro
* integration with ArgoCD
* ability to trigger additional workflows based on events.
* level of maturity and community support.
#### Con
* Ability to self-host runners?
* the way pipeline composition works (based on Kubernetes CRDs)
* Templates must be available in the cluster where the pipelines are executed, so any imported templates must be applied into the cluster before the pipeline can be executed and cannot simply reference a repository
* This makes it difficult to import existing templates from other repositories when using self-hosted runners
* This also makes it difficult to use our golden path, or at least we will need to provide a way to import our golden path into the cluster
* This also makes a split in which every component has its own repo very difficult
* additional UI to manage the pipeline
* Additional complexity
### Argo Workflows + Events + Additional Composition tool
#### Pro
* Composability can be offloaded to another tool
#### Con
* All cons of the previous option (except composability)
* Additional complexity by adding another tool
### Forgejo Actions
#### Pro
* close compatibility with GitHub Actions, providing a familiar interface for developers and a vast catalog of actions to choose from
* ability to compose pipelines without relying on another tool
* Self-hosting of runners possible
* every component can have its own repository and use different tools (e.g. written in go, bash, python etc.)
#### Con
* level of maturity - will require additional investments to provide a production-grade system
### Forgejo Actions + Additional Tool
#### Pro
* may be possible to use GitHub actions alongside another tool
#### Con
* additional complexity by adding another tool
### Shuttle
#### Pro
* Possibility to clearly define interfaces for pipeline steps
* Relatively simple
#### Con
* basically backed by only one company
* **centralized templates**, so no mechanism for composing pipelines from multiple repositories
### Dagger
#### Pro
* Pipeline as code
* If it runs, it should run anywhere and produce the same, or at least reasonably stable, results
* Build environments are defined within containers / the Dagger config; Dagger is the only dependency one has to install on a machine
* DX is extremely nice, especially if you have to debug (image) builds; also type safety, thanks to the ability to code your build in a strongly typed language
* Additional tooling, like Trivy, can be added to a build pipeline with low effort thanks to containers and existing plugins/wrappers
* You can create complex test environments, similar to Testcontainers and Docker Compose
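To make "pipeline as code" concrete, a minimal sketch using the Dagger Go SDK; API details may differ between Dagger versions, and the base image and test command are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()
	// Connect to the (privileged) Dagger engine; logs go to stderr.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run tests in a container: the build environment is fully described in code.
	out, err := client.Container().
		From("golang:1.25").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```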
#### Con
* Relies heavily on containers, which might not be available in some environments (due to policy, etc.); this also affects reproducibility and verifiability
* As a dev you need to properly understand containers
* The Dagger engine has to run privileged, locally and/or in the cloud, which might be a blocker or at least a major pain
#### Suggestion Patrick
* Dagger is heavyweight and might not be as productive in a dev workflow as it seems (LSP setup, etc.)
* It might be too opinionated to force on teams, especially since it is not nearly mainstream enough; the community might be too small
* It feels like Dagger gets you 95% of the way, but the remaining 5% are a real struggle
* If we like it, we should check its popularity in the dev community before considering it further, as it has a direct impact on teams and their preferences

View file

@ -1,5 +0,0 @@
# ADRs
Architecture Decision Records (ADRs) capture the important architectural decisions made during the development of a project. They document the context, the decision, and its consequences, and they keep track of the architectural decisions made in a project so they can be communicated to the team.
The [Markdown Architectural Decision Records](https://adr.github.io/madr/) (MADR) format is a simple and easy-to-use format for writing ADRs in Markdown.

View file

@ -1,67 +0,0 @@
<!-- we need to disable MD025, because we use the different heading "ADR Template" in the homepage (see above) than it is foreseen in the template -->
<!-- markdownlint-disable-next-line MD025 -->
# {short title, representative of solved problem and found solution}
## Context and Problem Statement
{Describe the context and problem statement, e.g., in free form using two to three sentences or in the form of an illustrative story. You may want to articulate the problem in form of a question and add links to collaboration boards or issue management systems.}
<!-- This is an optional element. Feel free to remove. -->
## Decision Drivers
* {decision driver 1, e.g., a force, facing concern, …}
* {decision driver 2, e.g., a force, facing concern, …}
* … <!-- numbers of drivers can vary -->
## Considered Options
* {title of option 1}
* {title of option 2}
* {title of option 3}
* … <!-- numbers of options can vary -->
## Decision Outcome
Chosen option: "{title of option 1}", because {justification. e.g., only option, which meets k.o. criterion decision driver | which resolves force {force} | … | comes out best (see below)}.
<!-- This is an optional element. Feel free to remove. -->
### Consequences
* Good, because {positive consequence, e.g., improvement of one or more desired qualities, …}
* Bad, because {negative consequence, e.g., compromising one or more desired qualities, …}
* … <!-- numbers of consequences can vary -->
<!-- This is an optional element. Feel free to remove. -->
### Confirmation
{Describe how the implementation of/compliance with the ADR can/will be confirmed. Are the design that was decided for and its implementation in line with the decision made? E.g., a design/code review or a test with a library such as ArchUnit can help validate this. Note that although we classify this element as optional, it is included in many ADRs.}
<!-- This is an optional element. Feel free to remove. -->
## Pros and Cons of the Options
### {title of option 1}
<!-- This is an optional element. Feel free to remove. -->
{example | description | pointer to more information | …}
* Good, because {argument a}
* Good, because {argument b}
<!-- use "neutral" if the given argument weights neither for good nor bad -->
* Neutral, because {argument c}
* Bad, because {argument d}
* … <!-- numbers of pros and cons can vary -->
### {title of other option}
{example | description | pointer to more information | …}
* Good, because {argument a}
* Good, because {argument b}
* Neutral, because {argument c}
* Bad, because {argument d}
* …
<!-- This is an optional element. Feel free to remove. -->
## More Information
{You might want to provide additional evidence/confidence for the decision outcome here and/or document the team agreement on the decision and/or define when/how this decision should be realized and if/when it should be re-visited. Links to other decisions and resources might appear here as well.}

View file

@ -1,43 +0,0 @@
---
title: "Documentation About Documentation"
linkTitle: "Documentation"
weight: 10
description: >
Learn how to create, maintain, and publish documentation for the developer platform.
---
Welcome to the meta-documentation! This section explains how our documentation platform works and guides you through the technicalWriter role.
## What is a Technical Writer?
A **Technical Writer** is responsible for creating, maintaining, and publishing the developer platform documentation. This includes:
- Writing and updating content in Markdown
- Creating architecture diagrams with LikeC4
- Testing locally before committing
- Following the CI/CD pipeline to production
## Documentation Platform Architecture
Our documentation is built on a modern stack:
- **Hugo** with the **Docsy** theme for static site generation
- **LikeC4** for architecture visualization
- **Taskfile** for local development automation
- **GitHub Actions** for continuous testing
- **Edge deployment** for hosting
### System Overview
{{< likec4-view view="overview" project="documentation-platform" >}}
This high-level view shows all major components of the documentation platform.
## Getting Started
Continue to the next sections to learn about:
1. [Local Development](local-development/) - How to work on documentation locally
2. [Testing](testing/) - Quality assurance processes
3. [CI/CD Pipeline](cicd/) - Automated testing and deployment
4. [Publishing](publishing/) - How documentation reaches production

View file

@ -1,264 +0,0 @@
---
title: "CI/CD Pipeline"
linkTitle: "CI/CD"
weight: 40
description: >
Automated testing and container build process.
---
## Overview
Our documentation uses a continuous integration and deployment pipeline to ensure quality and automate deployment.
{{< likec4-view view="cicdPipeline" project="documentation-platform" >}}
## GitHub Actions Workflow
The CI/CD pipeline is defined in `.github/workflows/test.yml` and runs on:
- **Pushes to `main` branch**
- **Pull requests to `main` branch**
### Workflow Steps
#### 1. Checkout Code
```yaml
- uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0
```
- Clones repository with full history
- Includes Git submodules (Hugo modules)
#### 2. Setup Hugo
```yaml
- name: Setup Hugo
uses: peaceiris/actions-hugo@v3
with:
hugo-version: 'latest'
extended: true
```
- Installs Hugo Extended
- Uses latest stable version
#### 3. Setup Node.js
```yaml
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: '24'
cache: 'npm'
```
- Installs Node.js v24
- Caches npm dependencies for faster builds
#### 4. Install Dependencies
```bash
npm ci
go install github.com/wjdp/htmltest@latest
```
- Installs npm packages (markdownlint, htmlvalidate)
- Installs htmltest for link checking
#### 5. Run Tests
```bash
npm run test:build
npm run test:markdown
npm run test:html
```
- Validates Hugo build
- Lints Markdown files
- Validates HTML output
#### 6. Link Checking
```yaml
- name: Run link checker
run: htmltest
continue-on-error: true
```
- Checks all links
- Continues even if links fail (soft requirement)
#### 7. Upload Results
```yaml
- name: Upload htmltest results
uses: actions/upload-artifact@v4
if: always()
with:
name: htmltest-report
path: tmp/.htmltest/
```
- Uploads link check report
- Available for download from GitHub Actions
## Container Build Process
After tests pass, a container image is built:
```bash
task build:oci-image
```
### Build Process
1. **Reads version information** from `.env.versions`:
- `NODE_VERSION`
- `GO_VERSION`
- `HUGO_VERSION`
2. **Builds Docker image** using `Dockerfile`:
- Multi-stage build
- Hugo generates static site
- Nginx serves the content
3. **Tags image** with:
- `latest`
- Git commit SHA (short)
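As an illustration, the tagging step is roughly equivalent to the following sketch; the exact image name and the Taskfile internals are assumptions:

```bash
# Hypothetical equivalent of the tag step (actual Taskfile may differ)
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t edpdoc:latest -t "edpdoc:${GIT_SHA}" .
```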
### Dockerfile Structure
```dockerfile
# Build stage
ARG NODE_VERSION
FROM node:${NODE_VERSION} AS builder
# Install Hugo and build dependencies
# Run: hugo --gc --minify
# Output: public/ directory

# Runtime stage
FROM nginx:alpine
# Copy public/ into the Nginx web root (the builder path is an assumption)
COPY --from=builder /src/public/ /usr/share/nginx/html/
```
### Testing the Container
```bash
task test:oci-image
```
This:
1. Builds the image
2. Starts container on port 8080
3. Tests HTTP endpoint
4. Cleans up container
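Conceptually, this is similar to the following hedged sketch (container name and image tag are assumptions):

```bash
docker run -d --name edpdoc-test -p 8080:80 edpdoc:latest
curl -fsS http://localhost:8080/ > /dev/null && echo "HTTP check passed"
docker rm -f edpdoc-test
```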
## Package.json Scripts
The `package.json` defines test scripts:
```json
{
"scripts": {
"test:build": "hugo --gc --minify --logLevel info",
"test:markdown": "markdownlint 'content/**/*.md'",
"test:html": "htmlvalidate 'public/**/*.html'"
}
}
```
## Running CI Locally
Simulate the CI environment locally:
```bash
task ci
```
This runs the same tests as GitHub Actions.
## Monitoring CI Results
### Successful Build
✅ All tests pass → Ready to deploy
### Failed Build
❌ Tests fail:
1. Click on the failed workflow in GitHub Actions
2. Expand the failed step
3. Read the error message
4. Fix locally: `task test:<specific-test>`
5. Commit and push fix
### Viewing Artifacts
1. Go to GitHub Actions
2. Click on workflow run
3. Scroll to "Artifacts" section
4. Download `htmltest-report`
## Best Practices
1. **Don't push to main directly** - Use feature branches and PRs
2. **Wait for CI before merging** - Green checkmark required
3. **Fix broken builds immediately** - Don't let main stay red
4. **Review CI logs** - Understand why tests fail
5. **Update dependencies** - Keep versions current in `.env.versions`
## Continuous Deployment
After successful CI:
1. Container image is built
2. Image is pushed to registry
3. Deployment process begins (see [Publishing](../publishing/))
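A hedged sketch of the push step; the registry host and repository path are assumptions:

```bash
GIT_SHA=$(git rev-parse --short HEAD)
docker tag edpdoc:latest "registry.example.com/edp/edpdoc:${GIT_SHA}"
docker push "registry.example.com/edp/edpdoc:${GIT_SHA}"
```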
## Troubleshooting
### Tests pass locally but fail in CI
**Possible causes:**
- Different Hugo version
- Different Node.js version
- Missing dependencies
- Environment-specific issues
**Solution:** Check versions in `.env.versions` and ensure local matches CI
### Build timeouts
**Possible causes:**
- Link checker taking too long
- Large number of external links
**Solution:**
- Use `continue-on-error: true` for link checks
- Configure `.htmltest.yml` to skip slow checks
### Cache issues
**Solution:** Clear GitHub Actions cache:
```yaml
- uses: actions/cache@v4
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```
Update the cache key to force refresh.
## Next Steps
Learn about [deployment to Edge environment](../publishing/).

View file

@ -1,234 +0,0 @@
---
title: "Local Development"
linkTitle: "Local Development"
weight: 20
description: >
Set up your local environment and learn the technicalWriter workflow.
---
## Prerequisites
Before you start, ensure you have:
- **Devbox** or the following tools installed:
- Hugo Extended (latest version)
- Node.js (v24+)
- Go (for htmltest)
- Git
## Installation
1. Clone the repository:
```bash
git clone <repository-url>
cd ipceicis-developerframework
```
2. Install dependencies:
```bash
task deps:install
```
3. **If using Devbox**, enter the Devbox shell:
```bash
devbox shell
```
This ensures all tools (Hugo, Node.js, Go) are available in the correct versions.
## Local Development Workflow
{{< likec4-view view="localDevelopment" project="documentation-platform" >}}
### Starting the Development Server
The easiest way to work locally is to start the Hugo development server:
```bash
task serve
```
This will:
- Generate build information (git commit, version)
- Start Hugo server on `http://localhost:1313`
- Enable hot reload - changes appear instantly in the browser
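The build-information step is conceptually similar to this hedged sketch; the exact fields and Taskfile logic are assumptions, while the output file matches the `data/build_info.json` referenced in the Quick Reference:

```bash
mkdir -p data
cat > data/build_info.json <<EOF
{
  "commit": "$(git rev-parse --short HEAD)",
  "version": "$(git describe --tags --always)"
}
EOF
```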
### Content Structure
```text
content/
└── en/ # English content
├── _index.md # Homepage
├── blog/ # Blog posts
└── docs/ # Documentation
├── architecture/ # Architecture docs
├── decisions/ # ADRs
└── v1/ # Version-specific docs
```
### Creating Content
1. **Add a new documentation page:**
```bash
# Create a new markdown file
vim content/en/docs/your-topic/_index.md
```
2. **Add frontmatter:**
```yaml
---
title: "Your Topic"
linkTitle: "Your Topic"
weight: 10
description: >
Brief description of your topic.
---
```
3. **Write your content** in Markdown
4. **Preview changes** - they appear immediately if `task serve` is running
### Creating Architecture Diagrams
Architecture diagrams are created with LikeC4:
1. **Navigate to the appropriate LikeC4 project:**
- `resources/edp-likec4/` - Platform architecture
- `resources/doc-likec4/` - Documentation platform architecture
2. **Edit or create `.c4` files** with your model
Example: Create a simple view in `resources/edp-likec4/views/my-view.c4`:
```likec4
specification {
element myperson
element mysystem
}
model {
customer = myperson 'Customer' {
description 'End user of the platform'
}
mySystem = mysystem 'My System' {
description 'Example system component'
}
customer -> mySystem 'uses'
}
views {
view myCustomView {
title "My Custom Architecture View"
include customer
include mySystem
autoLayout TopBottom
}
}
```
3. **Regenerate webcomponents:**
```bash
task likec4:generate
```
4. **Embed diagrams in Markdown:**
```markdown
{{</* likec4-view view="myCustomView" project="architecture" title="My Custom Architecture View" */>}}
```
**Finding available view IDs:**
- Open the `.c4` files in your project directory
- Look for `view <viewId> {` declarations
- The `<viewId>` is what you use in the `view` parameter
- Or use: `grep -r "^view " resources/edp-likec4/ --include="*.c4"`
## Available Tasks
View all available tasks:
```bash
task --list
```
### Common Development Tasks
| Task | Description |
|------|-------------|
| `task serve` | Start development server with hot reload |
| `task build` | Build production-ready site |
| `task build:dev` | Build development version |
| `task clean` | Remove build artifacts |
| `task test` | Run all tests |
| `task test:quick` | Run tests without link checking |
## Quick Testing
Before committing, run quick tests:
```bash
task test:quick
```
This validates:
- Hugo build succeeds
- Markdown syntax is correct
For comprehensive testing, including link checking:
```bash
task test
```
## Tips for Technical Writers
1. **Write in present tense** - "The system processes..." not "The system will process..."
2. **Use code blocks** with syntax highlighting
3. **Include diagrams** for complex concepts
4. **Test locally** before pushing
5. **Keep it concise** - readers appreciate brevity
6. **Update regularly** - stale docs are worse than no docs
## Troubleshooting
### Port 1313 already in use
```bash
# Find and kill the process
lsof -ti:1313 | xargs kill -9
```
### Build errors
```bash
# Clean and rebuild
task clean
task build:dev
```
### Missing dependencies
```bash
# Reinstall all dependencies
task deps:install
```
## Next Steps
Now that you can develop locally, learn about:
- [Testing processes](../testing/)
- [CI/CD pipeline](../cicd/)

View file

@ -1,339 +0,0 @@
---
title: "Publishing to Edge"
linkTitle: "Publishing"
weight: 50
description: >
How documentation is deployed to the edge environment.
---
## Deployment Overview
After successful CI/CD, the documentation is deployed to an edge computing environment.
{{< likec4-view view="deploymentFlow" project="documentation-platform" >}}
## Deployment Architecture
### Edge Connect Platform
Our documentation is deployed using **Edge Connect**, which orchestrates deployments to edge cloudlets.
Configuration: `edgeconnectdeployment.yaml`
```yaml
kind: edgeconnect-deployment
metadata:
name: "edpdoc"
appVersion: "1.0.0"
organization: "edp2"
spec:
k8sApp:
manifestFile: "./k8s-deployment.yaml"
infraTemplate:
- region: "EU"
cloudletOrg: "TelekomOP"
cloudletName: "Munich"
flavorName: "EU.small"
```
**Key settings:**
- **Deployment name:** `edpdoc`
- **Region:** EU (Munich)
- **Cloudlet:** TelekomOP Munich
- **Flavor:** EU.small (resource allocation)
### Kubernetes Deployment
The application runs on Kubernetes: `k8s-deployment.yaml`
#### Service Definition
```yaml
apiVersion: v1
kind: Service
metadata:
name: edpdoc
labels:
run: edpdoc
spec:
type: LoadBalancer
ports:
- name: tcp80
protocol: TCP
port: 80
targetPort: 80
selector:
run: edpdoc
```
- **Type:** LoadBalancer (external access)
- **Port:** 80 (HTTP)
- **Selector:** Routes traffic to pods with label `run: edpdoc`
#### Deployment Configuration
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: edpdoc
spec:
replicas: 1
selector:
matchLabels:
run: edpdoc
template:
metadata:
labels:
run: edpdoc
mexDeployGen: kubernetes-basic
spec:
containers:
- name: edpdoc
image: ###IMAGETAG###
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
```
- **Replicas:** 1 (single instance)
- **Image:** Injected by deployment pipeline (`###IMAGETAG###` placeholder)
- **Pull policy:** Always (ensures latest version)
### Network Configuration
Outbound connections are configured in `edgeconnectdeployment.yaml`:
```yaml
network:
outboundConnections:
- protocol: "tcp"
portRangeMin: 80
portRangeMax: 80
remoteCIDR: "0.0.0.0/0"
- protocol: "tcp"
portRangeMin: 443
portRangeMax: 443
remoteCIDR: "0.0.0.0/0"
```
- **Port 80:** HTTP outbound
- **Port 443:** HTTPS outbound
- **CIDR:** `0.0.0.0/0` (all destinations)
## Deployment Process
### 1. Container Image Ready
After CI passes:
- Docker image built with `task build:oci-image`
- Tagged with git commit SHA
- Pushed to container registry
### 2. Edge Connect Orchestration
Edge Connect:
1. Pulls container image
2. Reads `edgeconnectdeployment.yaml`
3. Provisions resources on Munich cloudlet
4. Applies Kubernetes manifests
### 3. Kubernetes Deployment
Kubernetes:
1. Creates deployment with 1 replica
2. Pulls container image (`imagePullPolicy: Always`)
3. Starts pod running Nginx + static Hugo site
4. Creates LoadBalancer service
5. Assigns external IP
### 4. Service Available
Documentation is now accessible:
- **Protocol:** HTTP
- **Port:** 80
- **IP:** Assigned by LoadBalancer
## Complete Workflow
{{< likec4-view view="fullWorkflow" project="documentation-platform" >}}
### End-to-End Process
1. **Technical Writer writes content** (Markdown, LikeC4 models)
2. **Local testing** with `task serve` and `task test`
3. **Commit and push** to Git repository
4. **GitHub Actions triggered** on push to main
5. **CI tests run** (build, markdown, HTML, links)
6. **Container image built** if tests pass
7. **Image pushed** to registry
8. **Edge deployment triggered**
9. **Kubernetes applies** manifests
10. **Service available** on edge cloudlet
## Monitoring Deployment
### Check Deployment Status
```bash
kubectl get deployments -n <namespace>
kubectl get pods -n <namespace>
kubectl get services -n <namespace>
```
### View Logs
```bash
kubectl logs deployment/edpdoc -n <namespace>
```
### Access Documentation
Find the LoadBalancer external IP:
```bash
kubectl get service edpdoc -n <namespace>
```
Access via: `http://<EXTERNAL-IP>`
## Rollback
If issues occur after deployment:
### Option 1: Revert Commit
```bash
git revert <bad-commit>
git push origin main
```
CI will rebuild and redeploy.
### Option 2: Manual Rollback
```bash
kubectl rollout undo deployment/edpdoc -n <namespace>
```
Returns to previous deployment version.
### Option 3: Deploy Specific Version
Update image tag in deployment:
```bash
kubectl set image deployment/edpdoc edpdoc=<registry>/<image>:<tag> -n <namespace>
```
## Scaling
Currently: **1 replica**
To scale for higher traffic:
```yaml
spec:
replicas: 3
```
Then apply:
```bash
kubectl apply -f k8s-deployment.yaml
```
Or scale dynamically:
```bash
kubectl scale deployment/edpdoc --replicas=3 -n <namespace>
```
## Security Considerations
1. **Image scanning** - Scan container images for vulnerabilities
2. **Resource limits** - Set CPU/memory limits in deployment
3. **Network policies** - Restrict pod-to-pod communication
4. **HTTPS** - Consider adding TLS termination (Ingress)
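For the resource-limits point, a hedged sketch of what could be added to the container spec; the values are assumptions and should be tuned to real traffic:

```yaml
resources:
  requests:
    cpu: 100m      # assumed baseline for a static site
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```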
## Performance Optimization
1. **CDN** - Add CDN in front of LoadBalancer
2. **Caching** - Configure Nginx caching headers
3. **Compression** - Enable gzip in Nginx
4. **Image optimization** - Compress images in documentation
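For the compression point, a hedged sketch of Nginx directives; the values are assumptions, and `text/html` is compressed by default so it is omitted from `gzip_types`:

```nginx
gzip on;
gzip_min_length 1024;
gzip_types text/css application/javascript application/json image/svg+xml;
```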
## Troubleshooting
### Pod not starting
```bash
kubectl describe pod <pod-name> -n <namespace>
```
Check:
- Image pull errors
- Resource constraints
- Configuration errors
### Service unreachable
```bash
kubectl describe service edpdoc -n <namespace>
```
Check:
- LoadBalancer IP assigned
- Port configuration
- Network policies
### Old content served
Check:
- `imagePullPolicy: Always` in deployment
- Image tag is updated
- Pod has restarted
Force pod restart:
```bash
kubectl rollout restart deployment/edpdoc -n <namespace>
```
## Best Practices
1. **Test before deploying** - Always run `task test` locally
2. **Use feature branches** - Don't deploy directly from local
3. **Monitor after deployment** - Check logs and access
4. **Document changes** - Update RELEASE.md
5. **Version control** - Tag releases in Git
## Future Enhancements
Potential improvements:
- **Blue-green deployment** - Zero-downtime updates
- **Canary releases** - Gradual rollout to subset of users
- **Auto-scaling** - HorizontalPodAutoscaler based on traffic
- **Multi-region** - Deploy to multiple cloudlets
- **HTTPS** - TLS certificates and Ingress controller
## Summary
The deployment process is automated and reliable:
- **CI ensures quality** - Tests prevent broken deployments
- **Edge infrastructure** - Low-latency access from EU
- **Kubernetes orchestration** - Reliable, scalable platform
- **Simple rollback** - Easy to recover from issues
As a technicalWriter, focus on content quality. The platform handles deployment automatically! 🚀

View file

@ -1,282 +0,0 @@
---
title: "Quick Reference"
linkTitle: "Quick Reference"
weight: 60
description: >
Cheat sheet for common technicalWriter tasks.
---
## Common Commands
### Local Development
```bash
# Start development server (with hot reload)
task serve
# Build for production
task build
# Build for development (faster, no minification)
task build:dev
# Clean build artifacts
task clean
```
### Testing
```bash
# Quick tests (build + markdown)
task test:quick
# Full test suite
task test
# Individual tests
task test:build # Hugo build validation
task test:markdown # Markdown linting
task test:html # HTML validation
task test:links # Link checking
```
### Dependencies
```bash
# Install dependencies
task deps:install
# Update dependencies
task deps:update
# Ensure npm dependencies (auto-installs if missing)
task deps:ensure-npm
```
### Container Operations
```bash
# Build OCI/Docker image
task build:oci-image
# Build and test container
task test:oci-image
```
## File Locations
### Content
| Path | Description |
|------|-------------|
| `content/en/docs/` | Main documentation |
| `content/en/blog/` | Blog posts |
| `content/en/_index.md` | Homepage |
### Architecture Models
| Path | Description |
|------|-------------|
| `resources/edp-likec4/` | Platform architecture models |
| `resources/doc-likec4/` | Documentation platform models |
### Configuration
| File | Purpose |
|------|---------|
| `hugo.toml` | Hugo configuration |
| `config.yaml` | Docsy theme config |
| `Taskfile.yml` | Task definitions |
| `package.json` | npm dependencies and scripts |
| `.markdownlint.json` | Markdown linting rules |
| `.htmlvalidate.json` | HTML validation rules |
| `.htmltest.yml` | Link checker config |
### Build Output
| Path | Description |
|------|-------------|
| `public/` | Generated static site |
| `resources/_gen/` | Generated resources (Hugo) |
| `data/build_info.json` | Build metadata (git commit, version) |
## Markdown Frontmatter
### Standard Page
```yaml
---
title: "Page Title"
linkTitle: "Short Title"
weight: 10
description: >
Brief description for SEO and navigation.
---
```
### Blog Post
```yaml
---
title: "Post Title"
date: 2025-01-15
author: "Your Name"
description: >
Post summary.
---
```
## Embedding Architecture Diagrams
### Basic Embed
```markdown
{{< likec4-view view="view-name" project="project-name" >}}
```
### Parameters
- `view` (required) - The view ID from your LikeC4 model
- `project` (optional, default: "architecture") - The LikeC4 project name
- `title` (optional, default: "Architecture View: {view}") - Custom header text above the diagram
### Examples
```markdown
{{< likec4-view view="overview" project="documentation-platform" >}}
{{< likec4-view view="localDevelopment" project="documentation-platform" >}}
{{< likec4-view view="cicdPipeline" project="documentation-platform" >}}
{{< likec4-view view="otc-faas" project="architecture" title="OTC FaaS Deployment" >}}
```
## LikeC4 Commands
### Regenerate Webcomponents
After modifying `.c4` files:
```bash
task likec4:generate
```
This regenerates both:
- `static/js/likec4-webcomponent.js` (EDP architecture)
- `static/js/likec4-doc-webcomponent.js` (Documentation platform)
### Start Development Server
```bash
cd resources/doc-likec4 # or resources/edp-likec4
npm install
npm start
```
Opens LikeC4 IDE at `http://localhost:5173`
### Export Diagrams
```bash
cd resources/doc-likec4
npx likec4 export png -o ./images .
```
## Git Workflow
### Feature Branch
```bash
# Create feature branch
git checkout -b feature/your-feature
# Make changes and test
task serve
task test:quick
# Commit
git add .
git commit -m "Description of changes"
# Push
git push origin feature/your-feature
# Create pull request on GitHub
```
### Update from Main
```bash
git checkout main
git pull origin main
git checkout feature/your-feature
git rebase main
```
## Troubleshooting
### Port 1313 in use
```bash
lsof -ti:1313 | xargs kill -9
```
### Build errors
```bash
task clean
task build:dev
```
### Missing dependencies
```bash
task deps:install
```
### Hugo module issues
```bash
hugo mod clean
hugo mod get -u
hugo mod tidy
```
### LikeC4 language server
In VS Code: `Ctrl+Shift+P` → "LikeC4: restart language server"
## URLs
### Local Development
- **Documentation:** <http://localhost:1313>
- **LikeC4 IDE:** <http://localhost:5173> (when running `npm start` in likec4 folder)
### Production
Check `edgeconnectdeployment.yaml` for deployment URL or run:
```bash
kubectl get service edpdoc -n <namespace>
```
## Quick Checks Before Committing
1. ✅ `task test:quick` passes
2. ✅ Preview looks correct in browser
3. ✅ No broken links (visual check)
4. ✅ Architecture diagrams render
5. ✅ Frontmatter is correct
## Getting Help
- **Hugo docs:** <https://gohugo.io/documentation/>
- **Docsy theme:** <https://www.docsy.dev/docs/>
- **LikeC4:** <https://likec4.dev/>
- **Task:** <https://taskfile.dev/>
## View Documentation Architecture
To understand how this documentation platform works:
→ Start here: [Documentation About Documentation](../)

View file

@ -1,229 +0,0 @@
---
title: "Testing"
linkTitle: "Testing"
weight: 30
description: >
Quality assurance processes for documentation.
---
## Testing Philosophy
Quality documentation requires testing. Our testing process validates:
- **Build integrity** - Hugo can generate the site
- **Content quality** - Markdown follows best practices
- **HTML validity** - Generated HTML is well-formed
- **Link integrity** - No broken internal or external links
## Testing Capabilities
{{< likec4-view view="testingCapabilities" project="documentation-platform" >}}
## Local Testing
Before committing changes, run tests locally:
### Quick Tests
For rapid feedback during development:
```bash
task test:quick
```
This runs:
- `task test:build` - Hugo build validation
- `task test:markdown` - Markdown linting
### Full Test Suite
Before creating a pull request:
```bash
task test
```
This runs all tests including:
- `task test:build` - Build validation
- `task test:markdown` - Markdown linting
- `task test:html` - HTML validation
- `task test:links` - Link checking
## Individual Tests
You can run individual tests:
### Build Test
Validates that Hugo can build the site:
```bash
task test:build
```
This runs: `hugo --gc --minify --logLevel info`
**What it checks:**
- Hugo configuration is valid
- Content files have correct frontmatter
- Templates render without errors
- No circular dependencies in content structure
### Markdown Lint
Checks Markdown syntax and style:
```bash
task test:markdown
```
This uses `markdownlint` with custom rules in `.markdownlint.json`.
**What it checks:**
- Consistent heading hierarchy
- Proper list formatting
- Code blocks have language tags
- No trailing whitespace
- Consistent line length (where applicable)
**Common issues:**
- Missing blank lines around code blocks
- Inconsistent list markers
- Heading levels skipped
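For orientation, a hedged example of what a `.markdownlint.json` might look like; the repository's actual rules may differ:

```json
{
  "default": true,
  "MD013": false,
  "MD033": { "allowed_elements": ["br"] }
}
```

Here `MD013` (line length) is disabled and `MD033` (inline HTML) allows only `<br>`; both are common choices for Hugo sites.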
### HTML Validation
Validates generated HTML:
```bash
task test:html
```
This uses `htmlvalidate` with rules in `.htmlvalidate.json`.
**What it checks:**
- Well-formed HTML5
- Proper nesting of elements
- Valid attributes
- Accessible markup
### Link Checking
Verifies all links are valid:
```bash
task test:links
```
This uses `htmltest` configured in `.htmltest.yml`.
**What it checks:**
- Internal links point to existing pages
- External links are reachable
- Anchor links target existing elements
- No redirects (301/302)
**Note:** This test can be slow for large sites with many external links.
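A hedged sketch of a `.htmltest.yml` that keeps external checks from running too long; the actual configuration in this repository may differ:

```yaml
DirectoryPath: public
CheckExternal: true
ExternalTimeout: 15
IgnoreDirectoryMissingTrailingSlash: true
```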
## CI Testing
All tests run automatically on:
- **Push to `main`** - Full test suite
- **Pull requests** - Full test suite
View the GitHub Actions workflow: `.github/workflows/test.yml`
### CI Test Results
If tests fail in CI:
1. Check the GitHub Actions logs
2. Look for specific test failures
3. Run the same test locally: `task test:<name>`
4. Fix the issue
5. Commit and push
### Artifacts
CI uploads test artifacts:
- `htmltest-report/` - Link checking results
Download these from the GitHub Actions run to investigate failures.
## Test Configuration Files
| File | Purpose |
|------|---------|
| `.markdownlint.json` | Markdown linting rules |
| `.htmlvalidate.json` | HTML validation rules |
| `.htmltest.yml` | Link checker configuration |
## Best Practices
1. **Test early, test often** - Run `task test:quick` frequently
2. **Fix issues immediately** - Don't accumulate technical debt
3. **Understand failures** - Read error messages carefully
4. **Update tests** - If rules change, update config files
5. **Document exceptions** - If you need to ignore a rule, document why
## Common Issues and Solutions
### Markdown: MD031 - Blank lines around fences
**Problem:** Missing blank line before/after code block
**Solution:** Add blank lines:
````markdown
Some text

```bash
command here
```

More text
````
### Markdown: MD032 - Blank lines around lists
**Problem:** Missing blank line before/after list
**Solution:** Add blank lines:
```markdown
Text before

- List item 1
- List item 2

Text after
```
### HTML: Invalid nesting
**Problem:** Elements improperly nested
**Solution:** Check template files and shortcodes
### Link Check: 404 Not Found
**Problem:** Link points to non-existent page
**Solution:**
- Fix the link
- Create the missing page
- Remove the link if no longer relevant
## Next Steps
Learn about the automated [CI/CD pipeline](../cicd/).

View file

@ -1,75 +0,0 @@
---
title: "eDF Documentation Overview"
description: "Comprehensive guide for users and auditors to understand and use the eDF."
---
# Meta
## Guidelines
1. For users/developers/engineers we describe our output/outcome as a product
    * It is usable
    * There are links/lists to repos
2. We have and describe a 'product structure tree'
3. For auditors/governance we have a list / cross-reference to Jira tickets
    * R&D?
    * Mapping to the project phases; how does a team create a platform?
    * Keywords: mob programming
    * Mapping to IPCEI-CIS deliverables ???, or mapping to epics?
    * Project history, project dynamics, sub-project of eDF, sub-project dependencies, e.g. 'Platform'
    * Friendly user phase
    * Forgejo community, OSS, PR handling
    * External stakeholders, user experience, think-ahead integration
    * Technologies, technology focus areas, cutting-edge research
    * Design/specification and evaluation of solution drafts (e.g. VictoriaMetrics, GARM, Terraform, ArgoCD, ...)
    * CI/CD, golden paths (note: covered in the rough concept; deployment of apps by developers is missing)
# Introduction
- Purpose of the eDF
- Target audience (developers, engineers, auditors)
- High-level product structure overview
- High-level architecture overview
# eDF Components Overview
- List of all major components
- Vertical and horizontal layers explained
- Component maturity/status (fully integrated, partial, experimental)
# Getting Started
- Quickstart guide for developers
- Onboarding steps for eDF engineers
- Prerequisites and environment setup
# Component Details
For each component:
- Description and purpose
- Repository link
- README summary
- Architecture diagrams (link to Miro/Lucid)
- Usage instructions
- Integration points
# Development Experience
- How to contribute
- Local development workflow
- CI/CD pipelines
- Testing and validation
# Operational Experience
- Deployment guides
- Monitoring and observability
- Troubleshooting
# Audit & Compliance
- Overview of implemented controls
- Ticket references (Jira, changelogs)
- Documentation of decisions and reviews
- Evidence of value and coverage
# FAQ & Support
- Common issues and solutions
- Contact points for help
# Appendix
- Glossary
- References to external resources

View file

@ -1,13 +0,0 @@
---
title: "Documentation (v1 - Legacy)"
linkTitle: "v1 (Legacy)"
weight: 100
description: >
Legacy documentation - archived version of the original content.
---
{{% alert title="Note" color="warning" %}}
This is the legacy documentation (v1). For the latest version, please visit the [current documentation](/docs/).
{{% /alert %}}
This section contains the original documentation that is being migrated to a new structure.

Binary file not shown.


View file

@ -1,7 +0,0 @@
---
title: Humanitec
weight: 4
---
tbd

View file

@ -1,269 +0,0 @@
# Ideas
* Kanban
* Product review sessions
# ChatGPT
## Missing Backlog
> Which agile process models or best practices exist in software development when the product backlog is not there yet? There is a product vision and already a PoC, but the product is still not plannable in user stories, neither technically nor in terms of user value, so Scrum is not feasible. The team is about 8 people, and we are considering simply producing a milestone over the next 8 weeks through a daily 'PDCA': a morning planning, then exploration in subgroups, then merging/integrating again in plenary as C (Check), and a concluding 'A' for securing results, versioning, and documentation. What do you think of this approach, and which risks and improvement suggestions do you have? Or is there a standard approach for this?
The scenario you describe (vision and PoC in place, but no fully worked-out product backlog) is typical of an early **exploration or inception phase** in product development. In this phase, classic **Scrum** is indeed often too rigid, because it presupposes a certain clarity about features and stories. The approach you are considering sounds like a very pragmatic, iterative method that at its core fits well with **Lean Startup principles** or **Design Thinking + agile engineering**.
## Evaluation of Your PDCA-Based Approach
**Positive:**
* The **daily PDCA cycle** (Plan-Do-Check-Act) ensures fast feedback loops.
* **Morning planning** and a **check meeting in plenary** promote transparency and self-organization.
* **Exploration in subgroups** allows parallel experiments.
* The **concluding 'A'** for securing results is essential; otherwise it remains "busy work".
**Risks:**
1. **Missing long-term structure:** Without a rough overarching direction (e.g. milestone plan, target picture), the work can easily end in aimless exploration.
2. **Inconsistent learnings:** Subgroups may work redundantly or incompatibly if there is no shared understanding.
3. **Documentation as an afterthought:** If documentation only happens at the end, knowledge loss looms; better to maintain "living docs" in real time.
4. **Stakeholder decoupling:** Without explicit reviews or demo rituals, feedback from outside may be missing.
## Improvement Suggestions
1. **Exploration backlog or hypothesis board:** Even without a classic product backlog, you can work with an **experiment/hypothesis board** (à la Lean Startup). For example:
    * Hypothesis: "Feature X will improve use case Y."
    * Experiment: "Build a prototype and test it with user group Z."
    * Document results & learnings.
2. **Weekly goal setting:** Every Monday, a weekly planning with clear goals for the week (outcome-focused, not task-focused).
3. **Integration with Dual Track Agile:** The approach resembles the **dual-track model**: a **discovery track** (exploring, validating, prototyping) runs in parallel to the later **delivery track** (releases, sprints). You are still fully in the discovery track.
4. **Design studio or spike time boxes:** Structure time for e.g. two-day design studios or one-day spikes; this raises the quality of the exploration.
5. **Review/share session on Fridays:** A fixed weekly review/demo session (even if internal) increases discipline and helps with the "Check" part of PDCA.
## Common Standard Methods in This Phase
* **Design Thinking** (for user-centered exploration)
* **Lean UX / Lean Startup** (hypothesis-driven)
* **Dual Track Agile**
* **Google Design Sprint (5 days)**: extremely helpful when it comes to testing user ideas
* **Shape Up (Basecamp)**: for non-sprint-based work with a "bet" on six-week projects
---
**Conclusion:**
Your PDCA cycle is a solid, pragmatic entry into an uncertain, barely structured product phase. With a few additions (exploration backlog, fixed review elements, hypothesis structure), you can turn it into an effective discovery approach that prepares the transition to a classic agile framework (e.g. Scrum).
Would you like a concrete template for an exploration backlog or hypothesis board for your team?
## Architecture in Place
> OK, you mainly consider daily cycles too short. I would like to add that we have an architecture. That may sound paradoxical, but it is actually the case that we have already designed the software system and already know the components, but now we primarily have to figure out the paths for automatic deployment into a cloud and the configuration and management structures of our software. That is what currently cannot be planned any further. Can you sharpen your feedback again with these additions?
Thanks for the addition; that changes the starting position decisively and allows a much more precise classification.
### New Starting Position
* You **have an architecture**: the components and interfaces are clear.
* There is **already a PoC** and a product vision.
* The **backlog is missing** because you are currently clarifying the **"non-functional" topics such as deployment, cloud environment, configuration, and operations**.
* The system's **technical design is roughly finished**, but the **operations and deployment processes** are unclear and are being developed exploratively.
### Sharper Feedback
#### 1. **Daily PDCA cycles seem too granular in this context**
With infrastructure, deployment, or configuration questions, deep "spikes" often arise (e.g. setting up a CI/CD pipeline, modeling a Helm chart, evaluating secrets handling). These often need more than one day until the "Check", because dependencies emerge (e.g. permissions, cloud access, test environments). A **two- to three-day rhythm** with clear intermediate goals would be more realistic, complemented by:
* a **daily standup as the pacemaker**, but not necessarily as a full PDCA cycle, and
* **weekly planning with goals and review rituals** to make progress measurable.
#### 2. **What you are doing is not product delivery but "system enablement"**
You are in the transition from architecture to an **infrastructure-as-code + platform enablement track**. That means:
* The "user stories" are not classic features but, for example:
    * "As a team, I want to be able to deploy our software with one command."
    * "As an operator, I want to know how to configure services."
This argues for an **infrastructure Kanban board**, not sprints.
#### 3. **Possible process models / best practices**
| Model / method | Why it fits now |
|------------------------------------------|------------------------------------------------------------------|
| **Kanban for technical tasks** | Flexible work-in-progress control, ideal for exploration. |
| **Spikes** (from XP) | Timeboxed exploration of open questions (e.g. Helm, secrets). |
| **Enabler epics (SAFe)** | Frame deployment and ops questions formally as "enablers". |
| **Team Topologies: Platform Team light** | You are currently acting like an internal platform team. |
| **Weekly DevOps review** | Every 5-7 days: status of automation/docs/structures. |
| **ADRs & living docs** | Architectural decision records for deployment/ops decisions. |
#### 4. **Concrete recommendations for structuring**
* **Formulate goals for the 8 weeks** as **"capabilities"**, e.g.:
    * "Greenfield cloud deployment on AWS/GCP via GitHub Actions."
    * "Parameterizable Helm chart with documentation."
    * "Evaluate configurability via ENV vs. secrets."
* Plus a **one- to two-week cadence**, with a review/checkpoint every week.
* **Boards with task categories**:
    * 🧪 Spike
    * 🛠️ Tooling/automation
    * 📦 Infrastructure/service
    * 📓 Documentation
* **Version and document results immediately**, e.g.:
    * GitOps repo structure
    * ADRs
    * Example deployments
---
### Conclusion
You are in the **"technical-operational enablement"** of your system, not in product delivery. That justifies:
* a **Kanban-like, flexible model with timeboxed reviews**,
* daily alignment, but not necessarily daily cycles,
* a focus on **enabler goals instead of functional stories**.
Scrum is out of place here, but your approach is very close to a **modern DevOps-oriented way of working**, as used for example by **platform engineering teams**.
Would you like an example of such a Kanban board with suitable task types and goal structure?
## Kanban
> Yes, please!
Here is a concrete example of a **Kanban board** tailored specifically to your situation as a "platform/enablement" team with an existing architecture picture but an open deployment and operations design.
---
## 🎯 **Goal structure (overarching, for 8 weeks)**
You best formulate these as **capabilities**: abilities the system is supposed to have afterwards:
| Capability ID | Goal description |
|---------------|---------------------------------------------------------------------------------------------|
| C1 | The application can be deployed automatically to cloud environment X (incl. secrets mgmt) |
| C2 | The system configuration is externalized and documented (ENV, YAML, etc.) |
| C3 | Monitoring and logging infrastructure is operational and documented |
| C4 | Dev and test environments are clearly separated and can be provisioned automatically |
| C5 | All platform decisions (e.g. Helm vs. Kustomize) are versioned and justified |
These capabilities form **columns** or **swimlanes** on the board (if your tool supports it, e.g. GitHub Projects, Jira, or Trello with labels).
---
## 🗂️ **Kanban board columns (classic)**
| Column | Purpose |
|------------------|-----------------------------------------------------------|
| 🔍 Backlog | Ideas, hypotheses, tasks prioritized by capability |
| 🧪 In exploration | Active spikes, proofs, technical evaluations |
| 🛠️ In progress | Implementation with a concrete goal |
| ✅ Review / check | Functional check, internal review |
| 📦 Done | Completed, documented, possibly in the repo |
---
## 🏷️ **Task types (labels or emojis for marking)**
| Symbol / label | Type | Example |
|------------------|-----------------------------|--------------------------------------------------------------------------|
| 🧪 Spike | Technical investigation | "Investigate ArgoCD vs. Flux for GitOps deployment" |
| 📦 Infra | Infrastructure | "Provision dev/test/stage in GCP with Terraform" |
| 🔐 Secrets | Security-related task | "Design secret handling with Sealed Secrets" |
| 📓 Docs | Documentation | "Write a README for the developer setup" |
| 🧰 Tooling | CI/CD, pipelines, linters | "Write a GitHub Action for build & deploy" |
| 🔁 Decision | Architecture decision | "ADR: Helm vs. Kustomize for service deployment" |
## 🧩 Beispielhafte Tasks für Capability „C1 Deployment automatisieren“
| Task | Typ | Status |
|----------------------------------------------|----------|----------------|
| Write GitHub Action for Docker image push | 🧰 Tooling | 🛠️ In Progress |
| Evaluate ArgoCD vs. Flux | 🧪 Spike | ✅ Review |
| Create Helm Chart for Service A | 📦 Infra | 🔍 Backlog |
| Create ADR: Deployment-Toolentscheidung | 🔁 Entscheidung | 🔍 Backlog |
| Add deployment doc in Wiki | 📓 Docs | 🔍 Backlog |
---
## 📅 Cadence-Empfehlung
* **Daily Standup (15min)**: Was läuft, was blockiert? Pull statt Push!
* **Weekly Planning (Montag)**: Priorisieren und Backlog-Grooming (welche Capabilities sind diese Woche dran?)
* **Weekly Review (Freitag)**: Erkenntnisse, Entscheidungen und Dokumentation präsentieren
* Optional: **Retrospektive alle 2 Wochen**, falls das Team Energie hat
---
Wenn du magst, kann ich dir ein Beispiel-Board als Markdown-Vorlage oder als CSV-Datei zum Import in Trello, GitHub Projects, Jira etc. machen. Wäre das hilfreich?
## Living Docs
> Can you say a bit more about 'living docs'?
Yes, gladly! **"Living docs"** (or **living documentation**) is a concept from agile software development and DevOps that refers to **up-to-date documentation generated directly from the system or from the work process**. The goal is that the documentation *does not go stale*, but is coupled so tightly to code and system state that it (partially) updates itself or moves along with the development process.
---
## 🔍 **What characterizes "living documentation"?**
| Characteristic | Description |
|----------------------------------|------------------------------------------------------------------------------|
| 🔄 Automatically generated or updated | Generated from code, tests, pipelines, ADRs, or deployments |
| 📌 Version-controlled | Stored in Git and maintained together with the code |
| 🧑‍💻 Close to developers | Evolves with the code; no separation between "docs" and dev |
| 📈 Changeable & traceable | Every change to code/docs has a commit & context |
| 📚 Value for everyone involved | Addresses devs, ops, PMs, or other teams; not just "for later" |
## 🧰 Typische Formen von Living Docs
| Typ | Beschreibung & Tools |
|----------------------|----------------------------------------------------------------------------------------|
| **Architecture Decision Records (ADRs)** | Markdown-Dateien im Repo (z.B. `docs/adr/001-helm-vs-kustomize.md`) |
| **Code-Doku aus Source** | Mit Tools wie JSDoc, TypeDoc, Sphinx, Doxygen, etc. |
| **API-Doku** | Automatisch aus Code oder OpenAPI (Swagger) generiert |
| **Test-Doku (z.B. BDD)** | z.B. Gherkin-Style: `Given/When/Then`-Spezifikationen als Dokumentation |
| **Monitoring & Deployment-Status** | z.B. Dashboards mit Infrastructure-Doku (Grafana, Backstage, Argo UI) |
| **DevDocs im Repo** | z.B. `README.md`, `CONTRIBUTING.md`, `how-to/`, `docs/`, `adr/` |
| **Playbooks / Runbooks** | Schritt-für-Schritt-Anleitungen für Betrieb oder Störungen |
---
## ✅ **Best Practices**
1. **Nutze Markdown + Git:** Einfach, versionierbar, lesbar → ADRs, Runbooks, HowTos.
2. **Dokumentiere Entscheidungen bewusst:** Lieber knappe, saubere ADRs als keine Doku.
3. **Verknüpfe Doku mit Code / CI/CD:** Beispiel: Build-Tool erzeugt automatisch API-Doku oder ADR-Lint-Checks.
4. **Verlinke Logs, Dashboards, Monitoring:** z.B. in einem `system-overview.md`.
5. **Behandle Doku wie Code:** Reviewpflichtig, Pull Request basiert, CI-checkbar.
---
## 🛠️ Tools & Frameworks
| Zweck | Tools (Beispiele) |
|------------------------|-------------------------------------------------------------------------|
| ADRs | [`adr-tools`](https://github.com/npryce/adr-tools), einfaches Markdown |
| Markdown-Doku | MkDocs, Docusaurus, GitBook |
| API-Doku | Swagger/OpenAPI, Redocly |
| Infrastruktur-Doku | Backstage (Spotify), Terraform-docs |
| Doku-Automatisierung | GitHub Actions, pre-commit Hooks, CI-Checks |
---
## 🧠 Denkweise: Doku = Gespräch in Zeit
> "Dokumentation ist kein Nachtrag, sondern ein asynchrones Gespräch mit deinem zukünftigen Ich oder dem nächsten Teammitglied."
Gerade in eurem Kontext (technisches Enablement, Deployment, Betrieb) ist das **entscheidungsorientierte, leichtgewichtige Festhalten von Kontext** Gold wert. ADRs und `docs/runbooks/`-Verzeichnisse sind ein idealer Startpunkt.
---
Möchtest du eine Vorlage für ein ADR oder ein Beispielverzeichnis für eine sinnvolle Living-Docs-Struktur im Repo?

View file

@ -1,15 +0,0 @@
---
title: PoC Structure
weight: 5
description: Building plan of the PoC milestone (end 2024) output
---
Presented and approved within the team on Tuesday, 26.11.2024:
![alt text](./_assets/image.png)
The use cases / application lifecycle and deployment flow are drawn here: https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024
![alt text](./_assets/image-1.png)

View file

@ -1,15 +0,0 @@
---
title: Deployment
weight: 3
---
> **Mantra**:
>
> 1. Everything as Code.
> 1. Cloud-natively deployable everywhere.
> 1. Ramping up and tearing down often is a no-brainer.
> 1. Especially locally (whereby 'locally' means 'under my own control')
## Draft (28.8.24)
![Deployment 2024](./deployment.drawio.png)

View file

@ -1,8 +0,0 @@
---
title: POCs
weight: 2
---
## Further ideas for PoCs
* see sample apps 'metaphor' in https://docs.kubefirst.io/

View file

@ -1,39 +0,0 @@
# Why we have architectural documentation
Attendees: Robert, Patrick, Stefan, Stephan
25.2.25, 13-14h
## Related tickets / links
* https://jira.telekom-mms.com/browse/IPCEICIS-2424
* https://jira.telekom-mms.com/browse/IPCEICIS-478
* Confluence: https://confluence.telekom-mms.com/display/IPCEICIS/Architecture
## Charts
We need charts because:
* external stakeholders (especially architects) want to understand our product and component structure(*)
* our team needs visualization in technical discussions(**)
* we need to have discussions while creating the documentation
(*) marker: "now, for the first time, I have more or less understood what you are actually doing"
(**) marker: ????
## Types of charts
* layered model (frontend, middleware, backend)
* application landscape plan with dependencies, domains
* context from the outside
* component diagram
## Decisions
* OpenBao is a backend system, accessed via APIs
## Further topics / new requirements
* runbook (compare the OpenBao discussions)
* persistence of the EDP configuration (e.g. Postgres)
* OIDC vs. SSI

View file

@ -1,85 +0,0 @@
# Division of labor for architecture, internal and external
Sebastiano, Stefan, Robert, Patrick, Stephan
25.2.25, 14-15h
## Links
* https://confluence.telekom-mms.com/display/IPCEICIS/Team+Members
# Monday call
* Sebastiano joins the Monday call, including Florian, at least interim, as long as we have no architecture "foreign minister"
# Workshops
* after alignment with Hasan on platform workshops
* further participation in additional workshop series to be defined
# Program alignment
* find sponsors
* resolves itself through the workshop series
# Internal architects
* Robert and Patrick join
* topic split
# Product structure
EDP standalone
IPCEI EDP
# Architecture topics
## stl
product structure
application model (CNOE, OAM, Score, XRD, ...)
API
Backstage (usage scenarios)
pipelining
"everything as code", declarative deployment, Crossplane (or orchestrator)
possibly:
identity mgmt
not:
security
monitoring
Kubernetes internals
## Robert
pipelining
Kubernetes internals
API
Crossplane
platforming: creating resources in "clouds" (e.g. GCP, and Hetzner :-) )
## Patrick
security
identity mgmt (SSI)
EaC
and everything else is also great fun for me!
# Assessments
* ipceicis-platform is the most important sub-project (Hasan + Patrick)
* open point: workload control, application model (compatibility with EDP)
* security topic, see SSI vs. OIDC
* we need dedicated workshops to define the collaboration modes
# Commitments
* Patrick and Robert take part in architecture
# Open
* Sebastian Schwaar onboarding? (>=50%) --- Robert will ask
* alternative: consulting/support on demand
* holds a Kubernetes introduction training --> dates to be arranged (with Sophie)

View file

@ -1,23 +0,0 @@
# Crossplane dawn?
* Monday, March 31, 2025
## Issue
Robert worked on the kindserver reconciliation.
He became aware that Crossplane is able to delete clusters when drift is detected. This must not happen in productive clusters.
Even worse, if Crossplane deleted the cluster and then set it up again correctly, ArgoCD would be out of sync and would by default have no idea how to relate the old and new cluster.
## Decisions
1. Quick solution: Crossplane doesn't delete clusters (see the sketch after this list).
   * If it detects drift on a kind cluster, it shall create an alert (like an email) but not act in any way
2. Analyze how the Crossplane orchestration logic calls "business logic" to decide what to do.
   * In this logic we could decide whether to delete resources like clusters and, if so, how. Secondly, an "orchestration", or let's say a workflow, for correctly restoring the old state with respect to ArgoCD could be implemented there.
3. Keep Terraform in mind
   * We will probably need it in adapters anyway
   * If the Crossplane design does not fit, or the benefit is too small, or we definitely have more resources for developing Terraform, then we could switch completely
4. Focus on EDP domain and application logic
   * For the moment (in MVP1) we need to focus on EDP higher-level functionality
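A hedged sketch of decision 1 expressed on a Crossplane managed resource; the resource kind and API group are hypothetical, and `managementPolicies` is only available in newer Crossplane releases:

```yaml
apiVersion: example.org/v1alpha1    # hypothetical managed-resource API
kind: KindCluster
metadata:
  name: productive-cluster
spec:
  deletionPolicy: Orphan            # never delete the external cluster
  managementPolicies: ["Observe"]   # observe drift only; alert instead of reconciling it away
```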

View file

@ -1,42 +0,0 @@
# Platform team exchange
## Stefan
* initial questions:
  * two weeks ago: the workshop big-picture ("Tapete") session
  * who takes part in the workshops?
  * what does Platform offer?
* EDP: could cost 5 million/year
* -> product pitch with Marko
* -> EDP is independent of the IPCEI-CIS cloud continuum*
* generalized quality of services ( <-> platform interface)
## Hasan
* Martin works on: agent-based IaC generation
* help shape the platform workshops
* MMS focus
* connectivity-enabled cloud offering, e2e from infrastructure to end device
* SDK for low-latency systems; consult and integrate
* monitoring in EDP?
* example "Unity"
  * presentation in the arch call
  * how can different application layers be distributed across different infrastructure (compute) layers?
* zero-touch application deployment model
* I am currently being "slowed down"
* workshop participation, TPM application model
## Martin
* edgeXR does not allow persistence
* OpenAI, LLM as an abstraction not available
* currently only compute is available
* roaming of applications --> EDP must support this
* use case: a language model translates design artifacts into architecture, then provisioning is enabled
? application models
? relation to golden paths
* e.g. for pure compute FaaS
View file

@ -1,55 +0,0 @@
+++
title = "Existing Backstage Plugins"
weight = 4
+++
1. **Catalog**:
* Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.
2. **Docs**:
* Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.
3. **API Docs**:
* Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.
4. **TechDocs**:
* A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.
5. **Scaffolder**:
* Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.
6. **CI/CD**:
* Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.
7. **Metrics**:
* Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.
8. **Snyk**:
* Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.
9. **SonarQube**:
* Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.
10. **GitHub**:
* Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.
11. **CircleCI**:
* Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.
12. **Kubernetes**:
* Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.
13. **Cloud**:
* Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.
14. **OpenTelemetry**:
* Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.
15. **Lighthouse**:
* Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.

content/en/docs/_index.md Normal file → Executable file
View file

@ -1,27 +1,9 @@
---
title: "Documentation"
linkTitle: "Documentation"
menu:
main:
weight: 20
title: Developer Framework Documentation
linkTitle: Docs
menu: {main: {weight: 20}}
weight: 20
---
# Edge Developer Platform (EDP) Documentation
This section is the project documentation for the IPCEI-CIS Developer Framework.
Welcome to the EDP documentation. This documentation serves developers, engineers, and auditors who want to understand, use, and audit the Edge Developer Platform.
It describes the outcomes and products of the edgeDeveloperFramework (eDF) sub-project within IPCEI-CIS.
## Target Audience
* **Developers & Engineers**: Learn how to use the platform, deploy applications, and integrate services
* **Platform Engineers**: Understand the architecture, components, and operational aspects
* **Auditors & Governance**: Access project history, decisions, and compliance information
## Documentation Structure
The documentation is organized into three core areas:
* **[Edge Developer Platform (EDP)](/docs/edp/)**: The central platform to support developers working at the edge, based around Forgejo
* **[EdgeConnect Cloud](/docs/edgeconnect/)**: The sovereign edge cloud context and key deployment target for EDP integrations
* **[Governance](/docs/governance/)**: Project history, decision context, and audit-oriented traceability

View file

@ -4,7 +4,7 @@ weight: 2
description: The golden paths in the engineers and product development domain
---
## Rationale
The challenge of the IPCEI-CIS Developer Framework is to provide value for DTAG customers, and more specifically for developers of DTAG customers.
@ -40,22 +40,10 @@ Deploy and develop the famous socks shops:
* https://github.com/kezoo/nestjs-reactjs-graphql-typescript-boilerplate-example
### Telemetry Use Case with respect to the Fibonacci workload
The Fibonacci App on the cluster can be accessed at the path https://cnoe.localtest.me/fibonacci.
It can be called, for example, using the URL https://cnoe.localtest.me/fibonacci?number=5000000.
The resulting resource spike can be observed on the Grafana dashboard "Kubernetes / Compute Resources / Cluster".
The resulting visualization should look similar to this:
![alt text](fibonacci-app_cpu-spike.png)
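For a quick check from the command line, the workload can also be triggered with `curl` (a minimal sketch; the `number` value is arbitrary and only controls how much CPU the computation consumes):
```bash
# Trigger a CPU-heavy Fibonacci computation on the demo app
# (assumes the local idpbuilder setup with path routing is running)
curl "https://cnoe.localtest.me/fibonacci?number=5000000"
```
The spike should then appear on the Grafana dashboard shortly afterwards.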
## When and how to use the developer framework?
### An example
... taken from https://cloud.google.com/blog/products/application-development/common-myths-about-platform-engineering?hl=en
![alt text](image.png)

View file

(image file, 944 KiB; content unchanged)

View file

(image file, 160 KiB; content unchanged)

View file

@ -6,10 +6,3 @@ description: What in terms of components or building blocks is needed in a platf
> This page is a work in progress. Right now the index contains a collection of links describing and listing typical components and building blocks of platforms. We also have a growing number of subsections on particular types of components.
See also:
* https://thenewstack.io/build-an-open-source-kubernetes-gitops-platform-part-1/
* https://thenewstack.io/build-an-open-source-kubernetes-gitops-platform-part-2/
![alt text](7b748ff4-image2-1024x580.png)

View file

@ -47,11 +47,11 @@ Components are the composable and self-contained building blocks for the context
Components must be as small as possible and follow the same concepts of software development and deployment as any other software product. In particular, they must have the following characteristics:
* designed for a single task
* provide a clear and intuitive output
* easy to compose
* easily customizable or interchangeable
* automatically testable
- designed for a single task
- provide a clear and intuitive output
- easy to compose
- easily customizable or interchangeable
- automatically testable
In the EDF, components are divided into different categories. Each category contains components that perform similar actions. For example, the `build` category contains components that compile code, while the `deploy` category contains components that automate the management of the artefacts created in a production-like system.

View file

@ -8,4 +8,4 @@ There is no continuous-whatever step in between ... GitOps is just 'overwriting'
This means whatever quality-ensuring steps have to take place before 'overwriting' have to be defined as state changers in the repos, not in the environments.
Conclusion: I think we only have three contexts, or let's say we don't have the context 'continuous delivery'

View file

@ -33,4 +33,4 @@ https://www.getport.io/compare/backstage-vs-port
* [port-vs-backstage-choosing-your-internal-developer-portal](https://medium.com/@vaibhavgupta0702/port-vs-backstage-choosing-your-internal-developer-portal-71c6a6acd979)
* [idp-vs-self-service-portal-a-platform-engineering-showdown](https://thenewstack.io/idp-vs-self-service-portal-a-platform-engineering-showdown)
* [portals-vs-platform-orchestrator](https://humanitec.com/portals-vs-platform-orchestrator)
* [internal-developer-portal-vs-internal-developer-platform](https://www.cortex.io/post/internal-developer-portal-vs-internal-developer-platform)

View file

@ -17,7 +17,7 @@ description: "The new kid on the block since 2023 is 'Platform Orchestrating':
* cnoe.io
#### Resources
* [CNOE IDPBuilder](https://cnoe.io/docs/reference-implementation/installations/idpbuilder)
* https://github.com/csantanapr/cnoe-examples/tree/main

View file

@ -29,8 +29,8 @@ description: A currently uncurated list of references with respect to typical p
| Core Component | Short Description |
| ---- | --- |
| Application Configuration Management | Manage application configuration in a dynamic, scalable and reliable way. |
| Infrastructure Orchestration | Orchestrate your infrastructure in a dynamic and intelligent way depending on the context. |
| Environment Management | Enable developers to create new and fully provisioned environments whenever needed. |
| Deployment Management | Implement a delivery pipeline for Continuous Delivery or even Continuous Deployment (CD). |
| Role-Based Access Control | Manage who can do what in a scalable way. |

View file

@ -5,7 +5,7 @@ description: Theory and general blue prints of the platform engineering discipli
---
## Rationale
IPCEI-CIS Developer Framework is part of a cloud native technology stack. To design the capabilities and architecture of the Developer Framework we need to define the surrounding context and internal building blocks, both aligned with cutting-edge cloud native methodologies and research results.
@ -16,7 +16,6 @@ In CNCF the discipline of building stacks to enhance the developer experience is
[CNCF first asks](https://tag-app-delivery.cncf.io/whitepapers/platforms/) why we need platform engineering:
> The desire to refocus delivery teams on their core focus and reduce duplication of effort across the organisation has motivated enterprises to implement platforms for cloud-native computing. By investing in platforms, enterprises can:
>
> * Reduce the cognitive load on product teams and thereby accelerate product development and delivery
> * Improve reliability and resiliency of products relying on platform capabilities by dedicating experts to configure and manage them
> * Accelerate product development and delivery by reusing and sharing platform tools and knowledge across many teams in an enterprise
@ -41,7 +40,7 @@ https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer
## Internal Developer Platform
> In IPCEI-CIS right now (July 2024) we are primarily interested in understanding how IDPs are built, as one option to implement an IDP is to build it ourselves.
The outcome of the platform engineering discipline, created by the platform engineering team, is a so-called 'Internal Developer Platform'.
@ -70,4 +69,4 @@ The amount of available IDPs as product is rapidly growing.
## Platform 'Initiatives' aka Use Cases
Cortex is [talking about Use Cases (aka Initiatives):](https://www.youtube.com/watch?v=LrEC-fkBbQo) (or https://www.brighttalk.com/webcast/20257/601901)
![alt text](cortex-use-cases.png)

View file

@ -7,14 +7,14 @@ weight = 1
date = '2024-07-30'
+++
## [The Structure of a Successful Internal Developer Platform](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures)
In a platform reference architecture there are five main planes that make up an IDP:
1. Developer Control Plane: the primary configuration layer and interaction point for the platform users. Components include workload specifications such as Score and a portal for developers to interact with.
2. Integration and Delivery Plane: builds and stores the image, creates app and infra configs, and deploys the final state. It usually contains a CI pipeline, an image registry, a Platform Orchestrator, and the CD system.
3. Resource Plane: where the actual infrastructure exists, including clusters, databases, storage or DNS services.
4. Monitoring and Logging Plane: provides real-time metrics and logs for apps and infrastructure.
5. Security Plane: manages secrets and identity to protect sensitive information, e.g., storing, managing, and securely retrieving API keys and credentials/secrets.
![idp](../idp.webp)
@ -29,9 +29,12 @@ https://github.com/humanitec-architecture
https://humanitec.com/reference-architectures
## Create a reference architecture
[Create your own platform reference architecture](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures)
[Reference arch slide deck](https://docs.google.com/presentation/d/1yAf_FSjiA0bAFukgu5p1DRMvvGGE1fF4KhvZbb7gn2I/edit?pli=1#slide=id.g1ef66f3349b_3_3)

View file

@ -1,16 +1,16 @@
---
title: CNOE
weight: 4
---
+++
title = "CNOE"
weight = 4
+++
* https://cnoe.io/docs/intro
> The goal for the CNOE framework is to bring together a cohort of enterprises operating at the same scale so that they can navigate their operational technology decisions together, de-risk their tooling bets, coordinate contribution, and offer guidance to large enterprises on which CNCF technologies to use together to achieve the best cloud efficiencies.
### Pronunciation
* English: 'Kuh.noo',
* i.e. like 'Kanu' (canoe) in German
@ -26,7 +26,6 @@ See https://cnoe.io/docs/reference-implementation/integrations/reference-impl:
# in a local terminal with docker and kind
idpbuilder create --use-path-routing --log-level debug --package-dir https://github.com/cnoe-io/stacks//ref-implementation
```
### Output
```bash
@ -151,7 +150,7 @@ Data:
USER_PASSWORD : RwCHPvPVMu+fQM4L6W/q-Wq79MMP+3CN-Jeo
```
### login to backstage
Login works with the credentials shown above:

View file

(image file, 113 KiB; content unchanged)

View file

(image file, 364 KiB; content unchanged)

View file

(image file, 208 KiB; content unchanged)

View file

@ -0,0 +1,7 @@
+++
title = "Humanitec"
weight = 4
+++
tbd

View file

@ -1,46 +0,0 @@
---
title: EdgeConnect
linkTitle: EdgeConnect Cloud
weight: 20
description: >
Sovereign edge cloud for running applications
---
## Overview
EdgeConnect is a custom cloud provided by the project as a whole. It has several goals, including retaining sovereign control over cloud compute resources, and supporting sustainability-aware infrastructure choices.
While EdgeConnect is managed outwith our Edge Developer Platform, we have produced a number of tools to facilitate its use and broaden its applicability. These are an [SDK](/docs/edgeconnect/edgeconnect-sdk/), a command-line [client](/docs/edgeconnect/edgeconnect-client/), a bespoke [provider](/docs/edgeconnect/terraform-provider/) for [Terraform](https://developer.hashicorp.com/terraform), and tailor-made [Forgejo Actions](/docs/edgeconnect/edgeconnect-actions/).
{{< likec4-view view="edgeconnect-context" project="architecture" title="EdgeConnect Context View: Users, Tooling and Control Plane" >}}
The diagram summarizes how EdgeConnect is typically consumed and operated. Developers and automation do not interact with edge clusters directly; instead they use stable entry points (CLI, SDK, Terraform) that talk to the EdgeConnect API.
EdgeConnect itself is shown as a single cloud boundary that contains the control plane (API + controllers) and the managed resource model (e.g., App, AppInstance). Controllers continuously reconcile the desired state expressed via the API and drive deployments into the runtime.
EDP appears here as an external consumer: it can automate provisioning and deployment workflows (for example via Terraform) while EdgeConnect remains a separately managed cloud. This separation clarifies responsibilities: EDP orchestrates delivery processes, EdgeConnect provides the target runtime and lifecycle management.
## Key Features
* Managed by the broader project, not specifically by EDP
* Focus on sovereignty and sustainability
* Utilities such as [CLI](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) encourage widespread platform use
* [EDP](/docs/edp/) products such as [Forgejo](/docs/edp/forgejo/) are hosted on [OTC](/docs/edp/deployment/otc/) rather than EdgeConnect
## Purpose in EDP
EdgeConnect is documented here because it is a key deployment target and integration point for the broader platform. Even though EdgeConnect is operated separately from EDP (and core EDP services are hosted on OTC), EDP tooling and automation frequently needs to provision or deploy workloads into EdgeConnect in a consistent, repeatable way.
Working with EdgeConnect also helps ensure that our developer workflows and platform components remain portable and “cloud-ready” beyond a single environment. By integrating with a sovereign system and making sustainability-aware choices visible in practice, we align platform engineering with the project's wider goals and enable closer collaboration with the teams operating the EdgeConnect cloud.
### Access
* [Gardener console access](https://gardener.apps.mg3.mdb.osc.live/namespace/garden-platform/shoots)
- Choose `Log in with mg3` then `platform` before entering credentials set up by the Platform Team.
* [Edge cluster](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Orca cluster](https://hub.apps.orca.platform.mg3.mdb.osc.live/)
### Notes
Documentation for EdgeConnect is provided using other systems, including Confluence.

View file

@ -1,286 +0,0 @@
---
title: Forgejo Actions
linkTitle: Forgejo Actions
weight: 40
description: >
CI/CD actions for automated EdgeConnect deployment and deletion
---
## Overview
The EdgeConnect Actions are custom composite actions for use in [Forgejo](/docs/edp/forgejo/actions/)/[GitHub Actions](https://forgejo.org/docs/latest/user/actions/github-actions/) that automate EdgeConnect application deployments in CI/CD pipelines. They wrap the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) to provide a simple, declarative way to deploy and delete applications without manual CLI installation or configuration.
Two actions are available:
- **edge-connect-deploy-action**: Deploys applications using declarative YAML configuration
- **edge-connect-delete-action**: Deletes applications and their instances from EdgeConnect
## Key Features
* **Zero installation**: Actions automatically download and use the EdgeConnect Client
* **Declarative workflow**: Deploy applications using YAML configuration files
* **CI/CD optimized**: Designed for automated pipelines with auto-approve and dry-run support
* **Version pinning**: Specify exact EdgeConnect Client version for reproducible builds
* **Secrets management**: Credentials passed securely through workflow secrets
* **Compatible with GitHub and Forgejo Actions**: Works in both ecosystems
## Purpose in EDP
CI/CD automation is essential for modern development workflows. While the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) provides powerful deployment capabilities, integrating it into CI/CD pipelines requires downloading binaries, managing credentials, and configuring authentication for each workflow run.
These actions eliminate that boilerplate by:
- Automatically fetching the correct Client version
- Handling authentication setup
- Providing a clean, reusable action interface
- Reducing pipeline configuration to a few lines
This enables teams to focus on application configuration rather than pipeline plumbing, while maintaining the full power of declarative EdgeConnect deployments.
The actions complement the [Terraform provider](/docs/edgeconnect/terraform-provider/) by offering a simpler option for teams already using Forgejo/GitHub Actions who want deployment automation without adopting Terraform.
## Repository
**Deploy Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action
**Delete Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action
**Demo Repository**: https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo
## Getting Started
### Prerequisites
* Forgejo or GitHub repository with Actions enabled
* EdgeConnect access credentials (username and password)
* `EdgeConnectConfig.yaml` file defining your application (see [YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
* For Kubernetes apps: K8s manifest file referenced in the config
* Repository secrets configured with EdgeConnect credentials
### Quick Start
1. Create an `EdgeConnectConfig.yaml` file in your repository defining your application (see [Client documentation](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
2. Add EdgeConnect credentials as repository secrets:
- `EDGEXR_PLATFORM_USERNAME`
- `EDGEXR_PLATFORM_PASSWORD`
3. Create a workflow file (e.g., `.forgejo/workflows/deploy.yaml`) using the action
4. Commit and push to trigger the workflow
### Verification
After the workflow runs successfully:
- Check the workflow logs for deployment status
- Verify resources appear in the [EdgeConnect console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
- Test application endpoints are accessible
## Usage Examples
### Minimal Deploy Action
```yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Minimal Delete Action
```yaml
- name: Delete from EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Complete Workflow Example
A typical deployment workflow that builds, tags, and deploys:
```yaml
name: deploy
on:
workflow_run:
workflows: [build]
types:
- completed
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Update manifest with image tag
run: |
sha="${{ github.sha }}"
shortSha="${sha:0:7}"
echo "Setting image version to: registry.example.com/myapp:${shortSha}"
sed -i "s@###IMAGETAG###@registry.example.com/myapp:${shortSha}@g" ./k8s-deployment.yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Dry-Run Mode
Preview changes without applying them:
```yaml
- name: Preview deployment
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
dryRun: 'true'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Version Pinning
Use a specific EdgeConnect Client version:
```yaml
- name: Deploy with specific version
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
version: 'v2.0.1'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
## Integration Points
* **EdgeConnect Client**: Actions download and execute the Client CLI tool
* **EdgeConnect SDK**: Client uses the SDK for all API interactions
* **Forgejo/GitHub Actions**: Native integration with both action ecosystems
* **EdgeConnect API**: All operations communicate with EdgeConnect platform APIs
* **Container Registries**: Works with any registry for application images
## Configuration
### Action Inputs
Both deploy and delete actions accept the same inputs:
| Input | Required | Default | Description |
|-------|----------|---------|-------------|
| `configFile` | Yes | - | Path to EdgeConnectConfig.yaml file |
| `baseUrl` | Yes | - | EdgeConnect API base URL (e.g., https://hub.apps.edge.platform.mg3.mdb.osc.live) |
| `username` | Yes | - | EdgeConnect username for authentication |
| `password` | Yes | - | EdgeConnect password for authentication |
| `dryRun` | No | `false` | Preview changes without applying (set to `'true'` to enable) |
| `version` | No | `v2.0.1` | EdgeConnect Client version to download and use |
### YAML Configuration File
The `configFile` parameter points to an `EdgeConnectConfig.yaml` that defines your application and deployment targets. See the [EdgeConnect Client YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format) for the complete specification.
Example structure:
```yaml
kind: edgeconnect-deployment
metadata:
name: "my-app"
appVersion: "1.0.0"
organization: "myorg"
spec:
k8sApp:
manifestFile: "./k8s-deployment.yaml"
infraTemplate:
- region: "EU"
cloudletOrg: "TelekomOp"
cloudletName: "Munich"
flavorName: "EU.small"
```
### Secrets Management
Configure repository secrets in Forgejo/GitHub:
1. Navigate to repository Settings → Secrets
2. Add secrets:
- Name: `EDGEXR_PLATFORM_USERNAME`, Value: your EdgeConnect username
- Name: `EDGEXR_PLATFORM_PASSWORD`, Value: your EdgeConnect password
3. Reference in workflows using `${{ secrets.SECRET_NAME }}`
## Troubleshooting
### Action Fails with "Failed to download edge-connect-client"
**Problem**: Action cannot download the Client binary
**Solution**:
- Verify the `version` parameter matches an actual release version
- Ensure the release exists at https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
- Check network connectivity from the runner
- Try using default version by omitting the `version` parameter
### Authentication Errors
**Problem**: "authentication failed" or "unauthorized" errors
**Solution**:
- Verify secrets are correctly configured in repository settings
- Check secret names match exactly (case-sensitive)
- Ensure `baseUrl` is correct for your target environment (Edge vs Orca)
- Confirm credentials work by testing with the [client](../edgeconnect-client/)
### "Configuration validation failed"
**Problem**: YAML configuration file validation errors
**Solution**:
- Verify `configFile` path is correct relative to repository root
- Check YAML syntax is valid (use a YAML validator)
- Ensure all required fields are present (see [Client docs](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
- Verify manifest file paths in the config exist and are correct
### Resources Not Appearing in Console
**Problem**: Action succeeds but resources don't appear in EdgeConnect console
**Solution**:
- Verify you're checking the correct environment (Edge vs Orca)
- Ensure `baseUrl` parameter matches the console you're viewing
- Check organization name in config matches your console access
- Review action logs for any warnings or skipped operations
### Deployment Succeeds but App Doesn't Work
**Problem**: Deployment completes but application is not functioning
**Solution**:
- Check application logs in the EdgeConnect console
- Verify image tags are correct (common issue with placeholder replacement)
- Ensure manifest files reference correct image registry and paths
- Check network configuration allows required outbound connections
- Verify cloudlet has sufficient resources for the specified flavor
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect Client Documentation](/docs/edgeconnect/edgeconnect-client/)
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* [Terraform Provider Documentation](/docs/edgeconnect/terraform-provider/)
* [EdgeConnect Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Demo Repository](https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo)
* [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)

View file

@ -1,246 +0,0 @@
---
title: EdgeConnect Client
linkTitle: Client
weight: 20
description: >
Client software for establishing EdgeConnect connections
---
## Overview
The EdgeConnect Client is a command-line tool for managing EdgeConnect applications and instances. It is built using our Golang [SDK](/docs/edgeconnect/edgeconnect-sdk/), and supports functionality to create, destroy, describe and list various resources.
The tool provides both imperative commands (for direct resource management) and declarative workflows (using YAML configuration files) to deploy applications across multiple edge cloudlets. It supports different EdgeConnect deployment environments through an API version selector.
## Key Features
* **Dual workflow support**: Imperative commands for direct operations, declarative YAML for infrastructure-as-code
* **Multi-cloudlet deployment**: Deploy applications to multiple edge locations from a single configuration
* **Deployment planning**: Preview and approve changes before applying them (dry-run mode)
* **Environment compatibility**: Works with different EdgeConnect deployment environments (configured via `api-version`)
* **CI/CD ready**: Designed for automated deployments with auto-approve and exit codes
## Purpose in EDP
No system can be considered useful unless it is actually, in practice, used. While the Edge Connect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential tools to allow the platform to be used by developers, there are numerous use cases for interaction that is automated but simpler to use than an API.
The EdgeConnect Client bridges the gap between manual console operations and direct API integration, enabling automated deployments in CI/CD pipelines, infrastructure-as-code workflows, and scripted operations while maintaining simplicity and usability.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Releases**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
## Getting Started
### Prerequisites
* Access credentials for the EdgeConnect platform (username and password)
* Knowledge of your target deployment environment (determines `api-version` setting)
* For Kubernetes deployments: K8s manifest files
* For Docker deployments: Docker image reference
### Quick Start
1. Download the Edge Connect Client binary from the Forgejo [releases page](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases) for your platform (Linux, macOS, or Windows)
2. Extract and move to your PATH: `tar -xzf edge-connect-client_*.tar.gz && sudo mv edge-connect /usr/local/bin/`
3. Configure authentication using environment variables or a config file (see Configuration section)
4. Verify installation: `edge-connect --help`
### Verification
Run `edge-connect app list --org <your-org> --region <region>` to verify you can authenticate and communicate with the EdgeConnect API.
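As a minimal sketch, authentication can be supplied via the documented environment variables before running the verification command (values here are placeholders):
```bash
# Configure authentication via environment variables (placeholder values)
export EDGE_CONNECT_BASE_URL="https://hub.apps.edge.platform.mg3.mdb.osc.live"
export EDGE_CONNECT_USERNAME="your-username@example.com"
export EDGE_CONNECT_PASSWORD="your-password"

# List apps to confirm authentication and connectivity
edge-connect app list --org myorg --region EU
```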
## Usage Examples
### Declarative Deployment (Recommended)
Create an `EdgeConnectConfig.yaml` file defining your application and deployment targets, then apply it:
```bash
edge-connect apply -f EdgeConnectConfig.yaml
```
Use `--dry-run` to preview changes without applying them, and `--auto-approve` for automated CI/CD workflows.
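For example, a typical CI/CD sequence might first preview and then apply non-interactively (a sketch using only the documented flags):
```bash
# Preview the deployment plan without making changes
edge-connect apply -f EdgeConnectConfig.yaml --dry-run

# Apply without an interactive approval prompt (suitable for pipelines)
edge-connect apply -f EdgeConnectConfig.yaml --auto-approve
```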
### Imperative Commands
Direct resource management using CLI commands:
```bash
# Create an application
edge-connect app create --org myorg --name myapp --version 1.0.0 --region EU
# Create an instance on a specific cloudlet
edge-connect instance create --org myorg --name myinstance \
--app myapp --version 1.0.0 --region EU \
--cloudlet Munich --cloudlet-org TelekomOp --flavor EU.small
# List resources
edge-connect app list --org myorg --region EU
edge-connect instance list --org myorg --region EU
# Delete resources
edge-connect instance delete --org myorg --name myinstance --region EU \
--cloudlet Munich --cloudlet-org TelekomOp
edge-connect app delete --org myorg --name myapp --version 1.0.0 --region EU
```
## Integration Points
* **EdgeConnect API**: Communicates with EdgeConnect platform APIs for all resource operations
* **EdgeConnect SDK**: Built on top of the Golang SDK, sharing authentication and client implementation
* **CI/CD Pipelines**: Designed for integration with GitLab CI, GitHub Actions, and other automation tools
* **Infrastructure-as-Code**: YAML configuration files enable GitOps workflows
## Configuration
### Global Settings
The client can be configured via config file, environment variables, or command-line flags (in order of precedence: flags > env vars > config file).
**Config File** (`~/.edge-connect.yaml` or use `--config` flag):
```yaml
base_url: "https://hub.apps.edge.platform.mg3.mdb.osc.live"
username: "your-username@example.com"
password: "your-password"
api_version: "v2" # v1 or v2 - identifies deployment environment
```
**Environment Variables**:
- `EDGE_CONNECT_BASE_URL`: API base URL
- `EDGE_CONNECT_USERNAME`: Authentication username
- `EDGE_CONNECT_PASSWORD`: Authentication password
- `EDGE_CONNECT_API_VERSION`: API version selector (v1 or v2, default: v2)
**Global Flags** (available on all commands):
- `--base-url`: API base URL
- `--username`: Authentication username
- `--password`: Authentication password
- `--api-version`: API version selector (v1 or v2) - specifies which deployment environment to target
- `--config`: Path to config file
- `--debug`: Enable debug logging
**Note on API Versions**: The `api-version` setting (v1 or v2) is an internal label used to distinguish between different EdgeConnect deployment environments, not an official API version designation from the platform.
### Commands
**App Management** (`edge-connect app <command>`):
CLI command `app` corresponds to **App** in the platform console.
- `create`: Create app (flags: `--org`, `--name`, `--version`, `--region`)
- `show`: Show app details (flags: same as create)
- `list`: List apps (flags: `--org`, `--region`, optional: `--name`, `--version`)
- `delete`: Delete app (flags: `--org`, `--name`, `--version`, `--region`)
**App Instance Management** (`edge-connect instance <command>`):
CLI command `instance` corresponds to **App Instance** in the platform console.
- `create`: Create app instance (flags: `--org`, `--name`, `--app`, `--version`, `--region`, `--cloudlet`, `--cloudlet-org`, `--flavor`)
- `show`: Show app instance details (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`, `--app-id`)
- `list`: List app instances (flags: same as show, all optional)
- `delete`: Delete app instance (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`)
**Declarative Operations**:
- `apply`: Deploy from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
- `delete`: Delete from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
### YAML Configuration Format
The `EdgeConnectConfig.yaml` file defines apps and their deployment targets:
```yaml
kind: edgeconnect-deployment
metadata:
name: "my-app" # App name (required)
appVersion: "1.0.0" # App version (required)
organization: "myorg" # Organization (required)
spec:
# Choose ONE: k8sApp OR dockerApp
k8sApp:
manifestFile: "./k8s-deployment.yaml" # Path to K8s manifest
# OR dockerApp:
# image: "registry.example.com/myimage:tag"
# manifestFile: "./docker-compose.yaml" # Optional
# Deployment targets (at least one required)
infraTemplate:
- region: "EU" # Region (required)
cloudletOrg: "TelekomOp" # Cloudlet provider (required)
cloudletName: "Munich" # Cloudlet name (required)
flavorName: "EU.small" # Instance size (required)
- region: "US"
cloudletOrg: "TelekomOp"
cloudletName: "gardener-shepherd-test"
flavorName: "default"
# Optional network configuration
network:
outboundConnections:
- protocol: "tcp" # tcp, udp, or icmp
portRangeMin: 80
portRangeMax: 80
remoteCIDR: "0.0.0.0/0"
- protocol: "tcp"
portRangeMin: 443
portRangeMax: 443
remoteCIDR: "0.0.0.0/0"
# Optional deployment strategy (default: recreate)
deploymentStrategy: "recreate" # recreate, blue-green, or rolling
```
**Key Points**:
- Manifest file paths are relative to the config file location
- Multiple `infraTemplate` entries deploy to multiple cloudlets simultaneously
- Network configuration is optional; outbound connections default to platform settings
- Deployment strategy currently only supports "recreate" (others planned)
## Troubleshooting
### Authentication Failures
**Problem**: Errors like "authentication failed" or "unauthorized"
**Solution**:
- Verify credentials are correct in config file or environment variables
- Ensure `base_url` includes the scheme (https://) and has no trailing path
- Check that you're connecting to the correct cloud instance (Edge or Orca)
- Ensure the correct `api-version` is set for your deployment environment
### "Configuration validation failed" Errors
**Problem**: YAML configuration file validation errors
**Solution**:
- Check that all required fields are present (name, appVersion, organization)
- Ensure you have exactly one of `k8sApp` or `dockerApp` (not both, not neither)
- Verify manifest file paths exist relative to the config file location
- Check for leading/trailing whitespace in string values
- Ensure at least one `infraTemplate` entry is defined
### Wrong API Version or Cloud Instance
**Problem**: Commands work but resources don't appear in the console, or vice versa
**Solution**: Verify both the `base_url` and `api-version` match your target environment. There are two cloud instances (Edge and Orca) with different URLs and API versions. Check with your platform administrator for the correct configuration.
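As an illustration, switching the client from the Edge to the Orca environment only requires changing the base URL and, if applicable, the API version (the correct version per environment is deployment-specific, so the value below is an assumption):
```bash
# Point the client at the Orca cloud instead of Edge (URL from Additional Resources)
export EDGE_CONNECT_BASE_URL="https://hub.apps.orca.platform.mg3.mdb.osc.live"
# The api-version for each environment varies; confirm with your platform administrator
export EDGE_CONNECT_API_VERSION="v2"
edge-connect app list --org myorg --region EU
```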
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* **Edge Cloud**: [Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.edge.platform.mg3.mdb.osc.live/)
* **Orca Cloud**: [Console](https://hub.apps.orca.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.orca.platform.mg3.mdb.osc.live/)
* [Source Code Repository](https://edp.buildth.ing/DevFW-CICD/edge-connect-client)

View file

@ -1,70 +0,0 @@
---
title: EdgeConnect SDK
linkTitle: SDK
weight: 10
description: >
Software Development Kit for interacting with EdgeConnect
---
## Overview
The EdgeConnect SDK is a Go library which provides a simple method for interacting with Edge Connect within programs. It is designed to be used by other tools, such as the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) or the [Terraform provider](/docs/edgeconnect/terraform-provider/).
## Key Features
* Allows querying endpoints without the need to manage API calls and responses directly
* Wraps the existing [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live/)
* Supports multiple unnumbered versions of the API
## Purpose in EDP
No system can be considered useful unless it is actually, in practice, used. While the Edge Connect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential tools to allow the platform to be used by developers, there are numerous use cases for interaction that is automated but simpler to use than an API. These include a [command-line tool](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/).
While each such tool could simply wrap existing endpoints independently, this is generally too low-level for sustainable development. It would involve extensive boilerplate code in each such package, and small changes to API endpoints or error handling could require constant rework.
To avoid this, the Edge Connect SDK aims to provide a common library for interacting with EdgeConnect, allowing the abstraction of HTTP requests and authentication procedures while nonetheless allowing access directly to the endpoints available.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Documentation**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk
## Getting Started
### Prerequisites
* Golang
* Edge Connect credentials
### Quick Start
1. Simply [import](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#installation) the SDK to your project
2. [Initialise and configure](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#configuration-options) a client with your credentials
3. [Build](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) your code around the existing endpoints
### Verification
Initialise a client with your credentials and call a simple endpoint (for example, listing apps); if the call returns without authentication errors, the SDK is working. See the README examples linked below.
## Usage Examples
See [README](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) for simple code examples, or repositories for [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) for full projects relying on it.
## Troubleshooting
### Varying code versions
**Problem**: While the Edge Connect API does not (at time of writing) have different semantic versions, it does have different iterations which function differently. The SDK provides two different libraries, labelled [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect/v2) and referring to API definitions similarly stored as [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v1.json) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v2.json).
**Solution**: If you receive errors when using the SDK, consider changing the version you import:
```go
import v1 "edp.buildth.ing/DevFW-CICD/edge-connect-client/sdk/edgeconnect"
import v2 "edp.buildth.ing/DevFW-CICD/edge-connect-client/v2/sdk/edgeconnect/v2"
```
## Status
**Maturity**: Beta

View file

@ -1,80 +0,0 @@
---
title: Terraform provider for Edge cloud
linkTitle: Terraform provider
weight: 30
description: Custom Terraform provider for orchestrating Edge deployments
---
## Overview
This work-in-progress Terraform provider for Edge cloud allows orchestration of selected resources using flexible, concise [HCL](https://developer.hashicorp.com/terraform/language). This allows deployment to Edge Cloud through a familiar format, abstracting away specific endpoints and authentication elements, and allowing seamless combination of Edge resources with others: on OTC, other clouds, or local utilities.
## Key Features
* Interact with Apps and AppInstances using widely-used Terraform framework
* Minimal configuration via Terraform's own mechanisms: just an endpoint and credentials, with no need to deal with headers or other API boilerplate
* Also works with community-driven OpenTofu
* The provider is currently under development; more features can be added on request.
## Purpose in EDP
Interacting with infrastructure is a complex process, with many parameters and components working together. Doing so by clicking buttons in a web UI ("ClickOps") is extremely difficult to scale, rapidly becoming highly confusing.
Instead, automations are possible through APIs and SDKs. Working directly with an API (e.g. via `curl`) tends to involve large amounts of boilerplate code to manage authentication, rarely-changing configuration such as region/tenant selection, and more. When one resource (say, a web server) must interact with another (say, a DNS record), the cross-references further increase this complexity.
An SDK mitigates this complexity when coding software, by providing library functions which interact with the API in abstracted ways which require a minimum of necessary information. Our SDK for Edge Connect is described in a [separate section](/docs/edgeconnect/edgeconnect-sdk/).
However, when simply wanting to deploy infrastructure in isolation - say, updating the status of a Kubernetes or App resource after a change in configuration - an SDK is still an overly complicated tool.
This is where [Terraform](https://developer.hashicorp.com/terraform) or its community-led alternative [OpenTofu](https://opentofu.org/), come in. They provide a simple language for defining resources, with a level of abstraction that retains the power and flexibility of the API while greatly simplifying definitions and execution.
Terraform is widely used for major infrastructure systems such as [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) or general [Kubernetes](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs). However, it is highly flexible, supporting a range of resource types which are not inherently tied to infrastructure: [file](https://registry.terraform.io/search/providers?q=file) manipulation; package setup through [Ansible](https://registry.terraform.io/providers/ansible/aap/1.4.0); secret generation in [Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs).
As a result of this breadth of functionality and cross-compatibility, Terraform support is considered by some as necessary for a platform to be used 'seriously' - that is, at scale, or in major workloads. Our provider thus unlocks broad market relevance for the platform in a way few other tools or features could.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect
**Documentation**: Provider is intended to ultimately wrap each resource-based endpoint of the [Edge API](https://swagger.edge.platform.mg3.mdb.osc.live/), but currently supports a limited [subset of resources](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources).
## Getting Started
### Prerequisites
* [Terraform](https://developer.hashicorp.com/terraform) or [OpenTofu](https://opentofu.org/)
* Edge access and credentials
### Quick Start
1. Configure Terraform to use the provider by [including it](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#using-terraform-registry-recommended) in `provider.tf`
1. In the same directory, create terraform resources in `.tf` files according to the [spec](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources)
1. [Set up credentials](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect/src/branch/main/README.md#provider-configuration) using environment variables or a `provider` block
1. Run `terraform init` in the directory
1. Execute `terraform plan` and/or `terraform apply` to deploy your application
1. `terraform destroy` can be used to remove all deployed resources
### Verification
If `terraform apply` completes successfully (without errors), the provider is working correctly. You can also manually validate in the Edge UI that your resources have been deployed/reconfigured as Terraform indicated.
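A slightly stricter check is to run a second plan after applying; with a standard Terraform installation, the `-detailed-exitcode` flag makes remaining drift machine-readable:
```bash
# Apply, then confirm there is no remaining drift
terraform apply -auto-approve
terraform plan -detailed-exitcode
# exit code 0: no changes pending; 2: changes pending; 1: error
```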
## Status
**Maturity**: Experimental
## Additional Resources
* [Terralist](https://www.terralist.io/)
* [Terraform](https://developer.hashicorp.com/terraform)
* [OpenTofu](https://opentofu.org/)
* [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live)
## Integration Points
* **Edge Connect SDK**: The provider uses the [Edge Connect SDK](/docs/edgeconnect/edgeconnect-sdk/) under the hood.
* **Terralist**: The provider is published using a [custom instance](https://terralist.garm-provider-test.t09.de/) of [Terralist](https://www.terralist.io/). This [can only](https://edp.buildth.ing/DevFW-CICD/stacks/src/commit/5b438097bbd027f0025d6198c34c22f856392a03/template/stacks/terralist/terralist/values.yaml#L9-L38) be written to with a login via [Forgejo](https://edp.buildth.ing/), but can be read publicly.
### Component Architecture (C4)
<likec4-view view-id="provider" browser="true"></likec4-view>

View file

@ -1,52 +0,0 @@
---
title: Edge Developer Platform
linkTitle: Edge Developer Platform
weight: 10
description: >
A platform to support developers working in the Edge, based around Forgejo
---
## Purpose
The Edge Developer Platform (EDP) is a comprehensive DevOps platform designed to enable developers to build, deploy, and operate cloud-native applications at the edge. It provides an integrated suite of tools and services covering the entire software development lifecycle.
{{< likec4-view view="application-transition" project="architecture" title="EDP Context View: Edge Developer Platform Components and User Interaction" >}}
The magenta **EDP** represents the developer platform: a shared, productized layer that enables modern DevOps by standardizing how applications are described, built, deployed, and observed. In the **inner loop**, developers iterate locally (fast feedback: code → run → test). EDP then connects that work to an **outer loop** where additional roles (review, test, operations, audit/compliance) contribute feedback and controls for production readiness.
In this modern DevOps setup, EDP acts as the hub: it synchronizes with local development and **deploys applications to target clouds** (for example, an EdgeConnect cloud), while providing the operational capabilities needed to run them safely. Agentic AI can support both loops—for example by assisting developers with implementation and testing in the inner loop, and by automating reviews, policy checks, release notes, and deployment verification (including drift detection and remediation) in the outer loop.
## Product Structure
EDP consists of multiple integrated components organized in layers:
### Core Platform Services
The foundation layer provides essential platform capabilities including source code management, CI/CD, and container orchestration.
For documentation, see: [Basic Platform Concepts](./deployment/basics/) and [Forgejo](./forgejo/)
### Developer Experience
Tools and services that developers interact with directly to build, test, and deploy applications.
For documentation, see: [Forgejo](./forgejo/) and [Deployment](./deployment/)
### Infrastructure & Operations
Infrastructure automation, observability, and operational tooling for platform management.
For documentation, see: [Operations](./operations/) and [Infrastructure as Code](./deployment/infrastructure/)
## Getting Started
EDP is available at https://edp.buildth.ing.
EDP includes a Forgejo instance that hosts both public and private repositories containing all EDP components.
To request access and get onboarded, start with the welcome repository:
- https://edp.buildth.ing/edp-team/welcome
Once you have access to the repositories, you can explore the EDP documentation according to the product structure above.

View file

@ -1,509 +0,0 @@
---
title: Deployment
linkTitle: Deployment
weight: 10
description: >
Platform-level component provisioning via Stacks - Orchestrating the platform infrastructure itself
---
## Overview
Platform Orchestration refers to the automation and management of the platform infrastructure itself. This includes the provisioning, configuration, and lifecycle management of all components that make up the Internal Developer Platform (IDP).
In the context of IPCEI-CIS, Platform Orchestration means:
- **Platform Bootstrap**: Initial setup of Kubernetes clusters and core services
- **Platform Services Management**: Deployment and management of ArgoCD, Forgejo, Keycloak, etc.
- **Infrastructure-as-Code**: Declarative management using Terraform and GitOps
- **Multi-Cluster Orchestration**: Coordination across different Kubernetes clusters
- **Platform Stacks**: Reusable bundles of platform components (CNOE concept)
### Target Audience
Platform Orchestration is primarily aimed at:
- **Platform Engineering Teams**: Teams that build and operate the IDP
- **Infrastructure Architects**: Those responsible for the platform architecture
- **SRE Teams**: Teams responsible for reliability and operations
## Key Features
### Declarative Platform Definition
The entire platform is defined declaratively as code:
- **GitOps-First**: Everything is versioned in Git and traceable
- **Reproducibility**: The platform can be rebuilt at any time
- **Environment Parity**: Consistency between Dev, Test, and Production
- **Auditability**: Complete history of all changes
### Self-Bootstrapping
The platform can bootstrap itself:
1. **Initial Bootstrap**: Minimal tool (like `idpbuilder`) starts the platform
2. **Self-Management**: After bootstrap, ArgoCD takes over management
3. **Continuous Reconciliation**: Platform is continuously reconciled with Git state
4. **Self-Healing**: Automatic recovery on deviations
### Stack-based Composition
Platform components are organized as reusable stacks (CNOE concept):
- **Modularity**: Components can be updated individually
- **Reusability**: Stacks can be used across different environments
- **Composability**: Compose complex platforms from simple building blocks
- **Versioning**: Stacks can be versioned and tested
**In IPCEI-CIS**: The stacks concept from CNOE is the core organizational principle for platform components.
### Multi-Cluster Support
Platform Orchestration supports different cluster topologies:
- **Control Plane + Worker Clusters**: Centralized control, distributed workloads
- **Hub-and-Spoke**: One management cluster manages multiple target clusters
- **Federation**: Coordination across multiple independent clusters
## Purpose in EDP
Platform Orchestration is the foundation of the IPCEI-CIS Edge Developer Platform. It enables:
### Foundation for Developer Self-Service
Platform Orchestration ensures all services are available that developers need for self-service:
- **GitOps Engine** (ArgoCD) for continuous deployment
- **Source Control** (Forgejo) for code and configuration management
- **Identity Management** (Keycloak) for authentication and authorization
- **Observability** (Grafana, Prometheus) for monitoring and logging
- **CI/CD** (Forgejo Actions/Pipelines) for automated build and test
### Consistency Across Environments
Through declarative definition, consistency is guaranteed:
- Development, test, and production environments are identically configured
- No "configuration drift" between environments
- Predictable behavior across all stages
### Platform as Code
The platform itself is treated like software:
- **Version Control**: All changes are versioned in Git
- **Code Review**: Platform changes go through review processes
- **Testing**: Platform configurations can be tested
- **Rollback**: Easy rollback on problems
### Reduced Operational Overhead
Automation reduces manual effort:
- No manual installation steps
- Automatic updates and patching
- Self-healing on failures
- Standardized deployment processes
## Repository
**CNOE Reference Implementation**: [cnoe-io/stacks](https://github.com/cnoe-io/stacks)
**CNOE idpbuilder**: [cnoe-io/idpbuilder](https://github.com/cnoe-io/idpbuilder)
**Documentation**: [CNOE.io Documentation](https://cnoe.io/docs/)
## Getting Started
### Prerequisites
- **Docker**: For local Kubernetes clusters (Kind)
- **kubectl**: Kubernetes CLI tool
- **Git**: For repository management
- **idpbuilder**: CNOE bootstrap tool
### Quick Start
Platform Orchestration with CNOE Reference Implementation:
```bash
# 1. Install idpbuilder
curl -fsSL https://cnoe.io/install.sh | bash
# 2. Bootstrap platform
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation
# 3. Wait for the platform to be ready (approx. 10 minutes)
kubectl get applications -A
```
### Verification
Verify the platform is running correctly:
```bash
# Get platform secrets (credentials)
idpbuilder get secrets
# Check all ArgoCD applications
kubectl get applications -n argocd
# Expected: All applications "Synced" and "Healthy"
```
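For a more compact overview, a small `jq` filter over the Application objects shows name, health, and sync state in one table (a convenience sketch; assumes `jq` is installed):
```bash
# One line per application: name, health status, sync status
kubectl get applications -n argocd -o json \
  | jq -r '.items[] | "\(.metadata.name)\t\(.status.health.status)\t\(.status.sync.status)"'
```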
Access URLs (with path-routing):
- **ArgoCD**: `https://cnoe.localtest.me:8443/argocd`
- **Forgejo**: `https://cnoe.localtest.me:8443/gitea`
- **Keycloak**: `https://cnoe.localtest.me:8443/keycloak`
## Usage Examples
### Use Case 1: Platform Bootstrap
Initial bootstrapping of a new platform instance:
```bash
idpbuilder create \
  --use-path-routing \
  --package-dir https://github.com/cnoe-io/stacks//ref-implementation \
  --log-level debug
# Workflow:
# 1. Creates Kind cluster
# 2. Installs ingress-nginx
# 3. Clones and installs ArgoCD
# 4. Installs Forgejo
# 5. Waits for core services
# 6. Creates technical users
# 7. Configures Git repositories
# 8. Installs remaining stacks via ArgoCD
```
After approximately 10 minutes, the platform is fully deployed.
### Use Case 2: Adding New Platform Components
Add new platform components via ArgoCD:
```bash
# Create ArgoCD Application for new component
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.external-secrets.io
    targetRevision: 0.9.9
    chart: external-secrets
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```
### Use Case 3: Platform Updates
Update platform components:
```bash
# 1. Update via Git (GitOps)
cd your-platform-config-repo
git pull
# 2. Update stack version
vim argocd/applications/component.yaml
# Change targetRevision to new version
# 3. Commit and push
git add .
git commit -m "Update component to v1.2.3"
git push
# 4. ArgoCD will automatically sync
# 5. Monitor the update
argocd app sync component --watch
```
## Integration Points
### ArgoCD Integration
- **Bootstrap**: ArgoCD is initially installed via idpbuilder
- **Self-Management**: After bootstrap, ArgoCD manages itself via Application CRD
- **Platform Coordination**: ArgoCD orchestrates all other platform components
- **Health Monitoring**: ArgoCD monitors health status of all platform services
### Forgejo Integration
- **Source of Truth**: Git repositories contain all platform definitions
- **GitOps Workflow**: Changes in Git trigger platform updates
- **Backup**: Git serves as backup of platform configuration
- **Audit Trail**: Git history documents all platform changes
- **CI/CD**: Forgejo Actions can automate platform operations
### Terraform Integration
- **Infrastructure Provisioning**: Terraform provisions cloud resources for the platform
- **State Management**: Terraform state tracks infrastructure
- **Integration**: Terraform can be triggered via Forgejo pipelines (see the sketch below)
- **Multi-Cloud**: Support for multiple cloud providers
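As an illustration of the pipeline-triggered flow, a minimal Forgejo Actions job might run Terraform as sketched here. The workflow path, secret names, and cloud provider are assumptions, and the runner is assumed to be able to fetch upstream actions such as `hashicorp/setup-terraform`:
```yaml
# .forgejo/workflows/terraform.yaml (illustrative sketch)
name: Terraform
on:
  push:
    branches: [ main ]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v3
      - name: Init and apply
        env:
          # Cloud credentials are assumed to be configured as repository secrets
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform init
          terraform apply -auto-approve
```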
## Architecture
### Platform Orchestration Flow
```text
        ┌─────────────────┐
        │   idpbuilder    │  Bootstrap Tool
        │  (Initial Run)  │
        └────────┬────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────┐
│                 Kubernetes Cluster                  │
│                                                     │
│  ┌──────────────┐         ┌──────────────┐          │
│  │   ArgoCD     │────────▶│   Forgejo    │          │
│  │  (GitOps)    │         │  (Git Repo)  │          │
│  └──────┬───────┘         └──────────────┘          │
│         │                                           │
│         │ Monitors & Syncs                          │
│         │                                           │
│         ▼                                           │
│  ┌──────────────────────────────────────┐           │
│  │          Platform Stacks             │           │
│  │                                      │           │
│  │  ┌──────────┐  ┌──────────┐          │           │
│  │  │Forgejo   │  │Keycloak  │          │           │
│  │  └──────────┘  └──────────┘          │           │
│  │  ┌──────────┐  ┌──────────┐          │           │
│  │  │Observ-   │  │Ingress   │          │           │
│  │  │ability   │  │          │          │           │
│  │  └──────────┘  └──────────┘          │           │
│  └──────────────────────────────────────┘           │
└─────────────────────────────────────────────────────┘
```
### Platform Bootstrap Sequence
The idpbuilder executes the following workflow:
1. Create Kind Kubernetes cluster
2. Install ingress-nginx controller
3. Install ArgoCD
4. Install Forgejo Git server
5. Wait for services to be ready
6. Create technical users in Forgejo
7. Create repository for platform state in Forgejo
8. Push platform stacks to Forgejo
9. Create ArgoCD Applications for all stacks
10. ArgoCD takes over continuous synchronization
### Deployment Architecture
The platform is deployed in different namespaces:
- `argocd`: ArgoCD and its components
- `gitea`: Forgejo Git server
- `keycloak`: Identity and access management
- `observability`: Prometheus, Grafana, etc.
- `ingress-nginx`: Ingress controller
## Configuration
### idpbuilder Configuration
Key configuration options for idpbuilder:
```bash
# Path-based routing (recommended for local development)
idpbuilder create --use-path-routing
# Custom package directory
idpbuilder create --package-dir /path/to/custom/packages
# Custom Kind cluster config
idpbuilder create --kind-config custom-kind.yaml
# Enable debug logging
idpbuilder create --log-level debug
```
### ArgoCD Configuration
Important ArgoCD configurations for platform orchestration:
```yaml
# argocd-cm ConfigMap
data:
  # Enable automatic sync
  application.instanceLabelKey: argocd.argoproj.io/instance
  # Repository credentials
  repositories: |
    - url: https://github.com/cnoe-io/stacks
      name: cnoe-stacks
      type: git
  # Resource exclusions
  resource.exclusions: |
    - apiGroups:
        - cilium.io
      kinds:
        - CiliumIdentity
```
### Platform Stack Configuration
Configuration of platform stacks via Kustomize:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-system
resources:
  - argocd-app.yaml
  - forgejo-app.yaml
  - keycloak-app.yaml
patches:
  - target:
      kind: Application
    patch: |-
      - op: add
        path: /spec/syncPolicy
        value:
          automated:
            prune: true
            selfHeal: true
```
## Troubleshooting
### Platform not reachable
**Problem**: After `idpbuilder create`, platform services are not reachable
**Solution**:
```bash
# 1. Check if all pods are running
kubectl get pods -A
# 2. Check ArgoCD application status
kubectl get applications -n argocd
# 3. Check ingress
kubectl get ingress -A
# 4. Verify DNS resolution
nslookup cnoe.localtest.me
# 5. Check idpbuilder logs
idpbuilder get logs
```
### ArgoCD Applications not synchronized
**Problem**: ArgoCD Applications show status "OutOfSync"
**Solution**:
```bash
# 1. Check application details
argocd app get <app-name>
# 2. View sync status
argocd app sync <app-name> --dry-run
# 3. Force sync
argocd app sync <app-name> --force
# 4. Check for errors in ArgoCD logs
kubectl logs -n argocd deployment/argocd-application-controller
```
### Git Repository Connection Issues
**Problem**: ArgoCD cannot access Git repository
**Solution**:
```bash
# 1. Verify repository configuration
argocd repo list
# 2. Test connection
argocd repo get https://your-git-repo
# 3. Check credentials
kubectl get secret -n argocd
# 4. Re-add repository with correct credentials
argocd repo add https://your-git-repo \
  --username <user> \
  --password <token>
```
## Platform Orchestration Best Practices
Based on experience and [CNCF Guidelines](https://tag-app-delivery.cncf.io/whitepapers/platforms/):
1. **Start Simple**: Begin with the CNOE reference stack, extend gradually
2. **Automate Everything**: Manual platform changes are an anti-pattern
3. **Monitor Continuously**: Use observability tools for platform health
4. **Document Well**: Platform documentation is essential for adoption
5. **Version Everything**: All platform components should be versioned
6. **Test Changes**: Platform updates should be tested in non-prod
7. **Plan for Disaster**: Backup and disaster recovery strategies are important
8. **Use Stacks**: Organize platform components as reusable stacks
## Status
**Maturity**: Production (for CNOE Reference Implementation)
**Stability**: Stable
**Support**: Community Support via CNOE Community
## Additional Resources
### CNOE Resources
- [CNOE Official Website](https://cnoe.io/)
- [CNOE GitHub Organization](https://github.com/cnoe-io)
- [CNOE Reference Implementation](https://github.com/cnoe-io/stacks)
- [CNOE Community Slack](https://cloud-native.slack.com/archives/C05TN9WFN5S)
### Platform Engineering
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
- [Platform Engineering Maturity Model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)
- [Team Topologies](https://teamtopologies.com/) - Organizational patterns
### GitOps
- [GitOps Working Group](https://opengitops.dev/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [GitOps Principles](https://opengitops.dev/)
### CNOE Stacks
- [Understanding CNOE Stacks](https://cnoe.io/docs/reference-implementation/stacks/)
- [Creating Custom Stacks](https://cnoe.io/docs/reference-implementation/customization/)


View file

@ -1,776 +0,0 @@
---
title: "Application Orchestration"
linkTitle: "Application Orchestration"
weight: 30
description: >
Application deployment via CI/CD pipelines and GitOps - Orchestrating application deployments
---
## Overview
Application Orchestration deals with the automation of application deployment and lifecycle management. It encompasses the entire workflow from source code to running application in production.
In the context of IPCEI-CIS, Application Orchestration includes:
- **CI/CD Pipelines**: Automated build, test, and deployment pipelines
- **GitOps Deployment**: Declarative application deployment via ArgoCD
- **Progressive Delivery**: Canary deployments, blue-green deployments
- **Application Configuration**: Environment-specific configuration management
- **Golden Paths**: Standardized deployment templates and workflows
### Target Audience
Application Orchestration is primarily for:
- **Application Developers**: Teams developing and deploying applications
- **DevOps Teams**: Teams responsible for deployment automation
- **Product Teams**: Teams responsible for application lifecycle
## Key Features
### Automated CI/CD Pipelines
Forgejo Actions provides GitHub Actions-compatible CI/CD:
- **Build Automation**: Automatic building of container images
- **Test Automation**: Automated unit, integration, and E2E tests
- **Security Scanning**: Vulnerability scanning of dependencies and images
- **Artifact Publishing**: Publishing to container registries
- **Deployment Triggering**: Automatic deployment after successful build
### GitOps-based Deployment
ArgoCD enables declarative application deployment:
- **Declarative Configuration**: Applications defined as Kubernetes manifests
- **Automated Sync**: Automatic synchronization between Git and cluster
- **Rollback Capability**: Easy rollback to previous versions
- **Multi-Environment**: Consistent deployment across Dev/Test/Prod
- **Health Monitoring**: Continuous monitoring of application health
### Progressive Delivery
Support for advanced deployment strategies:
- **Canary Deployments**: Gradual rollout to subset of users
- **Blue-Green Deployments**: Zero-downtime deployments with instant rollback
- **A/B Testing**: Traffic splitting for feature testing
- **Feature Flags**: Dynamic feature enablement without deployment
### Configuration Management
Flexible configuration for different environments:
- **Environment Variables**: Configuration via environment variables
- **ConfigMaps**: Kubernetes-native configuration
- **Secrets Management**: Secure handling of sensitive data
- **External Secrets**: Integration with external secret stores (Vault, etc.)
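As a sketch of the external-secrets integration, an `ExternalSecret` resource maps entries from an external store into a Kubernetes Secret. The store name, secret path, and key names below are assumptions:
```yaml
# Illustrative ExternalSecret (External Secrets Operator)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-application-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # assumed ClusterSecretStore pointing at Vault
    kind: ClusterSecretStore
  target:
    name: my-application-secrets  # Kubernetes Secret created by the operator
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: apps/my-application  # assumed path in the external store
        property: db_password
```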
## Purpose in EDP
Application Orchestration is the core of the developer experience in the IPCEI-CIS Edge Developer Platform.
### Developer Self-Service
Developers can deploy applications independently:
- **Self-Service Deployment**: No dependency on operations team
- **Standardized Workflows**: Clear, documented deployment processes
- **Fast Feedback**: Quick feedback through automated pipelines
- **Environment Parity**: Consistent behavior across all environments
### Quality and Security
Automated checks ensure quality and security:
- **Automated Testing**: All changes are automatically tested
- **Security Scans**: Vulnerability scanning of dependencies and images
- **Policy Enforcement**: Automated policy checks (OPA, Kyverno)
- **Compliance**: Auditability of all deployments
### Efficiency and Productivity
Automation increases team efficiency:
- **Faster Time-to-Market**: New features reach users sooner
- **Reduced Manual Work**: Automation of repetitive tasks
- **Fewer Errors**: Automation removes opportunities for manual mistakes
- **Better Collaboration**: Clear interfaces between Dev and Ops
## Repository
**Forgejo**: [forgejo.org](https://forgejo.org/)
**Forgejo Actions**: [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)
**ArgoCD**: [argoproj.github.io/cd](https://argoproj.github.io/cd/)
## Getting Started
### Prerequisites
- **Forgejo Account**: Access to Forgejo instance
- **Kubernetes Cluster**: Target cluster for deployments
- **ArgoCD Access**: Access to ArgoCD instance
- **Git**: For repository management
### Quick Start: Application Deployment
1. **Create Application Repository**
```bash
# Create new repository in Forgejo
git init my-application
cd my-application
# Add application code and Dockerfile
cat > Dockerfile <<EOF
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
```
2. **Add CI/CD Pipeline**
Create `.forgejo/workflows/build.yaml`:
```yaml
name: Build and Push
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: ${{ github.event_name == 'push' }}
          tags: registry.example.com/my-app:${{ github.sha }}
```
3. **Create Kubernetes Manifests**
Create `k8s/deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
---
apiVersion: v1
kind: Service
metadata:
  name: my-application
spec:
  selector:
    app: my-application
  ports:
    - port: 80
      targetPort: 3000
```
4. **Configure ArgoCD Application**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
5. **Deploy**
```bash
# Commit and push
git add .
git commit -m "Add application and deployment configuration"
git push origin main
# ArgoCD will automatically deploy the application
argocd app sync my-application --watch
```
## Usage Examples
### Use Case 1: Multi-Environment Deployment
Deploy application to multiple environments:
**Repository Structure:**
```text
my-application/
├── .forgejo/
│   └── workflows/
│       └── build.yaml
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patches.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   └── patches.yaml
    └── production/
        ├── kustomization.yaml
        └── patches.yaml
```
**Kustomize Base** (`base/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: my-application
```
**Environment Overlay** (`overlays/production/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
namespace: production
replicas:
  - name: my-application
    count: 5
images:
  - name: my-app
    newTag: v1.2.3
patches:
  - path: patches.yaml
```
**ArgoCD Applications for each environment:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
### Use Case 2: Canary Deployment
Progressive rollout with canary strategy:
**Argo Rollouts Canary:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-application
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: {duration: 5m}
        - setWeight: 30
        - pause: {duration: 5m}
        - setWeight: 60
        - pause: {duration: 5m}
        - setWeight: 100
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:v2.0.0
```
### Use Case 3: Feature Flags
Dynamic feature control without deployment:
**Application Code with Feature Flag:**
```javascript
// Import the Unleash client class (the SDK also offers an initialize() helper)
const { Unleash } = require('unleash-client');
const unleash = new Unleash({
  url: 'http://unleash.platform/api/',
  appName: 'my-application',
  customHeaders: {
    Authorization: process.env.UNLEASH_API_TOKEN
  }
});
// Use feature flag
if (unleash.isEnabled('new-checkout-flow')) {
  // New checkout implementation
  renderNewCheckout();
} else {
  // Old checkout implementation
  renderOldCheckout();
}
```
## Integration Points
### Forgejo Integration
Forgejo serves as central source code management and CI/CD platform:
- **Source Control**: Git repositories for application code
- **CI/CD Pipelines**: Forgejo Actions for automated builds and tests
- **Container Registry**: Built-in container registry for images
- **Webhook Integration**: Triggers for external systems
- **Pull Request Workflows**: Code review and approval processes
### ArgoCD Integration
ArgoCD handles declarative application deployment:
- **GitOps Sync**: Continuous synchronization with Git state
- **Health Monitoring**: Application health status monitoring
- **Rollback Support**: Easy rollback to previous versions
- **Multi-Cluster**: Deployment to multiple clusters
- **UI and CLI**: Web interface and command-line access
### Observability Integration
Integration with monitoring and logging:
- **Metrics**: Prometheus metrics from applications (see the sketch below)
- **Logs**: Centralized log collection via Loki/ELK
- **Tracing**: Distributed tracing with Jaeger/Tempo
- **Alerting**: Alert rules for application issues
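For the metrics piece, a sketch of how an application can be scraped, assuming the Prometheus Operator is part of the observability stack and the application's Service exposes a named `metrics` port:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-application
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-application
  endpoints:
    - port: metrics   # assumes a named "metrics" port on the Service
      interval: 30s
```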
## Architecture
### Application Deployment Flow
{{< likec4-view view="application_deployment_flow" title="Application Deployment Flow" >}}
### CI/CD Pipeline Architecture
Typical Forgejo Actions pipeline stages:
1. **Checkout**: Clone source code
2. **Build**: Compile application and dependencies
3. **Test**: Run unit and integration tests
4. **Security Scan**: Scan dependencies and code for vulnerabilities
5. **Build Image**: Create container image
6. **Push Image**: Push to container registry
7. **Update Manifests**: Update Kubernetes manifests with new image tag
8. **Notify**: Send notifications on success/failure
## Configuration
### Forgejo Actions Configuration
Example for Node.js application:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: registry.example.com
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Run linter
        run: npm run lint

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'

  build-and-push:
    needs: [test, security]
    runs-on: ubuntu-latest
    if: github.event_name == 'push'
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix={{branch}}-
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
### ArgoCD Application Configuration
Complete configuration example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: k8s/overlays/production
    # Kustomize options
    kustomize:
      version: v5.0.0
      images:
        - my-app=registry.example.com/my-app:v1.2.3
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  # Sync policy
  syncPolicy:
    automated:
      prune: true        # Delete resources not in Git
      selfHeal: true     # Override manual changes
      allowEmpty: false  # Don't delete everything on empty repo
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
      - RespectIgnoreDifferences=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  # Ignore differences (avoid sync loops)
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas # Ignore if HPA manages replicas
```
## Troubleshooting
### Pipeline Fails
**Problem**: Forgejo Actions pipeline fails
**Solution**:
```bash
# 1. Check pipeline logs in Forgejo UI
# Navigate to: Repository → Actions → Select failed run
# 2. Check runner status
# In Forgejo: Site Admin → Actions → Runners
# 3. Check runner logs
kubectl logs -n forgejo-runner deployment/act-runner
# 4. Test pipeline locally with act
act -l # List available jobs
act -j build # Run specific job
```
### ArgoCD Application OutOfSync
**Problem**: Application shows "OutOfSync" status
**Solution**:
```bash
# 1. Check differences
argocd app diff my-application
# 2. View sync status details
argocd app get my-application
# 3. Manual sync
argocd app sync my-application
# 4. Hard refresh (ignore cache)
argocd app get my-application --hard-refresh
# 5. Check for ignored differences
argocd app get my-application --show-operation
```
### Application Deployment Fails
**Problem**: Application pod crashes after deployment
**Solution**:
```bash
# 1. Check pod status
kubectl get pods -n production
# 2. View pod logs
kubectl logs -n production deployment/my-application
# 3. Describe pod for events
kubectl describe pod -n production <pod-name>
# 4. Check resource limits
kubectl top pod -n production
# 5. Rollback via ArgoCD
argocd app rollback my-application
```
### Image Pull Errors
**Problem**: Kubernetes cannot pull container image
**Solution**:
```bash
# 1. Verify image exists
docker pull registry.example.com/my-app:v1.2.3
# 2. Check image pull secret
kubectl get secret -n production regcred
# 3. Create image pull secret if missing
kubectl create secret docker-registry regcred \
--docker-server=registry.example.com \
--docker-username=user \
--docker-password=password \
-n production
# 4. Reference secret in deployment
kubectl patch deployment my-application -n production \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
```
## Best Practices
### Golden Path Templates
Provide standardized templates for common use cases:
1. **Web Application Template**: Node.js, Python, Go web services
2. **API Service Template**: RESTful API with OpenAPI
3. **Batch Job Template**: Kubernetes CronJob configurations
4. **Microservice Template**: Service mesh integration
Example repository template structure:
```text
application-template/
├── .forgejo/
│   └── workflows/
│       ├── build.yaml
│       ├── test.yaml
│       └── deploy.yaml
├── k8s/
│   ├── base/
│   └── overlays/
├── src/
│   └── ...
├── Dockerfile
├── README.md
└── .gitignore
```
### Deployment Checklist
Before deploying to production:
- ✅ All tests passing
- ✅ Security scans completed
- ✅ Resource limits defined
- ✅ Health checks configured
- ✅ Monitoring and alerts set up
- ✅ Backup strategy defined
- ✅ Rollback plan documented
- ✅ Team notified about deployment
### Configuration Management
- Use ConfigMaps for non-sensitive configuration
- Use Secrets for sensitive data
- Use External Secrets Operator for vault integration
- Never commit secrets to Git
- Use environment-specific overlays (Kustomize)
- Document all configuration options
## Status
**Maturity**: Production
**Stability**: Stable
**Support**: Internal Platform Team
## Additional Resources
### Forgejo
- [Forgejo Documentation](https://forgejo.org/docs/latest/)
- [Forgejo Actions Guide](https://forgejo.org/docs/latest/user/actions/)
- [Forgejo API Reference](https://forgejo.org/docs/latest/api/)
### ArgoCD
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [ArgoCD Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/)
### GitOps
- [GitOps Principles](https://opengitops.dev/)
- [GitOps Patterns](https://www.gitops.tech/)
- [Kubernetes Deployment Strategies](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
### CI/CD
- [GitHub Actions Documentation](https://docs.github.com/en/actions) (Forgejo Actions compatible)
- [Docker Best Practices](https://docs.docker.com/develop/dev-best-practices/)
- [Container Security Best Practices](https://kubernetes.io/docs/concepts/security/pod-security-standards/)

View file

@ -1,224 +0,0 @@
---
title: Platform Orchestration
linkTitle: Platform Orchestration
weight: 1
description: >
Orchestration in the context of Platform Engineering - coordinating infrastructure, platform, and application delivery.
---
## Overview
Orchestration in the context of Platform Engineering refers to the coordinated automation and management of infrastructure, platform, and application components throughout their entire lifecycle. It is a fundamental concept that bridges the gap between declarative specifications (what should be deployed) and actual execution (how it is deployed).
## The Role of Orchestration in Platform Engineering
Platform Engineering has emerged as a discipline to improve developer experience and reduce cognitive load on development teams ([CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)). Orchestration is the central mechanism that enables this vision:
1. **Automation of Complex Workflows**: Orchestration coordinates multiple steps and dependencies automatically
2. **Consistency and Reproducibility**: Guaranteed, repeatable deployments across different environments
3. **Self-Service Capabilities**: Developers can independently orchestrate resources and deployments
4. **Governance and Compliance**: Centralized control over policies and best practices
### What Does Orchestration Do?
Orchestration systems perform the following tasks:
- **Workflow Coordination**: Coordination of complex, multi-step deployment processes
- **Dependency Management**: Resolution and management of dependencies between components
- **State Management**: Continuous monitoring and reconciliation between desired and actual state
- **Resource Provisioning**: Automatic provisioning of infrastructure and services
- **Configuration Management**: Management of configurations across different environments
- **Health Monitoring**: Monitoring the health of deployed resources
## Three Layers of Orchestration
In modern Platform Engineering, we distinguish three fundamental layers of orchestration:
### [Infrastructure Orchestration](../infrastructure/)
Infrastructure Orchestration deals with the lowest level - the physical and virtual infrastructure layer. This includes:
- Provisioning of compute, network, and storage resources
- Cloud resource management (VMs, networking, storage)
- Infrastructure-as-Code deployment (Terraform, etc.)
- Bare metal and hypervisor management
**Target Audience**: Infrastructure Engineers, Cloud Architects
**Note**: Detailed documentation for Infrastructure Orchestration is maintained separately.
More details: [Infrastructure Orchestration →](../infrastructure/)
### [Platform Orchestration](../otc/)
Platform Orchestration focuses on deploying and managing the platform itself - the services and tools that development teams use. This includes:
- Installation and configuration of Kubernetes clusters
- Deployment of platform services (GitOps tools, Observability, Security)
- Management of platform components via Stacks
- Multi-cluster orchestration
**Target Audience**: Platform Engineering Teams, SRE Teams
**In IPCEI-CIS**: Platform orchestration is realized using the CNOE stack concept with ArgoCD and Forgejo.
More details: [Platform Orchestration →](../otc/)
### [Application Orchestration](application/)
Application Orchestration concentrates on the deployment and lifecycle management of applications running on the platform. This includes:
- Deployment of microservices and containerized applications
- CI/CD pipeline orchestration
- Configuration management and secrets handling
- Application health monitoring and auto-scaling
**Target Audience**: Application Developers, DevOps Engineers
**In IPCEI-CIS**: Application orchestration uses Forgejo pipelines for CI/CD and ArgoCD for GitOps-based deployment.
More details: [Application Orchestration →](application/)
## GitOps as Orchestration Paradigm
A central approach in modern platform orchestration solutions is **GitOps**. GitOps uses Git repositories as the single source of truth for declarative infrastructure and applications:
- **Declarative Approach**: The desired state is defined in Git
- **Automatic Synchronization**: Controllers monitor Git and reconcile the live state
- **Audit Trail**: All changes are traceable in Git history
- **Rollback Capability**: Easy rollback through Git revert
### Continuous Reconciliation
An important concept is **continuous reconciliation**:
1. The orchestrator monitors both the source (Git) and the target (e.g., Kubernetes cluster)
2. Deviations trigger automatic corrective actions
3. Health checks validate that the desired state has been achieved
4. Drift detection warns of unexpected changes
## Orchestration Tools in IPCEI-CIS
Within the IPCEI-CIS platform, we utilize the [CNOE (Cloud Native Operational Excellence)](https://cnoe.io/) stack concept with the following orchestration components:
### ArgoCD
- **Continuous Delivery** for Kubernetes based on GitOps
- Synchronizes Kubernetes manifests from Git repositories
- Supports Helm Charts, Kustomize, Jsonnet, and plain YAML
- Multi-cluster deployment capabilities
- Application Sets for parameterized deployments
**Role in IPCEI-CIS**: ArgoCD is the central component for GitOps-based deployment management. After the initial bootstrapping phase, ArgoCD takes over the technical coordination of all components.
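As a sketch of the Application Sets capability mentioned above, a list generator can stamp out one Application per target cluster; the cluster names, URLs, and repository path are assumptions:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-stacks
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: dev
            url: https://dev-cluster.example.com:6443
          - cluster: prod
            url: https://prod-cluster.example.com:6443
  template:
    metadata:
      name: 'observability-{{cluster}}'   # one Application per element
    spec:
      project: default
      source:
        repoURL: https://forgejo.example.com/platform/stacks
        targetRevision: main
        path: observability
      destination:
        server: '{{url}}'
        namespace: observability
```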
### Forgejo
- **Git Repository Management** and source control
- **CI/CD Pipelines** via Forgejo Actions (GitHub Actions compatible)
- **Developer Portal Capabilities** (initially planned, project discontinued)
- Package registry and artifact management
- Integration with ArgoCD for GitOps workflows
**Role in IPCEI-CIS**: Forgejo serves as the Git repository host and CI/CD engine. It was initially planned as a developer portal (similar to Backstage's role in other stacks) but this aspect was not fully realized before project completion.
**Note on Backstage**: In typical CNOE implementations, Backstage serves as the developer portal providing golden paths through software templates. IPCEI-CIS initially planned to use Forgejo for this purpose but the project concluded before full implementation.
### Terraform
- **Infrastructure-as-Code** provisioning
- Multi-cloud resource management
- State management for infrastructure
- Integration with Forgejo pipelines for automated deployment
**Role in IPCEI-CIS**: Terraform handles infrastructure provisioning at the infrastructure orchestration layer, integrated into automated workflows via Forgejo pipelines.
### CNOE Stacks Concept
- **Modular Platform Components** bundled as stacks
- Reusable, composable platform building blocks
- Version-controlled stack definitions
- GitOps-based stack deployment via ArgoCD
**Role in IPCEI-CIS**: The stacks concept from CNOE provides the structural foundation for platform orchestration, enabling modular deployment and management of platform components.
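A stack in this sense is essentially a versioned directory of ArgoCD application definitions plus the manifests they deploy; an illustrative layout (directory and file names are assumptions):
```text
my-stack/
├── stack.yaml            # ArgoCD Application pointing at this directory
└── manifests/
    ├── deployment.yaml
    └── service.yaml
```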
## The Orchestration Workflow
A typical orchestration workflow in the IPCEI-CIS platform:
{{< likec4-view view="orchestration_workflow" title="Orchestration Workflow" >}}
**Workflow Steps**:
1. **Definition**: Developer defines application/infrastructure as code
2. **Commit**: Changes are committed to Forgejo Git repository
3. **CI Pipeline**: Forgejo Actions build, test, and package the application
4. **Sync**: ArgoCD detects changes and triggers deployment
5. **Provision**: Terraform orchestrates required cloud resources (if needed)
6. **Deploy**: Application is deployed to Kubernetes
7. **Monitor**: Continuous monitoring and health checks
8. **Reconcile**: Automatic correction on drift detection
## Benefits of Coordinated Orchestration
The integration of infrastructure, platform, and application orchestration provides crucial advantages:
- **Reduced Complexity**: Developers don't need to know all infrastructure details
- **Faster Time-to-Market**: Automated workflows accelerate deployments
- **Consistency**: Standardized patterns across all teams
- **Governance**: Central policies are automatically enforced
- **Scalability**: Platform teams can support many application teams
- **Self-Service**: Developers can provision services independently
- **Audit and Compliance**: Complete traceability through Git history
## Best Practices
Successful orchestration follows proven principles ([Platform Engineering Principles](https://platformengineering.org/blog/what-is-platform-engineering)):
1. **Platform as a Product**: Treat the platform as a product with focus on user experience
2. **Self-Service First**: Enable developers to use services autonomously
3. **Documentation**: Comprehensive documentation of golden paths
4. **Feedback Loops**: Continuous improvement through user feedback
5. **Thin Platform Layer**: Use managed services where possible instead of building everything
6. **Progressive Disclosure**: Offer different abstraction levels
7. **Focus on Common Problems**: Solve recurring problems centrally
8. **Treat Glue as Valuable**: Integration of different tools is valuable
9. **Clear Mission**: Define clear goals and responsibilities
## Avoiding Anti-Patterns
Common mistakes in platform orchestration ([How to fail at Platform Engineering](https://www.cncf.io/blog/2024/03/08/how-to-fail-at-platform-engineering/)):
- **Product Misfit**: Building platform without involving developers
- **Overly Complex Design**: Too many features and unnecessary complexity
- **Swiss Army Knife Syndrome**: Trying to solve all problems with one tool
- **Insufficient Documentation**: Missing or outdated documentation
- **Siloed Development**: Platform and development teams working in isolation
- **Stagnant Platform**: Platform not continuously evolved
## Sub-Components
The orchestration component includes the following sub-areas:
- **[Infrastructure Orchestration](infrastructure/)**: Low-level infrastructure deployment and provisioning
- **[Platform Orchestration](platform/)**: Platform-level component deployment via Stacks
- **[Application Orchestration](application/)**: Application-level deployment and CI/CD
- **[Stacks](stacks/)**: Reusable component bundles and compositions
## Further Resources
### Fundamentals
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/) - Comprehensive paper on Platform Engineering
- [Platform Engineering Definition](https://platformengineering.org/blog/what-is-platform-engineering) - What is Platform Engineering?
- [Team Topologies](https://teamtopologies.com/) - Organizational structures for modern teams
### GitOps
- [GitOps Principles](https://opengitops.dev/) - Official GitOps principles
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/) - ArgoCD documentation
### Tools
- [CNOE.io](https://cnoe.io/) - Cloud Native Operational Excellence Framework
- [Forgejo](https://forgejo.org/) - Self-hosted Git service with CI/CD
- [Terraform](https://www.terraform.io/) - Infrastructure as Code tool

View file

@ -1,201 +0,0 @@
---
title: Infrastructure as Code
linkTitle: Infrastructure as Code
weight: 10
description: >
Managing infrastructure through machine-readable definition files rather than manual configuration
---
## Overview
Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through code rather than manual processes. Instead of clicking through web consoles or running one-off commands, infrastructure is defined in version-controlled files that can be executed repeatedly to produce identical environments.
This approach treats infrastructure with the same rigor as application code: it's versioned, reviewed, tested, and deployed through automated pipelines.
## Why Infrastructure as Code?
### The problem with manual infrastructure
Traditional infrastructure management faces several challenges:
- **Inconsistency**: Manual steps vary between operators and environments
- **Undocumented**: Critical knowledge exists only in operators' heads
- **Error-Prone**: Human mistakes during repetitive tasks
- **Slow**: Manual provisioning takes hours or days
- **Untrackable**: No audit trail of what changed, when, or why
- **Irreproducible**: Difficulty recreating environments exactly
### The IaC solution
Infrastructure as Code addresses these challenges by making infrastructure:
**Declarative** - Describe the desired state, not the steps to achieve it. The IaC tool handles the implementation details.
**Versioned** - Every infrastructure change is committed to Git, providing complete history and the ability to rollback.
**Automated** - Infrastructure deploys through pipelines without human intervention, eliminating manual errors.
**Testable** - Infrastructure changes can be validated before production deployment.
**Documented** - The code itself is the documentation, always current and accurate.
**Reproducible** - The same code produces identical infrastructure every time, across all environments.
## Core Concepts
### Declarative vs imperative
**Imperative** approaches specify the exact steps: "Create a server, then install software, then configure networking."
**Declarative** approaches specify the desired outcome: "I need a server with this software and network configuration." The IaC tool determines the necessary steps.
Most modern IaC tools use the declarative approach, making them more maintainable and resilient.
### State Management
IaC tools maintain a "state" - a record of what infrastructure currently exists. When you change your code and re-run the tool, it compares the desired state (your code) with the actual state (what exists) and makes only the necessary changes.
This enables:
- **Drift detection** - Identify manual changes made outside IaC
- **Safe updates** - Modify only what changed
- **Dependency management** - Update resources in the correct order
### Idempotency
Running the same IaC code multiple times produces the same result. If infrastructure already matches the code, the tool makes no changes. This property is called idempotency and is essential for reliable automation.
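For example, with Terraform (output abbreviated; resource counts are illustrative):
```bash
# First run: the tool creates whatever is missing
terraform apply
# ...
# Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

# Second run with unchanged code: nothing to do
terraform apply
# ...
# No changes. Your infrastructure matches the configuration.
```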
## Infrastructure as Code in EDP
The Edge Developer Platform uses IaC extensively:
### Terraform and Terragrunt
[Terraform](terraform/) is our primary IaC tool for provisioning cloud resources. We use [Terragrunt](https://terragrunt.gruntwork.io/) as an orchestration layer to manage multiple Terraform modules and reduce code duplication.
Our implementation includes:
- **[infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue)** - Reusable infrastructure components (modules, units, and stacks)
- **[infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)** - Full environment definitions using catalogue components
### Platform stacks
We organize infrastructure into [stacks](stacks/) - coherent bundles of related components:
- **[Core Stack](stacks/core/)** - Essential platform services
- **[Forgejo Stack](stacks/forgejo/)** - Source control and CI/CD
- **[Observability Stack](stacks/observability/)** - Monitoring and logging
- **[OTC Stack](stacks/otc/)** - Cloud provider resources
- **[Coder Stack](stacks/coder/)** - Development environments
- **[Terralist Stack](stacks/terralist/)** - Terraform registry
Each stack is defined as code, versioned independently, and can be deployed across different environments.
### GitOps integration
Our IaC integrates with GitOps principles:
1. All infrastructure definitions live in Git repositories
2. Changes go through code review processes
3. Automated pipelines deploy infrastructure
4. ArgoCD continuously reconciles Kubernetes resources with Git state
This creates an auditable, automated, and reliable deployment process.
## Benefits realized
### Consistency across environments
Development, testing, and production environments are deployed from the same code. This eliminates the "works on my machine" problem at the infrastructure level.
### Rapid environment provisioning
A complete EDP environment can be provisioned in minutes rather than days. This enables:
- Quick disaster recovery
- Easy creation of test environments
- Fast onboarding for new team members
### Reduced operational risk
Code review catches infrastructure errors before deployment. Automated testing validates changes. Version control enables instant rollback if problems occur.
### Knowledge sharing
Infrastructure configuration is explicit and discoverable in code. New team members can understand the platform by reading the repository rather than shadowing experienced operators.
### Compliance and auditability
Every infrastructure change is tracked in Git history with author, timestamp, and reason. This provides audit trails required for compliance and simplifies troubleshooting.
## Getting started
To work with EDP's Infrastructure as Code:
1. **Understand Terraform basics** - Review [Terraform documentation](https://developer.hashicorp.com/terraform)
2. **Explore infra-catalogue** - Browse [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) to understand available components
3. **Review existing deployments** - Examine [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) to see how components are composed
4. **Follow the Terraform guide** - See [Terraform-based deployment](terraform/) for detailed instructions
## Best Practices
Based on our experience building and operating IaC:
**Version everything** - All infrastructure code belongs in version control. No exceptions.
**Keep it simple** - Start with basic modules. Add abstraction only when duplication becomes painful.
**Test before production** - Deploy infrastructure changes to test environments first.
**Use meaningful commit messages** - Explain why changes were made, not just what changed.
**Review all changes** - Infrastructure changes should go through the same review process as application code.
**Document assumptions** - Use code comments to explain non-obvious decisions.
**Manage secrets securely** - Never commit credentials to version control. Use secret management tools.
**Plan for drift** - Regularly compare actual infrastructure with code state to detect manual changes.
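A simple way to check for drift with Terraform is a regularly scheduled plan; the exit-code convention below is Terraform's, while wiring it into a scheduled pipeline is left as an assumption:
```bash
# Exit code 0: no changes, 1: error, 2: pending changes (drift or new code)
terraform plan -detailed-exitcode -out=drift.tfplan
case $? in
  0) echo "No drift detected" ;;
  2) echo "Drift or pending changes detected" ;;
  *) echo "Plan failed" ;;
esac
```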
## Challenges and limitations
Infrastructure as Code is powerful but has challenges:
**Learning curve** - Teams need to learn IaC tools and practices. Initial productivity may decrease.
**State management complexity** - State files must be stored securely and accessed by multiple team members. State corruption can cause serious issues.
**Provider limitations** - Not all infrastructure can be managed as code. Some resources require manual configuration.
**Breaking changes** - Poorly written code can destroy infrastructure. Safeguards and testing are essential.
**Tool lock-in** - Switching IaC tools (e.g., Terraform to Pulumi) requires rewriting infrastructure code.
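For the state-management challenge above, a minimal sketch of a remote backend with locking, assuming an AWS-style S3/DynamoDB setup; on OTC, an S3-compatible OBS bucket works similarly but needs extra endpoint settings. All names are placeholders:

```bash
# Write a remote-state backend config and migrate local state to it
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # placeholder
    key            = "edp/testing/terraform.tfstate"
    region         = "eu-de"
    dynamodb_table = "example-terraform-locks"  # placeholder, for locking
    encrypt        = true
  }
}
EOF
terraform init -migrate-state
```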
Despite these challenges, the benefits far outweigh the costs for any infrastructure of meaningful complexity.
## Why we invest in IaC
The IPCEI-CIS Edge Developer Platform requires reliable, reproducible infrastructure. Manual provisioning cannot meet these requirements at scale.
By investing in Infrastructure as Code:
- We can deploy complete environments consistently
- Platform engineers can focus on improvement rather than repetitive tasks
- Infrastructure changes are transparent and auditable
- New team members can contribute confidently
- Disaster recovery becomes routine rather than heroic
Our IaC tools ([infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) and [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)) embody these principles and enable the platform's reliability.
## Additional Resources
### Terraform Ecosystem
- [Terraform Documentation](https://developer.hashicorp.com/terraform)
- [OpenTofu](https://opentofu.org/) - Community-driven Terraform fork
- [Terragrunt](https://terragrunt.gruntwork.io/) - Terraform orchestration
### Infrastructure as Code Concepts
- [Infrastructure as Code book](https://www.oreilly.com/library/view/infrastructure-as-code/9781098114664/) by Kief Morris
- [Terraform Best Practices](https://www.terraform-best-practices.com/)
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
### EDP-Specific Resources
- [Terraform-based deployment](terraform/) - Detailed deployment guide
- [Infrastructure Stacks](stacks/) - Reusable component bundles
- [Platform Orchestration](../) - How IaC fits into overall deployment

View file

@ -1,519 +0,0 @@
---
title: "Stacks"
linkTitle: "Stacks"
weight: 40
description: >
Platform-level component provisioning via Stacks
---
## Overview
The `stacks` and `stacks-instances` repositories form the core of a GitOps-based system for provisioning Edge Developer Platforms (EDP). They implement a template-instance pattern that enables the deployment of reusable platform components across different environments. The concept of "stacks" originates from the CNOE.io project (Cloud Native Operational Excellence) and can be traced through the evolutionary development from `edpbuilder` (derived from CNOE.io's `idpbuilder`) to `infra-deploy`.
## Key Features of the Everything-as-Code Stacks Approach
This declarative Stacks provisioning architecture is characterized by the following central properties:
### Complete Code Declaration
**Platform as Code**: All Kubernetes resources, Helm charts, and application manifests are declaratively versioned as YAML files. The entire platform topology is traceable in Git.
**Configuration as Code**: Environment-specific configurations are generated through template hydration, not manually edited. Gomplate transforms generic templates into concrete configurations.
### GitOps-Native Architecture
**Single Source of Truth**: Git is the sole source of truth for the desired state of all infrastructure and platform components.
**Declarative State Management**: ArgoCD continuously synchronizes the actual state with the desired state defined in Git. Deviations are automatically corrected.
**Audit Trail**: Every change to infrastructure or platform is documented through Git commits, with author, timestamp, and change description.
**Pull-based Deployment**: ArgoCD pulls changes from Git, rather than external systems requiring push access to the cluster. This significantly increases security.
### Template-Instance Separation
**DRY Principle (Don't Repeat Yourself)**: Common platform components are defined once as templates and reused for all environments.
**Environment Promotion**: New environments can be quickly created through template hydration. Consistency across environments is guaranteed.
**Centralized Maintainability**: Updates to stack definitions can be made centrally in the `stacks` repository and then selectively rolled out to instances.
**Customization Points**: Despite reuse, environment-specific customizations remain possible through values files and manifest overlays.
### Modular Composition
**Stack-based Architecture**: Platform capabilities are organized into independent, reusable stacks (core, otc, forgejo, observability).
**Selective Deployment**: Through the `STACKS` environment variable, only required components can be deployed selectively.
**Mix-and-Match**: Different stack combinations yield different platform profiles (Development, Production, Observability clusters).
**Pluggable Components**: New stacks can be added without modifying existing ones.
### Environment Agnosticism
**Cloud Provider Abstraction**: Templates are formulated generically. Provider-specific details are introduced through hydration.
**Multi-Cloud Ready**: The architecture supports various cloud providers (currently OTC, historically KIND, extensible to AWS/Azure/GCP).
**Environment Variables as Interface**: All environment-specific aspects are controlled through clearly defined environment variables.
**Portable Definitions**: Stack definitions can be ported between environments and even cloud providers.
### Self-Healing and Drift Detection
**Automated Reconciliation**: ArgoCD detects deviations from the desired state and corrects them automatically.
**Continuous Monitoring**: Permanent monitoring of cluster state compared to Git definition.
**Declarative State Recovery**: After failures or manual changes, the declared state is automatically restored.
**Sync Policies**: Configurable sync strategies (automated, manual, with pruning) per application.
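A hedged example of such a policy on an ArgoCD Application; the repository path and names are illustrative:

```bash
# Automated sync with pruning and self-healing (illustrative names)
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://edp.buildth.ing/DevFW-CICD/stacks-instances
    targetRevision: HEAD
    path: otc/example.t09.de/stacks/core  # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: example-stack
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # revert manual changes to the declared state
EOF
```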
### Secrets Management
**Secrets Outside Git**: Sensitive data is not stored in Git but generated at runtime or injected from secret stores.
**Generated Credentials**: Passwords, tokens, and secrets are generated during deployment and directly created as Kubernetes Secrets.
**Sealed Secrets Ready**: The architecture is compatible with Sealed Secrets or External Secrets Operators for encrypted secret storage in Git.
**Credential Rotation**: Secrets can be regenerated through re-deployment.
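A minimal sketch of this pattern, assuming `openssl` is available; all names are placeholders:

```bash
# Generate a credential at deploy time; it exists only in the cluster,
# never in Git. Re-running with a new value rotates the credential.
DB_PASSWORD="$(openssl rand -base64 32)"
kubectl create secret generic example-db-credentials \
  --namespace example \
  --from-literal=password="$DB_PASSWORD" \
  --dry-run=client -o yaml | kubectl apply -f -
```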
### Observability and Auditability
**Declarative Monitoring**: Observability stacks are part of the Platform-as-Code definition.
**Deployment History**: Complete history of all deployments and changes through Git log.
**ArgoCD UI**: Graphical representation of sync status and application topology.
**Infrastructure Events**: Terraform state changes and Terragrunt outputs document infrastructure changes.
### Idempotence and Reproducibility
**Idempotent Operations**: Repeated execution of the same declaration leads to the same result without side effects.
**Deterministic Builds**: Same input parameters (Git commit + environment variables) produce identical environments.
**Disaster Recovery**: Complete environments can be rebuilt from code without restoring backups.
**Testing in Production-Like Environments**: Development and staging environments are code-identical to production, only with different parameter values.
## Purpose in EDP
A 'stack' is the declarative description of the platform provisioning in an EDP installation.
## Repository
**Code**:
* [Stacks Templates Repo](https://edp.buildth.ing/DevFW-CICD/stacks)
* [Stacks Instances Repo, used for ArgoCD Gitops](https://edp.buildth.ing/DevFW-CICD/stacks-instances)
* [EDP Stacks Deployment mechanism](https://edp.buildth.ing/DevFW/infra-deploy)
**Documentation**:
* [Outdated: the former 'edpbuilder' script, derived from CNOE's 'idpbuilder'](https://edp.buildth.ing/DevFW/edpbuilder)
## The stacks Repository
### Purpose and Structure
The `stacks` repository contains reusable template definitions for platform components. It serves as a central library of building blocks from which Edge Developer Platforms can be composed.
```
stacks/
└── template/
├── edfbuilder.yaml
├── registry/
│ ├── core.yaml
│ ├── otc.yaml
│ ├── forgejo.yaml
│ ├── observability.yaml
│ └── observability-client.yaml
└── stacks/
├── core/
├── otc/
├── forgejo/
├── observability/
└── observability-client/
```
### Components
**edfbuilder.yaml**: The central bootstrap definition. This is an ArgoCD Application that references the `registry` directory and serves as the entry point for the entire platform provisioning.
**registry/**: Contains ArgoCD ApplicationSets that function as a meta-layer. Each file defines a category of stacks (e.g., core, forgejo, observability) and references the corresponding subdirectory in `stacks/`.
**stacks/**: The actual platform components, organized into thematic categories:
- **core**: Fundamental components such as ArgoCD, CloudNative PostgreSQL, Dex (SSO)
- **otc**: Cloud-provider-specific components for Open Telekom Cloud (cert-manager, ingress-nginx, StorageClasses)
- **forgejo**: Git server and CI runners
- **observability**: Central observability components (Grafana, Victoria Metrics Stack)
- **observability-client**: Client-side metrics collection for non-observability clusters
Each stack consists of:
- YAML definitions (primarily ArgoCD Applications)
- `values.yaml` files for Helm charts
- `manifests/` directories for additional Kubernetes resources
### Templating Mechanism
The templates use Gomplate with delimiter syntax `{{{ }}}` for environment variables:
```yaml
repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core"
```
These placeholders are replaced with environment-specific values during the deployment phase.
## The stacks-instances Repository
### Purpose and Structure
The `stacks-instances` repository contains the materialized, environment-specific configurations. While `stacks` provides the blueprints, `stacks-instances` contains the actual deployment definitions for concrete environments.
```
stacks-instances/
└── otc/
├── osctest.t09.de/
│ ├── edfbuilder.yaml
│ ├── registry/
│ └── stacks/
├── backup-test-manu.t09.de/
│ ├── edfbuilder.yaml
│ ├── registry/
│ └── stacks/
└── ...
```
### Organizational Principle
The structure follows the schema `{cloud-provider}/{domain}/`:
- **cloud-provider**: Identifies the cloud environment (e.g., `otc` for Open Telekom Cloud)
- **domain**: The fully qualified domain name of the environment (e.g., `osctest.t09.de`)
Each environment replicates the structure of `stacks/template`, but with resolved template variables and environment-specific customizations.
### Usage by ArgoCD
ArgoCD synchronizes directly from this repository. Applications reference paths such as:
```yaml
source:
path: "otc/osctest.t09.de/stacks/core"
repoURL: "https://edp.buildth.ing/DevFW-CICD/stacks-instances"
targetRevision: HEAD
```
This enables true GitOps: every change to the configurations is traceable through Git commits and automatically synchronized by ArgoCD in the target environment.
## The infra-deploy Repository
### Role in the Overall Architecture
The `infra-deploy` repository is the orchestration layer that coordinates both infrastructure and platform provisioning. It represents the evolution of `edpbuilder`, which was originally derived from the CNOE.io project's `idpbuilder`.
### Two-Phase Provisioning
**Phase 1: Infrastructure Provisioning**
Uses Terragrunt Stacks (experimental feature) to provision cloud resources:
```
infra-deploy/
├── root.hcl
├── non-prod/
│ ├── tenant.hcl
│ ├── dns_zone/
│ │ ├── terragrunt.hcl
│ │ ├── terragrunt.stack.hcl
│ │ └── terragrunt.values.hcl
│ └── testing/
├── prod/
└── templates/
└── forgejo/
├── terragrunt.hcl
└── terragrunt.stack.hcl
```
Terragrunt Stacks provision:
- VPC and network segments
- Kubernetes clusters (CCE on OTC)
- Managed databases (RDS PostgreSQL)
- Load balancers and DNS entries
- Security groups and other cloud resources
**Phase 2: Platform Provisioning**
The script `scripts/edp-install.sh` executes the following steps (a condensed sketch follows the list):
1. **Template Hydration**:
- Checkout of the `stacks` repository
- Execution of Gomplate to resolve template variables
- Generation of environment-specific manifests
2. **Instance Management**:
- Checkout/update of the `stacks-instances` repository
- During CI execution: commit and push of the new instance
3. **Secrets Management**:
- Generation of credentials (database passwords, SSO secrets, API tokens)
- Creation of Kubernetes Secrets
4. **Bootstrap**:
- Helm-based installation of ArgoCD
- Application of `edfbuilder.yaml` or selective registry entries
5. **GitOps Handover**:
- ArgoCD takes over further synchronization from `stacks-instances`
- Continuous monitoring and self-healing
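A condensed, hedged sketch of this flow; paths and variables are simplified, and the authoritative logic lives in `scripts/edp-install.sh`:

```bash
# 1. Template hydration
git clone https://edp.buildth.ing/DevFW-CICD/stacks
gomplate --input-dir stacks/template --output-dir work \
         --left-delim '{{{' --right-delim '}}}'
# 2. Instance management
git clone https://edp.buildth.ing/DevFW-CICD/stacks-instances
mkdir -p "stacks-instances/otc/${DOMAIN}"
cp -r work/* "stacks-instances/otc/${DOMAIN}/"
# 3. Secrets generation (illustrative example)
kubectl create secret generic example-credentials \
  --from-literal=password="$(openssl rand -base64 24)"
# 4. Bootstrap
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo/argo-cd -n argocd --create-namespace
kubectl apply -n argocd -f "stacks-instances/otc/${DOMAIN}/edfbuilder.yaml"
# 5. From here, ArgoCD reconciles continuously from stacks-instances
```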
### GitHub Actions Workflows
The `.github/workflows/` directory contains three central workflows:
**deploy.yaml**: Complete deployment pipeline with the following inputs:
- Cluster environment and tenant (prod/non-prod)
- Node flavor and availability zone
- Stack selection (core, otc, forgejo, observability, etc.)
- Infra-catalogue version
**plan.yaml**: Terraform/Terragrunt plan preview without execution
**destroy.yaml**: Controlled teardown of environments
## Deployment Workflow
The complete provisioning process proceeds as follows:
1. **Initiation**: GitHub Actions workflow is triggered (manually or automatically)
2. **Environment Preparation**:
```bash
export CLUSTER_ENVIRONMENT=qa-stage
cd scripts
./new-otc-env.sh # Creates Terragrunt configuration if new
```
3. **Infrastructure Provisioning**:
```bash
./ensure-cluster.sh otc
# Internally executes:
# - ./ensure-otc-cluster.sh
# - terragrunt stack run apply
```
4. **Platform Provisioning**:
```bash
./edp-install.sh
# Executes:
# - Checkout of stacks
# - Gomplate hydration
# - Checkout/update of stacks-instances
# - Secrets generation
# - ArgoCD installation
# - Bootstrap of stacks
```
5. **ArgoCD Synchronization**: ArgoCD continuously reads from `stacks-instances` and synchronizes the desired state
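Optionally, the GitOps handover can be verified from the command line; the application names below are assumptions based on the bootstrap definitions:

```bash
# List ArgoCD Applications created by the bootstrap
kubectl get applications -n argocd
# Wait for the bootstrap application to become synced and healthy
argocd app wait edfbuilder --sync --health --timeout 600
argocd app get core-reg  # inspect one of the registry applications
```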
## The CNOE.io Stacks Concept
The term "stacks" originates from the Cloud Native Operational Excellence (CNOE.io) project. The core idea is the composition of platform capabilities from modular, reusable building blocks.
### Principles
**Modularity**: Each stack is a self-contained unit with clear dependencies
**Composability**: Stacks can be freely combined to create different platform profiles
**Declarativeness**: All configurations are declarative and GitOps-capable
**Environment-agnostic**: Templates are generic; environment specifics are introduced through hydration
### Stack Selection and Combinations
The environment variable `STACKS` controls which components are deployed:
```bash
# Complete EDP with central observability
STACKS="core,otc,forgejo,observability"
# Application cluster with client-side observability
STACKS="core,otc,forgejo,observability-client"
# Minimal development environment
STACKS="core,forgejo"
```
## Data Flow and Dependencies
```
┌─────────────────┐
│ GitHub Actions │
│ (deploy.yaml) │
└────────┬────────┘
├─> Phase 1: Infrastructure
│ ┌──────────────────┐
│ │ infra-deploy │
│ │ (Terragrunt) │
│ └────────┬─────────┘
│ │
│ v
│ ┌──────────────────┐
│ │ Cloud Provider │
│ │ (OTC) │
│ │ - VPC │
│ │ - K8s Cluster │
│ │ - RDS │
│ └──────────────────┘
└─> Phase 2: Platform
┌──────────────────┐
│ edp-install.sh │
└────────┬─────────┘
├─> Checkout: stacks (Templates)
│ └─> Gomplate Hydration
├─> Checkout/Update: stacks-instances
├─> Secrets Generation
├─> ArgoCD Installation (Helm)
└─> Bootstrap (edfbuilder.yaml)
v
┌────────────────┐
│ ArgoCD │
└────────┬───────┘
└─> Continuous Synchronization
from stacks-instances
v
┌──────────────┐
│ Kubernetes │
│ Cluster │
└──────────────┘
```
## Historical Context: edpbuilder to infra-deploy
The evolution from `edpbuilder` to `infra-deploy` demonstrates the maturation of the architecture:
**edpbuilder** (Origin):
- Directly derived from CNOE.io's `idpbuilder`
- Focus on local KIND clusters
- Manual configuration
- Monolithic structure
**infra-deploy** (Current):
- Production-ready for cloud deployments (OTC)
- Terragrunt-based infrastructure orchestration
- CI/CD integration via GitHub Actions
- Clear separation between infrastructure and platform
- Template-instance separation through stacks/stacks-instances
## Technical Particularities
### Gomplate Templating
Gomplate is used with custom delimiters `{{{ }}}` to avoid conflicts with Helm templating (`{{ }}`):
```bash
gomplate --input-dir="stacks/template" \
--output-dir="work" \
--left-delim "{{{" \
--right-delim "}}}"
```
### Terragrunt Experimental Stacks
The use of Terragrunt Stacks requires the experimental flag:
```bash
export TG_EXPERIMENT_MODE=true
terragrunt stack run apply
```
This enables hierarchical organization of Terraform modules with dependency management.
### ArgoCD ApplicationSets
The registry pattern uses ArgoCD Applications that reference directories:
```yaml
source:
path: "otc/osctest.t09.de/stacks/core"
```
ArgoCD automatically detects all YAML files in the path and synchronizes them as Applications.
## Best Practices and Patterns
**Immutable Infrastructure**: Every environment is fully defined in Git
**Secrets Outside Git**: Sensitive data is generated at runtime or injected from secret stores
**Progressive Rollouts**: New environments start as template instances, then are individually customized
**Version Pinning**: Critical components (Helm charts, Terragrunt modules) are pinned to specific versions
**Namespace Isolation**: Each stack deploys into dedicated namespaces
**Self-Healing**: ArgoCD's automated sync policy enables automatic drift correction
## Usage Examples
### Deployment by Pipeline
The platform deployment is the second part of the EDP installation. First comes the infrastructure setup, which ends with a created Kubernetes cluster; then the platform is provisioned via the defined stacks. Both are runnable through the `deploy` pipeline in `infra-deploy`:
![Deploy workflow inputs](./deploy-action.png)
A successful (green) pipeline looks like this:
![Successful deploy pipeline run](./green-deploy-pipeline.png)
### Local setup with 'kind'
It's also possible to run only the second part, the stacks provisioning. This requires an already running Kubernetes cluster, which can be provided e.g. by a local kind cluster.
Suppose you want to deploy the stacks 'core,observability' on your local machine. Then you can run the following locally:
```bash
# requires kind to be installed
# in /infra-deploy
# provide a kind cluster
kind delete clusters --all
./scripts/ensure-kind-cluster.sh -r
# provide some env vars
export TERRAFORM=/bin/bash
export LOADBALANCER_ID=ABC
export DOMAIN=ABC
export DOMAIN_GITEA=ABC
export OS_ACCESS_KEY=ABC
export OS_SECRET_KEY=ABC
export STACKS=core,observability
# deploy
./scripts/edp-install.sh
```
## Status
**Maturity**: [Production]
## Additional Resources
* [CNOE](https://cnoe.io/docs/overview/cnoe)

View file

@ -1,368 +0,0 @@
---
title: "Coder"
linkTitle: "Coder"
weight: 20
description: >
Cloud Development Environments for secure, scalable remote development
---
## Overview
Coder is an enterprise cloud development environment (CDE) platform that provisions secure, consistent remote development workspaces. As part of the Edge Developer Platform, Coder enables developers to work in standardized, on-demand environments defined as code, moving development workloads from local machines to centrally managed infrastructure.
The Coder stack deploys a self-hosted Coder instance with PostgreSQL database backend, integrated authentication, and edge connectivity capabilities.
## Key Features
* **Infrastructure as Code Workspaces**: Development environments defined using Terraform templates
* **IDE Agnostic**: Supports browser-based IDEs, VS Code, JetBrains IDEs, and other development tools
* **Secure Remote Access**: Workspaces run in controlled cloud or on-premises infrastructure
* **On-Demand Provisioning**: Developers create ephemeral or persistent workspaces as needed
* **AI Agent Support**: Secure execution environment for AI coding assistants
* **Template-Based Deployment**: Reusable workspace templates ensure consistency across teams
## Repository
**Code**: [Coder Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/coder)
**Documentation**:
* [Coder Official Documentation](https://coder.com/docs)
* [Coder GitHub Repository](https://github.com/coder/coder)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* CloudNativePG operator (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Domain name configured via `DOMAIN_GITEA` environment variable
### Quick Start
The Coder stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domain will be `coder.test-me.t09.de`)
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- PostgreSQL database cluster (CloudNativePG)
- Coder application (Helm chart v2.28.3)
- Ingress configuration with TLS
- Database credentials and edge connectivity secrets
### Verification
Verify the Coder deployment:
```bash
# Check ArgoCD application status
kubectl get application coder -n argocd
# Verify Coder pods are running
kubectl get pods -n coder
# Check PostgreSQL cluster status
kubectl get cluster coder-db -n coder
# Verify ingress configuration
kubectl get ingress -n coder
```
Access the Coder web interface at `https://coder.{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Coder stack consists of:
**Coder Control Plane**:
- Web application for workspace management
- API server for workspace provisioning
- Terraform executor for infrastructure operations
**PostgreSQL Database**:
- Single-instance CloudNativePG cluster
- Stores workspace metadata, templates, and user data
- Managed database user with `coder-db-user` secret
- 10Gi persistent storage on `csi-disk` storage class
**Networking**:
- ClusterIP service for internal communication
- Nginx ingress with TLS termination
- cert-manager integration for automatic certificate management
## Configuration
### Environment Variables
The Coder application is configured through environment variables in `values.yaml`:
**Access Configuration**:
- `CODER_ACCESS_URL`: Public URL where Coder is accessible (`https://coder.{DOMAIN_GITEA}`)
**Database Configuration**:
- `CODER_PG_CONNECTION_URL`: PostgreSQL connection string (from `coder-db-user` secret)
**Authentication**:
- `CODER_OAUTH2_GITHUB_DEFAULT_PROVIDER_ENABLE`: GitHub OAuth integration (disabled by default)
**Edge Connectivity**:
- `EDGE_CONNECT_ENDPOINT`: Edge connection endpoint (from `edge-credential` secret)
- `EDGE_CONNECT_USERNAME`: Edge authentication username
- `EDGE_CONNECT_PASSWORD`: Edge authentication password
### Helm Chart Configuration
Key Helm values configured in `stacks/coder/coder/values.yaml`:
```yaml
coder:
env:
- name: CODER_ACCESS_URL
value: "https://coder.{DOMAIN_GITEA}"
- name: CODER_PG_CONNECTION_URL
valueFrom:
secretKeyRef:
name: coder-db-user
key: uri
service:
type: ClusterIP
ingress:
enable: true
className: nginx
host: "coder.{DOMAIN_GITEA}"
annotations:
cert-manager.io/cluster-issuer: main
tls:
enable: true
secretName: coder-tls-secret
```
**Important**: Do not override `CODER_HTTP_ADDRESS`, `CODER_TLS_ENABLE`, `CODER_TLS_CERT_FILE`, or `CODER_TLS_KEY_FILE` as these are managed by the Helm chart.
### PostgreSQL Database Configuration
Defined in `stacks/coder/coder/manifests/postgres.yaml`:
**Cluster Specification**:
- 1 instance (single-node cluster)
- Primary update strategy: unsupervised
- Resource requests/limits: 1 CPU, 1Gi memory
- Storage: 10Gi using `csi-disk` storage class
**Managed Roles**:
- User: `coder`
- Permissions: createdb, login
- Password stored in `coder-db-user` secret
### ArgoCD Application Configuration
**Registry Application** (`template/registry/coder.yaml`):
- Name: `coder-reg`
- Manages the Coder stack directory
- Automated sync with prune and self-heal enabled
**Stack Application** (`template/stacks/coder/coder.yaml`):
- Name: `coder`
- Deploys Coder Helm chart v2.28.3 from `https://helm.coder.com/v2`
- Automated self-healing enabled
- Creates namespace automatically
- References values from `stacks-instances` repository
## Usage Examples
### Creating a Workspace Template
After deployment, create workspace templates using Terraform:
1. **Access Coder Dashboard**
```bash
open https://coder.${DOMAIN_GITEA}
```
2. **Create Template Repository**
Create a Git repository with a Terraform template:
```hcl
# main.tf
terraform {
required_providers {
coder = {
source = "coder/coder"
version = "~> 0.12"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.23"
}
}
}
resource "coder_agent" "main" {
os = "linux"
arch = "amd64"
}
resource "kubernetes_pod" "main" {
metadata {
name = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
namespace = "coder-workspaces"
}
spec {
container {
name = "dev"
image = "codercom/enterprise-base:ubuntu"
command = ["sh", "-c", coder_agent.main.init_script]
}
}
}
```
3. **Push Template to Coder**
```bash
coder templates push kubernetes-dev
```
### Provisioning a Development Workspace
```bash
# Create a new workspace from template
coder create my-workspace --template kubernetes-dev
# Connect via SSH
coder ssh my-workspace
# Open in VS Code
coder open my-workspace --ide vscode
# Stop workspace when not in use
coder stop my-workspace
# Delete workspace
coder delete my-workspace
```
### Integrating with Platform Services
Access EDP platform services from Coder workspaces:
```bash
# Connect to platform PostgreSQL
psql "postgresql://myuser@postgres.core.svc.cluster.local:5432/mydb"
# Access Forgejo
git clone https://forgejo.${DOMAIN_GITEA}/myorg/myrepo.git
# Query platform metrics
curl https://grafana.${DOMAIN}/api/datasources
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration and CloudNativePG operator for database management
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Forgejo Stack**: Workspace templates can integrate with platform Git repositories
* **Observability Stack**: Workspace metrics can be collected by platform observability tools
* **Dex (SSO)**: Can be configured for centralized authentication (requires additional configuration)
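For the Dex integration mentioned above, a hedged sketch of the Coder-side OIDC settings; the client ID follows the platform's `controller-*-dex` naming pattern but is an assumption, and a matching static client must be registered in the Dex values:

```bash
# Assumed OIDC wiring between Coder and Dex (names are placeholders)
export CODER_OIDC_ISSUER_URL="https://dex.${DOMAIN}"
export CODER_OIDC_CLIENT_ID="controller-coder-dex"        # assumption
export CODER_OIDC_CLIENT_SECRET="${CODER_DEX_CLIENT_SECRET}"
export CODER_OIDC_SCOPES="openid,profile,email,groups"
```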
## Troubleshooting
### Coder Pods Not Starting
**Problem**: Coder pods remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check PostgreSQL cluster status:
```bash
kubectl get cluster coder-db -n coder
kubectl describe cluster coder-db -n coder
```
2. Verify database credentials secret:
```bash
kubectl get secret coder-db-user -n coder
kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d
```
3. Check Coder logs:
```bash
kubectl logs -n coder -l app=coder
```
### Cannot Access Coder UI
**Problem**: Coder web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n coder
kubectl describe ingress -n coder
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n coder
kubectl describe certificate coder-tls-secret -n coder
```
3. Verify DNS resolution:
```bash
nslookup coder.${DOMAIN_GITEA}
```
### Database Connection Errors
**Problem**: Coder cannot connect to PostgreSQL database
**Solution**:
1. Verify PostgreSQL cluster health:
```bash
kubectl get pods -n coder -l cnpg.io/cluster=coder-db
kubectl logs -n coder -l cnpg.io/cluster=coder-db
```
2. Check database and user creation:
```bash
kubectl get database coder -n coder
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\l"
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\du"
```
3. Test connection string:
```bash
kubectl exec -it coder-db-1 -n coder -- psql "$(kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d)"
```
### Workspace Provisioning Fails
**Problem**: Workspaces fail to provision from templates
**Solution**:
1. Check Coder provisioner logs:
```bash
kubectl logs -n coder -l app=coder --tail=100
```
2. Verify Kubernetes permissions for workspace creation:
```bash
kubectl auth can-i create pods --as=system:serviceaccount:coder:coder -n coder-workspaces
```
3. Review template Terraform configuration for errors
## Additional Resources
* [Coder Documentation](https://coder.com/docs)
* [Coder Templates Repository](https://github.com/coder/coder)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [Coder Blog: 2025 Launch Week](https://coder.com/blog/launch-week-2025-instant-infrastructure)

View file

@ -1,480 +0,0 @@
---
title: "Core"
linkTitle: "Core"
weight: 10
description: >
Essential infrastructure components for GitOps, database management, and single sign-on
---
## Overview
The Core stack provides foundational infrastructure components required by all other Edge Developer Platform stacks. It establishes the base layer for continuous deployment, database services, and centralized authentication, enabling a secure, scalable platform architecture.
The Core stack deploys ArgoCD for GitOps orchestration, CloudNativePG for PostgreSQL database management, and Dex for OpenID Connect single sign-on capabilities.
## Key Features
* **GitOps Continuous Deployment**: ArgoCD manages declarative infrastructure and application deployments
* **Database Operator**: CloudNativePG provides enterprise-grade PostgreSQL clusters for platform services
* **Single Sign-On**: Dex offers centralized OIDC authentication across platform components
* **Automated Synchronization**: Self-healing deployments with automatic drift correction
* **Role-Based Access Control**: Integrated RBAC for secure platform administration
* **TLS Certificate Management**: Automated certificate provisioning and renewal
## Repository
**Code**: [Core Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/core)
**Documentation**:
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [Dex Documentation](https://dexidp.io/docs/)
## Getting Started
### Prerequisites
* Kubernetes cluster (1.24+)
* kubectl configured with cluster access
* Ingress controller (nginx recommended)
* cert-manager for TLS certificate management
* Domain names configured for platform services
### Quick Start
The Core stack is deployed as the foundation of the EDP installation:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domains will be `argocd.test-me.t09.de` and `dex.test-me.t09.de`)
- Execute workflow
2. **ArgoCD Bootstrap**
The deployment automatically provisions:
- ArgoCD control plane in `argocd` namespace
- CloudNativePG operator in `cloudnative-pg` namespace
- Dex identity provider in `dex` namespace
- Ingress configurations with TLS certificates
- OIDC authentication integration
### Verification
Verify the Core stack deployment:
```bash
# Check ArgoCD installation
kubectl get application -n argocd
kubectl get pods -n argocd
# Verify CloudNativePG operator
kubectl get pods -n cloudnative-pg
kubectl get crd | grep cnpg.io
# Check Dex deployment
kubectl get pods -n dex
kubectl get ingress -n dex
# Verify ingress configurations
kubectl get ingress -n argocd
```
Access ArgoCD at `https://argocd.{DOMAIN}` and authenticate via Dex SSO. Alternatively, use the username `admin` with the initial admin password stored in the Kubernetes secret `argocd/argocd-initial-admin-secret`: `kubectl get secret -n argocd argocd-initial-admin-secret -o json | jq -r .data.password | base64 -d`.
## Architecture
### Component Architecture
The Core stack establishes a three-tier foundation:
**ArgoCD Control Plane**:
- Application management and GitOps reconciliation
- Multi-repository tracking with automated sync
- Resource health monitoring and drift detection
- Integrated RBAC with SSO authentication
**CloudNativePG Operator**:
- PostgreSQL cluster lifecycle management
- Automated backup and recovery
- High availability and failover
- Storage provisioning via CSI drivers
**Dex Identity Provider**:
- OpenID Connect authentication service
- Multiple connector support (Forgejo/Gitea, LDAP, SAML)
- Static client registration for platform services
- Token issuance and validation
### Networking
**Ingress Architecture**:
- nginx ingress controller for external access
- TLS termination with cert-manager integration
- Domain-based routing for platform services
**Kubernetes Services**:
- Internal service communication via ClusterIP
- DNS-based service discovery
- Network policies for security segmentation
## Configuration
### ArgoCD Configuration
Deployed via Helm chart v9.1.5 with custom values in `stacks/core/argocd/values.yaml`:
**OIDC Authentication**:
```yaml
configs:
cm:
url: "https://{DOMAIN_ARGOCD}"
oidc.config: |
name: Forgejo
issuer: https://{DOMAIN_DEX}
clientID: controller-argocd-dex
clientSecret: $dex-controller-argocd-dex:dex-controller-argocd-dex
requestedScopes: ["openid", "profile", "email", "groups"]
```
**RBAC Policy**:
```yaml
policy.csv: |
g, DevFW, role:admin
```
**Server Settings**:
- Insecure mode enabled (TLS handled by ingress)
- Annotation-based resource tracking
- 60-second reconciliation timeout
- Resource exclusions for ProviderConfigUsage and CiliumIdentity
### CloudNativePG Configuration
Deployed via Helm chart v0.26.1 with values in `stacks/core/cloudnative-pg/values.yaml`:
**Operator Settings**:
- Namespace: `cloudnative-pg`
- Automated database cluster provisioning
- Custom resource definitions for Cluster, Database, and Pooler resources
**Storage Configuration**:
- Uses `csi-disk` storage class by default
- PVC provisioning for PostgreSQL data
- Backup storage integration (S3-compatible)
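A hedged sketch of such an S3-compatible backup configuration on a CloudNativePG cluster; the bucket, endpoint, and secret names are placeholders:

```bash
kubectl apply -f - <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db
  namespace: example
spec:
  instances: 1
  storage:
    size: 10Gi
    storageClass: csi-disk
  backup:
    barmanObjectStore:
      destinationPath: s3://example-backups/example-db  # placeholder
      endpointURL: https://obs.eu-de.otc.t-systems.com  # OTC OBS endpoint
      s3Credentials:
        accessKeyId:
          name: obs-credentials  # placeholder secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: obs-credentials
          key: SECRET_ACCESS_KEY
EOF
```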
### Dex Configuration
Deployed via Helm chart v0.23.0 with values in `stacks/core/dex/values.yaml`:
**Issuer Configuration**:
```yaml
config:
issuer: https://{DOMAIN_DEX}
storage:
type: memory # Use persistent storage for production
oauth2:
skipApprovalScreen: true
alwaysShowLoginScreen: false
```
**Forgejo Connector**:
```yaml
connectors:
- type: gitea
id: forgejo
name: Forgejo
config:
clientID: $FORGEJO_CLIENT_ID
clientSecret: $FORGEJO_CLIENT_SECRET
redirectURI: https://{DOMAIN_DEX}/callback
baseURL: https://edp.buildth.ing
orgs:
- name: DevFW
```
**Static OAuth2 Clients**:
- ArgoCD: `controller-argocd-dex`
- Grafana: `controller-grafana-dex`
### Environment Variables
Core stack services use the following environment variables:
**Domain Configuration**:
- `DOMAIN_ARGOCD`: ArgoCD web interface URL
- `DOMAIN_DEX`: Dex authentication service URL
- `DOMAIN_GITEA`: Forgejo/Gitea repository URL
- `DOMAIN_GRAFANA`: Grafana observability dashboard URL
**Repository Configuration**:
- `CLIENT_REPO_ID`: Repository identifier for stack configurations
- `CLIENT_REPO_DOMAIN`: Git repository domain
- `CLIENT_REPO_ORG_NAME`: Organization name for stack instances
## Usage Examples
### Managing Applications with ArgoCD
Access and manage applications through ArgoCD:
```bash
# Login to ArgoCD CLI
argocd login argocd.${DOMAIN} --sso
# List all applications
argocd app list
# Get application status
argocd app get coder
# Sync application manually
argocd app sync coder
# View application logs
argocd app logs coder
# Diff application state
argocd app diff coder
```
### Creating a PostgreSQL Database
Deploy a PostgreSQL cluster using CloudNativePG:
```yaml
# database-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: app-db
namespace: my-app
spec:
instances: 3
storage:
size: 20Gi
storageClass: csi-disk
postgresql:
parameters:
max_connections: "100"
shared_buffers: "256MB"
bootstrap:
initdb:
database: appdb
owner: appuser
```
Apply the configuration:
```bash
kubectl apply -f database-cluster.yaml
# Check cluster status
kubectl get cluster app-db -n my-app
kubectl get pods -n my-app -l cnpg.io/cluster=app-db
# Get connection credentials
kubectl get secret app-db-app -n my-app -o jsonpath='{.data.password}' | base64 -d
```
### Configuring SSO for Applications
Add OAuth2 applications to Dex for SSO integration:
```yaml
# Add to dex values.yaml
staticClients:
- id: my-app-client
redirectURIs:
- 'https://myapp.{DOMAIN}/callback'
name: 'My Application'
secretEnv: MY_APP_CLIENT_SECRET
```
Configure the application to use Dex:
```bash
# Application OIDC configuration
OIDC_ISSUER=https://dex.${DOMAIN}
OIDC_CLIENT_ID=my-app-client
OIDC_CLIENT_SECRET=${MY_APP_CLIENT_SECRET}
OIDC_REDIRECT_URI=https://myapp.${DOMAIN}/callback
```
### Deploying Applications via ArgoCD
Create an ArgoCD Application manifest:
```yaml
# my-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
project: default
source:
repoURL: 'https://github.com/myorg/my-app'
targetRevision: main
path: k8s
destination:
server: 'https://kubernetes.default.svc'
namespace: my-app
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
Push it to [stacks-instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances) to be picked up by ArgoCD.
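For example, a hedged sketch of adding the manifest to an environment instance (the domain path is a placeholder for your environment):

```bash
git clone https://edp.buildth.ing/DevFW-CICD/stacks-instances
cp my-app.yaml stacks-instances/otc/example.t09.de/stacks/core/
cd stacks-instances
git add . && git commit -m "Add my-app Application"
git push
# ArgoCD picks up the new manifest on its next sync of the directory
```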
## Integration Points
* **All Stacks**: Core stack is a prerequisite for all other EDP stacks
* **OTC Stack**: Provides ingress-nginx and cert-manager dependencies
* **Coder Stack**: Uses CloudNativePG for workspace database management
* **Forgejo Stack**: Integrates with Dex for SSO and ArgoCD for deployment
* **Observability Stack**: Uses Dex for Grafana authentication and ArgoCD for deployment
* **Provider Stack**: Deploys Terraform providers via ArgoCD
## Troubleshooting
### ArgoCD Not Accessible
**Problem**: Cannot access ArgoCD web interface
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n argocd
kubectl describe ingress -n argocd
```
2. Check ArgoCD server status:
```bash
kubectl get pods -n argocd
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server
```
3. Verify TLS certificate:
```bash
kubectl get certificate -n argocd
kubectl describe certificate -n argocd
```
4. Test DNS resolution:
```bash
nslookup argocd.${DOMAIN}
```
### Dex Authentication Failing
**Problem**: SSO login fails or redirects incorrectly
**Solution**:
1. Check Dex logs:
```bash
kubectl logs -n dex -l app.kubernetes.io/name=dex
```
2. Verify Forgejo connector configuration:
```bash
kubectl get secret -n dex
kubectl get configmap -n dex dex -o yaml
```
3. Test Dex issuer endpoint:
```bash
curl https://dex.${DOMAIN}/.well-known/openid-configuration
```
4. Verify OAuth2 client credentials match in both Dex and consuming application
### CloudNativePG Operator Not Running
**Problem**: PostgreSQL clusters fail to provision
**Solution**:
1. Check operator status:
```bash
kubectl get pods -n cloudnative-pg
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg
```
2. Verify CRDs are installed:
```bash
kubectl get crd | grep cnpg.io
kubectl describe crd clusters.postgresql.cnpg.io
```
3. Check operator logs for errors:
```bash
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg --tail=100
```
### Application Sync Failures
**Problem**: ArgoCD applications remain out of sync or fail to deploy
**Solution**:
1. Check application status:
```bash
argocd app get <app-name>
kubectl describe application <app-name> -n argocd
```
2. Review sync operation logs:
```bash
argocd app logs <app-name>
```
3. Verify repository access:
```bash
argocd repo list
argocd repo get <repo-url>
```
4. Check for resource conflicts or missing dependencies:
```bash
kubectl get events -n <app-namespace> --sort-by='.lastTimestamp'
```
### Database Connection Issues
**Problem**: Applications cannot connect to CloudNativePG databases
**Solution**:
1. Verify cluster is ready:
```bash
kubectl get cluster <cluster-name> -n <namespace>
kubectl describe cluster <cluster-name> -n <namespace>
```
2. Check database credentials secret:
```bash
kubectl get secret <cluster-name>-app -n <namespace>
kubectl get secret <cluster-name>-app -n <namespace> -o yaml
```
3. Test connection from a pod:
```bash
kubectl run -it --rm psql-test --image=postgres:16 --restart=Never -- \
psql "$(kubectl get secret <cluster-name>-app -n <namespace> -o jsonpath='{.data.uri}' | base64 -d)"
```
4. Review PostgreSQL logs:
```bash
kubectl logs -n <namespace> <cluster-name>-1
```
## Additional Resources
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [CloudNativePG Architecture](https://cloudnative-pg.io/documentation/current/architecture/)
* [Dex Documentation](https://dexidp.io/docs/)
* [Dex Connectors](https://dexidp.io/docs/connectors/)
* [OpenID Connect Specification](https://openid.net/connect/)

Binary file not shown.

Before

Width:  |  Height:  |  Size: 146 KiB

Some files were not shown because too many files have changed in this diff Show more