Compare commits


68 commits
main ... main

SHA1 Message Date
2c981d3ce3 refactor(architecture): Refactor text architecture to likec4 2025-12-23 13:38:07 +01:00
203e45d8b3 fix(docs): update links in orchestration documentation for correct navigation 2025-12-23 13:06:29 +01:00
cebe8d9158 docs(governance): update compliance and audit documentation for clarity and detail 2025-12-19 17:03:19 +01:00
9bcaa73747 Merge remote-tracking branch 'refs/remotes/edp/main' 2025-12-19 16:36:48 +01:00
cfbf23c8a5 feat(governance): refactored into sections, added more content to external stakeholder workshops 2025-12-19 16:36:17 +01:00
885c5c9ac0 Fix broken links throughout docs 2025-12-19 15:54:03 +01:00
e9b4299696 Remove blog pages 2025-12-19 15:53:23 +01:00
145705edf7 docs(index): update 'front door consistency' and remove drafts 2025-12-19 15:46:38 +01:00
eeb623517b docs(governance): Clarify terminology and repository references in governance documentation 2025-12-19 15:24:32 +01:00
67ef9d8c6e docs(architecture): switched text diagram to likec4 2025-12-19 15:13:04 +01:00
a0fb081d80 docs(governance): completely revised governance documentation based on confluence and old edp-doc content analysis 2025-12-19 15:01:21 +01:00
977e9d7e8a docs(getting-started): remove draft 'Getting Started' guide for Edge Developer Platform, reworked Getting Started on chapter page 2025-12-19 13:41:03 +01:00
d106cc2b11 docs(documentation): finished documentation entry point 'Edge Connect Cloud' 2025-12-19 13:16:01 +01:00
840c607d27 docs(documentation): finished documentation entry points 'Documentation' and 'Edge Developer platform' 2025-12-19 12:47:20 +01:00
a915c372db feat(footer): Added commit hash as build number and changed authors in footer 2025-12-19 11:16:10 +01:00
13a303095d docs(navigation): remove blog and old-docs from publishing 2025-12-19 10:53:35 +01:00
d390953891 docs(likec4): Enhance documentation for LikeC4 MCP server and AI agent integration 2025-12-19 10:48:32 +01:00
5f4296204f Merge remote-tracking branch 'refs/remotes/edp/main' 2025-12-19 10:27:39 +01:00
dfedb5431b docs(forgejo): added hint to a possibly broken link 2025-12-19 10:27:09 +01:00
7c2f320fc1 Partially revert 880c0d5e's incomplete additions 2025-12-19 10:25:59 +01:00
ca8931d5f6 fix(likec4): resolved likec4 validation error 2025-12-19 10:05:34 +01:00
97c7d647d1 chore(nodejs): node modules update 2025-12-19 10:04:23 +01:00
5452937473 feat(otc): Added section on managed services 2025-12-18 17:17:37 +01:00
25f228f001 feat(operations): Reworded the operations section 2025-12-18 15:59:14 +01:00
cb7de08c7b feat(iac): Add intro to Infrastructure as Code 2025-12-18 15:47:15 +01:00
610e7d2767 feat(docs): Added some text describing the documentation itself 2025-12-18 15:00:08 +01:00
ad0052c0a7 feat(otc): Added OTC overview and intro to deployments 2025-12-18 14:25:26 +01:00
48a9eed862 feat(actions): add docs for EdgeConnect Actions 2025-12-18 11:51:41 +01:00
41e3306942 feat(docs): Restructure entire documentation 2025-12-18 10:25:07 +01:00
880c0d5ec9 WIP potentially to be dropped 2025-12-18 09:21:05 +01:00
10cce1376a feat(deployment): add ArgoCD deployment stack 2025-12-17 17:10:36 +01:00
288eb7a91c feat(project_level_issues): Summarise issues work and why it was discontinued 2025-12-17 13:56:29 +01:00
927fc778d5 feat(edgeconnect): Add docs for SDK and EdgeConnect client 2025-12-16 16:37:26 +01:00
72a792ccfe docs(operations): add authentication hint for accessing cluster context 2025-12-16 13:52:47 +01:00
9f1206e57a docs: improved image headline 2025-12-16 13:01:35 +01:00
ae95fb86a8 docs: removed internal documentation chapter, added an existing high-level vision chart on the intro page 2025-12-16 12:34:50 +01:00
9c16f17968 docs(operations): added operations chapter 2025-12-16 12:16:35 +01:00
c75aa06d04 chore(resources): updated node modules 2025-12-16 12:13:34 +01:00
d39ffeb08a added physical-envs/docker 2025-12-16 11:41:16 +01:00
bbbb40d178 added terralist stack docs 2025-12-16 11:23:03 +01:00
46a8c9dbb3 added otc stack docs 2025-12-16 11:16:08 +01:00
babd8df7b5 added obs-client stack docs 2025-12-16 11:03:37 +01:00
eb1aaec0bc added observability stacks docs 2025-12-16 10:56:33 +01:00
5be5493015 added forgejo stack 2025-12-16 10:39:28 +01:00
876cfd9ba0 added core stacks docs 2025-12-16 10:25:14 +01:00
c7ec23e9a0 added coder stacks doc 2025-12-16 10:12:40 +01:00
c62e38f824 feat(sdk): complete first draft of EdgeConnect SDK docs 2025-12-08 17:23:44 +01:00
3fff08f9d7 feat(provider): Complete first draft of provider docs with diagram 2025-12-04 13:24:53 +01:00
92a1f4c1c5 added sequence diagram 2025-12-03 11:54:56 +01:00
c345d3b3b5 feat(provider): add partial terraform provider docs 2025-12-03 11:35:27 +01:00
1ab3e6c262 disable local server cache to prevent likec4 file caching 2025-12-03 10:29:45 +01:00
9bc6f6e795 added actions doc 2025-12-02 11:57:24 +01:00
3fceb4a5de docs(terraform): update terraform docs per feedback 2025-12-02 11:31:06 +01:00
fb941c6766 added forgejo runner docs 2025-12-02 11:19:00 +01:00
88d3aee150 introduced runner orchestration doc 2025-12-02 09:15:05 +01:00
5b4fbcbb54 feat(stacks): enhance Stacks documentation, IPCEICIS-6729 2025-11-30 22:56:06 +01:00
ca53ac2250 refactor(forgejo): restructured and distributed the content to governance and GARM. Closes https://jira.telekom-mms.com/browse/IPCEICIS-6731 2025-11-26 23:57:58 +01:00
1853f37f53 docs(components): added review comments on IPCEICIS-6732, terraform 2025-11-26 23:08:46 +01:00
64d7c77b6f docs(ipceicis-trl): added a draft TRL - technology readiness level - estimation to the product tree 2025-11-25 09:42:11 +01:00
753a218d3c Add infra-deploy and infra-catalogue documentation 2025-11-24 09:20:33 +01:00
ac1a2965f2 docs(orchestration): WIP - IPCEICIS-6734 2025-11-24 00:54:31 +01:00
f452a5e663 docs(forgejo): 💬 Added KPI intro 2025-11-21 11:48:24 +01:00
710f9a1dc9 build: Make use of npm lock file and updated lock file 2025-11-19 17:10:27 +01:00
f9eba62e8d fix(build): Added postcss config pointing to local plugins 2025-11-19 17:10:27 +01:00
49a9d1efe7 chore: Added nix flake 2025-11-19 17:10:27 +01:00
ffb9d063a3 docs(forgejo): 💄 Updated docs and added diagrams 2025-11-19 16:50:06 +01:00
1eff967f09 Merge branch 'main' into development 2025-11-18 09:50:57 +01:00
df2a132202 docs(forgejo): 📈 Add KPIs 2025-11-17 17:11:48 +01:00
132 changed files with 17239 additions and 4381 deletions

.claude/CLAUDE.md (new file, +74)

@@ -0,0 +1,74 @@
# Technical Documentation Guidelines
You are an expert technical writer with deep expertise in creating clear, concise, and well-structured documentation. Your goal is to produce documentation that flows naturally while maintaining technical accuracy.
## Core Principles
### 1. Conciseness and Clarity
- Use clear, direct language
- Eliminate unnecessary words and redundancy
- Make every sentence count
- Prefer active voice over passive voice
- Use short paragraphs (3-5 sentences maximum)
### 2. Structure and Organization
- Start with the most important information
- Use logical hierarchies with consistent heading levels
- Group related concepts together
- Provide clear navigation through table of contents when appropriate
- Use lists for sequential steps or related items
### 3. Flow and Readability
- Ensure smooth transitions between sections
- Connect ideas logically
- Build complexity gradually
- Use examples to illustrate concepts
- Maintain consistent terminology throughout
### 4. Technical Accuracy
- Be precise with technical terms
- Include relevant code examples that are tested and functional
- Document edge cases and limitations
- Provide accurate command syntax and parameters
- Link to related documentation when appropriate
## Documentation Structure
### Standard Document Layout
1. **Title** - Clear, descriptive heading
2. **Overview** - Brief introduction (2-3 sentences)
3. **Prerequisites** - What the reader needs to know or have
4. **Main Content** - Organized in logical sections
5. **Examples** - Practical, real-world use cases
6. **Troubleshooting** - Common issues and solutions (when applicable)
7. **Related Resources** - Links to additional documentation
### Code Examples
- Provide complete, runnable examples
- Include comments for complex logic
- Show expected output
- Use consistent formatting and syntax highlighting
### Commands and APIs
- Show full syntax with all parameters
- Indicate required vs optional parameters
- Provide parameter descriptions
- Include return values or output format
## Writing Style
- **Be direct**: "Configure the database" not "You should configure the database"
- **Be specific**: "Set timeout to 30 seconds" not "Set an appropriate timeout"
- **Be consistent**: Use the same terms for the same concepts
- **Be complete**: Don't assume implicit knowledge; explain as needed
## When Uncertain
**If you don't know something or need clarification:**
- Ask specific questions
- Request examples or use cases
- Clarify technical details or edge cases
- Verify terminology and naming conventions
- Confirm target audience and their expected knowledge level
Your expertise is in writing excellent documentation. Use your judgment to create documentation that serves the reader's needs effectively. When in doubt, ask rather than guess.
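As a concrete illustration of the "Code Examples" rules above (complete command, explanatory comment, expected output), here is a generic sketch that is not taken from the platform docs:
```bash
# List the three most recent commits in short form.
git log -3 --oneline
# Expected output (hashes and messages will differ per repository):
# 2c981d3 refactor(architecture): Refactor text architecture to likec4
# 203e45d fix(docs): update links in orchestration documentation
# cebe8d9 docs(governance): update compliance and audit documentation
```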

.envrc.example (new file, +1)

@@ -0,0 +1 @@
use flake
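A plausible way to activate this environment locally, assuming direnv is hooked into the shell and Nix flakes are enabled (the repository gained a flake in commit 49a9d1efe7):
```bash
# One-time setup: copy the example and trust the directory.
# .envrc and .direnv are gitignored (see the .gitignore hunk below).
cp .envrc.example .envrc
direnv allow   # direnv now runs `use flake` on every cd into the repo
```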

View file

@@ -22,11 +22,11 @@ jobs:
set -a
source .env.versions
set +a
echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"
echo "Node: ${NODE_VERSION}"
echo "Go: ${GO_VERSION}"
echo "Hugo: ${HUGO_VERSION}"

View file

@@ -26,11 +26,11 @@ jobs:
set -a
source .env.versions
set +a
echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"
echo "Node: ${NODE_VERSION}"
echo "Go: ${GO_VERSION}"
echo "Hugo: ${HUGO_VERSION}"
@@ -100,7 +100,7 @@ jobs:
run: |
# Find the previous tag
PREVIOUS_TAG=$(git describe --abbrev=0 --tags ${GITHUB_REF}^ 2>/dev/null || echo "")
if [ -z "$PREVIOUS_TAG" ]; then
echo "First release - changelog from the beginning"
CHANGELOG=$(git log --pretty=format:"- %s (%h)" --no-merges)
@@ -108,7 +108,7 @@ jobs:
echo "Changelog since ${PREVIOUS_TAG}"
CHANGELOG=$(git log ${PREVIOUS_TAG}..${GITHUB_REF} --pretty=format:"- %s (%h)" --no-merges)
fi
# Write to the output file (multiline)
{
echo 'changelog<<EOF'
@@ -128,22 +128,22 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
release-notes: |
# Release ${{ steps.version.outputs.version }}
## Docker Images
Multi-platform images (linux/amd64, linux/arm64) are available:
```bash
docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:${{ steps.version.outputs.version }}
docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:latest
```
## Build Versions
- Node.js: ${{ steps.versions.outputs.node_version }}
- Go: ${{ steps.versions.outputs.go_version }}
- Hugo: ${{ steps.versions.outputs.hugo_version }}
## Changes
${{ steps.changelog.outputs.changelog }}
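Read in isolation, the changelog step follows a common pattern: locate the previous tag, then list commits since it. A minimal standalone sketch of that pattern, using HEAD instead of the workflow's ${GITHUB_REF}:
```bash
#!/usr/bin/env bash
# Build a bullet-list changelog between the previous tag and the current commit.
PREVIOUS_TAG=$(git describe --abbrev=0 --tags HEAD^ 2>/dev/null || echo "")
if [ -z "$PREVIOUS_TAG" ]; then
  # No earlier tag found: first release, take the whole history.
  CHANGELOG=$(git log --pretty=format:"- %s (%h)" --no-merges)
else
  CHANGELOG=$(git log "${PREVIOUS_TAG}..HEAD" --pretty=format:"- %s (%h)" --no-merges)
fi
printf '%s\n' "$CHANGELOG"
```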

View file

@@ -1,15 +1,15 @@
name: Hugo Site Tests
on:
- push:
- branches: [ main ]
+ # push:
+ # branches: [ main ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
@@ -38,7 +38,7 @@ jobs:
npm run test:build
npm run test:markdown
npm run test:html
- name: Run link checker
run: htmltest
continue-on-error: true
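To reproduce the link check locally with the same non-blocking behavior as continue-on-error, a sketch assuming htmltest is installed and the npm test scripts above exist:
```bash
# Build the site, then check links without failing the shell session.
npm run test:build
htmltest || echo "htmltest found broken links (non-blocking, as in CI)"
```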

.gitignore (vendored, +4)

@@ -35,3 +35,7 @@ Thumbs.db
npm-debug.log*
yarn-debug.log*
yarn-error.log*
+ ### direnv ###
+ .direnv
+ .envrc

View file

@@ -43,7 +43,7 @@ tasks:
- deps:ensure-npm
- build:generate-info
cmds:
- - "{{.HUGO_CMD}} server"
+ - "{{.HUGO_CMD}} server --noHTTPCache"
clean:
desc: Clean build artifacts
@@ -113,16 +113,16 @@ tasks:
if [ -d "public/docs-old" ]; then mv public/docs-old /tmp/htmltest-backup-$$/; fi
if [ -d "public/blog" ]; then mv public/blog /tmp/htmltest-backup-$$/; fi
if [ -d "public/_print/docs-old" ]; then mv public/_print/docs-old /tmp/htmltest-backup-$$/docs-old-print; fi
# Run htmltest
htmltest || EXIT_CODE=$?
# Restore directories
if [ -d "/tmp/htmltest-backup-$$/docs-old" ]; then mv /tmp/htmltest-backup-$$/docs-old public/; fi
if [ -d "/tmp/htmltest-backup-$$/blog" ]; then mv /tmp/htmltest-backup-$$/blog public/; fi
if [ -d "/tmp/htmltest-backup-$$/docs-old-print" ]; then mv /tmp/htmltest-backup-$$/docs-old-print public/_print/docs-old; fi
rm -rf /tmp/htmltest-backup-$$
# Exit with the original exit code
exit ${EXIT_CODE:-0}
@@ -166,14 +166,14 @@ tasks:
generates:
- node_modules/.package-lock.json
cmds:
- - "{{.NPM_CMD}} install"
+ - "{{.NPM_CMD}} ci"
status:
- test -d node_modules
deps:install:
desc: Install all dependencies
cmds:
- - "{{.NPM_CMD}} install"
+ - "{{.NPM_CMD}} ci"
- "{{.HUGO_CMD}} mod get -u"
- "{{.HUGO_CMD}} mod tidy"

argocd-stack/docs.yaml (new file, +28)

@@ -0,0 +1,28 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: docs
  namespace: argocd
  labels:
    env: prod
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
  destination:
    name: in-cluster
    namespace: docs
  syncOptions:
    - CreateNamespace=true
  sources:
    - repoURL: https://edp.buildth.ing/DevFW-CICD/website-and-documentation
      targetRevision: HEAD
      path: argocd-stack/helm
      helm:
        parameters:
          - name: image.tag
            value: $ARGOCD_APP_REVISION_SHORT
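A sketch of registering and inspecting this Application, assuming kubectl access to the cluster that runs Argo CD and a logged-in argocd CLI:
```bash
kubectl apply -f argocd-stack/docs.yaml   # create or update the Application in the argocd namespace
argocd app get docs                       # inspect sync and health status
argocd app sync docs                      # trigger a sync manually if needed
```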

View file

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View file

@@ -0,0 +1,24 @@
apiVersion: v2
name: helm
description: Deploy documentation to edp.buildth.ing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View file

@@ -0,0 +1,62 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: docs
  name: docs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs
  strategy: {}
  template:
    metadata:
      labels:
        app: docs
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          name: docs
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: docs
spec:
  selector:
    app: docs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docs
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: docs.edp.buildth.ing
      http:
        paths:
          - backend:
              service:
                name: docs
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - docs.edp.buildth.ing
      secretName: docs-edp-buildth-ing-tls

View file

@@ -0,0 +1,4 @@
image:
  repository: edp.buildth.ing/devfw-cicd/website-and-documentation
  tag: "UNKNOWN_TAG"
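To preview the rendered manifests before Argo CD picks them up, a sketch with a placeholder tag standing in for $ARGOCD_APP_REVISION_SHORT:
```bash
# Render the chart locally; Argo CD normally injects the short commit SHA
# through the image.tag Helm parameter (see argocd-stack/docs.yaml above).
helm template docs argocd-stack/helm --set image.tag=abc1234
```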

View file

@@ -16,22 +16,22 @@ Built on open standards and battle-tested technologies.
{{% blocks/section color="dark" type="row" %}}
- {{% blocks/feature icon="fa-solid fa-diagram-project" title="Architecture Documentation" url="/docs/architecture/" %}}
+ {{% blocks/feature icon="fa-solid fa-diagram-project" title="Edge Developer Platform (EDP)" url="/docs/edp/" %}}
- Explore the platform's architecture with interactive C4 diagrams. Understand the system design, components, and deployment topology.
+ Understand EDP as the developer platform hub (Forgejo, CI/CD, deployment, operations) and how it connects inner loop and outer loop workflows.
- **Dive into the architecture →**
+ **Dive into EDP docs →**
{{% /blocks/feature %}}
- {{% blocks/feature icon="fa-solid fa-book-open" title="Technical Writer Guide" url="/docs/documentation/" %}}
+ {{% blocks/feature icon="fa-solid fa-cloud" title="EdgeConnect Cloud" url="/docs/edgeconnect/" %}}
- Learn how to contribute to this documentation. Write content, test locally, and understand the CI/CD pipeline.
+ Learn what EdgeConnect is, how it is consumed via stable entry points (CLI, SDK, Terraform), and how EDP integrates with it as a deployment target.
- **Start documenting →**
+ **Explore EdgeConnect →**
{{% /blocks/feature %}}
- {{% blocks/feature icon="fa-solid fa-archive" title="Legacy Documentation (v1)" url="/docs/v1/" %}}
+ {{% blocks/feature icon="fa-solid fa-scale-balanced" title="Governance" url="/docs/governance/" %}}
- Access the previous version of our documentation including historical project information and early architecture decisions.
+ Read the project history, decision context, and audit-oriented traceability to primary sources and repository artifacts.
- **Browse v1 docs →**
+ **Go to Governance →**
{{% /blocks/feature %}}
{{% /blocks/section %}}
@@ -76,11 +76,11 @@ Access the previous version of our documentation including historical project information
## Get Started
- Whether you're a **platform engineer**, **application developer**, or **technicalWriter**, we have resources for you:
+ Whether you're a **platform engineer**, **application developer**, or **auditor**, we have resources for you:
- * 📖 Read the [Documentation](/docs/) to understand the platform
+ * 📖 Start at [Documentation](/docs/)
- * 🏗️ Explore [Platform Components](/docs/components/) and their usage
+ * 🧭 Read [Edge Developer Platform (EDP)](/docs/edp/)
- * ✍️ Learn [How to Document](/docs/DOCUMENTATION-GUIDE/) and contribute
+ * ☁️ Read [EdgeConnect Cloud](/docs/edgeconnect/)
- * 🔍 Browse [Legacy Documentation](/docs-old/) for historical context
+ * 🧾 Read [Governance](/docs/governance/)
{{% /blocks/section %}}

View file

@@ -1,84 +0,0 @@
# Review
1) 09h35 Marco
business plan
issue: value of software, depreciation
FTE: around 100 overall, 3 full teams of developers
tax discussion
10h04 Discussions
2) 10h10 Julius
3) 10h27 Sebastiano - DevDay until 10h40
make the fonts in votes larger - questions should be readable!
devops is dead .... claim
4) Stephan until 10h55
5) christopher 10h58
6) robert 11:11
* app
* devops-pipelines
* edp deployed in osc
7) michal has nothing to show
8) evgenii wants to finish -- 11:30
9) patrick 11:32
====
project management meeting
workshops, external teams
customer episodes
who-what-where principles
Roles, personas
keep taking the user's perspective, a developer's inner desire, my own expectations of the EDP
(can we pull this off, would I want to work with it)
climb to level 2
hold workshops
bring in senior people
level1: source code structure, building artifacts, revision control, branching model, e.g. pull requests, software tests, local debugging
level2: automation of artifact builds, version management, milestones, tickets, issues, security compliance
level3: deployment to stages, feedback on pipeline behavior
level4: feedback on app behavior (logs, metrics, alerts) + development loop
level5: 3rd level support in production
level1: coding
source code structure, building artifacts, revision control, branching model, e.g. pull requests, software tests, local debugging
level2: reaching the outside world with output
automation of artifact builds, version management, milestones, tickets, issues, security compliance
level3: run the app anywhere
deployment to stages, feedback on pipeline behavior
level4: monitoring the app
feedback on app behavior (logs, metrics, alerts) + development loop
level5: support
3rd level support in production (or any outer stage)
sprint 4
leveraging pillar
own-app pillar
chore pillar

View file

@@ -1,6 +0,0 @@
---
title: important links
weight: 20
---
* Gardener login to Edge and orca cluster: IPCEICIS-6222

View file

@@ -1,40 +0,0 @@
---
title: Architecture session
weight: 20
---
## Platform Generics
* https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms
* https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
* https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas
## reference architecture + Portfolio
* https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures
* https://humanitec.com/reference-architectures
* https://www.youtube.com/watch?v=AimSwK8Mw-U
## Platform Portfolio
### Viktor Farcic
* https://technologyconversations.com/
* https://technologyconversations.com/2024/01/08/the-best-devops-tools-platforms-and-services-in-2024/
### Internal developer platform
* https://internaldeveloperplatform.org/core-components/
### Workflow / CI/CD
* https://cnoe.io/blog/optimizing-data-quality-in-dev-portals

View file

@@ -6,24 +6,12 @@ menu:
weight: 20
---
- {{% alert title="Draft" color="warning" %}}
- **Editorial Status**: This page is currently being developed.
- * **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
- * **Assignee**: [Name or Team]
- * **Status**: Draft
- * **Last Updated**: YYYY-MM-DD
- * **TODO**:
- * [ ] Add detailed component description
- * [ ] Include usage examples and code samples
- * [ ] Add architecture diagrams
- * [ ] Review and finalize content
- {{% /alert %}}
# Edge Developer Platform (EDP) Documentation
Welcome to the EDP documentation. This documentation serves developers, engineers, and auditors who want to understand, use, and audit the Edge Developer Platform.
+ It describes the outcomes and products of the edgeDeveloperFramework (eDF) sub-project within IPCEI-CIS.
## Target Audience
* **Developers & Engineers**: Learn how to use the platform, deploy applications, and integrate services
@@ -32,14 +20,8 @@ Welcome to the EDP documentation. This documentation serves developers, engineers,
## Documentation Structure
- The documentation follows a top-down approach focusing on outcomes and practical usage:
+ The documentation is organized into three core areas:
- * **Platform Overview**: High-level introduction and product structure
+ * **[Edge Developer Platform (EDP)](/docs/edp/)**: The central platform to support developers working at the edge, based around Forgejo
- * **Components**: Individual platform components and their usage
+ * **[EdgeConnect Cloud](/docs/edgeconnect/)**: The sovereign edge cloud context and key deployment target for EDP integrations
- * **Getting Started**: Onboarding and quick start guides
+ * **[Governance](/docs/governance/)**: Project history, decision context, and audit-oriented traceability
- * **Operations**: Deployment, monitoring, and troubleshooting
- * **Governance**: Project history, decisions, and compliance
- ## Purpose
- This documentation describes the outcomes and products of the edgeDeveloperFramework (eDF) project. The EDP is designed as a usable, integrated platform with clear links to repositories and implementation details.

View file

@@ -1,141 +0,0 @@
---
title: "[Component Name]"
linkTitle: "[Short Name]"
weight: 1
description: >
[Brief one-line description of the component]
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### C4 charts
Embed C4 charts this way:
1. add a likec4-view with the name of the view
{{< likec4-view view="components-template-documentation" project="architecture" title="Example Documentation Diagram" >}}
2. create the LikeC4 view somewhere in ```./resources/edp-likec4/views```, the example above is in ```./resources/edp-likec4/views/documentation/components-template-documentation.c4```
3. run ```task likec4:generate``` to create the webcomponent
4. if you are in ```task:serve``` hot-reload mode the view will show up directly
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,39 +0,0 @@
---
title: "Components"
linkTitle: "Components"
weight: 30
description: >
Overview of EDP platform components and their integration.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
This section documents all components of the Edge Developer Platform based on the product structure.
## Component Categories
The EDP consists of the following main component categories:
* **Orchestrator**: Platform and infrastructure orchestration
* **Forgejo & CI/CD**: Source code management and automation
* **Deployments**: Deployment targets and edge connectivity
* **Dev Environments**: Development environment provisioning
* **Physical Environments**: Runtime infrastructure
### Product Component Structure
[TODO] Links
![alt text](website-and-documentation_resources_product-structure.svg)

View file

@@ -1,28 +0,0 @@
---
title: "Deployments"
linkTitle: "Deployments"
weight: 40
description: >
Deployment targets and edge connectivity solutions.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6733](https://jira.telekom-mms.com/browse/IPCEICIS-6733)
* **Assignee**: Patrick
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Deployment components manage connections to various deployment targets including cloud infrastructure and edge devices.
## Components
* **OTC**: Open Telekom Cloud deployment target
* **EdgeConnect**: Secure edge connectivity solution

View file

@@ -1,128 +0,0 @@
---
title: "EdgeConnect"
linkTitle: "EdgeConnect"
weight: 20
description: >
Secure connectivity solution for edge devices and environments
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,128 +0,0 @@
---
title: "EdgeConnect Client"
linkTitle: "EdgeConnect Client"
weight: 30
description: >
Client software for establishing EdgeConnect connections
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,128 +0,0 @@
---
title: "EdgeConnect SDK"
linkTitle: "EdgeConnect SDK"
weight: 10
description: >
Software Development Kit for establishing EdgeConnect connections
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,128 +0,0 @@
---
title: "OTC (Open Telekom Cloud)"
linkTitle: "OTC"
weight: 10
description: >
Open Telekom Cloud deployment and infrastructure target
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6733](https://jira.telekom-mms.com/browse/IPCEICIS-6733)
* **Assignee**: Patrick
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,128 +0,0 @@
---
title: "Development Environments"
linkTitle: "DevEnvironments"
weight: 30
description: >
Development environment provisioning and management
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,27 +0,0 @@
---
title: "Documentation System"
linkTitle: "Documentation System"
weight: 100
description: The developer 'documentation as code' documentation system we use ourselves and offer for each development team to use.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6736](https://jira.telekom-mms.com/browse/IPCEICIS-6736)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
The Orchestration manages platform and infrastructure provisioning, providing the foundation for the EDP deployment model.
## Sub-Components
* **Infrastructure Provisioning**: Low-level infrastructure deployment (infra-deploy, infra-catalogue)
* **Platform Provisioning**: Platform-level component deployment via Stacks

View file

@@ -1,28 +0,0 @@
---
title: "Forgejo"
linkTitle: "Forgejo"
weight: 20
description: >
Self-hosted Git service with project management and CI/CD capabilities.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Forgejo provides source code management, project management, and CI/CD automation for the EDP.
## Sub-Components
* **Project Management**: Issue tracking and project management features
* **Actions**: CI/CD automation (see CI/CD section)

View file

@@ -1,27 +0,0 @@
---
title: "Forgejo Actions"
linkTitle: "Forgejo Actions"
weight: 20
description: Forgejo Actions.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6730](https://jira.telekom-mms.com/browse/IPCEICIS-6730)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Forgejo provides source code management, project management, and CI/CD automation for the EDP.
## Sub-Components
* **Project Management**: Issue tracking and project management features
* **Actions**: CI/CD automation (see CI/CD section)

View file

@@ -1,127 +0,0 @@
---
title: "Forgejo Actions"
linkTitle: "Actions"
weight: 10
description: GitHub Actions-compatible CI/CD automation
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,127 +0,0 @@
---
title: "Runner Orchestration"
linkTitle: "Runner Orchestration"
weight: 30
description: GARM
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,128 +0,0 @@
---
title: "Action Runner"
linkTitle: "Runner"
weight: 20
description: >
Self-hosted runner infrastructure with orchestration capabilities
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -1,66 +0,0 @@
---
title: "Forgejo Integration, Extension, and Community Collaboration"
linkTitle: Forgejo Software Forge
date: "2025-11-17"
description: "Summary of the project's work integrating GARM with Forgejo and contributing key features back to the community."
tags: ["Forgejo", "GARM", "CI/CD", "OSS", "Community", "Project Report"]
categories: ["Workpackage Results"]
weight: 10
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6731](https://jira.telekom-mms.com/browse/IPCEICIS-6731)
* **Assignee**: Daniel
* **Status**: Draft
* **Last Updated**: 2025-11-17
* **TODO**:
* [ ] Add concrete quick start steps
* [ ] Include prerequisites and access information
* [ ] Create first application tutorial
* **Review/Feedback**:
* [ ] Stephan:
* in general:
* [ ] some parts are worth moving to 'Governance'
* [ ] perhaps we should remove the emojis?
* [ ] perhaps we should avoid the impression that the text was copy/pasted from AI
* some details/further ideas:
* [ ] where is it, this Forgejo? Why is it called 'edp.buildth.ing'?
* [ ] what are the components we use - package management, actions, ...
* [ ] Friendly users? organisations? Public/private stuff?
* [ ] App Management discussions (we don't!)?
* [ ] what about code snippets how forgejo is deployed? SSO? user base? Federation options?
* [ ] storages, Redis, Postgres ... deployment options ... helm charts ...
* [ ] Migrations we did, where is the migration code?
* [ ] git POSIX filesystem concurrency discussion, S3 bucket
* [ ] what is our general experience?
* [ ] repository centric domain data model
* [ ] how did we develop? which version did we take first? how did we upgrade?
* [ ] which development flows did we use? which pipelines?
* [ ] provide codeberg links for the PRs
* [ ] provide architecture drawings and repo links for the cache registry thing
* [ ] provide a high-level Actions architecture diagram from the perspective of Forgejo - link to the GARM component here
{{% /alert %}}
## 🧾 Result short description / findings
Here is the management summary of the work package results:
* **📈 Strategic Selection:** We chose **[Forgejo](https://forgejo.org/)** as the project's self-hosted Git service. This decision was based on several key strategic factors:
* **EU-Based & Data Sovereignty:** The project is stewarded by **[Codeberg e.V.](https://docs.codeberg.org/getting-started/what-is-codeberg/)**, a non-profit based in Berlin, Germany. This matters to our funding-agency stakeholders, as it aligns with **GDPR, compliance, and data sovereignty goals**: the project is governed by EU law rather than by a US tech entity.
* **True Open Source (GPL v3+):** Forgejo is a community-driven fork of Gitea, created to *guarantee* it stays 100% free and open-source (FOSS).
* **License Protects Our Contributions:** It uses the **GPL v3+ "copyleft" license**, which fits our collaboration goal well: it legally ensures that the features we contribute back (like GARM support) can **never be locked into a proprietary, closed-source product**. This protects our work and keeps the community open.
* **⚙️ Core Use Case:** Forgejo is used for all project source code **versioning** and as the backbone for our **CI/CD (Continuous Integration/Continuous Deployment)** pipelines.
* **🛠️ Key Extension (GARM Support):** The main technical achievement was integrating **[GARM (GitHub Actions Runner Manager)](https://github.com/cloudbase/garm)**. This was *not* supported by Forgejo out-of-the-box.
* **✨ Required Enhancements:** To make GARM work, our team developed and implemented several critical features:
  * Webhook support for workflow events (to tell runners when to start).
  * Support for ephemeral runners (for secure, clean-slate builds every time); see the workflow sketch after this list.
  * GitHub API-compatible endpoints (to allow the runners to register themselves correctly).
* **💖 Community Contribution:** We contributed all of these features **directly back to the upstream Forgejo community**. Rather than a one-off code dump, we actively collaborated via **issues**, **feature requests**, and **pull requests (PRs) on [codeberg.org](https://codeberg.org/)**.
* **🚀 Bonus Functionality:** We also implemented **artifact caching**. This configures Forgejo to act as a **pull-through proxy** for remote container registries (like Docker Hub), which significantly speeds up build times and saves bandwidth.
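To make the runner integration concrete, here is a minimal, hypothetical Forgejo Actions workflow of the kind these enhancements enable: on push, Forgejo emits a workflow webhook, GARM provisions an ephemeral runner for the requested label, and the runner registers itself via the GitHub-compatible endpoints before being discarded after the job. The label and commands are illustrative assumptions, not project conventions.
```yaml
# Hypothetical workflow file: .forgejo/workflows/ci.yaml
name: ci
on: [push]
jobs:
  build:
    # Label assumed to be served by a GARM runner pool; GARM starts an
    # ephemeral runner for this job and tears it down afterwards.
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      # Placeholder build/test step.
      - run: make test
```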

View file

@ -1,128 +0,0 @@
---
title: "Project Management"
linkTitle: "Forgejo Project Mgmt"
weight: 50
description: >
Project and issue management capabilities within Forgejo
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,28 +0,0 @@
---
title: "Orchestratiion"
linkTitle: "Orchestration"
weight: 10
description: >
Platform and infrastructure orchestration components.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
The Orchestration component manages platform and infrastructure provisioning, providing the foundation for the EDP deployment model.
## Sub-Components
* **Infrastructure Provisioning**: Low-level infrastructure deployment (infra-deploy, infra-catalogue)
* **Platform Provisioning**: Platform-level component deployment via Stacks

View file

@ -1,128 +0,0 @@
---
title: "Application Orchestration"
linkTitle: "Application Orchestration"
weight: 30
description: >
Application-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Infrastructure Orchestration"
linkTitle: "Infrastructure Orchestration"
weight: 10
description: >
Infrastructure deployment and catalog management (infra-deploy, infra-catalogue)
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,127 +0,0 @@
---
title: "Provider"
linkTitle: "Provider"
weight: 20
description: Infrastructure providers we deploy on
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Terrafrom"
linkTitle: "Terraform"
weight: 10
description: >
Infrastructure deployment and catalog management (infra-deploy, infra-catalogue)
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Platform Orchestration"
linkTitle: "Platform Orchestration"
weight: 20
description: >
Platform-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Stacks"
linkTitle: "Stacks"
weight: 40
description: >
Platform-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6729](https://jira.telekom-mms.com/browse/IPCEICIS-6729)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Component 1"
linkTitle: "Component 1"
weight: 20
description: >
Component 1
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TBD]
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Component 2"
linkTitle: "Component 2"
weight: 30
description: >
Component 2
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TBD]
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,16 +0,0 @@
---
title: "Physical Environments"
linkTitle: "Physical Envs"
weight: 60
description: >
Physical runtime environments and infrastructure providers.
---
Physical environment components provide the runtime infrastructure for deploying and running applications.
## Components
* **Docker**: Container runtime
* **Kubernetes**: Container orchestration
* **LXC**: Linux Containers
* **Provider**: Infrastructure provider abstraction

View file

@ -1,128 +0,0 @@
---
title: "Docker"
linkTitle: "Docker"
weight: 10
description: >
Container runtime for running containerized applications
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Kubernetes"
linkTitle: "Kubernetes"
weight: 20
description: >
Container orchestration platform for managing containerized workloads
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "LXC"
linkTitle: "LXC"
weight: 30
description: >
Linux Containers for lightweight system-level virtualization
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@ -1,128 +0,0 @@
---
title: "Infrastructure Provider"
linkTitle: "Provider"
weight: 40
description: >
Infrastructure provider abstraction for managing physical resources
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

File diff suppressed because one or more lines are too long


View file

@ -1,151 +0,0 @@
---
title: "WiP Documentation Guide"
linkTitle: "WiP Doc Guide"
weight: 1
description: Guidelines and templates for creating EDP documentation. This page will be removed in the final documentation.
---
{{% alert title="WiP - Only during creation phase" %}}
This page will be removed in the final documentation.
{{% /alert %}}
## Purpose
This guide helps team members create consistent, high-quality documentation for the Edge Developer Platform.
## Documentation Principles
### 1. Focus on Outcomes
1. Describe how the platform is composed and which Products we deliver
2. If you need inspiration for our EDP product structure, look at the [EDP product structure tree](../components/website-and-documentation_resources_product-structure.svg)
3. Include links to repositories for deeper technical information, and to avoid being too verbose or redundant with existing documentation within the IPCEI-CIS scope or our EDP repositories.
### 2. Write for the Audience
1. **Developers**: How to use the software products
2. **Engineers**: Architecture
3. **Auditors**: Project history, decisions, compliance information
### 3. Keep It Concise
1. Top-down approach: start with overview, drill down as needed
2. Less is more - avoid deep nested structures
3. Avoid emojis
4. **When using AI**: Review the text that you paste, check integration into the rest of the documentation
### 4. Maintain Quality
1. Use present tense ("The system processes..." not "will process")
2. Run `task test:quick` before committing changes
## Documentation Structure
The EDP documentation is organized into five main sections:
### 1. Platform Overview
High-level introduction to EDP, target audience, purpose, and product structure.
**Content focus**: Why EDP exists, who uses it, what it provides
### 2. Getting Started
Onboarding guides and quick start instructions.
**Content focus**: Prerequisites, step-by-step setup, first application deployment
### 3. Components
Detailed documentation for each platform component.
**Content focus**: What each component does, how to use it, integration points
**Template**: Use `components/TEMPLATE.md` as starting point
### 4. Operations
Deployment, monitoring, troubleshooting, and maintenance procedures.
**Content focus**: How to operate the platform, resolve issues, maintain health
### 5. Governance
Project history, architecture decisions, compliance, and audit information.
**Content focus**: Why decisions were made, project evolution, external relations
## Writing Documentation
### Components
#### Using Templates
In the 'Components' section, templates are provided for common documentation types:
* **Component Documentation**: `content/en/docs/components/TEMPLATE.md`
#### Content Structure
Follow this pattern for component documentation:
1. **Overview**: What it is and what it does
2. **Key Features**: Bullet list of main capabilities
3. **Purpose in EDP**: Why it's part of the platform
4. **Getting Started**: Quick start guide
5. **Usage Examples**: Common scenarios
6. **Integration Points**: How it connects to other components
7. **Status**: Current maturity level
8. **Documentation Notes**: Instructions for filling in details (remove when complete)
### Frontmatter
Every markdown file starts with YAML frontmatter according to [Docsy](https://www.docsy.dev/docs/adding-content/content/#page-frontmatter):
```yaml
---
title: "Full Page Title"
linkTitle: "Short Nav Title"
weight: 10
description: >
Brief description for search and previews.
---
```
* **title**: Full page title (appears in page header)
* **linkTitle**: Shorter title for navigation menu
* **weight**: Sort order (lower numbers appear first)
* **description**: Brief summary for SEO and page previews
## Testing Documentation
Before committing changes:
```bash
# Run all tests
task test:quick
# Build site locally
task build
# Preview changes
task serve
```
## Adding New Sections
When adding a new documentation section:
1. Create directory: `content/en/docs/[section-name]/`
2. Create index file: `_index.md` with frontmatter
3. Add weight to control sort order
4. Update navigation in parent `_index.md` if needed
5. Test with `task test`
## Reference
* **Main README**: `/doc/README-technical-writer.md`
* **Component Template**: `/content/en/docs/components/TEMPLATE.md`
* **Hugo Documentation**: <https://gohugo.io/documentation/>
* **Docsy Theme**: <https://www.docsy.dev/docs/>

View file

@ -0,0 +1,46 @@
---
title: EdgeConnect
linkTitle: EdgeConnect Cloud
weight: 20
description: >
Sovereign edge cloud for running applications
---
## Overview
EdgeConnect is a custom cloud provided by the project as a whole. It has several goals, including retaining sovereign control over cloud compute resources and supporting sustainability-aware infrastructure choices.
While EdgeConnect is managed outwith our Edge Developer Platform, we have produced a number of tools to facilitate its use and broaden its applicability. These are an [SDK](/docs/edgeconnect/edgeconnect-sdk/), a command-line [client](/docs/edgeconnect/edgeconnect-client/), a bespoke [provider](/docs/edgeconnect/terraform-provider/) for [Terraform](https://developer.hashicorp.com/terraform), and tailor-made [Forgejo Actions](/docs/edgeconnect/edgeconnect-actions/).
{{< likec4-view view="edgeconnect-context" project="architecture" title="EdgeConnect Context View: Users, Tooling and Control Plane" >}}
The diagram summarizes how EdgeConnect is typically consumed and operated. Developers and automation do not interact with edge clusters directly; instead they use stable entry points (CLI, SDK, Terraform) that talk to the EdgeConnect API.
EdgeConnect itself is shown as a single cloud boundary that contains the control plane (API + controllers) and the managed resource model (e.g., App, AppInstance). Controllers continuously reconcile the desired state expressed via the API and drive deployments into the runtime.
EDP appears here as an external consumer: it can automate provisioning and deployment workflows (for example via Terraform) while EdgeConnect remains a separately managed cloud. This separation clarifies responsibilities: EDP orchestrates delivery processes, EdgeConnect provides the target runtime and lifecycle management.
## Key Features
* Managed by the broader project, not specifically by EDP
* Focus on sovereignty and sustainability
* Utilities such as [CLI](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) encourage widespread platform use
* [EDP](/docs/edp/) products such as [Forgejo](/docs/edp/forgejo/) are hosted on [OTC](/docs/edp/deployment/otc/) rather than EdgeConnect
## Purpose in EDP
EdgeConnect is documented here because it is a key deployment target and integration point for the broader platform. Even though EdgeConnect is operated separately from EDP (and core EDP services are hosted on OTC), EDP tooling and automation frequently need to provision or deploy workloads into EdgeConnect in a consistent, repeatable way.
Working with EdgeConnect also helps ensure that our developer workflows and platform components remain portable and “cloud-ready” beyond a single environment. By integrating with a sovereign system and making sustainability-aware choices visible in practice, we align platform engineering with the project's wider goals and enable closer collaboration with the teams operating the EdgeConnect cloud.
### Access
* [Gardener console access](https://gardener.apps.mg3.mdb.osc.live/namespace/garden-platform/shoots)
  * Choose `Log in with mg3`, then `platform`, before entering credentials set up by the Platform Team.
* [Edge cluster](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Orca cluster](https://hub.apps.orca.platform.mg3.mdb.osc.live/)
### Notes
Documentation for EdgeConnect is provided using other systems, including Confluence.

View file

@ -0,0 +1,286 @@
---
title: Forgejo Actions
linkTitle: Forgejo Actions
weight: 40
description: >
CI/CD actions for automated EdgeConnect deployment and deletion
---
## Overview
The EdgeConnect Actions are custom composite actions for use in [Forgejo](/docs/edp/forgejo/actions/)/[GitHub Actions](https://forgejo.org/docs/latest/user/actions/github-actions/) that automate EdgeConnect application deployments in CI/CD pipelines. They wrap the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) to provide a simple, declarative way to deploy and delete applications without manual CLI installation or configuration.
Two actions are available:
- **edge-connect-deploy-action**: Deploys applications using declarative YAML configuration
- **edge-connect-delete-action**: Deletes applications and their instances from EdgeConnect
## Key Features
* **Zero installation**: Actions automatically download and use the EdgeConnect Client
* **Declarative workflow**: Deploy applications using YAML configuration files
* **CI/CD optimized**: Designed for automated pipelines with auto-approve and dry-run support
* **Version pinning**: Specify exact EdgeConnect Client version for reproducible builds
* **Secrets management**: Credentials passed securely through workflow secrets
* **Compatible with GitHub and Forgejo Actions**: Works in both ecosystems
## Purpose in EDP
CI/CD automation is essential for modern development workflows. While the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) provides powerful deployment capabilities, integrating it into CI/CD pipelines requires downloading binaries, managing credentials, and configuring authentication for each workflow run.
These actions eliminate that boilerplate by:
- Automatically fetching the correct Client version
- Handling authentication setup
- Providing a clean, reusable action interface
- Reducing pipeline configuration to a few lines
This enables teams to focus on application configuration rather than pipeline plumbing, while maintaining the full power of declarative EdgeConnect deployments.
The actions complement the [Terraform provider](/docs/edgeconnect/terraform-provider/) by offering a simpler option for teams already using Forgejo/GitHub Actions who want deployment automation without adopting Terraform.
## Repository
**Deploy Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action
**Delete Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action
**Demo Repository**: https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo
## Getting Started
### Prerequisites
* Forgejo or GitHub repository with Actions enabled
* EdgeConnect access credentials (username and password)
* `EdgeConnectConfig.yaml` file defining your application (see [YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
* For Kubernetes apps: K8s manifest file referenced in the config
* Repository secrets configured with EdgeConnect credentials
### Quick Start
1. Create an `EdgeConnectConfig.yaml` file in your repository defining your application (see [Client documentation](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
2. Add EdgeConnect credentials as repository secrets:
- `EDGEXR_PLATFORM_USERNAME`
- `EDGEXR_PLATFORM_PASSWORD`
3. Create a workflow file (e.g., `.forgejo/workflows/deploy.yaml`) using the action; a minimal sketch follows these steps
4. Commit and push to trigger the workflow
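Putting these steps together, a minimal workflow file could look like the following sketch; the trigger is illustrative, and the fuller pipeline under Usage Examples below shows a build-triggered variant.
```yaml
# Sketch of .forgejo/workflows/deploy.yaml from step 3 above
name: deploy
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Deploy to EdgeConnect
        uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
        with:
          configFile: ./EdgeConnectConfig.yaml
          baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
          username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
          password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```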
### Verification
After the workflow runs successfully:
- Check the workflow logs for deployment status
- Verify resources appear in the [EdgeConnect console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
- Test application endpoints are accessible
## Usage Examples
### Minimal Deploy Action
```yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Minimal Delete Action
```yaml
- name: Delete from EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Complete Workflow Example
A typical deployment workflow that builds, tags, and deploys:
```yaml
name: deploy
on:
workflow_run:
workflows: [build]
types:
- completed
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Update manifest with image tag
run: |
sha="${{ github.sha }}"
shortSha="${sha:0:7}"
echo "Setting image version to: registry.example.com/myapp:${shortSha}"
sed -i "s@###IMAGETAG###@registry.example.com/myapp:${shortSha}@g" ./k8s-deployment.yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Dry-Run Mode
Preview changes without applying them:
```yaml
- name: Preview deployment
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
dryRun: 'true'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Version Pinning
Use a specific EdgeConnect Client version:
```yaml
- name: Deploy with specific version
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
version: 'v2.0.1'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
## Integration Points
* **EdgeConnect Client**: Actions download and execute the Client CLI tool
* **EdgeConnect SDK**: Client uses the SDK for all API interactions
* **Forgejo/GitHub Actions**: Native integration with both action ecosystems
* **EdgeConnect API**: All operations communicate with EdgeConnect platform APIs
* **Container Registries**: Works with any registry for application images
## Configuration
### Action Inputs
Both deploy and delete actions accept the same inputs:
| Input | Required | Default | Description |
|-------|----------|---------|-------------|
| `configFile` | Yes | - | Path to EdgeConnectConfig.yaml file |
| `baseUrl` | Yes | - | EdgeConnect API base URL (e.g., https://hub.apps.edge.platform.mg3.mdb.osc.live) |
| `username` | Yes | - | EdgeConnect username for authentication |
| `password` | Yes | - | EdgeConnect password for authentication |
| `dryRun` | No | `false` | Preview changes without applying (set to `'true'` to enable) |
| `version` | No | `v2.0.1` | EdgeConnect Client version to download and use |
### YAML Configuration File
The `configFile` parameter points to an `EdgeConnectConfig.yaml` that defines your application and deployment targets. See the [EdgeConnect Client YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format) for the complete specification.
Example structure:
```yaml
kind: edgeconnect-deployment
metadata:
name: "my-app"
appVersion: "1.0.0"
organization: "myorg"
spec:
k8sApp:
manifestFile: "./k8s-deployment.yaml"
infraTemplate:
- region: "EU"
cloudletOrg: "TelekomOp"
cloudletName: "Munich"
flavorName: "EU.small"
```
### Secrets Management
Configure repository secrets in Forgejo/GitHub:
1. Navigate to repository Settings → Secrets
2. Add secrets:
- Name: `EDGEXR_PLATFORM_USERNAME`, Value: your EdgeConnect username
- Name: `EDGEXR_PLATFORM_PASSWORD`, Value: your EdgeConnect password
3. Reference in workflows using `${{ secrets.SECRET_NAME }}`
## Troubleshooting
### Action Fails with "Failed to download edge-connect-client"
**Problem**: Action cannot download the Client binary
**Solution**:
- Verify the `version` parameter matches an actual release version
- Ensure the release exists at https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
- Check network connectivity from the runner
- Try using default version by omitting the `version` parameter
### Authentication Errors
**Problem**: "authentication failed" or "unauthorized" errors
**Solution**:
- Verify secrets are correctly configured in repository settings
- Check secret names match exactly (case-sensitive)
- Ensure `baseUrl` is correct for your target environment (Edge vs Orca)
- Confirm credentials work by testing with the [client](../edgeconnect-client/)
### "Configuration validation failed"
**Problem**: YAML configuration file validation errors
**Solution**:
- Verify `configFile` path is correct relative to repository root
- Check YAML syntax is valid (use a YAML validator)
- Ensure all required fields are present (see [Client docs](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
- Verify manifest file paths in the config exist and are correct
### Resources Not Appearing in Console
**Problem**: Action succeeds but resources don't appear in EdgeConnect console
**Solution**:
- Verify you're checking the correct environment (Edge vs Orca)
- Ensure `baseUrl` parameter matches the console you're viewing
- Check organization name in config matches your console access
- Review action logs for any warnings or skipped operations
### Deployment Succeeds but App Doesn't Work
**Problem**: Deployment completes but application is not functioning
**Solution**:
- Check application logs in the EdgeConnect console
- Verify image tags are correct (common issue with placeholder replacement)
- Ensure manifest files reference correct image registry and paths
- Check network configuration allows required outbound connections
- Verify cloudlet has sufficient resources for the specified flavor
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect Client Documentation](/docs/edgeconnect/edgeconnect-client/)
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* [Terraform Provider Documentation](/docs/edgeconnect/terraform-provider/)
* [EdgeConnect Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Demo Repository](https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo)
* [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)

View file

@ -0,0 +1,246 @@
---
title: EdgeConnect Client
linkTitle: Client
weight: 20
description: >
Client software for establishing EdgeConnect connections
---
## Overview
The EdgeConnect Client is a command-line tool for managing EdgeConnect applications and instances. It is built on our Golang [SDK](/docs/edgeconnect/edgeconnect-sdk/) and provides commands to create, destroy, describe, and list various resources.
The tool provides both imperative commands (for direct resource management) and declarative workflows (using YAML configuration files) to deploy applications across multiple edge cloudlets. It supports different EdgeConnect deployment environments through an API version selector.
## Key Features
* **Dual workflow support**: Imperative commands for direct operations, declarative YAML for infrastructure-as-code
* **Multi-cloudlet deployment**: Deploy applications to multiple edge locations from a single configuration
* **Deployment planning**: Preview and approve changes before applying them (dry-run mode)
* **Environment compatibility**: Works with different EdgeConnect deployment environments (configured via `api-version`)
* **CI/CD ready**: Designed for automated deployments with auto-approve and exit codes
## Purpose in EDP
No system can be considered useful unless it is actually used in practice. While the EdgeConnect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential tools for developers working with the platform, many use cases call for interaction that is automated yet simpler than direct API integration.
The EdgeConnect Client bridges the gap between manual console operations and direct API integration, enabling automated deployments in CI/CD pipelines, infrastructure-as-code workflows, and scripted operations while maintaining simplicity and usability.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Releases**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
## Getting Started
### Prerequisites
* Access credentials for the EdgeConnect platform (username and password)
* Knowledge of your target deployment environment (determines `api-version` setting)
* For Kubernetes deployments: K8s manifest files
* For Docker deployments: Docker image reference
### Quick Start
1. Download the Edge Connect Client binary from the Forgejo [releases page](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases) for your platform (Linux, macOS, or Windows)
2. Extract and move to your PATH: `tar -xzf edge-connect-client_*.tar.gz && sudo mv edge-connect /usr/local/bin/`
3. Configure authentication using environment variables or a config file (see Configuration section)
4. Verify installation: `edge-connect --help`
### Verification
Run `edge-connect app list --org <your-org> --region <region>` to verify you can authenticate and communicate with the EdgeConnect API.
## Usage Examples
### Declarative Deployment (Recommended)
Create an `EdgeConnectConfig.yaml` file defining your application and deployment targets, then apply it:
```bash
edge-connect apply -f EdgeConnectConfig.yaml
```
Use `--dry-run` to preview changes without applying them, and `--auto-approve` for automated CI/CD workflows.
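As a minimal sketch of that workflow:
```bash
# Preview the planned changes without applying anything
edge-connect apply -f EdgeConnectConfig.yaml --dry-run

# Apply interactively (prompts for approval of the plan)
edge-connect apply -f EdgeConnectConfig.yaml

# Apply without prompting, e.g. from a CI/CD pipeline
edge-connect apply -f EdgeConnectConfig.yaml --auto-approve
```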
### Imperative Commands
Direct resource management using CLI commands:
```bash
# Create an application
edge-connect app create --org myorg --name myapp --version 1.0.0 --region EU
# Create an instance on a specific cloudlet
edge-connect instance create --org myorg --name myinstance \
--app myapp --version 1.0.0 --region EU \
--cloudlet Munich --cloudlet-org TelekomOp --flavor EU.small
# List resources
edge-connect app list --org myorg --region EU
edge-connect instance list --org myorg --region EU
# Delete resources
edge-connect instance delete --org myorg --name myinstance --region EU \
--cloudlet Munich --cloudlet-org TelekomOp
edge-connect app delete --org myorg --name myapp --version 1.0.0 --region EU
```
## Integration Points
* **EdgeConnect API**: Communicates with EdgeConnect platform APIs for all resource operations
* **EdgeConnect SDK**: Built on top of the Golang SDK, sharing authentication and client implementation
* **CI/CD Pipelines**: Designed for integration with GitLab CI, GitHub Actions, and other automation tools
* **Infrastructure-as-Code**: YAML configuration files enable GitOps workflows
## Configuration
### Global Settings
The client can be configured via config file, environment variables, or command-line flags (in order of precedence: flags > env vars > config file).
**Config File** (`~/.edge-connect.yaml` or use `--config` flag):
```yaml
base_url: "https://hub.apps.edge.platform.mg3.mdb.osc.live"
username: "your-username@example.com"
password: "your-password"
api_version: "v2" # v1 or v2 - identifies deployment environment
```
**Environment Variables**:
- `EDGE_CONNECT_BASE_URL`: API base URL
- `EDGE_CONNECT_USERNAME`: Authentication username
- `EDGE_CONNECT_PASSWORD`: Authentication password
- `EDGE_CONNECT_API_VERSION`: API version selector (v1 or v2, default: v2)
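For example, a shell session or CI job might export the variables once and then drop the corresponding flags (values are placeholders):
```bash
export EDGE_CONNECT_BASE_URL="https://hub.apps.edge.platform.mg3.mdb.osc.live"
export EDGE_CONNECT_USERNAME="your-username@example.com"
export EDGE_CONNECT_PASSWORD="your-password"
export EDGE_CONNECT_API_VERSION="v2"

# Flags still take precedence over the environment:
edge-connect app list --org myorg --region EU --api-version v1
```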
**Global Flags** (available on all commands):
- `--base-url`: API base URL
- `--username`: Authentication username
- `--password`: Authentication password
- `--api-version`: API version selector (v1 or v2) - specifies which deployment environment to target
- `--config`: Path to config file
- `--debug`: Enable debug logging
**Note on API Versions**: The `api-version` setting (v1 or v2) is an internal label used to distinguish between different EdgeConnect deployment environments, not an official API version designation from the platform.
### Commands
**App Management** (`edge-connect app <command>`):
CLI command `app` corresponds to **App** in the platform console.
- `create`: Create app (flags: `--org`, `--name`, `--version`, `--region`)
- `show`: Show app details (flags: same as create)
- `list`: List apps (flags: `--org`, `--region`, optional: `--name`, `--version`)
- `delete`: Delete app (flags: `--org`, `--name`, `--version`, `--region`)
**App Instance Management** (`edge-connect instance <command>`):
CLI command `instance` corresponds to **App Instance** in the platform console.
- `create`: Create app instance (flags: `--org`, `--name`, `--app`, `--version`, `--region`, `--cloudlet`, `--cloudlet-org`, `--flavor`)
- `show`: Show app instance details (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`, `--app-id`)
- `list`: List app instances (flags: same as show, all optional)
- `delete`: Delete app instance (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`)
**Declarative Operations**:
- `apply`: Deploy from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
- `delete`: Delete from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
### YAML Configuration Format
The `EdgeConnectConfig.yaml` file defines apps and their deployment targets:
```yaml
kind: edgeconnect-deployment
metadata:
  name: "my-app"            # App name (required)
  appVersion: "1.0.0"       # App version (required)
  organization: "myorg"     # Organization (required)
spec:
  # Choose ONE: k8sApp OR dockerApp
  k8sApp:
    manifestFile: "./k8s-deployment.yaml"   # Path to K8s manifest
  # OR dockerApp:
  #   image: "registry.example.com/myimage:tag"
  #   manifestFile: "./docker-compose.yaml" # Optional
  # Deployment targets (at least one required)
  infraTemplate:
    - region: "EU"                # Region (required)
      cloudletOrg: "TelekomOp"    # Cloudlet provider (required)
      cloudletName: "Munich"      # Cloudlet name (required)
      flavorName: "EU.small"      # Instance size (required)
    - region: "US"
      cloudletOrg: "TelekomOp"
      cloudletName: "gardener-shepherd-test"
      flavorName: "default"
  # Optional network configuration
  network:
    outboundConnections:
      - protocol: "tcp"           # tcp, udp, or icmp
        portRangeMin: 80
        portRangeMax: 80
        remoteCIDR: "0.0.0.0/0"
      - protocol: "tcp"
        portRangeMin: 443
        portRangeMax: 443
        remoteCIDR: "0.0.0.0/0"
  # Optional deployment strategy (default: recreate)
  deploymentStrategy: "recreate"  # recreate, blue-green, or rolling
```
**Key Points**:
- Manifest file paths are relative to the config file location
- Multiple `infraTemplate` entries deploy to multiple cloudlets simultaneously
- Network configuration is optional; outbound connections default to platform settings
- Deployment strategy currently only supports "recreate" (others planned)
## Troubleshooting
### Authentication Failures
**Problem**: Errors like "authentication failed" or "unauthorized"
**Solution**:
- Verify credentials are correct in config file or environment variables
- Ensure `base_url` includes the scheme (https://) and has no trailing path
- Check that you're connecting to the correct cloud instance (Edge or Orca)
- Ensure the correct `api-version` is set for your deployment environment
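A read-only command with debug logging enabled is a low-risk way to see which URL and credentials are actually being used (values are placeholders):
```bash
edge-connect app list --org myorg --region EU --debug
```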
### "Configuration validation failed" Errors
**Problem**: YAML configuration file validation errors
**Solution**:
- Check that all required fields are present (name, appVersion, organization)
- Ensure you have exactly one of `k8sApp` or `dockerApp` (not both, not neither)
- Verify manifest file paths exist relative to the config file location
- Check for leading/trailing whitespace in string values
- Ensure at least one `infraTemplate` entry is defined
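A dry run should surface most of these validation errors without touching any resources:
```bash
edge-connect apply -f EdgeConnectConfig.yaml --dry-run
```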
### Wrong API Version or Cloud Instance
**Problem**: Commands work but resources don't appear in the console, or vice versa
**Solution**: Verify both the `base_url` and `api-version` match your target environment. There are two cloud instances (Edge and Orca) with different URLs and API versions. Check with your platform administrator for the correct configuration.
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* **Edge Cloud**: [Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.edge.platform.mg3.mdb.osc.live/)
* **Orca Cloud**: [Console](https://hub.apps.orca.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.orca.platform.mg3.mdb.osc.live/)
* [Source Code Repository](https://edp.buildth.ing/DevFW-CICD/edge-connect-client)


@ -0,0 +1,70 @@
---
title: EdgeConnect SDK
linkTitle: SDK
weight: 10
description: >
  Software Development Kit for interacting with EdgeConnect
---
## Overview
The EdgeConnect SDK is a Go library which provides a simple method for interacting with Edge Connect within programs. It is designed to be used by other tools, such as the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) or [Terraform provider](/docs/edgeconnect/terraform-provider/).
## Key Features
* Allows querying endpoints without the need to manage API calls and responses directly
* Wraps the existing [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live/)
* Supports multiple iterations of the API (internally labelled v1 and v2, since the API itself is unversioned)
## Purpose in EDP
No system can be considered useful unless it is actually, in practice, used. While the Edge Connect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential tools to allow the platform to be used by developers, there are numerous use cases for automated interaction that is simpler than calling the API directly. These include a [command-line tool](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/).
While each such tool could simply wrap existing endpoints independently, this is generally too low-level for sustainable development: it would involve extensive boilerplate code in each package, and small changes to API endpoints or error handling would require constant rework.
To avoid this, the Edge Connect SDK aims to provide a common library for interacting with EdgeConnect, abstracting HTTP requests and authentication procedures while still providing direct access to the available endpoints.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Documentation**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk
## Getting Started
### Prerequisites
* Golang
* Edge Connect credentials
### Quick Start
1. Simply [import](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#installation) the SDK to your project
2. [Initialise and configure](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#configuration-options) a client with your credentials
3. [Build](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) your code around the existing endpoints
### Verification
To verify the SDK is working, initialise a client with your credentials and call a read-only endpoint (for example, listing apps); a successful response confirms authentication and connectivity with the EdgeConnect API.
## Usage Examples
See [README](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) for simple code examples, or repositories for [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) for full projects relying on it.
## Troubleshooting
### Varying code versions
**Problem**: The Edge Connect API does not (at the time of writing) use semantic versioning, but it does have different iterations which behave differently. The SDK therefore provides two libraries, labelled [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect/v2), which correspond to API definitions stored as [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v1.json) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v2.json).
**Solution**: If you receive errors when using the SDK, consider changing the version you import:
```go
import (
    v1 "edp.buildth.ing/DevFW-CICD/edge-connect-client/sdk/edgeconnect"
    v2 "edp.buildth.ing/DevFW-CICD/edge-connect-client/v2/sdk/edgeconnect/v2"
)
```
## Status
**Maturity**: Beta


@ -0,0 +1,80 @@
---
title: Terraform provider for Edge cloud
linkTitle: Terraform provider
weight: 30
description: Custom Terraform provider for orchestrating Edge deployments
---
## Overview
This work-in-progress Terraform provider for Edge cloud allows orchestration of selected resources using flexible, concise [HCL](https://developer.hashicorp.com/terraform/language). Deployments to Edge Cloud thus use a familiar format, abstracting away specific endpoints and authentication details, and allowing seamless combination of Edge resources with others: on OTC, other clouds, or local utilities.
## Key Features
* Interact with Apps and AppInstances using the widely-used Terraform framework
* Minimal configuration thanks to Terraform's own mechanisms: just an endpoint and credentials, with no need to deal with headers or other API boilerplate
* Also works with the community-driven OpenTofu
* The provider is under active development: more features can be added on request
## Purpose in EDP
Interacting with infrastructure is a complex process, with many parameters and components working together. Doing so by clicking buttons in a web UI ("ClickOps") is extremely difficult to scale, rapidly becoming highly confusing.
Instead, automations are possible through APIs and SDKs. Working directly with an API (e.g. via `curl`) inevitably tends to involve large amounts of boilerplate code to manage authentication, rarely-changing configuration such as region/tenant selection, and more. When one resource (say, a web server) must interact with another (say, a DNS record), the cross-references further increase this complexity.
An SDK mitigates this complexity when coding software, by providing library functions which interact with the API in abstracted ways which require a minimum of necessary information. Our SDK for Edge Connect is described in a [separate section](/docs/edgeconnect/edgeconnect-sdk/).
However, when simply wanting to deploy infrastructure in isolation - say, updating the status of a Kubernetes or App resource after a change in configuration - an SDK is still an overly complicated tool.
This is where [Terraform](https://developer.hashicorp.com/terraform) and its community-led alternative [OpenTofu](https://opentofu.org/) come in. They provide a simple language for defining resources, with a level of abstraction that retains the power and flexibility of the API while greatly simplifying definitions and execution.
Terraform is widely used for major infrastructure systems such as [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) or general [Kubernetes](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs). However, it is highly flexible, supporting a range of resource types which are not inherently tied to infrastructure: [file](https://registry.terraform.io/search/providers?q=file) manipulation; package setup through [Ansible](https://registry.terraform.io/providers/ansible/aap/1.4.0); secret generation in [Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs).
As a result of this breadth of functionality and cross-compatibility, Terraform support is considered by some to be necessary for a platform to be used 'seriously' - that is, at scale, or in major workloads. Our provider thus unlocks broad market relevance for the platform in a way few other tools or features could.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect
**Documentation**: Provider is intended to ultimately wrap each resource-based endpoint of the [Edge API](https://swagger.edge.platform.mg3.mdb.osc.live/), but currently supports a limited [subset of resources](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources).
## Getting Started
### Prerequisites
* [Terraform](https://developer.hashicorp.com/terraform) or [OpenTofu](https://opentofu.org/)
* Edge access and credentials
### Quick Start
1. Configure Terraform to use the provider by [including it](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#using-terraform-registry-recommended) in `provider.tf`
2. In the same directory, create terraform resources in `.tf` files according to the [spec](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources)
3. [Set up credentials](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect/src/branch/main/README.md#provider-configuration) using environment variables or a `provider` block
4. Run `terraform init` in the directory
5. Execute `terraform plan` and/or `terraform apply` to deploy your application
6. `terraform destroy` can be used to remove all deployed resources
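As a rough sketch of that loop (the credential environment variables here are hypothetical; see the provider README for the authoritative names):
```bash
# Hypothetical credential setup - consult the provider README for the real variable names
export EDGE_CONNECT_USERNAME="your-username@example.com"
export EDGE_CONNECT_PASSWORD="your-password"

terraform init     # downloads the provider declared in provider.tf
terraform plan     # previews the changes against the Edge API
terraform apply    # creates/updates the declared resources
terraform destroy  # removes all deployed resources when finished
```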
### Verification
If `terraform apply` completes successfully (without errors), the provider is working correctly. You can also manually validate in the Edge UI that your resources have been deployed/reconfigured as Terraform indicated.
## Status
**Maturity**: Experimental
## Additional Resources
* [Terralist](https://www.terralist.io/)
* [Terraform](https://developer.hashicorp.com/terraform)
* [OpenTofu](https://opentofu.org/)
* [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live)
## Integration Points
* **Edge Connect SDK**: The provider uses the [Edge Connect SDK](/docs/edgeconnect/edgeconnect-sdk/) under the hood.
* **Terralist**: The provider is published using a [custom instance](https://terralist.garm-provider-test.t09.de/) of [Terralist](https://www.terralist.io/). This [can only](https://edp.buildth.ing/DevFW-CICD/stacks/src/commit/5b438097bbd027f0025d6198c34c22f856392a03/template/stacks/terralist/terralist/values.yaml#L9-L38) be written to with a login via [Forgejo](https://edp.buildth.ing/), but can be read publicly.
### Component Architecture (C4)
<likec4-view view-id="provider" browser="true"></likec4-view>


@ -0,0 +1,52 @@
---
title: Edge Developer Platform
linkTitle: Edge Developer Platform
weight: 10
description: >
  A platform to support developers working in the Edge, based around Forgejo
---
## Purpose
The Edge Developer Platform (EDP) is a comprehensive DevOps platform designed to enable developers to build, deploy, and operate cloud-native applications at the edge. It provides an integrated suite of tools and services covering the entire software development lifecycle.
{{< likec4-view view="application-transition" project="architecture" title="EDP Context View: Edge Developer Platform Components and User Interaction" >}}
The magenta **EDP** represents the developer platform: a shared, productized layer that enables modern DevOps by standardizing how applications are described, built, deployed, and observed. In the **inner loop**, developers iterate locally (fast feedback: code → run → test). EDP then connects that work to an **outer loop** where additional roles (review, test, operations, audit/compliance) contribute feedback and controls for production readiness.
In this modern DevOps setup, EDP acts as the hub: it synchronizes with local development and **deploys applications to target clouds** (for example, an EdgeConnect cloud), while providing the operational capabilities needed to run them safely. Agentic AI can support both loops—for example by assisting developers with implementation and testing in the inner loop, and by automating reviews, policy checks, release notes, and deployment verification (including drift detection and remediation) in the outer loop.
## Product Structure
EDP consists of multiple integrated components organized in layers:
### Core Platform Services
The foundation layer provides essential platform capabilities including source code management, CI/CD, and container orchestration.
For documentation, see: [Basic Platform Concepts](./deployment/basics/) and [Forgejo](./forgejo/)
### Developer Experience
Tools and services that developers interact with directly to build, test, and deploy applications.
For documentation, see: [Forgejo](./forgejo/) and [Deployment](./deployment/)
### Infrastructure & Operations
Infrastructure automation, observability, and operational tooling for platform management.
For documentation, see: [Operations](./operations/) and [Infrastructure as Code](./deployment/infrastructure/)
## Getting Started
EDP is available at https://edp.buildth.ing.
EDP includes a Forgejo instance that hosts both public and private repositories containing all EDP components.
To request access and get onboarded, start with the welcome repository:
- https://edp.buildth.ing/edp-team/welcome
Once you have access to the repositories, you can explore the EDP documentation according to the product structure above.


@ -0,0 +1,509 @@
---
title: Deployment
linkTitle: Deployment
weight: 10
description: >
  Platform-level component provisioning via Stacks - Orchestrating the platform infrastructure itself
---
## Overview
Platform Orchestration refers to the automation and management of the platform infrastructure itself. This includes the provisioning, configuration, and lifecycle management of all components that make up the Internal Developer Platform (IDP).
In the context of IPCEI-CIS, Platform Orchestration means:
- **Platform Bootstrap**: Initial setup of Kubernetes clusters and core services
- **Platform Services Management**: Deployment and management of ArgoCD, Forgejo, Keycloak, etc.
- **Infrastructure-as-Code**: Declarative management using Terraform and GitOps
- **Multi-Cluster Orchestration**: Coordination across different Kubernetes clusters
- **Platform Stacks**: Reusable bundles of platform components (CNOE concept)
### Target Audience
Platform Orchestration is primarily aimed at:
- **Platform Engineering Teams**: Teams that build and operate the IDP
- **Infrastructure Architects**: Those responsible for the platform architecture
- **SRE Teams**: Teams responsible for reliability and operations
## Key Features
### Declarative Platform Definition
The entire platform is defined declaratively as code:
- **GitOps-First**: Everything is versioned in Git and traceable
- **Reproducibility**: The platform can be rebuilt at any time
- **Environment Parity**: Consistency between Dev, Test, and Production
- **Auditability**: Complete history of all changes
### Self-Bootstrapping
The platform can bootstrap itself:
1. **Initial Bootstrap**: Minimal tool (like `idpbuilder`) starts the platform
2. **Self-Management**: After bootstrap, ArgoCD takes over management
3. **Continuous Reconciliation**: Platform is continuously reconciled with Git state
4. **Self-Healing**: Automatic recovery on deviations
### Stack-based Composition
Platform components are organized as reusable stacks (CNOE concept):
- **Modularity**: Components can be updated individually
- **Reusability**: Stacks can be used across different environments
- **Composability**: Compose complex platforms from simple building blocks
- **Versioning**: Stacks can be versioned and tested
**In IPCEI-CIS**: The stacks concept from CNOE is the core organizational principle for platform components.
### Multi-Cluster Support
Platform Orchestration supports different cluster topologies:
- **Control Plane + Worker Clusters**: Centralized control, distributed workloads
- **Hub-and-Spoke**: One management cluster manages multiple target clusters
- **Federation**: Coordination across multiple independent clusters
## Purpose in EDP
Platform Orchestration is the foundation of the IPCEI-CIS Edge Developer Platform. It enables:
### Foundation for Developer Self-Service
Platform Orchestration ensures that all services developers need for self-service are available:
- **GitOps Engine** (ArgoCD) for continuous deployment
- **Source Control** (Forgejo) for code and configuration management
- **Identity Management** (Keycloak) for authentication and authorization
- **Observability** (Grafana, Prometheus) for monitoring and logging
- **CI/CD** (Forgejo Actions/Pipelines) for automated build and test
### Consistency Across Environments
Through declarative definition, consistency is guaranteed:
- Development, test, and production environments are identically configured
- No "configuration drift" between environments
- Predictable behavior across all stages
### Platform as Code
The platform itself is treated like software:
- **Version Control**: All changes are versioned in Git
- **Code Review**: Platform changes go through review processes
- **Testing**: Platform configurations can be tested
- **Rollback**: Easy rollback on problems
### Reduced Operational Overhead
Automation reduces manual effort:
- No manual installation steps
- Automatic updates and patching
- Self-healing on failures
- Standardized deployment processes
## Repository
**CNOE Reference Implementation**: [cnoe-io/stacks](https://github.com/cnoe-io/stacks)
**CNOE idpbuilder**: [cnoe-io/idpbuilder](https://github.com/cnoe-io/idpbuilder)
**Documentation**: [CNOE.io Documentation](https://cnoe.io/docs/)
## Getting Started
### Prerequisites
- **Docker**: For local Kubernetes clusters (Kind)
- **kubectl**: Kubernetes CLI tool
- **Git**: For repository management
- **idpbuilder**: CNOE bootstrap tool
### Quick Start
Platform Orchestration with CNOE Reference Implementation:
```bash
# 1. Install idpbuilder
curl -fsSL https://cnoe.io/install.sh | bash
# 2. Bootstrap platform
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation
# 3. Wait for platform ready (approx. 10 minutes)
kubectl get applications -A
```
### Verification
Verify the platform is running correctly:
```bash
# Get platform secrets (credentials)
idpbuilder get secrets
# Check all ArgoCD applications
kubectl get applications -n argocd
# Expected: All applications "Synced" and "Healthy"
```
Access URLs (with path-routing):
- **ArgoCD**: `https://cnoe.localtest.me:8443/argocd`
- **Forgejo**: `https://cnoe.localtest.me:8443/gitea`
- **Keycloak**: `https://cnoe.localtest.me:8443/keycloak`
## Usage Examples
### Use Case 1: Platform Bootstrap
Initial bootstrapping of a new platform instance:
```bash
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation \
--log-level debug
# Workflow:
# 1. Creates Kind cluster
# 2. Installs ingress-nginx
# 3. Clones and installs ArgoCD
# 4. Installs Forgejo
# 5. Waits for core services
# 6. Creates technical users
# 7. Configures Git repositories
# 8. Installs remaining stacks via ArgoCD
```
After approximately 10 minutes, the platform is fully deployed.
### Use Case 2: Adding New Platform Components
Add new platform components via ArgoCD:
```bash
# Create ArgoCD Application for new component
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.external-secrets.io
    targetRevision: 0.9.9
    chart: external-secrets
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```
### Use Case 3: Platform Updates
Update platform components:
```bash
# 1. Update via Git (GitOps)
cd your-platform-config-repo
git pull
# 2. Update stack version
vim argocd/applications/component.yaml
# Change targetRevision to new version
# 3. Commit and push
git add .
git commit -m "Update component to v1.2.3"
git push
# 4. ArgoCD will automatically sync
# 5. Monitor the update
argocd app sync component --watch
```
## Integration Points
### ArgoCD Integration
- **Bootstrap**: ArgoCD is initially installed via idpbuilder
- **Self-Management**: After bootstrap, ArgoCD manages itself via Application CRD
- **Platform Coordination**: ArgoCD orchestrates all other platform components
- **Health Monitoring**: ArgoCD monitors health status of all platform services
### Forgejo Integration
- **Source of Truth**: Git repositories contain all platform definitions
- **GitOps Workflow**: Changes in Git trigger platform updates
- **Backup**: Git serves as backup of platform configuration
- **Audit Trail**: Git history documents all platform changes
- **CI/CD**: Forgejo Actions can automate platform operations
### Terraform Integration
- **Infrastructure Provisioning**: Terraform provisions cloud resources for platform
- **State Management**: Terraform state tracks infrastructure
- **Integration**: Terraform can be triggered via Forgejo pipelines
- **Multi-Cloud**: Support for multiple cloud providers
## Architecture
### Platform Orchestration Flow
```text
┌─────────────────┐
│   idpbuilder    │  Bootstrap Tool
│  (Initial Run)  │
└────────┬────────┘
         │
         ▼
┌─────────────────────────────────────────────────────┐
│                 Kubernetes Cluster                  │
│                                                     │
│  ┌──────────────┐         ┌──────────────┐          │
│  │    ArgoCD    │────────▶│   Forgejo    │          │
│  │   (GitOps)   │         │  (Git Repo)  │          │
│  └──────┬───────┘         └──────────────┘          │
│         │                                           │
│         │ Monitors & Syncs                          │
│         ▼                                           │
│  ┌──────────────────────────────────────┐           │
│  │           Platform Stacks            │           │
│  │                                      │           │
│  │  ┌──────────┐      ┌──────────┐      │           │
│  │  │ Forgejo  │      │ Keycloak │      │           │
│  │  └──────────┘      └──────────┘      │           │
│  │  ┌──────────┐      ┌──────────┐      │           │
│  │  │ Observ-  │      │ Ingress  │      │           │
│  │  │ ability  │      │          │      │           │
│  │  └──────────┘      └──────────┘      │           │
│  └──────────────────────────────────────┘           │
└─────────────────────────────────────────────────────┘
```
### Platform Bootstrap Sequence
The idpbuilder executes the following workflow:
1. Create Kind Kubernetes cluster
2. Install ingress-nginx controller
3. Install ArgoCD
4. Install Forgejo Git server
5. Wait for services to be ready
6. Create technical users in Forgejo
7. Create repository for platform state in Forgejo
8. Push platform stacks to Forgejo
9. Create ArgoCD Applications for all stacks
10. ArgoCD takes over continuous synchronization
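Once step 10 is reached, the handover can be observed with standard kubectl:
```bash
# Watch ArgoCD take over: applications should converge to Synced/Healthy
kubectl get applications -n argocd -w
```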
### Deployment Architecture
The platform is deployed in different namespaces:
- `argocd`: ArgoCD and its components
- `gitea`: Forgejo Git server
- `keycloak`: Identity and access management
- `observability`: Prometheus, Grafana, etc.
- `ingress-nginx`: Ingress controller
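A quick way to confirm that all of these namespaces came up is to iterate over them:
```bash
for ns in argocd gitea keycloak observability ingress-nginx; do
  echo "--- $ns ---"
  kubectl get pods -n "$ns"
done
```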
## Configuration
### idpbuilder Configuration
Key configuration options for idpbuilder:
```bash
# Path-based routing (recommended for local development)
idpbuilder create --use-path-routing
# Custom package directory
idpbuilder create --package-dir /path/to/custom/packages
# Custom Kind cluster config
idpbuilder create --kind-config custom-kind.yaml
# Enable debug logging
idpbuilder create --log-level debug
```
### ArgoCD Configuration
Important ArgoCD configurations for platform orchestration:
```yaml
# argocd-cm ConfigMap
data:
  # Enable automatic sync
  application.instanceLabelKey: argocd.argoproj.io/instance
  # Repository credentials
  repositories: |
    - url: https://github.com/cnoe-io/stacks
      name: cnoe-stacks
      type: git
  # Resource exclusions
  resource.exclusions: |
    - apiGroups:
        - cilium.io
      kinds:
        - CiliumIdentity
```
### Platform Stack Configuration
Configuration of platform stacks via Kustomize:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-system
resources:
  - argocd-app.yaml
  - forgejo-app.yaml
  - keycloak-app.yaml
patches:
  - target:
      kind: Application
    patch: |-
      - op: add
        path: /spec/syncPolicy
        value:
          automated:
            prune: true
            selfHeal: true
```
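The rendered output can be inspected locally before committing, for example with kubectl's built-in Kustomize support:
```bash
# Render the stack to stdout without applying anything
kubectl kustomize .
```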
## Troubleshooting
### Platform not reachable
**Problem**: After `idpbuilder create`, platform services are not reachable
**Solution**:
```bash
# 1. Check if all pods are running
kubectl get pods -A
# 2. Check ArgoCD application status
kubectl get applications -n argocd
# 3. Check ingress
kubectl get ingress -A
# 4. Verify DNS resolution
nslookup cnoe.localtest.me
# 5. Check idpbuilder logs
idpbuilder get logs
```
### ArgoCD Applications not synchronized
**Problem**: ArgoCD Applications show status "OutOfSync"
**Solution**:
```bash
# 1. Check application details
argocd app get <app-name>
# 2. View sync status
argocd app sync <app-name> --dry-run
# 3. Force sync
argocd app sync <app-name> --force
# 4. Check for errors in ArgoCD logs
kubectl logs -n argocd deployment/argocd-application-controller
```
### Git Repository Connection Issues
**Problem**: ArgoCD cannot access Git repository
**Solution**:
```bash
# 1. Verify repository configuration
argocd repo list
# 2. Test connection
argocd repo get https://your-git-repo
# 3. Check credentials
kubectl get secret -n argocd
# 4. Re-add repository with correct credentials
argocd repo add https://your-git-repo \
--username <user> \
--password <token>
```
## Platform Orchestration Best Practices
Based on experience and [CNCF Guidelines](https://tag-app-delivery.cncf.io/whitepapers/platforms/):
1. **Start Simple**: Begin with the CNOE reference stack, extend gradually
2. **Automate Everything**: Manual platform changes are an anti-pattern
3. **Monitor Continuously**: Use observability tools for platform health
4. **Document Well**: Platform documentation is essential for adoption
5. **Version Everything**: All platform components should be versioned
6. **Test Changes**: Platform updates should be tested in non-prod
7. **Plan for Disaster**: Backup and disaster recovery strategies are important
8. **Use Stacks**: Organize platform components as reusable stacks
## Status
**Maturity**: Production (for CNOE Reference Implementation)
**Stability**: Stable
**Support**: Community Support via CNOE Community
## Additional Resources
### CNOE Resources
- [CNOE Official Website](https://cnoe.io/)
- [CNOE GitHub Organization](https://github.com/cnoe-io)
- [CNOE Reference Implementation](https://github.com/cnoe-io/stacks)
- [CNOE Community Slack](https://cloud-native.slack.com/archives/C05TN9WFN5S)
### Platform Engineering
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
- [Platform Engineering Maturity Model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)
- [Team Topologies](https://teamtopologies.com/) - Organizational patterns
### GitOps
- [GitOps Working Group](https://opengitops.dev/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [GitOps Principles](https://opengitops.dev/)
### CNOE Stacks
- [Understanding CNOE Stacks](https://cnoe.io/docs/reference-implementation/stacks/)
- [Creating Custom Stacks](https://cnoe.io/docs/reference-implementation/customization/)


@ -0,0 +1,479 @@
---
title: Basic Concepts
linkTitle: Basic Concepts
weight: 1
description: >
  Platform-level component provisioning via Stacks - Orchestrating the platform infrastructure itself
---
## Overview
Platform Orchestration refers to the automation and management of the platform infrastructure itself. This includes the provisioning, configuration, and lifecycle management of all components that make up the Internal Developer Platform (IDP).
In the context of IPCEI-CIS, Platform Orchestration means:
- **Platform Bootstrap**: Initial setup of Kubernetes clusters and core services
- **Platform Services Management**: Deployment and management of ArgoCD, Forgejo, Keycloak, etc.
- **Infrastructure-as-Code**: Declarative management using Terraform and GitOps
- **Multi-Cluster Orchestration**: Coordination across different Kubernetes clusters
- **Platform Stacks**: Reusable bundles of platform components (CNOE concept)
### Target Audience
Platform Orchestration is primarily aimed at:
- **Platform Engineering Teams**: Teams that build and operate the IDP
- **Infrastructure Architects**: Those responsible for the platform architecture
- **SRE Teams**: Teams responsible for reliability and operations
## Key Features
### Declarative Platform Definition
The entire platform is defined declaratively as code:
- **GitOps-First**: Everything is versioned in Git and traceable
- **Reproducibility**: The platform can be rebuilt at any time
- **Environment Parity**: Consistency between Dev, Test, and Production
- **Auditability**: Complete history of all changes
### Self-Bootstrapping
The platform can bootstrap itself:
1. **Initial Bootstrap**: Minimal tool (like `idpbuilder`) starts the platform
2. **Self-Management**: After bootstrap, ArgoCD takes over management
3. **Continuous Reconciliation**: Platform is continuously reconciled with Git state
4. **Self-Healing**: Automatic recovery on deviations
### Stack-based Composition
Platform components are organized as reusable stacks (CNOE concept):
- **Modularity**: Components can be updated individually
- **Reusability**: Stacks can be used across different environments
- **Composability**: Compose complex platforms from simple building blocks
- **Versioning**: Stacks can be versioned and tested
**In IPCEI-CIS**: The stacks concept from CNOE is the core organizational principle for platform components.
### Multi-Cluster Support
Platform Orchestration supports different cluster topologies:
- **Control Plane + Worker Clusters**: Centralized control, distributed workloads
- **Hub-and-Spoke**: One management cluster manages multiple target clusters
- **Federation**: Coordination across multiple independent clusters
## Purpose in EDP
Platform Orchestration is the foundation of the IPCEI-CIS Edge Developer Platform. It enables:
### Foundation for Developer Self-Service
Platform Orchestration ensures that all services developers need for self-service are available:
- **GitOps Engine** (ArgoCD) for continuous deployment
- **Source Control** (Forgejo) for code and configuration management
- **Identity Management** (Keycloak) for authentication and authorization
- **Observability** (Grafana, Prometheus) for monitoring and logging
- **CI/CD** (Forgejo Actions/Pipelines) for automated build and test
### Consistency Across Environments
Through declarative definition, consistency is guaranteed:
- Development, test, and production environments are identically configured
- No "configuration drift" between environments
- Predictable behavior across all stages
### Platform as Code
The platform itself is treated like software:
- **Version Control**: All changes are versioned in Git
- **Code Review**: Platform changes go through review processes
- **Testing**: Platform configurations can be tested
- **Rollback**: Easy rollback on problems
### Reduced Operational Overhead
Automation reduces manual effort:
- No manual installation steps
- Automatic updates and patching
- Self-healing on failures
- Standardized deployment processes
## Repository
**CNOE Reference Implementation**: [cnoe-io/stacks](https://github.com/cnoe-io/stacks)
**CNOE idpbuilder**: [cnoe-io/idpbuilder](https://github.com/cnoe-io/idpbuilder)
**Documentation**: [CNOE.io Documentation](https://cnoe.io/docs/)
## Getting Started
### Prerequisites
- **Docker**: For local Kubernetes clusters (Kind)
- **kubectl**: Kubernetes CLI tool
- **Git**: For repository management
- **idpbuilder**: CNOE bootstrap tool
### Quick Start
Platform Orchestration with CNOE Reference Implementation:
```bash
# 1. Install idpbuilder
curl -fsSL https://cnoe.io/install.sh | bash
# 2. Bootstrap platform
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation
# 3. Wait for platform ready (approx. 10 minutes)
kubectl get applications -A
```
### Verification
Verify the platform is running correctly:
```bash
# Get platform secrets (credentials)
idpbuilder get secrets
# Check all ArgoCD applications
kubectl get applications -n argocd
# Expected: All applications "Synced" and "Healthy"
```
Access URLs (with path-routing):
- **ArgoCD**: `https://cnoe.localtest.me:8443/argocd`
- **Forgejo**: `https://cnoe.localtest.me:8443/gitea`
- **Keycloak**: `https://cnoe.localtest.me:8443/keycloak`
## Usage Examples
### Use Case 1: Platform Bootstrap
Initial bootstrapping of a new platform instance:
```bash
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation \
--log-level debug
# Workflow:
# 1. Creates Kind cluster
# 2. Installs ingress-nginx
# 3. Clones and installs ArgoCD
# 4. Installs Forgejo
# 5. Waits for core services
# 6. Creates technical users
# 7. Configures Git repositories
# 8. Installs remaining stacks via ArgoCD
```
After approximately 10 minutes, the platform is fully deployed.
### Use Case 2: Adding New Platform Components
Add new platform components via ArgoCD:
```bash
# Create ArgoCD Application for new component
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.external-secrets.io
    targetRevision: 0.9.9
    chart: external-secrets
  destination:
    server: https://kubernetes.default.svc
    namespace: external-secrets-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```
### Use Case 3: Platform Updates
Update platform components:
```bash
# 1. Update via Git (GitOps)
cd your-platform-config-repo
git pull
# 2. Update stack version
vim argocd/applications/component.yaml
# Change targetRevision to new version
# 3. Commit and push
git add .
git commit -m "Update component to v1.2.3"
git push
# 4. ArgoCD will automatically sync
# 5. Monitor the update
argocd app sync component --watch
```
## Integration Points
### ArgoCD Integration
- **Bootstrap**: ArgoCD is initially installed via idpbuilder
- **Self-Management**: After bootstrap, ArgoCD manages itself via Application CRD
- **Platform Coordination**: ArgoCD orchestrates all other platform components
- **Health Monitoring**: ArgoCD monitors health status of all platform services
### Forgejo Integration
- **Source of Truth**: Git repositories contain all platform definitions
- **GitOps Workflow**: Changes in Git trigger platform updates
- **Backup**: Git serves as backup of platform configuration
- **Audit Trail**: Git history documents all platform changes
- **CI/CD**: Forgejo Actions can automate platform operations
### Terraform Integration
- **Infrastructure Provisioning**: Terraform provisions cloud resources for platform
- **State Management**: Terraform state tracks infrastructure
- **Integration**: Terraform can be triggered via Forgejo pipelines
- **Multi-Cloud**: Support for multiple cloud providers
## Architecture
### Platform Orchestration Flow
{{< likec4-view view="platform_orchestration_flow" title="Platform Orchestration Flow" >}}
### Platform Bootstrap Sequence
The idpbuilder executes the following workflow:
1. Create Kind Kubernetes cluster
2. Install ingress-nginx controller
3. Install ArgoCD
4. Install Forgejo Git server
5. Wait for services to be ready
6. Create technical users in Forgejo
7. Create repository for platform state in Forgejo
8. Push platform stacks to Forgejo
9. Create ArgoCD Applications for all stacks
10. ArgoCD takes over continuous synchronization
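Once step 10 is reached, the handover can be observed with standard kubectl:
```bash
# Watch ArgoCD take over: applications should converge to Synced/Healthy
kubectl get applications -n argocd -w
```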
### Deployment Architecture
The platform is deployed in different namespaces:
- `argocd`: ArgoCD and its components
- `gitea`: Forgejo Git server
- `keycloak`: Identity and access management
- `observability`: Prometheus, Grafana, etc.
- `ingress-nginx`: Ingress controller
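A quick way to confirm that all of these namespaces came up is to iterate over them:
```bash
for ns in argocd gitea keycloak observability ingress-nginx; do
  echo "--- $ns ---"
  kubectl get pods -n "$ns"
done
```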
## Configuration
### idpbuilder Configuration
Key configuration options for idpbuilder:
```bash
# Path-based routing (recommended for local development)
idpbuilder create --use-path-routing
# Custom package directory
idpbuilder create --package-dir /path/to/custom/packages
# Custom Kind cluster config
idpbuilder create --kind-config custom-kind.yaml
# Enable debug logging
idpbuilder create --log-level debug
```
### ArgoCD Configuration
Important ArgoCD configurations for platform orchestration:
```yaml
# argocd-cm ConfigMap
data:
  # Enable automatic sync
  application.instanceLabelKey: argocd.argoproj.io/instance
  # Repository credentials
  repositories: |
    - url: https://github.com/cnoe-io/stacks
      name: cnoe-stacks
      type: git
  # Resource exclusions
  resource.exclusions: |
    - apiGroups:
        - cilium.io
      kinds:
        - CiliumIdentity
```
### Platform Stack Configuration
Configuration of platform stacks via Kustomize:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-system
resources:
  - argocd-app.yaml
  - forgejo-app.yaml
  - keycloak-app.yaml
patches:
  - target:
      kind: Application
    patch: |-
      - op: add
        path: /spec/syncPolicy
        value:
          automated:
            prune: true
            selfHeal: true
```
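The rendered output can be inspected locally before committing, for example with kubectl's built-in Kustomize support:
```bash
# Render the stack to stdout without applying anything
kubectl kustomize .
```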
## Troubleshooting
### Platform not reachable
**Problem**: After `idpbuilder create`, platform services are not reachable
**Solution**:
```bash
# 1. Check if all pods are running
kubectl get pods -A
# 2. Check ArgoCD application status
kubectl get applications -n argocd
# 3. Check ingress
kubectl get ingress -A
# 4. Verify DNS resolution
nslookup cnoe.localtest.me
# 5. Check idpbuilder logs
idpbuilder get logs
```
### ArgoCD Applications not synchronized
**Problem**: ArgoCD Applications show status "OutOfSync"
**Solution**:
```bash
# 1. Check application details
argocd app get <app-name>
# 2. View sync status
argocd app sync <app-name> --dry-run
# 3. Force sync
argocd app sync <app-name> --force
# 4. Check for errors in ArgoCD logs
kubectl logs -n argocd deployment/argocd-application-controller
```
### Git Repository Connection Issues
**Problem**: ArgoCD cannot access Git repository
**Solution**:
```bash
# 1. Verify repository configuration
argocd repo list
# 2. Test connection
argocd repo get https://your-git-repo
# 3. Check credentials
kubectl get secret -n argocd
# 4. Re-add repository with correct credentials
argocd repo add https://your-git-repo \
--username <user> \
--password <token>
```
## Platform Orchestration Best Practices
Based on experience and [CNCF Guidelines](https://tag-app-delivery.cncf.io/whitepapers/platforms/):
1. **Start Simple**: Begin with the CNOE reference stack, extend gradually
2. **Automate Everything**: Manual platform changes are an anti-pattern
3. **Monitor Continuously**: Use observability tools for platform health
4. **Document Well**: Platform documentation is essential for adoption
5. **Version Everything**: All platform components should be versioned
6. **Test Changes**: Platform updates should be tested in non-prod
7. **Plan for Disaster**: Backup and disaster recovery strategies are important
8. **Use Stacks**: Organize platform components as reusable stacks
## Status
**Maturity**: Production (for CNOE Reference Implementation)
**Stability**: Stable
**Support**: Community Support via CNOE Community
## Additional Resources
### CNOE Resources
- [CNOE Official Website](https://cnoe.io/)
- [CNOE GitHub Organization](https://github.com/cnoe-io)
- [CNOE Reference Implementation](https://github.com/cnoe-io/stacks)
- [CNOE Community Slack](https://cloud-native.slack.com/archives/C05TN9WFN5S)
### Platform Engineering
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
- [Platform Engineering Maturity Model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)
- [Team Topologies](https://teamtopologies.com/) - Organizational patterns
### GitOps
- [GitOps Working Group](https://opengitops.dev/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [GitOps Principles](https://opengitops.dev/)
### CNOE Stacks
- [Understanding CNOE Stacks](https://cnoe.io/docs/reference-implementation/stacks/)
- [Creating Custom Stacks](https://cnoe.io/docs/reference-implementation/customization/)


@ -0,0 +1,776 @@
---
title: "Application Orchestration"
linkTitle: "Application Orchestration"
weight: 30
description: >
  Application deployment via CI/CD pipelines and GitOps - Orchestrating application deployments
---
## Overview
Application Orchestration deals with the automation of application deployment and lifecycle management. It encompasses the entire workflow from source code to running application in production.
In the context of IPCEI-CIS, Application Orchestration includes:
- **CI/CD Pipelines**: Automated build, test, and deployment pipelines
- **GitOps Deployment**: Declarative application deployment via ArgoCD
- **Progressive Delivery**: Canary deployments, blue-green deployments
- **Application Configuration**: Environment-specific configuration management
- **Golden Paths**: Standardized deployment templates and workflows
### Target Audience
Application Orchestration is primarily for:
- **Application Developers**: Teams developing and deploying applications
- **DevOps Teams**: Teams responsible for deployment automation
- **Product Teams**: Teams responsible for application lifecycle
## Key Features
### Automated CI/CD Pipelines
Forgejo Actions provides GitHub Actions-compatible CI/CD:
- **Build Automation**: Automatic building of container images
- **Test Automation**: Automated unit, integration, and E2E tests
- **Security Scanning**: Vulnerability scanning of dependencies and images
- **Artifact Publishing**: Publishing to container registries
- **Deployment Triggering**: Automatic deployment after successful build
### GitOps-based Deployment
ArgoCD enables declarative application deployment:
- **Declarative Configuration**: Applications defined as Kubernetes manifests
- **Automated Sync**: Automatic synchronization between Git and cluster
- **Rollback Capability**: Easy rollback to previous versions
- **Multi-Environment**: Consistent deployment across Dev/Test/Prod
- **Health Monitoring**: Continuous monitoring of application health
### Progressive Delivery
Support for advanced deployment strategies:
- **Canary Deployments**: Gradual rollout to subset of users
- **Blue-Green Deployments**: Zero-downtime deployments with instant rollback
- **A/B Testing**: Traffic splitting for feature testing
- **Feature Flags**: Dynamic feature enablement without deployment
### Configuration Management
Flexible configuration for different environments:
- **Environment Variables**: Configuration via environment variables
- **ConfigMaps**: Kubernetes-native configuration
- **Secrets Management**: Secure handling of sensitive data
- **External Secrets**: Integration with external secret stores (Vault, etc.)
## Purpose in EDP
Application Orchestration is the core of the developer experience in the IPCEI-CIS Edge Developer Platform.
### Developer Self-Service
Developers can deploy applications independently:
- **Self-Service Deployment**: No dependency on operations team
- **Standardized Workflows**: Clear, documented deployment processes
- **Fast Feedback**: Quick feedback through automated pipelines
- **Environment Parity**: Consistent behavior across all environments
### Quality and Security
Automated checks ensure quality and security:
- **Automated Testing**: All changes are automatically tested
- **Security Scans**: Vulnerability scanning of dependencies and images
- **Policy Enforcement**: Automated policy checks (OPA, Kyverno)
- **Compliance**: Auditability of all deployments
### Efficiency and Productivity
Automation increases team efficiency:
- **Faster Time-to-Market**: Faster deployment of new features
- **Reduced Manual Work**: Automation of repetitive tasks
- **Fewer Errors**: Fewer manual mistakes through automation
- **Better Collaboration**: Clear interfaces between Dev and Ops
## Repository
**Forgejo**: [forgejo.org](https://forgejo.org/)
**Forgejo Actions**: [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)
**ArgoCD**: [argoproj.github.io/cd](https://argoproj.github.io/cd/)
## Getting Started
### Prerequisites
- **Forgejo Account**: Access to Forgejo instance
- **Kubernetes Cluster**: Target cluster for deployments
- **ArgoCD Access**: Access to ArgoCD instance
- **Git**: For repository management
### Quick Start: Application Deployment
1. **Create Application Repository**
```bash
# Create new repository in Forgejo
git init my-application
cd my-application
# Add application code and Dockerfile
cat > Dockerfile <<EOF
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
```
2. **Add CI/CD Pipeline**
Create `.forgejo/workflows/build.yaml`:
```yaml
name: Build and Push
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: registry.example.com
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: ${{ github.event_name == 'push' }}
          tags: registry.example.com/my-app:${{ github.sha }}
```
3. **Create Kubernetes Manifests**
Create `k8s/deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
---
apiVersion: v1
kind: Service
metadata:
  name: my-application
spec:
  selector:
    app: my-application
  ports:
    - port: 80
      targetPort: 3000
```
4. **Configure ArgoCD Application**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
5. **Deploy**
```bash
# Commit and push
git add .
git commit -m "Add application and deployment configuration"
git push origin main
# ArgoCD will automatically deploy the application
argocd app sync my-application --watch
```
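A short sketch for verifying the result once the sync has finished:
```bash
# The application should report Synced / Healthy
argocd app get my-application

# Pods should be running in the target namespace
kubectl get pods -n production -l app=my-application
```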
## Usage Examples
### Use Case 1: Multi-Environment Deployment
Deploy application to multiple environments:
**Repository Structure:**
```text
my-application/
├── .forgejo/
│   └── workflows/
│       └── build.yaml
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patches.yaml
    ├── staging/
    │   ├── kustomization.yaml
    │   └── patches.yaml
    └── production/
        ├── kustomization.yaml
        └── patches.yaml
```
**Kustomize Base** (`base/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: my-application
```
**Environment Overlay** (`overlays/production/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
replicas:
  - name: my-application
    count: 5
images:
  - name: my-app
    newTag: v1.2.3
patches:
  - path: patches.yaml
```
**ArgoCD Applications for each environment:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
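Before committing, each overlay can be rendered locally to confirm that the environment-specific values (namespace, replica count, image tag) are applied as expected:
```bash
for env in dev staging production; do
  echo "--- $env ---"
  kubectl kustomize "overlays/$env" | grep -E "namespace:|replicas:|image:"
done
```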
### Use Case 2: Canary Deployment
Progressive rollout with canary strategy:
**Argo Rollouts Canary:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-application
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: {duration: 5m}
        - setWeight: 30
        - pause: {duration: 5m}
        - setWeight: 60
        - pause: {duration: 5m}
        - setWeight: 100
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:v2.0.0
```
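Progress through the canary steps can be watched, and paused steps advanced manually, with the Argo Rollouts kubectl plugin (assuming it is installed):
```bash
# Watch the rollout move through the weight steps defined above
kubectl argo rollouts get rollout my-application --watch

# Skip the current pause and continue to the next step
kubectl argo rollouts promote my-application
```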
### Use Case 3: Feature Flags
Dynamic feature control without deployment:
**Application Code with Feature Flag:**
```javascript
const { Unleash } = require('unleash-client');

const unleash = new Unleash({
  url: 'http://unleash.platform/api/',
  appName: 'my-application',
  customHeaders: {
    Authorization: process.env.UNLEASH_API_TOKEN
  }
});

// Use feature flag
if (unleash.isEnabled('new-checkout-flow')) {
  // New checkout implementation
  renderNewCheckout();
} else {
  // Old checkout implementation
  renderOldCheckout();
}
```
## Integration Points
### Forgejo Integration
Forgejo serves as central source code management and CI/CD platform:
- **Source Control**: Git repositories for application code
- **CI/CD Pipelines**: Forgejo Actions for automated builds and tests
- **Container Registry**: Built-in container registry for images
- **Webhook Integration**: Triggers for external systems
- **Pull Request Workflows**: Code review and approval processes
### ArgoCD Integration
ArgoCD handles declarative application deployment:
- **GitOps Sync**: Continuous synchronization with Git state
- **Health Monitoring**: Application health status monitoring
- **Rollback Support**: Easy rollback to previous versions
- **Multi-Cluster**: Deployment to multiple clusters
- **UI and CLI**: Web interface and command-line access
### Observability Integration
Integration with monitoring and logging:
- **Metrics**: Prometheus metrics from applications
- **Logs**: Centralized log collection via Loki/ELK
- **Tracing**: Distributed tracing with Jaeger/Tempo
- **Alerting**: Alert rules for application issues
## Architecture
### Application Deployment Flow
{{< likec4-view view="application_deployment_flow" title="Application Deployment Flow" >}}
### CI/CD Pipeline Architecture
Typical Forgejo Actions pipeline stages:
1. **Checkout**: Clone source code
2. **Build**: Compile application and dependencies
3. **Test**: Run unit and integration tests
4. **Security Scan**: Scan dependencies and code for vulnerabilities
5. **Build Image**: Create container image
6. **Push Image**: Push to container registry
7. **Update Manifests**: Update Kubernetes manifests with the new image tag (see the sketch after this list)
8. **Notify**: Send notifications on success/failure
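The manifest-update step referenced above is typically a small pipeline job that rewrites the image tag in the Git repository ArgoCD watches. A hedged sketch using `kustomize edit`; the repository URL, branch, and write credentials are assumptions:
```bash
# Clone the GitOps repository (a token or deploy key with write access is assumed)
git clone https://forgejo.example.com/myteam/my-application.git
cd my-application/overlays/production

# Point the production overlay at the freshly built image
kustomize edit set image my-app=registry.example.com/my-app:${GITHUB_SHA}

# Commit and push; ArgoCD picks the change up on its next sync
git commit -am "Update my-app image to ${GITHUB_SHA}"
git push origin main
```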
## Configuration
### Forgejo Actions Configuration
Example for Node.js application:
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: registry.example.com
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Run linter
        run: npm run lint

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'

  build-and-push:
    needs: [test, security]
    runs-on: ubuntu-latest
    if: github.event_name == 'push'
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix={{branch}}-
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
### ArgoCD Application Configuration
Complete configuration example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-application
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/myteam/my-application
    targetRevision: main
    path: k8s/overlays/production
    # Kustomize options
    kustomize:
      version: v5.0.0
      images:
        - my-app=registry.example.com/my-app:v1.2.3
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  # Sync policy
  syncPolicy:
    automated:
      prune: true        # Delete resources not in Git
      selfHeal: true     # Override manual changes
      allowEmpty: false  # Don't delete everything on an empty repo
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
      - RespectIgnoreDifferences=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
  # Ignore differences (avoid sync loops)
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # Ignore if HPA manages replicas
```
## Troubleshooting
### Pipeline Fails
**Problem**: Forgejo Actions pipeline fails
**Solution**:
```bash
# 1. Check pipeline logs in Forgejo UI
# Navigate to: Repository → Actions → Select failed run
# 2. Check runner status
# In Forgejo: Site Admin → Actions → Runners
# 3. Check runner logs
kubectl logs -n forgejo-runner deployment/act-runner
# 4. Test pipeline locally with act
act -l # List available jobs
act -j build # Run specific job
```
### ArgoCD Application OutOfSync
**Problem**: Application shows "OutOfSync" status
**Solution**:
```bash
# 1. Check differences
argocd app diff my-application
# 2. View sync status details
argocd app get my-application
# 3. Manual sync
argocd app sync my-application
# 4. Force sync (re-apply resources)
argocd app sync my-application --force
# 5. Check for ignored differences
argocd app get my-application --show-operation
```
### Application Deployment Fails
**Problem**: Application pod crashes after deployment
**Solution**:
```bash
# 1. Check pod status
kubectl get pods -n production
# 2. View pod logs
kubectl logs -n production deployment/my-application
# 3. Describe pod for events
kubectl describe pod -n production <pod-name>
# 4. Check resource limits
kubectl top pod -n production
# 5. Rollback via ArgoCD
argocd app rollback my-application
```
### Image Pull Errors
**Problem**: Kubernetes cannot pull container image
**Solution**:
```bash
# 1. Verify image exists
docker pull registry.example.com/my-app:v1.2.3
# 2. Check image pull secret
kubectl get secret -n production regcred
# 3. Create image pull secret if missing
kubectl create secret docker-registry regcred \
--docker-server=registry.example.com \
--docker-username=user \
--docker-password=password \
-n production
# 4. Reference secret in deployment
kubectl patch deployment my-application -n production \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
```
## Best Practices
### Golden Path Templates
Provide standardized templates for common use cases:
1. **Web Application Template**: Node.js, Python, Go web services
2. **API Service Template**: RESTful API with OpenAPI
3. **Batch Job Template**: Kubernetes CronJob configurations
4. **Microservice Template**: Service mesh integration
Example repository template structure:
```text
application-template/
├── .forgejo/
│   └── workflows/
│       ├── build.yaml
│       ├── test.yaml
│       └── deploy.yaml
├── k8s/
│   ├── base/
│   └── overlays/
├── src/
│   └── ...
├── Dockerfile
├── README.md
└── .gitignore
```
### Deployment Checklist
Before deploying to production:
- ✅ All tests passing
- ✅ Security scans completed
- ✅ Resource limits defined
- ✅ Health checks configured
- ✅ Monitoring and alerts set up
- ✅ Backup strategy defined
- ✅ Rollback plan documented
- ✅ Team notified about deployment
### Configuration Management
- Use ConfigMaps for non-sensitive configuration
- Use Secrets for sensitive data
- Use the External Secrets Operator for vault integration (see the sketch below)
- Never commit secrets to Git
- Use environment-specific overlays (Kustomize)
- Document all configuration options
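For the External Secrets Operator item above, a sketch of what a vault-backed secret can look like; the `SecretStore` name and backend path are assumptions, not platform defaults:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-application-db
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend   # assumed SecretStore configured by the platform team
    kind: SecretStore
  target:
    name: my-application-db   # Kubernetes Secret created by the operator
  data:
    - secretKey: password
      remoteRef:
        key: apps/my-application/db   # assumed path in the secret backend
        property: password
```
The secret value itself never enters Git; only this reference does.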
## Status
**Maturity**: Production
**Stability**: Stable
**Support**: Internal Platform Team
## Additional Resources
### Forgejo
- [Forgejo Documentation](https://forgejo.org/docs/latest/)
- [Forgejo Actions Guide](https://forgejo.org/docs/latest/user/actions/)
- [Forgejo API Reference](https://forgejo.org/docs/latest/api/)
### ArgoCD
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [ArgoCD Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/)
### GitOps
- [GitOps Principles](https://opengitops.dev/)
- [GitOps Patterns](https://www.gitops.tech/)
- [Kubernetes Deployment Strategies](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
### CI/CD
- [GitHub Actions Documentation](https://docs.github.com/en/actions) (Forgejo Actions compatible)
- [Docker Best Practices](https://docs.docker.com/develop/dev-best-practices/)
- [Container Security Best Practices](https://kubernetes.io/docs/concepts/security/pod-security-standards/)


@ -0,0 +1,224 @@
---
title: Platform Orchestration
linkTitle: Platform Orchestration
weight: 1
description: >
Orchestration in the context of Platform Engineering - coordinating infrastructure, platform, and application delivery.
---
## Overview
Orchestration in the context of Platform Engineering refers to the coordinated automation and management of infrastructure, platform, and application components throughout their entire lifecycle. It is a fundamental concept that bridges the gap between declarative specifications (what should be deployed) and actual execution (how it is deployed).
## The Role of Orchestration in Platform Engineering
Platform Engineering has emerged as a discipline to improve developer experience and reduce cognitive load on development teams ([CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)). Orchestration is the central mechanism that enables this vision:
1. **Automation of Complex Workflows**: Orchestration coordinates multiple steps and dependencies automatically
2. **Consistency and Reproducibility**: Guaranteed, repeatable deployments across different environments
3. **Self-Service Capabilities**: Developers can independently orchestrate resources and deployments
4. **Governance and Compliance**: Centralized control over policies and best practices
### What Does Orchestration Do?
Orchestration systems perform the following tasks:
- **Workflow Coordination**: Coordination of complex, multi-step deployment processes
- **Dependency Management**: Resolution and management of dependencies between components
- **State Management**: Continuous monitoring and reconciliation between desired and actual state
- **Resource Provisioning**: Automatic provisioning of infrastructure and services
- **Configuration Management**: Management of configurations across different environments
- **Health Monitoring**: Monitoring the health of deployed resources
## Three Layers of Orchestration
In modern Platform Engineering, we distinguish three fundamental layers of orchestration:
### [Infrastructure Orchestration](../infrastructure/)
Infrastructure Orchestration deals with the lowest level - the physical and virtual infrastructure layer. This includes:
- Provisioning of compute, network, and storage resources
- Cloud resource management (VMs, networking, storage)
- Infrastructure-as-Code deployment (Terraform, etc.)
- Bare metal and hypervisor management
**Target Audience**: Infrastructure Engineers, Cloud Architects
**Note**: Detailed documentation for Infrastructure Orchestration is maintained separately.
More details: [Infrastructure Orchestration →](../infrastructure/)
### [Platform Orchestration](../otc/)
Platform Orchestration focuses on deploying and managing the platform itself - the services and tools that development teams use. This includes:
- Installation and configuration of Kubernetes clusters
- Deployment of platform services (GitOps tools, Observability, Security)
- Management of platform components via Stacks
- Multi-cluster orchestration
**Target Audience**: Platform Engineering Teams, SRE Teams
**In IPCEI-CIS**: Platform orchestration is realized using the CNOE stack concept with ArgoCD and Forgejo.
More details: [Platform Orchestration →](../otc/)
### [Application Orchestration](application/)
Application Orchestration concentrates on the deployment and lifecycle management of applications running on the platform. This includes:
- Deployment of microservices and containerized applications
- CI/CD pipeline orchestration
- Configuration management and secrets handling
- Application health monitoring and auto-scaling
**Target Audience**: Application Developers, DevOps Engineers
**In IPCEI-CIS**: Application orchestration uses Forgejo pipelines for CI/CD and ArgoCD for GitOps-based deployment.
More details: [Application Orchestration →](application/)
## GitOps as Orchestration Paradigm
A central approach in modern platform orchestration solutions is **GitOps**. GitOps uses Git repositories as the single source of truth for declarative infrastructure and applications:
- **Declarative Approach**: The desired state is defined in Git
- **Automatic Synchronization**: Controllers monitor Git and reconcile the live state
- **Audit Trail**: All changes are traceable in Git history
- **Rollback Capability**: Easy rollback through Git revert
### Continuous Reconciliation
An important concept is **continuous reconciliation** (see the CLI sketch after this list):
1. The orchestrator monitors both the source (Git) and the target (e.g., Kubernetes cluster)
2. Deviations trigger automatic corrective actions
3. Health checks validate that the desired state has been achieved
4. Drift detection warns of unexpected changes
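With ArgoCD as the reconciler, this loop can be observed from the command line; a brief sketch, assuming a logged-in `argocd` CLI and an application named `my-app`:
```bash
# Compare the live cluster state with the desired state in Git
argocd app diff my-app

# Inspect sync and health status as seen by the reconciler
argocd app get my-app

# Trigger reconciliation manually when automated sync is disabled
argocd app sync my-app
```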
## Orchestration Tools in IPCEI-CIS
Within the IPCEI-CIS platform, we utilize the [CNOE (Cloud Native Operational Excellence)](https://cnoe.io/) stack concept with the following orchestration components:
### ArgoCD
- **Continuous Delivery** for Kubernetes based on GitOps
- Synchronizes Kubernetes manifests from Git repositories
- Supports Helm Charts, Kustomize, Jsonnet, and plain YAML
- Multi-cluster deployment capabilities
- Application Sets for parameterized deployments
**Role in IPCEI-CIS**: ArgoCD is the central component for GitOps-based deployment management. After the initial bootstrapping phase, ArgoCD takes over the technical coordination of all components.
### Forgejo
- **Git Repository Management** and source control
- **CI/CD Pipelines** via Forgejo Actions (GitHub Actions compatible)
- **Developer Portal Capabilities** (initially planned, project discontinued)
- Package registry and artifact management
- Integration with ArgoCD for GitOps workflows
**Role in IPCEI-CIS**: Forgejo serves as the Git repository host and CI/CD engine. It was initially planned as a developer portal (similar to Backstage's role in other stacks) but this aspect was not fully realized before project completion.
**Note on Backstage**: In typical CNOE implementations, Backstage serves as the developer portal providing golden paths through software templates. IPCEI-CIS initially planned to use Forgejo for this purpose but the project concluded before full implementation.
### Terraform
- **Infrastructure-as-Code** provisioning
- Multi-cloud resource management
- State management for infrastructure
- Integration with Forgejo pipelines for automated deployment
**Role in IPCEI-CIS**: Terraform handles infrastructure provisioning at the infrastructure orchestration layer, integrated into automated workflows via Forgejo pipelines.
### CNOE Stacks Concept
- **Modular Platform Components** bundled as stacks
- Reusable, composable platform building blocks
- Version-controlled stack definitions
- GitOps-based stack deployment via ArgoCD
**Role in IPCEI-CIS**: The stacks concept from CNOE provides the structural foundation for platform orchestration, enabling modular deployment and management of platform components.
## The Orchestration Workflow
A typical orchestration workflow in the IPCEI-CIS platform:
{{< likec4-view view="orchestration_workflow" title="Orchestration Workflow" >}}
**Workflow Steps**:
1. **Definition**: Developer defines application/infrastructure as code
2. **Commit**: Changes are committed to Forgejo Git repository
3. **CI Pipeline**: Forgejo Actions build, test, and package the application
4. **Sync**: ArgoCD detects changes and triggers deployment
5. **Provision**: Terraform orchestrates required cloud resources (if needed)
6. **Deploy**: Application is deployed to Kubernetes
7. **Monitor**: Continuous monitoring and health checks
8. **Reconcile**: Automatic correction on drift detection
## Benefits of Coordinated Orchestration
The integration of infrastructure, platform, and application orchestration provides crucial advantages:
- **Reduced Complexity**: Developers don't need to know all infrastructure details
- **Faster Time-to-Market**: Automated workflows accelerate deployments
- **Consistency**: Standardized patterns across all teams
- **Governance**: Central policies are automatically enforced
- **Scalability**: Platform teams can support many application teams
- **Self-Service**: Developers can provision services independently
- **Audit and Compliance**: Complete traceability through Git history
## Best Practices
Successful orchestration follows proven principles ([Platform Engineering Principles](https://platformengineering.org/blog/what-is-platform-engineering)):
1. **Platform as a Product**: Treat the platform as a product with focus on user experience
2. **Self-Service First**: Enable developers to use services autonomously
3. **Documentation**: Comprehensive documentation of golden paths
4. **Feedback Loops**: Continuous improvement through user feedback
5. **Thin Platform Layer**: Use managed services where possible instead of building everything
6. **Progressive Disclosure**: Offer different abstraction levels
7. **Focus on Common Problems**: Solve recurring problems centrally
8. **Treat Glue as Valuable**: Integration of different tools is valuable
9. **Clear Mission**: Define clear goals and responsibilities
## Avoiding Anti-Patterns
Common mistakes in platform orchestration ([How to fail at Platform Engineering](https://www.cncf.io/blog/2024/03/08/how-to-fail-at-platform-engineering/)):
- **Product Misfit**: Building platform without involving developers
- **Overly Complex Design**: Too many features and unnecessary complexity
- **Swiss Knife Syndrome**: Trying to solve all problems with one tool
- **Insufficient Documentation**: Missing or outdated documentation
- **Siloed Development**: Platform and development teams working in isolation
- **Stagnant Platform**: Platform not continuously evolved
## Sub-Components
The orchestration component includes the following sub-areas:
- **[Infrastructure Orchestration](infrastructure/)**: Low-level infrastructure deployment and provisioning
- **[Platform Orchestration](platform/)**: Platform-level component deployment via Stacks
- **[Application Orchestration](application/)**: Application-level deployment and CI/CD
- **[Stacks](stacks/)**: Reusable component bundles and compositions
## Further Resources
### Fundamentals
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/) - Comprehensive paper on Platform Engineering
- [Platform Engineering Definition](https://platformengineering.org/blog/what-is-platform-engineering) - What is Platform Engineering?
- [Team Topologies](https://teamtopologies.com/) - Organizational structures for modern teams
### GitOps
- [GitOps Principles](https://opengitops.dev/) - Official GitOps principles
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/) - ArgoCD documentation
### Tools
- [CNOE.io](https://cnoe.io/) - Cloud Native Operational Excellence Framework
- [Forgejo](https://forgejo.org/) - Self-hosted Git service with CI/CD
- [Terraform](https://www.terraform.io/) - Infrastructure as Code tool


@ -0,0 +1,201 @@
---
title: Infrastructure as Code
linkTitle: Infrastructure as Code
weight: 10
description: >
Managing infrastructure through machine-readable definition files rather than manual configuration
---
## Overview
Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through code rather than manual processes. Instead of clicking through web consoles or running one-off commands, infrastructure is defined in version-controlled files that can be executed repeatedly to produce identical environments.
This approach treats infrastructure with the same rigor as application code: it's versioned, reviewed, tested, and deployed through automated pipelines.
## Why Infrastructure as Code?
### The problem with manual infrastructure
Traditional infrastructure management faces several challenges:
- **Inconsistency**: Manual steps vary between operators and environments
- **Undocumented**: Critical knowledge exists only in operators' heads
- **Error-Prone**: Human mistakes during repetitive tasks
- **Slow**: Manual provisioning takes hours or days
- **Untrackable**: No audit trail of what changed, when, or why
- **Irreproducible**: Difficulty recreating environments exactly
### The IaC solution
Infrastructure as Code addresses these challenges by making infrastructure:
**Declarative** - Describe the desired state, not the steps to achieve it. The IaC tool handles the implementation details.
**Versioned** - Every infrastructure change is committed to Git, providing complete history and the ability to rollback.
**Automated** - Infrastructure deploys through pipelines without human intervention, eliminating manual errors.
**Testable** - Infrastructure changes can be validated before production deployment.
**Documented** - The code itself is the documentation, always current and accurate.
**Reproducible** - The same code produces identical infrastructure every time, across all environments.
## Core Concepts
### Declarative vs imperative
**Imperative** approaches specify the exact steps: "Create a server, then install software, then configure networking."
**Declarative** approaches specify the desired outcome: "I need a server with this software and network configuration." The IaC tool determines the necessary steps.
Most modern IaC tools use the declarative approach, making them more maintainable and resilient.
### State Management
IaC tools maintain a "state" - a record of what infrastructure currently exists. When you change your code and re-run the tool, it compares the desired state (your code) with the actual state (what exists) and makes only the necessary changes.
This enables:
- **Drift detection** - Identify manual changes made outside IaC
- **Safe updates** - Modify only what changed
- **Dependency management** - Update resources in the correct order
### Idempotency
Running the same IaC code multiple times produces the same result. If infrastructure already matches the code, the tool makes no changes. This property is called idempotency and is essential for reliable automation.
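These properties are easy to observe with a toy configuration. A minimal sketch using Terraform's `local` provider; it is purely illustrative and not one of the platform's modules:
```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 2.4"
    }
  }
}

# Declarative: we describe the file we want, not the steps to create it
resource "local_file" "motd" {
  filename = "${path.module}/motd.txt"
  content  = "managed by terraform\n"
}
```
The first `terraform apply` creates the file and records it in state; a second apply reports no changes (idempotency), and editing the file by hand shows up as drift on the next plan.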
## Infrastructure as Code in EDP
The Edge Developer Platform uses IaC extensively:
### Terraform and Terragrunt
[Terraform](terraform/) is our primary IaC tool for provisioning cloud resources. We use [Terragrunt](https://terragrunt.gruntwork.io/) as an orchestration layer to manage multiple Terraform modules and reduce code duplication.
Our implementation includes:
- **[infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue)** - Reusable infrastructure components (modules, units, and stacks)
- **[infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)** - Full environment definitions using catalogue components
### Platform stacks
We organize infrastructure into [stacks](stacks/) - coherent bundles of related components:
- **[Core Stack](stacks/core/)** - Essential platform services
- **[Forgejo Stack](stacks/forgejo/)** - Source control and CI/CD
- **[Observability Stack](stacks/observability/)** - Monitoring and logging
- **[OTC Stack](stacks/otc/)** - Cloud provider resources
- **[Coder Stack](stacks/coder/)** - Development environments
- **[Terralist Stack](stacks/terralist/)** - Terraform registry
Each stack is defined as code, versioned independently, and can be deployed across different environments.
### GitOps integration
Our IaC integrates with GitOps principles:
1. All infrastructure definitions live in Git repositories
2. Changes go through code review processes
3. Automated pipelines deploy infrastructure
4. ArgoCD continuously reconciles Kubernetes resources with Git state
This creates an auditable, automated, and reliable deployment process.
## Benefits realized
### Consistency across environments
Development, testing, and production environments are deployed from the same code. This eliminates the "works on my machine" problem at the infrastructure level.
### Rapid environment provisioning
A complete EDP environment can be provisioned in minutes rather than days. This enables:
- Quick disaster recovery
- Easy creation of test environments
- Fast onboarding for new team members
### Reduced operational risk
Code review catches infrastructure errors before deployment. Automated testing validates changes. Version control enables instant rollback if problems occur.
### Knowledge sharing
Infrastructure configuration is explicit and discoverable in code. New team members can understand the platform by reading the repository rather than shadowing experienced operators.
### Compliance and auditability
Every infrastructure change is tracked in Git history with author, timestamp, and reason. This provides audit trails required for compliance and simplifies troubleshooting.
## Getting started
To work with EDP's Infrastructure as Code:
1. **Understand Terraform basics** - Review [Terraform documentation](https://developer.hashicorp.com/terraform)
2. **Explore infra-catalogue** - Browse [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) to understand available components
3. **Review existing deployments** - Examine [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) to see how components are composed
4. **Follow the Terraform guide** - See [Terraform-based deployment](terraform/) for detailed instructions
## Best Practices
Based on our experience building and operating IaC:
**Version everything** - All infrastructure code belongs in version control. No exceptions.
**Keep it simple** - Start with basic modules. Add abstraction only when duplication becomes painful.
**Test before production** - Deploy infrastructure changes to test environments first.
**Use meaningful commit messages** - Explain why changes were made, not just what changed.
**Review all changes** - Infrastructure changes should go through the same review process as application code.
**Document assumptions** - Use code comments to explain non-obvious decisions.
**Manage secrets securely** - Never commit credentials to version control. Use secret management tools.
**Plan for drift** - Regularly compare actual infrastructure with code state to detect manual changes.
## Challenges and limitations
Infrastructure as Code is powerful but has challenges:
**Learning curve** - Teams need to learn IaC tools and practices. Initial productivity may decrease.
**State management complexity** - State files must be stored securely and accessed by multiple team members. State corruption can cause serious issues.
**Provider limitations** - Not all infrastructure can be managed as code. Some resources require manual configuration.
**Breaking changes** - Poorly written code can destroy infrastructure. Safeguards and testing are essential.
**Tool lock-in** - Switching IaC tools (e.g., Terraform to Pulumi) requires rewriting infrastructure code.
Despite these challenges, the benefits far outweigh the costs for any infrastructure of meaningful complexity.
## Why we invest in IaC
The IPCEI-CIS Edge Developer Platform requires reliable, reproducible infrastructure. Manual provisioning cannot meet these requirements at scale.
By investing in Infrastructure as Code:
- We can deploy complete environments consistently
- Platform engineers can focus on improvement rather than repetitive tasks
- Infrastructure changes are transparent and auditable
- New team members can contribute confidently
- Disaster recovery becomes routine rather than heroic
Our IaC tools ([infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) and [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)) embody these principles and enable the platform's reliability.
## Additional Resources
### Terraform Ecosystem
- [Terraform Documentation](https://developer.hashicorp.com/terraform)
- [OpenTofu](https://opentofu.org/) - Community-driven Terraform fork
- [Terragrunt](https://terragrunt.gruntwork.io/) - Terraform orchestration
### Infrastructure as Code Concepts
- [Infrastructure as Code book](https://www.oreilly.com/library/view/infrastructure-as-code/9781098114664/) by Kief Morris
- [Terraform Best Practices](https://www.terraform-best-practices.com/)
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
### EDP-Specific Resources
- [Terraform-based deployment](terraform/) - Detailed deployment guide
- [Infrastructure Stacks](stacks/) - Reusable component bundles
- [Platform Orchestration](../) - How IaC fits into overall deployment

Binary image file added, 333 KiB (not shown).

Binary image file added, 188 KiB (not shown).

@ -0,0 +1,519 @@
---
title: "Stacks"
linkTitle: "Stacks"
weight: 40
description: >
Platform-level component provisioning via Stacks
---
## Overview
The `stacks` and `stacks-instances` repositories form the core of a GitOps-based system for provisioning Edge Developer Platforms (EDP). They implement a template-instance pattern that enables the deployment of reusable platform components across different environments. The concept of "stacks" originates from the CNOE.io project (Cloud Native Operational Excellence) and can be traced through the evolution from `edpbuilder` (derived from CNOE.io's `idpbuilder`) to `infra-deploy`.
## Key Features of the Everything-as-Code Stacks Approach
This declarative Stacks provisioning architecture is characterized by the following central properties:
### Complete Code Declaration
**Platform as Code**: All Kubernetes resources, Helm charts, and application manifests are declaratively versioned as YAML files. The entire platform topology is traceable in Git.
**Configuration as Code**: Environment-specific configurations are generated through template hydration, not manually edited. Gomplate transforms generic templates into concrete configurations.
### GitOps-Native Architecture
**Single Source of Truth**: Git is the sole source of truth for the desired state of all infrastructure and platform components.
**Declarative State Management**: ArgoCD continuously synchronizes the actual state with the desired state defined in Git. Deviations are automatically corrected.
**Audit Trail**: Every change to infrastructure or platform is documented through Git commits, with author, timestamp, and change description.
**Pull-based Deployment**: ArgoCD pulls changes from Git, rather than external systems requiring push access to the cluster. This significantly increases security.
### Template-Instance Separation
**DRY Principle (Don't Repeat Yourself)**: Common platform components are defined once as templates and reused for all environments.
**Environment Promotion**: New environments can be quickly created through template hydration. Consistency across environments is guaranteed.
**Centralized Maintainability**: Updates to stack definitions can be made centrally in the `stacks` repository and then selectively rolled out to instances.
**Customization Points**: Despite reuse, environment-specific customizations remain possible through values files and manifest overlays.
### Modular Composition
**Stack-based Architecture**: Platform capabilities are organized into independent, reusable stacks (core, otc, forgejo, observability).
**Selective Deployment**: Through the `STACKS` environment variable, only required components can be deployed selectively.
**Mix-and-Match**: Different stack combinations yield different platform profiles (Development, Production, Observability clusters).
**Pluggable Components**: New stacks can be added without modifying existing ones.
### Environment Agnosticism
**Cloud Provider Abstraction**: Templates are formulated generically. Provider-specific details are introduced through hydration.
**Multi-Cloud Ready**: The architecture supports various cloud providers (currently OTC, historically KIND, extensible to AWS/Azure/GCP).
**Environment Variables as Interface**: All environment-specific aspects are controlled through clearly defined environment variables.
**Portable Definitions**: Stack definitions can be ported between environments and even cloud providers.
### Self-Healing and Drift Detection
**Automated Reconciliation**: ArgoCD detects deviations from the desired state and corrects them automatically.
**Continuous Monitoring**: Permanent monitoring of cluster state compared to Git definition.
**Declarative State Recovery**: After failures or manual changes, the declared state is automatically restored.
**Sync Policies**: Configurable sync strategies (automated, manual, with pruning) per application.
### Secrets Management
**Secrets Outside Git**: Sensitive data is not stored in Git but generated at runtime or injected from secret stores.
**Generated Credentials**: Passwords, tokens, and secrets are generated during deployment and directly created as Kubernetes Secrets.
**Sealed Secrets Ready**: The architecture is compatible with Sealed Secrets or External Secrets Operators for encrypted secret storage in Git.
**Credential Rotation**: Secrets can be regenerated through re-deployment.
### Observability and Auditability
**Declarative Monitoring**: Observability stacks are part of the Platform-as-Code definition.
**Deployment History**: Complete history of all deployments and changes through Git log.
**ArgoCD UI**: Graphical representation of sync status and application topology.
**Infrastructure Events**: Terraform state changes and Terragrunt outputs document infrastructure changes.
### Idempotence and Reproducibility
**Idempotent Operations**: Repeated execution of the same declaration leads to the same result without side effects.
**Deterministic Builds**: Same input parameters (Git commit + environment variables) produce identical environments.
**Disaster Recovery**: Complete environments can be rebuilt from code without restoring backups.
**Testing in Production-Like Environments**: Development and staging environments are code-identical to production, only with different parameter values.
## Purpose in EDP
A 'stack' is the declarative description of the platform provisioning in an EDP installation.
## Repository
**Code**:
* [Stacks Templates Repo](https://edp.buildth.ing/DevFW-CICD/stacks)
* [Stacks Instances Repo, used for ArgoCD Gitops](https://edp.buildth.ing/DevFW-CICD/stacks-instances)
* [EDP Stacks Deployment mechanism](https://edp.buildth.ing/DevFW/infra-deploy)
**Documentation**:
* [Outdated: the former 'edpbuilder' script, derived from CNOE's 'idpbuilder'](https://edp.buildth.ing/DevFW/edpbuilder)
## The stacks Repository
### Purpose and Structure
The `stacks` repository contains reusable template definitions for platform components. It serves as a central library of building blocks from which Edge Developer Platforms can be composed.
```
stacks/
└── template/
    ├── edfbuilder.yaml
    ├── registry/
    │   ├── core.yaml
    │   ├── otc.yaml
    │   ├── forgejo.yaml
    │   ├── observability.yaml
    │   └── observability-client.yaml
    └── stacks/
        ├── core/
        ├── otc/
        ├── forgejo/
        ├── observability/
        └── observability-client/
```
### Components
**edfbuilder.yaml**: The central bootstrap definition. This is an ArgoCD Application that references the `registry` directory and serves as the entry point for the entire platform provisioning.
**registry/**: Contains ArgoCD ApplicationSets that function as a meta-layer. Each file defines a category of stacks (e.g., core, forgejo, observability) and references the corresponding subdirectory in `stacks/`.
**stacks/**: The actual platform components, organized into thematic categories:
- **core**: Fundamental components such as ArgoCD, CloudNative PostgreSQL, Dex (SSO)
- **otc**: Cloud-provider-specific components for Open Telekom Cloud (cert-manager, ingress-nginx, StorageClasses)
- **forgejo**: Git server and CI runners
- **observability**: Central observability components (Grafana, Victoria Metrics Stack)
- **observability-client**: Client-side metrics collection for non-observability clusters
Each stack consists of:
- YAML definitions (primarily ArgoCD Applications)
- `values.yaml` files for Helm charts
- `manifests/` directories for additional Kubernetes resources
### Templating Mechanism
The templates use Gomplate with delimiter syntax `{{{ }}}` for environment variables:
```yaml
repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core"
```
These placeholders are replaced with environment-specific values during the deployment phase.
## The stacks-instances Repository
### Purpose and Structure
The `stacks-instances` repository contains the materialized, environment-specific configurations. While `stacks` provides the blueprints, `stacks-instances` contains the actual deployment definitions for concrete environments.
```
stacks-instances/
└── otc/
    ├── osctest.t09.de/
    │   ├── edfbuilder.yaml
    │   ├── registry/
    │   └── stacks/
    ├── backup-test-manu.t09.de/
    │   ├── edfbuilder.yaml
    │   ├── registry/
    │   └── stacks/
    └── ...
```
### Organizational Principle
The structure follows the schema `{cloud-provider}/{domain}/`:
- **cloud-provider**: Identifies the cloud environment (e.g., `otc` for Open Telekom Cloud)
- **domain**: The fully qualified domain name of the environment (e.g., `osctest.t09.de`)
Each environment replicates the structure of `stacks/template`, but with resolved template variables and environment-specific customizations.
### Usage by ArgoCD
ArgoCD synchronizes directly from this repository. Applications reference paths such as:
```yaml
source:
  path: "otc/osctest.t09.de/stacks/core"
  repoURL: "https://edp.buildth.ing/DevFW-CICD/stacks-instances"
  targetRevision: HEAD
```
This enables true GitOps: every change to the configurations is traceable through Git commits and automatically synchronized by ArgoCD in the target environment.
## The infra-deploy Repository
### Role in the Overall Architecture
The `infra-deploy` repository is the orchestration layer that coordinates both infrastructure and platform provisioning. It represents the evolution of `edpbuilder`, which was originally derived from the CNOE.io project's `idpbuilder`.
### Two-Phase Provisioning
**Phase 1: Infrastructure Provisioning**
Uses Terragrunt Stacks (experimental feature) to provision cloud resources:
```
infra-deploy/
├── root.hcl
├── non-prod/
│   ├── tenant.hcl
│   ├── dns_zone/
│   │   ├── terragrunt.hcl
│   │   ├── terragrunt.stack.hcl
│   │   └── terragrunt.values.hcl
│   └── testing/
├── prod/
└── templates/
    └── forgejo/
        ├── terragrunt.hcl
        └── terragrunt.stack.hcl
```
Terragrunt Stacks provision:
- VPC and network segments
- Kubernetes clusters (CCE on OTC)
- Managed databases (RDS PostgreSQL)
- Load balancers and DNS entries
- Security groups and other cloud resources
**Phase 2: Platform Provisioning**
The script `scripts/edp-install.sh` executes the following steps:
1. **Template Hydration**:
- Checkout of the `stacks` repository
- Execution of Gomplate to resolve template variables
- Generation of environment-specific manifests
2. **Instance Management**:
- Checkout/update of the `stacks-instances` repository
- During CI execution: commit and push of the new instance
3. **Secrets Management**:
- Generation of credentials (database passwords, SSO secrets, API tokens)
- Creation of Kubernetes Secrets
4. **Bootstrap**:
- Helm-based installation of ArgoCD
- Application of `edfbuilder.yaml` or selective registry entries
5. **GitOps Handover**:
- ArgoCD takes over further synchronization from `stacks-instances`
- Continuous monitoring and self-healing
### GitHub Actions Workflows
The `.github/workflows/` directory contains three central workflows:
**deploy.yaml**: Complete deployment pipeline with the following inputs:
- Cluster environment and tenant (prod/non-prod)
- Node flavor and availability zone
- Stack selection (core, otc, forgejo, observability, etc.)
- Infra-catalogue version
**plan.yaml**: Terraform/Terragrunt plan preview without execution
**destroy.yaml**: Controlled teardown of environments
## Deployment Workflow
The complete provisioning process proceeds as follows:
1. **Initiation**: GitHub Actions workflow is triggered (manually or automatically)
2. **Environment Preparation**:
```bash
export CLUSTER_ENVIRONMENT=qa-stage
cd scripts
./new-otc-env.sh # Creates Terragrunt configuration if new
```
3. **Infrastructure Provisioning**:
```bash
./ensure-cluster.sh otc
# Internally executes:
# - ./ensure-otc-cluster.sh
# - terragrunt stack run apply
```
4. **Platform Provisioning**:
```bash
./edp-install.sh
# Executes:
# - Checkout of stacks
# - Gomplate hydration
# - Checkout/update of stacks-instances
# - Secrets generation
# - ArgoCD installation
# - Bootstrap of stacks
```
5. **ArgoCD Synchronization**: ArgoCD continuously reads from `stacks-instances` and synchronizes the desired state
## The CNOE.io Stacks Concept
The term "stacks" originates from the Cloud Native Operational Excellence (CNOE.io) project. The core idea is the composition of platform capabilities from modular, reusable building blocks.
### Principles
**Modularity**: Each stack is a self-contained unit with clear dependencies
**Composability**: Stacks can be freely combined to create different platform profiles
**Declarativeness**: All configurations are declarative and GitOps-capable
**Environment-agnostic**: Templates are generic; environment specifics are introduced through hydration
### Stack Selection and Combinations
The environment variable `STACKS` controls which components are deployed:
```bash
# Complete EDP with central observability
STACKS="core,otc,forgejo,observability"
# Application cluster with client-side observability
STACKS="core,otc,forgejo,observability-client"
# Minimal development environment
STACKS="core,forgejo"
```
## Data Flow and Dependencies
```
┌─────────────────┐
│ GitHub Actions  │
│  (deploy.yaml)  │
└────────┬────────┘
         │
         ├─> Phase 1: Infrastructure
         │   ┌──────────────────┐
         │   │   infra-deploy   │
         │   │   (Terragrunt)   │
         │   └────────┬─────────┘
         │            │
         │            v
         │   ┌──────────────────┐
         │   │  Cloud Provider  │
         │   │      (OTC)       │
         │   │  - VPC           │
         │   │  - K8s Cluster   │
         │   │  - RDS           │
         │   └──────────────────┘
         │
         └─> Phase 2: Platform
             ┌──────────────────┐
             │  edp-install.sh  │
             └────────┬─────────┘
                      │
                      ├─> Checkout: stacks (Templates)
                      │   └─> Gomplate Hydration
                      ├─> Checkout/Update: stacks-instances
                      ├─> Secrets Generation
                      ├─> ArgoCD Installation (Helm)
                      └─> Bootstrap (edfbuilder.yaml)
                               │
                               v
                      ┌────────────────┐
                      │     ArgoCD     │
                      └────────┬───────┘
                               │
                               └─> Continuous Synchronization
                                   from stacks-instances
                                        │
                                        v
                               ┌──────────────┐
                               │  Kubernetes  │
                               │   Cluster    │
                               └──────────────┘
```
## Historical Context: edpbuilder to infra-deploy
The evolution from `edpbuilder` to `infra-deploy` demonstrates the maturation of the architecture:
**edpbuilder** (Origin):
- Directly derived from CNOE.io's `idpbuilder`
- Focus on local KIND clusters
- Manual configuration
- Monolithic structure
**infra-deploy** (Current):
- Production-ready for cloud deployments (OTC)
- Terragrunt-based infrastructure orchestration
- CI/CD integration via GitHub Actions
- Clear separation between infrastructure and platform
- Template-instance separation through stacks/stacks-instances
## Technical Particularities
### Gomplate Templating
Gomplate is used with custom delimiters `{{{ }}}` to avoid conflicts with Helm templating (`{{ }}`):
```bash
gomplate --input-dir="stacks/template" \
--output-dir="work" \
--left-delim "{{{" \
--right-delim "}}}"
```
### Terragrunt Experimental Stacks
The use of Terragrunt Stacks requires the experimental flag:
```bash
export TG_EXPERIMENT_MODE=true
terragrunt stack run apply
```
This enables hierarchical organization of Terraform modules with dependency management.
### ArgoCD ApplicationSets
The registry pattern uses ArgoCD Applications that reference directories:
```yaml
source:
  path: "otc/osctest.t09.de/stacks/core"
```
ArgoCD automatically detects all YAML files in the path and synchronizes them as Applications.
## Best Practices and Patterns
**Immutable Infrastructure**: Every environment is fully defined in Git
**Secrets Outside Git**: Sensitive data is generated at runtime or injected from secret stores
**Progressive Rollouts**: New environments start as template instances, then are individually customized
**Version Pinning**: Critical components (Helm charts, Terragrunt modules) are pinned to specific versions
**Namespace Isolation**: Each stack deploys into dedicated namespaces
**Self-Healing**: ArgoCD's automated sync policy enables automatic drift correction
## Usage Examples
### Deployment by Pipeline
The platform deployment is the second part of the EDP installation. First comes the infrastructure setup, which ends with a provisioned Kubernetes cluster; then the platform is provisioned from the defined stacks. Both parts are run by the `deploy` pipeline in `infra-deploy`:
![Deploy workflow inputs](./deploy-action.png)
A successful (green) pipeline run looks like this:
![Green deploy pipeline run](./green-deploy-pipeline.png)
### Local setup with 'kind'
It is also possible to run only the second part, the stacks provisioning. In that case a Kubernetes cluster must already be running, which can be achieved with a local kind cluster, for example.
Say you want to deploy the stacks 'core,observability' on your local machine. Then run the following locally:
```bash
# have kind installed
# in /infra-deploy
# provide a kind cluster
kind delete clusters --all
./scripts/ensure-kind-cluster.sh -r
# provide some env vars
export TERRAFORM=/bin/bash
export LOADBALANCER_ID=ABC
export DOMAIN=ABC
export DOMAIN_GITEA=ABC
export OS_ACCESS_KEY=ABC
export OS_SECRET_KEY=ABC
export STACKS=core,observability
# deploy
./scripts/edp-install.sh
```
## Status
**Maturity**: Production
## Additional Resources
* [CNOE](https://cnoe.io/docs/overview/cnoe)


@ -0,0 +1,368 @@
---
title: "Coder"
linkTitle: "Coder"
weight: 20
description: >
Cloud Development Environments for secure, scalable remote development
---
## Overview
Coder is an enterprise cloud development environment (CDE) platform that provisions secure, consistent remote development workspaces. As part of the Edge Developer Platform, Coder enables developers to work in standardized, on-demand environments defined as code, moving development workloads from local machines to centrally managed infrastructure.
The Coder stack deploys a self-hosted Coder instance with PostgreSQL database backend, integrated authentication, and edge connectivity capabilities.
## Key Features
* **Infrastructure as Code Workspaces**: Development environments defined using Terraform templates
* **IDE Agnostic**: Supports browser-based IDEs, VS Code, JetBrains IDEs, and other development tools
* **Secure Remote Access**: Workspaces run in controlled cloud or on-premises infrastructure
* **On-Demand Provisioning**: Developers create ephemeral or persistent workspaces as needed
* **AI Agent Support**: Secure execution environment for AI coding assistants
* **Template-Based Deployment**: Reusable workspace templates ensure consistency across teams
## Repository
**Code**: [Coder Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/coder)
**Documentation**:
* [Coder Official Documentation](https://coder.com/docs)
* [Coder GitHub Repository](https://github.com/coder/coder)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* CloudNativePG operator (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Domain name configured via `DOMAIN_GITEA` environment variable
### Quick Start
The Coder stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible (e.g., entering `test-me` yields the domain `coder.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- PostgreSQL database cluster (CloudNativePG)
- Coder application (Helm chart v2.28.3)
- Ingress configuration with TLS
- Database credentials and edge connectivity secrets
### Verification
Verify the Coder deployment:
```bash
# Check ArgoCD application status
kubectl get application coder -n argocd
# Verify Coder pods are running
kubectl get pods -n coder
# Check PostgreSQL cluster status
kubectl get cluster coder-db -n coder
# Verify ingress configuration
kubectl get ingress -n coder
```
Access the Coder web interface at `https://coder.{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Coder stack consists of:
**Coder Control Plane**:
- Web application for workspace management
- API server for workspace provisioning
- Terraform executor for infrastructure operations
**PostgreSQL Database**:
- Single-instance CloudNativePG cluster
- Stores workspace metadata, templates, and user data
- Managed database user with `coder-db-user` secret
- 10Gi persistent storage on `csi-disk` storage class
**Networking**:
- ClusterIP service for internal communication
- Nginx ingress with TLS termination
- cert-manager integration for automatic certificate management
## Configuration
### Environment Variables
The Coder application is configured through environment variables in `values.yaml`:
**Access Configuration**:
- `CODER_ACCESS_URL`: Public URL where Coder is accessible (`https://coder.{DOMAIN_GITEA}`)
**Database Configuration**:
- `CODER_PG_CONNECTION_URL`: PostgreSQL connection string (from `coder-db-user` secret)
**Authentication**:
- `CODER_OAUTH2_GITHUB_DEFAULT_PROVIDER_ENABLE`: GitHub OAuth integration (disabled by default)
**Edge Connectivity**:
- `EDGE_CONNECT_ENDPOINT`: Edge connection endpoint (from `edge-credential` secret)
- `EDGE_CONNECT_USERNAME`: Edge authentication username
- `EDGE_CONNECT_PASSWORD`: Edge authentication password
### Helm Chart Configuration
Key Helm values configured in `stacks/coder/coder/values.yaml`:
```yaml
coder:
  env:
    - name: CODER_ACCESS_URL
      value: "https://coder.{DOMAIN_GITEA}"
    - name: CODER_PG_CONNECTION_URL
      valueFrom:
        secretKeyRef:
          name: coder-db-user
          key: uri
  service:
    type: ClusterIP
  ingress:
    enable: true
    className: nginx
    host: "coder.{DOMAIN_GITEA}"
    annotations:
      cert-manager.io/cluster-issuer: main
    tls:
      enable: true
      secretName: coder-tls-secret
```
**Important**: Do not override `CODER_HTTP_ADDRESS`, `CODER_TLS_ENABLE`, `CODER_TLS_CERT_FILE`, or `CODER_TLS_KEY_FILE` as these are managed by the Helm chart.
### PostgreSQL Database Configuration
Defined in `stacks/coder/coder/manifests/postgres.yaml`:
**Cluster Specification**:
- 1 instance (single-node cluster)
- Primary update strategy: unsupervised
- Resource requests/limits: 1 CPU, 1Gi memory
- Storage: 10Gi using `csi-disk` storage class
**Managed Roles**:
- User: `coder`
- Permissions: createdb, login
- Password stored in `coder-db-user` secret
### ArgoCD Application Configuration
**Registry Application** (`template/registry/coder.yaml`):
- Name: `coder-reg`
- Manages the Coder stack directory
- Automated sync with prune and self-heal enabled
**Stack Application** (`template/stacks/coder/coder.yaml`):
- Name: `coder`
- Deploys Coder Helm chart v2.28.3 from `https://helm.coder.com/v2`
- Automated self-healing enabled
- Creates namespace automatically
- References values from `stacks-instances` repository
## Usage Examples
### Creating a Workspace Template
After deployment, create workspace templates using Terraform:
1. **Access Coder Dashboard**
```bash
open https://coder.${DOMAIN_GITEA}
```
2. **Create Template Repository**
Create a Git repository with a Terraform template:
```hcl
# main.tf
terraform {
  required_providers {
    coder = {
      source  = "coder/coder"
      version = "~> 0.12"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
  }
}

# Workspace metadata (owner and name) referenced by the pod below
data "coder_workspace" "me" {}

resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

resource "kubernetes_pod" "main" {
  metadata {
    name      = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
    namespace = "coder-workspaces"
  }
  spec {
    container {
      name    = "dev"
      image   = "codercom/enterprise-base:ubuntu"
      command = ["sh", "-c", coder_agent.main.init_script]
    }
  }
}
```
3. **Push Template to Coder**
```bash
coder templates push kubernetes-dev
```
### Provisioning a Development Workspace
```bash
# Create a new workspace from template
coder create my-workspace --template kubernetes-dev
# Connect via SSH
coder ssh my-workspace
# Open in VS Code
coder open my-workspace --ide vscode
# Stop workspace when not in use
coder stop my-workspace
# Delete workspace
coder delete my-workspace
```
### Integrating with Platform Services
Access EDP platform services from Coder workspaces:
```bash
# Connect to platform PostgreSQL
psql "postgresql://myuser@postgres.core.svc.cluster.local:5432/mydb"
# Access Forgejo
git clone https://forgejo.${DOMAIN_GITEA}/myorg/myrepo.git
# Query platform metrics
curl https://grafana.${DOMAIN}/api/datasources
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration and CloudNativePG operator for database management
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Forgejo Stack**: Workspace templates can integrate with platform Git repositories
* **Observability Stack**: Workspace metrics can be collected by platform observability tools
* **Dex (SSO)**: Can be configured for centralized authentication (requires additional configuration)
## Troubleshooting
### Coder Pods Not Starting
**Problem**: Coder pods remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check PostgreSQL cluster status:
```bash
kubectl get cluster coder-db -n coder
kubectl describe cluster coder-db -n coder
```
2. Verify database credentials secret:
```bash
kubectl get secret coder-db-user -n coder
kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d
```
3. Check Coder logs:
```bash
kubectl logs -n coder -l app=coder
```
### Cannot Access Coder UI
**Problem**: Coder web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n coder
kubectl describe ingress -n coder
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n coder
kubectl describe certificate coder-tls-secret -n coder
```
3. Verify DNS resolution:
```bash
nslookup coder.${DOMAIN_GITEA}
```
### Database Connection Errors
**Problem**: Coder cannot connect to PostgreSQL database
**Solution**:
1. Verify PostgreSQL cluster health:
```bash
kubectl get pods -n coder -l cnpg.io/cluster=coder-db
kubectl logs -n coder -l cnpg.io/cluster=coder-db
```
2. Check database and user creation:
```bash
kubectl get database coder -n coder
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\l"
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\du"
```
3. Test connection string:
```bash
kubectl exec -it coder-db-1 -n coder -- psql "$(kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d)"
```
### Workspace Provisioning Fails
**Problem**: Workspaces fail to provision from templates
**Solution**:
1. Check Coder provisioner logs:
```bash
kubectl logs -n coder -l app=coder --tail=100
```
2. Verify Kubernetes permissions for workspace creation:
```bash
kubectl auth can-i create pods --as=system:serviceaccount:coder:coder -n coder-workspaces
```
3. Review template Terraform configuration for errors
## Additional Resources
* [Coder Documentation](https://coder.com/docs)
* [Coder Templates Repository](https://github.com/coder/coder)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [Coder Blog: 2025 Launch Week](https://coder.com/blog/launch-week-2025-instant-infrastructure)


@ -0,0 +1,480 @@
---
title: "Core"
linkTitle: "Core"
weight: 10
description: >
Essential infrastructure components for GitOps, database management, and single sign-on
---
## Overview
The Core stack provides foundational infrastructure components required by all other Edge Developer Platform stacks. It establishes the base layer for continuous deployment, database services, and centralized authentication, enabling a secure, scalable platform architecture.
The Core stack deploys ArgoCD for GitOps orchestration, CloudNativePG for PostgreSQL database management, and Dex for OpenID Connect single sign-on capabilities.
## Key Features
* **GitOps Continuous Deployment**: ArgoCD manages declarative infrastructure and application deployments
* **Database Operator**: CloudNativePG provides enterprise-grade PostgreSQL clusters for platform services
* **Single Sign-On**: Dex offers centralized OIDC authentication across platform components
* **Automated Synchronization**: Self-healing deployments with automatic drift correction
* **Role-Based Access Control**: Integrated RBAC for secure platform administration
* **TLS Certificate Management**: Automated certificate provisioning and renewal
## Repository
**Code**: [Core Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/core)
**Documentation**:
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [Dex Documentation](https://dexidp.io/docs/)
## Getting Started
### Prerequisites
* Kubernetes cluster (1.24+)
* kubectl configured with cluster access
* Ingress controller (nginx recommended)
* cert-manager for TLS certificate management
* Domain names configured for platform services
### Quick Start
The Core stack is deployed as the foundation of the EDP installation:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible (entering `test-me`, for example, yields the domains `argocd.test-me.t09.de` and `dex.test-me.t09.de`)
- Execute workflow
2. **ArgoCD Bootstrap**
The deployment automatically provisions:
- ArgoCD control plane in `argocd` namespace
- CloudNativePG operator in `cloudnative-pg` namespace
- Dex identity provider in `dex` namespace
- Ingress configurations with TLS certificates
- OIDC authentication integration
### Verification
Verify the Core stack deployment:
```bash
# Check ArgoCD installation
kubectl get application -n argocd
kubectl get pods -n argocd
# Verify CloudNativePG operator
kubectl get pods -n cloudnative-pg
kubectl get crd | grep cnpg.io
# Check Dex deployment
kubectl get pods -n dex
kubectl get ingress -n dex
# Verify ingress configurations
kubectl get ingress -n argocd
```
Access ArgoCD at `https://argocd.{DOMAIN}` and authenticate via Dex SSO. Alternatively, log in with username `admin` and the password stored in the `argocd/argocd-initial-admin-secret` Kubernetes secret: `kubectl get secret -n argocd argocd-initial-admin-secret -o json | jq -r .data.password | base64 -d`.
## Architecture
### Component Architecture
The Core stack establishes a three-tier foundation:
**ArgoCD Control Plane**:
- Application management and GitOps reconciliation
- Multi-repository tracking with automated sync
- Resource health monitoring and drift detection
- Integrated RBAC with SSO authentication
**CloudNativePG Operator**:
- PostgreSQL cluster lifecycle management
- Automated backup and recovery
- High availability and failover
- Storage provisioning via CSI drivers
**Dex Identity Provider**:
- OpenID Connect authentication service
- Multiple connector support (Forgejo/Gitea, LDAP, SAML)
- Static client registration for platform services
- Token issuance and validation
### Networking
**Ingress Architecture**:
- nginx ingress controller for external access
- TLS termination with cert-manager integration
- Domain-based routing for platform services
**Kubernetes Services**:
- Internal service communication via ClusterIP
- DNS-based service discovery
- Network policies for security segmentation
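The policies themselves are not shown in this stack's templates; as an illustration of the segmentation pattern, a minimal policy that only admits traffic from the ingress controller namespace could look like this (the policy name and namespaces are placeholders, not taken from the stack):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx   # hypothetical policy name
  namespace: argocd
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx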
## Configuration
### ArgoCD Configuration
Deployed via Helm chart v9.1.5 with custom values in `stacks/core/argocd/values.yaml`:
**OIDC Authentication**:
```yaml
configs:
cm:
url: "https://{DOMAIN_ARGOCD}"
oidc.config: |
name: Forgejo
issuer: https://{DOMAIN_DEX}
clientID: controller-argocd-dex
clientSecret: $dex-controller-argocd-dex:dex-controller-argocd-dex
requestedScopes: ["openid", "profile", "email", "groups"]
```
**RBAC Policy**:
```yaml
policy.csv: |
g, DevFW, role:admin
```
**Server Settings**:
- Insecure mode enabled (TLS handled by ingress)
- Annotation-based resource tracking
- 60-second reconciliation timeout
- Resource exclusions for ProviderConfigUsage and CiliumIdentity
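Expressed as Helm values, these settings correspond roughly to the following sketch (key names follow the upstream argo-cd chart; the `apiGroups` wildcard in the exclusion list is an assumption):
```yaml
configs:
  params:
    server.insecure: true               # TLS is terminated at the ingress
  cm:
    application.resourceTrackingMethod: annotation
    timeout.reconciliation: 60s
    resource.exclusions: |
      - apiGroups: ["*"]
        kinds: ["ProviderConfigUsage", "CiliumIdentity"]
```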
### CloudNativePG Configuration
Deployed via Helm chart v0.26.1 with values in `stacks/core/cloudnative-pg/values.yaml`:
**Operator Settings**:
- Namespace: `cloudnative-pg`
- Automated database cluster provisioning
- Custom resource definitions for Cluster, Database, and Pooler resources
**Storage Configuration**:
- Uses `csi-disk` storage class by default
- PVC provisioning for PostgreSQL data
- Backup storage integration (S3-compatible)
### Dex Configuration
Deployed via Helm chart v0.23.0 with values in `stacks/core/dex/values.yaml`:
**Issuer Configuration**:
```yaml
config:
issuer: https://{DOMAIN_DEX}
storage:
type: memory # Use persistent storage for production
oauth2:
skipApprovalScreen: true
alwaysShowLoginScreen: false
```
**Forgejo Connector**:
```yaml
connectors:
- type: gitea
id: forgejo
name: Forgejo
config:
clientID: $FORGEJO_CLIENT_ID
clientSecret: $FORGEJO_CLIENT_SECRET
redirectURI: https://{DOMAIN_DEX}/callback
baseURL: https://edp.buildth.ing
orgs:
- name: DevFW
```
**Static OAuth2 Clients**:
- ArgoCD: `controller-argocd-dex`
- Grafana: `controller-grafana-dex`
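Following the `staticClients` shape shown under Usage Examples below, the ArgoCD client registration plausibly looks like this (the redirect path `/auth/callback` is ArgoCD's standard OIDC callback; the secret environment variable name is a placeholder):
```yaml
staticClients:
  - id: controller-argocd-dex
    name: ArgoCD
    redirectURIs:
      - 'https://{DOMAIN_ARGOCD}/auth/callback'
    secretEnv: ARGOCD_CLIENT_SECRET   # hypothetical env var holding the secret
```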
### Environment Variables
Core stack services use the following environment variables:
**Domain Configuration**:
- `DOMAIN_ARGOCD`: ArgoCD web interface URL
- `DOMAIN_DEX`: Dex authentication service URL
- `DOMAIN_GITEA`: Forgejo/Gitea repository URL
- `DOMAIN_GRAFANA`: Grafana observability dashboard URL
**Repository Configuration**:
- `CLIENT_REPO_ID`: Repository identifier for stack configurations
- `CLIENT_REPO_DOMAIN`: Git repository domain
- `CLIENT_REPO_ORG_NAME`: Organization name for stack instances
## Usage Examples
### Managing Applications with ArgoCD
Access and manage applications through ArgoCD:
```bash
# Login to ArgoCD CLI
argocd login argocd.${DOMAIN} --sso
# List all applications
argocd app list
# Get application status
argocd app get coder
# Sync application manually
argocd app sync coder
# View application logs
argocd app logs coder
# Diff application state
argocd app diff coder
```
### Creating a PostgreSQL Database
Deploy a PostgreSQL cluster using CloudNativePG:
```yaml
# database-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: app-db
namespace: my-app
spec:
instances: 3
storage:
size: 20Gi
storageClass: csi-disk
postgresql:
parameters:
max_connections: "100"
shared_buffers: "256MB"
bootstrap:
initdb:
database: appdb
owner: appuser
```
Apply the configuration:
```bash
kubectl apply -f database-cluster.yaml
# Check cluster status
kubectl get cluster app-db -n my-app
kubectl get pods -n my-app -l cnpg.io/cluster=app-db
# Get connection credentials
kubectl get secret app-db-app -n my-app -o jsonpath='{.data.password}' | base64 -d
```
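Beyond the bootstrap database, the operator's `Database` CRD (listed under the operator settings above) lets you add databases to an existing cluster declaratively. A sketch, assuming the deployed CloudNativePG version supports declarative database management:
```yaml
# database.yaml -- hypothetical additional database on the app-db cluster
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: reports-db
  namespace: my-app
spec:
  cluster:
    name: app-db      # existing Cluster from the example above
  name: reportsdb     # database name inside PostgreSQL
  owner: appuser
```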
### Configuring SSO for Applications
Add OAuth2 applications to Dex for SSO integration:
```yaml
# Add to dex values.yaml
staticClients:
- id: my-app-client
redirectURIs:
- 'https://myapp.{DOMAIN}/callback'
name: 'My Application'
secretEnv: MY_APP_CLIENT_SECRET
```
Configure the application to use Dex:
```bash
# Application OIDC configuration
OIDC_ISSUER=https://dex.${DOMAIN}
OIDC_CLIENT_ID=my-app-client
OIDC_CLIENT_SECRET=${MY_APP_CLIENT_SECRET}
OIDC_REDIRECT_URI=https://myapp.${DOMAIN}/callback
```
### Deploying Applications via ArgoCD
Create an ArgoCD Application manifest:
```yaml
# my-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
project: default
source:
repoURL: 'https://github.com/myorg/my-app'
targetRevision: main
path: k8s
destination:
server: 'https://kubernetes.default.svc'
namespace: my-app
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
Push the manifest to the [stacks instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances) repository, where ArgoCD will pick it up.
## Integration Points
* **All Stacks**: Core stack is a prerequisite for all other EDP stacks
* **OTC Stack**: Provides ingress-nginx and cert-manager dependencies
* **Coder Stack**: Uses CloudNativePG for workspace database management
* **Forgejo Stack**: Integrates with Dex for SSO and ArgoCD for deployment
* **Observability Stack**: Uses Dex for Grafana authentication and ArgoCD for deployment
* **Provider Stack**: Deploys Terraform providers via ArgoCD
## Troubleshooting
### ArgoCD Not Accessible
**Problem**: Cannot access ArgoCD web interface
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n argocd
kubectl describe ingress -n argocd
```
2. Check ArgoCD server status:
```bash
kubectl get pods -n argocd
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server
```
3. Verify TLS certificate:
```bash
kubectl get certificate -n argocd
kubectl describe certificate -n argocd
```
4. Test DNS resolution:
```bash
nslookup argocd.${DOMAIN}
```
### Dex Authentication Failing
**Problem**: SSO login fails or redirects incorrectly
**Solution**:
1. Check Dex logs:
```bash
kubectl logs -n dex -l app.kubernetes.io/name=dex
```
2. Verify Forgejo connector configuration:
```bash
kubectl get secret -n dex
kubectl get configmap -n dex dex -o yaml
```
3. Test Dex issuer endpoint:
```bash
curl https://dex.${DOMAIN}/.well-known/openid-configuration
```
4. Verify OAuth2 client credentials match in both Dex and consuming application
### CloudNativePG Operator Not Running
**Problem**: PostgreSQL clusters fail to provision
**Solution**:
1. Check operator status:
```bash
kubectl get pods -n cloudnative-pg
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg
```
2. Verify CRDs are installed:
```bash
kubectl get crd | grep cnpg.io
kubectl describe crd clusters.postgresql.cnpg.io
```
3. Check operator logs for errors:
```bash
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg --tail=100
```
### Application Sync Failures
**Problem**: ArgoCD applications remain out of sync or fail to deploy
**Solution**:
1. Check application status:
```bash
argocd app get <app-name>
kubectl describe application <app-name> -n argocd
```
2. Review sync operation logs:
```bash
argocd app logs <app-name>
```
3. Verify repository access:
```bash
argocd repo list
argocd repo get <repo-url>
```
4. Check for resource conflicts or missing dependencies:
```bash
kubectl get events -n <app-namespace> --sort-by='.lastTimestamp'
```
### Database Connection Issues
**Problem**: Applications cannot connect to CloudNativePG databases
**Solution**:
1. Verify cluster is ready:
```bash
kubectl get cluster <cluster-name> -n <namespace>
kubectl describe cluster <cluster-name> -n <namespace>
```
2. Check database credentials secret:
```bash
kubectl get secret <cluster-name>-app -n <namespace>
kubectl get secret <cluster-name>-app -n <namespace> -o yaml
```
3. Test connection from a pod:
```bash
kubectl run -it --rm psql-test --image=postgres:16 --restart=Never -- \
psql "$(kubectl get secret <cluster-name>-app -n <namespace> -o jsonpath='{.data.uri}' | base64 -d)"
```
4. Review PostgreSQL logs:
```bash
kubectl logs -n <namespace> <cluster-name>-1
```
## Additional Resources
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [CloudNativePG Architecture](https://cloudnative-pg.io/documentation/current/architecture/)
* [Dex Documentation](https://dexidp.io/docs/)
* [Dex Connectors](https://dexidp.io/docs/connectors/)
* [OpenID Connect Specification](https://openid.net/connect/)

---
title: "Forgejo"
linkTitle: "Forgejo"
weight: 30
description: >
Self-hosted Git service with built-in CI/CD capabilities
---
## Overview
Forgejo is a self-hosted Git service that provides repository hosting, code collaboration, and integrated CI/CD workflows. As part of the Edge Developer Platform, Forgejo serves as the central code repository and continuous integration system, offering a complete DevOps platform with Git hosting, issue tracking, and automated build pipelines.
The Forgejo stack deploys a Forgejo server instance with PostgreSQL database backend, MinIO object storage, and Forgejo Runners for executing CI/CD workflows.
## Key Features
* **Git Repository Hosting**: Full-featured Git server with web interface for code management
* **Built-in CI/CD**: Forgejo Actions provide GitHub Actions-compatible workflow automation
* **Issue Tracking**: Integrated project management with issues, milestones, and pull requests
* **Container Registry**: Built-in Docker registry for container image storage
* **Code Review**: Pull request workflows with inline comments and approval processes
* **Scalable Runners**: Distributed runner architecture with Docker-in-Docker execution
* **S3 Object Storage**: MinIO integration for artifacts, LFS objects, and backups
## Repository
**Code**: [Forgejo Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo)
**Documentation**:
* [Forgejo Official Documentation](https://forgejo.org/docs/latest/)
* [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)
* [Forgejo Helm Chart Repository](https://code.forgejo.org/forgejo-helm/forgejo-helm)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* CloudNativePG operator (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Infrastructure deployed through [Infra Deploy](https://edp.buildth.ing/DevFW/infra-deploy)
### Quick Start
The Forgejo stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible (entering `test-me`, for example, yields the domain `forgejo.test-me.t09.de`)
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Forgejo server (Helm chart v12.0.0)
- PostgreSQL database cluster (CloudNativePG)
- Forgejo Runners with Docker-in-Docker execution
- Ingress configuration with TLS
- Database credentials and storage secrets
### Verification
Verify the Forgejo deployment:
```bash
# Check ArgoCD applications status
kubectl get application forgejo-server -n argocd
kubectl get application forgejo-runner -n argocd
# Verify Forgejo server pods are running
kubectl get pods -n gitea
# Check PostgreSQL cluster status
kubectl get cluster -n gitea
# Verify Forgejo runners are active
kubectl get pods -n gitea -l app=forgejo-runner
# Verify ingress configuration
kubectl get ingress -n gitea
```
Access the Forgejo web interface at `https://{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Forgejo stack consists of:
**Forgejo Server**:
- Web application for Git repository management
- API server for Git operations and CI/CD orchestration
- Issue tracker and project management interface
- Container registry for Docker images
- Artifact storage via MinIO object storage
**Forgejo Runners**:
- 3-replica runner deployment for parallel job execution
- Docker-in-Docker (DinD) architecture for containerized builds
- Runner image: `code.forgejo.org/forgejo/runner:6.4.0`
- Build container: `docker:28.0.4-dind`
- Supports GitHub Actions-compatible workflows
**Storage Architecture**:
- 200Gi persistent volume for Git repositories (GPSSD storage)
- OTC S3 object storage for LFS objects and artifacts
- Encrypted volumes using KMS key integration
- S3-compatible backup storage (100GB)
**Networking**:
- SSH LoadBalancer service on port 32222 for Git operations
- HTTPS ingress with TLS termination for web interface
- Internal service communication via ClusterIP
## Configuration
### Forgejo Server Configuration
The Forgejo server is configured through Helm values in `stacks/forgejo/forgejo-server/values.yaml`:
**Application Settings**:
- `FORGEJO_IMAGE_TAG`: Forgejo container image version
- Application name: "EDP"
- Slogan: "Build your thing in minutes"
- User registration: Disabled by default
- Email notifications: Enabled
**Storage Configuration**:
```yaml
persistence:
size: 200Gi
storageClass: csi-disk
annotations:
everest.io/crypt-key-id: "{KMS_KEY_ID}"
everest.io/disk-volume-type: GPSSD
```
**Database Configuration**:
Database credentials are sourced from Kubernetes secrets:
- `POSTGRES_HOST`: PostgreSQL hostname
- `POSTGRES_DB`: Database name
- `POSTGRES_USER`: Database username
- `POSTGRES_PASSWORD`: Database password
- SSL verification enabled
**Object Storage**:
- Endpoint: `obs.eu-de.otc.t-systems.com`
- Credentials from `gitea/forgejo-cloud-credentials` secret
- Used for artifacts, LFS objects, and backups
**External Services**:
- Redis for caching and session management
- Elasticsearch for issue indexing
- SMTP for email notifications
**SSH Configuration**:
```yaml
service:
ssh:
type: LoadBalancer
port: 32222
```
### Forgejo Runner Configuration
Defined in `stacks/forgejo/forgejo-runner/dind-docker.yaml`:
**Deployment Specification**:
- 3 replicas for parallel execution
- Runner version: 6.4.0
- Docker DinD version: 28.0.4
**Runner Registration**:
- Offline registration using secret token
- Instance URL from configuration
- Predefined labels for Ubuntu 22.04 and latest
**Container Configuration**:
```yaml
runner:
image: code.forgejo.org/forgejo/runner:6.4.0
privileged: true
securityContext:
runAsUser: 0
allowPrivilegeEscalation: true
dind:
image: docker:28.0.4-dind
privileged: true
tlsCertDir: /certs
```
**Volume Management**:
- Docker certificates volume for TLS communication
- Runner data volume for registration and configuration
- Shared socket for container communication
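The runner reaches the DinD sidecar over that shared certificates volume using the conventional `docker:dind` TLS settings. A sketch of the typical environment wiring (values follow the docker:dind image defaults, not necessarily this stack's exact manifest):
```yaml
env:
  - name: DOCKER_HOST
    value: tcp://localhost:2376   # DinD sidecar in the same pod
  - name: DOCKER_TLS_VERIFY
    value: "1"
  - name: DOCKER_CERT_PATH
    value: /certs/client          # client certs from the shared volume
```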
### ArgoCD Application Configuration
**Server Application** (`template/stacks/forgejo/forgejo-server.yaml`):
- Name: `forgejo-server`
- Namespace: `gitea`
- Helm chart v12.0.0 from `https://code.forgejo.org/forgejo-helm/forgejo-helm.git`
- Automated self-healing enabled
- Values from `stacks-instances` repository
**Runner Application** (`template/stacks/forgejo/forgejo-runner.yaml`):
- Name: `forgejo-runner`
- Namespace: `argocd`
- Deployment manifests from `stacks-instances` repository
- Automated sync with unlimited retries
## Usage Examples
### Creating Your First Repository
After deployment, create and use Git repositories:
1. **Access Forgejo Interface**
```bash
open https://${DOMAIN_GITEA}
```
2. **Create a New Repository**
- Click "+" icon in top right
- Select "New Repository"
- Enter repository name and description
- Choose visibility (public/private)
- Initialize with README if desired
3. **Clone and Push Code**
```bash
# Clone the repository
git clone https://${DOMAIN_GITEA}/myorg/myrepo.git
cd myrepo
# Add your code
echo "# My Project" > README.md
git add README.md
git commit -m "Initial commit"
# Push to Forgejo
git push origin main
```
### Setting Up CI/CD with Forgejo Actions
Create automated workflows using Forgejo Actions:
1. **Create Workflow File**
```bash
mkdir -p .forgejo/workflows
cat > .forgejo/workflows/build.yaml << 'EOF'
name: Build and Test
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
- name: Build
run: go build -v ./...
- name: Test
run: go test -v ./...
EOF
```
2. **Commit and Push Workflow**
```bash
git add .forgejo/workflows/build.yaml
git commit -m "Add CI/CD workflow"
git push origin main
```
3. **Monitor Workflow Execution**
- Navigate to repository in Forgejo web interface
- Click "Actions" tab
- View workflow runs and logs
### Building and Publishing Container Images
Use Forgejo to build and store Docker images:
```yaml
# .forgejo/workflows/docker.yaml
name: Build Container Image
on:
push:
tags: ['v*']
jobs:
build:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Build image
run: |
docker build -t ${DOMAIN_GITEA}/myorg/myapp:${GITHUB_REF_NAME} .
- name: Login to registry
run: |
echo "${{ secrets.REGISTRY_PASSWORD }}" | \
docker login ${DOMAIN_GITEA} -u "${{ secrets.REGISTRY_USER }}" --password-stdin
- name: Push image
run: |
docker push ${DOMAIN_GITEA}/myorg/myapp:${GITHUB_REF_NAME}
```
### Using SSH for Git Operations
Configure SSH access for Git operations:
```bash
# Generate SSH key if needed
ssh-keygen -t ed25519 -C "your_email@example.com"
# Add public key to Forgejo
# Navigate to: Settings -> SSH / GPG Keys -> Add Key
# Configure SSH host
cat >> ~/.ssh/config << EOF
Host ${DOMAIN_GITEA}
Port 32222
User git
EOF
# Clone repository via SSH
git clone ssh://git@${DOMAIN_GITEA}:32222/myorg/myrepo.git
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration and CloudNativePG operator for database management
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Coder Stack**: Development workspaces can clone repositories and trigger CI/CD workflows
* **Observability Stack**: Prometheus metrics collection enabled via ServiceMonitor
* **Dex (SSO)**: Can be configured for centralized authentication integration
## Troubleshooting
### Forgejo Server Not Starting
**Problem**: Forgejo server pods remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check PostgreSQL cluster status:
```bash
kubectl get cluster -n gitea
kubectl describe cluster -n gitea
```
2. Verify database credentials:
```bash
kubectl get secret -n gitea | grep postgres
```
3. Check Forgejo server logs:
```bash
kubectl logs -n gitea -l app=forgejo
```
4. Verify MinIO connectivity:
```bash
kubectl get secret minio-credential -n gitea
kubectl logs -n gitea -l app=forgejo | grep -i minio
```
### Cannot Access Forgejo Web Interface
**Problem**: Forgejo web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n gitea
kubectl describe ingress -n gitea
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n gitea
kubectl describe certificate -n gitea
```
3. Verify DNS resolution:
```bash
nslookup ${DOMAIN_GITEA}
```
4. Test service connectivity:
```bash
kubectl port-forward -n gitea svc/forgejo-http 3000:3000
curl http://localhost:3000
```
### Git Operations Fail Over SSH
**Problem**: Cannot clone or push repositories via SSH
**Solution**:
1. Verify SSH service is exposed:
```bash
kubectl get svc -n gitea -l app=forgejo
```
2. Check LoadBalancer external IP:
```bash
kubectl get svc -n gitea forgejo-ssh -o wide
```
3. Test SSH connectivity:
```bash
ssh -T -p 32222 git@${DOMAIN_GITEA}
```
4. Verify SSH public key is added to Forgejo account
### Forgejo Runners Not Executing Jobs
**Problem**: CI/CD workflows remain queued or fail to execute
**Solution**:
1. Check runner pod status:
```bash
kubectl get pods -n gitea -l app=forgejo-runner
kubectl logs -n gitea -l app=forgejo-runner
```
2. Verify runner registration:
```bash
kubectl exec -n gitea -it deployment/forgejo-runner -- \
forgejo-runner status
```
3. Check Docker-in-Docker daemon:
```bash
kubectl logs -n gitea -l app=forgejo-runner -c dind
```
4. Verify runner token secret exists:
```bash
kubectl get secret -n gitea | grep runner
```
5. Check Forgejo server can communicate with runners:
```bash
kubectl logs -n gitea -l app=forgejo | grep -i runner
```
### Database Connection Errors
**Problem**: Forgejo cannot connect to PostgreSQL database
**Solution**:
1. Verify PostgreSQL cluster health:
```bash
kubectl get pods -n gitea -l cnpg.io/cluster
kubectl logs -n gitea -l cnpg.io/cluster
```
2. Test database connection:
```bash
kubectl exec -n gitea -it <postgres-pod> -- \
psql -U postgres -c "\l"
```
3. Verify database credentials secret:
```bash
kubectl get secret -n gitea -o yaml | grep POSTGRES
```
4. Check database connection from Forgejo pod:
```bash
kubectl exec -n gitea -it <forgejo-pod> -- \
nc -zv <postgres-host> 5432
```
### Storage Issues
**Problem**: Repository pushes fail or object storage errors occur
**Solution**:
1. Check PVC status and capacity:
```bash
kubectl get pvc -n gitea
kubectl describe pvc -n gitea
```
2. Verify MinIO credentials and connectivity:
```bash
kubectl get secret minio-credential -n gitea
kubectl logs -n gitea -l app=forgejo | grep -i "s3\|minio"
```
3. Check available storage space:
```bash
kubectl exec -n gitea -it <forgejo-pod> -- df -h
```
4. Review storage class configuration:
```bash
kubectl get storageclass csi-disk -o yaml
```
## Additional Resources
* [Forgejo Documentation](https://forgejo.org/docs/latest/)
* [Forgejo Actions User Guide](https://forgejo.org/docs/latest/user/actions/)
* [Forgejo Helm Chart Documentation](https://code.forgejo.org/forgejo-helm/forgejo-helm)
* [Forgejo Runner Documentation](https://code.forgejo.org/forgejo/runner)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)

---
title: "Observability Client"
linkTitle: "Observability Client"
weight: 60
description: >
Core observability components for metrics collection, log aggregation, and monitoring
---
## Overview
The Observability Client stack provides essential monitoring and observability infrastructure for Kubernetes environments. As part of the Edge Developer Platform, it deploys client-side components that collect, process, and forward metrics and logs to centralized observability systems.
The stack integrates three core components: Kubernetes Metrics Server for resource metrics, Vector for log collection and forwarding, and Victoria Metrics for comprehensive metrics monitoring and alerting.
## Key Features
* **Resource Metrics**: Real-time CPU and memory metrics via Kubernetes Metrics Server
* **Log Aggregation**: Unified log collection and forwarding with Vector
* **Metrics Monitoring**: Comprehensive metrics collection, storage, and alerting with Victoria Metrics
* **Prometheus Compatibility**: Full Prometheus protocol support for metrics scraping
* **Multi-Tenant Support**: Configurable tenant isolation for metrics and logs
* **Automated Alerting**: Pre-configured alert rules with Alertmanager integration
* **Grafana Integration**: Built-in dashboard provisioning and datasource configuration
## Repository
**Code**: [Observability Client Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/observability-client)
**Documentation**:
* [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
* [Vector Documentation](https://vector.dev/docs/)
* [Victoria Metrics Documentation](https://docs.victoriametrics.com/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* cert-manager for certificate management (provided by `otc` stack)
* Observability backend services for receiving metrics and logs
### Quick Start
The Observability Client stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible.
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Metrics Server (Helm chart v3.12.2)
- Vector agent (Helm chart v0.43.0)
- Victoria Metrics k8s-stack (Helm chart v0.48.1)
- ServiceMonitor resources for Prometheus scraping
- Authentication secrets for remote write endpoints
### Verification
Verify the Observability Client deployment:
```bash
# Check ArgoCD application status
kubectl get application -n argocd | grep -E "metrics-server|vector|vm-client"
# Verify Metrics Server is running
kubectl get pods -n observability -l app.kubernetes.io/name=metrics-server
# Test metrics API
kubectl top nodes
kubectl top pods -A
# Verify Vector pods are running
kubectl get pods -n observability -l app.kubernetes.io/name=vector
# Check Victoria Metrics components
kubectl get pods -n observability -l app.kubernetes.io/name=victoria-metrics-k8s-stack
# Verify ServiceMonitor resources
kubectl get servicemonitor -n observability
```
## Architecture
### Component Architecture
The Observability Client stack consists of three integrated components:
**Metrics Server**:
- Collects resource metrics (CPU, memory) from kubelet
- Provides Metrics API for kubectl top and HPA
- Lightweight aggregator for cluster-wide resource usage
- Exposes ServiceMonitor for Prometheus scraping
**Vector Agent**:
- DaemonSet deployment for log collection across all nodes
- Processes and transforms Kubernetes logs
- Forwards logs to centralized Elasticsearch backend
- Injects cluster metadata and environment information
- Supports compression and bulk operations
**Victoria Metrics Stack**:
- VMAgent: Scrapes metrics from Kubernetes components and applications
- VMAlertmanager: Manages alert routing and notifications
- VMOperator: Manages VictoriaMetrics CRDs and lifecycle
- Integration with remote Victoria Metrics storage
- Supports multi-tenant metrics isolation
### Data Flow
```
Kubernetes Resources → Metrics Server → Metrics API
ServiceMonitor → VMAgent → Remote VictoriaMetrics
Application Logs → Vector Agent → Transform → Remote Elasticsearch
Prometheus Exporters → VMAgent → Remote VictoriaMetrics → VMAlertmanager
```
## Configuration
### Metrics Server Configuration
Configured in `stacks/observability-client/metrics-server/values.yaml`:
```yaml
metrics:
enabled: true
serviceMonitor:
enabled: true
```
**Key Settings**:
- Enables metrics collection endpoint
- Exposes ServiceMonitor for Prometheus-compatible scraping
- Deployed via Helm chart from `https://kubernetes-sigs.github.io/metrics-server/`
### Vector Configuration
Configured in `stacks/observability-client/vector/values.yaml`:
**Role**: Agent (DaemonSet deployment across nodes)
**Authentication**:
Credentials sourced from `simple-user-secret`:
- `VECTOR_USER`: Username for remote write authentication
- `VECTOR_PASSWORD`: Password for remote write authentication
**Data Sources**:
- `k8s`: Collects Kubernetes container logs
- `internal_metrics`: Gathers Vector internal metrics
**Log Processing** (`parser` transform):
- Parses JSON from log messages
- Injects cluster environment metadata
- Removes the original message field
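In Vector's configuration this is typically a `remap` transform written in VRL. A minimal sketch, assuming the source is named `k8s` as listed above and an environment variable such as `CLUSTER_ENVIRONMENT` (name is an assumption) carries the cluster metadata:
```yaml
transforms:
  parser:
    type: remap
    inputs: [k8s]
    source: |
      # Parse the JSON payload, merge it into the event, drop the raw message
      parsed, err = parse_json(.message)
      if err == null && is_object(parsed) {
        . = merge!(., parsed)
        del(.message)
      }
      # Inject cluster metadata (env var name is an assumption)
      .cluster_environment = "${CLUSTER_ENVIRONMENT}"
```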
**Output Sink**:
- Elasticsearch bulk API (v8)
- Basic authentication with environment variables
- Gzip compression enabled
- Custom headers: AccountID and ProjectID
### Victoria Metrics Stack Configuration
Configured in `stacks/observability-client/vm-client-stack/values.yaml`:
**Operator Settings**:
- Enabled with admission webhooks
- Managed by cert-manager for ArgoCD compatibility
**VMAgent Configuration**:
- Basic authentication for remote write
- Credentials from `vm-remote-write-secret`
- Stream parsing enabled
- Drop original labels to reduce memory footprint
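In the chart values, this pairing of remote write and basic auth typically looks like the following sketch (field names follow the VictoriaMetrics operator's VMAgent CRD; the URL is a placeholder):
```yaml
vmagent:
  spec:
    remoteWrite:
      - url: https://victoriametrics.example.com/api/v1/write
        basicAuth:
          username:
            name: vm-remote-write-secret   # secret referenced above
            key: username
          password:
            name: vm-remote-write-secret
            key: password
```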
**Monitoring Targets**:
- Node exporter for hardware metrics
- kube-state-metrics for Kubernetes object states
- Kubelet metrics (cadvisor)
- Kubernetes control plane components (API server, etcd, scheduler, controller manager)
- CoreDNS metrics
**Alertmanager Integration**:
- Slack notification templates
- Configurable routing rules
- TLS support for secure communication
**Storage Options**:
- VMSingle: Single-node deployment
- VMCluster: Distributed deployment with replication
- Configurable retention period
## ArgoCD Application Configuration
**Metrics Server Application** (`template/stacks/observability-client/metrics-server.yaml`):
- Name: `metrics-server`
- Chart version: 3.12.2
- Automated sync with self-heal enabled
- Namespace: `observability`
**Vector Application** (`template/stacks/observability-client/vector.yaml`):
- Name: `vector`
- Chart version: 0.43.0
- Automated sync with self-heal enabled
- Namespace: `observability`
**Victoria Metrics Application** (`template/stacks/observability-client/vm-client-stack.yaml`):
- Name: `vm-client`
- Chart version: 0.48.1
- Automated sync with self-heal enabled
- Namespace: `observability`
- References manifests from instance repository
## Usage Examples
### Querying Resource Metrics
Access resource metrics collected by Metrics Server:
```bash
# View node resource usage
kubectl top nodes
# View pod resource usage across all namespaces
kubectl top pods -A
# View pod resource usage in specific namespace
kubectl top pods -n observability
# Sort pods by CPU usage
kubectl top pods -A --sort-by=cpu
# Sort pods by memory usage
kubectl top pods -A --sort-by=memory
```
### Using Metrics for Autoscaling
Create Horizontal Pod Autoscaler based on metrics:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
```
### Accessing Application Logs
Vector automatically collects logs from all containers. View logs in your centralized Elasticsearch/Kibana:
```bash
# Logs are automatically forwarded to Elasticsearch
# Access via Kibana dashboard or Elasticsearch API
# Example: Query logs via Elasticsearch API
curl -u $VECTOR_USER:$VECTOR_PASSWORD \
-X GET "https://elasticsearch.example.com/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": {
"kubernetes.namespace": "my-namespace"
}
}
}'
```
### Querying Victoria Metrics
Query metrics collected by Victoria Metrics:
```bash
# Access Victoria Metrics query API
# Metrics are forwarded to remote Victoria Metrics instance
# Example PromQL queries:
# - Container CPU usage: container_cpu_usage_seconds_total
# - Pod memory usage: container_memory_usage_bytes
# - Node disk I/O: node_disk_io_time_seconds_total
# Query via Victoria Metrics API
curl -X POST https://victoriametrics.example.com/api/v1/query \
-d 'query=up' \
-d 'time=2025-12-16T00:00:00Z'
```
### Creating Custom ServiceMonitors
Expose application metrics for collection:
```yaml
apiVersion: v1
kind: Service
metadata:
name: myapp-metrics
labels:
app: myapp
spec:
ports:
- name: metrics
port: 8080
targetPort: 8080
selector:
app: myapp
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: myapp-monitor
namespace: observability
spec:
selector:
matchLabels:
app: myapp
endpoints:
- port: metrics
path: /metrics
interval: 30s
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires cert-manager for certificate management
* **Observability Stack**: Forwards metrics and logs to centralized observability backend
* **All Application Stacks**: Collects metrics and logs from all platform applications
## Troubleshooting
### Metrics Server Not Responding
**Problem**: `kubectl top` commands fail or return no data
**Solution**:
1. Check Metrics Server pod status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=metrics-server
kubectl logs -n observability -l app.kubernetes.io/name=metrics-server
```
2. Verify kubelet metrics endpoint:
```bash
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```
3. Check ServiceMonitor configuration:
```bash
kubectl get servicemonitor -n observability -o yaml
```
### Vector Not Forwarding Logs
**Problem**: Logs are not appearing in Elasticsearch
**Solution**:
1. Check Vector agent status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=vector
kubectl logs -n observability -l app.kubernetes.io/name=vector --tail=50
```
2. Verify authentication secret:
```bash
kubectl get secret simple-user-secret -n observability
kubectl get secret simple-user-secret -n observability -o jsonpath='{.data.username}' | base64 -d
```
3. Test Elasticsearch connectivity:
```bash
kubectl exec -it -n observability $(kubectl get pod -n observability -l app.kubernetes.io/name=vector -o jsonpath='{.items[0].metadata.name}') -- \
curl -u $VECTOR_USER:$VECTOR_PASSWORD https://elasticsearch.example.com/_cluster/health
```
4. Check Vector internal metrics:
```bash
kubectl port-forward -n observability svc/vector 9090:9090
curl http://localhost:9090/metrics
```
### Victoria Metrics Not Scraping
**Problem**: Metrics are not being collected or forwarded
**Solution**:
1. Check VMAgent status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=vmagent
kubectl logs -n observability -l app.kubernetes.io/name=vmagent
```
2. Verify remote write secret:
```bash
kubectl get secret vm-remote-write-secret -n observability
kubectl get secret vm-remote-write-secret -n observability -o jsonpath='{.data.username}' | base64 -d
```
3. Check ServiceMonitor targets:
```bash
kubectl get servicemonitor -n observability
kubectl describe servicemonitor metrics-server -n observability
```
4. Verify operator is running:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=victoria-metrics-operator
kubectl logs -n observability -l app.kubernetes.io/name=victoria-metrics-operator
```
### High Memory Usage
**Problem**: Victoria Metrics or Vector consuming excessive memory
**Solution**:
1. For Victoria Metrics, verify `dropOriginalLabels` is enabled:
```bash
kubectl get vmagent -n observability -o yaml | grep dropOriginalLabels
```
2. Reduce scrape intervals for high-cardinality metrics:
```yaml
# Edit ServiceMonitor
spec:
endpoints:
- interval: 60s # Increase from 30s
```
3. Filter unnecessary logs in Vector:
```yaml
# Add filter transform to Vector configuration
transforms:
filter:
type: filter
condition: '.kubernetes.namespace != "kube-system"'
```
4. Check resource limits:
```bash
kubectl describe pod -n observability -l app.kubernetes.io/name=vmagent
kubectl describe pod -n observability -l app.kubernetes.io/name=vector
```
### Certificate Issues
**Problem**: TLS certificate errors in logs
**Solution**:
1. Verify cert-manager is running:
```bash
kubectl get pods -n cert-manager
```
2. Check certificate status:
```bash
kubectl get certificate -n observability
kubectl describe certificate -n observability
```
3. Review webhook configuration:
```bash
kubectl get validatingwebhookconfigurations | grep victoria-metrics
kubectl get mutatingwebhookconfigurations | grep victoria-metrics
```
4. Restart operator if needed:
```bash
kubectl rollout restart deployment victoria-metrics-operator -n observability
```
## Additional Resources
* [Kubernetes Metrics Server Documentation](https://github.com/kubernetes-sigs/metrics-server)
* [Vector Documentation](https://vector.dev/docs/)
* [Victoria Metrics Documentation](https://docs.victoriametrics.com/)
* [Victoria Metrics Operator](https://docs.victoriametrics.com/operator/)
* [Prometheus Operator API](https://prometheus-operator.dev/docs/operator/api/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)

---
title: "Observability"
linkTitle: "Observability"
weight: 50
description: >
Comprehensive monitoring, metrics, and logging for Kubernetes infrastructure
---
## Overview
The Observability stack provides enterprise-grade monitoring, metrics collection, and logging capabilities for the Edge Developer Platform. Built on VictoriaMetrics and Grafana, it offers a complete observability solution with pre-configured dashboards, alerting, and SSO integration.
The stack deploys VictoriaMetrics for metrics storage and querying, Grafana for visualization, VictoriaLogs for log aggregation, and VMAuth for authenticated access to monitoring endpoints.
## Key Features
* **Metrics Collection**: VictoriaMetrics-based Kubernetes monitoring with long-term storage
* **Visualization**: Grafana with pre-built dashboards for ArgoCD, Ingress-Nginx, and infrastructure components
* **Log Aggregation**: VictoriaLogs for centralized logging with Grafana integration
* **SSO Integration**: OAuth authentication through Dex with role-based access control
* **Alerting**: Alertmanager with email notifications for critical events
* **Secure Access**: TLS-enabled ingress with authentication proxy (VMAuth)
* **Persistent Storage**: Encrypted volumes with configurable retention policies
## Repository
**Code**: [Observability Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/observability)
**Documentation**:
* [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
* [Grafana Documentation](https://grafana.com/docs/)
* [Grafana Operator Documentation](https://grafana.github.io/grafana-operator/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Dex SSO provider (provided by `core` stack)
* Infrastructure deployed through [Infra Deploy](https://edp.buildth.ing/DevFW/infra-deploy)
### Quick Start
The Observability stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible (entering `test-me`, for example, yields the domains `vmauth.test-me.t09.de` and `grafana.test-me.t09.de`)
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- VictoriaMetrics Operator and components
- VictoriaMetrics Single (metrics storage)
- VMAuth (authentication proxy)
- Alertmanager (alerting)
- Grafana Operator
- Grafana instance with OAuth
- VictoriaLogs datasource
- Pre-configured dashboards
- Ingress configurations with TLS
### Verification
Verify the Observability deployment:
```bash
# Check ArgoCD applications status
kubectl get application grafana-operator -n argocd
kubectl get application victoria-k8s-stack -n argocd
# Verify VictoriaMetrics components are running
kubectl get pods -n observability
# Check Grafana instance status
kubectl get grafana grafana -n observability
# Verify ingress configurations
kubectl get ingress -n observability
```
Access the monitoring interfaces:
* Grafana: `https://{DOMAIN_GRAFANA}`
* VMAuth (authenticated metrics endpoint): `https://{DOMAIN_O12Y}`
## Architecture
### Component Architecture
The Observability stack consists of multiple integrated components:
**VictoriaMetrics Components**:
- **VictoriaMetrics Operator**: Manages VictoriaMetrics custom resources
- **VictoriaMetrics Single**: Standalone metrics storage with 20Gi storage and 1-month retention
- **VMAgent**: Scrapes metrics from Kubernetes components (kubelet, CoreDNS, kube-apiserver, etcd)
- **VMAuth**: Authentication proxy on port 8427 for secure metrics access
- **VMAlertmanager**: Handles alert routing and notifications
**Grafana Components**:
- **Grafana Operator**: Manages Grafana instances and dashboards as Kubernetes resources
- **Grafana Instance**: Web application for metrics visualization with OAuth authentication
- **Pre-configured Dashboards**: ArgoCD, Ingress-Nginx, VictoriaLogs monitoring
**Logging**:
- **VictoriaLogs**: Log aggregation service integrated as Grafana datasource
**Storage**:
- VictoriaMetrics Single: 20Gi persistent storage on `csi-disk` storage class
- Grafana: 10Gi persistent storage on `csi-disk` storage class with KMS encryption
- Configurable retention: 1 month for metrics, minimum 24 hours enforced
**Networking**:
- Nginx ingress with TLS termination for Grafana and VMAuth
- cert-manager integration for automatic certificate management
- Internal ClusterIP services for component communication
## Configuration
### VictoriaMetrics Configuration
Key configuration in `stacks/observability/victoria-k8s-stack/values.yaml`:
**Operator Settings**:
```yaml
victoria-metrics-operator:
enabled: true
operator:
enable_converter_ownership: true
admissionWebhooks:
certManager:
enabled: true
issuer:
name: main
```
**Storage Configuration**:
```yaml
vmsingle:
enabled: true
spec:
retentionPeriod: "1"
storage:
storageClassName: csi-disk
resources:
requests:
storage: 20Gi
```
**VMAuth Configuration**:
```yaml
vmauth:
enabled: true
spec:
port: "8427"
ingress:
enabled: true
ingressClassName: nginx
hosts:
- name: "{{{ .Env.DOMAIN_O12Y }}}"
tls:
- secretName: vmauth-tls-secret
hosts:
- "{{{ .Env.DOMAIN_O12Y }}}"
annotations:
cert-manager.io/cluster-issuer: main
```
**Monitoring Targets**:
- Kubelet (cadvisor, probes, resources metrics)
- CoreDNS
- etcd
- kube-apiserver
**Disabled Collectors** (to avoid alerts on managed clusters):
- kube-controller-manager
- kube-scheduler
- kube-proxy
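These collectors are switched off via the standard component toggles of the victoria-metrics-k8s-stack chart; a sketch (key names follow the upstream chart):
```yaml
kubeControllerManager:
  enabled: false
kubeScheduler:
  enabled: false
kubeProxy:
  enabled: false
```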
### Alertmanager Configuration
Email alerting configured in `values.yaml`:
```yaml
alertmanager:
spec:
externalURL: "https://{{{ .Env.DOMAIN_O12Y }}}"
configSecret: vmalertmanager-config
config:
route:
routes:
- matchers:
- severity =~ "critical|major"
receiver: mail
receivers:
- name: 'mail'
email_configs:
- to: 'alerts@example.com'
from: 'monitoring@example.com'
smarthost: 'mail.mms-support.de:465'
auth_username:
name: email-user-credentials
key: username
auth_password:
name: email-user-credentials
key: password
```
### Grafana Configuration
Grafana instance configuration in `stacks/observability/grafana-operator/manifests/grafana.yaml`:
**OAuth/SSO Integration**:
```yaml
config:
auth.generic_oauth:
enabled: "true"
disable_login_form: "true"
client_id: "$__env{GF_AUTH_GENERIC_OAUTH_CLIENT_ID}"
client_secret: "$__env{GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET}"
scopes: "openid email profile offline_access groups"
auth_url: "https://dex.{DOMAIN}/auth"
token_url: "https://dex.{DOMAIN}/token"
api_url: "https://dex.{DOMAIN}/userinfo"
role_attribute_path: "contains(groups[*], 'DevFW') && 'Admin' || 'Viewer'"
```
**Storage**:
```yaml
deployment:
spec:
template:
spec:
volumes:
- name: grafana-data
persistentVolumeClaim:
claimName: grafana-pvc
persistentVolumeClaim:
spec:
storageClassName: csi-disk
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
**Ingress**:
```yaml
ingress:
spec:
ingressClassName: nginx
rules:
- host: "{{{ .Env.DOMAIN_GRAFANA }}}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana-service
port:
number: 3000
tls:
- hosts:
- "{{{ .Env.DOMAIN_GRAFANA }}}"
secretName: grafana-tls-secret
```
### ArgoCD Application Configuration
**Grafana Operator Application** (`template/stacks/observability/grafana-operator.yaml`):
- Name: `grafana-operator`
- Chart: `grafana-operator` v5.18.0 from `ghcr.io/grafana/helm-charts`
- Automated sync with self-healing enabled
- Namespace: `observability`
**VictoriaMetrics Stack Application** (`template/stacks/observability/victoria-k8s-stack.yaml`):
- Name: `victoria-k8s-stack`
- Chart: `victoria-metrics-k8s-stack` v0.48.1 from `https://victoriametrics.github.io/helm-charts/`
- Automated self-healing enabled
- Creates namespace automatically
## Usage Examples
### Accessing Grafana
Access Grafana through SSO:
1. **Navigate to Grafana**
```bash
open https://${DOMAIN_GRAFANA}
```
2. **Authenticate via Dex**
- Click "Sign in with OAuth"
- Authenticate through configured identity provider
- Users in `DevFW` group receive Admin role, others receive Viewer role
### Querying Metrics
Query VictoriaMetrics directly:
```bash
# Access VMAuth endpoint
curl -u username:password https://${DOMAIN_O12Y}/api/v1/query \
-d 'query=up' | jq
# Query pod CPU usage
curl -u username:password https://${DOMAIN_O12Y}/api/v1/query \
-d 'query=container_cpu_usage_seconds_total' | jq
# Query with time range
curl -u username:password https://${DOMAIN_O12Y}/api/v1/query_range \
-d 'query=container_memory_usage_bytes' \
-d 'start=2024-01-01T00:00:00Z' \
-d 'end=2024-01-01T23:59:59Z' \
-d 'step=5m' | jq
```
### Creating Custom Dashboards
Create custom Grafana dashboards as Kubernetes resources:
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: custom-app-dashboard
namespace: observability
spec:
instanceSelector:
matchLabels:
dashboards: "grafana"
json: |
{
  "title": "Custom Application Metrics",
  "panels": [
    {
      "title": "Request Rate",
      "targets": [
        {
          "expr": "rate(http_requests_total[5m])",
          "datasource": "VictoriaMetrics"
        }
      ]
    }
  ]
}
```
Apply the dashboard:
```bash
kubectl apply -f custom-dashboard.yaml
```
### Viewing Logs in Grafana
Access VictoriaLogs through Grafana:
1. Navigate to Grafana `https://${DOMAIN_GRAFANA}`
2. Go to Explore
3. Select "VictoriaLogs" datasource
4. Use LogsQL queries:
```
{namespace="default"}
{app="nginx"} "error"
{namespace="observability"} level:error
```
### Setting Up Custom Alerts
Create custom alert rules using VMRule:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMRule
metadata:
name: custom-app-alerts
namespace: observability
spec:
groups:
- name: custom-app
interval: 30s
rules:
- alert: HighErrorRate
expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate detected"
description: "Error rate is {{ $value }} requests/sec"
```
Push the alert rule to the [stacks instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances/src/branch/main/otc/observability.t09.de/stacks/observability/victoria-k8s-stack/manifests) repository so that ArgoCD deploys it.
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Dex (SSO)**: Integrated for Grafana authentication with role-based access control
* **All Platform Services**: Automatically collects metrics from Kubernetes components and platform services
* **Application Stacks**: Provides monitoring for Coder, Forgejo, and other deployed services
## Troubleshooting
### VictoriaMetrics Pods Not Starting
**Problem**: VictoriaMetrics components remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check VictoriaMetrics resources:
```bash
kubectl get vmsingle,vmagent,vmalertmanager -n observability
kubectl describe vmsingle vmsingle -n observability
```
2. Verify persistent volume claims:
```bash
kubectl get pvc -n observability
kubectl describe pvc vmstorage-vmsingle-0 -n observability
```
3. Check operator logs:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=victoria-metrics-operator
```
### Grafana Not Accessible
**Problem**: Grafana web interface is not accessible at configured URL
**Solution**:
1. Verify Grafana instance status:
```bash
kubectl get grafana grafana -n observability
kubectl describe grafana grafana -n observability
```
2. Check Grafana pod logs:
```bash
kubectl logs -n observability -l app=grafana
```
3. Verify ingress configuration:
```bash
kubectl get ingress -n observability
kubectl describe ingress grafana-ingress -n observability
```
4. Check TLS certificate status:
```bash
kubectl get certificate -n observability
kubectl describe certificate grafana-tls-secret -n observability
```
### OAuth Authentication Failing
**Problem**: Cannot authenticate to Grafana via SSO
**Solution**:
1. Verify Dex is running:
```bash
kubectl get pods -n dex -l app=dex
kubectl logs -n dex -l app=dex
```
2. Check OAuth client secret:
```bash
kubectl get secret dex-grafana-client -n observability
kubectl describe secret dex-grafana-client -n observability
```
3. Review Grafana OAuth configuration:
```bash
kubectl get grafana grafana -n observability -o yaml | grep -A 20 auth.generic_oauth
```
4. Check Grafana logs for OAuth errors:
```bash
kubectl logs -n observability -l app=grafana | grep -i oauth
```
### Metrics Not Appearing
**Problem**: Metrics not showing up in Grafana or VictoriaMetrics
**Solution**:
1. Check VMAgent scraping status:
```bash
kubectl get vmagent -n observability
kubectl logs -n observability -l app.kubernetes.io/name=vmagent
```
2. Verify service monitors are created:
```bash
kubectl get vmservicescrape -n observability
kubectl get vmpodscrape -n observability
```
3. Check target endpoints:
```bash
# Access VMAgent UI (port-forward if needed)
kubectl port-forward -n observability svc/vmagent 8429:8429
open http://localhost:8429/targets
```
4. Verify VictoriaMetrics Single is accepting data:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=vmsingle
```
### Alerts Not Sending
**Problem**: Alertmanager not sending email notifications
**Solution**:
1. Verify Alertmanager configuration:
```bash
kubectl get vmalertmanager -n observability
kubectl describe vmalertmanager vmalertmanager -n observability
```
2. Check email credentials secret:
```bash
kubectl get secret email-user-credentials -n observability
kubectl describe secret email-user-credentials -n observability
```
3. Review Alertmanager logs:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=vmalertmanager
```
4. Test alert firing manually:
```bash
# Access Alertmanager UI
kubectl port-forward -n observability svc/vmalertmanager 9093:9093
open http://localhost:9093
```
### High Storage Usage
**Problem**: VictoriaMetrics storage running out of space
**Solution**:
1. Check current storage usage:
```bash
kubectl exec -it -n observability vmsingle-0 -- df -h /storage
```
2. Reduce retention period in `values.yaml`:
```yaml
vmsingle:
spec:
retentionPeriod: "15d" # Reduce from 1 month
```
3. Increase PVC size:
```bash
kubectl patch pvc vmstorage-vmsingle-0 -n observability \
-p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
```
4. Monitor storage metrics in Grafana for capacity planning
## Additional Resources
* [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
* [VictoriaMetrics Operator Documentation](https://docs.victoriametrics.com/operator/)
* [Grafana Documentation](https://grafana.com/docs/grafana/latest/)
* [Grafana Operator Documentation](https://grafana.github.io/grafana-operator/docs/)
* [VictoriaLogs Documentation](https://docs.victoriametrics.com/victorialogs/)
* [Prometheus Querying Basics](https://prometheus.io/docs/prometheus/latest/querying/basics/)
* [PromQL for VictoriaMetrics](https://docs.victoriametrics.com/metricsql/)

---
title: "OTC"
linkTitle: "OTC"
weight: 10
description: >
Open Telekom Cloud infrastructure components for ingress, TLS, and storage
---
## Overview
The OTC (Open Telekom Cloud) stack provides essential infrastructure components for deploying applications on Open Telekom Cloud environments. It configures ingress routing, automated TLS certificate management, and cloud-native storage provisioning tailored specifically for OTC's Kubernetes infrastructure.
This stack serves as a foundational layer that other platform stacks depend on for external access, secure communication, and persistent storage.
## Key Features
* **Automated TLS Certificate Management**: Let's Encrypt integration via cert-manager for automatic certificate provisioning and renewal
* **Cloud Load Balancer Integration**: Nginx ingress controller configured with OTC-specific Elastic Load Balancer (ELB) annotations
* **Native Storage Provisioning**: Default StorageClass using Huawei FlexVolume provisioner for block storage
* **Prometheus Metrics**: Built-in monitoring capabilities for ingress traffic and performance
* **High Availability**: Rolling update strategy with minimal downtime
* **HTTP-01 Challenge Support**: ACME validation through ingress for certificate issuance
## Repository
**Code**: [OTC Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/otc)
**Documentation**:
* [cert-manager Documentation](https://cert-manager.io/docs/)
* [ingress-nginx Documentation](https://kubernetes.github.io/ingress-nginx/)
* [Open Telekom Cloud Documentation](https://docs.otc.t-systems.com/)
## Getting Started
### Prerequisites
* Kubernetes cluster running on Open Telekom Cloud
* ArgoCD installed (provided by `core` stack)
* Environment variables configured:
- `LOADBALANCER_ID`: OTC Elastic Load Balancer ID
- `LOADBALANCER_IP`: OTC Elastic Load Balancer IP address
- `CLIENT_REPO_DOMAIN`: Git repository domain
- `CLIENT_REPO_ORG_NAME`: Git repository organization
- `CLIENT_REPO_ID`: Client repository identifier
- `DOMAIN`: Domain name for the environment
### Quick Start
The OTC stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". The name must be DNS-compatible.
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- cert-manager with ClusterIssuer for Let's Encrypt
- ingress-nginx controller with OTC load balancer integration
- Default StorageClass for OTC block storage
### Verification
Verify the OTC stack deployment:
```bash
# Check ArgoCD applications status
kubectl get application otc -n argocd
kubectl get application cert-manager -n argocd
kubectl get application ingress-nginx -n argocd
kubectl get application storageclass -n argocd
# Verify cert-manager pods
kubectl get pods -n cert-manager
# Check ingress-nginx controller
kubectl get pods -n ingress-nginx
# Verify ClusterIssuer status
kubectl get clusterissuer main
# Check StorageClass
kubectl get storageclass default
```
## Architecture
### Component Architecture
The OTC stack consists of three primary components:
**cert-manager**:
- Automates TLS certificate lifecycle management
- Integrates with Let's Encrypt ACME server (production endpoint)
- Uses HTTP-01 challenge validation via ingress
- Creates and manages certificates as Kubernetes resources
- Single replica deployment
**ingress-nginx**:
- Kubernetes ingress controller based on Nginx
- Routes external traffic to internal services
- Integrated with OTC Elastic Load Balancer (ELB)
- Supports TLS termination with cert-manager issued certificates
- Rolling update strategy with max 1 unavailable pod
- Prometheus metrics exporter with ServiceMonitor
**StorageClass**:
- Default storage provisioner for persistent volumes
- Uses Huawei FlexVolume driver (`flexvolume-huawei.com/fuxivol`)
- SATA block storage type
- Immediate volume binding mode
- Supports dynamic volume expansion
### Integration Flow
```
External Traffic → OTC ELB → ingress-nginx → Kubernetes Services
                                  ↑
                  cert-manager (TLS certificates)
                                  ↑
                      Let's Encrypt ACME
```
## Configuration
### cert-manager Configuration
**Helm Values** (`stacks/otc/cert-manager/values.yaml`):
```yaml
crds:
  enabled: true
replicaCount: 1
```
**ClusterIssuer** (`stacks/otc/cert-manager/manifests/clusterissuer.yaml`):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: main
spec:
  acme:
    email: admin@think-ahead.tech
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cluster-issuer-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```
**Key Settings**:
- CRDs installed automatically
- Production Let's Encrypt ACME endpoint
- HTTP-01 validation through nginx ingress
- ClusterIssuer named `main` for cluster-wide certificate issuance
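Before relying on automatic issuance, it can help to confirm that the issuer has registered successfully with Let's Encrypt. A minimal check, assuming `kubectl` access to the cluster:
```bash
# Wait until the ClusterIssuer reports Ready (ACME account registered)
kubectl wait --for=condition=Ready clusterissuer/main --timeout=120s

# Inspect the registration details and any error conditions
kubectl describe clusterissuer main
```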
### ingress-nginx Configuration
**Helm Values** (`stacks/otc/ingress-nginx/values.yaml`):
```yaml
controller:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  service:
    annotations:
      kubernetes.io/elb.class: union
      kubernetes.io/elb.port: '80'
      kubernetes.io/elb.id: {{{ .Env.LOADBALANCER_ID }}}
      kubernetes.io/elb.ip: {{{ .Env.LOADBALANCER_IP }}}
  ingressClassResource:
    name: nginx
  allowSnippetAnnotations: true
  config:
    proxy-buffer-size: 32k
    use-forwarded-headers: "true"
  metrics:
    enabled: true
    serviceMonitor:
      additionalLabels:
        release: "ingress-nginx"
      enabled: true
```
**Key Settings**:
- **OTC Load Balancer Integration**: Annotations configure connection to OTC ELB
- **Rolling Updates**: Minimizes downtime with 1 pod unavailable during updates
- **Snippet Annotations**: Enabled for advanced ingress configuration (idpbuilder compatibility)
- **Proxy Buffer**: 32k buffer size for handling large headers
- **Forwarded Headers**: Preserves original client information through proxies
- **Metrics**: Prometheus ServiceMonitor for observability
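To spot-check the metrics exporter independently of the Prometheus stack, you can port-forward to the controller and scrape it directly. A sketch, assuming the chart-default deployment name and metrics port:
```bash
# Forward the controller's metrics port to localhost (chart default: 10254)
kubectl port-forward -n ingress-nginx deploy/ingress-nginx-controller 10254:10254 &
sleep 2

# Fetch a sample of request metrics
curl -s http://localhost:10254/metrics | grep nginx_ingress_controller_requests | head
```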
### StorageClass Configuration
**StorageClass** (`stacks/otc/storageclass/storageclass.yaml`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: default
parameters:
  kubernetes.io/hw:passthrough: "true"
  kubernetes.io/storagetype: BS
  kubernetes.io/volumetype: SATA
  kubernetes.io/zone: eu-de-02
provisioner: flexvolume-huawei.com/fuxivol
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
**Key Settings**:
- **Default StorageClass**: Automatically used when no StorageClass specified
- **OTC Zone**: Provisioned in `eu-de-02` availability zone
- **SATA Volumes**: Block storage (BS) with SATA performance tier
- **Volume Expansion**: Supports resizing persistent volumes dynamically
- **Reclaim Policy**: Volumes deleted when PersistentVolumeClaim is removed
### ArgoCD Application Configuration
**Registry Application** (`template/registry/otc.yaml`):
- Name: `otc`
- Manages the OTC stack directory
- Automated sync with prune and self-heal enabled
- Creates namespaces automatically
**Component Applications**:
**cert-manager** (referenced in stack):
- Deploys cert-manager Helm chart
- Automated self-healing enabled
- Includes ClusterIssuer manifest for Let's Encrypt
**ingress-nginx** (`template/stacks/otc/ingress-nginx.yaml`):
- Deploys from official Kubernetes ingress-nginx repository
- Chart version: helm-chart-4.12.1
- References environment-specific values from stacks-instances repository
**storageclass** (`template/stacks/otc/storageclass.yaml`):
- Deploys StorageClass manifest
- Managed as ArgoCD Application
- Automated sync with unlimited retries
## Usage Examples
### Creating an Ingress with Automatic TLS
Create an ingress resource that automatically provisions a TLS certificate:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
cert-manager will automatically:
1. Detect the ingress with `cert-manager.io/cluster-issuer` annotation
2. Create a Certificate resource
3. Request certificate from Let's Encrypt using HTTP-01 challenge
4. Store certificate in `myapp-tls` secret
5. Renew certificate before expiration
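To follow this flow for the example above (assuming the ingress was applied to `my-namespace`):
```bash
# Watch the Certificate resource created for the ingress
kubectl get certificate -n my-namespace -w

# Once Ready, confirm the issued certificate in the TLS secret
kubectl get secret myapp-tls -n my-namespace -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -issuer -enddate
```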
### Creating a PersistentVolumeClaim
Use the default OTC StorageClass for persistent storage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default  # the OTC stack's default StorageClass
```
### Expanding an Existing Volume
Resize a persistent volume by editing the PVC:
```bash
# Edit the PVC storage request
kubectl patch pvc my-data -n my-namespace -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# Verify expansion
kubectl get pvc my-data -n my-namespace
```
The volume will expand automatically due to `allowVolumeExpansion: true` in the StorageClass.
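Expansion progress is reported through the claim's status conditions and events; a quick way to follow it:
```bash
# Conditions show e.g. 'Resizing' or 'FileSystemResizePending' while in progress
kubectl get pvc my-data -n my-namespace -o jsonpath='{.status.conditions}'

# Events on the claim record each resize step
kubectl get events -n my-namespace --field-selector involvedObject.name=my-data
```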
### Custom Ingress Configuration
Use nginx ingress snippets for advanced routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-app
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Custom-Header: value";
      if ($http_user_agent ~* "bot") {
        return 403;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
```
## Integration Points
* **Core Stack**: Requires ArgoCD for deployment orchestration
* **All Application Stacks**: Depends on OTC stack for:
- External access via ingress-nginx
- TLS certificates via cert-manager
- Persistent storage via default StorageClass
* **Observability Stack**: ingress-nginx metrics exported to Prometheus
* **Coder Stack**: Uses ingress and cert-manager for workspace access
* **Forgejo Stack**: Requires ingress and TLS for Git repository access
## Troubleshooting
### Certificate Issuance Fails
**Problem**: Certificate remains in `Pending` state and is not issued
**Solution**:
1. Check Certificate status:
```bash
kubectl get certificate -A
kubectl describe certificate <cert-name> -n <namespace>
```
2. Verify ClusterIssuer is ready:
```bash
kubectl get clusterissuer main
kubectl describe clusterissuer main
```
3. Check cert-manager logs:
```bash
kubectl logs -n cert-manager -l app=cert-manager
```
4. Verify HTTP-01 challenge can reach ingress:
```bash
kubectl get challenges -A
kubectl describe challenge <challenge-name> -n <namespace>
```
5. Common issues:
- DNS not pointing to load balancer IP
- Firewall blocking HTTP (port 80) traffic
- Ingress class not set to `nginx`
- Let's Encrypt rate limits exceeded
### Ingress Controller Not Ready
**Problem**: ingress-nginx pods are not running or LoadBalancer service has no external IP
**Solution**:
1. Check ingress controller status:
```bash
kubectl get pods -n ingress-nginx
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller
```
2. Verify LoadBalancer service:
```bash
kubectl get svc -n ingress-nginx
kubectl describe svc ingress-nginx-controller -n ingress-nginx
```
3. Check OTC load balancer annotations:
```bash
kubectl get svc ingress-nginx-controller -n ingress-nginx -o yaml
```
4. Verify environment variables are set correctly:
- `LOADBALANCER_ID` matches OTC ELB ID
- `LOADBALANCER_IP` matches ELB public IP
5. Check OTC console for ELB configuration and health checks
### Storage Provisioning Fails
**Problem**: PersistentVolumeClaim remains in `Pending` state
**Solution**:
1. Check PVC status:
```bash
kubectl get pvc -A
kubectl describe pvc <pvc-name> -n <namespace>
```
2. Verify StorageClass exists and is default:
```bash
kubectl get storageclass
kubectl describe storageclass default
```
3. Check volume provisioner logs:
```bash
kubectl logs -n kube-system -l app=csi-disk-plugin
```
4. Common issues:
- Insufficient quota in OTC project
- Invalid zone configuration (must be `eu-de-02`)
- Requested storage size exceeds limits
- Missing IAM permissions for volume creation
### Ingress Returns 503 Service Unavailable
**Problem**: Ingress configured but returns 503 error
**Solution**:
1. Verify backend service exists:
```bash
kubectl get svc <service-name> -n <namespace>
kubectl get endpoints <service-name> -n <namespace>
```
2. Check if pods are ready:
```bash
kubectl get pods -n <namespace> -l <service-selector>
```
3. Verify ingress configuration:
```bash
kubectl describe ingress <ingress-name> -n <namespace>
```
4. Check nginx ingress logs:
```bash
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100
```
5. Test service connectivity from ingress controller:
```bash
kubectl exec -n ingress-nginx <controller-pod> -- curl http://<service-name>.<namespace>.svc.cluster.local:<port>
```
### TLS Certificate Shows as Invalid
**Problem**: Browser shows certificate warning or certificate details are incorrect
**Solution**:
1. Verify certificate is ready:
```bash
kubectl get certificate <cert-name> -n <namespace>
```
2. Check certificate contents:
```bash
kubectl get secret <tls-secret-name> -n <namespace> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout
```
3. Ensure certificate covers the correct domain:
```bash
kubectl describe certificate <cert-name> -n <namespace>
```
4. Force certificate renewal if expired or incorrect:
```bash
kubectl delete certificate <cert-name> -n <namespace>
# cert-manager will automatically recreate it
```
## Additional Resources
* [cert-manager Documentation](https://cert-manager.io/docs/)
* [ingress-nginx User Guide](https://kubernetes.github.io/ingress-nginx/user-guide/)
* [Open Telekom Cloud Documentation](https://docs.otc.t-systems.com/)
* [Let's Encrypt Documentation](https://letsencrypt.org/docs/)
* [Kubernetes Ingress Concepts](https://kubernetes.io/docs/concepts/services-networking/ingress/)
* [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)

View file

@ -0,0 +1,418 @@
---
title: "Terralist"
linkTitle: "Terralist"
weight: 21
description: >
Private Terraform Module and Provider Registry with OAuth authentication
---
## Overview
Terralist is an open-source private Terraform registry for modules and providers that implements the HashiCorp registry protocol. As part of the Edge Developer Platform, Terralist enables teams to securely store, version, and distribute internal Terraform modules and providers with built-in authentication and documentation capabilities.
The Terralist stack deploys a self-hosted instance with OAuth2 authentication, persistent storage, and integrated ingress for secure access.
## Key Features
* **Private Module Registry**: Securely host and distribute confidential Terraform modules and providers
* **HashiCorp Protocol Compatible**: Works seamlessly with `terraform` CLI and standard registry workflows
* **OAuth2 Authentication**: Integrated OIDC authentication supporting `terraform login` command
* **Documentation Interface**: Web UI to visualize artifacts with automatic module documentation
* **Flexible Storage**: Supports local storage or remote cloud buckets with presigned URLs
* **Git Integration**: Works with mono-repositories while leveraging Terraform version attributes
* **API Management**: RESTful API for programmatic module and provider management
## Repository
**Code**: [Terralist Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/terralist)
**Documentation**:
* [Terralist Official Documentation](https://www.terralist.io/)
* [Terralist GitHub Repository](https://github.com/terralist/terralist)
* [Getting Started Guide](https://www.terralist.io/getting-started/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Domain name configured via `DOMAIN_GITEA` environment variable
* OAuth2 provider configured (Dex or external provider)
### Quick Start
The Terralist stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (e.g., if you enter `test-me`, the domain will be `terralist.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Terralist application (Helm chart v0.8.1)
- Persistent volume for module storage
- Ingress configuration with TLS
- OAuth2 credentials and configuration
### Verification
Verify the Terralist deployment:
```bash
# Check ArgoCD application status
kubectl get application terralist -n argocd
# Verify Terralist pods are running
kubectl get pods -n terralist
# Check persistent volume claim
kubectl get pvc -n terralist
# Verify ingress configuration
kubectl get ingress -n terralist
```
Access the Terralist web interface at `https://terralist.{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Terralist stack consists of:
**Terralist Application**:
- Web interface for module and provider management
- REST API for programmatic access
- OAuth2 authentication handler
- Module documentation renderer
**Storage Layer**:
- SQLite database for metadata and configuration
- Local filesystem storage for modules and providers
- Persistent volume with 10Gi capacity on `csi-disk` storage class
- Optional cloud bucket integration for remote storage
**Networking**:
- Nginx ingress with TLS termination
- cert-manager integration for automatic certificate management
- OAuth2 callback endpoint configuration
## Configuration
### Environment Variables
The Terralist application is configured through environment variables in `values.yaml`:
**OAuth2 Configuration**:
- `TERRALIST_AUTHORITY_URL`: OIDC provider authority URL (from `terralist-oidc-secrets` secret)
- `TERRALIST_CLIENT_ID`: OAuth2 client identifier
- `TERRALIST_CLIENT_SECRET`: OAuth2 client secret
- `TERRALIST_TOKEN_SIGNING_SECRET`: Secret for token signing and validation
**Storage Configuration**:
- SQLite database at `/data/database.db`
- Module storage at `/data/modules`
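To confirm where Terralist keeps its state, you can look inside the pod's data volume. A sketch; the pod name is a placeholder:
```bash
# Find the Terralist pod
kubectl get pods -n terralist -l app.kubernetes.io/name=terralist

# Inspect the SQLite database and module storage on the persistent volume
kubectl exec -n terralist <terralist-pod> -- ls -lh /data /data/modules
```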
### Helm Chart Configuration
Key Helm values configured in `stacks/terralist/terralist/values.yaml`:
```yaml
controllers:
  main:
    strategy: Recreate
    containers:
      main:
        env:
          - name: TERRALIST_AUTHORITY_URL
            valueFrom:
              secretKeyRef:
                name: terralist-oidc-secrets
                key: authority_url
          - name: TERRALIST_CLIENT_ID
            valueFrom:
              secretKeyRef:
                name: terralist-oidc-secrets
                key: client_id
ingress:
  main:
    enabled: true
    className: nginx
    hosts:
      - host: "terralist.{DOMAIN_GITEA}"
        paths:
          - path: /
            service:
              identifier: main
    annotations:
      cert-manager.io/cluster-issuer: main
    tls:
      - secretName: terralist-tls-secret
        hosts:
          - "terralist.{DOMAIN_GITEA}"
persistence:
  data:
    enabled: true
    size: 10Gi
    storageClass: csi-disk
    accessMode: ReadWriteOnce
```
### ArgoCD Application Configuration
**Registry Application** (`template/registry/terralist.yaml`):
- Name: `terralist-reg`
- Manages the Terralist stack directory
- Automated sync with prune and self-heal enabled
**Stack Application** (`template/stacks/terralist/terralist.yaml`):
- Name: `terralist`
- Deploys Terralist Helm chart v0.8.1 from `https://github.com/terralist/helm-charts`
- Automated self-healing enabled
- Creates namespace automatically
- References values from `stacks-instances` repository
## Usage Examples
### Authenticating with Terralist
Configure Terraform CLI to use your private registry:
```bash
# Authenticate using OAuth2
terraform login terralist.${DOMAIN_GITEA}
# This opens a browser window for OAuth2 authentication
# After successful login, credentials are stored in ~/.terraform.d/credentials.tfrc.json
```
### Publishing a Module
Publish a module to your private registry:
1. **Create Module Structure**
```
my-module/
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md
```
2. **Tag and Push via API**
```bash
# Package module
tar -czf my-module-1.0.0.tar.gz my-module/
# Upload to Terralist (requires authentication token)
curl -X POST https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider/1.0.0 \
-H "Authorization: Bearer ${TERRALIST_TOKEN}" \
-F "file=@my-module-1.0.0.tar.gz"
```
### Consuming Private Modules
Use modules from your private registry in Terraform configurations:
```hcl
# Configure Terraform to use private registry
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Reference module from private registry
module "vpc" {
  source  = "terralist.${DOMAIN_GITEA}/my-org/vpc/aws"
  version = "1.0.0"

  cidr_block  = "10.0.0.0/16"
  environment = "production"
}
```
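After `terraform login`, initializing the configuration should fetch the module straight from the registry; a quick check:
```bash
# Downloads the vpc module from the private registry into .terraform/modules
terraform init

# The module manifest records the resolved registry source and version
cat .terraform/modules/modules.json
```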
### Browsing Module Documentation
Access the Terralist web interface to view module documentation:
```bash
# Open Terralist UI
open https://terralist.${DOMAIN_GITEA}
# Browse available modules
# - View module versions
# - Read generated documentation
# - Access module sources
# - Copy usage examples
```
### Managing Modules via API
```bash
# List all modules
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules
# Get specific module versions
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider
# Delete a module version
curl -X DELETE -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider/1.0.0
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Dex (SSO)**: Integrates with platform OAuth2 provider for authentication
* **Forgejo Stack**: Modules can be sourced from platform Git repositories
* **Observability Stack**: Application metrics can be collected by platform monitoring tools
## Troubleshooting
### Terralist Pod Not Starting
**Problem**: Terralist pod remains in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check persistent volume claim status:
```bash
kubectl get pvc -n terralist
kubectl describe pvc data-terralist-0 -n terralist
```
2. Verify OAuth2 credentials secret:
```bash
kubectl get secret terralist-oidc-secrets -n terralist
kubectl describe secret terralist-oidc-secrets -n terralist
```
3. Check Terralist logs:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist
```
### Cannot Access Terralist UI
**Problem**: Terralist web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n terralist
kubectl describe ingress -n terralist
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n terralist
kubectl describe certificate terralist-tls-secret -n terralist
```
3. Verify DNS resolution:
```bash
nslookup terralist.${DOMAIN_GITEA}
```
### OAuth2 Authentication Fails
**Problem**: `terraform login` or web authentication fails
**Solution**:
1. Verify OAuth2 configuration in secret:
```bash
kubectl get secret terralist-oidc-secrets -n terralist -o yaml
```
2. Check OAuth2 provider (Dex) is accessible:
```bash
curl https://dex.${DOMAIN_GITEA}/.well-known/openid-configuration
```
3. Verify callback URL is correctly configured in OAuth2 provider:
```
Expected callback: https://terralist.${DOMAIN_GITEA}/auth/cli/callback
```
4. Check Terralist logs for authentication errors:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist | grep -i auth
```
### Module Upload Fails
**Problem**: Cannot upload modules via API or UI
**Solution**:
1. Verify authentication token is valid:
```bash
# Test token with API call
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules
```
2. Check persistent volume has available space:
```bash
kubectl exec -n terralist -it terralist-0 -- df -h /data
```
3. Verify module package format is correct:
```bash
# Module should be a gzipped tar archive
tar -tzf my-module-1.0.0.tar.gz
```
4. Review upload logs:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist --tail=50
```
### Terraform Cannot Download Modules
**Problem**: `terraform init` fails to download modules from private registry
**Solution**:
1. Verify authentication credentials exist:
```bash
cat ~/.terraform.d/credentials.tfrc.json
```
2. Re-authenticate if needed:
```bash
terraform logout terralist.${DOMAIN_GITEA}
terraform login terralist.${DOMAIN_GITEA}
```
3. Test module availability via API:
```bash
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider
```
4. Check module source URL format in Terraform configuration:
```hcl
# Correct format
source = "terralist.${DOMAIN_GITEA}/org/module/provider"
# Not: https://terralist.${DOMAIN_GITEA}/...
```
## Additional Resources
* [Terralist Documentation](https://www.terralist.io/)
* [Terralist GitHub Repository](https://github.com/terralist/terralist)
* [Terraform Registry Protocol](https://developer.hashicorp.com/terraform/internals/module-registry-protocol)
* [Private Module Registries Guide](https://developer.hashicorp.com/terraform/registry/private)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)

View file

@ -0,0 +1,100 @@
---
title: Terraform-based deployment of EDP
linkTitle: Terraform
weight: 10
description: >
As-code definitions of EDP clusters, so they can be deployed reliably and consistently on OTC whenever needed.
---
## Overview
The [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) and [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) repositories work together to provide a framework for deploying Edge Developer Platform instances.
`infra-catalogue` contains individual, atomic infrastructure components: `terraform` modules and `terragrunt` [units](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units) and [stacks](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks), such as [Kubernetes clusters](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/modules/kubernetes) and [Postgres databases](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units/postgres/terragrunt.hcl).
`infra-deploy` then contains full [definitions](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod) of stacks built using these components - such as the production site at [edp.buildth.ing](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp). It also includes [scripts](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/scripts) with which to deploy these stacks.
Note that both repositories rely on the wide range of features available on [OTC](https://console.otc.t-systems.com). Several of these features, such as S3-compatible storage and on-demand managed Postgres instances, are not yet available on more sovereign clouds such as [Edge](https://hub.apps.edge.platform.mg3.mdb.osc.live/), so these are not currently supported.
## Key Features
* 'Catalogue' of infrastructure stacks to be used in deployments
* Definition of deployment stacks for each environment in prod or dev
* Scripts to govern deployment, installation and drift-correction of EDP
## Purpose in EDP
For our Edge Developer Platform to be reliable it must be deployable in a consistent manner. When errors occur, or after any manual alterations, the system can then be safely reset to a working state. This state should be provided in code to allow for automated validation and deployment, and to allow it to be deployed from an always-identical CI/CD pipeline rather than a variable local deployment environment.
## Repositories
**Infra-deploy**: [https://edp.buildth.ing/DevFW/infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)
**Infra-catalogue**: [https://edp.buildth.ing/DevFW/infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue)
## Getting Started
### Prerequisites
* [Docker](https://docs.docker.com/)
* [Kubernetes management](https://kubernetes.io/docs/reference/kubectl/)
* Access to [OTC](https://console.otc.t-systems.com/console/)
* HashiCorp [Terraform](https://developer.hashicorp.com/terraform) or its open-source equivalent, [OpenTofu](https://opentofu.org/)
* [Terragrunt](https://terragrunt.gruntwork.io/), an orchestrator for Terraform stacks
### Quick Start
1. Set up OTC credentials per [README section](https://edp.buildth.ing/DevFW/infra-deploy#installation-on-otc)
2. Set cluster environment and run install script per [README section](https://edp.buildth.ing/DevFW/infra-deploy#using-the-edpbuilder)
Alternatively, manually trigger automated [deployment pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml).
- You will be asked for essential information like the deployment name and tenant.
- Any fields marked `INITIAL` only need to be set when first creating an environment
- Thereafter, the cached values are used and the `INITIAL` values provided to the pipeline are ignored.
- Specifically, they are cached in a `terragrunt.values.hcl` file within `infra-deploy/<tenant>/<cluster-name>`, where both variables are set in the pipeline
- e.g. [prod/edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) or [nonprod/garm-provider-test](https://edp.buildth.ing/DevFW/infra-deploy/src/commit/189632811944d3d3bc41e26c09262de8f215f82b/non-prod/garm-provider-test/terragrunt.values.hcl)
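For a manual run, the same flow reduces to a few commands. A sketch, assuming OTC credentials are already exported as described in the README and `my-test` is a hypothetical environment name:
```bash
# Clone the deployment repository
git clone https://edp.buildth.ing/DevFW/infra-deploy.git
cd infra-deploy

# Select the target cluster environment and run the installer
export CLUSTER_ENVIRONMENT=my-test
./install.sh
```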
### Verification
After the deployment completes, and a short startup time, you should be able to access your Forgejo instance at `<cluster-name>.buildth.ing` (production tenant) or `<cluster-name>.t09.de` (non-prod tenant). `<cluster-name>` is the name you provided in the deployment pipeline, or the `$CLUSTER_ENVIRONMENT` variable when running manually.
For example, the primary production cluster is called [edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp) and can be accessed at [edp.buildth.ing](https://edp.buildth.ing).
#### Screens
Deployment using production pipeline:
![Running the deployment pipeline](../deploy-pipeline.png)
...
![Successful deploy pipeline logs](../deploy-pipeline-success.png)
## Configuration
Configuration of clusters is done in two ways. The first, mentioned above, is to provide `INITIAL` configuration when creating a new cluster. Thereafter, configuration is done within the relevant `infra-deploy/<tenant>` directory (e.g. [prod/edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp)). Variables may be changed within the [terragrunt.values.hcl](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) file, but equally the [terragrunt.stack.hcl](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.stack.hcl) file contains references to the lower-level code set up in `infra-catalogue`.
These are organised in layers, according to Terragrunt's natural structure. First is a [stack](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks), a high-level abstraction for a whole cluster. This in turn [references](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks/forgejo/terragrunt.stack.hcl) terragrunt [units](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units), which in turn are wrappers around standard _Terraform_ [modules](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/modules).
When deployed, the Terraform modules require a `provider.tf` file which is automatically generated by Terragrunt using [tenant-level](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/tenant.hcl) and [global](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/root.hcl) configuration stored in `infra-deploy`.
When deploying manually (e.g. with [install.sh](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/install.sh)), you can observe these layers as Terragrunt will cache them on your machine, within the `.terragrunt-stack/` directory generated within [/\<tenant\>/\<cluster-name\>/](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp).
## Troubleshooting
### Version updates
**Problem**: Updates to `infra-catalogue` are not immediately reflected in deployed clusters, even after running [deploy](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml).
**Solution**: Versions must be updated.
Each cluster deployment specifies a [catalogue version](https://edp.buildth.ing/DevFW/infra-deploy/src/commit/189632811944d3d3bc41e26c09262de8f215f82b/prod/edp/terragrunt.values.hcl#L7) in its `terragrunt.values.hcl`; this refers to a tag in [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue/releases/tag/v2.0.6). Within `infra-catalogue`, stacks reference units and modules from the same tag.
Thus, to test a new change to `infra-catalogue`, first make a new [tag](https://edp.buildth.ing/DevFW/infra-catalogue/tags), then update the relevant [values file](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) to point to it.
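As a sketch, the tag-and-bump cycle might look like this (the version number is hypothetical):
```bash
# In infra-catalogue: cut a new release tag
git tag v2.0.7
git push origin v2.0.7

# In infra-deploy: point the cluster at the new catalogue version by editing
# e.g. prod/edp/terragrunt.values.hcl, then commit and re-run the deploy pipeline
git commit -am "chore: bump infra-catalogue to v2.0.7"
git push
```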
## Status
**Maturity**: TRL-9
## Additional Resources
- [Terraform](https://developer.hashicorp.com/terraform)
- [OpenTofu](https://opentofu.org/), the community-driven replacement for Terraform
- [Terragrunt](https://terragrunt.gruntwork.io/)

View file

@ -0,0 +1,91 @@
---
title: Deploying to OTC
linkTitle: Deploying to OTC
weight: 100
description: >
Open Telekom Cloud as deployment and infrastructure target
---
## Overview
OTC, Open Telekom Cloud, is one of the cloud platform offerings by Deutsche
Telekom and offers GDPR-compliant cloud services. The system is based on
OpenStack.
## Key Features
- Managed Kubernetes
- Managed services, including
  - Databases
    - RDS PostgreSQL
    - ElasticSearch
  - S3-compatible storage
  - DNS management
- Backup & Restore of Kubernetes volumes and managed services
## Purpose in EDP
OTC is used to host core infrastructure to provide the primary, public EDP
instance and as a test bed for Kubernetes based workloads that would eventually
be deployed to EdgeConnect.
Service components such as Forgejo, Grafana, Garm, and Coder are deployed in OTC
Kubernetes utilizing managed services for databases and storage to reduce the
maintenance and setup burden on the team.
Services and workloads are primarily provisioned using Terraform.
## Repository
**Code**:
- <https://edp.buildth.ing/DevFW/infra-catalogue> - Terraform modules of various
system components
- <https://edp.buildth.ing/DevFW/infra-deploy> - Runs deployment workflows,
contains base configuration of deployed system instances and various
deployment scripts
- <https://edp.buildth.ing/DevFW-CICD/stacks> - Template of a system
configuration divided into multiple, deployable application stacks
- <https://edp.buildth.ing/DevFW-CICD/stacks-instances> - System configurations
of deployed instances hydrated from the `stacks` template
**Terraform Provider**:
- <https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/latest/docs>
**Documentation**:
- <https://www.open-telekom-cloud.com/>
- <https://www.open-telekom-cloud.com/en/products-services/core-services/technical-documentation>
**OTC Console**
- <https://console.otc.t-systems.com/console/>
## Managed Services
EDP instances heavily utilize Open Telekom Cloud's (OTC) managed services to
simplify operations, enhance reliability, and allow the team to focus on
application development rather than infrastructure management. The core
components of each deployed instance run within the managed Kubernetes service.
The following managed services are integral to EDP deployments:
- **Cloud Container Engine (CCE)**: The managed Kubernetes service that forms
the foundation of each EDP instance, hosting all containerized core components
and workloads.
- **Relational Database Service (RDS) for PostgreSQL**: Provides scalable and
reliable PostgreSQL database instances, primarily used by applications such as
Forgejo.
- **Object Storage Service (OBS)**: Offers S3-compatible object storage for
storing backups, application data (e.g., for Forgejo), and other static
assets.
- **Cloud Search Service (CSS)**: An optional service providing robust search
capabilities, specifically used for Forgejo's indexing and search
functionalities.
- **Networking**: Essential networking components, including Virtual Private
Clouds (VPCs), Load Balancers, and DNS management, which facilitate secure and
efficient communication within the EDP ecosystem.
- **Cloud Backup and Recovery (CBR)**: Vaults are configured to automatically
back up persistent volumes created by CCE instances, ensuring data resilience
and disaster recovery readiness.

View file

@ -0,0 +1,42 @@
---
title: EDP Environments in OTC
linkTitle: Environments
weight: 10
description: >
Instances of EDP are deployed into distinct OTC environments
---
## Architecture
Two distinct tenants are utilized within OTC to enforce a strict separation
between production (`prod`) and non-production (`non-prod`) environments. This
segregation ensures isolated resource management, security policies, and
operational workflows, preventing any potential cross-contamination or impact
between critical production systems and development/testing activities.
- **Production Tenant:** This tenant is exclusively dedicated to production
workloads and is bound to the primary domain `buildth.ing`. All
production-facing EDP instances and associated infrastructure reside within
this tenant, leveraging `buildth.ing` for public access and service discovery.
Within this tenant, each EDP instance is typically dedicated to a specific
customer. This design decision provides robust data separation, addressing
critical privacy and compliance requirements by isolating customer data. It
also allows for independent upgrade paths and maintenance windows for
individual customer instances, minimizing impact on other customers while
still benefiting from centralized management and deployment strategies. The
primary `edp.buildth.ing` instance and the `observability.buildth.ing`
instance are exceptions to this customer-dedicated model, serving foundational
platform roles.
- **Non-Production Tenant:** This tenant hosts all development, testing, and
staging environments, bound to the primary domain `t09.de`. This setup allows
for flexible experimentation and robust testing without impacting production
stability.
Each tenant is designed to accommodate multiple instances of the product, EDP.
These instances are dynamically provisioned and typically bound to specific
subdomains, which inherit from their respective primary tenant domain (e.g.,
`my-test.t09.de` for a non-production instance or `customer-a.buildth.ing` for a
production instance). This subdomain structure facilitates logical separation
and routing for individual EDP deployments.
<likec4-view view-id="otcTenants" browser="true"></likec4-view>

View file

@ -0,0 +1,113 @@
---
title: Managing Instances
linkTitle: Managing Instances
weight: 50
description: >
Managing instances of EDP deployed in OTC
---
## Deployment Strategy
The core of the deployment strategy revolves around the primary production EDP
instance, `edp.buildth.ing`. This instance acts as a centralized control plane
and code repository, storing all application code, configuration, and deployment
pipelines. It is generally responsible for orchestrating the deployment and
updates of most other EDP instances across both production and non-production
tenants, ensuring consistency and automation.
<likec4-view view-id="otcTenants" browser="true"></likec4-view>
### Circular Dependency Issue
However, a unique circular dependency exists with `observability.buildth.ing`.
While `edp.buildth.ing` manages most deployments, it cannot manage its _own_
lifecycle. Attempting to upgrade `edp.buildth.ing` itself through its own
mechanisms could lead to critical components becoming unavailable during the
process (e.g., internal container registries going offline), preventing the
system from restarting successfully. To mitigate this, `edp.buildth.ing` is
instead deployed and managed by `observability.buildth.ing`, with all its
essential deployment dependencies located within the observability environment.
Crucially, git repositories and other resources like container images are
synchronized from `edp.buildth.ing` to the observability instance, as
`observability.buildth.ing` itself does not produce artifacts. In turn,
`edp.buildth.ing` is responsible for deploying and managing
`observability.buildth.ing` itself. This creates a carefully managed circular
relationship that ensures both critical components can be deployed and
maintained effectively without single points of failure related to
self-management.
## Configuration
This section outlines the processes for deploying and managing the configuration
of EDP instances within the Open Telekom Cloud (OTC) environment. Deployments
are primarily driven by Forgejo Actions and leverage Terraform for
infrastructure provisioning and lifecycle management, adhering to GitOps
principles.
### Deployment Workflows
The lifecycle management of EDP instances is orchestrated through a set of
dedicated workflows within the `infra-deploy` Forgejo
[repository](https://edp.buildth.ing/DevFW/infra-deploy), hosted on
`edp.buildth.ing`. These workflows are designed to emulate the standard
Terraform lifecycle, offering `plan`, `deploy`, and `destroy` operations.
- **Triggering Deployments**: Workflows are manually initiated and require
explicit configuration of an OTC tenant and an environment to accurately
target a specific system instance.
- **`plan` Workflow**:
- Executes a dry-run of the proposed deployment.
- Outputs the detailed `terraform plan`, showing all anticipated
infrastructure changes.
- Shows the diff of the configuration that would be applied to the
`stacks-instances` repository, reflecting changes derived from the `stacks`
repository.
- **`deploy` Workflow**:
- Utilized for both the initial creation of new EDP instances and subsequent
updates to existing deployments.
- For new instance creation, all required configuration fields must be
populated.
- **Important Considerations**:
- Configuration fields explicitly marked as "(INITIAL)" are foundational
and, once set during the initial deployment, cannot be altered through the
workflow without manual modification of the underlying Git configuration.
- Certain changes to the configuration may lead to extensive infrastructure
redeployments, which could potentially result in data loss if not
carefully managed and accompanied by appropriate backup strategies.
- **`destroy` Workflow**:
- Initiates the deprovisioning and complete removal of an existing EDP system
instance from the OTC environment.
- While the infrastructure is torn down, the corresponding configuration entry
is intentionally retained within the `stacks-instances` repository for
historical tracking or potential re-creation.
> NOTE: When deploying a new instance of EDP it is bootstrapped with random
> secrets including admin logins. Initial admin credentials for individual
> components are printed in workflow output. They can be retrieved from the
> secrets within Kubernetes at a later point in time.
<a href="../workflow-deploy-form.png" target="_blank">
<img alt="Deploy workflow form" src="../workflow-deploy-form.png" style="max-width: 300px;" />
</a>
### Configuration Management
The configuration for deployed EDP instances is systematically managed across
several Git repositories to ensure version control, traceability, and adherence
to GitOps practices.
- **Base Configuration**: A foundational configuration entry for each deployed
system instance is stored directly within the `infra-deploy` repository.
- **Complete System Configuration**: The comprehensive configuration for a
system instance, derived from the `stacks` template repository, is maintained
in the `stacks-instances` repository.
- **GitOps Synchronization**: ArgoCD continuously monitors the
`stacks-instances` repository. It automatically detects and synchronizes any
discrepancies between the desired state defined in Git and the actual state of
the deployed system within the OTC Kubernetes cluster. The configurations in
the `stacks-instances` repository are organized by OTC tenant and instance
name. ArgoCD monitors only the portion of the repository that is relevant to
its specific instance.

Binary file not shown.

After

Width:  |  Height:  |  Size: 209 KiB

View file

@ -0,0 +1,31 @@
---
title: "Documentation System"
linkTitle: "Documentation System"
weight: 100
description: This documentation system, built on the 'documentation as code' principle, is used internally and recommended for all development teams.
---
Embracing the powerful philosophy of **Documentation as Code**, the entire
documentation is authored and meticulously maintained as plain text Markdown
files. These files are stored within a Git repository, allowing for the
leveraging of version control to track changes, facilitate collaborative
contributions, and ensure a robust review process, much like source code.
The documentation source code is hosted at
<https://edp.buildth.ing/DevFW-CICD/website-and-documentation>. The `README`
files within this repository provide detailed instructions on how to contribute
to and build the documentation. It is primarily powered by
[Hugo](https://gohugo.io/), a fast and flexible static site generator, which
transforms the Markdown content into a production-ready website. To enhance
clarity and understanding, sophisticated diagramming tools are integrated:
[Mermaid.js](https://mermaid.js.org/) for creating dynamic charts and diagrams
from text, and [LikeC4](https://likec4.dev/) for generating C4 model
architecture diagrams directly within the documentation.
Changes pushed to the `main` branch of the repository automatically trigger the
continuous integration and deployment (CI/CD) pipeline. This process is
orchestrated using [Forgejo Actions](/docs/edp/forgejo/actions/), which
automates the build of the static site. Subsequently, the updated documentation
is automatically deployed to <https://docs.edp.buildth.ing/>. This streamlined
workflow guarantees that the documentation is always current, accurately
reflecting the latest system state, and readily accessible to all stakeholders.
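For a local preview before pushing, a minimal sketch, assuming Hugo and the repository's dependencies are installed as described in its `README` files:
```bash
# Clone the documentation source
git clone https://edp.buildth.ing/DevFW-CICD/website-and-documentation.git
cd website-and-documentation

# Serve the site locally with live reload (default: http://localhost:1313)
hugo server
```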

View file

@ -0,0 +1,108 @@
---
title: Forgejo
linkTitle: Forgejo
weight: 5
description: Forgejo provides source code management, project management, and CI/CD automation for the EDP.
---
The internal service is officially designated as the Edge Developer Platform (EDP). It is hosted at **[edp.buildth.ing](https://edp.buildth.ing)**. The domain selection followed a democratic team process to establish a unique identity distinct from standard corporate naming conventions.
![alt text](image.png)
![alt text](image-1.png)
## Technical Architecture & Deployment
### Infrastructure Stack
The platform is hosted on the **Open Telekom Cloud (OTC)**. The infrastructure adheres to Infrastructure-as-Code (IaC) principles.
* **Deployment Method:** The official Forgejo Helm Chart is deployed via **ArgoCD**.
* **Infrastructure Provisioning:** **Terraform** is used to provision all underlying OTC services, including:
* **Container Orchestration**: CCE (Cloud Container Engine): Kubernetes
* **Database:** RDS (Relational Database Service): PostgreSQL
* **Caching:** DCS (Distributed Cache Service): Redis
* **Object Storage:** OBS (Object Storage Service, S3-compatible): for user data (avatars, attachments).
* **Search:** CSS (Cloud Search Service): Elasticsearch
### The "Self-Replicating" Pipeline
A key architectural feature is the ability of the platform to maintain itself. A Forgejo Action can trigger the deployment script, which runs Terraform and syncs ArgoCD, effectively allowing "Forgejo to create/update Forgejo."
```mermaid
graph TD
    subgraph "Open Telekom Cloud (OTC)"
        subgraph "Control Plane"
            Dev[DevOps Engineer] -->|Triggers| Pipeline[Deployment Pipeline]
            Pipeline -->|Executes| TF[Terraform]
        end
        subgraph "Provisioned Infrastructure"
            TF -->|Provisions| CCE[(CCE K8s Cluster)]
            TF -->|Provisions| RDS[(RDS PostgreSQL)]
            TF -->|Provisions| Redis[(DCS Redis)]
            TF -->|Provisions| S3[(OBS S3 Bucket)]
            TF -->|Provisions| CSS[(CSS Elasticsearch)]
        end
        subgraph "Application Layer (on CCE K8s)"
            Pipeline -->|Helm Chart| Argo[ArgoCD]
            Argo -->|Deploys| ForgejoApp[Forgejo]
        end
        CCE -- Runs --> Argo
        CCE -- Runs --> ForgejoApp
        ForgejoApp -->|Connects| RDS
        ForgejoApp -->|Connects| Redis
        ForgejoApp -->|Connects| S3
        ForgejoApp -->|Connects| CSS
    end
```
### Migration History
The initial environment was a manual setup on the Open Sovereign Cloud (OSC). Once the automation stack (Terraform/ArgoCD) was matured, the platform was migrated to the current OTC environment.
## Application Extensions
### Core Functionality
Beyond standard Git versioning, the platform utilizes:
* **Releases:** Hosting binaries for software distribution (e.g., Edge Connect CLI).
* **CI/CD:** Extensive pipeline usage for build, test, and deployment automation.
* **Note on Issues:** While initially used, issue tracking was migrated to JIRA to align with the broader IPCEI program standards.
### GARM (Git-based Actions Runner Manager)
The primary technical innovation was the integration of [GARM](./actions/runner-orchestration.md) to enable ephemeral, scalable runners. This required extending Forgejo's capabilities to support GitHub-compatible runner registration and webhook events.
## Development Methodology & Contributions
### Workflow
* **Branching Strategy:** Trunk-based development was utilized to ensure rapid integration.
* **Collaboration:** The team adopted **Mob Programming**. This practice proved essential for knowledge sharing and onboarding junior developers, creating a resilient and high-intensity learning environment.
* **Versions:** The platform evolved from Forgejo v7/8 to the current v11.0.3-edp1. An upgrade is pending to leverage the latest upstream GARM features.
### Open Source Contributions
We actively contributed our extensions back to the upstream Forgejo project; see [the list of Codeberg.org pull requests](../../governance/_index.md#forgejo).
### Artifact Caching (Pull-Through Proxy)
We implemented a feature allowing Forgejo to act as a pull-through proxy for remote container registries, optimizing bandwidth and build speeds.
* [Source Code Branch: refactor-remote-registry-client](https://edp.buildth.ing/DevFW/edp-forgejo/src/branch/refactor-remote-registry-client)
## Key Performance Indicators (KPIs)
These KPIs measure the effectiveness of the Forgejo setup and quantify our strategic commitment to the Forgejo community.
| KPI | Description | Target / Benchmark |
| :--- | :--- | :--- |
| **Deployment Frequency** | Frequency of successful pipeline executions. | High (Daily/On-demand) |
| **Artifact Cache Hit Rate** | Percentage of build requests served by the local Forgejo proxy. | > 90% (Reduced external traffic) |
| **Upstream Contribution** | Percentage of GARM-related features contributed back to Codeberg. | 100% (No vendor lock-in) |
| **PR Resolution Time** | Average time for upstream community review and merge. | < 14 days (Healthy collaboration) |

View file

@ -0,0 +1,132 @@
---
title: Forgejo Actions
linkTitle: Forgejo Actions
weight: 10
description: GitHub Actions-compatible CI/CD automation
---
## Overview
[Forgejo Actions](https://forgejo.org/docs/next/user/actions/reference/) is a built-in CI/CD automation system that enables developers to define and execute workflows directly within their Forgejo repositories. As a continuous integration and continuous deployment platform, Forgejo Actions automates software development tasks such as building, testing, packaging, and deploying applications whenever specific events occur in your repository.
Forgejo Actions provides [GitHub Actions similarity](https://forgejo.org/docs/latest/user/actions/github-actions/), allowing teams to easily adapt existing GitHub Actions workflows and marketplace actions with minimal or no modifications. This compatibility significantly reduces migration effort for teams transitioning from GitHub to Forgejo, while maintaining familiar syntax and workflow patterns.
Workflows are defined using YAML files stored in the `.forgejo/workflows/` directory of your repository. Each workflow consists of one or more jobs that execute on action runners when triggered by repository events such as pushes, pull requests, tags, or manual dispatch. This enables automation of repetitive development tasks, ensuring consistent build and deployment processes across your software delivery pipeline.
By integrating CI/CD directly into the repository management platform, Forgejo Actions eliminates the need for external CI/CD systems, reducing infrastructure complexity and providing a unified development experience.
## Key Features
* **Automated Workflow Execution** - Execute automated workflows triggered by repository events such as code pushes, pull requests, tag creation, or manual dispatch, enabling continuous integration and deployment without manual intervention
* **GitHub Actions Similarity** - Maintains similarity with GitHub Actions syntax and workflows, allowing reuse of existing actions from the GitHub marketplace and simplifying migration from GitHub-based CI/CD pipelines
## Purpose in EDP
Forgejo Actions enables EDP customers to execute complete CI/CD pipelines directly on the platform for building, testing, packaging, and deploying software. This integrated automation capability is fundamental to the EDP value proposition.
Without native CI/CD automation, customers would face significant integration overhead connecting external CI/CD systems to their EDP workflows. This fragmentation would complicate pipeline management, increase operational complexity, and reduce the platform's effectiveness as a unified development solution.
Since Forgejo Actions is natively integrated into Forgejo, EDP provides this critical CI/CD capability with minimal additional infrastructure. Customers benefit from seamless automation without requiring separate tool provisioning, authentication configuration, or cross-system integration maintenance.
## Getting Started
### Prerequisites
* Installed Forgejo
* Installed Forgejo runner (see [Runner Installation Quick Start](/docs/edp/forgejo/actions/runners/#quick-start))
### Quick Start
1. Create a repository
2. Create file `/.forgejo/workflows/example.yaml`
```yaml
# example.yaml
name: example
on:
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Hello World
        run: |
          echo "Hello World!"
```
3. Navigate to Actions > example.yaml > Run workflow
### Verification
Check the job logs: the "Hello World" step should print "Hello World!".
## Usage Examples
### Use actions to deploy infrastructure
See the [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/.github/workflows/deploy.yaml) repository as an example
### Use goreleaser to build, test, package and release a project
This pipeline is triggered when a tag with the prefix `v` is pushed to the repository.
It then checks out the repository with its full tag history, sets up Go, runs the tests,
imports the GPG signing key, and lets GoReleaser build, package, and publish the release.
```yaml
# .github/workflows/release.yaml
name: ci
on:
  push:
    tags:
      - v*
jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version: ">=1.25.1"
      - name: Test code
        run: make test
      - name: Import GPG key
        id: import_gpg
        uses: https://github.com/crazy-max/ghaction-import-gpg@v6
        with:
          gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
          passphrase: ${{ secrets.GPG_PASSPHRASE }}
      - name: Run GoReleaser
        uses: https://github.com/goreleaser/goreleaser-action@v6
        env:
          GITEA_TOKEN: ${{ secrets.PACKAGES_TOKEN }}
          GPG_FINGERPRINT: ${{ steps.import_gpg.outputs.fingerprint }}
        with:
          args: release --clean
```
## Troubleshooting
### The job is not being executed by a runner
**Problem**: The job is not being picked up by a runner
**Solution**: Most likely there is currently no runner available with the label defined in your job's `runs-on` attribute. Check the available runners for your repository by navigating to repository settings > Actions > Runners. There you can see all available runners and their labels; choose one of them as your `runs-on` value.
## Status
**Maturity**: Production
## Additional Resources
* [Forgejo Actions](https://forgejo.org/docs/next/user/actions/reference/)
* [GitHub Actions](https://github.com/features/actions)
* [GitHub Actions similarity](https://forgejo.org/docs/latest/user/actions/github-actions/)

View file

@ -0,0 +1,178 @@
---
title: Runners
linkTitle: Runners
weight: 20
description: >
Self-hosted runner infrastructure with orchestration capabilities
---
## Overview
Action runners are the execution environment for Forgejo Actions workflows. By design, runners execute remote code submitted through CI/CD pipelines, making their architecture highly dependent on the underlying infrastructure and security requirements.
The primary objective in any runner setup is the separation and isolation of individual runs. Since runners are specifically built to execute arbitrary code from repositories, proper isolation is critical to prevent data and secret leakage between different pipeline executions. Each runner must be thoroughly cleaned or recreated after every job to ensure no residual data persists that could compromise subsequent runs.
Beyond isolation concerns, action runners represent high-value targets for supply chain attacks. Runners frequently compile, build, and package software binaries that may be distributed to thousands or millions of end users. Compromising a runner could allow attackers to inject malicious code directly into the software supply chain, making runner security a critical consideration in any deployment.
This document explores different runner architectures, examining their security characteristics, operational trade-offs, and suitability for various infrastructure environments and showing off an example deployment using a Containerized Kubernetes environment.
## Key Features
* Consistent environment for Forgejo Actions
* Primary location to execute code e.g. deployments
* Good [security practices](/docs/edp/forgejo/actions/runners/garm/) essential due to broad remit
## Purpose in EDP
Action runners execute Forgejo Actions workflows, which can be used to build, test, package, and deploy software. To spare EDP customers the effort of provisioning their own action runners, we provide globally registered runners that pick up jobs.
## Repository
**Code**:
* [Runner on edge connect using GARM](https://edp.buildth.ing/DevFW-CICD/garm-provider-edge-connect/src/branch/main/runner)
* [Static runner](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo/forgejo-runner/dind-docker.yaml)
**Documentation**: [Forgejo Runner installation guide](https://forgejo.org/docs/latest/admin/actions/runner-installation/)
## Runner Setups
Different runner deployment architectures offer varying levels of isolation, security, and operational complexity. The choice depends on your infrastructure capabilities, security requirements, and operational overhead tolerance.
### On Bare Metal
Bare metal runners execute directly on physical hardware without virtualization layers.
**Advantages:**
* Maximum performance with direct hardware access
* Complete hardware isolation between different physical machines
* No hypervisor overhead or virtualization complexity
**Disadvantages:**
* Difficult to clean after each run, requiring manual intervention or full OS reinstallation
* Long provisioning time for individual runners
* Complex provisioning processes requiring physical access or remote management tools
* Limited scalability due to physical hardware constraints
* Higher risk of persistent contamination between runs
**Use case:** Best suited for specialized workloads requiring specific hardware, performance-critical builds, or environments where virtualization is not available.
### On Virtual Machines
VM-based runners operate within virtualized environments managed by a hypervisor.
**Advantages:**
* Strong isolation through hypervisor and hardware memory mapping
* Virtual machine images enable faster provisioning compared to bare metal
* Easy to snapshot, clone, and restore to clean states
* Better resource utilization through multiple VMs per physical host
* Automated cleanup by destroying and recreating VMs after each run
**Disadvantages:**
* Requires hypervisor infrastructure and management
* Slower provisioning than containers
* Higher resource overhead compared to containerized solutions
* More complex orchestration for scaling runner fleets
**Use case:** Ideal for environments requiring strong isolation guarantees, multi-tenant scenarios, or when running untrusted code from external contributors.
### In Containerized Environment
Container-based runners execute within isolated containers using OCI-compliant runtimes.
**Advantages:**
* Kernel-level isolation using Linux namespaces and cgroups
* Fast provisioning and startup times
* Easy deployment through standardized OCI container images
* Lightweight resource usage enabling high-density runner deployments
* Simple orchestration with Kubernetes or Docker Compose
**Disadvantages:**
* Weaker isolation than VMs since containers share the host kernel
* Requires elevated permissions or privileged access for certain workflows (e.g., Docker-in-Docker)
* Potential kernel-level vulnerabilities affect all containers on the host
* Container escape vulnerabilities pose security risks in multi-tenant environments
**Use case:** Best for high-volume CI/CD workloads, trusted code repositories, and environments prioritizing speed and efficiency over maximum isolation.
## Getting Started
### Prerequisites
* Forgejo instance
* Runner registration token has been generated for a given scope
* Global runners in `admin settings > actions > runner > Create new runner`
* Organization runners in `organization settings > actions > runner > Create new runner`
* Repository runners in `repository settings > actions > runner > Create new runner`
* Kubernetes cluster
### Quick Start
1. Download [Kubernetes manifest](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo/forgejo-runner/dind-docker.yaml)
2. Replace `${RUNNER_SECRET}` with the runner registration token
3. Replace `${RUNNER_NAME}` with the name the runner should have
4. Replace `${FORGEJO_INSTANCE_URL}` with the instance URL
5. (if the namespace does not exist) `kubectl create ns gitea`
6. Run `kubectl apply -f <file>`
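The substitution steps can also be scripted. Below is a sketch assuming the manifest was saved locally as `dind-docker.yaml` and that `envsubst` (from gettext) is available; the values shown are illustrative:
```bash
# Sketch: templating and applying the runner manifest (values illustrative).
export RUNNER_SECRET="<registration-token>"
export RUNNER_NAME="my-edp-runner"
export FORGEJO_INSTANCE_URL="https://edp.buildth.ing"

kubectl get ns gitea >/dev/null 2>&1 || kubectl create ns gitea
envsubst < dind-docker.yaml | kubectl apply -f -
```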
### Verification
Take a look at the runners page where you generated the token. There should now be three runners in the idle state.
### Sequence Diagrams
```mermaid
---
title: Forgejo Runner executed in daemon mode
---
sequenceDiagram
Runner->>Forgejo: Register runner
loop Job Workflow
Runner->>Forgejo: Fetch job
Runner->>Runner: Work on job
Runner->>Forgejo: Send result
end
```
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
There is a sophisticated [configuration file](https://forgejo.org/docs/latest/admin/actions/runner-installation/#configuration) where fine-tuning can be done.
The most important setting is the labels, which define the execution environment.
The label `ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-22.04` (as used in the [example runner](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo/forgejo-runner/dind-docker.yaml)) means that a job using the `ubuntu-latest` label will be executed as a Docker container based on the `ghcr.io/catthehacker/ubuntu:act-22.04` image.
Alternatives to `docker` are [`lxc`](https://forgejo.org/docs/latest/admin/actions/security/#job-containers-w-lxc) and [`host`](https://forgejo.org/docs/latest/admin/actions/security/#execution-on-host-host).
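Put together, the relevant excerpt of a runner's `config.yml` might look like the sketch below; the commented alternative label schemes are indicative only, so consult the linked docs for the exact syntax:
```yaml
# Sketch of the labels section in a runner config.yml.
runner:
  labels:
    # Jobs with `runs-on: ubuntu-latest` run inside this Docker image:
    - "ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-22.04"
    # Alternative schemes (syntax per the security docs linked above):
    # - "bookworm:lxc://debian:bookworm"   # LXC-backed job containers
    # - "self-hosted:host"                 # run directly on the host
```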
## Troubleshooting
### In containerized environments, I want to build container images
**Problem**: In containerized environments, containers usually run with few privileges. Starting or building containers requires additional privileges, usually root: the container runtime needs to manage Linux namespaces and cgroups inside the kernel.
**Solution**: A partial solution for this is `buildkitd` utilizing `rootlesskit`. This allows containers to be **built** (but not run) in a non root environment. Several examples can be found in the [official buildkit repo](https://github.com/moby/buildkit/tree/master/examples/kubernetes).
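A condensed sketch of a rootless `buildkitd` pod, based on the upstream examples linked above (image tag, flags, and security annotations may differ between buildkit versions, so verify against the repository):
```yaml
# Sketch of a rootless buildkitd pod (verify details against the
# official buildkit examples; tags and annotations change over time).
apiVersion: v1
kind: Pod
metadata:
  name: buildkitd-rootless
  annotations:
    # Rootless buildkitd needs the default AppArmor sandbox relaxed
    container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
spec:
  containers:
    - name: buildkitd
      image: moby/buildkit:master-rootless
      args:
        - --oci-worker-no-process-sandbox
      securityContext:
        seccompProfile:
          type: Unconfined
        runAsUser: 1000
        runAsGroup: 1000
```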
***Rootless vs. user namespaces:***
As of Kubernetes 1.33, UID mapping can be enabled for pods via `pod.spec.hostUsers: false`. This uses user namespaces to map container user and group IDs (0-65535) to high, non-overlapping host ID ranges (n * 65536 to n * 65536 + 65535, where n differs per pod). The container can thus run with actual root permissions in its user namespace without being root on the host system; a sketch follows below.
Rootless mode is still considered the more secure option, as the executable is never mapped to a privileged entity on the host at all.
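A sketch of a pod running in its own user namespace (requires a cluster and container runtime with user-namespace support, e.g. recent containerd/crun; see the upstream Kubernetes docs):
```yaml
# Sketch: pod-level user namespaces via hostUsers: false.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false        # map container UIDs/GIDs to an unprivileged host range
  containers:
    - name: shell
      image: alpine
      command: ["sh", "-c", "id && sleep 3600"]  # uid=0 inside, high UID on the host
```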
## Status
**Maturity**: Beta
## Additional Resources
* [Forgejo Runner installation guide](https://forgejo.org/docs/latest/admin/actions/runner-installation)
* [Static Runners on Kubernetes](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo/forgejo-runner/dind-docker.yaml)
* [Runner Orchestration using GARM on Edge Connect](../runner-orchestration)

View file

@ -0,0 +1,176 @@
---
title: Orchestration with GARM
linkTitle: Orchestration with GARM
weight: 30
description: Using GARM to manage short-lived Forgejo runners
---
## Overview
GARM provides on-demand runner orchestration for Forgejo Actions through dynamic autoscaling. As Forgejo has a similar API structure to Gitea (from which it was forked), GARM's Gitea/GitHub compatibility makes it a natural fit for automated runner provisioning. GARM supports custom providers, enabling runner infrastructure deployment across multiple cloud and infrastructure platforms.
A custom edge-connect provider was implemented for GARM to enable infrastructure provisioning. Additionally, Forgejo was adapted to align more closely with Gitea's API, ensuring seamless integration with GARM's orchestration capabilities.
## Key Features
* Autoscales Forgejo Actions runners dynamically based on workload demand
* Leverages edge-connect infrastructure for distributed runner provisioning
## Purpose in EDP
- Provides CI/CD infrastructure for all software development projects
- Enhances the EDP platform capabilities through improved Forgejo automation
- Enables teams to focus on development by consuming platform-managed runners without capacity planning concerns
## Repository
**Code**:
- [GARM Provider for Edge Connect](https://edp.buildth.ing/DevFW-CICD/garm-provider-edge-connect)
- [GARM deploy script](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/scripts/local-helm.sh)
- [GARM deploy manifests](https://edp.buildth.ing/DevFW/garm-deploy.git)
## Getting Started
### Prerequisites
* Container Runtime installed (e.g. docker)
* Forgejo, Gitea or Github
### Quick Start
1. Clone the [GARM Provider repository](https://edp.buildth.ing/DevFW-CICD/garm-provider-edge-connect/)
2. Build the Docker image: `docker buildx build -t <your-image-tag> .`
3. Push the image to your container registry
4. Deploy GARM using the [deployment script](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/scripts/local-helm.sh) from the [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) repository, targeting your Kubernetes cluster: `./local-helm.sh --garm`
### Verification
- Verify the GARM pod is running: `kubectl get pods -n garm`
- Retrieve the GARM domain endpoint: `kubectl get ing -n garm`
- Get the GARM admin password: `kubectl get secret -n garm garm-credentials -o json | jq .data.GARM_ADMIN_PASSWORD -r | base64 -d`
- Configure endpoints, credentials, repositories, and runner pools in GARM as described in [TODO](TODO)
## Integration Points
* **Forgejo**: GARM listens in Forgejo and picks up pending action jobs
* **Edge Connect**: GARM uses this infrastructure to deploy runners that pick up open jobs in Forgejo
## Architecture
The primary technical innovation was the integration of **[GARM](https://github.com/cloudbase/garm)** to enable ephemeral, scalable runners. This required extending Forgejo's capabilities to support GitHub-compatible runner registration and webhook events.
**Workflow Architecture:**
1. **Event:** A workflow event occurs in Forgejo.
2. **Trigger:** A webhook notifies GARM.
3. **Provisioning:** GARM spins up a fresh, ephemeral runner.
4. **Execution:** The runner registers via the API, executes the job, and is terminated immediately after, ensuring a clean build environment.
```mermaid
sequenceDiagram
participant User
participant Forgejo
participant GARM
participant Runner as Ephemeral Runner
User->>Forgejo: Push Code / Trigger Event
Forgejo->>GARM: Webhook Event (Workflow Dispatch)
GARM->>Forgejo: Register Runner (via API)
GARM->>Runner: Spin up Instance
Runner->>Forgejo: Request Job
Forgejo->>Runner: Send Job Payload
Runner->>Runner: Execute Steps
Runner->>Forgejo: Report Status
GARM->>Runner: Terminate (Ephemeral)
```
### Sequence Diagrams
The diagram below shows how triggering an action results in the deployment of a runner on edge-connect.
{{<likec4-view view="forgejoGarmInteraction" browser="false" dynamic-variant="sequence" project="architecture" title="Interaction between Forgejo, Garm and Edge Connect">}}
### Deployment Architecture
{{<likec4-view view="forgejoGarmArchitecture" browser="false" dynamic-variant="sequence" project="architecture" title="Architecture of Forgejo, Garm and Edge Connect">}}
## Configuration
### Provider Setup
The config below configures an external provider for GARM. Especially important are `provider.external.config_file`, which refers to the configuration of the external provider (example below), and `provider.external.provider_executable`, which needs to point to the provider executable.
```config.toml
# config.toml
...
[[provider]]
name = "edge-connect"
description = "edge connect provider"
provider_type = "external"
[provider.external]
config_file = "/etc/garm/edge-connect-provider-config.toml"
provider_executable = "/opt/garm/providers.d/garm-provider-edge-connect"
environment_variables = ["EDP_EDGE_CONNECT_"]
```
```edge-connect-provider-config.toml
# edge-connect-provider-config.toml
log_file = "/garm/provider.log"
credentials_file = "/etc/garm-creds/credentials.toml" # to authenticate against edge_connect.url
[edge_connect]
organization = "edp-developer-framework"
region = "EU"
url = "https://hub.apps.edge.platform.mg3.mdb.osc.live"
default_flavor = "EU.small"
[edge_connect.cloudlet]
name = "Munich"
organization = "TelekomOP"
```
```credentials.toml
# credentials.toml for edge connect platform
username = ""
password = ""
```
### Runner Pool Configuration
Once the configuration is in place and GARM has been deployed, you can connect GARM to Forgejo/Gitea/GitHub using the commands below. If you have a Forgejo instance, you want to create a Gitea endpoint.
```sh
# https://edp.buildth.ing/DevFW/garm-deploy/src/branch/master/helm/garm/templates/init-job.yaml#L39-L56
garm-cli init --name gitea --password ${GARM_ADMIN_PASSWORD} --username ${GARM_ADMIN_USERNAME} --email ${GARM_ADMIN_EMAIL} --url ${GARM_URL}
if [ $? -ne 0 ]; then
echo "garm maybe already initialized"
exit 0
fi
# API_GIT_URL=https://garm-provider-test.t09.de/api/v1
# GIT_URL=https://garm-provider-test.t09.de
garm-cli gitea endpoint create \
--api-base-url ${API_GIT_URL} \
--base-url ${GIT_URL} \
--description "My first Gitea endpoint" \
--name local-gitea
garm-cli gitea credentials add \
--endpoint local-gitea \
--auth-type pat \
--pat-oauth-token $GITEA_TOKEN \
--name autotoken \
--description "Gitea token"
```
Now, connect to the web UI and authenticate with `GARM_ADMIN_USERNAME` and `GARM_ADMIN_PASSWORD` as credentials. Click on *Repositories* and configure the repositories and runner pools that GARM should manage.
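Pools can also be created from the CLI. The sketch below is hypothetical: the flag names follow the upstream "How to use" guide (see Additional Resources) and may differ between GARM versions, so verify with `garm-cli pool add --help`:
```sh
# Hypothetical sketch of creating a runner pool; the repository must
# already be registered in GARM (via UI or CLI).
garm-cli repository list   # note the ID of the target repository
garm-cli pool add \
  --repo <repository-id> \
  --provider-name edge-connect \
  --image "ubuntu:22.04" \
  --flavor EU.small \
  --tags ubuntu,edge \
  --min-idle-runners 0 \
  --max-runners 5 \
  --enabled true
```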
## Status
**Maturity**: Beta
## Additional Resources
* [GARM repository](https://github.com/cloudbase/garm)
* [How to use](https://github.com/cloudbase/garm/blob/main/doc/using_garm.md)

Binary file not shown.

After

Width:  |  Height:  |  Size: 218 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 122 KiB

View file

@ -0,0 +1,78 @@
---
title: Project Management in Forgejo
linkTitle: Project Management
weight: 50
description: >
Organization-level project and issue management
---
{{% alert title="Discontinued Feature" color="warning" %}}
This feature was implemented at a prototype level but never reached production readiness. Development was discontinued in favor of other platform priorities.
{{% /alert %}}
## Overview
This was an attempt to extend Forgejo's project and issue management capabilities beyond the repository level. The goal was to enable organizations and users to create projects and issues that could span multiple repositories or exist independently of any repository.
## Problem Statement
Forgejo's issue management is repository-centered. While this works well for code-specific issues, it creates challenges for broader project management:
* **Cross-repository work**: Tasks often span multiple repositories but must be artificially tied to one
* **Non-code projects**: Some projects don't map cleanly to a repository (e.g., planning, documentation initiatives)
* **Related repositories**: Symbiotically related repos would benefit from shared issue tracking
Real-world examples:
* Upstream: [forgejo-actions-feature-requests](https://code.forgejo.org/forgejo/forgejo-actions-feature-requests) - arguably doesn't need repository/code functionality
* EDP: [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) and [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) - symbiotically related projects
## Implementation Status
**Status**: Prototype level - basic operations work but not production-ready
**What was built:**
* Projects can be created at the organization/user level (not tied to repositories)
* Issues can be created within these organization-level projects
* Issues can be moved between columns within any project
* Basic Create and View Issue pages function without errors
**What was incomplete:**
* Several features on the Create/View pages were disabled rather than adapted, e.g. due dates
* Repository-specific features (tags, code reviews, etc.) not resolved for org-level context
* Broader issue management features not yet functional
## Discontinuation
Development was discontinued due to:
* Project priorities shifted to other platform features
* Scope of remaining work deemed too large for the anticipated value
* Concerns about maintaining a custom feature divergent from upstream Forgejo
## Repository
**Code**: [edp-forgejo](https://edp.buildth.ing/DevFW/edp-forgejo) (Remark: You must be logged into edp.buildth.ing as the repo is internal)
This is a fork of upstream Forgejo with the organization-level project management changes. The fork is based on Forgejo v11.x (upstream has progressed to at least v13.x).
**Implementation**: Changes to both UI (in TypeScript) and server-side (Golang) functionality.
## Technical Approach
The implementation involved:
* Minimally modifying Forgejo's data model to associate projects with organizations/users instead of repositories
* Adapting issue creation and display logic to work without repository context
* Addressing repository-specific settings (labels, milestones, code review integration) for org-level issues
* UI changes to support project creation and issue management at the organization level
## Integration Points
This feature was developed as an isolated extension to Forgejo. Its code lives in the `edp-forgejo` repository alongside other EDP updates - such as the magenta colour scheme - but in terms of functionality it has minimal overlap with other EDP components.
## Lessons Learned
* Repository-centric design is deeply embedded in Forgejo's architecture
* Maintaining custom features in a fork creates significant maintenance burden
* The scope of fully-functional cross-repository project management is substantial
* This is related to Issues and Repositories being two of the most extensive features in Forgejo
* Alternative approaches (using dedicated project management tools, or simply 'shell' repositories) may be more sustainable
* Clear buy-in is needed for the long term in order to make a change like this viable

View file

@ -0,0 +1,80 @@
---
title: "Operations"
linkTitle: "Operations"
weight: 40
description: >
Operational guides for deploying, monitoring, and maintaining the Edge Developer Platform components.
---
## Operations Overview
This section outlines some of the operational aspects of the Edge Developer
Platform (EDP). The approach emphasizes a "developer operations" mode, primarily
focusing on monitoring and issue resolution rather than traditional operations.
## Deployments
### EDP Clusters
For details on deploying instances of EDP on OTC, see
[this](/docs/edp/deployment/otc/) section.
#### Further Infrastructural References
- OTC Documentation:
- [IPCEI-CIS Confluence - OTC](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1000105031/OTC)
### Edge Connect
The `edge` and `orca` clouds within Edge Connect serve as deployment targets for
EDP applications. These environments are [Gardener](https://gardener.cloud/)
Kubernetes clusters.
For general use, interaction with Edge Connect is intended via its web UI:
<https://hub.apps.edge.platform.mg3.mdb.osc.live>
![Edge Hub](edge-hub.png)
#### Further Infrastructural References
![Gardener](gardener.png)
Cluster-level access is available for addressing operational issues. Details on
obtaining access are provided in the following resources:
- [IPCEI-CIS Confluence - Edge Cloud](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1122869593/Edge+Cloud)
- [IPCEI-CIS Jira - Edge Cloud Access](https://jira.telekom-mms.com/browse/IPCEICIS-6222?focusedId=3411527&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-3411527)
- **Hint:** To authenticate and obtain the cluster `kubectl` context, retrieve
the `kubeconfig` for the `platform` from your Gardener Account Settings. Then,
execute:
```bash
gardenctl target --garden mg3 --project platform --shoot edge
```
## Monitoring & Observability
The `observability.buildth.ing` [cluster](https://observability.buildth.ing/) within the Prod OTC [tenant](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/observability) is designated
for monitoring platform stacks, with visualization primarily through
[Grafana](https://grafana.observability.buildth.ing). Currently, a formal
operational monitoring lifecycle with defined metrics and alerts is not fully
established, reflecting the current developer-centric operational mode.
Login credentials can be found in the `grafana-admin-credentials` secret within the cluster.
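For example (a sketch only: the namespace and secret key are assumptions and depend on the concrete deployment, so inspect the secret first):
```bash
# Sketch: reading the Grafana admin password (namespace and key name
# are assumptions; check `kubectl get secret -A | grep grafana` first).
kubectl get secret grafana-admin-credentials -n observability \
  -o jsonpath='{.data.admin-password}' | base64 -d
```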
> NOTE: The deployed stacks depend on the `is_observability` flag (which includes extra components for observability), set in the `deploy` workflow within the `infra-deploy` repository.
![EDP Grafana Dashboard](edp-grafana.png)
## Maintenance
EDP maintenance follows an issue-driven strategy.
### Updates & Upgrades
Updates are performed on-demand for individual components in
[stacks](/docs/edp/deployment/infrastructure/stacks/).
### Backup & Recovery
Customer data within EDP is regularly backed up. Refer to
[IPCEICIS-5017](https://jira.telekom-mms.com/browse/IPCEICIS-5017) for details.

Binary file not shown.

After

Width:  |  Height:  |  Size: 50 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 177 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 68 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

View file

@ -1,70 +0,0 @@
---
title: "Getting Started"
linkTitle: "Getting Started"
weight: 20
description: >
Quick start guides and onboarding information for the Edge Developer Platform.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: TBD
* **Assignee**: Team
* **Status**: Draft - Structure only
* **Last Updated**: 2025-11-16
* **TODO**:
* [ ] Add concrete quick start steps
* [ ] Include prerequisites and access information
* [ ] Create first application tutorial
{{% /alert %}}
## Welcome to EDP
This section helps you get started with the Edge Developer Platform, whether you're a developer building applications or a platform engineer managing infrastructure.
## Quick Start for Developers
### Prerequisites
* Access to EDP instance
* Git client installed
* kubectl configured (for Kubernetes access)
* Basic knowledge of containers and CI/CD
### Your First Application
1. **Access the Platform**: Log in to Backstage portal
2. **Clone Repository**: Get your application repository from Forgejo/GitLab
3. **Configure Pipeline**: Set up CI/CD in Woodpecker or ArgoCD
4. **Deploy**: Push code and watch automated deployment
### Next Steps
* Explore available components and services
* Review platform documentation and best practices
* Join the developer community
## Quick Start for Platform Engineers
### Platform Access
* Kubernetes cluster access
* Infrastructure management tools
* Monitoring and observability dashboards
### Key Resources
* Platform architecture documentation
* Operational runbooks
* Troubleshooting guides
## Documentation Template
When creating "Getting Started" content for a component:
1. **Prerequisites**: What users need before starting
2. **Step-by-Step Guide**: Clear, numbered instructions
3. **Verification**: How to confirm success
4. **Common Issues**: FAQ and troubleshooting
5. **Next Steps**: Links to deeper documentation

View file

@ -3,79 +3,19 @@ title: "Governance"
linkTitle: "Governance"
weight: 100
description: >
-  Project history, architecture decisions, compliance, and audit information.
+  Project history, decision context, and audit-oriented traceability (primary sources and evidence).
---
-{{% alert title="Draft" color="warning" %}}
-**Editorial Status**: This page is currently being developed.
-* **Jira Ticket**: [TICKET-6737](https://jira.telekom-mms.com/browse/IPCEICIS-6737)
-* **Assignee**: Sophie
-* **Status**: Draft - Structure only
-* **Last Updated**: 2025-11-16
-* **TODO**:
-  * [ ] Migrate relevant ADRs from docs-old
-  * [ ] Document project history and phases
-  * [ ] Add deliverables mapping
-  * [ ] Include compliance documentation
-{{% /alert %}}
## Governance Overview
-This section provides information for auditors, governance teams, and stakeholders who need to understand the project's decision-making process, history, and compliance.
+This chapter is publicly accessible, but it is written from within the IPCEI-CIS project context and therefore builds heavily on internal shared understanding.
-## Architecture Decision Records (ADRs)
+Most terminology, references, and primary sources in this chapter are internal (e.g., Confluence, Jira). Access and context are assumed.
-Documentation of significant architectural decisions made during the project, including context, options considered, and rationale.
+Primary intended audience:
-## Project History
+- IPCEI-CIS auditors
+- IPCEI-CIS project management
+- Project leads of other IPCEI-CIS sub-projects
+- IPCEI-CIS central architecture
-### Development Process
-The EDP was developed using collaborative approaches including mob programming and iterative development with regular user feedback.
-### Project Phases
-* Research & Design
-* Proof of Concept
-* Friendly User Phase
-* Production Rollout
-### Deliverables Mapping
-Mapping to IPCEI-CIS deliverables and project milestones.
-## Compliance & Audit
-### Technology Choices
-Documentation of technology evaluation and selection process for key components (e.g., VictoriaMetrics, GARM, Terraform, ArgoCD).
-### Security Controls
-Overview of implemented security controls and compliance measures.
-### Ticket References
-Cross-references to Jira tickets, epics, and project tracking for audit trails.
-## Community & External Relations
-### Open Source Contributions
-Contributions to the Forgejo community and other open-source projects.
-### External Stakeholders
-User experience research and feedback integration.
-## Documentation Template
-When creating governance documentation:
-1. **Context**: Background and situation
-2. **Decision/Event**: What was decided or what happened
-3. **Rationale**: Why this decision was made
-4. **Alternatives**: Other options considered
-5. **Consequences**: Impact and outcomes
-6. **References**: Links to tickets, discussions, external resources

View file

@ -0,0 +1,88 @@
---
title: "Compliance & audit"
linkTitle: "Compliance"
weight: 30
description: >
Technology choices, auditability, and external relations.
---
## Technology Choices
Documentation of technology evaluation and selection process for key components (e.g., VictoriaMetrics, GARM, Terraform, ArgoCD).
### Forgejo
The internal service is officially designated as the [Edge Developer Platform (EDP)](/docs/edp/forgejo/). It is hosted at **[edp.buildth.ing](https://edp.buildth.ing)**. The domain selection followed a democratic team process to establish a unique identity distinct from standard corporate naming conventions.
**Solution selection:**
The decision to utilize **[Forgejo](https://forgejo.org/)** as the core self-hosted Git service was driven by specific strategic requirements:
- **EU-Based Stewardship:** Forgejo is stewarded by **[Codeberg e.V.](https://docs.codeberg.org/getting-started/what-is-codeberg/)**, a non-profit organization based in Berlin, Germany. This alignment ensures compliance with GDPR and data sovereignty requirements, placing governance under EU jurisdiction rather than US tech entities.
- **License Protection (GPL v3+):** Unlike "Open Core" models, Forgejo uses a copyleft license. This legally protects custom extensions developed in this project (such as GARM support) from being appropriated into proprietary software, ensuring the ecosystem remains open.
- **Open Source Strategy:** The platform aligns with the "Public Money, Public Code" philosophy, mandating that funded developments are returned to the community.
**Access Model:**
The platform operates on a hybrid visibility model:
- **Public Access:** The [`DEVFW-CICD`](https://edp.buildth.ing/DevFW-CICD) organization is publicly accessible, fostering transparency.
- **Private Access:** Sensitive development occurs in restricted organizations (e.g., [`DEVFW`](https://edp.buildth.ing/DevFW)).
- **User Base:** Primary users include the internal development team, with friendly user access granted to the IPCEI team and MMS BT.
## Ticket References
Cross-references to Jira tickets, epics, and project tracking for audit trails.
Current, evidence-backed anchors:
- PoC “parts” and hands-on scope are anchored in Jira and listed explicitly in the PoC design README (see Traceability / Ticket anchors).
- PoC consolidation and governance intent (“traces from tickets to outputs”) is described in the team-process documentation.
- The Forgejo ProjectMgmt prototype documents how tickets, milestones, and boards were structured in Forgejo to run demo slices and work packages.
## Open Source Contributions
Contributions to the Forgejo community and other open-source projects.
### Forgejo
Project extensions were contributed upstream to the Forgejo project on **[Codeberg.org](https://codeberg.org/)**.
**Key Pull Requests:**
- **API Compatibility:** Added GitHub-compatible endpoints for runner registration.
- [PR #9409: Feat: Add endpoints for GARM](https://codeberg.org/forgejo/forgejo/pulls/9409)
- **Webhook Support:** Implemented webhook triggers for workflow events.
- [PR #9803: Feat: Add webhook support for workflow events](https://codeberg.org/forgejo/forgejo/pulls/9803)
- **Ephemeral Runners:** Added support for runners that terminate after a single job.
- [PR #9962: Feat: Support for ephemeral runners](https://codeberg.org/forgejo/forgejo/pulls/9962)
## External Stakeholders
From the beginning, the project used structured stakeholder formats to collect requirements, validate assumptions, and strengthen a product-development mindset beyond “pure delivery”.
Evidence (internal only):
- Stakeholder workshop planning and target groups are captured in Confluence: [eDF Stakeholder Workshops](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/902567168/eDF+Stakeholder+Workshops) (internal/external workshops, goals, and intended outcomes).
- A concrete external workshop session is documented in Confluence: [external stakeholder workshop](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/936478033/external+stakeholder+workshop) (incl. agenda attachment). Note: the page explicitly contains AI-generated content and should be verified.
- An internal workshop session with detailed agenda and feedback is documented in Confluence: [internal stakeholder workshop 7.11.](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/915155061/internal+stakeholder+workshop+7.11.) (also includes AI-generated summary blocks).
### Key decisions and learnings (PII-free synthesis)
The workshop and research artifacts consistently point to a few pragmatic decisions and product learnings (summarized here without personal data[^pii-free]):
- **Onboarding is a primary adoption gate:** prioritize low cognitive load, clear guidance, and a “cold start” path that works without prior context (captured in the Customer Engagement plan and related onboarding-focused activities).
- **Treat navigation and information architecture as product work:** “too many clicks”, missing global orientation cues, and inconsistent navigation were recurring friction points; prioritization leaned towards making “projects / work” more discoverable and first-class (see UX insights log for navigation/IA patterns).
- **Forgejo PM & Docs need either redesign or deliberate scope boundaries:** modern PM/docs workflows and stakeholder reporting expectations were a known gap; this informed decisions about prototyping and scoping improvements vs. relying on integrations.
- **Expect a tension between autonomy and guardrails:** research highlights that developer autonomy is valued while guardrails increase trust and repeatability; positioning matters because “platform” concepts can be perceived as top-down control if not framed carefully.
- **Institutionalize UX feedback loops:** beyond ad-hoc workshops, the work moved towards a repeatable research cadence (panel/community, surveys, and insight logging) to reduce “one-off feedback” risk.
- **Automated UX testing was formalized as a concrete use case:** a dedicated “use case identification” artifact structures automated UX testing around functional correctness, visual consistency/accessibility, and task-based end-to-end “happy path” flow checks (used as input for the later UX work package stream).
[^pii-free]: PII = “personally identifiable information”. “PII-free synthesis” means summarizing patterns, decisions, and learnings without including names, participant lists, direct quotes, or other details that could identify individuals.
Later, a dedicated “user experience” focus was strengthened and formalized via a dedicated work package / deliverable stream that explicitly frames UX validation as an activity with objectives, KPIs, and user validation:
- Work package definition and objectives: [Workpackage e.3 - Sustainable-edge-management-optimized user interface for edge developers](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1165704046/Workpackage+e.3+-+Sustainable-edge-management-optimized+user+interface+for+edge+developers)
- Deliverable (incl. PoC results summary around autonomous UI/UX testing and “happy path” user flows): [Deliverable D66 - Sustainable-edge-management-optimized user interface for edge developers](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1165704082/Deliverable+D66+-+Sustainable-edge-management-optimized+user+interface+for+edge+developers)
See also: the central [References](/docs/governance/references/) index.

View file

@ -0,0 +1,114 @@
---
title: "Project history"
linkTitle: "History"
weight: 10
description: >
Mandate, phases, milestones, and process evolution.
---
## Mandate and product vision
Within the IPCEI-CIS work package for an Edge Developer Framework, the goal of the Developer Framework / EDP effort is to provide services that enable teams to develop, validate, roll out and operate applications efficiently across the edge cloud continuum.
The initial product framing emphasized:
- A coherent developer experience spanning development, testing, validation, rollout, monitoring and (eventually) billing.
- Reuse through templates and "golden paths".
- A portal-centric interaction model where developers consume platform capabilities via stable APIs and UI, not ad-hoc cluster access.
Primary source (internal only): [Confluence: Sub Project Developer Framework](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/856788263/Sub+Project+Developer+Framework)
## Phases and milestones
The following phase model is derived from the documented primary sources referenced in this chapter (Confluence and the referenced repositories). The phrasing focuses on “what changed and why”; it is not a release plan.
Terminology: In this chapter, “Repository” refers to concrete Git repositories used as evidence sources. Unless stated otherwise:
- “Repository (this docs repo)” means this documentation repository (“website-and-documentation”), including `/docs-old/`.
- “Repository (edp-doc)” means the EDP technical documentation repository at (internal only) [edp.buildth.ing/DevFW/edp-doc](https://edp.buildth.ing/DevFW/edp-doc).
- “Confluence” refers to the IPCEI-CIS Confluence space on `confluence.telekom-mms.com` (internal only).
It does not refer to the wider set of platform/service code repositories unless explicitly stated.
### Phase 1 — Discovery & system design (2024)
Focus:
- Establish a reference architecture for an Internal Developer Platform (IDP) style solution.
- Evaluate IDP foundations (explicitly referencing CNOE as a favored baseline), using a “planes” model as conceptual structure.
- Early emphasis on becoming self-hosting quickly (“eat your own dogfood”) and validating end-to-end paths.
Primary source (internal only): [Confluence: System Design](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/856788272/System+Design)
### Phase 2 — Proof of Concept (PoC) definition and scope (2024)
Focus:
- Align on a shared understanding of the product “Developer Platform” (technical and business framing) and what is feasible within 2024.
- Define PoC goals and acceptance criteria, including an end-to-end story centered on:
- an IDP builder/orchestrator running in the target environment (OSC),
- a developer portal (Backstage) for the user experience,
- a “golden path” flow from source → CI/CD → deployment.
Primary sources:
- Confluence (internal only): [Confluence: Proof of Concept 2024](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/902010138/Proof+of+Concept+2024)
- Repository (this repo): docs-old PoC structure summary: [PoC Structure](/docs-old/v1/project/plan-in-2024/poc/)
### Phase 3 — PoC consolidation: deliverables, repository structure, traceability (late 2024)
Focus:
- Package outputs produced since mid-2024 into a demonstrable PoC product.
- Make “traces” explicit from backlog items to concrete outputs (repos, docs, capabilities), to support governance and auditability.
- Establish working agreements for branching, PR-based review, and Definition of Done.
Primary source: repository document [Team and Work Structure](/docs-old/v1/project/team-process/) (docs-old, in this repo).
### Phase 4 — “Forgejo as a Service” and Foundry-based provisioning (2025)
Focus:
- Expand from “PoC capabilities” toward a service milestone around Forgejo, including supporting services (persistence, backups, caching, indexing, SSO, runners, observability).
- Provision Foundry/EDP resources via Infrastructure-as-Code, initially in OTC.
- Address reliability and migration risks while moving from earlier instances to production endpoints.
Evidence:
- Confluence (internal only): [Confluence: Forgejo as a service](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/999903971/Forgejo+as+a+service) (service decomposition and operational concerns)
- ADR: “Add Scaleway as Cloud resource Provider” explicitly places Foundry/EDP IaC provisioning in mid-April 2025 and captures platform issues and mitigation.
- Postmortem (2025-07-14) documents downtime rooted in an incomplete Foundry migration and the need for explicit migration plans.
### Phase 5 — EdgeConnect integration: deployment target + SDK/tooling (ongoing)
Focus:
- Treat EdgeConnect as a sovereign deployment target operated outside EDP, and provide consumable tooling to integrate it into delivery workflows.
- Provide reusable automation components (SDK, CLI client, Terraform provider, Forgejo Actions) so that EdgeConnect is used consistently through stable entry points.
- Use EdgeConnect for deploying project artifacts (including this documentation website) to edge cloudlets.
Evidence:
- Repository (this repo): EdgeConnect documentation under `/docs/edgeconnect/` (SDK/client/actions).
- Repository (this repo): docs-old “Publishing to Edge” describes the documentation deployment via `edgeconnectdeployment.yaml`.
## Development and delivery process evolution
Across the phases above, delivery methods and team process evolved in response to scaling and operational needs:
- Scrum ceremonies and working agreements are documented in Confluence (internal only): [Confluence: Scrum working agreement](https://confluence.telekom-mms.com/pages/viewpage.action?pageId=977833214).
- Collaborative delivery techniques (mob / ensemble programming) appear as an explicit practice, including in incident documentation (“Team: Mob”) and internal guidance on sustainable mobbing models.
### Team enablement and skill development (PII-free synthesis)
This section summarizes team enablement and skill development, based on the project's documented sources, and is presented without personal data[^pii-free]:
- **Baseline skill assumptions**: Kubernetes and GitOps are foundational. The platform architecture explicitly uses Kubernetes and a CNOE-derived stacks concept (see [Platform Orchestration](/docs/edp/deployment/basics/orchestration/)).
- **Enablement/training happened as part of delivery** (not a separate “academy”): retrospectives and planning explicitly track knowledge-sharing sessions and training topics (internal only, see References).
- **Kubernetes enablement**: a Kubernetes introduction training was planned as part of team onboarding/enablement activities (internal only; see References).
- **Go as a relevant skill**: multiple components are implemented in Golang (e.g., EdgeConnect tooling, Forgejo). Internal material discusses Golang developer skill profiles; this docs repo does not contain a single, explicit record of a dedicated “Go training” event.
- **Skill leveling via collaboration**: Mob Programming is used as a deliberate practice for knowledge sharing and onboarding less experienced developers (see [Forgejo docs entry](/docs/edp/forgejo/)).
[^pii-free]: PII = “personally identifiable information”. “PII-free synthesis” means summarizing patterns and practices without including names, participant lists, or direct quotes that could identify individuals.
See also: the central [References](/docs/governance/references/) index.

View file

@ -0,0 +1,52 @@
---
title: "References"
linkTitle: "References"
weight: 40
description: >
Index of primary sources and evidence links used across the Governance chapter.
---
This list is an index of links referenced across the Governance chapter, plus the intended meaning (“semantics”) of each link.
- (internal only) Confluence: [Sub Project Developer Framework](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/856788263/Sub+Project+Developer+Framework) — mandate, quick links, and high-level framing.
- (internal only) Confluence: [System Design](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/856788272/System+Design) — architecture framing (planes model, baseline preferences, early decision drivers).
- (internal only) Confluence: [Proof of Concept 2024](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/902010138/Proof+of+Concept+2024) — PoC scope, goals, and evaluation/acceptance framing.
- (internal only) Confluence: [Forgejo as a service](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/999903971/Forgejo+as+a+service) — service decomposition and operational concerns used as evidence for Phase 4.
- (internal only) Confluence: [Scrum working agreement](https://confluence.telekom-mms.com/pages/viewpage.action?pageId=977833214) — delivery process reference.
- (internal only) Confluence: [Knowledge sharing sessions](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/999672269/Knowledge+sharing+sessions) — planning table of internal enablement sessions (training topics and facilitation). Note: contains personal data; use only for PII-free synthesis.
- (internal only) Confluence: [Retro: How to improve our work](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/895683955/Retro+How+to+improve+our+work) — retrospective notes including explicit calls for Kubernetes training sessions and documentation/working-agreement improvements. Note: contains personal data; use only for PII-free synthesis.
- (internal only) Confluence: [Retro 15/04/25](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/999671293/Retro+15+04+25) — retrospective notes showing iteration on ticket sizing, async refinement, and meeting overhead; also references “Knowledge sharing sessions”. Note: contains personal data; use only for PII-free synthesis.
- (internal only) Confluence: [Retro 13/05/25](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/999891618/Retro+13+05+25) — retrospective notes explicitly discussing mobbing practices (roles, breaks, splitting mob groups) and knowledge exchange. Note: contains personal data; use only for PII-free synthesis.
- (internal only) Confluence: [Research Paper Mob Programming](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1131139130/Research+Paper+Mob+Programming) — internal background material on mob programming practices and trade-offs. Note: treat as internal working material.
- (internal only) Confluence: [eDF Stakeholder Workshops](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/902567168/eDF+Stakeholder+Workshops) — plan for internal/external stakeholder workshops, target groups, and intended outcomes.
- (internal only) Confluence: [internal stakeholder workshop 7.11.](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/915155061/internal+stakeholder+workshop+7.11.) — internal stakeholder session agenda and captured feedback (contains AI-generated summary blocks).
- (internal only) Confluence: [external stakeholder workshop](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/936478033/external+stakeholder+workshop) — external stakeholder session notes (contains agenda attachment and AI-generated summary blocks).
- (internal only) Confluence: [Workpackage e.3 - Sustainable-edge-management-optimized user interface for edge developers](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1165704046/Workpackage+e.3+-+Sustainable-edge-management-optimized+user+interface+for+edge+developers) — UX-focused workpackage with objectives, KPIs, and “validation with users” framing.
- (internal only) Confluence: [Deliverable D66 - Sustainable-edge-management-optimized user interface for edge developers](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1165704082/Deliverable+D66+-+Sustainable-edge-management-optimized+user+interface+for+edge+developers) — deliverable page including PoC results summary for autonomous UI/UX testing.
- (internal only) Confluence: [Customer Engagement](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1040844220/Customer+Engagement) — research planning cadence (who/why/when), plus synthesized insights/assumptions used to justify PII-free learnings summaries.
- (internal only) Confluence: [UX Insights and Learnings](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1033832272/UX+Insights+and+Learnings) — running log of UX observations and recommended improvements (useful for evidence-backed, non-PII synthesis of recurring friction patterns).
- (internal only) Confluence: [[IPCEICIS-3703] Use Case identification for automated UX testing](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/1055949846/IPCEICIS-3703+Use+Case+identification+for+automated+UX+testing) — structured prioritization of automated UX testing scenarios (happy-path smoke flows, functional correctness, visual/accessibility checks). Note: treat as internal working material; do not replicate embedded credentials/content.
- (internal only) Jira: [IPCEICIS-368](https://jira.telekom-mms.com/browse/IPCEICIS-368) — PoC part 1 traceability anchor.
- (internal only) Jira: [IPCEICIS-765](https://jira.telekom-mms.com/browse/IPCEICIS-765) — PoC part 2.1 traceability anchor.
- (internal only) Jira: [IPCEICIS-766](https://jira.telekom-mms.com/browse/IPCEICIS-766) — PoC part 2.2 traceability anchor.
- (internal only) Jira: [IPCEICIS-514](https://jira.telekom-mms.com/browse/IPCEICIS-514) — PoC golden path template traceability anchor.
- (internal only) Jira: [IPCEICIS-759](https://jira.telekom-mms.com/browse/IPCEICIS-759) — PoC example app traceability anchor.
- (internal only) Jira: [IPCEICIS-760](https://jira.telekom-mms.com/browse/IPCEICIS-760) — PoC CI/CD traceability anchor.
- (internal only) Jira: [IPCEICIS-761](https://jira.telekom-mms.com/browse/IPCEICIS-761) — PoC telemetry traceability anchor.
- (internal only) Jira: [IPCEICIS-762](https://jira.telekom-mms.com/browse/IPCEICIS-762) — PoC infrastructure traceability anchor.
- (internal only) Jira: [IPCEICIS-763](https://jira.telekom-mms.com/browse/IPCEICIS-763) — PoC additional items traceability anchor.
- (internal only) Jira: [IPCEICIS-767](https://jira.telekom-mms.com/browse/IPCEICIS-767) — PoC orchestration extension traceability anchor.
- (internal only) Jira: [IPCEICIS-768](https://jira.telekom-mms.com/browse/IPCEICIS-768) — PoC part 3 (user documentation) traceability anchor.
- Documentation site: [PoC Structure](/docs-old/v1/project/plan-in-2024/poc/) — published docs-old summary of the PoC structure.
- Documentation site: [Team and Work Structure](/docs-old/v1/project/team-process/) — published docs-old description of process and traceability intent.
- Documentation site: [Forgejo docs entry](/docs/edp/forgejo/) — documentation entry point for the Forgejo/EDP component.
- Service entry point: [edp.buildth.ing](https://edp.buildth.ing) — EDP Forgejo instance.
- Service org: [edp.buildth.ing/DevFW-CICD](https://edp.buildth.ing/DevFW-CICD) — public organization referenced for transparency.
- Service org: [edp.buildth.ing/DevFW](https://edp.buildth.ing/DevFW) — private organization reference.
- (internal only) Repository (edp-doc): [edp.buildth.ing/DevFW/edp-doc](https://edp.buildth.ing/DevFW/edp-doc) — EDP technical documentation repository (ADRs, postmortems, PoC process), used as evidence sources in this chapter.
- Upstream project: [forgejo.org](https://forgejo.org/) — Forgejo project homepage.
- Upstream governance: [Codeberg e.V.](https://docs.codeberg.org/getting-started/what-is-codeberg/) — referenced as steward/governance body.
- Upstream contribution: [PR #9409](https://codeberg.org/forgejo/forgejo/pulls/9409) — GARM endpoints contribution.
- Upstream contribution: [PR #9803](https://codeberg.org/forgejo/forgejo/pulls/9803) — webhook workflow events contribution.
- Upstream contribution: [PR #9962](https://codeberg.org/forgejo/forgejo/pulls/9962) — ephemeral runners contribution.
- Upstream hosting: [Codeberg.org](https://codeberg.org/) — hosting platform used for upstream Forgejo contributions.

View file

@ -0,0 +1,61 @@
---
title: "Traceability"
linkTitle: "Traceability"
weight: 20
description: >
Deliverables mapping, evidence model, matrix overview, and ticket anchors.
---
## Deliverables mapping
This section captures the traceability model and evidence-backed anchors that connect capabilities/phases to concrete outputs (repositories, documentation pages, deployed services). It does not yet claim a complete IPCEI deliverable-ID → epic → artifact mapping.
## Traceability model (used for audit)
The working model (used throughout the PoC) is:
- Deliverable / capability definition (often in Confluence) →
- Ticket(s) in Jira / Forgejo →
- Implementation via commits + pull requests →
- Concrete output (repo, docs page, automation component, deployed service) →
- Evidence (ADR / postmortem / runbook / deployment config) showing real operation.
Primary sources for the traceability intent:
- Repository (edp-doc): PoC design README lists Jira parts and calls for a mapping table from “parts” to upstream references.
- Repository (edp-doc): team-process documents emphasize “traces from tickets to outputs” and an outcome summary in the ticket as part of Definition of Done.
## Matrix (evidence-backed overview)
This matrix is intended to be directly consumable: it summarizes what can already be evidenced from the current sources. It is an overview across phases/capabilities; it is not the full IPCEI deliverable-ID mapping.
| Phase | What is delivered / proven | Concrete outputs (where) | Evidence / trace hooks |
| --- | --- | --- | --- |
| Phase 1 — Discovery & system design | Reference architecture framing and decision drivers | Confluence (internal only): [System Design](https://confluence.telekom-mms.com/spaces/IPCEICIS/pages/856788272/System+Design) (planes model, CNOE baseline preference, dogfooding) | Architecture notes are the earliest “why” evidence for later component choices |
| Phase 2 — PoC definition | PoC scope, acceptance criteria, end-to-end “golden path” story | Repository (this docs repo): PoC structure page (docs-old) and Repository (edp-doc): PoC design README | Jira parts exist for user docs + hands-on building blocks (see “Ticket anchors” below) |
| Phase 3 — PoC consolidation & traceability | A packaged PoC with explicit traceability from backlog to outputs | Repository (this docs repo): PoC team-process guidance (Definition of Done, PR review, “traces”) | “Outcome” is expected to be summarized in the ticket with links to PR/commit artifacts |
| Phase 4 — Forgejo-as-a-Service + Foundry provisioning | A service milestone with operational concerns (persistence, backups, SSO, runners) and IaC provisioning | ADR (Scaleway as additional install channel) + postmortem (Foundry migration downtime) | Concrete operational evidence that architecture and migration risks were handled as governance work |
| Phase 5 — EdgeConnect integration | EdgeConnect as delivery target and integration tooling | Repository (this docs repo): EdgeConnect docs section + docs-old “Publishing to Edge” (deployment yaml) | Deployment configuration and workflow description provide concrete “proof of use” |
## Ticket anchors (PoC)
The PoC design README explicitly provides Jira anchors that can be used to build a full traceability matrix:
- Part 1 (User documentation): [IPCEICIS-368](https://jira.telekom-mms.com/browse/IPCEICIS-368)
- Part 2.1 (Local IdP creation): [IPCEICIS-765](https://jira.telekom-mms.com/browse/IPCEICIS-765)
- Part 2.2 (OSC IdP creation): [IPCEICIS-766](https://jira.telekom-mms.com/browse/IPCEICIS-766)
- Part 2.x.1 (Golden Path template): [IPCEICIS-514](https://jira.telekom-mms.com/browse/IPCEICIS-514)
- Part 2.x.2 (Fibonacci example app): [IPCEICIS-759](https://jira.telekom-mms.com/browse/IPCEICIS-759)
- Part 2.x.3 (Forgejo Actions CI/CD): [IPCEICIS-760](https://jira.telekom-mms.com/browse/IPCEICIS-760)
- Part 2.x.4 (Telemetry): [IPCEICIS-761](https://jira.telekom-mms.com/browse/IPCEICIS-761)
- Part 2.x.5 (OSC infrastructure): [IPCEICIS-762](https://jira.telekom-mms.com/browse/IPCEICIS-762)
- Part 2.x.6 (Additional items): [IPCEICIS-763](https://jira.telekom-mms.com/browse/IPCEICIS-763)
- Part 2.3 (Extended local orchestration): [IPCEICIS-767](https://jira.telekom-mms.com/browse/IPCEICIS-767)
- Part 3 (User documentation): [IPCEICIS-768](https://jira.telekom-mms.com/browse/IPCEICIS-768)
Related docs-old pages referenced by the history and matrix:
- [PoC Structure](/docs-old/v1/project/plan-in-2024/poc/)
- [Team and Work Structure](/docs-old/v1/project/team-process/)
See also: the central [References](/docs/governance/references/) index.

View file

@ -1,74 +0,0 @@
---
title: "Operations"
linkTitle: "Operations"
weight: 40
description: >
Operational guides for deploying, monitoring, and maintaining the Edge Developer Platform.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: TBD
* **Assignee**: Team
* **Status**: Draft - Structure only
* **Last Updated**: 2025-11-16
* **TODO**:
* [ ] Add deployment procedures
* [ ] Document monitoring setup and dashboards
* [ ] Include troubleshooting guides
* [ ] Add maintenance procedures
{{% /alert %}}
## Operations Overview
This section covers operational aspects of the Edge Developer Platform including deployment, monitoring, troubleshooting, and maintenance.
## Deployment
### Platform Deployment
Instructions for deploying EDP components to your infrastructure.
### Application Deployment
Guides for deploying applications to the platform using available deployment methods.
## Monitoring & Observability
### Metrics
Access and interpret platform and application metrics using VictoriaMetrics, Prometheus, and Grafana.
### Logging
Log aggregation and analysis for troubleshooting and audit purposes.
### Alerting
Configure alerts for critical platform events and application issues.
## Troubleshooting
Common issues and their solutions for platform operations.
## Maintenance
### Updates & Upgrades
Procedures for updating platform components and maintaining system health.
### Backup & Recovery
Data backup strategies and disaster recovery procedures.
## Documentation Template
When creating operational documentation:
1. **Purpose**: What this operation achieves
2. **Prerequisites**: Required access, tools, and knowledge
3. **Procedure**: Step-by-step instructions with commands
4. **Verification**: How to confirm successful completion
5. **Rollback**: How to revert if needed
6. **Troubleshooting**: Common issues and solutions

View file

@ -1,61 +0,0 @@
---
title: "Platform Overview"
linkTitle: "Platform Overview"
weight: 10
description: >
High-level overview of the Edge Developer Platform (EDP), its purpose, and product structure.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: TBD
* **Assignee**: Team
* **Status**: Draft - Initial structure created
* **Last Updated**: 2025-11-16
* **TODO**:
* [ ] Add detailed product structure from excalidraw
* [ ] Include platform maturity matrix
* [ ] Add links to component pages as they are created
{{% /alert %}}
## Purpose
The Edge Developer Platform (EDP) is a comprehensive DevOps platform designed to enable developers to build, deploy, and operate cloud-native applications at the edge. It provides an integrated suite of tools and services covering the entire software development lifecycle.
## Target Audience
* **Developers**: Build and deploy applications using standardized workflows
* **Platform Engineers**: Operate and maintain the platform infrastructure
* **DevOps Teams**: Implement CI/CD pipelines and automation
* **Auditors**: Verify platform capabilities and compliance
## Product Structure
EDP consists of multiple integrated components organized in layers:
### Core Platform Services
The foundation layer provides essential platform capabilities including source code management, CI/CD, and container orchestration.
### Developer Experience
Tools and services that developers interact with directly to build, test, and deploy applications.
### Infrastructure & Operations
Infrastructure automation, observability, and operational tooling for platform management.
## Platform Maturity
Components in EDP have different maturity levels:
* **Production**: Fully integrated and supported for production use
* **Beta**: Available for testing with potential changes
* **Experimental**: Early stage, subject to significant changes
## Getting Started
For quick start guides and onboarding information, see the [Getting Started](../getting-started/) section.
For detailed component information, explore the [Components](../components/) section.

View file

@@ -85,15 +85,15 @@ specification {
model {
  mySystem = system 'My System' {
    description 'System description'
    backend = service 'Backend API' {
      description 'REST API service'
    }
    db = database 'Database' {
      description 'PostgreSQL database'
    }
    backend -> db 'reads/writes'
  }
}
@@ -101,9 +101,9 @@ model {
views {
  view systemOverview {
    title "System Overview"
    include mySystem
    autoLayout TopBottom
  }
}
@@ -139,7 +139,7 @@ Parameters:
1. **Edit Models**
   - Modify `.c4` files in `models/` or `views/`
2. **Preview Changes**
   ```bash
   cd resources/edp-likec4  # or doc-likec4
@@ -216,7 +216,7 @@ specification {
  element system
  element container
  element component
  relationship async
  relationship sync
}
@@ -231,19 +231,19 @@ model {
  customer = person 'Customer' {
    description 'End user'
  }
  system = system 'My System' {
    frontend = container 'Frontend' {
      description 'React application'
    }
    backend = container 'Backend' {
      description 'Node.js API'
    }
    frontend -> backend 'API calls'
  }
  customer -> system.frontend 'uses'
}
```
@@ -256,17 +256,17 @@ Create diagrams:
views {
  view overview {
    title "System Overview"
    include *
    autoLayout TopBottom
  }
  view systemDetail {
    title "System Details"
    include system.*
    autoLayout LeftRight
  }
}

View file

@@ -76,16 +85,15 @@ git push
content/en/
├── _index.md                    # Homepage
├── docs/
-│   ├── architecture/           # Architecture documentation
-│   │   └── highlevelarch.md
-│   ├── documentation/          # Documentation guides
-│   │   ├── local-development.md
-│   │   ├── testing.md
-│   │   └── cicd.md
-│   ├── decisions/              # ADRs (Architecture Decision Records)
-│   └── v1/                     # Legacy documentation
-└── blog/                       # Blog posts
-    └── 20250401_review.md
+│   ├── _index.md               # Docs landing page
+│   ├── edgeconnect/            # Edge Connect documentation
+│   ├── edp/                    # EDP platform documentation
+│   └── governance/             # Governance documentation
+├── docs-old/                   # Legacy documentation (archived)
+└── blog/                       # Blog posts
+    ├── 20250401_review.md
+    ├── 20251027_important_links.md
+    └── 240823-archsession.md
```
## Writing Documentation
@@ -211,6 +210,88 @@ task likec4:generate
grep -r "^view " resources/edp-likec4/ --include="*.c4"
```
## Using LikeC4 with AI Agents (MCP Server)
The LikeC4 Model Context Protocol (MCP) server allows AI agents to query and navigate your architecture models interactively.
### Configuring AI Agents
Create or edit `.vscode/mcp.json` in your workspace:
```json
{
"servers": {
"likec4": {
"type": "sse",
"url": "http://localhost:33335/mcp"
}
}
}
```
This configuration gives GitHub Copilot and other MCP-compatible AI agents access to LikeC4 tools for querying your architecture models.
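If you would rather not keep an HTTP server running, a stdio transport should also work. This is a minimal sketch, assuming `likec4 mcp` falls back to stdio when `--http` is omitted; verify against the LikeC4 CLI help before relying on it:
```jsonc
{
  "servers": {
    "likec4": {
      // Assumption: stdio is the default transport when --http is not passed.
      "type": "stdio",
      "command": "npx",
      "args": ["likec4", "mcp", "resources/edp-likec4"]
    }
  }
}
```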
### Starting the MCP Server
Start the LikeC4 MCP server locally:
```bash
# in the project root of the documentation repo
npx likec4 mcp resources/edp-likec4 --http
```
The server runs on `http://localhost:33335/mcp` by default.
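Before pointing an agent at it, you can confirm that something is listening on the endpoint. A minimal sketch; the exact status code is not guaranteed, but any HTTP response line means the server is up:
```bash
# Open the endpoint briefly and print the response status line.
# Streaming connections stay open, so cap the request with --max-time.
curl -s -i --max-time 2 http://localhost:33335/mcp | head -n 1
```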
### Querying Architecture Models
Once connected, you can ask the AI agent questions like:
- **"In which model file is the resource edp.garm specified?"**
- Returns: `resources/edp-likec4/models/containers/garm.c4`
- **"Show me all views that include the forgejo element"**
- Lists all views containing the forgejo element
- **"What relationships does edp.garm have?"**
- Shows incoming and outgoing relationships
- **"List all elements in the architecture project"**
- Provides a complete overview of elements
### Available MCP Tools
The LikeC4 MCP server provides these tools to AI agents:
- `list-projects` - List all LikeC4 projects
- `search-element` - Search elements by id/title/kind/tags
- `read-element` - Get detailed element information
- `read-view` - Get view details (nodes/edges)
- `read-deployment` - Get deployment node information
- `find-relationships` - Find relationships between elements
- `read-project-summary` - Get project overview
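Under the hood these are ordinary MCP tool invocations: the agent sends a JSON-RPC `tools/call` request over the configured transport. The envelope below follows the MCP specification, but the argument name for `search-element` is an assumption for illustration; check the tool's published input schema:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search-element",
    "arguments": { "search": "edp.garm" }
  }
}
```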
### Use Cases
**Finding definitions:**
```
"Where is edp.forgejo defined?"
```
**Understanding relationships:**
```
"What components does edp.garm contain?"
```
**Exploring views:**
```
"Which views show the deployment architecture?"
```
**Documentation validation:**
```
"Check if all elements in the forgejoGarmInteraction view are documented"
```
## Available Tasks
View all tasks:

Some files were not shown because too many files have changed in this diff.