Compare commits


No commits in common. "main" and "main" have entirely different histories.
main ... main

157 changed files with 4392 additions and 18154 deletions

View file

@@ -1,74 +0,0 @@
# Technical Documentation Guidelines
You are an expert technical writer with deep expertise in creating clear, concise, and well-structured documentation. Your goal is to produce documentation that flows naturally while maintaining technical accuracy.
## Core Principles
### 1. Conciseness and Clarity
- Use clear, direct language
- Eliminate unnecessary words and redundancy
- Make every sentence count
- Prefer active voice over passive voice
- Use short paragraphs (3-5 sentences maximum)
### 2. Structure and Organization
- Start with the most important information
- Use logical hierarchies with consistent heading levels
- Group related concepts together
- Provide clear navigation through table of contents when appropriate
- Use lists for sequential steps or related items
### 3. Flow and Readability
- Ensure smooth transitions between sections
- Connect ideas logically
- Build complexity gradually
- Use examples to illustrate concepts
- Maintain consistent terminology throughout
### 4. Technical Accuracy
- Be precise with technical terms
- Include relevant code examples that are tested and functional
- Document edge cases and limitations
- Provide accurate command syntax and parameters
- Link to related documentation when appropriate
## Documentation Structure
### Standard Document Layout
1. **Title** - Clear, descriptive heading
2. **Overview** - Brief introduction (2-3 sentences)
3. **Prerequisites** - What the reader needs to know or have
4. **Main Content** - Organized in logical sections
5. **Examples** - Practical, real-world use cases
6. **Troubleshooting** - Common issues and solutions (when applicable)
7. **Related Resources** - Links to additional documentation
### Code Examples
- Provide complete, runnable examples
- Include comments for complex logic
- Show expected output
- Use consistent formatting and syntax highlighting
### Commands and APIs
- Show full syntax with all parameters
- Indicate required vs optional parameters
- Provide parameter descriptions
- Include return values or output format
## Writing Style
- **Be direct**: "Configure the database" not "You should configure the database"
- **Be specific**: "Set timeout to 30 seconds" not "Set an appropriate timeout"
- **Be consistent**: Use the same terms for the same concepts
- **Be complete**: Don't assume implicit knowledge; explain as needed
## When Uncertain
**If you don't know something or need clarification:**
- Ask specific questions
- Request examples or use cases
- Clarify technical details or edge cases
- Verify terminology and naming conventions
- Confirm target audience and their expected knowledge level
Your expertise is in writing excellent documentation. Use your judgment to create documentation that serves the reader's needs effectively. When in doubt, ask rather than guess.

View file

@@ -1 +0,0 @@
use flake

View file

@@ -1,8 +1,8 @@
 name: Hugo Site Tests
 on:
-#  push:
-#    branches: [ main ]
+  push:
+    branches: [ main ]
   pull_request:
     branches: [ main ]

.gitignore
View file

@@ -35,7 +35,3 @@ Thumbs.db
 npm-debug.log*
 yarn-debug.log*
 yarn-error.log*
-### direnv ###
-.direnv
-.envrc

View file

@@ -4,60 +4,20 @@ Documentation for the edgeDeveloperFramework (eDF) project and the resulting Edg
 ## Quick Start
-### Development Environment
-Install and enter [Devbox](https://www.jetify.com/devbox):
 ```bash
-curl -fsSL https://get.jetify.com/devbox | bash
-devbox shell
+# Install dependencies
+task deps
+# Start local development server
+task serve
+# Run tests
+task test
+# Build production site
+task build
 ```
-Devbox installs Hugo, Node.js, Go, and all required tools. First-time setup requires sudo for the Nix daemon (one-time only).
-To avoid entering the shell, run commands directly:
-```bash
-devbox run task serve
-```
-### Local Development
-```bash
-task deps:install  # Install dependencies
-task serve         # Start dev server at http://localhost:1313 (hot-reloading)
-task test:quick    # Run tests
-task build         # Build production site
-```
-## Architecture Diagrams (LikeC4)
-[LikeC4](https://likec4.dev/) generates interactive architecture diagrams from text-based [C4 models](https://c4model.com/). Create or edit diagrams:
-```bash
-cd resources/edp-likec4  # Platform architecture
-npm install              # First time only
-npm start                # Preview at http://localhost:5173
-```
-Edit `.c4` files to define systems and views. Generate web components for Hugo:
-```bash
-task likec4:generate
-```
-Embed in Markdown pages:
-```markdown
-{{</* likec4-view view="overview" project="architecture" */>}}
-```
-See [LikeC4 documentation](https://likec4.dev/) for detailed syntax and [README-likec4.md](doc/README-likec4.md) for project-specific details.
-## Deployment
-Deployment is automatic via ArgoCD. Push to `main` triggers CI/CD build and deployment within 5-10 minutes.
-**Infrastructure Configuration:**
-- ArgoCD is configured within [stacks-instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances/src/branch/main/otc/edp.buildth.ing/registry/docs.yaml)
-- Documentation stack definition: [./argocd-stack/](https://edp.buildth.ing/DevFW-CICD/website-and-documentation/src/branch/main/argocd-stack)
 ## Documentation
 * [Developer Guide](doc/README-developer.md)

View file

@@ -43,7 +43,7 @@ tasks:
       - deps:ensure-npm
       - build:generate-info
     cmds:
-      - "{{.HUGO_CMD}} server --noHTTPCache"
+      - "{{.HUGO_CMD}} server"

   clean:
     desc: Clean build artifacts
@@ -166,14 +166,14 @@ tasks:
     generates:
      - node_modules/.package-lock.json
     cmds:
-      - "{{.NPM_CMD}} ci"
+      - "{{.NPM_CMD}} install"
     status:
      - test -d node_modules

   deps:install:
     desc: Install all dependencies
     cmds:
-      - "{{.NPM_CMD}} ci"
+      - "{{.NPM_CMD}} install"
      - "{{.HUGO_CMD}} mod get -u"
      - "{{.HUGO_CMD}} mod tidy"

View file

@@ -1,28 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: docs
  namespace: argocd
  labels:
    env: prod
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
  destination:
    name: in-cluster
    namespace: docs
  syncOptions:
    - CreateNamespace=true
  sources:
    - repoURL: https://edp.buildth.ing/DevFW-CICD/website-and-documentation
      targetRevision: HEAD
      path: argocd-stack/helm
      helm:
        parameters:
          - name: image.tag
            value: $ARGOCD_APP_REVISION_SHORT

View file

@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View file

@@ -1,24 +0,0 @@
apiVersion: v2
name: helm
description: Deploy documentation to edp.buildth.ing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

View file

@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: docs
  name: docs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs
  strategy: {}
  template:
    metadata:
      labels:
        app: docs
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          name: docs
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: docs
spec:
  selector:
    app: docs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docs
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: docs.edp.buildth.ing
      http:
        paths:
          - backend:
              service:
                name: docs
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - docs.edp.buildth.ing
      secretName: docs-edp-buildth-ing-tls

View file

@@ -1,4 +0,0 @@
image:
  repository: edp.buildth.ing/devfw-cicd/website-and-documentation
  tag: "UNKNOWN_TAG"

View file

@@ -16,22 +16,22 @@ Built on open standards and battle-tested technologies.
 {{% blocks/section color="dark" type="row" %}}
-{{% blocks/feature icon="fa-solid fa-diagram-project" title="Edge Developer Platform (EDP)" url="/docs/edp/" %}}
-Understand EDP as the developer platform hub (Forgejo, CI/CD, deployment, operations) and how it connects inner loop and outer loop workflows.
-**Dive into EDP docs →**
+{{% blocks/feature icon="fa-solid fa-diagram-project" title="Architecture Documentation" url="/docs/architecture/" %}}
+Explore the platform's architecture with interactive C4 diagrams. Understand the system design, components, and deployment topology.
+**Dive into the architecture →**
 {{% /blocks/feature %}}
-{{% blocks/feature icon="fa-solid fa-cloud" title="EdgeConnect Cloud" url="/docs/edgeconnect/" %}}
-Learn what EdgeConnect is, how it is consumed via stable entry points (CLI, SDK, Terraform), and how EDP integrates with it as a deployment target.
-**Explore EdgeConnect →**
+{{% blocks/feature icon="fa-solid fa-book-open" title="Technical Writer Guide" url="/docs/documentation/" %}}
+Learn how to contribute to this documentation. Write content, test locally, and understand the CI/CD pipeline.
+**Start documenting →**
 {{% /blocks/feature %}}
-{{% blocks/feature icon="fa-solid fa-scale-balanced" title="Governance" url="/docs/governance/" %}}
-Read the project history, decision context, and audit-oriented traceability to primary sources and repository artifacts.
-**Go to Governance →**
+{{% blocks/feature icon="fa-solid fa-archive" title="Legacy Documentation (v1)" url="/docs/v1/" %}}
+Access the previous version of our documentation including historical project information and early architecture decisions.
+**Browse v1 docs →**
 {{% /blocks/feature %}}
 {{% /blocks/section %}}
@@ -76,11 +76,11 @@ Read the project history, decision context, and audit-oriented traceability to primary sources and repository artifacts.
 ## Get Started
-Whether you're a **platform engineer**, **application developer**, or **auditor**, we have resources for you:
-* 📖 Start at [Documentation](/docs/)
-* 🧭 Read [Edge Developer Platform (EDP)](/docs/edp/)
-* ☁️ Read [EdgeConnect Cloud](/docs/edgeconnect/)
-* 🧾 Read [Governance](/docs/governance/)
+Whether you're a **platform engineer**, **application developer**, or **technical writer**, we have resources for you:
+* 📖 Read the [Documentation](/docs/) to understand the platform
+* 🏗️ Explore [Platform Components](/docs/components/) and their usage
+* ✍️ Learn [How to Document](/docs/DOCUMENTATION-GUIDE/) and contribute
+* 🔍 Browse [Legacy Documentation](/docs-old/) for historical context
 {{% /blocks/section %}}

View file

@@ -0,0 +1,84 @@
# Review
1) 09:35 Marco
business plan
issue: value of software, depreciation
FTE: around 100 overall, 3 full teams of developers
tax discussion
10:04 discussions
2) 10:10 Julius
3) 10:27 Sebastiano - DevDay until 10:40
larger fonts for votes - the questions should be readable!
"devops is dead ..." claim
4) Stephan until 10:55
5) Christopher 10:58
6) Robert 11:11
* app
* devops-pipelines
* edp deployed in osc
7) Michal has nothing to show
8) Evgenii wants to finish -- 11:30
9) Patrick 11:32
====
project management meeting
workshops, external teams
customer episodes
who-what-where principles
|
roles, personas
keep capturing the user's perspective, a developer's inner drive, my own expectations of the EDP
(can we pull that off, would I want to work with it)
climb to level 2
hold workshops
bring senior developers on board
level 1: source code structure, building artifacts, revision control, branching model, e.g. pull requesting, software tests, local debugging
level 2: automation of artifact builds, version management, milestones, tickets, issues, security compliance
level 3: deployment to stages, feedback on pipeline behavior
level 4: feedback on app behavior (logs, metrics, alerts) + development loop
level 5: 3rd level support in production
level 1: coding
source code structure, building artifacts, revision control, branching model, e.g. pull requesting, software tests, local debugging
level 2: reaching the outside world with output
automation of artifact builds, version management, milestones, tickets, issues, security compliance
level 3: run the app anywhere
deployment to stages, feedback on pipeline behavior
level 4: monitoring the app
feedback on app behavior (logs, metrics, alerts) + development loop
level 5: support
3rd level support in production (or any outer stage)
sprint 4
leveraging pillar
own-app pillar
chore pillar

View file

@@ -0,0 +1,6 @@
---
title: Important links
weight: 20
---
* Gardener login to Edge and orca cluster: IPCEICIS-6222

View file

@@ -0,0 +1,40 @@
---
title: Architecture session
weight: 20
---
## Platform Generics
* https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms
* https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
* https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas
## Reference Architecture + Portfolio
* https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures
* https://humanitec.com/reference-architectures
* https://www.youtube.com/watch?v=AimSwK8Mw-U
## Platform Portfolio
### Viktor Farcic
* https://technologyconversations.com/
* https://technologyconversations.com/2024/01/08/the-best-devops-tools-platforms-and-services-in-2024/
### Internal developer platform
* https://internaldeveloperplatform.org/core-components/
### Workflow / CI/CD
* https://cnoe.io/blog/optimizing-data-quality-in-dev-portals

View file

@@ -1,12 +0,0 @@
---
title: "Autonomous UAT Agent"
linkTitle: "autonomous-uat-agent"
weight: 10
description: >
General documentation for D66 and the Autonomous UAT Agent
---
# General Documentation (D66)
This section contains the core documentation for D66, focusing on how the Autonomous UAT Agent works and how to run it.

View file

@@ -1,109 +0,0 @@
---
title: "Agent Workflow Diagram"
linkTitle: "UAT Agent Workflow Diagram"
weight: 5
description: >
Visual workflow of a typical Agent S (Autonomous UAT Agent) run (gui_agent_cli.py) across Ministral, Holo, and VNC
---
# Agent Workflow Diagram (Autonomous UAT Agent)
This page provides a **visual sketch** of the typical workflow (example: `gui_agent_cli.py`).
## Workflow (fallback without Mermaid)
If Mermaid rendering is not available or fails in your build, this section shows the same workflow as plain text.
```text
Operator/Prompt
-> gui_agent_cli.py
-> (1) Planning request -> Ministral vLLM (thinking)
<- Next action intent
-> (2) Screenshot capture -> VNC Desktop / Firefox
<- PNG screenshot
-> (3) Grounding request -> Holo vLLM (vision)
<- Coordinates + element metadata
-> (4) Execute action -> VNC Desktop / Firefox
-> Artifacts saved -> results/ (logs, screenshots, JSON)
```
| Step | From | To | What | Output |
|---:|---|---|---|---|
| 0 | Operator | gui_agent_cli.py | Provide goal / prompt | Goal text |
| 1 | gui_agent_cli.py | Ministral vLLM | Plan next step (text) | Next action intent |
| 2 | gui_agent_cli.py | VNC Desktop | Capture screenshot | PNG screenshot |
| 3 | gui_agent_cli.py | Holo vLLM | Ground UI element(s) | Coordinates + element metadata |
| 4 | gui_agent_cli.py | VNC Desktop | Execute click/type/scroll | UI state change |
| 5 | gui_agent_cli.py | results/ | Persist evidence | Logs + screenshots + JSON |
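The loop in the table can be summarized in a few lines of Python. This is a sketch of the control flow only, not the actual `gui_agent_cli.py` implementation; the helper functions are hypothetical stand-ins for the real Ministral, Holo, and `pyautogui` calls.
```python
# Minimal sketch of the Agent S-style loop (illustration only; not the real
# gui_agent_cli.py). The four helpers are hypothetical stand-ins.
from typing import Any

def plan_next_action(goal: str, history: list[dict[str, Any]]) -> dict[str, Any]:
    """Step 1: ask the thinking model (Ministral) for the next action intent."""
    raise NotImplementedError("POST to vLLM_THINKING_ENDPOINT chat completions")

def capture_screenshot() -> bytes:
    """Step 2: grab the current VNC desktop state as a PNG."""
    raise NotImplementedError("e.g. pyautogui.screenshot() on DISPLAY=:1")

def ground_element(screenshot: bytes, action: dict[str, Any]) -> tuple[int, int]:
    """Step 3: ask the grounding model (Holo) for pixel coordinates."""
    raise NotImplementedError("POST to vLLM_VISION_ENDPOINT chat completions")

def execute_action(action: dict[str, Any], coords: tuple[int, int]) -> None:
    """Step 4: drive the GUI, e.g. pyautogui.click(*coords)."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[dict[str, Any]]:
    history: list[dict[str, Any]] = []
    for step in range(max_steps):
        action = plan_next_action(goal, history)     # (1) plan
        if action.get("done"):
            break
        screenshot = capture_screenshot()            # (2) observe
        coords = ground_element(screenshot, action)  # (3) ground
        execute_action(action, coords)               # (4) act
        history.append({"step": step, "action": action, "coords": coords})
    return history  # (5) the real CLI persists this to results/ as logs + screenshots
```
In the real CLI, this history additionally ends up under `results/` as logs, screenshots, and JSON.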
## High-level data flow
```mermaid
flowchart LR
%% Left-to-right overview of one typical agent loop
user[Operator / Prompt] --> cli[Agent S script<br/>gui_agent_cli.py]
subgraph OTC["OTC (Open Telekom Cloud)"]
subgraph MIN_HOST[ecs_ministral_L4]
MIN[(Ministral 3 8B<br/>Thinking / Planning)]
end
subgraph HOLO_HOST[ecs_holo_A40]
HOLO[(Holo 1.5-7B<br/>Vision / Grounding)]
end
subgraph TARGET[GUI test target]
VNC[VNC / Desktop]
FF[Firefox]
VNC --> FF
end
end
cli -->|1. plan step<br/>vLLM_THINKING_ENDPOINT| MIN
MIN -->|next action<br/>click / type / wait| cli
cli -->|2. capture screenshot| VNC
VNC -->|screenshot (PNG)| cli
cli -->|3. grounding request<br/>vLLM_VISION_ENDPOINT| HOLO
HOLO -->|coordinates + UI element info| cli
cli -->|4. execute action<br/>mouse / keyboard| VNC
cli -->|logs + screenshots| artifacts[(Artifacts<br/>logs, screenshots, JSON comms)]
```
## Sequence (one loop)
```mermaid
sequenceDiagram
autonumber
actor U as Operator
participant CLI as gui_agent_cli.py
participant MIN as Ministral vLLM (ecs_ministral_L4)
participant VNC as VNC Desktop (Firefox)
participant HOLO as Holo vLLM (ecs_holo_A40)
U->>CLI: Provide goal / prompt
loop Step loop (until done)
CLI->>MIN: Plan next step (text-only reasoning)
MIN-->>CLI: Next action (intent)
CLI->>VNC: Capture screenshot
VNC-->>CLI: Screenshot (image)
CLI->>HOLO: Ground UI element(s) in screenshot
HOLO-->>CLI: Coordinates + element metadata
CLI->>VNC: Execute click/type/scroll
end
CLI-->>U: Result summary + saved artifacts
```
## Notes
- The **thinking** and **grounding** models are separate on purpose: it improves coordinate reliability and makes failures easier to debug.
- The agent loop typically produces artifacts (logs + screenshots) which are later copied into D66 evidence bundles.

View file

@@ -1,82 +0,0 @@
---
title: "Model Stack"
linkTitle: "Model Stack"
weight: 4
description: >
Thinking vs grounding model split for D66 (current state and target state)
---
# Model Stack
For a visual overview of how the models interact with the VNC-based GUI automation loop, see: [Workflow Diagram](./agent-workflow-diagram.md)
## Requirement
The Autonomous UAT Agent must use **open-source models from European companies**. This has been a project requirement from the very beginning of the project.
## Target setup
- **Thinking / planning:** Ministral
- **Grounding / coordinates:** Holo 1.5
The Agent S framework runs an iterative loop: it uses a reasoning model to decide *what to do next* (plan the next action) and a grounding model to translate UI intent into *pixel-accurate coordinates* on the current screenshot. This split is essential for reliable GUI automation because planning and “where exactly to click” are different problems and benefit from different model capabilities.
## Why split models?
- Reasoning models optimize planning and textual decision making
- Vision/grounding models optimize stable coordinate output
- Separation reduces “coordinate hallucinations” and makes debugging easier
## Current state in repo
- Some scripts and docs still reference historical **Claude** and **Pixtral** experiments.
- **Pixtral is not suitable for pixel-level grounding in this use case**: in our evaluations it did not provide the consistency and coordinate stability required for reliable UI automation.
- In an early prototyping phase, **Anthropic Claude Sonnet** was useful due to strong instruction-following and reasoning quality; however it does not meet the D66 constraints (open-source + European provider), so it could not be used for the D66 target solution.
## Current configuration (D66)
### Thinking model: Ministral 3 8B (Instruct)
- HuggingFace model card: https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512
- Runs on **OTC (Open Telekom Cloud) ECS**: `ecs_ministral_L4` (public IP: `164.30.28.242`)
- Flavor: GPU-accelerated | 16 vCPUs | 64 GiB | `pi5e.4xlarge.4`
- GPU: 1 × NVIDIA Tesla L4 (24 GiB)
- Image: `Standard_Ubuntu_24.04_amd64_bios_GPU_GitLab_3074` (Public image)
- Deployment: vLLM OpenAI-compatible endpoint (chat completions)
- Endpoint env var: `vLLM_THINKING_ENDPOINT`
- Current server (deployment reference): `http://164.30.28.242:8001/v1`
**Operational note:** vLLM is configured to **auto-start on server boot** (OTC ECS restart) via `systemd`.
**Key serving settings (vLLM):**
- `--gpu-memory-utilization 0.90`
- `--max-model-len 32768`
- `--host 0.0.0.0`
- `--port 8001`
**Key client settings (Autonomous UAT Agent scripts):**
- `model`: `/home/ubuntu/ministral-vllm/models/ministral-3-8b`
- `temperature`: `0.0`
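Putting the settings above together, a single planning request against the OpenAI-compatible endpoint could look like the following sketch; the endpoint URL, model path, and temperature come from the settings above, while the prompt text and the `requests` usage are illustrative, not taken from the repo scripts.
```python
# Sketch of a planning request to the Ministral vLLM endpoint
# (OpenAI-compatible chat completions). The prompt is a made-up example.
import os
import requests

endpoint = os.environ.get("vLLM_THINKING_ENDPOINT", "http://164.30.28.242:8001/v1")
resp = requests.post(
    f"{endpoint}/chat/completions",
    json={
        "model": "/home/ubuntu/ministral-vllm/models/ministral-3-8b",
        "temperature": 0.0,  # deterministic planning, as configured for the agent
        "messages": [
            {"role": "user",
             "content": "Current goal: open telekom.de. What is the next UI action?"}
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```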
### Grounding model: Holo 1.5-7B
- HuggingFace model card: https://huggingface.co/holo-1.5-7b
- Runs on **OTC (Open Telekom Cloud) ECS**: `ecs_holo_A40` (public IP: `164.30.22.166`)
- Flavor: GPU-accelerated | 48 vCPUs | 384 GiB | `g7.12xlarge.8`
- GPU: 1 × NVIDIA A40 (48 GiB)
- Image: `Standard_Ubuntu_24.04_amd64_bios_GPU_GitLab_3074` (Public image)
- Deployment: vLLM OpenAI-compatible endpoint (multimodal grounding)
- Endpoint env var: `vLLM_VISION_ENDPOINT`
- Current server (deployment reference): `http://164.30.22.166:8000/v1`
**Key client settings (grounding / coordinate space):**
- `model`: `holo-1.5-7b`
- Native coordinate space: `3840×2160` (4K)
- Client grounding dimensions:
- `grounding_width`: `3840`
- `grounding_height`: `2160`
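Since Holo grounds in its native 3840×2160 space, its coordinates generally need to be rescaled to the actual desktop resolution before a click is executed. A minimal sketch, with the helper name and the 1920×1080 target resolution as assumed examples:
```python
# Rescale Holo output from its native 3840x2160 grounding space to the real
# screen resolution before executing the click.
GROUNDING_WIDTH, GROUNDING_HEIGHT = 3840, 2160  # Holo 1.5-7B native coordinate space

def to_screen(x: int, y: int, screen_w: int, screen_h: int) -> tuple[int, int]:
    return (round(x * screen_w / GROUNDING_WIDTH),
            round(y * screen_h / GROUNDING_HEIGHT))

# Example: a point Holo grounded at (1920, 1080) maps to the centre
# of an assumed 1920x1080 desktop:
print(to_screen(1920, 1080, 1920, 1080))  # -> (960, 540)
```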

View file

@@ -1,17 +0,0 @@
---
title: "Results & Findings"
linkTitle: "Results"
weight: 20
description: >
Results, findings, and evidence artifacts for D66
---
# Results & Findings (D66)
This section contains the outputs that support D66 claims: findings summaries and pointers to logs, screenshots, and run artifacts.
## Pages
- [PoC Validation](./poc-validation.md)
- [Golden Run (Telekom Header Navigation)](./golden-run-telekom-header-nav/)
- [Logs & Artifacts](./logs-and-artifacts.md)

View file

@@ -1,116 +0,0 @@
---
title: "Golden Run: Telekom Header Navigation"
linkTitle: "Golden Run (Telekom)"
weight: 3
description: >
Evidence pack (screenshots + logs) for the golden run on www.telekom.de header navigation
---
# Golden Run: Telekom Header Navigation
This page is the evidence pack for the **Autonomous UAT Agent** golden run on **www.telekom.de**.
## Run intent
- Goal: Test interactive elements in the header navigation for functional weaknesses
- Output: Click-marked screenshots + per-run log (and optionally model communication JSON)
## How the run was executed (ECS)
Command (as used in the runbook):
```bash
python staging_scripts/gui_agent_cli.py \
--prompt "Role: You are a UI/UX testing agent specializing in functional correctness.
Goal: Test all interactive elements in the header navigation on www.telekom.de for functional weaknesses.
Tasks:
1. Navigate to the website
2. Identify and test interactive elements (buttons, links, forms, menus)
3. Check for broken flows, defective links, non-functioning elements
4. Document issues found
Report Format:
Return findings in the 'issues' field as a list of objects:
- element: Name/description of the element
- location: Where on the page
- problem: What doesn't work
- recommendation: How to fix it
If no problems found, return an empty array: []" \
--max-steps 15
```
## Artifacts
## Screenshot gallery
### Thumbnail grid (recommended for many screenshots)
Click any thumbnail to open the full image.
<div style="display:grid; grid-template-columns: repeat(auto-fit, minmax(240px, 1fr)); gap: 12px; align-items:start;">
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_001.png"><img src="screenshots/uat_agent_step_001.png" alt="UAT agent step 001" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 001</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_002.png"><img src="screenshots/uat_agent_step_002.png" alt="UAT agent step 002" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 002</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_003.png"><img src="screenshots/uat_agent_step_003.png" alt="UAT agent step 003" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 003</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_004.png"><img src="screenshots/uat_agent_step_004.png" alt="UAT agent step 004" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 004</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_005.png"><img src="screenshots/uat_agent_step_005.png" alt="UAT agent step 005" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 005</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_006.png"><img src="screenshots/uat_agent_step_006.png" alt="UAT agent step 006" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 006</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_007.png"><img src="screenshots/uat_agent_step_007.png" alt="UAT agent step 007" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 007</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_008.png"><img src="screenshots/uat_agent_step_008.png" alt="UAT agent step 008" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 008</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_010.png"><img src="screenshots/uat_agent_step_010.png" alt="UAT agent step 010" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 010</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_011.png"><img src="screenshots/uat_agent_step_011.png" alt="UAT agent step 011" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 011</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_012.png"><img src="screenshots/uat_agent_step_012.png" alt="UAT agent step 012" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 012</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_013.png"><img src="screenshots/uat_agent_step_013.png" alt="UAT agent step 013" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 013</figcaption>
</figure>
</div>
<details>
<summary>Full-size images (stacked)</summary>
{{< figure src="screenshots/uat_agent_step_001.png" caption="Step 001" >}}
{{< figure src="screenshots/uat_agent_step_002.png" caption="Step 002" >}}
{{< figure src="screenshots/uat_agent_step_003.png" caption="Step 003" >}}
{{< figure src="screenshots/uat_agent_step_004.png" caption="Step 004" >}}
{{< figure src="screenshots/uat_agent_step_005.png" caption="Step 005" >}}
{{< figure src="screenshots/uat_agent_step_006.png" caption="Step 006" >}}
{{< figure src="screenshots/uat_agent_step_007.png" caption="Step 007" >}}
{{< figure src="screenshots/uat_agent_step_008.png" caption="Step 008" >}}
{{< figure src="screenshots/uat_agent_step_010.png" caption="Step 010" >}}
{{< figure src="screenshots/uat_agent_step_011.png" caption="Step 011" >}}
{{< figure src="screenshots/uat_agent_step_012.png" caption="Step 012" >}}
{{< figure src="screenshots/uat_agent_step_013.png" caption="Step 013" >}}
</details>

View file

@@ -1,36 +0,0 @@
---
title: "Logs & Artifacts"
linkTitle: "Logs & Artifacts"
weight: 2
description: >
Where to find logs, screenshots, and reports relevant to D66
---
# Logs & Artifacts
## Repo locations
- Local calibration and run logs: `logs/`
- Script outputs (varies by run):
- `Backend/IPCEI-UX-Agent-S3/staging_scripts/uxqa.db`
- `Backend/IPCEI-UX-Agent-S3/staging_scripts/Screenshots/`
- `Backend/IPCEI-UX-Agent-S3/staging_scripts/agent_output/`
- Golden run evidence pack (recommended publishing location in docs):
- `docs/D66/results/golden-run-telekom-header-nav/`
## What to capture for D66
- A representative run per capability:
- functional correctness checks
- visual quality audits
- task-based UX smoke tests
- For each run, capture:
- target URL
- timestamp
- key screenshots/overlays
- issue summaries (structured)
## Notes
If needed, we can add a consistent run naming convention and a small “how to export a D66 evidence pack” procedure.
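As a starting point, here is a minimal sketch of such an export procedure. The source paths follow the repo locations above; the `<date>-<slug>` naming convention and the helper name are only proposals:
```python
# Proposal sketch: copy one run's artifacts into a D66 evidence pack.
# Source layout follows the paths above; <date>-<slug> naming is a suggestion.
import shutil
from datetime import date
from pathlib import Path

def export_evidence_pack(run_dir: Path, slug: str) -> Path:
    dest = Path("docs/D66/results") / f"{date.today():%Y%m%d}-{slug}"
    dest.mkdir(parents=True, exist_ok=True)
    for sub in ("screenshots", "logs"):
        src = run_dir / sub
        if src.is_dir():
            shutil.copytree(src, dest / sub, dirs_exist_ok=True)
    return dest

# Example:
# export_evidence_pack(Path("results/gui_agent_cli/20250101_120000"),
#                      "telekom-header-nav")
```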

View file

@@ -1,29 +0,0 @@
---
title: "PoC Validation"
linkTitle: "PoC Validation"
weight: 1
description: >
What was validated and where to find the evidence
---
# PoC Validation Evidence
## What was validated
- Autonomous GUI interaction via the Autonomous UAT Agent (Agent S3-based scripts)
- Generation of UX findings and recommendations
- Production of reproducible artifacts (screenshots, logs)
## Where to find evidence in this repo
- Run logs and calibration logs: `logs/`
- Story evidence and investigation notes:
- `docs/story-025-001-context.md`
- `docs/story-026-001-context.md`
- `docs/story-023-003-coordinate-space-detection.md`
## How to reproduce a run
1. Choose a script in `Backend/IPCEI-UX-Agent-S3/staging_scripts/`
2. Set target URL (if supported) via `AS2_TARGET_URL`
3. Run and capture artifacts (see `docs/D66/documentation/outputs-and-artifacts.md`)

View file

@@ -1,115 +0,0 @@
---
title: "Running Autonomous UAT Agent Scripts"
linkTitle: "Running Autonomous UAT Agent Scripts"
weight: 3
description: >
How to run the key D66 evaluation scripts and what they produce
---
# Running Autonomous UAT Agent Scripts
The **Autonomous UAT Agent** is the overall UX/UI testing use case built on top of the Agent S codebase and scripts in this repo.
All commands below assume you are running from the **Agent-S repository root** (Linux/ECS), `~/Projects/Agent_S3/Agent-S`. To do that, connect to the server via SSH. You will need a key pair for authentication and an open inbound port in the firewall. For information on how to obtain the key pair and request firewall access, contact [tom.sakretz@telekom.de](mailto:tom.sakretz@telekom.de).
## Template for running a script from command line terminal
### 1) Connect from Windows
```powershell
ssh -i "C:\Path to KeyPair\KeyPair-ECS.pem" ubuntu@80.158.3.120
```
### 2) Prepare the ECS runtime (GUI + browser)
```bash
# Activate venv
source ~/Projects/Agent_S3/Agent-S/venv/bin/activate
# Go to Agent-S repo root
cd ~/Projects/Agent_S3/Agent-S
# Start VNC (DISPLAY=:1) and a browser
vncserver :1
export XAUTHORITY="$HOME/.Xauthority"
export DISPLAY=":1"
firefox &
```
### 3) One-command recommended run (ECS)
If you only want to produce clean, repeatable evidence (screenshots with click markers), run the following CLI command:
```bash
python staging_scripts/gui_agent_cli.py --prompt "Go to telekom.de and click the cart icon" --max-steps 10
```
This will produce:
- Screenshots: `./results/gui_agent_cli/<timestamp>/screenshots/`
- Text log: `./results/gui_agent_cli/<timestamp>/logs/run.log`
- JSON comm log (if enabled): `./results/gui_agent_cli/<timestamp>/logs/calibration_log_*.json`
## Prerequisites (runtime)
- Linux GUI session (VNC/Xvfb) because these scripts drive a real browser via `pyautogui`.
- A working `DISPLAY` (default for all scripts is `:1`).
- Network access to the model endpoints (thinking + vision/grounding); a quick reachability check is sketched below.
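A pre-flight reachability check, assuming both vLLM servers expose the standard `/v1/models` route (the default URLs mirror the deployment references in [Model Stack](./model-stack.md)):
```python
# Quick pre-flight check: confirm both vLLM endpoints answer GET /v1/models.
import os
import requests

ENDPOINTS = {
    "thinking": os.environ.get("vLLM_THINKING_ENDPOINT", "http://164.30.28.242:8001/v1"),
    "vision": os.environ.get("vLLM_VISION_ENDPOINT", "http://164.30.22.166:8000/v1"),
}

for name, base in ENDPOINTS.items():
    try:
        r = requests.get(f"{base}/models", timeout=10)
        r.raise_for_status()
        print(f"{name}: OK ({[m['id'] for m in r.json()['data']]})")
    except requests.RequestException as exc:
        print(f"{name}: UNREACHABLE ({exc})")
```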
## Key scripts (repo locations)
The GUI Agent CLI script is the most flexible entry point and is therefore the only one described in more detail in this documentation. The paths below assume you are in the project root `~/Projects/Agent_S3/Agent-S`.
- GUI Agent CLI: `staging_scripts/gui_agent_cli.py`
Historically, we used purpose-built scripts for individual tasks. We now recommend using `gui_agent_cli.py` as the primary entry point, because the same scenarios can usually be expressed via a well-scoped prompt while keeping the workflow more flexible and easier to maintain. The scripts below are kept for reference and may not reflect the current, preferred workflow.
- UI check (Agent S3): `staging_scripts/1_UI_check_AS3.py`
- Functional correctness check: `staging_scripts/1_UI_functional_correctness_check.py`
- Visual quality audit: `staging_scripts/2_UX_visual_quality_audit.py`
- Task-based UX flow (newsletter): `staging_scripts/3_UX_taskflow_newsletter_signup.py`
## Golden run (terminal on ECS)
This is the “golden run” command sequence currently used for D66 evidence generation. The golden run is a complete workflow that works as a template for reproducible outcomes.
```bash
python staging_scripts/gui_agent_cli.py \
--prompt "Role: You are a UI/UX testing agent specializing in functional correctness.
Goal: Test all interactive elements in the header navigation on www.telekom.de for functional weaknesses.
Tasks:
1. Navigate to the website
2. Identify and test interactive elements (buttons, links, forms, menus)
3. Check for broken flows, defective links, non-functioning elements
4. Document issues found
Report Format:
Return findings in the 'issues' field as a list of objects:
- element: Name/description of the element
- location: Where on the page
- problem: What doesn't work
- recommendation: How to fix it
If no problems found, return an empty array: []" \
--max-steps 30
```
Golden run artifacts:
- Screenshots: `./results/gui_agent_cli/<timestamp>/screenshots/`
- Text log: `./results/gui_agent_cli/<timestamp>/logs/run.log`
- Optional JSON comm log (if enabled): `./results/gui_agent_cli/<timestamp>/logs/calibration_log_*.json`
An example golden run with screenshots and log outputs can be seen in [Results](./results/).
## Alternative: run the agent via a web interface (Frontend)
Work in progress.
We are currently updating the web-based view and its ECS runner integration. This section will be filled with the correct, up-to-date instructions once the frontend flow supports the current Autonomous UAT Agent + `gui_agent_cli.py` workflow.
## Notes on model usage
Some scripts still contain legacy model configs (Claude/Pixtral). The D66 target configuration is documented in [Model Stack](./model-stack.md).

View file

@@ -6,12 +6,24 @@ menu:
   weight: 20
 ---
+{{% alert title="Draft" color="warning" %}}
+**Editorial Status**: This page is currently being developed.
+* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
+* **Assignee**: [Name or Team]
+* **Status**: Draft
+* **Last Updated**: YYYY-MM-DD
+* **TODO**:
+  * [ ] Add detailed component description
+  * [ ] Include usage examples and code samples
+  * [ ] Add architecture diagrams
+  * [ ] Review and finalize content
+{{% /alert %}}
 # Edge Developer Platform (EDP) Documentation
 Welcome to the EDP documentation. This documentation serves developers, engineers, and auditors who want to understand, use, and audit the Edge Developer Platform.
-It describes the outcomes and products of the edgeDeveloperFramework (eDF) sub-project within IPCEI-CIS.
 ## Target Audience
 * **Developers & Engineers**: Learn how to use the platform, deploy applications, and integrate services
@@ -20,8 +32,14 @@ It describes the outcomes and products of the edgeDeveloperFramework (eDF) sub-project within IPCEI-CIS.
 ## Documentation Structure
-The documentation is organized into three core areas:
-* **[Edge Developer Platform (EDP)](/docs/edp/)**: The central platform to support developers working at the edge, based around Forgejo
-* **[EdgeConnect Cloud](/docs/edgeconnect/)**: The sovereign edge cloud context and key deployment target for EDP integrations
-* **[Governance](/docs/governance/)**: Project history, decision context, and audit-oriented traceability
+The documentation follows a top-down approach focusing on outcomes and practical usage:
+* **Platform Overview**: High-level introduction and product structure
+* **Components**: Individual platform components and their usage
+* **Getting Started**: Onboarding and quick start guides
+* **Operations**: Deployment, monitoring, and troubleshooting
+* **Governance**: Project history, decisions, and compliance
+## Purpose
+This documentation describes the outcomes and products of the edgeDeveloperFramework (eDF) project. The EDP is designed as a usable, integrated platform with clear links to repositories and implementation details.

View file

@@ -0,0 +1,141 @@
---
title: "[Component Name]"
linkTitle: "[Short Name]"
weight: 1
description: >
[Brief one-line description of the component]
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### C4 charts
Embed C4 charts this way:
1. add a likec4-view with the name of the view
{{< likec4-view view="components-template-documentation" project="architecture" title="Example Documentation Diagram" >}}
2. create the LikeC4 view somewhere in `./resources/edp-likec4/views`; the example above is in `./resources/edp-likec4/views/documentation/components-template-documentation.c4`
3. run `task likec4:generate` to create the web component
4. if you are in `task serve` hot-reload mode, the view will show up directly
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,39 @@
---
title: "Components"
linkTitle: "Components"
weight: 30
description: >
Overview of EDP platform components and their integration.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
This section documents all components of the Edge Developer Platform based on the product structure.
## Component Categories
The EDP consists of the following main component categories:
* **Orchestrator**: Platform and infrastructure orchestration
* **Forgejo & CI/CD**: Source code management and automation
* **Deployments**: Deployment targets and edge connectivity
* **Dev Environments**: Development environment provisioning
* **Physical Environments**: Runtime infrastructure
### Product Component Structure
[TODO] Links
![alt text](website-and-documentation_resources_product-structure.svg)

View file

@@ -0,0 +1,28 @@
---
title: "Deployments"
linkTitle: "Deployments"
weight: 40
description: >
Deployment targets and edge connectivity solutions.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6733](https://jira.telekom-mms.com/browse/IPCEICIS-6733)
* **Assignee**: Patrick
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Deployment components manage connections to various deployment targets including cloud infrastructure and edge devices.
## Components
* **OTC**: Open Telekom Cloud deployment target
* **EdgeConnect**: Secure edge connectivity solution

View file

@@ -0,0 +1,128 @@
---
title: "EdgeConnect"
linkTitle: "EdgeConnect"
weight: 20
description: >
Secure connectivity solution for edge devices and environments
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,128 @@
---
title: "EdgeConnect Client"
linkTitle: "EdgeConnect Client"
weight: 30
description: >
Client software for establishing EdgeConnect connections
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,128 @@
---
title: "EdgeConnect SDK"
linkTitle: "EdgeConnect SDK"
weight: 10
description: >
Software Development Kit for establishing EdgeConnect connections
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Waldemar
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,128 @@
---
title: "OTC (Open Telekom Cloud)"
linkTitle: "OTC"
weight: 10
description: >
Open Telekom Cloud deployment and infrastructure target
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6733](https://jira.telekom-mms.com/browse/IPCEICIS-6733)
* **Assignee**: Patrick
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,128 @@
---
title: "Development Environments"
linkTitle: "DevEnvironments"
weight: 30
description: >
Development environment provisioning and management
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

View file

@@ -0,0 +1,27 @@
---
title: "Documentation System"
linkTitle: "Documentation System"
weight: 100
description: The developer 'documentation as code' documentation system that we use ourselves and offer to each development team.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6736](https://jira.telekom-mms.com/browse/IPCEICIS-6736)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
The Orchestration manages platform and infrastructure provisioning, providing the foundation for the EDP deployment model.
## Sub-Components
* **Infrastructure Provisioning**: Low-level infrastructure deployment (infra-deploy, infra-catalogue)
* **Platform Provisioning**: Platform-level component deployment via Stacks

View file

@@ -0,0 +1,28 @@
---
title: "Forgejo"
linkTitle: "Forgejo"
weight: 20
description: >
Self-hosted Git service with project management and CI/CD capabilities.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Forgejo provides source code management, project management, and CI/CD automation for the EDP.
## Sub-Components
* **Project Management**: Issue tracking and project management features
* **Actions**: CI/CD automation (see CI/CD section)

@ -0,0 +1,27 @@
---
title: "Forgejo Actions"
linkTitle: "Forgejo Actions"
weight: 20
description: GitHub Actions-compatible CI/CD automation in Forgejo.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6730](https://jira.telekom-mms.com/browse/IPCEICIS-6730)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
Forgejo Actions provides GitHub Actions-compatible CI/CD automation for the EDP.
## Sub-Components
* **Actions**: GitHub Actions-compatible workflow automation
* **Runner**: Self-hosted action runner infrastructure
* **Runner Orchestration**: Runner lifecycle management via GARM

@ -0,0 +1,127 @@
---
title: "Forgejo Actions"
linkTitle: "Actions"
weight: 10
description: GitHub Actions-compatible CI/CD automation
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,127 @@
---
title: "Runner Orchestration"
linkTitle: "Runner Orchestration"
weight: 30
description: GARM
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Action Runner"
linkTitle: "Runner"
weight: 20
description: >
Self-hosted runner infrastructure with orchestration capabilities
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,66 @@
---
title: "Forgejo Integration, Extension, and Community Collaboration"
linkTitle: Forgejo Software Forge
date: "2025-11-17"
description: "Summary of the project's work integrating GARM with Forgejo and contributing key features back to the community."
tags: ["Forgejo", "GARM", "CI/CD", "OSS", "Community", "Project Report"]
categories: ["Workpackage Results"]
weight: 10
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6731](https://jira.telekom-mms.com/browse/IPCEICIS-6731)
* **Assignee**: Daniel
* **Status**: Draft
* **Last Updated**: 2025-11-17
* **TODO**:
* [ ] Add concrete quick start steps
* [ ] Include prerequisites and access information
* [ ] Create first application tutorial
* **Review/Feedback**:
* [ ] Stephan:
* in general:
* [ ] some parts are worth moving to 'Governance'
* [ ] perhaps we should remove the emojis?
* [ ] perhaps we should avoid the impression that the text was copy/pasted from AI
* some details/further ideas:
* [ ] where is it, this Forgejo? Why is it called 'edp.buildth.ing'?
* [ ] what are the components we use - package management, actions, ...
* [ ] Friendly users? organisations? Public/private stuff?
* [ ] App Management discussions (we don't!)?
* [ ] what about code snippets how forgejo is deployed? SSO? user base? Federation options?
* [ ] storages, Redis, Postgres ... deployment options ... helm charts ...
* [ ] Migrations we did, where is the migration code?
* [ ] git POSIX filesystem concurrency discussion, S3 bucket
* [ ] what is our general experience?
* [ ] repository centric domain data model
* [ ] how did we develop? which version did we take first? how did we upgrade?
* [ ] which development flows did we use? which pipelines?
* [ ] provide codeberg links for the PRs
* [ ] provide architecture drawings and repo links for the cache registry thing
* [ ] provide a high-level actions architecture diagram from the perspective of forgejo - link to the GARM component here
{{% /alert %}}
## Result summary and key findings
Here is the management summary of the work package results:
* **Strategic Selection:** We chose **[Forgejo](https://forgejo.org/)** as the project's self-hosted Git service. This decision rests on several strategic factors:
    * **EU-Based & Data Sovereignty:** The project is stewarded by **[Codeberg e.V.](https://docs.codeberg.org/getting-started/what-is-codeberg/)**, a non-profit based in Berlin, Germany. For our funding-agency stakeholders this aligns with **GDPR, compliance, and data sovereignty goals**: governance rests with EU law rather than a US tech entity.
    * **True Open Source (GPL v3+):** Forgejo is a community-driven fork of Gitea, created to guarantee that it stays 100% free and open-source (FOSS).
    * **License Protects Our Contributions:** Forgejo uses the **GPL v3+ "copyleft" license**, which supports our collaboration goal: it legally ensures that the features we contribute back (such as GARM support) can **never be locked into a proprietary, closed-source product**. This protects our work and keeps the community open.
* **Core Use Case:** Forgejo hosts all project source code **versioning** and forms the backbone of our **CI/CD (Continuous Integration/Continuous Deployment)** pipelines.
* **Key Extension (GARM Support):** The main technical achievement was integrating **[GARM (GitHub Actions Runner Manager)](https://github.com/cloudbase/garm)**, which Forgejo did not support out of the box.
* **Required Enhancements:** To make GARM work, our team developed and implemented several critical features:
    * Webhook support for workflow events (to tell runners when to start).
    * Support for ephemeral runners (for secure, clean-slate builds on every run).
    * GitHub API-compatible endpoints (so runners can register themselves correctly).
* **Community Contribution:** We contributed all of these features **directly back to the upstream Forgejo community**. Rather than a one-off code dump, this was active collaboration via **issues**, **feature requests**, and **pull requests (PRs) on [codeberg.org](https://codeberg.org/)**.
* **Bonus Functionality:** We also implemented **artifact caching**: Forgejo is configured to act as a **pull-through proxy** for remote container registries (such as Docker Hub), which significantly speeds up builds and saves bandwidth. A usage sketch follows below.
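As a minimal sketch of the intended usage, assuming the proxy is exposed under a path on the EDP Forgejo instance (the `dockerhub-proxy` path below is illustrative, not the actual configuration):

```bash
# Without the proxy: every build pulls straight from Docker Hub.
docker pull alpine:3.19

# With the proxy: the first pull populates Forgejo's cache, and
# subsequent pulls are served locally, saving time and bandwidth.
# "dockerhub-proxy" is an assumed path; check the instance configuration.
docker pull edp.buildth.ing/dockerhub-proxy/library/alpine:3.19
```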

@ -0,0 +1,128 @@
---
title: "Project Management"
linkTitle: "Forgejo Project Mgmt"
weight: 50
description: >
Project and issue management capabilities within Forgejo
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,28 @@
---
title: "Orchestratiion"
linkTitle: "Orchestration"
weight: 10
description: >
Platform and infrastructure orchestration components.
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6734](https://jira.telekom-mms.com/browse/IPCEICIS-6734)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
The Orchestration manages platform and infrastructure provisioning, providing the foundation for the EDP deployment model.
## Sub-Components
* **Infrastructure Provisioning**: Low-level infrastructure deployment (infra-deploy, infra-catalogue)
* **Platform Provisioning**: Platform-level component deployment via Stacks

@ -0,0 +1,128 @@
---
title: "Application Orchestration"
linkTitle: "Application Orchestration"
weight: 30
description: >
Application-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Infrastructure Orchestration"
linkTitle: "Infrastructure Orchestration"
weight: 10
description: >
Infrastructure deployment and catalog management (infra-deploy, infra-catalogue)
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,127 @@
---
title: "Provider"
linkTitle: "Provider"
weight: 20
description: Infrastructure providers we deploy on
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Terrafrom"
linkTitle: "Terraform"
weight: 10
description: >
  Infrastructure provisioning with Terraform
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6732](https://jira.telekom-mms.com/browse/IPCEICIS-6732)
* **Assignee**: Martin
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Platform Orchestration"
linkTitle: "Platform Orchestration"
weight: 20
description: >
Platform-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Stacks"
linkTitle: "Stacks"
weight: 40
description: >
Platform-level component provisioning via Stacks
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-6729](https://jira.telekom-mms.com/browse/IPCEICIS-6729)
* **Assignee**: Stephan
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Component 1"
linkTitle: "Component 1"
weight: 20
description: >
Component 1
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TBD]
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Component 2"
linkTitle: "Component 2"
weight: 30
description: >
Component 2
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TBD]
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,16 @@
---
title: "Physical Environments"
linkTitle: "Physical Envs"
weight: 60
description: >
Physical runtime environments and infrastructure providers.
---
Physical environment components provide the runtime infrastructure for deploying and running applications.
## Components
* **Docker**: Container runtime
* **Kubernetes**: Container orchestration
* **LXC**: Linux Containers
* **Provider**: Infrastructure provider abstraction

@ -0,0 +1,128 @@
---
title: "Docker"
linkTitle: "Docker"
weight: 10
description: >
Container runtime for running containerized applications
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Kubernetes"
linkTitle: "Kubernetes"
weight: 20
description: >
Container orchestration platform for managing containerized workloads
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "LXC"
linkTitle: "LXC"
weight: 30
description: >
Linux Containers for lightweight system-level virtualization
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

@ -0,0 +1,128 @@
---
title: "Infrastructure Provider"
linkTitle: "Provider"
weight: 40
description: >
Infrastructure provider abstraction for managing physical resources
---
{{% alert title="Draft" color="warning" %}}
**Editorial Status**: This page is currently being developed.
* **Jira Ticket**: [TICKET-XXX](https://your-jira/browse/TICKET-XXX)
* **Assignee**: [Name or Team]
* **Status**: Draft
* **Last Updated**: YYYY-MM-DD
* **TODO**:
* [ ] Add detailed component description
* [ ] Include usage examples and code samples
* [ ] Add architecture diagrams
* [ ] Review and finalize content
{{% /alert %}}
## Overview
[Detailed description of the component - what it is, what it does, and why it exists]
## Key Features
* [Feature 1]
* [Feature 2]
* [Feature 3]
## Purpose in EDP
[Explain the role this component plays in the Edge Developer Platform and how it contributes to the overall platform capabilities]
## Repository
**Code**: [Link to source code repository]
**Documentation**: [Link to component-specific documentation]
## Getting Started
### Prerequisites
* [Prerequisite 1]
* [Prerequisite 2]
### Quick Start
[Step-by-step guide to get started with this component]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Verification
[How to verify the component is working correctly]
## Usage Examples
### [Use Case 1]
[Example with code/commands showing common use case]
```bash
# Example commands
```
### [Use Case 2]
[Another common scenario]
## Integration Points
* **[Component A]**: [How it integrates]
* **[Component B]**: [How it integrates]
* **[Component C]**: [How it integrates]
## Architecture
[Optional: Add architectural diagrams and descriptions]
### Component Architecture (C4)
[Add C4 Container or Component diagrams showing the internal structure]
### Sequence Diagrams
[Add sequence diagrams showing key interaction flows with other components]
### Deployment Architecture
[Add infrastructure and deployment diagrams showing how the component is deployed]
## Configuration
[Key configuration options and how to set them]
## Troubleshooting
### [Common Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
### [Common Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Status
**Maturity**: [Production / Beta / Experimental]
## Additional Resources
* [Link to external documentation]
* [Link to community resources]
* [Link to related components]
## Documentation Notes
[Instructions for team members filling in this documentation - remove this section once complete]

Binary image file added (92 KiB; diff suppressed)
@ -0,0 +1,151 @@
---
title: "WiP Documentation Guide"
linkTitle: "WiP Doc Guide"
weight: 1
description: Guidelines and templates for creating EDP documentation. This page will be removed in the final documentation.
---
{{% alert title="WiP - Only during creation phase" %}}
This page will be removed in the final documentation.
{{% /alert %}}
## Purpose
This guide helps team members create consistent, high-quality documentation for the Edge Developer Platform.
## Documentation Principles
### 1. Focus on Outcomes
1. Describe how the platform is composed and which products we deliver
2. If you need inspiration for our EDP product structure, look at the [EDP product structure tree](../components/website-and-documentation_resources_product-structure.svg)
3. Include links to repositories for deeper technical information, and to avoid being overly verbose or redundant with existing documentation within the IPCEI-CIS scope or our EDP repositories.
### 2. Write for the Audience
1. **Developers**: How to use the software products
2. **Engineers**: Architecture
3. **Auditors**: Project history, decisions, compliance information
### 3. Keep It Concise
1. Top-down approach: start with overview, drill down as needed
2. Less is more: avoid deeply nested structures
3. Avoid emojis
4. **When using AI**: Review the text you paste and check that it integrates into the rest of the documentation
### 4. Maintain Quality
1. Use present tense ("The system processes..." not "will process")
2. Run `task test:quick` before committing changes
## Documentation Structure
The EDP documentation is organized into five main sections:
### 1. Platform Overview
High-level introduction to EDP, target audience, purpose, and product structure.
**Content focus**: Why EDP exists, who uses it, what it provides
### 2. Getting Started
Onboarding guides and quick start instructions.
**Content focus**: Prerequisites, step-by-step setup, first application deployment
### 3. Components
Detailed documentation for each platform component.
**Content focus**: What each component does, how to use it, integration points
**Template**: Use `components/TEMPLATE.md` as starting point
### 4. Operations
Deployment, monitoring, troubleshooting, and maintenance procedures.
**Content focus**: How to operate the platform, resolve issues, maintain health
### 5. Governance
Project history, architecture decisions, compliance, and audit information.
**Content focus**: Why decisions were made, project evolution, external relations
## Writing Documentation
### Components
#### Using Templates
In the 'Components' section, templates are provided for common documentation types:
* **Component Documentation**: `content/en/docs/components/TEMPLATE.md`
#### Content Structure
Follow this pattern for component documentation:
1. **Overview**: What it is and what it does
2. **Key Features**: Bullet list of main capabilities
3. **Purpose in EDP**: Why it's part of the platform
4. **Getting Started**: Quick start guide
5. **Usage Examples**: Common scenarios
6. **Integration Points**: How it connects to other components
7. **Status**: Current maturity level
8. **Documentation Notes**: Instructions for filling in details (remove when complete)
### Frontmatter
Every markdown file starts with YAML frontmatter according to [Docsy](https://www.docsy.dev/docs/adding-content/content/#page-frontmatter):
```yaml
---
title: "Full Page Title"
linkTitle: "Short Nav Title"
weight: 10
description: >
Brief description for search and previews.
---
```
* **title**: Full page title (appears in page header)
* **linkTitle**: Shorter title for navigation menu
* **weight**: Sort order (lower numbers appear first)
* **description**: Brief summary for SEO and page previews
## Testing Documentation
Before committing changes:
```bash
# Run all tests
task test:quick
# Build site locally
task build
# Preview changes
task serve
```
## Adding New Sections
When adding a new documentation section (a concrete sketch follows the list):
1. Create directory: `content/en/docs/[section-name]/`
2. Create index file: `_index.md` with frontmatter
3. Add weight to control sort order
4. Update navigation in parent `_index.md` if needed
5. Test with `task test`
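A minimal sketch of these steps for a hypothetical "Operations FAQ" section (the section name, titles, and weight are placeholders):

```bash
# Steps 1-3: create the directory and an index file with frontmatter
mkdir -p content/en/docs/operations-faq
cat > content/en/docs/operations-faq/_index.md <<'EOF'
---
title: "Operations FAQ"
linkTitle: "Ops FAQ"
weight: 40
description: >
  Frequently asked operations questions.
---
EOF

# Step 5: verify the site still builds and the tests pass
task test
```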
## Reference
* **Main README**: `/doc/README-technical-writer.md`
* **Component Template**: `/content/en/docs/components/TEMPLATE.md`
* **Hugo Documentation**: <https://gohugo.io/documentation/>
* **Docsy Theme**: <https://www.docsy.dev/docs/>

@ -1,91 +0,0 @@
---
title: Edge Connect Ecosystem
linkTitle: Edge Connect Ecosystem
weight: 1
description: >
Build Your Edge Cloud Solutions in Minutes
---
## **Key Integrations** - Choose Your Workflow
### **Command Line Interface (CLI)** - Power at Your Fingertips
The **EdgeConnect CLI** is your command-line companion for rapid edge deployments. Transform complex deployment workflows into simple, intuitive commands. With intelligent state comparison, deployment planning, and YAML-based configuration parsing, this battle-tested tool brings professional-grade deployment management directly to your terminal. Whether you're orchestrating multi-platform releases or managing application lifecycles, the CLI provides the speed and precision that DevOps engineers demand.
**Key Features:**
- Configuration parsing and validation via YAML files
- Deployment planning with state comparison capabilities
- Cross-platform support
**Repository**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
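For a feel of the workflow, here is an illustrative sketch only: the subcommand names (`plan`, `deploy`) and the configuration schema are assumptions, so consult the repository for the real interface.

```bash
# Describe the desired deployment in YAML (schema assumed for illustration):
cat > app.yaml <<'EOF'
name: hello-edge
image: edp.buildth.ing/demo/hello:1.0.0
region: eu-central
EOF

# Compare the desired state against what is currently deployed ...
edge-connect plan -f app.yaml

# ... then apply the resulting plan.
edge-connect deploy -f app.yaml
```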
### **Go SDK** - Build Edge-Native Applications
Embed Edge Connect directly into your Go applications with the **Go SDK**. Whether you're building custom deployment tools, automation pipelines, or management dashboards, the SDK provides native Go bindings for the complete Edge Connect API. Clean, idiomatic Go code with comprehensive error handling and type safety means you can focus on building features, not wrestling with HTTP clients.
**Perfect For:** Custom tooling, automation scripts, CI/CD integrations, and any Go application that needs programmatic access to Edge Connect infrastructure.
**Repository**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
### **Terraform Provider** - Infrastructure as Code, Simplified
Declare your edge infrastructure with confidence using the **Terraform Provider for Edge Connect**. Manage applications and instances across regions with familiar HCL syntax. Full CRUD lifecycle management, Kubernetes manifest support, and seamless integration with your existing Terraform workflows. Deploy once, replicate everywhere—the infrastructure-as-code way.
**Repository**: https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect
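As a hedged sketch of what this looks like in practice: the provider source address and resource schema below are assumptions, so check the provider repository for the actual names.

```bash
# Write a minimal Terraform configuration (provider source is assumed):
cat > main.tf <<'EOF'
terraform {
  required_providers {
    edgeconnect = {
      source = "DevFW-CICD/edge-connect"  # assumed source address
    }
  }
}

# Hypothetical resource; see the provider docs for the real schema.
resource "edgeconnect_app" "demo" {
  name  = "hello-edge"
  image = "edp.buildth.ing/demo/hello:1.0.0"
}
EOF

terraform init && terraform plan
```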
### **GitHub Actions Suite** - CI/CD on Autopilot
Three powerful actions that integrate Edge Connect directly into your GitHub workflows:
- **Deploy Action**: Push your containers to the edge with a single workflow step, powered by Edge Connect SDK 2.0.1
- **Delete Action**: Clean up resources effortlessly with force-delete capabilities
- **Action Demo**: A complete working example showing automated builds triggered by commit SHAs, with Kubernetes manifests and CI/CD best practices
Turn every git push into an edge deployment with no manual intervention required.
**Repositories**:
- https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action
- https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action
- https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo
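A sketch of how the deploy action might be wired into a workflow; the trigger and the `manifest` input are assumptions, so check the deploy-action repository for its real interface:

```bash
# Hypothetical workflow file (input names are assumed, not verified):
cat > .forgejo/workflows/deploy.yml <<'EOF'
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: docker
    steps:
      - uses: actions/checkout@v4
      - uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
        with:
          manifest: k8s/app.yaml  # assumed input name
EOF
```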
### **MCP Server** - AI-Native Edge Management
The **Edge Connect MCP Server** brings cutting-edge AI tooling to infrastructure management. Built on the Model Context Protocol, it offers:
- Rich, interactive web-based dashboards powered by MCP-UI
- Full application lifecycle management through conversational interfaces
- Local (stdio) and remote (HTTP streaming) operation modes
Manage your edge infrastructure through Claude Desktop or any MCP-compatible client. The future of infrastructure management is conversational.
**Repository**: https://edp.buildth.ing/DevFW-CICD/edge-connect-mcp
## **Use Cases** - See It in Action
### **Deployment Examples** - Hit the Ground Running
The **edge-connect-deployment-examples** repository is your blueprint for success. It features real-world Terraform configurations for Edge Connect, including database integrations and infrastructure bootstrapping templates. Clone, customize, and deploy: your edge solution can be live in minutes, not days.
**Repository**: https://edp.buildth.ing/DevFW-CICD/edge-connect-deployment-examples
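Getting started is a matter of cloning and adapting an example (the exact directory layout depends on the repository):

```bash
git clone https://edp.buildth.ing/DevFW-CICD/edge-connect-deployment-examples.git
cd edge-connect-deployment-examples
# Pick an example, review its variables, then plan it against your environment:
terraform init && terraform plan
```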
### **GARM Provider** - Scale Your GitHub Actions Runners
Transform your CI/CD capacity with the **GARM Edge Connect Provider**. Deploy ephemeral GitHub Actions runners as Kubernetes pods across edge locations. Automatic resource cleanup, customizable pod specs, and multi-platform support mean your pipelines scale elastically across the edge. Why pay for idle cloud runners when you can deploy exactly what you need, where you need it?
**Repository**: https://edp.buildth.ing/DevFW-CICD/garm-provider-edge-connect
### **Coder Integration** - Your Development Environment, Anywhere on the Edge (In development)
Bring the power of cloud development environments directly to the edge with the **Coder Edge Connect Integration**. This Terraform-based workspace template seamlessly deploys fully-featured VS Code development environments as Kubernetes workloads on Edge Connect infrastructure.
**What It Does:**
- **On-Demand Workspaces**: Spin up isolated development environments with customizable CPU, memory, and disk configurations
- **Browser-Based IDE**: Automatic code-server provisioning gives you VS Code in your browser—code from anywhere, deploy to the edge
- **Real-Time Monitoring**: Built-in Coder agent integration tracks CPU, memory, disk usage, and load averages for both containers and host systems
- **Edge-Native Deployment**: Workspaces run as Kubernetes pods directly on Edge Connect cloudlets, bringing compute closer to your data
**Perfect For:** Distributed development teams, edge application developers, and organizations needing secure, scalable development environments close to production edge infrastructure.
**Repository**: https://edp.buildth.ing/DevFW/POC-coder-edge-connect
## **The Edge Connect Advantage**
This isn't just another cloud platform. It's a **complete ecosystem** designed for developer velocity:
- **Multi-language support**: CLI, Go SDK, Terraform, GitHub Actions
- **AI-native tooling**: MCP server integration for conversational infrastructure management
- **Open and extensible**: From simple demos to complex GARM integrations
**Your edge infrastructure, your way. Deploy in minutes, scale without limits.**

---
title: EdgeConnect
linkTitle: EdgeConnect Cloud
weight: 20
description: >
Sovereign edge cloud for running applications
---
## Overview
EdgeConnect is a custom cloud provided by the project as a whole. It has several goals, including retaining sovereign control over cloud compute resources, and supporting sustainability-aware infrastructure choices.
While EdgeConnect is managed outside our Edge Developer Platform, we have produced a number of tools to facilitate its use and broaden its applicability. These are an [SDK](/docs/edgeconnect/edgeconnect-sdk/), command-line [client](/docs/edgeconnect/edgeconnect-client/), bespoke [provider](/docs/edgeconnect/terraform-provider/) for [Terraform](https://developer.hashicorp.com/terraform), and tailor-made [Forgejo Actions](/docs/edgeconnect/edgeconnect-actions/).
{{< likec4-view view="edgeconnect-context" project="architecture" title="EdgeConnect Context View: Users, Tooling and Control Plane" >}}
The diagram summarizes how EdgeConnect is typically consumed and operated. Developers and automation do not interact with edge clusters directly; instead they use stable entry points (CLI, SDK, Terraform) that talk to the EdgeConnect API.
EdgeConnect itself is shown as a single cloud boundary that contains the control plane (API + controllers) and the managed resource model (e.g., App, AppInstance). Controllers continuously reconcile the desired state expressed via the API and drive deployments into the runtime.
EDP appears here as an external consumer: it can automate provisioning and deployment workflows (for example via Terraform) while EdgeConnect remains a separately managed cloud. This separation clarifies responsibilities: EDP orchestrates delivery processes, EdgeConnect provides the target runtime and lifecycle management.
## Key Features
* Managed by the broader project, not specifically by EDP
* Focus on sovereignty and sustainability
* Utilities such as [CLI](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) encourage widespread platform use
* [EDP](/docs/edp/) products such as [Forgejo](/docs/edp/forgejo/) are hosted on [OTC](/docs/edp/deployment/otc/) rather than EdgeConnect
## Purpose in EDP
EdgeConnect is documented here because it is a key deployment target and integration point for the broader platform. Even though EdgeConnect is operated separately from EDP (and core EDP services are hosted on OTC), EDP tooling and automation frequently needs to provision or deploy workloads into EdgeConnect in a consistent, repeatable way.
Working with EdgeConnect also helps ensure that our developer workflows and platform components remain portable and “cloud-ready” beyond a single environment. By integrating with a sovereign system and making sustainability-aware choices visible in practice, we align platform engineering with the project's wider goals and enable closer collaboration with the teams operating the EdgeConnect cloud.
### Access
* [Gardener console access](https://gardener.apps.mg3.mdb.osc.live/namespace/garden-platform/shoots)
- Choose `Log in with mg3` then `platform` before entering credentials set up by the Platform Team.
* [Edge cluster](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Orca cluster](https://hub.apps.orca.platform.mg3.mdb.osc.live/)
### Notes
Documentation for EdgeConnect is provided using other systems, including Confluence.

---
title: Edge Connect MCP Server
linkTitle: MCP Server
weight: 40
description: Model Context Protocol server enabling AI-assisted EdgeConnect management
---
## Overview
The Edge Connect MCP Server enables AI assistants like [Claude](https://claude.ai) to directly interact with EdgeConnect through the [Model Context Protocol](https://modelcontextprotocol.io/) (MCP). This allows natural language requests to manage applications and instances, with AI agents autonomously executing API operations on your behalf.
MCP is an open protocol that connects AI systems to data sources and tools. In agentic coding workflows, AI assistants can plan, execute, and verify infrastructure operations through conversational interfaces while maintaining full visibility and control.
## Key Features
* **Natural language control**: Manage EdgeConnect resources through conversational AI interactions
* **Full API coverage**: Supports all App and AppInstance operations (create, list, show, update, delete, refresh)
* **Rich visualizations**: Interactive dashboards and detail views via [MCP-UI](https://github.com/MCP-UI-Org/mcp-ui). Tools return both JSON and HTML responses — clients like [Goose](https://github.com/block/goose) render the HTML dashboards, while others use the JSON data.
* **Multiple integration modes**: Local stdio for desktop apps, remote HTTP/SSE for web clients
* **Production-ready security**: OAuth 2.1 authorization with JWT validation and PKCE for remote deployments
* **Graceful fallbacks**: Returns structured JSON when UI resources aren't supported
## Purpose in EDP
Manual infrastructure operations don't scale, but writing automation scripts for every task is costly. The Edge Connect MCP Server bridges this gap by enabling AI assistants to act as automation agents — understanding natural language requests, planning operations, and executing them through the [EdgeConnect API](https://swagger.edge.platform.mg3.mdb.osc.live/).
This expands EdgeConnect accessibility beyond developers comfortable with [CLIs](/docs/edgeconnect/edgeconnect-client/) and [APIs](https://swagger.edge.platform.mg3.mdb.osc.live/), enabling infrastructure management through conversation while maintaining the precision and repeatability of programmatic control. For teams already using AI coding assistants, it integrates EdgeConnect operations directly into their development workflow.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-mcp
**Releases**: https://edp.buildth.ing/DevFW-CICD/edge-connect-mcp/releases
## Getting Started
### Prerequisites
* EdgeConnect access credentials (username/password or bearer token)
* For [Claude Desktop](https://claude.ai/download): macOS or Windows with Claude Desktop installed
* For [Claude Code](https://github.com/anthropics/claude-code): Claude CLI installed
* For remote deployment: Server infrastructure and optional OAuth provider
### Quick Start
1. Download the binary from [releases](https://edp.buildth.ing/DevFW-CICD/edge-connect-mcp/releases) or build from source:
```bash
go build -o edge-connect-mcp
```
2. Configure for Claude Desktop by editing the config file:
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
"mcpServers": {
"edge-connect": {
"command": "/path/to/edge-connect-mcp",
"env": {
"EDGE_CONNECT_BASE_URL": "https://hub.apps.edge.platform.mg3.mdb.osc.live",
"EDGE_CONNECT_AUTH_TYPE": "credentials",
"EDGE_CONNECT_USERNAME": "your-username",
"EDGE_CONNECT_PASSWORD": "your-password",
"EDGE_CONNECT_DEFAULT_REGION": "EU"
}
}
}
}
```
3. Restart Claude Desktop and verify the MCP server appears in the tools menu
### Verification
Ask Claude: "List my EdgeConnect applications in the EU region." If the MCP server is configured correctly, Claude will retrieve and display your applications.
## Usage Examples
### Conversational Operations
The MCP server enables natural interactions like:
* "Show me all running application instances"
* "Create a new app called nginx-test using the nginx:latest image"
* "Deploy my-app version 2.0 to the Munich cloudlet"
* "Delete all instances of old-app"
Claude interprets these requests, selects appropriate tools, and executes the operations while explaining each step.
### Integration with Claude Code CLI
Configure the MCP server using the Claude CLI:
```bash
# Add MCP server
claude mcp add edge-connect
# Configure
claude mcp edit edge-connect --set command=/path/to/edge-connect-mcp
claude mcp edit edge-connect --set-env EDGE_CONNECT_BASE_URL=https://hub.apps.edge.platform.mg3.mdb.osc.live
claude mcp edit edge-connect --set-env EDGE_CONNECT_AUTH_TYPE=credentials
claude mcp edit edge-connect --set-env EDGE_CONNECT_USERNAME=your-username
claude mcp edit edge-connect --set-env EDGE_CONNECT_PASSWORD=your-password
# Test
claude mcp test edge-connect
```
### Remote Deployment
For team access or web-based clients, run in remote mode with [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-09):
```bash
# Edge Connect configuration
export EDGE_CONNECT_BASE_URL="https://hub.apps.edge.platform.mg3.mdb.osc.live"
export EDGE_CONNECT_AUTH_TYPE="credentials"
export EDGE_CONNECT_USERNAME="your-username"
export EDGE_CONNECT_PASSWORD="your-password"
# MCP server configuration
export MCP_SERVER_MODE="remote"
export MCP_REMOTE_HOST="0.0.0.0"
export MCP_REMOTE_PORT="8080"
# OAuth 2.1 configuration
export OAUTH_ENABLED="true"
export OAUTH_RESOURCE_URI="https://mcp.example.com"
export OAUTH_AUTH_SERVERS="https://auth.example.com"
export OAUTH_ISSUER="https://auth.example.com"
export OAUTH_JWKS_URL="https://auth.example.com/.well-known/jwks.json"
./edge-connect-mcp -mode remote
```
Web clients connect via `http://your-server:8080/mcp` using OAuth bearer tokens.
**Note**: No shared remote server endpoint is currently deployed. Users must run their own instance locally or on their infrastructure. A shared deployment may be provided in future.
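Before wiring up a client, it can help to probe the server's `/health` endpoint (described under Troubleshooting below). A minimal Go sketch, assuming a hypothetical server address and token:
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical address; replace with your own deployment.
	req, err := http.NewRequest("GET", "http://your-server:8080/health", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Needed when bearer-token auth or OAuth is enabled on the server.
	req.Header.Set("Authorization", "Bearer your-token")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("server unreachable: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy server returns {"status":"healthy"}.
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}
```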
## Configuration
### Environment Variables
**EdgeConnect API** (required):
- `EDGE_CONNECT_BASE_URL`: API endpoint
- `EDGE_CONNECT_AUTH_TYPE`: Authentication method (`token`, `credentials`, or `none`)
- `EDGE_CONNECT_TOKEN`: Bearer token (when `auth_type=token`)
- `EDGE_CONNECT_USERNAME`: Username (when `auth_type=credentials`)
- `EDGE_CONNECT_PASSWORD`: Password (when `auth_type=credentials`)
**Note on Authentication**: Username/password credentials are currently required because federated access with short-lived credentials is not yet available. The Platform Team plans to provide federated authentication in the coming months.
**Optional**:
- `EDGE_CONNECT_DEFAULT_REGION`: Default region (default: `EU`)
- `EDGE_CONNECT_DEBUG`: Enable debug logging (`true` or `1`)
**Remote Mode**:
- `MCP_SERVER_MODE`: Server mode (`stdio` or `remote`)
- `MCP_REMOTE_HOST`: Bind address (default: `0.0.0.0`)
- `MCP_REMOTE_PORT`: Port (default: `8080`)
- `MCP_REMOTE_AUTH_REQUIRED`: Enable simple bearer token auth (`true` or `false`)
- `MCP_REMOTE_AUTH_TOKENS`: Comma-separated bearer tokens
**OAuth 2.1** (recommended for production remote deployments):
- `OAUTH_ENABLED`: Enable OAuth (`true` or `false`)
- `OAUTH_RESOURCE_URI`: Protected resource identifier
- `OAUTH_AUTH_SERVERS`: Authorization server URLs (comma-separated)
- `OAUTH_ISSUER`: JWT token issuer
- `OAUTH_JWKS_URL`: JSON Web Key Set endpoint
### Command-Line Flags
Flags override environment variables:
- `-mode`: Server mode (`stdio` or `remote`)
- `-host`: Bind address for remote mode
- `-port`: Port for remote mode
### Available Tools
**App Management**:
- `create_app`: Create new application
- `show_app`: Retrieve application details (with UI visualization)
- `list_apps`: List applications matching filters (with UI dashboard)
- `update_app`: Update existing application
- `delete_app`: Delete application (idempotent)
**App Instance Management**:
- `create_app_instance`: Create instance on cloudlet
- `show_app_instance`: Retrieve instance details
- `list_app_instances`: List instances matching filters (with UI dashboard)
- `update_app_instance`: Update instance configuration
- `refresh_app_instance`: Refresh instance state
- `delete_app_instance`: Delete instance (idempotent)
### MCP-UI Visualization Support
This server implements [MCP-UI](https://github.com/MCP-UI-Org/mcp-ui), returning both structured JSON and rich HTML visualizations in every response. The HTML includes interactive dashboards with status indicators, filtering, and visual organization of infrastructure data.
MCP clients that support UI resources (currently [Goose](https://github.com/block/goose)) will automatically render these HTML views. Clients without UI support (like [Claude Desktop](https://claude.ai/download) and [Claude Code](https://github.com/anthropics/claude-code)) receive the JSON data and work normally without the visual enhancements.
Operations with UI support include `list_apps`, `show_app`, `list_app_instances`, and `show_app_instance`.
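The dual-format pattern itself is straightforward. The sketch below uses illustrative structs of our own (not the actual MCP SDK types) to show how a single tool result can carry both machine-readable JSON and a renderable HTML view:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative shapes only: the real server uses the MCP SDK's
// content and resource types, not these structs.
type ToolResult struct {
	JSON string // structured data for clients without UI support
	HTML string // MCP-UI resource for clients that can render it
}

type App struct {
	Name   string `json:"name"`
	Status string `json:"status"`
}

func listAppsResult(apps []App) (ToolResult, error) {
	data, err := json.Marshal(apps)
	if err != nil {
		return ToolResult{}, err
	}
	html := "<ul>"
	for _, a := range apps {
		html += fmt.Sprintf("<li>%s (%s)</li>", a.Name, a.Status)
	}
	html += "</ul>"
	return ToolResult{JSON: string(data), HTML: html}, nil
}

func main() {
	res, _ := listAppsResult([]App{{Name: "nginx-test", Status: "Ready"}})
	fmt.Println(res.JSON) // used by Claude Desktop / Claude Code
	fmt.Println(res.HTML) // rendered by MCP-UI-aware clients like Goose
}
```
Clients pick whichever representation they can handle, which is why clients without UI support degrade gracefully to the JSON data.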
## Integration Points
* **EdgeConnect API**: Communicates with [EdgeConnect platform](https://hub.apps.edge.platform.mg3.mdb.osc.live) for all operations
* **EdgeConnect SDK**: Built on the [Go SDK](/docs/edgeconnect/edgeconnect-sdk/) for authentication and API client implementation
* **[MCP-UI](https://github.com/MCP-UI-Org/mcp-ui)**: All tools return dual-format responses (JSON + HTML). Clients that support UI resources (like [Goose](https://github.com/block/goose)) render rich HTML dashboards; others use the JSON data automatically.
* **[Claude Desktop](https://claude.ai/download)/[Code](https://github.com/anthropics/claude-code)**: Primary integration targets for AI-assisted infrastructure management
* **OAuth Providers**: Supports [Auth0](https://auth0.com/), [Amazon Cognito](https://aws.amazon.com/cognito/), [Keycloak](https://www.keycloak.org/), and other [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-09)-compliant systems
## Troubleshooting
### MCP Server Not Appearing
**Problem**: Claude Desktop doesn't show the edge-connect tools
**Solution**:
- Verify the config file path is correct for your OS
- Check the `command` path points to the binary
- Restart Claude Desktop after configuration changes
- Check Claude Desktop logs for MCP initialization errors
### Authentication Errors
**Problem**: Operations fail with "authentication failed" or "unauthorized"
**Solution**:
- Verify credentials in environment variables are correct
- Ensure `EDGE_CONNECT_BASE_URL` uses HTTPS and has no trailing slash
- Check `EDGE_CONNECT_AUTH_TYPE` matches your credential type
- Test credentials with the [EdgeConnect CLI](/docs/edgeconnect/edgeconnect-client/) first
### Remote Server Connection Issues
**Problem**: Can't connect to remote MCP server
**Solution**:
- Verify server is running: check `/health` endpoint returns `{"status":"healthy"}`
- If OAuth is enabled, ensure client has valid JWT bearer token
- Check firewall rules allow connections to the MCP port
- Verify CORS headers if connecting from web clients
- Review server logs for authentication or validation errors
## Status
**Maturity**: Production
## Additional Resources
* [Model Context Protocol Specification](https://modelcontextprotocol.io/)
* [MCP-UI Documentation](https://github.com/MCP-UI-Org/mcp-ui)
* [EdgeConnect API Documentation](https://swagger.edge.platform.mg3.mdb.osc.live/)
* [Claude Desktop](https://claude.ai/download)
* [OAuth 2.1 RFC](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-09)
* [Source Code Repository](https://edp.buildth.ing/DevFW-CICD/edge-connect-mcp)

---
title: Forgejo Actions
linkTitle: Forgejo Actions
weight: 40
description: >
CI/CD actions for automated EdgeConnect deployment and deletion
---
## Overview
The EdgeConnect Actions are custom composite actions for use in [Forgejo](/docs/edp/forgejo/actions/)/[GitHub Actions](https://forgejo.org/docs/latest/user/actions/github-actions/) that automate EdgeConnect application deployments in CI/CD pipelines. They wrap the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) to provide a simple, declarative way to deploy and delete applications without manual CLI installation or configuration.
Two actions are available:
- **edge-connect-deploy-action**: Deploys applications using declarative YAML configuration
- **edge-connect-delete-action**: Deletes applications and their instances from EdgeConnect
## Key Features
* **Zero installation**: Actions automatically download and use the EdgeConnect Client
* **Declarative workflow**: Deploy applications using YAML configuration files
* **CI/CD optimized**: Designed for automated pipelines with auto-approve and dry-run support
* **Version pinning**: Specify exact EdgeConnect Client version for reproducible builds
* **Secrets management**: Credentials passed securely through workflow secrets
* **Compatible with GitHub and Forgejo Actions**: Works in both ecosystems
## Purpose in EDP
CI/CD automation is essential for modern development workflows. While the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) provides powerful deployment capabilities, integrating it into CI/CD pipelines requires downloading binaries, managing credentials, and configuring authentication for each workflow run.
These actions eliminate that boilerplate by:
- Automatically fetching the correct Client version
- Handling authentication setup
- Providing a clean, reusable action interface
- Reducing pipeline configuration to a few lines
This enables teams to focus on application configuration rather than pipeline plumbing, while maintaining the full power of declarative EdgeConnect deployments.
The actions complement the [Terraform provider](/docs/edgeconnect/terraform-provider/) by offering a simpler option for teams already using Forgejo/GitHub Actions who want deployment automation without adopting Terraform.
## Repository
**Deploy Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action
**Delete Action**: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action
**Demo Repository**: https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo
## Getting Started
### Prerequisites
* Forgejo or GitHub repository with Actions enabled
* EdgeConnect access credentials (username and password)
* `EdgeConnectConfig.yaml` file defining your application (see [YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
* For Kubernetes apps: K8s manifest file referenced in the config
* Repository secrets configured with EdgeConnect credentials
### Quick Start
1. Create an `EdgeConnectConfig.yaml` file in your repository defining your application (see [Client documentation](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
2. Add EdgeConnect credentials as repository secrets:
- `EDGEXR_PLATFORM_USERNAME`
- `EDGEXR_PLATFORM_PASSWORD`
3. Create a workflow file (e.g., `.forgejo/workflows/deploy.yaml`) using the action
4. Commit and push to trigger the workflow
### Verification
After the workflow runs successfully:
- Check the workflow logs for deployment status
- Verify resources appear in the [EdgeConnect console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
- Test application endpoints are accessible
## Usage Examples
### Minimal Deploy Action
```yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Minimal Delete Action
```yaml
- name: Delete from EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Complete Workflow Example
A typical deployment workflow that builds, tags, and deploys:
```yaml
name: deploy
on:
workflow_run:
workflows: [build]
types:
- completed
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Update manifest with image tag
run: |
sha="${{ github.sha }}"
shortSha="${sha:0:7}"
echo "Setting image version to: registry.example.com/myapp:${shortSha}"
sed -i "s@###IMAGETAG###@registry.example.com/myapp:${shortSha}@g" ./k8s-deployment.yaml
- name: Deploy to EdgeConnect
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Dry-Run Mode
Preview changes without applying them:
```yaml
- name: Preview deployment
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
dryRun: 'true'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
### Version Pinning
Use a specific EdgeConnect Client version:
```yaml
- name: Deploy with specific version
uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
with:
configFile: ./EdgeConnectConfig.yaml
version: 'v2.0.1'
baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
```
## Integration Points
* **EdgeConnect Client**: Actions download and execute the Client CLI tool
* **EdgeConnect SDK**: Client uses the SDK for all API interactions
* **Forgejo/GitHub Actions**: Native integration with both action ecosystems
* **EdgeConnect API**: All operations communicate with EdgeConnect platform APIs
* **Container Registries**: Works with any registry for application images
## Configuration
### Action Inputs
Both deploy and delete actions accept the same inputs:
| Input | Required | Default | Description |
|-------|----------|---------|-------------|
| `configFile` | Yes | - | Path to EdgeConnectConfig.yaml file |
| `baseUrl` | Yes | - | EdgeConnect API base URL (e.g., https://hub.apps.edge.platform.mg3.mdb.osc.live) |
| `username` | Yes | - | EdgeConnect username for authentication |
| `password` | Yes | - | EdgeConnect password for authentication |
| `dryRun` | No | `false` | Preview changes without applying (set to `'true'` to enable) |
| `version` | No | `v2.0.1` | EdgeConnect Client version to download and use |
### YAML Configuration File
The `configFile` parameter points to an `EdgeConnectConfig.yaml` that defines your application and deployment targets. See the [EdgeConnect Client YAML Configuration Format](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format) for the complete specification.
Example structure:
```yaml
kind: edgeconnect-deployment
metadata:
name: "my-app"
appVersion: "1.0.0"
organization: "myorg"
spec:
k8sApp:
manifestFile: "./k8s-deployment.yaml"
infraTemplate:
- region: "EU"
cloudletOrg: "TelekomOp"
cloudletName: "Munich"
flavorName: "EU.small"
```
### Secrets Management
Configure repository secrets in Forgejo/GitHub:
1. Navigate to repository Settings → Secrets
2. Add secrets:
- Name: `EDGEXR_PLATFORM_USERNAME`, Value: your EdgeConnect username
- Name: `EDGEXR_PLATFORM_PASSWORD`, Value: your EdgeConnect password
3. Reference in workflows using `${{ secrets.SECRET_NAME }}`
## Troubleshooting
### Action Fails with "Failed to download edge-connect-client"
**Problem**: Action cannot download the Client binary
**Solution**:
- Verify the `version` parameter matches an actual release version
- Ensure the release exists at https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
- Check network connectivity from the runner
- Try using default version by omitting the `version` parameter
### Authentication Errors
**Problem**: "authentication failed" or "unauthorized" errors
**Solution**:
- Verify secrets are correctly configured in repository settings
- Check secret names match exactly (case-sensitive)
- Ensure `baseUrl` is correct for your target environment (Edge vs Orca)
- Confirm credentials work by testing with the [client](../edgeconnect-client/)
### "Configuration validation failed"
**Problem**: YAML configuration file validation errors
**Solution**:
- Verify `configFile` path is correct relative to repository root
- Check YAML syntax is valid (use a YAML validator)
- Ensure all required fields are present (see [Client docs](/docs/edgeconnect/edgeconnect-client/#yaml-configuration-format))
- Verify manifest file paths in the config exist and are correct
### Resources Not Appearing in Console
**Problem**: Action succeeds but resources don't appear in EdgeConnect console
**Solution**:
- Verify you're checking the correct environment (Edge vs Orca)
- Ensure `baseUrl` parameter matches the console you're viewing
- Check organization name in config matches your console access
- Review action logs for any warnings or skipped operations
### Deployment Succeeds but App Doesn't Work
**Problem**: Deployment completes but application is not functioning
**Solution**:
- Check application logs in the EdgeConnect console
- Verify image tags are correct (common issue with placeholder replacement)
- Ensure manifest files reference correct image registry and paths
- Check network configuration allows required outbound connections
- Verify cloudlet has sufficient resources for the specified flavor
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect Client Documentation](/docs/edgeconnect/edgeconnect-client/)
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* [Terraform Provider Documentation](/docs/edgeconnect/terraform-provider/)
* [EdgeConnect Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/)
* [Demo Repository](https://edp.buildth.ing/DevFW-CICD/edgeconnect-action-demo)
* [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)

---
title: EdgeConnect Client
linkTitle: Client
weight: 20
description: >
Client software for establishing EdgeConnect connections
---
## Overview
The EdgeConnect Client is a command-line tool for managing EdgeConnect applications and instances. It is built on our Golang [SDK](/docs/edgeconnect/edgeconnect-sdk/) and can create, destroy, describe, and list various resources.
The tool provides both imperative commands (for direct resource management) and declarative workflows (using YAML configuration files) to deploy applications across multiple edge cloudlets. It supports different EdgeConnect deployment environments through an API version selector.
## Key Features
* **Dual workflow support**: Imperative commands for direct operations, declarative YAML for infrastructure-as-code
* **Multi-cloudlet deployment**: Deploy applications to multiple edge locations from a single configuration
* **Deployment planning**: Preview and approve changes before applying them (dry-run mode)
* **Environment compatibility**: Works with different EdgeConnect deployment environments (configured via `api-version`)
* **CI/CD ready**: Designed for automated deployments with auto-approve and exit codes
## Purpose in EDP
No system can be considered useful unless it is actually used in practice. While the Edge Connect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential for developers, many use cases call for interaction that is automated yet simpler than working with the API directly.
The EdgeConnect Client bridges the gap between manual console operations and direct API integration, enabling automated deployments in CI/CD pipelines, infrastructure-as-code workflows, and scripted operations while maintaining simplicity and usability.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Releases**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases
## Getting Started
### Prerequisites
* Access credentials for the EdgeConnect platform (username and password)
* Knowledge of your target deployment environment (determines `api-version` setting)
* For Kubernetes deployments: K8s manifest files
* For Docker deployments: Docker image reference
### Quick Start
1. Download the Edge Connect Client binary from the Forgejo [releases page](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/releases) for your platform (Linux, macOS, or Windows)
2. Extract and move to your PATH: `tar -xzf edge-connect-client_*.tar.gz && sudo mv edge-connect /usr/local/bin/`
3. Configure authentication using environment variables or a config file (see Configuration section)
4. Verify installation: `edge-connect --help`
### Verification
Run `edge-connect app list --org <your-org> --region <region>` to verify you can authenticate and communicate with the EdgeConnect API.
## Usage Examples
### Declarative Deployment (Recommended)
Create an `EdgeConnectConfig.yaml` file defining your application and deployment targets, then apply it:
```bash
edge-connect apply -f EdgeConnectConfig.yaml
```
Use `--dry-run` to preview changes without applying them, and `--auto-approve` for automated CI/CD workflows.
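Because the client is CI/CD ready and signals failure through its exit code, custom automation can simply shell out to it. A minimal Go sketch, assuming the `edge-connect` binary is on `PATH` and credentials are supplied via the environment variables described under Configuration:
```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// --auto-approve skips the interactive confirmation, as in CI/CD.
	cmd := exec.Command("edge-connect", "apply",
		"-f", "EdgeConnectConfig.yaml", "--auto-approve")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	// Credentials come from EDGE_CONNECT_USERNAME / EDGE_CONNECT_PASSWORD
	// in the inherited environment.
	if err := cmd.Run(); err != nil {
		// A non-zero exit code surfaces here as an *exec.ExitError.
		log.Fatalf("deployment failed: %v", err)
	}
	log.Println("deployment applied")
}
```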
### Imperative Commands
Direct resource management using CLI commands:
```bash
# Create an application
edge-connect app create --org myorg --name myapp --version 1.0.0 --region EU
# Create an instance on a specific cloudlet
edge-connect instance create --org myorg --name myinstance \
--app myapp --version 1.0.0 --region EU \
--cloudlet Munich --cloudlet-org TelekomOp --flavor EU.small
# List resources
edge-connect app list --org myorg --region EU
edge-connect instance list --org myorg --region EU
# Delete resources
edge-connect instance delete --org myorg --name myinstance --region EU \
--cloudlet Munich --cloudlet-org TelekomOp
edge-connect app delete --org myorg --name myapp --version 1.0.0 --region EU
```
## Integration Points
* **EdgeConnect API**: Communicates with EdgeConnect platform APIs for all resource operations
* **EdgeConnect SDK**: Built on top of the Golang SDK, sharing authentication and client implementation
* **CI/CD Pipelines**: Designed for integration with GitLab CI, GitHub Actions, and other automation tools
* **Infrastructure-as-Code**: YAML configuration files enable GitOps workflows
## Configuration
### Global Settings
The client can be configured via config file, environment variables, or command-line flags (in order of precedence: flags > env vars > config file).
**Config File** (`~/.edge-connect.yaml` or use `--config` flag):
```yaml
base_url: "https://hub.apps.edge.platform.mg3.mdb.osc.live"
username: "your-username@example.com"
password: "your-password"
api_version: "v2" # v1 or v2 - identifies deployment environment
```
**Environment Variables**:
- `EDGE_CONNECT_BASE_URL`: API base URL
- `EDGE_CONNECT_USERNAME`: Authentication username
- `EDGE_CONNECT_PASSWORD`: Authentication password
- `EDGE_CONNECT_API_VERSION`: API version selector (v1 or v2, default: v2)
**Global Flags** (available on all commands):
- `--base-url`: API base URL
- `--username`: Authentication username
- `--password`: Authentication password
- `--api-version`: API version selector (v1 or v2) - specifies which deployment environment to target
- `--config`: Path to config file
- `--debug`: Enable debug logging
**Note on API Versions**: The `api-version` setting (v1 or v2) is an internal label used to distinguish between different EdgeConnect deployment environments, not an official API version designation from the platform.
### Commands
**App Management** (`edge-connect app <command>`):
CLI command `app` corresponds to **App** in the platform console.
- `create`: Create app (flags: `--org`, `--name`, `--version`, `--region`)
- `show`: Show app details (flags: same as create)
- `list`: List apps (flags: `--org`, `--region`, optional: `--name`, `--version`)
- `delete`: Delete app (flags: `--org`, `--name`, `--version`, `--region`)
**App Instance Management** (`edge-connect instance <command>`):
CLI command `instance` corresponds to **App Instance** in the platform console.
- `create`: Create app instance (flags: `--org`, `--name`, `--app`, `--version`, `--region`, `--cloudlet`, `--cloudlet-org`, `--flavor`)
- `show`: Show app instance details (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`, `--app-id`)
- `list`: List app instances (flags: same as show, all optional)
- `delete`: Delete app instance (flags: `--org`, `--name`, `--cloudlet`, `--cloudlet-org`, `--region`)
**Declarative Operations**:
- `apply`: Deploy from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
- `delete`: Delete from YAML (flags: `-f <file>`, `--dry-run`, `--auto-approve`)
### YAML Configuration Format
The `EdgeConnectConfig.yaml` file defines apps and their deployment targets:
```yaml
kind: edgeconnect-deployment
metadata:
name: "my-app" # App name (required)
appVersion: "1.0.0" # App version (required)
organization: "myorg" # Organization (required)
spec:
# Choose ONE: k8sApp OR dockerApp
k8sApp:
manifestFile: "./k8s-deployment.yaml" # Path to K8s manifest
# OR dockerApp:
# image: "registry.example.com/myimage:tag"
# manifestFile: "./docker-compose.yaml" # Optional
# Deployment targets (at least one required)
infraTemplate:
- region: "EU" # Region (required)
cloudletOrg: "TelekomOp" # Cloudlet provider (required)
cloudletName: "Munich" # Cloudlet name (required)
flavorName: "EU.small" # Instance size (required)
- region: "US"
cloudletOrg: "TelekomOp"
cloudletName: "gardener-shepherd-test"
flavorName: "default"
# Optional network configuration
network:
outboundConnections:
- protocol: "tcp" # tcp, udp, or icmp
portRangeMin: 80
portRangeMax: 80
remoteCIDR: "0.0.0.0/0"
- protocol: "tcp"
portRangeMin: 443
portRangeMax: 443
remoteCIDR: "0.0.0.0/0"
# Optional deployment strategy (default: recreate)
deploymentStrategy: "recreate" # recreate, blue-green, or rolling
```
**Key Points**:
- Manifest file paths are relative to the config file location
- Multiple `infraTemplate` entries deploy to multiple cloudlets simultaneously
- Network configuration is optional; outbound connections default to platform settings
- Deployment strategy currently only supports "recreate" (others planned)
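For custom tooling that consumes the same file, the schema maps naturally onto Go structs. An illustrative sketch using `gopkg.in/yaml.v3` (the struct names are ours, not part of the client):
```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Structs mirror the documented schema; names are illustrative.
type Config struct {
	Kind     string `yaml:"kind"`
	Metadata struct {
		Name         string `yaml:"name"`
		AppVersion   string `yaml:"appVersion"`
		Organization string `yaml:"organization"`
	} `yaml:"metadata"`
	Spec struct {
		InfraTemplate []struct {
			Region       string `yaml:"region"`
			CloudletOrg  string `yaml:"cloudletOrg"`
			CloudletName string `yaml:"cloudletName"`
			FlavorName   string `yaml:"flavorName"`
		} `yaml:"infraTemplate"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("EdgeConnectConfig.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg Config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s v%s: %d deployment target(s)\n",
		cfg.Metadata.Name, cfg.Metadata.AppVersion, len(cfg.Spec.InfraTemplate))
}
```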
## Troubleshooting
### Authentication Failures
**Problem**: Errors like "authentication failed" or "unauthorized"
**Solution**:
- Verify credentials are correct in config file or environment variables
- Ensure `base_url` includes the scheme (https://) and has no trailing path
- Check that you're connecting to the correct cloud instance (Edge or Orca)
- Ensure the correct `api-version` is set for your deployment environment
### "Configuration validation failed" Errors
**Problem**: YAML configuration file validation errors
**Solution**:
- Check that all required fields are present (name, appVersion, organization)
- Ensure you have exactly one of `k8sApp` or `dockerApp` (not both, not neither)
- Verify manifest file paths exist relative to the config file location
- Check for leading/trailing whitespace in string values
- Ensure at least one `infraTemplate` entry is defined
### Wrong API Version or Cloud Instance
**Problem**: Commands work but resources don't appear in the console, or vice versa
**Solution**: Verify both the `base_url` and `api-version` match your target environment. There are two cloud instances (Edge and Orca) with different URLs and API versions. Check with your platform administrator for the correct configuration.
## Status
**Maturity**: Production
## Additional Resources
* [EdgeConnect SDK Documentation](/docs/edgeconnect/edgeconnect-sdk/)
* **Edge Cloud**: [Console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.edge.platform.mg3.mdb.osc.live/)
* **Orca Cloud**: [Console](https://hub.apps.orca.platform.mg3.mdb.osc.live/) | [API Docs](https://swagger.orca.platform.mg3.mdb.osc.live/)
* [Source Code Repository](https://edp.buildth.ing/DevFW-CICD/edge-connect-client)

---
title: EdgeConnect SDK
linkTitle: SDK
weight: 10
description: >
Software Development Kit for interacting with EdgeConnect
---
## Overview
The EdgeConnect SDK is a Go library which provides a simple method for interacting with Edge Connect within programs. It is designed to be used by other tools, such as the [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) or [Terraform provider](/docs/edgeconnect/terraform-provider/).
## Key Features
* Allows querying endpoints without the need to manage API calls and responses directly
* Wraps the existing [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live/)
* Supports multiple iterations of the API, internally labelled v1 and v2 (see Troubleshooting below)
## Purpose in EDP
No system can be considered useful unless it is actually used in practice. While the Edge Connect [console](https://hub.apps.edge.platform.mg3.mdb.osc.live/) and [API](https://swagger.edge.platform.mg3.mdb.osc.live/) are essential for developers, many use cases call for interaction that is automated yet simpler than working with the API directly. These include a [command-line tool](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/).
While each such tool could independently wrap the existing endpoints, that is generally too low-level for sustainable development: it would duplicate extensive boilerplate in each package, and small changes to API endpoints or error handling would force constant rework in all of them.
To avoid this, the Edge Connect SDK provides a common library for interacting with EdgeConnect, abstracting HTTP requests and authentication while still giving direct access to the available endpoints.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client
**Documentation**: https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk
## Getting Started
### Prerequisites
* Golang
* Edge Connect credentials
### Quick Start
1. Simply [import](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#installation) the SDK into your project
2. [Initialise and configure](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#configuration-options) a client with your credentials
3. [Build](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) your code around the existing endpoints
### Verification
Initialise a client with your credentials and make a read-only call, for example listing apps for your organization; a successful response without authentication errors confirms the SDK is working (see the sketch below).
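A minimal sketch of such a check, assuming illustrative `NewClient` and `ListApps` names (the actual constructor and method signatures are documented in the [README](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples)):
```go
package main

import (
	"context"
	"log"

	edgeconnect "edp.buildth.ing/DevFW-CICD/edge-connect-client/sdk/edgeconnect"
)

func main() {
	// Illustrative constructor; see the SDK README for the actual API.
	client, err := edgeconnect.NewClient(edgeconnect.Config{
		BaseURL:  "https://hub.apps.edge.platform.mg3.mdb.osc.live",
		Username: "your-username",
		Password: "your-password",
	})
	if err != nil {
		log.Fatalf("client setup failed: %v", err)
	}
	// Illustrative method; any read-only call works as a check.
	if _, err := client.ListApps(context.Background(), "myorg", "EU"); err != nil {
		log.Fatalf("verification failed: %v", err)
	}
	log.Println("SDK verified: authenticated and listed apps")
}
```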
## Usage Examples
See [README](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk#examples) for simple code examples, or repositories for [EdgeConnect Client](/docs/edgeconnect/edgeconnect-client/) and [Terraform provider](/docs/edgeconnect/terraform-provider/) for full projects relying on it.
## Troubleshooting
### Varying code versions
**Problem**: While the Edge Connect API does not (at time of writing) have distinct semantic versions, it does have different iterations which behave differently. The SDK provides two libraries, labelled [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/sdk/edgeconnect/v2), which refer to API definitions stored as [v1](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v1.json) and [v2](https://edp.buildth.ing/DevFW-CICD/edge-connect-client/src/branch/main/api/swagger_v2.json) respectively.
**Solution**: If you receive errors when using the SDK, consider changing the version you import:
```go
import v1 "edp.buildth.ing/DevFW-CICD/edge-connect-client/sdk/edgeconnect"
import v2 "edp.buildth.ing/DevFW-CICD/edge-connect-client/v2/sdk/edgeconnect/v2"
```
## Status
**Maturity**: Beta

---
title: Terraform provider for Edge cloud
linkTitle: Terraform provider
weight: 30
description: Custom Terraform provider for orchestrating Edge deployments
---
## Overview
This work-in-progress Terraform provider for Edge cloud allows orchestration of selected resources using flexible, concise [HCL](https://developer.hashicorp.com/terraform/language). Deployment to Edge cloud thus happens through a familiar format, abstracting away specific endpoints and authentication details, and allowing seamless combination of Edge resources with others: on OTC, other clouds, or local utilities.
## Key Features
* Interact with Apps and AppInstances using widely-used Terraform framework
* Minimal configuration, thanks to Terraform's conventions: just an endpoint and credentials, with no need to deal with headers or other API boilerplate
* Also works with community-driven OpenTofu
* The provider is currently under development; more features can be added on request.
## Purpose in EDP
Interacting with infrastructure is a complex process, with many parameters and components working together. Doing so by clicking buttons in a web UI ("ClickOps") is difficult to scale and quickly becomes confusing.
Instead, automation is possible through APIs and SDKs. Working directly with an API (e.g. via `curl`) inevitably involves large amounts of boilerplate code to manage authentication, rarely-changing configuration such as region/tenant selection, and more. When one resource (say, a web server) must interact with another (say, a DNS record), the cross-references further increase this complexity.
An SDK mitigates this complexity when coding software, by providing library functions which interact with the API in abstracted ways which require a minimum of necessary information. Our SDK for Edge Connect is described in a [separate section](/docs/edgeconnect/edgeconnect-sdk/).
However, when simply wanting to deploy infrastructure in isolation - say, updating the status of a Kubernetes or App resource after a change in configuration - an SDK is still an overly complicated tool.
This is where [Terraform](https://developer.hashicorp.com/terraform), or its community-led alternative [OpenTofu](https://opentofu.org/), comes in. Both provide a simple language for defining resources, with a level of abstraction that retains the power and flexibility of the API while greatly simplifying definitions and execution.
Terraform is widely used for major infrastructure systems such as [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) or general [Kubernetes](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs). However, it is highly flexible, supporting a range of resource types which are not inherently tied to infrastructure: [file](https://registry.terraform.io/search/providers?q=file) manipulation; package setup through [Ansible](https://registry.terraform.io/providers/ansible/aap/1.4.0); secret generation in [Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs).
As a result of this breadth of functionality and cross-compatibility, Terraform support is considered by some to be necessary for a platform to be used 'seriously' - that is, at scale, or in major workloads. Our provider thus unlocks broad market relevance for the platform in a way few other tools or features could.
## Repository
**Code**: https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect
**Documentation**: Provider is intended to ultimately wrap each resource-based endpoint of the [Edge API](https://swagger.edge.platform.mg3.mdb.osc.live/), but currently supports a limited [subset of resources](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources).
## Getting Started
### Prerequisites
* [Terraform](https://developer.hashicorp.com/terraform) or [OpenTofu](https://opentofu.org/)
* Edge access and credentials
### Quick Start
1. Configure Terraform to use the provider by [including it](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#using-terraform-registry-recommended) in `provider.tf`
1. In the same directory, create terraform resources in `.tf` files according to the [spec](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#resources)
1. [Set up credentials](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect/src/branch/main/README.md#provider-configuration) using environment variables or a `provider` block
1. Run `terraform init` in the directory
1. Execute `terraform plan` and/or `terraform apply` to deploy your application
1. `terraform destroy` can be used to remove all deployed resources
### Verification
If `terraform apply` completes successfully (without errors), the provider is working correctly. You can also manually validate in the Edge UI that your resources have been deployed/reconfigured as Terraform indicated.
## Status
**Maturity**: Experimental
## Additional Resources
* [Terralist](https://www.terralist.io/)
* [Terraform](https://developer.hashicorp.com/terraform)
* [OpenTofu](https://opentofu.org/)
* [Edge Connect API](https://swagger.edge.platform.mg3.mdb.osc.live)
## Integration Points
* **Edge Connect SDK**: The provider uses the [Edge Connect SDK](/docs/edgeconnect/edgeconnect-sdk/) under the hood.
* **Terralist**: The provider is published using a [custom instance](https://terralist.edp.buildth.ing/) of [Terralist](https://www.terralist.io/). This [can only](https://edp.buildth.ing/DevFW-CICD/stacks/src/commit/5b438097bbd027f0025d6198c34c22f856392a03/template/stacks/terralist/terralist/values.yaml#L9-L38) be written to with a login via [Forgejo](https://edp.buildth.ing/), but can be read publicly. See [the repository README](https://edp.buildth.ing/DevFW-CICD/terraform-provider-edge-connect#terralist) for more information.
### Component Architecture (C4)
<likec4-view view-id="provider" browser="true"></likec4-view>

---
title: Edge Developer Platform
linkTitle: Edge Developer Platform
weight: 10
description: >
A platform to support developers working in the Edge, based around Forgejo
---
## Purpose
The Edge Developer Platform (EDP) is a comprehensive DevOps platform designed to enable developers to build, deploy, and operate cloud-native applications at the edge. It provides an integrated suite of tools and services covering the entire software development lifecycle.
{{< likec4-view view="application-transition" project="architecture" title="EDP Context View: Edge Developer Platform Components and User Interaction" >}}
The magenta **EDP** represents the developer platform: a shared, productized layer that enables modern DevOps by standardizing how applications are described, built, deployed, and observed. In the **inner loop**, developers iterate locally (fast feedback: code → run → test). EDP then connects that work to an **outer loop** where additional roles (review, test, operations, audit/compliance) contribute feedback and controls for production readiness.
In this modern DevOps setup, EDP acts as the hub: it synchronizes with local development and **deploys applications to target clouds** (for example, an EdgeConnect cloud), while providing the operational capabilities needed to run them safely. Agentic AI can support both loops—for example by assisting developers with implementation and testing in the inner loop, and by automating reviews, policy checks, release notes, and deployment verification (including drift detection and remediation) in the outer loop.
## Product Structure
EDP consists of multiple integrated components organized in layers:
### Core Platform Services
The foundation layer provides essential platform capabilities including source code management, CI/CD, and container orchestration.
For documentation, see: [Basic Platform Concepts](./deployment/basics/) and [Forgejo](./forgejo/)
### Developer Experience
Tools and services that developers interact with directly to build, test, and deploy applications.
For documentation, see: [Forgejo](./forgejo/) and [Deployment](./deployment/)
### Infrastructure & Operations
Infrastructure automation, observability, and operational tooling for platform management.
For documentation, see: [Operations](./operations/) and [Infrastructure as Code](./deployment/infrastructure/)
## Getting Started
EDP is available at https://edp.buildth.ing.
EDP includes a Forgejo instance that hosts both public and private repositories containing all EDP components.
To request access and get onboarded, start with the welcome repository:
- https://edp.buildth.ing/edp-team/welcome
Once you have access to the repositories, you can explore the EDP documentation according to the product structure above.

---
title: Deployment
linkTitle: Deployment
weight: 10
description: >
Platform-level component provisioning via Stacks - Orchestrating the platform infrastructure itself
---
## Overview
Platform Orchestration refers to the automation and management of the platform infrastructure itself. This includes the provisioning, configuration, and lifecycle management of all components that make up the Internal Developer Platform (IDP).
In the context of IPCEI-CIS, Platform Orchestration means:
- **Platform Bootstrap**: Initial setup of Kubernetes clusters and core services
- **Platform Services Management**: Deployment and management of ArgoCD, Forgejo, Keycloak, etc.
- **Infrastructure-as-Code**: Declarative management using Terraform and GitOps
- **Multi-Cluster Orchestration**: Coordination across different Kubernetes clusters
- **Platform Stacks**: Reusable bundles of platform components (CNOE concept)
### Target Audience
Platform Orchestration is primarily aimed at:
- **Platform Engineering Teams**: Teams that build and operate the IDP
- **Infrastructure Architects**: Those responsible for the platform architecture
- **SRE Teams**: Teams responsible for reliability and operations
## Key Features
### Declarative Platform Definition
The entire platform is defined declaratively as code:
- **GitOps-First**: Everything is versioned in Git and traceable
- **Reproducibility**: The platform can be rebuilt at any time
- **Environment Parity**: Consistency between Dev, Test, and Production
- **Auditability**: Complete history of all changes
### Self-Bootstrapping
The platform can bootstrap itself:
1. **Initial Bootstrap**: Minimal tool (like `idpbuilder`) starts the platform
2. **Self-Management**: After bootstrap, ArgoCD takes over management
3. **Continuous Reconciliation**: Platform is continuously reconciled with Git state
4. **Self-Healing**: Automatic recovery on deviations
### Stack-based Composition
Platform components are organized as reusable stacks (CNOE concept):
- **Modularity**: Components can be updated individually
- **Reusability**: Stacks can be used across different environments
- **Composability**: Compose complex platforms from simple building blocks
- **Versioning**: Stacks can be versioned and tested
**In IPCEI-CIS**: The stacks concept from CNOE is the core organizational principle for platform components.
### Multi-Cluster Support
Platform Orchestration supports different cluster topologies:
- **Control Plane + Worker Clusters**: Centralized control, distributed workloads
- **Hub-and-Spoke**: One management cluster manages multiple target clusters
- **Federation**: Coordination across multiple independent clusters
## Purpose in EDP
Platform Orchestration is the foundation of the IPCEI-CIS Edge Developer Platform. It enables:
### Foundation for Developer Self-Service
Platform Orchestration ensures all services are available that developers need for self-service:
- **GitOps Engine** (ArgoCD) for continuous deployment
- **Source Control** (Forgejo) for code and configuration management
- **Identity Management** (Keycloak) for authentication and authorization
- **Observability** (Grafana, Prometheus) for monitoring and logging
- **CI/CD** (Forgejo Actions/Pipelines) for automated build and test
### Consistency Across Environments
Through declarative definition, consistency is guaranteed:
- Development, test, and production environments are identically configured
- No "configuration drift" between environments
- Predictable behavior across all stages
### Platform as Code
The platform itself is treated like software:
- **Version Control**: All changes are versioned in Git
- **Code Review**: Platform changes go through review processes
- **Testing**: Platform configurations can be tested
- **Rollback**: Easy rollback on problems
### Reduced Operational Overhead
Automation reduces manual effort:
- No manual installation steps
- Automatic updates and patching
- Self-healing on failures
- Standardized deployment processes
## Repository
**CNOE Reference Implementation**: [cnoe-io/stacks](https://github.com/cnoe-io/stacks)
**CNOE idpbuilder**: [cnoe-io/idpbuilder](https://github.com/cnoe-io/idpbuilder)
**Documentation**: [CNOE.io Documentation](https://cnoe.io/docs/)
## Getting Started
### Prerequisites
- **Docker**: For local Kubernetes clusters (Kind)
- **kubectl**: Kubernetes CLI tool
- **Git**: For repository management
- **idpbuilder**: CNOE bootstrap tool
### Quick Start
Platform Orchestration with CNOE Reference Implementation:
```bash
# 1. Install idpbuilder
curl -fsSL https://cnoe.io/install.sh | bash
# 2. Bootstrap platform
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation
# 3. Wait for the platform to become ready (approx. 10 minutes)
kubectl get applications -A
```
### Verification
Verify the platform is running correctly:
```bash
# Get platform secrets (credentials)
idpbuilder get secrets
# Check all ArgoCD applications
kubectl get applications -n argocd
# Expected: All applications "Synced" and "Healthy"
```
Access URLs (with path-routing):
- **ArgoCD**: `https://cnoe.localtest.me:8443/argocd`
- **Forgejo**: `https://cnoe.localtest.me:8443/gitea`
- **Keycloak**: `https://cnoe.localtest.me:8443/keycloak`
## Usage Examples
### Use Case 1: Platform Bootstrap
Initial bootstrapping of a new platform instance:
```bash
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation \
--log-level debug
# Workflow:
# 1. Creates Kind cluster
# 2. Installs ingress-nginx
# 3. Clones and installs ArgoCD
# 4. Installs Forgejo
# 5. Waits for core services
# 6. Creates technical users
# 7. Configures Git repositories
# 8. Installs remaining stacks via ArgoCD
```
After approximately 10 minutes, the platform is fully deployed.
### Use Case 2: Adding New Platform Components
Add new platform components via ArgoCD:
```bash
# Create ArgoCD Application for new component
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-secrets
namespace: argocd
spec:
project: default
source:
repoURL: https://charts.external-secrets.io
targetRevision: 0.9.9
chart: external-secrets
destination:
server: https://kubernetes.default.svc
namespace: external-secrets-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
EOF
```
### Use Case 3: Platform Updates
Update platform components:
```bash
# 1. Update via Git (GitOps)
cd your-platform-config-repo
git pull
# 2. Update stack version
vim argocd/applications/component.yaml
# Change targetRevision to new version
# 3. Commit and push
git add .
git commit -m "Update component to v1.2.3"
git push
# 4. ArgoCD will automatically sync
# 5. Monitor the update
argocd app sync component --watch
```
## Integration Points
### ArgoCD Integration
- **Bootstrap**: ArgoCD is initially installed via idpbuilder
- **Self-Management**: After bootstrap, ArgoCD manages itself via Application CRD
- **Platform Coordination**: ArgoCD orchestrates all other platform components
- **Health Monitoring**: ArgoCD monitors health status of all platform services
### Forgejo Integration
- **Source of Truth**: Git repositories contain all platform definitions
- **GitOps Workflow**: Changes in Git trigger platform updates
- **Backup**: Git serves as backup of platform configuration
- **Audit Trail**: Git history documents all platform changes
- **CI/CD**: Forgejo Actions can automate platform operations
### Terraform Integration
- **Infrastructure Provisioning**: Terraform provisions cloud resources for platform
- **State Management**: Terraform state tracks infrastructure
- **Integration**: Terraform can be triggered via Forgejo pipelines (see the sketch below)
- **Multi-Cloud**: Support for multiple cloud providers
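To illustrate the pipeline-triggered flow, a minimal Forgejo Actions workflow that plans and applies Terraform could look like the following sketch. The file name, directory layout, and the assumption that a `terraform` binary and provider credentials are available on the runner are placeholders, not the actual IPCEI-CIS setup:
```yaml
# .forgejo/workflows/infra.yaml (hypothetical)
name: Provision Infrastructure
on:
  push:
    branches: [ main ]
    paths: [ 'terraform/**' ]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Assumes the runner ships a terraform binary and that provider
      # credentials are injected via repository secrets
      - name: Plan
        run: terraform -chdir=terraform init && terraform -chdir=terraform plan -input=false
      - name: Apply
        run: terraform -chdir=terraform apply -input=false -auto-approve
```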
## Architecture
### Platform Orchestration Flow
```text
┌─────────────────┐
│ idpbuilder │ Bootstrap Tool
│ (Initial Run) │
└────────┬────────┘
┌─────────────────────────────────────────────────────┐
│ Kubernetes Cluster │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ ArgoCD │────────▶│ Forgejo │ │
│ │ (GitOps) │ │ (Git Repo) │ │
│ └──────┬───────┘ └──────────────┘ │
│ │ │
│ │ Monitors & Syncs │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────┐ │
│ │ Platform Stacks │ │
│ │ │ │
│ │ ┌──────────┐ ┌──────────┐ │ │
│ │ │Forgejo │ │Keycloak │ │ │
│ │ └──────────┘ └──────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ │ │
│ │ │Observ- │ │Ingress │ │ │
│ │ │ability │ │ │ │ │
│ │ └──────────┘ └──────────┘ │ │
│ └──────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
```
### Platform Bootstrap Sequence
The idpbuilder executes the following workflow:
1. Create Kind Kubernetes cluster
2. Install ingress-nginx controller
3. Install ArgoCD
4. Install Forgejo Git server
5. Wait for services to be ready
6. Create technical users in Forgejo
7. Create repository for platform state in Forgejo
8. Push platform stacks to Forgejo
9. Create ArgoCD Applications for all stacks
10. ArgoCD takes over continuous synchronization
### Deployment Architecture
The platform is deployed in different namespaces (a quick verification sketch follows the list):
- `argocd`: ArgoCD and its components
- `gitea`: Forgejo Git server
- `keycloak`: Identity and access management
- `observability`: Prometheus, Grafana, etc.
- `ingress-nginx`: Ingress controller
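One simple way to verify the deployment at a glance is to list the pods in each of these namespaces:
```bash
# Quick health check across the platform namespaces
for ns in argocd gitea keycloak observability ingress-nginx; do
  echo "--- $ns ---"
  kubectl get pods -n "$ns"
done
```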
## Configuration
### idpbuilder Configuration
Key configuration options for idpbuilder:
```bash
# Path-based routing (recommended for local development)
idpbuilder create --use-path-routing
# Custom package directory
idpbuilder create --package-dir /path/to/custom/packages
# Custom Kind cluster config
idpbuilder create --kind-config custom-kind.yaml
# Enable debug logging
idpbuilder create --log-level debug
```
### ArgoCD Configuration
Important ArgoCD configurations for platform orchestration:
```yaml
# argocd-cm ConfigMap
data:
# Enable automatic sync
application.instanceLabelKey: argocd.argoproj.io/instance
# Repository credentials
repositories: |
- url: https://github.com/cnoe-io/stacks
name: cnoe-stacks
type: git
# Resource exclusions
resource.exclusions: |
- apiGroups:
- cilium.io
kinds:
- CiliumIdentity
```
### Platform Stack Configuration
Configuration of platform stacks via Kustomize:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-system
resources:
- argocd-app.yaml
- forgejo-app.yaml
- keycloak-app.yaml
patches:
- target:
kind: Application
patch: |-
- op: add
path: /spec/syncPolicy
value:
automated:
prune: true
selfHeal: true
```
## Troubleshooting
### Platform not reachable
**Problem**: After `idpbuilder create`, platform services are not reachable
**Solution**:
```bash
# 1. Check if all pods are running
kubectl get pods -A
# 2. Check ArgoCD application status
kubectl get applications -n argocd
# 3. Check ingress
kubectl get ingress -A
# 4. Verify DNS resolution
nslookup cnoe.localtest.me
# 5. Check idpbuilder logs
idpbuilder get logs
```
### ArgoCD Applications not synchronized
**Problem**: ArgoCD Applications show status "OutOfSync"
**Solution**:
```bash
# 1. Check application details
argocd app get <app-name>
# 2. View sync status
argocd app sync <app-name> --dry-run
# 3. Force sync
argocd app sync <app-name> --force
# 4. Check for errors in ArgoCD logs
kubectl logs -n argocd deployment/argocd-application-controller
```
### Git Repository Connection Issues
**Problem**: ArgoCD cannot access Git repository
**Solution**:
```bash
# 1. Verify repository configuration
argocd repo list
# 2. Test connection
argocd repo get https://your-git-repo
# 3. Check credentials
kubectl get secret -n argocd
# 4. Re-add repository with correct credentials
argocd repo add https://your-git-repo \
--username <user> \
--password <token>
```
## Platform Orchestration Best Practices
Based on experience and [CNCF Guidelines](https://tag-app-delivery.cncf.io/whitepapers/platforms/):
1. **Start Simple**: Begin with the CNOE reference stack, extend gradually
2. **Automate Everything**: Manual platform changes are an anti-pattern
3. **Monitor Continuously**: Use observability tools for platform health
4. **Document Well**: Platform documentation is essential for adoption
5. **Version Everything**: All platform components should be versioned
6. **Test Changes**: Platform updates should be tested in non-prod
7. **Plan for Disaster**: Backup and disaster recovery strategies are important
8. **Use Stacks**: Organize platform components as reusable stacks
## Status
**Maturity**: Production (for CNOE Reference Implementation)
**Stability**: Stable
**Support**: Community Support via CNOE Community
## Additional Resources
### CNOE Resources
- [CNOE Official Website](https://cnoe.io/)
- [CNOE GitHub Organization](https://github.com/cnoe-io)
- [CNOE Reference Implementation](https://github.com/cnoe-io/stacks)
- [CNOE Community Slack](https://cloud-native.slack.com/archives/C05TN9WFN5S)
### Platform Engineering
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
- [Platform Engineering Maturity Model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)
- [Team Topologies](https://teamtopologies.com/) - Organizational patterns
### GitOps
- [GitOps Working Group](https://opengitops.dev/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [GitOps Principles](https://opengitops.dev/)
### CNOE Stacks
- [Understanding CNOE Stacks](https://cnoe.io/docs/reference-implementation/stacks/)
- [Creating Custom Stacks](https://cnoe.io/docs/reference-implementation/customization/)

View file

@ -1,479 +0,0 @@
---
title: Basic Concepts
linkTitle: Basic Concepts
weight: 1
description: >
Platform-level component provisioning via Stacks - Orchestrating the platform infrastructure itself
---
## Overview
Platform Orchestration refers to the automation and management of the platform infrastructure itself. This includes the provisioning, configuration, and lifecycle management of all components that make up the Internal Developer Platform (IDP).
In the context of IPCEI-CIS, Platform Orchestration means:
- **Platform Bootstrap**: Initial setup of Kubernetes clusters and core services
- **Platform Services Management**: Deployment and management of ArgoCD, Forgejo, Keycloak, etc.
- **Infrastructure-as-Code**: Declarative management using Terraform and GitOps
- **Multi-Cluster Orchestration**: Coordination across different Kubernetes clusters
- **Platform Stacks**: Reusable bundles of platform components (CNOE concept)
### Target Audience
Platform Orchestration is primarily aimed at:
- **Platform Engineering Teams**: Teams that build and operate the IDP
- **Infrastructure Architects**: Those responsible for the platform architecture
- **SRE Teams**: Teams responsible for reliability and operations
## Key Features
### Declarative Platform Definition
The entire platform is defined declaratively as code:
- **GitOps-First**: Everything is versioned in Git and traceable
- **Reproducibility**: The platform can be rebuilt at any time
- **Environment Parity**: Consistency between Dev, Test, and Production
- **Auditability**: Complete history of all changes
### Self-Bootstrapping
The platform can bootstrap itself:
1. **Initial Bootstrap**: Minimal tool (like `idpbuilder`) starts the platform
2. **Self-Management**: After bootstrap, ArgoCD takes over management (see the sketch after this list)
3. **Continuous Reconciliation**: Platform is continuously reconciled with Git state
4. **Self-Healing**: Automatic recovery on deviations
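As a sketch of the self-management step, an Application manifest with which ArgoCD manages its own installation could look like this (repository URL and path are placeholders, not the actual IPCEI-CIS values):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://forgejo.example.com/platform/stacks  # placeholder
    targetRevision: main
    path: core/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual changes
```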
### Stack-based Composition
Platform components are organized as reusable stacks (CNOE concept):
- **Modularity**: Components can be updated individually
- **Reusability**: Stacks can be used across different environments
- **Composability**: Compose complex platforms from simple building blocks
- **Versioning**: Stacks can be versioned and tested
**In IPCEI-CIS**: The stacks concept from CNOE is the core organizational principle for platform components.
### Multi-Cluster Support
Platform Orchestration supports different cluster topologies:
- **Control Plane + Worker Clusters**: Centralized control, distributed workloads
- **Hub-and-Spoke**: One management cluster manages multiple target clusters (a registration sketch follows the list)
- **Federation**: Coordination across multiple independent clusters
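For the hub-and-spoke case, registering an additional target cluster with the ArgoCD CLI could look like this (the kubeconfig context and cluster name are placeholders):
```bash
# Register a workload cluster with the management (hub) ArgoCD
argocd cluster add workload-cluster-context --name edge-site-1
# List all clusters known to ArgoCD
argocd cluster list
```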
## Purpose in EDP
Platform Orchestration is the foundation of the IPCEI-CIS Edge Developer Platform. It enables:
### Foundation for Developer Self-Service
Platform Orchestration ensures all services are available that developers need for self-service:
- **GitOps Engine** (ArgoCD) for continuous deployment
- **Source Control** (Forgejo) for code and configuration management
- **Identity Management** (Keycloak) for authentication and authorization
- **Observability** (Grafana, Prometheus) for monitoring and logging
- **CI/CD** (Forgejo Actions/Pipelines) for automated build and test
### Consistency Across Environments
Through declarative definition, consistency is guaranteed:
- Development, test, and production environments are identically configured
- No "configuration drift" between environments
- Predictable behavior across all stages
### Platform as Code
The platform itself is treated like software:
- **Version Control**: All changes are versioned in Git
- **Code Review**: Platform changes go through review processes
- **Testing**: Platform configurations can be tested
- **Rollback**: Easy rollback when problems occur
### Reduced Operational Overhead
Automation reduces manual effort:
- No manual installation steps
- Automatic updates and patching
- Self-healing on failures
- Standardized deployment processes
## Repository
**CNOE Reference Implementation**: [cnoe-io/stacks](https://github.com/cnoe-io/stacks)
**CNOE idpbuilder**: [cnoe-io/idpbuilder](https://github.com/cnoe-io/idpbuilder)
**Documentation**: [CNOE.io Documentation](https://cnoe.io/docs/)
## Getting Started
### Prerequisites
- **Docker**: For local Kubernetes clusters (Kind)
- **kubectl**: Kubernetes CLI tool
- **Git**: For repository management
- **idpbuilder**: CNOE bootstrap tool
### Quick Start
Platform Orchestration with CNOE Reference Implementation:
```bash
# 1. Install idpbuilder
curl -fsSL https://cnoe.io/install.sh | bash
# 2. Bootstrap platform
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation
# 3. Wait until the platform is ready (approx. 10 minutes)
kubectl get applications -A
```
### Verification
Verify the platform is running correctly:
```bash
# Get platform secrets (credentials)
idpbuilder get secrets
# Check all ArgoCD applications
kubectl get applications -n argocd
# Expected: All applications "Synced" and "Healthy"
```
Access URLs (with path-routing):
- **ArgoCD**: `https://cnoe.localtest.me:8443/argocd`
- **Forgejo**: `https://cnoe.localtest.me:8443/gitea`
- **Keycloak**: `https://cnoe.localtest.me:8443/keycloak`
## Usage Examples
### Use Case 1: Platform Bootstrap
Initial bootstrapping of a new platform instance:
```bash
idpbuilder create \
--use-path-routing \
--package-dir https://github.com/cnoe-io/stacks//ref-implementation \
--log-level debug
# Workflow:
# 1. Creates Kind cluster
# 2. Installs ingress-nginx
# 3. Clones and installs ArgoCD
# 4. Installs Forgejo
# 5. Waits for core services
# 6. Creates technical users
# 7. Configures Git repositories
# 8. Installs remaining stacks via ArgoCD
```
After approximately 10 minutes, the platform is fully deployed.
### Use Case 2: Adding New Platform Components
Add new platform components via ArgoCD:
```bash
# Create ArgoCD Application for new component
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: external-secrets
namespace: argocd
spec:
project: default
source:
repoURL: https://charts.external-secrets.io
targetRevision: 0.9.9
chart: external-secrets
destination:
server: https://kubernetes.default.svc
namespace: external-secrets-system
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
EOF
```
### Use Case 3: Platform Updates
Update platform components:
```bash
# 1. Update via Git (GitOps)
cd your-platform-config-repo
git pull
# 2. Update stack version
vim argocd/applications/component.yaml
# Change targetRevision to new version
# 3. Commit and push
git add .
git commit -m "Update component to v1.2.3"
git push
# 4. ArgoCD will automatically sync
# 5. Monitor the update
argocd app sync component --watch
```
## Integration Points
### ArgoCD Integration
- **Bootstrap**: ArgoCD is initially installed via idpbuilder
- **Self-Management**: After bootstrap, ArgoCD manages itself via Application CRD
- **Platform Coordination**: ArgoCD orchestrates all other platform components
- **Health Monitoring**: ArgoCD monitors health status of all platform services
### Forgejo Integration
- **Source of Truth**: Git repositories contain all platform definitions
- **GitOps Workflow**: Changes in Git trigger platform updates
- **Backup**: Git serves as backup of platform configuration
- **Audit Trail**: Git history documents all platform changes
- **CI/CD**: Forgejo Actions can automate platform operations
### Terraform Integration
- **Infrastructure Provisioning**: Terraform provisions cloud resources for platform
- **State Management**: Terraform state tracks infrastructure
- **Integration**: Terraform can be triggered via Forgejo pipelines
- **Multi-Cloud**: Support for multiple cloud providers
## Architecture
### Platform Orchestration Flow
{{< likec4-view view="platform_orchestration_flow" title="Platform Orchestration Flow" >}}
### Platform Bootstrap Sequence
The idpbuilder executes the following workflow:
1. Create Kind Kubernetes cluster
2. Install ingress-nginx controller
3. Install ArgoCD
4. Install Forgejo Git server
5. Wait for services to be ready
6. Create technical users in Forgejo
7. Create repository for platform state in Forgejo
8. Push platform stacks to Forgejo
9. Create ArgoCD Applications for all stacks
10. ArgoCD takes over continuous synchronization
### Deployment Architecture
The platform is deployed in different namespaces:
- `argocd`: ArgoCD and its components
- `gitea`: Forgejo Git server
- `keycloak`: Identity and access management
- `observability`: Prometheus, Grafana, etc.
- `ingress-nginx`: Ingress controller
## Configuration
### idpbuilder Configuration
Key configuration options for idpbuilder:
```bash
# Path-based routing (recommended for local development)
idpbuilder create --use-path-routing
# Custom package directory
idpbuilder create --package-dir /path/to/custom/packages
# Custom Kind cluster config
idpbuilder create --kind-config custom-kind.yaml
# Enable debug logging
idpbuilder create --log-level debug
```
### ArgoCD Configuration
Important ArgoCD configurations for platform orchestration:
```yaml
# argocd-cm ConfigMap
data:
# Enable automatic sync
application.instanceLabelKey: argocd.argoproj.io/instance
# Repository credentials
repositories: |
- url: https://github.com/cnoe-io/stacks
name: cnoe-stacks
type: git
# Resource exclusions
resource.exclusions: |
- apiGroups:
- cilium.io
kinds:
- CiliumIdentity
```
### Platform Stack Configuration
Configuration of platform stacks via Kustomize:
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-system
resources:
- argocd-app.yaml
- forgejo-app.yaml
- keycloak-app.yaml
patches:
- target:
kind: Application
patch: |-
- op: add
path: /spec/syncPolicy
value:
automated:
prune: true
selfHeal: true
```
## Troubleshooting
### Platform not reachable
**Problem**: After `idpbuilder create`, platform services are not reachable
**Solution**:
```bash
# 1. Check if all pods are running
kubectl get pods -A
# 2. Check ArgoCD application status
kubectl get applications -n argocd
# 3. Check ingress
kubectl get ingress -A
# 4. Verify DNS resolution
nslookup cnoe.localtest.me
# 5. Check idpbuilder logs
idpbuilder get logs
```
### ArgoCD Applications not synchronized
**Problem**: ArgoCD Applications show status "OutOfSync"
**Solution**:
```bash
# 1. Check application details
argocd app get <app-name>
# 2. View sync status
argocd app sync <app-name> --dry-run
# 3. Force sync
argocd app sync <app-name> --force
# 4. Check for errors in ArgoCD logs
kubectl logs -n argocd deployment/argocd-application-controller
```
### Git Repository Connection Issues
**Problem**: ArgoCD cannot access Git repository
**Solution**:
```bash
# 1. Verify repository configuration
argocd repo list
# 2. Test connection
argocd repo get https://your-git-repo
# 3. Check credentials
kubectl get secret -n argocd
# 4. Re-add repository with correct credentials
argocd repo add https://your-git-repo \
--username <user> \
--password <token>
```
## Platform Orchestration Best Practices
Based on experience and [CNCF Guidelines](https://tag-app-delivery.cncf.io/whitepapers/platforms/):
1. **Start Simple**: Begin with the CNOE reference stack, extend gradually
2. **Automate Everything**: Manual platform changes are an anti-pattern
3. **Monitor Continuously**: Use observability tools for platform health
4. **Document Well**: Platform documentation is essential for adoption
5. **Version Everything**: All platform components should be versioned
6. **Test Changes**: Platform updates should be tested in non-prod
7. **Plan for Disaster**: Backup and disaster recovery strategies are important
8. **Use Stacks**: Organize platform components as reusable stacks
## Status
**Maturity**: Production (for CNOE Reference Implementation)
**Stability**: Stable
**Support**: Community Support via CNOE Community
## Additional Resources
### CNOE Resources
- [CNOE Official Website](https://cnoe.io/)
- [CNOE GitHub Organization](https://github.com/cnoe-io)
- [CNOE Reference Implementation](https://github.com/cnoe-io/stacks)
- [CNOE Community Slack](https://cloud-native.slack.com/archives/C05TN9WFN5S)
### Platform Engineering
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
- [Platform Engineering Maturity Model](https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/)
- [Team Topologies](https://teamtopologies.com/) - Organizational patterns
### GitOps
- [GitOps Working Group](https://opengitops.dev/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [GitOps Principles](https://opengitops.dev/)
### CNOE Stacks
- [Understanding CNOE Stacks](https://cnoe.io/docs/reference-implementation/stacks/)
- [Creating Custom Stacks](https://cnoe.io/docs/reference-implementation/customization/)

View file

@ -1,776 +0,0 @@
---
title: "Application Orchestration"
linkTitle: "Application Orchestration"
weight: 30
description: >
Application deployment via CI/CD pipelines and GitOps - Orchestrating application deployments
---
## Overview
Application Orchestration deals with the automation of application deployment and lifecycle management. It encompasses the entire workflow from source code to running application in production.
In the context of IPCEI-CIS, Application Orchestration includes:
- **CI/CD Pipelines**: Automated build, test, and deployment pipelines
- **GitOps Deployment**: Declarative application deployment via ArgoCD
- **Progressive Delivery**: Canary deployments, blue-green deployments
- **Application Configuration**: Environment-specific configuration management
- **Golden Paths**: Standardized deployment templates and workflows
### Target Audience
Application Orchestration is primarily for:
- **Application Developers**: Teams developing and deploying applications
- **DevOps Teams**: Teams responsible for deployment automation
- **Product Teams**: Teams responsible for application lifecycle
## Key Features
### Automated CI/CD Pipelines
Forgejo Actions provides GitHub Actions-compatible CI/CD:
- **Build Automation**: Automatic building of container images
- **Test Automation**: Automated unit, integration, and E2E tests
- **Security Scanning**: Vulnerability scanning of dependencies and images
- **Artifact Publishing**: Publishing to container registries
- **Deployment Triggering**: Automatic deployment after successful build
### GitOps-based Deployment
ArgoCD enables declarative application deployment:
- **Declarative Configuration**: Applications defined as Kubernetes manifests
- **Automated Sync**: Automatic synchronization between Git and cluster
- **Rollback Capability**: Easy rollback to previous versions
- **Multi-Environment**: Consistent deployment across Dev/Test/Prod
- **Health Monitoring**: Continuous monitoring of application health
### Progressive Delivery
Support for advanced deployment strategies:
- **Canary Deployments**: Gradual rollout to subset of users
- **Blue-Green Deployments**: Zero-downtime deployments with instant rollback
- **A/B Testing**: Traffic splitting for feature testing
- **Feature Flags**: Dynamic feature enablement without deployment
### Configuration Management
Flexible configuration for different environments:
- **Environment Variables**: Configuration via environment variables
- **ConfigMaps**: Kubernetes-native configuration
- **Secrets Management**: Secure handling of sensitive data
- **External Secrets**: Integration with external secret stores such as Vault (sketched below)
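As a sketch of the External Secrets integration, an `ExternalSecret` resource that materializes a database password from Vault into a Kubernetes Secret could look like this (store name and secret path are placeholders):
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-application-db
  namespace: production
spec:
  refreshInterval: 1h           # re-read the backend periodically
  secretStoreRef:
    name: vault-backend         # placeholder ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: my-application-db     # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/my-application/db  # placeholder Vault path
        property: password
```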
## Purpose in EDP
Application Orchestration is the core of developer experience in IPCEI-CIS Edge Developer Platform.
### Developer Self-Service
Developers can deploy applications independently:
- **Self-Service Deployment**: No dependency on operations team
- **Standardized Workflows**: Clear, documented deployment processes
- **Fast Feedback**: Quick feedback through automated pipelines
- **Environment Parity**: Consistent behavior across all environments
### Quality and Security
Automated checks ensure quality and security:
- **Automated Testing**: All changes are automatically tested
- **Security Scans**: Vulnerability scanning of dependencies and images
- **Policy Enforcement**: Automated policy checks (OPA, Kyverno)
- **Compliance**: Auditability of all deployments
### Efficiency and Productivity
Automation increases team efficiency:
- **Faster Time-to-Market**: Faster deployment of new features
- **Reduced Manual Work**: Automation of repetitive tasks
- **Fewer Errors**: Fewer manual mistakes through automation
- **Better Collaboration**: Clear interfaces between Dev and Ops
## Repository
**Forgejo**: [forgejo.org](https://forgejo.org/)
**Forgejo Actions**: [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)
**ArgoCD**: [argoproj.github.io/cd](https://argoproj.github.io/cd/)
## Getting Started
### Prerequisites
- **Forgejo Account**: Access to Forgejo instance
- **Kubernetes Cluster**: Target cluster for deployments
- **ArgoCD Access**: Access to ArgoCD instance
- **Git**: For repository management
### Quick Start: Application Deployment
1. **Create Application Repository**
```bash
# Create new repository in Forgejo
git init my-application
cd my-application
# Add application code and Dockerfile
cat > Dockerfile <<EOF
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
```
2. **Add CI/CD Pipeline**
Create `.forgejo/workflows/build.yaml`:
```yaml
name: Build and Push
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Registry
uses: docker/login-action@v2
with:
registry: registry.example.com
username: ${{ secrets.REGISTRY_USER }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
push: ${{ github.event_name == 'push' }}
tags: registry.example.com/my-app:${{ github.sha }}
```
3. **Create Kubernetes Manifests**
Create `k8s/deployment.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-application
spec:
replicas: 3
selector:
matchLabels:
app: my-application
template:
metadata:
labels:
app: my-application
spec:
containers:
- name: app
image: registry.example.com/my-app:latest
ports:
- containerPort: 3000
env:
- name: NODE_ENV
value: "production"
---
apiVersion: v1
kind: Service
metadata:
name: my-application
spec:
selector:
app: my-application
ports:
- port: 80
targetPort: 3000
```
4. **Configure ArgoCD Application**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-application
namespace: argocd
spec:
project: default
source:
repoURL: https://forgejo.example.com/myteam/my-application
targetRevision: main
path: k8s
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
```
5. **Deploy**
```bash
# Commit and push
git add .
git commit -m "Add application and deployment configuration"
git push origin main
# ArgoCD will automatically deploy the application
argocd app sync my-application --watch
```
## Usage Examples
### Use Case 1: Multi-Environment Deployment
Deploy application to multiple environments:
**Repository Structure:**
```text
my-application/
├── .forgejo/
│ └── workflows/
│ └── build.yaml
├── base/
│ ├── deployment.yaml
│ ├── service.yaml
│ └── kustomization.yaml
├── overlays/
│ ├── dev/
│ │ ├── kustomization.yaml
│ │ └── patches.yaml
│ ├── staging/
│ │ ├── kustomization.yaml
│ │ └── patches.yaml
│ └── production/
│ ├── kustomization.yaml
│ └── patches.yaml
```
**Kustomize Base** (`base/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
commonLabels:
app: my-application
```
**Environment Overlay** (`overlays/production/kustomization.yaml`):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
namespace: production
replicas:
- name: my-application
count: 5
images:
- name: my-app
newTag: v1.2.3
patches:
- path: patches.yaml
```
**ArgoCD Applications for each environment:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-application-prod
namespace: argocd
spec:
project: default
source:
repoURL: https://forgejo.example.com/myteam/my-application
targetRevision: main
path: overlays/production
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
```
### Use Case 2: Canary Deployment
Progressive rollout with canary strategy:
**Argo Rollouts Canary:**
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: my-application
spec:
replicas: 10
strategy:
canary:
steps:
- setWeight: 10
- pause: {duration: 5m}
- setWeight: 30
- pause: {duration: 5m}
- setWeight: 60
- pause: {duration: 5m}
- setWeight: 100
selector:
matchLabels:
app: my-application
template:
metadata:
labels:
app: my-application
spec:
containers:
- name: app
image: registry.example.com/my-app:v2.0.0
```
### Use Case 3: Feature Flags
Dynamic feature control without deployment:
**Application Code with Feature Flag:**
```javascript
const { initialize } = require('unleash-client');
const unleash = initialize({
  url: 'http://unleash.platform/api/',
  appName: 'my-application',
  customHeaders: {
    Authorization: process.env.UNLEASH_API_TOKEN
  }
});
// Use feature flag
if (unleash.isEnabled('new-checkout-flow')) {
// New checkout implementation
renderNewCheckout();
} else {
// Old checkout implementation
renderOldCheckout();
}
```
## Integration Points
### Forgejo Integration
Forgejo serves as central source code management and CI/CD platform:
- **Source Control**: Git repositories for application code
- **CI/CD Pipelines**: Forgejo Actions for automated builds and tests
- **Container Registry**: Built-in container registry for images
- **Webhook Integration**: Triggers for external systems
- **Pull Request Workflows**: Code review and approval processes
### ArgoCD Integration
ArgoCD handles declarative application deployment:
- **GitOps Sync**: Continuous synchronization with Git state
- **Health Monitoring**: Application health status monitoring
- **Rollback Support**: Easy rollback to previous versions
- **Multi-Cluster**: Deployment to multiple clusters
- **UI and CLI**: Web interface and command-line access
### Observability Integration
Integration with monitoring and logging:
- **Metrics**: Prometheus metrics from applications (see the sketch below)
- **Logs**: Centralized log collection via Loki/ELK
- **Tracing**: Distributed tracing with Jaeger/Tempo
- **Alerting**: Alert rules for application issues
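To give an idea of the metrics side, a `ServiceMonitor` that lets a Prometheus Operator scrape the example application could look like this (the `release` label must match the operator's selector and is an assumption, as is the named Service port):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-application
  namespace: production
  labels:
    release: prometheus   # assumed operator selector
spec:
  selector:
    matchLabels:
      app: my-application
  endpoints:
    - port: http          # named port on the Service (assumption)
      path: /metrics
      interval: 30s
```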
## Architecture
### Application Deployment Flow
{{< likec4-view view="application_deployment_flow" title="Application Deployment Flow" >}}
### CI/CD Pipeline Architecture
Typical Forgejo Actions pipeline stages:
1. **Checkout**: Clone source code
2. **Build**: Compile application and dependencies
3. **Test**: Run unit and integration tests
4. **Security Scan**: Scan dependencies and code for vulnerabilities
5. **Build Image**: Create container image
6. **Push Image**: Push to container registry
7. **Update Manifests**: Update Kubernetes manifests with the new image tag (sketched below)
8. **Notify**: Send notifications on success/failure
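A possible implementation of stage 7 is a pipeline step that bumps the image tag in the GitOps repository via `kustomize edit` (repository URL and paths are placeholders; push credentials are assumed to be configured on the runner):
```yaml
- name: Update deployment manifest
  run: |
    git clone https://forgejo.example.com/myteam/my-application-deploy.git
    cd my-application-deploy/overlays/production
    kustomize edit set image my-app=registry.example.com/my-app:${{ github.sha }}
    git -c user.name=ci -c user.email=ci@example.com commit -am "Deploy ${{ github.sha }}"
    git push
```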
## Configuration
### Forgejo Actions Configuration
Example for Node.js application:
```yaml
name: CI/CD Pipeline
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
env:
REGISTRY: registry.example.com
IMAGE_NAME: ${{ github.repository }}
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests
run: npm test
- name: Run linter
run: npm run lint
security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
scan-type: 'fs'
scan-ref: '.'
format: 'sarif'
output: 'trivy-results.sarif'
build-and-push:
needs: [test, security]
runs-on: ubuntu-latest
if: github.event_name == 'push'
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Registry
uses: docker/login-action@v2
with:
registry: ${{ env.REGISTRY }}
username: ${{ secrets.REGISTRY_USER }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=sha,prefix={{branch}}-
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
cache-from: type=gha
cache-to: type=gha,mode=max
```
### ArgoCD Application Configuration
Complete configuration example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-application
namespace: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://forgejo.example.com/myteam/my-application
targetRevision: main
path: k8s/overlays/production
# Kustomize options
kustomize:
version: v5.0.0
images:
- my-app=registry.example.com/my-app:v1.2.3
destination:
server: https://kubernetes.default.svc
namespace: production
# Sync policy
syncPolicy:
automated:
prune: true # Delete resources not in Git
selfHeal: true # Override manual changes
allowEmpty: false # Don't delete everything on empty repo
syncOptions:
- CreateNamespace=true
- PruneLast=true
- RespectIgnoreDifferences=true
retry:
limit: 5
backoff:
duration: 5s
factor: 2
maxDuration: 3m
# Ignore differences (avoid sync loops)
ignoreDifferences:
- group: apps
kind: Deployment
jsonPointers:
- /spec/replicas # Ignore if HPA manages replicas
```
## Troubleshooting
### Pipeline Fails
**Problem**: Forgejo Actions pipeline fails
**Solution**:
```bash
# 1. Check pipeline logs in Forgejo UI
# Navigate to: Repository → Actions → Select failed run
# 2. Check runner status
# In Forgejo: Site Admin → Actions → Runners
# 3. Check runner logs
kubectl logs -n forgejo-runner deployment/act-runner
# 4. Test pipeline locally with act
act -l # List available jobs
act -j build # Run specific job
```
### ArgoCD Application OutOfSync
**Problem**: Application shows "OutOfSync" status
**Solution**:
```bash
# 1. Check differences
argocd app diff my-application
# 2. View sync status details
argocd app get my-application
# 3. Manual sync
argocd app sync my-application
# 4. Hard refresh (ignore cache)
argocd app sync my-application --force
# 5. Check for ignored differences
argocd app get my-application --show-operation
```
### Application Deployment Fails
**Problem**: Application pod crashes after deployment
**Solution**:
```bash
# 1. Check pod status
kubectl get pods -n production
# 2. View pod logs
kubectl logs -n production deployment/my-application
# 3. Describe pod for events
kubectl describe pod -n production <pod-name>
# 4. Check resource limits
kubectl top pod -n production
# 5. Rollback via ArgoCD
argocd app rollback my-application
```
### Image Pull Errors
**Problem**: Kubernetes cannot pull container image
**Solution**:
```bash
# 1. Verify image exists
docker pull registry.example.com/my-app:v1.2.3
# 2. Check image pull secret
kubectl get secret -n production regcred
# 3. Create image pull secret if missing
kubectl create secret docker-registry regcred \
--docker-server=registry.example.com \
--docker-username=user \
--docker-password=password \
-n production
# 4. Reference secret in deployment
kubectl patch deployment my-application -n production \
-p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
```
## Best Practices
### Golden Path Templates
Provide standardized templates for common use cases:
1. **Web Application Template**: Node.js, Python, Go web services
2. **API Service Template**: RESTful API with OpenAPI
3. **Batch Job Template**: Kubernetes CronJob configurations
4. **Microservice Template**: Service mesh integration
Example repository template structure:
```text
application-template/
├── .forgejo/
│ └── workflows/
│ ├── build.yaml
│ ├── test.yaml
│ └── deploy.yaml
├── k8s/
│ ├── base/
│ └── overlays/
├── src/
│ └── ...
├── Dockerfile
├── README.md
└── .gitignore
```
### Deployment Checklist
Before deploying to production:
- ✅ All tests passing
- ✅ Security scans completed
- ✅ Resource limits defined
- ✅ Health checks configured
- ✅ Monitoring and alerts set up
- ✅ Backup strategy defined
- ✅ Rollback plan documented
- ✅ Team notified about deployment
### Configuration Management
- Use ConfigMaps for non-sensitive configuration
- Use Secrets for sensitive data
- Use External Secrets Operator for vault integration
- Never commit secrets to Git
- Use environment-specific overlays (Kustomize)
- Document all configuration options
## Status
**Maturity**: Production
**Stability**: Stable
**Support**: Internal Platform Team
## Additional Resources
### Forgejo
- [Forgejo Documentation](https://forgejo.org/docs/latest/)
- [Forgejo Actions Guide](https://forgejo.org/docs/latest/user/actions/)
- [Forgejo API Reference](https://forgejo.org/docs/latest/api/)
### ArgoCD
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
- [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
- [ArgoCD Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/)
### GitOps
- [GitOps Principles](https://opengitops.dev/)
- [GitOps Patterns](https://www.gitops.tech/)
- [Kubernetes Deployment Strategies](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
### CI/CD
- [GitHub Actions Documentation](https://docs.github.com/en/actions) (Forgejo Actions compatible)
- [Docker Best Practices](https://docs.docker.com/develop/dev-best-practices/)
- [Container Security Best Practices](https://kubernetes.io/docs/concepts/security/pod-security-standards/)

View file

@ -1,224 +0,0 @@
---
title: Platform Orchestration
linkTitle: Platform Orchestration
weight: 1
description: >
Orchestration in the context of Platform Engineering - coordinating infrastructure, platform, and application delivery.
---
## Overview
Orchestration in the context of Platform Engineering refers to the coordinated automation and management of infrastructure, platform, and application components throughout their entire lifecycle. It is a fundamental concept that bridges the gap between declarative specifications (what should be deployed) and actual execution (how it is deployed).
## The Role of Orchestration in Platform Engineering
Platform Engineering has emerged as a discipline to improve developer experience and reduce cognitive load on development teams ([CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)). Orchestration is the central mechanism that enables this vision:
1. **Automation of Complex Workflows**: Orchestration coordinates multiple steps and dependencies automatically
2. **Consistency and Reproducibility**: Guaranteed, repeatable deployments across different environments
3. **Self-Service Capabilities**: Developers can independently orchestrate resources and deployments
4. **Governance and Compliance**: Centralized control over policies and best practices
### What Does Orchestration Do?
Orchestration systems perform the following tasks:
- **Workflow Coordination**: Coordination of complex, multi-step deployment processes
- **Dependency Management**: Resolution and management of dependencies between components
- **State Management**: Continuous monitoring and reconciliation between desired and actual state
- **Resource Provisioning**: Automatic provisioning of infrastructure and services
- **Configuration Management**: Management of configurations across different environments
- **Health Monitoring**: Monitoring the health of deployed resources
## Three Layers of Orchestration
In modern Platform Engineering, we distinguish three fundamental layers of orchestration:
### [Infrastructure Orchestration](../infrastructure/)
Infrastructure Orchestration deals with the lowest level - the physical and virtual infrastructure layer. This includes:
- Provisioning of compute, network, and storage resources
- Cloud resource management (VMs, networking, storage)
- Infrastructure-as-Code deployment (Terraform, etc.)
- Bare metal and hypervisor management
**Target Audience**: Infrastructure Engineers, Cloud Architects
**Note**: Detailed documentation for Infrastructure Orchestration is maintained separately.
More details: [Infrastructure Orchestration →](../infrastructure/)
### [Platform Orchestration](../otc/)
Platform Orchestration focuses on deploying and managing the platform itself - the services and tools that development teams use. This includes:
- Installation and configuration of Kubernetes clusters
- Deployment of platform services (GitOps tools, Observability, Security)
- Management of platform components via Stacks
- Multi-cluster orchestration
**Target Audience**: Platform Engineering Teams, SRE Teams
**In IPCEI-CIS**: Platform orchestration is realized using the CNOE stack concept with ArgoCD and Forgejo.
More details: [Platform Orchestration →](../otc/)
### [Application Orchestration](application/)
Application Orchestration concentrates on the deployment and lifecycle management of applications running on the platform. This includes:
- Deployment of microservices and containerized applications
- CI/CD pipeline orchestration
- Configuration management and secrets handling
- Application health monitoring and auto-scaling
**Target Audience**: Application Developers, DevOps Engineers
**In IPCEI-CIS**: Application orchestration uses Forgejo pipelines for CI/CD and ArgoCD for GitOps-based deployment.
More details: [Application Orchestration →](application/)
## GitOps as Orchestration Paradigm
A central approach in modern platform orchestration solutions is **GitOps**. GitOps uses Git repositories as the single source of truth for declarative infrastructure and applications:
- **Declarative Approach**: The desired state is defined in Git
- **Automatic Synchronization**: Controllers monitor Git and reconcile the live state
- **Audit Trail**: All changes are traceable in Git history
- **Rollback Capability**: Easy rollback through Git revert
### Continuous Reconciliation
An important concept is **continuous reconciliation**:
1. The orchestrator monitors both the source (Git) and the target (e.g., Kubernetes cluster)
2. Deviations trigger automatic corrective actions
3. Health checks validate that the desired state has been achieved
4. Drift detection warns of unexpected changes (see the CLI sketch below)
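These checks can also be run interactively; with the ArgoCD CLI the reconciliation state of an application is inspected like this:
```bash
argocd app get my-application    # current sync and health status
argocd app diff my-application   # drift between Git and the cluster
argocd app sync my-application   # trigger reconciliation manually
```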
## Orchestration Tools in IPCEI-CIS
Within the IPCEI-CIS platform, we utilize the [CNOE (Cloud Native Operational Excellence)](https://cnoe.io/) stack concept with the following orchestration components:
### ArgoCD
- **Continuous Delivery** for Kubernetes based on GitOps
- Synchronizes Kubernetes manifests from Git repositories
- Supports Helm Charts, Kustomize, Jsonnet, and plain YAML
- Multi-cluster deployment capabilities
- Application Sets for parameterized deployments
**Role in IPCEI-CIS**: ArgoCD is the central component for GitOps-based deployment management. After the initial bootstrapping phase, ArgoCD takes over the technical coordination of all components.
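As a sketch of parameterized deployments, an ApplicationSet that stamps out one Application per platform stack could look like this (repository URL and stack names are placeholders):
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-stacks
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - stack: forgejo
          - stack: keycloak
          - stack: observability
  template:
    metadata:
      name: '{{stack}}'
    spec:
      project: default
      source:
        repoURL: https://forgejo.example.com/platform/stacks  # placeholder
        targetRevision: main
        path: 'stacks/{{stack}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{stack}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```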
### Forgejo
- **Git Repository Management** and source control
- **CI/CD Pipelines** via Forgejo Actions (GitHub Actions compatible)
- **Developer Portal Capabilities** (initially planned, project discontinued)
- Package registry and artifact management
- Integration with ArgoCD for GitOps workflows
**Role in IPCEI-CIS**: Forgejo serves as the Git repository host and CI/CD engine. It was initially planned as a developer portal (similar to Backstage's role in other stacks) but this aspect was not fully realized before project completion.
**Note on Backstage**: In typical CNOE implementations, Backstage serves as the developer portal, providing golden paths through software templates. IPCEI-CIS initially planned to use Forgejo for this purpose, but the project concluded before full implementation.
### Terraform
- **Infrastructure-as-Code** provisioning
- Multi-cloud resource management
- State management for infrastructure
- Integration with Forgejo pipelines for automated deployment
**Role in IPCEI-CIS**: Terraform handles infrastructure provisioning at the infrastructure orchestration layer, integrated into automated workflows via Forgejo pipelines.
### CNOE Stacks Concept
- **Modular Platform Components** bundled as stacks
- Reusable, composable platform building blocks
- Version-controlled stack definitions
- GitOps-based stack deployment via ArgoCD
**Role in IPCEI-CIS**: The stacks concept from CNOE provides the structural foundation for platform orchestration, enabling modular deployment and management of platform components.
## The Orchestration Workflow
A typical orchestration workflow in the IPCEI-CIS platform:
{{< likec4-view view="orchestration_workflow" title="Orchestration Workflow" >}}
**Workflow Steps**:
1. **Definition**: Developer defines application/infrastructure as code
2. **Commit**: Changes are committed to Forgejo Git repository
3. **CI Pipeline**: Forgejo Actions build, test, and package the application
4. **Sync**: ArgoCD detects changes and triggers deployment
5. **Provision**: Terraform orchestrates required cloud resources (if needed)
6. **Deploy**: Application is deployed to Kubernetes
7. **Monitor**: Continuous monitoring and health checks
8. **Reconcile**: Automatic correction on drift detection
## Benefits of Coordinated Orchestration
The integration of infrastructure, platform, and application orchestration provides crucial advantages:
- **Reduced Complexity**: Developers don't need to know all infrastructure details
- **Faster Time-to-Market**: Automated workflows accelerate deployments
- **Consistency**: Standardized patterns across all teams
- **Governance**: Central policies are automatically enforced
- **Scalability**: Platform teams can support many application teams
- **Self-Service**: Developers can provision services independently
- **Audit and Compliance**: Complete traceability through Git history
## Best Practices
Successful orchestration follows proven principles ([Platform Engineering Principles](https://platformengineering.org/blog/what-is-platform-engineering)):
1. **Platform as a Product**: Treat the platform as a product with focus on user experience
2. **Self-Service First**: Enable developers to use services autonomously
3. **Documentation**: Comprehensive documentation of golden paths
4. **Feedback Loops**: Continuous improvement through user feedback
5. **Thin Platform Layer**: Use managed services where possible instead of building everything
6. **Progressive Disclosure**: Offer different abstraction levels
7. **Focus on Common Problems**: Solve recurring problems centrally
8. **Treat Glue as Valuable**: Integration of different tools is valuable
9. **Clear Mission**: Define clear goals and responsibilities
## Avoiding Anti-Patterns
Common mistakes in platform orchestration ([How to fail at Platform Engineering](https://www.cncf.io/blog/2024/03/08/how-to-fail-at-platform-engineering/)):
- **Product Misfit**: Building platform without involving developers
- **Overly Complex Design**: Too many features and unnecessary complexity
- **Swiss Knife Syndrome**: Trying to solve all problems with one tool
- **Insufficient Documentation**: Missing or outdated documentation
- **Siloed Development**: Platform and development teams working in isolation
- **Stagnant Platform**: Platform not continuously evolved
## Sub-Components
The orchestration component includes the following sub-areas:
- **[Infrastructure Orchestration](infrastructure/)**: Low-level infrastructure deployment and provisioning
- **[Platform Orchestration](platform/)**: Platform-level component deployment via Stacks
- **[Application Orchestration](application/)**: Application-level deployment and CI/CD
- **[Stacks](stacks/)**: Reusable component bundles and compositions
## Further Resources
### Fundamentals
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/) - Comprehensive paper on Platform Engineering
- [Platform Engineering Definition](https://platformengineering.org/blog/what-is-platform-engineering) - What is Platform Engineering?
- [Team Topologies](https://teamtopologies.com/) - Organizational structures for modern teams
### GitOps
- [GitOps Principles](https://opengitops.dev/) - Official GitOps principles
- [ArgoCD Documentation](https://argo-cd.readthedocs.io/) - ArgoCD documentation
### Tools
- [CNOE.io](https://cnoe.io/) - Cloud Native Operational Excellence Framework
- [Forgejo](https://forgejo.org/) - Self-hosted Git service with CI/CD
- [Terraform](https://www.terraform.io/) - Infrastructure as Code tool

View file

@ -1,201 +0,0 @@
---
title: Infrastructure as Code
linkTitle: Infrastructure as Code
weight: 10
description: >
Managing infrastructure through machine-readable definition files rather than manual configuration
---
## Overview
Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through code rather than manual processes. Instead of clicking through web consoles or running one-off commands, infrastructure is defined in version-controlled files that can be executed repeatedly to produce identical environments.
This approach treats infrastructure with the same rigor as application code: it's versioned, reviewed, tested, and deployed through automated pipelines.
## Why Infrastructure as Code?
### The problem with manual infrastructure
Traditional infrastructure management faces several challenges:
- **Inconsistency**: Manual steps vary between operators and environments
- **Undocumented**: Critical knowledge exists only in operators' heads
- **Error-Prone**: Human mistakes during repetitive tasks
- **Slow**: Manual provisioning takes hours or days
- **Untrackable**: No audit trail of what changed, when, or why
- **Irreproducible**: Difficulty recreating environments exactly
### The IaC solution
Infrastructure as Code addresses these challenges by making infrastructure:
**Declarative** - Describe the desired state, not the steps to achieve it. The IaC tool handles the implementation details.
**Versioned** - Every infrastructure change is committed to Git, providing complete history and the ability to rollback.
**Automated** - Infrastructure deploys through pipelines without human intervention, eliminating manual errors.
**Testable** - Infrastructure changes can be validated before production deployment.
**Documented** - The code itself is the documentation, always current and accurate.
**Reproducible** - The same code produces identical infrastructure every time, across all environments.
## Core Concepts
### Declarative vs imperative
**Imperative** approaches specify the exact steps: "Create a server, then install software, then configure networking."
**Declarative** approaches specify the desired outcome: "I need a server with this software and network configuration." The IaC tool determines the necessary steps.
Most modern IaC tools use the declarative approach, making them more maintainable and resilient.
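The difference is easiest to see with Kubernetes itself; the following sketch contrasts the two styles:
```bash
# Imperative: spell out every step yourself
kubectl create deployment web --image=nginx:1.25
kubectl scale deployment web --replicas=3
kubectl expose deployment web --port=80

# Declarative: describe the outcome and let the tool reconcile
kubectl apply -f web.yaml   # web.yaml declares the Deployment and Service
```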
### State Management
IaC tools maintain a "state" - a record of what infrastructure currently exists. When you change your code and re-run the tool, it compares the desired state (your code) with the actual state (what exists) and makes only the necessary changes.
This enables:
- **Drift detection** - Identify manual changes made outside IaC
- **Safe updates** - Modify only what changed
- **Dependency management** - Update resources in the correct order
### Idempotency
Running the same IaC code multiple times produces the same result. If infrastructure already matches the code, the tool makes no changes. This property is called idempotency and is essential for reliable automation.
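With Terraform, both state comparison and idempotency can be observed directly (a sketch, assuming an already initialized working directory):
```bash
terraform apply   # first run: creates the declared resources
terraform apply   # second run: reports "No changes." - idempotent

# Drift detection: exit code 0 = in sync, 2 = changes pending, 1 = error
terraform plan -detailed-exitcode
```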
## Infrastructure as Code in EDP
The Edge Developer Platform uses IaC extensively:
### Terraform and Terragrunt
[Terraform](terraform/) is our primary IaC tool for provisioning cloud resources. We use [Terragrunt](https://terragrunt.gruntwork.io/) as an orchestration layer to manage multiple Terraform modules and reduce code duplication.
Our implementation includes:
- **[infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue)** - Reusable infrastructure components (modules, units, and stacks)
- **[infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)** - Full environment definitions using catalogue components
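Day-to-day work with such a tree typically happens through Terragrunt's `run-all` commands (the directory names below are placeholders, not the actual infra-deploy layout):
```bash
cd environments/production   # hypothetical environment directory
terragrunt run-all plan      # plan every unit in dependency order
terragrunt run-all apply     # apply the whole environment
```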
### Platform stacks
We organize infrastructure into [stacks](stacks/) - coherent bundles of related components:
- **[Core Stack](stacks/core/)** - Essential platform services
- **[Forgejo Stack](stacks/forgejo/)** - Source control and CI/CD
- **[Observability Stack](stacks/observability/)** - Monitoring and logging
- **[OTC Stack](stacks/otc/)** - Cloud provider resources
- **[Coder Stack](stacks/coder/)** - Development environments
- **[Terralist Stack](stacks/terralist/)** - Terraform registry
Each stack is defined as code, versioned independently, and can be deployed across different environments.
### GitOps integration
Our IaC integrates with GitOps principles:
1. All infrastructure definitions live in Git repositories
2. Changes go through code review processes
3. Automated pipelines deploy infrastructure
4. ArgoCD continuously reconciles Kubernetes resources with Git state
This creates an auditable, automated, and reliable deployment process.
## Benefits realized
### Consistency across environments
Development, testing, and production environments are deployed from the same code. This eliminates the "works on my machine" problem at the infrastructure level.
### Rapid environment provisioning
A complete EDP environment can be provisioned in minutes rather than days. This enables:
- Quick disaster recovery
- Easy creation of test environments
- Fast onboarding for new team members
### Reduced operational risk
Code review catches infrastructure errors before deployment. Automated testing validates changes. Version control enables instant rollback if problems occur.
### Knowledge sharing
Infrastructure configuration is explicit and discoverable in code. New team members can understand the platform by reading the repository rather than shadowing experienced operators.
### Compliance and auditability
Every infrastructure change is tracked in Git history with author, timestamp, and reason. This provides audit trails required for compliance and simplifies troubleshooting.
## Getting started
To work with EDP's Infrastructure as Code:
1. **Understand Terraform basics** - Review [Terraform documentation](https://developer.hashicorp.com/terraform)
2. **Explore infra-catalogue** - Browse [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) to understand available components
3. **Review existing deployments** - Examine [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) to see how components are composed
4. **Follow the Terraform guide** - See [Terraform-based deployment](terraform/) for detailed instructions
## Best Practices
Based on our experience building and operating IaC:
**Version everything** - All infrastructure code belongs in version control. No exceptions.
**Keep it simple** - Start with basic modules. Add abstraction only when duplication becomes painful.
**Test before production** - Deploy infrastructure changes to test environments first.
**Use meaningful commit messages** - Explain why changes were made, not just what changed.
**Review all changes** - Infrastructure changes should go through the same review process as application code.
**Document assumptions** - Use code comments to explain non-obvious decisions.
**Manage secrets securely** - Never commit credentials to version control. Use secret management tools.
**Plan for drift** - Regularly compare actual infrastructure with code state to detect manual changes.
## Challenges and limitations
Infrastructure as Code is powerful but has challenges:
**Learning curve** - Teams need to learn IaC tools and practices. Initial productivity may decrease.
**State management complexity** - State files must be stored securely and accessed by multiple team members. State corruption can cause serious issues.
**Provider limitations** - Not all infrastructure can be managed as code. Some resources require manual configuration.
**Breaking changes** - Poorly written code can destroy infrastructure. Safeguards and testing are essential.
**Tool lock-in** - Switching IaC tools (e.g., Terraform to Pulumi) requires rewriting infrastructure code.
Despite these challenges, the benefits far outweigh the costs for any infrastructure of meaningful complexity.
## Why we invest in IaC
The IPCEI-CIS Edge Developer Platform requires reliable, reproducible infrastructure. Manual provisioning cannot meet these requirements at scale.
By investing in Infrastructure as Code:
- We can deploy complete environments consistently
- Platform engineers can focus on improvement rather than repetitive tasks
- Infrastructure changes are transparent and auditable
- New team members can contribute confidently
- Disaster recovery becomes routine rather than heroic
Our IaC tools ([infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) and [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)) embody these principles and enable the platform's reliability.
## Additional Resources
### Terraform Ecosystem
- [Terraform Documentation](https://developer.hashicorp.com/terraform)
- [OpenTofu](https://opentofu.org/) - Community-driven Terraform fork
- [Terragrunt](https://terragrunt.gruntwork.io/) - Terraform orchestration
### Infrastructure as Code Concepts
- [Infrastructure as Code book](https://www.oreilly.com/library/view/infrastructure-as-code/9781098114664/) by Kief Morris
- [Terraform Best Practices](https://www.terraform-best-practices.com/)
- [CNCF Platforms White Paper](https://tag-app-delivery.cncf.io/whitepapers/platforms/)
### EDP-Specific Resources
- [Terraform-based deployment](terraform/) - Detailed deployment guide
- [Infrastructure Stacks](stacks/) - Reusable component bundles
- [Platform Orchestration](../) - How IaC fits into overall deployment
---
title: "Stacks"
linkTitle: "Stacks"
weight: 40
description: >
Platform-level component provisioning via Stacks
---
## Overview
The `stacks` and `stacks-instances` repositories form the core of a GitOps-based system for provisioning Edge Developer Platforms (EDP). They implement a template-instance pattern that enables the deployment of reusable platform components across different environments. The concept of "stacks" originates from the CNOE.io project (Cloud Native Operational Excellence), which can be traced through the evolutionary development from `edpbuilder` (derived from CNOE.io's `idpbuilder`) to `infra-deploy`.
## Key Features of the Everything-as-Code Stacks Approach
This declarative Stacks provisioning architecture is characterized by the following central properties:
### Complete Code Declaration
**Platform as Code**: All Kubernetes resources, Helm charts, and application manifests are declaratively versioned as YAML files. The entire platform topology is traceable in Git.
**Configuration as Code**: Environment-specific configurations are generated through template hydration, not manually edited. Gomplate transforms generic templates into concrete configurations.
### GitOps-Native Architecture
**Single Source of Truth**: Git is the sole source of truth for the desired state of all infrastructure and platform components.
**Declarative State Management**: ArgoCD continuously synchronizes the actual state with the desired state defined in Git. Deviations are automatically corrected.
**Audit Trail**: Every change to infrastructure or platform is documented through Git commits, with author, timestamp, and change description.
**Pull-based Deployment**: ArgoCD pulls changes from Git, rather than external systems requiring push access to the cluster. This significantly increases security.
### Template-Instance Separation
**DRY Principle (Don't Repeat Yourself)**: Common platform components are defined once as templates and reused for all environments.
**Environment Promotion**: New environments can be quickly created through template hydration. Consistency across environments is guaranteed.
**Centralized Maintainability**: Updates to stack definitions can be made centrally in the `stacks` repository and then selectively rolled out to instances.
**Customization Points**: Despite reuse, environment-specific customizations remain possible through values files and manifest overlays.
### Modular Composition
**Stack-based Architecture**: Platform capabilities are organized into independent, reusable stacks (core, otc, forgejo, observability).
**Selective Deployment**: Through the `STACKS` environment variable, only required components can be deployed selectively.
**Mix-and-Match**: Different stack combinations yield different platform profiles (Development, Production, Observability clusters).
**Pluggable Components**: New stacks can be added without modifying existing ones.
### Environment Agnosticism
**Cloud Provider Abstraction**: Templates are formulated generically. Provider-specific details are introduced through hydration.
**Multi-Cloud Ready**: The architecture supports various cloud providers (currently OTC, historically KIND, extensible to AWS/Azure/GCP).
**Environment Variables as Interface**: All environment-specific aspects are controlled through clearly defined environment variables.
**Portable Definitions**: Stack definitions can be ported between environments and even cloud providers.
### Self-Healing and Drift Detection
**Automated Reconciliation**: ArgoCD detects deviations from the desired state and corrects them automatically.
**Continuous Monitoring**: Permanent monitoring of cluster state compared to Git definition.
**Declarative State Recovery**: After failures or manual changes, the declared state is automatically restored.
**Sync Policies**: Configurable sync strategies (automated, manual, with pruning) per application.
### Secrets Management
**Secrets Outside Git**: Sensitive data is not stored in Git but generated at runtime or injected from secret stores.
**Generated Credentials**: Passwords, tokens, and secrets are generated during deployment and directly created as Kubernetes Secrets.
**Sealed Secrets Ready**: The architecture is compatible with Sealed Secrets or External Secrets Operators for encrypted secret storage in Git.
**Credential Rotation**: Secrets can be regenerated through re-deployment; the basic generate-and-inject pattern is sketched below.
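To illustrate the pattern, a minimal sketch of generating a credential at deploy time and creating it directly as a Kubernetes Secret; the secret name and namespace are placeholders, not values from the actual scripts:

```bash
# Generate a random password and inject it as a Kubernetes Secret,
# so the credential never touches Git.
PASSWORD="$(openssl rand -base64 24)"
kubectl create secret generic my-db-credentials \
  --namespace my-stack \
  --from-literal=password="${PASSWORD}"
```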
### Observability and Auditability
**Declarative Monitoring**: Observability stacks are part of the Platform-as-Code definition.
**Deployment History**: Complete history of all deployments and changes through Git log.
**ArgoCD UI**: Graphical representation of sync status and application topology.
**Infrastructure Events**: Terraform state changes and Terragrunt outputs document infrastructure changes.
### Idempotence and Reproducibility
**Idempotent Operations**: Repeated execution of the same declaration leads to the same result without side effects.
**Deterministic Builds**: Same input parameters (Git commit + environment variables) produce identical environments.
**Disaster Recovery**: Complete environments can be rebuilt from code without restoring backups.
**Testing in Production-Like Environments**: Development and staging environments are code-identical to production, only with different parameter values.
## Purpose in EDP
A 'stack' is the declarative description of the platform provisioning in an EDP installation.
## Repository
**Code**:
* [Stacks Templates Repo](https://edp.buildth.ing/DevFW-CICD/stacks)
* [Stacks Instances Repo, used for ArgoCD Gitops](https://edp.buildth.ing/DevFW-CICD/stacks-instances)
* [EDP Stacks Deployment mechanism](https://edp.buildth.ing/DevFW/infra-deploy)
**Documentation**:
* [Outdated: the former 'edpbuilder' script, derived from CNOE's 'idpbuilder'](https://edp.buildth.ing/DevFW/edpbuilder)
## The stacks Repository
### Purpose and Structure
The `stacks` repository contains reusable template definitions for platform components. It serves as a central library of building blocks from which Edge Developer Platforms can be composed.
```
stacks/
└── template/
├── edfbuilder.yaml
├── registry/
│ ├── core.yaml
│ ├── otc.yaml
│ ├── forgejo.yaml
│ ├── observability.yaml
│ └── observability-client.yaml
└── stacks/
├── core/
├── otc/
├── forgejo/
├── observability/
└── observability-client/
```
### Components
**edfbuilder.yaml**: The central bootstrap definition. This is an ArgoCD Application that references the `registry` directory and serves as the entry point for the entire platform provisioning.
**registry/**: Contains ArgoCD ApplicationSets that function as a meta-layer. Each file defines a category of stacks (e.g., core, forgejo, observability) and references the corresponding subdirectory in `stacks/`.
**stacks/**: The actual platform components, organized into thematic categories:
- **core**: Fundamental components such as ArgoCD, CloudNative PostgreSQL, Dex (SSO)
- **otc**: Cloud-provider-specific components for Open Telekom Cloud (cert-manager, ingress-nginx, StorageClasses)
- **forgejo**: Git server and CI runners
- **observability**: Central observability components (Grafana, Victoria Metrics Stack)
- **observability-client**: Client-side metrics collection for non-observability clusters
Each stack consists of:
- YAML definitions (primarily ArgoCD Applications)
- `values.yaml` files for Helm charts
- `manifests/` directories for additional Kubernetes resources
### Templating Mechanism
The templates use Gomplate with delimiter syntax `{{{ }}}` for environment variables:
```yaml
repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"
path: "{{{ .Env.CLIENT_REPO_ID }}}/{{{ .Env.DOMAIN }}}/stacks/core"
```
These placeholders are replaced with environment-specific values during the deployment phase.
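To make the hydration step concrete, here is a hypothetical before/after pair; the variable values are invented for illustration:

```yaml
# Template (stacks repository), with gomplate delimiters:
repoURL: "https://{{{ .Env.CLIENT_REPO_DOMAIN }}}/{{{ .Env.CLIENT_REPO_ORG_NAME }}}"

# Hydrated instance (stacks-instances repository), assuming
# CLIENT_REPO_DOMAIN=edp.buildth.ing and CLIENT_REPO_ORG_NAME=DevFW-CICD:
repoURL: "https://edp.buildth.ing/DevFW-CICD"
```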
## The stacks-instances Repository
### Purpose and Structure
The `stacks-instances` repository contains the materialized, environment-specific configurations. While `stacks` provides the blueprints, `stacks-instances` contains the actual deployment definitions for concrete environments.
```
stacks-instances/
└── otc/
├── osctest.t09.de/
│ ├── edfbuilder.yaml
│ ├── registry/
│ └── stacks/
├── backup-test-manu.t09.de/
│ ├── edfbuilder.yaml
│ ├── registry/
│ └── stacks/
└── ...
```
### Organizational Principle
The structure follows the schema `{cloud-provider}/{domain}/`:
- **cloud-provider**: Identifies the cloud environment (e.g., `otc` for Open Telekom Cloud)
- **domain**: The fully qualified domain name of the environment (e.g., `osctest.t09.de`)
Each environment replicates the structure of `stacks/template`, but with resolved template variables and environment-specific customizations.
### Usage by ArgoCD
ArgoCD synchronizes directly from this repository. Applications reference paths such as:
```yaml
source:
  path: "otc/osctest.t09.de/stacks/core"
  repoURL: "https://edp.buildth.ing/DevFW-CICD/stacks-instances"
  targetRevision: HEAD
```
This enables true GitOps: every change to the configurations is traceable through Git commits and automatically synchronized by ArgoCD in the target environment.
## The infra-deploy Repository
### Role in the Overall Architecture
The `infra-deploy` repository is the orchestration layer that coordinates both infrastructure and platform provisioning. It represents the evolution of `edpbuilder`, which was originally derived from the CNOE.io project's `EDPbuilder`.
### Two-Phase Provisioning
**Phase 1: Infrastructure Provisioning**
Uses Terragrunt Stacks (experimental feature) to provision cloud resources:
```
infra-deploy/
├── root.hcl
├── non-prod/
│ ├── tenant.hcl
│ ├── dns_zone/
│ │ ├── terragrunt.hcl
│ │ ├── terragrunt.stack.hcl
│ │ └── terragrunt.values.hcl
│ └── testing/
├── prod/
└── templates/
└── forgejo/
├── terragrunt.hcl
└── terragrunt.stack.hcl
```
Terragrunt Stacks provision:
- VPC and network segments
- Kubernetes clusters (CCE on OTC)
- Managed databases (RDS PostgreSQL)
- Load balancers and DNS entries
- Security groups and other cloud resources
**Phase 2: Platform Provisioning**
The script `scripts/edp-install.sh` executes the following steps:
1. **Template Hydration**:
- Checkout of the `stacks` repository
- Execution of Gomplate to resolve template variables
- Generation of environment-specific manifests
2. **Instance Management**:
- Checkout/update of the `stacks-instances` repository
- During CI execution: commit and push of the new instance
3. **Secrets Management**:
- Generation of credentials (database passwords, SSO secrets, API tokens)
- Creation of Kubernetes Secrets
4. **Bootstrap**:
- Helm-based installation of ArgoCD
- Application of `edfbuilder.yaml` or selective registry entries
5. **GitOps Handover**:
- ArgoCD takes over further synchronization from `stacks-instances`
- Continuous monitoring and self-healing
### GitHub Actions Workflows
The `.github/workflows/` directory contains three central workflows:
**deploy.yaml**: Complete deployment pipeline with the following inputs:
- Cluster environment and tenant (prod/non-prod)
- Node flavor and availability zone
- Stack selection (core, otc, forgejo, observability, etc.)
- Infra-catalogue version
**plan.yaml**: Terraform/Terragrunt plan preview without execution
**destroy.yaml**: Controlled teardown of environments
## Deployment Workflow
The complete provisioning process proceeds as follows:
1. **Initiation**: GitHub Actions workflow is triggered (manually or automatically)
2. **Environment Preparation**:
```bash
export CLUSTER_ENVIRONMENT=qa-stage
cd scripts
./new-otc-env.sh # Creates Terragrunt configuration if new
```
3. **Infrastructure Provisioning**:
```bash
./ensure-cluster.sh otc
# Internally executes:
# - ./ensure-otc-cluster.sh
# - terragrunt stack run apply
```
4. **Platform Provisioning**:
```bash
./edp-install.sh
# Executes:
# - Checkout of stacks
# - Gomplate hydration
# - Checkout/update of stacks-instances
# - Secrets generation
# - ArgoCD installation
# - Bootstrap of stacks
```
5. **ArgoCD Synchronization**: ArgoCD continuously reads from `stacks-instances` and synchronizes the desired state
## The CNOE.io Stacks Concept
The term "stacks" originates from the Cloud Native Operational Excellence (CNOE.io) project. The core idea is the composition of platform capabilities from modular, reusable building blocks.
### Principles
**Modularity**: Each stack is a self-contained unit with clear dependencies
**Composability**: Stacks can be freely combined to create different platform profiles
**Declarativeness**: All configurations are declarative and GitOps-capable
**Environment-agnostic**: Templates are generic; environment specifics are introduced through hydration
### Stack Selection and Combinations
The environment variable `STACKS` controls which components are deployed:
```bash
# Complete EDP with central observability
STACKS="core,otc,forgejo,observability"
# Application cluster with client-side observability
STACKS="core,otc,forgejo,observability-client"
# Minimal development environment
STACKS="core,forgejo"
```
## Data Flow and Dependencies
```
┌─────────────────┐
│ GitHub Actions │
│ (deploy.yaml) │
└────────┬────────┘
├─> Phase 1: Infrastructure
│ ┌──────────────────┐
│ │ infra-deploy │
│ │ (Terragrunt) │
│ └────────┬─────────┘
│ │
│ v
│ ┌──────────────────┐
│ │ Cloud Provider │
│ │ (OTC) │
│ │ - VPC │
│ │ - K8s Cluster │
│ │ - RDS │
│ └──────────────────┘
└─> Phase 2: Platform
┌──────────────────┐
│ edp-install.sh │
└────────┬─────────┘
├─> Checkout: stacks (Templates)
│ └─> Gomplate Hydration
├─> Checkout/Update: stacks-instances
├─> Secrets Generation
├─> ArgoCD Installation (Helm)
└─> Bootstrap (edfbuilder.yaml)
v
┌────────────────┐
│ ArgoCD │
└────────┬───────┘
└─> Continuous Synchronization
from stacks-instances
v
┌──────────────┐
│ Kubernetes │
│ Cluster │
└──────────────┘
```
## Historical Context: edpbuilder to infra-deploy
The evolution from `edpbuilder` to `infra-deploy` demonstrates the maturation of the architecture:
**edpbuilder** (Origin):
- Directly derived from CNOE.io's `idpbuilder`
- Focus on local KIND clusters
- Manual configuration
- Monolithic structure
**infra-deploy** (Current):
- Production-ready for cloud deployments (OTC)
- Terragrunt-based infrastructure orchestration
- CI/CD integration via GitHub Actions
- Clear separation between infrastructure and platform
- Template-instance separation through stacks/stacks-instances
## Technical Particularities
### Gomplate Templating
Gomplate is used with custom delimiters `{{{ }}}` to avoid conflicts with Helm templating (`{{ }}`):
```bash
gomplate --input-dir="stacks/template" \
--output-dir="work" \
--left-delim "{{{" \
--right-delim "}}}"
```
### Terragrunt Experimental Stacks
The use of Terragrunt Stacks requires the experimental flag:
```bash
export TG_EXPERIMENT_MODE=true
terragrunt stack run apply
```
This enables hierarchical organization of Terraform modules with dependency management.
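A minimal sketch of what a `terragrunt.stack.hcl` might contain under the experimental stacks syntax; the unit names, source paths, and example value are invented:

```hcl
# Hypothetical stack definition composing two units.
# Each unit points at a Terragrunt template and gets its own state path.
unit "network" {
  source = "../../templates/network"
  path   = "network"
}

unit "cluster" {
  source = "../../templates/cluster"
  path   = "cluster"

  values = {
    node_flavor = "s3.large.2" # example value passed into the unit
  }
}
```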
### ArgoCD ApplicationSets
The registry pattern uses ArgoCD Applications that reference directories:
```yaml
source:
  path: "otc/osctest.t09.de/stacks/core"
```
ArgoCD automatically detects all YAML files in the path and synchronizes them as Applications.
## Best Practices and Patterns
**Immutable Infrastructure**: Every environment is fully defined in Git
**Secrets Outside Git**: Sensitive data is generated at runtime or injected from secret stores
**Progressive Rollouts**: New environments start as template instances, then are individually customized
**Version Pinning**: Critical components (Helm charts, Terragrunt modules) are pinned to specific versions
**Namespace Isolation**: Each stack deploys into dedicated namespaces
**Self-Healing**: ArgoCD's automated sync policy enables automatic drift correction; the relevant snippet is sketched below
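For reference, the sync policy that enables this behavior looks like the following generic sketch (standard ArgoCD fields, not a specific EDP manifest):

```yaml
syncPolicy:
  automated:
    prune: true     # remove resources that were deleted from Git
    selfHeal: true  # revert manual changes to match the declared state
```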
## Usage Examples
### Deployment by Pipeline
The platform deployment is the second part of the EDP installation. First comes the infrastructure setup, which ends with a created Kubernetes cluster; then the platform is provisioned from the defined stacks. Both are runnable via the `deploy` pipeline in `infra-deploy`:
![alt text](./deploy-action.png)
A green pipeline looks like this:
![alt text](./green-deploy-pipeline.png)
### Local setup with 'kind'
It is also possible to run only the second part, the stacks provisioning. In that case you need a Kubernetes cluster already running, which is feasible with, for example, a local kind cluster.
So imagine you want to deploy the stacks 'core,observability' on your local machine. Then you can run the following locally:
```bash
# have kind installed
# in /infra-deploy
# provide a kind cluster
kind delete clusters --all
./scripts/ensure-kind-cluster.sh -r
# provide some env vars
export TERRAFORM=/bin/bash
export LOADBALANCER_ID=ABC
export DOMAIN=ABC
export DOMAIN_GITEA=ABC
export OS_ACCESS_KEY=ABC
export OS_SECRET_KEY=ABC
export STACKS=core,observability
# deploy
./scripts/edp-install.sh
```
## Status
**Maturity**: [Production]
## Additional Resources
* [CNOE](https://cnoe.io/docs/overview/cnoe)
---
title: "Coder"
linkTitle: "Coder"
weight: 20
description: >
Cloud Development Environments for secure, scalable remote development
---
## Overview
Coder is an enterprise cloud development environment (CDE) platform that provisions secure, consistent remote development workspaces. As part of the Edge Developer Platform, Coder enables developers to work in standardized, on-demand environments defined as code, moving development workloads from local machines to centrally managed infrastructure.
The Coder stack deploys a self-hosted Coder instance with PostgreSQL database backend, integrated authentication, and edge connectivity capabilities.
## Key Features
* **Infrastructure as Code Workspaces**: Development environments defined using Terraform templates
* **IDE Agnostic**: Supports browser-based IDEs, VS Code, JetBrains IDEs, and other development tools
* **Secure Remote Access**: Workspaces run in controlled cloud or on-premises infrastructure
* **On-Demand Provisioning**: Developers create ephemeral or persistent workspaces as needed
* **AI Agent Support**: Secure execution environment for AI coding assistants
* **Template-Based Deployment**: Reusable workspace templates ensure consistency across teams
## Repository
**Code**: [Coder Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/coder)
**Documentation**:
* [Coder Official Documentation](https://coder.com/docs)
* [Coder GitHub Repository](https://github.com/coder/coder)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* CloudNativePG operator (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Domain name configured via `DOMAIN_GITEA` environment variable
### Quick Start
The Coder stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domain will be `coder.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- PostgreSQL database cluster (CloudNativePG)
- Coder application (Helm chart v2.28.3)
- Ingress configuration with TLS
- Database credentials and edge connectivity secrets
### Verification
Verify the Coder deployment:
```bash
# Check ArgoCD application status
kubectl get application coder -n argocd
# Verify Coder pods are running
kubectl get pods -n coder
# Check PostgreSQL cluster status
kubectl get cluster coder-db -n coder
# Verify ingress configuration
kubectl get ingress -n coder
```
Access the Coder web interface at `https://coder.{DOMAIN_GITEA}`.
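Once the UI is reachable, you can also connect with the Coder CLI; a minimal sketch, assuming the CLI is installed locally and using the example domain from above:

```bash
# Authenticate the CLI against the deployed instance
# (opens a browser to complete the login).
coder login https://coder.test-me.t09.de
```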
## Architecture
### Component Architecture
The Coder stack consists of:
**Coder Control Plane**:
- Web application for workspace management
- API server for workspace provisioning
- Terraform executor for infrastructure operations
**PostgreSQL Database**:
- Single-instance CloudNativePG cluster
- Stores workspace metadata, templates, and user data
- Managed database user with `coder-db-user` secret
- 10Gi persistent storage on `csi-disk` storage class
**Networking**:
- ClusterIP service for internal communication
- Nginx ingress with TLS termination
- cert-manager integration for automatic certificate management
## Configuration
### Environment Variables
The Coder application is configured through environment variables in `values.yaml`:
**Access Configuration**:
- `CODER_ACCESS_URL`: Public URL where Coder is accessible (`https://coder.{DOMAIN_GITEA}`)
**Database Configuration**:
- `CODER_PG_CONNECTION_URL`: PostgreSQL connection string (from `coder-db-user` secret)
**Authentication**:
- `CODER_OAUTH2_GITHUB_DEFAULT_PROVIDER_ENABLE`: GitHub OAuth integration (disabled by default)
**Edge Connectivity**:
- `EDGE_CONNECT_ENDPOINT`: Edge connection endpoint (from `edge-credential` secret)
- `EDGE_CONNECT_USERNAME`: Edge authentication username
- `EDGE_CONNECT_PASSWORD`: Edge authentication password
### Helm Chart Configuration
Key Helm values configured in `stacks/coder/coder/values.yaml`:
```yaml
coder:
  env:
    - name: CODER_ACCESS_URL
      value: "https://coder.{DOMAIN_GITEA}"
    - name: CODER_PG_CONNECTION_URL
      valueFrom:
        secretKeyRef:
          name: coder-db-user
          key: uri
  service:
    type: ClusterIP
  ingress:
    enable: true
    className: nginx
    host: "coder.{DOMAIN_GITEA}"
    annotations:
      cert-manager.io/cluster-issuer: main
    tls:
      enable: true
      secretName: coder-tls-secret
```
**Important**: Do not override `CODER_HTTP_ADDRESS`, `CODER_TLS_ENABLE`, `CODER_TLS_CERT_FILE`, or `CODER_TLS_KEY_FILE` as these are managed by the Helm chart.
### PostgreSQL Database Configuration
Defined in `stacks/coder/coder/manifests/postgres.yaml`:
**Cluster Specification**:
- 1 instance (single-node cluster)
- Primary update strategy: unsupervised
- Resource requests/limits: 1 CPU, 1Gi memory
- Storage: 10Gi using `csi-disk` storage class
**Managed Roles**:
- User: `coder`
- Permissions: createdb, login
- Password stored in `coder-db-user` secret (declared as sketched below)
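As an illustration of how such a managed role is declared, a hedged sketch of the relevant portion of a CloudNativePG `Cluster` spec; the field names follow the CloudNativePG API, while the surrounding manifest is omitted:

```yaml
spec:
  managed:
    roles:
      - name: coder
        ensure: present
        login: true
        createdb: true
        passwordSecret:
          name: coder-db-user  # secret holding the generated password
```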
### ArgoCD Application Configuration
**Registry Application** (`template/registry/coder.yaml`):
- Name: `coder-reg`
- Manages the Coder stack directory
- Automated sync with prune and self-heal enabled
**Stack Application** (`template/stacks/coder/coder.yaml`):
- Name: `coder`
- Deploys Coder Helm chart v2.28.3 from `https://helm.coder.com/v2`
- Automated self-healing enabled
- Creates namespace automatically
- References values from `stacks-instances` repository
## Usage Examples
### Creating a Workspace Template
After deployment, create workspace templates using Terraform:
1. **Access Coder Dashboard**
```bash
open https://coder.${DOMAIN_GITEA}
```
2. **Create Template Repository**
Create a Git repository with a Terraform template:
```hcl
# main.tf
terraform {
  required_providers {
    coder = {
      source  = "coder/coder"
      version = "~> 0.12"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
  }
}

# Workspace metadata (owner and name) referenced in the pod name below.
data "coder_workspace" "me" {}

resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

resource "kubernetes_pod" "main" {
  metadata {
    name      = "coder-${data.coder_workspace.me.owner}-${data.coder_workspace.me.name}"
    namespace = "coder-workspaces"
  }
  spec {
    container {
      name    = "dev"
      image   = "codercom/enterprise-base:ubuntu"
      command = ["sh", "-c", coder_agent.main.init_script]
      # The agent needs its token to register with the Coder control plane.
      env {
        name  = "CODER_AGENT_TOKEN"
        value = coder_agent.main.token
      }
    }
  }
}
```
3. **Push Template to Coder**
```bash
coder templates push kubernetes-dev
```
### Provisioning a Development Workspace
```bash
# Create a new workspace from template
coder create my-workspace --template kubernetes-dev
# Connect via SSH
coder ssh my-workspace
# Open in VS Code
coder open my-workspace --ide vscode
# Stop workspace when not in use
coder stop my-workspace
# Delete workspace
coder delete my-workspace
```
### Integrating with Platform Services
Access EDP platform services from Coder workspaces:
```bash
# Connect to platform PostgreSQL
psql "postgresql://myuser@postgres.core.svc.cluster.local:5432/mydb"
# Access Forgejo
git clone https://forgejo.${DOMAIN_GITEA}/myorg/myrepo.git
# Query platform metrics
curl https://grafana.${DOMAIN}/api/datasources
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration and CloudNativePG operator for database management
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Forgejo Stack**: Workspace templates can integrate with platform Git repositories
* **Observability Stack**: Workspace metrics can be collected by platform observability tools
* **Dex (SSO)**: Can be configured for centralized authentication (requires additional configuration)
## Troubleshooting
### Coder Pods Not Starting
**Problem**: Coder pods remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check PostgreSQL cluster status:
```bash
kubectl get cluster coder-db -n coder
kubectl describe cluster coder-db -n coder
```
2. Verify database credentials secret:
```bash
kubectl get secret coder-db-user -n coder
kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d
```
3. Check Coder logs:
```bash
kubectl logs -n coder -l app=coder
```
### Cannot Access Coder UI
**Problem**: Coder web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n coder
kubectl describe ingress -n coder
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n coder
kubectl describe certificate coder-tls-secret -n coder
```
3. Verify DNS resolution:
```bash
nslookup coder.${DOMAIN_GITEA}
```
### Database Connection Errors
**Problem**: Coder cannot connect to PostgreSQL database
**Solution**:
1. Verify PostgreSQL cluster health:
```bash
kubectl get pods -n coder -l cnpg.io/cluster=coder-db
kubectl logs -n coder -l cnpg.io/cluster=coder-db
```
2. Check database and user creation:
```bash
kubectl get database coder -n coder
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\l"
kubectl exec -it coder-db-1 -n coder -- psql -U postgres -c "\du"
```
3. Test connection string:
```bash
kubectl exec -it coder-db-1 -n coder -- psql "$(kubectl get secret coder-db-user -n coder -o jsonpath='{.data.uri}' | base64 -d)"
```
### Workspace Provisioning Fails
**Problem**: Workspaces fail to provision from templates
**Solution**:
1. Check Coder provisioner logs:
```bash
kubectl logs -n coder -l app=coder --tail=100
```
2. Verify Kubernetes permissions for workspace creation:
```bash
kubectl auth can-i create pods --as=system:serviceaccount:coder:coder -n coder-workspaces
```
3. Review template Terraform configuration for errors
## Additional Resources
* [Coder Documentation](https://coder.com/docs)
* [Coder Templates Repository](https://github.com/coder/coder)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [Coder Blog: 2025 Launch Week](https://coder.com/blog/launch-week-2025-instant-infrastructure)
---
title: "Core"
linkTitle: "Core"
weight: 10
description: >
Essential infrastructure components for GitOps, database management, and single sign-on
---
## Overview
The Core stack provides foundational infrastructure components required by all other Edge Developer Platform stacks. It establishes the base layer for continuous deployment, database services, and centralized authentication, enabling a secure, scalable platform architecture.
The Core stack deploys ArgoCD for GitOps orchestration, CloudNativePG for PostgreSQL database management, and Dex for OpenID Connect single sign-on capabilities.
## Key Features
* **GitOps Continuous Deployment**: ArgoCD manages declarative infrastructure and application deployments
* **Database Operator**: CloudNativePG provides enterprise-grade PostgreSQL clusters for platform services
* **Single Sign-On**: Dex offers centralized OIDC authentication across platform components
* **Automated Synchronization**: Self-healing deployments with automatic drift correction
* **Role-Based Access Control**: Integrated RBAC for secure platform administration
* **TLS Certificate Management**: Automated certificate provisioning and renewal
## Repository
**Code**: [Core Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/core)
**Documentation**:
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [Dex Documentation](https://dexidp.io/docs/)
## Getting Started
### Prerequisites
* Kubernetes cluster (1.24+)
* kubectl configured with cluster access
* Ingress controller (nginx recommended)
* cert-manager for TLS certificate management
* Domain names configured for platform services
### Quick Start
The Core stack is deployed as the foundation of the EDP installation:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domains will be `argocd.test-me.t09.de` and `dex.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Bootstrap**
The deployment automatically provisions:
- ArgoCD control plane in `argocd` namespace
- CloudNativePG operator in `cloudnative-pg` namespace
- Dex identity provider in `dex` namespace
- Ingress configurations with TLS certificates
- OIDC authentication integration
### Verification
Verify the Core stack deployment:
```bash
# Check ArgoCD installation
kubectl get application -n argocd
kubectl get pods -n argocd
# Verify CloudNativePG operator
kubectl get pods -n cloudnative-pg
kubectl get crd | grep cnpg.io
# Check Dex deployment
kubectl get pods -n dex
kubectl get ingress -n dex
# Verify ingress configurations
kubectl get ingress -n argocd
```
Access ArgoCD at `https://argocd.{DOMAIN}` and authenticate via Dex SSO. Alternatively, use username `admin` with the password stored in the Kubernetes secret `argocd/argocd-initial-admin-secret`: `kubectl get secret -n argocd argocd-initial-admin-secret -ojson | jq -r .data.password | base64 -d`.
## Architecture
### Component Architecture
The Core stack establishes a three-tier foundation:
**ArgoCD Control Plane**:
- Application management and GitOps reconciliation
- Multi-repository tracking with automated sync
- Resource health monitoring and drift detection
- Integrated RBAC with SSO authentication
**CloudNativePG Operator**:
- PostgreSQL cluster lifecycle management
- Automated backup and recovery
- High availability and failover
- Storage provisioning via CSI drivers
**Dex Identity Provider**:
- OpenID Connect authentication service
- Multiple connector support (Forgejo/Gitea, LDAP, SAML)
- Static client registration for platform services
- Token issuance and validation
### Networking
**Ingress Architecture**:
- nginx ingress controller for external access
- TLS termination with cert-manager integration
- Domain-based routing for platform services
**Kubernetes Services**:
- Internal service communication via ClusterIP
- DNS-based service discovery
- Network policies for security segmentation
## Configuration
### ArgoCD Configuration
Deployed via Helm chart v9.1.5 with custom values in `stacks/core/argocd/values.yaml`:
**OIDC Authentication**:
```yaml
configs:
  cm:
    url: "https://{DOMAIN_ARGOCD}"
    oidc.config: |
      name: Forgejo
      issuer: https://{DOMAIN_DEX}
      clientID: controller-argocd-dex
      clientSecret: $dex-controller-argocd-dex:dex-controller-argocd-dex
      requestedScopes: ["openid", "profile", "email", "groups"]
```
**RBAC Policy**:
```yaml
policy.csv: |
  g, DevFW, role:admin
```
**Server Settings**:
- Insecure mode enabled (TLS handled by ingress)
- Annotation-based resource tracking
- 60-second reconciliation timeout
- Resource exclusions for ProviderConfigUsage and CiliumIdentity (a values sketch follows below)
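A hedged sketch of how the first three settings map onto argo-cd Helm values; the key names follow the upstream chart, and the actual EDP values file may differ:

```yaml
configs:
  params:
    server.insecure: true  # TLS is terminated at the ingress
  cm:
    application.resourceTrackingMethod: annotation
    timeout.reconciliation: 60s
```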
### CloudNativePG Configuration
Deployed via Helm chart v0.26.1 with values in `stacks/core/cloudnative-pg/values.yaml`:
**Operator Settings**:
- Namespace: `cloudnative-pg`
- Automated database cluster provisioning
- Custom resource definitions for Cluster, Database, and Pooler resources
**Storage Configuration**:
- Uses `csi-disk` storage class by default
- PVC provisioning for PostgreSQL data
- Backup storage integration (S3-compatible)
### Dex Configuration
Deployed via Helm chart v0.23.0 with values in `stacks/core/dex/values.yaml`:
**Issuer Configuration**:
```yaml
config:
  issuer: https://{DOMAIN_DEX}
  storage:
    type: memory # Use persistent storage for production
  oauth2:
    skipApprovalScreen: true
    alwaysShowLoginScreen: false
```
**Forgejo Connector**:
```yaml
connectors:
  - type: gitea
    id: forgejo
    name: Forgejo
    config:
      clientID: $FORGEJO_CLIENT_ID
      clientSecret: $FORGEJO_CLIENT_SECRET
      redirectURI: https://{DOMAIN_DEX}/callback
      baseURL: https://edp.buildth.ing
      orgs:
        - name: DevFW
```
**Static OAuth2 Clients**:
- ArgoCD: `controller-argocd-dex`
- Grafana: `controller-grafana-dex`
### Environment Variables
Core stack services use the following environment variables:
**Domain Configuration**:
- `DOMAIN_ARGOCD`: ArgoCD web interface URL
- `DOMAIN_DEX`: Dex authentication service URL
- `DOMAIN_GITEA`: Forgejo/Gitea repository URL
- `DOMAIN_GRAFANA`: Grafana observability dashboard URL
**Repository Configuration**:
- `CLIENT_REPO_ID`: Repository identifier for stack configurations
- `CLIENT_REPO_DOMAIN`: Git repository domain
- `CLIENT_REPO_ORG_NAME`: Organization name for stack instances
## Usage Examples
### Managing Applications with ArgoCD
Access and manage applications through ArgoCD:
```bash
# Login to ArgoCD CLI
argocd login argocd.${DOMAIN} --sso
# List all applications
argocd app list
# Get application status
argocd app get coder
# Sync application manually
argocd app sync coder
# View application logs
argocd app logs coder
# Diff application state
argocd app diff coder
```
### Creating a PostgreSQL Database
Deploy a PostgreSQL cluster using CloudNativePG:
```yaml
# database-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
  namespace: my-app
spec:
  instances: 3
  storage:
    size: 20Gi
    storageClass: csi-disk
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "256MB"
  bootstrap:
    initdb:
      database: appdb
      owner: appuser
```
Apply the configuration:
```bash
kubectl apply -f database-cluster.yaml
# Check cluster status
kubectl get cluster app-db -n my-app
kubectl get pods -n my-app -l cnpg.io/cluster=app-db
# Get connection credentials
kubectl get secret app-db-app -n my-app -o jsonpath='{.data.password}' | base64 -d
```
### Configuring SSO for Applications
Add OAuth2 applications to Dex for SSO integration:
```yaml
# Add to dex values.yaml
staticClients:
  - id: my-app-client
    redirectURIs:
      - 'https://myapp.{DOMAIN}/callback'
    name: 'My Application'
    secretEnv: MY_APP_CLIENT_SECRET
```
Configure the application to use Dex:
```bash
# Application OIDC configuration
OIDC_ISSUER=https://dex.${DOMAIN}
OIDC_CLIENT_ID=my-app-client
OIDC_CLIENT_SECRET=${MY_APP_CLIENT_SECRET}
OIDC_REDIRECT_URI=https://myapp.${DOMAIN}/callback
```
### Deploying Applications via ArgoCD
Create an ArgoCD Application manifest:
```yaml
# my-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/myorg/my-app'
    targetRevision: main
    path: k8s
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
Push it to the [stacks-instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances) repository so that ArgoCD picks it up.
## Integration Points
* **All Stacks**: Core stack is a prerequisite for all other EDP stacks
* **OTC Stack**: Provides ingress-nginx and cert-manager dependencies
* **Coder Stack**: Uses CloudNativePG for workspace database management
* **Forgejo Stack**: Integrates with Dex for SSO and ArgoCD for deployment
* **Observability Stack**: Uses Dex for Grafana authentication and ArgoCD for deployment
* **Provider Stack**: Deploys Terraform providers via ArgoCD
## Troubleshooting
### ArgoCD Not Accessible
**Problem**: Cannot access ArgoCD web interface
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n argocd
kubectl describe ingress -n argocd
```
2. Check ArgoCD server status:
```bash
kubectl get pods -n argocd
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server
```
3. Verify TLS certificate:
```bash
kubectl get certificate -n argocd
kubectl describe certificate -n argocd
```
4. Test DNS resolution:
```bash
nslookup argocd.${DOMAIN}
```
### Dex Authentication Failing
**Problem**: SSO login fails or redirects incorrectly
**Solution**:
1. Check Dex logs:
```bash
kubectl logs -n dex -l app.kubernetes.io/name=dex
```
2. Verify Forgejo connector configuration:
```bash
kubectl get secret -n dex
kubectl get configmap -n dex dex -o yaml
```
3. Test Dex issuer endpoint:
```bash
curl https://dex.${DOMAIN}/.well-known/openid-configuration
```
4. Verify OAuth2 client credentials match in both Dex and consuming application
### CloudNativePG Operator Not Running
**Problem**: PostgreSQL clusters fail to provision
**Solution**:
1. Check operator status:
```bash
kubectl get pods -n cloudnative-pg
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg
```
2. Verify CRDs are installed:
```bash
kubectl get crd | grep cnpg.io
kubectl describe crd clusters.postgresql.cnpg.io
```
3. Check operator logs for errors:
```bash
kubectl logs -n cloudnative-pg -l app.kubernetes.io/name=cloudnative-pg --tail=100
```
### Application Sync Failures
**Problem**: ArgoCD applications remain out of sync or fail to deploy
**Solution**:
1. Check application status:
```bash
argocd app get <app-name>
kubectl describe application <app-name> -n argocd
```
2. Review sync operation logs:
```bash
argocd app logs <app-name>
```
3. Verify repository access:
```bash
argocd repo list
argocd repo get <repo-url>
```
4. Check for resource conflicts or missing dependencies:
```bash
kubectl get events -n <app-namespace> --sort-by='.lastTimestamp'
```
### Database Connection Issues
**Problem**: Applications cannot connect to CloudNativePG databases
**Solution**:
1. Verify cluster is ready:
```bash
kubectl get cluster <cluster-name> -n <namespace>
kubectl describe cluster <cluster-name> -n <namespace>
```
2. Check database credentials secret:
```bash
kubectl get secret <cluster-name>-app -n <namespace>
kubectl get secret <cluster-name>-app -n <namespace> -o yaml
```
3. Test connection from a pod:
```bash
kubectl run -it --rm psql-test --image=postgres:16 --restart=Never -- \
psql "$(kubectl get secret <cluster-name>-app -n <namespace> -o jsonpath='{.data.uri}' | base64 -d)"
```
4. Review PostgreSQL logs:
```bash
kubectl logs -n <namespace> <cluster-name>-1
```
## Additional Resources
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
* [ArgoCD Best Practices](https://argo-cd.readthedocs.io/en/stable/user-guide/best_practices/)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [CloudNativePG Architecture](https://cloudnative-pg.io/documentation/current/architecture/)
* [Dex Documentation](https://dexidp.io/docs/)
* [Dex Connectors](https://dexidp.io/docs/connectors/)
* [OpenID Connect Specification](https://openid.net/connect/)
---
title: "Forgejo"
linkTitle: "Forgejo"
weight: 30
description: >
Self-hosted Git service with built-in CI/CD capabilities
---
## Overview
Forgejo is a self-hosted Git service that provides repository hosting, code collaboration, and integrated CI/CD workflows. As part of the Edge Developer Platform, Forgejo serves as the central code repository and continuous integration system, offering a complete DevOps platform with Git hosting, issue tracking, and automated build pipelines.
The Forgejo stack deploys a Forgejo server instance with PostgreSQL database backend, MinIO object storage, and Forgejo Runners for executing CI/CD workflows.
## Key Features
* **Git Repository Hosting**: Full-featured Git server with web interface for code management
* **Built-in CI/CD**: Forgejo Actions provide GitHub Actions-compatible workflow automation
* **Issue Tracking**: Integrated project management with issues, milestones, and pull requests
* **Container Registry**: Built-in Docker registry for container image storage
* **Code Review**: Pull request workflows with inline comments and approval processes
* **Scalable Runners**: Distributed runner architecture with Docker-in-Docker execution
* **S3 Object Storage**: MinIO integration for artifacts, LFS objects, and backups
## Repository
**Code**: [Forgejo Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/forgejo)
**Documentation**:
* [Forgejo Official Documentation](https://forgejo.org/docs/latest/)
* [Forgejo Actions Documentation](https://forgejo.org/docs/latest/user/actions/)
* [Forgejo Helm Chart Repository](https://code.forgejo.org/forgejo-helm/forgejo-helm)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* CloudNativePG operator (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Infrastructure deployed through [Infra Deploy](https://edp.buildth.ing/DevFW/infra-deploy)
### Quick Start
The Forgejo stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domain will be `forgejo.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Forgejo server (Helm chart v12.0.0)
- PostgreSQL database cluster (CloudNativePG)
- Forgejo Runners with Docker-in-Docker execution
- Ingress configuration with TLS
- Database credentials and storage secrets
### Verification
Verify the Forgejo deployment:
```bash
# Check ArgoCD applications status
kubectl get application forgejo-server -n argocd
kubectl get application forgejo-runner -n argocd
# Verify Forgejo server pods are running
kubectl get pods -n gitea
# Check PostgreSQL cluster status
kubectl get cluster -n gitea
# Verify Forgejo runners are active
kubectl get pods -n gitea -l app=forgejo-runner
# Verify ingress configuration
kubectl get ingress -n gitea
```
Access the Forgejo web interface at `https://{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Forgejo stack consists of:
**Forgejo Server**:
- Web application for Git repository management
- API server for Git operations and CI/CD orchestration
- Issue tracker and project management interface
- Container registry for Docker images
- Artifact storage via MinIO object storage
**Forgejo Runners**:
- 3-replica runner deployment for parallel job execution
- Docker-in-Docker (DinD) architecture for containerized builds
- Runner image: `code.forgejo.org/forgejo/runner:6.4.0`
- Build container: `docker:28.0.4-dind`
- Supports GitHub Actions-compatible workflows
**Storage Architecture**:
- 200Gi persistent volume for Git repositories (GPSSD storage)
- OTC S3 object storage for LFS objects and artifacts
- Encrypted volumes using KMS key integration
- S3-compatible backup storage (100GB)
**Networking**:
- SSH LoadBalancer service on port 32222 for Git operations
- HTTPS ingress with TLS termination for web interface
- Internal service communication via ClusterIP
## Configuration
### Forgejo Server Configuration
The Forgejo server is configured through Helm values in `stacks/forgejo/forgejo-server/values.yaml`:
**Application Settings**:
- `FORGEJO_IMAGE_TAG`: Forgejo container image version
- Application name: "EDP"
- Slogan: "Build your thing in minutes"
- User registration: Disabled by default
- Email notifications: Enabled
**Storage Configuration**:
```yaml
persistence:
  size: 200Gi
  storageClass: csi-disk
  annotations:
    everest.io/crypt-key-id: "{KMS_KEY_ID}"
    everest.io/disk-volume-type: GPSSD
```
**Database Configuration**:
Database credentials are sourced from Kubernetes secrets:
- `POSTGRES_HOST`: PostgreSQL hostname
- `POSTGRES_DB`: Database name
- `POSTGRES_USER`: Database username
- `POSTGRES_PASSWORD`: Database password
- SSL verification enabled
**Object Storage**:
- Endpoint: `obs.eu-de.otc.t-systems.com`
- Credentials from `gitea/forgejo-cloud-credentials` secret
- Used for artifacts, LFS objects, and backups
**External Services**:
- Redis for caching and session management
- Elasticsearch for issue indexing
- SMTP for email notifications
**SSH Configuration**:
```yaml
service:
  ssh:
    type: LoadBalancer
    port: 32222
```
### Forgejo Runner Configuration
Defined in `stacks/forgejo/forgejo-runner/dind-docker.yaml`:
**Deployment Specification**:
- 3 replicas for parallel execution
- Runner version: 6.4.0
- Docker DinD version: 28.0.4
**Runner Registration**:
- Offline registration using secret token
- Instance URL from configuration
- Predefined labels for Ubuntu 22.04 and latest (a registration sketch follows below)
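For orientation, a hedged sketch of a non-interactive registration with the `forgejo-runner` CLI; the token variable and label set are placeholders, and the deployed manifests may register runners differently:

```bash
# Hypothetical non-interactive registration against the platform's Forgejo instance.
forgejo-runner register --no-interactive \
  --instance "https://${DOMAIN_GITEA}" \
  --token "${RUNNER_REGISTRATION_TOKEN}" \
  --labels "ubuntu-22.04,ubuntu-latest"
```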
**Container Configuration**:
```yaml
runner:
  image: code.forgejo.org/forgejo/runner:6.4.0
  privileged: true
  securityContext:
    runAsUser: 0
    allowPrivilegeEscalation: true
dind:
  image: docker:28.0.4-dind
  privileged: true
  tlsCertDir: /certs
```
**Volume Management**:
- Docker certificates volume for TLS communication
- Runner data volume for registration and configuration
- Shared socket for container communication
### ArgoCD Application Configuration
**Server Application** (`template/stacks/forgejo/forgejo-server.yaml`):
- Name: `forgejo-server`
- Namespace: `gitea`
- Helm chart v12.0.0 from `https://code.forgejo.org/forgejo-helm/forgejo-helm.git`
- Automated self-healing enabled
- Values from `stacks-instances` repository
**Runner Application** (`template/stacks/forgejo/forgejo-runner.yaml`):
- Name: `forgejo-runner`
- Namespace: `argocd`
- Deployment manifests from `stacks-instances` repository
- Automated sync with unlimited retries
## Usage Examples
### Creating Your First Repository
After deployment, create and use Git repositories:
1. **Access Forgejo Interface**
```bash
open https://${DOMAIN_GITEA}
```
2. **Create a New Repository**
- Click "+" icon in top right
- Select "New Repository"
- Enter repository name and description
- Choose visibility (public/private)
- Initialize with README if desired
3. **Clone and Push Code**
```bash
# Clone the repository
git clone https://${DOMAIN_GITEA}/myorg/myrepo.git
cd myrepo
# Add your code
echo "# My Project" > README.md
git add README.md
git commit -m "Initial commit"
# Push to Forgejo
git push origin main
```
### Setting Up CI/CD with Forgejo Actions
Create automated workflows using Forgejo Actions:
1. **Create Workflow File**
```bash
mkdir -p .forgejo/workflows
cat > .forgejo/workflows/build.yaml << 'EOF'
name: Build and Test
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Build
        run: go build -v ./...
      - name: Test
        run: go test -v ./...
EOF
```
2. **Commit and Push Workflow**
```bash
git add .forgejo/workflows/build.yaml
git commit -m "Add CI/CD workflow"
git push origin main
```
3. **Monitor Workflow Execution**
- Navigate to repository in Forgejo web interface
- Click "Actions" tab
- View workflow runs and logs
### Building and Publishing Container Images
Use Forgejo to build and store Docker images:
```yaml
# .forgejo/workflows/docker.yaml
name: Build Container Image
on:
  push:
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Build image
        run: |
          docker build -t forgejo.${DOMAIN_GITEA}/myorg/myapp:${GITHUB_REF_NAME} .
      - name: Login to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login forgejo.${DOMAIN_GITEA} -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Push image
        run: |
          docker push forgejo.${DOMAIN_GITEA}/myorg/myapp:${GITHUB_REF_NAME}
```
### Using SSH for Git Operations
Configure SSH access for Git operations:
```bash
# Generate SSH key if needed
ssh-keygen -t ed25519 -C "your_email@example.com"
# Add public key to Forgejo
# Navigate to: Settings -> SSH / GPG Keys -> Add Key
# Configure SSH host
cat >> ~/.ssh/config << EOF
Host forgejo.${DOMAIN_GITEA}
  Port 32222
  User git
EOF
# Clone repository via SSH
git clone ssh://git@forgejo.${DOMAIN_GITEA}:32222/myorg/myrepo.git
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration and CloudNativePG operator for database management
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Coder Stack**: Development workspaces can clone repositories and trigger CI/CD workflows
* **Observability Stack**: Prometheus metrics collection enabled via ServiceMonitor
* **Dex (SSO)**: Can be configured for centralized authentication integration
## Troubleshooting
### Forgejo Server Not Starting
**Problem**: Forgejo server pods remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check PostgreSQL cluster status:
```bash
kubectl get cluster -n gitea
kubectl describe cluster -n gitea
```
2. Verify database credentials:
```bash
kubectl get secret -n gitea | grep postgres
```
3. Check Forgejo server logs:
```bash
kubectl logs -n gitea -l app=forgejo
```
4. Verify MinIO connectivity:
```bash
kubectl get secret minio-credential -n gitea
kubectl logs -n gitea -l app=forgejo | grep -i minio
```
### Cannot Access Forgejo Web Interface
**Problem**: Forgejo web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n gitea
kubectl describe ingress -n gitea
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n gitea
kubectl describe certificate -n gitea
```
3. Verify DNS resolution:
```bash
nslookup forgejo.${DOMAIN_GITEA}
```
4. Test service connectivity:
```bash
kubectl port-forward -n gitea svc/forgejo-http 3000:3000
curl http://localhost:3000
```
### Git Operations Fail Over SSH
**Problem**: Cannot clone or push repositories via SSH
**Solution**:
1. Verify SSH service is exposed:
```bash
kubectl get svc -n gitea -l app=forgejo
```
2. Check LoadBalancer external IP:
```bash
kubectl get svc -n gitea forgejo-ssh -o wide
```
3. Test SSH connectivity:
```bash
ssh -T -p 32222 git@${DOMAIN_GITEA}
```
4. Verify SSH public key is added to Forgejo account
### Forgejo Runners Not Executing Jobs
**Problem**: CI/CD workflows remain queued or fail to execute
**Solution**:
1. Check runner pod status:
```bash
kubectl get pods -n gitea -l app=forgejo-runner
kubectl logs -n gitea -l app=forgejo-runner
```
2. Verify runner registration:
```bash
kubectl exec -n gitea -it deployment/forgejo-runner -- \
forgejo-runner status
```
3. Check Docker-in-Docker daemon:
```bash
kubectl logs -n gitea -l app=forgejo-runner -c dind
```
4. Verify runner token secret exists:
```bash
kubectl get secret -n gitea | grep runner
```
5. Check Forgejo server can communicate with runners:
```bash
kubectl logs -n gitea -l app=forgejo | grep -i runner
```
### Database Connection Errors
**Problem**: Forgejo cannot connect to PostgreSQL database
**Solution**:
1. Verify PostgreSQL cluster health:
```bash
kubectl get pods -n gitea -l cnpg.io/cluster
kubectl logs -n gitea -l cnpg.io/cluster
```
2. Test database connection:
```bash
kubectl exec -n gitea -it <postgres-pod> -- \
psql -U postgres -c "\l"
```
3. Verify database credentials secret:
```bash
kubectl get secret -n gitea -o yaml | grep POSTGRES
```
4. Check database connection from Forgejo pod:
```bash
kubectl exec -n gitea -it <forgejo-pod> -- \
nc -zv <postgres-host> 5432
```
### Storage Issues
**Problem**: Repository pushes fail or object storage errors occur
**Solution**:
1. Check PVC status and capacity:
```bash
kubectl get pvc -n gitea
kubectl describe pvc -n gitea
```
2. Verify MinIO credentials and connectivity:
```bash
kubectl get secret minio-credential -n gitea
kubectl logs -n gitea -l app=forgejo | grep -i "s3\|minio"
```
3. Check available storage space:
```bash
kubectl exec -n gitea -it <forgejo-pod> -- df -h
```
4. Review storage class configuration:
```bash
kubectl get storageclass csi-disk -o yaml
```
## Additional Resources
* [Forgejo Documentation](https://forgejo.org/docs/latest/)
* [Forgejo Actions User Guide](https://forgejo.org/docs/latest/user/actions/)
* [Forgejo Helm Chart Documentation](https://code.forgejo.org/forgejo-helm/forgejo-helm)
* [Forgejo Runner Documentation](https://code.forgejo.org/forgejo/runner)
* [CloudNativePG Documentation](https://cloudnative-pg.io/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)
---
title: "Observability Client"
linkTitle: "Observability Client"
weight: 60
description: >
Core observability components for metrics collection, log aggregation, and monitoring
---
## Overview
The Observability Client stack provides essential monitoring and observability infrastructure for Kubernetes environments. As part of the Edge Developer Platform, it deploys client-side components that collect, process, and forward metrics and logs to centralized observability systems.
The stack integrates three core components: Kubernetes Metrics Server for resource metrics, Vector for log collection and forwarding, and Victoria Metrics for comprehensive metrics monitoring and alerting.
## Key Features
* **Resource Metrics**: Real-time CPU and memory metrics via Kubernetes Metrics Server
* **Log Aggregation**: Unified log collection and forwarding with Vector
* **Metrics Monitoring**: Comprehensive metrics collection, storage, and alerting with Victoria Metrics
* **Prometheus Compatibility**: Full Prometheus protocol support for metrics scraping
* **Multi-Tenant Support**: Configurable tenant isolation for metrics and logs
* **Automated Alerting**: Pre-configured alert rules with Alertmanager integration
* **Grafana Integration**: Built-in dashboard provisioning and datasource configuration
## Repository
**Code**: [Observability Client Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/observability-client)
**Documentation**:
* [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
* [Vector Documentation](https://vector.dev/docs/)
* [Victoria Metrics Documentation](https://docs.victoriametrics.com/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* cert-manager for certificate management (provided by `otc` stack)
* Observability backend services for receiving metrics and logs
### Quick Start
The Observability Client stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible.
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Metrics Server (Helm chart v3.12.2)
- Vector agent (Helm chart v0.43.0)
- Victoria Metrics k8s-stack (Helm chart v0.48.1)
- ServiceMonitor resources for Prometheus scraping
- Authentication secrets for remote write endpoints
### Verification
Verify the Observability Client deployment:
```bash
# Check ArgoCD application status
kubectl get application -n argocd | grep -E "metrics-server|vector|vm-client"
# Verify Metrics Server is running
kubectl get pods -n observability -l app.kubernetes.io/name=metrics-server
# Test metrics API
kubectl top nodes
kubectl top pods -A
# Verify Vector pods are running
kubectl get pods -n observability -l app.kubernetes.io/name=vector
# Check Victoria Metrics components
kubectl get pods -n observability -l app.kubernetes.io/name=victoria-metrics-k8s-stack
# Verify ServiceMonitor resources
kubectl get servicemonitor -n observability
```
## Architecture
### Component Architecture
The Observability Client stack consists of three integrated components:
**Metrics Server**:
- Collects resource metrics (CPU, memory) from kubelet
- Provides Metrics API for kubectl top and HPA
- Lightweight aggregator for cluster-wide resource usage
- Exposes ServiceMonitor for Prometheus scraping
**Vector Agent**:
- DaemonSet deployment for log collection across all nodes
- Processes and transforms Kubernetes logs
- Forwards logs to centralized Elasticsearch backend
- Injects cluster metadata and environment information
- Supports compression and bulk operations
**Victoria Metrics Stack**:
- VMAgent: Scrapes metrics from Kubernetes components and applications
- VMAlertmanager: Manages alert routing and notifications
- VMOperator: Manages VictoriaMetrics CRDs and lifecycle
- Integration with remote Victoria Metrics storage
- Supports multi-tenant metrics isolation
### Data Flow
```
Kubernetes Resources → Metrics Server → Metrics API
ServiceMonitor → VMAgent → Remote VictoriaMetrics
Application Logs → Vector Agent → Transform → Remote Elasticsearch
Prometheus Exporters → VMAgent → Remote VictoriaMetrics → VMAlertmanager
```
## Configuration
### Metrics Server Configuration
Configured in `stacks/observability-client/metrics-server/values.yaml`:
```yaml
metrics:
enabled: true
serviceMonitor:
enabled: true
```
**Key Settings**:
- Enables metrics collection endpoint
- Exposes ServiceMonitor for Prometheus-compatible scraping
- Deployed via Helm chart from `https://kubernetes-sigs.github.io/metrics-server/`
### Vector Configuration
Configured in `stacks/observability-client/vector/values.yaml`:
**Role**: Agent (DaemonSet deployment across nodes)
**Authentication**:
Credentials sourced from `simple-user-secret`:
- `VECTOR_USER`: Username for remote write authentication
- `VECTOR_PASSWORD`: Password for remote write authentication
**Data Sources**:
- `k8s`: Collects Kubernetes container logs
- `internal_metrics`: Gathers Vector internal metrics
**Log Processing** (see the sketch below):
The `parser` transform:
- Parses JSON from log messages
- Injects cluster environment metadata
- Removes the original message field
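As an illustrative sketch, the `parser` transform could be written as a Vector `remap` transform in VRL. The input name `k8s` matches the data sources listed above; the environment variable name is an assumption:

```yaml
transforms:
  parser:
    type: remap
    inputs:
      - k8s
    source: |
      # Parse JSON from the log message, if present
      parsed, err = parse_json(.message)
      if err == null {
        . = merge(., object!(parsed))
        # Inject cluster environment metadata (variable name assumed)
        .cluster_environment = "${CLUSTER_ENVIRONMENT:-unknown}"
        # Remove the original message field
        del(.message)
      }
```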
**Output Sink** (see the sketch after this list):
- Elasticsearch bulk API (v8)
- Basic authentication with environment variables
- Gzip compression enabled
- Custom headers: AccountID and ProjectID
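A minimal sketch of that sink in Vector's configuration; the endpoint URL and the environment variables carrying the header values are assumptions:

```yaml
sinks:
  elasticsearch:
    type: elasticsearch
    inputs:
      - parser
    endpoints:
      - https://elasticsearch.example.com   # assumed backend URL
    api_version: v8
    compression: gzip
    auth:
      strategy: basic
      user: ${VECTOR_USER}
      password: ${VECTOR_PASSWORD}
    request:
      headers:
        AccountID: ${ACCOUNT_ID}    # variable names assumed
        ProjectID: ${PROJECT_ID}
```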
### Victoria Metrics Stack Configuration
Configured in `stacks/observability-client/vm-client-stack/values.yaml`:
**Operator Settings**:
- Enabled with admission webhooks
- Managed by cert-manager for ArgoCD compatibility
**VMAgent Configuration**:
- Basic authentication for remote write
- Credentials from `vm-remote-write-secret`
- Stream parsing enabled
- Drop original labels to reduce memory footprint
**Monitoring Targets**:
- Node exporter for hardware metrics
- kube-state-metrics for Kubernetes object states
- Kubelet metrics (cadvisor)
- Kubernetes control plane components (API server, etcd, scheduler, controller manager)
- CoreDNS metrics
**Alertmanager Integration**:
- Slack notification templates
- Configurable routing rules
- TLS support for secure communication
**Storage Options**:
- VMSingle: Single-node deployment
- VMCluster: Distributed deployment with replication
- Configurable retention period
## ArgoCD Application Configuration
**Metrics Server Application** (`template/stacks/observability-client/metrics-server.yaml`):
- Name: `metrics-server`
- Chart version: 3.12.2
- Automated sync with self-heal enabled
- Namespace: `observability`
**Vector Application** (`template/stacks/observability-client/vector.yaml`):
- Name: `vector`
- Chart version: 0.43.0
- Automated sync with self-heal enabled
- Namespace: `observability`
**Victoria Metrics Application** (`template/stacks/observability-client/vm-client-stack.yaml`):
- Name: `vm-client`
- Chart version: 0.48.1
- Automated sync with self-heal enabled
- Namespace: `observability`
- References manifests from instance repository
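As a hedged sketch (not the exact manifest), the Metrics Server Application roughly corresponds to an ArgoCD resource like the following, with the repository URL and sync options inferred from the chart source and settings above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: metrics-server
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://kubernetes-sigs.github.io/metrics-server/
    chart: metrics-server
    targetRevision: 3.12.2
  destination:
    server: https://kubernetes.default.svc
    namespace: observability
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```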
## Usage Examples
### Querying Resource Metrics
Access resource metrics collected by Metrics Server:
```bash
# View node resource usage
kubectl top nodes
# View pod resource usage across all namespaces
kubectl top pods -A
# View pod resource usage in specific namespace
kubectl top pods -n observability
# Sort pods by CPU usage
kubectl top pods -A --sort-by=cpu
# Sort pods by memory usage
kubectl top pods -A --sort-by=memory
```
### Using Metrics for Autoscaling
Create Horizontal Pod Autoscaler based on metrics:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
```
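After applying the manifest, the HPA pulls utilization figures from the Metrics API served by Metrics Server (file name assumed):

```bash
kubectl apply -f myapp-hpa.yaml

# Watch current utilization and replica count
kubectl get hpa myapp-hpa --watch

# Inspect scaling events and the metrics the HPA sees
kubectl describe hpa myapp-hpa
```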
### Accessing Application Logs
Vector automatically collects logs from all containers. View logs in your centralized Elasticsearch/Kibana:
```bash
# Logs are automatically forwarded to Elasticsearch
# Access via Kibana dashboard or Elasticsearch API
# Example: Query logs via Elasticsearch API
curl -u $VECTOR_USER:$VECTOR_PASSWORD \
-X GET "https://elasticsearch.example.com/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": {
"kubernetes.namespace": "my-namespace"
}
}
}'
```
### Querying Victoria Metrics
Query metrics collected by Victoria Metrics:
```bash
# Access Victoria Metrics query API
# Metrics are forwarded to remote Victoria Metrics instance
# Example PromQL queries:
# - Container CPU usage: container_cpu_usage_seconds_total
# - Pod memory usage: container_memory_usage_bytes
# - Node disk I/O: node_disk_io_time_seconds_total
# Query via Victoria Metrics API
curl -X POST https://victoriametrics.example.com/api/v1/query \
-d 'query=up' \
-d 'time=2025-12-16T00:00:00Z'
```
### Creating Custom ServiceMonitors
Expose application metrics for collection:
```yaml
apiVersion: v1
kind: Service
metadata:
name: myapp-metrics
labels:
app: myapp
spec:
ports:
- name: metrics
port: 8080
targetPort: 8080
selector:
app: myapp
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: myapp-monitor
namespace: observability
spec:
selector:
matchLabels:
app: myapp
endpoints:
- port: metrics
path: /metrics
interval: 30s
```
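Note that a ServiceMonitor selects Services only in its own namespace unless you add a `namespaceSelector` under `spec` (a standard Prometheus Operator field), for example:

```yaml
  namespaceSelector:
    matchNames:
      - my-namespace
```

Because the stack runs the VictoriaMetrics operator, ServiceMonitor objects are typically converted into `VMServiceScrape` resources automatically; `kubectl get vmservicescrape -n observability` should list the converted object.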
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires cert-manager for certificate management
* **Observability Stack**: Forwards metrics and logs to centralized observability backend
* **All Application Stacks**: Collects metrics and logs from all platform applications
## Troubleshooting
### Metrics Server Not Responding
**Problem**: `kubectl top` commands fail or return no data
**Solution**:
1. Check Metrics Server pod status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=metrics-server
kubectl logs -n observability -l app.kubernetes.io/name=metrics-server
```
2. Verify kubelet metrics endpoint:
```bash
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```
3. Check ServiceMonitor configuration:
```bash
kubectl get servicemonitor -n observability -o yaml
```
### Vector Not Forwarding Logs
**Problem**: Logs are not appearing in Elasticsearch
**Solution**:
1. Check Vector agent status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=vector
kubectl logs -n observability -l app.kubernetes.io/name=vector --tail=50
```
2. Verify authentication secret:
```bash
kubectl get secret simple-user-secret -n observability
kubectl get secret simple-user-secret -n observability -o jsonpath='{.data.username}' | base64 -d
```
3. Test Elasticsearch connectivity:
```bash
kubectl exec -it -n observability $(kubectl get pod -n observability -l app.kubernetes.io/name=vector -o jsonpath='{.items[0].metadata.name}') -- \
curl -u $VECTOR_USER:$VECTOR_PASSWORD https://elasticsearch.example.com/_cluster/health
```
4. Check Vector internal metrics:
```bash
kubectl port-forward -n observability svc/vector 9090:9090
curl http://localhost:9090/metrics
```
### Victoria Metrics Not Scraping
**Problem**: Metrics are not being collected or forwarded
**Solution**:
1. Check VMAgent status:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=vmagent
kubectl logs -n observability -l app.kubernetes.io/name=vmagent
```
2. Verify remote write secret:
```bash
kubectl get secret vm-remote-write-secret -n observability
kubectl get secret vm-remote-write-secret -n observability -o jsonpath='{.data.username}' | base64 -d
```
3. Check ServiceMonitor targets:
```bash
kubectl get servicemonitor -n observability
kubectl describe servicemonitor metrics-server -n observability
```
4. Verify operator is running:
```bash
kubectl get pods -n observability -l app.kubernetes.io/name=victoria-metrics-operator
kubectl logs -n observability -l app.kubernetes.io/name=victoria-metrics-operator
```
### High Memory Usage
**Problem**: Victoria Metrics or Vector consuming excessive memory
**Solution**:
1. For Victoria Metrics, verify `dropOriginalLabels` is enabled:
```bash
kubectl get vmagent -n observability -o yaml | grep dropOriginalLabels
```
2. Reduce scrape intervals for high-cardinality metrics:
```yaml
# Edit ServiceMonitor
spec:
endpoints:
- interval: 60s # Increase from 30s
```
3. Filter unnecessary logs in Vector:
```yaml
# Add filter transform to Vector configuration
transforms:
filter:
type: filter
condition: '.kubernetes.namespace != "kube-system"'
```
4. Check resource limits:
```bash
kubectl describe pod -n observability -l app.kubernetes.io/name=vmagent
kubectl describe pod -n observability -l app.kubernetes.io/name=vector
```
### Certificate Issues
**Problem**: TLS certificate errors in logs
**Solution**:
1. Verify cert-manager is running:
```bash
kubectl get pods -n cert-manager
```
2. Check certificate status:
```bash
kubectl get certificate -n observability
kubectl describe certificate -n observability
```
3. Review webhook configuration:
```bash
kubectl get validatingwebhookconfigurations | grep victoria-metrics
kubectl get mutatingwebhookconfigurations | grep victoria-metrics
```
4. Restart operator if needed:
```bash
kubectl rollout restart deployment victoria-metrics-operator -n observability
```
## Additional Resources
* [Kubernetes Metrics Server Documentation](https://github.com/kubernetes-sigs/metrics-server)
* [Vector Documentation](https://vector.dev/docs/)
* [Victoria Metrics Documentation](https://docs.victoriametrics.com/)
* [Victoria Metrics Operator](https://docs.victoriametrics.com/operator/)
* [Prometheus Operator API](https://prometheus-operator.dev/docs/operator/api/)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)

---
title: "Observability"
linkTitle: "Observability"
weight: 50
description: >
Comprehensive monitoring, metrics, and logging for Kubernetes infrastructure
---
## Overview
The Observability stack provides enterprise-grade monitoring, metrics collection, and logging capabilities for the Edge Developer Platform. Built on VictoriaMetrics and Grafana, it offers a complete observability solution with pre-configured dashboards, alerting, and SSO integration.
The stack deploys VictoriaMetrics for metrics storage and querying, Grafana for visualization, VictoriaLogs for log aggregation, and VMAuth for authenticated access to monitoring endpoints.
## Key Features
* **Metrics Collection**: VictoriaMetrics-based Kubernetes monitoring with long-term storage
* **Visualization**: Grafana with pre-built dashboards for ArgoCD, Ingress-Nginx, and infrastructure components
* **Log Aggregation**: VictoriaLogs for centralized logging with Grafana integration
* **SSO Integration**: OAuth authentication through Dex with role-based access control
* **Alerting**: Alertmanager with email notifications for critical events
* **Secure Access**: TLS-enabled ingress with authentication proxy (VMAuth)
* **Persistent Storage**: Encrypted volumes with configurable retention policies
## Repository
**Code**: [Observability Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/observability)
**Documentation**:
* [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
* [Grafana Documentation](https://grafana.com/docs/)
* [Grafana Operator Documentation](https://grafana.github.io/grafana-operator/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Dex SSO provider (provided by `core` stack)
* Infrastructure deployed through [Infra Deploy](https://edp.buildth.ing/DevFW/infra-deploy)
### Quick Start
The Observability stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domains will be `vmauth.test-me.t09.de` and `grafana.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- VictoriaMetrics Operator and components
- VictoriaMetrics Single (metrics storage)
- VMAuth (authentication proxy)
- Alertmanager (alerting)
- Grafana Operator
- Grafana instance with OAuth
- VictoriaLogs datasource
- Pre-configured dashboards
- Ingress configurations with TLS
### Verification
Verify the Observability deployment:
```bash
# Check ArgoCD applications status
kubectl get application grafana-operator -n argocd
kubectl get application victoria-k8s-stack -n argocd
# Verify VictoriaMetrics components are running
kubectl get pods -n observability
# Check Grafana instance status
kubectl get grafana grafana -n observability
# Verify ingress configurations
kubectl get ingress -n observability
```
Access the monitoring interfaces:
* Grafana: `https://grafana.{DOMAIN_O12Y}`
## Architecture
### Component Architecture
The Observability stack consists of multiple integrated components:
**VictoriaMetrics Components**:
- **VictoriaMetrics Operator**: Manages VictoriaMetrics custom resources
- **VictoriaMetrics Single**: Standalone metrics storage with 20Gi storage and 1-month retention
- **VMAgent**: Scrapes metrics from Kubernetes components (kubelet, CoreDNS, kube-apiserver, etcd)
- **VMAuth**: Authentication proxy on port 8427 for secure metrics access
- **VMAlertmanager**: Handles alert routing and notifications
**Grafana Components**:
- **Grafana Operator**: Manages Grafana instances and dashboards as Kubernetes resources
- **Grafana Instance**: Web application for metrics visualization with OAuth authentication
- **Pre-configured Dashboards**: ArgoCD, Ingress-Nginx, VictoriaLogs monitoring
**Logging**:
- **VictoriaLogs**: Log aggregation service integrated as Grafana datasource
**Storage**:
- VictoriaMetrics Single: 20Gi persistent storage on `csi-disk` storage class
- Grafana: 10Gi persistent storage on `csi-disk` storage class with KMS encryption
- Configurable retention: 1 month for metrics, minimum 24 hours enforced
**Networking**:
- Nginx ingress with TLS termination for Grafana and VMAuth
- cert-manager integration for automatic certificate management
- Internal ClusterIP services for component communication
## Configuration
### VictoriaMetrics Configuration
Key configuration in `stacks/observability/victoria-k8s-stack/values.yaml`:
**Operator Settings**:
```yaml
victoria-metrics-operator:
enabled: true
operator:
enable_converter_ownership: true
admissionWebhooks:
certManager:
enabled: true
issuer:
name: main
```
**Storage Configuration**:
```yaml
vmsingle:
enabled: true
spec:
retentionPeriod: "1"
storage:
storageClassName: csi-disk
resources:
requests:
storage: 20Gi
```
**VMAuth Configuration**:
```yaml
vmauth:
enabled: true
spec:
port: "8427"
ingress:
enabled: true
ingressClassName: nginx
hosts:
- name: "{{{ .Env.DOMAIN_O12Y }}}"
tls:
- secretName: vmauth-tls-secret
hosts:
- "{{{ .Env.DOMAIN_O12Y }}}"
annotations:
cert-manager.io/cluster-issuer: main
```
**Monitoring Targets**:
- Kubelet (cadvisor, probes, resources metrics)
- CoreDNS
- etcd
- kube-apiserver
**Disabled Collectors** (to avoid alerts on managed clusters):
- kube-controller-manager
- kube-scheduler
- kube-proxy
### Alertmanager Configuration
Email alerting configured in `values.yaml`:
```yaml
alertmanager:
spec:
externalURL: "https://{{{ .Env.DOMAIN_O12Y }}}"
configSecret: vmalertmanager-config
config:
route:
routes:
- matchers:
- severity =~ "critical|major"
receiver: mail
receivers:
- name: 'mail'
email_configs:
- to: 'alerts@example.com'
from: 'monitoring@example.com'
smarthost: 'mail.mms-support.de:465'
auth_username:
name: email-user-credentials
key: username
auth_password:
name: email-user-credentials
key: password
```
### Grafana Configuration
Grafana instance configuration in `stacks/observability/grafana-operator/manifests/grafana.yaml`:
**OAuth/SSO Integration**:
```yaml
config:
auth.generic_oauth:
enabled: "true"
disable_login_form: "true"
client_id: "$__env{GF_AUTH_GENERIC_OAUTH_CLIENT_ID}"
client_secret: "$__env{GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET}"
scopes: "openid email profile offline_access groups"
auth_url: "https://dex.{DOMAIN}/auth"
token_url: "https://dex.{DOMAIN}/token"
api_url: "https://dex.{DOMAIN}/userinfo"
role_attribute_path: "contains(groups[*], 'DevFW') && 'Admin' || 'Viewer'"
```
**Storage**:
```yaml
deployment:
spec:
template:
spec:
volumes:
- name: grafana-data
persistentVolumeClaim:
claimName: grafana-pvc
persistentVolumeClaim:
spec:
storageClassName: csi-disk
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
**Ingress**:
```yaml
ingress:
spec:
ingressClassName: nginx
rules:
- host: "{{{ .Env.DOMAIN_GRAFANA }}}"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana-service
port:
number: 3000
tls:
- hosts:
- "{{{ .Env.DOMAIN_GRAFANA }}}"
secretName: grafana-tls-secret
```
### ArgoCD Application Configuration
**Grafana Operator Application** (`template/stacks/observability/grafana-operator.yaml`):
- Name: `grafana-operator`
- Chart: `grafana-operator` v5.18.0 from `ghcr.io/grafana/helm-charts`
- Automated sync with self-healing enabled
- Namespace: `observability`
**VictoriaMetrics Stack Application** (`template/stacks/observability/victoria-k8s-stack.yaml`):
- Name: `victoria-k8s-stack`
- Chart: `victoria-metrics-k8s-stack` v0.48.1 from `https://victoriametrics.github.io/helm-charts/`
- Automated self-healing enabled
- Creates namespace automatically
## Usage Examples
### Accessing Grafana
Access Grafana through SSO:
1. **Navigate to Grafana**
```bash
open https://grafana.${DOMAIN_GRAFANA}
```
2. **Authenticate via Dex**
- Click "Sign in with OAuth"
- Authenticate through configured identity provider
- Users in `DevFW` group receive Admin role, others receive Viewer role
### Querying Metrics
Query VictoriaMetrics directly:
```bash
# Access VMAuth endpoint
curl -u username:password https://vmauth.${DOMAIN_O12Y}/api/v1/query \
-d 'query=up' | jq
# Query pod CPU usage
curl -u username:password https://vmauth.${DOMAIN_O12Y}/api/v1/query \
-d 'query=container_cpu_usage_seconds_total' | jq
# Query with time range
curl -u username:password https://vmauth.${DOMAIN_O12Y}/api/v1/query_range \
-d 'query=container_memory_usage_bytes' \
-d 'start=2024-01-01T00:00:00Z' \
-d 'end=2024-01-01T23:59:59Z' \
-d 'step=5m' | jq
```
### Creating Custom Dashboards
Create custom Grafana dashboards as Kubernetes resources:
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
name: custom-app-dashboard
namespace: observability
spec:
instanceSelector:
matchLabels:
dashboards: "grafana"
json: |
{
"dashboard": {
"title": "Custom Application Metrics",
"panels": [
{
"title": "Request Rate",
"targets": [
{
"expr": "rate(http_requests_total[5m])",
"datasource": "VictoriaMetrics"
}
]
}
]
}
}
```
Apply the dashboard:
```bash
kubectl apply -f custom-dashboard.yaml
```
### Viewing Logs in Grafana
Access VictoriaLogs through Grafana:
1. Navigate to Grafana `https://grafana.${DOMAIN_GRAFANA}`
2. Go to Explore
3. Select "VictoriaLogs" datasource
4. Use LogsQL queries (VictoriaLogs' query language):
```
{namespace="default"}
{app="nginx"} |= "error"
{namespace="observability"} | json | level="error"
```
### Setting Up Custom Alerts
Create custom alert rules using VMRule:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMRule
metadata:
name: custom-app-alerts
namespace: observability
spec:
groups:
- name: custom-app
interval: 30s
rules:
- alert: HighErrorRate
expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate detected"
description: "Error rate is {{ $value }} requests/sec"
```
Push the alert rule to the [stacks instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances/src/branch/main/otc/observability.t09.de/stacks/observability/victoria-k8s-stack/manifests) repository.
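Once ArgoCD syncs the manifest, confirm the rule was created and loaded (names follow the example above):

```bash
# Confirm the VMRule resource exists
kubectl get vmrule custom-app-alerts -n observability

# Check events and status for rule-loading errors
kubectl describe vmrule custom-app-alerts -n observability
```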
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Dex (SSO)**: Integrated for Grafana authentication with role-based access control
* **All Platform Services**: Automatically collects metrics from Kubernetes components and platform services
* **Application Stacks**: Provides monitoring for Coder, Forgejo, and other deployed services
## Troubleshooting
### VictoriaMetrics Pods Not Starting
**Problem**: VictoriaMetrics components remain in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check VictoriaMetrics resources:
```bash
kubectl get vmsingle,vmagent,vmalertmanager -n observability
kubectl describe vmsingle vmsingle -n observability
```
2. Verify persistent volume claims:
```bash
kubectl get pvc -n observability
kubectl describe pvc vmstorage-vmsingle-0 -n observability
```
3. Check operator logs:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=victoria-metrics-operator
```
### Grafana Not Accessible
**Problem**: Grafana web interface is not accessible at configured URL
**Solution**:
1. Verify Grafana instance status:
```bash
kubectl get grafana grafana -n observability
kubectl describe grafana grafana -n observability
```
2. Check Grafana pod logs:
```bash
kubectl logs -n observability -l app=grafana
```
3. Verify ingress configuration:
```bash
kubectl get ingress -n observability
kubectl describe ingress grafana-ingress -n observability
```
4. Check TLS certificate status:
```bash
kubectl get certificate -n observability
kubectl describe certificate grafana-tls-secret -n observability
```
### OAuth Authentication Failing
**Problem**: Cannot authenticate to Grafana via SSO
**Solution**:
1. Verify Dex is running:
```bash
kubectl get pods -n core -l app=dex
kubectl logs -n core -l app=dex
```
2. Check OAuth client secret:
```bash
kubectl get secret dex-grafana-client -n observability
kubectl describe secret dex-grafana-client -n observability
```
3. Review Grafana OAuth configuration:
```bash
kubectl get grafana grafana -n observability -o yaml | grep -A 20 auth.generic_oauth
```
4. Check Grafana logs for OAuth errors:
```bash
kubectl logs -n observability -l app=grafana | grep -i oauth
```
### Metrics Not Appearing
**Problem**: Metrics not showing up in Grafana or VictoriaMetrics
**Solution**:
1. Check VMAgent scraping status:
```bash
kubectl get vmagent -n observability
kubectl logs -n observability -l app.kubernetes.io/name=vmagent
```
2. Verify service monitors are created:
```bash
kubectl get vmservicescrape -n observability
kubectl get vmpodscrape -n observability
```
3. Check target endpoints:
```bash
# Access VMAgent UI (port-forward if needed)
kubectl port-forward -n observability svc/vmagent 8429:8429
open http://localhost:8429/targets
```
4. Verify VictoriaMetrics Single is accepting data:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=vmsingle
```
### Alerts Not Sending
**Problem**: Alertmanager not sending email notifications
**Solution**:
1. Verify Alertmanager configuration:
```bash
kubectl get vmalertmanager -n observability
kubectl describe vmalertmanager vmalertmanager -n observability
```
2. Check email credentials secret:
```bash
kubectl get secret email-user-credentials -n observability
kubectl describe secret email-user-credentials -n observability
```
3. Review Alertmanager logs:
```bash
kubectl logs -n observability -l app.kubernetes.io/name=vmalertmanager
```
4. Test alert firing manually:
```bash
# Access Alertmanager UI
kubectl port-forward -n observability svc/vmalertmanager 9093:9093
open http://localhost:9093
```
### High Storage Usage
**Problem**: VictoriaMetrics storage running out of space
**Solution**:
1. Check current storage usage:
```bash
kubectl exec -it -n observability vmsingle-0 -- df -h /storage
```
2. Reduce retention period in `values.yaml`:
```yaml
vmsingle:
spec:
retentionPeriod: "15d" # Reduce from 1 month
```
3. Increase PVC size:
```bash
kubectl patch pvc vmstorage-vmsingle-0 -n observability \
-p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
```
4. Monitor storage metrics in Grafana for capacity planning
## Additional Resources
* [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
* [VictoriaMetrics Operator Documentation](https://docs.victoriametrics.com/operator/)
* [Grafana Documentation](https://grafana.com/docs/grafana/latest/)
* [Grafana Operator Documentation](https://grafana.github.io/grafana-operator/docs/)
* [VictoriaLogs Documentation](https://docs.victoriametrics.com/victorialogs/)
* [Prometheus Querying Basics](https://prometheus.io/docs/prometheus/latest/querying/basics/)
* [PromQL for VictoriaMetrics](https://docs.victoriametrics.com/metricsql/)

---
title: "OTC"
linkTitle: "OTC"
weight: 10
description: >
Open Telekom Cloud infrastructure components for ingress, TLS, and storage
---
## Overview
The OTC (Open Telekom Cloud) stack provides essential infrastructure components for deploying applications on Open Telekom Cloud environments. It configures ingress routing, automated TLS certificate management, and cloud-native storage provisioning tailored specifically for OTC's Kubernetes infrastructure.
This stack serves as a foundational layer that other platform stacks depend on for external access, secure communication, and persistent storage.
## Key Features
* **Automated TLS Certificate Management**: Let's Encrypt integration via cert-manager for automatic certificate provisioning and renewal
* **Cloud Load Balancer Integration**: Nginx ingress controller configured with OTC-specific Elastic Load Balancer (ELB) annotations
* **Native Storage Provisioning**: Default StorageClass using Huawei FlexVolume provisioner for block storage
* **Prometheus Metrics**: Built-in monitoring capabilities for ingress traffic and performance
* **High Availability**: Rolling update strategy with minimal downtime
* **HTTP-01 Challenge Support**: ACME validation through ingress for certificate issuance
## Repository
**Code**: [OTC Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/otc)
**Documentation**:
* [cert-manager Documentation](https://cert-manager.io/docs/)
* [ingress-nginx Documentation](https://kubernetes.github.io/ingress-nginx/)
* [Open Telekom Cloud Documentation](https://docs.otc.t-systems.com/)
## Getting Started
### Prerequisites
* Kubernetes cluster running on Open Telekom Cloud
* ArgoCD installed (provided by `core` stack)
* Environment variables configured:
- `LOADBALANCER_ID`: OTC Elastic Load Balancer ID
- `LOADBALANCER_IP`: OTC Elastic Load Balancer IP address
- `CLIENT_REPO_DOMAIN`: Git repository domain
- `CLIENT_REPO_ORG_NAME`: Git repository organization
- `CLIENT_REPO_ID`: Client repository identifier
- `DOMAIN`: Domain name for the environment
### Quick Start
The OTC stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible.
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- cert-manager with ClusterIssuer for Let's Encrypt
- ingress-nginx controller with OTC load balancer integration
- Default StorageClass for OTC block storage
### Verification
Verify the OTC stack deployment:
```bash
# Check ArgoCD applications status
kubectl get application otc -n argocd
kubectl get application cert-manager -n argocd
kubectl get application ingress-nginx -n argocd
kubectl get application storageclass -n argocd
# Verify cert-manager pods
kubectl get pods -n cert-manager
# Check ingress-nginx controller
kubectl get pods -n ingress-nginx
# Verify ClusterIssuer status
kubectl get clusterissuer main
# Check StorageClass
kubectl get storageclass default
```
## Architecture
### Component Architecture
The OTC stack consists of three primary components:
**cert-manager**:
- Automates TLS certificate lifecycle management
- Integrates with Let's Encrypt ACME server (production endpoint)
- Uses HTTP-01 challenge validation via ingress
- Creates and manages certificates as Kubernetes resources
- Single replica deployment
**ingress-nginx**:
- Kubernetes ingress controller based on Nginx
- Routes external traffic to internal services
- Integrated with OTC Elastic Load Balancer (ELB)
- Supports TLS termination with cert-manager issued certificates
- Rolling update strategy with max 1 unavailable pod
- Prometheus metrics exporter with ServiceMonitor
**StorageClass**:
- Default storage provisioner for persistent volumes
- Uses Huawei FlexVolume driver (`flexvolume-huawei.com/fuxivol`)
- SATA block storage type
- Immediate volume binding mode
- Supports dynamic volume expansion
### Integration Flow
```
External Traffic → OTC ELB → ingress-nginx → Kubernetes Services
                                  ↑
                     cert-manager (TLS certificates)
                                  ↑
                       Let's Encrypt ACME
```
## Configuration
### cert-manager Configuration
**Helm Values** (`stacks/otc/cert-manager/values.yaml`):
```yaml
crds:
enabled: true
replicaCount: 1
```
**ClusterIssuer** (`stacks/otc/cert-manager/manifests/clusterissuer.yaml`):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: main
spec:
acme:
email: admin@think-ahead.tech
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: cluster-issuer-account-key
solvers:
- http01:
ingress:
ingressClassName: nginx
```
**Key Settings**:
- CRDs installed automatically
- Production Let's Encrypt ACME endpoint
- HTTP-01 validation through nginx ingress
- ClusterIssuer named `main` for cluster-wide certificate issuance
### ingress-nginx Configuration
**Helm Values** (`stacks/otc/ingress-nginx/values.yaml`):
```yaml
controller:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
service:
annotations:
kubernetes.io/elb.class: union
kubernetes.io/elb.port: '80'
kubernetes.io/elb.id: {{{ .Env.LOADBALANCER_ID }}}
kubernetes.io/elb.ip: {{{ .Env.LOADBALANCER_IP }}}
ingressClassResource:
name: nginx
allowSnippetAnnotations: true
config:
proxy-buffer-size: 32k
use-forwarded-headers: "true"
metrics:
enabled: true
serviceMonitor:
additionalLabels:
release: "ingress-nginx"
enabled: true
```
**Key Settings**:
- **OTC Load Balancer Integration**: Annotations configure connection to OTC ELB
- **Rolling Updates**: Minimizes downtime with 1 pod unavailable during updates
- **Snippet Annotations**: Enabled for advanced ingress configuration (idpbuilder compatibility)
- **Proxy Buffer**: 32k buffer size for handling large headers
- **Forwarded Headers**: Preserves original client information through proxies
- **Metrics**: Prometheus ServiceMonitor for observability
### StorageClass Configuration
**StorageClass** (`stacks/otc/storageclass/storageclass.yaml`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
name: default
parameters:
kubernetes.io/hw:passthrough: "true"
kubernetes.io/storagetype: BS
kubernetes.io/volumetype: SATA
kubernetes.io/zone: eu-de-02
provisioner: flexvolume-huawei.com/fuxivol
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
**Key Settings**:
- **Default StorageClass**: Automatically used when no StorageClass specified
- **OTC Zone**: Provisioned in `eu-de-02` availability zone
- **SATA Volumes**: Block storage (BS) with SATA performance tier
- **Volume Expansion**: Supports resizing persistent volumes dynamically
- **Reclaim Policy**: Volumes deleted when PersistentVolumeClaim is removed
### ArgoCD Application Configuration
**Registry Application** (`template/registry/otc.yaml`):
- Name: `otc`
- Manages the OTC stack directory
- Automated sync with prune and self-heal enabled
- Creates namespaces automatically
**Component Applications**:
**cert-manager** (referenced in stack):
- Deploys cert-manager Helm chart
- Automated self-healing enabled
- Includes ClusterIssuer manifest for Let's Encrypt
**ingress-nginx** (`template/stacks/otc/ingress-nginx.yaml`):
- Deploys from official Kubernetes ingress-nginx repository
- Chart version: helm-chart-4.12.1
- References environment-specific values from stacks-instances repository
**storageclass** (`template/stacks/otc/storageclass.yaml`):
- Deploys StorageClass manifest
- Managed as ArgoCD Application
- Automated sync with unlimited retries
## Usage Examples
### Creating an Ingress with Automatic TLS
Create an ingress resource that automatically provisions a TLS certificate:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app
namespace: my-namespace
annotations:
cert-manager.io/cluster-issuer: main
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- myapp.example.com
secretName: myapp-tls
rules:
- host: myapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app-service
port:
number: 80
```
cert-manager will automatically:
1. Detect the ingress with `cert-manager.io/cluster-issuer` annotation
2. Create a Certificate resource
3. Request certificate from Let's Encrypt using HTTP-01 challenge
4. Store certificate in `myapp-tls` secret
5. Renew certificate before expiration
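You can watch this process through cert-manager's resources; the names below follow the example above (ingress-shim names the Certificate after its `secretName`):

```bash
# Certificate created from the ingress annotation
kubectl get certificate myapp-tls -n my-namespace

# Follow the ACME order and HTTP-01 challenge as they progress
kubectl get certificaterequests,orders,challenges -n my-namespace

# READY becomes True once the signed certificate lands in the myapp-tls secret
kubectl describe certificate myapp-tls -n my-namespace
```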
### Creating a PersistentVolumeClaim
Use the default OTC StorageClass for persistent storage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-data
namespace: my-namespace
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
  # storageClassName is omitted so the cluster-default StorageClass ("default", defined above) is used
```
### Expanding an Existing Volume
Resize a persistent volume by editing the PVC:
```bash
# Edit the PVC storage request
kubectl patch pvc my-data -n my-namespace -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# Verify expansion
kubectl get pvc my-data -n my-namespace
```
The volume will expand automatically due to `allowVolumeExpansion: true` in the StorageClass.
### Custom Ingress Configuration
Use nginx ingress snippets for advanced routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: advanced-app
annotations:
cert-manager.io/cluster-issuer: main
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Custom-Header: value";
if ($http_user_agent ~* "bot") {
return 403;
}
spec:
ingressClassName: nginx
rules:
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app-service
port:
number: 8080
```
## Integration Points
* **Core Stack**: Requires ArgoCD for deployment orchestration
* **All Application Stacks**: Depends on OTC stack for:
- External access via ingress-nginx
- TLS certificates via cert-manager
- Persistent storage via default StorageClass
* **Observability Stack**: ingress-nginx metrics exported to Prometheus
* **Coder Stack**: Uses ingress and cert-manager for workspace access
* **Forgejo Stack**: Requires ingress and TLS for Git repository access
## Troubleshooting
### Certificate Issuance Fails
**Problem**: Certificate remains in `Pending` state and is not issued
**Solution**:
1. Check Certificate status:
```bash
kubectl get certificate -A
kubectl describe certificate <cert-name> -n <namespace>
```
2. Verify ClusterIssuer is ready:
```bash
kubectl get clusterissuer main
kubectl describe clusterissuer main
```
3. Check cert-manager logs:
```bash
kubectl logs -n cert-manager -l app=cert-manager
```
4. Verify HTTP-01 challenge can reach ingress:
```bash
kubectl get challenges -A
kubectl describe challenge <challenge-name> -n <namespace>
```
5. Common issues:
- DNS not pointing to load balancer IP
- Firewall blocking HTTP (port 80) traffic
- Ingress class not set to `nginx`
- Let's Encrypt rate limits exceeded
### Ingress Controller Not Ready
**Problem**: ingress-nginx pods are not running or LoadBalancer service has no external IP
**Solution**:
1. Check ingress controller status:
```bash
kubectl get pods -n ingress-nginx
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller
```
2. Verify LoadBalancer service:
```bash
kubectl get svc -n ingress-nginx
kubectl describe svc ingress-nginx-controller -n ingress-nginx
```
3. Check OTC load balancer annotations:
```bash
kubectl get svc ingress-nginx-controller -n ingress-nginx -o yaml
```
4. Verify environment variables are set correctly:
- `LOADBALANCER_ID` matches OTC ELB ID
- `LOADBALANCER_IP` matches ELB public IP
5. Check OTC console for ELB configuration and health checks
### Storage Provisioning Fails
**Problem**: PersistentVolumeClaim remains in `Pending` state
**Solution**:
1. Check PVC status:
```bash
kubectl get pvc -A
kubectl describe pvc <pvc-name> -n <namespace>
```
2. Verify StorageClass exists and is default:
```bash
kubectl get storageclass
kubectl describe storageclass default
```
3. Check volume provisioner logs:
```bash
kubectl logs -n kube-system -l app=csi-disk-plugin
```
4. Common issues:
- Insufficient quota in OTC project
- Invalid zone configuration (must be `eu-de-02`)
- Requested storage size exceeds limits
- Missing IAM permissions for volume creation
### Ingress Returns 503 Service Unavailable
**Problem**: Ingress configured but returns 503 error
**Solution**:
1. Verify backend service exists:
```bash
kubectl get svc <service-name> -n <namespace>
kubectl get endpoints <service-name> -n <namespace>
```
2. Check if pods are ready:
```bash
kubectl get pods -n <namespace> -l <service-selector>
```
3. Verify ingress configuration:
```bash
kubectl describe ingress <ingress-name> -n <namespace>
```
4. Check nginx ingress logs:
```bash
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100
```
5. Test service connectivity from ingress controller:
```bash
kubectl exec -n ingress-nginx <controller-pod> -- curl http://<service-name>.<namespace>.svc.cluster.local:<port>
```
### TLS Certificate Shows as Invalid
**Problem**: Browser shows certificate warning or certificate details are incorrect
**Solution**:
1. Verify certificate is ready:
```bash
kubectl get certificate <cert-name> -n <namespace>
```
2. Check certificate contents:
```bash
kubectl get secret <tls-secret-name> -n <namespace> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout
```
3. Ensure certificate covers the correct domain:
```bash
kubectl describe certificate <cert-name> -n <namespace>
```
4. Force certificate renewal if expired or incorrect:
```bash
kubectl delete certificate <cert-name> -n <namespace>
# cert-manager will automatically recreate it
```
## Additional Resources
* [cert-manager Documentation](https://cert-manager.io/docs/)
* [ingress-nginx User Guide](https://kubernetes.github.io/ingress-nginx/user-guide/)
* [Open Telekom Cloud Documentation](https://docs.otc.t-systems.com/)
* [Let's Encrypt Documentation](https://letsencrypt.org/docs/)
* [Kubernetes Ingress Concepts](https://kubernetes.io/docs/concepts/services-networking/ingress/)
* [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/)

---
title: "Terralist"
linkTitle: "Terralist"
weight: 21
description: >
Private Terraform Module and Provider Registry with OAuth authentication
---
## Overview
Terralist is an open-source private Terraform registry for modules and providers that implements the HashiCorp registry protocol. As part of the Edge Developer Platform, Terralist enables teams to securely store, version, and distribute internal Terraform modules and providers with built-in authentication and documentation capabilities.
The Terralist stack deploys a self-hosted instance with OAuth2 authentication, persistent storage, and integrated ingress for secure access.
## Key Features
* **Private Module Registry**: Securely host and distribute confidential Terraform modules and providers
* **HashiCorp Protocol Compatible**: Works seamlessly with `terraform` CLI and standard registry workflows
* **OAuth2 Authentication**: Integrated OIDC authentication supporting `terraform login` command
* **Documentation Interface**: Web UI to visualize artifacts with automatic module documentation
* **Flexible Storage**: Supports local storage or remote cloud buckets with presigned URLs
* **Git Integration**: Works with mono-repositories while leveraging Terraform version attributes
* **API Management**: RESTful API for programmatic module and provider management
## Repository
**Code**: [Terralist Stack Templates](https://edp.buildth.ing/DevFW-CICD/stacks/src/branch/main/template/stacks/terralist)
**Documentation**:
* [Terralist Official Documentation](https://www.terralist.io/)
* [Terralist GitHub Repository](https://github.com/terralist/terralist)
* [Getting Started Guide](https://www.terralist.io/getting-started/)
## Getting Started
### Prerequisites
* Kubernetes cluster with ArgoCD installed (provided by `core` stack)
* Ingress controller configured (provided by `otc` stack)
* cert-manager for TLS certificate management (provided by `otc` stack)
* Domain name configured via `DOMAIN_GITEA` environment variable
* OAuth2 provider configured (Dex or external provider)
### Quick Start
The Terralist stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
- Go to [Infra Deploy Pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml)
- Click on Run workflow
- Enter a name in "Select environment directory to deploy". This must be DNS-compatible (if you enter `test-me`, the domain will be `terralist.test-me.t09.de`).
- Execute workflow
2. **ArgoCD Synchronization**
ArgoCD automatically deploys:
- Terralist application (Helm chart v0.8.1)
- Persistent volume for module storage
- Ingress configuration with TLS
- OAuth2 credentials and configuration
### Verification
Verify the Terralist deployment:
```bash
# Check ArgoCD application status
kubectl get application terralist -n argocd
# Verify Terralist pods are running
kubectl get pods -n terralist
# Check persistent volume claim
kubectl get pvc -n terralist
# Verify ingress configuration
kubectl get ingress -n terralist
```
Access the Terralist web interface at `https://terralist.{DOMAIN_GITEA}`.
## Architecture
### Component Architecture
The Terralist stack consists of:
**Terralist Application**:
- Web interface for module and provider management
- REST API for programmatic access
- OAuth2 authentication handler
- Module documentation renderer
**Storage Layer**:
- SQLite database for metadata and configuration
- Local filesystem storage for modules and providers
- Persistent volume with 10Gi capacity on `csi-disk` storage class
- Optional cloud bucket integration for remote storage
**Networking**:
- Nginx ingress with TLS termination
- cert-manager integration for automatic certificate management
- OAuth2 callback endpoint configuration
## Configuration
### Environment Variables
The Terralist application is configured through environment variables in `values.yaml`:
**OAuth2 Configuration**:
- `TERRALIST_AUTHORITY_URL`: OIDC provider authority URL (from `terralist-oidc-secrets` secret)
- `TERRALIST_CLIENT_ID`: OAuth2 client identifier
- `TERRALIST_CLIENT_SECRET`: OAuth2 client secret
- `TERRALIST_TOKEN_SIGNING_SECRET`: Secret for token signing and validation
**Storage Configuration**:
- SQLite database at `/data/database.db`
- Module storage at `/data/modules`
### Helm Chart Configuration
Key Helm values configured in `stacks/terralist/terralist/values.yaml`:
```yaml
controllers:
main:
strategy: Recreate
containers:
main:
env:
- name: TERRALIST_AUTHORITY_URL
valueFrom:
secretKeyRef:
name: terralist-oidc-secrets
key: authority_url
- name: TERRALIST_CLIENT_ID
valueFrom:
secretKeyRef:
name: terralist-oidc-secrets
key: client_id
ingress:
main:
enabled: true
className: nginx
hosts:
- host: "terralist.{DOMAIN_GITEA}"
paths:
- path: /
service:
identifier: main
annotations:
cert-manager.io/cluster-issuer: main
tls:
- secretName: terralist-tls-secret
hosts:
- "terralist.{DOMAIN_GITEA}"
persistence:
data:
enabled: true
size: 10Gi
storageClass: csi-disk
accessMode: ReadWriteOnce
```
### ArgoCD Application Configuration
**Registry Application** (`template/registry/terralist.yaml`):
- Name: `terralist-reg`
- Manages the Terralist stack directory
- Automated sync with prune and self-heal enabled
**Stack Application** (`template/stacks/terralist/terralist.yaml`):
- Name: `terralist`
- Deploys Terralist Helm chart v0.8.1 from `https://github.com/terralist/helm-charts`
- Automated self-healing enabled
- Creates namespace automatically
- References values from `stacks-instances` repository
## Usage Examples
### Authenticating with Terralist
Configure Terraform CLI to use your private registry:
```bash
# Authenticate using OAuth2
terraform login terralist.${DOMAIN_GITEA}
# This opens a browser window for OAuth2 authentication
# After successful login, credentials are stored in ~/.terraform.d/credentials.tfrc.json
```
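After a successful login, the stored credentials use the standard Terraform CLI format (host shown is illustrative, token redacted):

```json
{
  "credentials": {
    "terralist.example.com": {
      "token": "<redacted>"
    }
  }
}
```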
### Publishing a Module
Publish a module to your private registry:
1. **Create Module Structure**
```bash
my-module/
├── main.tf
├── variables.tf
├── outputs.tf
└── README.md
```
2. **Tag and Push via API**
```bash
# Package module
tar -czf my-module-1.0.0.tar.gz my-module/
# Upload to Terralist (requires authentication token)
curl -X POST https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider/1.0.0 \
-H "Authorization: Bearer ${TERRALIST_TOKEN}" \
-F "file=@my-module-1.0.0.tar.gz"
```
### Consuming Private Modules
Use modules from your private registry in Terraform configurations:
```hcl
# Configure Terraform to use private registry
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
# Reference module from private registry
module "vpc" {
source = "terralist.${DOMAIN_GITEA}/my-org/vpc/aws"
version = "1.0.0"
cidr_block = "10.0.0.0/16"
environment = "production"
}
```
### Browsing Module Documentation
Access the Terralist web interface to view module documentation:
```bash
# Open Terralist UI
open https://terralist.${DOMAIN_GITEA}
# Browse available modules
# - View module versions
# - Read generated documentation
# - Access module sources
# - Copy usage examples
```
### Managing Modules via API
```bash
# List all modules
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules
# Get specific module versions
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider
# Delete a module version
curl -X DELETE -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider/1.0.0
```
## Integration Points
* **Core Stack**: Depends on ArgoCD for deployment orchestration
* **OTC Stack**: Requires ingress-nginx controller and cert-manager for external access and TLS
* **Dex (SSO)**: Integrates with platform OAuth2 provider for authentication
* **Forgejo Stack**: Modules can be sourced from platform Git repositories
* **Observability Stack**: Application metrics can be collected by platform monitoring tools
## Troubleshooting
### Terralist Pod Not Starting
**Problem**: Terralist pod remains in `Pending` or `CrashLoopBackOff` state
**Solution**:
1. Check persistent volume claim status:
```bash
kubectl get pvc -n terralist
kubectl describe pvc data-terralist-0 -n terralist
```
2. Verify OAuth2 credentials secret:
```bash
kubectl get secret terralist-oidc-secrets -n terralist
kubectl describe secret terralist-oidc-secrets -n terralist
```
3. Check Terralist logs:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist
```
### Cannot Access Terralist UI
**Problem**: Terralist web interface is not accessible at configured URL
**Solution**:
1. Verify ingress configuration:
```bash
kubectl get ingress -n terralist
kubectl describe ingress -n terralist
```
2. Check TLS certificate status:
```bash
kubectl get certificate -n terralist
kubectl describe certificate terralist-tls-secret -n terralist
```
3. Verify DNS resolution:
```bash
nslookup terralist.${DOMAIN_GITEA}
```
### OAuth2 Authentication Fails
**Problem**: `terraform login` or web authentication fails
**Solution**:
1. Verify OAuth2 configuration in secret:
```bash
kubectl get secret terralist-oidc-secrets -n terralist -o yaml
```
2. Check OAuth2 provider (Dex) is accessible:
```bash
curl https://dex.${DOMAIN_GITEA}/.well-known/openid-configuration
```
3. Verify callback URL is correctly configured in OAuth2 provider:
```
Expected callback: https://terralist.${DOMAIN_GITEA}/auth/cli/callback
```
4. Check Terralist logs for authentication errors:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist | grep -i auth
```
### Module Upload Fails
**Problem**: Cannot upload modules via API or UI
**Solution**:
1. Verify authentication token is valid:
```bash
# Test token with API call
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules
```
2. Check persistent volume has available space:
```bash
kubectl exec -n terralist -it terralist-0 -- df -h /data
```
3. Verify module package format is correct:
```bash
# Module should be a gzipped tar archive
tar -tzf my-module-1.0.0.tar.gz
```
4. Review upload logs:
```bash
kubectl logs -n terralist -l app.kubernetes.io/name=terralist --tail=50
```
### Terraform Cannot Download Modules
**Problem**: `terraform init` fails to download modules from private registry
**Solution**:
1. Verify authentication credentials exist:
```bash
cat ~/.terraform.d/credentials.tfrc.json
```
2. Re-authenticate if needed:
```bash
terraform logout terralist.${DOMAIN_GITEA}
terraform login terralist.${DOMAIN_GITEA}
```
3. Test module availability via API:
```bash
curl -H "Authorization: Bearer ${TERRALIST_TOKEN}" \
https://terralist.${DOMAIN_GITEA}/v1/modules/my-org/my-module/my-provider
```
4. Check module source URL format in Terraform configuration:
```hcl
# Correct format
source = "terralist.${DOMAIN_GITEA}/org/module/provider"
# Not: https://terralist.${DOMAIN_GITEA}/...
```
## Additional Resources
* [Terralist Documentation](https://www.terralist.io/)
* [Terralist GitHub Repository](https://github.com/terralist/terralist)
* [Terraform Registry Protocol](https://developer.hashicorp.com/terraform/internals/module-registry-protocol)
* [Private Module Registries Guide](https://developer.hashicorp.com/terraform/registry/private)
* [ArgoCD Documentation](https://argo-cd.readthedocs.io/)

---
title: Terraform-based deployment of EDP
linkTitle: Terraform
weight: 10
description: >
As-code definitions of EDP clusters, so they can be deployed reliably and consistently on OTC whenever needed.
---
## Overview
The [infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy) and [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue) repositories work together to provide a framework for deploying Edge Developer Platform instances.
`infra-catalogue` contains individual, atomic infrastructure components: `terraform` modules and `terragrunt` [units](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units) and [stacks](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks), such as [Kubernetes clusters](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/modules/kubernetes) and [Postgres databases](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units/postgres/terragrunt.hcl).
`infra-deploy` then contains full [definitions](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod) of stacks built using these components - such as the production site at [edp.buildth.ing](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp). It also includes [scripts](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/scripts) with which to deploy these stacks.
Note that both repositories rely on the wide range of features available on [OTC](https://console.otc.t-systems.com). Several of these features, such as S3-compatible storage and on-demand managed Postgres instances, are not yet available on more sovereign clouds such as [Edge](https://hub.apps.edge.platform.mg3.mdb.osc.live/), so these are not currently supported.
## Key Features
* 'Catalogue' of infrastructure stacks to be used in deployments
* Definition of deployment stacks for each environment in prod or dev
* Scripts to govern deployment, installation and drift-correction of EDP
## Purpose in EDP
For our Edge Developer Platform to be reliable, it must be deployable in a consistent manner. When errors occur, or after any manual alterations, the system can then be safely reset to a working state. This state should be provided as code to allow for automated validation and deployment, and so that it can be deployed from an always-identical CI/CD pipeline rather than a variable local deployment environment.
## Repositories
**Infra-deploy**: [https://edp.buildth.ing/DevFW/infra-deploy](https://edp.buildth.ing/DevFW/infra-deploy)
**Infra-catalogue**: [https://edp.buildth.ing/DevFW/infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue)
## Getting Started
### Prerequisites
* [Docker](https://docs.docker.com/)
* [kubectl](https://kubernetes.io/docs/reference/kubectl/) for Kubernetes management
* Access to [OTC](https://console.otc.t-systems.com/console/)
* HashiCorp [Terraform](https://developer.hashicorp.com/terraform) or its open-source equivalent, [OpenTofu](https://opentofu.org/)
* [Terragrunt](https://terragrunt.gruntwork.io/), an orchestrator for Terraform stacks
### Quick Start
1. Set up OTC credentials per [README section](https://edp.buildth.ing/DevFW/infra-deploy#installation-on-otc)
2. Set cluster environment and run install script per [README section](https://edp.buildth.ing/DevFW/infra-deploy#using-the-edpbuilder)
Alternatively, manually trigger the automated [deployment pipeline](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml).
- You will be asked for essential information like the deployment name and tenant.
- Any fields marked `INITIAL` only need to be set when first creating an environment
- Thereafter, the cached values are used and the `INITIAL` values provided to the pipeline are ignored.
- Specifically, they are cached in a `terragrunt.values.hcl` file within `infra-deploy/<tenant>/<cluster-name>`, where both path variables are supplied by the pipeline
- e.g. [prod/edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) or [nonprod/garm-provider-test](https://edp.buildth.ing/DevFW/infra-deploy/src/commit/189632811944d3d3bc41e26c09262de8f215f82b/non-prod/garm-provider-test/terragrunt.values.hcl)
### Verification
After the deployment completes, and after a short startup period, you should be able to access your Forgejo instance at `<cluster-name>.buildth.ing` (production tenant) or `<cluster-name>.t09.de` (non-prod tenant). `<cluster-name>` is the name you provided in the deployment pipeline, or the `$CLUSTER_ENVIRONMENT` variable when running manually.
For example, the primary production cluster is called [edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp) and can be accessed at [edp.buildth.ing](https://edp.buildth.ing).
#### Screens
Deployment using production pipeline:
![Running the deployment pipeline](../deploy-pipeline.png)
...
![Successful deploy pipeline logs](../deploy-pipeline-success.png)
## Configuration
Configuration of clusters is done in two ways. The first, mentioned above, is to provide `INITIAL` configuration when creating a new cluster. Thereafter, configuration is done within the relevant `infra-deploy/<tenant>` directory (e.g. [prod/edp](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp)). Variables may be changed within the [terragrunt.values.hcl](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) file, but equally the [terragrunt.stack.hcl](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.stack.hcl) file contains references to the lower-level code set up in `infra-catalogue`.
These are organised in layers, following Terragrunt's natural structure. At the top is a [stack](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks), a high-level abstraction for a whole cluster. A stack [references](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/stacks/forgejo/terragrunt.stack.hcl) Terragrunt [units](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/units), which in turn are wrappers around standard _Terraform_ [modules](https://edp.buildth.ing/DevFW/infra-catalogue/src/branch/main/modules).
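A minimal sketch of this layering, with illustrative unit names and source paths (the linked stack file is authoritative):

```hcl
# infra-catalogue/stacks/forgejo/terragrunt.stack.hcl -- sketch; unit names
# and source paths are illustrative, not the actual catalogue contents
unit "cce_cluster" {
  # Terragrunt unit wrapping the Terraform module for the Kubernetes cluster
  source = "git::https://edp.buildth.ing/DevFW/infra-catalogue.git//units/cce-cluster?ref=v2.0.6"
  path   = "cce-cluster"
}

unit "forgejo" {
  # unit wrapping the Forgejo deployment module
  source = "git::https://edp.buildth.ing/DevFW/infra-catalogue.git//units/forgejo?ref=v2.0.6"
  path   = "forgejo"
}
```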
When deployed, the Terraform modules require a `provider.tf` file, which is automatically generated by Terragrunt using [tenant-level](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/tenant.hcl) and [global](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/root.hcl) configuration stored in `infra-deploy`.
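A minimal sketch of the kind of `generate` block involved, assuming illustrative contents (the real `root.hcl` linked above is authoritative):

```hcl
# Sketch of a generate block as it might appear in infra-deploy/root.hcl;
# the generated provider body is an illustrative placeholder
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<-EOF
    provider "opentelekomcloud" {
      # tenant and credentials come from tenant-level and global config
    }
  EOF
}
```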
When deploying manually (e.g. with [install.sh](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/install.sh)), you can observe these layers directly: Terragrunt caches them on your machine in a `.terragrunt-stack/` directory generated inside [/\<tenant\>/\<cluster-name\>/](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp).
## Troubleshooting
### Version updates
**Problem**: Updates to `infra-catalogue` are not immediately reflected in deployed clusters, even after running [deploy](https://edp.buildth.ing/DevFW/infra-deploy/actions?workflow=deploy.yaml).
**Solution**: Bump the pinned catalogue version for the affected cluster.
Each cluster deployment specifies a [catalogue version](https://edp.buildth.ing/DevFW/infra-deploy/src/commit/189632811944d3d3bc41e26c09262de8f215f82b/prod/edp/terragrunt.values.hcl#L7) in its `terragrunt.values.hcl`; this refers to a tag in [infra-catalogue](https://edp.buildth.ing/DevFW/infra-catalogue/releases/tag/v2.0.6). Within `infra-catalogue`, stacks reference units and modules from the same tag.
Thus, to test a new change to `infra-catalogue`, first make a new [tag](https://edp.buildth.ing/DevFW/infra-catalogue/tags), then update the relevant [values file](https://edp.buildth.ing/DevFW/infra-deploy/src/branch/main/prod/edp/terragrunt.values.hcl) to point to it.
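The change itself is a one-line edit, assuming the attribute is named `version` as in the linked values file (the tag shown here is illustrative):

```hcl
# infra-deploy/prod/edp/terragrunt.values.hcl
version = "v2.0.7" # was "v2.0.6"; must match an existing infra-catalogue tag
```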
## Status
**Maturity**: TRL-9
## Additional Resources
- [Terraform](https://developer.hashicorp.com/terraform)
- [OpenTofu](https://opentofu.org/), the community-driven replacement for Terraform
- [Terragrunt](https://terragrunt.gruntwork.io/)
---
title: Deploying to OTC
linkTitle: Deploying to OTC
weight: 100
description: >
Open Telekom Cloud as deployment and infrastructure target
---
## Overview
Open Telekom Cloud (OTC) is one of Deutsche Telekom's cloud platform offerings
and provides GDPR-compliant cloud services. The platform is based on OpenStack.
## Key Features
- Managed Kubernetes
- Managed services, including
  - Databases
    - RDS PostgreSQL
    - ElasticSearch
  - S3-compatible storage
- DNS Management
- Backup & Restore of Kubernetes volumes and managed services
## Purpose in EDP
OTC is used to host core infrastructure to provide the primary, public EDP
instance and as a test bed for Kubernetes-based workloads that would eventually
be deployed to EdgeConnect.

Service components such as Forgejo, Grafana, Garm, and Coder are deployed in OTC
Kubernetes, utilizing managed services for databases and storage to reduce the
maintenance and setup burden on the team.
Services and workloads are primarily provisioned using Terraform.
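As a minimal sketch of the foundation such provisioning builds on (the provider version constraint and endpoint are illustrative; credentials are normally supplied out of band, e.g. via `OS_*` environment variables):

```hcl
# Sketch: wiring up the opentelekomcloud Terraform provider
terraform {
  required_providers {
    opentelekomcloud = {
      source  = "opentelekomcloud/opentelekomcloud"
      version = ">= 1.36.0" # illustrative constraint
    }
  }
}

provider "opentelekomcloud" {
  auth_url = "https://iam.eu-de.otc.t-systems.com/v3" # OTC IAM endpoint
  # tenant, domain, and credentials are supplied via environment variables
}
```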
## Repository
**Code**:
- <https://edp.buildth.ing/DevFW/infra-catalogue> - Terraform modules of various
system components
- <https://edp.buildth.ing/DevFW/infra-deploy> - Runs deployment workflows,
contains base configuration of deployed system instances and various
deployment scripts
- <https://edp.buildth.ing/DevFW-CICD/stacks> - Template of a system
configuration divided into multiple, deployable application stacks
- <https://edp.buildth.ing/DevFW-CICD/stacks-instances> - System configurations
of deployed instances hydrated from the `stacks` template
**Terraform Provider**:
- <https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/latest/docs>
**Documentation**:
- <https://www.open-telekom-cloud.com/>
- <https://www.open-telekom-cloud.com/en/products-services/core-services/technical-documentation>
**OTC Console**:
- <https://console.otc.t-systems.com/console/>
## Managed Services
EDP instances heavily utilize Open Telekom Cloud's (OTC) managed services to
simplify operations, enhance reliability, and allow the team to focus on
application development rather than infrastructure management. The core
components of each deployed instance run within the managed Kubernetes service.
The following managed services are integral to EDP deployments:
- **Cloud Container Engine (CCE)**: The managed Kubernetes service that forms
the foundation of each EDP instance, hosting all containerized core components
and workloads.
- **Relational Database Service (RDS) for PostgreSQL**: Provides scalable and
reliable PostgreSQL database instances, primarily used by applications such as
Forgejo.
- **Object Storage Service (OBS)**: Offers S3-compatible object storage for
storing backups, application data (e.g., for Forgejo), and other static
assets.
- **Cloud Search Service (CSS)**: An optional service providing robust search
capabilities, specifically used for Forgejo's indexing and search
functionalities.
- **Networking**: Essential networking components, including Virtual Private
Clouds (VPCs), Load Balancers, and DNS management, which facilitate secure and
efficient communication within the EDP ecosystem.
- **Cloud Backup and Recovery (CBR)**: Vaults are configured to automatically
back up persistent volumes created by CCE instances, ensuring data resilience
and disaster recovery readiness.
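As an illustration of how one such managed service might be provisioned, here is a hedged sketch using the provider's `opentelekomcloud_obs_bucket` resource (the bucket name and arguments are illustrative, not the actual EDP configuration):

```hcl
# Sketch: an S3-compatible OBS bucket such as might hold backups
resource "opentelekomcloud_obs_bucket" "backups" {
  bucket = "edp-example-backups" # hypothetical bucket name
  acl    = "private"
}
```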
---
title: EDP Environments in OTC
linkTitle: Environments
weight: 10
description: >
Instances of EDP are deployed into distinct OTC environments
---
## Architecture
Two distinct tenants are utilized within OTC to enforce a strict separation
between production (`prod`) and non-production (`non-prod`) environments. This
segregation ensures isolated resource management, security policies, and
operational workflows, preventing any potential cross-contamination or impact
between critical production systems and development/testing activities.
- **Production Tenant:** This tenant is exclusively dedicated to production
workloads and is bound to the primary domain `buildth.ing`. All
production-facing EDP instances and associated infrastructure reside within
this tenant, leveraging `buildth.ing` for public access and service discovery.
Within this tenant, each EDP instance is typically dedicated to a specific
customer. This design decision provides robust data separation, addressing
critical privacy and compliance requirements by isolating customer data. It
also allows for independent upgrade paths and maintenance windows for
individual customer instances, minimizing impact on other customers while
still benefiting from centralized management and deployment strategies. The
primary `edp.buildth.ing` instance and the `observability.buildth.ing`
instance are exceptions to this customer-dedicated model, serving foundational
platform roles.
- **Non-Production Tenant:** This tenant hosts all development, testing, and
staging environments, bound to the primary domain `t09.de`. This setup allows
for flexible experimentation and robust testing without impacting production
stability.
Each tenant is designed to accommodate multiple instances of the product, EDP.
These instances are dynamically provisioned and typically bound to specific
subdomains, which inherit from their respective primary tenant domain (e.g.,
`my-test.t09.de` for a non-production instance or `customer-a.buildth.ing` for a
production instance). This subdomain structure facilitates logical separation
and routing for individual EDP deployments.
<likec4-view view-id="otcTenants" browser="true"></likec4-view>
---
title: Managing Instances
linkTitle: Managing Instances
weight: 50
description: >
Managing instances of EDP deployed in OTC
---
## Deployment Strategy
The core of the deployment strategy revolves around the primary production EDP
instance, `edp.buildth.ing`. This instance acts as a centralized control plane
and code repository, storing all application code, configuration, and deployment
pipelines. It is generally responsible for orchestrating the deployment and
updates of most other EDP instances across both production and non-production
tenants, ensuring consistency and automation.
<likec4-view view-id="otcTenants" browser="true"></likec4-view>
### Circular Dependency Issue
However, a unique circular dependency exists with `observability.buildth.ing`.
While `edp.buildth.ing` manages most deployments, it cannot manage its _own_
lifecycle. Attempting to upgrade `edp.buildth.ing` itself through its own
mechanisms could lead to critical components becoming unavailable during the
process (e.g., internal container registries going offline), preventing the
system from restarting successfully. To mitigate this, `edp.buildth.ing` is
instead deployed and managed by `observability.buildth.ing`, with all its
essential deployment dependencies located within the observability environment.
Crucially, git repositories and other resources like container images are
synchronized from `edp.buildth.ing` to the observability instance, as
`observability.buildth.ing` itself does not produce artifacts. In turn,
`edp.buildth.ing` is responsible for deploying and managing
`observability.buildth.ing` itself. This creates a carefully managed circular
relationship that ensures both critical components can be deployed and
maintained effectively without single points of failure related to
self-management.
## Configuration
This section outlines the processes for deploying and managing the configuration
of EDP instances within the Open Telekom Cloud (OTC) environment. Deployments
are primarily driven by Forgejo Actions and leverage Terraform for
infrastructure provisioning and lifecycle management, adhering to GitOps
principles.
### Deployment Workflows
The lifecycle management of EDP instances is orchestrated through a set of
dedicated workflows within the `infra-deploy` Forgejo
[repository](https://edp.buildth.ing/DevFW/infra-deploy), hosted on
`edp.buildth.ing`. These workflows are designed to emulate the standard
Terraform lifecycle, offering `plan`, `deploy`, and `destroy` operations.
- **Triggering Deployments**: Workflows are manually initiated and require
  explicit configuration of an OTC tenant and an environment to accurately
  target a specific system instance.
- **`plan` Workflow**:
  - Executes a dry-run of the proposed deployment.
  - Outputs the detailed `terraform plan`, showing all anticipated
    infrastructure changes.
  - Shows the diff of the configuration that would be applied to the
    `stacks-instances` repository, reflecting changes derived from the `stacks`
    repository.
- **`deploy` Workflow**:
  - Utilized for both the initial creation of new EDP instances and subsequent
    updates to existing deployments.
  - For new instance creation, all required configuration fields must be
    populated.
  - **Important Considerations**:
    - Configuration fields explicitly marked as "(INITIAL)" are foundational
      and, once set during the initial deployment, cannot be altered through
      the workflow without manual modification of the underlying Git
      configuration.
    - Certain changes to the configuration may lead to extensive infrastructure
      redeployments, which could potentially result in data loss if not
      carefully managed and accompanied by appropriate backup strategies.
- **`destroy` Workflow**:
  - Initiates the deprovisioning and complete removal of an existing EDP system
    instance from the OTC environment.
  - While the infrastructure is torn down, the corresponding configuration
    entry is intentionally retained within the `stacks-instances` repository
    for historical tracking or potential re-creation.
> NOTE: When a new instance of EDP is deployed, it is bootstrapped with random
> secrets, including admin logins. Initial admin credentials for individual
> components are printed in the workflow output. They can also be retrieved
> from the secrets within Kubernetes at a later point in time.
<a href="../workflow-deploy-form.png" target="_blank">
<img alt="Deploy workflow form" src="../workflow-deploy-form.png" style="max-width: 300px;" />
</a>
### Configuration Management
The configuration for deployed EDP instances is systematically managed across
several Git repositories to ensure version control, traceability, and adherence
to GitOps practices.
- **Base Configuration**: A foundational configuration entry for each deployed
system instance is stored directly within the `infra-deploy` repository.
- **Complete System Configuration**: The comprehensive configuration for a
system instance, derived from the `stacks` template repository, is maintained
in the `stacks-instances` repository.
- **GitOps Synchronization**: ArgoCD continuously monitors the
`stacks-instances` repository. It automatically detects and synchronizes any
discrepancies between the desired state defined in Git and the actual state of
the deployed system within the OTC Kubernetes cluster. The configurations in
the `stacks-instances` repository are organized by OTC tenant and instance
name. ArgoCD monitors only the portion of the repository that is relevant to
its specific instance.