diff --git a/.markdownlint.json b/.markdownlint.json
index 8c37aec..52b45af 100644
--- a/.markdownlint.json
+++ b/.markdownlint.json
@@ -1,8 +1,25 @@
{
+ "$schema": "https://raw.githubusercontent.com/DavidAnson/markdownlint/main/schema/markdownlint-config-schema.json",
"default": true,
+ "MD001": true,
+ "MD003": { "style": "atx" },
+ "MD004": { "style": "asterisk" },
+ "MD007": { "indent": 2 },
+ "MD009": { "br_spaces": 2 },
+ "MD010": { "code_blocks": false },
+ "MD012": { "maximum": 2 },
"MD013": false,
- "MD033": false,
- "MD041": false,
+ "MD022": { "lines_above": 1, "lines_below": 1 },
"MD024": { "siblings_only": true },
- "MD025": { "front_matter_title": "" }
+ "MD025": { "front_matter_title": "" },
+ "MD026": { "punctuation": ".,;:" },
+ "MD029": { "style": "ordered" },
+ "MD031": { "list_items": false },
+ "MD032": true,
+ "MD033": { "allowed_elements": ["div", "span", "a", "img", "br", "details", "summary"] },
+ "MD034": false,
+ "MD040": false,
+ "MD041": false,
+ "MD045": false,
+ "MD047": true
}
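The rules most often violated in the content below are MD009 (trailing spaces) and MD047 (single trailing newline), which is exactly what the bulk of this patch cleans up. As a rough illustration of what those two rules check (this is not markdownlint itself; `sample.md` is a throwaway file created here):

```bash
# Create a file that violates both rules: one stray trailing space, no final newline.
printf 'a line with one trailing space \nno final newline' > sample.md

# MD009-style check: count lines ending in a single trailing space
# (the config above still allows two spaces as a hard line break).
grep -cE '[^ ] $' sample.md

# MD047-style check: the file's last byte must be a newline.
if [ -n "$(tail -c 1 sample.md)" ]; then
  echo "missing final newline"
fi
```

Running the real linter (e.g. `markdownlint-cli2` against `content/**/*.md`) reports the same two classes of findings this diff fixes by hand.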
diff --git a/content/en/blog/20250401_review.md b/content/en/blog/20250401_review.md
index 051acb2..a5a339d 100644
--- a/content/en/blog/20250401_review.md
+++ b/content/en/blog/20250401_review.md
@@ -19,12 +19,12 @@ devops is dead .... claim
4) Stephan bis 10h55
-5) christopher 10h58
+5) christopher 10h58
6) robert 11:11
-- app
-- devops-pipelines
-- edp in osc deployed
+* app
+* devops-pipelines
+* edp in osc deployed
7) michal has nothing to show
@@ -33,7 +33,6 @@ devops is dead .... claim
9) patrick 11:32
-
====
project management meeting
@@ -58,7 +57,7 @@ senioren bekommen
level1: source code structure, building artifacts, revision control, branching model, e.g. pull requesting, testing the software, local debugging
level2: automating the artifact build, version mgmt, milestones, tickets, issues, compliance and security
-level3: deployment auf stages, feedback pipeline verhalten
+level3: deployment to stages, feedback on pipeline behavior
level4: feedback on app behavior (logs, metrics, alerts) + development loop
level5: 3rd level support in production
@@ -69,7 +68,7 @@ level2: reaching the outdside world with output
automating the artifact build, version mgmt, milestones, tickets, issues, compliance and security
level3: run the app anywhere
-deployment auf stages, feedback pipeline verhalten
+deployment to stages, feedback on pipeline behavior
level4: monitoring the app
feedback on app behavior (logs, metrics, alerts) + development loop
@@ -78,10 +77,8 @@ level5: support
3rd level support in production (or any outer stage)
-
sprint 4
leveraging pillar
own-app pillar
chore pillar
-
diff --git a/content/en/docs/concepts/3_use-cases/_index.md b/content/en/docs/concepts/3_use-cases/_index.md
index 20c1660..6a5b20f 100644
--- a/content/en/docs/concepts/3_use-cases/_index.md
+++ b/content/en/docs/concepts/3_use-cases/_index.md
@@ -4,7 +4,7 @@ weight: 2
description: The golden paths in the engineers and product development domain
---
-## Rationale
+## Rationale
The challenge of the IPCEI-CIS Developer Framework is to provide value for DTAG customers, and more specifically for the developers of DTAG customers.
@@ -52,11 +52,10 @@ The resulting visualization should look similar like this:

-
## When and how to use the developer framework?
### An example
.... taken from https://cloud.google.com/blog/products/application-development/common-myths-about-platform-engineering?hl=en
-
\ No newline at end of file
+
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-components/_index.md
index 5bed0e4..16aea34 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/_index.md
@@ -7,6 +7,7 @@ description: What in terms of components or building blocks is needed in a platf
> This page is a work in progress. Right now the index holds a collection of links describing and listing typical components and building blocks of platforms. We also have a growing number of subsections on special types of components.
See also:
+
* https://thenewstack.io/build-an-open-source-kubernetes-gitops-platform-part-1/
* https://thenewstack.io/build-an-open-source-kubernetes-gitops-platform-part-2/
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/_index.md
index a9c0ac5..128118e 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/_index.md
@@ -47,11 +47,11 @@ Components are the composable and self-contained building blocks for the context
Components must be as small as possible and follow the same concepts of software development and deployment as any other software product. In particular, they must have the following characteristics:
-- designed for a single task
-- provide a clear and intuitive output
-- easy to compose
-- easily customizable or interchangeable
-- automatically testable
+* designed for a single task
+* provide a clear and intuitive output
+* easy to compose
+* easily customizable or interchangeable
+* automatically testable
In the EDF components are divided into different categories. Each category contains components that perform similar actions. For example, the `build` category contains components that compile code, while the `deploy` category contains components that automate the management of the artefacts created in a production-like system.
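The component characteristics above can be illustrated with plain shell: each component is a small single-task step with a clear output on stdout, so composition and isolated testing fall out naturally. A hypothetical sketch (the function names and the pipeline are made up for illustration, not EDF components):

```bash
# Each 'component' does one task and writes a clear, parseable result to stdout.
build()  { echo "artifact:$1"; }              # single task: produce an artifact id
scan()   { read -r a; echo "$a:scanned"; }    # consumes the previous output, adds its own
deploy() { read -r a; echo "deployed $a"; }   # interchangeable: any step reading stdin fits

# easy to compose:
build demo | scan | deploy

# automatically testable in isolation:
[ "$(build demo)" = "artifact:demo" ] && echo "build component ok"
```

Because every step only depends on its stdin/stdout contract, any component can be swapped or customized without touching the others.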
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/review-stl.md b/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/review-stl.md
index ed5e701..429eab2 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/review-stl.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/cicd-pipeline/review-stl.md
@@ -8,4 +8,4 @@ There is no continuous whatever step inbetween ... Gitops is just 'overwriting'
This means that whatever quality-assuring steps have to take place before the 'overwriting' must be defined as state changers in the repos, not in the environments.
-Conclusio: I think we only have three contexts, or let's say we don't have the contect 'continuous delivery'
\ No newline at end of file
+Conclusion: I think we only have three contexts, or let's say we don't have the context 'continuous delivery'.
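The 'overwriting' claim can be made concrete with a toy reconciler: the repo holds the desired state, and syncing simply replaces whatever the environment currently has, so drift does not survive. A minimal sketch (directory and file names are illustrative):

```bash
# Desired state lives in the repo; the environment has drifted.
mkdir -p repo env
echo "replicas: 3" > repo/app.yaml
echo "replicas: 1" > env/app.yaml   # manual drift in the live environment

# 'Reconcile' = overwrite the environment with the repo's state.
# There is no intermediate delivery step; quality gates must run before this point.
cp -f repo/app.yaml env/app.yaml

cat env/app.yaml
```

After the copy, `env/app.yaml` shows `replicas: 3`: the repo state won, which is why all quality gates belong before the state change lands in the repo.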
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/developer-portals/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-components/developer-portals/_index.md
index 60c5453..53dec62 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/developer-portals/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/developer-portals/_index.md
@@ -33,4 +33,4 @@ https://www.getport.io/compare/backstage-vs-port
* [port-vs-backstage-choosing-your-internal-developer-portal](https://medium.com/@vaibhavgupta0702/port-vs-backstage-choosing-your-internal-developer-portal-71c6a6acd979)
* [idp-vs-self-service-portal-a-platform-engineering-showdown](https://thenewstack.io/idp-vs-self-service-portal-a-platform-engineering-showdown)
* [portals-vs-platform-orchestrator](https://humanitec.com/portals-vs-platform-orchestrator)
-* [internal-developer-portal-vs-internal-developer-platform](https://www.cortex.io/post/internal-developer-portal-vs-internal-developer-platform)
\ No newline at end of file
+* [internal-developer-portal-vs-internal-developer-platform](https://www.cortex.io/post/internal-developer-portal-vs-internal-developer-platform)
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/orchestrator/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-components/orchestrator/_index.md
index ed92bfb..745cbca 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/orchestrator/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/orchestrator/_index.md
@@ -17,7 +17,7 @@ description: "The new kid on the block since 2023 ist 'Platform Orchestrating':
* cnoe.io
-#### Resources
+#### Resources
* [CNOE IDPBuilder](https://cnoe.io/docs/reference-implementation/installations/idpbuilder)
-* https://github.com/csantanapr/cnoe-examples/tree/main
\ No newline at end of file
+* https://github.com/csantanapr/cnoe-examples/tree/main
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-components/references/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-components/references/_index.md
index 1cd858c..5d1b186 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-components/references/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-components/references/_index.md
@@ -29,8 +29,8 @@ description: An currently uncurated list of references with respect to typical p
| Core Component | Short Description |
| ---- | --- |
-| Application Configuration Management | Manage application configuration in a dynamic, scalable and reliable way. |
-| Infrastructure Orchestration | Orchestrate your infrastructure in a dynamic and intelligent way depending on the context. |
-| Environment Management | Enable developers to create new and fully provisioned environments whenever needed. |
-| Deployment Management | Implement a delivery pipeline for Continuous Delivery or even Continuous Deployment (CD). |
-| Role-Based Access Control | Manage who can do what in a scalable way. |
\ No newline at end of file
+| Application Configuration Management | Manage application configuration in a dynamic, scalable and reliable way. |
+| Infrastructure Orchestration | Orchestrate your infrastructure in a dynamic and intelligent way depending on the context. |
+| Environment Management | Enable developers to create new and fully provisioned environments whenever needed. |
+| Deployment Management | Implement a delivery pipeline for Continuous Delivery or even Continuous Deployment (CD). |
+| Role-Based Access Control | Manage who can do what in a scalable way. |
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-engineering/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-engineering/_index.md
index b093bda..88092d7 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-engineering/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-engineering/_index.md
@@ -5,7 +5,7 @@ description: Theory and general blue prints of the platform engineering discipli
---
-## Rationale
+## Rationale
The IPCEI-CIS Developer Framework is part of a cloud native technology stack. To design the capabilities and architecture of the Developer Framework we need to define the surrounding context and internal building blocks, both aligned with cutting-edge cloud native methodologies and research results.
@@ -16,6 +16,7 @@ In CNCF the discipline of building stacks to enhance the developer experience is
[CNCF first asks](https://tag-app-delivery.cncf.io/whitepapers/platforms/) why we need platform engineering:
> The desire to refocus delivery teams on their core focus and reduce duplication of effort across the organisation has motivated enterprises to implement platforms for cloud-native computing. By investing in platforms, enterprises can:
+>
> * Reduce the cognitive load on product teams and thereby accelerate product development and delivery
> * Improve reliability and resiliency of products relying on platform capabilities by dedicating experts to configure and manage them
> * Accelerate product development and delivery by reusing and sharing platform tools and knowledge across many teams in an enterprise
@@ -40,7 +41,7 @@ https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer
## Internal Developer Platform
-> In IPCEI-CIS right now (July 2024) we are primarily interested in understanding how IDPs are built as one option to implement an IDP is to build it ourselves.
+> In IPCEI-CIS right now (July 2024) we are primarily interested in understanding how IDPs are built, since one option for implementing an IDP is to build it ourselves.
The outcome of the Platform Engineering discipline, created by the platform engineering team, is a so-called 'Internal Developer Platform'.
@@ -69,4 +70,4 @@ The amount of available IDPs as product is rapidly growing.
## Platform 'Initiatives' aka Use Cases
Cortex is [talking about Use Cases (aka Initiatives):](https://www.youtube.com/watch?v=LrEC-fkBbQo) (or https://www.brighttalk.com/webcast/20257/601901)
-
\ No newline at end of file
+
diff --git a/content/en/docs/concepts/4_digital-platforms/platform-engineering/reference-architecture/_index.md b/content/en/docs/concepts/4_digital-platforms/platform-engineering/reference-architecture/_index.md
index f6420d7..d3d6af0 100644
--- a/content/en/docs/concepts/4_digital-platforms/platform-engineering/reference-architecture/_index.md
+++ b/content/en/docs/concepts/4_digital-platforms/platform-engineering/reference-architecture/_index.md
@@ -7,14 +7,14 @@ weight = 1
date = '2024-07-30'
+++
-## [The Structure of a Successful Internal Developer Platform](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures)
+## [The Structure of a Successful Internal Developer Platform](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures)
In a platform reference architecture there are five main planes that make up an IDP:
1. Developer Control Plane – this is the primary configuration layer and interaction point for the platform users. Components include Workload specifications such as Score and a portal for developers to interact with.
2. Integration and Delivery Plane – this plane is about building and storing the image, creating app and infra configs, and deploying the final state. It usually contains a CI pipeline, an image registry, a Platform Orchestrator, and the CD system.
3. Resource Plane – this is where the actual infrastructure exists including clusters, databases, storage or DNS services.
-4, Monitoring and Logging Plane – provides real-time metrics and logs for apps and infrastructure.
+4. Monitoring and Logging Plane – provides real-time metrics and logs for apps and infrastructure.
5. Security Plane – manages secrets and identity to protect sensitive information, e.g., storing, managing, and securely retrieving API keys and credentials/secrets.

@@ -29,12 +29,9 @@ https://github.com/humanitec-architecture
https://humanitec.com/reference-architectures
-
## Create a reference architecture
[Create your own platform reference architecture](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures)
[Reference arch slide deck](https://docs.google.com/presentation/d/1yAf_FSjiA0bAFukgu5p1DRMvvGGE1fF4KhvZbb7gn2I/edit?pli=1#slide=id.g1ef66f3349b_3_3)
-
-
diff --git a/content/en/docs/concepts/5_platforms/CNOE/_index.md b/content/en/docs/concepts/5_platforms/CNOE/_index.md
index dac0b04..1f4e68b 100644
--- a/content/en/docs/concepts/5_platforms/CNOE/_index.md
+++ b/content/en/docs/concepts/5_platforms/CNOE/_index.md
@@ -5,12 +5,12 @@ weight = 4
* https://cnoe.io/docs/intro
-
+
> The goal for the CNOE framework is to bring together a cohort of enterprises operating at the same scale so that they can navigate their operational technology decisions together, de-risk their tooling bets, coordinate contribution, and offer guidance to large enterprises on which CNCF technologies to use together to achieve the best cloud efficiencies.
### Pronunciation
-* Englisch Kuh.noo,
+* English: Kuh.noo,
* i.e. like 'Kanu' (canoe) in German
@@ -26,6 +26,7 @@ See https://cnoe.io/docs/reference-implementation/integrations/reference-impl:
# in a local terminal with docker and kind
idpbuilder create --use-path-routing --log-level debug --package-dir https://github.com/cnoe-io/stacks//ref-implementation
```
+
### Output
```bash
@@ -150,7 +151,7 @@ Data:
USER_PASSWORD : RwCHPvPVMu+fQM4L6W/q-Wq79MMP+3CN-Jeo
```
-### login to backstage
+### login to backstage
Login works with the creds, see above:
diff --git a/content/en/docs/concepts/5_platforms/Humanitec/_index.md b/content/en/docs/concepts/5_platforms/Humanitec/_index.md
index 21c9e69..1b6be58 100644
--- a/content/en/docs/concepts/5_platforms/Humanitec/_index.md
+++ b/content/en/docs/concepts/5_platforms/Humanitec/_index.md
@@ -4,4 +4,4 @@ weight = 4
+++
-tbd
\ No newline at end of file
+tbd
diff --git a/content/en/docs/decisions/0001-pipeline-tools.md b/content/en/docs/decisions/0001-pipeline-tools.md
index 311ab33..04dd838 100644
--- a/content/en/docs/decisions/0001-pipeline-tools.md
+++ b/content/en/docs/decisions/0001-pipeline-tools.md
@@ -10,25 +10,25 @@ ArgoCD is considered set in stone as the tool to manage the deployment of applic
In general, there are 2 decisions to make:
-- What tools should we use to execute the pipeline?
-- What tools should we use to compose the pipeline?
+* What tools should we use to execute the pipeline?
+* What tools should we use to compose the pipeline?
The following use-cases should be considered for this decision:
-- **User who wants to manage their own runners (???)**
-- User who only wants to use our golden path
-- User who wants to use our golden path and add custom actions
-- User who wants to use their own templates and import some of our actions
-- User who wants to import an existing GitHub repository with a pipeline
+* **User who wants to manage their own runners (???)**
+* User who only wants to use our golden path
+* User who wants to use our golden path and add custom actions
+* User who wants to use their own templates and import some of our actions
+* User who wants to import an existing GitHub repository with a pipeline
## Considered Options
-- Argo Workflows + Events
-- Argo Workflows + Events + Additional Composition tool
-- Forgejo Actions
-- Forgejo Actions + Additional Composition tool
-- Dagger (as Engine)
-- Shuttle (as Engine)
+* Argo Workflows + Events
+* Argo Workflows + Events + Additional Composition tool
+* Forgejo Actions
+* Forgejo Actions + Additional Composition tool
+* Dagger (as Engine)
+* Shuttle (as Engine)
## Decision Outcome
@@ -40,87 +40,87 @@ TBD
**Pro**
-- integration with ArgoCD
-- ability to trigger additional workflows based on events.
-- level of maturity and community support.
+* integration with ArgoCD
+* ability to trigger additional workflows based on events.
+* level of maturity and community support.
**Con**
-- Ability to self-host runners?
-- way how composition for pipelines works (based on Kubernetes CRDs)
- - Templates must be available in the cluster where the pipelines are executed, so any imported templates must be applied into the cluster before the pipeline can be executed and cannot simply reference a repository
- - This makes it difficult to import existing templates from other repositories when using self-hosted runners
- - This also makes it difficult to use our golden path, or at least we will need to provide a way to import our golden path into the cluster
- - This also makes the split of every component has its own repo very difficult
-- additional UI to manage the pipeline
-- Additional complexity
+* Ability to self-host runners?
+* the way pipeline composition works (based on Kubernetes CRDs)
+ * Templates must be available in the cluster where the pipelines are executed, so any imported templates must be applied into the cluster before the pipeline can be executed and cannot simply reference a repository
+ * This makes it difficult to import existing templates from other repositories when using self-hosted runners
+ * This also makes it difficult to use our golden path, or at least we will need to provide a way to import our golden path into the cluster
+ * This also makes the 'one repository per component' split very difficult
+* additional UI to manage the pipeline
+* Additional complexity
### Argo Workflows + Events + Additional Composition tool
**Pro**
-- Composability can be offloaded to another tool
+* Composability can be offloaded to another tool
**Con**
-- All cons of the previous option (except composability)
-- Additional complexity by adding another tool
+* All cons of the previous option (except composability)
+* Additional complexity by adding another tool
### Forgejo Actions
**Pro**
-- tight integration with GitHub Actions providing a familiar interface for developers and a vast catalog of actions to choose from
-- ability to compose pipelines without relying on another tool
-- Self-hosting of runners possible
-- every component can have its own repository and use different tools (e.g. written in go, bash, python etc.)
+* close compatibility with GitHub Actions, providing a familiar interface for developers and a vast catalog of actions to choose from
+* ability to compose pipelines without relying on another tool
+* Self-hosting of runners possible
+* every component can have its own repository and use different tools (e.g. written in go, bash, python etc.)
**Con**
-- level of maturity - will require additional investments to provide a production-grade system
+* level of maturity - will require additional investments to provide a production-grade system
### Forgejo Actions + Additional Tool
**Pro**
-- may be possible to use GitHub actions alongside another tool
+* may be possible to use GitHub actions alongside another tool
**Con**
-- additional complexity by adding another tool
+* additional complexity by adding another tool
### Shuttle
**Pro**
-- Possibility to clearly define interfaces for pipeline steps
-- Relatively simple
+* Possibility to clearly define interfaces for pipeline steps
+* Relatively simple
**Con**
-- basically backed by only one company
-- **centralized templates**, so no mechanism for composing pipelines from multiple repositories
+* basically backed by only one company
+* **centralized templates**, so no mechanism for composing pipelines from multiple repositories
### Dagger
**Pro**
-- Pipeline as code
- - if it runs it should run anywhere and produce the "same" / somewhat stable results
- - build environments are defined within containers / the dagger config. Dagger is the only dependency one has to install on a machine
-- DX is extremely nice, especially if you have to debug (image) builds, also type safety due to the ability to code your build in a strong language
-- additional tooling, like trivy, is added to a build pipeline with low effort due to containers and existing plugin/wrappers
-- you can create complex test environments similar to test containers and docker compose
+* Pipeline as code
+ * if it runs it should run anywhere and produce the "same" / somewhat stable results
+ * build environments are defined within containers / the dagger config. Dagger is the only dependency one has to install on a machine
+* DX is extremely nice, especially if you have to debug (image) builds; also type safety, due to the ability to write your build in a strongly typed language
+* additional tooling, like trivy, is added to a build pipeline with low effort due to containers and existing plugins/wrappers
+* you can create complex test environments similar to test containers and docker compose
**Con**
-- relies heavily containers, which might not be available some environments (due to policy etc), it also has an effect on reproducibility and verifiability
-- as a dev you need to properly understand containers
-- dagger engine has to run privileged locally and/or in the cloud which might be a blocker or at least a big pain in the ...
+* relies heavily on containers, which might not be available in some environments (due to policy etc.); this also affects reproducibility and verifiability
+* as a dev you need to properly understand containers
+* dagger engine has to run privileged locally and/or in the cloud which might be a blocker or at least a big pain in the ...
**Suggestion Patrick**
-- dagger is a heavy weight and might not be as productive in a dev workflow as it seems (setup lsp etc)
-- it might be too opinionated to force on teams, especially since it is not near mainstream enough, community might be too small
-- it feels like dagger gets you 95% of the way, but the remaining 5% are a real struggle
-- if we like it, we should check the popularity in the dev community before further considering as it has a direct impact on teams and their preferences
+* dagger is a heavyweight and might not be as productive in a dev workflow as it seems (LSP setup etc.)
+* it might be too opinionated to force on teams, especially since it is not near mainstream enough, community might be too small
+* it feels like dagger gets you 95% of the way, but the remaining 5% are a real struggle
+* if we like it, we should check its popularity in the dev community before considering it further, as it has a direct impact on teams and their preferences
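The Argo Workflows con about CRD-based composition can be made concrete: a template must already exist as a cluster resource before any pipeline can reference it, so pointing at a Git repository is not enough. A hypothetical fragment (name, image, and command are illustrative, and applying it obviously requires cluster access):

```bash
# Illustrative only: this WorkflowTemplate has to be applied into the execution
# cluster up front; a Workflow referencing 'build-step' fails until it exists.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: build-step
spec:
  templates:
    - name: build
      container:
        image: golang:1.22
        command: [go, build, ./...]
EOF
```

With Forgejo Actions, by contrast, a step is referenced by repository path at run time, which is why the 'one repository per component' split is easier there.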
diff --git a/content/en/docs/project/MVP-12-OTC.md b/content/en/docs/project/MVP-12-OTC.md
index 3969671..252ed04 100644
--- a/content/en/docs/project/MVP-12-OTC.md
+++ b/content/en/docs/project/MVP-12-OTC.md
@@ -16,12 +16,13 @@ Dein beschriebenes Szenario – Vision und PoC vorhanden, aber kein ausformulier
## Assessment of your PDCA-based approach
**Positive:**
-- **Täglicher PDCA-Zyklus** (Plan-Do-Check-Act) sorgt für schnelle Feedbackschleifen.
-- **Morgendliches Planning** und **Check-Meeting im Plenum** fördern Transparenz und Selbstorganisation.
-- **Subgruppen-Erkundung** erlaubt parallele Experimente.
-- **Abschließendes "A"** zur Ergebnissicherung ist essenziell, sonst bleibt es bei "busy work".
+* A **daily PDCA cycle** (Plan-Do-Check-Act) ensures fast feedback loops.
+* **Morning planning** and a **check meeting in plenary** foster transparency and self-organization.
+* **Subgroup exploration** allows parallel experiments.
+* A **closing "A"** to secure results is essential; otherwise it remains "busy work".
**Risks:**
+
1. **Missing long-term structure:** Without a rough overarching direction (e.g. milestone plan, target picture), it can easily end in aimless exploration.
2. **Inconsistent learnings:** Subgroups may work redundantly or incompatibly if there is no shared understanding.
3. **Documentation as "aftercare":** If the docs are only written at the end, knowledge loss looms – better to maintain "living docs" in real time.
@@ -30,10 +31,10 @@ Dein beschriebenes Szenario – Vision und PoC vorhanden, aber kein ausformulier
## Suggestions for improvement
1. **Exploration backlog or hypothesis board:** Even without a classic product backlog you can work with an **experiment/hypothesis board** (à la Lean Startup). For example:
- - Hypothese: „Feature X wird den Use Case Y verbessern.“
- - Experiment: „Prototyp bauen und testen mit Nutzergruppe Z.“
- - Ergebnis & Learnings dokumentieren.
-
+ * Hypothesis: "Feature X will improve use case Y."
+ * Experiment: "Build a prototype and test it with user group Z."
+ * Document results & learnings.
+
2. **Weekly goal setting:** Every Monday a weekly planning with clear goals for the week (outcome-focused, not task-focused).
3. **Integration with Dual Track Agile:** The approach resembles the **dual-track model**: a **discovery track** (exploring, validating, prototyping) runs in parallel with the later **delivery track** (releases, sprints). You are still fully in the discovery track.
@@ -44,11 +45,11 @@ Dein beschriebenes Szenario – Vision und PoC vorhanden, aber kein ausformulier
## Common standard methods in this phase
-- **Design Thinking** (für Nutzerzentrierte Exploration)
-- **Lean UX / Lean Startup** (Hypothesengetrieben)
-- **Dual Track Agile**
-- **Google Design Sprint (5 Tage)** – extrem hilfreich, wenn es um das Testen von Nutzerideen geht
-- **Shape Up (Basecamp)** – für nicht-sprintbasiertes Arbeiten mit "Bet" auf 6-Wochen-Projekte
+* **Design Thinking** (for user-centered exploration)
+* **Lean UX / Lean Startup** (hypothesis-driven)
+* **Dual Track Agile**
+* **Google Design Sprint (5 days)** – extremely helpful for testing user ideas
+* **Shape Up (Basecamp)** – for non-sprint-based work with a "bet" on 6-week projects
---
@@ -63,27 +64,28 @@ Möchtest du eine konkrete Vorlage für ein Exploration Backlog oder Hypothesenb
Thanks for the addition – that changes the starting position decisively and allows a much more precise assessment.
-### Neue Ausgangslage:
+### New starting position
-- Ihr **habt eine Architektur**: Die Komponenten und Schnittstellen sind klar.
-- Es gibt **bereits einen PoC** und eine Produktvision.
-- Das **Backlog fehlt**, weil ihr gerade die **"nicht-funktionalen" Themen wie Deployment, Cloud-Umgebung, Konfiguration und Betrieb** klärt.
-- Das System ist **technisch grob fertig designt**, aber die **Operations- und Deployment-Prozesse** sind unklar und werden explorativ entwickelt.
+* You **have an architecture**: the components and interfaces are clear.
+* There is **already a PoC** and a product vision.
+* The **backlog is missing** because you are currently clarifying the **"non-functional" topics such as deployment, cloud environment, configuration, and operations**.
+* The system's **technical design is roughly done**, but the **operations and deployment processes** are unclear and are being developed exploratively.
### Sharper feedback
#### 1. **Daily PDCA cycles are too granular in this context**
+
Infrastructure, deployment, or configuration questions often produce deep "spikes" (e.g. setting up a CI/CD pipeline, modeling a Helm chart, evaluating secrets handling). These often need more than one day to reach the "check", because dependencies arise (e.g. permissions, cloud access, test environments). A **2- to 3-day rhythm** with clear intermediate goals would be more realistic, complemented by:
-- **Daily Standup als Taktgeber**, aber nicht zwangsläufig als vollständiger PDCA-Zyklus.
-- **Weekly Planning mit Zielvorgaben und Review-Ritualen**, um Fortschritt messbar zu machen.
+* A **daily standup as the pacemaker**, but not necessarily a full PDCA cycle.
+* **Weekly planning with goals and review rituals** to make progress measurable.
#### 2. **What you are doing is not product delivery but "system enablement"**
You are in the transition from architecture to an **Infrastructure as Code + platform enablement track**. That means:
-- Die „User Stories“ sind keine klassischen Features, sondern z. B.:
- - „Als Team möchte ich unsere Software mit einem Befehl deployen können.“
- - „Als Betreiber will ich wissen, wie ich Services konfiguriere.“
+* The "user stories" are not classic features but, for example:
+ * "As a team, I want to be able to deploy our software with a single command."
+ * "As an operator, I want to know how to configure services."
That argues for an **infrastructure Kanban board**, not sprints.
@@ -100,29 +102,29 @@ Das spricht für ein **Infrastructure Kanban Board**, keine Sprints.
#### 4. **Concrete recommendations for structuring the work**
-- **Ziele für die 8 Wochen formulieren** als **"Capabilities"**, z. B.:
- - „Greenfield-Clouddeployment in AWS/GCP über GitHub Actions.“
- - „Parameterisierbares Helm-Chart mit Doku.“
- - „Konfigurierbarkeit über ENV vs. Secrets evaluieren.“
-- Dazu **1–2 Woche cadence**, jede Woche ein Review/Checkpoint.
-- **Boards mit Task-Kategorien**:
- - 🧪 Spike
- - 🛠️ Tooling/Automation
- - 📦 Infrastruktur/Dienst
- - 📓 Dokumentation
-- **Ergebnisse sofort versionieren und dokumentieren**, z. B.:
- - GitOps Repo-Struktur
- - ADRs
- - Beispiel-Deployments
+* **Ziele für die 8 Wochen formulieren** als **"Capabilities"**, z. B.:
+ * „Greenfield-Clouddeployment in AWS/GCP über GitHub Actions.“
+ * „Parameterisierbares Helm-Chart mit Doku.“
+ * „Konfigurierbarkeit über ENV vs. Secrets evaluieren.“
+* Dazu eine **1–2-Wochen-Cadence**, jede Woche ein Review/Checkpoint.
+* **Boards mit Task-Kategorien**:
+ * 🧪 Spike
+ * 🛠️ Tooling/Automation
+ * 📦 Infrastruktur/Dienst
+ * 📓 Dokumentation
+* **Ergebnisse sofort versionieren und dokumentieren**, z. B.:
+ * GitOps Repo-Struktur
+ * ADRs
+ * Beispiel-Deployments
---
### Fazit
Ihr seid im **"technisch operativen Enablement"** eures Systems, nicht im Produkt-Delivery. Das rechtfertigt:
-- ein **kanban-artiges, flexibles Modell mit timeboxed Reviews**,
-- tägliche Abstimmungen, aber nicht zwingend tägliche Zyklen,
-- Fokus auf **Enabler-Ziele statt funktionale Stories**.
+* ein **kanban-artiges, flexibles Modell mit timeboxed Reviews**,
+* tägliche Abstimmungen, aber nicht zwingend tägliche Zyklen,
+* Fokus auf **Enabler-Ziele statt funktionale Stories**.
SCRUM ist hier fehl am Platz, aber euer Ansatz ist sehr nahe an einer **modernen DevOps-orientierten Arbeitsweise**, die z. B. von **Platform Engineering Teams** eingesetzt wird.
@@ -191,10 +193,10 @@ Diese Capabilities bilden **Spalten** oder **Swimlanes** im Board (wenn euer Too
## 📅 Cadence-Empfehlung
-- **Daily Standup (15 min)**: Was läuft, was blockiert? Pull statt Push!
-- **Weekly Planning (Montag)**: Priorisieren und Backlog-Grooming (welche Capabilities sind diese Woche dran?)
-- **Weekly Review (Freitag)**: Erkenntnisse, Entscheidungen und Dokumentation präsentieren
-- Optional: **Retrospektive alle 2 Wochen**, falls das Team Energie hat
+* **Daily Standup (15 min)**: Was läuft, was blockiert? Pull statt Push!
+* **Weekly Planning (Montag)**: Priorisieren und Backlog-Grooming (welche Capabilities sind diese Woche dran?)
+* **Weekly Review (Freitag)**: Erkenntnisse, Entscheidungen und Dokumentation präsentieren
+* Optional: **Retrospektive alle 2 Wochen**, falls das Team Energie hat
---
@@ -264,4 +266,4 @@ Gerade in eurem Kontext (technisches Enablement, Deployment, Betrieb) ist das **
---
-Möchtest du eine Vorlage für ein ADR oder ein Beispielverzeichnis für eine sinnvolle Living-Docs-Struktur im Repo?
\ No newline at end of file
+Möchtest du eine Vorlage für ein ADR oder ein Beispielverzeichnis für eine sinnvolle Living-Docs-Struktur im Repo?
diff --git a/content/en/docs/project/bootstrapping/backup/_index.md b/content/en/docs/project/bootstrapping/backup/_index.md
index c9dd005..b4b31f1 100644
--- a/content/en/docs/project/bootstrapping/backup/_index.md
+++ b/content/en/docs/project/bootstrapping/backup/_index.md
@@ -39,6 +39,7 @@ velero install \
3. Delete `credentials.ini`, it is not needed anymore (a secret has been created in the cluster).
4. Create a schedule to back up the relevant resources in the cluster:
+
```
velero schedule create devfw-bootstrap --schedule="23 */2 * * *" "--include-namespaces=forgejo"
```
@@ -48,6 +49,7 @@ velero schedule create devfw-bootstrap --schedule="23 */2 * * *" "--include-name
You can now use Velero to create backups, restore them, or perform other operations. Please refer to the [Velero Documentation](https://velero.io/docs/main/backup-reference/).
To list all currently available backups:
+
```
velero backup get
```
diff --git a/content/en/docs/project/conceptual-onboarding/1_intro/_index.md b/content/en/docs/project/conceptual-onboarding/1_intro/_index.md
index 9fa9723..37aa4d0 100644
--- a/content/en/docs/project/conceptual-onboarding/1_intro/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/1_intro/_index.md
@@ -5,10 +5,12 @@ description: The 5-step storyflow of this Onboarding chapter
---
{{% pageinfo color="info" %}}
+
## Summary
-This onboarding section is for you when are new to IPCEI-CIS subproject 'Edge Developer Framework (EDF)' and you want to know about
-* its context to 'Platform Engineering'
+This onboarding section is for you when you are new to the IPCEI-CIS subproject 'Edge Developer Framework (EDF)' and you want to know about
+
+* its relation to 'Platform Engineering'
* and why we think it's the stuff we need to care about in the EDF
{{% /pageinfo %}}
@@ -41,9 +43,7 @@ Please do not think this story and the underlying assumptions are carved in ston
## Your role as 'Framework Engineer' in the Domain Architecture
-Pls be aware of the the following domain and task structure of our mission:
+Please be aware of the following domain and task structure of our mission:

-
-
diff --git a/content/en/docs/project/conceptual-onboarding/2_edge-developer-framework/_index.md b/content/en/docs/project/conceptual-onboarding/2_edge-developer-framework/_index.md
index 8da5935..452461a 100644
--- a/content/en/docs/project/conceptual-onboarding/2_edge-developer-framework/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/2_edge-developer-framework/_index.md
@@ -5,10 +5,11 @@ description: Driving requirements for a platform
---
{{% pageinfo color="info" %}}
+
## Summary
-The 'Edge Developer Framework' is both the project and the product we are working for. Out of the leading 'Portfolio Document'
-we derive requirements which are ought to be fulfilled by Platform Engineering.
+The 'Edge Developer Framework' is both the project and the product we are working on. From the leading 'Portfolio Document'
+we derive requirements which ought to be fulfilled by Platform Engineering.
**This is our claim!**
@@ -26,6 +27,7 @@ e. Development of DTAG/TSI Edge Developer Framework
* Goal: All developed innovations must be accessible to developer communities in a **highly user-friendly and easy way**
### Development of DTAG/TSI Edge Developer Framework (p.14)
+
| capability | major novelties |||
| -- | -- | -- | -- |
| e.1. Edge Developer full service framework (SDK + day1 +day2 support for edge installations) | Adaptive CI/CD pipelines for heterogeneous edge environments | Decentralized and self healing deployment and management | edge-driven monitoring and analytics |
@@ -34,22 +36,23 @@ e. Development of DTAG/TSI Edge Developer Framework
### DTAG objectives & contributions (p.27)
-DTAG will also focus on developing an easy-to-use **Edge Developer framework for software
+DTAG will also focus on developing an easy-to-use **Edge Developer framework for software
developers** to **manage the whole lifecycle of edge applications**, i.e. for **day-0-, day-1- and up to day-2-
-operations**. With this DTAG will strongly enable the ecosystem building for the entire IPCEI-CIS edge to
-cloud continuum and ensure openness and accessibility for anyone or any company to make use and
-further build on the edge to cloud continuum. Providing the use of the tool framework via an open-source approach will further reduce entry barriers and enhance the openness and accessibility for anyone or
+operations**. With this DTAG will strongly enable the ecosystem building for the entire IPCEI-CIS edge to
+cloud continuum and ensure openness and accessibility for anyone or any company to make use and
+further build on the edge to cloud continuum. Providing the use of the tool framework via an open-source approach will further reduce entry barriers and enhance the openness and accessibility for anyone or
any organization (see innovations e.).
### WP Deliverables (p.170)
e.1 Edge developer full-service framework
-This tool set and related best practices and guidelines will **adapt, enhance and further innovate DevOps principles** and
-their related, necessary supporting technologies according to the specific requirements and constraints associated with edge or edge cloud development, in order to keep the healthy and balanced innovation path on both sides,
+This tool set and related best practices and guidelines will **adapt, enhance and further innovate DevOps principles** and
+their related, necessary supporting technologies according to the specific requirements and constraints associated with edge or edge cloud development, in order to keep the healthy and balanced innovation path on both sides,
the (software) development side and the operations side in the field of DevOps.
{{% pageinfo color="info" %}}
+
### What comes next?
[Next](../platforming/) we'll see how these requirements seem to be fulfilled by platforms!
diff --git a/content/en/docs/project/conceptual-onboarding/3_platforming/_index.md b/content/en/docs/project/conceptual-onboarding/3_platforming/_index.md
index 6a41b34..48f790f 100644
--- a/content/en/docs/project/conceptual-onboarding/3_platforming/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/3_platforming/_index.md
@@ -7,17 +7,18 @@ description: DevOps is dead - long live next level DevOps in platforms
{{% pageinfo color="info" %}}
+
## Summary
-Since 2010 we have DevOps. This brings increasing delivery speed and efficiency at scale.
-But next we got high 'cognitive loads' for developers and production congestion due to engineering lifecycle complexity.
+We have had DevOps since 2010. It brings increasing delivery speed and efficiency at scale.
+But next we got high 'cognitive load' for developers and production congestion due to engineering lifecycle complexity.
So on top of DevOps we need an instrumentation to ensure and enforce speed, quality and security in modern, cloud native software development. This instrumentation is called 'golden paths' in internal developer platforms (IDPs).
{{% /pageinfo %}}
## History of Platform Engineering
-Let's start with a look into the history of platform engineering. A good starting point is [Humanitec](https://humanitec.com/), as they nowadays are one of the biggest players (['the market leader in IDPs.'](https://internaldeveloperplatform.org/#how-we-curate-this-site)) in platform engineering.
+Let's start with a look into the history of platform engineering. A good starting point is [Humanitec](https://humanitec.com/), as they nowadays are one of the biggest players (['the market leader in IDPs.'](https://internaldeveloperplatform.org/#how-we-curate-this-site)) in platform engineering.
They create lots of [beautiful articles and insights](https://humanitec.com/blog), their own [platform products](https://humanitec.com/products/) and [basic concepts for the platform architecture](https://humanitec.com/platform-engineering) (we'll meet this later on!).
@@ -51,7 +52,7 @@ There is a CNCF working group which provides the definition of [Capabilities of
### Platform Engineering Team
-Or, in another illustration for the platform as a developer service interface, which also defines the **'Platform Engineering Team'** inbetween:
+Or, in another illustration for the platform as a developer service interface, which also defines the **'Platform Engineering Team'** inbetween:
@@ -70,7 +71,7 @@ First of all some important wording to motivate the important term 'internal dev
[Capabilities of platforms](https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms)
-### Ecosystems in InternalDeveloperPlatform
+### Ecosystems in InternalDeveloperPlatform
Build or buy - in platform engineering, too, this is a much-debated question, which one of the oldest players answers like this, with some opinionated internal capability structuring:
@@ -78,6 +79,7 @@ Build or buy - this is also in pltaform engineering a tweaked discussion, which
{{% pageinfo color="info" %}}
+
### What comes next?
[Next](../orchestrators/) we'll see how these concepts got structured!
@@ -87,7 +89,7 @@ Build or buy - this is also in pltaform engineering a tweaked discussion, which
### Digital Platform definition from [What we **call** a Platform](https://martinfowler.com/articles/talk-about-platforms.html)
-> Words are hard, it seems. ‘Platform’ is just about the most ambiguous term we could use for an approach that is super-important for increasing delivery speed and efficiency at scale. Hence the title of this article, here is what I’ve been talking about most recently.
+> Words are hard, it seems. ‘Platform’ is just about the most ambiguous term we could use for an approach that is super-important for increasing delivery speed and efficiency at scale. Hence the title of this article, here is what I’ve been talking about most recently.
\
Definitions for software and hardware platforms abound, generally describing an operating environment upon which applications can execute and which provides reusable capabilities such as file systems and security.
\
diff --git a/content/en/docs/project/conceptual-onboarding/4_orchestrators/_index.md b/content/en/docs/project/conceptual-onboarding/4_orchestrators/_index.md
index 11f4446..29b4486 100644
--- a/content/en/docs/project/conceptual-onboarding/4_orchestrators/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/4_orchestrators/_index.md
@@ -5,9 +5,11 @@ description: Next level platforming is orchestrating platforms
---
{{% pageinfo color="info" %}}
+
## Summary
-When defining and setting up platforms next two intrinsic problems arise:
+When defining and setting up platforms, two intrinsic problems arise next:
+
1. it is not declarative and automated
2. it is not, or at least not easily, changeable
@@ -33,10 +35,11 @@ https://humanitec.com/reference-architectures
-> Hint: There is a [slides tool provided by McKinsey](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures) to set up your own platform deign based on the reference architecture
+> Hint: There is a [slides tool provided by McKinsey](https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures) to set up your own platform design based on the reference architecture
{{% pageinfo color="info" %}}
+
### What comes next?
[Next](../cnoe/) we'll see how we are going to do platform orchestration with CNOE!
@@ -50,4 +53,3 @@ You remember the [capability mappings from the time before orchestration](../pla
-
diff --git a/content/en/docs/project/conceptual-onboarding/5_cnoe/_index.md b/content/en/docs/project/conceptual-onboarding/5_cnoe/_index.md
index 3788735..19b2e67 100644
--- a/content/en/docs/project/conceptual-onboarding/5_cnoe/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/5_cnoe/_index.md
@@ -5,6 +5,7 @@ description: Our top candidate for a platform orchestrator
---
{{% pageinfo color="info" %}}
+
## Summary
In late 2023 platform orchestration arose - the discipline of declaratively defining, building, orchestrating and reconciling the building blocks of (digital) platforms.
@@ -17,6 +18,7 @@ Thus we were looking for open source means for platform orchestrating and found
## Requirements for an Orchestrator
When we want to set up a [complete platform](../platforming/platforms-def.drawio.png) we expect to have
+
* a **schema** which defines the platform, its resources and internal behaviour
* a **dynamic configuration or templating mechanism** to provide a concrete specification of a platform
* a **deployment mechanism** to deploy and reconcile the platform
@@ -55,6 +57,7 @@ There are already some example stacks:
{{% pageinfo color="info" %}}
+
### What comes next?
[Next](../cnoe-showtime/) we'll see how a CNOE-stacked Internal Developer Platform is deployed on your local laptop!
diff --git a/content/en/docs/project/conceptual-onboarding/6_cnoe-showtime/_index.md b/content/en/docs/project/conceptual-onboarding/6_cnoe-showtime/_index.md
index ab7be8e..a741445 100644
--- a/content/en/docs/project/conceptual-onboarding/6_cnoe-showtime/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/6_cnoe-showtime/_index.md
@@ -5,9 +5,10 @@ description: CNOE hands on
---
{{% pageinfo color="info" %}}
+
## Summary
-CNOE is a 'Platform Engineering Framework' (Danger: Our wording!) - it is open source and locally runnable.
+CNOE is a 'Platform Engineering Framework' (Danger: Our wording!) - it is open source and locally runnable.
It consists of the orchestrator 'idpbuilder', some predefined building blocks, and some predefined platform configurations.
@@ -87,7 +88,7 @@ It's an important feature of idpbuilder that it will set up on an existing clust
That's why we first create the kind cluster `localdev` itself here:
-```bash
+```bash
cat << EOF | kind create cluster --name localdev --config=-
# Kind kubernetes release images https://github.com/kubernetes-sigs/kind/releases
kind: Cluster
@@ -137,7 +138,7 @@ kube-system kube-scheduler-localdev-control-plane 1/1 Ru
local-path-storage local-path-provisioner-6f8956fb48-6fvt2 1/1 Running 0 15s
```
-### First run: Start with core applications, 'core package'
+### First run: Start with core applications, 'core package'
Now we run idpbuilder the first time:
@@ -149,7 +150,7 @@ ib create --use-path-routing
#### Output
-##### idpbuilder log
+##### idpbuilder log
```bash
stl@ubuntu-vpn:~/git/mms/idpbuilder$ ib create --use-path-routing
@@ -243,7 +244,7 @@ Data:
username : giteaAdmin
```
-In ArgoCD you will see the deployed three applications of the core package:
+In ArgoCD you will see the three deployed applications of the core package:

@@ -302,7 +303,7 @@ drwxr-xr-x 4 stl stl 4096 Jul 29 10:57 ..
Now we run idpbuilder the second time with `-p basic/package1`
-##### idpbuilder log
+##### idpbuilder log
```bash
stl@ubuntu-vpn:~/git/mms/cnoe-stacks$ ib create --use-path-routing -p basic/package1
@@ -572,9 +573,10 @@ Next wait a bit until Gitops does its magic and our 'wanted' state in the repo g

{{% pageinfo color="info" %}}
+
### What comes next?
The showtime of CNOE high-level behaviour and usage scenarios is now finished. We set up an initial IDP and used a Backstage golden path to init and deploy a simple application.
-[Last not least](../conclusio/) we want to sum up the whole way from Devops to 'Frameworking' (is this the correct wording???)
+[Last but not least](../conclusio/) we want to sum up the whole way from DevOps to 'Frameworking' (is this the correct wording???)
{{% /pageinfo %}}
diff --git a/content/en/docs/project/conceptual-onboarding/7_conclusio/README.md b/content/en/docs/project/conceptual-onboarding/7_conclusio/README.md
index 769478d..a1027c9 100644
--- a/content/en/docs/project/conceptual-onboarding/7_conclusio/README.md
+++ b/content/en/docs/project/conceptual-onboarding/7_conclusio/README.md
@@ -9,10 +9,10 @@ docker commit likec4 likec4
docker run -it --rm --user node -v $PWD:/app -p 5173:5173 likec4 bash
// as root
-npx playwright install-deps
+npx playwright install-deps
npx playwright install
npm install likec4
// render
-node@e20899c8046f:/app/content/en/docs/project/onboarding$ ./node_modules/.bin/likec4 export png -o ./images .
\ No newline at end of file
+node@e20899c8046f:/app/content/en/docs/project/onboarding$ ./node_modules/.bin/likec4 export png -o ./images .
diff --git a/content/en/docs/project/conceptual-onboarding/7_conclusio/_index.md b/content/en/docs/project/conceptual-onboarding/7_conclusio/_index.md
index da262e3..f76269f 100644
--- a/content/en/docs/project/conceptual-onboarding/7_conclusio/_index.md
+++ b/content/en/docs/project/conceptual-onboarding/7_conclusio/_index.md
@@ -5,6 +5,7 @@ description: 'Summary and final thoughts: Always challenge theses concepts, accu
---
{{% pageinfo color="info" %}}
+
## Summary
In the project 'Edge Developer Framework' we start with DevOps, set platforms on top to automate golden paths, and finally set 'frameworks' (aka 'Orchestrators') on top to have declarative, automated and reconcilable platforms.
@@ -14,7 +15,7 @@ In the project 'Edge Developer Framework' we start with DevOps, set platforms on
## From Devops over Platform to Framework Engineering
-We come along from a quite well known, but already complex discipline called 'Platform Engineering', which is the next level devops.
+We come from a quite well-known, but already complex discipline called 'Platform Engineering', which is the next level of DevOps.
On top of these two domains we now have 'Framework Engineering', i.e. building dynamic, orchestrated and reconciling platforms:
| Classic Platform engineering | New: Framework Orchestration on top of Platforms | Your job: Framework Engineer |
@@ -23,11 +24,12 @@ On top of these two domains we now have 'Framework Engineering', i.e. buildung d
## The whole picture of engineering
-So always keep in mind that as as 'Framework Engineer' you
- * include the skills of a platform and a devops engineer,
- * you do Framework, Platform and Devops Engineering at the same time
- * and your results have impact on Frameworks, Platforms and Devops tools, layers, processes.
+So always keep in mind that as a 'Framework Engineer' you
+
+* include the skills of a platform and a devops engineer,
+* you do Framework, Platform and Devops Engineering at the same time
+* and your results have impact on Frameworks, Platforms and Devops tools, layers, processes.
The following diamond illustrates this: on top is you, at the bottom is our baseline 'DevOps'
-
\ No newline at end of file
+
diff --git a/content/en/docs/project/conceptual-onboarding/storyline.md b/content/en/docs/project/conceptual-onboarding/storyline.md
index 11d2997..b49b373 100644
--- a/content/en/docs/project/conceptual-onboarding/storyline.md
+++ b/content/en/docs/project/conceptual-onboarding/storyline.md
@@ -1,5 +1,5 @@
-## Storyline
+## Storyline
1. We have the 'Developer Framework'
2. We think the solution for DF is 'Platforming' (Digital Platforms)
@@ -25,4 +25,3 @@
## Architecture
-
diff --git a/content/en/docs/project/intro-stakeholder-workshop/_index.md b/content/en/docs/project/intro-stakeholder-workshop/_index.md
index 63b29e9..a3cf645 100644
--- a/content/en/docs/project/intro-stakeholder-workshop/_index.md
+++ b/content/en/docs/project/intro-stakeholder-workshop/_index.md
@@ -36,7 +36,7 @@ linktitle: Stakeholder Workshops

* from 'left' to 'right' - plan to monitor
-* 'leftshift'
+* 'leftshift'
* --> turns out to be a right shift for developers with cognitive overload
* 'DevOps is dead' -> we need Platforms
@@ -64,7 +64,7 @@ Here is a small list of companies alrteady using IDPs:
* Autodesk
* Adobe
* Cisco
-* ...
+* ...
## 3 Platform building by 'Orchestrating'
@@ -91,5 +91,3 @@ Sticking together the platforming orchestration concept, the reference architect
This will now be presented! Enjoy!
-
-
diff --git a/content/en/docs/project/plan-in-2024/_index.md b/content/en/docs/project/plan-in-2024/_index.md
index 42c9e7f..7dd8b50 100644
--- a/content/en/docs/project/plan-in-2024/_index.md
+++ b/content/en/docs/project/plan-in-2024/_index.md
@@ -5,7 +5,7 @@ description: The planned project workload in 2024
---
-## First Blue Print in 2024
+## First Blue Print in 2024
Our first architectural blue print for the IPCEI-CIS Developer Framework derives from Humanitec's Reference Architecture, see links in [Blog](../../blog/240823-archsession.md)
@@ -39,6 +39,7 @@ https://confluence.telekom-mms.com/display/IPCEICIS/Architecture
7) Wildcard Domain ?? --> Eher ja
Next Steps: (Vorschlag: in den nächsten 2 Wochen)
+
1. Florian spezifiziert an Tobias
2. Tobias stellt bereit, kubeconfig kommt an uns
3. wir deployen
diff --git a/content/en/docs/project/plan-in-2024/poc/_index.md b/content/en/docs/project/plan-in-2024/poc/_index.md
index b49e260..ef03359 100644
--- a/content/en/docs/project/plan-in-2024/poc/_index.md
+++ b/content/en/docs/project/plan-in-2024/poc/_index.md
@@ -12,4 +12,4 @@ Presented and approved on tuesday, 26.11.2024 within the team:
The use cases/application lifecycle and deployment flow are drawn here: https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024
-
\ No newline at end of file
+
diff --git a/content/en/docs/project/plan-in-2024/streams/_index.md b/content/en/docs/project/plan-in-2024/streams/_index.md
index d060a96..2f65326 100644
--- a/content/en/docs/project/plan-in-2024/streams/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/_index.md
@@ -3,9 +3,10 @@ title: Workstreams
weight: 2
---
-This page is WiP (23.8.2024).
+This page is WiP (23.8.2024).
> Continued discussion on 29th Aug 24
+>
> * idea: Top down mit SAFe, Value Streams
> * parallel dazu bottom up (die z. B. aus den technisch/operativen Tätigkeiten entstehen)
> * Scrum Master?
@@ -14,7 +15,7 @@ This page is WiP (23.8.2024).
Stefan and Stephan try to solve the mission 'wir wollen losmachen'.
-**Solution Idea**:
+**Solution Idea**:
1. First we define a **rough overall structure (see 'streams')** and propose some initial **activities** (like user stories) within them.
1. Next we work in **iterative steps**, iteratively producing progress, knowledge and outcomes in these activities.
diff --git a/content/en/docs/project/plan-in-2024/streams/deployment/_index.md b/content/en/docs/project/plan-in-2024/streams/deployment/_index.md
index ed797a0..cae759b 100644
--- a/content/en/docs/project/plan-in-2024/streams/deployment/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/deployment/_index.md
@@ -3,12 +3,13 @@ title: Deployment
weight: 3
---
-> **Mantra**:
-> 1. Everything as Code.
-> 1. Cloud natively deployable everywhere.
-> 1. Ramping up and tearing down oftenly is a no-brainer.
+> **Mantra**:
+>
+> 1. Everything as Code.
+> 1. Cloud natively deployable everywhere.
+> 1. Ramping up and tearing down often is a no-brainer.
> 1. Especially locally (whereby 'locally' means 'under my own control')
## Entwurf (28.8.24)
-
\ No newline at end of file
+
diff --git a/content/en/docs/project/plan-in-2024/streams/deployment/forgejo/_index.md b/content/en/docs/project/plan-in-2024/streams/deployment/forgejo/_index.md
index 7e10216..de68795 100644
--- a/content/en/docs/project/plan-in-2024/streams/deployment/forgejo/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/deployment/forgejo/_index.md
@@ -4,7 +4,7 @@ linkTitle: Forgejo
weight: 1
---
-> **WiP** Ich (Stephan) schreibe mal schnell einige Stichworte, was ich so von Stefan gehört habe:
+> **WiP** Ich (Stephan) schreibe mal schnell einige Stichworte, was ich so von Stefan gehört habe:
## Summary
@@ -33,4 +33,4 @@ tbd
* STBE deployed mit Helm in bereitgestelltes OTC-Kubernetes
* erstmal interne User Datenbank nutzen
-* dann ggf. OIDC mit vorhandenem Keycloak in der OTC anbinden
\ No newline at end of file
+* dann ggf. OIDC mit vorhandenem Keycloak in der OTC anbinden
diff --git a/content/en/docs/project/plan-in-2024/streams/fundamentals/_index.md b/content/en/docs/project/plan-in-2024/streams/fundamentals/_index.md
index fd8d2df..13d90d3 100644
--- a/content/en/docs/project/plan-in-2024/streams/fundamentals/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/fundamentals/_index.md
@@ -15,4 +15,4 @@ weight: 1
### nice article about platform orchestration automation (introducing BACK stack)
-* https://dev.to/thenjdevopsguy/creating-your-platform-engineering-environment-4hpa
\ No newline at end of file
+* https://dev.to/thenjdevopsguy/creating-your-platform-engineering-environment-4hpa
diff --git a/content/en/docs/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md b/content/en/docs/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md
index 2565522..54259ed 100644
--- a/content/en/docs/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/fundamentals/cicd-definition/_index.md
@@ -12,9 +12,10 @@ Der Produktionsprozess für Applikationen soll im Kontext von Gitops und Plattfo
In Gitops-basierten Plattformen (Anm.: wie es z. B. CNOE und Humanitec mit Argo CD sind) trifft das klassische Verständnis von Pipelining mit finalem Pushing des fertigen Builds auf die Target-Plattform nicht mehr zu.
-D.h. in diesem fall is Argo CD = Continuous Delivery = Pulling des desired state auf die Target plattform. Eine pipeline hat hier keien Rechte mehr, single source of truth ist das 'Control-Git'.
+D.h. in diesem Fall ist Argo CD = Continuous Delivery = Pulling des Desired State auf die Target-Plattform. Eine Pipeline hat hier keine Rechte mehr, Single Source of Truth ist das 'Control-Git'.
D.h. es stellen sich zwei Fragen:
+
1. Wie sieht der adaptierte Workflow aus, der die 'Single Source of Truth' im 'Control-Git' definiert? Was ist das gewünschte korrekte Wording? Was bedeuten CI und CD in diesem (neuen) Kontext? Auf welchen Environments laufen Steps (z. B. Funktionstests), die eben nicht mehr auf einer gitops-kontrollierten Stage laufen?
2. Wie sieht der Workflow für 'Events' aus, die nach dem CI/CD in die Single Source of Truth einfließen? Z. B. Abnahmen auf einer Abnahme-Stage oder Integrationsprobleme auf einer Test-Stage
@@ -22,9 +23,9 @@ D.h. es stellen sich zwei Fragen:
* Es sollen existierende, typische Pipelines hergenommen werden und auf die oben skizzierten Fragestellungen hin untersucht und angepasst werden.
* In lokalen Demo-Systemen (mit oder ohne CNOE aufgesetzt) sollen die Pipeline-Entwürfe dummyhaft dargestellt werden und lauffähig sein.
-* Für den POC sollen Workflow-Systeme wie Dagger, Argo Workflow, Flux, Forgejo Actions zum Einsatz kommen.
+* Für den POC sollen Workflow-Systeme wie Dagger, Argo Workflow, Flux, Forgejo Actions zum Einsatz kommen.
## Further ideas for PoCs
-* see sample flows in https://docs.kubefirst.io/
\ No newline at end of file
+* see sample flows in https://docs.kubefirst.io/
diff --git a/content/en/docs/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md b/content/en/docs/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md
index c4d21c9..33d842d 100644
--- a/content/en/docs/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/fundamentals/platform-definition/_index.md
@@ -4,14 +4,14 @@ linkTitle: Platform Definition
weight: 1
---
-## Summary
+## Summary
-Das theoretische Fundament unserer Plattform-Architektur soll begründet und weitere wesentliche Erfahrungen anderer Player durch Recherche erhoben werden, so dass unser aktuelles Zielbild abgesichert ist.
+Das theoretische Fundament unserer Plattform-Architektur soll begründet und weitere wesentliche Erfahrungen anderer Player durch Recherche erhoben werden, so dass unser aktuelles Zielbild abgesichert ist.
## Rationale
-Wir starten gerade auf der Bais des Referenzmodells zu Platform-Engineering von Gartner und Huamitec.
-Es gibt viele weitere Grundlagen und Entwicklungen zu Platform Engineering.
+Wir starten gerade auf der Basis des Referenzmodells zu Platform Engineering von Gartner und Humanitec.
+Es gibt viele weitere Grundlagen und Entwicklungen zu Platform Engineering.
## Task
@@ -25,6 +25,3 @@ Es gibt viele weitere Grundlagen und Entwicklungen zu Platform Engineering.
* Argumentation für unseren Weg zusammentragen.
* Best Practices und wichtige Tipps und Erfahrungen zusammentragen.
-
-
-
diff --git a/content/en/docs/project/plan-in-2024/streams/pocs/_index.md b/content/en/docs/project/plan-in-2024/streams/pocs/_index.md
index fb81dfc..c83523b 100644
--- a/content/en/docs/project/plan-in-2024/streams/pocs/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/pocs/_index.md
@@ -5,4 +5,4 @@ weight: 2
## Further ideas for PoCs
-* see sample apps 'metaphor' in https://docs.kubefirst.io/
\ No newline at end of file
+* see sample apps 'metaphor' in https://docs.kubefirst.io/
diff --git a/content/en/docs/project/plan-in-2024/streams/pocs/cnoe/_index.md b/content/en/docs/project/plan-in-2024/streams/pocs/cnoe/_index.md
index 1a48228..9d0f324 100644
--- a/content/en/docs/project/plan-in-2024/streams/pocs/cnoe/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/pocs/cnoe/_index.md
@@ -11,7 +11,7 @@ Als designiertes Basis-Tool des Developer Frameworks sollen die Verwendung und d
## Rationale
-CNOE ist das designierte Werkzeug zur Beschreibung und Ausspielung des Developer Frameworks.
+CNOE ist das designierte Werkzeug zur Beschreibung und Ausspielung des Developer Frameworks.
Dieses Werkzeug gilt es zu erlernen, zu beschreiben und weiterzuentwickeln.
Insbesondere der Metacharakter der 'Software zur Bereitstellung von Bereitstellungssoftware für Software', d.h. die unterschiedlichen Ebenen für unterschiedliche Use Cases und Akteure, soll klar verständlich und dokumentiert werden. Siehe hierzu auch das Webinar von Humanitec und die Diskussion zu unterschiedlichen Bereitstellungsmethoden eines Redis-Caches.
@@ -29,4 +29,3 @@ Insbesondere der Metacharkter des 'Software zur Bereitstellung von Bereitstellun
* k3d instead of kind
* kind: possibly an issue with kindnet; replace it with Cilium
-
diff --git a/content/en/docs/project/plan-in-2024/streams/pocs/sia-asset/_index.md b/content/en/docs/project/plan-in-2024/streams/pocs/sia-asset/_index.md
index 147f8a8..60967a9 100644
--- a/content/en/docs/project/plan-in-2024/streams/pocs/sia-asset/_index.md
+++ b/content/en/docs/project/plan-in-2024/streams/pocs/sia-asset/_index.md
@@ -47,4 +47,4 @@ graph TB
LocalBox.EDF -. "provision" .-> LocalBox.Local
LocalBox.EDF -. "provision" .-> CloudGroup.Prod
LocalBox.EDF -. "provision" .-> CloudGroup.Test
-```
\ No newline at end of file
+```
diff --git a/content/en/docs/project/team-process/_index.md b/content/en/docs/project/team-process/_index.md
index 1252cc0..93c2b62 100644
--- a/content/en/docs/project/team-process/_index.md
+++ b/content/en/docs/project/team-process/_index.md
@@ -16,13 +16,13 @@ We currently face the following [challenges in our process](https://confluence.t
1. missing team alignment on PoC output across all components
1. Action: team is committed to **clearly defined PoC capabilities**
1. Action: each team member is aware of **individual and common work** to be done (backlog) to achieve the PoC
-1. missing concept for repository (process, structure,
+1. missing concept for repository (process, structure,
1. Action: the **PoC has a robust repository concept** up & running
1. Action: repo concept is applicable to other repositories as well (esp. the documentation repo)
### General working context
-A **project goal** drives us as a **team** to create valuable **product output**.
+A **project goal** drives us as a **team** to create valuable **product output**.
The **backlog** contains the product specification, which guides our work in **tasks** with the help and usage of **resources** (like git, 3rd party code, knowledge, and so on).
@@ -104,7 +104,7 @@ Most important in the POC structure are:
#### Definition of Done
1. Jira: there is a final comment summarizing the outcome (in a bit more verbose form than just the 'resolution' of the ticket) and the main outputs. This may typically be a link to the commit and/or pull request of the final repo state
-2. Git/Repo: there is a README.md in the root of the repo. It summarizes in a typical Gihub-manner how to use the repo, so that it does what it is intended to do and reveals all the bells and whistles of the repo to the consumer. If the README doesn't lead to the usable and recognizable added value the work is not done!
+2. Git/Repo: there is a README.md in the root of the repo. It summarizes in a typical GitHub manner how to use the repo, so that it does what it is intended to do and reveals all the bells and whistles of the repo to the consumer. If the README doesn't lead to usable and recognizable added value, the work is not done!
#### Review
@@ -133,7 +133,7 @@ The following topics are optional and do not need an agreement at the moment:
## Status of POC Capabilities
-The following table lists an analysis of the status of the ['Funcionality validation' of the POC](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024).
+The following table lists an analysis of the status of the ['Functionality validation' of the POC](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024).
Assumption: These functionalities should be the aforementioned capabilities.
-
\ No newline at end of file
+
diff --git a/content/en/docs/solution/design/architectural-documentation.md b/content/en/docs/solution/design/architectural-documentation.md
index ddbaad3..0988144 100644
--- a/content/en/docs/solution/design/architectural-documentation.md
+++ b/content/en/docs/solution/design/architectural-documentation.md
@@ -1,9 +1,10 @@
-# why we have architectural documentation
+# why we have architectural documentation
TN: Robert, Patrick, Stefan, Stephan
25.2.25, 13-14h
## referring Tickets / Links
+
* https://jira.telekom-mms.com/browse/IPCEICIS-2424
* https://jira.telekom-mms.com/browse/IPCEICIS-478
* Confluence: https://confluence.telekom-mms.com/display/IPCEICIS/Architecture
@@ -20,13 +21,12 @@ we need charts, because:
(**) marker: ????
-
## types of charts
* layer model (frontend, middleware, backend)
* 'bebauungsplan' (development plan) with dependencies, domains
* context from the outside
-* komponentendiagramm,
+* component diagram
## decisions
@@ -36,4 +36,4 @@ we need charts, because:
* runbook (compare to openbao discussions)
* persistence of the EDP configuration (e.g. postgres)
-* OIDC vs. SSI
\ No newline at end of file
+* OIDC vs. SSI
diff --git a/content/en/docs/solution/design/architectural-work-structure.md b/content/en/docs/solution/design/architectural-work-structure.md
index 0dbe0e6..676ec2e 100644
--- a/content/en/docs/solution/design/architectural-work-structure.md
+++ b/content/en/docs/solution/design/architectural-work-structure.md
@@ -6,30 +6,37 @@ Sebastiano, Stefan, Robert, Patrick, Stephan
25.2.25, 14-15h
## links
+
* https://confluence.telekom-mms.com/display/IPCEICIS/Team+Members
# monday call
+
* Sebastiano in the monday call, including florian, at least interim, as long as we have no architecture 'foreign minister'
# workshops
+
* after alignment with hasan on platform workshops
* further participation in further workshop series to be defined
# program alignment
+
* find sponsors
* resolves itself through the workshop series
# internal architects
+
* robert and patrick are joining
* topic split
# product structure
+
edp standalone
ipcei edp
# architecture topics
## stl
+
product structure
application model (cnoe, oam, score, xrd, ...)
api
@@ -45,29 +52,34 @@ security
monitoring
kubernetes internals
-## robert:
+## robert
+
pipelining
kubernetes internals
api
crossplane
platforming - creating resources in 'clouds' (e.g. gcp, and hetzner :-) )
-## patrick:
+## patrick
+
security
identity-mgmt (SSI)
EaC
and everything else is great fun for me too!
-# einschätzungen:
+# assessments
+
* ipceicis-platform is the most important subproject (hasan + patrick)
-* offener punkt: workload-steuerung, application model (kompatibility mit EDP)
+* open point: workload control, application model (compatibility with EDP)
* topic security, see ssi vs. oidc
* we need dedicated workshops to define the collaboration modes
# commitments
+
* patrick and robert take part in architecture
# open
+
* sebastian schwaar onboarding? (>=50%) --- robert will ask
- * alternative: consulting/support anfallsweise
- * hält eine kubernetes einführungs-schulung --> termine sind zu vereinbaren (liegt bei sophie)
\ No newline at end of file
+  * alternative: consulting/support on demand
+  * gives a kubernetes introduction training --> dates to be agreed (with sophie)
diff --git a/content/en/docs/solution/design/crossplane-vs-terraform.md b/content/en/docs/solution/design/crossplane-vs-terraform.md
index 00294cc..6d27389 100644
--- a/content/en/docs/solution/design/crossplane-vs-terraform.md
+++ b/content/en/docs/solution/design/crossplane-vs-terraform.md
@@ -2,7 +2,7 @@
* Monday, March 31, 2025
-## Issue
+## Issue
Robert worked on the kindserver reconciling.
@@ -12,12 +12,12 @@ Even worse, if crossplane did delete the cluster and then set it up again correc
## Decisions
-1. quick solution: crosspllane doesn't delete clusters.
+1. quick solution: crossplane doesn't delete clusters.
* If it detects drift with a kind cluster, it shall create an alert (like email) but not act in any way
-2. analyze how crossplane orchestration logic calls 'business logic' to decide what to do.
+2. analyze how crossplane orchestration logic calls 'business logic' to decide what to do.
   * In this logic we could decide whether to delete resources like clusters and, if so, how. Secondly, an 'orchestration', or let's say a workflow, for correctly restoring the old state with respect to argocd could be implemented there.
3. keep terraform in mind
* we probably will need it in adapters anyway
   * if the crossplane design does not fit, or the benefit is too small, or we definitely have more resources for developing terraform, then we could completely switch
4. focus on EDP domain and application logic
- * for the momen (in MVP1) we need to focus on EDP higher level functionality
\ No newline at end of file
+   * for the moment (in MVP1) we need to focus on EDP higher-level functionality
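Decision 1 above can be sketched as plain pseudologic. This is an illustrative shell sketch, not actual crossplane code; the variable names and the cluster name are invented for the example.

```shell
# Illustrative sketch of decision 1 (not crossplane code): on detected
# drift, record an alert action instead of deleting the resource.
desired="kind-cluster-a"   # what the control plane expects to exist
actual=""                  # observed state: the cluster is gone

if [ "$actual" != "$desired" ]; then
  action="alert"           # notify (email, webhook, ...) - never delete
  echo "ALERT: drift detected for $desired - no destructive action taken"
else
  action="none"
fi
```

The point of the sketch is the asymmetry: drift detection is allowed to observe and notify, but the destructive branch simply does not exist.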
diff --git a/content/en/docs/solution/design/decision-iam-and-edf-self-containment.md b/content/en/docs/solution/design/decision-iam-and-edf-self-containment.md
index 2ec75aa..8e31282 100644
--- a/content/en/docs/solution/design/decision-iam-and-edf-self-containment.md
+++ b/content/en/docs/solution/design/decision-iam-and-edf-self-containment.md
@@ -26,6 +26,6 @@ Each embdding into customer infrastructure works with adapters which implement t
eDF has its own IAM. This may either hold the principals and permissions itself when there is no other IAM, or proxy and map them when integrated into external enterprise IAMs.
-## Reference
+## Reference
-Arch call from 4.12.24, Florian, Stefan, Stephan-Pierre
\ No newline at end of file
+Arch call from 4.12.24, Florian, Stefan, Stephan-Pierre
diff --git a/content/en/docs/solution/design/platform-component.md b/content/en/docs/solution/design/platform-component.md
index d20c0c4..76046c4 100644
--- a/content/en/docs/solution/design/platform-component.md
+++ b/content/en/docs/solution/design/platform-component.md
@@ -3,37 +3,40 @@
# platform-team austausch
## stefan
+
* initiale fragen:
* vor 2 wochen workshop tapeten-termin
* wer nimmt an den workshops teil?
* was bietet platform an?
* EDP: könnte 5mio/a kosten
- * -> produkt pitch mit marko
- * -> edp ist unabhängig von ipceicis cloud continuum*
- * generalisierte quality of services ( <-> platform schnittstelle)
+ * -> produkt pitch mit marko
+ * -> edp ist unabhängig von ipceicis cloud continuum*
+ * generalisierte quality of services ( <-> platform schnittstelle)
+## Hasan
-## Hasan:
* martin macht: agent based iac generation
* platform-workshops mitgestalten
-* mms-fokus
+* mms-fokus
* connectivity enabled cloud offering, e2e von infrastruktur bis endgerät
* sdk für latenzarme systeme, beraten und integrieren
- * monitoring in EDP?
-* beispiel 'unity'
+ * monitoring in EDP?
+* beispiel 'unity'
* vorstellung im arch call
* wie können unterschieldiche applikationsebenen auf unterschiedliche infrastruktur(compute ebenen) verteit werden
* zero touch application deployment model
* ich werde gerade 'abgebremst'
* workshop beteiligung, TPM application model
+
## martin
+
* edgeXR erlaubt keine persistenz
* openai, llm als abstarktion nicht vorhanden
* momentan nur compute vorhanden
* roaming von applikationen --> EDP muss das unterstützen
* anwendungsfall: sprachmodell übersetzt design-artifakte in architektur, dann wird provisionierung ermöglicht
-? Applikations-modelle
+? Applikations-modelle
? zusammenhang mit golden paths
* zB für reines compute faas
diff --git a/content/en/docs/solution/design/proposal-local-deployment.md b/content/en/docs/solution/design/proposal-local-deployment.md
index 3ef08c1..71cad6c 100644
--- a/content/en/docs/solution/design/proposal-local-deployment.md
+++ b/content/en/docs/solution/design/proposal-local-deployment.md
@@ -20,4 +20,4 @@ The implementation of EDF must be kubernetes provider agnostic. Thus each provid
## Local deployment
-This implies that EDF must always be deployable into a local cluster, whereby by 'local' we mean a cluster which is under the full control of the platform engineer, e.g. a kind cluster on their laptop.
\ No newline at end of file
+This implies that EDF must always be deployable into a local cluster, whereby by 'local' we mean a cluster which is under the full control of the platform engineer, e.g. a kind cluster on their laptop.
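As a sketch of that 'local' target (the cluster name `edf-local` is an example, not a project convention; this assumes Docker and kind are available, and degrades gracefully when they are not):

```shell
# Create a throwaway local cluster fully under the platform engineer's control.
if command -v kind >/dev/null 2>&1; then
  kind create cluster --name edf-local && target="kind-edf-local" || target="(creation failed - is Docker running?)"
else
  echo "kind is not installed - see https://kind.sigs.k8s.io/"
  target="(none - install kind first)"
fi
echo "local deployment target: $target"
```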
diff --git a/content/en/docs/solution/design/proposal-stack-hydration.md b/content/en/docs/solution/design/proposal-stack-hydration.md
index adce110..4037da2 100644
--- a/content/en/docs/solution/design/proposal-stack-hydration.md
+++ b/content/en/docs/solution/design/proposal-stack-hydration.md
@@ -11,7 +11,7 @@ description: The implementation of EDF stacks must be kubernetes provider agnost
## Background
-When booting and reconciling the 'final' stack exectuting orchestrator (here: ArgoCD) needs to get rendered (or hydrated) presentations of the manifests.
+When booting and reconciling, the 'final' stack-executing orchestrator (here: ArgoCD) needs to get rendered (or hydrated) representations of the manifests.
It is either not possible or not wanted that the orchestrator itself resolves dependencies or configuration values.
@@ -23,6 +23,6 @@ The hydration takes place for all target clouds/kubernetes providers. There is n
This implies that in a development process there needs to be a build step hydrating the ArgoCD manifests for the targeted cloud.
-## Reference
+## Reference
-Discussion from Robert and Stephan-Pierre in the context of stack development - there should be an easy way to have locally changed stacks propagated into the local running platform.
\ No newline at end of file
+Discussion from Robert and Stephan-Pierre in the context of stack development - there should be an easy way to have locally changed stacks propagated into the local running platform.
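The build step described above can be sketched with plain text substitution. File names, the placeholder syntax, and the storage-class values are invented for the example, not the real EDP tooling; the point is that one template is hydrated per target provider, so ArgoCD only ever sees fully resolved manifests.

```shell
# Minimal sketch of a hydration build step: render one template per target.
mkdir -p rendered
cat > stack-template.yaml <<'EOF'
storageClassName: __STORAGE_CLASS__
EOF

for target in kind aws; do
  case "$target" in
    kind) storage_class="standard" ;;  # kind's default local-path class
    aws)  storage_class="gp3" ;;       # an EBS-backed class, as an example
  esac
  sed "s/__STORAGE_CLASS__/$storage_class/" stack-template.yaml > "rendered/$target.yaml"
done

cat rendered/kind.yaml
```

In practice this role is played by helm/kustomize rendering in CI; the rendered output per target is what gets committed or handed to ArgoCD.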
diff --git a/content/en/docs/solution/scenarios/gitops/_index.md b/content/en/docs/solution/scenarios/gitops/_index.md
index b0191fb..81490f5 100644
--- a/content/en/docs/solution/scenarios/gitops/_index.md
+++ b/content/en/docs/solution/scenarios/gitops/_index.md
@@ -13,4 +13,4 @@ What kind of Gitops do we have with idpbuilder/CNOE ?
https://github.com/gitops-bridge-dev/gitops-bridge
-
\ No newline at end of file
+
diff --git a/content/en/docs/solution/scenarios/orchestration/_index.md b/content/en/docs/solution/scenarios/orchestration/_index.md
index 2ef6417..d45f7d5 100644
--- a/content/en/docs/solution/scenarios/orchestration/_index.md
+++ b/content/en/docs/solution/scenarios/orchestration/_index.md
@@ -18,7 +18,7 @@ The next chart shows a system landscape of CNOE orchestration.
[2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf](https://github.com/cnoe-io/presentations/blob/main/2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf)
-Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environmnent types exist?
+Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?

@@ -28,7 +28,7 @@ The next chart shows a context chart of CNOE orchestration.
[reference-implementation-aws](https://github.com/cnoe-io/reference-implementation-aws)
-Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environmnent types exist?
+Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?
-
\ No newline at end of file
+
diff --git a/content/en/docs/solution/tools/Backstage/Backstage setup tutorial/_index.md b/content/en/docs/solution/tools/Backstage/Backstage setup tutorial/_index.md
index d8cdba2..9f8f288 100644
--- a/content/en/docs/solution/tools/Backstage/Backstage setup tutorial/_index.md
+++ b/content/en/docs/solution/tools/Backstage/Backstage setup tutorial/_index.md
@@ -33,9 +33,11 @@ To install the Backstage Standalone app, you can use npx. npx is a tool that com
```bash
npx @backstage/create-app@latest
```
+
This command will create a new directory with a Backstage app inside. The wizard will ask you for the name of the app, which will be created as a subdirectory in your current working directory.
Below is a simplified layout of the files and folders generated when creating an app.
+
```bash
app
├── app-config.yaml
@@ -46,15 +48,17 @@ app
└── backend
```
-- **app-config.yaml**: Main configuration file for the app. See Configuration for more information.
-- **catalog-info.yaml**: Catalog Entities descriptors. See Descriptor Format of Catalog Entities to get started.
-- **package.json**: Root package.json for the project. Note: Be sure that you don't add any npm dependencies here as they probably should be installed in the intended workspace rather than in the root.
-- **packages/**: Lerna leaf packages or "workspaces". Everything here is going to be a separate package, managed by lerna.
-- **packages/app/**: A fully functioning Backstage frontend app that acts as a good starting point for you to get to know Backstage.
-- **packages/backend/**: We include a backend that helps power features such as Authentication, Software Catalog, Software Templates, and TechDocs, amongst other things.
+* **app-config.yaml**: Main configuration file for the app. See Configuration for more information.
+* **catalog-info.yaml**: Catalog Entities descriptors. See Descriptor Format of Catalog Entities to get started.
+* **package.json**: Root package.json for the project. Note: Be sure that you don't add any npm dependencies here as they probably should be installed in the intended workspace rather than in the root.
+* **packages/**: Lerna leaf packages or "workspaces". Everything here is going to be a separate package, managed by lerna.
+* **packages/app/**: A fully functioning Backstage frontend app that acts as a good starting point for you to get to know Backstage.
+* **packages/backend/**: We include a backend that helps power features such as Authentication, Software Catalog, Software Templates, and TechDocs, amongst other things.
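For orientation, `app-config.yaml` is where most day-one settings live. A minimal excerpt looks roughly like the following (these are the defaults the create-app wizard generates; your values may differ):

```yaml
app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:3000

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
```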
## Run the Backstage Application
+
You can run it in Backstage root directory by executing this command:
+
```bash
yarn dev
```
diff --git a/content/en/docs/solution/tools/Backstage/Exsisting Plugins/_index.md b/content/en/docs/solution/tools/Backstage/Exsisting Plugins/_index.md
index d449433..866f60f 100644
--- a/content/en/docs/solution/tools/Backstage/Exsisting Plugins/_index.md
+++ b/content/en/docs/solution/tools/Backstage/Exsisting Plugins/_index.md
@@ -4,46 +4,52 @@ weight = 4
+++
1. **Catalog**:
- - Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.
+ * Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.
2. **Docs**:
- - Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.
+ * Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.
3. **API Docs**:
- - Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.
+ * Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.
4. **TechDocs**:
- - A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.
+ * A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.
5. **Scaffolder**:
- - Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.
+ * Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.
6. **CI/CD**:
- - Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.
+ * Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.
7. **Metrics**:
- - Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.
+ * Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.
8. **Snyk**:
- - Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.
+ * Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.
9. **SonarQube**:
- - Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.
+ * Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.
10. **GitHub**:
- - Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.
+
+    * Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.
11. **CircleCI**:
- - Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.
+
+    * Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.
12. **Kubernetes**:
- - Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.
+
+    * Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.
13. **Cloud**:
- - Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.
+
+    * Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.
14. **OpenTelemetry**:
- - Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.
+
+    * Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.
15. **Lighthouse**:
- - Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.
+
+    * Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.
diff --git a/content/en/docs/solution/tools/Backstage/General Information/_index.md b/content/en/docs/solution/tools/Backstage/General Information/_index.md
index 54dabd1..09d2514 100644
--- a/content/en/docs/solution/tools/Backstage/General Information/_index.md
+++ b/content/en/docs/solution/tools/Backstage/General Information/_index.md
@@ -21,4 +21,4 @@ Backstage supports the concept of "Golden Paths," enabling teams to follow recom
Modularity and Extensibility:
The platform allows for the creation of plugins, enabling users to customize and extend Backstage's functionality to fit their organization's needs.
-Backstage provides developers with centralized and convenient access to essential tools and resources, making it an effective solution for supporting Platform Engineering and developing an internal platform portal.
\ No newline at end of file
+Backstage provides developers with centralized and convenient access to essential tools and resources, making it an effective solution for supporting Platform Engineering and developing an internal platform portal.
diff --git a/content/en/docs/solution/tools/Backstage/Plugin Creation Tutorial/_index.md b/content/en/docs/solution/tools/Backstage/Plugin Creation Tutorial/_index.md
index a975456..e83bc96 100644
--- a/content/en/docs/solution/tools/Backstage/Plugin Creation Tutorial/_index.md
+++ b/content/en/docs/solution/tools/Backstage/Plugin Creation Tutorial/_index.md
@@ -3,6 +3,7 @@ title = "Plugin Creation Tutorial"
weight = 4
+++
Backstage plugins and functionality extensions should be written in TypeScript/Node.js because Backstage is written in those languages.
+
### General Algorithm for Adding a Plugin in Backstage
1. **Create the Plugin**
@@ -33,6 +34,7 @@ Backstage plugins and functionality extensions should be writen in TypeScript/No
Run the Backstage development server using `yarn dev` and navigate to your plugin’s route via the sidebar or directly through its URL. Ensure that the plugin’s functionality works as expected.
### Example
+
All steps will be demonstrated using a simple example plugin, which will request JSON files from the API of jsonplaceholder.typicode.com and display them on a page.
1. Creating test-plugin:
@@ -121,8 +123,9 @@ All steps will be demonstrated using a simple example plugin, which will request
};
```
-
+
3. Set up routes in plugins/{plugin_id}/src/routes.ts
+
```javascript
import { createRouteRef } from '@backstage/core-plugin-api';
@@ -133,11 +136,13 @@ All steps will be demonstrated using a simple example plugin, which will request
4. Register the plugin in `packages/app/src/App.tsx` in routes
Import of the plugin:
+
```javascript
import { TestPluginPage } from '@internal/backstage-plugin-test-plugin';
```
Adding route:
+
```javascript
const routes = (
@@ -148,6 +153,7 @@ All steps will be demonstrated using a simple example plugin, which will request
```
5. Add an item to the sidebar menu of Backstage in `packages/app/src/components/Root/Root.tsx`. This should be added into the Root object as another SidebarItem
+
```javascript
export const Root = ({ children }: PropsWithChildren<{}>) => (
@@ -159,11 +165,12 @@ All steps will be demonstrated using a simple example plugin, which will request
);
```
-
+
6. The plugin is ready. Run the application
+
```bash
yarn dev
```

-
\ No newline at end of file
+
diff --git a/content/en/docs/solution/tools/CNOE/CNOE-competitors/_index.md b/content/en/docs/solution/tools/CNOE/CNOE-competitors/_index.md
index 22a54ea..dfed2d0 100644
--- a/content/en/docs/solution/tools/CNOE/CNOE-competitors/_index.md
+++ b/content/en/docs/solution/tools/CNOE/CNOE-competitors/_index.md
@@ -9,60 +9,62 @@ description: We compare CNOW - which we see as an orchestrator - with other plat
Kratix is a Kubernetes-native framework that helps platform engineering teams automate the provisioning and management of infrastructure and services through custom-defined abstractions called Promises. It allows teams to extend Kubernetes functionality and provide resources in a self-service manner to developers, streamlining the delivery and management of workloads across environments.
### Concepts
+
Key concepts of Kratix:
-- Workload:
+* Workload:
This is an abstraction representing any application or service that needs to be deployed within the infrastructure. It defines the requirements and dependent resources necessary to execute this task.
-- Promise:
+* Promise:
A "Promise" is a ready-to-use infrastructure or service package. Promises allow developers to request specific resources (such as databases, storage, or computing power) through the standard Kubernetes interface. It’s similar to an operator in Kubernetes but more universal and flexible.
Kratix simplifies the development and delivery of applications by automating the provisioning and management of infrastructure and resources through simple Kubernetes APIs.
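To make the 'Promise' idea concrete: a Promise is itself a Kubernetes resource. The sketch below is heavily abbreviated (the `postgresql` name is an example, and the field bodies are elided); it shows the three parts a Promise typically declares.

```yaml
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: postgresql
spec:
  api:          # the CRD developers will use to request a database
    ...
  workflows:    # pipelines that run when a request comes in
    ...
  dependencies: # what must be pre-installed on worker clusters
    ...
```

A developer then requests the resource by creating an instance of the CRD defined under `api`, and Kratix runs the workflows to fulfill it.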
-### Pros of Kratix:
-- Resource provisioning automation. Kratix simplifies infrastructure creation for developers through the abstraction of "Promises." This means developers can simply request the necessary resources (like databases, message queues) without dealing with the intricacies of infrastructure management.
+### Pros of Kratix
+* Resource provisioning automation. Kratix simplifies infrastructure creation for developers through the abstraction of "Promises." This means developers can simply request the necessary resources (like databases, message queues) without dealing with the intricacies of infrastructure management.
-- Flexibility and adaptability. Platform teams can customize and adapt Kratix to specific needs by creating custom Promises for various services, allowing the infrastructure to meet the specific requirements of the organization.
+* Flexibility and adaptability. Platform teams can customize and adapt Kratix to specific needs by creating custom Promises for various services, allowing the infrastructure to meet the specific requirements of the organization.
-- Unified resource request interface. Developers can use a single API (Kubernetes) to request resources, simplifying interaction with infrastructure and reducing complexity when working with different tools and systems.
+* Unified resource request interface. Developers can use a single API (Kubernetes) to request resources, simplifying interaction with infrastructure and reducing complexity when working with different tools and systems.
-### Cons of Kratix:
-- Although Kratix offers great flexibility, it can also lead to more complex setup and platform management processes. Creating custom Promises and configuring their behavior requires time and effort.
+### Cons of Kratix
+* Although Kratix offers great flexibility, it can also lead to more complex setup and platform management processes. Creating custom Promises and configuring their behavior requires time and effort.
-- Kubernetes dependency. Kratix relies on Kubernetes, which makes it less applicable in environments that don’t use Kubernetes or containerization technologies. It might also lead to integration challenges if an organization uses other solutions.
+* Kubernetes dependency. Kratix relies on Kubernetes, which makes it less applicable in environments that don’t use Kubernetes or containerization technologies. It might also lead to integration challenges if an organization uses other solutions.
-- Limited ecosystem. Kratix doesn’t have as mature an ecosystem as some other infrastructure management solutions (e.g., Terraform, Pulumi). This may limit the availability of ready-made solutions and tools, increasing the amount of manual work when implementing Kratix.
+* Limited ecosystem. Kratix doesn’t have as mature an ecosystem as some other infrastructure management solutions (e.g., Terraform, Pulumi). This may limit the availability of ready-made solutions and tools, increasing the amount of manual work when implementing Kratix.
## Humanitec
-Humanitec is an Internal Developer Platform (IDP) that helps platform engineering teams automate the provisioning
-and management of infrastructure and services through dynamic configuration and environment management.
+Humanitec is an Internal Developer Platform (IDP) that helps platform engineering teams automate the provisioning
+and management of infrastructure and services through dynamic configuration and environment management.
It allows teams to extend their infrastructure capabilities and provide resources in a self-service manner to developers, streamlining the deployment and management of workloads across various environments.
### Concepts
+
Key concepts of Humanitec:
-- Application Definition:
+* Application Definition:
This is an abstraction where developers define their application, including its services, environments, and dependencies. It abstracts away infrastructure details, allowing developers to focus on building and deploying their applications.
-- Dynamic Configuration Management:
+* Dynamic Configuration Management:
Humanitec automatically manages the configuration of applications and services across multiple environments (e.g., development, staging, production). It ensures consistency and alignment of configurations as applications move through different stages of deployment.
-Humanitec simplifies the development and delivery process by providing self-service deployment options while maintaining
+Humanitec simplifies the development and delivery process by providing self-service deployment options while maintaining
centralized governance and control for platform teams.
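
As a sketch of the Application Definition concept above, an application with its services, environments, and a dependency might be declared like this. The field names and values are purely illustrative, not Humanitec's actual schema:

```yaml
# Hypothetical application definition (illustrative only, not Humanitec's real API):
# the developer declares services, environments, and dependencies;
# the platform resolves them to concrete infrastructure per environment.
application: my-service
services:
  api:
    image: registry.example.com/my-service:1.4.2
    dependencies:
      - type: postgres   # e.g. a managed database in production, a container in development
environments:
  development: {}
  staging: {}
  production:
    replicas: 3
```

The point of the abstraction is that the same definition moves unchanged through environments, while the platform team controls how each dependency type is fulfilled.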
-### Pros of Humanitec:
-- Resource provisioning automation. Humanitec automates infrastructure and environment provisioning, allowing developers to focus on building and deploying applications without worrying about manual configuration.
+### Pros of Humanitec
+* Resource provisioning automation. Humanitec automates infrastructure and environment provisioning, allowing developers to focus on building and deploying applications without worrying about manual configuration.
-- Dynamic environment management. Humanitec manages application configurations across different environments, ensuring consistency and reducing manual configuration errors.
+* Dynamic environment management. Humanitec manages application configurations across different environments, ensuring consistency and reducing manual configuration errors.
-- Golden Paths. best-practice workflows and processes that guide developers through infrastructure provisioning and application deployment. This ensures consistency and reduces cognitive load by providing a set of recommended practices.
+* Golden Paths. Best-practice workflows and processes that guide developers through infrastructure provisioning and application deployment. This ensures consistency and reduces cognitive load by providing a set of recommended practices.
-- Unified resource management interface. Developers can use Humanitec’s interface to request resources and deploy applications, reducing complexity and improving the development workflow.
+* Unified resource management interface. Developers can use Humanitec’s interface to request resources and deploy applications, reducing complexity and improving the development workflow.
-### Cons of Humanitec:
-- Humanitec is commercially licensed software
+### Cons of Humanitec
+* Humanitec is commercially licensed software.
-- Integration challenges. Humanitec’s dependency on specific cloud-native environments can create challenges for organizations with diverse infrastructures or those using legacy systems.
+* Integration challenges. Humanitec’s dependency on specific cloud-native environments can create challenges for organizations with diverse infrastructures or those using legacy systems.
-- Cost. Depending on usage, Humanitec might introduce additional costs related to the implementation of an Internal Developer Platform, especially for smaller teams.
+* Cost. Depending on usage, Humanitec might introduce additional costs related to the implementation of an Internal Developer Platform, especially for smaller teams.
-- Harder to customise
+* Harder to customise than open-source alternatives.
diff --git a/content/en/docs/solution/tools/CNOE/idpbuilder/installation/_index.md b/content/en/docs/solution/tools/CNOE/idpbuilder/installation/_index.md
index d919ab5..fbb475d 100644
--- a/content/en/docs/solution/tools/CNOE/idpbuilder/installation/_index.md
+++ b/content/en/docs/solution/tools/CNOE/idpbuilder/installation/_index.md
@@ -11,10 +11,10 @@ Windows and Mac users already utilize a virtual machine for the Docker Linux env
### Prerequisites
-- Docker Engine
-- Go
-- kubectl
-- kind
+* Docker Engine
+* Go
+* kubectl
+* kind
### Build process
@@ -76,28 +76,28 @@ idpbuilder delete cluster
CNOE provides two implementations of an IDP:
-- Amazon AWS implementation
-- KIND implementation
+* Amazon AWS implementation
+* KIND implementation
Neither is usable on bare metal or an OSC instance. The Amazon implementation is complex and makes use of Terraform, which is currently supported by neither bare metal nor OSC. Therefore the KIND implementation is used and customized to support the idpbuilder installation. The idpbuilder also performs some network magic which needs to be replicated.
Several prerequisites have to be provided to support the idpbuilder on bare metal or the OSC:
-- Kubernetes dependencies
-- Network dependencies
-- Changes to the idpbuilder
-
+* Kubernetes dependencies
+* Network dependencies
+* Changes to the idpbuilder
+
### Prerequisites
Talos Linux is chosen as the bare metal Kubernetes instance.
-- talosctl
-- Go
-- Docker Engine
-- kubectl
-- kustomize
-- helm
-- nginx
+* talosctl
+* Go
+* Docker Engine
+* kubectl
+* kustomize
+* helm
+* nginx
As soon as the idpbuilder works correctly on bare metal, the next step is to apply it to an OSC instance.
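
A quick way to verify that the prerequisites listed above are installed is a small shell check. The tool names are taken from the list; the script only reports, it does not install anything:

```shell
# Report which of the required tools are on PATH (one line per tool).
missing=""
for tool in talosctl go docker kubectl kustomize helm nginx; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing="$missing $tool"
  fi
done
```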
@@ -338,14 +338,14 @@ talosctl cluster destroy
Required:
-- Add *.cnoe.localtest.me to the Talos cluster DNS, pointing to the host device IP address, which runs nginx.
+* Add `*.cnoe.localtest.me` to the Talos cluster DNS, pointing to the IP address of the host device that runs nginx.
-- Create a SSL certificate with `cnoe.localtest.me` as common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one idpbuilder distributes by idefault.
+* Create an SSL certificate with `cnoe.localtest.me` as common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one idpbuilder distributes by default.
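
The certificate from the second step can be generated with standard openssl. This is a sketch, assuming a self-signed certificate is acceptable; the file names are placeholders:

```shell
# Self-signed certificate for cnoe.localtest.me, including a wildcard SAN
# so subdomains such as argocd.cnoe.localtest.me are also covered
# (-addext requires OpenSSL 1.1.1 or newer).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout cnoe.localtest.me.key -out cnoe.localtest.me.crt \
  -subj "/CN=cnoe.localtest.me" \
  -addext "subjectAltName=DNS:cnoe.localtest.me,DNS:*.cnoe.localtest.me"
# In the nginx config, reference the files via ssl_certificate / ssl_certificate_key.
```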
Optimizations:
-- Implement an idpbuilder uninstall. This is specially important when working on the OSC instance.
+* Implement an idpbuilder uninstall. This is especially important when working on the OSC instance.
-- Remove or configure gitea.cnoe.localtest.me, it seems not to work even in the idpbuilder local installation with KIND.
+* Remove or configure gitea.cnoe.localtest.me; it does not seem to work even in the local idpbuilder installation with KIND.
-- Improvements to the idpbuilder to support Kubernetes instances other then KIND. This can either be done by parametrization or by utilizing Terraform / OpenTOFU or Crossplane.
\ No newline at end of file
+* Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can be done either by parametrization or by utilizing Terraform / OpenTofu or Crossplane.
diff --git a/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-argo-workflow/_index.md b/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-argo-workflow/_index.md
index 068c65e..110bb03 100644
--- a/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-argo-workflow/_index.md
+++ b/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-argo-workflow/_index.md
@@ -11,9 +11,9 @@ This Backstage template YAML automates the creation of an Argo Workflow for Kube
This template is designed for teams that need a streamlined approach to deploy and manage data processing or machine learning jobs using Spark within an Argo Workflow environment. It simplifies the deployment process and integrates the application with a CI/CD pipeline. The template performs the following:
-- **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
-- **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
-- **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
-- **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.
+* **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
+* **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
+* **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
+* **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.
This template boosts productivity by automating steps required for setting up Argo Workflows and Spark jobs, integrating version control, and enabling centralized management and visibility, making it ideal for projects requiring efficient deployment and scalable data processing solutions.
diff --git a/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-kubernetes-deployment/_idex.md b/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-kubernetes-deployment/_idex.md
index 98acba3..80e668c 100644
--- a/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-kubernetes-deployment/_idex.md
+++ b/content/en/docs/solution/tools/CNOE/included-backstage-templates/basic-kubernetes-deployment/_idex.md
@@ -11,9 +11,9 @@ This Backstage template YAML automates the creation of a basic Kubernetes Deploy
The template is designed for teams needing a streamlined approach to deploy applications in Kubernetes while automatically configuring their CI/CD pipelines. It performs the following:
-- **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container.
-- **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
-- **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
-- **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.
+* **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container.
+* **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
+* **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
+* **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.
This template enhances productivity by automating several steps required for deployment, version control, and registration, making it ideal for projects where fast, consistent deployment and centralized management are required.
diff --git a/content/en/docs/solution/tools/CNOE/verification.md b/content/en/docs/solution/tools/CNOE/verification.md
index af36de9..4f60e77 100644
--- a/content/en/docs/solution/tools/CNOE/verification.md
+++ b/content/en/docs/solution/tools/CNOE/verification.md
@@ -14,17 +14,17 @@ most part they adhere to the general definition:
Examples:
-- Form validation before processing the data
-- Compiler checking syntax
-- Rust's borrow checker
+* Form validation before processing the data
+* Compiler checking syntax
+* Rust's borrow checker
> Verification describes testing if your 'thing' complies with your spec
Examples:
-- Unit tests
-- Testing availability (ping, curl health check)
-- Checking a ZKP of some computation
+* Unit tests
+* Testing availability (ping, curl health check)
+* Checking a ZKP of some computation
---
diff --git a/content/en/docs/solution/tools/Crossplane/provider-kind/_index.md b/content/en/docs/solution/tools/Crossplane/provider-kind/_index.md
index c90fc5c..7370f44 100644
--- a/content/en/docs/solution/tools/Crossplane/provider-kind/_index.md
+++ b/content/en/docs/solution/tools/Crossplane/provider-kind/_index.md
@@ -14,10 +14,10 @@ The provider config takes the credentials to log into the cloud provider and pro
The implementations of the cloud resources reflect each type of cloud resource; typical resources are:
-- S3 Bucket
-- Nodepool
-- VPC
-- GkeCluster
+* S3 Bucket
+* Nodepool
+* VPC
+* GkeCluster
## Architecture of provider-kind
@@ -57,16 +57,16 @@ object is a secret.
The need for the following inputs arises when developing a provider-kind:
-- kindserver password as a kubernetes secret
-- endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
-- kindConfig, the kind configuration file as a detail of `KindCluster`
+* kindserver password as a kubernetes secret
+* endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
+* kindConfig, the kind configuration file as a detail of `KindCluster`
The following outputs arise:
-- kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
-- internalIP, IP address of a created kind cluster as a detail of `KindCluster`
-- readiness as a detail of `KindCluster`
-- kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
+* kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
+* internalIP, IP address of a created kind cluster as a detail of `KindCluster`
+* readiness as a detail of `KindCluster`
+* kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
### Inputs
@@ -210,7 +210,7 @@ Internally, the Connect function get's triggered in the kindcluster controller `
first, to set up the provider and configure it with the kindserver password and the IP address of the kindserver.
Once the provider-kind has been configured with the kindserver secret and its `ProviderConfig`, the provider is ready to
-be activated by applying a `KindCluster` manifest to kubernetes.
+be activated by applying a `KindCluster` manifest to kubernetes.
When the user applies a new `KindCluster` manifest, an observe loop is started. The provider regularly triggers the `Observe`
function of the controller. As nothing has been created yet, the controller will return
@@ -296,7 +296,7 @@ The official way for creating crossplane providers is to use the provider-templa
a new provider.
First, clone the provider-template. The commit ID at the time this howto was written is 2e0b022c22eb50a8f32de2e09e832f17161d7596.
-Rename the new folder after cloning.
+Rename the new folder after cloning.
```
git clone https://github.com/crossplane/provider-template.git
@@ -320,7 +320,7 @@ sed -i "s/mytype/${type,,}/g" internal/controller/${provider_name,,}.go
```
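
The `${provider_name,,}` and `${type,,}` expansions in the sed commands above are bash parameter expansions that lowercase a variable's value:

```shell
# ${var,,} lowercases the whole value; this is what turns e.g. "KindCluster"
# into "kindcluster" when renaming the template sources (requires bash 4+).
provider_name="Kind"
type="KindCluster"
echo "${provider_name,,}"   # prints: kind
echo "${type,,}"            # prints: kindcluster
```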
Patch the Makefile:
-
+
```
dev: $(KIND) $(KUBECTL)
@$(INFO) Creating kind cluster
@@ -346,8 +346,8 @@ make dev
Now it's time to add the required fields (internalIP, endpoint, etc.) to the spec fields in the Go API sources found in:
-- apis/container/v1alpha1/kindcluster_types.go
-- apis/v1alpha1/providerconfig_types.go
+* apis/container/v1alpha1/kindcluster_types.go
+* apis/v1alpha1/providerconfig_types.go
The file `apis/kind.go` may also be modified. The word `sample` can be replaced with `container` in our case.
@@ -427,9 +427,9 @@ the ability to deploy helm and kubernetes objects in the newly created cluster.
A composition is realized as a custom resource definition (CRD) consisting of three parts:
-- A definition
-- A composition
-- One or more deplyoments of the composition
+* A definition
+* A composition
+* One or more deployments of the composition
### definition.yaml
@@ -757,8 +757,8 @@ Open the composition in VS Code: examples/composition_deprecated/composition.yam
Currently missing is the third and final part: the imperative steps that need to be processed:
-- creation of TLS certificates and giteaAdmin password
-- creation of a Forgejo repository for the stacks
-- uploading the stacks in the Forgejo repository
+* creation of TLS certificates and giteaAdmin password
+* creation of a Forgejo repository for the stacks
+* uploading the stacks in the Forgejo repository
-Connecting the definition field (ArgoCD repo URL) and composition interconnects (function-patch-and-transform) are also missing.
\ No newline at end of file
+The connection of the definition field (ArgoCD repo URL) and the composition interconnects (function-patch-and-transform) are also missing.
diff --git a/content/en/docs/solution/tools/Kube-prometheus-stack/_index.md b/content/en/docs/solution/tools/Kube-prometheus-stack/_index.md
index 2bbf352..9b5512a 100644
--- a/content/en/docs/solution/tools/Kube-prometheus-stack/_index.md
+++ b/content/en/docs/solution/tools/Kube-prometheus-stack/_index.md
@@ -19,10 +19,12 @@ grafana.sidecar.dashboards contains necessary configurations so additional user
grafana.grafana.ini.server contains configuration details that are necessary so that the ingress points to the correct URL.
### Start
+
Once Grafana is running, it is accessible at https://cnoe.localtest.me/grafana.
Many preconfigured dashboards can be used by clicking the menu option Dashboards.
### Adding your own dashboards
+
The application edfbuilder/kind/stacks/core/kube-prometheus.yaml is used to import new Loki dashboards. Examples for imported dashboards can be found in the folder edfbuilder/kind/stacks/core/kube-prometheus/dashboards.
It is possible to add your own dashboards; they must be in JSON format. To add your own dashboard, create a new ConfigMap in YAML format using one of the examples as a blueprint. The new dashboard in JSON format has to be added as the value for data.k8s-dashboard-[...].json, as in the examples. (It is important to use a unique name for data.k8s-dashboard-[...].json for each dashboard.)
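
A minimal sketch of such a ConfigMap follows. The names and the dashboard JSON are placeholders; the `grafana_dashboard` label shown is the kube-prometheus-stack sidecar's default discovery label, so check the sidecar configuration in values.yaml for the label actually in use:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard              # placeholder name
  labels:
    grafana_dashboard: "1"        # label the Grafana sidecar watches for (default)
data:
  k8s-dashboard-my-dashboard.json: |   # key must be unique across dashboards
    {
      "title": "My Dashboard",
      "panels": []
    }
```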
diff --git a/content/en/docs/solution/tools/Loki/_index.md b/content/en/docs/solution/tools/Loki/_index.md
index 91945b3..4c51070 100644
--- a/content/en/docs/solution/tools/Loki/_index.md
+++ b/content/en/docs/solution/tools/Loki/_index.md
@@ -5,6 +5,6 @@ description: Grafana Loki is a scalable open-source log aggregation system
## Loki Overview
-The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml.
+The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml.
Loki is started in microservices mode and contains the components ingester, distributor, querier, and query-frontend.
The Helm values file edfbuilder/kind/stacks/core/loki/values.yaml contains configuration values.
diff --git a/content/en/docs/solution/tools/Promtail/_index.md b/content/en/docs/solution/tools/Promtail/_index.md
index a5a1a81..b675162 100644
--- a/content/en/docs/solution/tools/Promtail/_index.md
+++ b/content/en/docs/solution/tools/Promtail/_index.md
@@ -5,5 +5,5 @@ description: Grafana Promtail is an agent that ships logs to a Grafan Loki insta
## Promtail Overview
-The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml.
+The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml.
The Helm values file edfbuilder/kind/stacks/core/promtail/values.yaml contains configuration values.
diff --git a/content/en/docs/solution/tools/kyverno integration/_index.md b/content/en/docs/solution/tools/kyverno integration/_index.md
index 12ca83e..d9aab2e 100644
--- a/content/en/docs/solution/tools/kyverno integration/_index.md
+++ b/content/en/docs/solution/tools/kyverno integration/_index.md
@@ -17,14 +17,17 @@ Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mut
Kyverno simplifies governance and compliance in Kubernetes environments by automating policy management and ensuring best practices are followed.
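
As an illustration of the validation use case, a minimal Kyverno `ClusterPolicy` might require every Pod to carry a team label. This is a sketch using Kyverno's documented `validate` rule shape, not a policy shipped with this setup:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # illustrative policy, not part of this stack
spec:
  validationFailureAction: Audit  # switch to Enforce to reject non-compliant Pods
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required."
        pattern:
          metadata:
            labels:
              team: "?*"          # any non-empty value
```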
## Prerequisites
+
Same as for the idpbuilder installation:
-- Docker Engine
-- Go
-- kubectl
-- kind
+* Docker Engine
+* Go
+* kubectl
+* kind
## Installation
+
### Build process
+
To build idpbuilder, the source code needs to be downloaded and compiled:
```