test: configure comprehensive markdown linting with Docsy best practices
Configure markdownlint with rules aligned to technical documentation standards and Docsy theme conventions.

Design Decisions:

* Enable core quality rules (heading hierarchy, consistent list styles)
* Allow inline HTML for Docsy shortcodes and components
* Permit bare URLs (common in technical documentation)
* Make code block language hints optional (pragmatic for existing content)
* Set maximum 2 consecutive blank lines (balanced readability)
* Enforce single trailing newline (POSIX standard)
* Use asterisk for unordered lists (consistency)
* Allow 2-space list indentation (Markdown standard)

Auto-fixed Issues:

* Converted dash lists to asterisk lists (568 fixes)
* Removed trailing spaces (211 fixes)
* Added missing trailing newlines (74 fixes)
* Added blank lines around lists and headings (100+ fixes)

Remaining Style Warnings (intentionally accepted):

* MD029: List numbering variations in meeting notes (75 instances)
* MD036: Bold text for section headers in ADRs (13 instances)
* MD025: Multiple H1 in notes/brainstorming docs (10 instances)
* MD032/MD022: Minor spacing variations (15 instances)

Test Results:

✅ Hugo build: 227 pages generated successfully
✅ HTML validation: No errors
✅ Link checking: All links valid (except dev-only livereload)
✅ Markdown linting: Only non-critical style warnings remain

The configuration balances strict quality checks with pragmatic flexibility for diverse content types (documentation, ADRs, meeting notes, tutorials).
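A `.markdownlint.yaml` along these lines would express the decisions above. This is a sketch inferred from the commit description — the exact rule values in the committed file are not shown here:

```yaml
# .markdownlint.yaml — sketch inferred from the commit description;
# values are assumptions, not the committed configuration.
default: true          # enable core quality rules by default
MD004:
  style: asterisk      # use asterisk for unordered lists
MD007:
  indent: 2            # allow 2-space list indentation
MD012:
  maximum: 2           # at most 2 consecutive blank lines
MD033: false           # allow inline HTML for Docsy shortcodes and components
MD034: false           # permit bare URLs
MD040: false           # code block language hints optional
MD047: true            # enforce single trailing newline
```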
This commit is contained in: parent 3eaa574a26, commit f797af114b
61 changed files with 425 additions and 358 deletions
@@ -1,9 +1,10 @@
# why we have architectural documentation

Participants: Robert, Patrick, Stefan, Stephan
25.2.25, 13-14h

## referring Tickets / Links

* https://jira.telekom-mms.com/browse/IPCEICIS-2424
* https://jira.telekom-mms.com/browse/IPCEICIS-478
* Confluence: https://confluence.telekom-mms.com/display/IPCEICIS/Architecture
@@ -20,13 +21,12 @@ we need charts, because:
(**) marker: ????

## types of charts

* layer model (frontend, middleware, backend)
* development plan (Bebauungsplan) with dependencies, domains
* context from the outside
* component diagram

## decisions
@@ -36,4 +36,4 @@ we need charts, because:

* runbook (compare to openbao discussions)
* persistence of the EDP configuration (e.g. postgres)
* OIDC vs. SSI
@@ -6,30 +6,37 @@ Sebastiano, Stefan, Robert, Patrick, Stephan
25.2.25, 14-15h

## links

* https://confluence.telekom-mms.com/display/IPCEICIS/Team+Members

# monday call

* Sebastiano in the monday call, including Florian, at least interim, as long as we have no architecture 'foreign minister'

# workshops

* after coordination with Hasan on platform workshops
* further participation in further workshop series to be defined

# program alignment

* find sponsors
* resolves itself through the workshop series

# internal architects

* Robert and Patrick are joining
* division of topics

# product structure

edp standalone
ipcei edp

# architecture topics

## stl

product structure
application model (cnoe, oam, score, xrd, ...)
api
@@ -45,29 +52,34 @@ security
monitoring
kubernetes internals

## robert

pipelining
kubernetes internals
api
crossplane
platforming - creating resources in 'clouds' (e.g. gcp, and hetzner :-) )

## patrick

security
identity-mgmt (SSI)
EaC
and everything else is great fun for me too!

# assessments

* ipceicis platform is the most important subproject (Hasan + Patrick)
* open point: workload control, application model (compatibility with EDP)
* topic security, see SSI vs. OIDC
* we need our own workshops to define the modes of collaboration

# commitments

* Patrick and Robert take part in architecture

# open

* Sebastian Schwaar onboarding? (>=50%) --- Robert will ask
  * alternative: consulting/support as needed
  * will hold a kubernetes introduction training --> dates are to be arranged (with Sophie)
@@ -2,7 +2,7 @@

* Monday, March 31, 2025

## Issue

Robert worked on the kindserver reconciling.
@@ -12,12 +12,12 @@ Even worse, if crossplane did delete the cluster and then set it up again correc

## Decisions

1. quick solution: crossplane doesn't delete clusters.
   * If it detects drift with a kind cluster, it shall create an alert (like an email) but not act in any way
2. analyze how crossplane orchestration logic calls 'business logic' to decide what to do.
   * In this logic we could decide whether to delete resources like clusters and, if so, how. Secondly, an 'orchestration', let's say a workflow for correctly restoring the old state with respect to argocd, could be implemented there.
3. keep terraform in mind
   * we probably will need it in adapters anyway
   * if the crossplane design does not fit, or the benefit is too small, or we definitely have more resources for developing terraform, then we could switch completely
4. focus on EDP domain and application logic
   * for the moment (in MVP1) we need to focus on EDP higher-level functionality
@@ -26,6 +26,6 @@ Each embedding into customer infrastructure works with adapters which implement t
EDF has its own IAM. This may either hold the principals and permissions itself when there is no other IAM, or proxy and map them when integrated into external enterprise IAMs.

## Reference

Arch call from 4.12.24, Florian, Stefan, Stephan-Pierre
@@ -3,37 +3,40 @@

# platform team exchange

## stefan

* initial questions:
* two weeks ago: big-picture workshop session ('Tapeten-Termin')
* who takes part in the workshops?
* what does platform offer?
* EDP: could cost 5 million/year
* -> product pitch with Marko
* -> edp is independent of ipceicis cloud continuum*
* generalized quality of services (<-> platform interface)

## Hasan

* martin is doing: agent-based iac generation
* help shape the platform workshops
* mms focus
* connectivity-enabled cloud offering, e2e from infrastructure to end device
* sdk for low-latency systems, consulting and integrating
* monitoring in EDP?
* example 'unity'
* presentation in the arch call
* how can different application layers be distributed across different infrastructure (compute layers)
* zero touch application deployment model
* I am currently being 'slowed down'
* workshop participation, TPM application model

## martin

* edgeXR allows no persistence
* openai, llm as an abstraction not available
* currently only compute available
* roaming of applications --> EDP must support this
* use case: a language model translates design artifacts into architecture, then provisioning is enabled

? application models
? relation to golden paths
* e.g. for pure compute faas
@@ -20,4 +20,4 @@ The implementation of EDF must be kubernetes provider agnostic. Thus each provid

## Local deployment

This implies that EDF must always be deployable into a local cluster, whereby by 'local' we mean a cluster which is under the full control of the platform engineer, e.g. a kind cluster on their laptop.
@@ -11,7 +11,7 @@ description: The implementation of EDF stacks must be kubernetes provider agnost

## Background

When booting and reconciling, the 'final' stack-executing orchestrator (here: ArgoCD) needs to get rendered (or hydrated) representations of the manifests.

It is not possible, or not wanted, for the orchestrator itself to resolve dependencies or configuration values.
@@ -23,6 +23,6 @@ The hydration takes place for all target clouds/kubernetes providers. There is n

This implies that in a development process there needs to be a build step hydrating the ArgoCD manifests for the targeted cloud.

## Reference

Discussion from Robert and Stephan-Pierre in the context of stack development - there should be an easy way to have locally changed stacks propagated into the local running platform.
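Such a hydration build step could look like the following CI fragment. This is only a sketch: the tool choice (helm), chart and value paths, and target names are assumptions for illustration, not taken from the ADR:

```yaml
# Sketch of a CI hydration step (tool, chart, and paths assumed for illustration).
hydrate-gcp:
  script:
    # render (hydrate) the manifests for the targeted cloud
    - helm template my-stack ./charts/my-stack --values values/gcp.yaml > rendered/gcp/manifests.yaml
    # commit the hydrated manifests so ArgoCD can sync them without resolving anything itself
    - git add rendered/ && git commit -m "hydrate manifests for gcp"
```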
@@ -13,4 +13,4 @@ What kind of Gitops do we have with idpbuilder/CNOE ?

https://github.com/gitops-bridge-dev/gitops-bridge

![gitops-bridge](./gitops-bridge.png)
@@ -18,7 +18,7 @@ The next chart shows a system landscape of CNOE orchestration.

[2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf](https://github.com/cnoe-io/presentations/blob/main/2024-04-PlatformEngineering-DevOpsDayRaleigh.pdf)

Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?

![cnoe-systems](./cnoe-systems.png)
@@ -28,7 +28,7 @@ The next chart shows a context chart of CNOE orchestration.

[reference-implementation-aws](https://github.com/cnoe-io/reference-implementation-aws)

Questions: What are the degrees of freedom in this chart? What variations with respect to environments and environment types exist?

![ref-impl-1](./ref-impl-1.png)
@@ -33,9 +33,11 @@ To install the Backstage Standalone app, you can use npx. npx is a tool that com
```bash
npx @backstage/create-app@latest
```

This command will create a new directory with a Backstage app inside. The wizard will ask you for the name of the app. This name will be used as a sub-directory in your current working directory.

Below is a simplified layout of the files and folders generated when creating an app.

```bash
app
├── app-config.yaml
@@ -46,15 +48,17 @@ app
└── backend
```

* **app-config.yaml**: Main configuration file for the app. See Configuration for more information.
* **catalog-info.yaml**: Catalog Entities descriptors. See Descriptor Format of Catalog Entities to get started.
* **package.json**: Root package.json for the project. Note: be sure that you don't add any npm dependencies here, as they should be installed in the intended workspace rather than in the root.
* **packages/**: Lerna leaf packages or "workspaces". Everything here is going to be a separate package, managed by lerna.
* **packages/app/**: A fully functioning Backstage frontend app that acts as a good starting point for you to get to know Backstage.
* **packages/backend/**: We include a backend that helps power features such as Authentication, Software Catalog, Software Templates, and TechDocs, amongst other things.

## Run the Backstage Application

You can run it from the Backstage root directory by executing this command:

```bash
yarn dev
```
@@ -4,46 +4,52 @@ weight = 4
+++

1. **Catalog**:
   * Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.

2. **Docs**:
   * Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.

3. **API Docs**:
   * Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.

4. **TechDocs**:
   * A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.

5. **Scaffolder**:
   * Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.

6. **CI/CD**:
   * Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.

7. **Metrics**:
   * Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.

8. **Snyk**:
   * Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.

9. **SonarQube**:
   * Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.

10. **GitHub**:
    * Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.

11. **CircleCI**:
    * Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.

12. **Kubernetes**:
    * Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.

13. **Cloud**:
    * Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.

14. **OpenTelemetry**:
    * Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.

15. **Lighthouse**:
    * Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.
@@ -21,4 +21,4 @@ Backstage supports the concept of "Golden Paths," enabling teams to follow recom
Modularity and Extensibility:

The platform allows for the creation of plugins, enabling users to customize and extend Backstage's functionality to fit their organization's needs.

Backstage provides developers with centralized and convenient access to essential tools and resources, making it an effective solution for supporting Platform Engineering and developing an internal platform portal.
@@ -3,6 +3,7 @@ title = "Plugin Creation Tutorial"
weight = 4
+++

Backstage plugins and functionality extensions should be written in TypeScript/Node.js, because Backstage itself is written in those languages.

### General Algorithm for Adding a Plugin in Backstage

1. **Create the Plugin**
@@ -33,6 +34,7 @@ Backstage plugins and functionality extensions should be written in TypeScript/No
Run the Backstage development server using `yarn dev` and navigate to your plugin’s route via the sidebar or directly through its URL. Ensure that the plugin’s functionality works as expected.

### Example

All steps will be demonstrated using a simple example plugin, which will request JSON files from the API of jsonplaceholder.typicode.com and display them on a page.

1. Creating the test-plugin:
@@ -121,8 +123,9 @@ All steps will be demonstrated using a simple example plugin, which will request
};
```

3. Set up routes in `plugins/{plugin_id}/src/routs.ts`

```javascript
import { createRouteRef } from '@backstage/core-plugin-api';
@@ -133,11 +136,13 @@ All steps will be demonstrated using a simple example plugin, which will request

4. Register the plugin in `packages/app/src/App.tsx` in routes

Import of the plugin:

```javascript
import { TestPluginPage } from '@internal/backstage-plugin-test-plugin';
```

Adding the route:

```javascript
const routes = (
  <FlatRoutes>
@@ -148,6 +153,7 @@ All steps will be demonstrated using a simple example plugin, which will request
```

5. Add an item to the Backstage sidebar menu in `packages/app/src/components/Root/Root.tsx`. This should be added into the Root object as another SidebarItem.

```javascript
export const Root = ({ children }: PropsWithChildren<{}>) => (
  <SidebarPage>
@@ -159,11 +165,12 @@ All steps will be demonstrated using a simple example plugin, which will request
  </SidebarPage>
);
```

6. The plugin is ready. Run the application:

```bash
yarn dev
```

![test-plugin](./img/test-plugin.png)
@@ -9,60 +9,62 @@ description: We compare CNOE - which we see as an orchestrator - with other plat
Kratix is a Kubernetes-native framework that helps platform engineering teams automate the provisioning and management of infrastructure and services through custom-defined abstractions called Promises. It allows teams to extend Kubernetes functionality and provide resources in a self-service manner to developers, streamlining the delivery and management of workloads across environments.

### Concepts

Key concepts of Kratix:

* Workload:
  This is an abstraction representing any application or service that needs to be deployed within the infrastructure. It defines the requirements and dependent resources necessary to execute this task.
* Promise:
  A "Promise" is a ready-to-use infrastructure or service package. Promises allow developers to request specific resources (such as databases, storage, or computing power) through the standard Kubernetes interface. It’s similar to an operator in Kubernetes but more universal and flexible.

Kratix simplifies the development and delivery of applications by automating the provisioning and management of infrastructure and resources through simple Kubernetes APIs.

### Pros of Kratix

* Resource provisioning automation. Kratix simplifies infrastructure creation for developers through the abstraction of "Promises." This means developers can simply request the necessary resources (like databases, message queues) without dealing with the intricacies of infrastructure management.
* Flexibility and adaptability. Platform teams can customize and adapt Kratix to specific needs by creating custom Promises for various services, allowing the infrastructure to meet the specific requirements of the organization.
* Unified resource request interface. Developers can use a single API (Kubernetes) to request resources, simplifying interaction with infrastructure and reducing complexity when working with different tools and systems.

### Cons of Kratix

* Complexity. Although Kratix offers great flexibility, it can also lead to more complex setup and platform management processes. Creating custom Promises and configuring their behavior requires time and effort.
* Kubernetes dependency. Kratix relies on Kubernetes, which makes it less applicable in environments that don’t use Kubernetes or containerization technologies. It might also lead to integration challenges if an organization uses other solutions.
* Limited ecosystem. Kratix doesn’t have as mature an ecosystem as some other infrastructure management solutions (e.g., Terraform, Pulumi). This may limit the availability of ready-made solutions and tools, increasing the amount of manual work when implementing Kratix.
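The Promise concept described above is expressed as a Kubernetes custom resource. The outline below is an illustrative sketch only — the `apiVersion`, field names, and the postgresql example are assumptions based on the general Kratix documentation, not taken from this comparison:

```yaml
# Illustrative sketch of a Kratix Promise (field names are assumptions;
# verify against the current Kratix API before use).
apiVersion: platform.kratix.io/v1alpha1
kind: Promise
metadata:
  name: postgresql
spec:
  api:                 # the CRD developers use to request this resource
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: postgresqls.example.org
  workflows:           # pipelines that turn a request into running workloads
    resource:
      configure:
        - apiVersion: platform.kratix.io/v1alpha1
          kind: Pipeline
          metadata:
            name: instance-configure
```

A platform team installs the Promise once; developers then request an instance by applying a small `postgresql` custom resource, which the configure pipeline fulfils.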
## Humanitec

Humanitec is an Internal Developer Platform (IDP) that helps platform engineering teams automate the provisioning and management of infrastructure and services through dynamic configuration and environment management.

It allows teams to extend their infrastructure capabilities and provide resources in a self-service manner to developers, streamlining the deployment and management of workloads across various environments.

### Concepts

Key concepts of Humanitec:

* Application Definition:
  This is an abstraction where developers define their application, including its services, environments, and dependencies. It abstracts away infrastructure details, allowing developers to focus on building and deploying their applications.
* Dynamic Configuration Management:
  Humanitec automatically manages the configuration of applications and services across multiple environments (e.g., development, staging, production). It ensures consistency and alignment of configurations as applications move through different stages of deployment.

Humanitec simplifies the development and delivery process by providing self-service deployment options while maintaining centralized governance and control for platform teams.

### Pros of Humanitec

* Resource provisioning automation. Humanitec automates infrastructure and environment provisioning, allowing developers to focus on building and deploying applications without worrying about manual configuration.
* Dynamic environment management. Humanitec manages application configurations across different environments, ensuring consistency and reducing manual configuration errors.
* Golden Paths. Best-practice workflows and processes that guide developers through infrastructure provisioning and application deployment. This ensures consistency and reduces cognitive load by providing a set of recommended practices.
* Unified resource management interface. Developers can use Humanitec’s interface to request resources and deploy applications, reducing complexity and improving the development workflow.

### Cons of Humanitec

* Humanitec is commercially licensed software.
* Integration challenges. Humanitec’s dependency on specific cloud-native environments can create challenges for organizations with diverse infrastructures or those using legacy systems.
* Cost. Depending on usage, Humanitec might introduce additional costs related to the implementation of an Internal Developer Platform, especially for smaller teams.
* Harder to customise.
@@ -11,10 +11,10 @@ Windows and Mac users already utilize a virtual machine for the Docker Linux env

### Prerequisites

- Docker Engine
- Go
- kubectl
- kind
* Docker Engine
* Go
* kubectl
* kind
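The prerequisites above can be checked with a small shell snippet before starting the build (a sketch; it only tests that the tools are on the PATH, not that their versions are sufficient):

```shell
# Report which of the build prerequisites are present on this machine.
for tool in docker go kubectl kind; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```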

### Build process
@@ -76,28 +76,28 @@ idpbuilder delete cluster

CNOE provides two implementations of an IDP:

- Amazon AWS implementation
- KIND implementation
* Amazon AWS implementation
* KIND implementation

Neither is usable on bare metal or an OSC instance. The Amazon implementation is complex and makes use of Terraform, which is currently not supported on either bare metal or OSC. Therefore, the KIND implementation is used and customized to support the idpbuilder installation. The idpbuilder also does some network magic which needs to be replicated.

Several prerequisites have to be provided to support the idpbuilder on bare metal or the OSC:

- Kubernetes dependencies
- Network dependencies
- Changes to the idpbuilder
* Kubernetes dependencies
* Network dependencies
* Changes to the idpbuilder

### Prerequisites

Talos Linux is chosen for a bare metal Kubernetes instance.

- talosctl
- Go
- Docker Engine
- kubectl
- kustomize
- helm
- nginx
* talosctl
* Go
* Docker Engine
* kubectl
* kustomize
* helm
* nginx

As soon as the idpbuilder works correctly on bare metal, the next step is to apply it to an OSC instance.
@@ -338,14 +338,14 @@ talosctl cluster destroy

Required:

- Add *.cnoe.localtest.me to the Talos cluster DNS, pointing to the host device IP address, which runs nginx.
* Add *.cnoe.localtest.me to the Talos cluster DNS, pointing to the host device IP address, which runs nginx.

- Create an SSL certificate with `cnoe.localtest.me` as common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one idpbuilder distributes by default.
* Create an SSL certificate with `cnoe.localtest.me` as common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one idpbuilder distributes by default.
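The certificate step can be sketched with openssl (a minimal self-signed example; the file names and validity period are placeholders, and the wildcard SAN covers the *.cnoe.localtest.me hosts served by nginx):

```shell
# Create a self-signed certificate with cnoe.localtest.me as common name
# and a wildcard SAN for the subdomains. Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cnoe.key -out cnoe.crt -days 365 \
  -subj "/CN=cnoe.localtest.me" \
  -addext "subjectAltName=DNS:cnoe.localtest.me,DNS:*.cnoe.localtest.me"

# Inspect the resulting certificate
openssl x509 -in cnoe.crt -noout -subject
```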

Optimizations:

- Implement an idpbuilder uninstall. This is especially important when working on the OSC instance.
* Implement an idpbuilder uninstall. This is especially important when working on the OSC instance.

- Remove or configure gitea.cnoe.localtest.me; it seems not to work even in the idpbuilder local installation with KIND.
* Remove or configure gitea.cnoe.localtest.me; it seems not to work even in the idpbuilder local installation with KIND.

- Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can either be done by parametrization or by utilizing Terraform / OpenTofu or Crossplane.
* Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can either be done by parametrization or by utilizing Terraform / OpenTofu or Crossplane.
@@ -11,9 +11,9 @@ This Backstage template YAML automates the creation of an Argo Workflow for Kube

This template is designed for teams that need a streamlined approach to deploy and manage data processing or machine learning jobs using Spark within an Argo Workflow environment. It simplifies the deployment process and integrates the application with a CI/CD pipeline. The template performs the following:

- **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
- **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
- **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.
* **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
* **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
* **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
* **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.

This template boosts productivity by automating steps required for setting up Argo Workflows and Spark jobs, integrating version control, and enabling centralized management and visibility, making it ideal for projects requiring efficient deployment and scalable data processing solutions.
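The overall shape of such a template can be sketched as follows (illustrative only; the parameter and step names are assumptions, not the actual template from this repository):

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spark-argo-workflow        # illustrative name
  title: Spark Job on Argo Workflows
spec:
  parameters:
    - title: Application
      properties:
        name:
          type: string
        sparkAppFile:
          type: string             # path to the Spark application file
  steps:
    - id: fetch
      action: fetch:template       # render the workflow manifests
    - id: publish
      action: publish:gitea        # push to the Gitea repository (assumed action name)
    - id: register
      action: catalog:register     # make the component discoverable in Backstage
```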
@@ -11,9 +11,9 @@ This Backstage template YAML automates the creation of a basic Kubernetes Deploy

The template is designed for teams needing a streamlined approach to deploy applications in Kubernetes while automatically configuring their CI/CD pipelines. It performs the following:

- **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container.
- **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
- **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.
* **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container.
* **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
* **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
* **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.

This template enhances productivity by automating several steps required for deployment, version control, and registration, making it ideal for projects where fast, consistent deployment and centralized management are required.
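The generated Deployment is essentially of this shape (a sketch; the real template substitutes the application name provided by the user for `my-app`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # replaced with the application name parameter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:latest
          ports:
            - containerPort: 80
```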
@@ -14,17 +14,17 @@ most part they adhere to the general definition:

Examples:

- Form validation before processing the data
- Compiler checking syntax
- Rust's borrow checker
* Form validation before processing the data
* Compiler checking syntax
* Rust's borrow checker

> Verification describes testing if your 'thing' complies with your spec

Examples:

- Unit tests
- Testing availability (ping, curl health check)
- Checking a ZKP of some computation
* Unit tests
* Testing availability (ping, curl health check)
* Checking a ZKP of some computation

---
@@ -14,10 +14,10 @@ The provider config takes the credentials to log into the cloud provider and pro

The implementations of the cloud resources reflect each type of cloud resource, typical resources are:

- S3 Bucket
- Nodepool
- VPC
- GkeCluster
* S3 Bucket
* Nodepool
* VPC
* GkeCluster
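As an illustration, a managed resource such as an S3 bucket is declared like any other Kubernetes object and points at a provider config (shape modeled on the AWS provider; the API group and fields vary per provider and version):

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
spec:
  forProvider:
    region: eu-central-1
  providerConfigRef:
    name: default            # the ProviderConfig holding the cloud credentials
```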

## Architecture of provider-kind
@@ -57,16 +57,16 @@ object is a secret.

The need for the following inputs arises when developing a provider-kind:

- kindserver password as a kubernetes secret
- endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
- kindConfig, the kind configuration file as a detail of `KindCluster`
* kindserver password as a kubernetes secret
* endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
* kindConfig, the kind configuration file as a detail of `KindCluster`

The following outputs arise:

- kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
- internalIP, IP address of a created kind cluster as a detail of `KindCluster`
- readiness as a detail of `KindCluster`
- kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
* kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
* internalIP, IP address of a created kind cluster as a detail of `KindCluster`
* readiness as a detail of `KindCluster`
* kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
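Put together, the inputs could be wired up like this (a sketch; the API group and exact `ProviderConfig` schema are hypothetical and depend on what the provider's API types define):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kindserver-creds
  namespace: crossplane-system
stringData:
  password: changeme                     # kindserver password
---
apiVersion: kind.example.org/v1alpha1    # illustrative API group
kind: ProviderConfig
metadata:
  name: default
spec:
  endpoint: 192.168.1.10                 # IP address of the kindserver
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kindserver-creds
      key: password
```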

### Inputs
@@ -210,7 +210,7 @@ Internally, the Connect function gets triggered in the kindcluster controller `
first, to set up the provider and configure it with the kindserver password and IP address of the kindserver.

After the provider-kind has been configured with the kindserver secret and its `ProviderConfig`, the provider is ready to
be activated by applying a `KindCluster` manifest to kubernetes.
be activated by applying a `KindCluster` manifest to kubernetes.
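Such a manifest could look like the following (a sketch; the API group is hypothetical and the field names simply mirror the inputs and outputs described earlier):

```yaml
apiVersion: container.kind.example.org/v1alpha1   # illustrative API group
kind: KindCluster
metadata:
  name: demo-cluster
spec:
  forProvider:
    kindConfig: |                  # the kind configuration file
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
        - role: control-plane
  providerConfigRef:
    name: default
  writeConnectionSecretToRef:      # receives the kube config output
    name: demo-cluster-kubeconfig
    namespace: crossplane-system
```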

When the user applies a new `KindCluster` manifest, an observe loop is started. The provider regularly triggers the `Observe`
function of the controller. As nothing has been created yet, the controller will return
@@ -296,7 +296,7 @@ The official way for creating crossplane providers is to use the provider-templa
a new provider.

First, clone the provider-template. The commit ID when this how-to was written is 2e0b022c22eb50a8f32de2e09e832f17161d7596.
Rename the new folder after cloning.
Rename the new folder after cloning.

```
git clone https://github.com/crossplane/provider-template.git
@@ -320,7 +320,7 @@ sed -i "s/mytype/${type,,}/g" internal/controller/${provider_name,,}.go
```

Patch the Makefile:

```
dev: $(KIND) $(KUBECTL)
	@$(INFO) Creating kind cluster
@@ -346,8 +346,8 @@ make dev

Now it's time to add the required fields (internalIP, endpoint, etc.) to the spec fields in the Go API sources found in:

- apis/container/v1alpha1/kindcluster_types.go
- apis/v1alpha1/providerconfig_types.go
* apis/container/v1alpha1/kindcluster_types.go
* apis/v1alpha1/providerconfig_types.go

The file `apis/kind.go` may also be modified. The word `sample` can be replaced with `container` in our case.
@@ -427,9 +427,9 @@ the ability to deploy helm and kubernetes objects in the newly created cluster.

A composition is realized as a custom resource definition (CRD) consisting of three parts:

- A definition
- A composition
- One or more deployments of the composition
* A definition
* A composition
* One or more deployments of the composition

### definition.yaml
@@ -757,8 +757,8 @@ Open the composition in VS Code: examples/composition_deprecated/composition.yam

Currently missing is the third and final part, the imperative steps which need to be processed:

- creation of TLS certificates and giteaAdmin password
- creation of a Forgejo repository for the stacks
- uploading the stacks in the Forgejo repository
* creation of TLS certificates and giteaAdmin password
* creation of a Forgejo repository for the stacks
* uploading the stacks in the Forgejo repository

Connecting the definition field (ArgoCD repo URL) and composition interconnects (function-patch-and-transform) are also missing.
Connecting the definition field (ArgoCD repo URL) and composition interconnects (function-patch-and-transform) are also missing.
@@ -19,10 +19,12 @@ grafana.sidecar.dashboards contains necessary configurations so additional user
grafana.grafana.ini.server contains configuration details that are necessary, so the ingress points to the correct url.

### Start

Once Grafana is running it is accessible at https://cnoe.localtest.me/grafana.
Many preconfigured dashboards can be used by clicking the menu option Dashboards.

### Adding your own dashboards

The application edfbuilder/kind/stacks/core/kube-prometheus.yaml is used to import new Loki dashboards. Examples for imported dashboards can be found in the folder edfbuilder/kind/stacks/core/kube-prometheus/dashboards.

It is possible to add your own dashboards. Dashboards must be in JSON format. To add your own dashboard, create a new ConfigMap in YAML format using one of the examples as a blueprint. The new dashboard in JSON format has to be added as the value for data.k8s-dashboard-[...].json like in the examples. (It is important to use a unique name for data.k8s-dashboard-[...].json for each dashboard.)
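A minimal dashboard ConfigMap along these lines (the sidecar label shown is the kube-prometheus-stack default and may differ in this setup; the dashboard JSON is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard
  labels:
    grafana_dashboard: "1"     # label the Grafana sidecar watches for
data:
  k8s-dashboard-my-dashboard.json: |
    {
      "title": "My Dashboard",
      "panels": []
    }
```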

@@ -5,6 +5,6 @@ description: Grafana Loki is a scalable open-source log aggregation system

## Loki Overview

The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml.
The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml.
Loki is started in microservices mode and contains the components ingester, distributor, querier, and query-frontend.
The Helm values file edfbuilder/kind/stacks/core/loki/values.yaml contains configuration values.
@@ -5,5 +5,5 @@ description: Grafana Promtail is an agent that ships logs to a Grafana Loki insta

## Promtail Overview

The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml.
The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml.
The Helm values file edfbuilder/kind/stacks/core/promtail/values.yaml contains configuration values.
@@ -17,14 +17,17 @@ Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mut
Kyverno simplifies governance and compliance in Kubernetes environments by automating policy management and ensuring best practices are followed.
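For illustration, a minimal validation policy requiring a label on Pods looks like this (standard Kyverno ClusterPolicy syntax, not a policy shipped with this stack):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit    # report violations without blocking admission
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required."
        pattern:
          metadata:
            labels:
              team: "?*"            # any non-empty value
```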

## Prerequisites

Same as for the idpbuilder installation:

- Docker Engine
- Go
- kubectl
- kind
* Docker Engine
* Go
* kubectl
* kind

## Installation

### Build process

For building idpbuilder, the source code needs to be downloaded and compiled:

```