Merge branch 'main' into feature/cicd-concept

This commit is contained in:
kai.reichart 2024-11-11 12:46:57 +00:00
commit 805bb5de09
40 changed files with 895 additions and 150 deletions

View file

@ -6,58 +6,83 @@ This repo contains business and architectural design and documentation of the De
The documentation is done in [Hugo-format](https://gohugo.io).
The repo contains a [Hugo `.devcontainer` definition](https://containers.dev/), so you just have to run a devcontainer-aware IDE locally, e.g. Visual Studio Code.
Hugo is a static site renderer - so to view the rendered documentation site you need a running Hugo process. Therefore there is
### Installation
* either a Hugo [`.devcontainer`-definition](https://containers.dev/) - just run a devcontainer-aware IDE or CLI, e.g. Visual Studio Code
* or a Hugo [`Devbox`-definition](https://www.jetify.com/devbox/) - in this case just run a devbox shell
To get a locally running documentation editing and presentation environment, follow these steps:
## Local installation of the Hugo documentation system
We describe two possible ways (one with devcontainer, one with devbox) to get the Hugo documentation system running locally.
For both, first perform the following three steps:
1. open a terminal on your local box
2. clone this repo: `git clone https://bitbucket.telekom-mms.com/scm/ipceicis/ipceicis-developerframework.git`
3. change to the repo working dir: `cd ipceicis-developerframework`
4. open the repo in a [devcontainer-aware tool/IDE](https://containers.dev/supporting) (e.g. `code .`)
5. start the `devcontainer` (in VSC it's `F1 + Reopen in Devcontainer`)
6. when the container is up & running just open your browser with `http://localhost:1313/`
2. clone this repo: `git clone https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/website-and-documentation`
3. change to the repo working dir: `cd website-and-documentation`
If you want to run the devcontainer without VS Code, you can use npm to run it inside a Docker container:
### Possibility 1: Hugo in a devcontainer
1. install Node.js (>= version 14), npm, and the Docker engine
2. install the devcontainer cli: `npm install -g @devcontainers/cli`
3. change into the folder of this repo
4. start the devcontainer by running: `devcontainer up --workspace-folder .`
5. find out the IP address of the devcontainer by using `docker ps` and `docker inspect <id of container>`
6. when the container is up & running just open your browser with `http://<DOCKER IP>:1313/`
[`devcontainers`](https://containers.dev/) run containers as virtual systems on your local box. The definition is in the `.devcontainer` folder.
Thus, as a prerequisite, you need a running container daemon, e.g. Docker.
### Editing
There are several options to create and run the devcontainer - we present here two:
#### Documentation language
#### Option 1: Run the container triggered by and connected to an IDE, e.g. VS Code
1. open the repo in a [devcontainer-aware tool/IDE](https://containers.dev/supporting) (e.g. `code .`)
1. start the `devcontainer` (in VSC it's `F1 + Reopen in Devcontainer`)
1. when the container is up & running just open your browser with `http://localhost:1313/`
#### Option 2: Run the container natively
An alternative way to get the container running is the [devcontainer CLI](https://github.com/devcontainers/cli), which lets you run the devcontainer without VS Code.
As a prerequisite, follow the installation steps of the devcontainer CLI.
1. start the devcontainer by running: `devcontainer up --workspace-folder .`
1. find out the IP address of the devcontainer by using `docker ps` and `docker inspect <id of container>` (see the sketch below)
1. when the container is up & running just open your browser with `http://<DOCKER IP>:1313/`
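For the IP lookup, the address can be extracted directly with a Go-template filter on `docker inspect`; a minimal sketch (the container ID is a placeholder taken from `docker ps`):
```shell
# list running containers and note the devcontainer's ID
docker ps
# print only the container's IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <id of container>
```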
### Possibility 2: Hugo in a devbox
[`Devboxes`](https://www.jetify.com/devbox/) are locally isolated environments, managed by the [Nix package manager](https://nix.dev/). So first [prepare the devbox](https://www.jetify.com/docs/devbox/installing_devbox/).
Then
1. ```devbox shell```
1. In the shell: ```hugo serve```
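For reference, a sketch of the full flow, including the devbox install one-liner from the Jetify docs (verify against the linked installation guide before piping anything into a shell):
```shell
# install devbox (see the Jetify installation docs)
curl -fsSL https://get.jetify.com/devbox | bash
# enter the isolated environment defined by devbox.json
devbox shell
# serve the documentation on http://localhost:1313/
hugo serve
```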
## Editing
### Documentation language
The documentation uses the [Docsy theme](https://www.docsy.dev/).
To edit content, just go to the `content` folder and edit it according to the [Docsy documentation](https://www.docsy.dev/docs/adding-content/)
### Committing
## Committing
After finishing a unit of work, commit and push.
## Annex
# Annex
### Installation steps illustrated
## Installation steps illustrated
When you run the above installation, the outputs could typically look like this:
#### Steps 4/5 in Visual Studio Code
### In Visual Studio Code
##### Reopen in Container
#### Reopen in Container
![vsc-f1](./assets/images/vsc-f1.png)
##### Hugo server is running and (typically) listens to localhost:1313
#### Hugo server is running and (typically) listens to localhost:1313
After some installation time you have:
![vsc-hugo](./assets/images/vsc-hugo.png)
#### Steps 6 in a web browser
### Final result in a web browser
![browser](./assets/images/browser.png)

View file

@ -1,5 +1,6 @@
---
title: Blog
menu: {main: {weight: 30}}
description: Blog section, work in progress (should contain more automated content)
---

View file

@ -30,6 +30,8 @@ Deploy and develop the famous socks shops:
* https://medium.com/@wadecharley703/socks-shop-microservices-application-deployment-on-the-cloud-cd1017cce1c0
* See also mkdev fork: https://github.com/mkdev-me/microservices-demo
### Humanitec Demos
* https://github.com/poc-template-org/node-js-sample

View file

@ -8,6 +8,3 @@ description: Platforming is the discipline to provide full sophisticated golden
## Surveys
* [10-best-internal-developer-platforms-to-consider-in-2023/](https://www.qovery.com/blog/10-best-internal-developer-platforms-to-consider-in-2023/)

View file

@ -1,38 +1,8 @@
+++
title = "Platform Components"
weight = 3
[params]
author = 'stephan.lo@telekom.de'
date = '2024-07-30'
+++
---
title: "Platform Components"
weight: 3
description: What in terms of components or building blocks is needed in a platform?
---
> This page is a work in progress. For now, the index collects links describing and listing typical components and building blocks of platforms. We also have a growing number of subsections on special types of components.
## CNCF
> [Here are capability domains to consider when building platforms for cloud-native computing](https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms):
* Web portals for observing and provisioning products and capabilities
* APIs (and CLIs) for automatically provisioning products and capabilities
* “Golden path” templates and docs enabling optimal use of capabilities in products
* Automation for building and testing services and products
* Automation for delivering and verifying services and products
* Development environments such as hosted IDEs and remote connection tools
* Observability for services and products using instrumentation and dashboards, including observation of functionality, performance and costs
* Infrastructure services including compute runtimes, programmable networks, and block and volume storage
* Data services including databases, caches, and object stores
* Messaging and event services including brokers, queues, and event fabrics
* Identity and secret management services such as service and user identity and authorization, certificate and key issuance, and static secret storage
* Security services including static analysis of code and artifacts, runtime analysis, and policy enforcement
* Artifact storage including storage of container image and language-specific packages, custom binaries and libraries, and source code
## IDP
> [An Internal Developer Platform (IDP) should be built to cover 5 Core Components:](https://internaldeveloperplatform.org/core-components/)
| Core Component | Short Description |
| ---- | --- |
| Application Configuration Management | Manage application configuration in a dynamic, scalable and reliable way. |
| Infrastructure Orchestration | Orchestrate your infrastructure in a dynamic and intelligent way depending on the context. |
| Environment Management | Enable developers to create new and fully provisioned environments whenever needed. |
| Deployment Management | Implement a delivery pipeline for Continuous Delivery or even Continuous Deployment (CD). |
| Role-Based Access Control | Manage who can do what in a scalable way. |

View file

@ -0,0 +1,11 @@
# GitOps changes the definition of 'Delivery' or 'Deployment'
We have GitOps these days: there is a desired state of an environment in a repo, and a reconciling mechanism provided by GitOps enforces this state on the environment.
There is no 'continuous whatever' step in between: GitOps just 'overwrites' (to avoid saying 'delivers' or 'deploys') the environment with the new state.
This means that whatever quality-assuring steps have to take place before 'overwriting' must be defined as state changers in the repos, not in the environments.
Conclusion: I think we only have three contexts, or let's say we don't have the context 'continuous delivery'.

View file

@ -1,11 +1,11 @@
+++
archetype = "sub-chapter"
title = "Developer Portals"
weight = 1
[params]
author = 'stephan.lo@telekom.de'
date = '2024-07-30'
+++
---
title: "Developer Portals"
weight: 2
description: Developer portals are one part of the UI for developers to access platforms. The general idea is that the UI parts should be enough for a developer to do their work.
---
> This page is a work in progress. For now the index collects links describing developer portals.
* Backstage (see also https://nl.devoteam.com/expert-view/project-unox/)
* [Port](https://www.getport.io/)

View file

@ -1,11 +1,8 @@
+++
archetype = "sub-chapter"
title = "Platform Orchestrator"
weight = 1
[params]
author = 'stephan.lo@telekom.de'
date = '2024-07-30'
+++
---
title: Platform Orchestrator
weight: 3
description: "The new kid on the block since 2023 ist 'Platform Orchestrating': Do the the magic declaratively cloud natively automated."
---
'Platform Orchestration' was first mentioned by [Thoughtworks in Sept 2023](https://www.thoughtworks.com/en-de/radar/techniques/platform-orchestration)

View file

@ -0,0 +1,36 @@
---
title: List of references
weight: 10
linktitle: References
description: A currently uncurated list of references regarding typical platform building components
---
## CNCF
> [Here are capability domains to consider when building platforms for cloud-native computing](https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms):
* Web portals for observing and provisioning products and capabilities
* APIs (and CLIs) for automatically provisioning products and capabilities
* “Golden path” templates and docs enabling optimal use of capabilities in products
* Automation for building and testing services and products
* Automation for delivering and verifying services and products
* Development environments such as hosted IDEs and remote connection tools
* Observability for services and products using instrumentation and dashboards, including observation of functionality, performance and costs
* Infrastructure services including compute runtimes, programmable networks, and block and volume storage
* Data services including databases, caches, and object stores
* Messaging and event services including brokers, queues, and event fabrics
* Identity and secret management services such as service and user identity and authorization, certificate and key issuance, and static secret storage
* Security services including static analysis of code and artifacts, runtime analysis, and policy enforcement
* Artifact storage including storage of container image and language-specific packages, custom binaries and libraries, and source code
## IDP
> [An Internal Developer Platform (IDP) should be built to cover 5 Core Components:](https://internaldeveloperplatform.org/core-components/)
| Core Component | Short Description |
| ---- | --- |
| Application Configuration Management | Manage application configuration in a dynamic, scalable and reliable way. |
| Infrastructure Orchestration | Orchestrate your infrastructure in a dynamic and intelligent way depending on the context. |
| Environment Management | Enable developers to create new and fully provisioned environments whenever needed. |
| Deployment Management | Implement a delivery pipeline for Continuous Delivery or even Continuous Deployment (CD). |
| Role-Based Access Control | Manage who can do what in a scalable way. |

View file

@ -1,10 +1,9 @@
+++
title = "Platform Engineering"
weight = 1
[params]
author = 'stephan.lo@telekom.de'
date = '2024-07-30'
+++
---
title: Platform Engineering
weight: 1
description: Theory and general blue prints of the platform engineering discipline
---
## Rationale

View file

@ -116,7 +116,7 @@ NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd argo-workflows Synced Healthy
argocd argocd Synced Healthy
argocd backstage Synced Healthy
argocd backstage-templates Synced Healthy
argocd included-backstage-templates Synced Healthy
argocd coredns Synced Healthy
argocd external-secrets Synced Healthy
argocd gitea Synced Healthy

View file

@ -1,7 +1,7 @@
---
title: Concepts
weight: 1
description: The underlying platfroming concepts of the EDF solution, i.e. the problem domain
description: The underlying platforming concepts of the Edge Developer Framework (EDF) solution, i.e. the problem domain
---

View file

@ -0,0 +1,7 @@
---
title: Bootstrapping Infrastructure
weight: 30
description: The cluster and the installed applications in the bootstrapping cluster
---
In order to do useful work, we need a number of applications right away. We're deploying these manually so that we have the necessary basis for our work. Once the framework has been developed far enough, we will deploy this infrastructure with the framework itself.

View file

@ -0,0 +1,84 @@
---
title: Backup of the Bootstrapping Cluster
weight: 30
description: Backup and Restore of the Contents of the Bootstrapping Cluster
---
## Velero
We are using [Velero](https://velero.io/) for backup and restore of the deployed applications.
## Installing Velero Tools
To manage a Velero install in a cluster, you need to have Velero command line tools installed locally. Please follow the instructions for [Basic Install](https://velero.io/docs/v1.9/basic-install).
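As a convenience, on macOS and Linux the CLI is also packaged for Homebrew; a sketch (the Basic Install page remains the authoritative source):
```
# install the velero CLI via Homebrew
brew install velero
# confirm the client works without contacting a cluster
velero version --client-only
```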
## Setting Up Velero For a Cluster
Installing and configuring Velero for a cluster can be accomplished with the CLI.
1. Create a file with the credentials for the S3 compatible bucket that is storing the backups, for example `credentials.ini`.
```ini
[default]
aws_access_key_id = "Access Key Value"
aws_secret_access_key = "Secret Key Value"
```
2. Install Velero in the cluster
```
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket osc-backup \
--secret-file ./credentials.ini \
--use-volume-snapshots=false \
--use-node-agent=true \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=https://obs.eu-de.otc.t-systems.com
```
3. Delete `credentials.ini`, it is not needed anymore (a secret has been created in the cluster).
4. Create a schedule to back up the relevant resources in the cluster:
```
velero schedule create devfw-bootstrap --schedule="23 */2 * * *" "--include-namespaces=forgejo"
```
## Working with Velero
You can now use Velero to create backups, restore them, or perform other operations. Please refer to the [Velero Documentation](https://velero.io/docs/main/backup-reference/).
To list all currently available backups:
```
velero backup get
```
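For example, an ad-hoc backup and a restore from it could look like this (the backup name is illustrative):
```
# create a one-off backup of the forgejo namespace
velero backup create forgejo-adhoc --include-namespaces=forgejo
# restore that backup into the cluster
velero restore create --from-backup forgejo-adhoc
```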
## Setting up a Service Account for Access to the OTC Object Storage Bucket
We are using the S3-compatible Open Telekom Cloud Object Storage service to provide storage for the backups. We picked OTC specifically because we're not using it for anything else, so no matter what catastrophe we create in Open Sovereign Cloud, the backups should be safe.
### Create an Object Storage Service Bucket
1. Log in to the [OTC Console with the correct tenant](https://auth.otc.t-systems.com/authui/federation/websso?domain_id=81e7dbd7ec9f4b03a58120dfaa61d3db&idp=ADFS_MMS_OTC00000000001000113497&protocol=saml).
1. Navigate to [Object Storage Service](https://console.otc.t-systems.com/obs/?agencyId=WEXsFwkkVdGYULIrZT1icF4nmHY1dgX2&region=eu-de&locale=en-us#/obs/manager/buckets).
1. Click Create Bucket in the upper right hand corner. Give your bucket a name. No further configuration should be necessary.
### Create a Service User to Access the Bucket
1. Log in to the [OTC Console with the correct tenant](https://auth.otc.t-systems.com/authui/federation/websso?domain_id=81e7dbd7ec9f4b03a58120dfaa61d3db&idp=ADFS_MMS_OTC00000000001000113497&protocol=saml).
1. Navigate to [Identity and Access Management](https://console.otc.t-systems.com/iam/?agencyId=WEXsFwkkVdGYULIrZT1icF4nmHY1dgX2&region=eu-de&locale=en-us#/iam/users).
1. Navigate to User Groups, and click Create User Group in the upper right hand corner.
1. Enter a suitable name ("OSC Cloud Backup") and click OK.
1. In the group list, locate the group just created and click its name.
1. Click Authorize to add the necessary roles. Enter "OBS" in the search box to filter for Object Storage roles.
1. Select "OBS OperateAccess", if there are two roles, select them both.
1. **2024-10-15** Also select the "OBS Administrator" role. It is unclear why the "OBS OperateAccess" role is not sufficient, but without the admin role, the service user will not have write access to the bucket.
1. Click Next to save the roles, then click OK to confirm, then click Finish.
1. Navigate to Users, and click Create User in the upper right hand corner.
1. Give the user a sensible name ("ipcei-cis-devfw-osc-backups").
1. Disable Management console access
1. Enable Access key, disable Password, disable Login protection.
1. Click Next
1. Pick the group created earlier.
1. Download the access key when prompted.
The access key is a CSV file with the Access Key and the Secret Key listed in the second line.
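Assuming the usual layout (header in the first line, values in the second; verify the column order in your download), the keys can be converted into the `credentials.ini` format from above like this:
```
# extract the second line of the downloaded CSV; the column numbers are an assumption
ACCESS_KEY=$(sed -n '2p' credentials.csv | cut -d, -f2)
SECRET_KEY=$(sed -n '2p' credentials.csv | cut -d, -f3)
cat > credentials.ini <<EOF
[default]
aws_access_key_id = "$ACCESS_KEY"
aws_secret_access_key = "$SECRET_KEY"
EOF
```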

View file

@ -9,7 +9,7 @@ description: Our top candidate for a platform orchestrator
In late 2023 platform orchestration arose - the discipline of declaratively defining, building, orchestrating and reconciling the building blocks of (digital) platforms.
The famost one ist the platfrom orchestrator from Humanitec. They provide lots of concepts and access, also open sourced tools and schemas. But they do not have open sourced the ocheastartor itself.
The most famous one is the platform orchestrator from Humanitec. They provide lots of concepts and access, as well as open-sourced tools and schemas. But they have not open sourced the orchestrator itself.
Thus we were looking for open source means for platform orchestrating and found [CNOE](https://cnoe.io).
{{% /pageinfo %}}

View file

@ -389,7 +389,7 @@ NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd argo-workflows Synced Healthy
argocd argocd Synced Healthy
argocd backstage Synced Healthy
argocd backstage-templates Synced Healthy
argocd included-backstage-templates Synced Healthy
argocd external-secrets Synced Healthy
argocd gitea Synced Healthy
argocd keycloak Synced Healthy
@ -472,7 +472,7 @@ and see the basic setup of the Backstage portal:
### Use a Golden Path: 'Basic Deployment'
Now we want to use the Backstage portal as a developer. We create in Backstage our own platfrom based activity by using the golden path template 'Basic Deployment:
Now we want to use the Backstage portal as a developer. We create in Backstage our own platform-based activity by using the golden path template 'Basic Deployment':
![alt text](image-10.png)

View file

@ -28,7 +28,7 @@ https://confluence.telekom-mms.com/display/IPCEICIS/Architecture
## Cloud sizing for the initial DevFramework
### 28.08.24, Stefan Betkle, Florian Fürstenberg, Stephan Lo
### 28.08.24, Stefan Bethke, Florian Fürstenberg, Stephan Lo
1) at first, many DevFramework platform engineers work locally, with central deployment to OTC into **one / at most two** control clusters
2) we initially assume approx. 5 clusters

View file

@ -1,8 +1,6 @@
---
title: Solution
weight: 2
description: The underlying platfroming concepts of the EDF solution, the solution domain
description: "The implemented platforming solutions of EDF, i.e. the solution domain. The documentation of all project output: Design, Building blocks, results, show cases, artifacts and so on"
---
All output the project created: Design, Building blocks, results, show cases, artifacts

View file

@ -1,9 +1,5 @@
+++
title = "Backstage"
weight = 2
[params]
author = 'evgenii.dominov@telekom.de'
date = '2024-09-36'
+++
Here you will find information about Backstage, its plugins and usage tutorials
---
title: Backstage
weight: 2
description: Here you will find information about Backstage, its plugins and usage tutorials
---

View file

@ -1,7 +1,8 @@
+++
title = "Analysis of the CNOE competitors"
weight = 1
+++
---
title: Analysis of CNOE competitors
weight: 1
description: We compare CNOE - which we see as an orchestrator - with other platform orchestration tools like Kratix and Humanitec
---
## Kratix

View file

@ -0,0 +1,4 @@
---
title: CNOE
description: CNOE is a platform-building orchestrator, which we chose in 2024, at least as a starting point, to build the EDF
---

View file

@ -0,0 +1,141 @@
---
title: ArgoCD
weight: 30
description: A description of ArgoCD and its role in CNOE
---
## What is ArgoCD?
ArgoCD is a Continuous Delivery tool for kubernetes based on GitOps principles.
> ELI5: ArgoCD is an application running in kubernetes which monitors Git
> repositories containing some sort of kubernetes manifests and automatically
> deploys them to some configured kubernetes clusters.
From ArgoCD's perspective, applications are defined as custom resource
definitions within the kubernetes clusters that ArgoCD monitors. Such a
definition describes a source git repository that contains kubernetes
manifests, in the form of a helm chart, kustomize, jsonnet definitions or plain
yaml files, as well as a target kubernetes cluster and namespace the manifests
should be applied to. Thus, ArgoCD is capable of deploying applications to
various (remote) clusters and namespaces.
ArgoCD monitors both the source and the destination. It applies changes from
the git repository that acts as the source of truth for the destination as soon
as they occur, i.e. if a change was pushed to the git repository, the change is
applied to the kubernetes destination by ArgoCD. Subsequently, it checks
whether the desired state was established. For example, it verifies that all
resources were created, enough replicas started, and that all pods are in the
`running` state and healthy.
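To make this concrete, here is a minimal sketch of such an Application resource (repository URL, path, and namespaces are illustrative placeholders, not the project's actual values):
```shell
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # illustrative name
  namespace: argocd            # namespace ArgoCD watches for Application CRs
spec:
  project: default
  source:
    repoURL: https://git.example.com/my-org/my-app.git  # source of truth
    targetRevision: HEAD
    path: manifests            # plain yaml, helm chart, or kustomize directory
  destination:
    server: https://kubernetes.default.svc              # target cluster
    namespace: my-app
  syncPolicy:
    automated: {}              # apply changes as soon as they appear in git
EOF
```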
## Architecture
### Core Components
An ArgoCD deployment mainly consists of 3 main components:
#### Application Controller
The application controller is a kubernetes operator that synchronizes the live
state within a kubernetes cluster with the desired state derived from the git
sources. It monitors the live state, can detect deviations, and perform
corrective actions. Additionally, it can execute hooks on life cycle stages
such as pre- and post-sync.
#### Repository Server
The repository server interacts with git repositories and caches their state,
to reduce the amount of polling necessary. Furthermore, it is responsible for
generating the kubernetes manifests from the resources within the git
repositories, i.e. executing helm or jsonnet templates.
#### API Server
The API Server is a REST/gRPC Service that allows the Web UI and CLI, as well
as other API clients, to interact with the system. It also acts as the callback
for webhooks particularly from Git repository platforms such as GitHub or
Gitlab to reduce repository polling.
### Others
The system primarily stores its configuration as kubernetes resources. Thus,
other external storage is not vital.
Redis
: A Redis store is optional but recommended to be used as a cache to reduce
load on ArgoCD components and connected systems, e.g. git repositories.
ApplicationSetController
: The ApplicationSet Controller is, similar to the Application Controller, a
kubernetes operator that can deploy applications based on parameterized
application templates. This allows the deployment of different versions of an
application into various environments from a single template.
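A sketch of such a parameterized template using the list generator (all names are illustrative):
```shell
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-envs
  namespace: argocd
spec:
  generators:
    - list:
        elements:              # one Application is rendered per element
          - env: dev
          - env: prod
  template:
    metadata:
      name: 'my-app-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://git.example.com/my-org/my-app.git
        targetRevision: HEAD
        path: 'overlays/{{env}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'my-app-{{env}}'
EOF
```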
### Overview
![Conceptual Architecture](./argocd_architecture.webp)
![Core components](./argocd-core-components.webp)
## Role in CNOE
ArgoCD is one of the core components besides gitea/forgejo that is being
bootstrapped by the idpbuilder. Future project creation, e.g. through
backstage, relies on the availability of ArgoCD.
After the initial bootstrapping phase, effectively all components in the stack
that are deployed in kubernetes are managed by ArgoCD. This includes the
bootstrapped components of gitea and ArgoCD which are onboarded afterward.
Thus, the idpbuilder is only necessary in the bootstrapping phase of the
platform and the technical coordination of all components shifts to ArgoCD
eventually.
In general, the creation of new projects and applications should take place in
backstage. It is a catalog of software components and best practices that allows
developers to grasp and to manage their software portfolio. Underneath,
however, the deployment of applications and platform components is managed by
ArgoCD. Among others, backstage creates Application CRDs to instruct ArgoCD to
manage deployments and subsequently report on their current state.
## Glossary
_Initially shamelessly copied from [the docs](https://argo-cd.readthedocs.io/en/stable/core_concepts/)_
Application
: A group of Kubernetes resources as defined by a manifest. This is a Custom Resource Definition (CRD).
ApplicationSet
: A CRD that is a template that can create multiple parameterized Applications.
Application source type
: Which Tool is used to build the application.
Configuration management tool
: See Tool.
Configuration management plugin
: A custom tool.
Health
: The health of the application, is it running correctly? Can it serve requests?
Live state
: The live state of that application. What pods etc are deployed.
Refresh
: Compare the latest code in Git with the live state. Figure out what is different.
Sync
: The process of making an application move to its target state. E.g. by applying changes to a Kubernetes cluster.
Sync status
: Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be?
Sync operation status
: Whether or not a sync succeeded.
Target state
: The desired state of an application, as represented by files in a Git repository.
Tool
: A tool to create manifests from a directory of files. E.g. Kustomize. See Application Source Type.

Binary file not shown.

After

Width:  |  Height:  |  Size: 86 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 76 KiB

View file

@ -0,0 +1,6 @@
---
title: idpbuilder
weight: 3
description: Here you will find information about idpbuilder installation and usage
---

Binary file not shown.

After

Width:  |  Height:  |  Size: 48 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

View file

@ -0,0 +1,178 @@
---
title: HTTP Routing
weight: 100
---
### Routing switch
The idpbuilder supports creating platforms using either path-based or
subdomain-based routing. Subdomain routing is the default:
```shell
idpbuilder create --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```
Path-based routing is enabled with `--use-path-routing`:
```shell
idpbuilder create --use-path-routing --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```
However, even though argo eventually reports all deployments as green, not
the entire demo is actually functional (verification?). This is due to
hardcoded values that, for example, point to the path-routed location of gitea
to access git repos. Thus, backstage might not be able to access them.
Within the demo / ref-implementation, a simple search & replace is suggested to
change URLs to fit the given environment (as sketched below). But proper scripting/templating could
take care of that as the hostnames and necessary properties should be
available. This is, however, a tedious and repetitive task one has to keep in
mind throughout the entire system, which might lead to an explosion of config
options in the future. Code that addresses correct routing is located in both
the stack templates and the idpbuilder code.
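A minimal sketch of such a search & replace (hostnames are illustrative; a real setup should template these values instead):
```shell
# swap the hardcoded hostname in all checked-out stack files (GNU sed syntax)
grep -rl 'cnoe.localtest.me' . | xargs sed -i 's/cnoe\.localtest\.me/my-platform.example.com/g'
```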
### Cluster internal routing
For the most part, components communicate with either the cluster API using the
default DNS or with each other via http(s) using the public DNS/hostname (+
path-routing scheme). The latter is necessary due to configs that are visible
and modifiable by users. This includes for example argocd config for components
that has to sync to a gitea git repo. Using the same URL for internal and
external resolution is imperative.
The idpbuilder achieves transparent internal DNS resolution by overriding the
public DNS name in the cluster's internal DNS server (coreDNS). Subsequently,
within the cluster requests to the public hostnames resolve to the IP of the
internal ingress controller service. Thus, internal and external requests take
a similar path and run through proper routing (rewrites, ssl/tls, etc).
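One way to observe this is to resolve the public hostname from inside the cluster and compare the result with the ingress controller's service IP; a sketch (hostname and ingress namespace are assumptions):
```shell
# resolve the public hostname from within the cluster
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup gitea.cnoe.localtest.me
# compare with the ingress controller's service IP (namespace may differ)
kubectl get svc -n ingress-nginx
```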
### Conclusion
One has to keep in mind that some specific app features might not
work properly, or only with hacks, when using path based routing (e.g. the
docker registry in gitea). Furthermore, supporting multiple setup strategies
will become cumbersome as the platform grows. We should probably only support
one type of setup to keep the system as simple as possible, but allow
modification if necessary.
DNS solutions like `nip.io` or the already used `localtest.me` mitigate the
need for path based routing.
## Excerpt
HTTP is a cornerstone of the internet due to its high flexibility. Starting
from HTTP/1.1, each request contains, among other things, a path and a
`Host` name in its header. While an HTTP request is sent to a single IP address
/ server, these two pieces of data allow (distributed) systems to handle
requests in various ways.
```shell
$ curl -v http://google.com/something > /dev/null
* Connected to google.com (2a00:1450:4001:82f::200e) port 80
* using HTTP/1.x
> GET /something HTTP/1.1
> Host: google.com
> User-Agent: curl/8.10.1
> Accept: */*
...
```
### Path-Routing
Imagine requesting `http://myhost.foo/some/file.html`. In a simple setup, the
web server that `myhost.foo` resolves to would serve static files from some
directory, e.g. `/<some_dir>/some/file.html`.
In more complex systems, one might have multiple services that fulfill various
roles, for example a service that generates HTML sites of articles from a CMS
and a service that can convert images into various formats. Using path-routing
both services are available on the same host from a user's POV.
An article served from `http://myhost.foo/articles/news1.html` would be
generated from the article service and points to an image
`http://myhost.foo/images/pic.jpg` which in turn is generated by the image
converter service. When a user sends an HTTP request to `myhost.foo`, they hit
a reverse proxy which forwards the request based on the requested path to some
other system, waits for a response, and subsequently returns that response to
the user.
![Path-Routing Example](../path-routing.png)
Such a setup hides the complexity from the user and allows the creation of
large distributed, scalable systems acting as a unified entity from the
outside. Since everything is served on the same host, the browser is inclined
to trust all downstream services. This allows for easier 'communication'
between services through the browser. For example, cookies could be valid for
the entire host and thus authentication data could be forwarded to requested
downstream services without the user having to explicitly re-authenticate.
Furthermore, services 'know' their user-facing location by knowing their path
and the paths to other services as paths are usually set as a convention and /
or hard-coded. In practice, this makes configuration of the entire system
somewhat easier, especially if you have various environments for testing,
development, and production. The hostname of the system does not matter as one
can use hostname-relative URLs, e.g. `/some/service`.
Load balancing is also easily achievable by multiplying the number of service
instances. Most reverse proxy systems are able to apply various load balancing
strategies to forward traffic to downstream systems.
Problems might arise if downstream systems are not built with path-routing in
mind. Some systems need to be served from the root of a domain; see for
example the container registry spec.
### Hostname-Routing
Each downstream service in a distributed system is served from a different
host, typically a subdomain, e.g. `serviceA.myhost.foo` and
`serviceB.myhost.foo`. This gives services full control over their respective
host, and even allows them to do path-routing within each system. Moreover,
hostname-routing allows the entire system to create more flexible and powerful
routing schemes in terms of scalability. Intra-system communication becomes
somewhat harder as the browser treats each subdomain as a separate host,
shielding cookies, for example, from one another.
Each host that serves some services requires a DNS entry that has to be
published to the clients (from some DNS server). Depending on the environment
this can become quite tedious as DNS resolution on the internet and intranets
might have to deviate. This applies to intra-cluster communication as well, as
seen with the idpbuilder's platform. In this case, external DNS resolution has
to be replicated within the cluster to be able to use the same URLs to address
for example gitea.
The following example depicts DNS-only routing. By defining separate DNS
entries for each service / subdomain, requests are resolved to the respective
servers. In theory, no additional infrastructure is necessary to route user
traffic to each service. However, as services are completely separated, other
infrastructure like authentication may have to be duplicated.
![DNS-only routing](../hostname-routing.png)
When using hostname based routing, one does not have to set different IPs for
each hostname. Instead, having multiple DNS entries pointing to the same set of
IPs allows re-using existing infrastructure. As shown below, a reverse proxy is
able to forward requests to downstream services based on the `Host` request
header. This way a specific hostname can be forwarded to a defined service.
![Hostname Proxy](../hostname-routing-proxy.png)
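This can be exercised without any DNS changes by pinning both hostnames to the proxy's IP with curl; a sketch (IP and hostnames are illustrative):
```shell
# both requests hit the same proxy IP; routing happens on the Host header
curl --resolve serviceA.myhost.foo:80:203.0.113.10 http://serviceA.myhost.foo/
curl --resolve serviceB.myhost.foo:80:203.0.113.10 http://serviceB.myhost.foo/
```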
At the same time, one could imagine a multi-tenant system that differentiates
customer systems by name, e.g. `tenant-1.cool.system` and
`tenant-2.cool.system`. Configured as a wildcard-style domain, `*.cool.system`
could point to a reverse proxy that forwards requests to a tenant's instance of
a system, allowing re-use of central infrastructure while still hosting
separate systems per tenant.
The implicit dependency on DNS resolution generally makes this kind of routing
more complex and error-prone as changes to DNS server entries are not always
possible or modifiable by everyone. Also, local changes to your `/etc/hosts`
file are a constant pain and should be seen as a dirty hack. As mentioned
above, dynamic DNS solutions like `nip.io` are often helpful in this case.
### Conclusion
Path and hostname based routing are the two most common methods of HTTP traffic
routing. They can be used separately but more often they are used in
conjunction. Due to HTTP's versatility other forms of HTTP routing, for example
based on the `Content-Type` Header are also very common.

View file

@ -348,4 +348,4 @@ Optimizations:
- Remove or configure gitea.cnoe.localtest.me, it seems not to work even in the idpbuilder local installation with KIND.
- Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can either be done by parametrization or by utilizing Terraform / OpenTOFU or Crossplane.
- Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can either be done by parametrization or by utilizing Terraform / OpenTOFU or Crossplane.

Binary file not shown.

After

Width:  |  Height:  |  Size: 52 KiB

View file

@ -0,0 +1,5 @@
---
title: Included Backstage Templates
weight: 2
description: Here you will find information about backstage templates that are included in idpbuilder's ref-implementation
---

View file

@ -0,0 +1,19 @@
+++
title = "Template for basic Argo Workflow"
weight = 4
+++
# Backstage Template for Basic Argo Workflow with Spark Job
This Backstage template YAML automates the creation of an Argo Workflow for Kubernetes that includes a basic Spark job, providing a convenient way to configure and deploy workflows involving data processing or machine learning jobs. Users can define key parameters, such as the application name and the path to the main Spark application file. The template creates necessary Kubernetes resources, publishes the application code to a Gitea Git repository, registers the application in the Backstage catalog, and deploys it via ArgoCD for easy CI/CD management.
## Use Case
This template is designed for teams that need a streamlined approach to deploy and manage data processing or machine learning jobs using Spark within an Argo Workflow environment. It simplifies the deployment process and integrates the application with a CI/CD pipeline. The template performs the following:
- **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
- **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
- **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.
This template boosts productivity by automating steps required for setting up Argo Workflows and Spark jobs, integrating version control, and enabling centralized management and visibility, making it ideal for projects requiring efficient deployment and scalable data processing solutions.

View file

@ -0,0 +1,19 @@
+++
title = "Template for basic kubernetes deployment"
weight = 4
+++
# Backstage Template for Kubernetes Deployment
This Backstage template YAML automates the creation of a basic Kubernetes Deployment, aimed at simplifying the deployment and management of applications in Kubernetes for the user. The template allows users to define essential parameters, such as the application's name, and then creates and configures the Kubernetes resources, publishes the application code to a Gitea Git repository, and registers the application in the Backstage catalog for tracking and management.
## Use Case
The template is designed for teams needing a streamlined approach to deploy applications in Kubernetes while automatically configuring their CI/CD pipelines. It performs the following:
- **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container.
- **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
- **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.
This template enhances productivity by automating several steps required for deployment, version control, and registration, making it ideal for projects where fast, consistent deployment and centralized management are required.

View file

@ -0,0 +1,69 @@
---
title: Validation and Verification
weight: 100
description: How does CNOE ensure equality between actual and desired state
---
## Definition
The CNOE docs do somewhat interchange validation and verification but for the
most part they adhere to the general definition:
> Validation is used when you check your approach before actually executing an
> action.
Examples:
- Form validation before processing the data
- Compiler checking syntax
- Rust's borrow checker
> Verification describes testing if your 'thing' complies with your spec
Examples:
- Unit tests
- Testing availability (ping, curl health check)
- Checking a ZKP of some computation
---
## In CNOE
It seems that both validation and verification within the CNOE framework are
not actually handled by some explicit component but should be addressed
throughout the system and workflows.
As stated in the [docs](https://cnoe.io/docs/intro/capabilities/validation),
validation takes place in all parts of the stack by enforcing strict API usage
and policies (signing, mitigations, security scans etc, see usage of kyverno
for example), and using code generation (proven code), linting, formatting,
LSP. Consequently, validation of source code, templates, etc. is a best
practice rather than a hard fact or feature, and it is up to the user
to incorporate it into their workflows and pipelines. This is probably
due to the complexity of the entire stack and the individual properties of
each component and application.
Verification of artifacts and deployments actually exists in a somewhat similar
state. The current CNOE reference-implementation does not provide sufficient
verification tooling.
However, as stated in the [docs](https://cnoe.io/docs/reference-implementation/integrations/verification)
within the framework `cnoe-cli` is capable of extremely limited verification of
artifacts within kubernetes. The same verification is also available as a step
within a backstage
[plugin](https://github.com/cnoe-io/plugin-scaffolder-actions). This is pretty
much just a wrapper of the cli tool. The tool consumes CRD-like structures
defining the state of pods and CRDs and checks for their existence within a
live cluster ([example](https://github.com/cnoe-io/cnoe-cli/blob/main/pkg/cmd/prereq/ack-s3-prerequisites.yaml)).
Depending on the aspiration of 'verification' this check is rather superficial
and might only suffice as an initial smoke test. Furthermore, it seems like the
feature is not actually used within the CNOE stacks repo.
For a live product, more in-depth verification tools and schemes are necessary
to verify the correct configuration and authenticity of workloads, which is, in
the context of traditional cloud systems, only achievable to a limited degree.
Existing tools within the stack, e.g. Argo, provide some verification
capabilities. But further investigation into the general topic is necessary.

View file

@ -1,6 +0,0 @@
+++
title = "idpbuilder"
weight = 3
+++
Here you will find information about idpbuilder installation and usage

View file

@ -0,0 +1,44 @@
---
title: Kyverno
description: Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mutate configurations of Kubernetes resources
---
## Kyverno Overview
Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mutate configurations of Kubernetes resources. It allows administrators to define policies as Kubernetes custom resources (CRDs) without requiring users to learn a new language or system.
### Key Uses
1. **Policy Enforcement**: Kyverno ensures resources comply with security, operational, or organizational policies, such as requiring specific labels, annotations, or resource limits.
2. **Validation**: It checks resources against predefined rules, ensuring configurations are correct before they are applied to the cluster.
3. **Mutation**: Kyverno can automatically modify resources on-the-fly, adding missing fields or values to Kubernetes objects.
4. **Generation**: It can generate resources like ConfigMaps or Secrets automatically when needed, helping to maintain consistency.
Kyverno simplifies governance and compliance in Kubernetes environments by automating policy management and ensuring best practices are followed.
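To make the validation case concrete, here is a minimal sketch of a ClusterPolicy requiring a label on Pods (a variant of the canonical require-labels example; all names are illustrative):
```
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Audit    # use Enforce to reject non-compliant resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label 'team' is required."
        pattern:
          metadata:
            labels:
              team: "?*"            # any non-empty value
EOF
```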
## Prerequisites
Same as for the idpbuilder installation:
- Docker Engine
- Go
- kubectl
- kind
## Installation
### Build process
To build idpbuilder, the source code needs to be downloaded and compiled:
```
git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build
```
### Start idpbuilder
To start the idpbuilder with kyverno integration execute the following command:
```
idpbuilder create --use-path-routing -p https://github.com/cnoe-io/stacks//ref-implementation -p https://github.com/cnoe-io/stacks//kyverno-integration
```
After this step, you can see in ArgoCD that Kyverno was installed.
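The install can also be checked from the command line (the `kyverno` namespace is the default for the upstream chart and an assumption here):
```
kubectl get pods -n kyverno
kubectl get clusterpolicies
```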

Binary file not shown.

After

Width:  |  Height:  |  Size: 165 KiB

12
devbox.json Normal file
View file

@ -0,0 +1,12 @@
{
"$schema": "https://raw.githubusercontent.com/jetify-com/devbox/0.10.5/.schema/devbox.schema.json",
"packages": [
"hugo@0.125.4",
"dart-sass@1.75.0",
"go@latest"
],
"shell": {
"init_hook": [],
"scripts": {}
}
}

165
devbox.lock Normal file
View file

@ -0,0 +1,165 @@
{
"lockfile_version": "1",
"packages": {
"dart-sass@1.75.0": {
"last_modified": "2024-05-03T15:42:32Z",
"resolved": "github:NixOS/nixpkgs/5fd8536a9a5932d4ae8de52b7dc08d92041237fc#dart-sass",
"source": "devbox-search",
"version": "1.75.0",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/6ynzjs0v55h88ri86li1d9nyr822n7kk-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/f4wbni4cqdhq8y9phl6aazyh54mnacz7-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/6ynzjs0v55h88ri86li1d9nyr822n7kk-dart-sass-1.75.0"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/g88isq3r0zpxvx1rzc86dl9ny15jr980-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/l6vdyb4i5hb9qmvms9v9g7vsnynfq0lb-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/g88isq3r0zpxvx1rzc86dl9ny15jr980-dart-sass-1.75.0"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/h79n1apvmgpvw4w855zxf9qx887k9v3d-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/bxmfb2129kn4xnrz5i4p4ngkplavrxv4-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/h79n1apvmgpvw4w855zxf9qx887k9v3d-dart-sass-1.75.0"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/yvr71pda4bm9a2dilgyd77297xx32iad-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/h8n6s7f91kn596g2hbn3ccbs4s80bm46-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/yvr71pda4bm9a2dilgyd77297xx32iad-dart-sass-1.75.0"
}
}
},
"go@latest": {
"last_modified": "2024-10-13T23:44:06Z",
"resolved": "github:NixOS/nixpkgs/d4f247e89f6e10120f911e2e2d2254a050d0f732#go",
"source": "devbox-search",
"version": "1.23.2",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/35jikx2wg5r0qj47sic0p99bqnmwi6cn-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/35jikx2wg5r0qj47sic0p99bqnmwi6cn-go-1.23.2"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/6bx6d90kpy537yab22wja70ibpp4gkww-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/6bx6d90kpy537yab22wja70ibpp4gkww-go-1.23.2"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/yi89mimkmw48qhzrll1aaibxbvllpsjv-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/yi89mimkmw48qhzrll1aaibxbvllpsjv-go-1.23.2"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/klw1ipjsqx1np7pkk833x7sad7f3ivv9-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/klw1ipjsqx1np7pkk833x7sad7f3ivv9-go-1.23.2"
}
}
},
"hugo@0.125.4": {
"last_modified": "2024-04-27T02:17:36Z",
"resolved": "github:NixOS/nixpkgs/698fd43e541a6b8685ed408aaf7a63561018f9f8#hugo",
"source": "devbox-search",
"version": "0.125.4",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/2ssds5l4s15xfgljv2ygjhqpn949lxj4-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/2ssds5l4s15xfgljv2ygjhqpn949lxj4-hugo-0.125.4"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/nln80v8vsw5h3hv7kihglb12fy077flb-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/nln80v8vsw5h3hv7kihglb12fy077flb-hugo-0.125.4"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/n6az4gns36nrq9sbiqf2kf7kgn1kjyfm-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/n6az4gns36nrq9sbiqf2kf7kgn1kjyfm-hugo-0.125.4"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/k53ijl83p62i6lqia2jjky8l136x42i7-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/k53ijl83p62i6lqia2jjky8l136x42i7-hugo-0.125.4"
}
}
}
}
}

View file

@ -1,35 +0,0 @@
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lSQU5NM3Z0ZlRKWmdZVWdkZG14NC9SbjB3RFFZSktvWklodmNOQVFFTEJRQXcKRFRFTE1Ba0dBMVVFQXhNQ1kyRXdIaGNOTWpRd09UQXlNVFEwT1RNd1doY05NelF3T1RBeU1UUTBPVE13V2pBTgpNUXN3Q1FZRFZRUURFd0pqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWJUCjhJb0xxRGNmdmRhTUNicExDbEZEd09rMDVQQUhsdXBmbkF5bmd3OStrWmZBY21GNUd1WS95TlJtQWUxbGY2RlAKc3pWRUNRZXVFa1gyS1lRekxvclJMOE53VmM5RDBNWkRJMmRHMEdVUjZueU5RZHZaTkFLUWtwM3luT2VqbEdvYwp2SWVCREVKejJnNzErWTZoOGtwWUh0NEx1clIwZStvVEwySXhBYStIdjh6SmJCa2pPQ1lCZExRYWVGNjduSE15CitDTUV2emRRSlBCSXlqb2RtREFKTWVDWm9ac3g0YUE5T2hXL3dwczdKTzRnQ1NuUGVkMjN4blVTeWd4QnNBWVcKMWJDZEhONzlscWZxaFV6K21Wb3hrRXZkemt6S0ErWUdSZ3FSY0ZCRE9pR1dOVzREV0t5Y2hsYTBoektPZ3dFZwpvK21WaXBtV1RCbjY4REJJVWdjQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0dtTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZGYVR6VEUwNTNNSy9tN0JyY3hXMDVOYXkxN0VNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBZGtMWVpLNWZnRFNYb2xGNTNwTldYNWlxMlJDMXVaTlhOb3ArZ0drSjkzTW5tNFM5bQpZWXNiMks2ZDNUM0JkdmNHOWQ0VUhGTzMvdU9kaTkrc3AxY2h1U25hNis4V1hrZkZBY0c2anJubXVVTDNzOWUyCmFrRGhoWHdubnlSQmF2N0FVdGlQeWxTVVllbmZsZDd2dC91MmRQeDRYT2QrMmcwV3ZpdHpZWUdKenY2K1FiQ1UKS1pwVENmUkhPejFCQ2JnVDBoSU5BZjhMcFNoL3pzd1lrZjM5MHV1Z0tzVkZPVVJBNmc4dU5rSEoyUzRlUEloVApib3Zzb0swYTR6WDRZUWhrREpUYTF6MVBFUzJXWW1VYU5uZWlaTTVyTWw2TEppcWFBZ01mbUg2WTI1UnJobldlCkpTS1dlNmNxajdYUDhZemxMcVBEMUpNamh5QkVHbGpFT25hSQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://api.edf-bootstrap.cx.fg1.ffm.osc.live
name: shoot-fs1-0-edf-bootstrap-oidc
contexts:
- context:
cluster: shoot-fs1-0-edf-bootstrap-oidc
user: shoot-fs1-0-edf-bootstrap-oidc
name: shoot-fs1-0-edf-bootstrap-oidc
current-context: shoot-fs1-0-edf-bootstrap-oidc
kind: Config
preferences: {}
users:
- name: shoot-fs1-0-edf-bootstrap-oidc
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- oidc-login
- get-token
- --oidc-issuer-url=https://auth.apps.fs1-0.ffm.osc.live/dex
- --oidc-client-id=oidc-user
- --oidc-client-secret=8GmH7HL4mu4iwWptpV0418iMooaz4k4F
- --oidc-extra-scope=openid
- --oidc-extra-scope=profile
- --oidc-extra-scope=email
- --oidc-extra-scope=groups
- --oidc-extra-scope=audience:server:client_id:kubernetes
- --certificate-authority-data=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUY2ekNDQk5PZ0F3SUJBZ0lRSVFPK0xDcWpDbHRiSHc0YVJGWWptakFOQmdrcWhraUc5dzBCQVFzRkFEQ0IKZ2pFTE1Ba0dBMVVFQmhNQ1JFVXhLekFwQmdOVkJBb01JbFF0VTNsemRHVnRjeUJGYm5SbGNuQnlhWE5sSUZObApjblpwWTJWeklFZHRZa2d4SHpBZEJnTlZCQXNNRmxRdFUzbHpkR1Z0Y3lCVWNuVnpkQ0JEWlc1MFpYSXhKVEFqCkJnTlZCQU1NSEZRdFZHVnNaVk5sWXlCSGJHOWlZV3hTYjI5MElFTnNZWE56SURJd0hoY05Nakl3TWpJeU1UQXcKT1RBMldoY05Nekl3TWpJeU1qTTFPVFU1V2pCZU1Rc3dDUVlEVlFRR0V3SkVSVEVuTUNVR0ExVUVDZ3dlUkdWMQpkSE5qYUdVZ1ZHVnNaV3R2YlNCVFpXTjFjbWwwZVNCSGJXSklNU1l3SkFZRFZRUUREQjFVWld4bGEyOXRJRk5sClkzVnlhWFI1SUVSV0lGSlRRU0JEUVNBeU1qQ0NBaUl3RFFZSktvWklodmNOQVFFQkJRQURnZ0lQQURDQ0Fnb0MKZ2dJQkFLWWhCY0psRUVSUldRanNSVTIzcWJJVjE5aUxOdUE2UTNQMnd2YzYrdVo0ZkNwajhyWklFdlNwZUMxUwp5YnZnU216UnZ0c3VUL2hBNG1Ib0FJaFdpalJHVHNodGljY0dhWHFGUSt6SFFwb1FHY2tOOGhRUjUwMWVIVldyCnR6YmYxZFp4UmtoUzgrYVNtUWtLa1Y2ZlpDRzRvVXVmOTVOcnRQUmVNc3RrMGVIRk5KRU1HOTRoSFVZczFwemcKb043MnhtZ2JRYTBiZkJoMy9nWUZxbHdLOWxHam16MGhndzNRRGxucnB4T3E0RUdhc09walVtSGs4ZVJCQlhkMwpCUWM3TTQrWVZDd0VoYUdRMTV4aDR5M3NRNXM0ejFBY1AxOHJueWNmdnpZamRtdzBDSXNCQ3NPbC9sai9Fb2ZXCk1zK2h3Z002cGRPZGJReWsxd1JnS3JSaUkvcDMzM1QrY3oyR1BYRWlTTHhFUUN5THE3N2Y1WTZZMkFlRnpaSTIKV0tqSnJwbTlGTzNCYytUZ29vbURWM2hOeUdoU3FyRFdvcEpyUTl0RUI1UzZKcU1OcDlyeEFBRnVYMFkwQ2pndQo1SmhENENwNmp0dTBvdU5Kd1ZidE1rWlBwVmNwSHRhc1JJdGl1ZDNva2VtTWlqRTJSbEdtM3lGbGFUK3BEQmZvCnZ3T1NNMElqY0U0eDdvUnBWRXJuSG1FYThpN3hqWk1PMjhOUkNUWndZWkdHNE1LV2luL3NkR2ovSmJzcGJSNVoKelBMZlpkOGZkUGZBZ2RZQUllUHkwOWVyTkdWUWpYQnhLenNCOWlUMGxobEE3ZUFlU256azEyc2pac0I2RG0yRQoxWDJPMmlUSXZCKzhmQlUvN21oYnYzaWFVTjdxUG1rT2NacERkNUVmTXNVSjByQkZBZ01CQUFHamdnRitNSUlCCmVqQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0hRWURWUjBPQkJZRUZQRTlPcVpjZHYyK25LZ05EU2x6RWRFYmRXY0gKTUI4R0ExVWRJd1FZTUJhQUZMOVpJRFlBZWFDZ0ltdU0xZkpoMHJnc3k0SktNQklHQTFVZEV3RUIvd1FJTUFZQgpBZjhDQVFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3SUdDQ3NHQVFVRkJ3TUJNRkFHQTFVZEh3UkpNRWN3ClJhQkRvRUdHUDJoMGRIQTZMeTluY21Oc01pNWpjbXd1ZEdWc1pYTmxZeTVrWlM5eWJDOVVMVlJsYkdWVFpXTmYKUjJ4dlltRnNVbTl2ZEY5RGJHRnpjMTh5TG1OeWJEQ0JqUVlJS3dZQkJRVUhBUUVFZ1lBd2ZqQXVCZ2dyQmdFRgpCUWN3QVlZaWFIUjBjRG92TDJkeVkyd3lMbTlqYzNBdWRHVnNaWE5sWXk1a1pTOXZZM053Y2pCTUJnZ3JCZ0VGCkJRY3dBb1pBYUhSMGNEb3ZMMmR5WTJ3eUxtTnlkQzUwWld4bGMyVmpMbVJsTDJOeWRDOVVMVlJsYkdWVFpXTmYKUjJ4dlltRnNVbTl2ZEY5RGJHRnpjMTh5TG1OeWREQVRCZ05WSFNBRUREQUtNQWdHQm1lQkRBRUNBVEFOQmdrcQpoa2lHOXcwQkFRc0ZBQU9DQVFFQWc3U1piL0t3aVJkcWFrc2hkZmFBckpkdWxqUTBiUVRoZGtaS0lpUVNBdUtWCkd2UUcxRU5BL3NPRkpzYXlnNVBicVN6cDM0eEVjMk9KY1B4NEU0YU9FRFF5dmtDOFRxWnlKeDFROU9rQ1R5aDkKSDFlVWRSaWozMWJFWVIxd2JyTXJBR0FGdmR2VUkzc25xZnpFNzJkK05aSVJSN0ZUK2tvdVgzdksyYjNCdFFLdgprT3ZaWFlzOUZSYU5rZ05KT204dGNzbFVsYTkzMkpjcTM4OXN4cXorKzlRYjZUNlpWeGxDVXZWUFFoRW1aemhoCllCMTZYUldvVFFHZ0pLMXlPV25IRk01c2Q3ZFJUclkvd1VVTXZ5bWNzelZYUWd5Y2lTVzhJTDZCZXNCdXdFNlcKY3Z4bVV1eFNqZ2krM21zR3dmWmJPaitmcnVkUTY0QksrL2ErdnpZWEJnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJRHd6Q0NBcXVnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQ0JnakVMTUFrR0ExVUVCaE1DUkVVeApLekFwQmdOVkJBb01JbFF0VTNsemRHVnRjeUJGYm5SbGNuQnlhWE5sSUZObGNuWnBZMlZ6SUVkdFlrZ3hIekFkCkJnTlZCQXNNRmxRdFUzbHpkR1Z0Y3lCVWNuVnpkQ0JEWlc1MFpYSXhKVEFqQmdOVkJBTU1IRlF0VkdWc1pWTmwKWXlCSGJHOWlZV3hTYjI5MElFTnNZWE56SURJd0hoY05NRGd4TURBeE1UQTBNREUwV2hjTk16TXhNREF4TWpNMQpPVFU1V2pDQmdqRUxNQWtHQTFVRUJoTUNSRVV4S3pBcEJnTlZCQW9NSWxRdFUzbHpkR1Z0Y3lCRmJuUmxjbkJ5CmFYTmxJRk5sY25acFkyVnpJRWR0WWtneEh6QWRCZ05WQkFzTUZsUXRVM2x6ZEdWdGN5QlVjblZ6ZENCRFpXNTAKWlhJeEpUQWpCZ05WQkFNTUhGUXRWR1ZzWlZObFl5QkhiRzlpWVd4U2IyOTBJRU5zWVhOeklESXdnZ0VpTUEwRwpDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3FYOW9iWCtoemtl
WGFYUFNpNWtmbDgyaFZZQVVkCkFxU3ptMW56SG9xdk5LMzhEY0xaU0JudWFZL0pJUHdocWdjWjdiQmNyR1hIWCswQ2ZIdDhMUnZXdXJtQXdoaUMKRm9UNlpyQUl4bFFqZ2VUTnVVay85azl1TjBnb09BL0Z2dWRvY1AwNWwwM1N4NWlSVUtyRVJMTWpmVGxINlZKaQoxaEtUWHJjeGxrSUYrM2FuSHFQMXd2enBlc1ZzcVhGUDZzdDR2R0N2eDk3MDJjdStmak9sYnBTRDhEVDZJYXZxCmpuS2dQNlRlTUZ2dmhrMXFsVnREUktnUUZSemxBVmZGbVBIbUJpaVJxaURGdDFNbVVVT3lDeEdWV09IQUQzYloKd0kxOGdmTnljSjV2L2hxTzJWODF4ckp2Tkh5K1NFL2lXam5YMkoxNG5wK0dQZ05lR1l0RW90WEhBZ01CQUFHagpRakJBTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnRUdNQjBHQTFVZERnUVdCQlMvCldTQTJBSG1nb0NKcmpOWHlZZEs0TE11Q1NqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFNUU9pWVFzZmRPaHkKTnNadCtVMmUraUtvNFlGV3o4MjduK3Fya1JrNHI2cDhGVTN6dHFPTnBmU085a1NwcCtnaGxhMCtBR0lXaVBBQwp1dnhoSStZem16QjZhelppZTYwRUk0UllaZUxiSzRybkpWTTNZbE5mdk5vQllpbWlwaWR4NWpvaWZzRnZIWlZ3CklFb0hOTi9xL3hXQTViclhldGhiZFh3RmVpbEhma0NvTVJOM3pVQTd0RkZIZWk0UjQwY1IzcDFtMEl2VlZHYjYKZzFYcWZNSXBpUnZwYjdQTzRnV0V5UzgrZUlWaWJzbGZ3WGhqZEZqQVNCZ01tVG5ycE13YXRYbGFqUldjMkJRTgo5bm9IVjhjaWd3VXRQSnNsSmowWXM2bERmTWpJcTJTUERxTy9uQnVkTU52YTBCa3Vxanp4K3pPQWR1VE5yUmxQCkJTZU9FNkZ1d2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
command: kubectl
env: null
provideClusterInfo: false