Merge branch 'development' into pipeline-adr-temp

This commit is contained in:
Stephan Lo 2024-12-19 10:33:48 +01:00
commit 9e1effa4f1
39 changed files with 1395 additions and 1 deletions

.vscode/settings.json vendored Normal file

@ -0,0 +1,3 @@
{
"peacock.remoteColor": "#61dafb"
}


@ -1,7 +1,7 @@
---
title: Engineers
weight: 2
- description: 'Our clients: People creating code and bringing it to live - and their habits and contexts'
+ description: 'Our clients: People creating code and bringing it to life - and their habits and contexts'
---


@ -40,6 +40,19 @@ Deploy and develop the famous socks shops:
* https://github.com/kezoo/nestjs-reactjs-graphql-typescript-boilerplate-example
### Telemetry Use Case with respect to the Fibonacci workload
The Fibonacci App on the cluster can be accessed on the path https://cnoe.localtest.me/fibonacci.
It can be called for example by using the URL https://cnoe.localtest.me/fibonacci?number=5000000.
The resulting resource spike can be observed on the Grafana dashboard "Kubernetes / Compute Resources / Cluster".
The resulting visualization should look similar to this:
![alt text](fibonacci-app_cpu-spike.png)
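A hypothetical way to produce such a spike from a shell (the `-k` flag is an assumption for a self-signed local certificate):
```
# fire a few expensive Fibonacci requests in parallel
for _ in $(seq 1 10); do
  curl -ks "https://cnoe.localtest.me/fibonacci?number=5000000" > /dev/null &
done
wait
```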
## When and how to use the developer framework?
### e.g. an example

Binary image files added (not shown): 154 KiB, 114 KiB, 114 KiB


@ -0,0 +1,95 @@
---
title: Stakeholder Workshop Intro
weight: 50
description: An overall eDF introduction for stakeholders
linktitle: Stakeholder Workshops
---
## Edge Developer Framework Solution Overview
> This section is derived from [conceptual-onboarding-intro](../conceptual-onboarding/1_intro/)
1. As presented in the introduction: We have the ['Edge Developer Framework'](./edgel-developer-framework/). \
In short the mission is:
* Build a European edge cloud IPCEI-CIS
* which contains the typical layers: infrastructure, platform, application
* and on top has a new layer 'developer platform'
* which delivers a **cutting edge developer experience** and enables **easy deploying** of applications onto the IPCEI-CIS
2. We think the solution for EDF is related to ['Platforming' (Digital Platforms)](../conceptual-onboarding/3_platforming/)
1. The next evolution after DevOps
2. Gartner predicts that 80% of software engineering organizations will have platform teams by 2026
3. Platforms have a history going back to roughly 2019
4. CNCF has a working group which created capabilities and a maturity model
3. Platforms evolve - nowadays there are [Platform Orchestrators](../conceptual-onboarding/4_orchestrators/)
1. Humanitec set up a Reference Architecture
2. There is this 'Orchestrator' thing - declaratively describe, customize and change platforms!
4. Mapping our assumptions to the [CNOE solution](../conceptual-onboarding/5_cnoe/)
1. CNOE is a hot candidate to help fulfill our platform building
2. CNOE aims to embrace change and customization!
## 2. Platforming as the result of DevOps
### DevOps since 2010
![alt text](DevOps-Lifecycle.jpg)
* from 'left' to 'right' - plan to monitor
* 'left shift'
* --> turns out to be a right shift for developers, with cognitive overload
* 'DevOps is dead' -> we need Platforms
### Platforming to provide 'golden paths'
> don't mix up 'golden paths' with pipelines or CI/CD
![alt text](../conceptual-onboarding/3_platforming/humanitec-history.png)
#### Short list of platform using companies
As [Gartner states](https://www.gartner.com/en/newsroom/press-releases/2023-11-28-gartner-hype-cycle-shows-ai-practices-and-platform-engineering-will-reach-mainstream-adoption-in-software-engineering-in-two-to-five-years): "By 2026, 80% of software engineering organizations will establish platform teams as internal providers of reusable services, components and tools for application delivery."
Here is a small list of companies already using IDPs:
* Spotify
* Airbnb
* Zalando
* Uber
* Netflix
* Salesforce
* Google
* Booking.com
* Amazon
* Autodesk
* Adobe
* Cisco
* ...
## 3 Platform building by 'Orchestrating'
So the goal of platforming is to build a 'digital platform' which fits [this architecture](https://www.gartner.com/en/infrastructure-and-it-operations-leaders/topics/platform-engineering) ([Ref. in German](https://www.gartner.de/de/artikel/was-ist-platform-engineering)):
![alt text](image.png)
### Digital Platform blueprint: Reference Architecture
The blueprint for such a platform is given by the reference architecture from Humanitec:
[Platform Orchestrators](../conceptual-onboarding/4_orchestrators/)
### Digital Platform builder: CNOE
Since 2023 this is done by 'orchestrating' such platforms. One orchestrator is the [CNOE solution](../conceptual-onboarding/5_cnoe/), which highly inspired our approach.
In our orchestration engine we think in 'stacks' of 'packages' containing platform components.
## 4 Sticking it all together: our current orchestrator-generated platform
Putting together the platform orchestration concept, the reference architecture and the CNOE stack solution, [this is our current running platform minimum viable product](../plan-in-2024/image-2024-8-14_10-50-27.png).
This will now be presented! Enjoy!

Binary image files added (not shown): 212 KiB, 96 KiB, 264 KiB, 295 KiB


@ -0,0 +1,15 @@
---
title: PoC Structure
weight: 5
description: Building plan of the PoC milestone (end 2024) output
---
Presented and approved on Tuesday, 26.11.2024 within the team:
![alt text](./_assets/image.png)
The use cases / application lifecycle and the deployment flow are drawn here: https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024
![alt text](./_assets/image-1.png)

Binary image files added (not shown): 376 KiB, 218 KiB, 652 KiB, 726 KiB, 888 KiB, 522 KiB, 256 KiB, 624 KiB, 166 KiB


@ -0,0 +1,139 @@
---
title: Team and Work Structure
weight: 50
description: The way we work and produce runnable, presentable software
linkTitle: Team-Process
---
This document describes a proposal to set up a team work structure to primarily get the POC successfully delivered. Later on we will adjust and refine the process to fit for the MVP.
## Introduction
### Rationale
We currently face the following [challenges in our process](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024):
1. missing team alignment on PoC output across all components
    1. Action: the team is committed to **clearly defined PoC capabilities**
    1. Action: each team member is aware of the **individual and common work** to be done (backlog) to achieve the PoC
1. missing concept for the repository (process, structure, ...)
    1. Action: the **PoC has a robust repository concept** up & running
    1. Action: the repo concept is applicable to other repositories as well (esp. the documentation repo)
### General working context
A **project goal** drives us as a **team** to create valuable **product output**.
The **backlog** contains the product specification which instructs us to work in **tasks** with the help and usage of **resources** (like git, 3rd-party code, knowledge, and so on).
![alt text](./_assets/P1.png)
Goal, Backlog, Tasks and Output must be in a well-defined context, such that the team can be productive.
### POC and MVP working context
This document has two targets: POC and MVP.
Today is mid-November 2024 and we need to package our project results created since July 2024 to deliver the POC product.
![alt text](./_assets/P2.png)
> Think of the agenda's goal like this: Imagine Ralf the big sponsor passes by and sees 'edge Developer Framework' somewhere on your screen. Then he asks: 'Hey cool, you are one of these famous platform guys?! I always wanted to get a demo how this framework looks like!' \
> **What are you going to show him?**
## Team and Work Structure (POC first, MVP later)
In the following we will look at the work structure proposal, primarily for the POC, but reusable for any other release or the MVP.
### Consolidated POC (or any release later)
![alt text](./_assets/P3.png)
#### Responsibilities to reliably specify the deliverables
![alt text](./_assets/P4.png)
#### Todos
1. SHOULD: Clarify context (arch, team, leads)
1. MUST: Define Deliverables (arch, team) (Hint: deliverables could be seen 1:1 as use cases - not sure about that right now)
1. MUST: Define Output structure (arch, leads)
### Process (General): from deliverables to output (POC first, MVP later)
Most important in the process are:
* **traces** from tickets to outputs (as the key to understanding and controlling what is where)
* **README.md** (as the key to how to use the output)
![alt text](./_assets/P5.png)
### Output Structure POC
Most important in the POC structure are:
* one repo which is the product
* a README which maps project goals to the repo content
* the content consists of capabilities
* capabilities are shown ('proven') by use cases
* the use cases are described in the deliverables
![alt text](./_assets/P6.png)
#### Glossary
* README: user manual and storybook
* Outcome: like resolution, but more verbose and detailed (especially when the resolution was 'Done'), so that state changes are easily recognisable
### Work Structure Guidelines (POC first, MVP later)
#### Structure
1. each task and/or user story has at least a branch in an existing repo or a new, dedicated task repo
> recommended: multi-repo over monorepo
1. each repo has a main and a development branch. development is the integration line
1. pull requests are used to merge work outputs to the integration line
1. optional (may be too cumbersome): each PR should be reflected as a comment in jira
#### Workflow (in any task / user story)
1. when the output comes in its own repo: `git init` --> always create a new repo as fast as possible
1. commit early and often
1. comment on output and outcome whenever new work is done. this could typically correlate to a pull request, see above
#### Definition of Done
1. Jira: there is a final comment summarizing the outcome (in a bit more verbose form than just the 'resolution' of the ticket) and the main outputs. This may typically be a link to the commit and/or pull request of the final repo state
2. Git/Repo: there is a README.md in the root of the repo. It summarizes in a typical GitHub manner how to use the repo, so that it does what it is intended to do and reveals all the bells and whistles of the repo to the consumer. If the README doesn't lead to usable and recognizable added value, the work is not done!
#### Review
1. Before a ticket gets finished (not defined yet which jira-state this is) there must be a review by a second team member
1. the reviewing person may review whatever they want, but must at least check the README
#### Out of scope (for now)
The following topics are optional and do not need an agreement at the moment:
1. Commit message syntax
> Recommendation: at least 'WiP' would be good if the state is experimental
1. branch permissions
1. branch clean up policies
1. squashing when merging into the integration line
1. CI
1. Tech blogs / gists
1. Changelogs
#### Integration of Jira with Forgejo (compare to https://github.com/atlassian/github-for-jira)
1. Jira -> Forgejo: Create Branch
1. Forgejo -> Jira:
1. commit
2. PR
## Status of POC Capabilities
The following table lists an analysis of the status of the ['Functionality validation' of the POC](https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024).
Assumption: These functionalities should be the aforementioned capabilities.
![alt text](./_assets/P8.png)


@ -0,0 +1,7 @@
---
title: Design
weight: 1
description: Edge Developer Framework Design Documents
---
This design documentation structure is inspired by the [design of crossplane](https://github.com/crossplane/crossplane/tree/main/design#readme).


@ -0,0 +1,31 @@
---
title: eDF is self-contained and has its own IAM (WiP)
weight: 2
description: tbd
---
* Type: Proposal
* Owner: Stephan Lo (stephan.lo@telekom.de)
* Reviewers: EDF Architects
* Status: Speculative, revision 0.1
## Background
tbd
## Proposal
### 1
There is a core eDF which is self-contained and does not have any implemented dependency on external platforms.
eDF depends on abstractions.
Each embedding into customer infrastructure works with adapters which implement the abstraction.
### 2
eDF has its own IAM. This may either hold the principals and permissions itself when there is no other IAM, or proxy and map them when integrated into external enterprise IAMs.
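As a purely hypothetical illustration of the abstraction/adapter idea (none of these names exist yet), the eDF core would program against an IAM interface and never against a concrete product:
```
package iam // hypothetical sketch, no such package exists yet

import "context"

// Principal is whatever the platform authenticates (user, service, ...).
type Principal struct {
	ID    string
	Roles []string
}

// IAM is the abstraction the self-contained eDF core depends on.
type IAM interface {
	Authenticate(ctx context.Context, credential string) (Principal, error)
	Authorize(ctx context.Context, p Principal, action string) (bool, error)
}

// builtinIAM holds principals and permissions itself (no external IAM present).
type builtinIAM struct{ principals map[string]Principal }

// enterpriseAdapter proxies and maps calls to an external enterprise IAM.
type enterpriseAdapter struct{ baseURL string }
```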
## Reference
Arch call from 4.12.24, Florian, Stefan, Stephan-Pierre


@ -0,0 +1,23 @@
---
title: Agnostic EDF Deployment
weight: 2
description: The implementation of EDF must be kubernetes provider agnostic
---
* Type: Proposal
* Owner: Stephan Lo (stephan.lo@telekom.de)
* Reviewers: EDF Architects
* Status: Speculative, revision 0.1
## Background
EDF is running as a control plane - or let's say an orchestration plane, the correct wording is still to be defined - in a kubernetes cluster.
Right now we have at least ArgoCD as controller of manifests which we provide as CNOE stacks of packages and standalone packages.
## Proposal
The implementation of EDF must be kubernetes provider agnostic. Thus each provider specific deployment dependency must be factored out into provider specific definitions or deployment procedures.
## Local deployment
This implies that EDF must always be deployable into a local cluster, where by 'local' we mean a cluster which is under the full control of the platform engineer, e.g. a kind cluster on their laptop.


@ -0,0 +1,28 @@
---
title: Agnostic Stack Definition
weight: 2
description: The implementation of EDF stacks must be kubernetes provider agnostic by a templating/hydration mechanism
---
* Type: Proposal
* Owner: Stephan Lo (stephan.lo@telekom.de)
* Reviewers: EDF Architects
* Status: Speculative, revision 0.1
## Background
When booting and reconciling, the 'final' stack-executing orchestrator (here: ArgoCD) needs to get rendered (or hydrated) representations of the manifests.
It is not possible, or not wanted, for the orchestrator itself to resolve dependencies or configuration values.
## Proposal
The hydration takes place for all target clouds/kubernetes providers. There is no 'default' or 'special' setup, like the Kind version.
## Local development
This implies that in a development process there needs to be a build step hydrating the ArgoCD manifests for the targeted cloud.
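One hypothetical shape of such a build step, assuming Helm-based stack packages (the actual tooling is not decided by this proposal; paths and names are illustrative):
```
# render ("hydrate") the stack manifests for a concrete target, e.g. kind,
# into the repository location that ArgoCD watches
helm template core-stack ./stacks/core \
  --values ./values/kind.yaml \
  --output-dir ./rendered/kind
```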
## Reference
Discussion between Robert and Stephan-Pierre in the context of stack development - there should be an easy way to have locally changed stacks propagated into the locally running platform.


@ -0,0 +1,4 @@
---
title: Crossplane
description: Crossplane is a tool to provision cloud resources. It can act as a backend for platform orchestrators as well
---


@ -0,0 +1,764 @@
---
title: Howto develop a crossplane kind provider
weight: 1
description: A provider-kind allows using crossplane locally
---
To support local development and usage of crossplane compositions, a crossplane provider is needed.
Every big hyperscaler already has support in crossplane (e.g. provider-gcp and provider-aws).
Each provider has two main parts, the provider config and implementations of the cloud resources.
The provider config takes the credentials to log into the cloud provider and provides a token
(e.g. a kube config or even a service account) that the implementations can use to provision cloud resources.
The implementations of the cloud resources reflect each type of cloud resource, typical resources are:
- S3 Bucket
- Nodepool
- VPC
- GkeCluster
## Architecture of provider-kind
To have the crossplane concepts applied, the provider-kind consists of two components: kindserver and provider-kind.
The kindserver is used to manage local kind clusters. It provides an HTTP REST interface to create, delete and get information about a running cluster, using an Authorization HTTP header field as a password:
![kindserver_interface](./kindserver_interface.png)
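A hypothetical interaction with that interface, assuming the endpoint and example password used later in this howto (whether POST takes the kind config as request body is an assumption as well):
```
# query, create and delete a cluster; -k because the kindserver
# certificate is typically self-signed in a local setup
curl -k -H "Authorization: 12345" https://172.18.0.1:7443/api/v1/kindserver/example-kind-cluster
curl -k -X POST -H "Authorization: 12345" --data-binary @kind-config.yaml \
  https://172.18.0.1:7443/api/v1/kindserver/example-kind-cluster
curl -k -X DELETE -H "Authorization: 12345" https://172.18.0.1:7443/api/v1/kindserver/example-kind-cluster
```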
The two properties to connect the provider-kind to kindserver are the IP address and password of kindserver. The IP address is required because the kindserver needs to be executed outside the kind cluster, directly on the local machine, as it needs to control
kind itself:
![kindserver_provider-kind](./kindserver_provider-kind.png)
The provider-kind provides two crossplane elements, the `ProviderConfig` and `KindCluster` as the (only) cloud resource. The
`ProviderConfig` is configured with the IP address and password of the running kindserver. The `KindCluster` type is configured
to use the provided `ProviderConfig`. Kind clusters can be managed by adding and removing kubernetes manifests of type
`KindCluster`. The crossplane reconciliation loop makes use of the kindserver HTTP GET method to see if a new cluster needs to be
created by HTTP POST or removed by HTTP DELETE.
The password used by `ProviderConfig` is configured as a kubernetes secret, while the kindserver IP address is configured
inside the `ProviderConfig` as the field endpoint.
When provider-kind has created a new cluster by processing a `KindCluster` manifest, the two providers which are used to deploy applications, provider-helm and provider-kubernetes, can be configured to use the `KindCluster`.
![provider-kind_providerconfig](./provider-kind_providerconfig.png)
A Crossplane composition can be created by combining different providers and their objects. A composition is managed as a
custom resource definition and defined in a single file.
![composition](./composition.png)
## Configuration
Two kubernetes manifests are defined by provider-kind: `ProviderConfig` and `KindCluster`. The third needed kubernetes
object is a secret.
The need for the following inputs arises when developing a provider-kind:
- kindserver password as a kubernetes secret
- endpoint, the IP address of the kindserver as a detail of `ProviderConfig`
- kindConfig, the kind configuration file as a detail of `KindCluster`
The following outputs arise:
- kubernetesVersion, kubernetes version of a created kind cluster as a detail of `KindCluster`
- internalIP, IP address of a created kind cluster as a detail of `KindCluster`
- readiness as a detail of `KindCluster`
- kube config of a created kind cluster as a kubernetes secret reference of `KindCluster`
### Inputs
#### kindserver password
The kindserver password needs to be defined first. It is realized as a kubernetes secret and contains the password
which the kindserver has been configured with:
```
apiVersion: v1
data:
credentials: MTIzNDU=
kind: Secret
metadata:
name: kind-provider-secret
namespace: crossplane-system
type: Opaque
```
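Equivalently, such a secret can be created on the command line; `kubectl` base64-encodes the literal itself (`MTIzNDU=` decodes to `12345`):
```
# create the kindserver password secret from the plain-text password
kubectl -n crossplane-system create secret generic kind-provider-secret \
  --from-literal=credentials=12345
```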
#### endpoint
The IP address of the kindserver `endpoint` is configured in the provider-kind `ProviderConfig`. This config also references the kindserver password (`kind-provider-secret`):
```
apiVersion: kind.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
name: kind-provider-config
spec:
credentials:
source: Secret
secretRef:
namespace: crossplane-system
name: kind-provider-secret
key: credentials
endpoint:
url: https://172.18.0.1:7443/api/v1/kindserver
```
It is suggested that the kindserver runs on the IP of the docker host, so that all kind clusters can access it without extra routing.
#### kindConfig
The kind config is provided as the field `kindConfig` in each `KindCluster` manifest. The manifest also references the provider-kind `ProviderConfig` (`kind-provider-config` in the `providerConfigRef` field):
```
apiVersion: container.kind.crossplane.io/v1alpha1
kind: KindCluster
metadata:
name: example-kind-cluster
spec:
forProvider:
kindConfig: |
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"]
endpoint = ["https://gitea.cnoe.localtest.me"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
insecure_skip_verify = true
providerConfigRef:
name: kind-provider-config
writeConnectionSecretToRef:
namespace: default
name: kind-connection-secret
```
After the kind cluster has been created, its kube config is stored in a kubernetes secret `kind-connection-secret` which `writeConnectionSecretToRef` references.
### Outputs
The three outputs can be received by getting the `KindCluster` manifest after the cluster has been created. The `KindCluster` is
available for reading even before the cluster has been created, but the three output fields are empty until then. The ready state
will also switch from `false` to `true` after the cluster has finally been created.
#### kubernetesVersion, internalIP and readiness
These fields can be retrieved with a standard kubectl get command:
```
$ kubectl get kindclusters kindcluster-fw252 -o yaml
...
status:
atProvider:
internalIP: 192.168.199.19
kubernetesVersion: v1.31.0
conditions:
- lastTransitionTime: "2024-11-12T18:22:39Z"
reason: Available
status: "True"
type: Ready
- lastTransitionTime: "2024-11-12T18:21:38Z"
reason: ReconcileSuccess
status: "True"
type: Synced
```
#### kube config
The kube config is stored in a kubernetes secret (`kind-connection-secret`) which can be accessed after the cluster has been
created:
```
$ kubectl get kindclusters kindcluster-fw252 -o yaml
...
writeConnectionSecretToRef:
name: kind-connection-secret
namespace: default
...
$ kubectl get secret kind-connection-secret
NAME TYPE DATA AGE
kind-connection-secret connection.crossplane.io/v1alpha1 2 107m
```
The API endpoint of the new cluster (`endpoint`) and its kube config (`kubeconfig`) are stored in that secret. These values are set in
the Observe function of the kind controller of provider-kind, via the special crossplane structure `managed.ExternalObservation`.
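To use the new cluster directly, the kube config can, for example, be extracted from that secret and handed to kubectl:
```
# the data key "kubeconfig" is the one written via the connection details
kubectl -n default get secret kind-connection-secret \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > kind.kubeconfig
kubectl --kubeconfig kind.kubeconfig get nodes
```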
## The reconciler loop of a crossplane provider
The reconciler loop is the heart of every crossplane provider. As it is asynchronously coupled, it's best to describe how it works in words:
Internally, the `Connect` function gets triggered first in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`,
to set up the provider and configure it with the kindserver password and the IP address of the kindserver.
After the provider-kind has been configured with the kindserver secret and its `ProviderConfig`, the provider is ready to
be activated by applying a `KindCluster` manifest to kubernetes.
When the user applies a new `KindCluster` manifest, an observe loop is started. The provider regularly triggers the `Observe`
function of the controller. As nothing has been created yet, the controller will return
`managed.ExternalObservation{ResourceExists: false}` to signal that the kind cluster resource has not been created yet.
As there is a kindserver SDK available, the controller uses the `Get` function of the SDK to query the kindserver.
The `KindCluster` is already applied and can be retrieved with `kubectl get kindclusters`. As the cluster has not been
created yet, its readiness state is `false`.
In parallel, the `Create` function is triggered in the controller. This function has access to the desired kind config
`cr.Spec.ForProvider.KindConfig` and the name of the kind cluster `cr.ObjectMeta.Name`. It can now call the kindserver SDK to
create a new cluster with the given config and name. The create function is not supposed to run for too long, therefore
it returns directly in the case of provider-kind. The kindserver already knows the name of the new cluster and, even though it is
not yet ready, it will respond with a partial success.
The observe loop is triggered regularly in parallel. It will be triggered after the create call but before the kind cluster has been
created. Now it gets a step further: it gets the information from the kindserver that the cluster is already known, but not
finished creating yet.
After the cluster has finished creating, the kindserver has all important information for the provider-kind, that is,
the API server endpoint of the new cluster and its kube config. After another round of the observe loop, the controller
now gets the full set of information about the kind cluster (cluster ready, its API server endpoint and its kube config).
When this information has been received from the kindserver SDK in the form of a JSON response, the controller is able to signal successful
creation of the cluster. That is done by returning the following structure from inside the observe function:
```
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ConnectionDetails: managed.ConnectionDetails{
xpv1.ResourceCredentialsSecretEndpointKey: []byte(clusterInfo.Endpoint),
xpv1.ResourceCredentialsSecretKubeconfigKey: []byte(clusterInfo.KubeConfig),
},
}, nil
```
Note that the managed.ConnectionDetails will automatically write the API server endpoint and its kube config to the kubernetes
secret which the `writeConnectionSecretToRef` of `KindCluster` points to.
It also sets the availability flag before returning, which will mark the `KindCluster` as ready:
```
cr.Status.SetConditions(xpv1.Available())
```
Before returning, it also sets the information which is transferred into fields of `KindCluster` and can be retrieved by a
`kubectl get`: the `kubernetesVersion` and the `internalIP` fields:
```
cr.Status.AtProvider.KubernetesVersion = clusterInfo.K8sVersion
cr.Status.AtProvider.InternalIP = clusterInfo.NodeIp
```
Now the `KindCluster` is set up completely, and when its data is retrieved by `kubectl get`, all data is available and its readiness
is set to `true`.
The observe loop continues to be called to enable drift detection. That detection is currently not implemented, but is
prepared for future implementations. If the observe function detected that the kind cluster with a given name is set
up with a kind config other than the desired one, the controller would call its `Update` function, which would
delete the currently running kind cluster and recreate it with the desired kind config.
When the user deletes the `KindCluster` manifest at a later stage, the `Delete` function of the controller is triggered
to call the kindserver SDK to delete the cluster with the given name. The observe loop will acknowledge that the cluster
has been deleted successfully by retrieving `kind cluster not found`. If not, the controller
will trigger the delete function in a loop as well, until the kind cluster has been deleted.
That completes the reconciler loop.
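Condensed into code, the `Observe` decision logic described above could look like the following sketch. The crossplane-runtime types and secret keys are the real ones from the snippets above; the kindserver SDK (`kindClient`, `ClusterInfo`, `errNotFound`) is an assumed stand-in for the actual sources:
```
// Sketch only: the kindserver SDK types below are assumptions.
package kindcluster

import (
	"context"
	"errors"

	xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
	"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
)

// ClusterInfo is the assumed shape of the kindserver SDK's JSON answer.
type ClusterInfo struct {
	Ready      bool
	Endpoint   string
	KubeConfig string
	K8sVersion string
	NodeIp     string
}

// kindClient is the assumed kindserver SDK (the HTTP GET wrapped in Go).
type kindClient interface {
	Get(ctx context.Context, name string) (*ClusterInfo, error)
}

var errNotFound = errors.New("kind cluster not found")

// observeKind condenses the Observe decision logic described above.
func observeKind(ctx context.Context, kc kindClient, name string) (managed.ExternalObservation, error) {
	info, err := kc.Get(ctx, name)
	switch {
	case errors.Is(err, errNotFound):
		// Nothing created yet (or deletion finished): this triggers Create.
		return managed.ExternalObservation{ResourceExists: false}, nil
	case err != nil:
		return managed.ExternalObservation{}, err
	case !info.Ready:
		// Cluster is known to the kindserver but still being created:
		// report it as existing so the observe loop just keeps polling
		// (one possible handling; readiness stays false).
		return managed.ExternalObservation{ResourceExists: true, ResourceUpToDate: true}, nil
	}

	// Cluster is up. In the real controller this is also where
	// cr.Status.AtProvider.* and cr.Status.SetConditions(xpv1.Available())
	// are set, as shown in the snippets above.
	return managed.ExternalObservation{
		ResourceExists:   true,
		ResourceUpToDate: true,
		ConnectionDetails: managed.ConnectionDetails{
			xpv1.ResourceCredentialsSecretEndpointKey:   []byte(info.Endpoint),
			xpv1.ResourceCredentialsSecretKubeconfigKey: []byte(info.KubeConfig),
		},
	}, nil
}
```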
## kind API server IP address
Each newly created kind cluster has a practically random kubernetes API server endpoint. As the IP address of a new kind cluster
can't be determined before creation, the kindserver manages the API server field of the kind config. It maps all
kind cluster kubernetes API endpoints onto its own IP address, but on different ports (for example one cluster on port 6443 and the next on 6444). That guarantees that all kind
clusters can access the kubernetes API endpoints of all other kind clusters by using the docker host IP of the kindserver
itself. This is needed because the kube config hardcodes the kubernetes API server endpoint. By using the docker host IP,
but with different ports, every usage of a kube config from one kind cluster to another works successfully.
The management of the kind config in the kindserver is implemented in the `Post` function of the kindserver `main.go` file.
## Create the crossplane provider-kind
The official way to create crossplane providers is to use the provider-template. Follow these steps to create
a new provider.
First, clone the provider-template. The commit ID at the time this howto was written was 2e0b022c22eb50a8f32de2e09e832f17161d7596.
Rename the new folder after cloning.
```
git clone https://github.com/crossplane/provider-template.git
mv provider-template provider-kind
cd provider-kind/
```
The information in the provided README.md is incomplete. Follow these steps to get it running:
> Please use bash for the next commands (`${type,,}` e.g. is not a mistake)
```
make submodules
export provider_name=Kind # Camel case, e.g. GitHub
make provider.prepare provider=${provider_name}
export group=container # lower case e.g. core, cache, database, storage, etc.
export type=KindCluster # Camel case, e.g. Bucket, Database, CacheCluster, etc.
make provider.addtype provider=${provider_name} group=${group} kind=${type}
sed -i "s/sample/${group}/g" apis/${provider_name,,}.go
sed -i "s/mytype/${type,,}/g" internal/controller/${provider_name,,}.go
```
Patch the Makefile:
```
dev: $(KIND) $(KUBECTL)
@$(INFO) Creating kind cluster
+ @$(KIND) delete cluster --name=$(PROJECT_NAME)-dev
@$(KIND) create cluster --name=$(PROJECT_NAME)-dev
@$(KUBECTL) cluster-info --context kind-$(PROJECT_NAME)-dev
- @$(INFO) Installing Crossplane CRDs
- @$(KUBECTL) apply --server-side -k https://github.com/crossplane/crossplane//cluster?ref=master
+ @$(INFO) Installing Crossplane
+ @helm install crossplane --namespace crossplane-system --create-namespace crossplane-stable/crossplane --wait
@$(INFO) Installing Provider Template CRDs
@$(KUBECTL) apply -R -f package/crds
@$(INFO) Starting Provider Template controllers
```
Generate, build and execute the new provider-kind:
```
make generate
make build
make dev
```
Now it's time to add the required fields (internalIP, endpoint, etc.) to the spec fields in the Go API sources found in:
- apis/container/v1alpha1/kindcluster_types.go
- apis/v1alpha1/providerconfig_types.go
The file `apis/kind.go` may also be modified. The word `sample` can be replaced with `container` in our case.
When that's done, the yaml specifications need to be modified to also include the required fields (internalIP, endpoint, etc.).
Next, a kindserver SDK can be implemented. That is a helper class which encapsulates the get, create and delete HTTP calls to the kindserver. Connection info (kindserver IP address and password) is stored by the constructor.
After that, we can add the usage of the kindserver SDK in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`.
Finally, we can update the `Makefile` to better handle the primary kind cluster creation and add a cluster role binding
so that crossplane can access the `KindCluster` objects. Examples and updating the README.md will finish the development.
All these steps are documented in: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/provider-kind/pulls/1
## Publish the provider-kind to a user-defined docker registry
Every provider-kind release needs to be tagged first in the git repository:
```
git tag v0.1.0
git push origin v0.1.0
```
Next, make sure you are logged in to the target registry with docker:
```
docker login forgejo.edf-bootstrap.cx.fg1.ffm.osc.live
```
Now it's time to specify the target registry, build the provider-kind for ARM64 and AMD64 CPU architectures and publish it to the target registry:
```
XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
```
The parameter `BRANCH_NAME=main` is needed when the tagging and publishing happens from another branch. The version of the provider-kind is that of the tag name. The output of the make call then ends like this:
```
$ XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
...
14:09:19 [ .. ] Skipping image publish for docker.io/provider-kind:v0.1.0
Publish is deferred to xpkg machinery
14:09:19 [ OK ] Image publish skipped for docker.io/provider-kind:v0.1.0
14:09:19 [ .. ] Pushing package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
xpkg pushed to forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
14:10:19 [ OK ] Pushed package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
```
After publishing, the provider-kind can be installed in-cluster similarly to other providers like
provider-helm and provider-kubernetes. To install it, apply the following manifest:
```
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-kind
spec:
package: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
```
The output of `kubectl get providers`:
```
$ kubectl get providers
NAME INSTALLED HEALTHY PACKAGE AGE
provider-helm True True xpkg.upbound.io/crossplane-contrib/provider-helm:v0.19.0 38m
provider-kind True True forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0 39m
provider-kubernetes True True xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.15.0 38m
```
The provider-kind can now be used.
## Crossplane Composition `edfbuilder`
Together with the implemented provider-kind and its config, it is now possible to create a composition which can create kind clusters and
deploy helm and kubernetes objects into the newly created cluster.
A composition is realized as a custom resource definition (CRD) consisting of three parts:
- A definition
- A composition
- One or more deployments of the composition
### definition.yaml
The definition of the CRD will most probably contain one additional field, the ArgoCD repository URL, to easily select
the stacks which should be deployed:
```
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: edfbuilders.edfbuilder.crossplane.io
spec:
connectionSecretKeys:
- kubeconfig
group: edfbuilder.crossplane.io
names:
kind: EDFBuilder
listKind: EDFBuilderList
plural: edfbuilders
singular: edfbuilders
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
description: A EDFBuilder is a composite resource that represents a K8S Cluster with edfbuilder Installed
type: object
properties:
spec:
type: object
properties:
repoURL:
type: string
description: URL to ArgoCD stack of stacks repo
required:
- repoURL
```
### composition.yaml
This is a shortened version of the file `examples/composition_deprecated/composition.yaml`. It combines a `KindCluster` with
deployments of provider-helm and provider-kubernetes. Note that the `ProviderConfig` and the kindserver secret have already been
applied to kubernetes (by the Makefile) before applying this composition.
```
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: edfbuilders.edfbuilder.crossplane.io
spec:
writeConnectionSecretsToNamespace: crossplane-system
compositeTypeRef:
apiVersion: edfbuilder.crossplane.io/v1alpha1
kind: EDFBuilder
resources:
### kindcluster
- base:
apiVersion: container.kind.crossplane.io/v1alpha1
kind: KindCluster
metadata:
name: example
spec:
forProvider:
kindConfig: |
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"]
endpoint = ["https://gitea.cnoe.localtest.me"]
[plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
insecure_skip_verify = true
providerConfigRef:
name: example-provider-config
writeConnectionSecretToRef:
namespace: default
name: my-connection-secret
### helm provider config
- base:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
spec:
credentials:
source: Secret
secretRef:
namespace: default
name: my-connection-secret
key: kubeconfig
patches:
- fromFieldPath: metadata.name
toFieldPath: metadata.name
readinessChecks:
- type: None
### ingress-nginx
- base:
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
annotations:
crossplane.io/external-name: ingress-nginx
spec:
rollbackLimit: 99999
forProvider:
chart:
name: ingress-nginx
repository: https://kubernetes.github.io/ingress-nginx
version: 4.11.3
namespace: ingress-nginx
values:
controller:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
hostPort:
enabled: true
terminationGracePeriodSeconds: 0
service:
type: NodePort
watchIngressWithoutClass: true
nodeSelector:
ingress-ready: "true"
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Equal"
effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
operator: "Equal"
effect: "NoSchedule"
publishService:
enabled: false
extraArgs:
publish-status-address: localhost
# added for idpbuilder
enable-ssl-passthrough: ""
# added for idpbuilder
allowSnippetAnnotations: true
# added for idpbuilder
config:
proxy-buffer-size: 32k
use-forwarded-headers: "true"
patches:
- fromFieldPath: metadata.name
toFieldPath: spec.providerConfigRef.name
### kubernetes provider config
- base:
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
spec:
credentials:
source: Secret
secretRef:
namespace: default
name: my-connection-secret
key: kubeconfig
patches:
- fromFieldPath: metadata.name
toFieldPath: metadata.name
readinessChecks:
- type: None
### kubernetes argocd stack of stacks application
- base:
apiVersion: kubernetes.crossplane.io/v1alpha2
kind: Object
spec:
forProvider:
manifest:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: edfbuilder
namespace: argocd
labels:
env: dev
spec:
destination:
name: in-cluster
namespace: argocd
source:
path: registry
repoURL: 'https://gitea.cnoe.localtest.me/giteaAdmin/edfbuilder-shoot'
targetRevision: HEAD
project: default
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
patches:
- fromFieldPath: metadata.name
toFieldPath: spec.providerConfigRef.name
```
## Usage
Set these values to allow many kind clusters to run in parallel, if needed:
```
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```
To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:
```
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```
Start provider-kind:
```
make build
kind delete clusters $(kind get clusters)
kind create cluster --name=provider-kind-dev
DOCKER_HOST_IP="$(docker inspect $(docker ps | grep kindest | awk '{ print $1 }' | head -n1) | jq -r .[0].NetworkSettings.Networks.kind.Gateway)" make dev
```
Wait until the debug output of the provider-kind is shown:
```
...
namespace/crossplane-system configured
secret/example-provider-secret created
providerconfig.kind.crossplane.io/example-provider-config created
14:49:50 [ .. ] Starting Provider Kind controllers
2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Starting metrics server
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfig"}
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfigUsage"}
2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig"}
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "source": "kind source: *v1alpha1.KindCluster"}
2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster"}
2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8080", "secure": false}
2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "worker count": 10}
2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}}
2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "worker count": 10}
2024-11-12T14:49:54+01:00 INFO KubeAPIWarningLogger metadata.finalizers: "in-use.crossplane.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}}
```
Start the kindserver:
see kindserver/README.md
When the kindserver is started:
```
cd examples/composition_deprecated
kubectl apply -f definition.yaml
kubectl apply -f composition.yaml
kubectl apply -f cluster.yaml
```
List the created elements, wait until the new cluster is created, then switch back to the primary cluster:
```
kubectl config use-context kind-provider-kind-dev
```
Show edfbuilder compositions:
```
kubectl get edfbuilders
NAME SYNCED READY COMPOSITION AGE
kindcluster True True edfbuilders.edfbuilder.crossplane.io 4m45s
```
Show kind clusters:
```
kubectl get kindclusters
NAME READY SYNCED EXTERNAL-NAME INTERNALIP VERSION AGE
kindcluster-wlxrt True True kindcluster-wlxrt 192.168.199.19 v1.31.0 5m12s
```
Show helm deployments:
```
kubectl get releases
NAME CHART VERSION SYNCED READY STATE REVISION DESCRIPTION AGE
kindcluster-29dgf ingress-nginx 4.11.3 True True deployed 1 Install complete 5m32s
kindcluster-w2dxl forgejo 10.0.2 True True deployed 1 Install complete 5m32s
kindcluster-x8x9k argo-cd 7.6.12 True True deployed 1 Install complete 5m32s
```
Show kubernetes objects:
```
kubectl get objects
NAME KIND PROVIDERCONFIG SYNCED READY AGE
kindcluster-8tbv8 ConfigMap kindcluster True True 5m50s
kindcluster-9lwc9 ConfigMap kindcluster True True 5m50s
kindcluster-9sgmd Deployment kindcluster True True 5m50s
kindcluster-ct2h7 Application kindcluster True True 5m50s
kindcluster-s5knq ConfigMap kindcluster True True 5m50s
```
Open the composition in VS Code: examples/composition_deprecated/composition.yaml
## What is missing
Currently missing is the third and final part, the imperative steps which need to be processed:
- creation of TLS certificates and giteaAdmin password
- creation of a Forgejo repository for the stacks
- uploading the stacks in the Forgejo repository
Connecting the definition field (the ArgoCD repo URL) and the composition interconnects (function-patch-and-transform) is also missing.


@ -0,0 +1,72 @@
<mxfile host="65bd71144e">
<diagram id="IShv2I7JLD2IyEDAFXRT" name="Page-1">
<mxGraphModel dx="813" dy="535" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="19" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="20" width="300" height="520" as="geometry"/>
</mxCell>
<mxCell id="2" value="provider-kind&lt;br&gt;&lt;b&gt;Secret&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="80" y="80" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="14" style="edgeStyle=none;html=1;entryX=1;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="3" target="2">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="3" value="provider-kind&lt;br&gt;&lt;b&gt;ProviderConfig&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="80" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="15" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="4" target="3">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="4" value="provider-kind&lt;br&gt;&lt;b&gt;KindCluster&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="160" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="16" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="5" target="4">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="5" value="provider-helm&lt;br&gt;&lt;b&gt;ProviderConfig&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="240" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="18" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="6" target="5">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="6" value="provider-helm&lt;br&gt;&lt;b&gt;Release&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="320" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="7" value="creates kind&lt;br&gt;cluster" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="390" y="160" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="8" value="deploys argocd" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="390" y="320" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="9" value="provider-kubernetes&lt;br&gt;&lt;b&gt;ProviderConfig&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="400" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="17" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="10" target="9">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="10" value="provider-kubernetes&lt;br&gt;&lt;b&gt;Object&lt;/b&gt;" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="280" y="480" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="11" value="deploys app of apps" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="390" y="480" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="13" value="" style="curved=1;endArrow=classic;html=1;exitX=0;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="9" target="4">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="390" y="280" as="sourcePoint"/>
<mxPoint x="440" y="230" as="targetPoint"/>
<Array as="points">
<mxPoint x="260" y="400"/>
<mxPoint x="260" y="300"/>
<mxPoint x="260" y="200"/>
</Array>
</mxGeometry>
</mxCell>
<mxCell id="20" value="Composition" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="20" width="120" height="40" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary image file added (not shown): 40 KiB


@ -0,0 +1,31 @@
<mxfile host="65bd71144e">
<diagram id="gTaMLqmeyucP2gS6krt6" name="Page-1">
<mxGraphModel dx="813" dy="535" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="2" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="40" y="60" width="510" height="240" as="geometry"/>
</mxCell>
<mxCell id="3" value="kindserver HTTP interface" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="40" y="60" width="210" height="40" as="geometry"/>
</mxCell>
<mxCell id="4" value="&amp;nbsp; GET /api/v1/kindserver/{clustername}" style="rounded=0;whiteSpace=wrap;html=1;align=left;" vertex="1" parent="1">
<mxGeometry x="60" y="120" width="250" height="40" as="geometry"/>
</mxCell>
<mxCell id="5" value="&amp;nbsp; DELETE /api/v1/kindserver/{clustername}" style="rounded=0;whiteSpace=wrap;html=1;align=left;" vertex="1" parent="1">
<mxGeometry x="60" y="180" width="250" height="40" as="geometry"/>
</mxCell>
<mxCell id="6" value="&amp;nbsp; POST /api/v1/kindserver/{clustername}" style="rounded=0;whiteSpace=wrap;html=1;align=left;" vertex="1" parent="1">
<mxGeometry x="60" y="240" width="250" height="40" as="geometry"/>
</mxCell>
<mxCell id="7" value="required HTTP header" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="390" y="60" width="160" height="40" as="geometry"/>
</mxCell>
<mxCell id="8" value="Authorization" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="390" y="100" width="160" height="40" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary image file added (not shown): 19 KiB


@ -0,0 +1,49 @@
<mxfile host="65bd71144e">
<diagram id="88xMscIdxIgwiurMMPnB" name="Page-1">
<mxGraphModel dx="813" dy="535" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="18" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry width="630" height="340" as="geometry"/>
</mxCell>
<mxCell id="17" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="20" width="370" height="300" as="geometry"/>
</mxCell>
<mxCell id="6" value="" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="270" y="80" width="320" height="220" as="geometry"/>
</mxCell>
<mxCell id="7" value="crossplane" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="270" y="80" width="90" height="40" as="geometry"/>
</mxCell>
<mxCell id="8" value="provider-kind" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="300" y="170" width="120" height="60" as="geometry"/>
</mxCell>
<mxCell id="10" style="html=1;startArrow=classic;startFill=1;" parent="1" source="9" target="8" edge="1">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="9" value="kindserver" style="rounded=0;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="20" y="170" width="120" height="60" as="geometry"/>
</mxCell>
<mxCell id="12" value="has password" style="ellipse;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="110" y="220" width="90" height="60" as="geometry"/>
</mxCell>
<mxCell id="13" value="uses password" style="ellipse;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="390" y="220" width="90" height="60" as="geometry"/>
</mxCell>
<mxCell id="15" value="has IP" style="ellipse;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="10" y="220" width="90" height="60" as="geometry"/>
</mxCell>
<mxCell id="16" value="uses IP" style="ellipse;whiteSpace=wrap;html=1;" parent="1" vertex="1">
<mxGeometry x="290" y="220" width="90" height="60" as="geometry"/>
</mxCell>
<mxCell id="20" value="running on the local host" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry width="150" height="40" as="geometry"/>
</mxCell>
<mxCell id="21" value="running inside kind cluster" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="20" width="160" height="40" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary image file added (not shown): 26 KiB


@ -0,0 +1,71 @@
<mxfile host="65bd71144e">
<diagram id="OIxMhAz8XNpLu5mdxKmc" name="Page-1">
<mxGraphModel dx="813" dy="535" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="850" pageHeight="1100" background="#ffffff" math="0" shadow="0">
<root>
<mxCell id="0"/>
<mxCell id="1" parent="0"/>
<mxCell id="3" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="5" y="40" width="585" height="410" as="geometry"/>
</mxCell>
<mxCell id="4" value="kubernetes objects" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="5" y="40" width="140" height="40" as="geometry"/>
</mxCell>
<mxCell id="5" value="provider-kind ProviderConfig secret" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="20" y="100" width="230" height="50" as="geometry"/>
</mxCell>
<mxCell id="13" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="6" target="5">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="6" value="provider-kind&amp;nbsp;ProviderConfig" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="20" y="170" width="230" height="50" as="geometry"/>
</mxCell>
<mxCell id="11" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="7" target="6">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="7" value="provider-kind&amp;nbsp;KindCluster" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="20" y="240" width="230" height="50" as="geometry"/>
</mxCell>
<mxCell id="17" style="edgeStyle=none;html=1;" edge="1" parent="1" source="8" target="16">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="8" value="provider-helm ProviderConfig" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="210" y="310" width="210" height="50" as="geometry"/>
</mxCell>
<mxCell id="9" value="password 12345" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="105" width="120" height="40" as="geometry"/>
</mxCell>
<mxCell id="10" value="endpoint 172.18.0.1" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="175" width="150" height="40" as="geometry"/>
</mxCell>
<mxCell id="15" value="deploys to KindCluster" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="410" y="317.5" width="150" height="35" as="geometry"/>
</mxCell>
<mxCell id="16" value="writes connection secret" style="ellipse;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="240" y="245" width="150" height="40" as="geometry"/>
</mxCell>
<mxCell id="22" style="edgeStyle=none;html=1;" edge="1" parent="1" source="18">
<mxGeometry relative="1" as="geometry">
<mxPoint x="300" y="360" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="18" value="argocd" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="160" y="390" width="90" height="40" as="geometry"/>
</mxCell>
<mxCell id="20" style="edgeStyle=none;html=1;entryX=0.5;entryY=1;entryDx=0;entryDy=0;" edge="1" parent="1" source="19" target="8">
<mxGeometry relative="1" as="geometry"/>
</mxCell>
<mxCell id="19" value="forgejo" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="270" y="390" width="90" height="40" as="geometry"/>
</mxCell>
<mxCell id="23" style="edgeStyle=none;html=1;entryX=0.579;entryY=1.014;entryDx=0;entryDy=0;entryPerimeter=0;" edge="1" parent="1" source="21" target="8">
<mxGeometry relative="1" as="geometry">
<mxPoint x="320" y="360" as="targetPoint"/>
</mxGeometry>
</mxCell>
<mxCell id="21" value="ingress-nginx" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="380" y="390" width="90" height="40" as="geometry"/>
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary image file added (not shown): 35 KiB


@ -0,0 +1,30 @@
---
title: Kube-prometheus-stack
description: Kube-prometheus-stack contains Kubernetes manifests, Prometheus and Grafana, including preconfigured dashboards
---
## Kube-prometheus-stack Overview
Grafana is an open-source monitoring solution that enables visualization of metrics and logs.
Prometheus is an open-source monitoring and alerting system which collects metrics from services and allows the metrics to be shown in Grafana.
### Implementation Details
The application is started in edfbuilder/kind/stacks/core/kube-prometheus.yaml.
The application has the sync option spec.syncPolicy.syncOptions ServerSideApply=true. This is necessary since kube-prometheus-stack exceeds the size limit for secrets; without this option a sync attempt will fail and throw an exception.
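A minimal sketch of how that sync option appears in such an Argo CD Application (metadata and chart source here are illustrative, not the actual file content):
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus          # illustrative
  namespace: argocd
spec:
  project: default
  destination:
    name: in-cluster
    namespace: monitoring        # illustrative
  source:
    repoURL: https://prometheus-community.github.io/helm-charts   # illustrative
    chart: kube-prometheus-stack
    targetRevision: 58.2.2       # illustrative
  syncPolicy:
    syncOptions:
      - ServerSideApply=true     # avoids the size-limit sync failure described above
```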
The Helm values file edfbuilder/kind/stacks/core/kube-prometheus/values.yaml contains configuration values:
* grafana.additionalDataSources contains Loki as a Grafana data source.
* grafana.ingress contains the Grafana ingress configuration, like the host URL (cnoe.localtest.me).
* grafana.sidecar.dashboards contains the configuration necessary so that additional user-defined dashboards are loaded when Grafana is started.
* grafana.grafana.ini.server contains configuration details that are necessary so the ingress points to the correct URL.
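A hypothetical excerpt of such a values.yaml, using the paths listed above (service URLs and individual settings are illustrative and need to be checked against the actual file):
```
grafana:
  additionalDataSources:
    - name: Loki
      type: loki
      url: http://loki-query-frontend:3100   # illustrative in-cluster URL
  ingress:
    enabled: true
    hosts:
      - cnoe.localtest.me
    path: /grafana
  grafana.ini:
    server:
      root_url: https://cnoe.localtest.me/grafana
      serve_from_sub_path: true
  sidecar:
    dashboards:
      enabled: true
```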
### Start
Once Grafana is running, it is accessible under https://cnoe.localtest.me/grafana.
Many preconfigured dashboards can be used by clicking the menu option Dashboards.
### Adding your own dashboards
The application edfbuilder/kind/stacks/core/kube-prometheus.yaml is used to import new Loki dashboards. Examples of imported dashboards can be found in the folder edfbuilder/kind/stacks/core/kube-prometheus/dashboards.
It is possible to add your own dashboards. Dashboards must be in JSON format. To add your own dashboard, create a new ConfigMap in YAML format using one of the examples as a blueprint, as sketched below. The new dashboard in JSON format has to be added as the value for data.k8s-dashboard-[...].json like in the examples. (It is important to use a unique name for data.k8s-dashboard-[...].json for each dashboard.)
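A hypothetical minimal ConfigMap following that pattern (the `grafana_dashboard` label is the chart's usual sidecar discovery label; verify against the existing examples):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-dashboard-my-app          # hypothetical
  labels:
    grafana_dashboard: "1"            # assumed sidecar discovery label
data:
  k8s-dashboard-my-app.json: |
    {
      "title": "My App",
      "panels": []
    }
```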
Currently the preconfigured dashboards include several dashboards for Loki and a dashboard to showcase Nginx-Ingress metrics.


@ -0,0 +1,10 @@
---
title: Loki
description: Grafana Loki is a scalable open-source log aggregation system
---
## Loki Overview
The application Grafana Loki is started in edfbuilder/kind/stacks/core/loki.yaml.
Loki is started in microservices mode and contains the components ingester, distributor, querier, and query-frontend.
The Helm values file edfbuilder/kind/stacks/core/loki/values.yaml contains configuration values.


@ -0,0 +1,9 @@
---
title: Promtail
description: Grafana Promtail is an agent that ships logs to a Grafana Loki instance (log shipper)
---
## Promtail Overview
The application Grafana Promtail is started in edfbuilder/kind/stacks/core/promtail.yaml.
The Helm values file edfbuilder/kind/stacks/core/promtail/values.yaml contains configuration values.