Merge branch 'development' into feature/cicd-concept-stl

This commit is contained in:
Stephan Lo 2024-10-11 12:12:49 +02:00
commit 66d6b8ffc5
11 changed files with 161 additions and 23 deletions


@@ -0,0 +1,11 @@
# GitOps changes the definition of 'Delivery' or 'Deployment'
We have GitOps these days: there is a desired state of an environment in a repo, and a reconciling mechanism run by GitOps enforces this state on the environment.
There is no 'continuous whatever' step in between. GitOps just 'overwrites' (to avoid saying 'delivers' or 'deploys') the environment with the new state.
This means that whatever quality-assuring steps have to take place before the 'overwriting' must be defined as state changes in the repos, not in the environments.
Conclusion: I think we only have three contexts, or let's say we don't have the context 'continuous delivery'.
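As a hedged illustration of this 'overwriting' (assuming Argo CD as the reconciler; the app name, repo URL, and paths below are made-up examples, not taken from this repo), an automated sync policy means the repo state is continuously enforced on the cluster with no separate delivery step:
```shell
# Sketch only: register an environment repo with automated sync and self-heal,
# so Argo CD keeps overwriting the cluster with the desired state from git.
argocd app create my-env \
  --repo https://gitea.example.com/platform/my-env.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace my-env \
  --sync-policy automated \
  --self-heal
```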


@@ -1,9 +1,5 @@
+++
title = "Backstage"
weight = 2
[params]
author = 'evgenii.dominov@telekom.de'
date = '2024-09-36'
+++
Here you will find information about Backstage, it's plugins and usage tutorials
---
title: Backstage
weight: 2
description: Here you will find information about Backstage, its plugins and usage tutorials
---


@@ -1,7 +1,8 @@
+++
title = "Analysis of the CNOE competitors"
weight = 1
+++
---
title: Analysis of CNOE competitors
weight: 1
description: We compare CNOE - which we see as an orchestrator - with other platform orchestrating tools like Kratix and Humanitec
---
## Kratix


@@ -0,0 +1,4 @@
---
title: CNOE
description: CNOE is a platform-building orchestrator, which we chose, at least as a starting point in 2024, to build the EDF
---


@@ -0,0 +1,6 @@
---
title: idpbuilder
weight: 3
description: Here you will find information about idpbuilder installation and usage
---


@@ -0,0 +1,57 @@
---
title: Http Routing
weight: 100
---
### Routing switch
The idpbuilder supports creating platforms using either subdomain-based routing (the default) or path-based routing.
Subdomain-based routing:
```shell
idpbuilder create --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```
Path-based routing:
```shell
idpbuilder create --use-path-routing --log-level debug --package https://github.com/cnoe-io/stacks//ref-implementation
```
However, even though Argo eventually reports all deployments as green, not
the entire demo is actually functional (a case for verification?). This is due to
hardcoded values, for example URLs that point to the path-routed location of Gitea
for accessing git repos, so Backstage might not be able to reach them.
Within the demo / ref-implementation, a simple search & replace is suggested to
change URLs to fit the given environment. Proper scripting/templating could
take care of that, as the hostnames and the necessary properties should be
available. This is, however, a tedious and repetitive task one has to keep in
mind throughout the entire system, and it might lead to an explosion of config
options in the future. Code that addresses correct routing is located in both
the stack templates and the idpbuilder code.
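The suggested search & replace could look like the following sketch; the hostnames and paths are assumptions chosen for illustration, not values taken from the stacks repo:
```shell
# Sketch: switch hardcoded path-routed Gitea URLs in a checked-out stack to
# subdomain-routed ones. Hostnames and ports here are illustrative assumptions.
grep -rl 'cnoe.localtest.me:8443/gitea' ref-implementation/ \
  | xargs sed -i 's|cnoe.localtest.me:8443/gitea|gitea.cnoe.localtest.me:8443|g'
```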
### Cluster internal routing
For the most part, components communicate either with the cluster API using the
default DNS, or with each other via HTTP(S) using the public DNS name/hostname
(plus the path-routing scheme). The latter is necessary because of configs that
are visible to and modifiable by users; this includes, for example, the Argo CD
config of components that has to sync to a Gitea git repo. Using the same URL
for internal and external resolution is imperative.
The idpbuilder achieves transparent internal DNS resolution by overriding the
public DNS name in the cluster's internal DNS server (CoreDNS). As a result,
requests to the public hostnames resolve, within the cluster, to the IP of the
internal ingress controller service. Thus, internal and external requests take
a similar path and run through the same routing (rewrites, SSL/TLS, etc.).
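This override can be checked from inside the cluster; the hostname below is an assumption based on the idpbuilder defaults, not verified against this setup:
```shell
# Sketch: from a pod, the public hostname should resolve to the cluster-internal
# ingress controller service IP instead of an external address.
kubectl run dns-check --rm -it --image=busybox --restart=Never -- \
  nslookup gitea.cnoe.localtest.me
```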
### Conclusion
One has to keep in mind that some app-specific features might not work
properly, or not without hacks, when using path-based routing (e.g. the Docker
registry in Gitea). Furthermore, supporting multiple setup strategies will
become cumbersome as the platform grows. We should probably support only one
type of setup to keep the system as simple as possible, but allow modification
if necessary.
DNS solutions like `nip.io` or the already used `localtest.me` mitigate the
need for path-based routing.
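Both services resolve arbitrary subdomains without any local DNS setup, which is what makes subdomain-based routing cheap to use; the hostnames below are just examples:
```shell
# localtest.me resolves every subdomain to 127.0.0.1; nip.io resolves
# <name>.<ip>.nip.io to the embedded IP address.
nslookup gitea.cnoe.localtest.me      # -> 127.0.0.1
nslookup argocd.192.168.1.10.nip.io   # -> 192.168.1.10
```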


@@ -0,0 +1,69 @@
---
title: Validation and Verification
weight: 100
description: How does CNOE ensure equality between actual and desired state
---
## Definition
The CNOE docs somewhat interchange validation and verification, but for the
most part they adhere to the general definitions:

> Validation is used when you check your approach before actually executing an
> action.

Examples:
- Form validation before processing the data
- Compiler checking syntax
- Rust's borrow checker

> Verification describes testing whether your 'thing' complies with your spec.

Examples:
- Unit tests
- Testing availability (ping, curl health check)
- Checking a ZKP of some computation
---
## In CNOE
It seems that neither validation nor verification within the CNOE framework is
handled by an explicit component; instead, both are meant to be addressed
throughout the system and its workflows.
As stated in the [docs](https://cnoe.io/docs/intro/capabilities/validation),
validation takes place in all parts of the stack by enforcing strict API usage
and policies (signing, mitigations, security scans etc.; see the usage of
Kyverno, for example) and by using code generation (proven code), linting,
formatting, and LSPs. Consequently, validation of source code, templates, etc.
is more of a best practice than a hard feature, and it is up to the users to
incorporate it into their workflows and pipelines. This is probably due to the
complexity of the entire stack and the individual properties of each component
and application.
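As a hedged example of such a best-practice check (the file paths are illustrative assumptions, not prescribed by this repo), manifests can be validated against Kyverno policies locally before they are pushed to the GitOps repo:
```shell
# Sketch: run Kyverno policies against local manifests with the Kyverno CLI,
# so policy violations surface before the state ever reaches the cluster.
kyverno apply ./policies/require-labels.yaml --resource ./manifests/
```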
Verification of artifacts and deployments exists in a somewhat similar state:
the current CNOE reference implementation does not provide sufficient
verification tooling.
However, as stated in the [docs](https://cnoe.io/docs/reference-implementation/integrations/verification),
the framework's `cnoe-cli` is capable of very limited verification of
artifacts within Kubernetes. The same verification is also available as a step
within a Backstage
[plugin](https://github.com/cnoe-io/plugin-scaffolder-actions), which is pretty
much just a wrapper around the CLI tool. The tool consumes CRD-like structures
defining the expected pods and CRDs and checks for their existence in a
live cluster ([example](https://github.com/cnoe-io/cnoe-cli/blob/main/pkg/cmd/prereq/ack-s3-prerequisites.yaml)).
Depending on one's aspirations for 'verification', this check is rather superficial
and might only suffice as an initial smoke test. Furthermore, it seems that the
feature is not actually used within the CNOE stacks repo.
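In effect, the check boils down to something like the following manual equivalent (this is not the `cnoe-cli` invocation itself; the CRD and namespace are assumptions loosely based on the linked ACK S3 example):
```shell
# Sketch: assert that an expected CRD and its controller pods exist in the
# live cluster, which is roughly the depth of the described verification.
kubectl get crd buckets.s3.services.k8s.aws
kubectl get pods -n ack-system
```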
For a live product, more in-depth verification tools and schemes are necessary
to verify the correct configuration and the authenticity of workloads, which,
in the context of traditional cloud systems, is only achievable to a limited degree.
Existing tools within the stack, e.g. Argo, provide some verification
capabilities, but further investigation into the general topic is necessary.


@@ -1,6 +0,0 @@
+++
title = "idpbuilder"
weight = 3
+++
Here you will find information about idpbuilder installation and usage


@@ -1,7 +1,7 @@
+++
title = "Kyverno integration"
weight = 4
+++
---
title: Kyverno
description: Kyverno is a policy engine for Kubernetes designed to enforce, validate, and mutate configurations of Kubernetes resources
---
## Kyverno Overview


Binary image file changed (165 KiB before, 165 KiB after).