Merge branch 'development' into idpbuilder-backstage-templates

Stephan Lo 2024-11-08 13:55:44 +01:00
commit a68629559d
10 changed files with 488 additions and 24 deletions


@ -6,58 +6,83 @@ This repo contains business and architectural design and documentation of the De
The documentation is done in [Hugo-format](https://gohugo.io).
Hugo is a static site renderer, so to get the documentation site rendered you need a running Hugo processor. Therefore there is
* either a Hugo [`.devcontainer`-definition](https://containers.dev/) - just run a devcontainer-aware IDE or CLI, e.g. Visual Studio Code
* or a Hugo [`Devbox`-definition](https://www.jetify.com/devbox/) - in this case just run a devbox shell
## Local installation of the Hugo documentation system
We describe two possible ways (one with a devcontainer, one with devbox) to get the Hugo documentation system running locally.
For both, prepare with the following three steps:
1. open a terminal on your local box
2. clone this repo: `git clone https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/website-and-documentation`
3. change to the repo working dir: `cd website-and-documentation`
### Possibility 1: Hugo in a devcontainer
[`devcontainers`](https://containers.dev/) are containers that run as virtual systems on your local box. The definition is in the `.devcontainer` folder.
As a prerequisite you therefore need a running container daemon, e.g. Docker.
There are several options to create and run the devcontainer - we present two here:
#### Option 1: Run the container triggered by and connected to an IDE, e.g. VS Code
1. open the repo in a [devcontainer-aware tool/IDE](https://containers.dev/supporting) (e.g. `code .`)
1. start the `devcontainer` (in VS Code: `F1` > `Dev Containers: Reopen in Container`)
1. when the container is up and running, open your browser at `http://localhost:1313/`
#### Option 2: Run the container natively
An alternative way to get the container running is the [devcontainer CLI](https://github.com/devcontainers/cli), which lets you run the devcontainer without VS Code.
As a prerequisite, follow the install steps of the devcontainer CLI. Then (the full command sequence is sketched below):
1. start the devcontainer by running: `devcontainer up --workspace-folder .`
1. find out the IP address of the devcontainer using `docker ps` and `docker inspect <id of container>`
1. when the container is up and running, open your browser at `http://<DOCKER IP>:1313/`
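Put together, the native run could look like the following; the `docker inspect` format string is one common way to extract the container IP and assumes the container is attached to a single network:
```shell
# Start the devcontainer defined in .devcontainer/ (run from the repo root)
devcontainer up --workspace-folder .

# List running containers and note the devcontainer's ID
docker ps

# Print the container's IP address (single-network case)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <id of container>

# Then browse to http://<DOCKER IP>:1313/
```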
### Possibility 2: Hugo in a devbox
[`Devboxes`](https://www.jetify.com/devbox/) are locally isolated environments managed by the [Nix package manager](https://nix.dev/). So first [prepare devbox](https://www.jetify.com/docs/devbox/installing_devbox/).
Then (see the example session below):
1. `devbox shell`
1. in the shell: `hugo serve`
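An example session could look like this (assuming devbox is already installed):
```shell
# Enter the isolated environment; devbox installs the packages
# from devbox.json on first use
devbox shell

# Inside the devbox shell: render and serve the site with live reload
hugo serve
# Hugo typically reports: Web Server is available at http://localhost:1313/
```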
## Editing
### Documentation language
The documentation uses the [Docsy theme](https://www.docsy.dev/).
So for editing content, just go to the `content` folder and edit according to the [Docsy documentation](https://www.docsy.dev/docs/adding-content/).
## Committing
After finishing a unit of work, commit and push.
# Annex
## Installation steps illustrated
When you run the above installation, the output will typically look like this:
### In Visual Studio Code
#### Reopen in Container
![vsc-f1](./assets/images/vsc-f1.png)
#### Hugo server is running and (typically) listens to localhost:1313
After some installation time you have:
![vsc-hugo](./assets/images/vsc-hugo.png)
### Final result in a web browser
![browser](./assets/images/browser.png)


@ -0,0 +1,141 @@
---
title: ArgoCD
weight: 30
description: A description of ArgoCD and its role in CNOE
---
## What is ArgoCD?
ArgoCD is a Continuous Delivery tool for kubernetes based on GitOps principles.
> ELI5: ArgoCD is an application running in kubernetes which monitors Git
> repositories containing some sort of kubernetes manifests and automatically
> deploys them to some configured kubernetes clusters.
From ArgoCD's perspective, applications are defined as custom resources
within the kubernetes clusters that ArgoCD monitors. Such a
definition describes a source git repository that contains kubernetes
manifests, in the form of a helm chart, kustomize, jsonnet definitions or plain
yaml files, as well as a target kubernetes cluster and namespace the manifests
should be applied to. Thus, ArgoCD is capable of deploying applications to
various (remote) clusters and namespaces.
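For illustration, such an application definition can also be created with the `argocd` CLI; the app name, repository URL, and path below are placeholders:
```shell
# Create an Application that deploys manifests from a git repo to the
# local cluster's 'default' namespace (all names/URLs are examples)
argocd app create my-app \
  --repo https://github.com/example/my-manifests.git \
  --path deploy \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```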
ArgoCD monitors both the source and the destination. It applies changes from
the git repository that acts as the source of truth for the destination as soon
as they occur, i.e. if a change was pushed to the git repository, the change is
applied to the kubernetes destination by ArgoCD. Subsequently, it checks
whether the desired state was established. For example, it verifies that all
resources were created, enough replicas started, and that all pods are in the
`running` state and healthy.
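This cycle can be observed and triggered manually with the CLI, e.g. (the application name is a placeholder):
```shell
# Show the sync and health status of an application
argocd app get my-app

# Apply the state from git to the destination cluster
argocd app sync my-app

# Block until the application reports healthy
argocd app wait my-app --health
```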
## Architecture
### Core Components
An ArgoCD deployment consists of three main components:
#### Application Controller
The application controller is a kubernetes operator that synchronizes the live
state within a kubernetes cluster with the desired state derived from the git
sources. It monitors the live state, can detect deviations, and performs
corrective actions. Additionally, it can execute hooks on lifecycle stages
such as pre- and post-sync.
#### Repository Server
The repository server interacts with git repositories and caches their state
to reduce the amount of polling necessary. Furthermore, it is responsible for
generating the kubernetes manifests from the resources within the git
repositories, i.e. executing helm or jsonnet templates.
#### API Server
The API server is a REST/gRPC service that allows the Web UI and CLI, as well
as other API clients, to interact with the system. It also acts as the callback
endpoint for webhooks, particularly from Git repository platforms such as
GitHub or GitLab, to reduce repository polling.
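A minimal sketch of such a callback, assuming ArgoCD's default `/api/webhook` endpoint and placeholder hostname and payload:
```shell
# Simulate a GitHub push event against the API server's webhook endpoint
curl -X POST https://argocd.example.com/api/webhook \
  -H 'Content-Type: application/json' \
  -H 'X-GitHub-Event: push' \
  -d '{"ref": "refs/heads/main", "repository": {"html_url": "https://github.com/example/my-manifests"}}'
```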
### Others
The system stores its configuration primarily as kubernetes resources. Thus,
additional external storage is not vital.
Redis
: A Redis store is optional but recommended to be used as a cache to reduce
load on ArgoCD components and connected systems, e.g. git repositories.
ApplicationSetController
: The ApplicationSet Controller is, like the Application Controller, a
kubernetes operator; it can deploy applications based on parameterized
application templates. This allows the deployment of different versions of an
application into various environments from a single template.
### Overview
![Conceptual Architecture](./argocd_architecture.webp)
![Core components](./argocd-core-components.webp)
## Role in CNOE
ArgoCD is, besides gitea/forgejo, one of the core components
bootstrapped by the idpbuilder. Future project creation, e.g. through
backstage, relies on the availability of ArgoCD.
After the initial bootstrapping phase, effectively all components in the stack
that are deployed in kubernetes are managed by ArgoCD. This includes the
bootstrapped components of gitea and ArgoCD which are onboarded afterward.
Thus, the idpbuilder is only necessary in the bootstrapping phase of the
platform and the technical coordination of all components shifts to ArgoCD
eventually.
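A sketch of this bootstrap-then-handover flow, assuming a local idpbuilder installation and the default `argocd` namespace:
```shell
# Bootstrap a local cluster with the core components (gitea/forgejo, ArgoCD, ...)
idpbuilder create

# Afterwards, platform components show up as ArgoCD Applications
kubectl get applications -n argocd
```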
In general, the creation of new projects and applications should take place in
backstage. It is a catalog of software components and best practices that
allows developers to grasp and manage their software portfolio. Underneath,
however, the deployment of applications and platform components is managed by
ArgoCD. Among others, backstage creates Application CRDs to instruct ArgoCD to
manage deployments and subsequently report on their current state.
## Glossary
_Initially shamelessly copied from [the docs](https://argo-cd.readthedocs.io/en/stable/core_concepts/)_
Application
: A group of Kubernetes resources as defined by a manifest. This is a Custom Resource Definition (CRD).
ApplicationSet
: A CRD that is a template that can create multiple parameterized Applications.
Application source type
: Which Tool is used to build the application.
Configuration management tool
: See Tool.
Configuration management plugin
: A custom tool.
Health
: The health of the application, is it running correctly? Can it serve requests?
Live state
: The live state of that application. What pods etc are deployed.
Refresh
: Compare the latest code in Git with the live state. Figure out what is different.
Sync
: The process of making an application move to its target state. E.g. by applying changes to a Kubernetes cluster.
Sync status
: Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be?
Sync operation status
: Whether or not a sync succeeded.
Target state
: The desired state of an application, as represented by files in a Git repository.
Tool
: A tool to create manifests from a directory of files. E.g. Kustomize. See Application Source Type.

Binary file added (86 KiB, not shown)

Binary file added (76 KiB, not shown)

Binary file added (48 KiB, not shown)

Binary file added (38 KiB, not shown)

@ -55,3 +55,124 @@ if necessary.
DNS solutions like `nip.io` or the already used `localtest.me` mitigate the
need for path-based routing.
## Excerpt
HTTP is a cornerstone of the internet due to its high flexibility. Starting
with HTTP/1.1, each request contains, among other things, a path and a
`Host` name in its headers. While an HTTP request is sent to a single IP
address / server, these two pieces of data allow (distributed) systems to
handle requests in various ways.
```shell
$ curl -v http://google.com/something > /dev/null
* Connected to google.com (2a00:1450:4001:82f::200e) port 80
* using HTTP/1.x
> GET /something HTTP/1.1
> Host: google.com
> User-Agent: curl/8.10.1
> Accept: */*
...
```
### Path-Routing
Imagine requesting `http://myhost.foo/some/file.html`. In a simple setup, the
web server that `myhost.foo` resolves to would serve static files from some
directory, e.g. `/<some_dir>/some/file.html`.
In more complex systems, one might have multiple services that fulfill various
roles, for example a service that generates HTML sites of articles from a CMS
and a service that can convert images into various formats. Using path-routing
both services are available on the same host from a user's POV.
An article served from `http://myhost.foo/articles/news1.html` would be
generated by the article service and point to an image
`http://myhost.foo/images/pic.jpg`, which in turn is generated by the image
converter service. When a user sends an HTTP request to `myhost.foo`, they hit
a reverse proxy which forwards the request, based on the requested path, to
some other system, waits for a response, and subsequently returns that
response to the user.
![Path-Routing Example](../path-routing.png)
Such a setup hides the complexity from the user and allows the creation of
large distributed, scalable systems acting as a unified entity from the
outside. Since everything is served on the same host, the browser is inclined
to trust all downstream services. This allows for easier 'communication'
between services through the browser. For example, cookies could be valid for
the entire host and thus authentication data could be forwarded to requested
downstream services without the user having to explicitly re-authenticate.
Furthermore, services 'know' their user-facing location as well as the paths
to other services, since paths are usually set by convention and / or
hard-coded. In practice, this makes configuration of the entire system
somewhat easier, especially if you have various environments for testing,
development, and production. The hostname of the system does not matter, as
one can use hostname-relative URLs, e.g. `/some/service`.
Load balancing is also easily achievable by multiplying the number of service
instances. Most reverse proxy systems are able to apply various load balancing
strategies to forward traffic to downstream systems.
Problems might arise if downstream systems are not built with path-routing in
mind. Some systems require being served from the root of a domain, see for
example the container registry spec.
### Hostname-Routing
Each downstream service in a distributed system is served from a different
host, typically a subdomain, e.g. `serviceA.myhost.foo` and
`serviceB.myhost.foo`. This gives services full control over their respective
host, and even allows them to do path-routing within each system. Moreover,
hostname-routing allows the entire system to create more flexible and powerful
routing schemes in terms of scalability. Intra-system communication becomes
somewhat harder, as the browser treats each subdomain as a separate host,
shielding cookies, for example, from one another.
Each host that serves a service requires a DNS entry that has to be
published to the clients (from some DNS server). Depending on the environment
this can become quite tedious, as DNS resolution on the internet and in
intranets might have to deviate. This applies to intra-cluster communication
as well, as seen with the idpbuilder's platform. In this case, external DNS
resolution has to be replicated within the cluster to be able to use the same
URLs to address, for example, gitea.
The following example depicts DNS-only routing. By defining separate DNS
entries for each service / subdomain, requests are resolved to the respective
servers. In theory, no additional infrastructure is necessary to route user
traffic to each service. However, as services are completely separated, other
infrastructure like authentication possibly has to be duplicated.
![DNS-only routing](../hostname-routing.png)
When using hostname-based routing, one does not have to set different IPs for
each hostname. Instead, having multiple DNS entries pointing to the same set
of IPs allows re-using existing infrastructure. As shown below, a reverse
proxy is able to forward requests to downstream services based on the `Host`
request header. This way a specific hostname can be forwarded to a defined
service.
![Hostname Proxy](../hostname-routing-proxy.png)
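This behavior can be observed with curl: both requests below go to the same proxy IP, but the proxy dispatches them on the `Host` header (the hostnames and the documentation IP are placeholders):
```shell
# Pin both hostnames to the same proxy IP
curl --resolve serviceA.myhost.foo:80:203.0.113.10 http://serviceA.myhost.foo/
curl --resolve serviceB.myhost.foo:80:203.0.113.10 http://serviceB.myhost.foo/

# Equivalent: address the IP directly and set the Host header by hand
curl -H 'Host: serviceA.myhost.foo' http://203.0.113.10/
```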
At the same time, one could imagine a multi-tenant system that differentiates
customer systems by name, e.g. `tenant-1.cool.system` and
`tenant-2.cool.system`. Configured as a wildcard-style domain, `*.cool.system`
could point to a reverse proxy that forwards requests to a tenant's instance
of the system, allowing re-use of central infrastructure while still hosting
separate systems per tenant.
The implicit dependency on DNS resolution generally makes this kind of routing
more complex and error-prone, as changes to DNS server entries are not always
possible or modifiable by everyone. Also, local changes to your `/etc/hosts`
file are a constant pain and should be seen as a dirty hack. As mentioned
above, dynamic DNS solutions like `nip.io` are often helpful in this case.
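`nip.io`, for example, encodes the target IP in the hostname itself, so any such name resolves without touching a DNS server or `/etc/hosts` (the IP below is a placeholder):
```shell
# <anything>.<ip>.nip.io resolves to <ip>
dig +short serviceA.203.0.113.10.nip.io
# 203.0.113.10
```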
### Conclusion
Path- and hostname-based routing are the two most common methods of HTTP
traffic routing. They can be used separately, but more often they are used in
conjunction. Due to HTTP's versatility, other forms of HTTP routing, for
example based on the `Content-Type` header, are also common.

Binary file added (52 KiB, not shown)

devbox.json (new file)

@ -0,0 +1,12 @@
{
  "$schema": "https://raw.githubusercontent.com/jetify-com/devbox/0.10.5/.schema/devbox.schema.json",
  "packages": [
    "hugo@0.125.4",
    "dart-sass@1.75.0",
    "go@latest"
  ],
  "shell": {
    "init_hook": [],
    "scripts": {}
  }
}

devbox.lock (new file)

@ -0,0 +1,165 @@
{
"lockfile_version": "1",
"packages": {
"dart-sass@1.75.0": {
"last_modified": "2024-05-03T15:42:32Z",
"resolved": "github:NixOS/nixpkgs/5fd8536a9a5932d4ae8de52b7dc08d92041237fc#dart-sass",
"source": "devbox-search",
"version": "1.75.0",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/6ynzjs0v55h88ri86li1d9nyr822n7kk-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/f4wbni4cqdhq8y9phl6aazyh54mnacz7-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/6ynzjs0v55h88ri86li1d9nyr822n7kk-dart-sass-1.75.0"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/g88isq3r0zpxvx1rzc86dl9ny15jr980-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/l6vdyb4i5hb9qmvms9v9g7vsnynfq0lb-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/g88isq3r0zpxvx1rzc86dl9ny15jr980-dart-sass-1.75.0"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/h79n1apvmgpvw4w855zxf9qx887k9v3d-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/bxmfb2129kn4xnrz5i4p4ngkplavrxv4-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/h79n1apvmgpvw4w855zxf9qx887k9v3d-dart-sass-1.75.0"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/yvr71pda4bm9a2dilgyd77297xx32iad-dart-sass-1.75.0",
"default": true
},
{
"name": "pubcache",
"path": "/nix/store/h8n6s7f91kn596g2hbn3ccbs4s80bm46-dart-sass-1.75.0-pubcache"
}
],
"store_path": "/nix/store/yvr71pda4bm9a2dilgyd77297xx32iad-dart-sass-1.75.0"
}
}
},
"go@latest": {
"last_modified": "2024-10-13T23:44:06Z",
"resolved": "github:NixOS/nixpkgs/d4f247e89f6e10120f911e2e2d2254a050d0f732#go",
"source": "devbox-search",
"version": "1.23.2",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/35jikx2wg5r0qj47sic0p99bqnmwi6cn-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/35jikx2wg5r0qj47sic0p99bqnmwi6cn-go-1.23.2"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/6bx6d90kpy537yab22wja70ibpp4gkww-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/6bx6d90kpy537yab22wja70ibpp4gkww-go-1.23.2"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/yi89mimkmw48qhzrll1aaibxbvllpsjv-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/yi89mimkmw48qhzrll1aaibxbvllpsjv-go-1.23.2"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/klw1ipjsqx1np7pkk833x7sad7f3ivv9-go-1.23.2",
"default": true
}
],
"store_path": "/nix/store/klw1ipjsqx1np7pkk833x7sad7f3ivv9-go-1.23.2"
}
}
},
"hugo@0.125.4": {
"last_modified": "2024-04-27T02:17:36Z",
"resolved": "github:NixOS/nixpkgs/698fd43e541a6b8685ed408aaf7a63561018f9f8#hugo",
"source": "devbox-search",
"version": "0.125.4",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/2ssds5l4s15xfgljv2ygjhqpn949lxj4-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/2ssds5l4s15xfgljv2ygjhqpn949lxj4-hugo-0.125.4"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/nln80v8vsw5h3hv7kihglb12fy077flb-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/nln80v8vsw5h3hv7kihglb12fy077flb-hugo-0.125.4"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/n6az4gns36nrq9sbiqf2kf7kgn1kjyfm-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/n6az4gns36nrq9sbiqf2kf7kgn1kjyfm-hugo-0.125.4"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/k53ijl83p62i6lqia2jjky8l136x42i7-hugo-0.125.4",
"default": true
}
],
"store_path": "/nix/store/k53ijl83p62i6lqia2jjky8l136x42i7-hugo-0.125.4"
}
}
}
}
}