Merge branch 'development' into stakeholder-workshop
@ -0,0 +1,92 @@
+++
archetype = "sub-chapter"
title = "CI/CD Pipeline"
weight = 1
[params]
author = 'florian.fuerstenberg@t-systems.com'
date = '2024-10-08'
+++

This document describes the concept of pipelining in the context of the Edge Developer Framework.

## Overview

To provide a composable pipeline as part of the Edge Developer Framework (EDF), we have defined a set of concepts that can be used to create pipelines for different usage scenarios. These concepts are:

**Pipeline Contexts** define the context in which a pipeline execution is run. Typically, a context corresponds to a specific step within the software development lifecycle, such as building and testing code, deploying and testing code in staging environments, or releasing code. Contexts define which components are used, in which order, and the environment in which they are executed.

**Components** are the building blocks used in the pipeline. They define specific steps that are executed in a pipeline, such as compiling code, running tests, or deploying an application.



## Pipeline Contexts

We provide four Pipeline Contexts that can be used to create pipelines for different usage scenarios. The contexts can be described as the golden path, which is fully configurable and extendable by the users.

Pipeline runs with a given context can be triggered by different actions. For example, a pipeline run with the `Continuous Integration` context can be triggered by a commit to a repository, while a pipeline run with the `Continuous Delivery` context could be triggered by merging a pull request to a specific branch.
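
As a rough illustration, a pipeline definition could select a context, a trigger, and the components to run. The following sketch is purely hypothetical; the field names `context`, `trigger`, and `components` are not part of any EDF specification:

```
# Hypothetical EDF pipeline definition -- field names are illustrative only
context: continuous-integration   # one of the four golden-path contexts
trigger:
  on: commit                      # run on every commit to the repository
components:
  - build                         # compile the code
  - code-test                     # fast, isolated tests
  - security-and-compliance:
      mode: static                # static scans only in the CI context
```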

### Continuous Integration

This context is focused on running tests and checks on every commit to a repository. It is used to ensure that the codebase is always in a working state and that new changes do not break existing functionality. Tests within this context are typically fast and lightweight, and are used to catch simple errors such as syntax errors, typos, and basic logic errors. Static vulnerability and compliance checks can also be performed in this context.

### Continuous Delivery

This context is focused on deploying code to an (ephemeral) staging environment after its static checks have been performed. It is used to ensure that the codebase is always deployable and that new changes can be easily reviewed by stakeholders. Tests within this context are typically more comprehensive than those in the Continuous Integration context, and handle more complex scenarios such as integration tests and end-to-end tests. Additionally, live security and compliance checks can be performed in this context.

### Continuous Deployment

This context is focused on deploying code to a production environment and/or publishing artefacts after static checks have been performed.

### Chore

This context focuses on measures that need to be carried out regularly (e.g. security or compliance scans). They are used to ensure the robustness, security and efficiency of software projects. They enable teams to maintain high standards of quality and reliability while minimizing risks, allowing developers to focus on more critical and creative aspects of development and increasing overall productivity and satisfaction.

## Components

Components are the composable and self-contained building blocks for the contexts described above. The aim is to cover most (common) use cases for application teams and make them particularly easy to use by following our golden paths. This way, application teams only have to include and configure the functionalities they actually need. An additional benefit is that this allows for easy extensibility. If a desired functionality has not been implemented as a component, application teams can simply add their own.

Components must be as small as possible and follow the same concepts of software development and deployment as any other software product. In particular, they must have the following characteristics:

- designed for a single task
- provide a clear and intuitive output
- easy to compose
- easily customizable or interchangeable
- automatically testable

In the EDF, components are divided into different categories. Each category contains components that perform similar actions. For example, the `build` category contains components that compile code, while the `deploy` category contains components that automate the management of the artefacts created in a production-like system.

> **Note:** Components are comparable to interfaces in programming. Each component defines a certain behaviour, but the actual implementation of these actions depends on the specific codebase and environment.
>
> For example, the `build` component defines the action of compiling code, but the actual build process depends on the programming language and build tools used in the project. The `vulnerability scanning` component will likely execute different tools and interact with different APIs depending on the context in which it is executed.

### Build

Build components are used to compile code. They can compile code written in different programming languages and can target different platforms.

### Code Test

These components define tests that are run on the codebase. They are used to ensure that the codebase is always in a working state and that new changes do not break existing functionality. Tests within this category are typically fast and lightweight, and are used to catch simple errors such as syntax errors, typos, and basic logic errors. Tests must be executable in isolation, and must not require external dependencies such as databases or network connections.

### Application Test

Application tests are tests which run the code in a real execution environment and provide external dependencies. These tests are typically more comprehensive than those in the `Code Test` category, and handle more complex scenarios such as integration tests and end-to-end tests.

### Deploy

Deploy components are used to deploy code to different environments, but can also be used to publish artifacts. They are typically used in the `Continuous Delivery` and `Continuous Deployment` contexts.

### Release

Release components are used to create releases of the codebase. They can be used to create tags in the repository, create release notes, or perform other tasks related to releasing code. They are typically used in the `Continuous Deployment` context.

### Repo Housekeeping

Repo housekeeping components are used to manage the repository. They can be used to clean up old branches, update the repository's README file, or perform other maintenance tasks. They can also be used to handle issues, such as automatically closing stale issues.

### Dependency Management

Dependency management components are used to automate the process of managing dependencies in a codebase. They can be used to create pull requests with updated dependencies, or to automatically update dependencies in a codebase.

### Security and Compliance

Security and compliance components are used to ensure that the codebase meets security and compliance requirements. They can be used to scan the codebase for vulnerabilities, check for compliance with coding standards, or perform other security and compliance checks. Depending on the context, different tools can be used to accomplish scanning. In the `Continuous Integration` context, static code analysis can be used to scan the codebase for vulnerabilities, while in the `Continuous Delivery` context, live security and compliance checks can be performed.

@ -116,7 +116,7 @@ NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd argo-workflows Synced Healthy
argocd argocd Synced Healthy
argocd backstage Synced Healthy
argocd backstage-templates Synced Healthy
argocd included-backstage-templates Synced Healthy
argocd coredns Synced Healthy
argocd external-secrets Synced Healthy
argocd gitea Synced Healthy

@ -389,7 +389,7 @@ NAMESPACE NAME SYNC STATUS HEALTH STATUS
argocd argo-workflows Synced Healthy
argocd argocd Synced Healthy
argocd backstage Synced Healthy
argocd backstage-templates Synced Healthy
argocd included-backstage-templates Synced Healthy
argocd external-secrets Synced Healthy
argocd gitea Synced Healthy
argocd keycloak Synced Healthy

content/en/docs/solution/tools/CNOE/argocd/_index.md
@ -0,0 +1,141 @@
---
title: ArgoCD
weight: 30
description: A description of ArgoCD and its role in CNOE
---

## What is ArgoCD?

ArgoCD is a Continuous Delivery tool for kubernetes based on GitOps principles.

> ELI5: ArgoCD is an application running in kubernetes which monitors Git
> repositories containing some sort of kubernetes manifests and automatically
> deploys them to some configured kubernetes clusters.

From ArgoCD's perspective, applications are defined as custom resources
within the kubernetes clusters that ArgoCD monitors. Such a
definition describes a source git repository that contains kubernetes
manifests, in the form of a helm chart, kustomize, jsonnet definitions or plain
yaml files, as well as a target kubernetes cluster and namespace the manifests
should be applied to. Thus, ArgoCD is capable of deploying applications to
various (remote) clusters and namespaces.
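
To make this concrete, a minimal Application manifest might look as follows (the repository URL and namespaces are placeholders):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-app-manifests.git  # placeholder repo
    targetRevision: HEAD
    path: deploy                  # directory containing manifests or a helm chart
  destination:
    server: https://kubernetes.default.svc  # the cluster ArgoCD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```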

ArgoCD monitors both the source and the destination. It applies changes from
the git repository that acts as the source of truth for the destination as soon
as they occur, i.e. if a change was pushed to the git repository, the change is
applied to the kubernetes destination by ArgoCD. Subsequently, it checks
whether the desired state was established. For example, it verifies that all
resources were created, enough replicas started, and that all pods are in the
`running` state and healthy.

## Architecture

### Core Components

An ArgoCD deployment consists of three main components:

#### Application Controller

The application controller is a kubernetes operator that synchronizes the live
state within a kubernetes cluster with the desired state derived from the git
sources. It monitors the live state, can detect deviations, and perform
corrective actions. Additionally, it can execute hooks on life cycle stages
such as pre- and post-sync.

#### Repository Server

The repository server interacts with git repositories and caches their state
to reduce the amount of polling necessary. Furthermore, it is responsible for
generating the kubernetes manifests from the resources within the git
repositories, i.e. executing helm or jsonnet templates.

#### API Server

The API Server is a REST/gRPC service that allows the Web UI and CLI, as well
as other API clients, to interact with the system. It also acts as the callback
for webhooks, particularly from Git repository platforms such as GitHub or
GitLab, to reduce repository polling.

### Others

The system primarily stores its configuration as kubernetes resources. Thus,
other external storage is not vital.

Redis
: A Redis store is optional but recommended to be used as a cache to reduce
load on ArgoCD components and connected systems, e.g. git repositories.

ApplicationSetController
: The ApplicationSet Controller is, similar to the Application Controller, a
kubernetes operator that can deploy applications based on parameterized
application templates. This allows the deployment of different versions of an
application into various environments from a single template.
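
As an illustration, an ApplicationSet with a list generator could stamp out one Application per environment (names and the repository URL are placeholders):

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-envs
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: staging
          - env: production
  template:
    metadata:
      name: 'my-app-{{env}}'      # one Application per list element
    spec:
      project: default
      source:
        repoURL: https://example.com/org/my-app-manifests.git  # placeholder
        targetRevision: HEAD
        path: 'envs/{{env}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'my-app-{{env}}'
```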

### Overview

![Argo CD Architecture Overview](argocd_architecture.png)

![Argo CD Application Controller](argocd-application-controller.png)

## Role in CNOE

ArgoCD is one of the core components besides gitea/forgejo that is being
bootstrapped by the idpbuilder. Future project creation, e.g. through
backstage, relies on the availability of ArgoCD.

After the initial bootstrapping phase, effectively all components in the stack
that are deployed in kubernetes are managed by ArgoCD. This includes the
bootstrapped components of gitea and ArgoCD which are onboarded afterward.
Thus, the idpbuilder is only necessary in the bootstrapping phase of the
platform and the technical coordination of all components shifts to ArgoCD
eventually.

In general, the creation of new projects and applications should take place in
backstage. It is a catalog of software components and best practices that allows
developers to grasp and to manage their software portfolio. Underneath,
however, the deployment of applications and platform components is managed by
ArgoCD. Among others, backstage creates Application resources to instruct ArgoCD to
manage deployments and subsequently report on their current state.

## Glossary

_Initially shamelessly copied from [the docs](https://argo-cd.readthedocs.io/en/stable/core_concepts/)_

Application
: A group of Kubernetes resources as defined by a manifest. This is a Custom Resource Definition (CRD).

ApplicationSet
: A CRD that is a template that can create multiple parameterized Applications.

Application source type
: Which Tool is used to build the application.

Configuration management tool
: See Tool.

Configuration management plugin
: A custom tool.

Health
: The health of the application: is it running correctly? Can it serve requests?

Live state
: The live state of that application: what pods etc. are deployed.

Refresh
: Compare the latest code in Git with the live state. Figure out what is different.

Sync
: The process of making an application move to its target state, e.g. by applying changes to a Kubernetes cluster.

Sync status
: Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be?

Sync operation status
: Whether or not a sync succeeded.

Target state
: The desired state of an application, as represented by files in a Git repository.

Tool
: A tool to create manifests from a directory of files, e.g. Kustomize. See Application source type.

@ -55,3 +55,124 @@ if necessary.

DNS solutions like `nip.io` or the already used `localtest.me` mitigate the
need for path based routing.

## Excerpt

HTTP is a cornerstone of the internet due to its high flexibility. Starting
from HTTP/1.1, each request in the protocol contains, among other things, a
path and a hostname (the `Host` header). While an HTTP request is sent to a
single IP address / server, these two pieces of data allow (distributed)
systems to handle requests in various ways.

```shell
$ curl -v http://google.com/something > /dev/null

* Connected to google.com (2a00:1450:4001:82f::200e) port 80
* using HTTP/1.x
> GET /something HTTP/1.1
> Host: google.com
> User-Agent: curl/8.10.1
> Accept: */*
...
```

### Path-Routing

Imagine requesting `http://myhost.foo/some/file.html`. In a simple setup, the
web server that `myhost.foo` resolves to would serve static files from some
directory, e.g. `/<some_dir>/some/file.html`.

In more complex systems, one might have multiple services that fulfill various
roles, for example a service that generates HTML sites of articles from a CMS
and a service that can convert images into various formats. Using path-routing,
both services are available on the same host from a user's POV.

An article served from `http://myhost.foo/articles/news1.html` would be
generated from the article service and points to an image
`http://myhost.foo/images/pic.jpg` which in turn is generated by the image
converter service. When a user sends an HTTP request to `myhost.foo`, they hit
a reverse proxy which forwards the request based on the requested path to some
other system, waits for a response, and subsequently returns that response to
the user.

![Path-Routing](./path-routing.png)

Such a setup hides the complexity from the user and allows the creation of
large distributed, scalable systems acting as a unified entity from the
outside. Since everything is served on the same host, the browser is inclined
to trust all downstream services. This allows for easier 'communication'
between services through the browser. For example, cookies could be valid for
the entire host and thus authentication data could be forwarded to requested
downstream services without the user having to explicitly re-authenticate.

Furthermore, services 'know' their user-facing location by knowing their path
and the paths to other services, as paths are usually set as a convention and /
or hard-coded. In practice, this makes configuration of the entire system
somewhat easier, especially if you have various environments for testing,
development, and production. The hostname of the system does not matter as one
can use hostname-relative URLs, e.g. `/some/service`.

Load balancing is also easily achievable by multiplying the number of service
instances. Most reverse proxy systems are able to apply various load balancing
strategies to forward traffic to downstream systems.

Problems might arise if downstream systems are not built with path-routing in
mind. Some systems must be served from the root of a domain, see for
example the container registry spec.

### Hostname-Routing

Each downstream service in a distributed system is served from a different
host, typically a subdomain, e.g. `serviceA.myhost.foo` and
`serviceB.myhost.foo`. This gives services full control over their respective
host, and even allows them to do path-routing within each system. Moreover,
hostname-routing allows the entire system to create more flexible and powerful
routing schemes in terms of scalability. Intra-system communication becomes
somewhat harder as the browser treats each subdomain as a separate host,
shielding cookies for example from one another.

Each host that serves some services requires a DNS entry that has to be
published to the clients (from some DNS server). Depending on the environment
this can become quite tedious as DNS resolution on the internet and intranets
might have to deviate. This applies to intra-cluster communication as well, as
seen with the idpbuilder's platform. In this case, external DNS resolution has
to be replicated within the cluster to be able to use the same URLs to address,
for example, gitea.

The following example depicts DNS-only routing. By defining separate DNS
entries for each service / subdomain, requests are resolved to the respective
servers. In theory, no additional infrastructure is necessary to route user
traffic to each service. However, as services are completely separated, other
infrastructure like authentication possibly has to be duplicated.

![Hostname-Routing](./hostname-routing.png)

When using hostname based routing, one does not have to set different IPs for
each hostname. Instead, having multiple DNS entries pointing to the same set of
IPs allows re-using existing infrastructure. As shown below, a reverse proxy is
able to forward requests to downstream services based on the `Host` request
parameter. This way a specific hostname can be forwarded to a defined service.

![Hostname-Routing-Proxy](./hostname-routing-proxy.png)

At the same time, one could imagine a multi-tenant system that differentiates
customer systems by name, e.g. `tenant-1.cool.system` and
`tenant-2.cool.system`. Configured as a wildcard-style domain, `*.cool.system`
could point to a reverse proxy that forwards requests to a tenant's instance of
a system, allowing re-use of central infrastructure while still hosting
separate systems per tenant.

The implicit dependency on DNS resolution generally makes this kind of routing
more complex and error-prone as changes to DNS server entries are not always
possible or modifiable by everyone. Also, local changes to your `/etc/hosts`
file are a constant pain and should be seen as a dirty hack. As mentioned
above, dynamic DNS solutions like `nip.io` are often helpful in this case.

### Conclusion

Path and hostname based routing are the two most common methods of HTTP traffic
routing. They can be used separately, but more often they are used in
conjunction. Due to HTTP's versatility, other forms of HTTP routing, for example
based on the `Content-Type` header, are also very common.

BIN content/en/docs/solution/tools/CNOE/idpbuilder/path-routing.png

@ -0,0 +1,5 @@
---
title: Included Backstage Templates
weight: 2
description: Here you will find information about backstage templates that are included in idpbuilder's ref-implementation
---

@ -0,0 +1,19 @@
+++
title = "Template for basic Argo Workflow"
weight = 4
+++

# Backstage Template for Basic Argo Workflow with Spark Job

This Backstage template YAML automates the creation of an Argo Workflow for Kubernetes that includes a basic Spark job, providing a convenient way to configure and deploy workflows involving data processing or machine learning jobs. Users can define key parameters, such as the application name and the path to the main Spark application file. The template creates the necessary Kubernetes resources, publishes the application code to a Gitea Git repository, registers the application in the Backstage catalog, and deploys it via ArgoCD for easy CI/CD management.

## Use Case

This template is designed for teams that need a streamlined approach to deploy and manage data processing or machine learning jobs using Spark within an Argo Workflow environment. It simplifies the deployment process and integrates the application with a CI/CD pipeline. The template performs the following:

- **Workflow and Spark Job Setup**: Defines a basic Argo Workflow and configures a Spark job using the provided application file path, ideal for data processing tasks.
- **Repository Setup**: Publishes the workflow configuration to a Gitea repository, enabling version control and easy updates to the job configuration.
- **ArgoCD Integration**: Creates an ArgoCD application to manage the Spark job deployment, ensuring continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage, making it easily discoverable and manageable through the Backstage catalog.

This template boosts productivity by automating steps required for setting up Argo Workflows and Spark jobs, integrating version control, and enabling centralized management and visibility, making it ideal for projects requiring efficient deployment and scalable data processing solutions.
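
The template file itself is not reproduced here; as a rough sketch, its skeleton might look like the following. The metadata name, parameter names, and the `publish:gitea` action are assumptions, not taken from the actual template:

```
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: basic-argo-workflow-spark    # assumed name
spec:
  parameters:
    - title: Application
      properties:
        name:
          type: string               # application name
        mainApplicationFile:
          type: string               # path to the main Spark application file
  steps:                             # step inputs omitted in this sketch
    - id: fetch
      action: fetch:template         # render the workflow manifests
    - id: publish
      action: publish:gitea          # assumed publish action for Gitea
    - id: register
      action: catalog:register       # add the component to the Backstage catalog
```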

@ -0,0 +1,19 @@
+++
title = "Template for basic kubernetes deployment"
weight = 4
+++

# Backstage Template for Kubernetes Deployment

This Backstage template YAML automates the creation of a basic Kubernetes Deployment, aimed at simplifying the deployment and management of applications in Kubernetes for the user. The template allows users to define essential parameters, such as the application’s name, and then creates and configures the Kubernetes resources, publishes the application code to a Gitea Git repository, and registers the application in the Backstage catalog for tracking and management.

## Use Case

The template is designed for teams needing a streamlined approach to deploy applications in Kubernetes while automatically configuring their CI/CD pipelines. It performs the following:

- **Deployment Creation**: A Kubernetes Deployment YAML is generated based on the provided application name, specifying a basic setup with an Nginx container (see the sketch below).
- **Repository Setup**: Publishes the deployment code in a Gitea repository, allowing for version control and future updates.
- **ArgoCD Integration**: Automatically creates an ArgoCD application for the deployment, facilitating continuous delivery and synchronization with Kubernetes.
- **Backstage Registration**: Registers the application in Backstage to make it discoverable and manageable via the Backstage catalog.

This template enhances productivity by automating several steps required for deployment, version control, and registration, making it ideal for projects where fast, consistent deployment and centralized management are required.
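
A generated Deployment of this kind plausibly resembles the following minimal manifest, where `my-app` stands in for the user-provided application name:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # substituted with the user-provided name
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx:stable   # the basic Nginx container the template sets up
          ports:
            - containerPort: 80
```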

content/en/docs/solution/tools/Crossplane/_index.md
@ -0,0 +1,4 @@
---
title: Crossplane
description: Crossplane is a tool to provision cloud resources. It can act as a backend for platform orchestrators as well.
---

@ -0,0 +1,764 @@
---
title: How to develop a crossplane kind provider
weight: 1
description: A provider-kind allows using crossplane locally
---

To support local development and usage of crossplane compositions, a crossplane provider is needed.
Every big hyperscaler already has support in crossplane (e.g. provider-gcp and provider-aws).

Each provider has two main parts: the provider config and the implementations of the cloud resources.

The provider config takes the credentials to log into the cloud provider and provides a token
(e.g. a kube config or even a service account) that the implementations can use to provision cloud resources.

The implementations of the cloud resources reflect each type of cloud resource; typical resources are:

- S3 Bucket
- Nodepool
- VPC
- GkeCluster

## Architecture of provider-kind

To apply the crossplane concepts, the provider-kind consists of two components: kindserver and provider-kind.

The kindserver is used to manage local kind clusters. It provides an HTTP REST interface to create, delete and get information about a running cluster, using an `Authorization` HTTP header field as a password:

![kindserver](kindserver.png)

The two properties to connect the provider-kind to the kindserver are the IP address and password of the kindserver. The IP address is required because the kindserver needs to be executed outside the kind cluster, directly on the local machine, as it needs to control
kind itself:

![kindprovider](kindprovider.png)

The provider-kind provides two crossplane elements, the `ProviderConfig` and `KindCluster` as the (only) cloud resource. The
`ProviderConfig` is configured with the IP address and password of the running kindserver. The `KindCluster` type is configured
to use the provided `ProviderConfig`. Kind clusters can be managed by adding and removing kubernetes manifests of type
`KindCluster`. The crossplane reconciliation loop uses the kindserver HTTP GET method to see if a new cluster needs to be
created by HTTP POST or removed by HTTP DELETE.

The password used by the `ProviderConfig` is configured as a kubernetes secret, while the kindserver IP address is configured
inside the `ProviderConfig` as the field `endpoint`.

When provider-kind has created a new cluster by processing a `KindCluster` manifest, the two providers which are used to deploy applications, provider-helm and provider-kubernetes, can be configured to use the `KindCluster`.

![kindproviders](kindproviders.png)

A crossplane composition can be created by chaining different providers and their objects. A composition is managed as a
custom resource definition and defined in a single file.

![kindcomposition](kindcomposition.png)

## Configuration

Two kubernetes manifests are defined by provider-kind: `ProviderConfig` and `KindCluster`. The third needed kubernetes
object is a secret.

The following inputs are needed when developing a provider-kind:

- the kindserver password, as a kubernetes secret
- endpoint, the IP address of the kindserver, as a detail of `ProviderConfig`
- kindConfig, the kind configuration file, as a detail of `KindCluster`

The following outputs arise:

- kubernetesVersion, the kubernetes version of a created kind cluster, as a detail of `KindCluster`
- internalIP, the IP address of a created kind cluster, as a detail of `KindCluster`
- readiness, as a detail of `KindCluster`
- the kube config of a created kind cluster, as a kubernetes secret reference of `KindCluster`

### Inputs

#### kindserver password

The kindserver password needs to be defined first. It is realized as a kubernetes secret and contains the password
which the kindserver has been configured with:

```
apiVersion: v1
data:
  credentials: MTIzNDU=
kind: Secret
metadata:
  name: kind-provider-secret
  namespace: crossplane-system
type: Opaque
```

#### endpoint

The IP address of the kindserver `endpoint` is configured in the provider-kind `ProviderConfig`. This config also references the kindserver password (`kind-provider-secret`):

```
apiVersion: kind.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: kind-provider-config
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kind-provider-secret
      key: credentials
  endpoint:
    url: https://172.18.0.1:7443/api/v1/kindserver
```

It is suggested that the kindserver runs on the IP of the docker host, so that all kind clusters can access it without extra routing.

#### kindConfig

The kind config is provided as the field `kindConfig` in each `KindCluster` manifest. The manifest also references the provider-kind `ProviderConfig` (`kind-provider-config` in the `providerConfigRef` field):

```
apiVersion: container.kind.crossplane.io/v1alpha1
kind: KindCluster
metadata:
  name: example-kind-cluster
spec:
  forProvider:
    kindConfig: |
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        kubeadmConfigPatches:
        - |
          kind: InitConfiguration
          nodeRegistration:
            kubeletExtraArgs:
              node-labels: "ingress-ready=true"
        extraPortMappings:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          protocol: TCP
      containerdConfigPatches:
      - |-
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"]
          endpoint = ["https://gitea.cnoe.localtest.me"]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
          insecure_skip_verify = true
  providerConfigRef:
    name: kind-provider-config
  writeConnectionSecretToRef:
    namespace: default
    name: kind-connection-secret
```

After the kind cluster has been created, its kube config is stored in a kubernetes secret `kind-connection-secret`, which `writeConnectionSecretToRef` references.

### Outputs

The three outputs can be retrieved by getting the `KindCluster` manifest after the cluster has been created. The `KindCluster` is
available for reading even before the cluster has been created, but the three output fields are empty until then. The ready state
will also switch from `false` to `true` once the cluster has been created.

#### kubernetesVersion, internalIP and readiness

These fields can be retrieved with a standard kubectl get command:

```
$ kubectl get kindclusters kindcluster-fw252 -o yaml
...
status:
  atProvider:
    internalIP: 192.168.199.19
    kubernetesVersion: v1.31.0
  conditions:
  - lastTransitionTime: "2024-11-12T18:22:39Z"
    reason: Available
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-11-12T18:21:38Z"
    reason: ReconcileSuccess
    status: "True"
    type: Synced
```

#### kube config

The kube config is stored in a kubernetes secret (`kind-connection-secret`) which can be accessed after the cluster has been
created:

```
$ kubectl get kindclusters kindcluster-fw252 -o yaml
...
  writeConnectionSecretToRef:
    name: kind-connection-secret
    namespace: default
...

$ kubectl get secret kind-connection-secret
NAME                     TYPE                                DATA   AGE
kind-connection-secret   connection.crossplane.io/v1alpha1   2      107m
```

The API endpoint of the new cluster (`endpoint`) and its kube config (`kubeconfig`) are stored in that secret. These values are set in
the `Observe` function of the kind controller of provider-kind, using crossplane's managed
`ExternalObservation` structure.

## The reconciler loop of a crossplane provider

The reconciler loop is the heart of every crossplane provider. As it is asynchronously coupled, it is best described in words:

Internally, the `Connect` function gets triggered in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`
first, to set up the provider and configure it with the kindserver password and the IP address of the kindserver.

After provider-kind has been configured with the kindserver secret and its `ProviderConfig`, the provider is ready to
be activated by applying a `KindCluster` manifest to kubernetes.

When the user applies a new `KindCluster` manifest, an observe loop is started. The provider regularly triggers the `Observe`
function of the controller. As nothing has been created yet, the controller will return
`managed.ExternalObservation{ResourceExists: false}` to signal that the kind cluster resource has not been created yet.
As there is a kindserver SDK available, the controller uses the `Get` function of the SDK to query the kindserver.

The `KindCluster` is already applied and can be retrieved with `kubectl get kindclusters`. As the cluster has not been
created yet, its readiness state is `false`.

In parallel, the `Create` function is triggered in the controller. This function has access to the desired kind config
`cr.Spec.ForProvider.KindConfig` and the name of the kind cluster `cr.ObjectMeta.Name`. It can now call the kindserver SDK to
create a new cluster with the given config and name. The create function is not supposed to run too long, therefore
it returns immediately in the case of provider-kind. The kindserver already knows the name of the new cluster and, even though it is
not ready yet, it will respond with a partial success.

The observe loop is triggered regularly in parallel. It will be triggered after the create call but before the kind cluster has been
created. Now it gets a step further: it learns from the kindserver that the cluster is already known, but has not
finished creating yet.

After the cluster has finished creating, the kindserver has all important information for provider-kind, that is,
the API server endpoint of the new cluster and its kube config. After another round of the observe loop, the controller
now gets the full set of information about the kind cluster (cluster ready, its API server endpoint and its kube config).
When this information has been received from the kindserver SDK in the form of a JSON file, the controller is able to signal successful
creation of the cluster. That is done by returning the following structure from inside the observe function:

```
return managed.ExternalObservation{
	ResourceExists:   true,
	ResourceUpToDate: true,
	ConnectionDetails: managed.ConnectionDetails{
		xpv1.ResourceCredentialsSecretEndpointKey:   []byte(clusterInfo.Endpoint),
		xpv1.ResourceCredentialsSecretKubeconfigKey: []byte(clusterInfo.KubeConfig),
	},
}, nil
```

Note that `managed.ConnectionDetails` will automatically write the API server endpoint and the kube config to the kubernetes
secret which `writeConnectionSecretToRef` of `KindCluster` points to.

The controller also sets the availability flag before returning, which marks the `KindCluster` as ready:

```
cr.Status.SetConditions(xpv1.Available())
```

Before returning, it will also set the information that is transferred into the fields of `KindCluster` which can be retrieved by a
`kubectl get`, namely the `kubernetesVersion` and the `internalIP` fields:

```
cr.Status.AtProvider.KubernetesVersion = clusterInfo.K8sVersion
cr.Status.AtProvider.InternalIP = clusterInfo.NodeIp
```

Now the `KindCluster` is set up completely, and when its data is retrieved by `kubectl get`, all data is available and its readiness
is set to `true`.

The observe loop continues to be called to enable drift detection. That detection is currently not implemented, but is
prepared for future implementations. If the observe function detected that the kind cluster with a given name is set
up with a kind config other than the desired one, the controller would call its `Update` function, which would
delete the currently running kind cluster and recreate it with the desired kind config.

When the user deletes the `KindCluster` manifest at a later stage, the `Delete` function of the controller is triggered
to call the kindserver SDK to delete the cluster with the given name. The observe loop will acknowledge that the cluster
is deleted successfully by retrieving `kind cluster not found` once the deletion has been successful. If not, the controller
will trigger the delete function in a loop as well, until the kind cluster has been deleted.

That completes the reconciler loop.

## kind API server IP address

Each newly created kind cluster has a practically random kubernetes API server endpoint. As the IP address of a new kind cluster
can't be determined before creation, the kindserver manages the API server field of the kind config. It maps all
kind cluster kubernetes API endpoints onto its own IP address, but on different ports. That guarantees that all kind
clusters can access the kubernetes API endpoints of all other kind clusters by using the docker host IP of the kindserver
itself. This is needed as the kube config hardcodes the kubernetes API server endpoint. By using the docker host IP,
but with different ports, every usage of a kube config from one kind cluster to another works successfully.

The management of the kind config in the kindserver is implemented in the `Post` function of the kindserver `main.go` file.

## Create the crossplane provider-kind

The official way of creating crossplane providers is to use the provider-template. Follow these steps to create
a new provider.

First, clone the provider-template. The commit ID at the time this howto was written is 2e0b022c22eb50a8f32de2e09e832f17161d7596.
Rename the new folder after cloning.

```
git clone https://github.com/crossplane/provider-template.git
mv provider-template provider-kind
cd provider-kind/
```

The information in the provided README.md is incomplete. Follow these steps to get it running:

> Please use bash for the next commands (`${type,,}` e.g. is not a mistake)

```
make submodules
export provider_name=Kind # Camel case, e.g. GitHub
make provider.prepare provider=${provider_name}
export group=container # lower case, e.g. core, cache, database, storage, etc.
export type=KindCluster # Camel case, e.g. Bucket, Database, CacheCluster, etc.
make provider.addtype provider=${provider_name} group=${group} kind=${type}
sed -i "s/sample/${group}/g" apis/${provider_name,,}.go
sed -i "s/mytype/${type,,}/g" internal/controller/${provider_name,,}.go
```

Patch the Makefile:

```
dev: $(KIND) $(KUBECTL)
	@$(INFO) Creating kind cluster
+	@$(KIND) delete cluster --name=$(PROJECT_NAME)-dev
	@$(KIND) create cluster --name=$(PROJECT_NAME)-dev
	@$(KUBECTL) cluster-info --context kind-$(PROJECT_NAME)-dev
-	@$(INFO) Installing Crossplane CRDs
-	@$(KUBECTL) apply --server-side -k https://github.com/crossplane/crossplane//cluster?ref=master
+	@$(INFO) Installing Crossplane
+	@helm install crossplane --namespace crossplane-system --create-namespace crossplane-stable/crossplane --wait
	@$(INFO) Installing Provider Template CRDs
	@$(KUBECTL) apply -R -f package/crds
	@$(INFO) Starting Provider Template controllers
```

Generate, build and execute the new provider-kind:

```
make generate
make build
make dev
```

Now it's time to add the required fields (internalIP, endpoint, etc.) to the spec fields in the go api sources found in:

- apis/container/v1alpha1/kindcluster_types.go
- apis/v1alpha1/providerconfig_types.go

The file `apis/kind.go` may also be modified. The word `sample` can be replaced with `container` in our case.

When that's done, the yaml specifications need to be modified to also include the required fields (internalIP, endpoint, etc.).

Next, a kindserver SDK can be implemented. That is a helper class which encapsulates the get, create and delete HTTP calls to the kindserver. Connection info (kindserver IP address and password) is stored by the constructor.

After that, we can add the usage of the kindclient SDK in the kindcluster controller `internal/controller/kindcluster/kindcluster.go`.

Finally, we can update the `Makefile` to better handle the primary kind cluster creation and add a cluster role binding
so that crossplane can access the `KindCluster` objects. Examples and updating the README.md will finish the development.

All these steps are documented in: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/provider-kind/pulls/1

## Publish the provider-kind to a user-defined docker registry

Every provider-kind release needs to be tagged first in the git repository:

```
git tag v0.1.0
git push origin v0.1.0
```

Next, make sure docker is logged in to the target registry:

```
docker login forgejo.edf-bootstrap.cx.fg1.ffm.osc.live
```

Now it's time to specify the target registry, build the provider-kind for ARM64 and AMD64 CPU architectures, and publish it to the target registry:

```
XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
```

The parameter `BRANCH_NAME=main` is needed when the tagging and publishing happens from another branch. The version of the provider-kind is taken from the tag name. The output of the make call then ends like this:

```
$ XPKG_REG_ORGS_NO_PROMOTE="" XPKG_REG_ORGS="forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz" make build.all publish BRANCH_NAME=main
...
14:09:19 [ .. ] Skipping image publish for docker.io/provider-kind:v0.1.0
Publish is deferred to xpkg machinery
14:09:19 [ OK ] Image publish skipped for docker.io/provider-kind:v0.1.0
14:09:19 [ .. ] Pushing package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
xpkg pushed to forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
14:10:19 [ OK ] Pushed package forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
```

After publishing, the provider-kind can be installed in-cluster similar to other providers like
provider-helm and provider-kubernetes. To install it, apply the following manifest:

```
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-kind
spec:
  package: forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0
```

The output of `kubectl get providers`:

```
$ kubectl get providers
NAME                  INSTALLED   HEALTHY   PACKAGE                                                                             AGE
provider-helm         True        True      xpkg.upbound.io/crossplane-contrib/provider-helm:v0.19.0                            38m
provider-kind         True        True      forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/richardrobertreitz/provider-kind:v0.1.0   39m
provider-kubernetes   True        True      xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.15.0                      38m
```

The provider-kind can now be used.

## Crossplane Composition `edfbuilder`

Together with the implemented provider-kind and its config, a composition can be created which creates kind clusters and
deploys helm and kubernetes objects into the newly created cluster.

A composition is realized as a custom resource definition (CRD) consisting of three parts:

- A definition
- A composition
- One or more deployments of the composition

### definition.yaml

The definition of the CRD will most probably contain one additional field, the ArgoCD repository URL, to easily select
the stacks which should be deployed:

```
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: edfbuilders.edfbuilder.crossplane.io
spec:
  connectionSecretKeys:
    - kubeconfig
  group: edfbuilder.crossplane.io
  names:
    kind: EDFBuilder
    listKind: EDFBuilderList
    plural: edfbuilders
    singular: edfbuilders
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          description: A EDFBuilder is a composite resource that represents a K8S Cluster with edfbuilder Installed
          type: object
          properties:
            spec:
              type: object
              properties:
                repoURL:
                  type: string
                  description: URL to ArgoCD stack of stacks repo
              required:
                - repoURL
```

### composition.yaml

This is a shortened version of the file `examples/composition_deprecated/composition.yaml`. It combines a `KindCluster` with
deployments of provider-helm and provider-kubernetes. Note that the `ProviderConfig` and the kindserver secret have already been
applied to kubernetes (by the Makefile) before applying this composition.

```
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: edfbuilders.edfbuilder.crossplane.io
spec:
  writeConnectionSecretsToNamespace: crossplane-system
  compositeTypeRef:
    apiVersion: edfbuilder.crossplane.io/v1alpha1
    kind: EDFBuilder
  resources:

    ### kindcluster
    - base:
        apiVersion: container.kind.crossplane.io/v1alpha1
        kind: KindCluster
        metadata:
          name: example
        spec:
          forProvider:
            kindConfig: |
              kind: Cluster
              apiVersion: kind.x-k8s.io/v1alpha4
              nodes:
              - role: control-plane
                kubeadmConfigPatches:
                - |
                  kind: InitConfiguration
                  nodeRegistration:
                    kubeletExtraArgs:
                      node-labels: "ingress-ready=true"
                extraPortMappings:
                - containerPort: 80
                  hostPort: 80
                  protocol: TCP
                - containerPort: 443
                  hostPort: 443
                  protocol: TCP
              containerdConfigPatches:
              - |-
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:443"]
                  endpoint = ["https://gitea.cnoe.localtest.me"]
                [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
                  insecure_skip_verify = true
          providerConfigRef:
            name: example-provider-config
          writeConnectionSecretToRef:
            namespace: default
            name: my-connection-secret

    ### helm provider config
    - base:
        apiVersion: helm.crossplane.io/v1beta1
        kind: ProviderConfig
        spec:
          credentials:
            source: Secret
            secretRef:
              namespace: default
              name: my-connection-secret
              key: kubeconfig
      patches:
        - fromFieldPath: metadata.name
          toFieldPath: metadata.name
      readinessChecks:
        - type: None

    ### ingress-nginx
    - base:
        apiVersion: helm.crossplane.io/v1beta1
        kind: Release
        metadata:
          annotations:
            crossplane.io/external-name: ingress-nginx
        spec:
          rollbackLimit: 99999
          forProvider:
            chart:
              name: ingress-nginx
              repository: https://kubernetes.github.io/ingress-nginx
              version: 4.11.3
            namespace: ingress-nginx
            values:
              controller:
                updateStrategy:
                  type: RollingUpdate
                  rollingUpdate:
                    maxUnavailable: 1
                hostPort:
                  enabled: true
                terminationGracePeriodSeconds: 0
                service:
                  type: NodePort
                watchIngressWithoutClass: true

                nodeSelector:
                  ingress-ready: "true"
                tolerations:
                  - key: "node-role.kubernetes.io/master"
                    operator: "Equal"
                    effect: "NoSchedule"
                  - key: "node-role.kubernetes.io/control-plane"
                    operator: "Equal"
                    effect: "NoSchedule"

                publishService:
                  enabled: false
                extraArgs:
                  publish-status-address: localhost
                  # added for idpbuilder
                  enable-ssl-passthrough: ""

                # added for idpbuilder
                allowSnippetAnnotations: true

                # added for idpbuilder
                config:
                  proxy-buffer-size: 32k
                  use-forwarded-headers: "true"
      patches:
        - fromFieldPath: metadata.name
          toFieldPath: spec.providerConfigRef.name

    ### kubernetes provider config
    - base:
        apiVersion: kubernetes.crossplane.io/v1alpha1
        kind: ProviderConfig
        spec:
          credentials:
            source: Secret
            secretRef:
              namespace: default
              name: my-connection-secret
              key: kubeconfig
      patches:
        - fromFieldPath: metadata.name
          toFieldPath: metadata.name
      readinessChecks:
        - type: None

    ### kubernetes argocd stack of stacks application
    - base:
        apiVersion: kubernetes.crossplane.io/v1alpha2
        kind: Object
        spec:
          forProvider:
            manifest:
              apiVersion: argoproj.io/v1alpha1
              kind: Application
              metadata:
                name: edfbuilder
                namespace: argocd
                labels:
                  env: dev
              spec:
                destination:
                  name: in-cluster
                  namespace: argocd
                source:
                  path: registry
                  repoURL: 'https://gitea.cnoe.localtest.me/giteaAdmin/edfbuilder-shoot'
                  targetRevision: HEAD
                project: default
                syncPolicy:
                  automated:
                    prune: true
                    selfHeal: true
                  syncOptions:
                    - CreateNamespace=true
      patches:
        - fromFieldPath: metadata.name
          toFieldPath: spec.providerConfigRef.name
```

## Usage

Set these values to allow many kind clusters to run in parallel, if needed:

```
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512

# To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:
#   fs.inotify.max_user_watches = 524288
#   fs.inotify.max_user_instances = 512
```
Start provider-kind:
|
||||
|
||||
```
|
||||
make build
|
||||
kind delete clusters $(kind get clusters)
|
||||
kind create cluster --name=provider-kind-dev
|
||||
DOCKER_HOST_IP="$(docker inspect $(docker ps | grep kindest | awk '{ print $1 }' | head -n1) | jq -r .[0].NetworkSettings.Networks.kind.Gateway)" make dev
|
||||
```
|
||||
|
||||
Wait until debug output of the provider-kind is shown:
|
||||
|
||||
```
...
namespace/crossplane-system configured
secret/example-provider-secret created
providerconfig.kind.crossplane.io/example-provider-config created
14:49:50 [ .. ] Starting Provider Kind controllers
2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Starting metrics server
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfig"}
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "source": "kind source: *v1alpha1.ProviderConfigUsage"}
2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig"}
2024-11-12T14:49:54+01:00 INFO Starting EventSource {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "source": "kind source: *v1alpha1.KindCluster"}
2024-11-12T14:49:54+01:00 INFO Starting Controller {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster"}
2024-11-12T14:49:54+01:00 INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8080", "secure": false}
2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "providerconfig/providerconfig.kind.crossplane.io", "controllerGroup": "kind.crossplane.io", "controllerKind": "ProviderConfig", "worker count": 10}
2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}}
2024-11-12T14:49:54+01:00 INFO Starting workers {"controller": "managed/kindcluster.container.kind.crossplane.io", "controllerGroup": "container.kind.crossplane.io", "controllerKind": "KindCluster", "worker count": 10}
2024-11-12T14:49:54+01:00 INFO KubeAPIWarningLogger metadata.finalizers: "in-use.crossplane.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers
2024-11-12T14:49:54+01:00 DEBUG provider-kind Reconciling {"controller": "providerconfig/providerconfig.kind.crossplane.io", "request": {"name":"example-provider-config"}}
```

Start kindserver:

See kindserver/README.md.
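
For a quick smoke test of the kindserver API (the endpoints and the required `Authorization` header are shown in the kindserver interface diagram further below; the host and token here are placeholders, not real values):

```
curl -X POST -H "Authorization: $KINDSERVER_PASSWORD" \
  "https://kindserver.example.local/api/v1/kindserver/my-cluster"
```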

Once the kindserver is running, apply the example definition, composition, and cluster claim:

```
cd examples/composition_deprecated
kubectl apply -f definition.yaml
kubectl apply -f composition.yaml
kubectl apply -f cluster.yaml
```

List the created resources and wait until the new cluster is up, then switch back to the primary cluster:

```
kubectl config use-context kind-provider-kind-dev
```
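
One way to watch everything the composition creates is Crossplane's `managed` resource category (resource names are generated, so they will differ per run):

```
kubectl get managed
```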

Show edfbuilder compositions:

```
kubectl get edfbuilders
NAME          SYNCED   READY   COMPOSITION                            AGE
kindcluster   True     True    edfbuilders.edfbuilder.crossplane.io   4m45s
```

Show kind clusters:

```
kubectl get kindclusters
NAME                READY   SYNCED   EXTERNAL-NAME       INTERNALIP       VERSION   AGE
kindcluster-wlxrt   True    True     kindcluster-wlxrt   192.168.199.19   v1.31.0   5m12s
```

Show helm deployments:

```
kubectl get releases
NAME                CHART           VERSION   SYNCED   READY   STATE      REVISION   DESCRIPTION        AGE
kindcluster-29dgf   ingress-nginx   4.11.3    True     True    deployed   1          Install complete   5m32s
kindcluster-w2dxl   forgejo         10.0.2    True     True    deployed   1          Install complete   5m32s
kindcluster-x8x9k   argo-cd         7.6.12    True     True    deployed   1          Install complete   5m32s
```

Show kubernetes objects:

```
kubectl get objects
NAME                KIND          PROVIDERCONFIG   SYNCED   READY   AGE
kindcluster-8tbv8   ConfigMap     kindcluster      True     True    5m50s
kindcluster-9lwc9   ConfigMap     kindcluster      True     True    5m50s
kindcluster-9sgmd   Deployment    kindcluster      True     True    5m50s
kindcluster-ct2h7   Application   kindcluster      True     True    5m50s
kindcluster-s5knq   ConfigMap     kindcluster      True     True    5m50s
```

Open the composition in VS Code: `examples/composition_deprecated/composition.yaml`

## What is missing

Currently missing is the third and final part: the imperative steps that still need to be implemented:

- creation of TLS certificates and the giteaAdmin password
- creation of a Forgejo repository for the stacks
- uploading the stacks to the Forgejo repository

Also missing is the wiring between the definition field (the Argo CD repo URL) and the composition's internal connections via function-patch-and-transform; a rough sketch of such a patch follows below.
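
A minimal sketch of what that wiring could look like, assuming the XRD exposes a `repoURL` field (the field name and paths are illustrative assumptions, not the final design):

```
      patches:
        # assumption: the composite exposes the Argo CD repo URL as spec.repoURL
        - type: FromCompositeFieldPath
          fromFieldPath: spec.repoURL
          toFieldPath: spec.forProvider.manifest.spec.source.repoURL
```
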
@ -0,0 +1,72 @@

[draw.io diagram: resource chain of the Composition. provider-kind: KindCluster → ProviderConfig → Secret ("creates kind cluster"); provider-helm: Release → ProviderConfig → KindCluster ("deploys argocd"); provider-kubernetes: Object → ProviderConfig → KindCluster ("deploys app of apps").]

After Width: | Height: | Size: 40 KiB |

@ -0,0 +1,31 @@

[draw.io diagram: kindserver HTTP interface. Endpoints: GET /api/v1/kindserver/{clustername}, DELETE /api/v1/kindserver/{clustername}, POST /api/v1/kindserver/{clustername}; required HTTP header: Authorization.]

After Width: | Height: | Size: 19 KiB |

@ -0,0 +1,49 @@

[draw.io diagram: kindserver running on the local host (has password, has IP) connected to crossplane/provider-kind running inside the kind cluster (uses password, uses IP).]

After Width: | Height: | Size: 26 KiB |

@ -0,0 +1,71 @@

[draw.io diagram: kubernetes objects. provider-kind ProviderConfig secret (password 12345) ← provider-kind ProviderConfig (endpoint 172.18.0.1) ← provider-kind KindCluster (writes connection secret); provider-helm ProviderConfig (deploys to KindCluster) referenced by the argocd, forgejo, and ingress-nginx releases.]

After Width: | Height: | Size: 35 KiB |