refactor(ipceicis-460-cnoe-deep-dive): top-level documentation hierarchy refactored into three entries: concepts, solution, project
This commit is contained in:
parent a847eea426
commit 4ed21ee54d

30 changed files with 56 additions and 23 deletions
8
content/en/docs/solution/_index.md
Normal file
@@ -0,0 +1,8 @@
---
title: Solution
weight: 2
description: The underlying platforming concepts of the EDF solution, the solution domain
---

All output the project created: design, building blocks, results, showcases, artifacts
@@ -0,0 +1,49 @@
+++
title = "Existing Backstage Plugins"
weight = 4
+++

1. **Catalog**:
   - Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.

2. **Docs**:
   - Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.

3. **API Docs**:
   - Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.

4. **TechDocs**:
   - A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.

5. **Scaffolder**:
   - Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.

6. **CI/CD**:
   - Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.

7. **Metrics**:
   - Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.

8. **Snyk**:
   - Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.

9. **SonarQube**:
   - Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.

10. **GitHub**:
    - Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.

11. **CircleCI**:
    - Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.

12. **Kubernetes**:
    - Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.

13. **Cloud**:
    - Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.

14. **OpenTelemetry**:
    - Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.

15. **Lighthouse**:
    - Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.
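The Catalog plugin listed above is driven by entity descriptor files committed alongside each service. A minimal `catalog-info.yaml` looks roughly like this (the component name, description, and owner are illustrative):

```yaml
# Minimal Backstage catalog entity descriptor (illustrative names)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-service
  description: A hypothetical service registered in the catalog
spec:
  type: service
  lifecycle: production
  owner: team-platform
```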
@@ -0,0 +1,24 @@
+++
title = "Backstage Description"
weight = 4
+++

Backstage by Spotify can be seen as a Platform Portal. It is an open platform for building and managing internal developer tools, providing a unified interface for accessing various tools and resources within an organization.

Key features of Backstage as a Platform Portal:

- **Tool Integration**: Backstage allows for the integration of various tools used in the development process, such as CI/CD, version control systems, monitoring, and others, into a single interface.
- **Service Management**: It offers the ability to register and manage services and microservices, as well as monitor their status and performance.
- **Documentation and Learning Materials**: Backstage includes capabilities for storing and organizing documentation, making it easier for developers to access information.
- **Golden Paths**: Backstage supports the concept of "Golden Paths," enabling teams to follow recommended practices for development and tool usage.
- **Modularity and Extensibility**: The platform allows for the creation of plugins, enabling users to customize and extend Backstage's functionality to fit their organization's needs.

Backstage provides developers with centralized and convenient access to essential tools and resources, making it an effective solution for supporting Platform Engineering and developing an internal platform portal.
9
content/en/docs/solution/tools/Backstage/_index.md
Normal file
@@ -0,0 +1,9 @@
+++
title = "Backstage"
weight = 2
[params]
author = 'evgenii.dominov@telekom.de'
date = '2024-09-36'
+++

Here you will find information about Backstage, its plugins, and usage tutorials.
67
content/en/docs/solution/tools/CNOE-competitors/_index.md
Normal file
@@ -0,0 +1,67 @@
+++
title = "Analysis of CNOE competitors"
weight = 1
+++

## Kratix

Kratix is a Kubernetes-native framework that helps platform engineering teams automate the provisioning and management of infrastructure and services through custom-defined abstractions called Promises. It allows teams to extend Kubernetes functionality and provide resources in a self-service manner to developers, streamlining the delivery and management of workloads across environments.

### Concepts

Key concepts of Kratix:

- **Workload**: An abstraction representing any application or service that needs to be deployed within the infrastructure. It defines the requirements and dependent resources necessary to execute this task.
- **Promise**: A "Promise" is a ready-to-use infrastructure or service package. Promises allow developers to request specific resources (such as databases, storage, or computing power) through the standard Kubernetes interface. It's similar to an operator in Kubernetes but more universal and flexible.

Kratix simplifies the development and delivery of applications by automating the provisioning and management of infrastructure and resources through simple Kubernetes APIs.
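Once a Promise is installed, a developer requests the resource by applying a plain custom resource through the Kubernetes API. A sketch of what such a request might look like (the API group, kind, and spec fields are defined entirely by the Promise; all names here are illustrative):

```yaml
# Illustrative developer-side request against an installed "database" Promise
apiVersion: example.kratix.io/v1alpha1   # API group comes from the Promise definition
kind: Database                           # kind comes from the Promise definition
metadata:
  name: team-a-db
spec:
  size: small
```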
### Pros of Kratix:

- Resource provisioning automation. Kratix simplifies infrastructure creation for developers through the abstraction of "Promises." This means developers can simply request the necessary resources (like databases, message queues) without dealing with the intricacies of infrastructure management.

- Flexibility and adaptability. Platform teams can customize and adapt Kratix to specific needs by creating custom Promises for various services, allowing the infrastructure to meet the specific requirements of the organization.

- Unified resource request interface. Developers can use a single API (Kubernetes) to request resources, simplifying interaction with infrastructure and reducing complexity when working with different tools and systems.

### Cons of Kratix:

- Setup complexity. Although Kratix offers great flexibility, it can also lead to more complex setup and platform management processes. Creating custom Promises and configuring their behavior requires time and effort.

- Kubernetes dependency. Kratix relies on Kubernetes, which makes it less applicable in environments that don't use Kubernetes or containerization technologies. It might also lead to integration challenges if an organization uses other solutions.

- Limited ecosystem. Kratix doesn't have as mature an ecosystem as some other infrastructure management solutions (e.g., Terraform, Pulumi). This may limit the availability of ready-made solutions and tools, increasing the amount of manual work when implementing Kratix.

## Humanitec

Humanitec is an Internal Developer Platform (IDP) that helps platform engineering teams automate the provisioning and management of infrastructure and services through dynamic configuration and environment management.

It allows teams to extend their infrastructure capabilities and provide resources in a self-service manner to developers, streamlining the deployment and management of workloads across various environments.

### Concepts

Key concepts of Humanitec:

- **Application Definition**: An abstraction where developers define their application, including its services, environments, and dependencies. It abstracts away infrastructure details, allowing developers to focus on building and deploying their applications.

- **Dynamic Configuration Management**: Humanitec automatically manages the configuration of applications and services across multiple environments (e.g., development, staging, production). It ensures consistency and alignment of configurations as applications move through different stages of deployment.

Humanitec simplifies the development and delivery process by providing self-service deployment options while maintaining centralized governance and control for platform teams.

### Pros of Humanitec:

- Resource provisioning automation. Humanitec automates infrastructure and environment provisioning, allowing developers to focus on building and deploying applications without worrying about manual configuration.

- Dynamic environment management. Humanitec manages application configurations across different environments, ensuring consistency and reducing manual configuration errors.

- Golden Paths. Best-practice workflows and processes that guide developers through infrastructure provisioning and application deployment. This ensures consistency and reduces cognitive load by providing a set of recommended practices.

- Unified resource management interface. Developers can use Humanitec's interface to request resources and deploy applications, reducing complexity and improving the development workflow.

### Cons of Humanitec:

- Commercial licensing. Humanitec is commercially licensed software.

- Integration challenges. Humanitec's dependency on specific cloud-native environments can create challenges for organizations with diverse infrastructures or those using legacy systems.

- Cost. Depending on usage, Humanitec might introduce additional costs related to the implementation of an Internal Developer Platform, especially for smaller teams.

- Harder to customize.
15
content/en/docs/solution/tools/_index.md
Normal file
@@ -0,0 +1,15 @@
---
title: Tools
linkTitle: Tools
menu: {main: {weight: 20}}
weight: 4
---

This section contains information about the tools used to implement the Developer Framework.
6
content/en/docs/solution/tools/idpbuilder/_index.md
Normal file
@@ -0,0 +1,6 @@
+++
title = "idpbuilder"
weight = 3
+++

Here you will find information about idpbuilder installation and usage.
346
content/en/docs/solution/tools/idpbuilder/installation/_index.md
Normal file
@@ -0,0 +1,346 @@
+++
title = "Installation of idpbuilder"
weight = 1
+++

## Local installation with KIND Kubernetes

The idpbuilder uses KIND as its Kubernetes cluster. It is suggested to use a virtual machine for the installation: MMS Linux clients are unable to run KIND natively on the local machine because of network problems; pods, for example, can't connect to the internet.

Windows and Mac users already utilize a virtual machine for the Docker Linux environment.
### Prerequisites

- Docker Engine
- Go
- kubectl
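A quick sanity check that the prerequisites are available on the PATH (a sketch; adjust the tool list to your environment):

```shell
# Report any prerequisite that is not installed
for tool in docker go kubectl; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```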
### Build process

For building idpbuilder the source code needs to be downloaded and compiled:

```
git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build
```

The idpbuilder binary will be created in the current directory.

### Start idpbuilder

To start the idpbuilder binary execute the following command:

```
./idpbuilder create --use-path-routing --log-level debug --package-dir https://github.com/cnoe-io/stacks//ref-implementation
```
### Logging into ArgoCD

At the end of the idpbuilder execution a link to the installed ArgoCD is shown. The credentials for access can be obtained by executing:

```
./idpbuilder get secrets
```
### Logging into KIND

A Kubernetes config is created in the default location `$HOME/.kube/config`. Careful management of the Kubernetes config is recommended so that access to other clusters, such as the OSC, is not unintentionally deleted.
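One simple way to protect existing cluster access is to back up the config before running idpbuilder (a sketch; the path is the default location mentioned above):

```shell
# Back up the current kubeconfig before idpbuilder modifies it
KUBECONFIG_FILE="${KUBECONFIG:-$HOME/.kube/config}"
if [ -f "$KUBECONFIG_FILE" ]; then
  cp "$KUBECONFIG_FILE" "$KUBECONFIG_FILE.backup"
fi
```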
To show all running KIND nodes execute:

```
kubectl get nodes -o wide
```

To see all running pods:

```
kubectl get pods -o wide
```

### Delete the idpbuilder KIND cluster

The cluster can be deleted by executing:

```
idpbuilder delete cluster
```
## Remote installation into a bare metal Kubernetes instance

CNOE provides two implementations of an IDP:

- Amazon AWS implementation
- KIND implementation

Neither is usable on bare metal or an OSC instance. The Amazon implementation is complex and makes use of Terraform, which is currently supported by neither bare metal nor OSC. Therefore the KIND implementation is used and customized to support the idpbuilder installation. The idpbuilder also performs some network magic which needs to be replicated.

Several prerequisites have to be provided to support the idpbuilder on bare metal or the OSC:

- Kubernetes dependencies
- Network dependencies
- Changes to the idpbuilder
### Prerequisites

Talos Linux is chosen for a bare metal Kubernetes instance.

- talosctl
- Go
- Docker Engine
- kubectl
- kustomize
- helm
- nginx

As soon as the idpbuilder works correctly on bare metal, the next step is to apply it to an OSC instance.

#### Add *.cnoe.localtest.me to hosts file

Append these lines to `/etc/hosts`:

```
127.0.0.1 gitea.cnoe.localtest.me
127.0.0.1 cnoe.localtest.me
```
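The entries can also be appended from the shell. A sketch that skips entries which already exist (run as root; the target file is parameterized so the snippet can be tried on a copy first):

```shell
# Append the CNOE test entries to the hosts file if missing (run as root)
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
for host in gitea.cnoe.localtest.me cnoe.localtest.me; do
  # anchored match so gitea.cnoe.localtest.me does not shadow cnoe.localtest.me
  grep -q " $host\$" "$HOSTS_FILE" || echo "127.0.0.1 $host" >> "$HOSTS_FILE"
done
```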
#### Install nginx and configure it

Install nginx by executing:

```
sudo apt install nginx
```

Replace `/etc/nginx/sites-enabled/default` with the following content:

```
server {
    listen 8443 ssl default_server;
    listen [::]:8443 ssl default_server;

    include snippets/snakeoil.conf;

    location / {
        proxy_pass http://10.5.0.20:80;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Start nginx by executing:

```
sudo systemctl enable nginx
sudo systemctl restart nginx
```
#### Building idpbuilder

For building idpbuilder the source code needs to be downloaded and compiled:

```
git clone https://github.com/cnoe-io/idpbuilder.git
cd idpbuilder
go build
```

The idpbuilder binary will be created in the current directory.
#### Configure VS Code launch settings

Open the idpbuilder folder in VS Code:

```
code .
```

Create a new launch configuration and add the `"args"` parameter to it:

```
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch Package",
            "type": "go",
            "request": "launch",
            "mode": "auto",
            "program": "${fileDirname}",
            "args": ["create", "--use-path-routing", "--package", "https://github.com/cnoe-io/stacks//ref-implementation"]
        }
    ]
}
```
#### Create the Talos bare metal Kubernetes instance

By default, Talos creates Docker containers, similar to KIND. Create the cluster by executing:

```
talosctl cluster create
```
#### Install local path provisioning (storage)

```
mkdir -p localpathprovisioning
cd localpathprovisioning
cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/local-path-provisioner/deploy?ref=v0.0.26
patches:
  - patch: |-
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: local-path-config
        namespace: local-path-storage
      data:
        config.json: |-
          {
            "nodePathMap":[
              {
                "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths":["/var/local-path-provisioner"]
              }
            ]
          }
  - patch: |-
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: local-path
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
  - patch: |-
      apiVersion: v1
      kind: Namespace
      metadata:
        name: local-path-storage
        labels:
          pod-security.kubernetes.io/enforce: privileged
EOF
kustomize build | kubectl apply -f -
rm kustomization.yaml
cd ..
rmdir localpathprovisioning
```
#### Install an external load balancer

```
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
sleep 50

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.5.0.20-10.5.0.130
EOF

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool
EOF
```
#### Install an ingress controller which uses the external load balancer

```
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
sleep 30
```
### Execute idpbuilder

#### Modify the idpbuilder source code

Edit the function `Run` in `pkg/build/build.go` and comment out the creation of the KIND cluster:

```
/*setupLog.Info("Creating kind cluster")
if err := b.ReconcileKindCluster(ctx, recreateCluster); err != nil {
    return err
}*/
```

Compile the idpbuilder:

```
go build
```

#### Start idpbuilder

Then, in VS Code, switch to `main.go` in the root directory of the idpbuilder and start debugging.
#### Logging into ArgoCD

At the end of the idpbuilder execution a link to the installed ArgoCD is shown. The credentials for access can be obtained by executing:

```
./idpbuilder get secrets
```
#### Logging into Talos cluster

A Kubernetes config is created in the default location `$HOME/.kube/config`. Careful management of the Kubernetes config is recommended so that access to other clusters, such as the OSC, is not unintentionally deleted.

To show all running Talos nodes execute:

```
kubectl get nodes -o wide
```

To see all running pods:

```
kubectl get pods -o wide
```

#### Delete the idpbuilder Talos cluster

The cluster can be deleted by executing:

```
talosctl cluster destroy
```
### TODOs for running idpbuilder on bare metal or OSC

Required:

- Add *.cnoe.localtest.me to the Talos cluster DNS, pointing to the IP address of the host device that runs nginx.

- Create an SSL certificate with `cnoe.localtest.me` as the common name. Edit the nginx config to load this certificate. Configure idpbuilder to distribute this certificate instead of the one it distributes by default.
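The certificate step could look roughly like this (a sketch; file names and validity period are illustrative, and the wildcard SAN additionally covers *.cnoe.localtest.me):

```shell
# Create a self-signed certificate with cnoe.localtest.me as common name
# (requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout cnoe.key -out cnoe.crt \
  -subj "/CN=cnoe.localtest.me" \
  -addext "subjectAltName=DNS:cnoe.localtest.me,DNS:*.cnoe.localtest.me"
```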
Optimizations:

- Implement an idpbuilder uninstall. This is especially important when working on the OSC instance.

- Remove or configure gitea.cnoe.localtest.me; it does not seem to work even in the local idpbuilder installation with KIND.

- Improvements to the idpbuilder to support Kubernetes instances other than KIND. This can either be done by parametrization or by utilizing Terraform / OpenTofu or Crossplane.