feat(otc): Added OTC overview and intro to deployments

This commit is contained in:
Patrick Sy 2025-12-18 14:24:56 +01:00
parent 48a9eed862
commit ad0052c0a7
Signed by: Patrick.Sy
GPG key ID: DDDC8EC51823195E
7 changed files with 325 additions and 58 deletions


@@ -1,8 +0,0 @@
---
title: Deploying to OTC
linkTitle: Deploying to OTC
weight: 100
description: TODO
---
Patrick's page


@@ -0,0 +1,67 @@
---
title: Deploying to OTC
linkTitle: Deploying to OTC
weight: 100
description: >
Open Telekom Cloud as deployment and infrastructure target
---
## Overview
Open Telekom Cloud (OTC) is a cloud platform offering from Deutsche Telekom
that provides GDPR-compliant cloud services. It is based on OpenStack.
## Key Features
- Managed Kubernetes
- Managed services, including:
  - Databases
    - RDS PostgreSQL
    - Elasticsearch
  - S3-compatible storage
  - DNS management
- Backup & restore of Kubernetes volumes and managed services
## Purpose in EDP
OTC hosts the core infrastructure for the primary, public EDP instance and
serves as a test bed for Kubernetes-based workloads that will eventually be
deployed to EdgeConnect.
Service components such as Forgejo, Grafana, Garm, and Coder are deployed on
OTC Kubernetes, using managed services for databases and storage to reduce the
setup and maintenance burden on the team.
Services and workloads are primarily provisioned using Terraform.
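As a sketch of what such Terraform provisioning can look like, the following
configures the OpenTelekomCloud provider and a managed RDS PostgreSQL instance.
The argument values (flavor, network IDs, zones, versions) are illustrative
placeholders, not EDP's real settings; the actual modules live in the
`infra-catalogue` repository.

```terraform
terraform {
  required_providers {
    opentelekomcloud = {
      source = "opentelekomcloud/opentelekomcloud"
    }
  }
}

provider "opentelekomcloud" {
  # Credentials are typically supplied via OS_* environment variables.
  auth_url = "https://iam.eu-de.otc.t-systems.com/v3"
}

# Illustrative managed PostgreSQL database; values are placeholders.
resource "opentelekomcloud_rds_instance_v3" "edp_db" {
  name              = "edp-postgres"
  availability_zone = ["eu-de-01"]
  flavor            = "rds.pg.c2.medium"
  vpc_id            = var.vpc_id
  subnet_id         = var.subnet_id
  security_group_id = var.security_group_id

  datastore {
    type    = "PostgreSQL"
    version = "16"
  }

  volume {
    type = "COMMON"
    size = 40
  }

  db {
    password = var.db_password
  }
}
```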
## Repository
**Code**:
- <https://edp.buildth.ing/DevFW/infra-catalogue> - Terraform modules of various
system components
- <https://edp.buildth.ing/DevFW/infra-deploy> - Runs deployment workflows and
  contains the base configuration of deployed system instances as well as
  various deployment scripts
- <https://edp.buildth.ing/DevFW-CICD/stacks> - Template of a system
  configuration divided into multiple deployable application stacks
- <https://edp.buildth.ing/DevFW-CICD/stacks-instances> - System configurations
of deployed instances hydrated from the `stacks` template
**Terraform Provider**:
- <https://registry.terraform.io/providers/opentelekomcloud/opentelekomcloud/latest/docs>
**Documentation**:
- <https://www.open-telekom-cloud.com/>
- <https://www.open-telekom-cloud.com/en/products-services/core-services/technical-documentation>
**OTC Console**:
- <https://console.otc.t-systems.com/console/>
TODO: EDP <-> managed services


@@ -0,0 +1,42 @@
---
title: EDP Environments in OTC
linkTitle: Environments
weight: 10
description: >
Instances of EDP are deployed into distinct OTC environments
---
## Architecture
Two distinct tenants are utilized within OTC to enforce a strict separation
between production (`prod`) and non-production (`non-prod`) environments. This
segregation ensures isolated resource management, security policies, and
operational workflows, preventing any potential cross-contamination or impact
between critical production systems and development/testing activities.
- **Production Tenant:** This tenant is exclusively dedicated to production
workloads and is bound to the primary domain `buildth.ing`. All
production-facing EDP instances and associated infrastructure reside within
this tenant, leveraging `buildth.ing` for public access and service discovery.
Within this tenant, each EDP instance is typically dedicated to a specific
customer. This design decision provides robust data separation, addressing
critical privacy and compliance requirements by isolating customer data. It
also allows for independent upgrade paths and maintenance windows for
individual customer instances, minimizing impact on other customers while
still benefiting from centralized management and deployment strategies. The
primary `edp.buildth.ing` instance and the `observability.buildth.ing`
instance are exceptions to this customer-dedicated model, serving foundational
platform roles.
- **Non-Production Tenant:** This tenant hosts all development, testing, and
staging environments, bound to the primary domain `t09.de`. This setup allows
for flexible experimentation and robust testing without impacting production
stability.
Each tenant is designed to accommodate multiple EDP instances. These instances
are dynamically provisioned and typically bound to subdomains of their tenant's
primary domain (e.g., `my-test.t09.de` for a non-production instance or
`customer-a.buildth.ing` for a production instance). This subdomain structure
provides logical separation and routing for individual EDP deployments.
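As an illustration of how such an instance subdomain could be bound in
Terraform, the sketch below creates an A record in the tenant's hosted zone.
The zone ID and ingress address are hypothetical placeholders; the real DNS
records are managed by the `infra-catalogue` modules.

```terraform
# Hypothetical A record for a non-production EDP instance subdomain.
resource "opentelekomcloud_dns_recordset_v2" "instance" {
  zone_id = var.t09_zone_id      # hosted zone for t09.de (placeholder)
  name    = "my-test.t09.de."
  type    = "A"
  ttl     = 300
  records = [var.ingress_ip]     # ingress of the instance's cluster
}
```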
<likec4-view view-id="otcTenants" browser="true"></likec4-view>


@@ -0,0 +1,109 @@
---
title: Managing Instances
linkTitle: Managing Instances
weight: 50
description: >
Managing instances of EDP deployed in OTC
---
## Deployment Strategy
The core of the deployment strategy revolves around the primary production EDP
instance, `edp.buildth.ing`. This instance acts as a centralized control plane
and code repository, storing all application code, configuration, and deployment
pipelines. It is generally responsible for orchestrating the deployment and
updates of most other EDP instances across both production and non-production
tenants, ensuring consistency and automation.
<likec4-view view-id="otcTenants" browser="true"></likec4-view>
### Circular Dependency Issue
However, a unique circular dependency exists with `observability.buildth.ing`.
While `edp.buildth.ing` manages most deployments, it cannot manage its _own_
lifecycle. Attempting to upgrade `edp.buildth.ing` itself through its own
mechanisms could lead to critical components becoming unavailable during the
process (e.g., internal container registries going offline), preventing the
system from restarting successfully. To mitigate this, `edp.buildth.ing` is
instead deployed and managed by `observability.buildth.ing`, with all its
essential deployment dependencies located within the observability environment.
Crucially, Git repositories and other resources such as container images are
synchronized from `edp.buildth.ing` to the observability instance, since
`observability.buildth.ing` does not produce artifacts itself. In turn,
`edp.buildth.ing` is responsible for deploying and managing
`observability.buildth.ing`. This carefully managed circular relationship
ensures that both critical components can be deployed and maintained without a
single point of failure arising from self-management.
## Configuration
This section outlines the processes for deploying and managing the configuration
of EDP instances within the Open Telekom Cloud (OTC) environment. Deployments
are primarily driven by Forgejo Actions and leverage Terraform for
infrastructure provisioning and lifecycle management, adhering to GitOps
principles.
### Deployment Workflows
The lifecycle management of EDP instances is orchestrated through a set of
dedicated workflows within the `infra-deploy` Forgejo
[repository](https://edp.buildth.ing/DevFW/infra-deploy), hosted on
`edp.buildth.ing`. These workflows are designed to emulate the standard
Terraform lifecycle, offering `plan`, `deploy`, and `destroy` operations.
- **Triggering Deployments**: Workflows are manually initiated and require
explicit configuration of an OTC tenant and an environment to accurately
target a specific system instance.
- **`plan` Workflow**:
- Executes a dry-run of the proposed deployment.
- Outputs the detailed `terraform plan`, showing all anticipated
infrastructure changes.
- Shows the diff of the configuration that would be applied to the
`stacks-instances` repository, reflecting changes derived from the `stacks`
repository.
- **`deploy` Workflow**:
- Utilized for both the initial creation of new EDP instances and subsequent
updates to existing deployments.
- For new instance creation, all required configuration fields must be
populated.
- **Important Considerations**:
- Configuration fields explicitly marked as "(INITIAL)" are foundational
and, once set during the initial deployment, cannot be altered through the
workflow without manual modification of the underlying Git configuration.
- Certain changes to the configuration may lead to extensive infrastructure
redeployments, which could potentially result in data loss if not
carefully managed and accompanied by appropriate backup strategies.
- **`destroy` Workflow**:
- Initiates the deprovisioning and complete removal of an existing EDP system
instance from the OTC environment.
- While the infrastructure is torn down, the corresponding configuration entry
is intentionally retained within the `stacks-instances` repository for
historical tracking or potential re-creation.
<a href="../workflow-deploy-form.png" target="_blank">
<img alt="Deploy workflow form" src="../workflow-deploy-form.png" style="max-width: 300px;" />
</a>
### Configuration Management
The configuration for deployed EDP instances is systematically managed across
several Git repositories to ensure version control, traceability, and adherence
to GitOps practices.
- **Base Configuration**: A foundational configuration entry for each deployed
system instance is stored directly within the `infra-deploy` repository.
- **Complete System Configuration**: The comprehensive configuration for a
system instance, derived from the `stacks` template repository, is maintained
in the `stacks-instances` repository.
- **GitOps Synchronization**: ArgoCD continuously monitors the
`stacks-instances` repository. It automatically detects and synchronizes any
discrepancies between the desired state defined in Git and the actual state of
the deployed system within the OTC Kubernetes cluster. The configurations in
the `stacks-instances` repository are organized by OTC tenant and instance
name. ArgoCD monitors only the portion of the repository that is relevant to
its specific instance.
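To make the division of configuration concrete, a base-configuration entry in
`infra-deploy` might look roughly like the following. The field names are
hypothetical, chosen only to illustrate the kind of per-instance data stored
there; the repository's actual schema may differ.

```terraform
# Hypothetical per-instance base configuration (field names illustrative).
instance = {
  name       = "my-test"
  tenant     = "non-prod"        # OTC tenant targeted by the workflows
  domain     = "my-test.t09.de"  # subdomain under the tenant's primary domain
  stacks_ref = "main"            # revision of the stacks template to hydrate
}
```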

Binary file not shown (new image, 209 KiB).