| title | linkTitle | weight | description |
|---|---|---|---|
| OTC | OTC | 10 | Open Telekom Cloud infrastructure components for ingress, TLS, and storage |
## Overview
The OTC (Open Telekom Cloud) stack provides essential infrastructure components for deploying applications on Open Telekom Cloud environments. It configures ingress routing, automated TLS certificate management, and cloud-native storage provisioning tailored specifically for OTC's Kubernetes infrastructure.
This stack serves as a foundational layer that other platform stacks depend on for external access, secure communication, and persistent storage.
## Key Features
- Automated TLS Certificate Management: Let's Encrypt integration via cert-manager for automatic certificate provisioning and renewal
- Cloud Load Balancer Integration: Nginx ingress controller configured with OTC-specific Elastic Load Balancer (ELB) annotations
- Native Storage Provisioning: Default StorageClass using Huawei FlexVolume provisioner for block storage
- Prometheus Metrics: Built-in monitoring capabilities for ingress traffic and performance
- High Availability: Rolling update strategy with minimal downtime
- HTTP-01 Challenge Support: ACME validation through ingress for certificate issuance
## Repository

- Code: OTC Stack Templates
- Documentation:
## Getting Started

### Prerequisites
- Kubernetes cluster running on Open Telekom Cloud
- ArgoCD installed (provided by corestack)
- Environment variables configured:
  - `LOADBALANCER_ID`: OTC Elastic Load Balancer ID
  - `LOADBALANCER_IP`: OTC Elastic Load Balancer IP address
  - `CLIENT_REPO_DOMAIN`: Git repository domain
  - `CLIENT_REPO_ORG_NAME`: Git repository organization
  - `CLIENT_REPO_ID`: Client repository identifier
  - `DOMAIN`: Domain name for the environment
### Quick Start
The OTC stack is deployed as part of the EDP installation process:
1. **Trigger Deploy Pipeline**
   - Go to Infra Deploy Pipeline
   - Click on **Run workflow**
   - Enter a name in "Select environment directory to deploy". This must be DNS-compatible.
   - Execute workflow
2. **ArgoCD Synchronization**: ArgoCD automatically deploys:
   - cert-manager with ClusterIssuer for Let's Encrypt
   - ingress-nginx controller with OTC load balancer integration
   - Default StorageClass for OTC block storage
### Verification
Verify the OTC stack deployment:
```shell
# Check ArgoCD applications status
kubectl get application otc -n argocd
kubectl get application cert-manager -n argocd
kubectl get application ingress-nginx -n argocd
kubectl get application storageclass -n argocd

# Verify cert-manager pods
kubectl get pods -n cert-manager

# Check ingress-nginx controller
kubectl get pods -n ingress-nginx

# Verify ClusterIssuer status
kubectl get clusterissuer main

# Check StorageClass
kubectl get storageclass default
```
## Architecture

### Component Architecture
The OTC stack consists of three primary components:
**cert-manager**:
- Automates TLS certificate lifecycle management
- Integrates with the Let's Encrypt ACME server (production endpoint)
- Uses HTTP-01 challenge validation via ingress
- Creates and manages certificates as Kubernetes resources
- Single-replica deployment

**ingress-nginx**:
- Kubernetes ingress controller based on Nginx
- Routes external traffic to internal services
- Integrated with the OTC Elastic Load Balancer (ELB)
- Supports TLS termination with cert-manager-issued certificates
- Rolling update strategy with at most 1 unavailable pod
- Prometheus metrics exporter with a ServiceMonitor

**StorageClass**:
- Default storage provisioner for persistent volumes
- Uses the Huawei FlexVolume driver (`flexvolume-huawei.com/fuxivol`)
- SATA block storage type
- Immediate volume binding mode
- Supports dynamic volume expansion
### Integration Flow

```
External Traffic → OTC ELB → ingress-nginx → Kubernetes Services
                                  ↓
                   cert-manager (TLS certificates)
                                  ↓
                        Let's Encrypt ACME
```
## Configuration

### cert-manager Configuration

Helm Values (`stacks/otc/cert-manager/values.yaml`):
```yaml
crds:
  enabled: true
replicaCount: 1
```
ClusterIssuer (`stacks/otc/cert-manager/manifests/clusterissuer.yaml`):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: main
spec:
  acme:
    email: admin@think-ahead.tech
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cluster-issuer-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```
Key Settings:
- CRDs installed automatically
- Production Let's Encrypt ACME endpoint
- HTTP-01 validation through nginx ingress
- ClusterIssuer named `main` for cluster-wide certificate issuance
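During testing, repeated issuance attempts can exhaust Let's Encrypt's production rate limits. A second ClusterIssuer pointing at the staging endpoint is a common workaround; the sketch below is illustrative and not part of the stack (the name `staging` and the secret name are assumptions):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: staging   # hypothetical name, not deployed by this stack
spec:
  acme:
    email: admin@think-ahead.tech
    # Let's Encrypt staging endpoint: certificates are untrusted by browsers,
    # but the rate limits are far more generous than production
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cluster-issuer-staging-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```

Reference it from an ingress with `cert-manager.io/cluster-issuer: staging`; the browser warning about the staging CA is expected.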
### ingress-nginx Configuration

Helm Values (`stacks/otc/ingress-nginx/values.yaml`):
```yaml
controller:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  service:
    annotations:
      kubernetes.io/elb.class: union
      kubernetes.io/elb.port: '80'
      kubernetes.io/elb.id: {{{ .Env.LOADBALANCER_ID }}}
      kubernetes.io/elb.ip: {{{ .Env.LOADBALANCER_IP }}}
  ingressClassResource:
    name: nginx
  allowSnippetAnnotations: true
  config:
    proxy-buffer-size: 32k
    use-forwarded-headers: "true"
  metrics:
    enabled: true
    serviceMonitor:
      additionalLabels:
        release: "ingress-nginx"
      enabled: true
```
Key Settings:
- OTC Load Balancer Integration: Annotations configure connection to OTC ELB
- Rolling Updates: Minimizes downtime with 1 pod unavailable during updates
- Snippet Annotations: Enabled for advanced ingress configuration (idpbuilder compatibility)
- Proxy Buffer: 32k buffer size for handling large headers
- Forwarded Headers: Preserves original client information through proxies
- Metrics: Prometheus ServiceMonitor for observability
### StorageClass Configuration

StorageClass (`stacks/otc/storageclass/storageclass.yaml`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: default
parameters:
  kubernetes.io/hw:passthrough: "true"
  kubernetes.io/storagetype: BS
  kubernetes.io/volumetype: SATA
  kubernetes.io/zone: eu-de-02
provisioner: flexvolume-huawei.com/fuxivol
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```
Key Settings:
- Default StorageClass: Automatically used when no StorageClass is specified
- OTC Zone: Provisioned in the `eu-de-02` availability zone
- SATA Volumes: Block storage (BS) with SATA performance tier
- Volume Expansion: Supports resizing persistent volumes dynamically
- Reclaim Policy: Volumes are deleted when the PersistentVolumeClaim is removed
### ArgoCD Application Configuration

Registry Application (`template/registry/otc.yaml`):
- Name: `otc`
- Manages the OTC stack directory
- Automated sync with prune and self-heal enabled
- Creates namespaces automatically
Component Applications:
cert-manager (referenced in stack):
- Deploys cert-manager Helm chart
- Automated self-healing enabled
- Includes ClusterIssuer manifest for Let's Encrypt
ingress-nginx (`template/stacks/otc/ingress-nginx.yaml`):
- Deploys from official Kubernetes ingress-nginx repository
- Chart version: helm-chart-4.12.1
- References environment-specific values from stacks-instances repository
storageclass (`template/stacks/otc/storageclass.yaml`):
- Deploys StorageClass manifest
- Managed as ArgoCD Application
- Automated sync with unlimited retries
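For orientation, the component Applications follow roughly this shape. This is a sketch assuming standard Argo CD `Application` fields; the `repoURL` and `path` are placeholders, not the actual stack values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: storageclass
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://<CLIENT_REPO_DOMAIN>/<CLIENT_REPO_ORG_NAME>/<repo>.git  # placeholder
    path: stacks/otc/storageclass        # placeholder path
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: -1   # unlimited retries
```

In Argo CD, `retry.limit: -1` expresses the "unlimited retries" behavior described above.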
## Usage Examples

### Creating an Ingress with Automatic TLS

Create an ingress resource that automatically provisions a TLS certificate:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
cert-manager will automatically:
- Detect the ingress via its `cert-manager.io/cluster-issuer` annotation
- Create a Certificate resource
- Request a certificate from Let's Encrypt using the HTTP-01 challenge
- Store the certificate in the `myapp-tls` secret
- Renew the certificate before expiration
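Certificates can also be requested without an ingress annotation by creating a Certificate resource directly. A minimal sketch (resource and namespace names are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-cert       # illustrative name
  namespace: my-namespace
spec:
  secretName: myapp-tls  # where the issued key pair is stored
  dnsNames:
    - myapp.example.com
  issuerRef:
    kind: ClusterIssuer
    name: main           # the stack's ClusterIssuer
```

cert-manager solves the HTTP-01 challenge through a temporary solver ingress and writes the issued certificate into the `myapp-tls` secret.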
### Creating a PersistentVolumeClaim

Use the default OTC StorageClass for persistent storage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default
```
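To consume the claim, mount it into a workload. A minimal sketch (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-data-consumer   # illustrative name
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: nginx:stable  # illustrative image
      volumeMounts:
        - name: data
          mountPath: /data # volume appears here inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data # the PVC created above
```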
### Expanding an Existing Volume

Resize a persistent volume by editing the PVC:

```shell
# Edit the PVC storage request
kubectl patch pvc my-data -n my-namespace -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Verify expansion
kubectl get pvc my-data -n my-namespace
```

The volume will expand automatically because the StorageClass sets `allowVolumeExpansion: true`.
### Custom Ingress Configuration

Use nginx ingress snippets for advanced routing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-app
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Custom-Header: value";
      if ($http_user_agent ~* "bot") {
        return 403;
      }
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 8080
```
## Integration Points

- Core Stack: Requires ArgoCD for deployment orchestration
- All Application Stacks: Depend on the OTC stack for:
  - External access via ingress-nginx
  - TLS certificates via cert-manager
  - Persistent storage via the default StorageClass
- Observability Stack: ingress-nginx metrics exported to Prometheus
- Coder Stack: Uses ingress and cert-manager for workspace access
- Forgejo Stack: Requires ingress and TLS for Git repository access
## Troubleshooting

### Certificate Issuance Fails

Problem: Certificate remains in `Pending` state and is not issued

Solution:
1. Check the Certificate status:

   ```shell
   kubectl get certificate -A
   kubectl describe certificate <cert-name> -n <namespace>
   ```

2. Verify the ClusterIssuer is ready:

   ```shell
   kubectl get clusterissuer main
   kubectl describe clusterissuer main
   ```

3. Check the cert-manager logs:

   ```shell
   kubectl logs -n cert-manager -l app=cert-manager
   ```

4. Verify the HTTP-01 challenge can reach the ingress:

   ```shell
   kubectl get challenges -A
   kubectl describe challenge <challenge-name> -n <namespace>
   ```

Common issues:
- DNS not pointing to the load balancer IP
- Firewall blocking HTTP (port 80) traffic
- Ingress class not set to `nginx`
- Let's Encrypt rate limits exceeded
### Ingress Controller Not Ready

Problem: ingress-nginx pods are not running or the LoadBalancer service has no external IP

Solution:
1. Check the ingress controller status:

   ```shell
   kubectl get pods -n ingress-nginx
   kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller
   ```

2. Verify the LoadBalancer service:

   ```shell
   kubectl get svc -n ingress-nginx
   kubectl describe svc ingress-nginx-controller -n ingress-nginx
   ```

3. Check the OTC load balancer annotations:

   ```shell
   kubectl get svc ingress-nginx-controller -n ingress-nginx -o yaml
   ```

4. Verify the environment variables are set correctly:
   - `LOADBALANCER_ID` matches the OTC ELB ID
   - `LOADBALANCER_IP` matches the ELB public IP

5. Check the OTC console for ELB configuration and health checks
### Storage Provisioning Fails

Problem: PersistentVolumeClaim remains in `Pending` state

Solution:
1. Check the PVC status:

   ```shell
   kubectl get pvc -A
   kubectl describe pvc <pvc-name> -n <namespace>
   ```

2. Verify the StorageClass exists and is the default:

   ```shell
   kubectl get storageclass
   kubectl describe storageclass default
   ```

3. Check the volume provisioner logs:

   ```shell
   kubectl logs -n kube-system -l app=csi-disk-plugin
   ```

Common issues:
- Insufficient quota in the OTC project
- Invalid zone configuration (must be `eu-de-02`)
- Requested storage size exceeds limits
- Missing IAM permissions for volume creation
### Ingress Returns 503 Service Unavailable

Problem: Ingress is configured but returns a 503 error

Solution:
1. Verify the backend service exists:

   ```shell
   kubectl get svc <service-name> -n <namespace>
   kubectl get endpoints <service-name> -n <namespace>
   ```

2. Check whether the pods are ready:

   ```shell
   kubectl get pods -n <namespace> -l <service-selector>
   ```

3. Verify the ingress configuration:

   ```shell
   kubectl describe ingress <ingress-name> -n <namespace>
   ```

4. Check the nginx ingress logs:

   ```shell
   kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=100
   ```

5. Test service connectivity from the ingress controller:

   ```shell
   kubectl exec -n ingress-nginx <controller-pod> -- curl http://<service-name>.<namespace>.svc.cluster.local:<port>
   ```
### TLS Certificate Shows as Invalid

Problem: Browser shows a certificate warning or the certificate details are incorrect

Solution:
1. Verify the certificate is ready:

   ```shell
   kubectl get certificate <cert-name> -n <namespace>
   ```

2. Check the certificate contents:

   ```shell
   kubectl get secret <tls-secret-name> -n <namespace> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout
   ```

3. Ensure the certificate covers the correct domain:

   ```shell
   kubectl describe certificate <cert-name> -n <namespace>
   ```

4. Force certificate renewal if expired or incorrect:

   ```shell
   # cert-manager will automatically recreate it
   kubectl delete certificate <cert-name> -n <namespace>
   ```