Compare commits: main...feature/ex (1 commit)

Commit: 9581133a3d
@@ -1,74 +0,0 @@
# Technical Documentation Guidelines

You are an expert technical writer with deep expertise in creating clear, concise, and well-structured documentation. Your goal is to produce documentation that flows naturally while maintaining technical accuracy.

## Core Principles

### 1. Conciseness and Clarity

- Use clear, direct language
- Eliminate unnecessary words and redundancy
- Make every sentence count
- Prefer active voice over passive voice
- Use short paragraphs (3-5 sentences maximum)

### 2. Structure and Organization

- Start with the most important information
- Use logical hierarchies with consistent heading levels
- Group related concepts together
- Provide clear navigation through a table of contents when appropriate
- Use lists for sequential steps or related items

### 3. Flow and Readability

- Ensure smooth transitions between sections
- Connect ideas logically
- Build complexity gradually
- Use examples to illustrate concepts
- Maintain consistent terminology throughout

### 4. Technical Accuracy

- Be precise with technical terms
- Include relevant code examples that are tested and functional
- Document edge cases and limitations
- Provide accurate command syntax and parameters
- Link to related documentation when appropriate

## Documentation Structure

### Standard Document Layout

1. **Title** - Clear, descriptive heading
2. **Overview** - Brief introduction (2-3 sentences)
3. **Prerequisites** - What the reader needs to know or have
4. **Main Content** - Organized in logical sections
5. **Examples** - Practical, real-world use cases
6. **Troubleshooting** - Common issues and solutions (when applicable)
7. **Related Resources** - Links to additional documentation

### Code Examples

- Provide complete, runnable examples
- Include comments for complex logic
- Show expected output
- Use consistent formatting and syntax highlighting

### Commands and APIs

- Show full syntax with all parameters
- Indicate required vs. optional parameters
- Provide parameter descriptions
- Include return values or output format

## Writing Style

- **Be direct**: "Configure the database" not "You should configure the database"
- **Be specific**: "Set timeout to 30 seconds" not "Set an appropriate timeout"
- **Be consistent**: Use the same terms for the same concepts
- **Be complete**: Don't assume implicit knowledge; explain as needed

## When Uncertain

**If you don't know something or need clarification:**

- Ask specific questions
- Request examples or use cases
- Clarify technical details or edge cases
- Verify terminology and naming conventions
- Confirm target audience and their expected knowledge level

Your expertise is in writing excellent documentation. Use your judgment to create documentation that serves the reader's needs effectively. When in doubt, ask rather than guess.
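The "Code Examples" guidance above (complete, runnable, expected output shown) can be illustrated with a minimal shell sketch; the directory layout and file names below are invented purely for the demonstration:

```shell
# Complete, runnable example: count markdown files under a content tree.
# The demo tree is created from scratch so the example is self-contained.
rm -rf /tmp/doc-demo
mkdir -p /tmp/doc-demo/content/en
touch /tmp/doc-demo/content/en/first.md /tmp/doc-demo/content/en/second.md
find /tmp/doc-demo/content -name '*.md' | wc -l
# Expected output: 2
```

The example follows its own rules: it is self-contained, commented where the logic is non-obvious, and states its expected output.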
@@ -1,22 +0,0 @@
.github
.gitignore
.vscode
.devbox
.devcontainer
node_modules
public
resources
tmp
*.md
!content/**/*.md
TESTING.md
Taskfile.yml
devbox.json
devbox.lock
.hugo_build.lock
.htmltest.yml
.htmlvalidate.json
.markdownlint.json

# Ensure package-lock.json is included for npm ci
!package-lock.json
@@ -1,9 +0,0 @@
# Tool versions for development and CI/CD
# These versions are used in:
# - devbox.json (pinned versions)
# - Dockerfile (build arguments)
# - .github/workflows/ci.yaml (CI/CD pipeline)

NODE_VERSION=24.10.0
GO_VERSION=1.25.1
HUGO_VERSION=0.151.0
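The versions file above is consumed with the `set -a` / `source` / `set +a` pattern that both the CI workflows and the Taskfile use; `set -a` marks every variable assigned while it is active for export. A self-contained sketch (the file is recreated in `/tmp` for the demo):

```shell
# Recreate the versions file, then source it the way ci.yaml does.
cat > /tmp/.env.versions <<'EOF'
NODE_VERSION=24.10.0
GO_VERSION=1.25.1
HUGO_VERSION=0.151.0
EOF

set -a                     # auto-export every variable assigned below
. /tmp/.env.versions
set +a                     # stop auto-exporting

echo "Hugo: ${HUGO_VERSION}"
# Expected output: Hugo: 0.151.0
```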
@@ -1 +0,0 @@
use flake
.github/workflows/ci.yaml (79 lines changed, vendored)

@@ -1,79 +0,0 @@
name: ci

on:
  push:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-22.04

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0

      - name: Load versions from .env.versions
        id: versions
        run: |
          # Source the versions file
          set -a
          source .env.versions
          set +a

          echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
          echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"

          echo "Node: ${NODE_VERSION}"
          echo "Go: ${GO_VERSION}"
          echo "Hugo: ${HUGO_VERSION}"

      - name: Repository meta
        id: repository
        run: |
          registry=${{ github.server_url }}
          registry=${registry##http*://}
          echo "registry=${registry}" >> "$GITHUB_OUTPUT"
          echo "registry=${registry}"
          repository="$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')"
          echo "repository=${repository}" >> "$GITHUB_OUTPUT"
          echo "repository=${repository}"

      - name: Docker meta
        uses: docker/metadata-action@v5
        id: docker
        with:
          images: ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}
          tags: |
            type=sha,prefix=
            type=ref,event=tag

      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.repository.outputs.registry }}
          username: "${{ secrets.PACKAGES_USER }}"
          password: "${{ secrets.PACKAGES_TOKEN }}"

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          buildkitd-flags: '--allow-insecure-entitlement network.host'
          driver-opts: network=host

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          allow: network.host
          network: host
          tags: ${{ steps.docker.outputs.tags }}
          labels: ${{ steps.docker.outputs.labels }}
          build-args: |
            NODE_VERSION=${{ steps.versions.outputs.node_version }}
            GO_VERSION=${{ steps.versions.outputs.go_version }}
            HUGO_VERSION=${{ steps.versions.outputs.hugo_version }}
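The Repository meta step strips the protocol from the server URL using shell parameter expansion: `${registry##http*://}` removes the longest prefix matching the pattern `http*://`, which covers both `http://` and `https://`. The expansion can be checked in isolation (the URL is taken from the workflow above):

```shell
registry="https://edp.buildth.ing"
# ## removes the longest leading match of the glob pattern http*://
registry=${registry##http*://}
echo "$registry"
# Expected output: edp.buildth.ing
```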
.github/workflows/delete-edge.yaml (32 lines changed, vendored)

@@ -1,32 +0,0 @@
name: delete-edge

on:
  workflow_run:
    workflows: [build]
    types:
      - completed
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-22.04

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Replace Image Version
        run: |
          sha="${{ github.sha }}"
          shortSha="${sha:0:7}"
          echo "Setting image version to: edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}"
          sed -i "s@###IMAGETAG###@edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}@g" ./k8s-deployment.yaml

      - name: Delete action
        uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-delete-action@main
        id: delete
        with:
          configFile: ./edgeconnectdeployment.yaml
          baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
          username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
          password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
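The Replace Image Version step rewrites a `###IMAGETAG###` placeholder in the deployment manifest with an image reference ending in the short commit SHA. A self-contained sketch of the same substitution, using an invented manifest line, SHA, and registry name (GNU `sed -i`, as on the Ubuntu runners):

```shell
# Stand-in manifest with the placeholder used by the workflows above.
printf 'image: ###IMAGETAG###\n' > /tmp/k8s-deployment.yaml

sha="9581133a3d0123456789abcdef0123456789abcd"   # invented full SHA
shortSha="${sha:0:7}"                            # first 7 characters

# @ is used as the sed delimiter so the / in the image path needs no escaping.
sed -i "s@###IMAGETAG###@registry.example/site:${shortSha}@g" /tmp/k8s-deployment.yaml
cat /tmp/k8s-deployment.yaml
# Expected output: image: registry.example/site:9581133
```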
.github/workflows/deploy-edge.yaml (32 lines changed, vendored)

@@ -1,32 +0,0 @@
name: deploy-edge

on:
  workflow_run:
    workflows: [build]
    types:
      - completed
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-22.04

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Replace Image Version
        run: |
          sha="${{ github.sha }}"
          shortSha="${sha:0:7}"
          echo "Setting image version to: edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}"
          sed -i "s@###IMAGETAG###@edp.buildth.ing/devfw-cicd/website-and-documentation:${shortSha}@g" ./k8s-deployment.yaml

      - name: Deploy action
        uses: https://edp.buildth.ing/DevFW-CICD/edge-connect-deploy-action@main
        id: deploy
        with:
          configFile: ./edgeconnectdeployment.yaml
          baseUrl: https://hub.apps.edge.platform.mg3.mdb.osc.live
          username: ${{ secrets.EDGEXR_PLATFORM_USERNAME }}
          password: ${{ secrets.EDGEXR_PLATFORM_PASSWORD }}
.github/workflows/release.yaml (149 lines changed, vendored)

@@ -1,149 +0,0 @@
name: release

on:
  push:
    tags:
      - 'v*.*.*'  # Triggers on semantic versioning tags (v1.0.0, v2.1.3, etc.)

permissions:
  contents: write
  packages: write

jobs:
  release:
    runs-on: ubuntu-22.04

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0

      - name: Load versions from .env.versions
        id: versions
        run: |
          set -a
          source .env.versions
          set +a

          echo "node_version=${NODE_VERSION}" >> "$GITHUB_OUTPUT"
          echo "go_version=${GO_VERSION}" >> "$GITHUB_OUTPUT"
          echo "hugo_version=${HUGO_VERSION}" >> "$GITHUB_OUTPUT"

          echo "Node: ${NODE_VERSION}"
          echo "Go: ${GO_VERSION}"
          echo "Hugo: ${HUGO_VERSION}"

      - name: Extract version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "version=${VERSION}" >> "$GITHUB_OUTPUT"
          echo "Version: ${VERSION}"

      - name: Repository meta
        id: repository
        run: |
          registry=${{ github.server_url }}
          registry=${registry##http*://}
          echo "registry=${registry}" >> "$GITHUB_OUTPUT"
          echo "registry=${registry}"
          repository="$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')"
          echo "repository=${repository}" >> "$GITHUB_OUTPUT"
          echo "repository=${repository}"

      - name: Docker meta
        uses: docker/metadata-action@v5
        id: docker
        with:
          images: ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=raw,value=latest

      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.repository.outputs.registry }}
          username: "${{ secrets.PACKAGES_USER }}"
          password: "${{ secrets.PACKAGES_TOKEN }}"

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          buildkitd-flags: '--allow-insecure-entitlement network.host'
          driver-opts: network=host

      - name: Build and push release images
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          allow: network.host
          network: host
          platforms: linux/amd64,linux/arm64
          tags: ${{ steps.docker.outputs.tags }}
          labels: ${{ steps.docker.outputs.labels }}
          build-args: |
            NODE_VERSION=${{ steps.versions.outputs.node_version }}
            GO_VERSION=${{ steps.versions.outputs.go_version }}
            HUGO_VERSION=${{ steps.versions.outputs.hugo_version }}

      - name: Generate changelog
        id: changelog
        run: |
          # Find the previous tag
          PREVIOUS_TAG=$(git describe --abbrev=0 --tags ${GITHUB_REF}^ 2>/dev/null || echo "")

          if [ -z "$PREVIOUS_TAG" ]; then
            echo "First release - changelog from the beginning"
            CHANGELOG=$(git log --pretty=format:"- %s (%h)" --no-merges)
          else
            echo "Changelog since ${PREVIOUS_TAG}"
            CHANGELOG=$(git log ${PREVIOUS_TAG}..${GITHUB_REF} --pretty=format:"- %s (%h)" --no-merges)
          fi

          # Write to the output file (multiline)
          {
            echo 'changelog<<EOF'
            echo "$CHANGELOG"
            echo 'EOF'
          } >> "$GITHUB_OUTPUT"

      - name: Create Forgejo/Gitea Release
        uses: actions/forgejo-release@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          direction: upload
          release-dir: .
          title: "Release ${{ steps.version.outputs.version }}"
          tag: ${{ github.ref_name }}
          token: ${{ secrets.GITHUB_TOKEN }}
          release-notes: |
            # Release ${{ steps.version.outputs.version }}

            ## Docker Images

            Multi-platform images (linux/amd64, linux/arm64) are available:

            ```bash
            docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:${{ steps.version.outputs.version }}
            docker pull ${{ steps.repository.outputs.registry }}/${{ steps.repository.outputs.repository }}:latest
            ```

            ## Build Versions

            - Node.js: ${{ steps.versions.outputs.node_version }}
            - Go: ${{ steps.versions.outputs.go_version }}
            - Hugo: ${{ steps.versions.outputs.hugo_version }}

            ## Changes

            ${{ steps.changelog.outputs.changelog }}
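The changelog step writes a multiline value to `$GITHUB_OUTPUT` using the delimiter syntax GitHub Actions expects (`name<<DELIMITER` ... `DELIMITER`). The pattern can be sketched outside a runner with a temporary file standing in for the real output file; the changelog entries are invented for the demo:

```shell
# Stand-in for the runner-provided output file.
GITHUB_OUTPUT=/tmp/github_output_demo
: > "$GITHUB_OUTPUT"

CHANGELOG='- fix build (abc1234)
- update docs (def5678)'

# Grouping the echoes lets one redirection append the whole block.
{
  echo 'changelog<<EOF'
  echo "$CHANGELOG"
  echo 'EOF'
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

The closing delimiter must appear on its own line and must not occur inside the value itself; `EOF` works here because git log lines never consist of exactly that string.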
.github/workflows/test.yml (51 lines changed, vendored)

@@ -1,51 +0,0 @@
name: Hugo Site Tests

on:
  # push:
  #   branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
          extended: true

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '24'
          cache: 'npm'

      - name: Install dependencies
        run: |
          npm ci
          go install github.com/wjdp/htmltest@latest

      - name: Run tests
        run: |
          npm run test:build
          npm run test:markdown
          npm run test:html

      - name: Run link checker
        run: htmltest
        continue-on-error: true

      - name: Upload htmltest results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: htmltest-report
          path: tmp/.htmltest/
.gitignore (42 lines changed, vendored)

@@ -1,41 +1,7 @@
# Hugo
.hugo_build.lock
public/
resources/_gen/
public

# Node.js / NPM
node_modules/
# for npm devcontainer cli
package-lock.json
package.json

# Test outputs
tmp/
.htmltest/

# devbox
.devbox/

# Task cache
.task/

# Generated build data
data/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*

### direnv ###
.direnv
.envrc
@@ -1,24 +0,0 @@
DirectoryPath: "public"
CheckExternal: true
CheckInternalHash: true
IgnoreURLs:
  - "^https://example\\.docsy\\.dev"
  - "^https://example\\.com"
  - "^http://localhost"
  - "^/livereload\\.js"
  - "^https://cnoe\\.localtest\\.me"
  - "^https://technologyconversations\\.com"
  - "^https://developers\\.redhat\\.com"
  - "^https://platformengineering\\.org"
  - "^https://cnoe\\.io"
  - "^https://console\\.otc\\.t-systems\\.com"
IgnoreInternalURLs:
  - "/docs-old/"
  - "/blog/"
  - "/docs/v1/"
  - "/docs/architecture/"
  - "/docs/documentation/"
IgnoreInternalEmptyHashes: true
IgnoreDirectoryMissingTrailingSlash: true
IgnoreAltMissing: true
CheckDoctype: true
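The `IgnoreURLs` entries above are anchored regular expressions: a URL is skipped only when it matches from the start of the string. The anchoring behavior can be sketched with `grep -E` standing in for the link checker's matcher:

```shell
pattern='^https://example\.com'

# A URL starting with the pattern is ignored...
echo 'https://example.com/page' | grep -Eq "$pattern" && echo ignored

# ...while one that merely mentions the host later is still checked.
echo 'https://other.host/page' | grep -Eq "$pattern" || echo checked
# Expected output:
# ignored
# checked
```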
@@ -1,27 +0,0 @@
{
  "extends": ["html-validate:recommended"],
  "rules": {
    "no-inline-style": "off",
    "require-sri": "off",
    "no-trailing-whitespace": "off",
    "void-style": "off",
    "wcag/h30": "off",
    "wcag/h32": "off",
    "wcag/h37": "off",
    "no-redundant-role": "off",
    "unique-landmark": "off",
    "no-multiple-main": "off",
    "no-dup-id": "off",
    "element-permitted-content": "off",
    "attr-quotes": "off",
    "empty-heading": "off",
    "element-required-content": "off",
    "long-title": "off",
    "no-raw-characters": "off",
    "valid-id": "off",
    "doctype-style": "off"
  },
  "elements": [
    "html5"
  ]
}
|
@ -1,8 +0,0 @@
|
|||
{
|
||||
"default": true,
|
||||
"MD013": false,
|
||||
"MD033": false,
|
||||
"MD041": false,
|
||||
"MD024": { "siblings_only": true },
|
||||
"MD025": { "front_matter_title": "" }
|
||||
}
|
||||
|
|
@@ -1,4 +0,0 @@
# Ignore v1 documentation (legacy content with pre-existing lint issues)
content/en/docs/v1/**
content/en/blog/**
content/en/docs-old/**
Dockerfile (69 lines changed)

@@ -1,69 +0,0 @@
# Build arguments for version pinning (matching devbox.json)
ARG NODE_VERSION=24.10.0
ARG GO_VERSION=1.25.1
ARG HUGO_VERSION=0.151.0

# Build stage - use same versions as local devbox environment
FROM node:${NODE_VERSION}-bookworm AS builder

# Get target architecture for multi-platform builds
ARG TARGETARCH

# Install Git (needed for Hugo's enableGitInfo)
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

# Install Go (map TARGETARCH: amd64->amd64, arm64->arm64)
ARG GO_VERSION
RUN wget -q https://go.dev/dl/go${GO_VERSION}.linux-${TARGETARCH}.tar.gz && \
    tar -C /usr/local -xzf go${GO_VERSION}.linux-${TARGETARCH}.tar.gz && \
    rm go${GO_VERSION}.linux-${TARGETARCH}.tar.gz

ENV PATH="/usr/local/go/bin:${PATH}"
ENV GOPATH="/go"
ENV PATH="${GOPATH}/bin:${PATH}"

# Install Hugo extended (map TARGETARCH: amd64->amd64, arm64->arm64)
ARG HUGO_VERSION
RUN wget -q https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
    tar -xzf hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
    mv hugo /usr/local/bin/ && \
    rm hugo_extended_${HUGO_VERSION}_linux-${TARGETARCH}.tar.gz && \
    hugo version

WORKDIR /src

# Copy package files and install npm dependencies
COPY package*.json ./
RUN npm ci

# Copy all source files
COPY . .

# Build the Hugo site (Git info is taken from the current context, not from .git)
# Hugo looks for .git, does not find it, and skips Git info automatically
RUN hugo --gc --minify

# Runtime stage - nginx to serve static content
FROM nginx:1.27-alpine

# Copy built site from builder
COPY --from=builder /src/public /usr/share/nginx/html

# Write custom nginx config
RUN echo 'server {' > /etc/nginx/conf.d/default.conf && \
    echo '    listen 80;' >> /etc/nginx/conf.d/default.conf && \
    echo '    server_name _;' >> /etc/nginx/conf.d/default.conf && \
    echo '    root /usr/share/nginx/html;' >> /etc/nginx/conf.d/default.conf && \
    echo '    index index.html;' >> /etc/nginx/conf.d/default.conf && \
    echo '' >> /etc/nginx/conf.d/default.conf && \
    echo '    location / {' >> /etc/nginx/conf.d/default.conf && \
    echo '        try_files $uri $uri/ /index.html;' >> /etc/nginx/conf.d/default.conf && \
    echo '    }' >> /etc/nginx/conf.d/default.conf && \
    echo '' >> /etc/nginx/conf.d/default.conf && \
    echo '    gzip on;' >> /etc/nginx/conf.d/default.conf && \
    echo '    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;' >> /etc/nginx/conf.d/default.conf && \
    echo '}' >> /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
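The long echo chain that writes the nginx config could also be expressed as a single quoted heredoc, which keeps the config readable and prevents the shell from expanding `$uri`. A sketch writing the same server block to a temp file (in the Dockerfile itself this would need BuildKit's `RUN <<'EOF'` heredoc syntax, or a `COPY` of a checked-in config file):

```shell
# <<'EOF' (quoted delimiter) disables variable expansion, so $uri survives.
cat > /tmp/default.conf <<'EOF'
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
}
EOF
```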
README.md (115 lines changed)

@@ -1,73 +1,88 @@
# EDP - Edge Developer Platform
# IPCEICIS-DeveloperFramework Documentation

Documentation for the edgeDeveloperFramework (eDF) project and the resulting Edge Developer Platform (EDP) product suite.
This repo contains the business and architectural design and documentation of the DeveloperFramework subproject of IPCEI-CIS.

## Quick Start
## How to read and contribute to this documentation locally

### Development Environment
The documentation is written in [Hugo](https://gohugo.io) format.

Install and enter [Devbox](https://www.jetify.com/devbox):
```bash
curl -fsSL https://get.jetify.com/devbox | bash
devbox shell
```
Hugo is a static site generator, so to view the documentation site you need a running Hugo processor. Therefore there is

Devbox installs Hugo, Node.js, Go, and all required tools. First-time setup requires sudo for the Nix daemon (one-time only).
* either a Hugo [`.devcontainer` definition](https://containers.dev/) - just run a devcontainer-aware IDE or CLI, e.g. Visual Studio Code
* or a Hugo [`Devbox` definition](https://www.jetify.com/devbox/) - in this case just run a devbox shell

To avoid entering the shell, run commands directly:
```bash
devbox run task serve
```
## Local installation of the Hugo documentation system

### Local Development
We describe two possible ways (one with devcontainer, one with devbox) to get the Hugo documentation system running locally.

```bash
task deps:install   # Install dependencies
task serve          # Start dev server at http://localhost:1313 (hot-reloading)
task test:quick     # Run tests
task build          # Build production site
```
For both, prepare with the following three steps:

## Architecture Diagrams (LikeC4)
1. open a terminal on your local box
2. clone this repo: `git clone https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW/website-and-documentation`
3. change to the repo working dir: `cd website-and-documentation`

[LikeC4](https://likec4.dev/) generates interactive architecture diagrams from text-based [C4 models](https://c4model.com/). Create or edit diagrams:
### Possibility 1: Hugo in a devcontainer

```bash
cd resources/edp-likec4   # Platform architecture
npm install               # First time only
npm start                 # Preview at http://localhost:5173
```
[`devcontainers`](https://containers.dev/) run containers as virtual systems on your local box. The definition is in the `.devcontainer` folder.
As a prerequisite you therefore need a running container daemon, e.g. Docker.

Edit `.c4` files to define systems and views. Generate web components for Hugo:
```bash
task likec4:generate
```
There are several options to create and run the devcontainer - we present two here:

Embed in Markdown pages:
```markdown
{{</* likec4-view view="overview" project="architecture" */>}}
```
#### Option 1: Run the container triggered by and connected to an IDE, e.g. VS Code

See the [LikeC4 documentation](https://likec4.dev/) for detailed syntax and [README-likec4.md](doc/README-likec4.md) for project-specific details.
1. open the repo in a [devcontainer-aware tool/IDE](https://containers.dev/supporting) (e.g. `code .`)
1. start the `devcontainer` (in VS Code: `F1` + `Reopen in Devcontainer`)
1. when the container is up and running, open your browser at `http://localhost:1313/`

## Deployment
#### Option 2: Run the container natively

Deployment is automatic via ArgoCD. A push to `main` triggers the CI/CD build and deployment within 5-10 minutes.
An alternative way to get the container image is the [devcontainer CLI](https://github.com/devcontainers/cli); with it you can run the devcontainer without VS Code.
As a prerequisite, follow the installation steps of the devcontainer CLI.

**Infrastructure Configuration:**
- ArgoCD is configured within [stacks-instances](https://edp.buildth.ing/DevFW-CICD/stacks-instances/src/branch/main/otc/edp.buildth.ing/registry/docs.yaml)
- Documentation stack definition: [./argocd-stack/](https://edp.buildth.ing/DevFW-CICD/website-and-documentation/src/branch/main/argocd-stack)
1. start the devcontainer by running: `devcontainer up --workspace-folder .`
1. find out the IP address of the devcontainer using `docker ps` and `docker inspect <id of container>`
1. when the container is up and running, open your browser at `http://<DOCKER IP>:1313/`

## Documentation
### Possibility 2: Hugo in a devbox

* [Developer Guide](doc/README-developer.md)
* [Technical Writer Guide](doc/README-technical-writer.md)
* [Release Notes](doc/RELEASE.md)
[`Devboxes`](https://www.jetify.com/devbox/) are locally isolated environments, managed by the [Nix package manager](https://nix.dev/). So first [prepare the devbox](https://www.jetify.com/docs/devbox/installing_devbox/).

## Project
Then

This is a Hugo-based documentation site for the Edge Developer Platform, built as part of the IPCEI-CIS project.
1. `devbox shell`
1. in the shell: `hugo serve`

**Website:** Access the documentation at the deployed URL or run locally with `task serve`.

For detailed information, see the documentation in the `doc/` folder.
## Editing

### Documentation language

The documentation uses the [Docsy theme](https://www.docsy.dev/).

To edit content, go to the `content` folder and edit it according to the [Docsy documentation](https://www.docsy.dev/docs/adding-content/).

## Committing

After finishing a unit of work, commit and push.

# Annex

## Installation steps illustrated

When you run the above installation, the outputs typically look like this:

### In Visual Studio Code

#### Reopen in Container



#### Hugo server is running and (typically) listens on localhost:1313

After some installation time you have:



### Final result in a web browser


225
Taskfile.yml
|
|
@ -1,225 +0,0 @@
|
|||
version: '3'
|
||||
|
||||
vars:
|
||||
HUGO_CMD: hugo
|
||||
NPM_CMD: npm
|
||||
|
||||
tasks:
|
||||
default:
|
||||
desc: Show available tasks
|
||||
cmds:
|
||||
- task --list
|
||||
|
||||
# Build tasks
|
||||
build:
|
||||
desc: Build Hugo site
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
- build:generate-info
|
||||
cmds:
|
||||
- "{{.HUGO_CMD}} --gc --minify"
|
||||
|
||||
build:dev:
|
||||
desc: Build Hugo site for development
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
- build:generate-info
|
||||
cmds:
|
||||
- "{{.HUGO_CMD}}"
|
||||
|
||||
build:generate-info:
|
||||
desc: Generate build information (git commit, version, etc.)
|
||||
sources:
|
||||
- .git/HEAD
|
||||
- .git/refs/**/*
|
||||
generates:
|
||||
- data/build_info.json
|
||||
cmds:
|
||||
- ./scripts/generate-build-info.sh
|
||||
|
||||
serve:
|
||||
desc: Start Hugo dev server
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
- build:generate-info
|
||||
cmds:
|
||||
- "{{.HUGO_CMD}} server --noHTTPCache"
|
||||
|
||||
clean:
|
||||
desc: Clean build artifacts
|
||||
cmds:
|
||||
- rm -rf public resources/_gen .hugo_build.lock
|
||||
|
||||
# Test tasks
|
||||
test:
|
||||
desc: Run all tests
|
||||
deps:
|
||||
- test:build
|
||||
- test:markdown
|
||||
- test:html
|
||||
- likec4:validate
|
||||
|
||||
test:hugo:
|
||||
desc: Run Hugo-only tests (markdown, HTML, build)
|
||||
deps:
|
||||
- test:build
|
||||
- test:markdown
|
||||
- test:html
|
||||
|
||||
test:full:
|
||||
desc: Run all tests including link check (may have errors in legacy content)
|
||||
deps:
|
||||
- test:build
|
||||
- test:markdown
|
||||
- test:html
|
||||
- test:links
|
||||
- likec4:validate
|
||||
|
||||
test:quick:
|
||||
desc: Run quick tests (without link check)
|
||||
deps:
|
||||
- test:build
|
||||
- test:markdown
|
||||
- likec4:validate
|
||||
|
||||
test:build:
|
||||
desc: Test Hugo build
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
- build:generate-info
|
||||
cmds:
|
||||
- "{{.HUGO_CMD}} --gc --minify --logLevel info"
|
||||
|
||||
test:markdown:
|
||||
desc: Lint markdown files
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
cmds:
|
||||
- "{{.NPM_CMD}} run test:markdown"
|
||||
|
||||
test:html:
|
||||
desc: Validate HTML
|
||||
deps:
|
||||
- deps:ensure-npm
|
||||
cmds:
|
||||
- "{{.NPM_CMD}} run test:html"
|
||||
|
||||
test:links:
|
||||
desc: Check links (skips legacy content)
|
||||
cmds:
|
||||
- |
|
||||
# Move legacy dirs outside public temporarily
|
||||
mkdir -p /tmp/htmltest-backup-$$
|
||||
if [ -d "public/docs-old" ]; then mv public/docs-old /tmp/htmltest-backup-$$/; fi
|
||||
if [ -d "public/blog" ]; then mv public/blog /tmp/htmltest-backup-$$/; fi
|
||||
if [ -d "public/_print/docs-old" ]; then mv public/_print/docs-old /tmp/htmltest-backup-$$/docs-old-print; fi
|
||||
|
||||
# Run htmltest
|
||||
htmltest || EXIT_CODE=$?
|
||||
|
||||
# Restore directories
|
||||
if [ -d "/tmp/htmltest-backup-$$/docs-old" ]; then mv /tmp/htmltest-backup-$$/docs-old public/; fi
|
||||
if [ -d "/tmp/htmltest-backup-$$/blog" ]; then mv /tmp/htmltest-backup-$$/blog public/; fi
|
||||
if [ -d "/tmp/htmltest-backup-$$/docs-old-print" ]; then mv /tmp/htmltest-backup-$$/docs-old-print public/_print/docs-old; fi
|
||||
rm -rf /tmp/htmltest-backup-$$
|
||||
|
||||
# Exit with the original exit code
|
||||
exit ${EXIT_CODE:-0}
# LikeC4 tasks
likec4:generate:
  desc: Generate LikeC4 webcomponent (includes all architecture projects)
  cmds:
    - npx likec4 codegen webcomponent --webcomponent-prefix likec4 --outfile static/js/likec4-webcomponent.js resources/edp-likec4 resources/doc-likec4

likec4:validate:
  desc: Validate LikeC4 models
  cmds:
    - echo "Validating EDP architecture models..."
    - npx likec4 validate --ignore-layout resources/edp-likec4
    - echo "Validating Documentation platform models..."
    - npx likec4 validate --ignore-layout resources/doc-likec4
    - echo "✓ All LikeC4 models validated successfully"

likec4:validate:layout:
  desc: Validate LikeC4 models including layout
  cmds:
    - echo "Validating EDP architecture models (including layout)..."
    - npx likec4 validate resources/edp-likec4
    - echo "Validating Documentation platform models (including layout)..."
    - npx likec4 validate resources/doc-likec4
    - echo "✓ All LikeC4 models and layouts validated successfully"

likec4:update:
  desc: Update LikeC4 to latest version
  cmds:
    - npm update likec4 --prefix resources/edp-likec4
    - npm update likec4 --prefix resources/doc-likec4
    - echo "✓ LikeC4 updated in both projects"

# Development tasks
deps:ensure-npm:
  desc: Ensure npm dependencies are installed
  sources:
    - package.json
    - package-lock.json
  generates:
    - node_modules/.package-lock.json
  cmds:
    - "{{.NPM_CMD}} ci"
  status:
    - test -d node_modules

deps:install:
  desc: Install all dependencies
  cmds:
    - "{{.NPM_CMD}} ci"
    - "{{.HUGO_CMD}} mod get -u"
    - "{{.HUGO_CMD}} mod tidy"

deps:update:
  desc: Update dependencies
  cmds:
    - devbox update
    - "{{.NPM_CMD}} update"
    - "{{.HUGO_CMD}} mod get -u"

# CI/CD
ci:
  desc: Run CI pipeline locally
  deps:
    - test

build:oci-image:
  desc: Build OCI/Docker image with versions from .env.versions
  cmds:
    - |
      set -a
      source .env.versions
      set +a
      echo "Building OCI image with versions:"
      echo "  NODE_VERSION=${NODE_VERSION}"
      echo "  GO_VERSION=${GO_VERSION}"
      echo "  HUGO_VERSION=${HUGO_VERSION}"
      docker build --network=host \
        --build-arg NODE_VERSION=${NODE_VERSION} \
        --build-arg GO_VERSION=${GO_VERSION} \
        --build-arg HUGO_VERSION=${HUGO_VERSION} \
        -t ipceicis-developerframework:latest \
        -t ipceicis-developerframework:$(git rev-parse --short HEAD) \
        .
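The `set -a` / `set +a` bracket in the build task turns on the shell's allexport mode, so every variable assigned while `.env.versions` is sourced is exported and therefore visible to child processes such as `docker build`. A minimal standalone sketch of the same mechanism (the file name and version values here are made up for illustration):

```shell
# Demonstrate allexport: variables assigned while `set -a` is active are
# exported, so child processes can read them without explicit `export`.
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
NODE_VERSION=22
HUGO_VERSION=0.135.0
EOF

set -a
. "$envfile"      # every assignment in the sourced file is exported
set +a

sh -c 'echo "NODE_VERSION=$NODE_VERSION"'   # prints NODE_VERSION=22
rm -f "$envfile"
```

Without the `set -a` bracket, plain `VAR=value` lines in a sourced file stay local to the current shell and the build args would be empty.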
test:oci-image:
  desc: Test the built OCI image
  deps:
    - build:oci-image
  cmds:
    - |
      echo "Starting container on port 8080..."
      docker run -d -p 8080:80 --name hugo-test ipceicis-developerframework:latest
      sleep 2
      echo "Testing endpoint..."
      curl -f http://localhost:8080 > /dev/null && echo "✓ Container is running and responding" || echo "✗ Container test failed"
      echo "Cleaning up..."
      docker stop hugo-test
      docker rm hugo-test
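The fixed `sleep 2` in the smoke test can be flaky on slow machines. A polling helper is more robust; this `wait_for` function is a hypothetical sketch, not part of the Taskfile:

```shell
# Hypothetical helper: retry a command up to N times, one second apart,
# succeeding as soon as the command succeeds.
wait_for() {
  tries="$1"; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

In the task above, `wait_for 10 curl -fs -o /dev/null http://localhost:8080` would replace both the `sleep 2` and the single `curl` attempt.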
@@ -1,28 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: docs
  namespace: argocd
  labels:
    env: prod
spec:
  project: default
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
  destination:
    name: in-cluster
    namespace: docs
  sources:
    - repoURL: https://edp.buildth.ing/DevFW-CICD/website-and-documentation
      targetRevision: HEAD
      path: argocd-stack/helm
      helm:
        parameters:
          - name: image.tag
            value: $ARGOCD_APP_REVISION_SHORT
@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -1,24 +0,0 @@
apiVersion: v2
name: helm
description: Deploy documentation to edp.buildth.ing

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
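The comment above calls for bumping `version:` on every chart change. A hypothetical helper for that (`bump_patch` is illustrative only, not part of this repo) could print the chart with the patch component incremented:

```shell
# Hypothetical helper: print a Chart.yaml with the patch component of the
# `version:` field incremented (e.g. 0.1.0 -> 0.1.1); other lines pass through.
bump_patch() {
  awk -F'[ .]' '/^version:/ { printf "version: %s.%s.%d\n", $2, $3, $4 + 1; next }
                { print }' "$1"
}
```

Redirecting its output back over the file (via a temp file) would complete the bump; a real setup would more likely use `helm package --version` or a release tool.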
@@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: docs
  name: docs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docs
  strategy: {}
  template:
    metadata:
      labels:
        app: docs
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          name: docs
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: docs
spec:
  selector:
    app: docs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docs
  annotations:
    cert-manager.io/cluster-issuer: main
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: docs.edp.buildth.ing
      http:
        paths:
          - backend:
              service:
                name: docs
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - docs.edp.buildth.ing
      secretName: docs-edp-buildth-ing-tls
@@ -1,4 +0,0 @@
image:
  repository: edp.buildth.ing/devfw-cicd/website-and-documentation
  tag: "UNKNOWN_TAG"
@@ -1,524 +1,6 @@
/*
 * Telekom-inspired Theme Variables
 * Based on https://edp.buildth.ing Telekom Design System
 */

// Bootstrap/Docsy Variable Overrides (must be before imports)
$primary: #E20074 !default;
$secondary: #B6B6B6 !default;
$success: #00b367 !default;
$info: #0070ad !default;
$warning: #ffcc00 !default;
$danger: #d52b1e !default;
$dark: #000000 !default;
$light: #f9fafb !default;

// Add styles or override variables from the theme here.

// Link colors
$link-color: #E20074 !default;
$link-hover-color: #C2005E !default;

// Body
$body-bg: #ffffff !default;
$body-color: #000000 !default;

// Navbar
$navbar-light-color: #000000 !default;
$navbar-light-hover-color: #E20074 !default;
$navbar-light-active-color: #E20074 !default;

// Fonts
$font-family-sans-serif: 'TeleNeo', -apple-system, "Segoe UI", system-ui, Roboto, "Helvetica Neue", Arial, sans-serif !default;
$font-family-base: $font-family-sans-serif !default;

// Telekom TeleNeo Fonts
@font-face {
  font-family: 'TeleNeo';
  src: url('../fonts/TeleNeoOffice-Thin.53627df9.ttf') format('truetype');
  font-weight: 300;
  font-style: normal;
}

@font-face {
  font-family: 'TeleNeo';
  src: url('../fonts/TeleNeoOffice-Regular.b0a2cff1.ttf') format('truetype');
  font-weight: 400;
  font-style: normal;
}

@font-face {
  font-family: 'TeleNeo';
  src: url('../fonts/TeleNeoOffice-Medium.79fb426d.ttf') format('truetype');
  font-weight: 500;
  font-style: normal;
}

@font-face {
  font-family: 'TeleNeo';
  src: url('../fonts/TeleNeoOffice-Bold.a7bb592b.ttf') format('truetype');
  font-weight: 600;
  font-style: normal;
}

@font-face {
  font-family: 'TeleNeo';
  src: url('../fonts/TeleNeoOffice-ExtraBold.fbe9fe42.ttf') format('truetype');
  font-weight: 700;
  font-style: normal;
}

// Primary Colors - Telekom Magenta
:root {
  // Telekom Primary Color (Magenta)
  --color-primary: #E20074;
  --color-primary-contrast: #ffffff;
  --color-primary-dark-1: #C2005E;
  --color-primary-dark-2: #A5004D;
  --color-primary-dark-3: #87003D;
  --color-primary-light-1: #E7338A;
  --color-primary-light-2: #EC66A1;
  --color-primary-light-3: #F299B8;
  --color-primary-light-4: #F7CCCF;
  --color-primary-light-5: #FCEFF6;
  --color-primary-light-6: #FFF5FA;

  // Secondary Colors
  --color-secondary: #B6B6B6;
  --color-secondary-dark: #6a7178;
  --color-secondary-light: #f9fafb;

  // Semantic Colors
  --color-success: #00b367;
  --color-warning: #ffcc00;
  --color-error: #d52b1e;
  --color-info: #0070ad;

  // Text Colors
  --color-text: #000000;
  --color-text-light: #666666;
  --color-text-dark: #000000;

  // Background Colors
  --color-body: #ffffff;
  --color-card: #F1F1F1;
  --color-hover: #F1F1F1;
  --color-active: #F1F1F1;

  // Navigation
  --color-nav-bg: #ffffff;
  --color-nav-text: #000000;
  --nav-border-color: #B6B6B6;

  // UI Elements
  --color-input-background: #ffffff;
  --color-input-border: #cccccc;
  --color-input-text: #000000;
  --color-box-body: #f2f2f2;
  --color-box-header: #e6e6e6;

  // Shadows & Overlays
  --color-shadow: rgba(0, 0, 0, 0.15);
  --color-overlay-backdrop: rgba(0, 0, 0, 0.5);

  // Font Settings
  --font-family-base: 'TeleNeo', -apple-system, "Segoe UI", system-ui, Roboto, "Helvetica Neue", Arial, sans-serif;
  --nav-text-font-weight: 600;
}

// Apply TeleNeo font globally
body {
  font-family: var(--font-family-base);
}

// Dark Mode Support
@media (prefers-color-scheme: dark) {
  :root {
    // Primary Colors remain same
    --color-primary: #E20074;
    --color-primary-contrast: #000000;

    // Dark Mode Adjustments
    --color-primary-dark-1: #E7338A;
    --color-primary-dark-2: #EC66A1;
    --color-primary-light-1: #C2005E;
    --color-primary-light-2: #A5004D;

    // Secondary Colors for Dark Mode
    --color-secondary: #1c1c1e;
    --color-secondary-dark: #4D4D4D;
    --color-secondary-light: #0D0D0D;

    // Text Colors
    --color-text: #FFFFFF;
    --color-text-light: #CCCCCC;
    --color-text-dark: #FFFFFF;

    // Background Colors
    --color-body: #000000;
    --color-card: #1c1c1e;
    --color-hover: #1c1c1e;
    --color-active: #0D0D0D;

    // Navigation
    --color-nav-bg: #000000;
    --color-nav-text: #FFFFFF;

    // UI Elements
    --color-input-background: #1c1c1e;
    --color-input-border: #4D4D4D;
    --color-input-text: #FFFFFF;
    --color-box-body: #000000;
    --color-box-header: #1A1A1A;

    // Semantic Colors for Dark Mode
    --color-success: #00A94F;
    --color-warning: #FFCC00;
    --color-error: #D52B1E;
    --color-info: #0070AD;

    // Shadows
    --color-shadow: rgba(0, 0, 0, 0.35);
  }
}

// Telekom-inspired Component Styling
.td-navbar {
  background-color: var(--color-nav-bg) !important;
  border-bottom: 1px solid var(--nav-border-color);

  .navbar-brand,
  .nav-link {
    color: var(--color-nav-text) !important;
    font-weight: var(--nav-text-font-weight);
  }

  .nav-link:hover,
  .nav-link.active {
    color: var(--color-primary) !important;
    background: transparent !important;
  }
}

// Primary Buttons - Telekom Magenta
.btn-primary {
  background-color: var(--color-primary) !important;
  border-color: var(--color-primary) !important;
  color: var(--color-primary-contrast) !important;

  &:hover {
    background-color: var(--color-primary-dark-1) !important;
    border-color: var(--color-primary-dark-1) !important;
  }

  &:active,
  &:focus {
    background-color: var(--color-primary-dark-2) !important;
    border-color: var(--color-primary-dark-2) !important;
  }
}

// Links
a {
  color: var(--color-primary);

  &:hover {
    color: var(--color-primary-dark-1);
  }
}

// Cards with Telekom Style
.card {
  background-color: var(--color-card);
  border: 1px solid var(--nav-border-color);

  &:hover {
    background-color: var(--color-hover);
  }
}

// Active/Selected States - REMOVED harsh black backgrounds
// Now using soft Telekom colors instead

// Sidebar Navigation
.td-sidebar-nav {
  .td-sidebar-link {
    &:hover {
      background-color: var(--color-primary-light-5) !important;
      color: var(--color-primary) !important;
    }

    &.active {
      background-color: var(--color-primary-light-6);
      color: var(--color-primary);
      font-weight: 500;
      border-left: 3px solid var(--color-primary);
    }
  }

  // All list items in sidebar
  li a {
    &:hover {
      background-color: var(--color-primary-light-5) !important;
      color: var(--color-primary) !important;
    }
  }
}

// Main navigation tabs
.td-sidebar {
  .td-sidebar-nav__section {
    .ul-1 > li > a {
      &.active,
      &.td-sidebar-link--active {
        background-color: var(--color-primary-light-6) !important;
        color: var(--color-primary) !important;
        font-weight: 500;
        border-left: 3px solid var(--color-primary);
      }

      &:hover {
        background-color: var(--color-primary-light-5) !important;
        color: var(--color-primary) !important;
      }
    }

    // All nested levels
    li a:hover {
      background-color: var(--color-primary-light-5) !important;
      color: var(--color-primary) !important;
    }
  }
}

// Top navigation breadcrumb area
.td-sidebar__inner {
  .td-sidebar-nav__section-title {
    &.active {
      background-color: var(--color-primary-light-5) !important;
      color: var(--color-primary) !important;
    }
  }
}

// Breadcrumb navigation in header
.breadcrumb {
  .active {
    color: var(--color-primary) !important;
  }

  a:hover {
    color: var(--color-primary-dark-1) !important;
  }
}

// Remove harsh black backgrounds globally
.active,
.selected {
  background-color: var(--color-primary-light-6) !important;
  color: var(--color-primary) !important;
}

// Softer hover states
*:hover {
  transition: all 0.2s ease-in-out;
}

// Override any dark/black hover backgrounds in navigation
nav, .td-sidebar, .td-sidebar-nav {
  a:hover,
  li:hover > a,
  .nav-link:hover {
    background-color: var(--color-primary-light-5) !important;
    color: var(--color-primary) !important;
  }
}

// Code Blocks
pre,
code {
  background-color: var(--color-box-body);
  border: 1px solid var(--color-input-border);
}

// Inline code (backticks in text)
code {
  background-color: var(--color-primary-light-6);
  color: var(--color-primary-dark-2);
  padding: 2px 6px;
  border-radius: 3px;
  border: 1px solid var(--color-primary-light-3);
  font-size: 0.9em;
}

// Code blocks (fenced code)
pre {
  background-color: var(--color-box-body);
  border: 1px solid var(--color-input-border);
  padding: 1rem;
  border-radius: 4px;

  code {
    background-color: transparent;
    border: none;
    padding: 0;
    color: inherit;
  }
}

// Tables
table {
  thead {
    background-color: var(--color-box-header);
  }

  tbody tr:hover {
    background-color: var(--color-hover);
  }
}

// Alerts/Notifications
.alert-success {
  background-color: var(--color-success);
  border-color: var(--color-success);
}

.alert-warning {
  background-color: var(--color-warning);
  border-color: var(--color-warning);
  color: #000000;
}

.alert-danger {
  background-color: var(--color-error);
  border-color: var(--color-error);
}

.alert-info {
  background-color: var(--color-info);
  border-color: var(--color-info);
}

// Docsy Homepage Components
.td-cover-block {
  background-color: var(--color-primary) !important;

  h1, h2, h3, h4, h5, h6, p {
    color: var(--color-primary-contrast) !important;
  }
}

// Lead blocks with primary color background
.td-block--primary,
section[class*="bg-primary"],
section[class*="color-primary"] {
  background-color: var(--color-primary) !important;

  * {
    color: #FFFFFF !important;
  }

  h1, h2, h3, h4, h5, h6, p, a, .lead {
    color: #FFFFFF !important;
    text-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
  }

  // Prevent white background on hover
  &:hover,
  *:hover {
    background-color: transparent !important;
    color: #FFFFFF !important;
  }

  a:hover {
    color: #FFFFFF !important;
    text-decoration: underline;
  }
}

.td-box {
  background-color: var(--color-card);
  border: 1px solid var(--nav-border-color);

  &:hover {
    background-color: var(--color-hover);
    border-color: var(--color-primary);
  }

  &--primary {
    background-color: var(--color-primary);
    border-color: var(--color-primary);
    color: var(--color-primary-contrast);
  }

  &--secondary {
    background-color: var(--color-secondary);
    border-color: var(--color-secondary);
  }
}

// Hero/Cover sections
.td-cover {
  background: linear-gradient(135deg, var(--color-primary) 0%, var(--color-primary-dark-2) 100%);

  .display-1, .display-2, .display-3, .display-4 {
    color: var(--color-primary-contrast) !important;
  }
}

// Section backgrounds
.td-section {
  &--primary {
    background-color: var(--color-primary-light-6);
  }

  &--secondary {
    background-color: var(--color-secondary-light);
  }
}

// Feature boxes
.td-feature {
  border: 1px solid var(--nav-border-color);
  background-color: var(--color-card);

  &:hover {
    border-color: var(--color-primary);
    box-shadow: 0 4px 12px var(--color-shadow);
  }
}

// Feature blocks on homepage (blocks/feature)
.td-box--dark,
.td-box--colored,
section[class*="bg-dark"] .td-box,
section[class*="color-dark"] .td-box {
  .h2, .h3, .h4, .h5, h2, h3, h4, h5, p, a {
    color: #FFFFFF !important;
  }

  &:hover {
    background-color: rgba(0, 0, 0, 0.8) !important;

    .h2, .h3, .h4, .h5, h2, h3, h4, h5, p, a {
      color: #FFFFFF !important;
    }
  }
}

// Ensure text stays visible in dark sections
section[class*="bg-dark"],
section[class*="color-dark"] {
  * {
    color: #FFFFFF !important;
  }

  .td-box, .card {
    &:hover {
      background-color: rgba(0, 0, 0, 0.8) !important;

      * {
        color: #FFFFFF !important;
      }
    }
  }
}

@@ -3,84 +3,5 @@ title: IPCEI-CIS Developer Framework
---

{{< blocks/cover title="IPCEI-CIS Developer Framework" image_anchor="top" height="full" >}}
<p class="lead mt-5">
A comprehensive enterprise development platform enabling teams to build, deploy, and operate cloud-native applications with ease.
</p>
{{< blocks/link-down color="info" >}}

{{< /blocks/cover >}}

{{% blocks/lead color="primary" %}}
The IPCEI-CIS Developer Framework provides everything you need to deliver modern applications at scale.
Built on open standards and battle-tested technologies.
{{% /blocks/lead %}}

{{% blocks/section color="dark" type="row" %}}

{{% blocks/feature icon="fa-solid fa-diagram-project" title="Edge Developer Platform (EDP)" url="/docs/edp/" %}}
Understand EDP as the developer platform hub (Forgejo, CI/CD, deployment, operations) and how it connects inner loop and outer loop workflows.

**Dive into EDP docs →**
{{% /blocks/feature %}}

{{% blocks/feature icon="fa-solid fa-cloud" title="EdgeConnect Cloud" url="/docs/edgeconnect/" %}}
Learn what EdgeConnect is, how it is consumed via stable entry points (CLI, SDK, Terraform), and how EDP integrates with it as a deployment target.

**Explore EdgeConnect →**
{{% /blocks/feature %}}

{{% blocks/feature icon="fa-solid fa-scale-balanced" title="Governance" url="/docs/governance/" %}}
Read the project history, decision context, and audit-oriented traceability to primary sources and repository artifacts.

**Go to Governance →**
{{% /blocks/feature %}}

{{% /blocks/section %}}

{{% blocks/section color="white" %}}

## What's in the Platform?

<div class="row">
<div class="col-lg-4 mb-4">

### 🚀 Developer Experience

* **Backstage Portal** - Self-service platform
* **GitOps Workflows** - Automated deployments
* **Golden Paths** - Best practices built-in

</div>
<div class="col-lg-4 mb-4">

### 🛠️ Infrastructure as Code

* **Crossplane** - Cloud resource provisioning
* **ArgoCD** - Declarative GitOps
* **Terraform** - Infrastructure automation

</div>
<div class="col-lg-4 mb-4">

### 📊 Observability

* **Prometheus & Grafana** - Metrics & dashboards
* **Loki** - Log aggregation
* **OpenTelemetry** - Distributed tracing

</div>
</div>

{{% /blocks/section %}}

{{% blocks/section color="light" %}}

## Get Started

Whether you're a **platform engineer**, **application developer**, or **auditor**, we have resources for you:

* 📖 Start at [Documentation](/docs/)
* 🧭 Read [Edge Developer Platform (EDP)](/docs/edp/)
* ☁️ Read [EdgeConnect Cloud](/docs/edgeconnect/)
* 🧾 Read [Governance](/docs/governance/)

{{% /blocks/section %}}

content/en/blog/240823-archsession.md (new file)
@@ -0,0 +1,40 @@
---
title: Architecture session
weight: 20
---

## Platform Generics

* https://tag-app-delivery.cncf.io/whitepapers/platforms/#capabilities-of-platforms
* https://tag-app-delivery.cncf.io/whitepapers/platform-eng-maturity-model/
* https://humanitec.com/blog/wtf-internal-developer-platform-vs-internal-developer-portal-vs-paas

## Reference architecture + portfolio

* https://platformengineering.org/blog/create-your-own-platform-engineering-reference-architectures
* https://humanitec.com/reference-architectures
* https://www.youtube.com/watch?v=AimSwK8Mw-U

## Platform Portfolio

### Viktor Farcic

* https://technologyconversations.com/
* https://technologyconversations.com/2024/01/08/the-best-devops-tools-platforms-and-services-in-2024/

### Internal developer platform

* https://internaldeveloperplatform.org/core-components/

### Workflow / CI/CD

* https://cnoe.io/blog/optimizing-data-quality-in-dev-portals
@@ -1,23 +0,0 @@
---
title: Legacy Documentation
linkTitle: Docs (Old)
menu:
  main:
    weight: 50
weight: 50
cascade:
  - type: docs
---

# Legacy Documentation

This section contains the previous version of the documentation for reference purposes.

**Note**: This documentation is archived and may be outdated. Please refer to the main [Documentation](../docs/) section for current information.

## Available Sections

* [Architecture](architecture/) - System architecture and diagrams
* [Documentation](documentation/) - Meta documentation about the documentation system
* [Platform Overview](platform-overview/) - Overview document
* [v1 (Legacy)](v1/) - Original v1 documentation
@@ -1,9 +0,0 @@
---
title: "Architecture"
linkTitle: "Architecture"
weight: 3
description: >
  System architecture documentation and interactive diagrams
---

This section contains architecture documentation for the IPCEI-CIS Developer Framework, including interactive C4 architecture diagrams.
@@ -1,79 +0,0 @@
---
title: "High Level Architecture"
linkTitle: "High Level Architecture"
weight: 1
description: >
  Interactive high-level architecture overview of the Enterprise Development Platform
---

This document describes the high-level architecture of our Enterprise Development Platform (EDP) system.

## Interactive Architecture Diagram

{{< likec4-view view="otc-faas" project="architecture" title="Enterprise Development Platform - OTC FaaS Deployment Architecture" >}}

{{< alert title="Interactive Diagram" >}}
The diagram above is interactive when viewed in a compatible browser.
You can click on components to explore the architecture details.

**Note:** The interactive diagram requires the LikeC4 webcomponent to be generated.
See the [setup instructions]({{< ref "/docs-old/architecture/setup" >}}) for details.
{{< /alert >}}

## Architecture Overview

The Enterprise Development Platform consists of several key components working together to provide a comprehensive development and deployment environment.

### Key Components

1. **OTC Foundry** - Central management and orchestration layer
2. **Per-Tenant EDP** - Isolated development environments for each tenant
3. **FaaS Environment** - Function-as-a-Service deployment targets on Open Telekom Cloud
4. **Cloud Services** - Managed services including databases, storage, and monitoring

### Deployment Environments

- **Development Environment** (`*.t09.de`) - For platform team development and testing
- **Production Environment** (`*.buildth.ing`) - For production workloads and tenant services

## Component Details

The interactive diagram above shows the relationships between different components and how they interact within the system architecture. You can explore the diagram by clicking on different elements to see more details.

### Infrastructure Components

- **Kubernetes Clusters** - Container orchestration using OTC CCE (Cloud Container Engine)
- **ArgoCD** - GitOps continuous deployment and application lifecycle management
- **Forgejo** - Git repository management and CI/CD pipelines
- **Observability Stack** - Monitoring (Prometheus, Grafana), logging (Loki), and alerting

### Security and Management

- **Keycloak** - Identity and access management (IAM)
- **OpenBao** - Secrets management (HashiCorp Vault fork)
- **External Secrets Operator** - Kubernetes secrets integration
- **Crossplane** - Infrastructure as Code and cloud resource provisioning

### Developer Experience

- **Backstage** - Internal developer portal and service catalog
- **Forgejo Actions** - CI/CD pipeline execution
- **Development Workflows** - GitOps-based inner and outer loop workflows

## Setup and Maintenance

To update or modify the architecture diagrams:

1. Edit the `.c4` files in `resources/edp-likec4/`
2. Regenerate the webcomponent:

   ```bash
   cd resources/edp-likec4
   npx likec4 codegen webcomponent \
     --webcomponent-prefix likec4 \
     --outfile ../../static/js/likec4-webcomponent.js
   ```

3. Commit both the model changes and the regenerated JavaScript file

For more information, see the [LikeC4 Integration Guide]({{< ref "/docs-old/architecture/setup" >}}).
@@ -1,297 +0,0 @@
---
title: "LikeC4 Setup Guide"
linkTitle: "Setup"
weight: 10
description: >
  How to set up and use LikeC4 interactive architecture diagrams
---

This guide explains how to set up and use LikeC4 interactive architecture diagrams in this documentation.

## Overview

LikeC4 enables you to create interactive C4 architecture diagrams as code. The diagrams are defined in `.c4` files and compiled into a web component that can be embedded in any HTML page.

## Prerequisites

- Node.js (v18 or later)
- npm or yarn

## Initial Setup

### 1. Install Dependencies

Navigate to the LikeC4 directory and install dependencies:

```bash
cd resources/edp-likec4
npm install
```

### 2. Generate the Web Component

Create the web component that Hugo will load:

```bash
npx likec4 codegen webcomponent \
  --webcomponent-prefix likec4 \
  --outfile ../../static/js/likec4-webcomponent.js
```

This command:

- Reads all `.c4` files from `models/` and `views/`
- Generates a single JavaScript file with all architecture views
- Outputs to `static/js/likec4-webcomponent.js`

### 3. Verify Integration

The integration should already be configured in:

- `hugo.toml` - Contains `params.likec4.enable = true`
- `layouts/partials/hooks/head-end.html` - Loads CSS and loader script
- `static/css/likec4-styles.css` - Diagram styling
- `static/js/likec4-loader.js` - Dynamic module loader
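
A quick shell check, run from the repository root, can confirm the generated assets from the steps above exist before building the site. The `check_likec4_assets` helper is a hypothetical convenience, not part of the repo:

```shell
# Report any missing LikeC4 assets; prints nothing when all are present.
check_likec4_assets() {
  for f in static/js/likec4-webcomponent.js \
           static/js/likec4-loader.js \
           static/css/likec4-styles.css; do
    [ -f "$f" ] || echo "missing: $f"
  done
}
```

If `static/js/likec4-webcomponent.js` is reported missing, rerun the codegen command from step 2.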
|
||||
|
||||
## Directory Structure
|
||||
|
||||
```plaintext
|
||||
resources/edp-likec4/
|
||||
├── models/ # C4 model definitions
|
||||
│ ├── components/ # Component models
|
||||
│ ├── containers/ # Container models
|
||||
│ ├── context/ # System context
|
||||
│ └── code/ # Code-level workflows
|
||||
├── views/ # View definitions
|
||||
│ ├── deployment/ # Deployment views
|
||||
│ ├── edp/ # EDP views
|
||||
│ ├── high-level-concept/ # Conceptual views
|
||||
│ └── dynamic/ # Process flows
|
||||
├── package.json # Dependencies
|
||||
└── INTEGRATION.md # Integration docs
|
||||
```
|
||||
|
||||
## Using in Documentation
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Add this to any Markdown file:
|
||||
|
||||
```html
|
||||
<div class="likec4-container">
|
||||
<div class="likec4-header">
|
||||
Your Diagram Title
|
||||
</div>
|
||||
<likec4-view view-id="YOUR-VIEW-ID" browser="true"></likec4-view>
|
||||
<div class="likec4-loading" id="likec4-loading">
|
||||
Loading architecture diagram...
|
||||
</div>
|
||||
</div>
|
||||
```
|
||||
|
||||
### Available View IDs
|
||||
|
||||
To find available view IDs, search the `.c4` files:
|
||||
|
||||
```bash
|
||||
cd resources/edp-likec4
|
||||
grep -r "view\s\+\w" views/ models/ --include="*.c4"
|
||||
```
|
||||
|
||||
Common views:
|
||||
|
||||
- `otc-faas` - OTC FaaS deployment
|
||||
- `edp` - EDP overview
|
||||
- `landscape` - Developer landscape
|
||||
- `edpbuilderworkflow` - Builder workflow
|
||||
- `keycloak` - Keycloak component
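To see how a grep like the one above picks up view declarations, here is a self-contained sketch against a throwaway sample `.c4` file (the file content and `/tmp` path below are invented purely for illustration):

```shell
# Create a throwaway sample .c4 file with one view declaration (invented example)
mkdir -p /tmp/likec4-demo/views
cat > /tmp/likec4-demo/views/sample.c4 <<'EOF'
views {
  view otc-faas {
    title "OTC FaaS deployment"
  }
}
EOF

# Extract just the view IDs: match "view <id>" and print the second word
grep -rhEo 'view [A-Za-z][A-Za-z0-9-]*' /tmp/likec4-demo/views --include="*.c4" | awk '{print $2}'
# → otc-faas
```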

### With Hugo Alert

Combine with Docsy alerts for better UX:

```markdown
<div class="likec4-container">
  <div class="likec4-header">
    System Architecture
  </div>
  <likec4-view view-id="otc-faas" browser="true"></likec4-view>
  <div class="likec4-loading" id="likec4-loading">
    Loading...
  </div>
</div>

{{</* alert title="Note" */>}}
Click on components in the diagram to explore the architecture.
{{</* /alert */>}}
```

## Workflow for Changes

### 1. Modify Architecture Models

Edit the `.c4` files in `resources/edp-likec4/`:

```bash
# Edit a model
vi resources/edp-likec4/models/containers/argocd.c4

# Or edit a view
vi resources/edp-likec4/views/deployment/otc/otc-faas.c4
```

### 2. Preview Changes Locally

Use the LikeC4 CLI to preview:

```bash
cd resources/edp-likec4

# Start preview server
npx likec4 start

# Opens browser at http://localhost:5173
```

### 3. Regenerate the Web Component

After making changes:

```bash
cd resources/edp-likec4
npx likec4 codegen webcomponent \
  --webcomponent-prefix likec4 \
  --outfile ../../static/js/likec4-webcomponent.js
```

### 4. Test in Hugo

Start the Hugo development server:

```bash
# From repository root
hugo server -D

# Open http://localhost:1313
```

### 5. Commit Changes

Commit both the model files and the regenerated web component:

```bash
git add resources/edp-likec4/
git add static/js/likec4-webcomponent.js
git commit -m "feat: update architecture diagrams"
```

## Advanced Configuration

### Custom Styling

Modify `static/css/likec4-styles.css` to customize appearance:

```css
.likec4-container {
  height: 800px;      /* Adjust height */
  border-radius: 8px; /* Rounder corners */
}
```

### Multiple Diagrams Per Page

You can include multiple diagrams on a single page:

```html
<!-- First diagram -->
<div class="likec4-container">
  <div class="likec4-header">Deployment View</div>
  <likec4-view view-id="otc-faas" browser="true"></likec4-view>
  <div class="likec4-loading">Loading...</div>
</div>

<!-- Second diagram -->
<div class="likec4-container">
  <div class="likec4-header">Component View</div>
  <likec4-view view-id="edp" browser="true"></likec4-view>
  <div class="likec4-loading">Loading...</div>
</div>
```

### Disable for Specific Pages

Add to the page front matter:

```yaml
---
title: "My Page"
params:
  disable_likec4: true
---
```

Then update `layouts/partials/hooks/head-end.html`:

```html
{{ if and .Site.Params.likec4.enable (not .Params.disable_likec4) }}
<!-- LikeC4 scripts -->
{{ end }}
```

## Troubleshooting

### Diagram Not Loading

1. **Check the browser console** (F12 → Console)
2. **Verify the web component exists:**

   ```bash
   ls -lh static/js/likec4-webcomponent.js
   ```

3. **Regenerate it if missing:**

   ```bash
   cd resources/edp-likec4
   npm install
   npx likec4 codegen webcomponent \
     --webcomponent-prefix likec4 \
     --outfile ../../static/js/likec4-webcomponent.js
   ```

### View Not Found

- Check that the view ID matches exactly (it is case-sensitive)
- Search for the view in the `.c4` files:

  ```bash
  grep -r "view otc-faas" resources/edp-likec4/
  ```

### Styling Issues

- Clear the browser cache (Ctrl+Shift+R)
- Check that `static/css/likec4-styles.css` is loaded (browser DevTools → Network)

### Build Errors

If LikeC4 codegen fails:

```bash
cd resources/edp-likec4
rm -rf node_modules package-lock.json
npm install
```

## Resources

- [LikeC4 Documentation](https://likec4.dev/)
- [C4 Model](https://c4model.com/)
- [Web Components](https://developer.mozilla.org/en-US/docs/Web/Web_Components)
- [Hugo Documentation](https://gohugo.io/documentation/)

## Migration Notes

This LikeC4 integration was migrated from the edp-doc repository. This repository (`ipceicis-developerframework`) is now the primary source for architecture models.

The edp-doc repository can reference these models via a git submodule if needed.
@ -1,126 +0,0 @@

# CI/CD pipeline tools for composable pipeline

## Context and Problem Statement

To build a composable pipeline that provides a golden path and reusable components, we need to define the tools that will execute the pipeline.

ArgoCD is considered set in stone as the tool to manage the deployment of applications. However, the tools to compose and execute the pipeline are still up for debate.

> Note: The pipeline will use many other tools to perform certain actions such as testing, building, and deploying. This ADR focuses on the tools used to compose and execute the pipeline itself.

In general, there are two decisions to make:

* What tools should we use to execute the pipeline?
* What tools should we use to compose the pipeline?

The following use cases should be considered for this decision:

* **User who wants to manage their own runners (open question)**
* User who only wants to use our golden path
* User who wants to use our golden path and add custom actions
* User who wants to use their own templates and import some of our actions
* User who wants to import an existing GitHub repository with a pipeline

## Considered Options

* Argo Workflows + Events
* Argo Workflows + Events + additional composition tool
* Forgejo Actions
* Forgejo Actions + additional composition tool
* Dagger (as engine)
* Shuttle (as engine)

## Decision Outcome

TBD

## Pros and Cons of the Options

### Argo Workflows + Events

#### Pro

* Integration with ArgoCD
* Ability to trigger additional workflows based on events
* Level of maturity and community support

#### Con

* Ability to self-host runners is unclear
* The way pipeline composition works (based on Kubernetes CRDs):
  * Templates must be available in the cluster where the pipelines are executed, so any imported templates must be applied to the cluster before the pipeline can run and cannot simply reference a repository
  * This makes it difficult to import existing templates from other repositories when using self-hosted runners
  * It also makes it difficult to use our golden path, or at least we will need to provide a way to import our golden path into the cluster
  * It also makes an "every component has its own repository" split very difficult
* Additional UI to manage the pipeline
* Additional complexity

### Argo Workflows + Events + Additional Composition Tool

#### Pro

* Composability can be offloaded to another tool

#### Con

* All cons of the previous option (except composability)
* Additional complexity from adding another tool

### Forgejo Actions

#### Pro

* Close compatibility with GitHub Actions, providing a familiar interface for developers and a vast catalog of actions to choose from
* Ability to compose pipelines without relying on another tool
* Self-hosting of runners is possible
* Every component can have its own repository and use different tools (e.g. written in Go, Bash, Python, etc.)

#### Con

* Level of maturity - will require additional investment to provide a production-grade system

### Forgejo Actions + Additional Tool

#### Pro

* It may be possible to use GitHub Actions alongside another tool

#### Con

* Additional complexity from adding another tool

### Shuttle

#### Pro

* Possibility to clearly define interfaces for pipeline steps
* Relatively simple

#### Con

* Backed by essentially a single company
* **Centralized templates**, so no mechanism for composing pipelines from multiple repositories

### Dagger

#### Pro

* Pipeline as code
* If it runs, it should run anywhere and produce the "same" / reasonably stable results
* Build environments are defined within containers / the Dagger config; Dagger is the only dependency one has to install on a machine
* DX is extremely nice, especially when debugging (image) builds; type safety, thanks to the ability to write your build in a strongly typed language
* Additional tooling, like Trivy, can be added to a build pipeline with little effort thanks to containers and existing plugins/wrappers
* You can create complex test environments similar to Testcontainers and Docker Compose

#### Con

* Relies heavily on containers, which might not be available in some environments (due to policy etc.); this also affects reproducibility and verifiability
* As a developer you need to properly understand containers
* The Dagger engine has to run privileged, locally and/or in the cloud, which might be a blocker or at least a major pain point

#### Suggestion Patrick

* Dagger is heavyweight and might not be as productive in a dev workflow as it seems (LSP setup etc.)
* It might be too opinionated to force on teams, especially since it is not mainstream enough; the community might be too small
* It feels like Dagger gets you 95% of the way, but the remaining 5% are a real struggle
* If we like it, we should check its popularity in the dev community before considering it further, as it has a direct impact on teams and their preferences
@ -1,5 +0,0 @@

# ADRs

Architecture Decision Records (ADRs) capture the important architectural decisions made during the development of a project, documenting the context, the decision, and its consequences. They keep track of the architectural decisions made in a project and communicate them to the team.

The [Markdown Architectural Decision Records](https://adr.github.io/madr/) (MADR) format is a simple, easy-to-use format for writing ADRs in Markdown.
@ -1,67 +0,0 @@

<!-- we need to disable MD025 because the homepage (see above) uses a different heading, "ADR Template", than the one foreseen in the template -->
<!-- markdownlint-disable-next-line MD025 -->
# {short title, representative of solved problem and found solution}

## Context and Problem Statement

{Describe the context and problem statement, e.g., in free form using two to three sentences or in the form of an illustrative story. You may want to articulate the problem in the form of a question and add links to collaboration boards or issue management systems.}

<!-- This is an optional element. Feel free to remove. -->
## Decision Drivers

* {decision driver 1, e.g., a force, facing concern, …}
* {decision driver 2, e.g., a force, facing concern, …}
* … <!-- numbers of drivers can vary -->

## Considered Options

* {title of option 1}
* {title of option 2}
* {title of option 3}
* … <!-- numbers of options can vary -->

## Decision Outcome

Chosen option: "{title of option 1}", because {justification, e.g., only option which meets k.o. criterion decision driver | which resolves force {force} | … | comes out best (see below)}.

<!-- This is an optional element. Feel free to remove. -->
### Consequences

* Good, because {positive consequence, e.g., improvement of one or more desired qualities, …}
* Bad, because {negative consequence, e.g., compromising one or more desired qualities, …}
* … <!-- numbers of consequences can vary -->

<!-- This is an optional element. Feel free to remove. -->
### Confirmation

{Describe how the implementation of/compliance with the ADR can/will be confirmed. Is the design that was decided for, and its implementation, in line with the decision made? E.g., a design/code review or a test with a library such as ArchUnit can help validate this. Note that although we classify this element as optional, it is included in many ADRs.}

<!-- This is an optional element. Feel free to remove. -->
## Pros and Cons of the Options

### {title of option 1}

<!-- This is an optional element. Feel free to remove. -->
{example | description | pointer to more information | …}

* Good, because {argument a}
* Good, because {argument b}
<!-- use "neutral" if the given argument weights neither for good nor bad -->
* Neutral, because {argument c}
* Bad, because {argument d}
* … <!-- numbers of pros and cons can vary -->

### {title of other option}

{example | description | pointer to more information | …}

* Good, because {argument a}
* Good, because {argument b}
* Neutral, because {argument c}
* Bad, because {argument d}
* …

<!-- This is an optional element. Feel free to remove. -->
## More Information

{You might want to provide additional evidence/confidence for the decision outcome here, document the team agreement on the decision, and/or define when/how the decision should be realized and if/when it should be revisited. Links to other decisions and resources might appear here as well.}
@ -1,43 +0,0 @@

---
title: "Documentation About Documentation"
linkTitle: "Documentation"
weight: 10
description: >
  Learn how to create, maintain, and publish documentation for the developer platform.
---

Welcome to the meta-documentation! This section explains how our documentation platform works and guides you through the technicalWriter role.

## What is a Technical Writer?

A **Technical Writer** is responsible for creating, maintaining, and publishing the developer platform documentation. This includes:

- Writing and updating content in Markdown
- Creating architecture diagrams with LikeC4
- Testing locally before committing
- Following the CI/CD pipeline to production

## Documentation Platform Architecture

Our documentation is built on a modern stack:

- **Hugo** with the **Docsy** theme for static site generation
- **LikeC4** for architecture visualization
- **Taskfile** for local development automation
- **GitHub Actions** for continuous testing
- **Edge deployment** for hosting

### System Overview

{{< likec4-view view="overview" project="documentation-platform" >}}

This high-level view shows all major components of the documentation platform.

## Getting Started

Continue to the next sections to learn about:

1. [Local Development](local-development/) - How to work on documentation locally
2. [Testing](testing/) - Quality assurance processes
3. [CI/CD Pipeline](cicd/) - Automated testing and deployment
4. [Publishing](publishing/) - How documentation reaches production
@ -1,264 +0,0 @@

---
title: "CI/CD Pipeline"
linkTitle: "CI/CD"
weight: 40
description: >
  Automated testing and container build process.
---

## Overview

Our documentation uses a continuous integration and deployment pipeline to ensure quality and automate deployment.

{{< likec4-view view="cicdPipeline" project="documentation-platform" >}}

## GitHub Actions Workflow

The CI/CD pipeline is defined in `.github/workflows/test.yml` and runs on:

- **Pushes to the `main` branch**
- **Pull requests to the `main` branch**

### Workflow Steps

#### 1. Checkout Code

```yaml
- uses: actions/checkout@v4
  with:
    submodules: recursive
    fetch-depth: 0
```

- Clones the repository with full history
- Includes Git submodules (Hugo modules)

#### 2. Setup Hugo

```yaml
- name: Setup Hugo
  uses: peaceiris/actions-hugo@v3
  with:
    hugo-version: 'latest'
    extended: true
```

- Installs Hugo Extended
- Uses the latest stable version

#### 3. Setup Node.js

```yaml
- name: Setup Node
  uses: actions/setup-node@v4
  with:
    node-version: '24'
    cache: 'npm'
```

- Installs Node.js v24
- Caches npm dependencies for faster builds

#### 4. Install Dependencies

```bash
npm ci
go install github.com/wjdp/htmltest@latest
```

- Installs npm packages (markdownlint, htmlvalidate)
- Installs htmltest for link checking

#### 5. Run Tests

```bash
npm run test:build
npm run test:markdown
npm run test:html
```

- Validates the Hugo build
- Lints Markdown files
- Validates the HTML output

#### 6. Link Checking

```yaml
- name: Run link checker
  run: htmltest
  continue-on-error: true
```

- Checks all links
- Continues even if links fail (soft requirement)

#### 7. Upload Results

```yaml
- name: Upload htmltest results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: htmltest-report
    path: tmp/.htmltest/
```

- Uploads the link check report
- Available for download from GitHub Actions

## Container Build Process

After tests pass, a container image is built:

```bash
task build:oci-image
```

### Build Process

1. **Reads version information** from `.env.versions`:
   - `NODE_VERSION`
   - `GO_VERSION`
   - `HUGO_VERSION`

2. **Builds the Docker image** using the `Dockerfile`:
   - Multi-stage build
   - Hugo generates the static site
   - Nginx serves the content

3. **Tags the image** with:
   - `latest`
   - Git commit SHA (short)
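
The tagging scheme above can be sketched in shell. The image name `edpdoc` and the literal SHA value are placeholders for illustration; in CI the short SHA would come from `git rev-parse --short HEAD`:

```shell
# Sketch of the image tagging scheme (names are illustrative placeholders)
IMAGE="edpdoc"
GIT_SHA="9581133a3d"   # in CI: GIT_SHA=$(git rev-parse --short HEAD)
echo "${IMAGE}:latest"
echo "${IMAGE}:${GIT_SHA}"
# → edpdoc:latest
# → edpdoc:9581133a3d
```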

### Dockerfile Structure

```dockerfile
# Build stage
FROM node:${NODE_VERSION} as builder
# Install Hugo, build dependencies
# Run: hugo --gc --minify
# Output: public/ directory

# Runtime stage
FROM nginx:alpine
# Copy public/ to /usr/share/nginx/html/
# Configure Nginx
```

### Testing the Container

```bash
task test:oci-image
```

This:

1. Builds the image
2. Starts the container on port 8080
3. Tests the HTTP endpoint
4. Cleans up the container

## Package.json Scripts

The `package.json` defines the test scripts:

```json
{
  "scripts": {
    "test:build": "hugo --gc --minify --logLevel info",
    "test:markdown": "markdownlint 'content/**/*.md'",
    "test:html": "htmlvalidate 'public/**/*.html'"
  }
}
```

## Running CI Locally

Simulate the CI environment locally:

```bash
task ci
```

This runs the same tests as GitHub Actions.

## Monitoring CI Results

### Successful Build

✅ All tests pass → Ready to deploy

### Failed Build

❌ Tests fail:

1. Click on the failed workflow in GitHub Actions
2. Expand the failed step
3. Read the error message
4. Fix locally: `task test:<specific-test>`
5. Commit and push the fix

### Viewing Artifacts

1. Go to GitHub Actions
2. Click on the workflow run
3. Scroll to the "Artifacts" section
4. Download `htmltest-report`

## Best Practices

1. **Don't push to main directly** - Use feature branches and PRs
2. **Wait for CI before merging** - A green checkmark is required
3. **Fix broken builds immediately** - Don't let main stay red
4. **Review CI logs** - Understand why tests fail
5. **Update dependencies** - Keep versions current in `.env.versions`

## Continuous Deployment

After successful CI:

1. The container image is built
2. The image is pushed to the registry
3. The deployment process begins (see [Publishing](../publishing/))

## Troubleshooting

### Tests pass locally but fail in CI

**Possible causes:**

- Different Hugo version
- Different Node.js version
- Missing dependencies
- Environment-specific issues

**Solution:** Check the versions in `.env.versions` and ensure your local setup matches CI.

### Build timeouts

**Possible causes:**

- Link checker taking too long
- Large number of external links

**Solution:**

- Use `continue-on-error: true` for link checks
- Configure `.htmltest.yml` to skip slow checks

### Cache issues

**Solution:** Clear the GitHub Actions cache:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```

Update the cache key to force a refresh.
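
The cache key works because it embeds a hash of `package-lock.json`: any lockfile change (or a bumped key prefix) yields a new key, and therefore a fresh cache. A shell sketch of that idea (the `/tmp` path and the `linux-node` prefix are invented for illustration; GitHub's `hashFiles` expression also uses SHA-256 internally):

```shell
# The cache key is a prefix plus a hash of package-lock.json, so changing
# the lockfile produces a different key. Paths/prefix are illustrative only.
printf 'lockfile contents v1' > /tmp/package-lock.json
KEY1="linux-node-$(sha256sum /tmp/package-lock.json | cut -c1-12)"
printf 'lockfile contents v2' > /tmp/package-lock.json
KEY2="linux-node-$(sha256sum /tmp/package-lock.json | cut -c1-12)"
[ "$KEY1" != "$KEY2" ] && echo "cache key changed"
```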

## Next Steps

Learn about [deployment to the Edge environment](../publishing/).
@ -1,234 +0,0 @@

---
title: "Local Development"
linkTitle: "Local Development"
weight: 20
description: >
  Set up your local environment and learn the technicalWriter workflow.
---

## Prerequisites

Before you start, ensure you have:

- **Devbox** or the following tools installed:
  - Hugo Extended (latest version)
  - Node.js (v24+)
  - Go (for htmltest)
  - Git

## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd ipceicis-developerframework
   ```

2. Install dependencies:

   ```bash
   task deps:install
   ```

3. **If using Devbox**, enter the Devbox shell:

   ```bash
   devbox shell
   ```

   This ensures all tools (Hugo, Node.js, Go) are available in the correct versions.

## Local Development Workflow

{{< likec4-view view="localDevelopment" project="documentation-platform" >}}

### Starting the Development Server

The easiest way to work locally is to start the Hugo development server:

```bash
task serve
```

This will:

- Generate build information (git commit, version)
- Start the Hugo server on `http://localhost:1313`
- Enable hot reload - changes appear instantly in the browser

### Content Structure

```text
content/
└── en/                      # English content
    ├── _index.md            # Homepage
    ├── blog/                # Blog posts
    └── docs/                # Documentation
        ├── architecture/    # Architecture docs
        ├── decisions/       # ADRs
        └── v1/              # Version-specific docs
```

### Creating Content

1. **Add a new documentation page:**

   ```bash
   # Create a new markdown file
   vim content/en/docs/your-topic/_index.md
   ```

2. **Add frontmatter:**

   ```yaml
   ---
   title: "Your Topic"
   linkTitle: "Your Topic"
   weight: 10
   description: >
     Brief description of your topic.
   ---
   ```

3. **Write your content** in Markdown

4. **Preview changes** - they appear immediately if `task serve` is running

### Creating Architecture Diagrams

Architecture diagrams are created with LikeC4:

1. **Navigate to the appropriate LikeC4 project:**
   - `resources/edp-likec4/` - Platform architecture
   - `resources/doc-likec4/` - Documentation platform architecture

2. **Edit or create `.c4` files** with your model

   Example: Create a simple view in `resources/edp-likec4/views/my-view.c4`:

   ```likec4
   specification {
     element myperson
     element mysystem
   }

   model {
     customer = myperson 'Customer' {
       description 'End user of the platform'
     }

     mySystem = mysystem 'My System' {
       description 'Example system component'
     }

     customer -> mySystem 'uses'
   }

   views {
     view myCustomView {
       title "My Custom Architecture View"

       include customer
       include mySystem

       autoLayout TopBottom
     }
   }
   ```

3. **Regenerate the web components:**

   ```bash
   task likec4:generate
   ```

4. **Embed diagrams in Markdown:**

   ```markdown
   {{</* likec4-view view="myCustomView" project="architecture" title="My Custom Architecture View" */>}}
   ```

**Finding available view IDs:**

- Open the `.c4` files in your project directory
- Look for `view <viewId> {` declarations
- The `<viewId>` is what you use in the `view` parameter
- Or use: `grep -r "^view " resources/edp-likec4/ --include="*.c4"`
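
Note that the `^view` anchor only matches declarations at the very start of a line; views nested inside a `views { … }` block are usually indented, so a word-boundary pattern may catch more. A self-contained sketch (the sample file and `/tmp` path are invented for illustration):

```shell
# Sample .c4 file (invented) with an indented view declaration
mkdir -p /tmp/c4-views-demo
printf 'views {\n  view myCustomView {\n  }\n}\n' > /tmp/c4-views-demo/demo.c4

# "^view" would miss the indented declaration; a word-boundary pattern finds it
grep -rhEo '\bview [A-Za-z][A-Za-z0-9-]*' /tmp/c4-views-demo --include="*.c4"
# → view myCustomView
```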

## Available Tasks

View all available tasks:

```bash
task --list
```

### Common Development Tasks

| Task | Description |
|------|-------------|
| `task serve` | Start development server with hot reload |
| `task build` | Build production-ready site |
| `task build:dev` | Build development version |
| `task clean` | Remove build artifacts |
| `task test` | Run all tests |
| `task test:quick` | Run tests without link checking |

## Quick Testing

Before committing, run quick tests:

```bash
task test:quick
```

This validates:

- Hugo build succeeds
- Markdown syntax is correct

For comprehensive testing, including link checking:

```bash
task test
```

## Tips for Technical Writers

1. **Write in present tense** - "The system processes..." not "The system will process..."
2. **Use code blocks** with syntax highlighting
3. **Include diagrams** for complex concepts
4. **Test locally** before pushing
5. **Keep it concise** - readers appreciate brevity
6. **Update regularly** - stale docs are worse than no docs

## Troubleshooting

### Port 1313 already in use

```bash
# Find and kill the process
lsof -ti:1313 | xargs kill -9
```

### Build errors

```bash
# Clean and rebuild
task clean
task build:dev
```

### Missing dependencies

```bash
# Reinstall all dependencies
task deps:install
```

## Next Steps

Now that you can develop locally, learn about:

- [Testing processes](../testing/)
- [CI/CD pipeline](../cicd/)
@ -1,339 +0,0 @@
|
|||
---
|
||||
title: "Publishing to Edge"
|
||||
linkTitle: "Publishing"
|
||||
weight: 50
|
||||
description: >
|
||||
How documentation is deployed to the edge environment.
|
||||
---
|
||||
|
||||
## Deployment Overview
|
||||
|
||||
After successful CI/CD, the documentation is deployed to an edge computing environment.
|
||||
|
||||
{{< likec4-view view="deploymentFlow" project="documentation-platform" >}}
|
||||
|
||||
## Deployment Architecture
|
||||
|
||||
### Edge Connect Platform
|
||||
|
||||
Our documentation is deployed using **Edge Connect**, which orchestrates deployments to edge cloudlets.
|
||||
|
||||
Configuration: `edgeconnectdeployment.yaml`
|
||||
|
||||
```yaml
|
||||
kind: edgeconnect-deployment
|
||||
metadata:
|
||||
name: "edpdoc"
|
||||
appVersion: "1.0.0"
|
||||
organization: "edp2"
|
||||
spec:
|
||||
k8sApp:
|
||||
manifestFile: "./k8s-deployment.yaml"
|
||||
infraTemplate:
|
||||
- region: "EU"
|
||||
cloudletOrg: "TelekomOP"
|
||||
cloudletName: "Munich"
|
||||
flavorName: "EU.small"
|
||||
```
|
||||
|
||||
**Key settings:**
|
||||
|
||||
- **Deployment name:** `edpdoc`
|
||||
- **Region:** EU (Munich)
|
||||
- **Cloudlet:** TelekomOP Munich
|
||||
- **Flavor:** EU.small (resource allocation)
|
||||
|
||||
### Kubernetes Deployment

The application runs on Kubernetes: `k8s-deployment.yaml`

#### Service Definition

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edpdoc
  labels:
    run: edpdoc
spec:
  type: LoadBalancer
  ports:
    - name: tcp80
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    run: edpdoc
```

- **Type:** LoadBalancer (external access)
- **Port:** 80 (HTTP)
- **Selector:** Routes traffic to pods with the label `run: edpdoc`
#### Deployment Configuration

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edpdoc
spec:
  replicas: 1
  selector:
    matchLabels:
      run: edpdoc
  template:
    metadata:
      labels:
        run: edpdoc
        mexDeployGen: kubernetes-basic
    spec:
      containers:
        - name: edpdoc
          image: ###IMAGETAG###
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
```

- **Replicas:** 1 (single instance)
- **Image:** Injected by the deployment pipeline (`###IMAGETAG###` placeholder)
- **Pull policy:** Always (ensures the latest version is fetched)
### Network Configuration

Outbound connections are configured in `edgeconnectdeployment.yaml`:

```yaml
network:
  outboundConnections:
    - protocol: "tcp"
      portRangeMin: 80
      portRangeMax: 80
      remoteCIDR: "0.0.0.0/0"
    - protocol: "tcp"
      portRangeMin: 443
      portRangeMax: 443
      remoteCIDR: "0.0.0.0/0"
```

- **Port 80:** HTTP outbound
- **Port 443:** HTTPS outbound
- **CIDR:** `0.0.0.0/0` (all destinations)
## Deployment Process

### 1. Container Image Ready

After CI passes:

- The Docker image is built with `task build:oci-image`
- Tagged with the git commit SHA
- Pushed to the container registry

### 2. Edge Connect Orchestration

Edge Connect:

1. Pulls the container image
2. Reads `edgeconnectdeployment.yaml`
3. Provisions resources on the Munich cloudlet
4. Applies the Kubernetes manifests

### 3. Kubernetes Deployment

Kubernetes:

1. Creates the deployment with 1 replica
2. Pulls the container image (`imagePullPolicy: Always`)
3. Starts a pod running Nginx and the static Hugo site
4. Creates the LoadBalancer service
5. Assigns an external IP

### 4. Service Available

The documentation is now accessible:

- **Protocol:** HTTP
- **Port:** 80
- **IP:** Assigned by the LoadBalancer
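The SHA-based tagging step above can be sketched in shell. The registry host, image name, and tag length here are illustrative assumptions, not the project's actual values:

```shell
# Hypothetical sketch of deriving a SHA-based image tag.
# In CI the SHA would come from `git rev-parse HEAD`.
SHA="0123456789abcdef0123456789abcdef01234567"  # example commit SHA
SHORT=$(printf '%s' "$SHA" | cut -c1-12)        # truncate for readability
printf 'docs-registry.example.com/edpdoc:%s\n' "$SHORT"
```

With the example SHA above, this prints `docs-registry.example.com/edpdoc:0123456789ab`.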
## Complete Workflow

{{< likec4-view view="fullWorkflow" project="documentation-platform" >}}

### End-to-End Process

1. **Technical writer writes content** (Markdown, LikeC4 models)
2. **Local testing** with `task serve` and `task test`
3. **Commit and push** to the Git repository
4. **GitHub Actions triggered** on push to main
5. **CI tests run** (build, markdown, HTML, links)
6. **Container image built** if tests pass
7. **Image pushed** to the registry
8. **Edge deployment triggered**
9. **Kubernetes applies** the manifests
10. **Service available** on the edge cloudlet
## Monitoring Deployment

### Check Deployment Status

```bash
kubectl get deployments -n <namespace>
kubectl get pods -n <namespace>
kubectl get services -n <namespace>
```

### View Logs

```bash
kubectl logs deployment/edpdoc -n <namespace>
```

### Access Documentation

Find the LoadBalancer external IP:

```bash
kubectl get service edpdoc -n <namespace>
```

Access it via: `http://<EXTERNAL-IP>`
## Rollback

If issues occur after a deployment:

### Option 1: Revert the Commit

```bash
git revert <bad-commit>
git push origin main
```

CI will rebuild and redeploy.

### Option 2: Manual Rollback

```bash
kubectl rollout undo deployment/edpdoc -n <namespace>
```

This returns to the previous deployment revision.

### Option 3: Deploy a Specific Version

Update the image tag in the deployment:

```bash
kubectl set image deployment/edpdoc edpdoc=<registry>/<image>:<tag> -n <namespace>
```
## Scaling

Currently: **1 replica**

To scale for higher traffic, update the replica count:

```yaml
spec:
  replicas: 3
```

Then apply:

```bash
kubectl apply -f k8s-deployment.yaml
```

Or scale dynamically:

```bash
kubectl scale deployment/edpdoc --replicas=3 -n <namespace>
```
## Security Considerations

1. **Image scanning** - Scan container images for vulnerabilities
2. **Resource limits** - Set CPU/memory limits in the deployment
3. **Network policies** - Restrict pod-to-pod communication
4. **HTTPS** - Consider adding TLS termination (Ingress)
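Point 2 could look roughly like this in the container spec of `k8s-deployment.yaml`; the request and limit values below are illustrative assumptions, not tuned numbers:

```yaml
containers:
  - name: edpdoc
    image: ###IMAGETAG###
    resources:
      requests:
        cpu: 100m        # assumed baseline for a static Nginx site
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
```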
## Performance Optimization

1. **CDN** - Add a CDN in front of the LoadBalancer
2. **Caching** - Configure Nginx caching headers
3. **Compression** - Enable gzip in Nginx
4. **Image optimization** - Compress images in the documentation
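Points 2 and 3 could be sketched like this in the Nginx configuration (directives belong in the `http` or `server` context; the file types and cache durations are assumptions, not the site's actual config):

```nginx
# Illustrative sketch only; the actual Nginx config may differ.
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;

location ~* \.(css|js|png|jpg|svg|woff2)$ {
    expires 7d;
    add_header Cache-Control "public";
}
```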
## Troubleshooting

### Pod not starting

```bash
kubectl describe pod <pod-name> -n <namespace>
```

Check for:

- Image pull errors
- Resource constraints
- Configuration errors

### Service unreachable

```bash
kubectl describe service edpdoc -n <namespace>
```

Check for:

- LoadBalancer IP assigned
- Port configuration
- Network policies

### Old content served

Check that:

- `imagePullPolicy: Always` is set in the deployment
- The image tag is updated
- The pod has restarted

Force a pod restart:

```bash
kubectl rollout restart deployment/edpdoc -n <namespace>
```
## Best Practices

1. **Test before deploying** - Always run `task test` locally
2. **Use feature branches** - Don't deploy directly from local changes
3. **Monitor after deployment** - Check logs and access
4. **Document changes** - Update RELEASE.md
5. **Version control** - Tag releases in Git
## Future Enhancements

Potential improvements:

- **Blue-green deployment** - Zero-downtime updates
- **Canary releases** - Gradual rollout to a subset of users
- **Auto-scaling** - HorizontalPodAutoscaler based on traffic
- **Multi-region** - Deploy to multiple cloudlets
- **HTTPS** - TLS certificates and an Ingress controller
||||
## Summary

The deployment process is automated and reliable:

✅ **CI ensures quality** - Tests prevent broken deployments
✅ **Edge infrastructure** - Low-latency access from the EU
✅ **Kubernetes orchestration** - Reliable, scalable platform
✅ **Simple rollback** - Easy to recover from issues

As a technicalWriter, focus on content quality. The platform handles deployment automatically! 🚀
---
title: "Quick Reference"
linkTitle: "Quick Reference"
weight: 60
description: >
  Cheat sheet for common technicalWriter tasks.
---

## Common Commands

### Local Development

```bash
# Start development server (with hot reload)
task serve

# Build for production
task build

# Build for development (faster, no minification)
task build:dev

# Clean build artifacts
task clean
```

### Testing

```bash
# Quick tests (build + markdown)
task test:quick

# Full test suite
task test

# Individual tests
task test:build     # Hugo build validation
task test:markdown  # Markdown linting
task test:html      # HTML validation
task test:links     # Link checking
```

### Dependencies

```bash
# Install dependencies
task deps:install

# Update dependencies
task deps:update

# Ensure npm dependencies (auto-installs if missing)
task deps:ensure-npm
```

### Container Operations

```bash
# Build OCI/Docker image
task build:oci-image

# Build and test the container
task test:oci-image
```
## File Locations

### Content

| Path | Description |
|------|-------------|
| `content/en/docs/` | Main documentation |
| `content/en/blog/` | Blog posts |
| `content/en/_index.md` | Homepage |

### Architecture Models

| Path | Description |
|------|-------------|
| `resources/edp-likec4/` | Platform architecture models |
| `resources/doc-likec4/` | Documentation platform models |

### Configuration

| File | Purpose |
|------|---------|
| `hugo.toml` | Hugo configuration |
| `config.yaml` | Docsy theme config |
| `Taskfile.yml` | Task definitions |
| `package.json` | npm dependencies and scripts |
| `.markdownlint.json` | Markdown linting rules |
| `.htmlvalidate.json` | HTML validation rules |
| `.htmltest.yml` | Link checker config |

### Build Output

| Path | Description |
|------|-------------|
| `public/` | Generated static site |
| `resources/_gen/` | Generated resources (Hugo) |
| `data/build_info.json` | Build metadata (git commit, version) |
## Markdown Frontmatter

### Standard Page

```yaml
---
title: "Page Title"
linkTitle: "Short Title"
weight: 10
description: >
  Brief description for SEO and navigation.
---
```

### Blog Post

```yaml
---
title: "Post Title"
date: 2025-01-15
author: "Your Name"
description: >
  Post summary.
---
```
## Embedding Architecture Diagrams

### Basic Embed

```markdown
{{< likec4-view view="view-name" project="project-name" >}}
```

### Parameters

- `view` (required) - The view ID from your LikeC4 model
- `project` (optional, default: "architecture") - The LikeC4 project name
- `title` (optional, default: "Architecture View: {view}") - Custom header text above the diagram

### Examples

```markdown
{{< likec4-view view="overview" project="documentation-platform" >}}
{{< likec4-view view="localDevelopment" project="documentation-platform" >}}
{{< likec4-view view="cicdPipeline" project="documentation-platform" >}}
{{< likec4-view view="otc-faas" project="architecture" title="OTC FaaS Deployment" >}}
```
## LikeC4 Commands

### Regenerate Webcomponents

After modifying `.c4` files:

```bash
task likec4:generate
```

This regenerates both:

- `static/js/likec4-webcomponent.js` (EDP architecture)
- `static/js/likec4-doc-webcomponent.js` (documentation platform)

### Start Development Server

```bash
cd resources/doc-likec4  # or resources/edp-likec4
npm install
npm start
```

This opens the LikeC4 IDE at `http://localhost:5173`.

### Export Diagrams

```bash
cd resources/doc-likec4
npx likec4 export png -o ./images .
```
## Git Workflow

### Feature Branch

```bash
# Create a feature branch
git checkout -b feature/your-feature

# Make changes and test
task serve
task test:quick

# Commit
git add .
git commit -m "Description of changes"

# Push
git push origin feature/your-feature

# Create a pull request on GitHub
```

### Update from Main

```bash
git checkout main
git pull origin main
git checkout feature/your-feature
git rebase main
```
## Troubleshooting

### Port 1313 in use

```bash
lsof -ti:1313 | xargs kill -9
```

### Build errors

```bash
task clean
task build:dev
```

### Missing dependencies

```bash
task deps:install
```

### Hugo module issues

```bash
hugo mod clean
hugo mod get -u
hugo mod tidy
```

### LikeC4 language server

In VS Code: `Ctrl+Shift+P` → "LikeC4: restart language server"
## URLs

### Local Development

- **Documentation:** <http://localhost:1313>
- **LikeC4 IDE:** <http://localhost:5173> (when running `npm start` in a likec4 folder)

### Production

Check `edgeconnectdeployment.yaml` for the deployment URL, or run:

```bash
kubectl get service edpdoc -n <namespace>
```

## Quick Checks Before Committing

1. ✅ `task test:quick` passes
2. ✅ The preview looks correct in the browser
3. ✅ No broken links (visual check)
4. ✅ Architecture diagrams render
5. ✅ Frontmatter is correct
## Getting Help

- **Hugo docs:** <https://gohugo.io/documentation/>
- **Docsy theme:** <https://www.docsy.dev/docs/>
- **LikeC4:** <https://likec4.dev/>
- **Task:** <https://taskfile.dev/>

## View Documentation Architecture

To understand how this documentation platform works:

→ Start here: [Documentation About Documentation](../)
---
title: "Testing"
linkTitle: "Testing"
weight: 30
description: >
  Quality assurance processes for documentation.
---

## Testing Philosophy

Quality documentation requires testing. Our testing process validates:

- **Build integrity** - Hugo can generate the site
- **Content quality** - Markdown follows best practices
- **HTML validity** - Generated HTML is well-formed
- **Link integrity** - No broken internal or external links

## Testing Capabilities

{{< likec4-view view="testingCapabilities" project="documentation-platform" >}}
## Local Testing

Before committing changes, run the tests locally:

### Quick Tests

For rapid feedback during development:

```bash
task test:quick
```

This runs:

- `task test:build` - Hugo build validation
- `task test:markdown` - Markdown linting

### Full Test Suite

Before creating a pull request:

```bash
task test
```

This runs all tests, including:

- `task test:build` - Build validation
- `task test:markdown` - Markdown linting
- `task test:html` - HTML validation
- `task test:links` - Link checking
## Individual Tests

You can also run tests individually:

### Build Test

Validates that Hugo can build the site:

```bash
task test:build
```

This runs: `hugo --gc --minify --logLevel info`

**What it checks:**

- Hugo configuration is valid
- Content files have correct frontmatter
- Templates render without errors
- No circular dependencies in the content structure

### Markdown Lint

Checks Markdown syntax and style:

```bash
task test:markdown
```

This uses `markdownlint` with custom rules in `.markdownlint.json`.

**What it checks:**

- Consistent heading hierarchy
- Proper list formatting
- Code blocks have language tags
- No trailing whitespace
- Consistent line length (where applicable)

**Common issues:**

- Missing blank lines around code blocks
- Inconsistent list markers
- Skipped heading levels

### HTML Validation

Validates the generated HTML:

```bash
task test:html
```

This uses `htmlvalidate` with rules in `.htmlvalidate.json`.

**What it checks:**

- Well-formed HTML5
- Proper nesting of elements
- Valid attributes
- Accessible markup

### Link Checking

Verifies that all links are valid:

```bash
task test:links
```

This uses `htmltest`, configured in `.htmltest.yml`.

**What it checks:**

- Internal links point to existing pages
- External links are reachable
- Anchor links target existing elements
- No redirects (301/302)

**Note:** This test can be slow for large sites with many external links.
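A minimal `.htmltest.yml` might look roughly like this (a sketch; the project's actual settings and the ignore pattern below are assumptions):

```yaml
DirectoryPath: public          # scan the generated site
CheckExternal: true            # also verify external links
IgnoreURLs:
  - "^https://example.com/"    # hypothetical pattern for flaky hosts
OutputDir: htmltest-report     # where results are written
```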
||||
## CI Testing

All tests run automatically on:

- **Push to `main`** - Full test suite
- **Pull requests** - Full test suite

View the GitHub Actions workflow: `.github/workflows/test.yml`
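Such a workflow might look roughly as follows. This is a hedged sketch, not the repository's actual file; in particular, the `arduino/setup-task` action is an assumed way to install Task:

```yaml
name: test
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-task@v2   # assumption: installs the Task runner
      - name: Run full test suite
        run: task test
```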
||||
### CI Test Results

If tests fail in CI:

1. Check the GitHub Actions logs
2. Look for the specific test failures
3. Run the same test locally: `task test:<name>`
4. Fix the issue
5. Commit and push

### Artifacts

CI uploads test artifacts:

- `htmltest-report/` - Link checking results

Download these from the GitHub Actions run to investigate failures.
## Test Configuration Files

| File | Purpose |
|------|---------|
| `.markdownlint.json` | Markdown linting rules |
| `.htmlvalidate.json` | HTML validation rules |
| `.htmltest.yml` | Link checker configuration |
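As an illustration, a `.markdownlint.json` enforcing the rules discussed on this page could look like this (a sketch; the project's actual rules may differ):

```json
{
  "default": true,
  "MD013": false,
  "MD031": true,
  "MD032": true
}
```

Here MD013 (line length) is disabled, as is common for documentation sites, while MD031 and MD032 enforce blank lines around code fences and lists.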
||||
## Best Practices

1. **Test early, test often** - Run `task test:quick` frequently
2. **Fix issues immediately** - Don't accumulate technical debt
3. **Understand failures** - Read error messages carefully
4. **Update tests** - If rules change, update the config files
5. **Document exceptions** - If you need to ignore a rule, document why
## Common Issues and Solutions

### Markdown: MD031 - Blank lines around fences

**Problem:** Missing blank line before/after a code block

**Solution:** Add blank lines:

````markdown
Some text

```bash
command here
```

More text
````

### Markdown: MD032 - Blank lines around lists

**Problem:** Missing blank line before/after a list

**Solution:** Add blank lines:

```markdown
Text before

- List item 1
- List item 2

Text after
```

### HTML: Invalid nesting

**Problem:** Elements are improperly nested

**Solution:** Check the template files and shortcodes

### Link Check: 404 Not Found

**Problem:** A link points to a non-existent page

**Solution:**

- Fix the link
- Create the missing page
- Remove the link if it is no longer relevant

## Next Steps

Learn about the automated [CI/CD pipeline](../cicd/).
---
title: "eDF Documentation Overview"
description: "Comprehensive guide for users and auditors to understand and use the eDF."
---

# Meta

## Guidelines

1. For users/developers/engineers we describe our output/outcome as a product
   * it is usable
   * there are links/lists to repos
2. We have and describe a 'product-structure-tree'
3. For auditors/governance we have a list / cross-reference to Jira tickets
   * R&D ?
   * mapping to the project phases; how does a team build a platform?
   * keywords: mobbing, mob programming
   * mapping to IPCEI-CIS deliverables ???, or mapping to epics?
   * project history, project dynamics, sub-projects of the eDF, sub-project dependencies, e.g. 'Platform'
   * friendly-user phase
   * Forgejo community, OSS, PR handling
   * external stakeholders, user experience, think-ahead integration
   * technologies, technology focus areas, cutting-edge research
   * design/specification and evaluation of solution drafts (e.g. VictoriaMetrics, GARM, Terraform, Argo CD, ...)
   * CI/CD, golden paths (note: covered in the high-level concept; deployment of apps by developers is missing)
# Introduction

- Purpose of the eDF
- Target audience (developers, engineers, auditors)
- High-level product structure overview
- High-level architecture overview

# eDF Components Overview

- List of all major components
- Vertical and horizontal layers explained
- Component maturity/status (fully integrated, partial, experimental)

# Getting Started

- Quickstart guide for developers
- Onboarding steps for eDF engineers
- Prerequisites and environment setup

# Component Details

For each component:

- Description and purpose
- Repository link
- README summary
- Architecture diagrams (link to Miro/Lucid)
- Usage instructions
- Integration points

# Development Experience

- How to contribute
- Local development workflow
- CI/CD pipelines
- Testing and validation

# Operational Experience

- Deployment guides
- Monitoring and observability
- Troubleshooting

# Audit & Compliance

- Overview of implemented controls
- Ticket references (Jira, changelogs)
- Documentation of decisions and reviews
- Evidence of value and coverage

# FAQ & Support

- Common issues and solutions
- Contact points for help

# Appendix

- Glossary
- References to external resources
---
title: "Documentation (v1 - Legacy)"
linkTitle: "v1 (Legacy)"
weight: 100
description: >
  Legacy documentation - archived version of the original content.
---

{{% alert title="Note" color="warning" %}}
This is the legacy documentation (v1). For the latest version, please visit the [current documentation](/docs/).
{{% /alert %}}

This section contains the original documentation that is being migrated to a new structure.
---
title: Humanitec
weight: 4
---

tbd
# Ideas

* Kanban
* product review sessions

# ChatGPT

## Missing Backlog

> Which agile process models or best practices exist in software development when there is no product backlog yet? We have a product vision and even a PoC, but the product still cannot be planned in user stories, neither technically nor in terms of user value, so Scrum is not feasible. The team is about 8 people, and we are considering reaching a milestone in the next 8 weeks simply through a daily 'PDCA' cycle: a morning planning, then exploration in subgroups, then bringing the results together/integrating them in the plenum as 'C' (Check), and a closing 'A' for securing results, versioning, and documentation. What do you think of this approach, which risks and improvements do you see? Or is there a standard approach for this?
Your scenario – vision and PoC in place, but no fully formulated product backlog – is typical of an early **exploration or inception phase** of product development. In this phase, classic **Scrum** is indeed often too rigid, because it assumes a certain clarity about features and stories. The approach you are considering sounds like a very pragmatic, iterative method that fits well with **Lean Startup principles** or **Design Thinking + Agile Engineering**.

## Assessment of your PDCA-based approach

**Positive:**

* A **daily PDCA cycle** (Plan-Do-Check-Act) provides fast feedback loops.
* The **morning planning** and the **check meeting in the plenum** foster transparency and self-organization.
* **Exploration in subgroups** allows parallel experiments.
* A **closing "A"** to secure results is essential; otherwise it remains "busy work".

**Risks:**

1. **Missing long-term structure:** Without a rough overarching direction (e.g. a milestone plan or target picture), it can easily end in aimless exploration.
2. **Inconsistent learning:** Subgroups may work redundantly or incompatibly if there is no shared understanding.
3. **Documentation as an afterthought:** If documentation only happens at the end, knowledge is lost – better to maintain "living docs" in real time.
4. **Stakeholder decoupling:** Without explicit reviews or demo rituals, outside feedback may be missing.

## Suggested improvements

1. **Exploration backlog or hypothesis board:** Even without a classic product backlog, you can work with an **experiment/hypothesis board** (à la Lean Startup). For example:
   * Hypothesis: "Feature X will improve use case Y."
   * Experiment: "Build a prototype and test it with user group Z."
   * Document results and learnings.

2. **Weekly goals:** Every Monday, a weekly planning with clear goals for the week (outcome-focused, not task-focused).

3. **Integration with Dual Track Agile:** Your approach resembles the **dual-track model**: a **discovery track** (exploring, validating, prototyping) runs in parallel to the later **delivery track** (releases, sprints). You are still fully in the discovery track.

4. **Design studios or spike time boxes:** Structure time for e.g. 2-day design studios or 1-day spikes – this raises the quality of the exploration.

5. **Review/share session on Fridays:** A fixed weekly review/demo session (even an internal one) increases discipline and helps with the "check" part of PDCA.

## Common standard methods in this phase

* **Design Thinking** (for user-centered exploration)
* **Lean UX / Lean Startup** (hypothesis-driven)
* **Dual Track Agile**
* **Google Design Sprint (5 days)** – extremely helpful for testing user-facing ideas
* **Shape Up (Basecamp)** – for non-sprint-based work with a "bet" on 6-week projects

---

**Conclusion:**
Your PDCA cycle is a solid, pragmatic entry into an uncertain, pre-structured product phase. With a few additions (exploration backlog, fixed review elements, a hypothesis structure), you can turn it into an effective discovery approach that prepares the transition to a classic agile framework (e.g. Scrum).

Would you like a concrete template for an exploration backlog or hypothesis board for your team?
## Architecture in place

> OK, you mainly find the daily cycles too short. I want to add that we do have an architecture. That may sound paradoxical, but we have in fact already designed the software system and we already know the components; what we now mainly have to figure out are the paths for automatic deployment into a cloud and the configuration and management structures of our software. That is what currently cannot be planned any further. Can you sharpen your feedback with these additions?

Thanks for the addition – it changes the starting position decisively and allows a much more precise assessment.

### New starting position

* You **have an architecture**: the components and interfaces are clear.
* There is **already a PoC** and a product vision.
* The **backlog is missing** because you are currently clarifying the **"non-functional" topics such as deployment, cloud environment, configuration, and operations**.
* The system's **technical design is roughly done**, but the **operations and deployment processes** are unclear and are being developed exploratively.

### Sharpened feedback

#### 1. **Daily PDCA cycles are too granular in this context**

Infrastructure, deployment, and configuration questions often lead to deep "spikes" (e.g. setting up a CI/CD pipeline, modeling a Helm chart, evaluating secrets handling). These often need more than one day until the "check", because dependencies arise (e.g. permissions, cloud access, test environments). A **2- to 3-day rhythm** with clear intermediate goals would be more realistic, complemented by:

* A **daily standup as the pacemaker**, but not necessarily as a complete PDCA cycle.
* A **weekly planning with goals and review rituals** to make progress measurable.

#### 2. **What you are doing is not product delivery but "system enablement"**

You are in the transition from architecture to an **Infrastructure as Code + platform enablement track**. That means:

* The "user stories" are not classic features, but e.g.:
  * "As a team, I want to deploy our software with a single command."
  * "As an operator, I want to know how to configure services."

This argues for an **infrastructure Kanban board**, not sprints.

#### 3. **Possible process models / best practices**

| Model / method | Why it fits now |
|------------------------------------------|------------------------------------------------------------------|
| **Kanban for technical tasks** | Flexible work-in-progress control, ideal for exploration. |
| **Spikes** (from XP) | Timeboxed exploration of open questions (e.g. Helm, secrets). |
| **Enabler epics (SAFe)** | Frame deployment and ops questions formally as "enablers". |
| **Team Topologies – platform team light**| You are currently acting like an internal platform team. |
| **Weekly DevOps review** | Every 5–7 days, review the state of automation/docs/structures. |
| **ADRs & living docs** | Architectural Decision Records for deployment/ops decisions. |

#### 4. **Concrete recommendations for structuring**

* **Formulate goals for the 8 weeks** as **"capabilities"**, e.g.:
  * "Greenfield cloud deployment to AWS/GCP via GitHub Actions."
  * "Parameterizable Helm chart with documentation."
  * "Evaluate configurability via ENV vs. secrets."
* Work in a **1–2 week cadence**, with a review/checkpoint each week.
* **Boards with task categories**:
  * 🧪 Spike
  * 🛠️ Tooling/automation
  * 📦 Infrastructure/service
  * 📓 Documentation
* **Version and document results immediately**, e.g.:
  * GitOps repo structure
  * ADRs
  * Example deployments

---

### Conclusion

You are in the **"technical-operational enablement"** of your system, not in product delivery. This justifies:

* a **Kanban-like, flexible model with timeboxed reviews**,
* daily alignment, but not necessarily daily cycles,
* a focus on **enabler goals instead of functional stories**.

Scrum is out of place here, but your approach is very close to a **modern DevOps-oriented way of working**, as practiced e.g. by **platform engineering teams**.

Would you like an example of such a Kanban board with matching task types and goal structure?
## Kanban

> Yes, please!

Here is a concrete example of a **Kanban board** tailored to your situation as a "platform/enablement" team with an existing architecture picture but an open deployment and operations design.

---

## 🎯 **Goal structure (overarching, for 8 weeks)**

These are best formulated as **capabilities**: abilities the system should have afterwards:

| Capability ID | Goal description |
|---------------|---------------------------------------------------------------------------------------------|
| C1 | The application can be deployed automatically to cloud environment X (incl. secrets mgmt) |
| C2 | The system's configuration is externalized and documented (ENV, YAML, or similar) |
| C3 | Monitoring and logging infrastructure is operational and documented |
| C4 | Dev and test environments are clearly separated and automatically provisionable |
| C5 | All platform decisions (e.g. Helm vs. Kustomize) are versioned and justified |

These capabilities form **columns** or **swimlanes** on the board (if your tool supports it, e.g. GitHub Projects, Jira, or Trello with labels).

---

## 🗂️ **Kanban board columns (classic)**

| Column | Purpose |
|------------------|-----------------------------------------------------------|
| 🔍 Backlog | Ideas, hypotheses, tasks – prioritized by capability |
| 🧪 In Exploration | Active spikes, proofs, technical evaluations |
| 🛠️ In Progress | Implementation with a concrete goal |
| ✅ Review / Check | Functional verification, internal review |
| 📦 Done | Completed, documented, in the repo where applicable |

---

## 🏷️ **Task types (labels or emojis for tagging)**

| Symbol / label | Type | Example |
|------------------|-----------------------------|--------------------------------------------------------------------------|
| 🧪 Spike | Technical investigation | "Investigate ArgoCD vs. Flux for GitOps deployment" |
| 📦 Infra | Infrastructure | "Provision dev/test/stage in GCP with Terraform" |
| 🔐 Secrets | Security-related task | "Design secret handling with Sealed Secrets" |
| 📓 Docs | Documentation | "Write README for developer setup" |
| 🧰 Tooling | CI/CD, pipelines, linters | "Write GitHub Action for build & deploy" |
| 🔁 Decision | Architecture decision | "ADR: Helm vs. Kustomize for service deployment" |

---

## 🧩 Example tasks for capability "C1 – automate deployment"

| Task | Type | Status |
|----------------------------------------------|----------|----------------|
| Write GitHub Action for Docker image push | 🧰 Tooling | 🛠️ In Progress |
| Evaluate ArgoCD vs. Flux | 🧪 Spike | ✅ Review |
| Create Helm chart for service A | 📦 Infra | 🔍 Backlog |
| Create ADR: deployment tool decision | 🔁 Decision | 🔍 Backlog |
| Add deployment doc in wiki | 📓 Docs | 🔍 Backlog |

---

## 📅 Cadence recommendation

* **Daily standup (15 min)**: What is moving, what is blocked? Pull, not push!
* **Weekly planning (Monday)**: Prioritize and groom the backlog (which capabilities are up this week?)
* **Weekly review (Friday)**: Present findings, decisions, and documentation
* Optional: a **retrospective every 2 weeks**, if the team has the energy

---

If you like, I can provide an example board as a Markdown template or as a CSV file for import into Trello, GitHub Projects, Jira, etc. Would that be helpful?
## Living Docs

> Can you say a bit more about 'Living Docs'?

Gladly! **"Living Docs"** (or **living documentation**) is a concept from agile software development and DevOps that refers to **up-to-date documentation generated directly from the system or from the work process**. The goal is documentation that *does not go stale* but is coupled so tightly to the code and system state that it (partially) updates itself or moves along with the development process.

---

## 🔍 **What characterizes "living documentation"?**

| Characteristic | Description |
|----------------------------------|------------------------------------------------------------------------------|
| 🔄 Automatically generated or updated | Generated from code, tests, pipelines, ADRs, or deployments |
| 📌 Version-controlled | Stored in Git, maintained together with the code |
| 🧑‍💻 Close to developers | Evolves with the code – no separation between "docs" and dev |
| 📈 Changeable & traceable | Every change to code/docs has a commit & context |
| 📚 Valuable for all stakeholders | Aimed at devs, ops, PMs, or other teams – not just "for later" |

---

## 🧰 Typical forms of living docs

| Type | Description & tools |
|----------------------|----------------------------------------------------------------------------------------|
| **Architecture Decision Records (ADRs)** | Markdown files in the repo (e.g. `docs/adr/001-helm-vs-kustomize.md`) |
| **Code docs from source** | With tools such as JSDoc, TypeDoc, Sphinx, Doxygen, etc. |
| **API docs** | Generated automatically from code or OpenAPI (Swagger) |
| **Test docs (e.g. BDD)** | e.g. Gherkin-style `Given/When/Then` specifications as documentation |
| **Monitoring & deployment status** | e.g. dashboards with infrastructure docs (Grafana, Backstage, Argo UI) |
| **Dev docs in the repo** | e.g. `README.md`, `CONTRIBUTING.md`, `how-to/`, `docs/`, `adr/` |
| **Playbooks / runbooks** | Step-by-step instructions for operations or incidents |

---

## ✅ **Best practices**

1. **Use Markdown + Git:** Simple, versionable, readable → ADRs, runbooks, how-tos.
2. **Document decisions deliberately:** Better short, clean ADRs than no docs at all.
3. **Link docs to code / CI/CD:** Example: the build tool automatically generates API docs or runs ADR lint checks.
4. **Link logs, dashboards, monitoring:** e.g. in a `system-overview.md`.
5. **Treat docs like code:** Subject to review, pull-request based, checkable in CI.
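
A minimal ADR sketch in this spirit, following the common Nygard-style layout (the filename matches the table above; the decision content itself is a made-up example, not an actual team decision):

```markdown
<!-- docs/adr/001-helm-vs-kustomize.md -->
# 1. Use Helm for service deployment

Date: 2025-02-25

## Status

Accepted

## Context

We need a repeatable way to deploy our services to Kubernetes;
the candidates are Helm and Kustomize.

## Decision

We use Helm, because parameterizable charts fit our multi-environment setup.

## Consequences

Charts must be versioned and documented; the team needs basic Helm knowledge.
```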
---

## 🛠️ Tools & frameworks

| Purpose | Tools (examples) |
|------------------------|-------------------------------------------------------------------------|
| ADRs | [`adr-tools`](https://github.com/npryce/adr-tools), plain Markdown |
| Markdown docs | MkDocs, Docusaurus, GitBook |
| API docs | Swagger/OpenAPI, Redocly |
| Infrastructure docs | Backstage (Spotify), terraform-docs |
| Docs automation | GitHub Actions, pre-commit hooks, CI checks |

---

## 🧠 Mindset: docs = a conversation across time

> "Documentation is not an afterthought but an asynchronous conversation with your future self – or the next team member."

Especially in your context (technical enablement, deployment, operations), **decision-oriented, lightweight capture of context** is worth its weight in gold. ADRs and `docs/runbooks/` directories are an ideal starting point.

---

Would you like a template for an ADR or an example directory layout for a sensible living-docs structure in the repo?
@ -1,15 +0,0 @@
---
title: PoC Structure
weight: 5
description: Building plan of the PoC milestone (end 2024) output
---

Presented and approved on Tuesday, 26.11.2024 within the team:



The use cases/application lifecycle and deployment flow is drawn here: https://confluence.telekom-mms.com/display/IPCEICIS/Proof+of+Concept+2024


@ -1,15 +0,0 @@
---
title: Deployment
weight: 3
---

> **Mantra**:
>
> 1. Everything as Code.
> 1. Cloud-natively deployable everywhere.
> 1. Ramping up and tearing down often is a no-brainer.
> 1. Especially locally (whereby 'locally' means 'under my own control').

## Draft (28.8.24)


@ -1,8 +0,0 @@
---
title: POCs
weight: 2
---

## Further ideas for POCs

* see the sample apps 'metaphor' in https://docs.kubefirst.io/
@ -1,39 +0,0 @@
# Why we have architectural documentation

Attendees: Robert, Patrick, Stefan, Stephan
25.2.25, 13–14h

## Related tickets / links

* https://jira.telekom-mms.com/browse/IPCEICIS-2424
* https://jira.telekom-mms.com/browse/IPCEICIS-478
* Confluence: https://confluence.telekom-mms.com/display/IPCEICIS/Architecture

## Charts

We need charts because:

* external stakeholders (especially architects) want to understand our product and component structure(*)
* our team needs visualization in technical discussions(**)
* we need to have the discussions that arise while creating the documentation

(*): marker: "now, for the first time, I have a rough understanding of what you are actually doing there"
(**): marker: ????

## Types of charts

* layer model (frontend, middleware, backend)
* landscape plan with dependencies, domains
* context from the outside
* component diagram

## Decisions

* OpenBao is a backend system, reached via APIs

## Further topics / new requirements

* runbook (compare to the OpenBao discussions)
* persistence of the EDP configuration (e.g. Postgres)
* OIDC vs. SSI
@ -1,85 +0,0 @@
# Division of labor in architecture, internal and external

Sebastiano, Stefan, Robert, Patrick, Stephan
25.2.25, 14–15h

## Links

* https://confluence.telekom-mms.com/display/IPCEICIS/Team+Members

# Monday call

* Sebastiano joins the Monday call, including Florian, at least interim, as long as we have no architecture "foreign minister"

# Workshops

* after alignment with Hasan on the platform workshops
* further participation in additional workshop series to be defined

# Program alignment

* find sponsors
* resolves itself through the workshop series

# Internal architects

* Robert and Patrick are joining
* topic split

# Product structure

EDP standalone
IPCEI EDP

# Architecture topics

## stl

product structure
application model (cnoe, oam, score, xrd, ...)
api
backstage (usage scenarios)
pipelining
'everything as code', declarative deployment, crossplane (or orchestrator)

possibly:
identity mgmt

not:
security
monitoring
kubernetes internals

## robert

pipelining
kubernetes internals
api
crossplane
platforming: creating resources in 'clouds' (e.g. gcp, and hetzner :-) )

## patrick

security
identity mgmt (SSI)
EaC
and everything else is great fun for me too!

# Assessments

* the ipceicis-platform is the most important subproject (Hasan + Patrick)
* open point: workload control, application model (compatibility with EDP)
* topic security, see SSI vs. OIDC
* we need our own workshops to define the collaboration modes

# Commitments

* Patrick and Robert participate in architecture

# Open

* Sebastian Schwaar onboarding? (>=50%) --- Robert will ask
* alternative: consulting/support on demand
* holds a Kubernetes introduction training --> dates to be arranged (Sophie's responsibility)
@ -1,23 +0,0 @@
# crossplane dawn?

* Monday, March 31, 2025

## Issue

Robert worked on the kindserver reconciling.

He became aware that Crossplane is able to delete clusters when drift is detected. This must not happen in productive clusters, for sure.

Even worse: if Crossplane deleted the cluster and then set it up again correctly, ArgoCD would be out of sync and by default would have no idea how to relate the old and new cluster.

## Decisions

1. Quick solution: Crossplane doesn't delete clusters.
   * If it detects drift with a kind cluster, it shall create an alert (like an email) but not act in any way
2. Analyze how the Crossplane orchestration logic calls 'business logic' to decide what to do.
   * In this logic we could decide whether to delete resources like clusters, and if so, how. Secondly, an 'orchestration' (or let's say: a workflow) for correctly restoring the old state with respect to ArgoCD could be implemented there.
3. Keep Terraform in mind.
   * We will probably need it in adapters anyway.
   * If the Crossplane design does not fit, or the benefit is too small, or we definitely have more resources for developing with Terraform, then we could switch completely.
4. Focus on EDP domain and application logic.
   * For the moment (in MVP1) we need to focus on EDP higher-level functionality.
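
For decision 1, Crossplane itself offers knobs that limit destructive reconciliation. The following is only an illustrative sketch (the provider/kind and resource name are hypothetical, and `managementPolicies` is an alpha feature whose availability depends on the Crossplane and provider versions):

```yaml
# Illustrative managed-resource spec: keep Crossplane from deleting or
# "correcting" a cluster on drift.
apiVersion: container.gcp.upbound.io/v1beta1   # hypothetical provider/kind
kind: Cluster
metadata:
  name: kind-demo
spec:
  deletionPolicy: Orphan            # never delete the external cluster when the CR is deleted
  managementPolicies: ["Observe"]   # alpha: observe drift, do not act on it
  forProvider: {}
```

Alerting on observed drift (e.g. via email) would still have to be built on top, as described above.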
@ -1,42 +0,0 @@
# Platform team exchange

## Stefan

* initial questions:
  * two weeks ago: workshop big-picture ("wallpaper") session
  * who takes part in the workshops?
  * what does the platform offer?
* EDP: could cost 5M/year
  * -> product pitch with Marko
  * -> EDP is independent of the ipceicis cloud continuum*
* generalized quality of services ( <-> platform interface)

## Hasan

* Martin is working on: agent-based IaC generation
* help shape the platform workshops
* MMS focus
  * connectivity-enabled cloud offering, end-to-end from infrastructure to end device
  * SDK for low-latency systems: consult and integrate
* monitoring in EDP?
* example 'unity'
  * presentation in the architecture call
  * how can different application layers be distributed across different infrastructure (compute) layers?
* zero-touch application deployment model
* "I am currently being slowed down"
* workshop participation, TPM application model

## Martin

* edgeXR allows no persistence
* openai, LLM as abstraction not available
* currently only compute is available
* roaming of applications --> EDP must support this
* use case: a language model translates design artifacts into architecture, then provisioning is enabled

? application models
? relationship to golden paths
* e.g. for pure compute FaaS
@ -1,55 +0,0 @@
+++
title = "Existing Backstage Plugins"
weight = 4
+++

1. **Catalog**:
   * Used for managing services and microservices, including registration, visualization, and the ability to track dependencies and relationships between services. It serves as a central directory for all services in an organization.

2. **Docs**:
   * Designed for creating and managing documentation, supporting formats such as Markdown. It helps teams organize and access technical and non-technical documentation in a unified interface.

3. **API Docs**:
   * Automatically generates API documentation based on OpenAPI specifications or other API definitions, ensuring that your API information is always up to date and accessible for developers.

4. **TechDocs**:
   * A tool for creating and publishing technical documentation. It is integrated directly into Backstage, allowing developers to host and maintain documentation alongside their projects.

5. **Scaffolder**:
   * Allows the rapid creation of new projects based on predefined templates, making it easier to deploy services or infrastructure with consistent best practices.

6. **CI/CD**:
   * Provides integration with CI/CD systems such as GitHub Actions and Jenkins, allowing developers to view build status, logs, and pipelines directly in Backstage.

7. **Metrics**:
   * Offers the ability to monitor and visualize performance metrics for applications, helping teams to keep track of key indicators like response times and error rates.

8. **Snyk**:
   * Used for dependency security analysis, scanning your codebase for vulnerabilities and helping to manage any potential security risks in third-party libraries.

9. **SonarQube**:
   * Integrates with SonarQube to analyze code quality, providing insights into code health, including issues like technical debt, bugs, and security vulnerabilities.

10. **GitHub**:
    * Enables integration with GitHub repositories, displaying information such as commits, pull requests, and other repository activity, making collaboration more transparent and efficient.

11. **CircleCI**:
    * Allows seamless integration with CircleCI for managing CI/CD workflows, giving developers insight into build pipelines, test results, and deployment statuses.

12. **Kubernetes**:
    * Provides tools to manage Kubernetes clusters, including visualizing pod status, logs, and cluster health, helping teams maintain and troubleshoot their cloud-native applications.

13. **Cloud**:
    * Includes plugins for integration with cloud providers like AWS and Azure, allowing teams to manage cloud infrastructure, services, and billing directly from Backstage.

14. **OpenTelemetry**:
    * Helps with monitoring distributed applications by integrating OpenTelemetry, offering powerful tools to trace requests, detect performance bottlenecks, and ensure application health.

15. **Lighthouse**:
    * Integrates Google Lighthouse to analyze web application performance, helping teams identify areas for improvement in metrics like load times, accessibility, and SEO.
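
The Catalog plugin above is driven by `catalog-info.yaml` descriptor files in each repository. A minimal example (all names and the owner are placeholders):

```yaml
# catalog-info.yaml: minimal Backstage catalog entity (illustrative names)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-service
  description: Example service entry for the software catalog
spec:
  type: service
  lifecycle: experimental
  owner: team-platform
```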
@ -1,12 +0,0 @@
---
title: "Autonomous UAT Agent"
linkTitle: "Autonomous UAT Agent"
weight: 10
description: >
  General documentation for D66 and the Autonomous UAT Agent
---

This section contains the core documentation for D66, focusing on how the Autonomous UAT Agent works and how to run it.

“Autonomous UAT Agent” is the current working title of the application; **UAT** stands for **User Acceptance Testing** (i.e., end-to-end testing from a user’s perspective).

@ -1,106 +0,0 @@
---
title: "Agent Workflow Diagram"
linkTitle: "UAT Agent Workflow Diagram"
weight: 5
description: >
  Visual workflow of a typical Agent S (Autonomous UAT Agent) run (gui_agent_cli.py) across Ministral, Holo, and VNC
---

This page provides a **visual sketch** of the typical workflow (example: `gui_agent_cli.py`).

## Workflow (fallback without Mermaid)

If Mermaid rendering is not available or fails in your build, this section shows the same workflow as plain text.

```text
Operator/Prompt
  -> gui_agent_cli.py
    -> (1) Planning request    -> Ministral vLLM (thinking)
    <- Next action intent
    -> (2) Screenshot capture  -> VNC Desktop / Firefox
    <- PNG screenshot
    -> (3) Grounding request   -> Holo vLLM (vision)
    <- Coordinates + element metadata
    -> (4) Execute action      -> VNC Desktop / Firefox
    -> Artifacts saved         -> results/ (logs, screenshots, JSON)
```

| Step | From | To | What | Output |
|---:|---|---|---|---|
| 0 | Operator | gui_agent_cli.py | Provide goal / prompt | Goal text |
| 1 | gui_agent_cli.py | Ministral vLLM | Plan next step (text) | Next action intent |
| 2 | gui_agent_cli.py | VNC Desktop | Capture screenshot | PNG screenshot |
| 3 | gui_agent_cli.py | Holo vLLM | Ground UI element(s) | Coordinates + element metadata |
| 4 | gui_agent_cli.py | VNC Desktop | Execute click/type/scroll | UI state change |
| 5 | gui_agent_cli.py | results/ | Persist evidence | Logs + screenshots + JSON |
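
The steps above can be sketched as a single loop iteration. This is an illustrative skeleton, not the actual `gui_agent_cli.py` API: the three integrations (planning model, desktop, grounding model) are passed in as callables so the control flow mirrors the table.

```python
# One agent-loop iteration (steps 1-5 from the table). Function and parameter
# names are illustrative, not taken from the repository.
def run_step(goal, plan, capture, ground, execute, artifacts):
    """Run one planning/grounding/execution iteration and record evidence."""
    intent = plan(goal)                  # (1) Ministral: next action intent
    screenshot = capture()               # (2) VNC: PNG screenshot
    target = ground(intent, screenshot)  # (3) Holo: coordinates + metadata
    execute(intent, target)              # (4) VNC: click/type/scroll
    artifacts.append({"intent": intent, "target": target})  # (5) persist
    return intent, target
```

In the real script this loop repeats until the goal is reached or `--max-steps` is hit.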
## High-level data flow

```mermaid
flowchart LR
    %% Left-to-right overview of one typical agent loop

    user[Operator / Prompt] --> cli[Agent S script<br/>gui_agent_cli.py]

    subgraph OTC["OTC (Open Telekom Cloud)"]
        subgraph MIN_HOST[ecs_ministral_L4]
            MIN[(Ministral 3 8B<br/>Thinking / Planning)]
        end

        subgraph HOLO_HOST[ecs_holo_A40]
            HOLO[(Holo 1.5-7B<br/>Vision / Grounding)]
        end

        subgraph TARGET[GUI test target]
            VNC[VNC / Desktop]
            FF[Firefox]
            VNC --> FF
        end
    end

    cli -->|1. plan step<br/>vLLM_THINKING_ENDPOINT| MIN
    MIN -->|next action<br/>click / type / wait| cli

    cli -->|2. capture screenshot| VNC
    VNC -->|"screenshot (PNG)"| cli

    cli -->|3. grounding request<br/>vLLM_VISION_ENDPOINT| HOLO
    HOLO -->|coordinates + UI element info| cli

    cli -->|4. execute action<br/>mouse / keyboard| VNC

    cli -->|logs + screenshots| artifacts[(Artifacts<br/>logs, screenshots, JSON comms)]
```

## Sequence (one loop)

```mermaid
sequenceDiagram
    autonumber
    actor U as Operator
    participant CLI as gui_agent_cli.py
    participant MIN as Ministral vLLM (ecs_ministral_L4)
    participant VNC as VNC Desktop (Firefox)
    participant HOLO as Holo vLLM (ecs_holo_A40)

    U->>CLI: Provide goal / prompt

    loop Step loop (until done)
        CLI->>MIN: Plan next step (text-only reasoning)
        MIN-->>CLI: Next action (intent)

        CLI->>VNC: Capture screenshot
        VNC-->>CLI: Screenshot (image)

        CLI->>HOLO: Ground UI element(s) in screenshot
        HOLO-->>CLI: Coordinates + element metadata

        CLI->>VNC: Execute click/type/scroll
    end

    CLI-->>U: Result summary + saved artifacts
```

## Notes

- The **thinking** and **grounding** models are separated on purpose: this improves coordinate reliability and makes failures easier to debug.
- The agent loop typically produces artifacts (logs + screenshots) that are later copied into D66 evidence bundles.

@ -1,80 +0,0 @@
---
title: "Model Stack"
linkTitle: "Model Stack"
weight: 4
description: >
  Thinking vs. grounding model split for D66 (current state and target state)
---

For a visual overview of how the models interact with the VNC-based GUI automation loop, see: [Workflow Diagram](./agent-workflow-diagram.md)

## Requirement

The Autonomous UAT Agent must use **open-source models from European companies**. This has been a project requirement from the very beginning of the project.

## Target setup

- **Thinking / planning:** Ministral
- **Grounding / coordinates:** Holo 1.5

The Agent S framework runs an iterative loop: it uses a reasoning model to decide *what to do next* (plan the next action) and a grounding model to translate UI intent into *pixel-accurate coordinates* on the current screenshot. This split is essential for reliable GUI automation because planning and "where exactly to click" are different problems and benefit from different model capabilities.

## Why split models?

- Reasoning models optimize planning and textual decision making
- Vision/grounding models optimize stable coordinate output
- Separation reduces "coordinate hallucinations" and makes debugging easier

## Current state in repo

- Some scripts and docs still reference historical **Claude** and **Pixtral** experiments.
- **Pixtral is not suitable for pixel-level grounding in this use case**: in our evaluations it did not provide the consistency and coordinate stability required for reliable UI automation.
- In an early prototyping phase, **Anthropic Claude Sonnet** was useful due to strong instruction-following and reasoning quality; however, it does not meet the D66 constraints (open-source + European provider), so it could not be used for the D66 target solution.

## Current configuration (D66)

### Thinking model: Ministral 3 8B (Instruct)

- HuggingFace model card: https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512
- Runs on **OTC (Open Telekom Cloud) ECS**: `ecs_ministral_L4` (public IP: `164.30.28.242`)
- Flavor: GPU-accelerated | 16 vCPUs | 64 GiB | `pi5e.4xlarge.4`
- GPU: 1 × NVIDIA Tesla L4 (24 GiB)
- Image: `Standard_Ubuntu_24.04_amd64_bios_GPU_GitLab_3074` (Public image)
- Deployment: vLLM OpenAI-compatible endpoint (chat completions)
- Endpoint env var: `vLLM_THINKING_ENDPOINT`
- Current server (deployment reference): `http://164.30.28.242:8001/v1`

**Operational note:** vLLM is configured to **auto-start on server boot** (OTC ECS restart) via `systemd`.

**Key serving settings (vLLM):**

- `--gpu-memory-utilization 0.90`
- `--max-model-len 32768`
- `--host 0.0.0.0`
- `--port 8001`
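
One way the systemd auto-start could be wired up is sketched below. The unit name, user, and `ExecStart` path are assumptions (the actual unit on the ECS may differ); the flags mirror the list above and the model path matches the client settings.

```ini
# /etc/systemd/system/vllm-ministral.service (illustrative sketch)
[Unit]
Description=vLLM OpenAI-compatible server for Ministral 3 8B
After=network-online.target

[Service]
User=ubuntu
ExecStart=/home/ubuntu/ministral-vllm/venv/bin/vllm serve \
    /home/ubuntu/ministral-vllm/models/ministral-3-8b \
    --gpu-memory-utilization 0.90 \
    --max-model-len 32768 \
    --host 0.0.0.0 \
    --port 8001
Restart=always

[Install]
WantedBy=multi-user.target
```

Enabled via `systemctl enable --now vllm-ministral`, this restarts the endpoint automatically after an ECS reboot.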

**Key client settings (Autonomous UAT Agent scripts):**

- `model`: `/home/ubuntu/ministral-vllm/models/ministral-3-8b`
- `temperature`: `0.0`
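
A minimal sketch of a planning request against the OpenAI-compatible endpoint. The helper only builds the chat-completions payload (so it can be inspected without a live server); the system prompt is a placeholder, while the model name, temperature, and endpoint default come from the settings above.

```python
import os

# Model path as configured in the agent scripts (see client settings above).
THINKING_MODEL = "/home/ubuntu/ministral-vllm/models/ministral-3-8b"

def build_planning_payload(goal: str) -> dict:
    """Build a chat-completions payload for one planning request (sketch)."""
    return {
        "model": THINKING_MODEL,
        "temperature": 0.0,  # deterministic planning, as configured
        "messages": [
            {"role": "system",
             "content": "You are a GUI testing agent. Decide the next action."},
            {"role": "user", "content": goal},
        ],
    }

# Endpoint resolution: env var first, deployment reference as fallback.
endpoint = os.environ.get("vLLM_THINKING_ENDPOINT", "http://164.30.28.242:8001/v1")
```

The payload would be POSTed to `{endpoint}/chat/completions`; the real `gui_agent_cli.py` request shape may differ.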

### Grounding model: Holo 1.5-7B

- HuggingFace model card: https://huggingface.co/holo-1.5-7b
- Runs on **OTC (Open Telekom Cloud) ECS**: `ecs_holo_A40` (public IP: `164.30.22.166`)
- Flavor: GPU-accelerated | 48 vCPUs | 384 GiB | `g7.12xlarge.8`
- GPU: 1 × NVIDIA A40 (48 GiB)
- Image: `Standard_Ubuntu_24.04_amd64_bios_GPU_GitLab_3074` (Public image)
- Deployment: vLLM OpenAI-compatible endpoint (multimodal grounding)
- Endpoint env var: `vLLM_VISION_ENDPOINT`
- Current server (deployment reference): `http://164.30.22.166:8000/v1`

**Key client settings (grounding / coordinate space):**

- `model`: `holo-1.5-7b`
- Native coordinate space: `3840×2160` (4K)
- Client grounding dimensions:
  - `grounding_width`: `3840`
  - `grounding_height`: `2160`
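
When the target desktop does not run at 4K, coordinates returned in the model's native 3840×2160 grounding space have to be rescaled to the actual screen resolution. A small sketch of that mapping (the function name is illustrative, not from the repo):

```python
# Native coordinate space of the grounding model (see client settings above).
GROUNDING_WIDTH = 3840
GROUNDING_HEIGHT = 2160

def to_screen_coords(x: int, y: int, screen_w: int, screen_h: int) -> tuple:
    """Rescale a point from the 3840x2160 grounding space to the real screen."""
    return (
        round(x * screen_w / GROUNDING_WIDTH),
        round(y * screen_h / GROUNDING_HEIGHT),
    )
```

For example, a grounded click at `(1920, 1080)` maps to the center of a 1280×720 desktop, i.e. `(640, 360)`.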

@ -1,13 +0,0 @@
---
title: "Results & Findings"
linkTitle: "Results"
weight: 20
description: >
  Results, findings, and evidence artifacts for D66
---

This section contains the outputs that support D66 claims: findings summaries and pointers to logs, screenshots, and run artifacts.

## Pages

- [PoC Validation](./poc-validation.md)
- [Golden Run (Telekom Header Navigation)](./golden-run-telekom-header-nav/)

@ -1,114 +0,0 @@
|
|||
---
|
||||
title: "Golden Run: Telekom Header Navigation"
|
||||
linkTitle: "Golden Run (Telekom)"
|
||||
weight: 3
|
||||
description: >
|
||||
Evidence pack (screenshots + logs) for the golden run on www.telekom.de header navigation
|
||||
---
|
||||
|
||||
This page is the evidence pack for the **Autonomous UAT Agent** golden run on **www.telekom.de**.
|
||||
|
||||
## Run intent
|
||||
|
||||
- Goal: Test interactive elements in the header navigation for functional weaknesses
|
||||
- Output: Click-marked screenshots + per-run log (and optionally model communication JSON)
|
||||
|
||||
## How the run was executed (ECS)
|
||||
|
||||
Command (as used in the runbook):
|
||||
|
||||
```bash
|
||||
python staging_scripts/gui_agent_cli.py \
|
||||
--prompt "Role: You are a UI/UX testing agent specializing in functional correctness.
|
||||
Goal: Test all interactive elements in the header navigation on www.telekom.de for functional weaknesses.
|
||||
Tasks:
|
||||
1. Navigate to the website
|
||||
2. Identify and test interactive elements (buttons, links, forms, menus)
|
||||
3. Check for broken flows, defective links, non-functioning elements
|
||||
4. Document issues found
|
||||
Report Format:
|
||||
Return findings in the 'issues' field as a list of objects:
|
||||
- element: Name/description of the element
|
||||
- location: Where on the page
|
||||
- problem: What doesn't work
|
||||
- recommendation: How to fix it
|
||||
If no problems found, return an empty array: []" \
|
||||
--max-steps 15
|
||||
```
## Artifacts

## Screenshot gallery

### Thumbnail grid (recommended for many screenshots)

Click any thumbnail to open the full image.

<div style="display:grid; grid-template-columns: repeat(auto-fit, minmax(240px, 1fr)); gap: 12px; align-items:start;">
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_001.png"><img src="screenshots/uat_agent_step_001.png" alt="UAT agent step 001" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 001</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_002.png"><img src="screenshots/uat_agent_step_002.png" alt="UAT agent step 002" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 002</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_003.png"><img src="screenshots/uat_agent_step_003.png" alt="UAT agent step 003" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 003</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_004.png"><img src="screenshots/uat_agent_step_004.png" alt="UAT agent step 004" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 004</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_005.png"><img src="screenshots/uat_agent_step_005.png" alt="UAT agent step 005" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 005</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_006.png"><img src="screenshots/uat_agent_step_006.png" alt="UAT agent step 006" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 006</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_007.png"><img src="screenshots/uat_agent_step_007.png" alt="UAT agent step 007" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 007</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_008.png"><img src="screenshots/uat_agent_step_008.png" alt="UAT agent step 008" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 008</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_010.png"><img src="screenshots/uat_agent_step_010.png" alt="UAT agent step 010" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 010</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_011.png"><img src="screenshots/uat_agent_step_011.png" alt="UAT agent step 011" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 011</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_012.png"><img src="screenshots/uat_agent_step_012.png" alt="UAT agent step 012" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 012</figcaption>
</figure>
<figure style="margin:0;">
<a href="screenshots/uat_agent_step_013.png"><img src="screenshots/uat_agent_step_013.png" alt="UAT agent step 013" style="width:100%; height:auto; border:1px solid #ddd; border-radius:6px;" /></a>
<figcaption style="text-align:center; font-size:0.9em;">Step 013</figcaption>
</figure>
</div>
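The grid above repeats the same `<figure>` markup for every step. As a sketch, the entries could be generated instead of written by hand; paths and captions follow this gallery, and the helper name is hypothetical:

```python
# Template mirrors one <figure> entry of the thumbnail grid.
FIGURE_TEMPLATE = (
    '<figure style="margin:0;">\n'
    '<a href="screenshots/{name}.png"><img src="screenshots/{name}.png" '
    'alt="UAT agent step {step:03d}" style="width:100%; height:auto; '
    'border:1px solid #ddd; border-radius:6px;" /></a>\n'
    '<figcaption style="text-align:center; font-size:0.9em;">Step {step:03d}</figcaption>\n'
    '</figure>'
)

def gallery_figures(steps: list) -> str:
    """Render one <figure> block per screenshot step number."""
    return "\n".join(
        FIGURE_TEMPLATE.format(name=f"uat_agent_step_{step:03d}", step=step)
        for step in steps
    )

# Step 009 produced no screenshot in this run, so it is skipped.
html = gallery_figures([1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13])
```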
<details>
<summary>Full-size images (stacked)</summary>

{{< figure src="screenshots/uat_agent_step_001.png" caption="Step 001" >}}
{{< figure src="screenshots/uat_agent_step_002.png" caption="Step 002" >}}
{{< figure src="screenshots/uat_agent_step_003.png" caption="Step 003" >}}
{{< figure src="screenshots/uat_agent_step_004.png" caption="Step 004" >}}
{{< figure src="screenshots/uat_agent_step_005.png" caption="Step 005" >}}
{{< figure src="screenshots/uat_agent_step_006.png" caption="Step 006" >}}
{{< figure src="screenshots/uat_agent_step_007.png" caption="Step 007" >}}
{{< figure src="screenshots/uat_agent_step_008.png" caption="Step 008" >}}
{{< figure src="screenshots/uat_agent_step_010.png" caption="Step 010" >}}
{{< figure src="screenshots/uat_agent_step_011.png" caption="Step 011" >}}
{{< figure src="screenshots/uat_agent_step_012.png" caption="Step 012" >}}
{{< figure src="screenshots/uat_agent_step_013.png" caption="Step 013" >}}

</details>
---
title: "PoC Validation"
linkTitle: "PoC Validation"
weight: 1
description: >
  What was validated and where to find the evidence
---

## What was validated

- Autonomous GUI interaction via the Autonomous UAT Agent (Agent S3-based scripts)
- Generation of UX findings and recommendations
- Production of reproducible artifacts (screenshots, logs)

## Where to find evidence in this repo

- Run logs and calibration logs: `logs/`
- Story evidence and investigation notes:
  - `docs/story-025-001-context.md`
  - `docs/story-026-001-context.md`
  - `docs/story-023-003-coordinate-space-detection.md`

## How to reproduce a run

1. Choose a script in `Backend/IPCEI-UX-Agent-S3/staging_scripts/`
2. Set target URL (if supported) via `AS2_TARGET_URL`
3. Run and capture artifacts (see `docs/D66/documentation/outputs-and-artifacts.md`)
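As a sketch, steps 2 and 3 could be wrapped in a small launcher. The script path and the `AS2_TARGET_URL` variable come from this repo's docs; `--prompt` and `--max-steps` match the ECS runbook command; the function name is hypothetical:

```python
import os
import subprocess  # used only when actually launching the run (step 3)

def build_run(prompt: str, target_url: str, max_steps: int = 15):
    """Assemble the CLI invocation and environment for one agent run."""
    env = dict(os.environ)
    env["AS2_TARGET_URL"] = target_url  # step 2: set the target URL
    cmd = [
        "python", "staging_scripts/gui_agent_cli.py",
        "--prompt", prompt,
        "--max-steps", str(max_steps),
    ]
    return cmd, env

cmd, env = build_run("Test the header navigation.", "https://www.leipzig.de")
# Step 3 would launch the run and let the script write its artifacts:
# subprocess.run(cmd, env=env, check=True)
```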
---
title: "Use Case 1 – Results Agent S2"
linkTitle: "UC1 – Agent S2"
weight: 10
draft: true
description: >
  Raw results (PoC validation) for Use Case 1 – Agent S2
---

## Context

- Use case: **Functional correctness of UI elements**
- Target system: **leipzig.de** (cycling page; see PoC context)
- Prompt: see [1 - Prompt.md](./1%20-%20Prompt.md)

## Executive Summary

- **Result (positive):** No **broken flows or dead ends** were identified.
- **Result (notable):** A central UI element (the language selector) was **hard for the agent to find**, which points to potential usability and findability problems.

## Findings (tabular)

| Element | Location | Visible reaction ≤ 500 ms | Dead end/broken flow | Problem description | Recommendation |
|---|---|---:|---:|---|---|
| Language dropdown | Header area (top right) | Not assessed | No | Difficulty locating the language function; repeated scrolling and misclicks (browser menu instead of website element) suggest unclear positioning and low findability. | Place the language selector more prominently and make it visually more distinct (e.g. flag icons, higher contrast, larger type). |

## Positive results

- ✅ No broken flows or dead ends found

## Recommendations (concrete)

1. Add international flag icons to the language selector (visual recognizability)
2. Increase the contrast and font size of the language element
3. Review its position in the header zone and place it more prominently if needed
4. Run a short findability test with international users
5. Improve hover/focus effects to signal interactivity more clearly
---
title: "Use Case 1 – Results Anthropic Computer Use"
linkTitle: "UC1 – Anthropic"
weight: 20
draft: true
description: >
  Raw results (PoC validation) for Use Case 1 – Anthropic Computer Use
---

## Context

- Use case: **Functional correctness of UI elements**
- Target system: **leipzig.de**
- Prompt: see [1 - Prompt.md](./1%20-%20Prompt.md)

## Executive Summary

- **Functionality:** All tested elements were rated **technically functional**.
- **Main finding:** Several interactions show **delayed response times** (no reaction within 500 ms; load times partly above 3 seconds).
- **Flow quality:** **No broken flows or dead ends** identified.

Note: Measured load times vary with network, server load, and client; the finding nevertheless shows that performance is a key UX lever.

## Findings (tabular)

| Element | Location | Visible reaction ≤ 500 ms | Dead end/broken flow | Problem description | Recommendation |
|---|---|---:|---:|---|---|
| Kontakt link (top navigation) | https://www.leipzig.de/kontakt | No | No | Load time above 3 seconds (delayed response). | Optimize performance; improve caching. |
| RSS feeds link (top navigation) | https://www.leipzig.de/rss-feeds | No | No | Load time above 3 seconds (delayed response). | Optimize performance; improve caching. |
| Mediathek link (top navigation) | https://www.leipzig.de/servicenavigation/mediathek | No | No | Load time above 3 seconds (delayed response). | Optimize performance; improve caching. |
| Search function | Homepage (search field / results) | No | No | Search results take more than 3 seconds to load (delayed response). | Optimize search index performance; speed up the search algorithm. |
| Bürgerservice und Verwaltung (main navigation) | https://www.leipzig.de/buergerservice-und-verwaltung | No | No | Load time above 3 seconds (delayed response). | Optimize performance; improve caching. |

## Positive results

- ✅ All tested elements work correctly on a technical level
- ✅ No broken flows or dead ends found
- ✅ Complete navigation and breadcrumb system
- ✅ Working multilingual support
- ✅ Correct URL structure and linking
- ✅ Responsive elements with visual feedback

## Main recommendation (performance)

1. Optimize server response time
2. Improve caching strategies
3. Implement asset compression
4. Evaluate CDN usage
5. Database/query optimization
---
title: "Use Case 1 – Expert Assessment (Agent Frameworks)"
linkTitle: "UC1 – Expert Assessment"
weight: 30
draft: true
description: >
  Comparison of Agent S2 vs. Anthropic Computer Use (UC1)
---

In direct comparison, Anthropic Computer Use is more strongly oriented toward the technical dimension of UI testing, especially load times. Such aspects are relevant from a UX perspective but can only be assessed to a limited extent, for example when load times are subjectively perceived as disruptive or links lead nowhere.

**Anthropic Computer Use**

identifies differences in load times (e.g. Kontakt and RSS-Feeds vs. Mein Stadtteil). These are still within an acceptable range and barely noticeable; I would even have said they are well under 3 seconds.

✅ In the Mediathek, by contrast, load times are noticeably longer, which is indeed a UX-relevant warning sign.

**Agent S2**

✅ delivers results that focus more on usability and accessibility (language dropdown). This perspective is less technical but just as central for practical UX evaluation. I would, however, have expected it under the second prompt (visual quality & consistency, i.e. the UX health check).
---
title: "Use Case 1 – Prompt"
linkTitle: "UC1 – Prompt"
weight: 5
draft: true
description: >
  Prompt for Use Case 1 (functional correctness of UI elements)
---

Role:
You are a technical usability and QA tester focused on the functional correctness of websites. You systematically check interactive UI elements and sub-elements for technical functionality, response behavior, and possible broken flows.

Goal:
Examine all interactive elements on the website [website URL] for functional weaknesses or dead ends. Detect broken flows, defective links, buttons without function, and missing feedback. Your goal is to make every interaction fully testable and responsive from the user's perspective.

---

Test criteria for every UI element and sub-element (button, link, CTA, menu item, etc.):

1. Technical functionality
- Is the element technically clickable?
- Does the click lead to a valid target address or action?
- Is a new window opened (and if so, does that make sense)?

2. No dead ends
- Does the interaction lead to a "404", an error screen, an empty page, or a state without a visible next step?
- Is there a way back or navigation to return?
- Can the user get stuck without explanation?

3. Response time
- Is there a visible reaction within at most 500 ms?
  (e.g. loading animation, page change, visual feedback)
- Does the click stay "dead" or react with a delay?

4. Visual feedback
- Is there a hover/focus/active state?
- Is the user informed about the status of the action (e.g. "loading", "sent successfully")?

5. Broken flow detection
- Are there consecutive interactions that cannot be completed correctly?
  (e.g. "Buy now" → empty cart page)
- Are there flows where users cannot proceed or where information is missing?
- Are loading errors, modal blockers, and error messages handled correctly?

---

Output format for every tested UI element and sub-element that shows problems:

- Element: [name or visible label of the button/link]
- Location: [URL or page section]
- Visible reaction within 500 ms: [yes/no]
- Dead end or broken flow: [yes/no]
- Problem description: [e.g. "link leads nowhere", "no feedback after click"]
- Recommendation: [e.g. "check the link", "add loading feedback", "provide a way back"]
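The 500 ms reaction threshold above, and the 3-second load-time bound used in the result tables, can be applied mechanically to measured latencies. A minimal sketch, with thresholds taken from this prompt and a hypothetical function name:

```python
def classify_reaction(latency_ms: float) -> str:
    """Map a measured latency onto the prompt's reporting categories."""
    if latency_ms <= 500:
        return "visible reaction within 500 ms"
    if latency_ms <= 3000:
        return "delayed response"
    return "delayed response (load time above 3 seconds)"

assert classify_reaction(120) == "visible reaction within 500 ms"
assert classify_reaction(3200).endswith("above 3 seconds)")
```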
---
title: "Use Case 2 – Results Agent S2"
linkTitle: "UC2 – Agent S2"
weight: 10
draft: true
description: >
  Raw results (PoC validation) for Use Case 2 – Agent S2
---

## OVERALL ASSESSMENT & CATEGORIES

### 1. NAVIGATION STRUCTURE & ORIENTATION

**Rating: 3/5 (significant deficiencies in external linking)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

**Status:** Critical problem

**Location:** Twitter/X website ([x.com](http://x.com/))

**Rationale:** After clicking a social media button, users are redirected to the external platform and cannot return to the original website because of login prompts and cookie banners. The back button does not work.

**Recommendation:** Open social media sharing in a new tab/window or implement a JavaScript-based sharing solution

**Improvement potential:** High – prevents complete abandonment of the site

### 2. ACCESSIBILITY (WCAG 2.1 AA)

**Rating: 5/5 (no problems identified)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

- No problems identified in this category

### 3. INTERACTIVE ELEMENTS & USABILITY

**Rating: 3/5 (critical malfunctions)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

**Status:** Critical problem

**Location:** City of Leipzig website – Bürgerservice section

**Rationale:** Automated tests block completely; the agent stops performing actions and repeatedly shows error messages. The test process is inoperable.

**Recommendation:** Rework the test automation – implement fallback mechanisms for blocked actions

**Improvement potential:** High – prevents a complete functionality check

### 4. UI CONSISTENCY & DESIGN SYSTEM

**Rating: 5/5 (no problems identified)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

- No problems identified in this category

### 5. MOBILE USE & RESPONSIVE DESIGN

**Rating: 5/5 (no problems identified)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

- No problems identified in this category

### 6. PERFORMANCE & LOAD TIME

**Rating: 5/5 (no problems identified)**

✅ Positive aspects:

- No positive aspects documented

🔴 Identified problems:

- No problems identified in this category

## PRIORITIZED RECOMMENDATIONS

### 🚨 CRITICAL (implement immediately):

- Open social media links in a new tab – prevents abandonment

- Equip the test automation with fallback mechanisms – enables a complete quality check

### 🔥 HIGH (next 4 weeks):

- No problems of this priority identified

### 🟠 MEDIUM (next 8 weeks):

- No problems of this priority identified

### 🟢 LOW (continuous):

- No problems of this priority identified

## CONCLUSION

**Overall grade: 4/5** – The website shows fundamentally solid UX performance but is significantly impaired by two critical navigation problems. The main levers are fixing the external linking issue and stabilizing the test infrastructure. Once these critical points are resolved, a clear improvement in the user experience can be expected.
---
title: "Use Case 2 – Results Anthropic Computer Use"
linkTitle: "UC2 – Anthropic"
weight: 20
draft: true
description: >
  Raw results (PoC validation) for Use Case 2 – Anthropic Computer Use
---

## OVERALL ASSESSMENT & CATEGORIES

### 1. NAVIGATION STRUCTURE & ORIENTATION

**Rating: 4/5 (good)**

✅ **Positive aspects:**

- Clear logo as a home anchor
- Consistent main navigation with 5 logical categories
- Good breadcrumb navigation
- Secondary navigation with service links
- Working responsive hamburger menu
- Skip links for accessibility ("Zur Hauptnavigation springen", "Zur Suche springen", "Zum Inhalt springen")

🔴 **Identified problems:**

**Status:** Problem
**Location:** Top navigation, language dropdown
**Rationale:** The language dropdown works but could be labeled more clearly for international users
**Recommendation:** Add an icon with a "Language/Sprache" label
**Improvement potential:** Low – a simple labeling change

---

### 2. ACCESSIBILITY (WCAG 2.1 AA)

**Rating: 4/5 (good)**

✅ **Positive aspects:**

- Excellent skip-link implementation
- Functional keyboard navigation
- Logical tab order
- Multilingual support available
- Font sizes are sufficient (≥16px)

🔴 **Identified problems:**

**Status:** Problem
**Location:** Color contrast of links and buttons
**Rationale:** Some blue links could have better contrast against the background
**Recommendation:** Check color contrasts against a minimum of 4.5:1 and adjust
**Improvement potential:** Medium – design adjustments required

**Status:** Problem
**Location:** Alt texts of images could not be verified
**Rationale:** Not fully assessable without a screen reader test
**Recommendation:** Run a complete alt-text audit
**Improvement potential:** High – critical for accessibility
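The recommended alt-text audit can be started mechanically. A minimal sketch using only the standard library; the sample page HTML is an illustrative assumption, not a crawl of leipzig.de:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> tags whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            if not d.get("alt"):
                self.missing.append(d.get("src", "<no src>"))

# Hypothetical page fragment: one correct alt, one missing, one empty.
page = '<img src="a.png" alt="Logo"><img src="b.png"><img src="c.png" alt="">'
audit = AltTextAudit()
audit.feed(page)
assert audit.missing == ["b.png", "c.png"]
```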
---

### 3. INTERACTIVE ELEMENTS & USABILITY

**Rating: 3/5 (satisfactory)**

✅ **Positive aspects:**

- Search function present and functional
- Hover states recognizable on buttons
- Links are recognizable as such

🔴 **Identified problems:**

**Status:** Problem
**Location:** Search field and search button
**Rationale:** The search button could be labeled more clearly ("Suchen" instead of only an icon)
**Recommendation:** Optimize the button label; improve the placeholder text in the search field
**Improvement potential:** Low – simple text adjustments

**Status:** Problem
**Location:** Touch targets on mobile devices
**Rationale:** Some navigation elements may be too small for touch operation
**Recommendation:** Enlarge touch targets to at least 44x44px
**Improvement potential:** Medium – CSS adjustments required

---

### 4. UI CONSISTENCY & DESIGN SYSTEM

**Rating: 4/5 (good)**

✅ **Positive aspects:**

- Consistent use of color (blue for the City of Leipzig)
- Uniform typefaces
- Consistent button styles
- Clear visual hierarchy

🔴 **Identified problems:**

**Status:** Problem
**Location:** Differing button styles
**Rationale:** Primary and secondary buttons are not always clearly distinguishable
**Recommendation:** Establish a clear button hierarchy system
**Improvement potential:** Medium – design system rework

---

### 5. MOBILE USE & RESPONSIVE DESIGN

**Rating: 4/5 (good)**

✅ **Positive aspects:**

- Fully responsive design
- Working hamburger menu
- Good mobile navigation
- No horizontal scrolling

🔴 **Identified problems:**

**Status:** Problem
**Location:** Mobile font sizes
**Rationale:** Some text could be too small on very small screens
**Recommendation:** Minimum 18px for mobile body text
**Improvement potential:** Low – CSS adjustments

**Status:** Problem
**Location:** Touch target sizes
**Rationale:** Navigation links could have larger touch areas
**Recommendation:** Increase padding for a better touch experience
**Improvement potential:** Low – CSS padding adjustments

---

### 6. PERFORMANCE & LOAD TIME

**Rating: 2/5 (major need for action)**

🔴 **Critical problems:**

**Status:** Problem
**Location:** General page load times
**Rationale:** Several areas load for more than 3 seconds (RSS feeds, Kontakt, Mediathek, search)
**Recommendation:** Optimize server performance, implement caching, compress assets
**Improvement potential:** HIGH – critical for the user experience

**Status:** Problem
**Location:** Image optimization
**Rationale:** Large images without lazy loading or compression
**Recommendation:** Implement the WebP format, lazy loading, responsive images
**Improvement potential:** HIGH – large performance gains possible

---

## PRIORITIZED RECOMMENDATIONS

### 🔥 CRITICAL (implement immediately):

1. **Performance optimization** – server response time under 2 seconds
2. **Image compression** – implement WebP format and lazy loading
3. **Accessibility audit** – fully check alt texts and color contrasts

### ⚡ HIGH (next 4 weeks):

1. **Touch target optimization** – all interactive elements ≥44px
2. **Button hierarchy** – establish a clear design system for buttons
3. **Mobile font sizes** – minimum 18px for body text

### 📈 MEDIUM (next 8 weeks):

1. **Improve the search function** – better labels and autocomplete
2. **Optimize the language selector** – clearer labeling
3. **Focus indicators** – clearer keyboard navigation

### ✨ LOW (continuous improvement):

1. **Content optimization** – clearer CTAs and labels
2. **Micro-interactions** – refine subtle hover effects
3. **Error handling** – context-sensitive error messages

---

## CONCLUSION

[Leipzig.de](http://leipzig.de/) is a **solid, well-structured website** with good navigation and accessibility. The **greatest need for action is performance**; comprehensive optimization is required here. The mobile experience is fundamentally good but could be improved with larger touch targets. The design is consistent and professional but needs a clearer button hierarchy system.

**Overall grade: 3.5/5** – good, with clear room for improvement
---
title: "Use Case 2 – Expert Assessment (Agent Frameworks)"
linkTitle: "UC2 – Expert Assessment"
weight: 30
draft: true
description: >
  Comparison of Agent S2 vs. Anthropic Computer Use (UC2)
---

### 1. NAVIGATION STRUCTURE & ORIENTATION

**Anthropic Computer Use**

✅ picks up the important note about the language dropdown, which from a UX perspective is a key component for accessibility and orientation.

**Agent S2**

🔴 Does not mention the language dropdown problem here (unlike, for example, in the first test on functional correctness of UI elements)

✅ does, however, highlight the separate tab for social media functions, a relevant aspect of the information architecture that is missing from Anthropic Computer Use.

### 2. ACCESSIBILITY (WCAG 2.1 AA)

Both AI agents perform comparatively poorly here.

**Anthropic Computer Use**

🔴 States that all font sizes are sufficiently large; that is not correct. The text in the meta navigation and breadcrumb is noticeably smaller (ca. 14px), and the color contrast in those places is too low.

🔴 Color contrast problems are mentioned as a recommendation but still have to be rechecked manually.

🔴 Alt texts on images cannot be checked, yet they are relevant for accessibility.

**Agent S2**

🔴 Also fails to mention the problem of too-small font sizes and insufficient contrast.

🔴 Color contrast problems are not detected even though they exist, e.g. light gray text in the meta navigation or breadcrumb, or text on a gray background (e.g. AKTUELLE THEMEN)

### 3. INTERACTIVE ELEMENTS & USABILITY

**Anthropic Computer Use**

🔴 The search button is labeled; does the claim that there is only an icon without a label refer to small viewports?

🔴 Missing states such as hover effects, e.g. for icons with labels ("Seite in Leichter Sprache lesen", "Vorlesen") and icon buttons (print, mail), are not mentioned.

✅ Good recommendation: optimize the button label and improve the placeholder text in the search field

✅ Good recommendation: enlarge touch targets to at least 44x44px → are the problem spots also named again or provided with a concrete location (URL/position) so that they do not have to be searched for manually?

**Agent S2**

🔴 What does this mean?

### 4. UI CONSISTENCY & DESIGN SYSTEM

**Anthropic Computer Use**

🔴 Contradictory statement:

- Positive aspects: consistent button styles
- Identified problems: differing button styles

✅ The problem itself is definitely identified correctly, and the recommendation fits as well. However, manual support is again needed here to establish a clear button hierarchy.

**Agent S2**

🔴 Problems are not detected

### 5. MOBILE USE & RESPONSIVE DESIGN

**Anthropic Computer Use**

✅ Good notes and recommendations; it would help to specify exactly where improvements are needed and which spacing should be added.

**Agent S2**

🔴 Problems are not detected

### 6. PERFORMANCE & LOAD TIME

**Anthropic Computer Use**

✅ Good notes and recommendations; it would help to specify exactly where improvements are needed.

**Agent S2**

🔴 Problems are not detected

## PRIORITIZED RECOMMENDATIONS

**Anthropic Computer Use**

✅ Good recommendations and prioritization; it would help to specify exactly where improvements are needed

**Agent S2**

🔴 Most of the needed actions are not detected or named

## CONCLUSION

- It is striking that Agent S2 tends to emphasize problems without highlighting positive aspects, while Anthropic Computer Use occasionally mentions neutral to positive findings as well.
- Unlike Anthropic, Agent S2 misses a large share of the problems altogether.
- In accessibility, both AI agents perform rather poorly
---
title: "Use Case 2 – Prompt"
linkTitle: "UC2 – Prompt"
weight: 5
draft: true
description: >
  Prompt for Use Case 2 (Visual Quality & Consistency / UX Health Check)
---

Role:
You are an experienced UX and UI expert specializing in heuristic evaluation, visual consistency, and digital accessibility (WCAG 2.1 AA).

Task:
Perform a full UX health check for the website with the following URL: [Website-URL]

Goal:
Analyze both UX content aspects as well as visual and technical UI criteria. The analysis should be structured, easy to understand, and actionable – ideal for stakeholders in product, design, and engineering. Use the following evaluation criteria and provide the evaluation as a structured list with recommendations.

---

Evaluation criteria:

1. Navigation structure & orientation
- Is navigation intuitive, consistent, and always reachable?
- Is the information architecture logically structured?
- Is there a clearly identifiable home page or “Home” anchor?
- Are navigation items understandable on mobile (e.g., burger menu with clear labeling)?

2. Accessibility (WCAG 2.1 AA)
- Color contrast: Are contrasts sufficient? (Recommended: at least 4.5:1 for body text, 3:1 for large text)
- Font sizes:
  - Minimum body text size: 16 px (~1rem) on desktop
  - On mobile: at least 16 px, ideally 18 px
  - Large text (e.g., headings): 20–24 px and above
- Operability: Do all interactive elements work via keyboard (tab focus, Enter)?
- Alternative text: Are images and icons correctly labeled with alt text or aria-labels?
- Focus indicators: Are they clearly visible (e.g., outline or contrast change)?
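
The contrast criterion above can be checked programmatically. Below is a minimal sketch of the WCAG 2.1 formula (relative luminance and contrast ratio); the function names and the `[r, g, b]` array format are illustrative assumptions, not part of the prompt:

```javascript
// Sketch: WCAG 2.1 contrast ratio for two sRGB colors given as [r, g, b]
// arrays with 0–255 channels. Names and input format are illustrative.
function relativeLuminance([r, g, b]) {
  const channel = (c) => {
    const s = c / 255;
    // Piecewise sRGB linearization per WCAG 2.1.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(colorA, colorB) {
  const [hi, lo] = [relativeLuminance(colorA), relativeLuminance(colorB)].sort(
    (a, b) => b - a
  );
  return (hi + 0.05) / (lo + 0.05); // lighter over darker, so ratio >= 1
}

// Thresholds from the criteria above: 4.5:1 for body text, 3:1 for large text.
function passesContrast(fg, bg, isLargeText = false) {
  return contrastRatio(fg, bg) >= (isLargeText ? 3 : 4.5);
}
```

Black on white yields the maximum ratio of 21:1; a light gray on white falls well below the 4.5:1 body-text threshold.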

3. Interactive elements & usability
- Are buttons and links visually recognizable as interactive (shape, color, hover state)?
- Are labels clear and action-oriented (e.g., “Submit now” instead of “OK”)?
- Are there contextual error messages that describe causes and solutions?
- For forms: is autocomplete supported?

4. UI consistency & design system
- Are UI components (e.g., buttons, input fields) used consistently?
- Are there clear rules for colors, spacing, fonts, and sizes?
- Are there contradictory visual patterns (e.g., two different button styles for the same action)?
- Are components taken from a unified design system?

5. Mobile use & responsive design
- Is the website fully responsive?
- Are there layout shifts or horizontal scrolling?
- Touch targets:
  - Are all clickable/tappable elements at least 44 x 44 px? (per Apple HIG and WCAG)
  - Is there sufficient spacing between elements to avoid accidental taps?
- Are font sizes and spacing well adapted on small viewports (no zooming required)?
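
The touch-target criterion above can be sketched as a small check; the `{ label, width, height }` input shape and the helper name are illustrative assumptions (in a browser, the sizes would typically come from `getBoundingClientRect()`):

```javascript
// Sketch: flag touch targets below the 44 x 44 px minimum named above
// (Apple HIG / WCAG). Input shape and function name are assumptions.
const MIN_TARGET_PX = 44;

function undersizedTargets(targets) {
  // Keep every target whose width or height falls below the minimum.
  return targets.filter(
    (t) => t.width < MIN_TARGET_PX || t.height < MIN_TARGET_PX
  );
}
```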

6. Performance & load time
- Is the page load time under 3 seconds (First Contentful Paint)?
- Are there performance problems caused by unoptimized images, fonts, or JavaScript?
- Is lazy loading used for off-screen content?

---

Output format per main category that shows issues:
- Assign each main category a rating on a scale of 1–5 (1 = major need for action, 5 = very good)

Output format per finding that shows issues:
- Status: [Problem]
- Location: [URL/position, description, e.g., contrast problems, small click areas, font sizes too small]
- Rationale: Why is this a problem?
- Recommendation: What should be improved?
- Improvement potential: List concrete improvements ordered by priority (impact x effort)
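
The “impact x effort” prioritization above can be sketched as a simple sort. The 1–5 numeric scales and the impact-divided-by-effort score are assumptions for illustration; the prompt does not define a concrete scoring scheme:

```javascript
// Sketch: rank improvement items so that high-impact, low-effort items
// come first. Scales (1–5) and the ratio score are assumptions.
function prioritize(items) {
  return [...items].sort(
    (a, b) => b.impact / b.effort - a.impact / a.effort
  );
}
```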
@@ -1,33 +0,0 @@
---
title: "Use Case 3 – Results Agent S2"
linkTitle: "UC3 – Agent S2"
weight: 10
draft: true
description: >
  Raw results (PoC Validation) for Use Case 3 – Agent S2
---

## **MAIN TASK**

Sign up for the newsletter on [leipzig.de](http://leipzig.de/) using a given e-mail address.

### **PROBLEMS FOUND**

No problems found.

### **POSITIVE RESULTS**

✅ All tested elements work correctly from a technical standpoint
✅ No broken flows or dead ends found

### **MAIN RECOMMENDATION**

Continuous quality assurance: Since the current check identified no technical problems, the focus should be on maintaining the existing quality.

### **Recommended measures**

1. Implement regular automated tests for the newsletter sign-up process
2. Set up performance monitoring for critical user flows
3. Conduct monthly manual usability tests
4. Continuously verify responsive design on different devices
5. Regularly validate accessibility against WCAG standards
@@ -1,47 +0,0 @@
---
title: "Use Case 3 – Results Anthropic Computer Use"
linkTitle: "UC3 – Anthropic"
weight: 20
draft: true
description: >
  Raw results (PoC Validation) for Use Case 3 – Anthropic Computer Use
---

## **Summary of the main issues:**

**Positive aspects:**

- ✅ **Newsletter button easy to find:** The blue “Stadt Leipzig Newsletter abonnieren” button was clearly visible on the home page
- ✅ **Clear wording:** Button text and headings are easy to understand
- ✅ **Transparent data protection:** GDPR notices and CleverReach are mentioned explicitly
- ✅ **Double opt-in explained:** The confirmation-link procedure is explained up front
- ✅ **Good error handling:** Clear, understandable error messages for invalid e-mail addresses
- ✅ **Responsive design:** The page works well in the browser

**Usability issues:**

- ❌ **Newsletter link not in the main navigation:** Users have to scroll to find it
- ❌ **No search support for the newsletter:** When searching for “Newsletter”, it would be helpful if it were found directly
- ❌ **Missing upfront e-mail validation:** The system could have performed a front-end validation before redirecting to CleverReach
- ❌ **No direct link back:** After an error, the user has to use the browser back button

---

## **Improvement suggestions:**

1. **Newsletter in the main navigation:** Add a “Newsletter” link to the top navigation, possibly under “Social Media” or as its own menu item

2. **Front-end e-mail validation:** Implement a JavaScript validation that catches obviously invalid e-mail addresses (such as [test@example.com](mailto:test@example.com)) before submission
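
The JavaScript validation suggested here could start as a simple plausibility check before handing off to CleverReach. This is an illustrative sketch; the function name and the deliberately loose pattern (not full RFC 5322 validation) are assumptions:

```javascript
// Sketch of a client-side pre-check before redirecting to CleverReach.
// Requires some text, exactly one "@"-separated split with no whitespace,
// and a dot in the domain part. Clearly malformed input is rejected early;
// authoritative validation still happens downstream.
function looksLikeEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}
```

In a form, such a check would run on submit and block the redirect only for clearly malformed input, so that full validation remains a server-side (or CleverReach) responsibility.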

3. **Search integration:** Integrate the newsletter into the website search so that searching for “Newsletter” leads directly to the sign-up page

4. **Back button on the error page:** Add a “Back to sign-up” button on the CleverReach error page

5. **Place the newsletter teaser more prominently:** Consider making the newsletter button permanently visible in the header or footer as well

6. **Check mobile optimization:** Test how well the button and form work on mobile devices (minimum size 44 x 44 px)

**Overall assessment:** The newsletter sign-up process works well overall and is user-friendly. The most important usability principles are followed, even though there is room for minor improvements.
@@ -1,12 +0,0 @@
---
title: "Use Case 3 – Expert Assessment (Agent Frameworks)"
linkTitle: "UC3 – Expert Assessment"
weight: 30
draft: true
description: >
  Comparison of Agent S2 vs. Anthropic Computer Use (task-based UX analysis)
---
- Both AI agents complete the task.
- While Agent S2 tends to give technical optimization recommendations, Anthropic Computer Use focuses on usability optimizations.
- Indeed, one first has to scroll to find the newsletter subscription button. It is reasonable to assume that real users would already spend some time searching at this point.
- Anthropic Computer Use provides good pointers here regarding the usability issues, along with possible suggestions for improving usability.
@@ -1,61 +0,0 @@
---
title: "Use Case 3 – Prompt"
linkTitle: "UC3 – Prompt"
weight: 5
draft: true
description: >
  Prompt for Use Case 3 (task-based UX analysis)
---

Role:
You are a realistic, critical test user with a basic understanding of digital products, but without expert knowledge. Your task is to complete a concrete task on a website the way real users would. You report every step, thought, and reaction, and in doing so identify usability problems and improvement potential.

Goal:
Simulate a realistic usability test on [Website-URL]. You are given a typical user task and describe step by step how you perform it, what you perceive, where you get stuck, and how you find your way.

Approach:
- Act the way an average user would act in this situation.
- Describe your thoughts out loud (“think-aloud”).
- Do not analyze as an expert; describe your perception as a user.
- At the end, provide a summary of the biggest usability problems and your improvement suggestions.

Note:
- Use visual descriptions when you see UI elements (e.g., “gray button at the bottom right”).
- Pay special attention to classic usability principles according to Jakob Nielsen:
  - Understandable navigation
  - Wording that matches expectations
  - Visibility of interactions & states
  - Error prevention and error messages
  - Mobile optimization and touch sizes (at least 44 x 44 px)
  - Text legibility (at least 16 px, preferably 18 px on mobile devices)

---

Task (please adapt):
“On the site <Website-URL>, find out how to sign up for the newsletter, and sign up.”

---

Output format:

Task:
[The concrete task goes here]

Step-by-step procedure:
1. What I want to do: (goal or subtask)
2. What I do: (interactive step on the page)
3. What I expect: (system reaction or information)
4. What happens: (actual reaction/display)
5. What I notice: (positive or negative observations)
6. What irritates or confuses me: ...
7. How I continue / whether I go back / abort: ...
(repeat for each further step)

At the end:
Summary of the main problems:
- [Usability problems along the flow]
- [e.g., filter option was hard to find, no tariff comparison possible, no clear price transparency]

Improvement suggestions:
- [e.g., create a dedicated tariff comparison page]
- [Make button labels more precise (“Jetzt bestellen” → “Tarif wählen und bestellen”)]
@@ -1,61 +0,0 @@
---
title: "PoC Validation"
linkTitle: "PoC Validation"
weight: 1
description: >
  What was validated in the PoC and where to find the evidence
---
This page summarizes what was validated in the Proof of Concept (PoC) for the **Autonomous UAT Agent** and where to find supporting evidence.

## Scope and objective

The PoC aimed to demonstrate that an agent can:

- Interact with real web UIs autonomously (VNC-based GUI automation)
- Produce actionable UI/UX findings and recommendations
- Generate reproducible artifacts (logs, structured reports) suitable for review and auditing

## Validated use cases

The PoC covered three concrete use cases:

1. **Functional correctness of UI elements**
   - Validate that interactive elements (e.g., links, buttons, menus, forms) behave as expected.
   - Identify broken interactions, dead ends, inconsistent states, and regressions.

2. **Visual quality & consistency (UX Health Check)**
   - Assess visual consistency and basic UX hygiene (e.g., typography, spacing, alignment, contrast, component consistency).
   - Highlight issues that impact perceived quality, accessibility, and brand consistency.

3. **Task-based UX analysis**
   - Execute representative user journeys end-to-end (task flows).
   - Document friction points, unnecessary steps, unclear labels, missing feedback, and other usability barriers.

## Where to find evidence in this documentation repo

- **Evidence pack example (recommended reference):**
  - Golden run (Telekom header navigation): [Golden Run: Telekom Header Navigation](../golden-run-telekom-header-nav/)

- **Artifact locations and guidance:**
  - See: Golden Run evidence pack and run outputs in `results/`

- **Model configuration context (current vs legacy):**
  - See: [Model Stack](../../model-stack/)

## Legacy PoC runs (Anthropic / Claude Sonnet 4.5)

During early prototyping, we executed a small number of runs using the **Anthropic API** with **Claude Sonnet 4.5** as the combined *vision* and *thinking* model.

In the legacy results, we compare two different agent approaches:

- **Anthropic Computer Use Agent**
- **Agent S2** (using the **Anthropic API** with **Claude Sonnet 4.5** for both **thinking** and **grounding/vision**)

These runs are included here as **legacy PoC evidence** to illustrate what an end-to-end workflow can look like: a scripted run that drives a real UI and produces a report of UI/UX improvement opportunities.

Note: The D66 target solution uses a different model stack to meet the project constraints (open-source models from European companies). For background, see [Model Stack](../../model-stack/).

- [Run 1 – Functional Correctness](run-1-functional-correctness/)
- [Run 2 – Visual Quality & Consistency (UX Health Check)](run-2-ux-health-check/)
- [Run 3 – Task-based UX Analysis](run-3-task-based-ux-analysis/)
@@ -1,216 +0,0 @@
---
title: "Run 1: Functional Correctness (Legacy PoC)"
linkTitle: "Run 1 – Functional Correctness"
weight: 10
description: >
  Legacy PoC run executed with the Anthropic API and Claude Sonnet 4.5 to validate functional UI correctness checks
---
## Purpose

This run demonstrates how an autonomous agent can systematically validate the **functional correctness** of UI elements (e.g., links, buttons, menus, dialogs, forms) and produce a structured result set.

## Model and execution context

- Execution period: early prototyping / PoC phase (legacy)
- Model provider: Anthropic API
- Model used: Claude Sonnet 4.5
- Role in this run: combined *vision* (screen understanding) and *thinking* (planning and reasoning)

Comparison note: The legacy PoC compares **two agent approaches**—the **Anthropic Computer Use Agent** and **Agent S2**—both executed via the **Anthropic API** using **Claude Sonnet 4.5** for **thinking** and **vision/grounding**.

Note: The current D66 target stack differs due to project constraints. See [Model Stack](../../../model-stack/).

## What the agent checks

Typical checks in a functional correctness run include:

- Clickability and responsiveness of interactive elements
- Correct navigation targets (no dead links / broken routes)
- Correct component behavior (e.g., menus open/close, dialogs dismiss)
- Input validation and form submission feedback
- Error states and recovery (e.g., back navigation, cancellations)

## Expected outputs (evidence)

A functional correctness run is expected to produce:

- A step-by-step action trace in the report (actions + observations)
- A run log with timestamps and actions
- A structured issue list (e.g., JSON or markdown) with:
  - element
  - location / context
  - observed problem
  - severity / impact
  - recommendation
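
The structured issue list described above could be represented as records like the following sketch. The field names mirror the bullet points; the concrete severity levels are an assumption, not part of the PoC output specification:

```javascript
// Sketch: one possible record shape for the structured issue list.
// Field names follow the bullets above; severity levels are assumed.
const SEVERITIES = ["low", "medium", "high"];

function makeIssue({ element, location, problem, severity, recommendation }) {
  // Reject unknown severity values so downstream reports stay consistent.
  if (!SEVERITIES.includes(severity)) {
    throw new Error(`severity must be one of: ${SEVERITIES.join(", ")}`);
  }
  return { element, location, problem, severity, recommendation };
}
```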

## Legacy artifacts

This page is the intended location to attach the **original legacy artifacts** from the Anthropic/Claude run (logs and the generated report).

If the artifacts are available outside this documentation repository (e.g., in the agent runtime repository), link them here and/or copy the evidence pack into this page bundle folder.

## Results (Use Case 1) – Agent S2 vs Anthropic Computer Use

This section summarizes the **most important findings** from two legacy PoC runs for Use Case 1:

- Agent S2
- Anthropic Computer Use (Anthropic API, Claude Sonnet 4.5)

The original source documents (German) are stored in the repository under the PoC Validation Confluence export folder.

### Source documents (German, original)

- Prompt: [../POC Validation Confluence docs/1- Funktionale Korrektheit von UI-Elementen/1 - Prompt.md](../POC%20Validation%20Confluence%20docs/1-%20Funktionale%20Korrektheit%20%20von%20UI-Elementen/1%20-%20Prompt.md)
- Results – Agent S2: [../POC Validation Confluence docs/1- Funktionale Korrektheit von UI-Elementen/1 - Ergebnisse Agent S2.md](../POC%20Validation%20Confluence%20docs/1-%20Funktionale%20Korrektheit%20%20von%20UI-Elementen/1%20-%20Ergebnisse%20Agent%20S2.md)
- Results – Anthropic Computer Use: [../POC Validation Confluence docs/1- Funktionale Korrektheit von UI-Elementen/1 - Ergebnisse Anthropic Computer Use.md](../POC%20Validation%20Confluence%20docs/1-%20Funktionale%20Korrektheit%20%20von%20UI-Elementen/1%20-%20Ergebnisse%20Anthropic%20Computer%20Use.md)
- Expert assessment: [../POC Validation Confluence docs/1- Funktionale Korrektheit von UI-Elementen/1 - Experteneinschätzung zum Vergleich der Agent Frameworks.md](../POC%20Validation%20Confluence%20docs/1-%20Funktionale%20Korrektheit%20%20von%20UI-Elementen/1%20-%20Experteneinsch%C3%A4tzung%20zum%20Vergleich%20der%20Agent%20Frameworks.md)

<details>
<summary><strong>Prompt used (translated, collapsed)</strong></summary>

Disclaimer: The PoC runs were executed using a **German** prompt. For documentation purposes, the prompt is **translated into English** below.

```text
Role:
You are a technical usability and QA tester with a focus on the functional correctness of websites. You systematically verify interactive UI elements and sub-elements for technical functionality, responsiveness, and potential broken flows.

Goal:
Inspect all interactive elements on the website [Website-URL] for functional weaknesses or dead ends. Detect broken flows, defective links, buttons without a function, and missing feedback. Your goal is to ensure each interaction is fully testable from a user perspective and provides timely feedback.

---

Evaluation criteria for each UI element and sub-element (button, link, CTA, menu item, etc.):

1. Technical functionality
- Is the element technically clickable?
- Does the click lead to a valid target URL or action?
- Does it open a new window (and if so: is it appropriate)?

2. No dead ends
- Does the interaction lead to a "404", error screen, empty page, or a state with no visible next step?
- Is there a way back or navigation to return?
- Can the user get stuck without explanation?

3. Response time
- Is there visible feedback within max. 500 ms?
  (e.g., loading indicator, page change, visual response)
- Does the click appear "dead" or respond with noticeable delay?

4. Visual feedback
- Are there hover/focus/active states?
- Is the user informed about the status of the action (e.g., "loading", "successfully submitted")?

5. Broken flow detection
- Are there consecutive interactions that cannot be completed successfully?
  (e.g., "Buy now" → empty cart page)
- Are there flows where users cannot proceed or required information is missing?
- Are load failures, modal blockers, or error messages handled correctly?

---

Output format for each tested element and sub-element with issues:

- Element: [Label or visible name of button/link]
- Location: [URL or page section]
- Visible reaction within 500 ms: [Yes/No]
- Dead end or broken flow: [Yes/No]
- Problem description: [e.g., "link leads nowhere", "no feedback after click"]
- Recommendation: [e.g., "verify link target", "add loading feedback", "provide a way back"]
```

</details>

### Executive comparison

| Dimension | Agent S2 | Anthropic Computer Use |
|---|---|---|
| Broken flows / dead ends | None reported | None reported |
| Main issue type | Findability / discoverability | Performance / response time |
| Functional correctness (tested elements) | No functional defects reported | Tested elements considered technically functional |
| Most actionable recommendation | Improve discoverability of language switcher | Systematic performance optimization (server, caching, assets, CDN, DB) |

### Findings (details)

The sections below contain a consolidated view of the **most important findings** for each agent, written in English for this documentation.

<details>
<summary><strong>Results – Agent S2 (collapsed)</strong></summary>

### Executive summary

- **Positive:** No broken flows or dead ends were identified.
- **Main observation:** A key UI element (language selection) was hard to discover, indicating a potential findability/usability issue.

### Findings (table)

| Element | Location | Visible reaction ≤ 500 ms | Dead end / broken flow | Observation | Recommendation |
|---|---|---:|---:|---|---|
| Language dropdown | Header (top right) | Not assessed | No | Difficulty locating the language function; repeated scrolling and mis-clicks (browser menu instead of website element) suggest unclear placement or low discoverability. | Make the language selector more prominent and visually distinct (e.g., flag icons, higher contrast, larger text). |

### Positive results

- ✅ No broken flows or dead ends identified

### Recommendations

1. Add international flag icons to improve recognition
2. Increase contrast and font size for the language control
3. Review header placement and move to a more prominent position if needed
4. Run quick usability checks with international users (findability)
5. Improve hover/focus states to make interactivity more explicit

</details>

<details>
<summary><strong>Results – Anthropic Computer Use (collapsed)</strong></summary>

### Executive summary

- **Functional correctness:** Tested elements were considered technically functional.
- **Main finding:** Multiple interactions show delayed response times (no visible reaction within 500 ms; reported load times above 3 seconds).
- **Flow quality:** No broken flows or dead ends were identified.

Note: Measured load times can vary by network, server load, and client conditions. The results still highlight performance as a relevant UX lever.

### Findings (table)

| Element | Location | Visible reaction ≤ 500 ms | Dead end / broken flow | Observation | Recommendation |
|---|---|---:|---:|---|---|
| Contact link (top navigation) | https://www.leipzig.de/kontakt | No | No | Load time reported above 3 seconds (delayed response). | Implement performance optimization; improve caching. |
| RSS feeds link (top navigation) | https://www.leipzig.de/rss-feeds | No | No | Load time reported above 3 seconds (delayed response). | Implement performance optimization; improve caching. |
| Media library link (top navigation) | https://www.leipzig.de/servicenavigation/mediathek | No | No | Load time reported above 3 seconds (delayed response). | Implement performance optimization; improve caching. |
| Search | Homepage search field / results | No | No | Search results reported above 3 seconds (delayed response). | Optimize search performance (indexing and query execution). |
| Citizen services & administration (main navigation) | https://www.leipzig.de/buergerservice-und-verwaltung | No | No | Load time reported above 3 seconds (delayed response). | Implement performance optimization; improve caching. |

### Positive results

- ✅ All tested elements functioned correctly
- ✅ No broken flows or dead ends identified
- ✅ Complete navigation and breadcrumb system
- ✅ Working multilingual support
- ✅ Correct URL structure and linking
- ✅ Responsive elements with visible feedback

### Main recommendation (performance)

1. Optimize server response time
2. Improve caching strategy
3. Implement asset compression
4. Evaluate CDN usage
5. Optimize database queries

</details>

<details>
<summary><strong>Expert assessment (collapsed)</strong></summary>

The expert assessment below compares the practical value of the two result sets from a UX/UI perspective.

- **Anthropic Computer Use:** Stronger focus on technical UI checks, especially load times. From a UX perspective this is relevant, but interpretation can be limited unless delays are clearly noticeable or cause user frustration.
- **Load-time differences:** Differences were reported (e.g., Contact, RSS feeds vs. “Mein Stadtteil”), but considered mostly within an acceptable range and not very noticeable.
- **Notable exception:** In the Media Library (“Mediathek”), load times were perceived as clearly longer, which is a UX-relevant warning signal.
- **Agent S2:** Produced findings that lean more towards usability and accessibility aspects (language dropdown discoverability). This view is less purely technical but equally important for practical UX evaluation.
- **Classification note:** The expert would have expected the language dropdown topic more under the *Visual Quality & Consistency (UX Health Check)* prompt.

</details>
@@ -1,232 +0,0 @@
---
|
||||
title: "Run 2: Visual Quality & Consistency (UX Health Check) (Legacy PoC)"
|
||||
linkTitle: "Run 2 – UX Health Check"
|
||||
weight: 20
|
||||
description: >
|
||||
Legacy PoC run executed with the Anthropic API and Claude Sonnet 4.5 to assess visual quality and UX consistency
|
||||
---
|
||||
## Purpose
|
||||
|
||||
This run demonstrates how an autonomous agent can perform a lightweight **UX health check** focused on visual quality and consistency, and turn observations into actionable recommendations.
|
||||
|
||||
## Model and execution context
|
||||
|
||||
- Execution period: early prototyping / PoC phase (legacy)
|
||||
- Model provider: Anthropic API
|
||||
- Model used: Claude Sonnet 4.5
|
||||
- Role in this run: combined *vision* (screen understanding) and *thinking* (analysis and recommendation)
|
||||
|
||||
Comparison note: The legacy PoC compares **two agent approaches**—the **Anthropic Computer Use Agent** and **Agent S2**—both executed via the **Anthropic API** using **Claude Sonnet 4.5** for **thinking** and **vision/grounding**.
|
||||
|
||||
Note: The current D66 target stack differs due to project constraints. See [Model Stack](../../../model-stack/).
|
||||
|
||||
## What the agent looks for
|
||||
|
||||
Common categories for a UX health check include:
|
||||
|
||||
- Consistency of typography (sizes, weights, line heights)
|
||||
- Layout alignment and spacing rhythm
|
||||
- Color usage and contrast (including accessibility risks)
|
||||
- Component consistency (buttons, form fields, cards, navigation)
|
||||
- Visual hierarchy and clarity of call-to-action placement
|
||||
- UI state consistency (hover, focus, active, disabled)
|
||||
|
||||
## Expected outputs (evidence)
|
||||
|
||||
A UX health check run is expected to produce:
|
||||
|
||||
- A structured report documenting UI states and inconsistencies (with precise locations)
|
||||
- A structured checklist or issue list with:
|
||||
- observed issue
|
||||
- affected component/page area
|
||||
- user impact
|
||||
- recommended remediation
|
||||
- optional severity rating
|
||||
|
||||
## Legacy artifacts
|
||||
|
||||
This page is the intended location to attach the **original legacy artifacts** from the Anthropic/Claude run (logs and the generated report).
|
||||
|
||||
If the artifacts are stored elsewhere, link them here and/or copy them into this page bundle folder.
|
## Results (Use Case 2) – Agent S2 vs Anthropic Computer Use

This section summarizes the **most important findings** from two legacy PoC runs for Use Case 2:

- Agent S2
- Anthropic Computer Use (Anthropic API, Claude Sonnet 4.5)

The original source documents (German) are stored in the repository under the PoC Validation Confluence export folder.

### Source documents (German, original)

- Prompt: [../POC Validation Confluence docs/2 - Visuelle Qualität & Konsistenz (UX Health Check)/2 - Prompt.md](../POC%20Validation%20Confluence%20docs/2%20-%20Visuelle%20Qualit%C3%A4t%20%26%20Konsistenz%20(UX%20Health%20Check)/2%20-%20Prompt.md)
- Results – Agent S2: [../POC Validation Confluence docs/2 - Visuelle Qualität & Konsistenz (UX Health Check)/2 - Ergebnisse Agent S2.md](../POC%20Validation%20Confluence%20docs/2%20-%20Visuelle%20Qualit%C3%A4t%20%26%20Konsistenz%20(UX%20Health%20Check)/2%20-%20Ergebnisse%20Agent%20S2.md)
- Results – Anthropic Computer Use: [../POC Validation Confluence docs/2 - Visuelle Qualität & Konsistenz (UX Health Check)/2 - Ergebnisse Anthropic Computer Use.md](../POC%20Validation%20Confluence%20docs/2%20-%20Visuelle%20Qualit%C3%A4t%20%26%20Konsistenz%20(UX%20Health%20Check)/2%20-%20Ergebnisse%20Anthropic%20Computer%20Use.md)
- Expert assessment: [../POC Validation Confluence docs/2 - Visuelle Qualität & Konsistenz (UX Health Check)/2 - Experteneinschätzung zum Vergleich der Agent Frameworks.md](../POC%20Validation%20Confluence%20docs/2%20-%20Visuelle%20Qualit%C3%A4t%20%26%20Konsistenz%20(UX%20Health%20Check)/2%20-%20Experteneinsch%C3%A4tzung%20zum%20Vergleich%20der%20Agent%20Frameworks.md)

<details>
<summary><strong>Prompt used (translated, collapsed)</strong></summary>

Disclaimer: The PoC runs were executed using a **German** prompt. For documentation purposes, the prompt is **translated into English** below.
```text
Role:
You are an experienced UX and UI expert specializing in heuristic evaluation, visual consistency, and digital accessibility (WCAG 2.1 AA).

Task:
Perform a full UX health check for the website with the following URL: [Website-URL]

Goal:
Analyze both UX content aspects as well as visual and technical UI criteria. The analysis should be structured, easy to understand, and actionable, ideal for stakeholders in product, design, and engineering. Use the following criteria and provide the evaluation as a structured list with recommendations.

---

Evaluation criteria:

1. Navigation structure & orientation
- Is navigation intuitive, consistent, and always reachable?
- Is the information architecture logically structured?
- Is there a clearly identifiable home page or “Home” anchor?
- Are navigation items understandable on mobile (e.g., burger menu with clear labeling)?

2. Accessibility (WCAG 2.1 AA)
- Color contrast: Are contrasts sufficient? (Recommended: at least 4.5:1 for body text, 3:1 for large text)
- Font sizes:
  - Minimum body text size: 16 px (~1 rem) on desktop
  - On mobile: at least 16 px, ideally 18 px
  - Large text (e.g., headings): 20–24 px and above
- Operability: Do all interactive elements work via keyboard (tab focus, Enter)?
- Alternative text: Are images/icons correctly labeled with alt text or aria-labels?
- Focus indicators: Are they clearly visible (e.g., outline or contrast change)?

3. Interactive elements & usability
- Are buttons and links visually recognizable as interactive (shape, color, hover state)?
- Are labels clear and action-oriented (e.g., “Submit now” instead of “OK”)?
- Are there contextual error messages that describe causes and solutions?
- For forms: is autocomplete supported?

4. UI consistency & design system
- Are UI components (e.g., buttons, input fields) used consistently?
- Are there clear rules for colors, spacing, typography, and sizes?
- Are there contradictory visual patterns (e.g., two different button styles for the same action)?
- Are components derived from a unified design system?

5. Mobile usage & responsive design
- Is the website fully responsive?
- Are there layout shifts or horizontal scrolling?
- Touch targets:
  - Are all tappable elements at least 44 x 44 px? (Apple HIG / WCAG)
  - Is spacing sufficient to prevent accidental taps?
- Are font sizes and spacing well adapted on small viewports (no forced zooming)?

6. Performance & load time
- Is page load time below 3 seconds (First Contentful Paint)?
- Are there performance issues from unoptimized images, fonts, or JavaScript?
- Is lazy loading used for off-screen content?

---

Output format per main category that shows problems:
- Provide a rating per category on a 1–5 scale (1 = major need for action, 5 = very good)

Output format per finding:
- Status: [Problem]
- Location: [URL, page/section, description e.g., contrast issues, small touch targets, too-small font sizes]
- Rationale: Why is this a problem?
- Recommendation: What should be improved?
- Improvement potential: List concrete improvements by priority (impact x effort)
```
</details>
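The 4.5:1 and 3:1 thresholds referenced in the prompt and in the findings below come from the WCAG 2.1 definition of contrast ratio, which is derived from relative luminance. A minimal Python sketch of that computation (a standalone helper for manual verification, not part of either agent):

```python
def _linear(channel: float) -> float:
    # sRGB channel value (0-1) converted to linear light, per the WCAG
    # relative-luminance formula
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) color with 0-255 components."""
    r, g, b = (_linear(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colors; 4.5:1 is the body-text minimum."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum possible ratio of 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

A gray like `#767676` on white sits just above the 4.5:1 body-text threshold, which is why borderline link colors need exact measurement rather than visual judgment.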
### Executive comparison

| Category | Agent S2 rating | Anthropic Computer Use rating |
|---|---:|---:|
| Navigation structure & orientation | 3/5 | 4/5 |
| Accessibility (WCAG 2.1 AA) | 5/5 | 4/5 |
| Interactive elements & usability | 3/5 | 3/5 |
| UI consistency & design system | 5/5 | 4/5 |
| Mobile usage & responsive design | 5/5 | 4/5 |
| Performance & load time | 5/5 | 2/5 |

### Findings (details)

The sections below contain a consolidated view of the **most important findings** for each agent, written in English for this documentation.
<details>
<summary><strong>Results – Agent S2 (collapsed)</strong></summary>

### Executive summary

- The output focuses on a small number of critical blockers and otherwise reports “no issues” across most categories.
- Key issues flagged relate to **external linking behavior** and **test execution stability**.

### Key findings (table)

| Category | Status | Location | Rationale | Recommendation | Priority |
|---|---|---|---|---|---|
| Navigation structure & orientation | Critical | Social media button → external Twitter/X (x.com) | Users are redirected to an external platform and may not be able to return to the original site due to login prompts/cookie banners; back navigation reportedly does not restore the prior state. | Open social media actions in a new tab/window or implement a dedicated share solution (e.g., JS-based sharing). | High |
| Interactive elements & usability | Critical | Leipzig site “Bürgerservice” section (automated test flow) | Automated test run blocks completely; the agent stops performing actions and shows repeated error messages, making the test process unusable. | Improve test automation robustness and implement fallback mechanisms for blocked actions. | High |

### Prioritized recommendations

- **Critical (immediate):** Open social links in a new tab to prevent session loss.
- **Critical (immediate):** Add fallback mechanisms in test automation to avoid full run failure.

### Overall conclusion

- Overall rating reported: **4/5** (solid UX baseline, but impacted by critical navigation/external linking and test stability issues).

</details>
<details>
<summary><strong>Results – Anthropic Computer Use (collapsed)</strong></summary>

### Executive summary

- Provides a broad UX health check across categories.
- Highlights **performance** as the main area requiring significant improvement.
- Also flags opportunities around accessibility (contrast and alt-text auditing), labels, touch target sizes, and button hierarchy.

### Key findings (table)

| Category | Status | Location | Rationale | Recommendation | Priority |
|---|---|---|---|---|---|
| Navigation structure & orientation | Low | Language dropdown (top navigation) | Functional but could be clearer for international users. | Add a language label (e.g., “Language/Sprache”) alongside the icon. | Low |
| Accessibility (WCAG 2.1 AA) | Medium | Link/button contrast | Some blue links may have insufficient contrast. | Verify and adjust contrast to at least 4.5:1 where required. | Medium |
| Accessibility (WCAG 2.1 AA) | High | Image alt text | Without a dedicated audit/screen-reader validation, alt-text coverage is not verifiable; alt text is critical for accessibility. | Run a complete alt-text audit and remediate gaps. | High |
| Interactive elements & usability | Low/Medium | Search field and search button | Button label and placeholder text could be clearer. | Improve button labeling and search placeholder copy; consider autocomplete. | Medium |
| Mobile usage & responsive design | Medium | Touch targets on mobile | Some navigation elements may be too small for comfortable tapping. | Increase touch targets to at least 44×44 px and adjust padding/spacing. | Medium |
| UI consistency & design system | Medium | Button hierarchy / button styles | Primary vs. secondary buttons are not always clearly distinguishable. | Define a clear button hierarchy and apply it consistently across the design system. | Medium |
| Performance & load time | High | General load times (RSS feeds, contact, media library, search) | Multiple areas reportedly exceed 3 seconds. | Optimize server performance, implement caching, and compress assets. | High |
| Performance & load time | High | Image optimization | Large images without lazy loading/compression can significantly impact performance. | Use WebP, add lazy loading, and implement responsive images. | High |

### Prioritized recommendations

- **Critical (immediate):** Performance optimization (server response time, caching, asset compression).
- **Critical (immediate):** Image compression + lazy loading (WebP, responsive images).
- **High (next):** Accessibility audit (alt text + contrast), touch targets ≥44 px, establish button hierarchy.

### Overall conclusion

- Overall rating reported: **3.5/5** (good structure and navigation; the biggest gap is performance).

</details>
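Several findings above reference the 44×44 px minimum touch target size (Apple HIG / WCAG guidance). Flagging undersized targets is mechanical once element bounding boxes are available; a Python sketch over hypothetical element data (in practice the boxes would come from a DOM or accessibility-tree dump):

```python
MIN_TAP_PX = 44  # recommended minimum touch target edge in CSS pixels

def undersized_targets(elements):
    """Return names of elements smaller than MIN_TAP_PX in either dimension.

    `elements` is a list of dicts with 'name', 'width', and 'height' keys in
    CSS pixels (a hypothetical shape chosen for this sketch).
    """
    return [e["name"] for e in elements
            if e["width"] < MIN_TAP_PX or e["height"] < MIN_TAP_PX]

print(undersized_targets([
    {"name": "burger-menu", "width": 40, "height": 40},
    {"name": "search-button", "width": 48, "height": 48},
]))  # → ['burger-menu']
```

As the expert assessment notes, such output is most useful when each flagged element also carries a precise location (URL/page/section).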
<details>
<summary><strong>Expert assessment (collapsed)</strong></summary>

The expert assessment below compares the practical value of the two result sets from a UX/UI perspective.

- **Navigation & orientation:** Anthropic Computer Use picks up the language dropdown topic (important for orientation/accessibility). Agent S2 does not mention it here, but highlights the separate-tab behavior for social media functions, which Anthropic did not cover.
- **Accessibility:** Both agents are assessed as relatively weak in this category.
  - Anthropic claims font sizes are sufficient, but the expert notes smaller text (e.g., meta navigation/breadcrumb ~14 px) and insufficient contrast.
  - Contrast recommendations require manual verification; alt text cannot be verified automatically but is critical.
  - Agent S2 does not mention small font sizes/low contrast and misses existing contrast issues.
- **Interactive elements & usability:** The expert questions some details (e.g., the search button label) and notes missing coverage of hover states for icon buttons/assistive icons. Recommendations around touch target sizing are considered good, but would be stronger with precise locations (URL/page/section).
- **UI consistency/design system, mobile, performance:** The expert notes that Agent S2 misses most issues, while Anthropic provides useful recommendations but often needs manual follow-up and more precise locations.
- **Overall:** Anthropic surfaces more issues and includes neutral/positive aspects; Agent S2 tends to miss many issues and focuses more on problems without balancing positives.

</details>
---
title: "Run 3: Task-based UX Analysis (Legacy PoC)"
linkTitle: "Run 3 – Task-based UX Analysis"
weight: 30
description: >
  Legacy PoC run executed with the Anthropic API and Claude Sonnet 4.5 to analyze end-to-end task flows and usability friction
---

## Purpose

This run demonstrates how an autonomous agent can execute a representative **end-to-end user task** and produce an analysis of usability and experience quality.

Compared to a UI element checklist, task-based analysis focuses on whether a user can complete a goal efficiently and confidently.

## Model and execution context

- Execution period: early prototyping / PoC phase (legacy)
- Model provider: Anthropic API
- Model used: Claude Sonnet 4.5
- Role in this run: combined *vision* (screen understanding) and *thinking* (planning, evaluation, recommendation)

Comparison note: The legacy PoC compares **two agent approaches**, the **Anthropic Computer Use Agent** and **Agent S2**, both executed via the **Anthropic API** using **Claude Sonnet 4.5** for **thinking** and **vision/grounding**.

Note: The current D66 target stack differs due to project constraints. See [Model Stack](../../../model-stack/).
## What the agent evaluates

Typical evaluation dimensions in a task-based run include:

- Task completion success and failure points
- Number of steps and unnecessary detours
- Clarity of labels, instructions, and calls to action
- Feedback quality (loading, confirmations, error messages)
- Form friction (validation, input constraints, error recovery)
- Consistency of navigation and ability to backtrack safely

## Expected outputs (evidence)

A task-based run is expected to produce:

- A step-by-step trace (report + log) of the journey
- A summary table of friction points and recommendations
- Optional: a “to-be” improved flow proposal

## Legacy artifacts

This page is the intended location to attach the **original legacy artifacts** from the Anthropic/Claude run (logs and the generated report).

If the artifacts are stored elsewhere, link them here and/or copy them into this page bundle folder.
## Results (Use Case 3) – Agent S2 vs Anthropic Computer Use

This section summarizes the **most important findings** from two legacy PoC runs for Use Case 3:

- Agent S2
- Anthropic Computer Use (Anthropic API, Claude Sonnet 4.5)

The original source documents (German) are stored in the repository under the PoC Validation Confluence export folder.

### Source documents (German, original)

- Prompt: [../POC Validation Confluence docs/3 - Task-based UX-Analyse/3 - Prompt.md](../POC%20Validation%20Confluence%20docs/3%20-%20Task-based%20UX-Analyse/3%20-%20Prompt.md)
- Results – Agent S2: [../POC Validation Confluence docs/3 - Task-based UX-Analyse/3 - Ergebnisse Agent S2.md](../POC%20Validation%20Confluence%20docs/3%20-%20Task-based%20UX-Analyse/3%20-%20Ergebnisse%20Agent%20S2.md)
- Results – Anthropic Computer Use: [../POC Validation Confluence docs/3 - Task-based UX-Analyse/3 - Ergebnisse Anthropic Computer Use.md](../POC%20Validation%20Confluence%20docs/3%20-%20Task-based%20UX-Analyse/3%20-%20Ergebnisse%20Anthropic%20Computer%20Use.md)
- Expert assessment: [../POC Validation Confluence docs/3 - Task-based UX-Analyse/3 - Experteneinschätzung zum Vergleich der Agent Frameworks.md](../POC%20Validation%20Confluence%20docs/3%20-%20Task-based%20UX-Analyse/3%20-%20Experteneinsch%C3%A4tzung%20zum%20Vergleich%20der%20Agent%20Frameworks.md)

<details>
<summary><strong>Prompt used (translated, collapsed)</strong></summary>

Disclaimer: The PoC runs were executed using a **German** prompt. For documentation purposes, the prompt is **translated into English** below.
```text
Role:
You are a realistic, critical test user with basic understanding of digital products, but without expert knowledge. Your task is to complete a concrete task on a website the way real users would. You report each step, thought, and reaction, and identify usability problems and opportunities for improvement.

Goal:
Simulate a realistic usability test on [Website-URL]. You receive a typical user task and describe step by step how you perform it, what you notice, where you get stuck, and how you find your way.

Approach:
- Act like an average user in this situation.
- Describe your thoughts out loud (“think-aloud”).
- Do not analyze as an expert; report user perception.
- At the end, summarize the biggest usability issues and your improvement suggestions.

Notes:
- Use visual descriptions when you see UI elements (e.g., “grey button at bottom right”).
- Pay particular attention to classic usability principles (Jakob Nielsen):
  - Understandable navigation
  - Wording aligned with user expectations
  - Visibility of interactions & states
  - Error prevention and error messages
  - Mobile optimization and touch targets (min. 44 x 44 px)
  - Text readability (min. 16 px, preferably 18 px on mobile)

---

Task (please adapt as needed):
“On <Website-URL>, find out how to subscribe to the newsletter and sign up.”

---

Output format:

Task:
[The concrete task]

Step-by-step procedure:
1. What I want to do (goal/subtask)
2. What I do (interaction on the page)
3. What I expect (system response/information)
4. What happens (actual response)
5. What I notice (positive/negative observations)
6. What irritates or confuses me
7. How I continue / whether I go back / abort
(repeat for each step)

At the end:
Summary of main problems:
- [Usability problems along the flow]

Improvement suggestions:
- [Concrete improvements]
```
</details>

### Executive comparison

| Dimension | Agent S2 | Anthropic Computer Use |
|---|---|---|
| Task completion | Completed | Completed |
| Main focus | Technical QA / “no issues found” | Think-aloud usability observations |
| Key improvement theme | Ongoing QA/monitoring | Discoverability and recovery/navigation |

### Findings (details)

The sections below contain a consolidated view of the **most important findings** for each agent, written in English for this documentation.
<details>
<summary><strong>Results – Agent S2 (collapsed)</strong></summary>

### Executive summary

- The task (newsletter subscription on leipzig.de using a provided email address) was reported as completed.
- No usability or functional issues were reported in the flow.

### Reported findings

- **Issues found:** None reported.

### Recommendations

Since no issues were detected, the recommendations focus on continuous quality assurance:

1. Implement regular automated tests for the newsletter signup flow
2. Set up performance monitoring for critical user journeys
3. Run monthly manual usability checks
4. Continuously verify responsive behavior across devices
5. Regularly validate accessibility against WCAG standards

</details>
<details>
<summary><strong>Results – Anthropic Computer Use (collapsed)</strong></summary>

### Executive summary

- The signup flow is described as generally user-friendly and functional.
- Multiple usability improvements are suggested, mainly around **discoverability** and **error recovery**.

### Key findings (table)

| Area | Observation | Why it matters | Recommendation |
|---|---|---|---|
| Discoverability | Newsletter button is not in the main navigation; users need to scroll to find it. | Users may not find the feature quickly; increases friction. | Add a “Newsletter” link in top navigation and/or make it persistently visible in header/footer. |
| Findability via search | No dedicated search support for “newsletter” is mentioned as an entry point. | Users often try search first; a missing entry increases abandonment. | Include newsletter signup in site search results for “newsletter”. |
| Validation UX | Missing client-side validation before redirecting to CleverReach. | Preventable errors reduce user confidence. | Add frontend email validation before submission/redirect. |
| Error recovery | After an error, there is no direct link back; the user must use the browser back button. | Poor recovery increases frustration and drop-off. | Add a “Back to signup” link/button on the error page. |
| Mobile UX | Mobile behavior is assumed OK, but verifying touch target sizes is recommended. | Small touch targets cause mis-taps; accessibility risk. | Verify on mobile; ensure touch targets ≥44×44 px. |

### Overall conclusion

- Overall assessment: works well, with clear, incremental improvement opportunities.

</details>
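The recommendation to validate the email address on the client before the CleverReach redirect can be approximated with a simple pre-submit sanity check. A Python sketch (the pattern is deliberately loose, not full RFC 5322 validation, and the helper name is illustrative):

```python
import re

# Minimal shape check: something@something.tld, no whitespace, exactly one '@'
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(value: str) -> bool:
    """Cheap pre-submit check to catch obvious typos before redirecting."""
    return bool(EMAIL_RE.fullmatch(value.strip()))

print(looks_like_email("user@example.com"))  # → True
print(looks_like_email("user@localhost"))    # → False (no dot in the domain part)
```

A check like this prevents the avoidable round trip through the external error page that the error-recovery finding describes; the server-side validation still remains authoritative.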
<details>
<summary><strong>Expert assessment (collapsed)</strong></summary>

The expert assessment below compares the practical value of the two result sets from a UX/UI perspective.

- Both agents completed the task.
- Agent S2 focuses more on technical/operational recommendations (monitoring, automated testing), whereas Anthropic Computer Use focuses on usability optimizations.
- In practice, users likely need to scroll to find the newsletter button; some would likely try search first.
- Anthropic Computer Use provides helpful usability findings and concrete improvement suggestions.

</details>
---
title: "Running Autonomous UAT Agent Scripts"
linkTitle: "Running Autonomous UAT Agent Scripts"
weight: 3
description: >
  How to run the key D66 evaluation scripts and what they produce
---

The **Autonomous UAT Agent** is the overall UX/UI testing use case built on top of the Agent S codebase and scripts in this repo.

All commands below assume you are running from the **Agent-S repository root** (Linux/ECS), `~/Projects/Agent_S3/Agent-S`. To get there, connect to the server via SSH. You will need a key pair for authentication and an open inbound port in the firewall. For information on how to obtain the key pair and request firewall access, contact [tom.sakretz@telekom.de](mailto:tom.sakretz@telekom.de).
## Template for running a script from the command line

### 1) Connect from Windows

```powershell
ssh -i "C:\Path to KeyPair\KeyPair-ECS.pem" ubuntu@80.158.3.120
```
### 2) Prepare the ECS runtime (GUI + browser)

```bash
# Activate the virtual environment
source ~/Projects/Agent_S3/Agent-S/venv/bin/activate

# Go to the Agent-S repo root
cd ~/Projects/Agent_S3/Agent-S

# Start VNC (DISPLAY=:1) and a browser
vncserver :1
export XAUTHORITY="$HOME/.Xauthority"
export DISPLAY=":1"
firefox &
```
### 3) One-command recommended run (ECS)

If you only want to produce clean, repeatable evidence (screenshots with click markers), run the following CLI command:

```bash
python staging_scripts/gui_agent_cli.py --prompt "Go to telekom.de and click the cart icon" --max-steps 10
```

This will produce:

- Screenshots: `./results/gui_agent_cli/<timestamp>/screenshots/`
- Text log: `./results/gui_agent_cli/<timestamp>/logs/run.log`
- JSON comm log (if enabled): `./results/gui_agent_cli/<timestamp>/logs/calibration_log_*.json`
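The newest artifacts can be located programmatically by walking the timestamped layout above. A small Python convenience sketch (not part of the repo; it assumes the `<timestamp>` directory names sort chronologically):

```python
from pathlib import Path

RESULTS = Path("results/gui_agent_cli")  # artifact layout as documented above

def latest_run(results_dir: Path = RESULTS):
    """Return the newest <timestamp> run directory, or None if there are no runs."""
    if not results_dir.is_dir():
        return None
    # Lexicographic order equals chronological order for sortable timestamp names
    runs = sorted(p for p in results_dir.iterdir() if p.is_dir())
    return runs[-1] if runs else None

run = latest_run()
if run is not None:
    print("screenshots:", run / "screenshots")
    print("log:", run / "logs" / "run.log")
```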
## Prerequisites (runtime)

- Linux GUI session (VNC/Xvfb), because these scripts drive a real browser via `pyautogui`.
- A working `DISPLAY` (default for all scripts is `:1`).
- Network access to the model endpoints (thinking + vision/grounding).
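Two of these prerequisites can be sanity-checked before starting a run. A Python sketch (the `:1` default and the Firefox dependency follow the setup steps above; the helper itself is not part of the repo):

```python
import os
import shutil

def runtime_ready():
    """Return a list of missing runtime prerequisites; an empty list means ready."""
    problems = []
    if not os.environ.get("DISPLAY"):
        problems.append("DISPLAY is not set (the scripts default to ':1')")
    if shutil.which("firefox") is None:
        problems.append("firefox not found on PATH")
    return problems

for problem in runtime_ready():
    print("missing:", problem)
```

Network reachability of the model endpoints still needs a separate check, since the endpoints depend on the configured model stack.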
## Key scripts (repo locations)

The GUI Agent CLI script is the most flexible entry point and is therefore the only one described in more detail in this documentation. Paths assume you are in the project root `~/Projects/Agent_S3/Agent-S`.

- GUI Agent CLI: `staging_scripts/gui_agent_cli.py`

Historically, we used purpose-built scripts for individual tasks. We now recommend `gui_agent_cli.py` as the primary entry point, because the same scenarios can usually be expressed via a well-scoped prompt while keeping the workflow more flexible and easier to maintain. The scripts below are kept for reference and may not reflect the current, preferred workflow.

- UI check (Agent S3): `staging_scripts/1_UI_check_AS3.py`
- Functional correctness check: `staging_scripts/1_UI_functional_correctness_check.py`
- Visual quality audit: `staging_scripts/2_UX_visual_quality_audit.py`
- Task-based UX flow (newsletter): `staging_scripts/3_UX_taskflow_newsletter_signup.py`
## Golden run (terminal on ECS)

This is the “golden run” command sequence currently used for D66 evidence generation. The golden run is a complete workflow that serves as a template for reproducible outcomes.

```bash
python staging_scripts/gui_agent_cli.py \
  --prompt "Role: You are a UI/UX testing agent specializing in functional correctness.
Goal: Test all interactive elements in the header navigation on www.telekom.de for functional weaknesses.
Tasks:
1. Navigate to the website
2. Identify and test interactive elements (buttons, links, forms, menus)
3. Check for broken flows, defective links, non-functioning elements
4. Document issues found
Report Format:
Return findings in the 'issues' field as a list of objects:
- element: Name/description of the element
- location: Where on the page
- problem: What doesn't work
- recommendation: How to fix it
If no problems found, return an empty array: []" \
  --max-steps 30
```
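The report format the golden-run prompt asks for (an `issues` list of objects, or `[]` when nothing is found) can be checked mechanically before the run is accepted as evidence. A Python sketch of such a validator (the field names come from the prompt above; the validator itself is an assumption, not part of the repo):

```python
REQUIRED_FIELDS = {"element", "location", "problem", "recommendation"}

def validate_issues(issues):
    """Return a list of schema errors for an 'issues' payload; empty list = valid."""
    if not isinstance(issues, list):
        return ["'issues' must be a list ([] means no problems found)"]
    errors = []
    for index, issue in enumerate(issues):
        if not isinstance(issue, dict):
            errors.append(f"issue {index} is not an object")
            continue
        missing = REQUIRED_FIELDS - issue.keys()
        if missing:
            errors.append(f"issue {index} is missing fields: {sorted(missing)}")
    return errors

print(validate_issues([]))  # → []
```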
Golden run artifacts:

- Screenshots: `./results/gui_agent_cli/<timestamp>/screenshots/`
- Text log: `./results/gui_agent_cli/<timestamp>/logs/run.log`
- Optional JSON comm log (if enabled): `./results/gui_agent_cli/<timestamp>/logs/calibration_log_*.json`

An example golden run with screenshots and log outputs can be seen in [Results](./results/).
## Alternative: run the agent via a web interface (Frontend)

Work in progress.

We are currently updating the web-based view and its ECS runner integration. This section will be filled with the correct, up-to-date instructions once the frontend flow supports the current Autonomous UAT Agent + `gui_agent_cli.py` workflow.

## Notes on model usage

Some scripts still contain legacy model configs (Claude/Pixtral). The D66 target configuration is documented in [Model Stack](./model-stack.md).
`content/en/docs/_index.md` (28 lines changed, Normal file → Executable file)

```diff
@@ -1,27 +1,9 @@
 ---
-title: "Documentation"
-linkTitle: "Documentation"
-menu:
-  main:
-    weight: 20
+title: Developer Framework Documentation
+linkTitle: Docs
+menu: {main: {weight: 20}}
+weight: 20
 ---
 
-# Edge Developer Platform (EDP) Documentation
+This section is the project documentation for IPCEI-CIS Developer Framework.
-
-Welcome to the EDP documentation. This documentation serves developers, engineers, and auditors who want to understand, use, and audit the Edge Developer Platform.
-
-It describes the outcomes and products of the edgeDeveloperFramework (eDF) sub-project within IPCEI-CIS.
-
-## Target Audience
-
-* **Developers & Engineers**: Learn how to use the platform, deploy applications, and integrate services
-* **Platform Engineers**: Understand the architecture, components, and operational aspects
-* **Auditors & Governance**: Access project history, decisions, and compliance information
-
-## Documentation Structure
-
-The documentation is organized into three core areas:
-
-* **[Edge Developer Platform (EDP)](/docs/edp/)**: The central platform to support developers working at the edge, based around Forgejo
-* **[EdgeConnect Cloud](/docs/edgeconnect/)**: The sovereign edge cloud context and key deployment target for EDP integrations
-* **[Governance](/docs/governance/)**: Project history, decision context, and audit-oriented traceability
```
```diff
@@ -4,7 +4,7 @@ weight: 2
 description: The golden paths in the engineers and product development domain
 ---
 
-## Rationale
+## Rationale
 
 The challenge of IPCEI-CIS Developer Framework is to provide value for DTAG customers, and more specifically: for Developers of DTAG customers.
```
|
@ -40,22 +40,10 @@ Deploy and develop the famous socks shops:
|
|||
|
||||
* https://github.com/kezoo/nestjs-reactjs-graphql-typescript-boilerplate-example
|
||||
|
||||
### Telemetry Use Case with respect to the Fibonacci workload
|
||||
|
||||
The Fibonacci App on the cluster can be accessed on the path https://cnoe.localtest.me/fibonacci.
|
||||
It can be called for example by using the URL https://cnoe.localtest.me/fibonacci?number=5000000.
|
||||
|
||||
The resulting ressource spike can be observed one the Grafana dashboard "Kubernetes / Compute Resources / Cluster".
|
||||
The resulting visualization should look similar like this:
|
||||
|
||||
|
||||

|
||||
|
||||
|
||||
## When and how to use the developer framework?
|
||||
|
||||
### e.g. an example
|
||||
|
||||
.... taken from https://cloud.google.com/blog/products/application-development/common-myths-about-platform-engineering?hl=en
|
||||
|
||||

|
||||

|
||||