Compare commits: v0.0.3...main (5 commits)

| SHA | Message | CI status | Date |
| --- | --- | --- | --- |
| d0aea88a5b | refactor: Rename recommender to sizer | ci / build (push) failing after 58s | 2026-02-13 16:42:37 +01:00 |
| 862fc07328 | ci: generate two separate binaries | ci / build (push) failing after 59s | 2026-02-13 14:24:33 +01:00 |
| 7e3a4efb2d | feat: added timestamp to HMAC to allow a TTL for the token | ci / build (push) failing after 25s | 2026-02-13 12:48:57 +01:00 |
| a96a1079eb | fix: now sizer does not round up to the next Gi when in between two | | 2026-02-13 12:30:32 +01:00 |
| 8101e9b20e | feat: added first iteration sizes not recomender | | 2026-02-13 12:02:53 +01:00 |
11 changed files with 1034 additions and 68 deletions

View file

@@ -28,7 +28,7 @@ make install-hooks # Install pre-commit and commit-msg hooks
 ## Architecture Overview
-This is a Go metrics collector designed for CI/CD environments with shared PID namespaces. It consists of two binaries:
+A resource optimiser for CI/CD environments with shared PID namespaces. It consists of two binaries — a **collector** and a **receiver** (which includes the **sizer**):
 ### Collector (`cmd/collector`)
 Runs alongside CI workloads, periodically reads `/proc` filesystem, and pushes a summary to the receiver on shutdown (SIGINT/SIGTERM).
@@ -40,11 +40,12 @@ Runs alongside CI workloads, periodically reads `/proc` filesystem, and pushes a
 4. On shutdown, `summary.PushClient` sends the summary to the receiver HTTP endpoint
 ### Receiver (`cmd/receiver`)
-HTTP service that stores metric summaries in SQLite (via GORM) and provides a query API.
+HTTP service that stores metric summaries in SQLite (via GORM), provides a query API, and includes the **sizer** — which computes right-sized Kubernetes resource requests and limits from historical data.
 **Key Endpoints:**
 - `POST /api/v1/metrics` - Receive metrics from collectors
 - `GET /api/v1/metrics/repo/{org}/{repo}/{workflow}/{job}` - Query stored metrics
+- `GET /api/v1/sizing/repo/{org}/{repo}/{workflow}/{job}` - Compute container sizes from historical data
 ### Internal Packages
@@ -55,7 +56,7 @@ HTTP service that stores metric summaries in SQLite (via GORM) and provides a query API.
 | `internal/proc` | Low-level /proc parsing (stat, status, cgroup) |
 | `internal/cgroup` | Parses CGROUP_LIMITS and CGROUP_PROCESS_MAP env vars |
 | `internal/summary` | Accumulates samples, computes stats, pushes to receiver |
-| `internal/receiver` | HTTP handlers and SQLite store |
+| `internal/receiver` | HTTP handlers, SQLite store, and sizer logic |
 | `internal/output` | Metrics output formatting (JSON/text) |
 ### Container Metrics
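
The sizing endpoint added above can be exercised with any HTTP client. As a minimal, hypothetical Go sketch (not part of this change set), assuming a local receiver, a placeholder read token, and the documented query parameters; the structs mirror only the fields of `SizingResponse` used here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type resourceSize struct {
	Request string `json:"request"`
	Limit   string `json:"limit"`
}

type containerSizing struct {
	Name   string       `json:"name"`
	CPU    resourceSize `json:"cpu"`
	Memory resourceSize `json:"memory"`
}

type sizingResponse struct {
	Containers []containerSizing `json:"containers"`
}

func main() {
	// Hypothetical base URL, path values, and read token; query params use the documented defaults.
	url := "http://localhost:8080/api/v1/sizing/repo/my-org/my-repo/ci.yml/build?runs=5&buffer=20&cpu_percentile=p95"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer my-secret-token") // read token auth

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var sizing sizingResponse
	if err := json.NewDecoder(resp.Body).Decode(&sizing); err != nil {
		panic(err)
	}
	for _, c := range sizing.Containers {
		fmt.Printf("%s: cpu %s / %s, memory %s / %s\n",
			c.Name, c.CPU.Request, c.CPU.Limit, c.Memory.Request, c.Memory.Limit)
	}
}
```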

View file

@@ -1,10 +1,10 @@
-# Forgejo Runner Resource Collector
+# Forgejo Runner Optimiser
-A lightweight metrics collector for CI/CD workloads in shared PID namespace environments. Reads `/proc` to collect CPU and memory metrics, groups them by container/cgroup, and pushes run summaries to a receiver service for storage and querying.
+A resource optimiser for CI/CD workloads in shared PID namespace environments. The **collector** reads `/proc` to gather CPU and memory metrics grouped by container/cgroup, and pushes run summaries to the **receiver**. The receiver stores metrics and exposes a **sizer** API that computes right-sized Kubernetes resource requests and limits from historical data.
 ## Architecture
-The system has two independent binaries:
+The system has two binaries — a **collector** and a **receiver** (which includes the sizer):
 ```
 ┌─────────────────────────────────────────────┐ ┌──────────────────────────┐
@@ -19,7 +19,9 @@ The system has two independent binaries:
 │ └───────────┘ └────────┘ └───────────┘ │                  │          │
 │                  │                     │                  ▼          │
 └─────────────────────────────────────────────┘ │ GET /api/v1/metrics/... │
-                                                └──────────────────────────┘
+                                                │ GET /api/v1/sizing/...  │
+                                                │        (sizer)          │
+                                                └──────────────────────────┘
 ```
 ### Collector
@@ -56,9 +58,9 @@ Runs as a sidecar alongside CI workloads. On a configurable interval, it reads `
 CPU supports Kubernetes notation (`"2"` = 2 cores, `"500m"` = 0.5 cores). Memory supports `Ki`, `Mi`, `Gi`, `Ti` (binary) or `K`, `M`, `G`, `T` (decimal).
-### Receiver
+### Receiver (with sizer)
-HTTP service that stores metric summaries in SQLite (via GORM) and exposes a query API.
+HTTP service that stores metric summaries in SQLite (via GORM), exposes a query API, and provides a **sizer** endpoint that computes right-sized Kubernetes resource requests and limits from historical run data.
 ```bash
 ./receiver --addr=:8080 --db=metrics.db --read-token=my-secret-token --hmac-key=my-hmac-key
@@ -78,6 +80,7 @@ HTTP service that stores metric summaries in SQLite (via GORM) and exposes a query API.
 - `POST /api/v1/metrics` — receive and store a metric summary (requires scoped push token)
 - `POST /api/v1/token` — generate a scoped push token (requires read token auth)
 - `GET /api/v1/metrics/repo/{org}/{repo}/{workflow}/{job}` — query stored metrics (requires read token auth)
+- `GET /api/v1/sizing/repo/{org}/{repo}/{workflow}/{job}` — compute container sizes from historical data (requires read token auth)
 **Authentication:**
@@ -232,7 +235,7 @@ PUSH_TOKEN=$(curl -s -X POST http://localhost:8080/api/v1/token \
 | `internal/cgroup` | Parses `CGROUP_PROCESS_MAP` and `CGROUP_LIMITS` env vars |
 | `internal/collector` | Orchestrates the collection loop and shutdown |
 | `internal/summary` | Accumulates samples, computes stats, pushes to receiver |
-| `internal/receiver` | HTTP handlers and SQLite store |
+| `internal/receiver` | HTTP handlers, SQLite store, and sizer logic |
 | `internal/output` | Metrics output formatting (JSON/text) |
 ## Background
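
To illustrate the CPU and memory notation described in the README above, here is a hypothetical Go sketch of the conversion arithmetic. The `parseCPU` and `parseMemory` helper names are assumptions for this example, not the repository's actual API (limit parsing in the repo presumably lives in `internal/cgroup`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPU: "2" means 2 cores, "500m" means 0.5 cores.
func parseCPU(s string) (float64, error) {
	if strings.HasSuffix(s, "m") {
		milli, err := strconv.ParseFloat(strings.TrimSuffix(s, "m"), 64)
		if err != nil {
			return 0, err
		}
		return milli / 1000, nil
	}
	return strconv.ParseFloat(s, 64)
}

// parseMemory handles binary (Ki/Mi/Gi/Ti) and decimal (K/M/G/T) suffixes, returning bytes.
func parseMemory(s string) (float64, error) {
	factors := map[string]float64{
		"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30, "Ti": 1 << 40,
		"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12,
	}
	// Check two-letter binary suffixes before one-letter decimal ones.
	for _, suf := range []string{"Ki", "Mi", "Gi", "Ti", "K", "M", "G", "T"} {
		if strings.HasSuffix(s, suf) {
			n, err := strconv.ParseFloat(strings.TrimSuffix(s, suf), 64)
			if err != nil {
				return 0, err
			}
			return n * factors[suf], nil
		}
	}
	return strconv.ParseFloat(s, 64) // bare number: bytes
}

func main() {
	c, _ := parseCPU("500m")    // 0.5 cores
	m, _ := parseMemory("512Mi") // 536870912 bytes
	fmt.Println(c, m)
}
```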

View file

@@ -24,6 +24,7 @@ func main() {
 	dbPath := flag.String("db", defaultDBPath, "SQLite database path")
 	readToken := flag.String("read-token", os.Getenv("RECEIVER_READ_TOKEN"), "Pre-shared token for read endpoints (or set RECEIVER_READ_TOKEN)")
 	hmacKey := flag.String("hmac-key", os.Getenv("RECEIVER_HMAC_KEY"), "Secret key for push token generation/validation (or set RECEIVER_HMAC_KEY)")
+	tokenTTL := flag.Duration("token-ttl", 2*time.Hour, "Time-to-live for push tokens (default 2h)")
 	flag.Parse()
 	logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
@@ -37,7 +38,7 @@ func main() {
 	}
 	defer func() { _ = store.Close() }()
-	handler := receiver.NewHandler(store, logger, *readToken, *hmacKey)
+	handler := receiver.NewHandler(store, logger, *readToken, *hmacKey, *tokenTTL)
 	mux := http.NewServeMux()
 	handler.RegisterRoutes(mux)

View file

@@ -32,7 +32,7 @@ func setupTestReceiver(t *testing.T) (*receiver.Store, *httptest.Server, func()) {
 		t.Fatalf("NewStore() error = %v", err)
 	}
-	handler := receiver.NewHandler(store, slog.New(slog.NewTextHandler(io.Discard, nil)), testReadToken, testHMACKey)
+	handler := receiver.NewHandler(store, slog.New(slog.NewTextHandler(io.Discard, nil)), testReadToken, testHMACKey, 0)
 	mux := http.NewServeMux()
 	handler.RegisterRoutes(mux)
@@ -46,9 +46,9 @@ func setupTestReceiver(t *testing.T) (*receiver.Store, *httptest.Server, func()) {
 	return store, server, cleanup
 }
-// generatePushToken generates a scoped push token for an execution context
+// generatePushToken generates a push token for an execution context
 func generatePushToken(exec summary.ExecutionContext) string {
-	return receiver.GenerateScopedToken(testHMACKey, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
+	return receiver.GenerateToken(testHMACKey, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
 }
 func TestPushClientToReceiver(t *testing.T) {
@@ -166,8 +166,8 @@ func TestPushClientIntegration(t *testing.T) {
 	t.Setenv("GITHUB_JOB", "push-job")
 	t.Setenv("GITHUB_RUN_ID", "push-run-456")
-	// Generate scoped push token
-	pushToken := receiver.GenerateScopedToken(testHMACKey, "push-client-org", "push-client-repo", "push-test.yml", "push-job")
+	// Generate push token
+	pushToken := receiver.GenerateToken(testHMACKey, "push-client-org", "push-client-repo", "push-test.yml", "push-job")
 	// Create push client with token - it reads execution context from env vars
 	pushClient := summary.NewPushClient(server.URL+"/api/v1/metrics", pushToken)
@@ -371,7 +371,7 @@ func setupTestReceiverWithToken(t *testing.T, readToken, hmacKey string) (*receiver.Store, *httptest.Server, func()) {
 		t.Fatalf("NewStore() error = %v", err)
 	}
-	handler := receiver.NewHandler(store, slog.New(slog.NewTextHandler(io.Discard, nil)), readToken, hmacKey)
+	handler := receiver.NewHandler(store, slog.New(slog.NewTextHandler(io.Discard, nil)), readToken, hmacKey, 0)
 	mux := http.NewServeMux()
 	handler.RegisterRoutes(mux)

View file

@@ -5,9 +5,11 @@ package receiver
 import (
 	"crypto/subtle"
 	"encoding/json"
+	"fmt"
 	"log/slog"
 	"net/http"
 	"strings"
+	"time"
 )
 // Handler handles HTTP requests for the metrics receiver
@@ -16,13 +18,18 @@ type Handler struct {
 	logger    *slog.Logger
 	readToken string // Pre-shared token for read endpoint authentication
 	hmacKey   string // Separate key for HMAC-based push token generation/validation
+	tokenTTL  time.Duration
 }
 // NewHandler creates a new HTTP handler with the given store.
 // readToken authenticates read endpoints and the token generation endpoint.
 // hmacKey is the secret used to derive scoped push tokens.
-func NewHandler(store *Store, logger *slog.Logger, readToken, hmacKey string) *Handler {
-	return &Handler{store: store, logger: logger, readToken: readToken, hmacKey: hmacKey}
+// tokenTTL specifies how long push tokens are valid (0 uses DefaultTokenTTL).
+func NewHandler(store *Store, logger *slog.Logger, readToken, hmacKey string, tokenTTL time.Duration) *Handler {
+	if tokenTTL == 0 {
+		tokenTTL = DefaultTokenTTL
+	}
+	return &Handler{store: store, logger: logger, readToken: readToken, hmacKey: hmacKey, tokenTTL: tokenTTL}
 }
 // RegisterRoutes registers all HTTP routes on the given mux
@@ -30,6 +37,7 @@ func (h *Handler) RegisterRoutes(mux *http.ServeMux) {
 	mux.HandleFunc("POST /api/v1/metrics", h.handleReceiveMetrics)
 	mux.HandleFunc("POST /api/v1/token", h.handleGenerateToken)
 	mux.HandleFunc("GET /api/v1/metrics/repo/{org}/{repo}/{workflow}/{job}", h.handleGetByWorkflowJob)
+	mux.HandleFunc("GET /api/v1/sizing/repo/{org}/{repo}/{workflow}/{job}", h.handleGetSizing)
 	mux.HandleFunc("GET /health", h.handleHealth)
 }
@@ -86,7 +94,7 @@ func (h *Handler) handleGenerateToken(w http.ResponseWriter, r *http.Request) {
 		return
 	}
-	token := GenerateScopedToken(h.hmacKey, req.Organization, req.Repository, req.Workflow, req.Job)
+	token := GenerateToken(h.hmacKey, req.Organization, req.Repository, req.Workflow, req.Job)
 	w.Header().Set("Content-Type", "application/json")
 	_ = json.NewEncoder(w).Encode(TokenResponse{Token: token})
@@ -115,7 +123,7 @@ func (h *Handler) validatePushToken(w http.ResponseWriter, r *http.Request, exec
 	}
 	token := strings.TrimPrefix(authHeader, bearerPrefix)
-	if !ValidateScopedToken(h.hmacKey, token, exec.Organization, exec.Repository, exec.Workflow, exec.Job) {
+	if !ValidateToken(h.hmacKey, token, exec.Organization, exec.Repository, exec.Workflow, exec.Job, h.tokenTTL) {
 		h.logger.Warn("invalid push token", slog.String("path", r.URL.Path))
 		http.Error(w, "invalid token", http.StatusUnauthorized)
 		return false
@@ -194,3 +202,71 @@ func (h *Handler) handleHealth(w http.ResponseWriter, r *http.Request) {
 	w.Header().Set("Content-Type", "application/json")
 	_ = json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
 }
+func (h *Handler) handleGetSizing(w http.ResponseWriter, r *http.Request) {
+	if !h.validateReadToken(w, r) {
+		return
+	}
+	org := r.PathValue("org")
+	repo := r.PathValue("repo")
+	workflow := r.PathValue("workflow")
+	job := r.PathValue("job")
+	if org == "" || repo == "" || workflow == "" || job == "" {
+		http.Error(w, "org, repo, workflow and job are required", http.StatusBadRequest)
+		return
+	}
+	// Parse query parameters with defaults
+	runs := parseIntQueryParam(r, "runs", 5, 1, 100)
+	buffer := parseIntQueryParam(r, "buffer", 20, 0, 100)
+	cpuPercentile := r.URL.Query().Get("cpu_percentile")
+	if cpuPercentile == "" {
+		cpuPercentile = "p95"
+	}
+	if !IsValidPercentile(cpuPercentile) {
+		http.Error(w, "invalid cpu_percentile: must be one of peak, p99, p95, p75, p50, avg", http.StatusBadRequest)
+		return
+	}
+	metrics, err := h.store.GetRecentMetricsByWorkflowJob(org, repo, workflow, job, runs)
+	if err != nil {
+		h.logger.Error("failed to get metrics", slog.String("error", err.Error()))
+		http.Error(w, "failed to get metrics", http.StatusInternalServerError)
+		return
+	}
+	if len(metrics) == 0 {
+		http.Error(w, "no metrics found for this workflow/job", http.StatusNotFound)
+		return
+	}
+	response, err := computeSizing(metrics, buffer, cpuPercentile)
+	if err != nil {
+		h.logger.Error("failed to compute sizing", slog.String("error", err.Error()))
+		http.Error(w, "failed to compute sizing", http.StatusInternalServerError)
+		return
+	}
+	w.Header().Set("Content-Type", "application/json")
+	_ = json.NewEncoder(w).Encode(response)
+}
+// parseIntQueryParam parses an integer query parameter with default, min, and max values
+func parseIntQueryParam(r *http.Request, name string, defaultVal, minVal, maxVal int) int {
+	strVal := r.URL.Query().Get(name)
+	if strVal == "" {
+		return defaultVal
+	}
+	var val int
+	if _, err := fmt.Sscanf(strVal, "%d", &val); err != nil {
+		return defaultVal
+	}
+	if val < minVal {
+		return minVal
+	}
+	if val > maxVal {
+		return maxVal
+	}
+	return val
+}

View file

@@ -8,6 +8,7 @@ import (
 	"net/http"
 	"net/http/httptest"
 	"path/filepath"
+	"strings"
 	"testing"
 	"edp.buildth.ing/DevFW-CICD/forgejo-runner-optimiser/internal/summary"
@@ -25,7 +26,7 @@ func TestHandler_ReceiveMetrics(t *testing.T) {
 		Job:          "build",
 		RunID:        "run-123",
 	}
-	pushToken := GenerateScopedToken(readToken, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
+	pushToken := GenerateToken(readToken, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
 	payload := MetricsPayload{
 		Execution: exec,
@@ -264,8 +265,13 @@ func TestHandler_GenerateToken(t *testing.T) {
 	if resp.Token == "" {
 		t.Error("expected non-empty token")
 	}
-	if len(resp.Token) != 64 {
-		t.Errorf("token length = %d, want 64", len(resp.Token))
+	// Token format is "timestamp:hmac" where hmac is 64 hex chars
+	parts := strings.SplitN(resp.Token, ":", 2)
+	if len(parts) != 2 {
+		t.Errorf("token should have format 'timestamp:hmac', got %q", resp.Token)
+	}
+	if len(parts[1]) != 64 {
+		t.Errorf("HMAC part length = %d, want 64", len(parts[1]))
 	}
 }
@@ -357,8 +363,8 @@ func TestHandler_ReceiveMetrics_WithPushToken(t *testing.T) {
 		RunID:        "run-1",
 	}
-	validToken := GenerateScopedToken(readToken, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
-	wrongScopeToken := GenerateScopedToken(readToken, "other-org", "repo", "ci.yml", "build")
+	validToken := GenerateToken(readToken, exec.Organization, exec.Repository, exec.Workflow, exec.Job)
+	wrongScopeToken := GenerateToken(readToken, "other-org", "repo", "ci.yml", "build")
 	tests := []struct {
 		name string
@@ -448,7 +454,7 @@ func newTestHandler(t *testing.T) (*Handler, func()) {
 	}
 	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
-	handler := NewHandler(store, logger, "", "") // no auth — endpoints will reject
+	handler := NewHandler(store, logger, "", "", 0) // no auth — endpoints will reject
 	return handler, func() { _ = store.Close() }
 }
@@ -467,7 +473,7 @@ func newTestHandlerWithKeys(t *testing.T, readToken, hmacKey string) (*Handler, func()) {
 	}
 	logger := slog.New(slog.NewTextHandler(io.Discard, nil))
-	handler := NewHandler(store, logger, readToken, hmacKey)
+	handler := NewHandler(store, logger, readToken, hmacKey, 0) // 0 uses DefaultTokenTTL
 	return handler, func() { _ = store.Close() }
 }

internal/receiver/sizing.go (new file, 221 lines)
View file

@@ -0,0 +1,221 @@
// ABOUTME: Computes ideal container sizes from historical run data.
// ABOUTME: Provides Kubernetes-style resource sizes.
package receiver
import (
"encoding/json"
"fmt"
"math"
"sort"
"edp.buildth.ing/DevFW-CICD/forgejo-runner-optimiser/internal/summary"
)
// ResourceSize holds Kubernetes-formatted resource values
type ResourceSize struct {
Request string `json:"request"`
Limit string `json:"limit"`
}
// ContainerSizing holds computed sizing for a single container
type ContainerSizing struct {
Name string `json:"name"`
CPU ResourceSize `json:"cpu"`
Memory ResourceSize `json:"memory"`
}
// SizingMeta provides context about the sizing calculation
type SizingMeta struct {
RunsAnalyzed int `json:"runs_analyzed"`
BufferPercent int `json:"buffer_percent"`
CPUPercentile string `json:"cpu_percentile"`
}
// SizingResponse is the API response for the sizing endpoint
type SizingResponse struct {
Containers []ContainerSizing `json:"containers"`
Total struct {
CPU ResourceSize `json:"cpu"`
Memory ResourceSize `json:"memory"`
} `json:"total"`
Meta SizingMeta `json:"meta"`
}
// validPercentiles lists the allowed percentile values
var validPercentiles = map[string]bool{
"peak": true,
"p99": true,
"p95": true,
"p75": true,
"p50": true,
"avg": true,
}
// IsValidPercentile checks if the given percentile string is valid
func IsValidPercentile(p string) bool {
return validPercentiles[p]
}
// selectCPUValue extracts the appropriate value from StatSummary based on percentile
func selectCPUValue(stats summary.StatSummary, percentile string) float64 {
switch percentile {
case "peak":
return stats.Peak
case "p99":
return stats.P99
case "p95":
return stats.P95
case "p75":
return stats.P75
case "p50":
return stats.P50
case "avg":
return stats.Avg
default:
return stats.P95 // default to p95
}
}
// formatMemoryK8s converts bytes to Kubernetes memory format (Mi)
func formatMemoryK8s(bytes float64) string {
const Mi = 1024 * 1024
return fmt.Sprintf("%.0fMi", math.Ceil(bytes/Mi))
}
// formatCPUK8s converts cores to Kubernetes CPU format (millicores or whole cores)
func formatCPUK8s(cores float64) string {
millicores := cores * 1000
if millicores >= 1000 && math.Mod(millicores, 1000) == 0 {
return fmt.Sprintf("%.0f", cores)
}
return fmt.Sprintf("%.0fm", math.Ceil(millicores))
}
// roundUpMemoryLimit rounds bytes up to the next power of 2 in Mi
func roundUpMemoryLimit(bytes float64) float64 {
const Mi = 1024 * 1024
if bytes <= 0 {
return Mi // minimum 1Mi
}
miValue := bytes / Mi
if miValue <= 1 {
return Mi // minimum 1Mi
}
// Find next power of 2
power := math.Ceil(math.Log2(miValue))
return math.Pow(2, power) * Mi
}
// roundUpCPULimit rounds cores up to the next 0.5 increment
func roundUpCPULimit(cores float64) float64 {
if cores <= 0 {
return 0.5 // minimum 0.5 cores
}
return math.Ceil(cores*2) / 2
}
// containerAggregation holds accumulated stats for a single container across runs
type containerAggregation struct {
cpuValues []float64
memoryPeaks []float64
}
// computeSizing calculates ideal container sizes from metrics
func computeSizing(metrics []Metric, bufferPercent int, cpuPercentile string) (*SizingResponse, error) {
if len(metrics) == 0 {
return nil, fmt.Errorf("no metrics provided")
}
// Aggregate container stats across all runs
containerStats := make(map[string]*containerAggregation)
for _, m := range metrics {
var runSummary summary.RunSummary
if err := json.Unmarshal([]byte(m.Payload), &runSummary); err != nil {
continue // skip invalid payloads
}
for _, c := range runSummary.Containers {
if _, exists := containerStats[c.Name]; !exists {
containerStats[c.Name] = &containerAggregation{
cpuValues: make([]float64, 0),
memoryPeaks: make([]float64, 0),
}
}
agg := containerStats[c.Name]
agg.cpuValues = append(agg.cpuValues, selectCPUValue(c.CPUCores, cpuPercentile))
agg.memoryPeaks = append(agg.memoryPeaks, c.MemoryBytes.Peak)
}
}
// Calculate sizing for each container
bufferMultiplier := 1.0 + float64(bufferPercent)/100.0
var containers []ContainerSizing
var totalCPU, totalMemory float64
// Sort container names for consistent output
names := make([]string, 0, len(containerStats))
for name := range containerStats {
names = append(names, name)
}
sort.Strings(names)
for _, name := range names {
agg := containerStats[name]
// CPU: max of selected percentile values across runs
maxCPU := 0.0
for _, v := range agg.cpuValues {
if v > maxCPU {
maxCPU = v
}
}
// Memory: peak of peaks
maxMemory := 0.0
for _, v := range agg.memoryPeaks {
if v > maxMemory {
maxMemory = v
}
}
// Apply buffer
cpuWithBuffer := maxCPU * bufferMultiplier
memoryWithBuffer := maxMemory * bufferMultiplier
containers = append(containers, ContainerSizing{
Name: name,
CPU: ResourceSize{
Request: formatCPUK8s(cpuWithBuffer),
Limit: formatCPUK8s(roundUpCPULimit(cpuWithBuffer)),
},
Memory: ResourceSize{
Request: formatMemoryK8s(memoryWithBuffer),
Limit: formatMemoryK8s(roundUpMemoryLimit(memoryWithBuffer)),
},
})
totalCPU += cpuWithBuffer
totalMemory += memoryWithBuffer
}
response := &SizingResponse{
Containers: containers,
Meta: SizingMeta{
RunsAnalyzed: len(metrics),
BufferPercent: bufferPercent,
CPUPercentile: cpuPercentile,
},
}
response.Total.CPU = ResourceSize{
Request: formatCPUK8s(totalCPU),
Limit: formatCPUK8s(roundUpCPULimit(totalCPU)),
}
response.Total.Memory = ResourceSize{
Request: formatMemoryK8s(totalMemory),
Limit: formatMemoryK8s(roundUpMemoryLimit(totalMemory)),
}
return response, nil
}
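
To make the buffer and rounding arithmetic above concrete, here is an illustrative example function, not part of the diff, that could live alongside the package's tests; the numbers mirror the single-run test case further down (p95 CPU of 0.8 cores, a 512Mi memory peak, and a 20% buffer):

```go
package receiver

import "fmt"

// Example_sizingMath is a sketch (not in the source) showing how the helpers
// above combine for one container.
func Example_sizingMath() {
	cpu := 0.8 * 1.2 // buffered p95 CPU in cores
	fmt.Println(formatCPUK8s(cpu))                        // request
	fmt.Println(formatCPUK8s(roundUpCPULimit(cpu)))       // limit: next 0.5-core step
	mem := 512.0 * 1024 * 1024 * 1.2 // buffered peak memory in bytes
	fmt.Println(formatMemoryK8s(mem))                     // request (ceil to whole Mi)
	fmt.Println(formatMemoryK8s(roundUpMemoryLimit(mem))) // limit: next power of 2
	// Output:
	// 960m
	// 1
	// 615Mi
	// 1024Mi
}
```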

View file

@@ -0,0 +1,494 @@
package receiver
import (
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"edp.buildth.ing/DevFW-CICD/forgejo-runner-optimiser/internal/summary"
)
func TestFormatMemoryK8s(t *testing.T) {
tests := []struct {
bytes float64
want string
}{
{0, "0Mi"},
{1024 * 1024, "1Mi"},
{256 * 1024 * 1024, "256Mi"},
{512 * 1024 * 1024, "512Mi"},
{1024 * 1024 * 1024, "1024Mi"},
{2 * 1024 * 1024 * 1024, "2048Mi"},
{1.5 * 1024 * 1024 * 1024, "1536Mi"},
{100 * 1024 * 1024, "100Mi"},
}
for _, tt := range tests {
got := formatMemoryK8s(tt.bytes)
if got != tt.want {
t.Errorf("formatMemoryK8s(%v) = %q, want %q", tt.bytes, got, tt.want)
}
}
}
func TestFormatCPUK8s(t *testing.T) {
tests := []struct {
cores float64
want string
}{
{0, "0m"},
{0.1, "100m"},
{0.5, "500m"},
{1.0, "1"},
{1.5, "1500m"},
{2.0, "2"},
{2.5, "2500m"},
{0.123, "123m"},
}
for _, tt := range tests {
got := formatCPUK8s(tt.cores)
if got != tt.want {
t.Errorf("formatCPUK8s(%v) = %q, want %q", tt.cores, got, tt.want)
}
}
}
func TestRoundUpMemoryLimit(t *testing.T) {
Mi := float64(1024 * 1024)
tests := []struct {
bytes float64
want float64
}{
{0, Mi}, // minimum 1Mi
{100, Mi}, // rounds up to 1Mi
{Mi, Mi}, // exactly 1Mi stays 1Mi
{1.5 * Mi, 2 * Mi},
{200 * Mi, 256 * Mi},
{300 * Mi, 512 * Mi},
{600 * Mi, 1024 * Mi},
}
for _, tt := range tests {
got := roundUpMemoryLimit(tt.bytes)
if got != tt.want {
t.Errorf("roundUpMemoryLimit(%v) = %v, want %v", tt.bytes, got, tt.want)
}
}
}
func TestRoundUpCPULimit(t *testing.T) {
tests := []struct {
cores float64
want float64
}{
{0, 0.5}, // minimum 0.5
{0.1, 0.5},
{0.5, 0.5},
{0.6, 1.0},
{1.0, 1.0},
{1.1, 1.5},
{1.5, 1.5},
{2.0, 2.0},
{2.3, 2.5},
}
for _, tt := range tests {
got := roundUpCPULimit(tt.cores)
if got != tt.want {
t.Errorf("roundUpCPULimit(%v) = %v, want %v", tt.cores, got, tt.want)
}
}
}
func TestSelectCPUValue(t *testing.T) {
stats := summary.StatSummary{
Peak: 10.0,
P99: 9.0,
P95: 8.0,
P75: 6.0,
P50: 5.0,
Avg: 4.0,
}
tests := []struct {
percentile string
want float64
}{
{"peak", 10.0},
{"p99", 9.0},
{"p95", 8.0},
{"p75", 6.0},
{"p50", 5.0},
{"avg", 4.0},
{"invalid", 8.0}, // defaults to p95
}
for _, tt := range tests {
got := selectCPUValue(stats, tt.percentile)
if got != tt.want {
t.Errorf("selectCPUValue(stats, %q) = %v, want %v", tt.percentile, got, tt.want)
}
}
}
func TestIsValidPercentile(t *testing.T) {
valid := []string{"peak", "p99", "p95", "p75", "p50", "avg"}
for _, p := range valid {
if !IsValidPercentile(p) {
t.Errorf("IsValidPercentile(%q) = false, want true", p)
}
}
invalid := []string{"p80", "p90", "max", ""}
for _, p := range invalid {
if IsValidPercentile(p) {
t.Errorf("IsValidPercentile(%q) = true, want false", p)
}
}
}
func TestComputeSizing_SingleRun(t *testing.T) {
runSummary := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{Peak: 1.0, P99: 0.9, P95: 0.8, P75: 0.6, P50: 0.5, Avg: 0.4},
MemoryBytes: summary.StatSummary{Peak: 512 * 1024 * 1024}, // 512Mi
},
},
}
payload, _ := json.Marshal(runSummary)
metrics := []Metric{{Payload: string(payload)}}
resp, err := computeSizing(metrics, 20, "p95")
if err != nil {
t.Fatalf("computeSizing() error = %v", err)
}
if len(resp.Containers) != 1 {
t.Fatalf("got %d containers, want 1", len(resp.Containers))
}
c := resp.Containers[0]
if c.Name != "runner" {
t.Errorf("container name = %q, want %q", c.Name, "runner")
}
// CPU: 0.8 * 1.2 = 0.96 -> 960m request, 1 limit
if c.CPU.Request != "960m" {
t.Errorf("CPU request = %q, want %q", c.CPU.Request, "960m")
}
if c.CPU.Limit != "1" {
t.Errorf("CPU limit = %q, want %q", c.CPU.Limit, "1")
}
// Memory: 512Mi * 1.2 = 614.4Mi -> 615Mi request, 1024Mi limit
if c.Memory.Request != "615Mi" {
t.Errorf("Memory request = %q, want %q", c.Memory.Request, "615Mi")
}
if c.Memory.Limit != "1024Mi" {
t.Errorf("Memory limit = %q, want %q", c.Memory.Limit, "1024Mi")
}
if resp.Meta.RunsAnalyzed != 1 {
t.Errorf("runs_analyzed = %d, want 1", resp.Meta.RunsAnalyzed)
}
if resp.Meta.BufferPercent != 20 {
t.Errorf("buffer_percent = %d, want 20", resp.Meta.BufferPercent)
}
if resp.Meta.CPUPercentile != "p95" {
t.Errorf("cpu_percentile = %q, want %q", resp.Meta.CPUPercentile, "p95")
}
}
func TestComputeSizing_MultipleRuns(t *testing.T) {
// Run 1: lower values
run1 := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{Peak: 0.5, P95: 0.4},
MemoryBytes: summary.StatSummary{Peak: 256 * 1024 * 1024},
},
},
}
// Run 2: higher values (should be used)
run2 := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{Peak: 1.0, P95: 0.8},
MemoryBytes: summary.StatSummary{Peak: 512 * 1024 * 1024},
},
},
}
payload1, _ := json.Marshal(run1)
payload2, _ := json.Marshal(run2)
metrics := []Metric{
{Payload: string(payload1)},
{Payload: string(payload2)},
}
resp, err := computeSizing(metrics, 0, "p95") // no buffer for easier math
if err != nil {
t.Fatalf("computeSizing() error = %v", err)
}
c := resp.Containers[0]
// CPU: max(0.4, 0.8) = 0.8
if c.CPU.Request != "800m" {
t.Errorf("CPU request = %q, want %q", c.CPU.Request, "800m")
}
// Memory: max(256, 512) = 512Mi
if c.Memory.Request != "512Mi" {
t.Errorf("Memory request = %q, want %q", c.Memory.Request, "512Mi")
}
if resp.Meta.RunsAnalyzed != 2 {
t.Errorf("runs_analyzed = %d, want 2", resp.Meta.RunsAnalyzed)
}
}
func TestComputeSizing_MultipleContainers(t *testing.T) {
runSummary := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{P95: 1.0},
MemoryBytes: summary.StatSummary{Peak: 512 * 1024 * 1024},
},
{
Name: "dind",
CPUCores: summary.StatSummary{P95: 0.5},
MemoryBytes: summary.StatSummary{Peak: 256 * 1024 * 1024},
},
},
}
payload, _ := json.Marshal(runSummary)
metrics := []Metric{{Payload: string(payload)}}
resp, err := computeSizing(metrics, 0, "p95")
if err != nil {
t.Fatalf("computeSizing() error = %v", err)
}
if len(resp.Containers) != 2 {
t.Fatalf("got %d containers, want 2", len(resp.Containers))
}
// Containers should be sorted alphabetically
if resp.Containers[0].Name != "dind" {
t.Errorf("first container = %q, want %q", resp.Containers[0].Name, "dind")
}
if resp.Containers[1].Name != "runner" {
t.Errorf("second container = %q, want %q", resp.Containers[1].Name, "runner")
}
// Total should be sum
if resp.Total.CPU.Request != "1500m" {
t.Errorf("total CPU request = %q, want %q", resp.Total.CPU.Request, "1500m")
}
if resp.Total.Memory.Request != "768Mi" {
t.Errorf("total memory request = %q, want %q", resp.Total.Memory.Request, "768Mi")
}
}
func TestComputeSizing_NoMetrics(t *testing.T) {
_, err := computeSizing([]Metric{}, 20, "p95")
if err == nil {
t.Error("computeSizing() with no metrics should return error")
}
}
func TestHandler_GetSizing(t *testing.T) {
const readToken = "test-token"
h, cleanup := newTestHandlerWithToken(t, readToken)
defer cleanup()
// Save metrics with container data
for i := 0; i < 3; i++ {
runSummary := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{Peak: 1.0, P99: 0.9, P95: 0.8, P75: 0.6, P50: 0.5, Avg: 0.4},
MemoryBytes: summary.StatSummary{Peak: 512 * 1024 * 1024},
},
},
}
payload := &MetricsPayload{
Execution: ExecutionContext{
Organization: "org",
Repository: "repo",
Workflow: "ci.yml",
Job: "build",
RunID: "run-" + string(rune('1'+i)),
},
Summary: runSummary,
}
if _, err := h.store.SaveMetric(payload); err != nil {
t.Fatalf("SaveMetric() error = %v", err)
}
}
req := httptest.NewRequest(http.MethodGet, "/api/v1/sizing/repo/org/repo/ci.yml/build", nil)
req.Header.Set("Authorization", "Bearer "+readToken)
rec := httptest.NewRecorder()
mux := http.NewServeMux()
h.RegisterRoutes(mux)
mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("status = %d, want %d", rec.Code, http.StatusOK)
}
var resp SizingResponse
if err := json.NewDecoder(rec.Body).Decode(&resp); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if len(resp.Containers) != 1 {
t.Errorf("got %d containers, want 1", len(resp.Containers))
}
if resp.Meta.RunsAnalyzed != 3 {
t.Errorf("runs_analyzed = %d, want 3", resp.Meta.RunsAnalyzed)
}
if resp.Meta.BufferPercent != 20 {
t.Errorf("buffer_percent = %d, want 20", resp.Meta.BufferPercent)
}
if resp.Meta.CPUPercentile != "p95" {
t.Errorf("cpu_percentile = %q, want %q", resp.Meta.CPUPercentile, "p95")
}
}
func TestHandler_GetSizing_CustomParams(t *testing.T) {
const readToken = "test-token"
h, cleanup := newTestHandlerWithToken(t, readToken)
defer cleanup()
// Save one metric
runSummary := summary.RunSummary{
Containers: []summary.ContainerSummary{
{
Name: "runner",
CPUCores: summary.StatSummary{Peak: 1.0, P99: 0.9, P95: 0.8, P75: 0.6, P50: 0.5, Avg: 0.4},
MemoryBytes: summary.StatSummary{Peak: 512 * 1024 * 1024},
},
},
}
payload := &MetricsPayload{
Execution: ExecutionContext{Organization: "org", Repository: "repo", Workflow: "ci.yml", Job: "build", RunID: "run-1"},
Summary: runSummary,
}
if _, err := h.store.SaveMetric(payload); err != nil {
t.Fatalf("SaveMetric() error = %v", err)
}
req := httptest.NewRequest(http.MethodGet, "/api/v1/sizing/repo/org/repo/ci.yml/build?runs=10&buffer=10&cpu_percentile=p75", nil)
req.Header.Set("Authorization", "Bearer "+readToken)
rec := httptest.NewRecorder()
mux := http.NewServeMux()
h.RegisterRoutes(mux)
mux.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Errorf("status = %d, want %d", rec.Code, http.StatusOK)
}
var resp SizingResponse
if err := json.NewDecoder(rec.Body).Decode(&resp); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if resp.Meta.BufferPercent != 10 {
t.Errorf("buffer_percent = %d, want 10", resp.Meta.BufferPercent)
}
if resp.Meta.CPUPercentile != "p75" {
t.Errorf("cpu_percentile = %q, want %q", resp.Meta.CPUPercentile, "p75")
}
// CPU: 0.6 * 1.1 = 0.66
c := resp.Containers[0]
if c.CPU.Request != "660m" {
t.Errorf("CPU request = %q, want %q", c.CPU.Request, "660m")
}
}
func TestHandler_GetSizing_NotFound(t *testing.T) {
const readToken = "test-token"
h, cleanup := newTestHandlerWithToken(t, readToken)
defer cleanup()
req := httptest.NewRequest(http.MethodGet, "/api/v1/sizing/repo/org/repo/ci.yml/build", nil)
req.Header.Set("Authorization", "Bearer "+readToken)
rec := httptest.NewRecorder()
mux := http.NewServeMux()
h.RegisterRoutes(mux)
mux.ServeHTTP(rec, req)
if rec.Code != http.StatusNotFound {
t.Errorf("status = %d, want %d", rec.Code, http.StatusNotFound)
}
}
func TestHandler_GetSizing_InvalidPercentile(t *testing.T) {
const readToken = "test-token"
h, cleanup := newTestHandlerWithToken(t, readToken)
defer cleanup()
req := httptest.NewRequest(http.MethodGet, "/api/v1/sizing/repo/org/repo/ci.yml/build?cpu_percentile=p80", nil)
req.Header.Set("Authorization", "Bearer "+readToken)
rec := httptest.NewRecorder()
mux := http.NewServeMux()
h.RegisterRoutes(mux)
mux.ServeHTTP(rec, req)
if rec.Code != http.StatusBadRequest {
t.Errorf("status = %d, want %d", rec.Code, http.StatusBadRequest)
}
}
func TestHandler_GetSizing_AuthRequired(t *testing.T) {
const readToken = "test-token"
h, cleanup := newTestHandlerWithToken(t, readToken)
defer cleanup()
tests := []struct {
name string
authHeader string
wantCode int
}{
{"no auth", "", http.StatusUnauthorized},
{"wrong token", "Bearer wrong-token", http.StatusUnauthorized},
{"valid token", "Bearer " + readToken, http.StatusNotFound}, // no metrics, but auth works
}
mux := http.NewServeMux()
h.RegisterRoutes(mux)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "/api/v1/sizing/repo/org/repo/ci.yml/build", nil)
if tt.authHeader != "" {
req.Header.Set("Authorization", tt.authHeader)
}
rec := httptest.NewRecorder()
mux.ServeHTTP(rec, req)
if rec.Code != tt.wantCode {
t.Errorf("status = %d, want %d", rec.Code, tt.wantCode)
}
})
}
}

View file

@@ -103,6 +103,16 @@ func (s *Store) GetMetricsByWorkflowJob(org, repo, workflow, job string) ([]Metric, error) {
 	return metrics, result.Error
 }
+// GetRecentMetricsByWorkflowJob retrieves the last N metrics ordered by received_at DESC
+func (s *Store) GetRecentMetricsByWorkflowJob(org, repo, workflow, job string, limit int) ([]Metric, error) {
+	var metrics []Metric
+	result := s.db.Where(
+		"organization = ? AND repository = ? AND workflow = ? AND job = ?",
+		org, repo, workflow, job,
+	).Order("received_at DESC").Limit(limit).Find(&metrics)
+	return metrics, result.Error
+}
 // Close closes the database connection
 func (s *Store) Close() error {
 	sqlDB, err := s.db.DB()

View file

@@ -1,5 +1,5 @@
 // ABOUTME: HMAC-SHA256 token generation and validation for scoped push authentication.
-// ABOUTME: Tokens are derived from a key + scope, enabling stateless validation without DB storage.
+// ABOUTME: Tokens are derived from a key + scope + timestamp, enabling stateless validation with expiration.
 package receiver
 import (
@@ -7,19 +7,71 @@ import (
 	"crypto/sha256"
 	"crypto/subtle"
 	"encoding/hex"
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
 )
-// GenerateScopedToken computes an HMAC-SHA256 token scoped to a specific org/repo/workflow/job.
-// The canonical input is "v1\x00<org>\x00<repo>\x00<workflow>\x00<job>".
-func GenerateScopedToken(key, org, repo, workflow, job string) string {
-	mac := hmac.New(sha256.New, []byte(key))
-	mac.Write([]byte("v1\x00" + org + "\x00" + repo + "\x00" + workflow + "\x00" + job))
-	return hex.EncodeToString(mac.Sum(nil))
-}
-// ValidateScopedToken checks whether a token matches the expected HMAC for the given scope.
-// Uses constant-time comparison to prevent timing attacks.
-func ValidateScopedToken(key, token, org, repo, workflow, job string) bool {
-	expected := GenerateScopedToken(key, org, repo, workflow, job)
-	return subtle.ConstantTimeCompare([]byte(token), []byte(expected)) == 1
-}
+// DefaultTokenTTL is the default time-to-live for push tokens.
+const DefaultTokenTTL = 2 * time.Hour
+// GenerateToken creates a token with embedded timestamp for expiration support.
+// Format: "<unix_timestamp>:<hmac_hex>"
+func GenerateToken(key, org, repo, workflow, job string) string {
+	return GenerateTokenAt(key, org, repo, workflow, job, time.Now())
+}
+// GenerateTokenAt creates a token with the specified timestamp.
+// The HMAC input is "v1\x00<org>\x00<repo>\x00<workflow>\x00<job>\x00<timestamp>".
+func GenerateTokenAt(key, org, repo, workflow, job string, timestamp time.Time) string {
+	ts := strconv.FormatInt(timestamp.Unix(), 10)
+	mac := hmac.New(sha256.New, []byte(key))
+	mac.Write([]byte("v1\x00" + org + "\x00" + repo + "\x00" + workflow + "\x00" + job + "\x00" + ts))
+	return ts + ":" + hex.EncodeToString(mac.Sum(nil))
+}
+// ValidateToken validates a token and checks expiration.
+// Returns true if the token is valid and not expired.
+func ValidateToken(key, token, org, repo, workflow, job string, ttl time.Duration) bool {
+	return ValidateTokenAt(key, token, org, repo, workflow, job, ttl, time.Now())
+}
+// ValidateTokenAt validates a token against a specific reference time.
+func ValidateTokenAt(key, token, org, repo, workflow, job string, ttl time.Duration, now time.Time) bool {
+	parts := strings.SplitN(token, ":", 2)
+	if len(parts) != 2 {
+		return false
+	}
+	tsStr, hmacHex := parts[0], parts[1]
+	ts, err := strconv.ParseInt(tsStr, 10, 64)
+	if err != nil {
+		return false
+	}
+	tokenTime := time.Unix(ts, 0)
+	if now.Sub(tokenTime) > ttl {
+		return false
+	}
+	// Recompute expected HMAC
+	mac := hmac.New(sha256.New, []byte(key))
+	mac.Write([]byte("v1\x00" + org + "\x00" + repo + "\x00" + workflow + "\x00" + job + "\x00" + tsStr))
+	expected := hex.EncodeToString(mac.Sum(nil))
+	return subtle.ConstantTimeCompare([]byte(hmacHex), []byte(expected)) == 1
+}
+// ParseTokenTimestamp extracts the timestamp from a timestamped token without validating it.
+func ParseTokenTimestamp(token string) (time.Time, error) {
+	parts := strings.SplitN(token, ":", 2)
+	if len(parts) != 2 {
+		return time.Time{}, fmt.Errorf("invalid token format")
+	}
+	ts, err := strconv.ParseInt(parts[0], 10, 64)
+	if err != nil {
+		return time.Time{}, fmt.Errorf("invalid timestamp: %w", err)
+	}
+	return time.Unix(ts, 0), nil
+}

View file

@@ -1,20 +1,35 @@
 package receiver
 import (
-	"encoding/hex"
+	"strconv"
+	"strings"
 	"testing"
+	"time"
 )
-func TestGenerateScopedToken_Deterministic(t *testing.T) {
-	token1 := GenerateScopedToken("key", "org", "repo", "wf", "job")
-	token2 := GenerateScopedToken("key", "org", "repo", "wf", "job")
+func TestGenerateToken_Format(t *testing.T) {
+	token := GenerateToken("key", "org", "repo", "wf", "job")
+	parts := strings.SplitN(token, ":", 2)
+	if len(parts) != 2 {
+		t.Fatalf("token should have format 'timestamp:hmac', got %q", token)
+	}
+	if len(parts[1]) != 64 {
+		t.Errorf("HMAC part length = %d, want 64", len(parts[1]))
+	}
+}
+func TestGenerateTokenAt_Deterministic(t *testing.T) {
+	ts := time.Unix(1700000000, 0)
+	token1 := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	token2 := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
 	if token1 != token2 {
 		t.Errorf("tokens differ: %q vs %q", token1, token2)
 	}
 }
-func TestGenerateScopedToken_ScopePinning(t *testing.T) {
-	base := GenerateScopedToken("key", "org", "repo", "wf", "job")
+func TestGenerateTokenAt_ScopePinning(t *testing.T) {
+	ts := time.Unix(1700000000, 0)
+	base := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
 	variants := []struct {
 		name string
@@ -31,7 +46,7 @@ func TestGenerateTokenAt_ScopePinning(t *testing.T) {
 	for _, v := range variants {
 		t.Run(v.name, func(t *testing.T) {
-			token := GenerateScopedToken("key", v.org, v.repo, v.wf, v.job)
+			token := GenerateTokenAt("key", v.org, v.repo, v.wf, v.job, ts)
 			if token == base {
 				t.Errorf("token for %s should differ from base", v.name)
 			}
@@ -39,40 +54,127 @@ func TestGenerateTokenAt_ScopePinning(t *testing.T) {
 	}
 }
-func TestGenerateScopedToken_DifferentKeys(t *testing.T) {
-	token1 := GenerateScopedToken("key-a", "org", "repo", "wf", "job")
-	token2 := GenerateScopedToken("key-b", "org", "repo", "wf", "job")
+func TestGenerateTokenAt_DifferentKeys(t *testing.T) {
+	ts := time.Unix(1700000000, 0)
+	token1 := GenerateTokenAt("key-a", "org", "repo", "wf", "job", ts)
+	token2 := GenerateTokenAt("key-b", "org", "repo", "wf", "job", ts)
 	if token1 == token2 {
 		t.Error("different keys should produce different tokens")
 	}
 }
-func TestGenerateScopedToken_ValidHex(t *testing.T) {
-	token := GenerateScopedToken("key", "org", "repo", "wf", "job")
-	if len(token) != 64 {
-		t.Errorf("token length = %d, want 64", len(token))
-	}
-	if _, err := hex.DecodeString(token); err != nil {
-		t.Errorf("token is not valid hex: %v", err)
-	}
-}
+func TestGenerateTokenAt_DifferentTimestamps(t *testing.T) {
+	ts1 := time.Unix(1700000000, 0)
+	ts2 := time.Unix(1700000001, 0)
+	token1 := GenerateTokenAt("key", "org", "repo", "wf", "job", ts1)
+	token2 := GenerateTokenAt("key", "org", "repo", "wf", "job", ts2)
+	if token1 == token2 {
+		t.Error("different timestamps should produce different tokens")
+	}
+}
-func TestValidateScopedToken_Correct(t *testing.T) {
-	token := GenerateScopedToken("key", "org", "repo", "wf", "job")
-	if !ValidateScopedToken("key", token, "org", "repo", "wf", "job") {
-		t.Error("ValidateScopedToken should accept correct token")
+func TestValidateToken_Correct(t *testing.T) {
+	ts := time.Now()
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	if !ValidateToken("key", token, "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should accept correct token")
 	}
 }
-func TestValidateScopedToken_WrongToken(t *testing.T) {
-	if ValidateScopedToken("key", "deadbeef", "org", "repo", "wf", "job") {
-		t.Error("ValidateScopedToken should reject wrong token")
+func TestValidateToken_WrongToken(t *testing.T) {
+	if ValidateToken("key", "12345:deadbeef", "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should reject wrong token")
 	}
 }
-func TestValidateScopedToken_WrongScope(t *testing.T) {
-	token := GenerateScopedToken("key", "org", "repo", "wf", "job")
-	if ValidateScopedToken("key", token, "org", "repo", "wf", "other-job") {
-		t.Error("ValidateScopedToken should reject token for different scope")
+func TestValidateToken_WrongScope(t *testing.T) {
+	ts := time.Now()
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	if ValidateToken("key", token, "org", "repo", "wf", "other-job", 5*time.Minute) {
+		t.Error("ValidateToken should reject token for different scope")
+	}
+}
+func TestValidateToken_Expired(t *testing.T) {
+	ts := time.Now().Add(-10 * time.Minute)
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	if ValidateToken("key", token, "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should reject expired token")
+	}
+}
+func TestValidateTokenAt_NotExpired(t *testing.T) {
+	tokenTime := time.Unix(1700000000, 0)
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", tokenTime)
+	// Validate at 4 minutes later (within 5 minute TTL)
+	now := tokenTime.Add(4 * time.Minute)
+	if !ValidateTokenAt("key", token, "org", "repo", "wf", "job", 5*time.Minute, now) {
+		t.Error("ValidateTokenAt should accept token within TTL")
+	}
+}
+func TestValidateTokenAt_JustExpired(t *testing.T) {
+	tokenTime := time.Unix(1700000000, 0)
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", tokenTime)
+	// Validate at 6 minutes later (beyond 5 minute TTL)
+	now := tokenTime.Add(6 * time.Minute)
+	if ValidateTokenAt("key", token, "org", "repo", "wf", "job", 5*time.Minute, now) {
+		t.Error("ValidateTokenAt should reject token beyond TTL")
+	}
+}
+func TestValidateToken_InvalidFormat(t *testing.T) {
+	if ValidateToken("key", "no-colon-here", "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should reject token without colon")
+	}
+	if ValidateToken("key", "not-a-number:abc123", "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should reject token with invalid timestamp")
+	}
+}
+func TestParseTokenTimestamp(t *testing.T) {
+	ts := time.Unix(1700000000, 0)
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	parsed, err := ParseTokenTimestamp(token)
+	if err != nil {
+		t.Fatalf("ParseTokenTimestamp failed: %v", err)
+	}
+	if !parsed.Equal(ts) {
+		t.Errorf("parsed timestamp = %v, want %v", parsed, ts)
+	}
+}
+func TestParseTokenTimestamp_Invalid(t *testing.T) {
+	_, err := ParseTokenTimestamp("no-colon")
+	if err == nil {
+		t.Error("ParseTokenTimestamp should fail on missing colon")
+	}
+	_, err = ParseTokenTimestamp("not-a-number:abc123")
+	if err == nil {
+		t.Error("ParseTokenTimestamp should fail on invalid timestamp")
+	}
+}
+func TestValidateToken_TamperedTimestamp(t *testing.T) {
+	// Generate a valid token
+	ts := time.Now()
+	token := GenerateTokenAt("key", "org", "repo", "wf", "job", ts)
+	parts := strings.SplitN(token, ":", 2)
+	if len(parts) != 2 {
+		t.Fatalf("unexpected token format: %q", token)
+	}
+	hmacPart := parts[1]
+	// Tamper with timestamp (e.g., attacker tries to extend token lifetime)
+	tamperedTimestamp := strconv.FormatInt(time.Now().Add(1*time.Hour).Unix(), 10)
+	tamperedToken := tamperedTimestamp + ":" + hmacPart
+	if ValidateToken("key", tamperedToken, "org", "repo", "wf", "job", 5*time.Minute) {
+		t.Error("ValidateToken should reject token with tampered timestamp")
+	}
+}
} }