This Site's Architecture: Spec-Driven Development, Static Site & Privacy-First Analytics

Draft — This post describes the technical architecture of gllabs.eu as it currently stands. Configuration snippets can be added once the final setup is validated.

Every blog eventually writes the obligatory “here’s how this site is built” post. Here’s mine — covering the three pillars that hold gllabs.eu together: a spec-driven development workflow powered by GitHub Copilot, a static site deployed on a Hetzner VPS behind Apache, and privacy-first analytics via a self-hosted Umami instance.


The Development Philosophy: Spec-Driven Development (SDD)

Rather than reaching for a framework or CMS, gllabs.eu is generated by a small custom Node.js build pipeline (src/build.js) that converts Markdown files to static HTML. But the real distinguishing factor is how the site evolves — through a Spec-Driven Development (SDD) workflow.

What is SDD?

SDD flips the usual “code first, document later” habit. Before any implementation work begins, a feature is described in a structured specification. That spec is then used to drive planning, task generation, and finally implementation — keeping intent and code in sync throughout the lifetime of the feature.
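For concreteness, here is a hypothetical fragment of such a spec. The feature name, requirement IDs, and wording are invented for illustration and not copied from the actual specs/ directory:

```markdown
<!-- Hypothetical spec.md fragment, for illustration only -->
# Feature: Syntax-highlighted code blocks

## Requirements
- FR-1: Fenced code blocks in posts render with syntax highlighting.
- FR-2: Inline code keeps the surrounding line height.

## Acceptance Criteria
- Building a post containing a fenced block produces a
  `<pre><code class="language-...">` element in the output HTML.
```

Everything downstream (plan, tasks, implementation) is generated from and checked against statements like these.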

GitHub Copilot + speckit

The tooling that makes this practical is speckit — a set of VS Code agent modes that orchestrate the spec lifecycle. A typical feature flow looks like this:

sequenceDiagram
    actor Dev as Developer
    create participant Specify as "speckit.specify"
    Dev->>Specify: Natural language feature request
    Specify-->>Dev: spec.md created
    create participant Clarify as "speckit.clarify"
    Dev->>Clarify: Refine requirements
    Clarify-->>Dev: Requirements clarified
    create participant Planner as "speckit.plan"
    Dev->>Planner: Plan action
    Planner-->>Dev: plan.md created
    create participant Tasks as "speckit.tasks"
    Dev->>Tasks: Generate ordered task list
    Tasks-->>Dev: tasks.md created
    create participant Analyze as "speckit.analyze"
    Dev->>Analyze: Validate plan (optional)
    Analyze-->>Dev: Analysis complete
    create participant Implement as "speckit.implement"
    Dev->>Implement: Execute tasks
    Implement-->>Dev: Feature implemented

Each feature gets its own directory under specs/ (e.g., specs/002-markdown-styling/) containing:

File                          Purpose
spec.md                       Requirements and acceptance criteria
plan.md                       Design decisions, component breakdown
tasks.md                      Ordered, dependency-aware task list
checklists/requirements.md    Verification checklist

The result is a lightweight, auditable trail of why the site looks and behaves the way it does — without the overhead of a full project management tool.


The Static Site: Node.js Build → Apache on Hetzner

Build Pipeline

The site is intentionally dependency-light. The build process:

  1. Reads Markdown files from content/posts/ and content/pages/
  2. Parses front matter with gray-matter
  3. Renders Markdown to HTML with markdown-it
  4. Injects rendered HTML + dynamically generated sidebar into src/template.html
  5. Writes static .html files to public/

Tag pages, category pages, and monthly archive pages are all generated automatically from post front matter — no database involved.
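The core of steps 2–5 can be sketched in a few lines of Node. This is a simplified, dependency-free approximation: the real src/build.js uses gray-matter and markdown-it, so here the front matter is split by hand and the post body is assumed to be HTML already. All function names are illustrative:

```javascript
// Simplified sketch of one build iteration. Helper names are
// hypothetical; the real src/build.js uses gray-matter and markdown-it.

// Split "---\n...\n---\n" front matter from the body by hand
// (a stand-in for gray-matter).
function parseFrontMatter(source) {
  const match = source.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { data: {}, content: source };
  const data = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > -1) data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, content: match[2] };
}

// Inject the title and rendered body into the page template.
// The Markdown-to-HTML step (markdown-it) is elided here, so
// `bodyHtml` is assumed to already be HTML.
function renderPage(template, data, bodyHtml) {
  return template
    .replace("{{title}}", data.title || "Untitled")
    .replace("{{content}}", bodyHtml);
}

const post =
  "---\ntitle: Hello\ntags: node, apache\n---\n<p>First post.</p>";
const template =
  "<html><head><title>{{title}}</title></head><body>{{content}}</body></html>";

const { data, content } = parseFrontMatter(post);
const html = renderPage(template, data, content);
// In the real pipeline, `html` would then be written under public/
// with fs.writeFileSync (step 5 above).
```

The tag, category, and archive pages fall out of the same loop: the parsed front-matter objects are grouped by tag, category, and month, and each group is rendered through the same template.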

Hosting: Hetzner VPS + Apache

The public/ directory is served directly from a Hetzner VPS (an ARM-based CAX-series instance) via Apache HTTP Server. A dedicated virtual host handles the domain, with TLS provided by Certbot (Let’s Encrypt):

# Apache VirtualHost — placeholder, full config to be added
<VirtualHost *:443>
    ServerName gllabs.eu
    DocumentRoot /var/www/gllabs.eu
    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/gllabs.eu/fullchain.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/gllabs.eu/privkey.pem
</VirtualHost>

Certificate renewal is handled automatically by the Certbot systemd timer. Deployment is a simple rsync from the local public/ output to the server document root.


Privacy-First Analytics: Self-Hosted Umami

Google Analytics is not on this site. Instead, visitor metrics are collected by Umami — an open-source, privacy-respecting analytics platform that collects no personally identifiable information and requires no cookie consent banner.

Why Umami

  • Collects only anonymous aggregate data (page views, referrers, device type, country)
  • No cookies, no cross-site tracking, GDPR-compliant by design
  • Self-hosted — data stays on my own infrastructure
  • Lightweight tracking script (~2 KB)
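Hooking the site up amounts to a single tag in the page template. A sketch, with the website ID left as a placeholder (Umami generates the real UUID when a site is registered in its dashboard):

```html
<!-- Umami tracking snippet (sketch); the data-website-id value
     below is a placeholder, not the real site ID -->
<script defer src="https://analytics.gllabs.eu/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"></script>
```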

Container Stack: Podman + podman-compose

Umami and its PostgreSQL database run as containers on the same Hetzner VPS, orchestrated with podman-compose. Using Podman (rootless) rather than Docker improves the security posture — containers run without elevated host privileges.

A simplified view of the compose setup:

# podman-compose — placeholder, full compose file to be added
services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://umami:umami@db:5432/umami
      APP_SECRET: <redacted>
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: <redacted>
    volumes:
      - umami-db:/var/lib/postgresql/data

volumes:
  umami-db:

Apache acts as a reverse proxy in front of Umami, terminating TLS and forwarding requests from analytics.gllabs.eu (or a subpath) to the container port. This keeps Umami off a raw port and behind the same certificate infrastructure as the main site.
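A minimal sketch of what that proxy vhost could look like, assuming the analytics.gllabs.eu subdomain and the default container port; this is not the validated production config:

```apache
# Hypothetical reverse-proxy vhost for Umami.
# Requires mod_ssl, mod_proxy, and mod_proxy_http.
<VirtualHost *:443>
    ServerName analytics.gllabs.eu
    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/analytics.gllabs.eu/fullchain.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/analytics.gllabs.eu/privkey.pem

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```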

Data Flow

graph LR
    Browser["Visitor Browser"] -- "HTTPS page request" --> Apache["Apache\n:443"]
    Apache -- "serve static files" --> Public["public/ directory"]
    Browser -- "tracking script (2KB)" --> UmamiJS["umami.js\n(self-hosted)"]
    UmamiJS -- "anonymous event" --> Apache
    Apache -- "reverse proxy" --> Umami["Umami\n:3000 (Podman)"]
    Umami --> PG["PostgreSQL\n(Podman)"]

Summary

Concern             Solution
Feature workflow    SDD with GitHub Copilot + speckit
Site generation     Custom Node.js pipeline (Markdown → HTML)
Hosting             Hetzner VPS, Apache HTTP Server
TLS                 Certbot / Let’s Encrypt
Analytics           Self-hosted Umami (Podman + PostgreSQL)
Container runtime   Podman (rootless) + podman-compose

The stack stays intentionally minimal. No CDN, no managed database, no SaaS dependencies beyond the VPS itself. If you’d like to see the Apache virtual host config, podman-compose file, or full build script in detail, feel free to reach out — or watch this space for a follow-up post with the actual config dumps.


Infrastructure at a Glance

The diagram below uses Iconify icon packs (logos, devicon, and mdi) to render service logos directly inside the Mermaid architecture-beta diagram:

architecture-beta
    group vps(logos:cloudinary-icon)[Hetzner VPS]
    group podman(devicon:podman-wordmark)[Podman] in vps
    group bare(mdi:gear-play)[service] in vps

    service apache(logos:apache)[Apache HTTP] in vps
    service node(devicon:podman)[Contact API] in podman
    service umami(devicon:podman)[Umami Analytics] in podman
    service db(devicon:podman)[PostgreSQL] in podman
    service web(mdi:web)[gllabs website] in bare

    apache:R --> L:node{group}
    apache:L --> R:web{group}
    apache:B --> L:umami{group}
    umami:R -- L:db