Building This Site: Go, HTMX, and Zero JavaScript Frameworks
Why Go for a Personal Site?
Instead of building a boring, static personal site, I wanted something different: live telemetry, a hot-reload blog system, and a dead-simple deploy process. The post you are reading went live with no database writes, no restarts, no deployment, and no build step. I dropped the Markdown file into a directory and it was live seconds later. The next post covers how these pieces come together into one simple, seamless system.
Go gave me that. My day-to-day work is primarily C#, but Go's simplicity (a single binary, no runtime dependencies, ~10MB of memory at idle) made it the obvious choice here. The site runs cheaply on plain Linux built-ins. The source stays minimal, targeted, and easy to modify, without the ceremony I find in C#/.NET code.
The stack: Go 1.22, Chi router, HTMX, Tailwind CSS v4, SQLite. No React, no Next.js, no build pipeline beyond Tailwind's CLI. The entire frontend is server-rendered HTML with HTMX handling the interactive bits.
The Core Philosophy
I wanted to build a simple, composable, efficient site: one that follows best practices while staying a robust base I can extend. It started as a place to host my resume, a quick view into who I am professionally, and grew into a backend project featuring live server metrics and a self-designed blog.
The blog discovers its own content on disk and caches it, so I can write freeform and push modifications live as quickly as possible.
This is a place to learn and to teach, recording my journey as an engineer.
The codebase is easy to extend and requires next to no maintenance.
I want this site to showcase who I am: straightforward, functional, and efficient.
The Router and Handler Pattern
Chi was the obvious choice for routing. It's minimal, composable, and doesn't try to be a framework. The handler architecture is straightforward — a Handler struct holds templates, data stores, and helper services. Everything is injected at startup, no globals.
type Handler struct {
	templates map[string]*template.Template // full pages, cloned from the base layout
	partials  map[string]*template.Template // HTMX fragments for out-of-band swaps
	// ... data stores and helper services, injected at startup
}
Templates are parsed once at startup and reused. Each page template is cloned from a base layout. Partials are separate — they're used for HTMX out-of-band swaps, not full page renders. Blog content is also loaded once at startup; with only a handful of posts and a small footprint, that remains a sub-millisecond operation.
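A minimal sketch of that startup parse, assuming a base layout at templates/base.html and page templates under templates/pages/ (the paths are illustrative):

import (
	"html/template"
	"path/filepath"
)

func loadTemplates() (map[string]*template.Template, error) {
	// Parse the shared layout once; every page template is cloned from it.
	base := template.Must(template.ParseFiles("templates/base.html"))

	pages, err := filepath.Glob("templates/pages/*.html")
	if err != nil {
		return nil, err
	}

	templates := make(map[string]*template.Template, len(pages))
	for _, page := range pages {
		// Clone so each page gets its own template tree over the base layout.
		t, err := template.Must(base.Clone()).ParseFiles(page)
		if err != nil {
			return nil, err
		}
		templates[filepath.Base(page)] = t
	}
	return templates, nil
}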
HTMX Instead of a Frontend Framework
The contact form is the most interactive part of the site. It validates on submit, shows field-level errors, displays toast notifications, and clears state — all without a page reload. In React, this is a form library, state management, and a few hundred lines of JSX. With HTMX, it's HTML attributes.
The form posts to /contact/submit. On validation failure, the server returns HTML fragments with hx-swap-oob="true" — HTMX swaps them into the right spots on the page. Field errors appear next to their fields. A toast slides in from the right. The form stays put.
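A hedged sketch of that failure path (validateContactForm and the partial names are hypothetical stand-ins for whatever the real code uses):

import "net/http"

func (h *Handler) ContactSubmit(w http.ResponseWriter, r *http.Request) {
	errs := validateContactForm(r) // hypothetical validator returning field errors
	if len(errs) > 0 {
		// Each fragment carries hx-swap-oob="true", so HTMX swaps it into
		// its matching element on the page; the form itself stays put.
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		h.partials["field-errors"].Execute(w, errs)
		h.partials["toast"].Execute(w, "Please fix the highlighted fields.")
		return
	}
	// ... persist the message, then render the success fragment
}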
No client-side state. No hydration. No bundle size to worry about. It just works.
The contact form also silently lies to spammers. It accepts their submission and even shows the success message without ever touching the database or queuing any follow-up work. That's just one layer; MX record validation, field sanitization, and rate limiting form the rest. The whole scheme gets its own post.
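Continuing the sketch above, the first check inside ContactSubmit might look like this (the honeypot field name is made up):

// Honeypot: a hidden field real users never see or fill.
if r.FormValue("website") != "" {
	// Lie to the bot: render the normal success fragment, but skip
	// the database and any follow-up work entirely.
	h.partials["contact-success"].Execute(w, nil)
	return
}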
The Metrics Dashboard
This is the part I built for fun. The /metrics page shows live system telemetry — uptime, request counts, goroutines, heap allocation, GC cycles — streamed in real time via Server-Sent Events.
The interesting engineering problem: runtime.ReadMemStats() is expensive and briefly stops the world. If 50 people are watching the dashboard and each stream refreshes every second, that's 50 blocking calls per second. I discovered this while showing off the dashboard and doing some stress testing. The solution for me was a cached snapshot.
type Collector struct {
	cachedMetrics atomic.Pointer[MetricData] // latest snapshot; readers never block
	// ... request counters, session tracking
}
A background goroutine calls ReadMemStats() once per second and stores the result in the atomic.Pointer. This is another reason Go fit the project: standing up a goroutine for a long-lived background task is trivial, a pattern I reuse several times throughout the codebase.
The SSE handler reads from the pointer — no locks, no contention, no matter how many viewers. The atomic.Pointer gives you lock-free reads, which is exactly what you want when N goroutines are reading the same data and one goroutine is writing it.
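A minimal sketch of the pattern, assuming a trimmed-down MetricData (the real struct carries more fields):

import (
	"runtime"
	"sync/atomic"
	"time"
)

type MetricData struct {
	HeapAlloc  uint64
	NumGC      uint32
	Goroutines int
}

type Collector struct {
	cachedMetrics atomic.Pointer[MetricData]
}

// Start launches the single writer: one ReadMemStats call per second,
// no matter how many dashboard viewers are connected.
func (c *Collector) Start() {
	go func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for range ticker.C {
			var m runtime.MemStats
			runtime.ReadMemStats(&m)
			c.cachedMetrics.Store(&MetricData{
				HeapAlloc:  m.HeapAlloc,
				NumGC:      m.NumGC,
				Goroutines: runtime.NumGoroutine(),
			})
		}
	}()
}

// Snapshot is what the SSE handler calls: a lock-free pointer load.
// It returns nil until the first tick fires.
func (c *Collector) Snapshot() *MetricData {
	return c.cachedMetrics.Load()
}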
Request counting uses atomic.AddUint64() — no mutexes for the hot path. Session tracking uses secure random IDs in HttpOnly cookies with a 5-minute TTL, pruned by a background goroutine every 30 seconds.
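The pruning loop is the same long-lived-goroutine pattern. A sketch, assuming a mutex-guarded map of session IDs to expiry times (the real structure may differ):

import (
	"sync"
	"time"
)

type sessionStore struct {
	mu   sync.Mutex
	seen map[string]time.Time // session ID -> expiry
}

// prune runs in a background goroutine, sweeping expired sessions
// every 30 seconds.
func (s *sessionStore) prune() {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		now := time.Now()
		s.mu.Lock()
		for id, expiry := range s.seen {
			if now.After(expiry) {
				delete(s.seen, id)
			}
		}
		s.mu.Unlock()
	}
}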
The concurrency story sounds clean, right? It was — except for one field I didn't think to protect. The request counter tracks counts in a circular buffer of time buckets, and a bare int index selects the current bucket. A background goroutine advances that index every minute; HTTP handlers read it on every request to know which bucket to increment. Three goroutines, one plain int, zero synchronization. A textbook data race hiding behind a wall of correct atomics. It "works" — the worst case is incrementing the wrong bucket by one — but go test -race would catch it immediately. The full breakdown, the fix, and why this pattern is easy to miss will get its own post.
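For illustration, the racy shape reduces to something like this (simplified, with hypothetical field names):

import "sync/atomic"

type requestCounter struct {
	buckets [60]atomic.Uint64 // per-minute counts: all properly atomic
	current int               // the race: a plain int written by one goroutine
	                          // and read by every HTTP handler
}

// advance runs in a background goroutine once per minute.
func (rc *requestCounter) advance() {
	rc.current = (rc.current + 1) % len(rc.buckets) // unsynchronized write
}

// record is called from every request handler.
func (rc *requestCounter) record() {
	rc.buckets[rc.current].Add(1) // unsynchronized read of rc.current
}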
Middleware Stack
The middleware is layered intentionally:
- Chi Logger — request logging
- Recoverer — panic recovery so one bad request doesn't crash the server
- Security headers — CSP, X-Frame-Options, MIME sniffing prevention
- Metrics counting — increments request counters, manages session cookies
Cache headers are applied selectively — 30 minutes on /static/*, nothing on dynamic routes. The stack is simple and flexible enough that I gain value quickly and can extend it easily as I build.
I built the site without metrics to start. Then, while working on the metrics dashboard, I wanted more data to display, and I was able to write a new middleware to capture it and see it live within minutes.
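A sketch of how the wiring might look with Chi (securityHeaders, countRequests, and h.Home are illustrative, and the exact CSP is made up):

import (
	"net/http"
	"sync/atomic"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

func newRouter(h *Handler, total *atomic.Uint64) http.Handler {
	r := chi.NewRouter()
	r.Use(middleware.Logger)    // request logging
	r.Use(middleware.Recoverer) // a panicking handler can't take down the server
	r.Use(securityHeaders)      // CSP, X-Frame-Options, nosniff
	r.Use(countRequests(total)) // metrics: one atomic add per request

	r.Get("/", h.Home) // hypothetical page handler
	return r
}

func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy", "default-src 'self'")
		w.Header().Set("X-Frame-Options", "DENY")
		w.Header().Set("X-Content-Type-Options", "nosniff")
		next.ServeHTTP(w, r)
	})
}

func countRequests(total *atomic.Uint64) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			total.Add(1)
			next.ServeHTTP(w, r)
		})
	}
}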
SQLite: The Right Database
SQLite is perfect for this site's needs. I'm not dealing with huge scale or thousands of objects and relationships. Instead I get a single file, with no external services or dependencies, holding the minimal data I do need. There's no Docker, networking, security, or SRE work that a more robust database would demand. A simple cron job copies the file to a .bak backup on a regular schedule.
Deployment
The deploy process is intentionally boring:
make deploy
That builds the Tailwind CSS, compiles the Go binary, and restarts the systemd service. The server handles SIGTERM with a graceful shutdown — in-flight requests get 10 seconds to complete before the process exits. No dropped connections on deploy.
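A sketch of that shutdown path with the standard library (the 10-second window is just a context timeout):

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func run(srv *http.Server) error {
	// Serve in the background and block until SIGTERM/SIGINT arrives.
	errCh := make(chan error, 1)
	go func() { errCh <- srv.ListenAndServe() }()

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)

	select {
	case err := <-errCh:
		return err // server failed on its own
	case <-stop:
		// Give in-flight requests 10 seconds to finish before exiting.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		return srv.Shutdown(ctx)
	}
}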
For local development, Air watches .go, .html, and .css files and rebuilds on change. Hot reload without the complexity of a dev server.
Blog System
The blog system lets me publish posts without restarting the service or maintaining awkward database records, lets me write in easy-to-read Markdown, and provides exactly the features I need for this project.
The metrics system and the blog cache solve the same concurrency problem — one writer, many readers — with completely different primitives. The next post explains why, and when atomic.Pointer stops being the right answer.
Because posts are plain Markdown, I can move them into Obsidian or Claude Code, or paste them into an email, with no modifications and no worries about encoding or formatting. I can also queue posts for the near future, giving me time to proofread, or publish immediately.
Again, goroutines come into play: a background task maintains an active index of my posts on a reasonable timescale, picking up modifications, new articles, and scheduled publishes.
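A sketch of that indexer, assuming an RWMutex-guarded index (per the note above, this path doesn't use atomic.Pointer) and a hypothetical parsePost helper:

import (
	"path/filepath"
	"sync"
	"time"
)

type Post struct {
	Slug      string
	PublishAt time.Time
}

type Blog struct {
	mu    sync.RWMutex
	index map[string]*Post // slug -> parsed post
}

// reindex rescans the content directory on a timer, picking up new,
// modified, and newly published posts.
func (b *Blog) reindex(dir string, every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for range ticker.C {
		paths, err := filepath.Glob(filepath.Join(dir, "*.md"))
		if err != nil {
			continue
		}
		fresh := make(map[string]*Post, len(paths))
		for _, p := range paths {
			post, err := parsePost(p) // hypothetical front-matter + Markdown parser
			if err != nil || post.PublishAt.After(time.Now()) {
				continue // skip broken files and future-dated posts
			}
			fresh[post.Slug] = post
		}
		b.mu.Lock()
		b.index = fresh
		b.mu.Unlock()
	}
}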
What's Next
The metrics system will extend to track per-post views with hashed IPs for unique counts — analytics without storing PII.
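One way that could look; a sketch only, since the salt handling and exact scheme are undecided:

import (
	"crypto/sha256"
	"encoding/hex"
)

// visitorKey hashes an IP with a server-side salt so unique views can be
// counted without ever storing the raw address.
func visitorKey(ip, salt string) string {
	sum := sha256.Sum256([]byte(salt + ip))
	return hex.EncodeToString(sum[:])
}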
The site is a living project. Every feature is an excuse to solve an interesting problem.