The Boring Deploy: make deploy, systemd, and Nothing Else

devops deployment systemd go blue-green

The entire deploy process for this site is make deploy. It builds the Tailwind CSS, cross-compiles the Go binary, copies it to the server, and restarts the systemd service. No Docker registry, no CI/CD pipeline, no Kubernetes cluster.

This is not how I would build a production-ready system, but a lot of the ceremony falls away when it's a pet project and you're the solo developer. I still have the data backed up via git on GitHub, and the server build scripts are replicated in another project. If this server explodes, I can have a new one up and running in about as long as it takes to install Ubuntu.

Since the site is primarily markdown files and what little data I do have lives in a SQLite database, periodic backup commits of the data to GitHub are more than enough. And that is trivial to schedule with systemd.
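A sketch of what that scheduling can look like: a oneshot service that commits and pushes the data directory, paired with a timer. The unit names and paths here are hypothetical, not the ones I actually use.

```ini
# /etc/systemd/system/site-backup.service (hypothetical name/path)
[Unit]
Description=Commit and push site data to GitHub

[Service]
Type=oneshot
WorkingDirectory=/opt/personal-site/data
ExecStart=/usr/bin/git add -A
# --allow-empty keeps the unit from failing on days with no changes
ExecStart=/usr/bin/git commit -m "automated backup" --allow-empty
ExecStart=/usr/bin/git push

# /etc/systemd/system/site-backup.timer (hypothetical name/path)
[Unit]
Description=Run site backup nightly

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it once with `sudo systemctl enable --now site-backup.timer` and systemd handles the rest.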

As a pet project, I am comfortable losing a little bit of information. If this ever expands and becomes something I do need to worry about, database replication can be added.

I don't need the portability or the headaches of Docker. The site is a single binary; if I ever need to spread load, spinning up additional instances of the Go process is trivial, and Go handles concurrency well on its own. Setting up Docker containers and maintaining images would only add to the build time. Instead, I can make a change and Air refreshes my dev instance almost instantly. And deploying?

I just build the executable and do a blue-green cutover of the content and executable.

.PHONY: build build-css build-go deploy

PROD_DIR=/opt/personal-site

build: build-css build-go

build-css:
	npx tailwindcss -i web/static/css/input.css -o web/static/css/style.css

build-go:
	go build -o bin/personal-site ./cmd/web

deploy: build
	bash scripts/deploy.sh

The deploy is complicated enough that I pulled it out into a separate bash script. Splitting the build targets this way also lets me rebuild the CSS on its own, which helps while testing.

#!/usr/bin/env bash
set -euo pipefail

PROD_DIR=/opt/personal-site
ACTIVE_SLOT_FILE="$PROD_DIR/active-slot"
CADDY_FILE=/etc/caddy/Caddyfile

BLUE_PORT=8082
GREEN_PORT=8083

# Determine current active slot
if [[ ! -f "$ACTIVE_SLOT_FILE" ]]; then
    echo "ERROR: $ACTIVE_SLOT_FILE not found. Run scripts/setup-blue-green.sh first."
    exit 1
fi

ACTIVE=$(cat "$ACTIVE_SLOT_FILE")
if [[ "$ACTIVE" == "blue" ]]; then
    DEPLOY_SLOT="green"
    DEPLOY_PORT=$GREEN_PORT
    OLD_SERVICE="personal-site-blue"
else
    DEPLOY_SLOT="blue"
    DEPLOY_PORT=$BLUE_PORT
    OLD_SERVICE="personal-site-green"
fi
NEW_SERVICE="personal-site-${DEPLOY_SLOT}"

echo "Active slot: $ACTIVE"
echo "Deploying to: $DEPLOY_SLOT (port $DEPLOY_PORT)"

# Copy binary, templates, and static files to the inactive slot
DEPLOY_DIR="$PROD_DIR/$DEPLOY_SLOT"
echo "Copying files to $DEPLOY_DIR..."
sudo mkdir -p "$DEPLOY_DIR/bin" "$DEPLOY_DIR/web"
sudo cp bin/personal-site "$DEPLOY_DIR/bin/"
sudo rsync -a --delete web/templates/ "$DEPLOY_DIR/web/templates/"
sudo rsync -a --delete web/static/ "$DEPLOY_DIR/web/static/"

# Start the inactive service
echo "Starting $NEW_SERVICE..."
sudo systemctl start "$NEW_SERVICE"

# Health check -- wait up to 10 seconds for the new service to respond
echo "Running health check on port $DEPLOY_PORT..."
HEALTHY=false
for i in $(seq 1 10); do
    if curl -sf "http://localhost:${DEPLOY_PORT}/" > /dev/null 2>&1; then
        HEALTHY=true
        break
    fi
    sleep 1
done

if [[ "$HEALTHY" != "true" ]]; then
    echo "ERROR: Health check failed on port $DEPLOY_PORT. Rolling back."
    sudo systemctl stop "$NEW_SERVICE" || true
    exit 1
fi
echo "Health check passed."

# Update Caddyfile -- only replace the blue/green port, not other proxy lines
echo "Updating Caddyfile to point to port $DEPLOY_PORT..."
sudo sed -i \
    "s|reverse_proxy localhost:${BLUE_PORT}|reverse_proxy localhost:${DEPLOY_PORT}|" \
    "$CADDY_FILE"
sudo sed -i \
    "s|reverse_proxy localhost:${GREEN_PORT}|reverse_proxy localhost:${DEPLOY_PORT}|" \
    "$CADDY_FILE"

# Reload Caddy (zero-downtime)
echo "Reloading Caddy..."
sudo caddy reload --config "$CADDY_FILE"

# Stop old service
echo "Stopping $OLD_SERVICE..."
sudo systemctl stop "$OLD_SERVICE"

# Update active slot
echo "$DEPLOY_SLOT" | sudo tee "$ACTIVE_SLOT_FILE" > /dev/null

echo "Deploy complete. Active slot: $DEPLOY_SLOT (port $DEPLOY_PORT)"

The magic of rsync brings my static files over into the correct directory. The two slots run on separate ports, so both versions can run in parallel while the health check waits for the new service to come up fully before cutting over. If the health check fails, the script exits early and never swaps colors, leaving the working version running. Otherwise it updates the Caddyfile to point at the new service and then stops the old one. You can see which color is currently running on the /metrics page.
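For context, the sed commands in the script target a line like this in the Caddyfile (the domain here is a placeholder):

```caddyfile
example.com {
    # The port is whichever slot is currently live; deploy.sh rewrites it.
    reverse_proxy localhost:8082
}
```

Because the substitution matches the literal `reverse_proxy localhost:8082` (or `:8083`), other proxy lines in the file are left alone.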

A single make command to build and deploy, all done in a matter of seconds. No ceremony, no dependencies to maintain. Sure, there are risks: I may deploy bad code, I may forget to run my tests, I may need to roll back.
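If I ever do need to roll back, the same slot machinery works in reverse: flip back to the previous slot and repoint Caddy. Here is a dry-run sketch (the naming is mine, not from my actual scripts); it prints the steps rather than executing them, since the real commands need sudo on the server.

```shell
#!/usr/bin/env bash
# Hypothetical rollback sketch: flip Caddy back to the previous slot.
# Dry run only -- prints what would happen instead of executing it.
set -euo pipefail

PROD_DIR="${PROD_DIR:-/opt/personal-site}"
ACTIVE_SLOT_FILE="$PROD_DIR/active-slot"
BLUE_PORT=8082
GREEN_PORT=8083

# Default to "blue" when the slot file is missing (e.g. running locally).
ACTIVE=$(cat "$ACTIVE_SLOT_FILE" 2>/dev/null || echo blue)
if [[ "$ACTIVE" == "blue" ]]; then
    TARGET=green TARGET_PORT=$GREEN_PORT
else
    TARGET=blue TARGET_PORT=$BLUE_PORT
fi

echo "rollback: $ACTIVE -> $TARGET (port $TARGET_PORT)"
echo "would run: systemctl start personal-site-$TARGET"
echo "would run: sed the Caddyfile to reverse_proxy localhost:$TARGET_PORT"
echo "would run: caddy reload --config /etc/caddy/Caddyfile"
echo "would run: systemctl stop personal-site-$ACTIVE"
echo "would write: $TARGET into $ACTIVE_SLOT_FILE"
```

The old slot's binary and assets are still sitting on disk untouched, which is what makes the rollback cheap: nothing needs to be rebuilt or re-copied.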

However, blue-green should keep me alive most of the time. The health check before going live should catch a bad deploy before it gets anywhere near users, and if something does slip through, my monitoring should tell me very quickly after the fact.

If any metrics grow out of control, the website is already wired to send me an email. Thanks to that, it's trivial to alert myself on outages.

For the size of this project, the boring deploy pattern works great for me. The industry can be full of gold plating and ceremony, using Jenkins or CircleCI, creating deployable container images, and orchestrating elaborate CI/CD pipelines. At scale, all of these things can become important. Automated work is the only way an engineering team can scale out and produce reliable results.

The patterns exist for reasons, but we need to remember to always question what we are doing. Maybe we don't need to add the complexity to every project. Maybe some things are fine to be done the boring way.