I just wanted to learn Kubernetes


It started, as these things always do, with a simple thought:

“I should learn Kubernetes.”

A reasonable idea. Maybe spin up a few VMs, deploy a pod or two, call it a day. That was the plan. The plan did not survive.

The escalation

What I actually ended up with: four physical machines in a 10-inch mini rack, a dedicated control plane node, three worker nodes, NFS storage on an Unraid server, a full monitoring stack, and a GitLab CI/CD pipeline that deploys everything automatically.

You know, just to learn the basics.

The hardware

| Role          | Hardware                         | RAM  |
| ------------- | -------------------------------- | ---- |
| Control Plane | Fujitsu Futro S920 (Thin Client) | 4 GB |
| Worker 1      | Lenovo M710q                     | 8 GB |
| Worker 2      | Lenovo M710q                     | 8 GB |
| Worker 3      | Lenovo M710q                     | 8 GB |
The Fujitsu is a thin client from roughly the dinosaur era of IT. 4 GB RAM. Runs the entire k3s control plane without breaking a sweat, because k3s is absurdly lightweight — about 512 MB for the server process. The Lenovo M710q mini PCs are compact, silent, and sip power. Perfect for a homelab that shares the room with you.

Could I have used VMs? Yes. Would I have learned about flashing custom ISOs, debugging GRUB boot configs, and swearing at USB sticks? No. And where’s the fun in that?

Automating the boring parts

I’m lazy in the good way. So instead of installing Ubuntu on four machines by hand, I built a custom autoinstall ISO. One USB stick, boot, walk away, done. The machine sets up the user, SSH keys, locale — everything.

Of course, building the ISO was its own adventure. GRUB interprets semicolons as command separators, so the autoinstall boot parameter needed creative escaping. The extracted ISO files are read-only by default. xorriso doesn’t support the flags you think it does. Classic Linux experience.
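The remaster dance looks roughly like this. It's a sketch with assumed paths and labels, not the exact files from my ISO, but it shows both gotchas: the read-only extracted files and the semicolon in the NoCloud seed parameter that GRUB would otherwise treat as a command separator.

```shell
# Files extracted from an ISO come out read-only; make them writable first:
mkdir -p extracted-iso/boot/grub
chmod -R u+w extracted-iso

# GRUB parses ';' as a command separator, so the autoinstall seed parameter
# has to be quoted (or escaped as \;) on the kernel line:
cat > extracted-iso/boot/grub/grub.cfg <<'EOF'
menuentry "Autoinstall Ubuntu Server" {
    linux /casper/vmlinuz autoinstall "ds=nocloud;s=/cdrom/server/" ---
    initrd /casper/initrd
}
EOF

grep 'ds=nocloud' extracted-iso/boot/grub/grub.cfg

# Repacking would then be something like this (flags differ between
# xorriso versions, hence the swearing):
#   xorriso -as mkisofs -r -V 'UBUNTU_AUTO' -o ubuntu-auto.iso extracted-iso/
```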

Ansible takes over

Once the machines had an OS, Ansible did the rest:

  1. Discover — query the Unifi controller API to assign fixed DHCP leases and generate the Ansible inventory
  2. Prepare — system updates, packages, hostnames
  3. Install k3s — server on the control plane, agents on the workers
  4. Storage — NFS provisioner pointing to the Unraid server
  5. GitLab Runner — deployed in the cluster so CI/CD jobs run as Kubernetes pods
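The output of the discovery step is essentially a static inventory that the later playbooks consume. Hostnames and addresses below are invented for illustration, but the shape is the point:

```shell
# Sketch of an inventory the discovery playbook might generate
# (hostnames and IPs are made up):
cat > inventory.ini <<'EOF'
[control_plane]
k3s-master ansible_host=192.168.1.10

[workers]
k3s-worker1 ansible_host=192.168.1.11
k3s-worker2 ansible_host=192.168.1.12
k3s-worker3 ansible_host=192.168.1.13

[k3s_cluster:children]
control_plane
workers
EOF
```

One control plane group, one worker group, and a parent group so playbooks can target the whole cluster at once.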

Six playbooks, zero manual SSH sessions (after the initial setup). Secrets handled by Ansible Vault, because putting passwords in Git is a career-limiting move.

The monitoring rabbit hole

A cluster without monitoring is just a collection of computers you hope are working. So naturally, I went a bit overboard.

Prometheus + Grafana

The kube-prometheus-stack Helm chart bundles Prometheus, Grafana, Alertmanager, and Node Exporter into a single package. Grafana is exposed on NodePort 30080 with dashboards for node metrics and cluster health.
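Pinning Grafana to that NodePort is a few lines of chart values. A minimal sketch — the release and namespace names are my assumptions, the chart and repo are the real upstream ones:

```shell
# Expose Grafana on NodePort 30080 via chart values:
cat > monitoring-values.yaml <<'EOF'
grafana:
  service:
    type: NodePort
    nodePort: 30080
EOF

# Then (run against a live cluster):
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm upgrade --install monitoring prometheus-community/kube-prometheus-stack \
#   --namespace monitoring --create-namespace -f monitoring-values.yaml
```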

But it doesn’t stop at the cluster. I also scrape:

  • The Unraid server (Node Exporter in Docker)
  • My Mac mini (Node Exporter via Homebrew)
  • Home Assistant (Prometheus integration with a long-lived access token)

Because why monitor four machines when you can monitor everything?
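The off-cluster targets hang off the chart's additionalScrapeConfigs. A sketch — the addresses are invented, and the Home Assistant token is injected at deploy time rather than committed:

```shell
cat > extra-scrapes.yaml <<'EOF'
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: unraid
        static_configs:
          - targets: ['192.168.1.50:9100']   # Node Exporter in Docker
      - job_name: mac-mini
        static_configs:
          - targets: ['192.168.1.51:9100']   # Node Exporter via Homebrew
      - job_name: home-assistant
        metrics_path: /api/prometheus
        authorization:
          credentials: PLACEHOLDER_HA_TOKEN  # long-lived token, injected at deploy
        static_configs:
          - targets: ['192.168.1.52:8123']
EOF
```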

Loki + Promtail — logs, logs everywhere

Prometheus collects metrics. Loki collects logs. Promtail ships them to Loki from everywhere:

  • In the cluster: DaemonSet, automatically grabs all pod logs
  • On Unraid: Docker container, ships syslog and Docker container logs
  • On the Mac mini: Homebrew service, ships system logs
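Outside the cluster, each Promtail instance carries roughly the same config: where Loki lives, where to remember its position, and which files to tail. The Loki address and log paths below are assumptions:

```shell
# Sketch of an off-cluster Promtail config (Unraid flavor):
cat > promtail-config.yaml <<'EOF'
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://192.168.1.10:30100/loki/api/v1/push
scrape_configs:
  - job_name: syslog
    static_configs:
      - targets: [localhost]
        labels:
          job: syslog
          host: unraid
          __path__: /var/log/syslog
EOF
```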

The biggest headache was Grafana 12 and Loki compatibility. Grafana’s health check sends vector(1)+vector(1) to Loki — a function that didn’t exist before Loki 2.7. The Helm chart defaults to Loki 2.6.1. Updating to 2.9.10 broke the chart’s generated config because deprecated fields were removed. The fix: override the entire Loki config manually and pretend the defaults never existed.
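The override ends up looking something like this. It's a trimmed sketch, not a complete working config, and the value keys vary between Loki chart versions (this follows the single-binary chart layout):

```shell
cat > loki-values.yaml <<'EOF'
# Pin a Loki new enough to know vector(), so Grafana's health check passes:
image:
  tag: 2.9.10
# Supply the config wholesale instead of letting the chart generate one
# full of fields that 2.9.x no longer accepts (trimmed for illustration):
config:
  auth_enabled: false
  server:
    http_listen_port: 3100
  schema_config:
    configs:
      - from: "2024-01-01"
        store: tsdb
        object_store: filesystem
        schema: v12
        index:
          prefix: index_
          period: 24h
EOF
```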

Uptime Kuma

Because sometimes you just want a simple dashboard that shows green or red. Monitors Grafana, Loki, Home Assistant, the k3s API, all worker nodes, and the Mac mini. Deployed via a simple Kubernetes manifest — no Helm needed for something this straightforward.
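For something this small, the whole deployment fits in one file. A minimal sketch — names and ports are my assumptions, the image is the real upstream one:

```shell
cat > uptime-kuma.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
spec:
  type: NodePort
  selector:
    app: uptime-kuma
  ports:
    - port: 3001
      nodePort: 30001
EOF
# kubectl apply -f uptime-kuma.yaml
```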

The deployment flow

Every piece of infrastructure lives in Git. Push to main and GitLab CI/CD handles the rest. The runner executes inside the cluster as a Kubernetes pod, so it has native access to kubectl and helm. No SSH keys to servers, no manual steps.

git push → GitLab CI → Pod spins up → helm upgrade → done
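The CI side is one short job. A sketch with assumed names — the alpine/k8s image (kubectl plus helm in one container) is real, the tag and chart path are placeholders:

```shell
cat > .gitlab-ci.yml <<'EOF'
deploy-monitoring:
  stage: deploy
  image: alpine/k8s:1.28.13   # tag is an assumption; pick one matching your cluster
  script:
    - helm upgrade --install monitoring ./charts/monitoring -n monitoring
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
EOF
```

Because the runner pod already lives in the cluster, the job needs no kubeconfig shipping and no SSH.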

Secrets never touch the repository. They live as CI/CD variables in GitLab and get injected at deploy time via sed on placeholder values. Not the most elegant solution, but it works and the secrets stay where they belong.
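The sed trick is as blunt as it sounds. A sketch with invented names and an example value; in the real pipeline the variable comes from a masked GitLab CI/CD variable:

```shell
# Manifest in Git carries only a placeholder, never the secret:
cat > grafana-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: grafana-admin
stringData:
  admin-password: __GRAFANA_ADMIN_PASSWORD__
EOF

# In CI this is set by GitLab; hardcoded here only for the demo:
GRAFANA_ADMIN_PASSWORD='example-only'

# Swap the placeholder at deploy time, then kubectl apply:
sed -i "s/__GRAFANA_ADMIN_PASSWORD__/${GRAFANA_ADMIN_PASSWORD}/" grafana-secret.yaml
grep 'admin-password' grafana-secret.yaml
# → admin-password: example-only
```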

What I actually learned

I wanted to learn Kubernetes. I did. But I also learned:

  • How to build custom Ubuntu ISOs (and how GRUB tries to ruin your day)
  • That the Unifi controller API has completely different endpoints depending on whether it’s self-hosted or cloud
  • That Ansible group_vars don’t load when you think they do
  • That Helm chart defaults are suggestions, not guarantees
  • That Homebrew services have their own opinions about config file locations
  • That “just one more thing to monitor” is a slippery slope with no bottom

What’s next

  • Ingress + Cert-Manager — proper URLs instead of NodePorts, because typing :30080 gets old
  • n8n AI workflow — automated log analysis with Loki as data source
  • Migrate more Unraid Docker containers to k3s — the long-term goal: everything in the cluster

The 10-inch rack has room for more machines. That’s either a good thing or a very dangerous thing.

I haven’t decided yet.