Self-hosted home automation

Smart home on K3s

Single-node Kubernetes cluster on a Raspberry Pi 5. Home Assistant, ArgoCD, Prometheus and Grafana, all GitOps-reconciled. Twenty-plus lights, plugs and sensors. Zero ports exposed to the internet.

2024 → ongoing
Single-node K3s, 20+ devices, 0 cloud accounts
// the premise

The same discipline, in miniature

Most smart-home setups end up as a pile of vendor apps tied together with cloud accounts. That works until a vendor goes away, or you realise your motion sensor is reporting to a server in another country.

I wanted the opposite: everything local, every config change under version control, every metric scraped by Prometheus. Same shape as the platform I run at work, just sized to a flat.

One Raspberry Pi 5, K3s, ArgoCD, and a Zigbee dongle. The bill of materials fits on a postcard.

// the hardware

One Pi, no cloud

A Raspberry Pi 5 (8GB) is the whole control plane. An NVMe SSD over USB for storage, because SD cards die under sustained writes. A UPS on the power side, because Home Assistant restarting at 3am after someone trips a fuse is not an experience anyone wants. Hardwired ethernet, because Wi-Fi is not a network.

A SONOFF Zigbee USB coordinator handles the radio. Devices pair directly with Zigbee2MQTT, which talks to Home Assistant over MQTT. No bridges, no cloud round-trip — bulb to coordinator to broker to automation in single-digit milliseconds.
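The wiring between those three pieces is a few lines of Zigbee2MQTT config. A sketch of what that looks like — the broker hostname, device path, and ports here are illustrative, not copied from my cluster:

```yaml
# Zigbee2MQTT configuration.yaml (sketch; hostnames and paths are illustrative)
mqtt:
  base_topic: zigbee2mqtt
  # Mosquitto broker, reachable as an in-cluster service
  server: mqtt://mosquitto:1883
serial:
  # SONOFF Zigbee coordinator; the device path depends on how the dongle enumerates
  port: /dev/ttyUSB0
permit_join: false   # only flip to true while pairing a new device
frontend:
  port: 8080         # web UI for pairing and network maps
```

Keeping permit_join off by default matters: an open Zigbee network will happily adopt a neighbour's button press.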

Twenty-plus endpoints today: six Philips Hue colour bulbs in the bedroom, three Innr smart plugs on power-monitored circuits (kitchen heater, hallway lamp, dining-room TV), two SONOFF temperature and humidity sensors, motion and contact sensors, a solar-powered Reolink camera, plus a planned smart-TRV install across the radiators. Every one of them reports to Prometheus.

// the cluster

GitOps for the living room

Everything on the Pi is a Kubernetes deployment, reconciled by ArgoCD from a git repo. Adding a new automation, tweaking a Grafana dashboard, bumping the Home Assistant version — all of it goes through a commit. The cluster pulls; nothing pushes.
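Each workload is one ArgoCD Application pointing at a path in the repo. A representative manifest, with the repo URL and paths swapped for placeholders:

```yaml
# One ArgoCD Application per workload (sketch; repo URL and paths are illustrative)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: home-assistant
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/home-cluster.git
    targetRevision: main
    path: apps/home-assistant
  destination:
    server: https://kubernetes.default.svc
    namespace: home-assistant
  syncPolicy:
    automated:
      prune: true     # delete resources removed from git
      selfHeal: true  # revert manual drift on the cluster
```

With prune and selfHeal on, a kubectl edit on the live cluster gets reverted within minutes. That is the "pulls, nothing pushes" guarantee made concrete.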

That sounds like overkill for a home lab, and it would be with any other tool. ArgoCD on K3s genuinely costs about 80MB of memory and a few CRDs. The payoff is a setup that survives me: if I blat the boot drive tomorrow, a fresh install plus argocd app sync brings everything back.

argocd apps in the cluster
$ kubectl get applications -n argocd
NAME                  SYNC STATUS   HEALTH STATUS
home-assistant        Synced        Healthy
zigbee2mqtt           Synced        Healthy
mosquitto             Synced        Healthy
prometheus            Synced        Healthy
grafana               Synced        Healthy
node-exporter         Synced        Healthy

// observability

Power draw and humidity, in Grafana

Prometheus scrapes metrics from Home Assistant's exporter and from node-exporter on the Pi itself. Innr smart plugs report real-time power draw on the kitchen heater and the hallway lamp. SONOFF LCD sensors report temperature and humidity per room.
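The scrape side is a short Prometheus config. A sketch, assuming Home Assistant's Prometheus integration is enabled (it serves metrics on /api/prometheus and wants a long-lived access token) — the service names and token path are illustrative:

```yaml
# Prometheus scrape config (sketch; service names and token path are illustrative)
scrape_configs:
  - job_name: home-assistant
    # Home Assistant's prometheus integration serves metrics here
    metrics_path: /api/prometheus
    authorization:
      # long-lived access token mounted into the Prometheus pod as a secret
      credentials_file: /etc/prometheus/secrets/ha-token
    static_configs:
      - targets: ["home-assistant:8123"]
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]
```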

Grafana sits on top with dashboards for power consumption, environmental trends, and system health. It is exactly the same shape as a small platform monitoring stack — just instead of microservices, the targets are bulbs and a kettle.
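The power-consumption dashboard ultimately boils down to integrating instantaneous watt readings over time. A minimal sketch of that arithmetic in Python — trapezoidal integration over (timestamp, watts) samples, with made-up readings:

```python
def watt_samples_to_kwh(samples):
    """Integrate (unix_seconds, watts) samples into kilowatt-hours
    using the trapezoidal rule between consecutive readings."""
    total_joules = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        total_joules += (w0 + w1) / 2 * (t1 - t0)
    return total_joules / 3_600_000  # joules -> kWh

# A heater drawing a steady 2000 W for one hour is 2 kWh
readings = [(0, 2000.0), (1800, 2000.0), (3600, 2000.0)]
print(watt_samples_to_kwh(readings))  # 2.0
```

In practice Grafana's PromQL does this for you with a rate-and-sum query; the sketch is just the maths the dashboard is doing.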

// access

Tailscale, not port-forwarding

Zero ports exposed to the internet. Remote access goes through Tailscale — every device on my account joins a private overlay network and reaches the Pi by its tailnet IP. Nothing on the router needs opening.
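Access control lives in the tailnet's ACL policy. A sketch of the shape (hujson, so comments and trailing commas are allowed) — the user and tag names are illustrative:

```json
// Tailscale ACL sketch (users and tags are illustrative)
{
  "tagOwners": {
    "tag:homelab": ["me@example.com"],
  },
  "acls": [
    // only my own devices may reach the Pi, and only on the ports I use
    {
      "action": "accept",
      "src":    ["me@example.com"],
      "dst":    ["tag:homelab:22,443,8123"],
    },
  ],
}
```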

The blast radius if Home Assistant is compromised is limited to the LAN, and the LAN is segmented so the IoT VLAN can't reach anything else. Boring threat-model, boring controls. That's the point.
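The VLAN segmentation is one forward-chain policy on the router. A sketch in nftables terms, assuming the router runs nftables and the interface names are as labelled here (they are illustrative):

```nft
# IoT VLAN isolation (sketch; interface names are illustrative)
table inet filter {
  chain forward {
    type filter hook forward priority filter; policy drop;
    # replies to connections the trusted LAN initiated are fine
    ct state established,related accept
    # trusted LAN may talk into the IoT VLAN (e.g. to reach the Pi)
    iifname "lan0" oifname "vlan20" accept
    # default-drop means IoT devices cannot initiate anything outbound to the LAN
  }
}
```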

// design

A few decisions worth flagging

Local-first by default

No vendor cloud, no telemetry I can't see, no remote kill-switch. If my internet goes down the lights still work. That constraint shaped every choice — which devices, which stack, which network topology.

Treat it like work

Same git, same ArgoCD, same Prometheus. The cost of applying real discipline to a home lab is small. The cost of not doing it is a fragile pile of config that nobody, me included, wants to debug at the weekend.

Right-sized infrastructure

K3s and ArgoCD are not heavy. The whole control plane plus every workload runs comfortably on a single Pi 5 with room to spare. The temptation to add a second node for "HA" is a trap — for a flat, the right number of nodes is one.

// what's next

Where this might go

Smart TRV radiator valves are next, which would let me schedule heating per room instead of per house. After that, presence detection that's good enough to stop relying on motion sensors — they're fine for "is someone in the hallway" and bad for "is anyone home".

The longer arc is local LLM integration — running a small model on a separate node so voice control and natural-language automation don't round-trip to a cloud API. Same instinct as the rest of the system: keep the data local, keep the latency small, keep the dependencies countable.

// the numbers

What it adds up to

Single-node: K3s + ArgoCD + Prometheus
20+ lights, plugs and sensors
6 GitOps-reconciled apps
0 ports exposed to the internet

The headline isn't a number. It's that everything in this house — lights, heating, sensors, cameras — runs on kit I own, code I wrote, and a network I control. The same principles I apply at work, applied at home.

Thanks for reading.

If any of this resonates — or you want to dig into the parts I didn't write up — drop me a note. Always happy to talk shop.