One week into 100 Days of Homelab and the experience has been great! I dropped two days this week due to personal commitments but still made more progress than I expected. Going in, I was convinced that one hour per day wouldn’t be enough to see substantial progress over a week; instead, the time constraint has been more impactful than I expected. The pressure of a one-hour session requires that it’s properly scoped and crystal clear in intent. As a result, I’m focusing on smaller, more manageable objectives, which offer regular boosts of motivation to keep going.
Day 1
Day one was an effort of exploration. A plan was set: GitOps all my home tech. Starting with container technologies, I thought this might be a good opportunity to get hands-on with Kubernetes (K8s). I spent some time researching K8s options and trying to get a better sense of whether it would be viable on a Raspberry Pi. Some initial research led me to believe I’d have some luck on a Pi 4, but I couldn’t find much on a Pi 3.
Hardware limitations were something to consider as I’ve got both a Pi 3 and a Pi 4 (the latter being much more powerful), and a highly available K8s cluster requires at least three control plane nodes to maintain quorum. In light of supply chain shortages, getting my hands on another Pi 4 was going to be a challenge.
Knowledge nuggets from the K8s exploration:
- K8s is made up of worker nodes that run containers, while control plane components manage and coordinate the cluster. Both can run on the same host simultaneously, which means I could have two Pi 4s running both the control plane and workloads, plus a Pi 3 running just the control plane components. The Pi 3 would then act as the tiebreaker: if one of the Pi 4s went offline, it would provide the quorum needed to signal to the remaining Pi 4 that it should take over node operations (see the sketch after this list).
- K8s requires a database to keep track of everything. Luckily it supports the distributed etcd key-value store as a backing store, which avoids running a separate database and eliminates the resulting high availability concerns.
- There are quite a few flavours of K8s; some interesting lightweight alternatives include K3s and MicroK8s.
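To make that topology concrete, here’s roughly how it could be expressed with K3s and its embedded etcd. This is a sketch based on the K3s config file format; the token, addresses, and taint are placeholders, not anything from my actual setup.

```yaml
# /etc/rancher/k3s/config.yaml on the first Pi 4:
# bootstraps a new cluster backed by the embedded etcd datastore
cluster-init: true
token: "<shared-cluster-secret>"
---
# /etc/rancher/k3s/config.yaml on the second Pi 4 and the Pi 3:
# joins the existing cluster as an additional server (control plane) node
server: "https://<first-pi4-address>:6443"
token: "<shared-cluster-secret>"
---
# Extra lines for the Pi 3 only: taint the node so regular workloads
# stay off it, leaving it as a control-plane-only tiebreaker
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
```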
The question that remained open at the end of this session was how to maintain a highly available proxy server for the control plane API, and another highly available proxy for exposing services running on the nodes. In light of time constraints, I thought it best to shelve these questions and first investigate whether the hardware was viable at all.
Day 2
Despite supply chain issues weighing against K8s, I was still curious about the potential of running the control plane components on a Pi 3. I narrowed the flavours of K8s down to two options: K3s and MicroK8s. Both had surprisingly high idle CPU utilisation with no containers running; MicroK8s, for example, sat at around 30%. Although the Pi 3 is much less powerful than the Pi 4, I was surprised coming from Docker, which is almost unnoticeable with no containers running. That said, it does make sense, since Docker isn’t running any control plane services.
Ultimately the high CPU utilisation on the Pi 3 had me doubting whether it could keep up as demand on the control plane increased over time. That was the decider to shelve K8s for another time.
Days 3 - 7
With K8s out of the equation, I returned to none other than Docker! The revised goal: two Docker hosts with mirrored configurations that sync data nightly. Failover would be manual, but the Pi 3 could be reintroduced to run critical containers while optional containers are left stopped during a failover event (one way to express that split is sketched below).
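One way to express the critical/optional split is with Docker Compose profiles: services without a profile always start, while profiled services only start when explicitly requested. A minimal sketch, with purely illustrative service names:

```yaml
# docker-compose.yml: "docker compose up -d" starts only the
# profile-less (critical) services; adding "--profile optional"
# starts everything
services:
  pihole:
    image: pihole/pihole:latest    # critical: keep running during failover
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest  # optional: left stopped on the Pi 3
    restart: unless-stopped
    profiles: ["optional"]
```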
My current solution for deploying a new Docker host is a janky shell script that requires manual intervention. Surely this experience could be faster and better. Thankfully, the Raspberry Pi Imager enables the configuration of some basic host options when imaging the SD card. This eliminates some of the clumsy initial setup and puts the Pi into a state where Ansible can take over.
The Raspberry Pi Imager is used to configure:
- Hostname
- Default user (including SSH key)
- Network
- SSH
- Locale
Ansible is then used to:
- Install Docker
- Add scripts and docker-compose file
- Schedule scripts with cron
- Run docker-compose to bring the containers online (a rough playbook sketch follows)
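Pulled together, the playbook looks something like this minimal sketch. The host group, paths, and the sync script name are placeholders rather than my exact files.

```yaml
# playbook.yml: a minimal sketch of the provisioning flow
- hosts: pi_docker_hosts
  become: true
  tasks:
    - name: Install Docker via the convenience script
      ansible.builtin.shell: curl -fsSL https://get.docker.com | sh
      args:
        creates: /usr/bin/docker        # skip if Docker already exists

    - name: Copy helper scripts to the host
      ansible.builtin.copy:
        src: files/scripts/
        dest: /home/pi/scripts/
        mode: "0755"

    - name: Copy the docker-compose file
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /home/pi/docker-compose.yml

    - name: Schedule the nightly data sync with cron
      ansible.builtin.cron:
        name: nightly-sync
        hour: "2"
        minute: "0"
        job: /home/pi/scripts/sync.sh

    - name: Bring the containers online
      ansible.builtin.shell: docker compose up -d
      args:
        chdir: /home/pi
```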
The biggest win from all of this is the ability to deploy a new Pi host running Docker without ever opening a shell to the host. This feels like it might be as low-touch as it gets without pre-existing infrastructure like an imaging server on the network.
Overall I’m excited about how the first week turned out and energised for the next!