My Kubernetes & BOINC Raspberry Pi Cluster
In the last two months I managed to buy two cheap, second-hand Raspberry Pi 4s with 4GB of RAM. I bought the second one the day after Christmas; it was listed as a “failed present”. Somebody had a bad experience under the Christmas tree, which turned out to be a win for me.
Now that I had enough Raspberry Pis, I could finally start on a plan I had been sitting on for a very long time: building a cluster.
Why would I want to create a cluster of Raspberry Pis? Mostly for learning, and for fun. Kubernetes is a technology that has been on my to-do list for years now, and while I did some learning on a single-node Minikube, I wanted to try out the “real thing” and run it on a multi-computer setup. Having a cluster is also an incentive to try out other administration and management tools like Ansible. And I simply enjoy the hardware aspect of the Pis, it’s like LEGO for adults. I mean, LEGO is also for adults, so um, Pis are like LEGOs, and both are for adults. And teens? I don’t know where this is going. You get me.
Anyway, let’s talk about the cluster.
The cluster in its full glory, with its slightly too long cables.
Hardware
The heart of the cluster is three Raspberry Pi 4Bs, each with 4GB of RAM. They are mounted in an unbranded open cluster case. The case is universal enough that it can also accommodate any other Pi, so at some point I might be able to swap the Pis for model 5s.
All three Pis are connected to my NETGEAR PoE+ switch using cables that I made myself.
The PoE HAT
One of the Pis is using the Raspberry Pi PoE+ HAT. Why only one? Because those HATs are a bit too expensive to buy three at once, and I also wanted to test one out before committing to installing them on every Pi that I have.
After a week of using it I am now sure that at some point I will buy PoE HATs for the other Pis. It may not sound like much, but removing the power cable and the wall charger makes a real difference in the tidiness and space usage of the whole cluster.
An upside or a downside, depending on how you look at it, is that the HAT has a fan. Active cooling helps a lot with keeping the thermals down, especially under prolonged loads (and I’ve been running those, more on that soon), but the fan can be noisy at times. To combat that, I used Jeff Geerling’s hack to change the fan curve, which helped with the noise. I also added another, larger fan to blow air over the whole cluster, which is another factor in stopping the HAT’s fan from ramping up.
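If you want to tweak the fan curve yourself, the trip points are just dtparam entries in the boot config, which also makes them easy to roll out with Ansible. The playbook below is a minimal sketch rather than my exact setup: the temperature values (in millidegrees Celsius) are examples, and the config path assumes a recent Raspberry Pi OS.

- name: Relax the PoE HAT fan trip points
  hosts: cluster
  become: yes
  tasks:
    - name: Set custom fan temperature thresholds
      ansible.builtin.blockinfile:
        path: /boot/firmware/config.txt   # /boot/config.txt on older Raspberry Pi OS releases
        marker: "# {mark} ANSIBLE MANAGED: PoE fan curve"
        block: |
          dtparam=poe_fan_temp0=65000
          dtparam=poe_fan_temp1=70000
          dtparam=poe_fan_temp2=75000
          dtparam=poe_fan_temp3=80000
    # A reboot is needed for the new thresholds to take effect.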
What I do not like about the PoE HAT is that it sits very close to the board. That makes sense in tight cases, I imagine, but for me it only meant that I had to remove all the heatsinks I had glued to my Pi, apart from the CPU one. And even that one would only fit after I removed one of the screws holding the fan.
Software
The Pis are running the latest 64-bit Raspberry Pi OS without a desktop environment. Everything is done via SSH.
The hostnames are Atlas, Agena and Thor. Yes, I named them after NASA rockets :)
Kubernetes
The cluster is running K3s, a distribution of Kubernetes aimed at lower-powered devices and at people who do not need the full power of, or do not have deep knowledge about, full-fledged Kubernetes. My use case ticks both of those boxes.
My general homelabbing plan for 2025 is to separate compute from storage, and this cluster fits into it well. So far I have deployed Grafana for monitoring and Readeck for bookmark management on the cluster. This way I can access those services even when my main, large NAS/server machine is not running. The plan is for the cluster to run all the services that just need to be up and do not require heavy resources, while the NAS will be only for services that require large storage or compute, like Immich for photos or Pinchflat for YouTube archiving.
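Just to give a flavour, deploying a service like Readeck on K3s boils down to a short manifest along these lines. This is only a rough sketch, not my actual config; the image location and port are assumptions on my part:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: readeck
spec:
  replicas: 1
  selector:
    matchLabels:
      app: readeck
  template:
    metadata:
      labels:
        app: readeck
    spec:
      containers:
        - name: readeck
          image: codeberg.org/readeck/readeck:latest   # assumed image location
          ports:
            - containerPort: 8000                      # assumed default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: readeck
spec:
  selector:
    app: readeck
  ports:
    - port: 80
      targetPort: 8000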
I won’t be diving deeper into how I configured K3s or how I am deploying containers; I am a total noob at it so far, and there are tons of much better posts, written by infinitely more knowledgeable people, from which you can learn. Maybe at some point, when I have learned more and fixed all my mistakes, I will write a longer post focusing on my Kubernetes journey.
Ansible
As I wrote in the introduction, having a cluster also encouraged me to dive deeper into Ansible. Ansible is one of those technologies that I have been dipping in and out of for many years now without ever really learning it in depth. Now, with three identical computers, I have more motivation to learn, as automating some of the tasks can actually save me work (insert obligatory XKCD reference).
For example, I wrote an Ansible playbook that installs and runs Prometheus and Node Exporter on every Pi in the cluster:
- name: Install Prometheus and Node Exporter
  hosts: cluster
  become: yes
  tasks:
    - name: Create a user for Prometheus
      ansible.builtin.user:
        name: prometheus
    - name: Create folder for Prometheus and Node Exporter
      ansible.builtin.file:
        path: /opt/prometheus
        state: directory
    - name: Upload Prometheus
      ansible.builtin.copy:
        src: ./prometheus
        dest: /opt/prometheus/
    - name: Upload Node Exporter
      ansible.builtin.copy:
        src: ./node_exporter
        dest: /opt/prometheus/
    - name: Upload Prometheus config file
      ansible.builtin.copy:
        src: ./prometheus.yml
        dest: /opt/prometheus/
    - name: Upload Prometheus service
      ansible.builtin.copy:
        src: ./prometheus.service
        dest: /etc/systemd/system/
    - name: Upload Node Exporter service
      ansible.builtin.copy:
        src: ./node_exporter.service
        dest: /etc/systemd/system/
    - name: Recursively change ownership of the Prometheus directory
      ansible.builtin.file:
        path: /opt/prometheus
        state: directory
        recurse: yes
        owner: prometheus
        group: prometheus
    - name: Make Prometheus executable
      ansible.builtin.file:
        path: /opt/prometheus/prometheus
        mode: a+x
    - name: Make Node Exporter executable
      ansible.builtin.file:
        path: /opt/prometheus/node_exporter
        mode: a+x
    - name: Issue daemon-reload to pick up config changes
      ansible.builtin.systemd_service:
        daemon_reload: true
    - name: Enable Prometheus service
      ansible.builtin.systemd_service:
        name: prometheus
        enabled: true
    - name: Enable Node Exporter service
      ansible.builtin.systemd_service:
        name: node_exporter
        enabled: true
    - name: Make sure Prometheus is running
      ansible.builtin.systemd_service:
        state: started
        name: prometheus
    - name: Make sure Node Exporter is running
      ansible.builtin.systemd_service:
        state: started
        name: node_exporter
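The prometheus.yml that the playbook uploads only needs to be a static scrape config. Here is a minimal sketch of what such a file can look like, assuming the default Node Exporter port 9100 and my three hostnames; my actual file may differ slightly:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - atlas:9100
          - agena:9100
          - thor:9100
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090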
And with that, I can show the stats in Grafana running in K3s:
Yes, I know that I can also run Prometheus straight in K3s and get metrics about Kubernetes itself; I have not reached that point yet :)
BOINC
And finally, BOINC. The Pis have been mostly idling, running only a few containers each, so I decided to use them for distributed computing. They won’t do much on their own, but with BOINC, every little helps. I installed boinc-client on all three and assigned them to crunch tasks for Asteroids@Home.
Thermals are an important concern when running BOINC, so I added a 120mm BeQuiet fan to the cluster. It is powered from a USB port, so it spins slowly and quietly, but it moves enough air to keep the Pis cool. To further keep the temperatures down, and to leave some compute for K3s, I am only crunching on three of the four cores of each Pi’s CPU, nine cores in total. The cluster crunches around 20 tasks per day.
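This is also the kind of setup that Ansible is good for. Below is a rough sketch of how it could be automated rather than my exact playbook: the account key is a placeholder, the project URL should be checked against the project page, and the 75% CPU limit corresponds to three of the four cores.

- name: Install BOINC and attach to Asteroids@Home
  hosts: cluster
  become: yes
  tasks:
    - name: Install the BOINC client
      ansible.builtin.apt:
        name: boinc-client
        state: present
        update_cache: yes
    - name: Limit BOINC to 75% of the CPUs (3 of 4 cores)
      ansible.builtin.copy:
        dest: /var/lib/boinc-client/global_prefs_override.xml
        owner: boinc
        group: boinc
        content: |
          <global_preferences>
            <max_ncpus_pct>75</max_ncpus_pct>
          </global_preferences>
    - name: Restart BOINC so it picks up the new preferences
      ansible.builtin.systemd_service:
        name: boinc-client
        state: restarted
        enabled: true
    - name: Attach to Asteroids@Home (not idempotent, complains if already attached)
      ansible.builtin.command:
        cmd: boinccmd --project_attach https://asteroidsathome.net/boinc/ YOUR_ACCOUNT_KEY
        chdir: /var/lib/boinc-client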
Bottom Line
I believe that a small cluster is a good addition to one’s homelab. It not only lets you tinker more with hardware, but it also opens up totally new possibilities when it comes to software and administration. A single Raspberry Pi 4 can do quite a lot, but combined with others it can do even more, with the added bonus of high availability.
Maybe one day I will build a similar cluster, not from Pis but from USFF x86 PCs like the Lenovo Tiny. We will see.
This post has been all over the place. I hope you enjoyed it. In 2025 I have this idea to write more blog posts that are not tutorials but, well, actual blog posts, where I loosely talk about myself and what I’m up to.
Thanks for reading!
If you enjoyed this post, please consider helping me make new projects by supporting me on the following crowdfunding sites: