Kubernetes Home Cloud with Raspberry Pi

How I set up my local Kubernetes cluster on a Raspberry Pi, using k3s to run network services for my homelab

When you run a home lab like mine on dual-use equipment (I use a Dell Precision 7750 dual-booting ESXi 7.0U1 and Ubuntu 18.04), it gets cumbersome to host basic services like DNS and NTP as VMs inside your environment.

Dell Precision 7750. Dual Boot Ubuntu 18.04 and ESXi 7.0U1

So, looking for a cheap way to run these services outside of my virtual environment, I started using an old Raspberry Pi 2 running dnsmasq and ntpd.

Recently I decided to buy a new Raspberry Pi 4 and upgrade my setup so these services run more reliably, now on top of Kubernetes. In this post I’m going to explain my setup.

Raspbian Installation

I’m going to run a single host / control plane on a Raspberry Pi 4 8GB board with a 32GB SanDisk SD card, connected via the 1Gb Ethernet port.

I decided to run the standard Raspbian OS as the base for my setup, so I started by downloading the Raspberry Pi Imager.

I’m using Raspberry Pi OS Lite; we are not going to use the GUI, so we can save some space and resources. After selecting it as the operating system and flashing your SD card, you are ready to insert it into your Pi and boot it.

Raspberry Pi Imager

After booting and logging in with the default user pi and password raspberry, you need to do the basic configuration. For that you can use raspi-config to change the password, set your country, and enable SSH.

It’s also best to set a static IP address. You can do so by editing /etc/dhcpcd.conf with a configuration like this:

# Example static IP configuration:
interface eth0
static ip_address=192.168.68.2/24
static routers=192.168.68.1
static domain_name_servers=192.168.68.1 8.8.8.8

In preparation for installing Kubernetes, we also need to append the following to the single line in /boot/cmdline.txt:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
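Since cmdline.txt must stay a single line, appending with sed is less error-prone than editing by hand. This is a sketch run against a copy with a made-up example boot line; on the Pi the real file is /boot/cmdline.txt and the sed command needs sudo:

```shell
# Work on a copy for illustration; on the Pi, target /boot/cmdline.txt with sudo.
CMDLINE=/tmp/cmdline.txt
echo 'console=serial0,115200 console=tty1 root=PARTUUID=deadbeef-02 rootfstype=ext4 rootwait' > "$CMDLINE"

# Append the cgroup flags to the end of the first (and only) line.
sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"
cat "$CMDLINE"
```

A reboot is required afterwards for the kernel to pick up the new cgroup options.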

Now we are ready to run Kubernetes on our Raspberry Pi.

Installing K3s

We are going to use the k3s distribution from Rancher, which is lightweight and optimized for low-resource systems like the Raspberry Pi.


There are different ways to install k3s, but I’m using k3sup from Alex Ellis, which uses SSH to install k3s on a remote Linux host.

We can install it by downloading the binary directly:

$ curl -sLS https://get.k3sup.dev | sh
$ sudo install k3sup /usr/local/bin/

k3sup uses passwordless login by default, so you need to copy your SSH key to the Raspberry Pi with:

$ ssh-copy-id pi@<pi IP or hostname>

If you don’t have an SSH key on your local computer yet, run ssh-keygen first.

$ export IP=192.168.68.2
$ k3sup install --ip $IP --user pi --k3s-extra-args '--disable servicelb'

In a few moments you’ll receive a kubeconfig file in your local directory, with instructions on how to use it.

Find the node and check whether it’s ready yet:

$ mv kubeconfig ~/.kube/config
$ kubectl get node -o wide

Our control-plane node is now ready. In my case this will be the only node, but if you have more Raspberry Pis to add as agents, you can do it this way:

$ k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER

We now have a working Kubernetes cluster. Next, let’s deploy MetalLB so we can create services of type LoadBalancer.

MetalLB

I like to use LoadBalancer services to expose my applications in Kubernetes so I can use each application’s normal port instead of a NodePort in the 30000+ range.

MetalLB is an easy-to-deploy load-balancer solution, especially when we deploy it on a flat L2 home network. You can apply the deployment manifests directly this way:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

This will create a namespace called metallb-system, with all the components inside it.

Now you need to configure it, which you do with a ConfigMap. In my case I’m using a basic L2 network setup, but you can use BGP for a more complex network. If you’ve saved the ConfigMap as config.yaml, you can deploy it with kubectl apply -f config.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.68.10-192.168.68.20 # IP range you want to assign to services
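For reference, the BGP mode mentioned above uses the same v0.9 ConfigMap format with a peers section added. This is only a hypothetical sketch: the peer address and both ASNs are placeholders you would replace with your router’s actual values.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.68.1   # placeholder: your BGP-capable router
      peer-asn: 64500              # placeholder: the router's ASN
      my-asn: 64501                # placeholder: the ASN MetalLB announces from
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.68.10-192.168.68.20
```

For a flat home network the layer2 configuration shown earlier is simpler and needs no router support.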

Pi-Hole

Now it’s time to deploy our first service: pi-hole.

pi-hole user interface

Pi-hole is described as a DNS sinkhole that protects your devices from unwanted content without installing any client-side software. That is very useful by itself, but the main use case for me is that I can use it as my local DNS server, with a nice, easy-to-use interface for my lab.

Local DNS from pi-hole

To deploy pi-hole in Kubernetes, the first thing we need is persistent storage. k3s comes with a default StorageClass that creates persistent volumes on local disk. Because I’m using a single-node cluster, that works for me; I will probably change this to network storage as I grow my cluster, but at the moment I don’t have a NAS that supports NFS.

With that being said, we use PersistentVolumeClaims to request storage for pi-hole’s etc and dnsmasq folders.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-etc-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-dnsmasq-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

After that we can create our deployment with this manifest. You need to update the TZ value based on where you are located, and set the password you want to use to log in to the admin interface.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
        name: pihole
    spec:
      containers:
      - name: pihole
        image: pihole/pihole:latest
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "America/Panama"  #Change to your Time Zone https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
        - name: WEBPASSWORD
          value: "<Your Password>"
        volumeMounts:
        - name: pihole-local-etc-volume
          mountPath: "/etc/pihole"
        - name: pihole-local-dnsmasq-volume
          mountPath: "/etc/dnsmasq.d"
      volumes:
      - name: pihole-local-etc-volume
        persistentVolumeClaim:
          claimName: pihole-etc-pvc
      - name: pihole-local-dnsmasq-volume
        persistentVolumeClaim:
          claimName: pihole-dnsmasq-pvc

For the services, I’m using two services of type LoadBalancer, one for UDP and another for TCP, because at this time it isn’t possible to create a service of this type with multiple protocols.

---
apiVersion: v1
kind: Service
metadata:
  name: pihole-tcp
spec:
  selector:
    app: pihole
  ports:
  - port: 8000
    targetPort: 80
    name: pihole-admin
  - port: 53
    targetPort: 53
    protocol: TCP
    name: dns-tcp
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: pihole-udp
spec:
  selector:
    app: pihole
  ports:
  - port: 53
    targetPort: 53
    protocol: UDP
    name: dns-udp
  type: LoadBalancer

You can find the external IP of your services with kubectl get svc. In my case I can now go to http://192.168.68.11:8000/admin/ to manage pi-hole, and point my DHCP settings at 192.168.68.13.

❯ kubectl get svc
 NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                       AGE
 kubernetes   ClusterIP      10.43.0.1                       443/TCP                       2d15h
 pihole-tcp   LoadBalancer   10.43.126.187   192.168.68.11   8000:32266/TCP,53:30859/TCP   21h
 pihole-udp   LoadBalancer   10.43.142.255   192.168.68.13   53:30407/UDP                  21h
Then I updated my DHCP settings to hand out the external IP of the pihole-udp service (the one on UDP port 53), since that is the port used for name-resolution queries.
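To make it concrete what travels to that UDP port 53, here is an offline sketch of a minimal RFC 1035 DNS question for an A record, the kind of packet every client will now send to pi-hole. It only builds the bytes; nothing is sent on the network, and the query name is just an example.

```python
import struct

def build_query(name: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (no EDNS, recursion desired)."""
    # Header: ID, flags (RD=1), 1 question, 0 answer/authority/additional records.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

pkt = build_query("lab.local")
print(len(pkt))  # 27 bytes: 12-byte header + 11-byte QNAME + 4-byte type/class
```

Sending these bytes over UDP to the pihole-udp external IP is exactly what resolvers do on your behalf.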

NTP Service

NTP is key to keeping all your devices in sync. For this purpose I found the project https://github.com/cturra/docker-ntp, a container that runs chrony on Alpine Linux and has an arm image. I’m using a very basic setup here with the defaults: a Deployment and, again, a service of type LoadBalancer for port 123.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ntp
  labels:
    app: ntp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ntp
  template:
    metadata:
      labels:
        app: ntp
    spec:
      containers:
      - image: cturra/ntp
        name: ntp
---
apiVersion: v1
kind: Service
metadata:
  name: ntp-svc
spec:
  selector:
    app: ntp 
  ports:
    - protocol: UDP
      port: 123
      targetPort: 123
  type: LoadBalancer

Once it’s deployed, you can check that it works by finding the external IP and using the ntpdate command:

❯ ntpdate -q 192.168.68.14
 server 192.168.68.14, stratum 4, offset 0.027998, delay 0.02797
  7 Feb 12:44:07 ntpdate[53319]: adjust time server 192.168.68.14 offset 0.027998 sec
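Under the hood, what ntpdate sends to UDP port 123 is a fixed 48-byte SNTP request (RFC 4330). The sketch below builds and decodes one offline, without contacting any server, just to show the packet layout:

```python
def build_sntp_request() -> bytes:
    """Build a minimal 48-byte SNTP client request."""
    # First byte packs three fields: LI=0, version=4, mode=3 (client).
    first = (0 << 6) | (4 << 3) | 3
    # The remaining 47 bytes (stratum, poll, timestamps, ...) may be zero
    # in a simple client request.
    return bytes([first]) + b"\x00" * 47

def parse_flags(packet: bytes) -> tuple:
    """Decode (leap indicator, version, mode) from the first byte."""
    return (packet[0] >> 6, (packet[0] >> 3) & 0x7, packet[0] & 0x7)

pkt = build_sntp_request()
print(len(pkt), parse_flags(pkt))  # 48 (0, 4, 3)
```

The chrony container answers with a mode-4 (server) packet carrying the timestamps that ntpdate turns into the offset and delay figures shown above.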

That’s all for now. The main limitation of running workloads on a Raspberry Pi-based Kubernetes cluster is that there are still several applications that are not ready to run on the ARM architecture, but I can see this changing as ARM gets more popular. I also have a Minecraft server that I will be documenting in my next post. I will keep looking for workloads that are useful to run here; if you have any suggestions, you are welcome to write them in the comment section.