Setting up a Kubernetes cluster at home
This was an adventure.
- Control plane. I'm using a pair of Raspberry Pi 4 (4 GB) models.
- Workers. I'm using a set of Ubuntu x64 VMs.
Steps to get to kubectl get nodes:
- Prepare machines
- Set up control plane load balancing
- Create the cluster
- Install networking
- Join additional nodes
- Install Kubernetes dashboard
- Install MetalLB for load balancing
- Install OpenEBS for storage
- Install Helm for installing stacks on the cluster
It took a few tries to get this reliably working.
The files I used for this are checked into andyoakley/kubernetes-at-home.
I'm using Ubuntu because it's familiar. We'll need to start with Docker as the container runtime. Also install the open-iscsi initiator for mounting OpenEBS storage, more on that later. Finally, we install the Kubernetes packages themselves (kubeadm, kubelet, kubectl).
This is manual enough that it's probably worth doing in Ansible.
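As a rough sketch, the preparation boils down to the following (package names and the Kubernetes apt repository are assumptions for an Ubuntu 20.04-era setup; adjust for your release):

```shell
#!/bin/bash
# Sketch of per-node preparation on Ubuntu. Run as root with --apply.
prepare_node() {
  apt-get update
  # Container runtime, plus the iSCSI initiator OpenEBS needs later.
  apt-get install -y docker.io open-iscsi
  # Add the Kubernetes apt repository and install the core tools.
  curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
    > /etc/apt/sources.list.d/kubernetes.list
  apt-get update
  apt-get install -y kubelet kubeadm kubectl
  apt-mark hold kubelet kubeadm kubectl   # avoid surprise upgrades
}

# Only make changes when explicitly asked.
if [ "${1:-}" = "--apply" ]; then prepare_node; else echo "dry run; re-run with --apply"; fi
```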
Control plane load balancing
To give some real "high availability" feeling, we'll use two control plane nodes. These will take care of staying in sync with each other, but we still need to have them respond to requests at a single virtual IP. The high availability instructions suggest one path using the well-established keepalived and haproxy:
apt-get install keepalived haproxy
In my setup I'm using 10.50.1.21, 10.50.1.22, etc. as individual control plane nodes. They all know how to respond to a virtual IP at 10.50.1.20.
At this point, it's OK to have just one member in the load balance rotation.
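As a sketch, keepalived advertises the virtual IP and haproxy forwards port 443 to the kube-apiservers on 6443. The interface name, password, and priorities below are placeholders for my setup:

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the second node
    interface eth0            # adjust to your NIC
    virtual_router_id 51
    priority 101              # lower on the backup
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        10.50.1.20
    }
}

# /etc/haproxy/haproxy.cfg (sketch, appended)
frontend kube-apiserver
    bind *:443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    server cp1 10.50.1.21:6443 check
    server cp2 10.50.1.22:6443 check
```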
Create the cluster
With the control plane load balancer in place, we can now create the cluster using kubeadm. The control plane endpoint is probably hard to change later, so getting the load balancing right now is important.
The default pod network space of 10.244.0.0/16 works fine with Flannel (later).
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "cp.cluster.foobar.com:443" --upload-certs
This prints out some instructions that are important for later.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join CPNAME:443 --token TOKENTOKENTOKEN \
    --discovery-token-ca-cert-hash sha256:HASHHASHHASH \
    --control-plane --certificate-key KEYKEYKEY

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join CPNAME:443 --token TOKENTOKENTOKEN \
    --discovery-token-ca-cert-hash sha256:HASHHASHHASH
In particular, copy the
/etc/kubernetes/admin.conf file into the default location, on both the control plane node and your workstation.
kubectl get all should return some results at this point.
Install networking
We'll use Flannel because it works. Different container network implementations are for exploring later.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Join additional nodes
This part is easy, just use the commands in the output of
kubeadm init from above.
Install Kubernetes dashboard
Following the instructions for Kubernetes Web UI (Dashboard):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
kubectl apply -f users.yaml
Once in place, you need to run kubectl proxy to get access into the cluster, then browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. The helper script can be used to get the bearer token.
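For reference, the helper script amounts to something like the following. This is a sketch: it assumes users.yaml created a ServiceAccount named admin-user in the kubernetes-dashboard namespace, so adjust the names for your setup:

```shell
#!/bin/bash
# Print the dashboard bearer token for the admin-user ServiceAccount
# (assumed name; see users.yaml in your setup).
dashboard_token() {
  local secret
  secret=$(kubectl -n kubernetes-dashboard get secret | awk '/admin-user/ {print $1}')
  kubectl -n kubernetes-dashboard describe secret "$secret" \
    | awk '$1 == "token:" {print $2}'
}

# Only attempt the lookup when kubectl is available.
if command -v kubectl >/dev/null 2>&1; then dashboard_token || true; else echo "kubectl not on PATH"; fi
```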
Install MetalLB
Since we're running without the help of a cloud load balancer, we need something to hand out IP addresses to services running in the cluster. MetalLB seems to be the right solution for that.
Instructions at MetalLB installation:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f confmap.yaml
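For reference, confmap.yaml is a Layer 2 address pool along these lines (the address range here is a placeholder; pick a range on your LAN that nothing else hands out):

```yaml
# confmap.yaml (sketch): the pool of IPs MetalLB may assign to Services
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.50.1.100-10.50.1.150
```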
Install OpenEBS
OpenEBS provides a way of using the storage of cluster nodes. We installed open-iscsi earlier during the preparation step; it's required to be able to mount these volumes.
helm repo add openebs https://openebs.github.io/charts
helm repo update
kubectl create ns openebs
helm install openebs --namespace openebs openebs/openebs
Confirm this works
kubectl get blockdevice -n openebs
NAME                                           NODENAME       SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-a143f959561853e2634b59961b57d87c   5254025305fe   21474836480   Unclaimed    Active   6m35s
The approach I used was to create a cStor pool.
kubectl apply -f 00pool.yaml
Then we'll create a default storage class which will just provision volumes in this pool.
kubectl apply -f 01class.yaml
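The two files are roughly the following. Treat this as a sketch: StoragePoolClaim was the cStor pooling mechanism at the time, and the names and replica count are placeholders from my setup:

```yaml
# 00pool.yaml (sketch): claim node block devices into a cStor pool
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-pool
spec:
  name: cstor-pool
  type: disk
  poolSpec:
    poolType: striped
---
# 01class.yaml (sketch): default StorageClass provisioning from that pool
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-pool"
      - name: ReplicaCount
        value: "1"
provisioner: openebs.io/provisioner-iscsi
```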
Confirm these work with kubectl get spc and kubectl get csp.
Install Helm
This is simple and described in Installing Helm. Basically just a download and a copy to the path.
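Roughly the following; the version and architecture in the URL are placeholders, so substitute whatever release matches your workstation:

```shell
#!/bin/bash
# Sketch of a Helm install: download a release tarball and copy the
# binary onto the PATH. Run with --apply to actually install.
install_helm() {
  curl -fsSL https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz | tar xz
  sudo cp linux-amd64/helm /usr/local/bin/helm
}

if [ "${1:-}" = "--apply" ]; then install_helm; else echo "dry run; re-run with --apply"; fi
```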
At this point, the cluster can bring itself up.
kubectl get all should return something reassuring.
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   17m
And we can start installing applications by hand or with Helm.
I watched Why is Kubernetes On-Prem so much harder after the fact. And I agree.
Here were a few things specific to my environment.
Setting up VMs
By default a macvlan interface does not allow host-guest communication for QEMU. This helps:
#!/bin/bash
# With help from https://www.furorteutonicus.eu/2013/08/04/enabling-host-guest-networking-with-kvm-macvlan-and-macvtap/
sudo ip link add link eth1 macvlan1 type macvlan mode bridge
sudo ip address add <MYIP> dev macvlan1
sudo ip link set dev macvlan1 up
sudo ip route flush dev eth1
sudo ip route flush dev macvlan1
sudo ip route add 10.10.10.0/24 dev macvlan1 metric 0
sudo ip route add default via 10.10.10.1
This was already set up in my environment, but it makes it easier to stand up nodes from scratch (they can image themselves and register as workers).
An NFS provisioner is another alternative for storage, allowing persistent volume claims to be fulfilled on an NFS share. I ran into some permissions issues with it and ultimately decided I wanted something in-cluster anyway.