I finally have to admit it: I would like a Kubernetes cluster at home. Like a real cluster I can run my own software on. Sure, I would love to build an awesome platform like Kubernetes myself. That would be a dream. However, for the applications I just need to run, well, I just need them running in a sane environment. Plus, k8s moves way faster than anything I can build.

Looking into various approaches I came across some references to kubeadm, which might make this easy. There seem to be a thousand different ways to build a Kubernetes cluster, so I am just looking for something with sane defaults. As I learn more about the internal machinery by accident I will gladly hack on it further. I am probably setting myself up for more pain than just configuring the components by hand. We will see though.

First step, as always, is to obtain kubeadm. Looks like a version already exists on my target machine, so now to figure out which one. Looks like it is 1.12. From what I can gather kubeadm is a part of Kubernetes itself.
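Checking was just a matter of asking kubeadm itself, something like:

kubeadm version -o short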

Turns out they have some nice instructions on their website, doh! I should have checked there first :-D. I try to use Salt whenever possible, so here is the state I went with.

apt-transport-https:
  pkg.installed:
    - refresh: True

curl:
  pkg.installed:
    - refresh: True

k8s-repo:
  pkgrepo.managed:
    - humanname: Google's Kubernetes Repository
    - name: deb https://apt.kubernetes.io/ kubernetes-xenial main
    - gpgcheck: 1
    - gpgkey: https://packages.cloud.google.com/apt/doc/apt-key.gpg

kubelet:
  pkg.installed:
    - version: 1.16.2-00
kubeadm:
  pkg.installed:
    - version: 1.16.2-00
kubectl:
  pkg.installed:
    - version: 1.16.2-00
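
Applying it is just a normal state run. A minimal invocation, assuming the state is saved as k8s-packages.sls in the Salt file root and salt-call runs masterless on the node (the state name is mine, adjust to your layout):

salt-call --local state.apply k8s-packages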

That worked. Next up is to figure out how to configure the networking layer. I will be using the 172.31.0.0/12 block for the Kubernetes network. Since I have no idea what I am doing here, I’ll also use Flannel. kubeadm gets really upset about swap space; that is something I would like to defer until later, so you can just throw on --ignore-preflight-errors=Swap.

kubeadm init --control-plane-endpoint api-server.k8s.internal --pod-network-cidr '172.31.0.0/12' --ignore-preflight-errors=Swap

When running this there were some warnings:

        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
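
For future me, the usual fixes for the first two warnings are roughly the following. I have not applied them on this box yet, so treat this as a note rather than something verified here:

swapoff -a                           # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab  # and keep it disabled across reboots

cat <<'EOF' > /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker             # switch Docker to the systemd cgroup driver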

Definitely more to check out later. Probably when I break something :-D. Following the script they output worked fine to import the kubeconfig for the cluster. It is a bit sad it overwrites any existing configuration you have though. Once this is done I need to finish the network configuration: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml. Then untaint the node: kubectl taint nodes --all node-role.kubernetes.io/master- ...and CoreDNS entered a crash loop.

Turns out I missed the note that Flannel expects to be configured for 10.244.0.0/16. Well, that is a bit disappointing. The network configuration is actually contained in the ConfigMap kube-flannel-cfg in kube-system. I wonder if I can just modify the ConfigMap to work with my desired configuration!
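
For reference, the relevant chunk of that ConfigMap looks roughly like this in the manifest I applied; the Network value is the part I changed to my own CIDR:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }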

Well, that didn’t go quite as I expected. Turns out I need to restart the DaemonSet kube-flannel-ds-amd64 for the change to take effect. Sounds like I just delete the pod and let Kubernetes do its thing. Flannel now looks happy, but CoreDNS is not. The error I am receiving is the following:

.:53
plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
[FATAL] plugin/loop: Loop (127.0.0.1:41995 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 1028767696865333038.8032668207056260049."

Sounds like this might be the result of a DNS loop within my system. A resolvectl shows the system is using the correct resolver, and the upstreams on my PowerDNS systems appear to be set up correctly. Might it be the common systemd-resolved issue? That does not seem right. Seems like I actually have a bigger problem: kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' returns a range already in use on my network. I set the netmask wrong: 172.31.0.0/12 is really 172.16.0.0/12, which swallows a huge chunk of the private address space, including ranges already in use here.

Restarting

Turns out restarting is fairly simple: kubeadm reset and confirm you would like to start over. You also need to take down the interfaces cni0 and flannel.1 using ifconfig <name> down. Once this is done you can run kubeadm init again. I was smarter this time: I downloaded the Flannel configuration file, edited the network, then applied that.
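
Roughly the sequence, reconstructed for reference (the pod CIDR is whatever corrected range you actually want, and the file name is just where I saved the manifest):

kubeadm reset
ifconfig cni0 down
ifconfig flannel.1 down

kubeadm init --control-plane-endpoint api-server.k8s.internal --pod-network-cidr '<corrected-cidr>' --ignore-preflight-errors=Swap
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# edit the Network value in net-conf.json to match the pod CIDR above
kubectl apply -f kube-flannel.yml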

Well, the CoreDNS IP address is correct this time. To resolve the final issue I ran kubectl edit configmap coredns -n kube-system and updated the forwarding rule to match my actual DNS server.
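
In the default Corefile the upstream comes from the host’s /etc/resolv.conf, so the change was roughly swapping that line for my resolver’s address (the IP below is a stand-in, not my real server):

forward . /etc/resolv.conf   # default
forward . 192.0.2.53         # pointed at my actual DNS server instead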

Now to test a pod will actually run:

apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Pod
metadata:
  name: smoke-test
  namespace: test
spec:
  containers:
  - name: test
    image: meschbach/docker-hello-world
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "40Mi"
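
Applying it and grabbing the pod IP (the file name is just whatever I saved the manifest as):

kubectl apply -f smoke-test.yaml
kubectl -n test get pod smoke-test -o wide   # the IP column is what we want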

A brief nc -vvv <ip> 3000 confirmed it worked.