In an effort to get everything running again, I made a mess of my k8s control plane. This resulted in subtle errors, including nodes briefly going offline. Longer term I am rethinking how the servers get configured and brought in, similar to how AWS does it. However, that solution requires a layered architecture which I need to think on a bit more. There are lessons to be learned today.

I am moving the control plane endpoint to a new DNS name. First up is setting that name to resolve to the target hosts. Next is updating the kubeadm-config ConfigMap's ClusterConfiguration.controlPlaneEndpoint and ClusterStatus.apiEndpoints fields. I was not able to find documentation for either of these.
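For reference, both fields live in the kubeadm-config ConfigMap in the kube-system namespace, editable with kubectl -n kube-system edit configmap kubeadm-config. The excerpt below is an illustrative sketch, not my actual values; the endpoint name and addresses are made up, and on newer kubeadm releases the ClusterStatus key may no longer be present in this ConfigMap.

```yaml
# illustrative excerpt of the kubeadm-config ConfigMap; values are hypothetical
data:
  ClusterConfiguration: |
    # point this at the new DNS name for the control plane
    controlPlaneEndpoint: "k8s-api.workshop.k8s:6443"
  ClusterStatus: |
    apiEndpoints:
      # one entry per control plane node name
      control-plane-1:
        advertiseAddress: 10.0.0.1
        bindPort: 6443
```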

Next up, for each control plane host the following needs to be done:

cd /etc/kubernetes
rm pki/apiserver.{crt,key}
# your endpoint name and service dns domain will most likely be different
kubeadm init phase certs apiserver --control-plane-endpoint=<new-endpoint> --service-dns-domain=workshop.k8s
# swap the old host name for the new one in each kubeconfig
sed -i 's/<old-hostname>/kubernetes.default.svc.workshop.k8s/g' admin.conf controller-manager.conf kubelet.conf scheduler.conf
systemctl restart kubelet 
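To see what the sed step does in isolation, here is a self-contained sketch against a fake kubeconfig; the file path and the old host name are made up for illustration.

```shell
# create a minimal fake kubeconfig pointing at the old endpoint (hypothetical name)
cat > /tmp/admin.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://old-master.example.com:6443
  name: workshop
EOF

# swap the old host name for the new endpoint, as on the real hosts
sed -i 's/old-master.example.com/kubernetes.default.svc.workshop.k8s/g' /tmp/admin.conf

# print the updated server line to confirm the replacement
grep 'server:' /tmp/admin.conf
```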

The sed line replaces the old host name in each kubeconfig. The last line actually restarts the entire node. Once all nodes are restarted you can verify the certificates resolve as expected via:

openssl s_client -connect host:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep DNS:
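The same grep works on any certificate, not just the live apiserver. As a self-contained illustration, the sketch below generates a throwaway self-signed cert carrying a DNS SAN (the name and file paths are made up) and inspects it the same way, minus the s_client connection.

```shell
# generate a throwaway self-signed cert with a DNS SAN (hypothetical name)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/test.key -out /tmp/test.crt \
  -subj '/CN=test' \
  -addext 'subjectAltName=DNS:kubernetes.default.svc.workshop.k8s'

# same inspection as against the running apiserver: list the DNS SANs
openssl x509 -in /tmp/test.crt -noout -text | grep DNS:
```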