By default the Kubernetes cluster DNS suffix is cluster.local. This name conflicts with mDNS and will result in random lookup failures when attempting to access services on a home network. I would love to place this under a subdomain of my own, however I have had trouble with delegation in PowerDNS. In the future I might revisit this, but for now my Kubernetes clusters will use workshop.k8s. as their DNS name.

To reconfigure there are several steps.

#1. Update kube-system/configmaps

Each of the following configmap keys needs to be modified:

  • kubeadm-config: data.ClusterConfiguration.networking.dnsDomain -> workshop.k8s
  • coredns: data.Corefile -> The root zone, identified by ., has a kubernetes directive followed by a series of existing domain names. A number of services may assume cluster.local exists, so it's probably best to leave it in; place your own domain before it. This is also a good time to change the pods directive to verified.
  • kubelet-config-1.{k8s-minor}: data.kubelet.clusterDomain -> workshop.k8s
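For reference, after the edit the kubernetes block of the Corefile might look roughly like the sketch below. The surrounding plugin directives are whatever your cluster already ships with; only the zone list and the pods mode change:

```
.:53 {
    # ... existing plugins (errors, health, etc.) left as-is ...
    kubernetes workshop.k8s cluster.local in-addr.arpa ip6.arpa {
        pods verified
        fallthrough in-addr.arpa ip6.arpa
    }
    # ... existing plugins (forward, cache, etc.) left as-is ...
}
```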

#2. Restart CoreDNS

This restarts CoreDNS within the cluster to pick up the changes. I keep searching for a better method to do this, perhaps with ReplicaSets, however I need to level up my Kubernetes skills first. In the meantime the following should cause CoreDNS to restart, since deleting the pods puts the ReplicaSet below its desired count.

kubectl get pods -n kube-system | grep coredns | awk '{print $1;}' | xargs kubectl delete --namespace kube-system pod
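An alternative worth noting: since CoreDNS runs as a Deployment, kubectl rollout restart (available in kubectl 1.15 and later) should achieve the same result more gracefully, replacing pods one at a time:

```shell
# Trigger a rolling restart of the CoreDNS Deployment
kubectl -n kube-system rollout restart deployment coredns

# Block until the new pods are up and ready
kubectl -n kube-system rollout status deployment coredns
```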

#3. Reconfigure existing nodes

Only having two nodes makes this rather easy; for each node you must do the following:

  • Pass the flag --cluster-domain=workshop.k8s to each kubelet instance. It is probably best to modify /etc/default/kubelet, adding the flag in quotes. One could also add it to KUBELET_CONFIG_ARGS within the systemd unit file. Ensure systemctl daemon-reload && systemctl restart kubelet is run afterwards.
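Concretely, on a kubeadm-provisioned Debian/Ubuntu node the override might look like the fragment below. KUBELET_EXTRA_ARGS is the variable kubeadm's systemd drop-in sources from /etc/default/kubelet; adjust the path and variable name to your distribution:

```
# /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cluster-domain=workshop.k8s"
```

After editing, run systemctl daemon-reload && systemctl restart kubelet on the node for the change to take effect.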

#4. Configure network resolvers

Next, the network resolvers will need an override that forwards queries for the new domain suffix to the address of the CoreDNS service; depending on your setup this may only be needed on your base station.
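As an illustration, with a dnsmasq-style resolver this is a single line. The 10.96.0.10 address below is an assumption (the kubeadm default for the cluster DNS Service); check yours with kubectl -n kube-system get svc kube-dns:

```
# Forward all workshop.k8s queries to the cluster's CoreDNS service
server=/workshop.k8s/10.96.0.10
```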


This is definitely a sensitive operation. I would not do it on a heavily loaded production cluster without further consideration. It might even be better to run concurrent clusters, slowly migrating nodes and services over.

To sanity-check the configuration, use your favorite DNS querying tool against ns.dns.workshop.k8s (substituting your own suffix). For example:

:> dig @ ns.dns.workshop.k8s

; <<>> DiG 9.10.6 <<>> @ ns.dns.workshop.k8s
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41398
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

; EDNS: version: 0, flags:; udp: 4096
;ns.dns.workshop.k8s.		IN	A

ns.dns.workshop.k8s.	30	IN	A
ns.dns.workshop.k8s.	30	IN	A
ns.dns.workshop.k8s.	30	IN	A

;; Query time: 44 msec
;; WHEN: Wed Apr 29 16:10:40 PDT 2020
;; MSG SIZE  rcvd: 153