k8s: Changing Cluster DNS
• Mark Eschbach
By default the Kubernetes cluster DNS suffix is cluster.local. This name conflicts with mDNS and will result in random lookup failures when attempting to access services on a home network. I would love to place this somewhere like k8s.workshop.meschbach.net, however I have had trouble with delegation in PowerDNS. In the future I might revisit this; for now my Kubernetes clusters will use workshop.k8s as their DNS name.
To reconfigure there are several steps.
#1. Update kube-system/configmaps
Each of the following configmap keys needs to be modified:
- kubeadm-config: data.ClusterConfiguration.networking.dnsDomain -> workshop.k8s
- coredns: data.Corefile -> The root zone, identified by ".", has a kubernetes directive with a series of existing domain names. A number of services may assume cluster.local exists, so it's probably best to leave it in; place your own domain before it. This is also a good time to change the pods directive to verified.
- kubelet-config-1.{k8s-minor}: data.kubelet.clusterDomain -> workshop.k8s
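As a rough sketch, after the edits the kubernetes directive in the Corefile might look something like the following. The surrounding plugins are from a stock kubeadm-generated Corefile and may differ in your cluster; the key change is the new suffix listed ahead of cluster.local:

```
.:53 {
    errors
    health
    kubernetes workshop.k8s cluster.local in-addr.arpa ip6.arpa {
        pods verified
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```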
#2. Restart CoreDNS
This restarts CoreDNS within the cluster to pick up the changes. I keep searching for a better method to do this, perhaps with replica sets, however I need to level up my Kubernetes skills first. In the meantime the following should cause CoreDNS to restart because the replica set falls below its desired count.
kubectl get pods -n kube-system |grep coredns |awk '{print $1;}' |xargs kubectl delete --namespace kube-system pod
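On kubectl 1.15 and later there may be a cleaner alternative: rollout restart, assuming CoreDNS runs as a Deployment named coredns (the default for kubeadm clusters):

```shell
kubectl -n kube-system rollout restart deployment coredns
```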
#3. Reconfigure existing nodes
Having only two nodes makes this rather easy; for each node you must do the following:
- Pass the flag --cluster-domain=workshop.k8s to each kubelet instance. It is probably best to modify /etc/default/kubelet, adding the flag in quotes. One could also add it within KUBELET_CONFIG_ARGS in the systemd unit file. Ensure systemctl daemon-reload && systemctl restart kubelet is also run.
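For example, on a Debian-style kubeadm install the drop-in might look like this (variable name and path assume the stock kubeadm packaging; adjust if your distribution differs):

```
# /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cluster-domain=workshop.k8s"
```

Followed by systemctl daemon-reload && systemctl restart kubelet on each node.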
#4. Configure network resolvers
Next the network resolvers will need to forward the domain suffix to the address of the CoreDNS service, or perhaps just your base station.
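As one sketch: if your home router or base station runs dnsmasq, forwarding the cluster suffix to the CoreDNS service IP (10.96.0.10, matching the dig example below) would look like:

```
# dnsmasq.conf: send queries under the cluster suffix to CoreDNS
server=/workshop.k8s/10.96.0.10
```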
Conclusion
Definitely not an overly sensitive operation, though I would not do this on a heavily loaded production cluster without further consideration. It might even be better to run concurrent clusters, slowly migrating nodes and services over.
To test the sanity of the configuration, use your favorite DNS querying tool against ns.dns.workshop.k8s if your suffix is workshop.k8s. For example:
:> dig @10.96.0.10 ns.dns.workshop.k8s
; <<>> DiG 9.10.6 <<>> @10.96.0.10 ns.dns.workshop.k8s
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41398
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ns.dns.workshop.k8s. IN A
;; ANSWER SECTION:
ns.dns.workshop.k8s. 30 IN A 10.96.0.10
ns.dns.workshop.k8s. 30 IN A 172.31.0.138
ns.dns.workshop.k8s. 30 IN A 172.31.1.24
;; Query time: 44 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Apr 29 16:10:40 PDT 2020
;; MSG SIZE rcvd: 153
:>