I'm about to deploy a new bare-metal Kubernetes cluster using Kubespray.
On my agent nodes, systemd-resolved is running, which does not take its DNS settings from `/etc/resolv.conf` but rather from `/etc/systemd/resolved.conf`.
So which is the best DNS setting to use? CoreDNS? kube-dns? I just want to make sure that the pods I deploy use the same DNS servers as the ones configured on my agent nodes.
What should my selection be for the following?
```yaml
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
dns_mode: kubedns
# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
```
As per the official documentation:
As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. However, kube-dns may still be installed by default with certain Kubernetes installer tools. Refer to the documentation provided by your installer to know which DNS server is installed by default.
The CoreDNS Deployment is exposed as a Kubernetes Service with a static IP. Both the CoreDNS and kube-dns Service are named `kube-dns` in the `metadata.name` field. This is done so that there is greater interoperability with workloads that relied on the legacy `kube-dns` Service name to resolve addresses internal to the cluster. It abstracts away the implementation detail of which DNS provider is running behind that common endpoint.

If a Pod's `dnsPolicy` is set to "default", it inherits the name resolution configuration from the node that the Pod runs on. The Pod's DNS resolution should behave the same as the node. But see Known issues.

If you don't want this, or if you want a different DNS config for pods, you can use the kubelet's `--resolv-conf` flag. Set this flag to "" to prevent Pods from inheriting DNS. Set it to a valid file path to specify a file other than `/etc/resolv.conf` for DNS inheritance.

Known issues:

Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). systemd-resolved moves and replaces `/etc/resolv.conf` with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet's `--resolv-conf` flag to point to the correct `resolv.conf` (with systemd-resolved, this is `/run/systemd/resolve/resolv.conf`). kubeadm 1.11 automatically detects systemd-resolved and adjusts the kubelet flags accordingly.

Kubernetes installs do not configure the nodes' `resolv.conf` files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.

Linux's libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to consume 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es, some of those settings will be lost. As a partial workaround, the node can run `dnsmasq`, which will provide more `nameserver` entries, but not more `search` entries. You can also use kubelet's `--resolv-conf` flag.

If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.
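
Based on that recommendation, one reasonable selection for your Kubespray variables is a sketch like the following, using only values already listed in your snippet's comments. It assumes `host_resolvconf` here means Kubespray manages the hosts' resolver configuration; verify that against the Kubespray docs for your version:

```yaml
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
dns_mode: coredns

# Can be docker_dns, host_resolvconf or none
resolvconf_mode: host_resolvconf
```

Note that CoreDNS is still exposed under the `kube-dns` Service name, so workloads that reference that name keep working unchanged.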
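
Because your nodes run systemd-resolved, the "known issue" part of the quote applies directly: the kubelet should read the real resolver file rather than the 127.0.0.53 stub. A minimal sketch of the kubelet side (the `KubeletConfiguration` field behind `--resolv-conf`; how your Kubespray version wires this in may differ):

```yaml
# KubeletConfiguration fragment: point name resolution at the file where
# systemd-resolved keeps the real upstream servers, not the stub resolv.conf.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf
```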
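
Finally, if you want particular pods to use exactly the DNS servers configured on the node (instead of the cluster DNS), the `dnsPolicy` behaviour described in the quote can be set explicitly in the Pod spec. A minimal example with placeholder names and image:

```yaml
# Pod that inherits the node's resolver configuration instead of cluster DNS.
apiVersion: v1
kind: Pod
metadata:
  name: node-dns-check        # placeholder name
spec:
  dnsPolicy: Default          # "Default" = inherit from the node, per the docs above
  containers:
  - name: probe
    image: busybox:1.36       # placeholder image
    command: ["sleep", "3600"]
```

The API value is `Default` with a capital D; despite the name, the default policy pods actually get is `ClusterFirst`, which routes lookups through the cluster DNS.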