I'm about to deploy a new Kubernetes bare-metal cluster using Kubespray.
On my agent nodes, systemd-resolved is running, and it does not take its DNS settings from /etc/resolv.conf but rather from /etc/systemd/resolved.conf.
So which is the best DNS setting to use: CoreDNS or kube-dns? I just want to make sure that the pods I deploy use the same DNS servers as the ones configured on my agent nodes.
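For context, on a host running systemd-resolved the relevant files typically look something like this (all addresses and domains below are placeholders):

# /etc/systemd/resolved.conf -- where the upstream servers are configured
[Resolve]
DNS=10.0.0.2 10.0.0.3
Domains=example.internal

# /etc/resolv.conf -- the stub left in place by systemd-resolved
nameserver 127.0.0.53
options edns0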
What should my selection be for
# Can be dnsmasq_kubedns, kubedns, coredns, coredns_dual, manual or none
dns_mode: kubedns
# Set manual server if using a custom cluster DNS server
#manual_dns_server: 10.x.x.x
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
?
As per the official documentation:
As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. However, kube-dns may still be installed by default with certain Kubernetes installer tools. Refer to the documentation provided by your installer to know which DNS server is installed by default.
The CoreDNS Deployment is exposed as a Kubernetes Service with a static IP. Both the CoreDNS and kube-dns Service are named kube-dns in the metadata.name field. This is done so that there is greater interoperability with workloads that relied on the legacy kube-dns Service name to resolve addresses internal to the cluster. It abstracts away the implementation detail of which DNS provider is running behind that common endpoint.
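As an illustration, the metadata of that Service is the same no matter which provider backs it; a minimal sketch (the clusterIP is a placeholder, the actual static IP depends on your service CIDR):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns           # same Service name whether CoreDNS or kube-dns is deployed
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  clusterIP: 10.233.0.3    # placeholder; a static IP inside the cluster service range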
If a Pod’s dnsPolicy is set to “default”, it inherits the name resolution configuration from the node that the Pod runs on. The Pod’s DNS resolution should behave the same as the node. But see Known issues.
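For the goal of having pods use the node’s DNS servers, that corresponds to a Pod spec along these lines (name and image are placeholders; note the API value is spelled Default, and the policy applied when nothing is set is actually ClusterFirst):

apiVersion: v1
kind: Pod
metadata:
  name: dns-check          # placeholder name
spec:
  dnsPolicy: Default       # inherit name resolution from the node the Pod runs on
  containers:
  - name: app
    image: busybox:1.36    # placeholder image
    command: ["sleep", "3600"]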
If you don’t want this, or if you want a different DNS config for pods, you can use the kubelet’s --resolv-conf flag. Set this flag to “” to prevent Pods from inheriting DNS. Set it to a valid file path to specify a file other than /etc/resolv.conf for DNS inheritance.
Known issues:
Some Linux distributions (e.g. Ubuntu) use a local DNS resolver by default (systemd-resolved). systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using the kubelet’s --resolv-conf flag to point to the correct resolv.conf (with systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm 1.11 automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
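A minimal sketch of that fix, assuming the kubelet reads a KubeletConfiguration file (the equivalent command-line form is --resolv-conf=/run/systemd/resolve/resolv.conf; how the file or flag gets wired in depends on the installer):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point the kubelet at the real upstream resolver list instead of the 127.0.0.53 stub
resolvConf: /run/systemd/resolve/resolv.conf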
Kubernetes installs do not configure the nodes’ resolv.conf files to use the cluster DNS by default, because that process is inherently distribution-specific. This should probably be implemented eventually.
Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS nameserver records and 6 DNS search records. Kubernetes needs to consume 1 nameserver record and 3 search records. This means that if a local installation already uses 3 nameservers or uses more than 3 searches, some of those settings will be lost. As a partial workaround, the node can run dnsmasq, which will provide more nameserver entries, but not more search entries. You can also use the kubelet’s --resolv-conf flag.
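To make those limits concrete, a node resolv.conf like the hypothetical one below already uses all 3 nameserver slots and 4 of the 6 search slots, so adding the cluster’s 1 nameserver and 3 search domains would exceed both limits and some entries would be dropped:

# Hypothetical node /etc/resolv.conf (placeholder addresses and domains)
nameserver 10.0.0.2
nameserver 10.0.0.3
nameserver 10.0.0.4
search corp.example.com dev.example.com lab.example.com example.com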
If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check here for more information.