Intermittent failure of K8S DNS resolver / dial udp / Operation cancelled


We have a medium-sized Kubernetes cluster in which approximately 70 pods connect to a single socket server. It works fine most of the time; however, from time to time one or two pods fail to resolve the cluster DNS name and the lookup times out with the following error:

    Error: dial tcp: lookup thishost.production.svc.cluster.local on 10.32.0.10:53: read udp 100.65.63.202:36638->100.64.209.61:53: i/o timeout

This is not the only service that fails intermittently; other services hit the same error from time to time. We used to ignore it, since it was random and rare, but in the case above it is very noticeable. The only solution is to kill the faulty pod (restarting it doesn't help).
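A way to confirm the failure independently of the application is to query the cluster DNS directly from inside an affected pod. A rough sketch, assuming nslookup is available in the image, with <faulty-pod> and <healthy-pod> as placeholders and the service name and 10.32.0.10 DNS IP taken from the error above:

    # Query the cluster DNS server directly from the failing pod; on a
    # faulty pod this times out the same way the application does.
    kubectl exec -it <faulty-pod> -- nslookup thishost.production.svc.cluster.local 10.32.0.10

    # The same lookup from a healthy pod returns immediately, which would
    # suggest a per-pod networking problem rather than the DNS server itself.
    kubectl exec -it <healthy-pod> -- nslookup thishost.production.svc.cluster.local 10.32.0.10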

Has anyone experienced this? Do you have any tips on how to debug or fix it?

It almost feels as if this is beyond our expertise and comes down entirely to the internals of the DNS resolver.

Kubernetes version: 1.23.4
Container network (CNI): Cilium

1 Answer

mohalahmad:

This issue is most probably related to the CNI. I would suggest following this guide to debug it: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
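In short, that guide boils down to running lookups from a throwaway debug pod and checking the resolver configuration and the cluster DNS logs; a rough sketch of the relevant commands (the dnsutils manifest URL is taken from that guide):

    # Start a small debug pod with nslookup/dig installed (manifest from the guide above)
    kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml

    # Check that basic in-cluster resolution works from that pod
    kubectl exec -i -t dnsutils -- nslookup kubernetes.default

    # Inspect the resolver configuration the pod actually received
    kubectl exec -i -t dnsutils -- cat /etc/resolv.conf

    # Look for errors in the cluster DNS (CoreDNS/kube-dns) logs
    kubectl logs --namespace=kube-system -l k8s-app=kube-dns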

To be able to help you, we need more information:

  1. Is this cluster on-premises or in the cloud?

  2. What are you using for the CNI?

  3. How many nodes are running, and are they all in the same subnet? If so, do they have other interfaces?

  4. Share the result of the command below:

    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o wide

  5. When you restart the pod to work around the issue temporarily, does it stay on the same node or move to a different one? (See the sketch below for how to check.)
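For point 5, a quick way to compare node placement is the NODE column of a wide listing; a small sketch (the production namespace is only an assumption based on the service name in the error):

    # Which node is the affected pod (and its replacement) scheduled on?
    kubectl get pods --namespace=production -o wide

    # Which nodes do the cluster DNS replicas run on?
    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o wide

    # Node list with internal IPs, to cross-check subnets and interfaces
    kubectl get nodes -o wide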