Kubernetes "no endpoints available for service \"kube-dns\""

So I have a three-node Kubernetes cluster running on three Raspberry Pis running HypriotOS. I haven't done anything to it since bootstrapping the master and joining the nodes, except for installing Weave Net. However, when I run kubectl cluster-info, I'm presented with two entries:

Kubernetes master is running at https://192.168.0.35:6443
KubeDNS is running at https://192.168.0.35:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

When I curl the second URL I get the following response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kube-dns\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
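
For anyone reproducing this: going through kubectl proxy (default port 8001) avoids the API server's TLS and auth flags and should return the same 503:

$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns/proxy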

Here's some more information about the state of my cluster.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}


$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-node01                             1/1       Running            0          13d
kube-system   kube-apiserver-node01                   1/1       Running            21         13d
kube-system   kube-controller-manager-node01          1/1       Running            5          13d
kube-system   kube-dns-2459497834-v1g4n               3/3       Running            43         13d
kube-system   kube-proxy-1hplm                        1/1       Running            0          5h
kube-system   kube-proxy-6bzvr                        1/1       Running            0          13d
kube-system   kube-proxy-cmp3q                        1/1       Running            0          6d
kube-system   kube-scheduler-node01                   1/1       Running            8          13d
kube-system   weave-net-5cq9c                         2/2       Running            0          6d
kube-system   weave-net-ff5sz                         2/2       Running            4          13d
kube-system   weave-net-z3nq3                         2/2       Running            0          5h


$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1        <none>        443/TCP         13d
kube-system   kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   13d


$ kubectl --namespace kube-system describe pod kube-dns-2459497834-v1g4n
Name:           kube-dns-2459497834-v1g4n
Namespace:      kube-system
Node:           node01/192.168.0.35
Start Time:     Wed, 23 Aug 2017 20:34:56 +0000
Labels:         k8s-app=kube-dns
                pod-template-hash=2459497834
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-2459497834","uid":"37640de4-8841-11e7-ad32-b827eb0a...
                scheduler.alpha.kubernetes.io/critical-pod=
Status:         Running
IP:             10.32.0.2
Created By:     ReplicaSet/kube-dns-2459497834
Controlled By:  ReplicaSet/kube-dns-2459497834
Containers:
  kubedns:
    Container ID:       docker://9a781f1fea4c947a9115c551e65c232d5fe0aa2045e27e79eae4b057b68e4914
    Image:              gcr.io/google_containers/k8s-dns-kube-dns-arm:1.14.4
    Image ID:           docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-arm@sha256:ac677e54bef9717220a0ba2275ba706111755b2906de689d71ac44bfe425946d
    Ports:              10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:              Running
      Started:          Tue, 29 Aug 2017 19:09:10 +0000
    Last State:         Terminated
      Reason:           Error
      Exit Code:        137
      Started:          Tue, 29 Aug 2017 17:07:49 +0000
      Finished:         Tue, 29 Aug 2017 19:09:08 +0000
    Ready:              True
    Restart Count:      18
    Limits:
      memory:   170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rf19g (ro)
  dnsmasq:
    Container ID:       docker://f8e17df36310bc3423a74e3f6989204abac9e83d4a8366561e54259418030a50
    Image:              gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm:1.14.4
    Image ID:           docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-arm@sha256:a7469e91b4b20f31036448a61c52e208833c7cb283faeb4ea51b9fa22e18eb69
    Ports:              53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:              Running
      Started:          Tue, 29 Aug 2017 19:09:52 +0000
    Last State:         Terminated
      Reason:           Error
      Exit Code:        137


$ kubectl --namespace kube-system describe svc kube-dns
Name:           kube-dns
Namespace:      kube-system
Labels:         k8s-app=kube-dns
            kubernetes.io/cluster-service=true
            kubernetes.io/name=KubeDNS
Annotations:        <none>
Selector:       k8s-app=kube-dns
Type:           ClusterIP
IP:         10.96.0.10
Port:           dns 53/UDP
Endpoints:      10.32.0.2:53
Port:           dns-tcp 53/TCP
Endpoints:      10.32.0.2:53
Session Affinity:   None
Events:         <none>

I cannot figure out what is happening here, since I haven't done anything other than follow the instructions here. The issue has persisted across multiple versions of Kubernetes as well as multiple network overlays, including Flannel, so I'm beginning to think it's some issue with the Raspberry Pis themselves.

There is 1 answer, from fishi0x01:

UPDATE: The assumption below is not a complete explanation for this error message. The proxy API documentation states:

Create Connect Proxy

connect GET requests to proxy of Pod

GET /api/v1/namespaces/{namespace}/pods/{name}/proxy

The question now is what "connect GET requests to proxy of Pod" means exactly, but I strongly believe it means the GET request is forwarded to the pod, which would mean the assumption below is correct.

I checked other services not designed for HTTP traffic and they all yield this error message, whereas services designed for HTTP traffic work fine (e.g., /api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy).
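
To see the difference yourself, you can compare the two cases through kubectl proxy. A minimal sketch, assuming the dashboard addon is actually installed and kubectl proxy runs on its default port 8001:

$ kubectl proxy &
# HTTP-speaking service: the apiserver forwards the GET and returns the dashboard page
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
# DNS-speaking service: the forwarded GET cannot be answered, so you get the 503 from the question
$ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-dns/proxy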


I believe this is normal behavior and nothing to worry about. If you look at the kube-dns service object inside your cluster, you can see that it only exposes internal endpoints on port 53, which is the standard DNS port, so I assume the kube-dns service only responds properly to actual DNS queries. With curl you are making a plain HTTP GET request against this service, which is expected to produce an error response.
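
You can convince yourself the service is fine by speaking its actual protocol instead of HTTP. A quick check, assuming you run it from one of the nodes (or from a pod) that can reach the ClusterIP and has dig installed:

$ dig @10.96.0.10 kubernetes.default.svc.cluster.local +short
10.96.0.1

An answer of 10.96.0.1 (the ClusterIP of your kubernetes service) means kube-dns is serving DNS queries just fine, even though plain HTTP GETs against it fail.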

Judging from the cluster info you posted, all your pods look healthy, and I expect your service endpoints are exposed properly as well. You can check that via kubectl get ep kube-dns --namespace=kube-system, which should yield something like this:

$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                                                         AGE
kube-dns   100.101.26.65:53,100.96.150.198:53,100.101.26.65:53 + 1 more...   20d
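
In your cluster the output should list 10.32.0.2:53 for both UDP and TCP, matching the Endpoints lines in your describe svc output above.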

On my clusters (k8s 1.7.3) a curl GET to /api/v1/namespaces/kube-system/services/kube-dns/proxy also produces the error message you mention, yet I have never had a DNS issue, so I am fairly confident the assumption above is correct.
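
If you ever want to rule out a genuine DNS problem, resolving a service name from inside the cluster is a more meaningful test than the proxy URL. A sketch, assuming the busybox image runs on your ARM nodes (the pod name is arbitrary):

$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default
# Working cluster DNS answers with 10.96.0.1, the ClusterIP of the
# kubernetes service from your get svc output above.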