Metrics-Server: Node had no addresses that matched types [InternalIP]


I'm using Rancher 2.5.8 to manage my Kubernetes clusters. Today I created a new cluster, and everything worked as expected except the metrics-server. Its status is always "CrashLoopBackOff", and the logs show the following:

E0519 11:46:39.225804       1 server.go:132] unable to fully scrape metrics: [unable to fully scrape metrics from node worker1: unable to fetch metrics from node worker1: unable to extract connection information for node "worker1": node worker1 had no addresses that matched types [InternalIP], unable to fully scrape metrics from node worker2: unable to fetch metrics from node worker2: unable to extract connection information for node "worker2": node worker2 had no addresses that matched types [InternalIP], unable to fully scrape metrics from node worker3: unable to fetch metrics from node worker3: unable to extract connection information for node "worker3": node worker3 had no addresses that matched types [InternalIP], unable to fully scrape metrics from node main1: unable to fetch metrics from node main1: unable to extract connection information for node "main1": node main1 had no addresses that matched types [InternalIP]]
I0519 11:46:39.228205       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0519 11:46:39.228222       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0519 11:46:39.228290       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0519 11:46:39.228301       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0519 11:46:39.228310       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0519 11:46:39.228314       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0519 11:46:39.229241       1 secure_serving.go:197] Serving securely on [::]:4443
I0519 11:46:39.229280       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0519 11:46:39.229302       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0519 11:46:39.328399       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0519 11:46:39.328428       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0519 11:46:39.328505       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController

Does anyone have an idea how I can solve this issue so that the metrics-server stops crashing?

Here's the output of kubectl get nodes worker1 -oyaml:

status:
  addresses:
  - address: worker1
    type: Hostname
  - address: 65.21.<any>.<ip>
    type: ExternalIP

1 Answer

Answered by Matt (Best Answer)

The issue was with the metrics server.

The metrics server was configured with kubelet-preferred-address-types=InternalIP, but the worker node didn't have any InternalIP listed:

$ kubectl get nodes worker1 -oyaml
[...]
status:
  addresses:
  - address: worker1
    type: Hostname
  - address: 65.21.<any>.<ip>
    type: ExternalIP
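
A quick way to see which address types each node reports is a query along these lines (just a diagnostic sketch using standard kubectl output formatting); on this cluster it should list only Hostname and ExternalIP for every node:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,ADDRESS_TYPES:.status.addresses[*].type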

The solution was to set --kubelet-preferred-address-types=ExternalIP in the metrics server deployment YAML.
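
In practice that means editing the args of the metrics-server container. Assuming the default deployment name metrics-server in the kube-system namespace (other args omitted, so treat this as a sketch rather than the exact manifest Rancher deploys):

$ kubectl -n kube-system edit deployment metrics-server

spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        # ...other args left unchanged
        - --kubelet-preferred-address-types=ExternalIP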

But probably a better solution would be to configure it as in the official metrics server deployment YAML (source):

- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
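
If editing the manifest by hand isn't convenient, the same change can be applied with a JSON patch; the args index below (2) is only a placeholder and has to match wherever the flag actually sits in your container's args list:

$ kubectl -n kube-system patch deployment metrics-server --type=json \
    -p='[{"op":"replace","path":"/spec/template/spec/containers/0/args/2","value":"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"}]'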

As stated in the metrics-server configuration docs:

--kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
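
Once the metrics-server pod restarts with the new flag, node metrics should start flowing again; a simple way to verify is:

$ kubectl top nodes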