Prometheus Adapter Helm Chart Unauthorized


In our EKS cluster we have Prometheus installed via the prometheus-community Helm chart. We wanted to start feeding its metrics to an HPA so we can scale on CPU, memory, and network activity (the last of which is not available from the default metrics server).
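For context, this is the sort of HPA we are aiming for. It is only a sketch: the deployment name web and the metric packets_per_second are placeholders, and the custom metric will only exist once adapter rules are configured.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # CPU comes from the resource metrics API (served by metrics-server)
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    # Network activity comes from custom.metrics.k8s.io (served by prometheus-adapter)
    - type: Pods
      pods:
        metric:
          name: packets_per_second
        target:
          type: AverageValue
          averageValue: "1k"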

After running the helm install command

helm install custom-metrics prometheus-community/prometheus-adapter --namespace monitoring --set prometheus.url=http://prometheus-operated.monitoring.svc --set hostNetwork.enabled=true

it completes successfully, and after a few minutes kubectl get apiservices shows the new API service as Available (True).
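The adapter's entry can also be inspected directly; v1beta1.custom.metrics.k8s.io is the group/version this chart registers by default:

kubectl get apiservice v1beta1.custom.metrics.k8s.io

kubectl describe apiservice v1beta1.custom.metrics.k8s.io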

The issue that I'm experiencing is that running any kubectl command such as

kubectl get pods -n monitoring

the first line always shows

E0726 13:18:37.044670   28536 memcache.go:287] couldn't get resource list for custom.metrics.k8s.io/v1beta1: Unauthorized

followed by the expected output of the get pods command.
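To narrow down where the Unauthorized comes from, I have also been tailing the adapter's own logs (the deployment name below matches my Helm release; adjust if yours differs):

kubectl logs -n monitoring deploy/custom-metrics-prometheus-adapter --tail=50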

When I try to run kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 -n monitoring I get the following error message:

Error from server (NotFound): the server could not find the requested resource

Does anyone know what configuration changes I need to make in order to set up my HPA? I know custom rules need to be created before the HPA can consume the metrics, but I plan on setting those up once the adapter is running (see the sketch below).
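For reference, the rules I intend to add would go under rules.custom in the chart values. Here is a sketch for a per-pod network metric, following the adapter's rule format (the series and query are illustrative, not final):

rules:
  custom:
    - seriesQuery: 'container_network_receive_bytes_total{namespace!="",pod!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^(.*)_total$"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'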

I've tried setting hostNetwork.enabled to true and also leaving it at its default of false; the result is the same either way.

Here is the output of my kubectl get svc command:

NAME                                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                            ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   21d
custom-metrics-prometheus-adapter                ClusterIP   172.20.67.162    <none>        443/TCP                      82m
kube-prometheus-stack-alertmanager               ClusterIP   172.20.196.135   <none>        9093/TCP                     21d
kube-prometheus-stack-grafana                    ClusterIP   172.20.175.12    <none>        80/TCP                       21d
kube-prometheus-stack-kube-state-metrics         ClusterIP   172.20.20.63     <none>        8080/TCP                     21d
kube-prometheus-stack-operator                   ClusterIP   172.20.145.105   <none>        443/TCP                      21d
kube-prometheus-stack-prometheus                 ClusterIP   172.20.178.62    <none>        9090/TCP                     21d
kube-prometheus-stack-prometheus-node-exporter   ClusterIP   172.20.217.223   <none>        9100/TCP                     21d
prometheus-operated                              ClusterIP   None             <none>        9090/TCP                     21d

1 Answer

Answer by Saifeddine Rajhi:

custom.metrics.k8s.io/v1beta1 is not one of the standard Kubernetes APIs. The Kubernetes API can be extended through the aggregation layer: an APIService object registers an extension server (here, the prometheus-adapter), and its resources then become available to kubectl and other tools.

So, can you try listing all API services:

kubectl get apiservice

All values in the AVAILABLE column should be True.

You should see something like:

v1beta1.metrics.k8s.io      kube-system/metrics-server                   True        252d
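and, once the adapter is registered, an entry along these lines for it (the service name will match your Helm release):

v1beta1.custom.metrics.k8s.io   monitoring/custom-metrics-prometheus-adapter   True   82m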

You might want to check whether the metrics-server in the kube-system namespace is crashing, with something like:

kubectl get deployments -n kube-system | grep metrics-server

kubectl describe deployment [metrics deployment name] -n kube-system

or

kubectl get pods -n kube-system | grep metrics-server
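The same checks apply to the adapter itself in the monitoring namespace:

kubectl get pods -n monitoring | grep prometheus-adapter

kubectl logs [adapter pod name] -n monitoring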

You can also install metrics-server as described in the AWS documentation.
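At the time of writing, the AWS documentation installs it from the upstream manifest:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml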