Micrometer-exposed actuator metrics vs kube-state-metrics vs metrics-server to set pod requests/limits


Micrometer exposing actuator metrics to set requests/limits for pods in K8s, vs metrics-server, vs kube-state-metrics -> the K8s Mixin dashboards from the kube-prometheus-stack Grafana chart: it is really blurry and frustrating to me why there is such a big difference between the values from the three sources in the title, how one should use the K8s Mixin to set proper requests/limits, and whether that difference is expected at all.

I was hoping I could simply match the data I see when I run kubectl top pod <podname> --containers to what I see when I open the K8s -> Compute Resources -> Pod dashboard in Grafana. But not only do the values differ by more than double, the values reported by the actuator differ from both. When exposing the Spring metrics with Micrometer, the sum of jvm_memory_used_bytes corresponds more closely to what I get from metrics-server (0.37.0) than to what I see in Grafana on the mixin dashboards, but it is still far off.

My setup: Kubernetes 1.14.3 on Ubuntu 18.04 LTS managed by kubespray, kube-prometheus-stack 9.4.4 installed with Helm 2.14.3, and Spring Boot 2.0 with Micrometer.

I saw the explanation in the metrics-server repo that its value is the one the kubelet uses for OOMKill decisions, but that alone does not help me: what should I do with the dashboard? What is the right way to handle this?
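For reference, this is roughly how I am trying to use those numbers. Everything in the snippet below (names, image, sizes, heap percentage) is a placeholder, not my real deployment; the idea is that the memory limit is sized from the container working set that metrics-server / kubectl top report, with headroom above the JVM-managed memory, since jvm_memory_used_bytes only covers heap and non-heap areas, not the whole process.

```yaml
# Sketch with hypothetical names/values: sizing a Spring Boot container from
# the working-set numbers reported by metrics-server / `kubectl top`, not from
# jvm_memory_used_bytes alone (which only covers JVM-managed memory).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: app
          image: example/demo-app:latest   # hypothetical image
          env:
            # Cap the heap well below the container limit so that heap +
            # metaspace + threads + direct buffers still fit inside it.
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=60.0"
          resources:
            requests:
              memory: "512Mi"   # roughly the observed steady-state working set
              cpu: "250m"
            limits:
              memory: "768Mi"   # observed peak working set + headroom; per the
                                # metrics-server docs this is what the kubelet
                                # watches for OOMKill, not the JVM heap
              cpu: "1"
```

The exact megabyte values and the 60% heap cap are placeholders; the open question for me is which of the three reported numbers the limit should actually be derived from.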


1 Answer

Answered by anVzdGFub3RoZXJodW1hbg

Based on what I have seen so far, I found the root cause: the kubelet service left over from the old chart had to be renamed to the new name so that it can be targeted by the ServiceMonitors. So for me the best solution is the Grafana kube-state-metrics dashboards, combined with comparing against what I see in the JVM dashboard.
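For anyone hitting the same gap: the kubelet ServiceMonitor that kube-prometheus-stack creates looks roughly like the sketch below. The namespace and label values here are assumptions and depend on your chart release name, so treat it as an illustration only; the point is that the kubelet Service must carry the labels this selector expects, which is what the rename fixed for me.

```yaml
# Rough sketch of a kubelet ServiceMonitor as created by kube-prometheus-stack;
# namespace and label values are assumptions that depend on the Helm release.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet
  namespace: monitoring                # assumed namespace
  labels:
    release: kube-prometheus-stack     # assumed label the Prometheus operator selects on
spec:
  namespaceSelector:
    matchNames:
      - kube-system
  selector:
    matchLabels:
      k8s-app: kubelet                 # the kubelet Service must carry this label
  endpoints:
    - port: https-metrics
      scheme: https
      path: /metrics
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true
    - port: https-metrics
      scheme: https
      path: /metrics/cadvisor          # cAdvisor metrics that feed the mixin dashboards
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true
```

Once the kubelet/cAdvisor endpoints are actually scraped, the mixin's Compute Resources dashboards get populated from the cAdvisor container memory metrics, and you can put them next to Micrometer's jvm_memory_used_bytes; the remaining gap is roughly the memory the JVM metric does not cover (thread stacks, direct buffers, native allocations) plus whatever page cache the container metric counts.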