I'm working on memory monitoring using Prometheus (the prometheus-operator Helm chart). While investigating the values I've noticed that memory usage (`container_memory_working_set_bytes`) is being scraped from two endpoints:
`/metrics/cadvisor`
`/metrics/resource/v1alpha1` (`/metrics/resource` from Kubernetes 1.18)
I've figured out how to disable one of the endpoints in the chart but I'd like to understand the purpose of both.
I understand that `/metrics/cadvisor` returns three kinds of values: the pod's container (or more, if a pod has multiple containers), a special container "POD" (is it some internal memory usage needed to run the pod itself?), and a sum of all containers (that result carries the empty label `container=""`). On the other hand, `/metrics/resource/v1alpha1` returns only the memory usage of a pod's containers (without `container="POD"` and without the `container=""` sum).
Is `/metrics/resource/v1alpha1` then planned to replace `/metrics/cadvisor` as a single source of metrics?
Seeing that both endpoints (both are enabled by default in prometheus-operator) return the same metrics, any `sum()` queries can return values twice as big as the real memory usage.
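To illustrate the double counting: as far as I can tell, the chart's kubelet scrape config adds a `metrics_path` label to each target via relabeling (an assumption about this chart's configuration; check the labels on your own `up{job="kubelet"}` series), so the sum can be restricted to a single endpoint like this:

```promql
# Counts every container's memory twice while both endpoints are scraped:
sum(container_memory_working_set_bytes{namespace="default"})

# Restrict to one endpoint via the metrics_path label added by relabeling:
sum(container_memory_working_set_bytes{namespace="default", metrics_path="/metrics/cadvisor"})
```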
Appreciate any clarification in this subject!
Answer (partial)
`container_name="POD"` is the "pause" container for the pod. The pause container holds the network namespace for the pod: Kubernetes creates it to acquire the pod's IP address and to set up the network namespace for all the other containers that join that pod. It is part of the whole ecosystem and starts first in a pod, configuring the pod's network before the other containers are scheduled. Once the pod is running, there is nothing left for the pause container to do. Pause container code for your reference: https://github.com/kubernetes/kubernetes/tree/master/build/pause
Filtering with `container_name!="POD"` drops the metric streams for the pause container, not metadata generally. Most people who want to graph the containers in their pod don't want to see resource usage for the pause container, as it doesn't do much. The name of the pause container is an implementation detail of some container runtimes, doesn't apply to all of them, and isn't guaranteed to stick around. The official (obsolete, v1.14) page shows the differences between cAdvisor and resource-metrics monitoring:
Kubelet
cAdvisor
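Putting that together, here is a sketch of a per-container memory query that excludes both the pause container and the pod-level aggregation series (on recent Kubernetes versions the label is `container` rather than `container_name`; adjust to whatever your scrape actually exposes):

```promql
container_memory_working_set_bytes{container!="POD", container!=""}
```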
Also, you should know that the kubelet exposes metrics on the /metrics/cadvisor, /metrics/resource and /metrics/probes endpoints. Those three endpoints do not have the same lifecycle.
As per the Helm prometheus values.yaml, there are 3 options and you can disable what you don't need:
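For example, a sketch of the relevant keys in the chart's values.yaml (key paths are my assumption from the kube-prometheus-stack layout; verify them against your chart version):

```yaml
kubelet:
  serviceMonitor:
    cAdvisor: true   # scrape /metrics/cadvisor
    resource: false  # scrape /metrics/resource (disabled here to avoid double counting)
    probes: true     # scrape /metrics/probes
```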
My opinion
`/metrics/resource` won't replace Google's cAdvisor. Just disable whatever you don't need in your case; it depends on your requirements. For example, I found an article, Kubernetes: monitoring with Prometheus – exporters, a Service Discovery, and its roles, where 4 different tools are used to monitor everything:
metrics-server – CPU, memory, file descriptors, disks, etc. of the cluster
cAdvisor – Docker daemon metrics – container monitoring
kube-state-metrics – deployments, pods, nodes
node-exporter: EC2 instances metrics – CPU, memory, network
In your case, to monitor memory, I believe one of them will be enough :)