2023-12-07 09:47:00.000 container_memory_working_set_bytes{
container="my-app",
endpoint="https-metrics",
id="/kubepods/burstable/podb7ecf824-6797-4183-9566-434142df3757/e5c6aec2cc6cf7abbd2163cd195b181954df28fc134478e2cfd27283c2a7838f",
image="my-image",
instance="172.22.106.15:10250",
job="kubelet",
metrics_path="/metrics/cadvisor",
name="571e004d52fa4ed1c4b3eacef27feb844bfaf1d7e6b8bdbcdae1e35bce9f4b83",
namespace="my-app",
node="node2",
pod="web-5dfd4896b4-x4nsr"
} 915144704
2023-12-07 09:47:00.000 container_memory_working_set_bytes{
container="my-app",
endpoint="https-metrics",
id="/kubepods/burstable/podb7ecf824-6797-4183-9566-434142df3757/fcce62cda4d8300c82f4552ddd069b59e8de30c31ece187a6349fe786f182e7a",
image="my-image",
instance="172.22.106.15:10250",
job="kubelet",
metrics_path="/metrics/cadvisor",
name="326d2a900bcbd9bd983e128e39b3391dcf37469dd863c19b222a834adc59400d",
namespace="my-app",
node="node2",
pod="web-5dfd4896b4-x4nsr"
} 799879168
Is the used memory the sum of these two, 915144704 + 799879168, or the maximum of the two, max(915144704, 799879168)?
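For reference, the two interpretations written as PromQL (a sketch; the label filters are taken from the samples above):

# Interpretation 1: sum of the duplicate series
# 915144704 + 799879168 = 1715023872 (~1.6 GiB)
sum by (pod) (container_memory_working_set_bytes{namespace="my-app", pod="web-5dfd4896b4-x4nsr", container="my-app"})

# Interpretation 2: maximum of the duplicate series
# max(915144704, 799879168) = 915144704 (~873 MiB)
max by (pod) (container_memory_working_set_bytes{namespace="my-app", pod="web-5dfd4896b4-x4nsr", container="my-app"})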
Related: google/cadvisor - Duplicated metrics for restarted Pods
Based on the linked issue (specifically the graphs in it), it looks like staleness handling (or rather, a not-quite-correct behaviour of it) is at play.
Most likely the former metric (the one with the name ending in _7 in your case) is simply a leftover and should be ignored, and the correct amount of memory used is simply 799879168 (the value of the latter metric). I'm not familiar enough with cadvisor to explain this behaviour.
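If you want a query that automatically picks the live container rather than the stale leftover, one possible sketch is to keep only the series whose id belongs to the most recently started container. This assumes container_start_time_seconds is scraped from the same cadvisor endpoint and carries the same id label, which it normally does:

# Keep the working-set series whose id matches the container
# with the newest start time (its value is a Unix timestamp,
# so topk(1, ...) selects the most recently started container).
container_memory_working_set_bytes{namespace="my-app", pod="web-5dfd4896b4-x4nsr", container="my-app"}
  and on (id)
topk(1, container_start_time_seconds{namespace="my-app", pod="web-5dfd4896b4-x4nsr", container="my-app"})

With the data above this should return the latter series (799879168), since the leftover container has the older start time.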