Why is pprof heap inuse_space less than container_memory_working_set_bytes?

I found in Grafana that my pod <***-qkcdl> occupied about 1.0G of container_memory_working_set_bytes and 1.4G of container_memory_rss:

Pod memory usage in Grafana

container_memory_rss of pod (max / avg / current)

My queries for container_memory_working_set_bytes and container_memory_rss are:

container_memory_working_set_bytes{k8s_cluster="$cluster", namespace="$dept", pod=~'$pod', container=~"$container"}

container_memory_rss{k8s_cluster="$cluster", namespace="$dept", pod=~'$pod', container=~"$container"}

Then when I check the heap with pprof (inuse_space), it shows:

go tool pprof --inuse_space http://{pod_ip}:8899/debug/pprof/heap
Fetching profile over HTTP from http://{pod_ip}:8899/debug/pprof/heap
Saved profile in {local_path}
File: {app}
Type: inuse_space
Time: Oct 15, 2021 at 6:38pm (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
(pprof) top10
Showing nodes accounting for 335.36MB, 91.58% of 366.19MB total
Dropped 195 nodes (cum <= 1.83MB)
Showing top 10 nodes out of 77
...
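
For context, the /debug/pprof/heap endpoint queried above is presumably exposed by importing net/http/pprof somewhere in the application. A minimal sketch of that wiring (the port 8899 is taken from the command above; everything else here is assumed, not code from the actual app):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the default mux (and with it the pprof endpoints) on :8899,
	// the port the heap profile was fetched from above.
	log.Fatal(http.ListenAndServe(":8899", nil))
}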

So why does my Golang application use only 335.36MB of heap space while Grafana shows about 1.0G of working_set_bytes and 1.4G of RSS? What do "335.36MB", "1.0G" and "1.4G" each mean, and why do they differ?
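
To narrow down what those numbers refer to, one thing that could be added to the app is a dump of the Go runtime's own accounting next to the pprof result. A rough sketch (assumed code, not from the actual app): HeapAlloc/HeapInuse should be close to what pprof's inuse_space reports, while HeapIdle, goroutine stacks and other runtime-owned memory never show up in the heap profile but still count towards RSS.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	mib := func(b uint64) uint64 { return b / 1024 / 1024 }

	// Live heap objects -- roughly what pprof inuse_space adds up to.
	fmt.Printf("HeapAlloc  = %d MiB\n", mib(m.HeapAlloc))
	fmt.Printf("HeapInuse  = %d MiB\n", mib(m.HeapInuse))
	// Memory the runtime keeps around but the heap profiler does not count.
	fmt.Printf("HeapIdle   = %d MiB (released back to OS: %d MiB)\n",
		mib(m.HeapIdle), mib(m.HeapReleased))
	fmt.Printf("StackInuse = %d MiB\n", mib(m.StackInuse))
	// Total virtual memory the Go runtime has obtained from the OS.
	fmt.Printf("Sys        = %d MiB\n", mib(m.Sys))
}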

PS: I know what the metrics mean, but the definitions alone don't explain the gap:

container_memory_rss: The amount of anonymous and swap cache memory (includes transparent hugepages).

container_memory_working_set_bytes: The amount of working set memory; this includes recently accessed memory, dirty memory, and kernel memory. Working set is <= "usage".
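
For reference, the way these two cgroup-level metrics are derived can be sketched roughly as below (assuming cgroup v1 and the default mount point; file names differ on cgroup v2): rss comes from total_rss in memory.stat, and working_set is usage_in_bytes minus total_inactive_file. This is illustration only, not code from the cluster.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// memory.usage_in_bytes and memory.stat are the cgroup-v1 files the metrics come from.
	usage := readUint("/sys/fs/cgroup/memory/memory.usage_in_bytes")
	stat := readStat("/sys/fs/cgroup/memory/memory.stat")

	rss := stat["total_rss"]                          // -> container_memory_rss
	workingSet := usage - stat["total_inactive_file"] // -> container_memory_working_set_bytes

	fmt.Printf("usage=%d rss=%d working_set=%d\n", usage, rss, workingSet)
}

func readUint(path string) uint64 {
	b, err := os.ReadFile(path)
	if err != nil {
		return 0
	}
	v, _ := strconv.ParseUint(strings.TrimSpace(string(b)), 10, 64)
	return v
}

// readStat parses "key value" lines such as total_rss or total_inactive_file.
func readStat(path string) map[string]uint64 {
	out := map[string]uint64{}
	f, err := os.Open(path)
	if err != nil {
		return out
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 {
			v, _ := strconv.ParseUint(fields[1], 10, 64)
			out[fields[0]] = v
		}
	}
	return out
}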
