Grafana is showing the last 3 days of metrics instead of the full month


I have a Prometheus server scraping a number of hosts and running a thanos-sidecar configured against an S3 bucket. Metrics are uploaded without errors; on my test S3 bucket the earliest I see them is mid-September.
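For context, the sidecar reads its bucket settings from an objstore config file. A minimal sketch assuming plain AWS S3; the bucket name, region endpoint, and credentials below are placeholders:

    type: S3
    config:
      bucket: "my-metrics-bucket"                # placeholder bucket name
      endpoint: "s3.eu-central-1.amazonaws.com"  # placeholder region endpoint
      access_key: "AKIA..."                      # placeholder credentials
      secret_key: "..."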

The Thanos server, which is a separate machine, is running the following (a launch sketch follows the list):

  • grafana (default data source: thanos-query at localhost:29090)
  • thanos-store (it's able to read the S3 bucket)
  • thanos-query (pointed at thanos-store)
  • thanos-compactor as a cronjob (in case it matters: --retention.resolution-raw=3d --retention.resolution-5m=14d --retention.resolution-1h=90d)
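For reference, a minimal sketch of how the two server-side query components might be launched. All paths and the store port are placeholders (the question only states that thanos-query listens on localhost:29090), and on older Thanos releases --endpoint was called --store:

    # thanos-store: exposes the blocks in the S3 bucket over gRPC
    thanos store \
      --data-dir=/var/thanos/store \
      --objstore.config-file=/etc/thanos/objstore.yml \
      --grpc-address=0.0.0.0:10905

    # thanos-query: fans queries out to the store over gRPC and
    # serves a Prometheus-compatible API on :29090 for Grafana
    thanos query \
      --http-address=0.0.0.0:29090 \
      --endpoint=localhost:10905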

I have set the Prometheus retention to 3 days, which should determine when Prometheus clears up its local storage.
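For what it's worth, this is roughly how that retention looks on the Prometheus side. A hedged sketch with placeholder paths; note that the Thanos docs also require local compaction to be disabled (min and max block duration both set to 2h) so the sidecar can upload complete blocks:

    prometheus \
      --config.file=/etc/prometheus/prometheus.yml \
      --storage.tsdb.retention.time=3d \
      --storage.tsdb.min-block-duration=2h \
      --storage.tsdb.max-block-duration=2h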

When I run a simple test with node-exporter uptime statistics over 30 days, Grafana shows just the last 3 days of history. Grafana queries thanos-query, which should pull the older metrics via thanos-store from the S3 bucket and show me all the data in the bucket.
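One way to take Grafana out of the picture is to query thanos-query's Prometheus-compatible HTTP API directly and see how far back the data actually goes. A sketch assuming GNU date and the standard node-exporter metric node_boot_time_seconds:

    # ask thanos-query for 30 days of data, bypassing Grafana
    curl -s 'http://localhost:29090/api/v1/query_range' \
      --data-urlencode 'query=node_boot_time_seconds' \
      --data-urlencode "start=$(date -d '30 days ago' +%s)" \
      --data-urlencode "end=$(date +%s)" \
      --data-urlencode 'step=1h'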

I most likely missed a piece of configuration in the stack.


1 Answer

Answered by Víctor Oriol

It's been a while, but in case anyone stops by... the essential configuration would be:

  1. thanos-sidecar: point it at Prometheus with the parameter --prometheus.url.
  2. thanos-query:
  • Point it at thanos-sidecar and thanos-store with the parameter --endpoint.
  • To show the full "retention.resolution" range of metrics in Grafana, it is very important to set the parameter --query.auto-downsampling (see the sketch after this list).
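Put together, a minimal sketch of both invocations. Hostnames, ports, and paths are placeholders (10901 is the sidecar's default gRPC port), and --endpoint replaces --store on recent Thanos releases:

    # thanos-sidecar: attaches to the local Prometheus and uploads blocks
    thanos sidecar \
      --prometheus.url=http://localhost:9090 \
      --tsdb.path=/var/prometheus/data \
      --objstore.config-file=/etc/thanos/objstore.yml

    # thanos-query: registers both the sidecar and the store, and
    # automatically picks downsampled data for long time ranges
    thanos query \
      --http-address=0.0.0.0:29090 \
      --endpoint=sidecar-host:10901 \
      --endpoint=store-host:10905 \
      --query.auto-downsampling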

Without --query.auto-downsampling it's not possible to get the entire range of data: with --retention.resolution-raw=3d the compactor deletes raw data older than 3 days, so anything older exists only at the 5m and 1h resolutions, and a querier that reads only raw resolution will show just the last 3 days.