I have a Prometheus server scraping a number of hosts and running a thanos-sidecar configured against an S3 bucket. Metrics are uploaded without errors; on my test S3 bucket I can see data going back to mid-September.
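For context, here is a minimal sketch of the sidecar side. Bucket name, endpoint, credentials, and paths are placeholders, not my real values:

```sh
# bucket.yml (objstore config handed to the sidecar; placeholder values):
#   type: S3
#   config:
#     bucket: "my-test-bucket"
#     endpoint: "s3.amazonaws.com"
#     access_key: "XXX"
#     secret_key: "XXX"

# The sidecar runs next to Prometheus and uploads completed TSDB blocks:
thanos sidecar \
  --tsdb.path /var/lib/prometheus \
  --prometheus.url http://localhost:9090 \
  --objstore.config-file bucket.yml
```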
The Thanos server, which is a separate machine, is running the following (a rough sketch of the commands appears after this list):
- grafana (default data source: thanos-query at localhost:29090)
- thanos-store (it is able to read the S3 bucket)
- thanos-query (pointed at thanos-store)
- thanos-compactor as a cronjob (just in case it matters: --retention.resolution-raw=3d --retention.resolution-5m=14d --retention.resolution-1h=90d)
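Roughly how those pieces are wired together; the addresses are illustrative (the query HTTP port matching the localhost:29090 that Grafana points at), not a verified config dump:

```sh
# thanos-store exposes the bucket's blocks over gRPC
thanos store \
  --objstore.config-file bucket.yml \
  --grpc-address 0.0.0.0:10901

# thanos-query fans out to the store and serves the API Grafana talks to
thanos query \
  --http-address 0.0.0.0:29090 \
  --store localhost:10901

# compactor invoked from cron; without --wait it exits after one pass,
# applying the retention flags from the list above
thanos compact \
  --objstore.config-file bucket.yml \
  --retention.resolution-raw=3d \
  --retention.resolution-5m=14d \
  --retention.resolution-1h=90d
```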
I have set the Prometheus retention to 3 days, which should determine when Prometheus clears up its local storage.
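By retention I mean the standard Prometheus flag, i.e. something like:

```sh
# Prometheus 2.7+; older versions use --storage.tsdb.retention instead
prometheus \
  --config.file /etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=3d
```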
When I run a simple test with node-exporter uptime statistics over 30 days, Grafana shows just the last 3 days of history. Grafana queries thanos-query, which should pull the metrics from the S3 bucket via thanos-store and show me all the data in the bucket.
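The test can be reproduced outside Grafana directly against the thanos-query HTTP API; a sketch, assuming GNU date and the metric name used by recent node-exporter versions (older ones call it node_boot_time):

```sh
# 30-day range query for node-exporter uptime against thanos-query;
# only the last 3 days of samples come back
curl -G 'http://localhost:29090/api/v1/query_range' \
  --data-urlencode 'query=time() - node_boot_time_seconds' \
  --data-urlencode "start=$(date -d '30 days ago' +%s)" \
  --data-urlencode "end=$(date +%s)" \
  --data-urlencode 'step=1h'
```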
I most likely missed a piece of configuration in the stack.
It's been a while, but in case anyone stops by... the essential configuration is the --query.auto-downsampling flag on thanos-query.
Without --query.auto-downsampling it's not possible to get the entire range of data: by default thanos-query serves raw-resolution data only, and with --retention.resolution-raw=3d the compactor deletes raw blocks older than 3 days from the bucket, so the older history exists only in the 5m/1h downsampled blocks that the queries never touched.
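Concretely, that means adding the flag to the thanos-query invocation from above:

```sh
# with auto-downsampling, queries whose range exceeds the 3d raw
# retention fall back to the 5m/1h blocks the compactor produced
thanos query \
  --http-address 0.0.0.0:29090 \
  --store localhost:10901 \
  --query.auto-downsampling
```

With that in place, the same 30-day uptime panel shows the full history available in the bucket.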