I have defined a custom metric in my application using a Gauge, and I expose it on port 9090 of every Pod in the Deployment for Prometheus to scrape. I can gather these custom metrics just fine, but how do I change the `scrape_interval` for this single metric?
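For context, the metrics port is declared on the Pod template roughly like this (the names and the image below are placeholders; the real Pod names get a hash suffix per replica):

```yaml
# Sketch of the relevant part of my Deployment (placeholder names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-super-cool-app
  namespace: super-cool-ns
spec:
  selector:
    matchLabels:
      app: my-super-cool-app
  template:
    metadata:
      labels:
        app: my-super-cool-app
    spec:
      containers:
        - name: my-super-cool-app
          image: registry.example.com/my-super-cool-app:latest  # placeholder image
          ports:
            - name: metrics
              containerPort: 9090
```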
I think I might need to add a job to the `additionalScrapeConfigs` in the Prometheus Operator's `prometheus-config.yaml`.
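As far as I understand, the operator reads those extra jobs from a Secret that the `Prometheus` custom resource points at via `spec.additionalScrapeConfigs`; my rough (possibly wrong) understanding of the wiring, with my own placeholder resource names, is:

```yaml
# Secret holding the extra scrape jobs (placeholder names)
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
  namespace: monitoring
stringData:
  prometheus-additional.yaml: |
    - job_name: 'my-super-cool-app-requests'
      scrape_interval: 200ms
      scrape_timeout: 190ms
      # (scrape targets omitted here; my full attempts are below)
---
# The Prometheus custom resource referencing that Secret
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
```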
- Is this the correct way to add a `scrape_interval` to a custom metric?
- If it is, how should my `scrape_config` look to gather the metrics from any replica of this Deployment?
These are the configurations I've tried, but neither of them seems to work:
```yaml
# 1
- job_name: 'my-super-cool-app-requests'
  scrape_interval: 200ms
  scrape_timeout: 190ms
  static_configs:
    - targets:
        # I don't think this is right, as my Pod names aren't exactly "my-super-cool-app";
        # they have a hash at the end, e.g. "my-super-cool-app-hx728".
        # FYI the Service does not expose 9090 for Prometheus, only the Pod,
        # so I couldn't use "my-super-cool-app.super-cool-ns.svc:9090".
        - my-super-cool-app.super-cool-ns.pod:9090

# 2
- job_name: 'my-super-cool-app-requests'
  scrape_interval: 200ms
  scrape_timeout: 190ms
  metrics_path: /apis/custom.metrics.k8s.io/v1beta1/namespaces/super-cool-ns/metrics/my-custom-metric-name
```
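My current guess is that, instead of `static_configs`, I need Pod discovery plus relabelling so that every replica's port 9090 gets scraped. Something along these lines, which is untested and assumes the Pod template carries an `app: my-super-cool-app` label:

```yaml
- job_name: 'my-super-cool-app-requests'
  scrape_interval: 200ms
  scrape_timeout: 190ms
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - super-cool-ns
  relabel_configs:
    # Keep only Pods carrying app=my-super-cool-app (assumed label, see above)
    - source_labels: [__meta_kubernetes_pod_label_app]
      action: keep
      regex: my-super-cool-app
    # Keep only the container port that serves the metrics (9090)
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: keep
      regex: "9090"
```

The per-job `scrape_interval` there is what I'm hoping overrides the global interval for just these metrics. Is that roughly the right direction?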