I'm using this Prometheus Helm chart.
I was wondering if it is possible to set up the Prometheus Operator to automatically monitor every service in the cluster or namespace, without having to create a ServiceMonitor for every service.
With the current setup, when I want to monitor a service, I have to create a ServiceMonitor with the label release: prometheus.
Edit:
Service with the monitoring: "true" label:
apiVersion: v1
kind: Service
metadata:
  name: issue-manager-service
  labels:
    app: issue-manager-app
    monitoring: "true"
spec:
  selector:
    app: issue-manager-app
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 7200
"Catch-All" Servicemonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: service-monitor-scraper
  labels:
    release: prometheus
spec:
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
  jobLabel: monitoring
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      monitoring: "true"
Only if you have a common label on all services. Then you can define a single, cross-namespace ServiceMonitor that covers all labeled services.
Then, to make sure this ServiceMonitor is discovered by the Prometheus Operator, you either:
- declare the ServiceMonitor via the built-in operator template (see the values.yaml sketch below): https://github.com/prometheus-community/helm-charts/blob/4164ad5fdb6a977f1aba7b65f4e65582d3081528/charts/kube-prometheus-stack/values.yaml#L2008, or
- declare a serviceMonitorSelector that points to your ServiceMonitor: https://github.com/prometheus-community/helm-charts/blob/4164ad5fdb6a977f1aba7b65f4e65582d3081528/charts/kube-prometheus-stack/values.yaml#L1760
This additional explicit linkage between the Prometheus Operator and a ServiceMonitor is intentional: if you have two Prometheus instances in your cluster (say Infra and Product), you can separate which Prometheus gets which Pods in its scraping config.
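For the first option, here is a minimal sketch of what that could look like in the chart's values.yaml, assuming the linked template is the chart's prometheus.additionalServiceMonitors list (the name, label, and port below just mirror your catch-all example and are illustrative):

# values.yaml for kube-prometheus-stack (sketch; key path assumed from the linked chart version)
prometheus:
  additionalServiceMonitors:
    - name: service-monitor-scraper
      endpoints:
        - port: metrics        # must match the port *name* defined on the target Service
          interval: 30s
          path: /metrics
      namespaceSelector:
        any: true              # look for matching Services in every namespace
      selector:
        matchLabels:
          monitoring: "true"   # the common label carried by every Service you want scraped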
From your question, it sounds like you already have a serviceMonitorSelector based on the release: prometheus label. Try adding that label to your catch-all ServiceMonitor as well.
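For reference, a sketch of that selector on the chart side, assuming it lives under prometheus.prometheusSpec in kube-prometheus-stack's values.yaml (the second link above):

# values.yaml for kube-prometheus-stack (sketch; key path assumed)
prometheus:
  prometheusSpec:
    # Explicitly select ServiceMonitors carrying the release: prometheus label.
    # (By default the chart builds an equivalent selector from the Helm release
    # name when serviceMonitorSelectorNilUsesHelmValues is left at true.)
    serviceMonitorSelector:
      matchLabels:
        release: prometheus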