kube-prometheus-stack / Prometheus targets show as down (only the Prometheus server instance is up)


Before this issue happened, I installed Prometheus without the Prometheus Operator by following this chart (https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus), and all of the kube metrics worked and the targets showed up. Then I built an AWS EKS cluster and installed kube-prometheus-stack (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) and Thanos (improbable/[email protected]) in the monitoring namespace. Regardless of the Thanos source, the Prometheus server > Status > Targets page shows the following (screenshot: targets).

Take "serviceMonitor/monitoring/prometheus-kube-prometheus-kube-proxy/0 (1/9 up)" as an example: only the instance on the node where the Prometheus server lives is up; all of the other instances are down (screenshot: prometheus-kube-prometheus-kube-proxy).

The other targets can likewise only scrape the Prometheus instance itself, and I don't know why kube-prometheus-stack behaves differently from the original prometheus chart. I also went into the Prometheus pod and queried another instance (https://xxx.xxx.xxx.xxx:10250/metrics), and it timed out: wget: can't connect to remote host (xxx.xxx.xxx.xxx): Connection timed out
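For reference, a minimal way to reproduce this check from inside the Prometheus pod (the pod name below is the default one kube-prometheus-stack creates and may differ for your release; the node IPs are placeholders):

kubectl -n monitoring exec -it prometheus-prometheus-kube-prometheus-prometheus-0 -c prometheus -- sh
# kubelet metrics port (what the wget above tested)
wget -O- https://xxx.xxx.xxx.xxx:10250/metrics
# kube-proxy metrics port (what the 1/9-up kube-proxy target actually scrapes)
wget -O- http://xxx.xxx.xxx.xxx:10249/metrics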

The override values for the chart are as follows:

prometheus-node-exporter:
  prometheus:
    monitor:
      scrapeTimeout: "10m"
kube-state-metrics:
  prometheus:
    monitor:
      scrapeTimeout: "10m"
prometheus:
  serviceAccount:
    create: true
    name: "thanos-eks-sa"
  prometheusSpec:
    scrapeTimeout: "10m"
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
    additionalScrapeConfigs:
      - job_name: prometheus
        static_configs:
          - targets:
            - localhost:9090
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/$1/proxy/metrics
      - job_name: 'kubernetes-nodes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "prometheus"
        effect: "NoSchedule"
    nodeSelector:
      dedicated: prometheus
      lifecycle: on-demand
    externalLabels:
      cluster: dev-general
      environment: dev
    resources: {}
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          resources:
            requests:
              storage: 10Gi
    thanos:
      baseImage: improbable/thanos
      version: v0.2.1
      objectStorageConfig:
        key: thanos.yaml
        name: thanos-objstore-config
  thanosService:
    enabled: true
  thanosServiceMonitor:
    enabled: true
    interval: 5s

kubeProxy:
  metricsBindAddress: 0.0.0.0

kubeconfig:
  enabled: true
prometheusOperator:
  namespaces: ''
  denyNamespaces: ''
  prometheusInstanceNamespaces: ''
  alertmanagerInstanceNamespaces: ''
  thanosRulerInstanceNamespaces: ''
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "prometheus"
      effect: "NoSchedule"
  nodeSelector:
    dedicated: prometheus
    lifecycle: on-demand
  thanosImage:
    repository: improbable/thanos
    version: v0.2.1
  admissionWebhooks:
    patch:
      podAnnotations:
        linkerd.io/inject: disabled
  serviceMonitor:
    scrapeTimeout: "10m"


## alertmanager
alertmanager:
  alertmanagerSpec:
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "prometheus"
        effect: "NoSchedule"
    nodeSelector:
      dedicated: prometheus
      lifecycle: on-demand

## grafana
grafana:    
  sidecar:
    dashboards:
      multicluster:
        global:
          enabled: true
  adminPassword: admin
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "prometheus"
      effect: "NoSchedule"
  nodeSelector:
    dedicated: prometheus
    lifecycle: on-demand

So I think this is a networking issue, but I don't know how to fix it. I also don't understand why, on the same Kubernetes cluster, the prometheus and kube-prometheus-stack charts from prometheus-community behave differently.


1 Answer

Answered by jacklong:

This is because on EKS, kube-proxy exposes its metrics endpoint on 127.0.0.1 by default, so Prometheus can only scrape the kube-proxy instance running on its own node.
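A sketch of the usual workaround, assuming the default EKS setup where kube-proxy reads its configuration from the kube-proxy-config ConfigMap in kube-system (the ConfigMap name and config key can vary between EKS versions):

# See how kube-proxy currently binds its metrics endpoint
kubectl -n kube-system get configmap kube-proxy-config -o yaml | grep metricsBindAddress

# Edit the ConfigMap and change
#   metricsBindAddress: 127.0.0.1:10249
# to
#   metricsBindAddress: 0.0.0.0:10249
kubectl -n kube-system edit configmap kube-proxy-config

# Restart kube-proxy so the new bind address takes effect
kubectl -n kube-system rollout restart daemonset kube-proxy

After the kube-proxy pods restart, the kube-proxy targets should come up, provided the node security groups also allow Prometheus to reach port 10249 on the other nodes.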