I have updated the standard dashboard YAML to one that iterates over the configuration:

{{- /*
    Generated from 'apiserver' from https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
    Do not change in-place! In order to change this file first read following link:
      https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/hack
    */ -}}
  {{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
  {{- if $.Values.grafana.sidecar.dashboards.alerting }}
  {{- if and (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<9.9.9-9" $kubeTargetVersion) .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled .Values.kubeApiServer.enabled }}
  {{ $c := 0 | int }}
  {{- range $key, $value := .Values.grafana.sidecar.dashboards.alerting }}
  {{ $c = add1 $c }}
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: {{ $.Release.Namespace }}
  name: {{ printf "%s-%s" (include "kube-prometheus-stack.fullname" $) "alerting" | trunc 63 | trimSuffix "-" }}-{{ $key }}
  annotations:
  {{ toYaml $.Values.grafana.sidecar.dashboards.annotations | indent 4 }}
  labels:
  {{- if $.Values.grafana.sidecar.dashboards.label }}
    {{ $.Values.grafana.sidecar.dashboards.label }}: "{{ $c }}"
    {{- end }}
    app: {{ template "kube-prometheus-stack.name" $ }}-grafana
  {{ include "kube-prometheus-stack.labels" $ | indent 4 }}
data:
  alerting{{ $key }}.json: |-
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "description": "Custom {{ $key }} alerting setup",
      "editable": true,
      "gnetId": 11542{{ $c }},
      "id": 900{{ $c }},
      "graphTooltip": 0,
      "links": [],
      "panels": [

        ...
      ],
      "timezone": "",
      "title": "{{ $.Values.grafana.sidecar.dashboards.env }} Custom {{ $key }} Alerting setup",
      "uid": "alerting{{ $key }}",
      "version": 1
    }
  {{- end }}
  {{- end }}
  {{- end }}
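
For reference, this is roughly how I inspect the rendered output for just this file (the release name, chart path and template file name here are placeholders for my local copy of the chart):

helm template my-release ./kube-prometheus-stack \
  -f my-values.yaml \
  --show-only templates/grafana/dashboards-1.14/alerting-dashboards.yaml

Both ConfigMap documents appear in that output, the same as with helm upgrade --dry-run.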

and then comes the dashboard part, parametrized with values from my values.yaml like this:

  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      env: Dev
      ## Annotations for Grafana dashboard configmaps
      ##
      annotations: {}
      multicluster: false
      alerting:
        trains:
          ourdashboardstuff: ourvalues
        ships:
          ourdashboardstuff: ourvalues
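
To double-check that both ConfigMaps really land in the cluster with the sidecar label, I list them directly (the monitoring namespace is just my setup):

kubectl get configmaps -n monitoring -l grafana_dashboard --show-labels

Both alerting ConfigMaps do show up here, each with grafana_dashboard set to the counter value from the template.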

As a result, two different ConfigMaps are produced by helm upgrade; I can even see them with --dry-run. So my expectation was that I would see two different dashboards, Dev Custom ships Alerting setup and Dev Custom trains Alerting setup, but I only see one, as if Grafana ignores the second ConfigMap with dashboard details from the same Helm output.

None of the keys collide with each other, so what makes Grafana ignore the second dashboard config produced by the Helm output?
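
In case it helps, this is how I've been checking what the dashboard sidecar itself picks up (same namespace and release-name assumptions as above; the sidecar container is called grafana-sc-dashboard in my release, but the name may differ between chart versions):

kubectl logs -n monitoring deploy/my-release-grafana -c grafana-sc-dashboard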
