We have an instance of Concourse deployed to Kubernetes using Flux via a HelmRelease file, which holds our custom values and references the Concourse Helm chart. Note: we're using Helm v3.
The chart allows you to specify additionalVolumes and additionalVolumeMounts; a feature I'm hoping to use to map /etc/docker/daemon.json into our worker pods so that they use our pull-through mirror proxy (i.e. to avoid Docker Hub rate limit issues).
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: concourse
  namespace: concourse
spec:
  helmVersion: v3
  releaseName: concourse
  chart:
    repository: https://concourse-charts.storage.googleapis.com/
    name: concourse
    version: 14.2.0
  #...
  values:
    #...
    worker:
      #...
      additionalVolumes:
        - name: "concourse-worker-docker-daemon"
          configMap:
            name: "concourse-worker-docker-daemon"
      additionalVolumeMounts:
        - name: "concourse-worker-docker-daemon"
          mountPath: /etc/docker/daemon.json
          subPath: daemon.json
          readOnly: true
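If I understand the chart correctly, those two values are passed through more or less verbatim to the worker pods, so (this is my assumption, not rendered output I've captured from the cluster) the relevant part of the worker pod spec should end up looking something like the sketch below. The subPath means only the daemon.json key is mounted as a single file, rather than the ConfigMap shadowing the whole /etc/docker directory.

# Assumed rendering of the worker pod spec from the values above
# (the container name is illustrative; the chart decides the real one).
spec:
  volumes:
    - name: concourse-worker-docker-daemon
      configMap:
        name: concourse-worker-docker-daemon
  containers:
    - name: concourse-worker
      volumeMounts:
        - name: concourse-worker-docker-daemon
          mountPath: /etc/docker/daemon.json  # mounts just the daemon.json key as a file
          subPath: daemon.json
          readOnly: true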
However, I need to create this configMap resource with something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: concourse-worker-docker-daemon
  labels:
    app: concourse-worker
data:
  daemon.json: |
    {
      "registry-mirrors": ["https://myDockerMirror.example.com:5000"]
    }
I've seen how I could define such a resource if I were developing the chart itself, but as we're using a third-party chart and just providing values to the release, I'm unsure how this should be achieved. For example: is there a way to provide the configMap's definition inline in the HelmRelease's values so that it's created when the chart's deployed? Do I need to create a custom chart which wraps the third-party chart and adds this resource? Or do I need to have the configMap created outside of any chart, then refer to the pre-existing resource from the HelmRelease file?
I want to define resources in such a way that they're fully managed in my Flux repo, i.e. rather than creating the config map manually by running kubectl apply ..., so that any changes to this resource which are pushed to our main branch are automatically synced to our Kubernetes cluster.
My background is Windows full-stack development, so I'm very new to the concepts involved with Linux, Kubernetes, Flux, and Helm; apologies in advance if I've overlooked something obvious.
I ended up resolving this by creating the ConfigMap outside of the Helm chart, as a plain manifest committed to the same Flux-synced repo (Flux applies any manifest in the synced path, not just HelmReleases), and then consuming it from within the chart via the worker values, as shown below.
Config Map:
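(Essentially the manifest from the question, committed as its own file in the Flux-synced repo; the explicit concourse namespace is my addition so the ConfigMap lands in the same namespace as the worker pods.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: concourse-worker-docker-daemon
  namespace: concourse  # same namespace as the worker pods
  labels:
    app: concourse-worker
data:
  daemon.json: |
    {
      "registry-mirrors": ["https://myDockerMirror.example.com:5000"]
    }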
Helm Release:
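(Unchanged from the question, apart from the sections I've elided; the worker values simply reference the ConfigMap above by name.)

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: concourse
  namespace: concourse
spec:
  helmVersion: v3
  releaseName: concourse
  chart:
    repository: https://concourse-charts.storage.googleapis.com/
    name: concourse
    version: 14.2.0
  values:
    #...
    worker:
      #...
      additionalVolumes:
        - name: "concourse-worker-docker-daemon"
          configMap:
            name: "concourse-worker-docker-daemon"
      additionalVolumeMounts:
        - name: "concourse-worker-docker-daemon"
          mountPath: /etc/docker/daemon.json
          subPath: daemon.json
          readOnly: true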