We use kustomize to generate a uniquely named ConfigMap for our Deployments whenever the ConfigMap data changes. We're now left with a number of old ConfigMaps that are no longer used by any Pods. I can find them in Rancher, but that's a pain. How can I automate cleaning up these unused ConfigMaps?
I've tried running:

```
kubectl get configmaps --namespace mynamespace --output=json
```

I was hoping the output would contain a reverse reference to the Pods using each ConfigMap, but I can't find that information anywhere in it.
If your ConfigMaps can be identified by a label, you can use the --prune flag of kubectl apply to get rid of the dangling resources. If you add this to your deployment pipeline, the orphaned resources should gradually be cleaned out of the cluster.
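For example (a sketch, with `app: myapp` as a made-up label): kustomize's `commonLabels` field stamps the label onto every resource it emits, including the hash-suffixed generated ConfigMaps, and the deploy step then prunes anything carrying that label that is no longer in the build output. Note that kubectl documents --prune as an alpha feature, so try it with --dry-run first.

```yaml
# kustomization.yaml (sketch) - commonLabels adds app=myapp to every
# resource kustomize emits, including generated ConfigMaps
commonLabels:
  app: myapp

configMapGenerator:
  - name: my-config
    files:
      - config.properties
```

```sh
# Apply the current build, and delete previously applied resources
# labeled app=myapp that are no longer part of the build output
kustomize build . | kubectl apply -f - --prune -l app=myapp
```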
See this comment for how people are using this in conjunction with kustomize.
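If labels aren't an option for the ConfigMaps that already exist, a one-off cleanup can diff the ConfigMaps that live Pods actually reference against all ConfigMaps in the namespace. Below is a rough sketch assuming jq is installed; it only inspects running Pods, so ConfigMaps referenced solely by scaled-down Deployments, suspended CronJobs, etc. would be flagged for deletion too. Review the list before piping it into delete.

```sh
# Collect every ConfigMap name referenced by Pods in the namespace:
# volume mounts, projected volumes, envFrom, and env valueFrom.
kubectl get pods -n mynamespace -o json | jq -r '
  .items[].spec
  | (.volumes[]?.configMap.name),
    (.volumes[]?.projected.sources[]?.configMap.name),
    (.containers[], .initContainers[]?
     | (.envFrom[]?.configMapRef.name),
       (.env[]?.valueFrom.configMapKeyRef.name))' \
  | grep -v '^null$' | sort -u > /tmp/used-configmaps

# Delete every ConfigMap in the namespace that is not in the "used" set.
kubectl get configmaps -n mynamespace \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
  | sort -u \
  | comm -23 - /tmp/used-configmaps \
  | xargs -r kubectl delete configmap -n mynamespace
```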