I have stored the SSL certificates in GCP Secret Manager. I'm using Helm to deploy the application and configure the GKE ingress load balancer, and I followed this blog to add a TLS certificate to GKE from Google Secret Manager.
I stored the certificate in the below format in Secret Manager:
-----BEGIN PRIVATE KEY-----
MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
-----END CERTIFICATE-----
In my Helm deployment.yaml file, I added the secret as a volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test-frontend.fullname" . }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "test-frontend.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "test-frontend.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "test-frontend.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "test-frontend.serviceAccountName" . }}
      automountServiceAccountToken: true
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          envFrom:
            - configMapRef:
                name: {{ include "test-frontend.configMapName" . }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: ispsecret
              mountPath: /var/secret
      volumes:
        - name: testsecret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "test-tls"
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
And I created test-secret-provider.yaml:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: test-tls
spec:
  provider: gcp
  secretObjects:
    - secretName: test-tls-csi
      type: kubernetes.io/tls
      data:
        - objectName: "testcert.pem"
          key: tls.key
        - objectName: "testcert.pem"
          key: tls.crt
  parameters:
    secrets: |
      - resourceName: "projects/$PROJECT_ID/secrets/test_ssl_secret/versions/latest"
        fileName: "testcert.pem"
When I deploy the application using Helm, I get the below error in the GKE logs:
MountVolume.SetUp failed for volume "testsecret" : rpc error: code = InvalidArgument desc = failed to mount secrets store objects for pod test/test-frontend-5f98895b4b-6zq9t, err: rpc error: code = InvalidArgument desc = failed to unmarshal secrets attribute: yaml: line 1: did not find expected key
How do I fix this error?
Make sure the YAML indentation and format in the parameters section of your SecretProviderClass are correct. YAML is very sensitive to indentation, and even a small mistake can lead to parsing errors. You would find a similar error in Azure/secrets-store-csi-driver-provider-azure issue 290 for illustration.

In particular, the resourceName/fileName entries in the parameters section must be properly indented as items of the list inside the secrets block scalar. Pasting that block into an online YAML parser reproduces the same kind of error you are seeing ("did not find expected key") when the list is misaligned.
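A better indentation, using the secret reference from your question (only the parameters section shown), would be:

parameters:
  secrets: |
    - resourceName: "projects/$PROJECT_ID/secrets/test_ssl_secret/versions/latest"
      fileName: "testcert.pem"    # indented under resourceName, as part of the same list item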
The volumes block was also indented in a way that made it appear as a continuation of the testsecret properties under volumeMounts, which is not structurally valid. volumeMounts belongs to a container, while volumes belongs to the Pod spec: in the Deployment template, volumes must sit at the same indentation level as containers, not nested inside the container.
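A trimmed sketch of the expected structure (the names are illustrative; note that the volumeMounts name must match the volume name):

spec:
  containers:
    - name: test-frontend
      volumeMounts:
        - name: testsecret          # must match the volume name below
          mountPath: /var/secret
  volumes:                          # sibling of containers, at the Pod spec level
    - name: testsecret
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "test-tls"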
Check also the logs of the CSI driver pods in your cluster. You can find the CSI driver pods in the kube-system namespace, or in another namespace if you have configured it differently. Look for any errors that mention issues with processing the SecretProviderClass or accessing the secrets from Google Secret Manager.
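For example (the labels and container name below are the upstream defaults; adjust them to however the driver and the GCP provider were installed in your cluster):

kubectl get pods -n kube-system -l app=secrets-store-csi-driver
kubectl logs -n kube-system -l app=secrets-store-csi-driver -c secrets-store --tail=100
kubectl logs -n kube-system -l app=csi-secrets-store-provider-gcp --tail=100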
Your current error message suggests that there is a failure in parsing the volumeAttributes or similar configuration passed to the CSI driver, specifically a problem with parsing or recognizing a key in the provided YAML configuration. Check that:
- the volumeAttributes match the expected keys and structure defined by your CSI driver and the SecretProviderClass. Verify that every key and value under volumeAttributes is expected and supported.
- the secretProviderClass value exactly matches the name of an existing SecretProviderClass in your cluster, in the same namespace as the pod (you can verify this as shown below).
- the configuration within your SecretProviderClass correctly references the secret in Secret Manager. Any discrepancy here, such as an incorrect resourceName or a misconfigured secretObjects section, can lead to errors.
- the parameters.secrets block is in the format the CSI driver expects, so it can convert the objects into Kubernetes secrets correctly (a bit as in aws/secrets-store-csi-driver-provider-aws issue 77): review the SecretProviderClass YAML, specifically the parameters.secrets block, to make sure it correctly formats the reference to your secret in Google Secret Manager.
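For instance (the namespace and pod name are taken from your error message):

kubectl get secretproviderclass -n test
kubectl describe pod test-frontend-5f98895b4b-6zq9t -n test    # the mount events usually repeat the error with more context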
In the context of Kubernetes and Helm, environment variable placeholders like $PROJECT_ID are not automatically resolved within raw YAML files. For dynamic value substitution (e.g., inserting the PROJECT_ID into your YAML), you typically need to use a templating engine or pass these values in through a process that understands how to replace them. Helm, for example, uses template values from values.yaml and template functions to insert dynamic content into your YAML files before they are applied to the Kubernetes cluster. See for instance "How to pull environment variables with Helm charts".
The "secret access permissions" error could indeed be related to a permissions issue if the Kubernetes cluster (specifically, the node pool's service account) does not have permission to access the Google Secret Manager secret. Make sure the service account associated with your GKE nodes has the required roles/permissions (
secretmanager.versions.access) to access secrets in Google Secret Manager.However, the error message you shared suggests that the problem occurs earlier in the process, during the parsing of the YAML configuration, rather than at the point of accessing the secret in Secret Manager.
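If that permission is missing, granting it looks roughly like this (shown for the node pool's default service account; with Workload Identity you would grant it to the Google service account bound to the pod's Kubernetes service account instead):

gcloud secrets add-iam-policy-binding test_ssl_secret \
  --project="$PROJECT_ID" \
  --role="roles/secretmanager.secretAccessor" \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"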
So make sure that the PROJECT_ID is correctly injected into your SecretProviderClass YAML. Since Helm is being used, you can utilize Helm's templating capabilities to achieve this:
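For example, in the SecretProviderClass template (only the parameters section shown):

parameters:
  secrets: |
    - resourceName: "projects/{{ .Values.projectID }}/secrets/test_ssl_secret/versions/latest"
      fileName: "testcert.pem"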
In the above snippet, {{ .Values.projectID }} is a Helm template directive that tells Helm to replace this placeholder with the value of projectID defined in your Helm chart's values.yaml file. You would need to make sure values.yaml (or whichever values file you are using) includes something like:
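projectID: "my-gcp-project-id"    # replace with your actual project ID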
The revised deployment YAML you have shared looks properly formatted regarding indentation and structure. The persistent error, "failed to unmarshal secrets attribute", points toward an issue with how the secrets data is formatted or interpreted rather than a straightforward syntax error in your YAML. Considering the dynamic variable replacement discussed earlier, the error might come from an improperly formatted or unresolved resourceName in your SecretProviderClass configuration, if the dynamic substitution for PROJECT_ID was not effectively applied.