Problems mounting Persistent Volume as ReadOnlyMany across multiple pods

I'm having some trouble getting a ReadOnlyMany persistent volume to mount across multiple pods on GKE. Right now it's only mounting on one pod and failing to mount on any others (due to the volume being in use on the first pod), causing the deployment to be limited to one pod.

I suspect the issue is related to the volume being populated from a volume snapshot.

Looking through related questions, I've sanity-checked that spec.containers[].volumeMounts[].readOnly = true and spec.volumes[].persistentVolumeClaim.readOnly = true, which seemed to be the most common fixes for related issues.

I've included the relevant yaml below. Any help would be greatly appreciated!

Here's (most of) the deployment spec:

spec:
  containers:
  - env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    image: eu.gcr.io/myimage
    imagePullPolicy: IfNotPresent
    name: monsoon-server-sha256-1
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-cluster-1-default-pool-3d6123cf-kcjo
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 29
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: sample-ssd-read-snapshot-pvc-snapshot-5
      readOnly: true

The storage class (which is also the default storage class for this cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-ssd
provisioner: pd.csi.storage.gke.io
volumeBindingMode: Immediate
parameters:
    type: pd-ssd

The PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-ssd-read-snapshot-pvc-snapshot-5
spec:
  storageClassName: sample-ssd
  dataSource:
    name: sample-snapshot-5
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi

1 Answer

Answer from Mr.KoopaKiller:

Google engineers are aware of this issue.

You can find more details about this issue in the issue report and pull request on GitHub.

There's a temporary workaround if you're trying to provision a PD from a snapshot and make it ROX:

1. Provision a PVC from the snapshot (dataSource) with the ReadWriteOnce access mode; this creates a new Compute Engine disk with the content of the source disk.
2. Take the PV that was provisioned and copy it to a new PV that's ReadOnlyMany, as described in the docs.

You can do this with the following commands:

Step 1

Provision a PVC from the snapshot (dataSource) with the ReadWriteOnce access mode:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workaround-pvc
spec:
  storageClassName: sample-ssd
  dataSource:
    name: sample-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

You can check the disk name with:

kubectl get pvc

The VOLUME column shows the name of the underlying Compute Engine disk (the disk_name used below).
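
For example (the PVC name and volume ID below are illustrative):

$ kubectl get pvc workaround-pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
workaround-pvc   Bound    pvc-3c1a9f2e-0000-0000-0000-000000000000   20Gi       RWO            sample-ssd     1m

Here the value in the VOLUME column (pvc-3c1a9f2e-...) is the disk_name to use in the gcloud commands below.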

Step 2

Take the PV that was provisioned and copy it to a new PV that's ROX

As mentioned in the docs, you need to create another disk using the disk from step 1 as the source:

# Create a snapshot of the disk (naming it so it can be referenced below):
gcloud compute disks snapshot <disk_name> --snapshot-names=<snapshot_name>

# Create a new disk using the snapshot as source:
gcloud compute disks create pvc-rox --source-snapshot=<snapshot_name>
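
A concrete invocation might look like this; the disk name, snapshot name, and zone are placeholders, so substitute the values from your project (and keep the disk type consistent with the source disk):

# Snapshot the disk that was dynamically provisioned in step 1
gcloud compute disks snapshot pvc-3c1a9f2e-0000-0000-0000-000000000000 \
    --zone=europe-west1-b --snapshot-names=sample-rox-snapshot

# Create the disk that will back the ReadOnlyMany PV
gcloud compute disks create pvc-rox \
    --source-snapshot=sample-rox-snapshot --zone=europe-west1-b --type=pd-ssd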

Then create a new PV and PVC with the ReadOnlyMany access mode:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ''
  capacity:
    storage: 20Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: pvc-rox
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  storageClassName: ''
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 20Gi
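
Apply both manifests and check that the PVC binds to the pre-created PV (the file name here is just an example):

kubectl apply -f readonly-pv-pvc.yaml
kubectl get pvc my-readonly-pvc

The PVC should show STATUS Bound and ACCESS MODES ROX.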

Finally, add readOnly: true on your volumes and volumeMounts entries, as mentioned here.
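
For example, the relevant parts of the deployment spec would look like this, assuming the claim name my-readonly-pvc from the step above:

    volumeMounts:
    - mountPath: /mnt/sample-ssd
      name: sample-ssd
      readOnly: true
  ...
  volumes:
  - name: sample-ssd
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true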