Can 3 replicas use the same PersistentVolume in a StatefulSet in Kubernetes?


I created a StatefulSet to run my NodeJS app with 3 replicas, and I want to attach a GCE disk that can serve as data storage for files that users upload.

My project name is carx; the server name is car-server.

However, I got an error when the second pod was created.

kubectl describe pod car-server-statefulset-1

AttachVolume.Attach failed for volume "my-app-data" : googleapi: Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource 'projects/.../disks/carx-disk' is already being used by 'projects/.../instances/gke-cluster-...-2dw1'


car-server-statefulset.yml

apiVersion: v1
kind: Service
metadata:
  name: car-server-service
  labels:
    app: car-server
spec:
  ports:
  - port: 8080
    name: car-server
  clusterIP: None
  selector:
    app: car-server
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: car-server-statefulset
spec:
  serviceName: "car-server-service"
  replicas: 3
  template:
    metadata:
      labels:
        app: car-server
    spec:
      containers:
        - name: car-server
          image: myimage:latest
          ports:
            - containerPort: 8080
              name: nodejs-port
          volumeMounts:
          - name: my-app-data
            mountPath: /usr/src/app/mydata
      volumes:
      - name: my-app-data
        persistentVolumeClaim:
          claimName: example-local-claim
  selector:
    matchLabels:
      app: car-server

pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-data
  labels:
    app: my-app
spec:
  capacity:
    storage: 60Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    pdName: carx-disk
    fsType: ext4

There are 2 answers

5
Jonas

The access mode field is treated as a request, but you are not guaranteed to get what you request. In your case, a GCEPersistentDisk only supports ReadWriteOnce or ReadOnlyMany.

Your PV is therefore mounted as ReadWriteOnce, and a ReadWriteOnce volume can only be mounted on one node at a time, so the other replicas fail to mount the volume.

When using a StatefulSet, it is common for each replica to use its own volume; use the volumeClaimTemplates: section of the StatefulSet manifest for that.

Example:

  volumeClaimTemplates:
  - metadata:
      name: example-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 5Gi
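
For completeness, here is a sketch of how that could be wired into the StatefulSet from the question: the volumeMounts entry references the claim template name, and the shared volumes: section is removed. The image, ports, and mount path are copied from the question; with this, each replica gets its own 5Gi disk.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: car-server-statefulset
spec:
  serviceName: "car-server-service"
  replicas: 3
  selector:
    matchLabels:
      app: car-server
  template:
    metadata:
      labels:
        app: car-server
    spec:
      containers:
        - name: car-server
          image: myimage:latest
          ports:
            - containerPort: 8080
              name: nodejs-port
          volumeMounts:
            - name: example-claim   # must match the volumeClaimTemplate name
              mountPath: /usr/src/app/mydata
  volumeClaimTemplates:
    - metadata:
        name: example-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 5Gi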

If you can only use a single volume, you may consider running the StatefulSet with only one replica, e.g. replicas: 1.

If you want disk replication, you can use a StorageClass for regional disks, which are replicated to another AZ as well. See Regional Persistent Disk, but note that it still has the same access modes.
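
A minimal sketch of such a StorageClass, assuming the GCE Persistent Disk CSI driver is enabled on the cluster; the name and zones below are placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd   # replicate the disk across two zones
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c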

0
Arghya Sadhu

From the docs:

A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous writers allowed.
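
To illustrate the read-only case from that quote (it only fits pre-populated data, not user uploads), the PV from the question could be declared ReadOnlyMany and mounted read-only by all replicas. A sketch, assuming the same carx-disk:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-data-ro
spec:
  capacity:
    storage: 60Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: carx-disk
    fsType: ext4
    readOnly: true   # every consumer mounts the disk read-only

The matching PVC would request ReadOnlyMany, and the pod's volume and volumeMounts would set readOnly: true as well.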

An alternative solution is Google Cloud Filestore, which is a NAS offering. You can mount Filestore shares in Compute Engine and Kubernetes Engine instances. However, the problem with Filestore is that it is designed with large file storage systems in mind and has a minimum capacity of 1 TB, which is expensive for small use cases.

An inexpensive way to solve the issue is to set up an NFS server in your cluster, backed by a ReadWriteOnce PV, and then create an NFS-based PV (which supports ReadWriteMany) using that NFS server.
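
A sketch of the NFS-based PV and PVC, assuming an NFS server (for example, backed by a ReadWriteOnce GCE PD) is already running in the cluster and exposed through a Service; the server address and export path below are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.50   # ClusterIP of the hypothetical nfs-server Service
    path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-data-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""   # bind to the static PV, not the default StorageClass
  resources:
    requests:
      storage: 5Gi

The StatefulSet's claimName would then point at nfs-data-claim instead of example-local-claim, and all three replicas can mount it read-write.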