K8s PersistentVolume shared among multiple PersistentVolumeClaims for local testing

Could someone please point me to the configuration I should be using for my use case?

I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) in a number of pods during deployment (say, for a simple setup, 6 pods that each build their own keys). I need access to all of these files, and they must persist after the pods go down.

I'm now trying to figure out how to set this up locally for internal testing. From what I understand, local PersistentVolumes only allow a 1:1 binding with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all of them.
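
For illustration, this is roughly the shape of the single shared volume I have in mind for a one-node test cluster (hostPath and all names here are just placeholders, not an actual setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  # hostPath only makes sense on a single-node test cluster (e.g. minikube)
  hostPath:
    path: /tmp/k8s-shared-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: manual   # must match the PV above so the two can bind
  resources:
    requests:
      storage: 1Gi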

Could someone help me out or point me to the right setup for this?

Update (26/11/2020): This is my setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hlf-nfs--server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hlf-nfs--server
  template:
    metadata:
      labels:
        app: hlf-nfs--server
    spec:
      containers:
        - name: hlf-nfs--server
          image: itsthenetwork/nfs-server-alpine:12
          ports:
            - containerPort: 2049
              name: tcp
            - containerPort: 111
              name: udp
          securityContext:
            privileged: true
          env:
            - name: SHARED_DIRECTORY
              value: "/opt/k8s-pods/data"
          volumeMounts:
            - name: pvc
              mountPath: /opt/k8s-pods/data
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: shared-nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: hlf-nfs--server
  labels:
    name: hlf-nfs--server
spec:
  type: ClusterIP
  selector:
    app: hlf-nfs--server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi

These three are created at once; after that, I read the IP of the service and add it to the last manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s-pods/data
    server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
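
For reference, this is one way to extract just the ClusterIP for that placeholder (assuming the service name from the manifests above):

NFS_IP=$(kubectl get svc hlf-nfs--server -o jsonpath='{.spec.clusterIP}')
echo "$NFS_IP"   # value substituted into the server: field of the PersistentVolume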

The problem I'm trying to resolve is that the PVC never gets bound to the PV, and the deployment never becomes ready.

Did I miss anything?
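
For reference, this is roughly how the binding state can be inspected while debugging:

kubectl get pv,pvc
kubectl describe pvc shared-nfs-pvc   # the Events section usually shows why the claim is still Pending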

There are 3 answers

Answered by Sniady (best answer):

So finally, I did it by using a dynamic provisioner.

I installed the stable/nfs-server-provisioner chart with Helm. With the proper configuration, it creates PVs under the nfs storage class, to which my PVCs are able to bind :)

helm install stable/nfs-server-provisioner --name nfs-provisioner -f nfs_provisioner.yaml

The nfs_provisioner.yaml is as follows:

persistence:
  enabled: true
  storageClass: "standard"
  size: 20Gi

storageClass:
  # Name of the storage class that will be managed by the provisioner
  name: nfs
  defaultClass: true
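
With the provisioner in place, a claim only has to reference the nfs storage class; for example, a PVC like the one from the question now gets a dynamically provisioned PV (the name and size here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs   # the class created by nfs-server-provisioner above
  resources:
    requests:
      storage: 1Gi
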
Answered by Lukman:

You can create an NFS server inside the cluster and have the pods use NFS volumes. Here is the manifest to create such an in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):

export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
  - name: tcp-2049
    port: 2049
    protocol: TCP
  - name: udp-111
    port: 111
    protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: $STORAGE_CLASS
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
      - name: nfs-server
        image: ${NFS_IMAGE}
        ports:
        - containerPort: 2049
          name: tcp
        - containerPort: 111
          name: udp
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        volumeMounts:
        - name: pvc
          mountPath: /nfsshare
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: ${NFS_NAME}
EOF

Below is an example of how to point the other pods to this NFS server. In particular, refer to the volumes section at the end of the YAML:

export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)

kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd   # official Apache httpd image
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/
              name: nfs-vol
              subPath: html
      volumes:
        - name: nfs-vol
          nfs: 
            server: $NFS_IP
            path: /
EOF
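
If you prefer the PV/PVC indirection from the question over mounting NFS directly in each pod, the important detail is that the PV and PVC have to agree on storageClassName and access mode before they can bind; a sketch under that assumption, reusing $NFS_NAME and $NFS_IP from above (the class name is arbitrary as long as both sides match):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${NFS_NAME}-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs        # must match the PVC below
  nfs:
    server: $NFS_IP
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${NFS_NAME}-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
EOF
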
Answered by Jonas:

It is correct that there is a 1:1 relation between a PersistentVolumeClaim and a PersistentVolume.

However, Pods running on the same Node can concurrently mount the same volume, i.e. use the same PersistentVolumeClaim.

If you use Minikube for local development, you only have one node, so you can use the same PersistentVolumeClaim. Since you want to use different files for each app, you could use a unique directory for each app in that shared volume.
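
A minimal sketch of that idea, assuming a claim named shared-keys-pvc already exists; each pod mounts the same claim but writes under its own subPath, so every app gets its own directory inside the shared volume:

apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  containers:
    - name: app-a
      image: busybox
      command: ["sh", "-c", "touch /keys/app-a.key && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /keys
          subPath: app-a               # per-app directory inside the shared volume
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-keys-pvc     # hypothetical claim shared by all pods

A second pod would differ only in its name and subPath (e.g. app-b), and all the generated keys end up side by side in the single PersistentVolume backing shared-keys-pvc.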