Failed to mount a persistent-memory-backed local persistent volume in Kubernetes 1.20


I'm trying to let a k8s pod use PMEM without running in privileged mode. My approach is to create a local PV on top of an fsdax directory, bind it with a PVC, and mount it into my pod. However, I always get the MountVolume.NewMounter initialization failed ... : path does not exist error.
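
For context, the fsdax directory was prepared roughly as follows (a sketch; the device name /dev/pmem0 and the ext4 filesystem are assumptions consistent with the lsblk output further down):

$ sudo ndctl create-namespace --mode=fsdax      # expose the PMEM region as /dev/pmem0
$ sudo mkfs.ext4 /dev/pmem0                     # put a filesystem on the fsdax device
$ sudo mount -o dax /dev/pmem0 /mnt/pmem0       # mount with DAX enabled
$ sudo mkdir /mnt/pmem0/vol1                    # directory backing the local PV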

Here are my yaml files and PMEM status:

Storage Class yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

PV yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pmem-pv-volume
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/pmem0/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - pmem
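
Note that the nodeAffinity above only matches nodes labeled disktype=pmem, so the label must be applied to the PMEM node beforehand (the node name is a placeholder):

$ kubectl label node <pmem-node> disktype=pmem
$ kubectl get nodes -l disktype=pmem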

PVC yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-pv-claim
spec:
  storageClassName: local-storage
  volumeName: pmem-pv-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: daemon
  labels:
    env: test
spec:
  hostNetwork: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - pmem
  containers:
  - name: daemon-container
    command: ["/usr/bin/bash", "-c", "sleep 3600"]
    image: mm:v2
    imagePullPolicy: Never
    volumeMounts:
    - mountPath: /mnt/pmem
      name: pmem-pv-storage
    - mountPath: /tmp
      name: tmp
    - mountPath: /var/log/memverge
      name: log
    - mountPath: /var/memverge/data
      name: data
  volumes:
    - name: pmem-pv-storage
      persistentVolumeClaim:
        claimName: pmem-pv-claim
    - name: tmp
      hostPath:
        path: /tmp
    - name: log
      hostPath:
        path: /var/log/memverge
    - name: data
      hostPath:
        path: /var/memverge/data

Some status and k8s outputs:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 745.2G  0 disk
├─sda1        8:1    0     1G  0 part /boot
└─sda2        8:2    0   740G  0 part
  ├─cl-root 253:0    0   188G  0 lvm  /
  ├─cl-swap 253:1    0    32G  0 lvm  [SWAP]
  └─cl-home 253:2    0   520G  0 lvm  /home
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0     7T  0 disk
└─nvme0n1p1 259:1    0     7T  0 part /mnt/nvme
pmem0       259:2    0 100.4G  0 disk /mnt/pmem0
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS    REASON   AGE
pmem-pv-volume   50Gi       RWO            Delete           Bound    default/pmem-pv-claim   local-storage            20h
$ kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pmem-pv-claim   Bound    pmem-pv-volume   50Gi       RWO            local-storage   20h
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS              RESTARTS   AGE
default       daemon                             0/1     ContainerCreating   0          20h
kube-system   coredns-74ff55c5b-5crgg            1/1     Running             0          20h
kube-system   etcd-minikube                      1/1     Running             0          20h
kube-system   kube-apiserver-minikube            1/1     Running             0          20h
kube-system   kube-controller-manager-minikube   1/1     Running             0          20h
kube-system   kube-proxy-2m7p6                   1/1     Running             0          20h
kube-system   kube-scheduler-minikube            1/1     Running             0          20h
kube-system   storage-provisioner                1/1     Running             0          20h
$ kubectl get events
LAST SEEN   TYPE      REASON        OBJECT       MESSAGE
108s        Warning   FailedMount   pod/daemon   MountVolume.NewMounter initialization failed for volume "pmem-pv-volume" : path "/mnt/pmem0/vol1" does not exist
47m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[tmp log data default-token-4t8sv pmem-pv-storage]: timed out waiting for the condition
37m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[default-token-4t8sv pmem-pv-storage tmp log data]: timed out waiting for the condition
13m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[pmem-pv-storage tmp log data default-token-4t8sv]: timed out waiting for the condition

It is complaining that path "/mnt/pmem0/vol1" does not exist, but it actually does exist:

$ ls -l /mnt/pmem0
total 20
drwx------ 2 root root 16384 Jan 20 15:35 lost+found
drwxrwxrwx 2 root root  4096 Jan 21 17:56 vol1

Besides using a local PV, I also tried:

  1. PMEM-CSI. But the PMEM-CSI method is blocked for me by a containerd/kernel issue: https://github.com/containerd/containerd/issues/3221

  2. A regular PV. When I tried to create a PV backed by PMEM, the pod could not claim the PMEM storage correctly; the volume was always mounted as an overlay fs sitting on top of / on the host (see the check sketched below).
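
A quick way to confirm what actually backs the mount from inside a running pod (a diagnostic sketch; it assumes the container image provides df and mount):

$ kubectl exec daemon -- df -hT /mnt/pmem        # filesystem type backing the mount
$ kubectl exec daemon -- mount | grep /mnt/pmem  # "type overlay" means the host's root fs leaked through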

Could anyone give some help? Thanks a lot!

1 Answer

Answered by Vit (accepted):

As discussed in the comments:

Using minikube, Rancher, or any other containerized version of the kubelet will lead to MountVolume.NewMounter initialization failed for volume errors stating that the path does not exist.

If the kubelet is running in a container, it cannot access the host filesystem at the same path. You must adjust hostDir to the path as it appears inside the kubelet container.
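
With minikube, one quick way to check what the kubelet actually sees is to list the path from inside the minikube node (a diagnostic sketch):

$ minikube ssh -- ls /mnt/pmem0    # failure here means the node container cannot see the host directory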

You can also add bind mounts for the local volume paths, as was suggested on GitHub. Adjust this copy-pasted example to your needs if you use it:

    "HostConfig": {
        "Binds": [
            "/mnt/local:/mnt/local"
        ],
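
For minikube specifically, a rough equivalent (an alternative not in the original answer; --mount and --mount-string are standard minikube options, though behavior can vary by driver) is to mount the host directory into the node at start time:

$ minikube start --mount --mount-string="/mnt/pmem0:/mnt/pmem0"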

Regular (non-containerized) installations, such as those set up with kubeadm, will not behave this way, and you will not receive such errors.