Kubernetes: associating Cinder storage with a pod


I have a Kubernetes cluster and need to associate my pods with Cinder storage. I tried two options, but both fail. Can anyone shed light on what is happening?

Option 1: I manually created the volume in OpenStack and referenced it in my YAML files. kubectl describe on the pod shows the error below:

Error: Volumes:
  jenkins-volume:
    Type:      Cinder (a Persistent Disk resource in OpenStack)
    VolumeID:  09405897-8477-4479-9730-843a80f88302
    FSType:    ext4
    ReadOnly:  false
  default-token-x76pk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x76pk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age   From                     Message
  ----     ------                 ----  ----                     -------
  Normal   Scheduled              2m    default-scheduler        Successfully assigned mongo1-6dfcc8fb88-rzh88 to k8slave1
  Normal   SuccessfulMountVolume  2m    kubelet, k8slave1        MountVolume.SetUp succeeded for volume "default-token-x76pk"
  Warning  FailedAttachVolume     1m    attachdetach-controller  AttachVolume.Attach failed for volume "jenkins-volume" : Volume "09405897-8477-4479-9730-843a80f88302" failed to be attached within the alloted time
  Warning  FailedMount            24s   kubelet, k8slave1        Unable to mount volumes for pod "mongo1-6dfcc8fb88-rzh88_db(6cf912cf-c238-11e8-8224-fa163e01527a)": timeout expired waiting for volumes to attach or mount for pod "db"/"mongo1-6dfcc8fb88-rzh88". list of unmounted volumes=[jenkins-volume]. list of unattached volumes=[jenkins-volume default-token-x76pk]
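
Given the FailedAttachVolume timeout, one thing worth checking is the volume's state on the OpenStack side. A rough sketch with the OpenStack CLI (assuming the openstack client is configured for the same project):

# The volume should be "available" (not "in-use"/attached elsewhere) and in an
# availability zone the worker nodes can reach
openstack volume show 09405897-8477-4479-9730-843a80f88302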

YAML files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo1
  template:
    metadata:
      labels:
        app: mongo1
    spec:
      containers:
      - name: mongo1
        image: mongo:3.5
        volumeMounts:
        - name: jenkins-volume
          mountPath: /data/db
        ports:
        - containerPort: 27017
      volumes:
      - name: jenkins-volume
        cinder:
          volumeID: 09405897-8477-4479-9730-843a80f88302
          fsType: ext4

Option 2: I created a new StorageClass and a new PVC, which dynamically provisions a PV. This creates a new volume in OpenStack, and it shows up in my Kubernetes cluster too.
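
For reference, a quick way to confirm the provisioned volume on both sides (a sketch; exact output varies by version):

# Kubernetes side: the PV created by the "test" StorageClass
kubectl get pv
# OpenStack side: the corresponding Cinder volume
openstack volume list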

I then associate the claim with my deployment YAML. The PVC is bound:

[root@K8Masternew cinder]# kubectl get pvc

NAME            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1-volume   Bound     pvc-2bb2a16a-c23d-11e8-8224-fa163e01527a   2Gi        RWO            test           29m


Error:
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  9s (x6 over 24s)  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had no available volume zone.
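
Since the scheduler reports "no available volume zone", it may help to compare the zone labels on the nodes with the label stamped on the provisioned PV. A sketch (the label key below is the pre-1.17 zone label; it may differ on other versions):

# Zone reported by each node (empty means the cloud provider did not set it)
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
# Zone label on the dynamically provisioned PV
kubectl get pv pvc-2bb2a16a-c23d-11e8-8224-fa163e01527a -o jsonpath='{.metadata.labels}'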

YAML files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo1
  template:
    metadata:
      labels:
        app: mongo1
    spec:
      containers:
      - name: mongo1
        image: mongo:3.5
        volumeMounts:
        - name: jenkins-volume
          mountPath: /data-db
        ports:
        - containerPort: 27017
      volumes:
      - name: jenkins-volume
        persistentVolumeClaim:
          claimName: claim1-volume

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: test
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: test

1 Answer

Answered by Rico

Looks like you may have taints on some or all of your nodes. Look for Taints in the output of kubectl describe node <node-name>. If the output is something like this:

Taints:             node-role.kubernetes.io/slave:NoSchedule

You can add a matching toleration to your pod spec. Since that taint has no value, operator: "Exists" matches it without specifying one:

tolerations:
- key: "node-role.kubernetes.io/slave"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/slave"
  operator: "Exists"
  effect: "NoExecute"

The other thing you can do is remove the taints altogether:

kubectl taint nodes <node-name> node-role.kubernetes.io/slave:NoSchedule-
kubectl taint nodes <node-name> node-role.kubernetes.io/slave:NoExecute-
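
To confirm which nodes still carry taints afterwards, something like this works:

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints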