kubernetes storage class node selector


I'm trying to leverage a local volume dynamic provisioner for k8s (Rancher's) with multiple instances, each with its own storage class, so that I can provide multiple types of local volumes based on their performance (e.g. SSD, HDD, etc.).

The underlying infrastructure is not symmetric: some nodes have only SSDs, some only HDDs, and some both.

I know that I can hint the scheduler to select the proper nodes by providing node affinity rules for pods.

But is there a better way to address this problem at the level of the provisioner / storage class only? E.g., make a storage class available only for a subset of the cluster nodes.


There is 1 answer

Answered by mario:

I know that I can hint the scheduler to select the proper nodes by providing node affinity rules for pods.

There is no need to define node affinity rules at the Pod level when using local persistent volumes. Node affinity can be specified in the PersistentVolume definition instead.

But is there a better way to address this problem at the level of the provisioner / storage class only? E.g., make a storage class available only for a subset of the cluster nodes.

No, it cannot be specified at the StorageClass level, nor can you make a StorageClass available only to a subset of nodes.

But when it comes to the provisioner, I would say yes, it should be feasible, as one of the major tasks of a storage provisioner is creating matching PersistentVolume objects in response to PersistentVolumeClaims created by users. You can read about it here:

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users.

So, looking at the whole volume provisioning process from the very beginning, it goes as follows:

The user creates only a PersistentVolumeClaim object, in which a StorageClass is specified:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage ### the StorageClass this claim requests

and it can be used in a Pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim ### references the PVC created above

So in practice, in a Pod definition you only need to specify the proper PVC; there is no need to define any node-affinity rules here.

A Pod references a PVC, the PVC references a StorageClass, and the StorageClass references the provisioner that should be used:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/my-fancy-provisioner ### the provisioner responsible for creating PVs for this class
volumeBindingMode: WaitForFirstConsumer
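
Applied to the asymmetric cluster from the question, this suggests one StorageClass per media type, each pointing at its own provisioner instance. A minimal sketch, assuming two hypothetical provisioner names (local.example.com/ssd and local.example.com/hdd), one per instance:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: local.example.com/ssd ### instance configured for SSD-backed nodes only
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: local.example.com/hdd ### instance configured for HDD-backed nodes only
volumeBindingMode: WaitForFirstConsumer

Note that volumeBindingMode: WaitForFirstConsumer matters for local volumes: it delays binding and provisioning until a Pod using the PVC exists, so the scheduler can take the resulting PV's node affinity into account.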

So in the end it is the task of the provisioner to create a matching PersistentVolume object, which can look as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /var/tmp/test
  nodeAffinity: ### set by the provisioner; pins the volume (and its consumers) to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ssd-node ### the node where the volume actually resides

So a Pod that uses the myclaim PVC -> which references the local-storage StorageClass -> which selects the proper storage provisioner -> will automatically be scheduled on the node selected in the PV definition created by that provisioner.
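
Since the question mentions Rancher's provisioner: if that is the Local Path Provisioner, node selection on the provisioner side is driven by a config.json held in a ConfigMap. Below is a minimal sketch, assuming that project's nodePathMap format; the node name ssd-node and the mount path are illustrative, and the empty default entry is assumed to stop provisioning on unlisted nodes (verify against the docs of your provisioner version):

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  ### nodePathMap tells this provisioner instance which nodes/paths it may use
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": []
        },
        {
          "node": "ssd-node",
          "paths": ["/mnt/ssd/local-path-provisioner"]
        }
      ]
    }

Running one such instance (with its own ConfigMap and StorageClass) per media type effectively makes each class usable only on the nodes that actually carry that kind of disk, which is the closest you can get to the behaviour asked about while staying at the provisioner level.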