I'm trying to leverage a local volume dynamic provisioner for Kubernetes (Rancher's), with multiple instances, each with its own storage class, so that I can provide multiple types of local volumes based on their performance (e.g. SSD, HDD, etc.).
The underlying infrastructure is not symmetric: some nodes have only SSDs, some only HDDs, and some both.
I know that I can hint the scheduler to select the proper nodes by providing node affinity rules for pods.
But is there a better way to address this problem at the level of the provisioner / storage class only? E.g., can I make a storage class available only to a subset of the cluster nodes?
There is no need to define node affinity rules at the `Pod` level when using local persistent volumes. Node affinity can be specified in the `PersistentVolume` definition.

No, it cannot be specified at the `StorageClass` level, and you cannot make a `StorageClass` available only to a subset of nodes either.

But when it comes to the provisioner, I would say yes, it should be feasible, as one of the major tasks of a storage provisioner is creating matching `PersistentVolume` objects in response to `PersistentVolumeClaim` objects created by the user. You can read about it in the Kubernetes documentation on dynamic volume provisioning.

So, looking at the whole volume provisioning process from the very beginning, it goes as follows:
The user creates only a `PersistentVolumeClaim` object, in which they specify a `StorageClass`, and that claim is then used in a `Pod` definition. So in practice, in a `Pod` definition you only need to specify the proper `PVC`; there is no need to define any node-affinity rules there.
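For example, such a claim and a `Pod` that uses it might look like this (a minimal sketch; the names `myclaim` and `local-storage` match the example names used later in this answer, while the image and storage size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # the only "routing" hint the user provides
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: app
      image: nginx                  # placeholder image
      volumeMounts:
        - mountPath: /data
          name: local-vol
  volumes:
    - name: local-vol
      persistentVolumeClaim:
        claimName: myclaim          # the Pod only names the PVC, nothing else
```

Note that the `Pod` carries no node affinity of its own.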
A `Pod` references a `PVC`, the `PVC` in turn references a `StorageClass`, and the `StorageClass` references the `provisioner` that should be used.
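The `StorageClass` is where the provisioner is named. A minimal sketch, using Rancher's local-path provisioner as an example (`WaitForFirstConsumer` delays volume binding until a pod is scheduled, which is important for local volumes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: rancher.io/local-path        # Rancher's local-path provisioner
volumeBindingMode: WaitForFirstConsumer   # bind only once a consuming pod is scheduled
```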
So in the end it is the task of that `provisioner` to create a matching `PersistentVolume` object.
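Such a `PersistentVolume`, created with the node affinity built in, might look like this (a sketch modeled on the local-volume example in the Kubernetes docs; the path and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1               # placeholder path on the node
  nodeAffinity:                          # pins the volume (and any pod using it) to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1          # placeholder node name
```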
So a `Pod` which uses the `myclaim` `PVC` -> which references the `local-storage` `StorageClass` -> which selects the proper storage `provisioner` -> will automatically be scheduled on the node selected in the `PV` definition created by that provisioner.