EKS nodeSelector results in pending pod


I am running into problems when using nodeSelector in my Kubernetes manifest. I have a node group in EKS with the label eks.amazonaws.com/nodegroup=dev-nodegroup. The node's name is derived from its IP, as usual in AWS. If I set nodeName in the manifest, everything works and the pod is deployed on the corresponding node, but when I add:

nodeSelector:
      eks.amazonaws.com/nodegroup: dev-nodegroup

in my manifest, at the same indentation level as the containers, the pod stays pending with a FailedScheduling event:

 Warning  FailedScheduling  3m31s (x649 over 11h)  default-scheduler  0/1 nodes are available: 1 node(s) had no available disk.

Am I doing something wrong? I would also like to add the zone label to the node selector, but that yields the same problem.
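For reference, this is roughly the layout I mean, with nodeSelector under spec at the same level as containers (the pod name, container name and image below are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: dev-pod               # placeholder name
spec:
  containers:
    - name: app               # placeholder container
      image: nginx            # placeholder image
  nodeSelector:
    eks.amazonaws.com/nodegroup: dev-nodegroup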

What does 'had no available disk' mean? I have checked my node with df -h and there is enough free disk space. I have seen other questions where the output says the node is unreachable or has some taint; mine doesn't have any.

Any help is greatly appreciated.

EDIT

I have a volume mounted in the pod like this:

volumes:
    - name: <VOLUME_NAME>
      awsElasticBlockStore:
        volumeID: <EBS_ID>
        fsType: ext4

Since an EBS volume lives in a single availability zone, I would need to set the zone selector as well.
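Assuming the node carries the standard zone label (topology.kubernetes.io/zone), this is roughly what I mean by adding the zone to the selector alongside the volume (the zone value, container name, image and mount path are placeholders):

spec:
  nodeSelector:
    eks.amazonaws.com/nodegroup: dev-nodegroup
    topology.kubernetes.io/zone: eu-west-1a   # placeholder zone; must match the zone of the EBS volume
  containers:
    - name: app                               # placeholder container
      image: nginx                            # placeholder image
      volumeMounts:
        - name: <VOLUME_NAME>
          mountPath: /data                    # placeholder mount path
  volumes:
    - name: <VOLUME_NAME>
      awsElasticBlockStore:
        volumeID: <EBS_ID>
        fsType: ext4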

I also have this StorageClass (I just noticed it):

Name:            gp2
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
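As far as I understand, this class is only used when storage is requested through a PersistentVolumeClaim, so my inline awsElasticBlockStore volume above doesn't go through it. For comparison, a claim using it would look roughly like this (name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-data              # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi           # placeholder size

With volumeBindingMode: WaitForFirstConsumer the volume would only be provisioned once a pod using the claim is scheduled, so it would end up in that pod's zone.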

EDIT2

My cluster has only one nodegroup with one node, in case this helps, too.


1 Answer

Answer by gohm'c:

(Asker, in a comment: "Yes, otherwise it would not deploy the pod when I set the nodeName instead.")

An EBS volume can only be mounted once at a time. If you run a second pod that tries to mount the same volume, you will get this error. In your case, since you only have one node (as your error shows: default-scheduler 0/1 nodes are available: 1 node(s) had no available disk.), you should delete the pod that currently has the volume mounted before running another pod that mounts the same volume.
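If the pod comes from a Deployment, one way to make sure the old replica releases the volume before the new one starts is the Recreate strategy. A minimal sketch (names, image and mount path are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-app                      # placeholder name
spec:
  replicas: 1                        # an EBS volume can only be attached to one pod at a time
  strategy:
    type: Recreate                   # old pod is terminated before the new one is created
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        app: dev-app
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: dev-nodegroup
      containers:
        - name: app                  # placeholder container
          image: nginx               # placeholder image
          volumeMounts:
            - name: <VOLUME_NAME>
              mountPath: /data       # placeholder mount path
      volumes:
        - name: <VOLUME_NAME>
          awsElasticBlockStore:
            volumeID: <EBS_ID>
            fsType: ext4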