Running Nitro Enclaves on Amazon EKS and getting "Insufficient hugepages-2Mi" on pods


I am following this article to use Nitro Enclaves on EKS. My pods are giving me a warning and are stuck in a Pending state.

0/2 nodes are available: 2 Insufficient aws.ec2.nitro/nitro_enclaves, 2 
Insufficient hugepages-2Mi. preemption: 0/2 nodes are available: 
2 No preemption victims found for incoming pod.

On checking the nodes I see the following:

kubectl describe node ip-x.us-east-2.compute.internal | grep -A 8 "Allocated resources:"
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                325m (4%)   0 (0%)
  memory             140Mi (0%)  340Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)

kubectl describe node ip-x.us-east-2.compute.internal | grep -A 13 "Capacity:"                                                                                                                                                                                          
Capacity:
  cpu:                8
  ephemeral-storage:  83873772Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15896064Ki
  pods:               29
Allocatable:
  cpu:                7910m
  ephemeral-storage:  76224326324
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             14879232Ki
  pods:               29

Pod definition (excerpt):

"containers": [
      {
        "name": "hello-container",
        "image": "hello-f9c725ee-4d02-4f48-8c3f-f341a754061b:latest",
        "command": [
          "/home/run.sh"
        ],
        "resources": {
          "limits": {
            "aws.ec2.nitro/nitro_enclaves": "1",
            "cpu": "250m",
            "hugepages-2Mi": "100Mi"
          },
          "requests": {
            "aws.ec2.nitro/nitro_enclaves": "1",
            "cpu": "250m",
            "hugepages-2Mi": "100Mi"
          }
        },

Things that I have tried: vertical and horizontal scaling, and restarting the kubelet service after reading a couple of other articles, but with no success; the pods are still stuck in a Pending state.


1 Answer

Answered by Gram:

I think there might be two potential problems here: one related to the lack of hugepages-2Mi, and one related to the lack of aws.ec2.nitro/nitro_enclaves. I'll be referencing https://docs.aws.amazon.com/enclaves/latest/user/kubernetes.html throughout.

For hugepages-2Mi, make sure that the launch template created in Step 1 of that guide is actually applied to the nodes in your enclave-enabled EKS node group, and that the user data is correctly set on that launch template. Note that if you modified the user data to request a number of MiB that is a multiple of 1024, the allocator reserves hugepages-1Gi instead of hugepages-2Mi, as described in step 5.1 under limits.
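
For reference, the user data in that guide's launch template is a MIME multi-part document whose shell script configures the Nitro Enclaves allocator. The sketch below is from memory, with illustrative values and the Amazon Linux 2 install command; copy the exact script from the linked doc:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Resources to reserve for enclaves on each node. 512 is not a multiple
# of 1024, so the reservation surfaces as hugepages-2Mi; 1024, 2048, ...
# would surface as hugepages-1Gi instead.
readonly CPU_COUNT=2
readonly MEMORY_MIB=512
readonly NE_ALLOCATOR_SPEC_PATH="/etc/nitro_enclaves/allocator.yaml"

# Install the Nitro Enclaves CLI, which provides the allocator service
# (Amazon Linux 2 shown here).
amazon-linux-extras install aws-nitro-enclaves-cli -y

# Write the reservation into the allocator spec and restart the service
# so the hugepages appear under the node's Capacity/Allocatable.
sed -i "s/cpu_count:.*/cpu_count: $CPU_COUNT/g" "$NE_ALLOCATOR_SPEC_PATH"
sed -i "s/memory_mib:.*/memory_mib: $MEMORY_MIB/g" "$NE_ALLOCATOR_SPEC_PATH"
systemctl restart nitro-enclaves-allocator.service
--==MYBOUNDARY==--

If this user data never ran (for example, the wrong launch template is attached to the node group), the node reports hugepages-2Mi: 0, exactly as in your kubectl describe node output above.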

For aws.ec2.nitro/nitro_enclaves, you need to make sure that a pod of the DaemonSet provided at https://raw.githubusercontent.com/aws/aws-nitro-enclaves-k8s-device-plugin/main/aws-nitro-enclaves-k8s-ds.yaml is running on your nitro-enabled node. It could be missing either because the DaemonSet was not correctly added to your cluster, or because the label on your nitro-enabled nodes is missing or incorrect (it should be aws-nitro-enclaves-k8s-dp=enabled, which should be visible in kubectl describe node). If a DaemonSet pod is in fact up and running, it may still be having issues; you can check with kubectl logs --namespace=kube-system -l name=aws-nitro-enclaves-k8s-dp --tail=1000
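
A quick way to work through those checks (a sketch; substitute your own node name for the ip-x one from the question):

# Apply (or re-apply) the device plugin DaemonSet
kubectl apply -f https://raw.githubusercontent.com/aws/aws-nitro-enclaves-k8s-device-plugin/main/aws-nitro-enclaves-k8s-ds.yaml

# Label the nitro-enabled node so the DaemonSet schedules a pod onto it
kubectl label node ip-x.us-east-2.compute.internal aws-nitro-enclaves-k8s-dp=enabled

# Confirm a plugin pod landed on that node, then tail its logs
kubectl get pods --namespace=kube-system -l name=aws-nitro-enclaves-k8s-dp -o wide
kubectl logs --namespace=kube-system -l name=aws-nitro-enclaves-k8s-dp --tail=1000

# Once the plugin is healthy, the extended resource should show up here
kubectl describe node ip-x.us-east-2.compute.internal | grep nitro_enclaves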