In our Amazon EKS cluster, the Cluster Autoscaler only launches m6i.large instances and never scales up with m6i.xlarge, even though both instance types are configured on the node group. This limits the capacity and flexibility available to our workloads.
Steps to Reproduce:
Create an Amazon EKS cluster with a managed node group configured as shown below:
- name: xxx-124-managed
  availabilityZones: ["ap-southeast-2a","ap-southeast-2b","ap-southeast-2c"]
  instanceTypes: ["m6i.xlarge","m6i.large"]
  privateNetworking: true
  iam:
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/xxxxx
    withAddonPolicies:
      autoScaler: true
      externalDNS: true
      albIngress: true
      cloudWatch: true
  securityGroups:
    attachIDs:
      - sg-xxxxx # SG-1
  minSize: 1
  maxSize: 20
  labels:
    nodegroup-type: xxxxx
    k8s.xxxx/nodeGroup: xxxxx
  desiredCapacity: 1
  taints:
    - effect: NoSchedule
      key: k8s.xxxxx/nodeGroup
      value: xxxxxx
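For context, our reading of the Cluster Autoscaler FAQ is that the autoscaler builds one node template per node group and assumes all nodes in that group are identical, so instance types mixed in a single group should have the same vCPU and memory; m6i.large (2 vCPU, 8 GiB) and m6i.xlarge (4 vCPU, 16 GiB) do not match. If that turns out to be the cause, one option we are weighing is one node group per instance type, roughly as sketched below (names and min/desired values are placeholders, not our actual setup):

  - name: xxx-124-managed-large        # hypothetical name
    instanceTypes: ["m6i.large"]
    availabilityZones: ["ap-southeast-2a","ap-southeast-2b","ap-southeast-2c"]
    privateNetworking: true
    minSize: 1
    maxSize: 20
    desiredCapacity: 1
    # iam, securityGroups, labels, and taints identical to the original group
  - name: xxx-124-managed-xlarge       # hypothetical name
    instanceTypes: ["m6i.xlarge"]
    availabilityZones: ["ap-southeast-2a","ap-southeast-2b","ap-southeast-2c"]
    privateNetworking: true
    minSize: 0                         # assumes scale-to-zero is acceptable
    maxSize: 20
    desiredCapacity: 0
    # iam, securityGroups, labels, and taints identical to the original group

With separate groups, the autoscaler's expander, rather than the EC2 Auto Scaling allocation strategy inside a single mixed-instances group, would decide which instance type to scale.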
Expected Behavior: The Cluster Autoscaler should consider both m6i.xlarge and m6i.large instance types for scaling based on the workload requirements and resource demands.
Actual Behavior: The Cluster Autoscaler is only launching m6i.large instances, which limits the scaling options and resource flexibility for our workloads.
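To illustrate the kind of workload we would expect to force an m6i.xlarge scale-up, here is a minimal test pod (hypothetical, reusing our redacted taint key) whose requests cannot fit on an m6i.large:

apiVersion: v1
kind: Pod
metadata:
  name: xlarge-sizing-test             # hypothetical name
spec:
  tolerations:
    - key: k8s.xxxxx/nodeGroup         # matches the node group taint above
      operator: Equal
      value: xxxxxx
      effect: NoSchedule
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: "3"                     # exceeds the 2 vCPU of an m6i.large
          memory: 10Gi                 # exceeds the 8 GiB of an m6i.large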
Additional Information:
The node group configuration includes both m6i.xlarge and m6i.large instance types. We have reviewed the taints and tolerations, and no restrictions are in place that would prevent m6i.xlarge instances from launching. Pod scheduling requirements have been checked, and pod tolerations match the node group taints.
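We have also looked at the autoscaler's expander configuration. For reference, the relevant container args on our cluster-autoscaler Deployment look roughly like the sketch below (the flag names are standard Cluster Autoscaler flags; the image tag, expander choice, and <cluster-name> placeholder are illustrative, not our exact setup):

containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.24.0
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --expander=least-waste         # pick the node group that wastes the least CPU/memory
      - --balance-similar-node-groups=true
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<cluster-name>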
Environment Details:
Kubernetes version: EKS v1.24
Node group configuration: managed node group, as provided above
Please advise on how to resolve this issue so that the Cluster Autoscaler considers both m6i.xlarge and m6i.large instances for scaling, giving our workloads the necessary resource flexibility.