HPA scale down not happening properly


I have created an HPA for my deployment. It works fine for scaling up to the max replicas (6 in my case), and when the load reduces it scales down to 5, but it is supposed to come back to my original replica count (1 in my case) once the load becomes normal. I have verified that after 30-40 minutes my application still has 5 replicas; it is supposed to be 1 replica.

[ec2-user@ip-192-168-x-x ~]$ kubectl describe hpa admin-dev -n dev

Name: admin-dev
Namespace: dev
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 24 Oct 2019 07:36:32 +0000
Reference: Deployment/admin-dev
Metrics: ( current / target )
resource memory on pods (as a percentage of request): 49% (1285662037333m) / 60%
Min replicas: 1
Max replicas: 10
Deployment pods: 3 current / 3 desired
Conditions:
  Type           Status Reason             Message
  ----           ------ ------             -------
  AbleToScale    True   ReadyForNewScale   recommended size matches current size
  ScalingActive  True   ValidMetricFound   the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited False  DesiredWithinRange the desired count is within the acceptable range 

Events:
  Type   Reason            Age   From                      Message
  ----   ------            ----  ----                      -------
  Normal SuccessfulRescale 13m   horizontal-pod-autoscaler New size: 2; reason: memory resource utilization (percentage of request) above target
  Normal SuccessfulRescale 5m27s horizontal-pod-autoscaler New size: 3; reason: memory resource utilization (percentage of request) above target

There are 4 answers

0
Azeer Esmail

I answered this on GitHub: https://github.com/kubernetes/kubernetes/issues/78761#issuecomment-1075814510

Here's a summary: the problem is in the calculation that decides whether to scale up or down. When scaling down, the equation works when the change in utilization caused by the load difference is big, which is usually the case with CPU (e.g. 100m - 500m <=> 20% - 100%), but it fails when the change in utilization is small, which is usually the case with memory (e.g. 160Mi - 200Mi <=> 80% - 100%). For now it's better to stick to the CPU metric and make sure that currentMetricValue at idle is at most half of desiredMetricValue. You can apply this rule to both metrics:

currentMetricValue * 2 <= desiredMetricValue

to make sure it always scales down.
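
For reference, the formula in the Kubernetes docs is desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), and the controller skips scaling while the ratio is within a tolerance band (0.1 by default, set by the --horizontal-pod-autoscaler-tolerance flag). A rough sketch in Python (not the real controller code), plugged with the numbers from the question's output:

import math

def desired_replicas(current_replicas, current_value, target_value, tolerance=0.1):
    # Sketch of the HPA formula: ceil(currentReplicas * currentMetricValue / desiredMetricValue).
    ratio = current_value / target_value
    if abs(ratio - 1.0) <= tolerance:
        # Within the tolerance band the HPA leaves the replica count alone.
        return current_replicas
    return math.ceil(current_replicas * ratio)

# Numbers from the question's output: 3 replicas, memory at 49% of a 60% target.
print(desired_replicas(3, 49, 60))   # ceil(3 * 0.82) = ceil(2.45) = 3 -> no scale down
# Utilization has to drop quite a bit further before the ceiling rounds down:
print(desired_replicas(3, 30, 60))   # ceil(3 * 0.50) = 2 -> scales down

With the question's numbers the recommendation stays at 3, which is exactly the "recommended size matches current size" condition shown in the kubectl describe output.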

2
kool

In this case the Horizontal Pod Autoscaler is working as designed.

Autoscaler can be configured to use one or more metrics.

  1. Autoscaling based on a single metric - the HPA sums up the metric values of all the pods, divides that by the target value set on the HorizontalPodAutoscaler resource, and then rounds it up to the next larger integer.

desired_replicas = ceil(sum(utilization) / desired_utilization)

Example: when it's configured to scale on CPU, if the target is set to 30% and CPU usage is 97%, then 97% / 30% = 3.23 and the HPA will round it up to 4 (the next larger integer).

  2. Autoscaling based on multiple pod metrics - the HPA calculates the replica count for each metric individually and then takes the highest value (see the sketch after this list).

Example: if three pods are required to achieve the target CPU usage and two pods are required to achieve the target memory usage, the autoscaler will scale to three pods - the highest number needed to meet the targets.

  3. Autoscaling on custom metrics - allows you to scale up/down based on non-resource metric types, for example scaling your frontend application based on queries per second.
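
To illustrate point 2 with made-up numbers: each metric produces its own recommendation using the same ceil(current / target) arithmetic, and the HPA acts on the largest of them. A minimal sketch:

import math

def replicas_for_metric(current_replicas, current_value, target_value):
    # Per-metric recommendation: ceil(currentReplicas * current / target).
    return math.ceil(current_replicas * current_value / target_value)

current_replicas = 2
metrics = {                      # hypothetical (current %, target %) pairs
    "cpu":    (97, 30),
    "memory": (45, 60),
}
recommendations = {name: replicas_for_metric(current_replicas, cur, tgt)
                   for name, (cur, tgt) in metrics.items()}
print(recommendations)                  # {'cpu': 7, 'memory': 2}
print(max(recommendations.values()))    # the HPA scales to the highest: 7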

I hope it helps.

2
weibeld

When the load decreases, the HPA intentionally waits a certain amount of time before scaling the app down. This is known as the cooldown delay and helps prevent the app from being scaled up and down too frequently. The result is that for a certain time the app runs at the previous high replica count even though the metric value is way below the target. This may look like the HPA doesn't respond to the decreased load, but it eventually will.

However, the default duration of the cooldown delay is 5 minutes. So if after 30-40 minutes the app still hasn't been scaled down, that is strange, unless the cooldown delay has been set to something else with the --horizontal-pod-autoscaler-downscale-stabilization flag of the controller manager.
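
To make that concrete, here is a toy model in Python (not the real controller code) of the downscale-stabilization behaviour: the controller keeps the recommendations from the last few minutes and only scales down as far as the highest of them, so a short dip in load does not immediately remove replicas.

from collections import deque
import time

class DownscaleStabilizer:
    # Toy model: remember recent recommendations and only honour the highest one.
    def __init__(self, window_seconds=300):      # 300s mirrors the default 5-minute delay
        self.window_seconds = window_seconds
        self.recommendations = deque()           # (timestamp, replica count)

    def stabilized(self, recommendation):
        now = time.time()
        self.recommendations.append((now, recommendation))
        # Forget recommendations that have fallen out of the window.
        while self.recommendations[0][0] < now - self.window_seconds:
            self.recommendations.popleft()
        # Scale down only as far as the highest recent recommendation allows.
        return max(r for _, r in self.recommendations)

So right after a load spike that asked for 6 replicas, the stabilized value stays at 6 until that recommendation ages out of the window.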

In the output that you posted, the metric value is 49% with a target of 60% and the current replica count is 3, which actually doesn't seem too bad.

An issue might be that you're using the memory utilisation as a metric, which is not a good autoscaling metric.

An autoscaling metric should linearly respond to the current load across the replicas of the app. If the number of replicas is doubled, the metric value should halve, and if the number of replicas is halved, the metric value should double. The memory utilisation in most cases doesn't show this behaviour. For example, if each replica uses a fixed amount of memory, then the average memory utilisation across the replicas stays roughly the same regardless of how many replicas were added or removed. The CPU utilisation generally works much better in this regard.
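
A quick numerical illustration of that point, with entirely made-up numbers (a 512Mi memory request, a 500m CPU request, a fixed ~300Mi resident set per pod, and a 900m total CPU load that is shared across the replicas):

# Hypothetical workload: memory per pod is fixed, CPU load is shared across replicas.
request_memory_mi = 512
request_cpu_m = 500
memory_per_pod_mi = 300
total_cpu_load_m = 900

for replicas in (1, 2, 4):
    memory_util = 100 * memory_per_pod_mi / request_memory_mi        # unchanged by scaling
    cpu_util = 100 * (total_cpu_load_m / replicas) / request_cpu_m   # halves as replicas double
    print(f"{replicas} replica(s): memory ~{memory_util:.0f}%, cpu ~{cpu_util:.0f}%")

# 1 replica(s): memory ~59%, cpu ~180%
# 2 replica(s): memory ~59%, cpu ~90%
# 4 replica(s): memory ~59%, cpu ~45%

Doubling the replicas halves the CPU utilisation but leaves the memory utilisation where it was, so a memory-based HPA gets no signal telling it to scale back down.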

0
Raviraj Pophale

Change the autoscaling policy to keep only the CPU utilization metric. In most applications the CPU metric works properly; only if the app is memory-driven do you need to use the memory metric in the autoscaling policy.

Ref.: https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-autoscaling.html