What happened: I've configured an HPA with these details:
```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: api-horizontalautoscaler
  namespace: develop
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: api-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 400Mi
```
What I expected to happen: The pods scaled up to 3 when we put some load on and the average memory exceeded 400Mi, which was expected. Now the average memory has gone back down to roughly 300Mi, yet the pods still haven't scaled down even though they have been below the target for a couple of hours.
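For reference, the autoscaler's own view of the metric and its scaling decisions can be inspected with standard kubectl commands (namespace and name taken from the manifest above):

```console
# Current vs. target metrics, conditions, and recent scaling events
kubectl -n develop describe hpa api-horizontalautoscaler

# Full object, including status.currentMetrics and lastScaleTime
kubectl -n develop get hpa api-horizontalautoscaler -o yaml
```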
A day later:
I expected the pods to scale down once the average memory fell below 400Mi, but they still haven't.
Environment:
- Kubernetes version (using `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.9", GitCommit:"3e4f6a92de5f259ef313ad876bb008897f6a98f0", GitTreeState:"clean", BuildDate:"2019-08-05T09:22:00Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}re configuration:
- OS (e.g. `cat /etc/os-release`):
> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
- Kernel (e.g. `uname -a`): x86_64 x86_64 x86_64 GNU/Linux
I would really like to know why this is happening. I'll be happy to provide any additional information needed.
Thanks!
There are two things to look at:
1. The `autoscaling/v2beta2` API was introduced in Kubernetes 1.12, so even though you are on 1.13 (which is about six minor versions old now; upgrading to a newer release is recommended) it should work fine. Try changing your `apiVersion:` to `autoscaling/v2beta2` (a manifest sketch follows below).
2. Check the value of the controller manager's downscale stabilization flag, `--horizontal-pod-autoscaler-downscale-stabilization`, after making the API change suggested above (see the check below).
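For the first point, here is a minimal sketch of the same HPA rewritten for `autoscaling/v2beta2` (in v2beta2 the resource metric uses a `target:` block instead of `targetAverageValue`; I've also assumed the Deployment can be referenced via `apps/v1`, since `extensions/v1beta1` is deprecated):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: api-horizontalautoscaler
  namespace: develop
spec:
  scaleTargetRef:
    apiVersion: apps/v1          # assumes the Deployment is served by apps/v1
    kind: Deployment
    name: api-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue       # replaces targetAverageValue from v2beta1
        averageValue: 400Mi
```

For the second point, assuming a kubeadm-style cluster where the controller manager runs as a static pod, you can check whether the flag is set explicitly like this (if it isn't, the default of 5m0s applies):

```console
# From anywhere with kubectl access
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml \
  | grep horizontal-pod-autoscaler

# Or directly on the control-plane node
grep horizontal-pod-autoscaler /etc/kubernetes/manifests/kube-controller-manager.yaml
```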