This is an open issue: https://github.com/kubernetes/minikube/issues/9370
Steps to reproduce:
$ minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=1m --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-stabilization=1m
$ minikube addons enable metrics-server
- Create a deployment .yaml with resource requests and limits:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: orion
  name: orion
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orion
  template:
    metadata:
      labels:
        app: orion
    spec:
      containers:
      - args:
        - -dbhost
        - mongo-db
        - -logLevel
        - DEBUG
        - -noCache
        name: fiware-orion
        image: fiware/orion:2.3.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1026
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 0.5Gi
      restartPolicy: Always
$ kubectl -n test-1 autoscale deployment orion --min=1 --max=5 --cpu-percent=50
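For reference, the autoscale command above should be equivalent to applying a declarative manifest along these lines (a sketch based on the autoscaling/v1 schema; this manifest is not part of the original report):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: orion
  namespace: test-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orion
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```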
Full output of failed command:
Command $ kubectl -n test-1 describe hpa orion returns:
Name: orion
Namespace: udp-test-1
Labels: <none>
Annotations:
CreationTimestamp: Thu, 01 Oct 2020 14:00:46 +0000
Reference: Deployment/orion
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 20%
Min replicas: 1
Max replicas: 5
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 39s (x12 over 4m27s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Warning FailedGetResourceMetric 24s (x13 over 4m27s) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API
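When the HPA reports "no metrics returned from resource metrics API", checks like the following can help narrow down whether metrics-server itself is serving data (a troubleshooting sketch; the test-1 namespace is taken from the commands above and these checks are not part of the original report):

```shell
# Confirm the metrics.k8s.io APIService is registered and Available
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the resource metrics API directly for pods in the namespace
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/test-1/pods"

# kubectl top reads the same API; an error here points at metrics-server,
# not at the HPA controller
kubectl -n test-1 top pods
```

If the raw query returns an empty items list, metrics-server has not yet scraped the pod (it typically needs a minute or two after the pod starts before metrics appear).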
Command $ minikube addons list returns:
|-----------------------------|----------|--------------|
| ADDON NAME | PROFILE | STATUS |
|-----------------------------|----------|--------------|
| ambassador | minikube | disabled |
| dashboard | minikube | enabled ✅ |
| default-storageclass | minikube | enabled ✅ |
| efk | minikube | disabled |
| freshpod | minikube | disabled |
| gvisor | minikube | disabled |
| helm-tiller | minikube | disabled |
| ingress | minikube | enabled ✅ |
| ingress-dns | minikube | disabled |
| istio | minikube | disabled |
| istio-provisioner | minikube | disabled |
| kubevirt | minikube | disabled |
| logviewer | minikube | disabled |
| metallb | minikube | disabled |
| metrics-server | minikube | enabled ✅ |
| nvidia-driver-installer | minikube | disabled |
| nvidia-gpu-device-plugin | minikube | disabled |
| olm | minikube | disabled |
| pod-security-policy | minikube | disabled |
| registry | minikube | disabled |
| registry-aliases | minikube | disabled |
| registry-creds | minikube | disabled |
| storage-provisioner | minikube | enabled ✅ |
| storage-provisioner-gluster | minikube | disabled |
|-----------------------------|----------|--------------|
As you can see in the command output, even though the metrics server appears to be working properly (the orion HPA reports: resource cpu on pods (as a percentage of request): 0%), the events produced by the orion HPA show an error when computing the metrics:
horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API
Why is this horizontal pod autoscaler not working properly?
Other details:
Minikube version:
minikube version: v1.12.1
commit: 5664228288552de9f3a446ea4f51c6f29bbdd0e0-dirty
Kubernetes version:
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}