HorizontalPodAutoscaler: missing field "conditions"

Friends, I am trying to implement an HPA following the Kubernetes HPA tutorial, and I am getting the following error:

ValidationError(HorizontalPodAutoscaler.status): missing required field "conditions" in io.k8s.api.autoscaling.v2beta2.HorizontalPodAutoscalerStatus.

I couldn't find anything about this "conditions" field. Does anyone have an idea what I might be doing wrong? Here is the YAML of my HPA:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.name }}
  minReplicas: {{ .Values.deployment.minReplicas }}
  maxReplicas: {{ .Values.deployment.maxReplicas }}
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
status:
  observedGeneration: 1
  lastScaleTime: <some-time>
  currentReplicas: 2
  desiredReplicas: 2
  currentMetrics:
  - type: Resource
    resource:
      name: cpu
      current:
        averageValue: 0

And here the manifest of my deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.deployment.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.labels }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      labels:
        app: {{ .Values.labels }}
    spec:
      initContainers:
      - name: check-rabbitmq
        image: {{ .Values.initContainers.image }}
        command: ['sh', '-c',
        'until wget http://$(RABBITMQ_DEFAULT_USER):$(RABBITMQ_DEFAULT_PASS)@rabbitmq:15672/api/aliveness-test/%2F; 
        do echo waiting; sleep 2; done;']
        envFrom:
        - configMapRef:
            name: {{ .Values.name }}
      - name: check-mysql
        image: {{ .Values.initContainers.image }}
        command: ['sh', '-c', 'until nslookup mysql-primary.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']
      containers:
      - name: {{ .Values.name }}
        image: {{ .Values.deployment.image }}
        ports:
        - containerPort: {{ .Values.ports.containerPort }} 
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
        envFrom:
        - configMapRef:
            name: {{ .Values.name }}

1 Answer

Answered by PjoterS

Background

I am not sure why you want to create an HPA with a status section. If you remove this section, the HPA will be created without any issue.

In the documentation, under Understanding Kubernetes Objects - Object Spec and Status, you can find this information:

Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the object status. For objects that have a spec, you have to set this when you create the object, providing a description of the characteristics you want the resource to have: its desired state.

The status describes the current state of the object, supplied and updated by the Kubernetes system and its components. The Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.
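
To make the spec/status split concrete: with the php-apache HPA created later in this answer, you can query both halves of the object separately. A minimal sketch (the values come from the kubectl get hpa output shown below; yours will differ):

# spec: the desired state you declared when creating the HPA
$ kubectl get hpa php-apache -o jsonpath='{.spec.maxReplicas}'
10
# status: the observed state, written by the HPA controller
$ kubectl get hpa php-apache -o jsonpath='{.status.currentReplicas}'
1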

Your situation is partially described in the Appendix: Horizontal Pod Autoscaler Status Conditions:

When using the autoscaling/v2beta2 form of the HorizontalPodAutoscaler, you will be able to see status conditions set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether or not the HorizontalPodAutoscaler is able to scale, and whether or not it is currently restricted in any way.
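
You do not need to write these conditions yourself; the controller fills them in, and you can read them back with kubectl describe. A sketch of what that looks like (the reason and message below are taken from my cluster's output later in this answer; yours will differ):

$ kubectl describe hpa php-apache
...
Conditions:
  Type           Status  Reason               Message
  ----           ------  ------               -------
  AbleToScale    True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
...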

Example from my GKE test cluster

As I mentioned before, if you remove the status section, you will be able to create the HPA.

$ kubectl apply -f - <<EOF
> apiVersion: autoscaling/v2beta2
> kind: HorizontalPodAutoscaler
> metadata:
>   name: hpa-apache
> spec:
>   scaleTargetRef:
>     apiVersion: apps/v1
>     kind: Deployment
>     name: php-apache
>   minReplicas: 1
>   maxReplicas: 3
>   metrics:
>   - type: Resource
>     resource:
>       name: cpu
>       target:
>         type: Utilization
>         averageUtilization: 50
> EOF
horizontalpodautoscaler.autoscaling/hpa-apache created

Following the HPA documentation, I created the php-apache Deployment.

$ kubectl apply -f https://k8s.io/examples/application/php-apache.yaml
deployment.apps/php-apache created
service/php-apache created

When you execute the kubectl autoscale command from the tutorial, it creates an HPA for the php-apache deployment:

$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled

Now you can see the HPA resource using either kubectl get hpa or kubectl get hpa.v2beta2.autoscaling. The output is the same.

The first command shows all HPA objects regardless of their apiVersion (v2beta2, v2beta1, etc.), while the second explicitly requests the v2beta2 representation. My cluster uses v2beta2 by default, so the output of both commands is the same.

$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          76s
$ kubectl get hpa.v2beta2.autoscaling
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          84s
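
If you are not sure which autoscaling API versions your cluster serves (and therefore which one kubectl get hpa resolves to), you can list them. The output below is what a cluster of this vintage typically exposes; treat it as illustrative:

$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2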

Executing the command below writes the HPA configuration to a new file. The configuration is taken from the HPA already created by the previous kubectl autoscale command.

$ kubectl get hpa.v2beta2.autoscaling -o yaml > hpa-v2.yaml
# Using `kubectl get hpa hpa-apache -o yaml > hpa-v2.yaml` would produce the same kind of output for that single HPA
$ cat hpa-v2.yaml
apiVersion: v1
items:
- apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
...
  status:
    conditions:
    - lastTransitionTime: "2020-12-11T10:44:43Z"
      message: recent recommendations were higher than current one, applying the highest
        recent recommendation
      reason: ScaleDownStabilized
      status: "True"
      type: AbleToScale
      ...
    currentMetrics:
    - resource:
        current:
          averageUtilization: 0
          averageValue: 1m
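
Note that an export like this contains system-managed fields (status, metadata.resourceVersion, and so on). If you plan to re-apply the file, strip them first; a minimal sketch using yq (assuming yq v4 is installed, an assumption on my part, not part of the original workflow):

$ yq eval 'del(.items[].status)' hpa-v2.yaml > hpa-clean.yaml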

Conclusion

The status describes the current state of the object, supplied and updated by the Kubernetes system and its components.

If you want to create a resource from YAML that includes a status section, you have to provide a value for status.conditions, which expects an array of condition objects:

status:
  conditions:
  - lastTransitionTime: "2020-12-11T10:44:43Z"

Quick solution

Just remove the status section from your YAML.
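
Applied to your manifest, that means keeping only metadata and spec; everything under status is managed by the controller. Your HPA template with the status section dropped:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.name }}
  minReplicas: {{ .Values.deployment.minReplicas }}
  maxReplicas: {{ .Values.deployment.maxReplicas }}
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50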

Let me know if you still encounter any issues after removing the status section from the YAML manifest.