My requirement is to scale pods up on a custom metric: as the number of pending messages in the queue grows, more pods should be created to process the jobs. In Kubernetes, scaling up works fine with the Prometheus adapter and the Prometheus Operator.
However, the pods run long-lived jobs. When the HPA re-evaluates the custom metric it also tries to scale down, which kills pods in the middle of an operation and loses the message they were processing. How can I make the HPA remove only free pods, i.e. pods that are not currently running a job?
Prometheus adapter rule to collect the custom metric
    - seriesQuery: '{namespace="default",service="hpatest-service"}'
      resources:
        overrides:
          namespace:
            resource: "namespace"
          service:
            resource: "service"
      name:
        matches: "msg_consumergroup_lag"
      metricsQuery: 'avg_over_time(msg_consumergroup_lag{topic="test",consumergroup="test"}[1m])'
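Once this rule is loaded, it is worth confirming that the metric is actually exposed through the custom metrics API before wiring it into the HPA. A quick check (namespace and service name taken from the seriesQuery above; jq is optional and only used for pretty-printing) could look like this:

    # verify the adapter serves the metric for the service object
    kubectl get --raw \
      "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/hpatest-service/msg_consumergroup_lag" | jq .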
HPA Configuration
    - type: Object
      object:
        describedObject:
          kind: Service
          name: custommetric-service
        metric:
          name: msg_consumergroup_lag
        target:
          type: Value
          value: 2
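After creating the HPA you can check that it is reading the metric; consumer-hpa below is just a placeholder for whatever your HPA object is named:

    # TARGETS should show the current lag against the target value of 2
    kubectl get hpa consumer-hpa -n default

    # the Events section reports any problems fetching the custom metric
    kubectl describe hpa consumer-hpa -n default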
I will suggest an idea here: run a custom script that disables the HPA as soon as it has scaled up. The script should keep checking the pods and their running processes; once no process is running, it can either re-enable the HPA and let it scale down on its own, or delete the idle pods with kubectl and then enable the HPA again.
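A minimal sketch of that idea, assuming the HPA is called consumer-hpa, the consumer pods carry the label app=consumer, each pod touches a marker file /tmp/busy while it is processing a message, and the original HPA manifest is kept in hpa.yaml (all of these names are assumptions to adapt). Since an HPA cannot be paused directly, the sketch "disables" it by deleting the object and re-creates it afterwards:

    #!/usr/bin/env bash
    # Rough sketch only: pause autoscaling, wait for pods to become idle,
    # remove the idle pods, then restore the HPA from its manifest.
    set -euo pipefail

    NS=default
    HPA_NAME=consumer-hpa          # assumed HPA name
    POD_SELECTOR=app=consumer      # assumed pod label
    BUSY_MARKER=/tmp/busy          # assumed "currently processing" marker file
    HPA_MANIFEST=hpa.yaml          # the original HPA manifest, kept next to this script

    # Delete the HPA so it cannot scale the workload down mid-job
    kubectl -n "$NS" delete hpa "$HPA_NAME"

    # For every consumer pod, wait until it reports idle, then delete it
    for pod in $(kubectl -n "$NS" get pods -l "$POD_SELECTOR" -o name); do
      while kubectl -n "$NS" exec "$pod" -- test -f "$BUSY_MARKER"; do
        sleep 30   # still processing, check again later
      done
      kubectl -n "$NS" delete "$pod"
    done

    # Re-create the HPA so normal autoscaling resumes
    kubectl -n "$NS" apply -f "$HPA_MANIFEST"

Note that if the pods belong to a Deployment, deleting a pod only makes the ReplicaSet recreate it; to actually shrink the fleet you would also lower the replica count, or simply let the re-created HPA scale down once the lag metric has dropped.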