I was exploring resource quotas in Kubernetes. My problem statement: there was a situation where a person accidentally wrote a large value for a memory limit, like 10Gi, and that triggered unwanted autoscaling.
I want to cap these values. I was reading about Limit Ranges (https://kubernetes.io/docs/concepts/policy/limit-range/) and Resource Quota Per PriorityClass (https://kubernetes.io/docs/concepts/policy/resource-quotas/). I want to cap the memory and CPU request/limit values for a pod/container. What are the best practices or recommendations for such a use case?
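For context, a LimitRange seems closest to what I need, since it caps each individual container rather than the namespace aggregate that a ResourceQuota governs. A sketch of what I have in mind (namespace and values are placeholders, not from a real cluster):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-caps
  namespace: my-team        # placeholder namespace
spec:
  limits:
    - type: Container
      max:                  # no single container may set requests/limits above this
        cpu: "2"
        memory: 2Gi
      default:              # limit applied when a container specifies none
        cpu: 500m
        memory: 512Mi
      defaultRequest:       # request applied when a container specifies none
        cpu: 250m
        memory: 256Mi
```

With this in place, a pod whose container asks for a 10Gi memory limit would be rejected at admission time instead of scheduling and triggering autoscaling.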
If you use Terraform and EKS Blueprints, you can define the quotas per team, as explained here.
In my case I created a quota per namespace in the `vars.yaml` for each cluster and added them with a `for` expression:
main.tf
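A sketch of what that `main.tf` could look like, assuming the Terraform Kubernetes provider and a `quotas` list loaded from the values file (all variable and attribute names here are illustrative assumptions, not the answerer's actual code):

```hcl
# Load per-namespace quota definitions from the values file.
locals {
  quotas = yamldecode(file("${path.module}/values.yaml"))["quotas"]
}

# One ResourceQuota per namespace, built with a for expression.
resource "kubernetes_resource_quota" "team" {
  for_each = { for q in local.quotas : q.namespace => q }

  metadata {
    name      = "team-quota"
    namespace = each.value.namespace
  }

  spec {
    hard = {
      "requests.cpu"    = each.value.requests_cpu
      "requests.memory" = each.value.requests_memory
      "limits.cpu"      = each.value.limits_cpu
      "limits.memory"   = each.value.limits_memory
    }
  }
}
```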
values.yaml
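And a possible structure for the per-namespace quota variables in that file (namespaces and figures are made-up examples):

```yaml
# One entry per namespace; each becomes a ResourceQuota.
quotas:
  - namespace: team-a
    requests_cpu: "4"
    requests_memory: 8Gi
    limits_cpu: "8"
    limits_memory: 16Gi
  - namespace: team-b
    requests_cpu: "2"
    requests_memory: 4Gi
    limits_cpu: "4"
    limits_memory: 8Gi
```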