I have 3 k8s control plane nodes, each of which must run a haproxy pod.
DaemonSet approach: The usual solution would be to deploy haproxy as a DaemonSet, so each node gets one haproxy pod. However, during rollout of a new version there will be downtime, because on a given node the old and new DaemonSet pods are not allowed to run concurrently (the old pod is terminated before its replacement starts).
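For context, this is roughly the default update strategy a DaemonSet uses (a sketch of the defaults): with maxUnavailable: 1 and no surge, the old pod on a node is removed before its replacement is created there, hence the brief downtime per node.
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1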
Deployment approach: Another solution would be to deploy them using a Deployment. Then I define that I need 3 replicas of haproxy and have to decide how to spread them across the nodes.
- I can't use strict antiAffinity with requiredDuringSchedulingIgnoredDuringExecution, because pods with the new version will never get scheduled on the nodes. preferredDuringSchedulingIgnoredDuringExecution does whatever it wants:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - haproxy
        topologyKey: "kubernetes.io/hostname"
- I tried with topologySpreadConstraints, however during deployment we have terminating and starting pods in parallel, which causes the scheduler to assign pods to nodes unevenly: it can't distinguish between pods in Terminating state and those in Running state. The below config is what I use:
topologySpreadConstraints:
- labelSelector:
    matchLabels:
      app: haproxy
  maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
I've read about the descheduler, but I want to save resources and keep my cluster as predictable as possible.
I could run more replicas (pods) and hope that at least one lands on each node, but that's a waste of resources.
matchLabelKeys, described here, might work for distinguishing between new and old pods, but when I add it to my 1.26.5 cluster it doesn't seem to apply.
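For reference, this is the shape of the constraint I tried, following the docs' example of keying on pod-template-hash so that only pods from the same ReplicaSet are counted against the skew (on 1.26 the field is still alpha behind the MatchLabelKeysInPodTopologySpread feature gate, which may be why it has no effect):
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: haproxy
  matchLabelKeys:
  - pod-template-hash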
What are my options here?
Thanks
You can add tolerations under spec.tolerations for the control-plane taint node-role.kubernetes.io/control-plane:NoSchedule (and, if needed, a nodeSelector on the matching node label) so the DaemonSet pods can run on the control plane nodes, as explained in the documentation.
Below is a sample yaml file for reference:
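A minimal sketch; the image tag, port, and resource names are placeholders:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy
  labels:
    app: haproxy
spec:
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      # run only on control plane nodes (label set by kubeadm)
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      # tolerate the control plane taint so the pods can be scheduled there
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: haproxy
        image: haproxy:2.8   # placeholder image/tag
        ports:
        - containerPort: 8443   # placeholder port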
You can read more about DaemonSets in this documentation.