Reputation: 866
I have an EKS cluster with three availability zones and a handful of nodes per zone. I use topologySpreadConstraints to spread pods across all nodes in all zones:
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: foo
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: foo
To update nodes, I delete all nodes in a zone and then spin up new ones:
eksctl delete nodegroup --cluster $CLUSTER -n $NODE_GROUP_NAME -w --region $REGION
eksctl create nodegroup --config-file nodegroup.yaml
This evicts all pods onto nodes in the other zones. After the new nodes are up, the pods do not move back, so they are no longer spread across all zones.
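I could presumably force a re-spread by recreating the pods so the scheduler applies the constraints again, for example (assuming the pods belong to a Deployment named foo):
kubectl rollout restart deployment/foo
But I would like this to happen automatically.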
I tried podAntiAffinity, but I only found rules for scheduling, not for execution. And a required anti-affinity rule on the zone key means only one pod can be scheduled per zone, which limits scalability.
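For reference, this is roughly the kind of rule I mean (a sketch only; the label values are illustrative):
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: foo
        topologyKey: topology.kubernetes.io/zone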
I use the AWS Cluster Autoscaler, not Karpenter.
Is there a way to keep all pods spread across nodes and zones?
Upvotes: 0
Views: 44