Ruchir Bharadwaj

Reputation: 1272

Soft scheduling placing pods on the same node

I have a requirement to schedule pods on different nodes; however, if the number of pod replicas grows beyond the number of nodes, the scheduler should reuse the existing nodes.

For the above requirement I chose to use soft scheduling:

podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
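
For context, this rule sits under the pod template of my Deployment, roughly like the following (manifest trimmed; the container name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - foo
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - name: foo
        image: foo:latest   # placeholder image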

My understanding is that if there are 3 nodes available, the scheduler will place replicas across each of them first and only then reuse nodes that already run a replica.

However, I am seeing that the pods get scheduled on the same node each time, i.e.

The deployment's node selection should give it 4 candidate nodes, i.e.

kubectl get nodes --selector "kubernetes.io/hostname,provisioner=arm64-ondemand-provisioner,arch=arm64"
NAME                                        STATUS   ROLES    AGE     VERSION
ip-10-20-0-190.us-west-2.compute.internal   Ready    <none>   5h11m   v1.24.15-eks-fae4244
ip-10-20-1-226.us-west-2.compute.internal   Ready    <none>   5h14m   v1.24.15-eks-fae4244
ip-10-20-2-207.us-west-2.compute.internal   Ready    <none>   10m     v1.25.11-eks-a5565ad
ip-10-20-2-45.us-west-2.compute.internal    Ready    <none>   2m36s   v1.25.11-eks-a5565ad

However, even though my podAntiAffinity rule is set as

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - foo
          topologyKey: kubernetes.io/hostname
        weight: 100

I get the same node for both pods:

kubectl get po --selector "app=foo" -o wide --show-labels

NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES   LABELS
foo-86d79fcd86-f55rt   1/1     Running   0          64m   10.20.1.68    ip-10-20-1-226.us-west-2.compute.internal   <none>           <none>            app.kubernetes.io/instance=foo,app.kubernetes.io/name=foo,app=foo,pod-template-hash=86d79fcd86
foo-86d79fcd86-hkdr9   1/1     Running   0          64m   10.20.1.203   ip-10-20-1-226.us-west-2.compute.internal   <none>           <none>            app.kubernetes.io/instance=foo,app.kubernetes.io/name=foo,app=foo,pod-template-hash=86d79fcd86

I have confirmed that the other 3 nodes don't have a foo pod, yet they are not being picked by the scheduler.

If I change soft scheduling to hard scheduling, it does work; however, if replicas increase beyond the number of nodes, a new node has to be provisioned.
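
For reference, the hard-scheduling variant I tried is roughly the same term moved under requiredDuringSchedulingIgnoredDuringExecution (which takes the term directly, without a weight):

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - foo
        topologyKey: kubernetes.io/hostname

With this in place each foo pod lands on its own node, but once replicas exceed the node count the extra pods cannot schedule until a new node is provisioned.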

Kindly help me understand this behaviour and what can be done here for soft scheduling to work as I intended.

Upvotes: 0

Views: 238

Answers (0)
