love

Reputation: 1040

How to assign a different number of pods to different nodes in Kubernetes for the same deployment?

I am running a deployment on a cluster of 1 master and 4 worker nodes (two 32 GB machines and two 4 GB machines). I want to run a maximum of 10 pods on the 4 GB machines and 50 pods on the 32 GB machines.

Is there a way to assign a different number of pods to different nodes in Kubernetes for the same deployment?

Upvotes: 2

Views: 672

Answers (2)

acid_fuji

Reputation: 6853

I want to run a maximum of 10 pods on the 4 GB machines and 50 pods on the 32 GB machines.

This is possible by configuring the kubelet to limit the maximum pod count on the node:

// maxPods is the number of pods that can run on this Kubelet.
MaxPods int32 `json:"maxPods,omitempty"`

The field is defined in the KubeletConfiguration type in the Kubernetes source on GitHub.
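For example, on each 4 GB node the kubelet could be pointed (via its --config flag) at a configuration file like the sketch below; the value 10 matches the limit you asked for, and the file location is up to you:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# maxPods caps how many pods this node's kubelet will run (default is 110)
maxPods: 10

You would set maxPods: 50 in the file on the 32 GB nodes instead, and restart the kubelet for the change to take effect.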

Is there a way to assign a different number of pods to different nodes in Kubernetes for the same deployment?

Adding this requirement makes it impossible. There is no native mechanism in Kubernetes at this point to satisfy it, and that is more or less in keeping with how Kubernetes works and its principles: you schedule your application and let the scheduler decide where it should go. The exception is when a very specific resource like a GPU is required, and that can be expressed with labels, affinity, etc.
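For instance, a deployment can express a preference (though not a count) for the bigger machines with node affinity. This is only a sketch: the node-size=large label is an assumption, you would have to label the 32 GB nodes yourself, and the name and image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical name
spec:
  replicas: 60                      # the 10 + 50 pods from the question
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          # prefer, but do not require, nodes labelled node-size=large
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-size      # assumed label on the 32 GB nodes
                operator: In
                values:
                - large
      containers:
      - name: app
        image: nginx                # placeholder image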

If you look at the Kubernetes API, you will notice that there is no field that would make your request possible. However, the API can be extended with custom resources, and this problem could be tackled by writing your own scheduler. That is not an easy fix, though.
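If you did go that route, pods opt in to a custom scheduler by name via spec.schedulerName; my-scheduler below is a stand-in for whatever scheduler you would deploy:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  schedulerName: my-scheduler   # hypothetical custom scheduler
  containers:
  - name: app
    image: nginx                # placeholder image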

You may also want to set appropriate memory requests. With higher requests, the scheduler will fit more pods onto the nodes that have more memory available. It's not ideal, but it is something.
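As a rough sketch: if every pod requests 600Mi, only about 6 of them fit on a 4 GB node while roughly 50 fit on a 32 GB node (system reservations reduce both numbers a bit). The request value is an assumption you would tune to your workload:

apiVersion: v1
kind: Pod
metadata:
  name: sized-pod             # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    resources:
      requests:
        memory: "600Mi"       # assumed per-pod request driving the math above
      limits:
        memory: "1Gi"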

Upvotes: 2

Tushar Mahajan

Reputation: 2160

Well, in general scheduling is done on the basis of algorithms like round robin, least used, and so on.

We are also free to add node affinities via selectors, but even that won't control the pod count.

You may have to rebalance things manually across the worker nodes.

Say -

you ran kubectl top nodes to see the available capacity once the deployment was done,

and kubectl get po -o wide to see which nodes the pods landed on.

Now, to force the pods onto a specific node, say one of the 32 GB machines, you can temporarily mark the 4 GB nodes as unschedulable by executing the following command:

kubectl cordon {node_name}
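Under the hood, cordoning just sets spec.unschedulable on the Node object, so a cordoned node looks roughly like this (the node name is a placeholder):

apiVersion: v1
kind: Node
metadata:
  name: worker-4gb-1          # placeholder node name
spec:
  unschedulable: true         # what kubectl cordon sets; uncordon clears it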

Now kill the pods that are running on the 4 GB machines which you want moved to the 32 GB machines. After you delete them, they will automatically be respawned on one of the 32 GB nodes.

Then you can execute

kubectl uncordon {node_name}

to mark the node as schedulable again.

This is a somewhat involved approach and will require a fair amount of calculation as well.

Upvotes: 0
