Reputation: 589
I have a tiny Kubernetes cluster consisting of just two nodes running on t3a.micro
AWS EC2 instances (to save money).
I have a small web app that I am trying to run in this cluster. I have a single Deployment for this app, with spec.replicas set to 4.
When I ran this Deployment, I noticed that Kubernetes scheduled 3 of its pods on one node and 1 pod on the other node.
Is it possible to force Kubernetes to schedule at most 2 pods of this Deployment per node? Having 3 instances on the same node puts me dangerously close to running out of memory on these tiny EC2 instances.
Thanks!
Upvotes: 3
Views: 2173
Reputation: 54211
The correct solution for this is to set memory requests and limits on every pod, matching your steady-state and burst RAM consumption levels; the scheduler will then do all this math for you.
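As a sketch of what that looks like (the deployment name, image, and memory numbers here are placeholders; you would measure your app's actual usage and substitute real values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app        # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: my-web-app:latest   # placeholder image
          resources:
            requests:
              memory: "200Mi"        # steady-state consumption
            limits:
              memory: "300Mi"        # burst ceiling
```

Once requests are set, each node only accepts as many pods as its allocatable memory can cover, so on two t3a.micro nodes (about 1 GiB RAM each, minus system and kubelet overhead) the scheduler would stop packing pods onto a node before it overcommits.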
But for the future, and for others, there is a new feature that comes close to this: https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/. It's not an exact match: you can't set a global per-node cap, but you can require that pods be spread evenly across the cluster, subject to a maximum skew.
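For illustration, a sketch of such a constraint added to the Deployment's pod template spec (the `app: my-web-app` label is a placeholder; field names follow the Pod Topology Spread feature linked above):

```yaml
topologySpreadConstraints:
  - maxSkew: 1                          # node pod counts may differ by at most 1
    topologyKey: kubernetes.io/hostname # treat each node as its own topology domain
    whenUnsatisfiable: DoNotSchedule    # hard constraint, not just a preference
    labelSelector:
      matchLabels:
        app: my-web-app                 # count only this app's pods
```

With 4 replicas on 2 nodes, a 3/1 split has a skew of 2, so `maxSkew: 1` with `DoNotSchedule` forces a 2/2 split, which matches the "at most 2 per node" goal in this particular cluster.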
Upvotes: 5