Reputation: 267
I have a cluster on EKS with the cluster autoscaler (CA) enabled. Let's assume there are 3 nodes: node-1, node-2, node-3, and each node can hold a maximum of 10 pods. When the 31st pod comes into the picture, the CA will launch a new node and the pod gets scheduled on it. Now let's say 4 pods on node-2 are no longer required and they go down. As per the requirement, if a new pod is launched now, the scheduler places the new pod on the 4th node (the one launched by the CA) and not on node-2. I also want that, going down further, if pods are removed from the nodes, then the new pods should come onto the already existing nodes and not onto a new node put up by the CA. I tried updating the EKS default scheduler config file using a scheduler plugin but am unable to do so.
I think we can create a second scheduler, but I am not properly aware of the process. Any workaround or suggestions would help a lot.
This is the command: "kube-scheduler --config custom.config" and this is the error "attempting to acquire leader lease kube-system/kube-scheduler..."
This is my custom.config file
apiVersion: kubescheduler.config.k8s.io/v1beta1
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 100
profiles:
  - schedulerName: kube-scheduler-new
    plugins:
      score:
        disabled:
          - name: '*'
        enabled:
          - name: NodeResourcesMostAllocated
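In case it matters, my understanding is that a pod would opt into this profile via spec.schedulerName; a minimal sketch of the pod spec I have in mind (the nginx image is just a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulerName: kube-scheduler-new   # must match the profile name in custom.config
  containers:
    - name: app
      image: nginx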
Upvotes: 4
Views: 7018
Reputation: 6853
How to manage pod scheduling?
A custom scheduler is, of course, one way to go if you have a specific use case, but if you just want a particular node that the pod should be scheduled onto, Kubernetes provides options to do so.
Scheduling algorithm selection can be broken into two parts: filtering the feasible nodes and then scoring them.
Kubernetes works great if you let the scheduler decide which node the pod should go to, and it comes with tools that give the scheduler hints.
Taints and tolerations can be used to repel pods from nodes (a minimal example follows this list). A taint can have one of three effects:
- NoSchedule, which means there will be no scheduling onto the node.
- PreferNoSchedule, which means the scheduler will try to avoid scheduling onto the node.
- NoExecute, which also affects scheduling, and additionally affects pods already running on the node: if you add this taint to a node, pods that are running on it and don't tolerate it will be evicted.
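For illustration, a minimal sketch, assuming a node named node-1 and a hypothetical key/value dedicated=infra: the node is tainted with kubectl taint nodes node-1 dedicated=infra:NoSchedule, and only pods carrying a matching toleration can be scheduled onto it:

apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
    - key: "dedicated"       # matches the taint key on node-1
      operator: "Equal"
      value: "infra"
      effect: "NoSchedule"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]

Pods without this toleration will simply not be considered for node-1.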
Node affinity, on the other hand, can be used to attract certain pods to specific nodes. Similar to taints, node affinity gives you some options for fine-tuning your scheduling preferences (a sketch follows this list):
- requiredDuringSchedulingIgnoredDuringExecution, which can be used as a hard requirement and tells the scheduler that the rules must be met for the pod to be scheduled onto a node.
- preferredDuringSchedulingIgnoredDuringExecution, which can be used as a soft requirement and tells the scheduler to try to enforce it, but it is not guaranteed.
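A minimal sketch of the soft variant, assuming a hypothetical node label node-type=existing on the nodes you want to prefer:

apiVersion: v1
kind: Pod
metadata:
  name: prefer-existing-pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100                  # higher weight = stronger preference
          preference:
            matchExpressions:
              - key: node-type
                operator: In
                values:
                  - existing
  containers:
    - name: app
      image: nginx

Swapping preferredDuringSchedulingIgnoredDuringExecution for requiredDuringSchedulingIgnoredDuringExecution (which takes nodeSelectorTerms instead of weight/preference) turns this into a hard requirement.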
PodAffinity can be used if you, for example, want your front-end pods to run on the same node as your database pod; it can similarly be expressed as a hard or soft requirement. PodAntiAffinity can be used if you don't want certain pods to run alongside each other.
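A sketch of podAntiAffinity, assuming pods labeled app=web (a hypothetical label) should not end up on the same node, using the standard kubernetes.io/hostname topology key:

apiVersion: v1
kind: Pod
metadata:
  name: web-2
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - web
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: nginx

Using podAffinity with the same structure would instead attract this pod to nodes already running app=web pods.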
Upvotes: 3