Mazia

Reputation: 63

Kubernetes doesn't schedule the pods on the worker nodes

I have the following YAML for pod creation. I have two nodes: one master and one worker. I am creating two pods, and I need one pod scheduled on the master and the other on the worker node. I have not specified anything for the second pod, testing1, because by default pods are scheduled on worker nodes. However, testing1 is also being scheduled on the master node.

YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test:latest
      command: ["sleep"]
      args: ["infinity"]
      imagePullPolicy: Never
      ports:
        - containerPort: 8080
  nodeSelector:
    node_type: master_node
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
  name: testing1
  labels:
    app: testing1
spec:
  containers:
    - name: testing1
      image: testing1:latest
      command: ["sleep"]
      args: ["infinity"]
      imagePullPolicy: Never

Any help in solving this issue is highly appreciated. Thanks.

Upvotes: 0

Views: 1995

Answers (2)

Malgorzata

Reputation: 7023

When a Kubernetes cluster is first set up, a taint is applied to the master node; this automatically prevents any pods from being scheduled on it. You have enabled scheduling pods on the master, because you labeled it and added a toleration to make pod test run there. As a result, kube-scheduler will take the master node into consideration when scheduling other pods. For every newly created pod, and for any other unscheduled pod, kube-scheduler selects an optimal node for it to run on, including the master node if the NoSchedule taint is not in place. Read more about node-selection.

This means that you also have to configure the worker node and pod testing1 to make it possible to deploy this pod on a specific worker node.
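For example, testing1 could be pinned to the worker with a nodeSelector. This is only a sketch: the label key/value node_type=worker_node mirrors the master label from the question and is an assumption, as is the placeholder node name; the label would first be added with `kubectl label nodes <worker-name> node_type=worker_node`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testing1
  labels:
    app: testing1
spec:
  containers:
    - name: testing1
      image: testing1:latest
      command: ["sleep"]
      args: ["infinity"]
      imagePullPolicy: Never
  # Assumed label: add it to the worker node first with
  #   kubectl label nodes <worker-name> node_type=worker_node
  nodeSelector:
    node_type: worker_node
```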

Different options for scheduling pods on specific nodes besides nodeSelector:

  1. Instead of nodeSelector you can add affinity: under the spec: section:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            ...

Add the nodeSelector (and, if your cluster version still supports it, the alpha tolerations annotation) to your pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/tolerations: |
      [
        {
          ...
        }
      ]
spec:
  nodeSelector:
    ...

  2. Instead of nodeSelector you can add an annotation like the one below:
scheduler.alpha.kubernetes.io/affinity: >
  {
    "nodeAffinity": {
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {
            "matchExpressions": [
              {
               ...
              }
            ]
          }
        ]
      }
    }
  }

Take a look: pod-deployment-on-master.
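Putting the pieces together, a pod that targets the master via nodeAffinity and tolerates the master taint might look like the sketch below. The affinity/toleration structure is standard; the pod name and image are assumptions, and on clusters initialized with newer kubeadm the key is node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: test:latest
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Match any node carrying the master role label
              - key: node-role.kubernetes.io/master
                operator: Exists
  tolerations:
    # Allow scheduling despite the master's NoSchedule taint
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```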

Keep in mind that NoSchedule will not evict pods that are already scheduled. So if you want to test your changes, first delete the deployment/pods and, after labeling/tainting etc., redeploy them.

Note for future:

Pods running on a master node can access/hijack Kubernetes functionality not normally accessible to pods on non-master nodes, so there is always the possibility of someone gaining access to a pod/container and thereby gaining access to the master node. If a rogue pod running on your master node disrupts the master components, it can destabilize your entire cluster. This is clearly a concern for production deployments, but if you are looking to maximize utilization of a small number of nodes in a development/experimentation environment, it should be fine to run a couple of extra pods on the master.

Upvotes: 0

John Peterson

Reputation: 389

You can use nodeAffinity / antiAffinity to solve this.
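As a sketch of the anti-affinity route (pod names and labels are taken from the question; the rest is assumed): testing1 can be kept off whatever node test lands on by requiring that no pod labelled app: test is running on the same host.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testing1
  labels:
    app: testing1
spec:
  containers:
    - name: testing1
      image: testing1:latest
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Refuse any node already running a pod labelled app=test
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["test"]
          # Spread at the granularity of individual nodes
          topologyKey: kubernetes.io/hostname
```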

Why would you assign pods to a master node?

The master nodes are the control plane of your k8s cluster, and scheduling pods there can have a negative impact if those pods consume too many resources.

If you really want to assign a pod to a master node, I recommend you untaint a single master node by removing the NoSchedule taint, and then add a nodeAffinity targeting that one master node, unless you really need to run this on all your master nodes.
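A sketch of that approach: first remove the taint from one master with `kubectl taint nodes master-1 node-role.kubernetes.io/master:NoSchedule-` (the trailing `-` deletes the taint; the node name master-1 is an assumption, and newer clusters use the node-role.kubernetes.io/control-plane key instead), then pin the pod to exactly that node:

```yaml
# Hypothetical pod spec fragment: target the single untainted master by hostname
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname   # built-in node label
                operator: In
                values: ["master-1"]          # assumed node name
```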

Upvotes: 1
