PlagTag

Reputation: 6429

Kubernetes assign pods to pool

Is there a way to tell Kubernetes that my pods should only be deployed on a certain instance pool?

For example:

nodeSelector:
  pool: poolname

Assume I have already created my pool with something like:

gcloud container node-pools create poolname --cluster=cluster-1 --num-nodes=10 --machine-type=n1-highmem-32

Upvotes: 34

Views: 18352

Answers (4)

John David

Reputation: 772

If you are using DigitalOcean Kubernetes, you have access to the labels below for every node pool.

doks.digitalocean.com/node-id
doks.digitalocean.com/node-pool
doks.digitalocean.com/node-pool-id

You can use nodeSelector with any of the provided labels. The first label lets you pin a deployment to a particular node, while the last two target the node pool.

I would say targeting the node pool is preferable to targeting a specific node, as nodes can be destroyed and new ones created. A quick example is below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipengine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipengine
  template:
    metadata:
      labels:
        app: ipengine
    spec:
      containers:
      - name: ipengine
        image: <imageaddr.>
        args:
        - ipengine
        - --ipython-dir=/tmp/config/
        - --location=ipcontroller.default.svc.cluster.local
        - --log-level=0
        resources:
          requests:
            cpu: 1
            #memory: 3Gi
      nodeSelector:
        doks.digitalocean.com/node-pool: pool-highcpu32

The doks.digitalocean.com/node-pool label expects the pool name as its value; you can also use doks.digitalocean.com/node-pool-id, which expects the ID of the pool as its value.

Upvotes: 3

Phil P

Reputation: 11

Or you do both!

  • use labels to select which pool to run on
  • use taints and tolerations to ensure that other pods don't try to run on this node pool

That means you don't need to taint-and-tolerate on every pool, e.g. if you have a "default pool" where you want things to run by default (i.e. if users do nothing special to their pods, they will deploy here) and "other pools" for more special/restricted use cases.

This model allows ordinary pods to run without any special tweaks to their config, rather than tainting-and-tolerating everything, which would mean pods never run if configured without tolerations.

Depends on your/your user needs, how rigidly locked down you need everything, etc.

As always, there's more than one way to peel the dermis off a feline.
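A minimal sketch of the "do both" approach. This pod-spec fragment assumes a GKE pool named restricted-pool whose nodes carry a taint dedicated=restricted:NoSchedule (the pool name and taint key/value here are illustrative, not from the original answer):

```yaml
# Hypothetical pod spec fragment: the nodeSelector steers the pod onto the
# restricted pool, and the toleration lets it pass that pool's taint.
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: restricted-pool  # pick the pool by label
  tolerations:
  - key: "dedicated"           # must match the taint key on the pool's nodes
    operator: "Equal"
    value: "restricted"        # must match the taint value
    effect: "NoSchedule"
```

Pods without the toleration are repelled from restricted-pool by the taint, while pods in the default pool need no changes at all.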

Upvotes: 1

Joseph Lust

Reputation: 19975

You can also use taints and tolerations. That way, you don't have to know/hardcode the specific pool name, but simply that it will have the taint high-cpu, for example. Then you give your pods a toleration for that taint, and they can schedule on the target pool.

That allows you to have multiple pools, or to have HA pool deployment, where you can migrate from one pool to another by changing the taints on the pools.

The gotcha here, however, is that while a toleration allows pods to schedule on a tainted pool, it won't prevent them from scheduling elsewhere. So you'd need to taint pool-a with taint-a and pool-b with taint-b, and give the pods for pool-a and pool-b the matching tolerations to keep them out of each other's pools.
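A sketch of how the per-pool taints described above could be applied on GKE (pool and taint names are illustrative; assumes an existing cluster named cluster-1 and a configured kubectl):

```shell
# Taint a new pool at creation time via gcloud:
gcloud container node-pools create pool-a --cluster=cluster-1 \
  --node-taints=dedicated=pool-a:NoSchedule

# Or taint the nodes of an existing pool directly, selected by its pool label:
kubectl taint nodes -l cloud.google.com/gke-nodepool=pool-b \
  dedicated=pool-b:NoSchedule
```

Each deployment then carries a toleration whose key/value mirrors its pool's taint (dedicated=pool-a or dedicated=pool-b), keeping the two workloads out of each other's pools.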

Upvotes: 6

PlagTag

Reputation: 6429

OK, I found a solution:

gcloud creates a label with the pool name on every node in the pool. In my manifest I just dropped that label under nodeSelector. Very easy.
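If you want to see that label for yourself, you can list the nodes of the pool by it (assumes a live cluster and a kubectl configured against it; poolname is the pool from the question):

```shell
# List only the nodes belonging to the pool, matched by the GKE-applied label:
kubectl get nodes -l cloud.google.com/gke-nodepool=poolname

# Or dump all labels on all nodes to find it:
kubectl get nodes --show-labels
```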

Here is my manifest.yaml (I deploy ipyparallel with Kubernetes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipengine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipengine
  template:
    metadata:
      labels:
        app: ipengine
    spec:
      containers:
      - name: ipengine
        image: <imageaddr.>
        args:
        - ipengine
        - --ipython-dir=/tmp/config/
        - --location=ipcontroller.default.svc.cluster.local
        - --log-level=0
        resources:
          requests:
            cpu: 1
            #memory: 3Gi
      nodeSelector:
        #<labelname>:value
        cloud.google.com/gke-nodepool: pool-highcpu32

Upvotes: 56
