Reputation: 35
I am using this command for pod creation plus node selection:
kubectl run newpod --image image1 --command run over --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/hostname": "one-worker-node" } } } } }'
The problem is that it only runs on the single worker node named "one-worker-node". I could not make it schedule onto two or more worker nodes, like this:
kubectl run newpod --image image1 --command run over --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/hostname": "one-worker-node" : "second-worker-node"} } } } }'
Upvotes: 3
Views: 6692
Reputation: 8786
You can't do it with nodeSelector, because you would need to pass two key-value pairs with the same key, something like this:
kubectl run newpod --image image1 --overrides='{ "apiVersion": "v1", "spec": { "nodeSelector": { "kubernetes.io/hostname": "one-worker-node", "kubernetes.io/hostname": "second-worker-node" } } }'
Which is equivalent to this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: one-worker-node
    kubernetes.io/hostname: second-worker-node
If you deployed this pod, only the last key would take effect, as the first one gets overwritten when the Pod YAML is parsed (a YAML mapping can't hold two entries with the same key).
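If you want to verify this yourself, a client-side dry run renders the object kubectl would submit without actually creating it. A minimal sketch, assuming the manifest above is saved as dup-selector.yaml (a made-up file name); note that some kubectl versions warn about or reject the duplicate key instead of silently dropping it:
# Render the Pod locally; the surviving nodeSelector entry should be
# the last key parsed, i.e. second-worker-node.
kubectl create -f dup-selector.yaml --dry-run=client -o yaml | grep -A 2 nodeSelector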
So you should use nodeAffinity instead. This one should work from the command line:
kubectl run newpod --image image1 --overrides='{ "spec": { "affinity": { "nodeAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": { "nodeSelectorTerms": [{ "matchExpressions": [{ "key": "kubernetes.io/hostname", "operator": "In", "values": [ "one-worker-node", "second-worker-node" ]} ]} ]} } } } }'
Which is equivalent to this:
apiVersion: v1
kind: Pod
metadata:
  name: newpod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - one-worker-node
            - second-worker-node
  containers:
  - name: image1
    image: image1
You can add all your candidate nodes to the values list. You might also want to make it a preference with preferredDuringSchedulingIgnoredDuringExecution, or combine both a preference and a requirement, as in the sketch below.
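A sketch of combining both (the node names are the same placeholders as above): the requirement restricts the pod to the two listed nodes, and the weighted preference nudges the scheduler toward one-worker-node when both are feasible.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - one-worker-node
            - second-worker-node
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - one-worker-node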
PoC
root@master-1-v1-20:~# kubectl get no
NAME             STATUS   ROLES      AGE   VERSION
master-1-v1-20   Ready    master     43d   v1.20.2
worker-1-v1-20   Ready    worker-1   42d   v1.20.2
worker-2-v1-20   Ready    worker-2   42d   v1.20.2
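Since the affinity matches the kubernetes.io/hostname label rather than the node name itself, you can confirm the label values first (they usually equal the node name, but not always, e.g. when the kubelet is started with --hostname-override):
# Show each node with its kubernetes.io/hostname label as an extra column.
kubectl get nodes -L kubernetes.io/hostname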
root@master-1-v1-20:~# grep affinity -A8 affinity.yaml
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-1-v1-20
root@master-1-v1-20:~# kubectl create -f affinity.yaml
pod/newpod created
root@master-1-v1-20:~# kubectl get po newpod -owide
NAME     READY   STATUS    RESTARTS   AGE   IP                NODE             NOMINATED NODE   READINESS GATES
newpod   1/1     Running   0          18s   192.168.127.102   worker-1-v1-20   <none>           <none>
I change the name to newpod-2 and configure it to run on the second node:
root@master-1-v1-20:~# vim affinity.yaml
root@master-1-v1-20:~# grep affinity -A8 affinity.yaml
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-2-v1-20
root@master-1-v1-20:~# kubectl create -f affinity.yaml
pod/newpod-2 created
root@master-1-v1-20:~# kubectl get po newpod newpod-2 -owide
NAME       READY   STATUS    RESTARTS   AGE     IP                NODE             NOMINATED NODE   READINESS GATES
newpod     1/1     Running   0          4m26s   192.168.127.102   worker-1-v1-20   <none>           <none>
newpod-2   1/1     Running   0          3m25s   192.168.118.172   worker-2-v1-20   <none>           <none>
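As a side note, beyond what the PoC covers: if the set of candidate nodes changes often, it can be simpler to put a shared label of your own on them and go back to a plain nodeSelector on that label. A sketch, with scheduling-pool=batch as a made-up label:
# Label both workers, then select on the shared label instead of hostnames.
kubectl label nodes worker-1-v1-20 worker-2-v1-20 scheduling-pool=batch
kubectl run newpod-3 --image image1 --overrides='{ "spec": { "nodeSelector": { "scheduling-pool": "batch" } } }'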
Upvotes: 6