Reputation: 2545
When I provision a Kubernetes cluster using kubeadm, my worker nodes show up with a ROLES value of <none>. It's a known issue in Kubernetes, and a PR addressing it is currently in progress.
However, I would like to know if there is an option to add a role name manually for the node.
root@ip-172-31-14-133:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-14-133 Ready master 19m v1.9.3
ip-172-31-6-147 Ready <none> 16m v1.9.3
Upvotes: 96
Views: 174718
Reputation: 4116
On top of everything that has already been answered: since Kubernetes 1.16, the kubernetes.io/ and k8s.io/ label prefixes are reserved for Kubernetes core components.
The suggested way is to label the node under your own domain, mirroring the kubelet's convention.
For example, use a label key of the form node-role.[your-namespace]/[role]=. No label value is needed; the key alone acts as a tag.
kubectl label node [node-name] node-role.[your-namespace]/[role]=
Where:
- node-name is your node name (returned by kubectl get node)
- your-namespace is the "domain" of your namespace (e.g. acme.org)
- role is the node role (e.g. database)
For example:
kubectl label node node001 node-role.acme.org/database=
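Once the label is in place, you can select nodes carrying it with a plain label-key selector (node001 and acme.org are just the illustrative names from the example above):
kubectl get nodes -l node-role.acme.org/database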
Upvotes: 1
Reputation:
In my case, I was able to do that with the commands below.
kubectl label nodes ip-10-0-47-13.ec2.internal kubernetes.io/role=worker
Output
node/ip-10-0-47-13.ec2.internal labeled
mansoor.hasan@PURE-DEV-MANSOORHASAN EKS % kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-47-13.ec2.internal Ready worker 161m v1.20.15-eks-fb459a0
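If you want to confirm which label is backing that role, --show-labels prints the node's full label set (same node name as above):
kubectl get node ip-10-0-47-13.ec2.internal --show-labels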
Upvotes: 2
Reputation: 3074
Add Roles
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 166m v1.21.1
worker1 Ready worker 48m v1.21.1
worker2 Ready worker 16m v1.21.1
worker3 Ready worker 9m57s v1.21.1
$ kubectl label node worker1 node-role.kubernetes.io/worker=worker
$ kubectl label node worker2 node-role.kubernetes.io/worker=worker
$ kubectl label node worker3 node-role.kubernetes.io/worker=worker
If you want to overwrite an existing label, use the --overwrite flag:
$ kubectl label node worker1 node-role.kubernetes.io/worker=worker --overwrite
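And should you want to remove the role again later, the usual kubectl syntax is a trailing dash on the label key (a sketch, using worker1 from above):
$ kubectl label node worker1 node-role.kubernetes.io/worker-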
Upvotes: 5
Reputation: 2397
Add Role
kubectl label node <node name> node-role.kubernetes.io/<role name>=<any value>
Remove Role
kubectl label node <node name> node-role.kubernetes.io/<role name>-
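As a concrete sketch with a hypothetical node named node01 and role worker, those two commands would look like:
kubectl label node node01 node-role.kubernetes.io/worker=worker
kubectl label node node01 node-role.kubernetes.io/worker-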
Upvotes: 32
Reputation: 679
Before label:
general@master-node:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-node Ready master 23m v1.18.2
slave-node Ready <none> 19m v1.18.2
kubectl label nodes <your_node> kubernetes.io/role=<your_label>
In my case the node is slave-node, e.g.
kubectl label nodes slave-node kubernetes.io/role=worker
After label:
general@master-node:~$ kubectl label nodes slave-node kubernetes.io/role=worker
node/slave-node labeled
general@master-node:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-node Ready master 24m v1.18.2
slave-node Ready worker 21m v1.18.2
You can also change the label; just add --overwrite:
kubectl label --overwrite nodes <your_node> kubernetes.io/role=<your_new_label>
e.g.
kubectl label --overwrite nodes slave-node kubernetes.io/role=worker1
After overwriting the label:
general@master-node:~$ kubectl label --overwrite nodes slave-node kubernetes.io/role=worker1
node/slave-node labeled
general@master-node:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-node Ready master 36m v1.18.2
slave-node Ready worker1 32m v1.18.2
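If you ever want to clear the role entirely, removing the label with a trailing dash on the key should work as well:
kubectl label nodes slave-node kubernetes.io/role-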
Upvotes: 15
Reputation: 1144
This worked for me:
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker
NAME STATUS ROLES AGE VERSION
cb2.4xyz.couchbase.com Ready custom,worker 35m v1.11.1
cb3.5xyz.couchbase.com Ready worker 29m v1.11.1
I could not delete/update the old label, but I can live with it.
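For what it's worth, labels can normally be removed with a trailing dash on the key; assuming the stale custom role came from a node-role.kubernetes.io/custom label, something like this should clear it:
kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/custom-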
Upvotes: 111
Reputation: 18161
A node role is just a label with the format node-role.kubernetes.io/<role>.
You can add one yourself with kubectl label.
Upvotes: 65
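A minimal sketch, assuming a node named node01 and a role of worker:
kubectl label node node01 node-role.kubernetes.io/worker=
kubectl get nodes should then show worker in the ROLES column for that node.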