Reputation: 6558
I'm expecting kubectl get nodes <node> -o yaml
to show spec.providerID
(see reference below) once the kubelet has been given the additional flag --provider-id=provider://nodeID
. I've used the /etc/default/kubelet
file to add more flags to the command line when the kubelet is started/restarted. (This is on a k8s 1.16 cluster.) I see the additional flags via a systemctl status kubelet --no-pager
call, so the file is being respected.
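For reference, the entry in that file looks roughly like this (a sketch; KUBELET_EXTRA_ARGS is an assumption based on the variable name the kubeadm systemd drop-in reads, and the provider string is the placeholder from above):

```shell
# /etc/default/kubelet  (Debian/Ubuntu path; RPM-based distros typically
# use /etc/sysconfig/kubelet instead)
# KUBELET_EXTRA_ARGS is read by the kubeadm systemd drop-in and appended
# to the kubelet command line on restart.
KUBELET_EXTRA_ARGS=--provider-id=provider://nodeID
```

followed by a systemctl restart kubelet so the new flag takes effect.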
However, the value is not returned by a kubectl get node <node> -o yaml
call. I thought the problem might be that the node was already registered, but I believe the kubelet re-registers when it starts up. I've seen log lines via journalctl -u kubelet
suggesting that it has gone through registration.
How can I add a provider ID to a node manually?
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#nodespec-v1-core
Upvotes: 2
Views: 6361
Reputation: 596
How a kubelet
is configured on the node itself is separate (AFAIK) from the node's definition in the
control plane, which is responsible for updating state in the central etcd
store; so it's possible for the two to fall out of sync. In other words, you need to communicate with the control plane to update its records.
In addition to Subramanian's suggestion, kubectl patch node
would also work, and has the added benefit of being easily reproducible/scriptable compared to manually editing the YAML manifest; it also leaves a "paper trail" in your shell history should you need to refer back. Take your pick :) For example,
$ kubectl patch node my-node -p '{"spec":{"providerID":"foo"}}'
node/my-node patched
$ kubectl describe node my-node | grep ProviderID
ProviderID: foo
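If you want just the field value without grepping the describe output, a jsonpath query (a standard kubectl output option) works too:

```shell
# Print only spec.providerID for the node
$ kubectl get node my-node -o jsonpath='{.spec.providerID}'
foo
```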
Hope this helps!
Upvotes: 4
Reputation: 1359
You can edit the node object and add the providerID under the spec section.
kubectl edit node <Node Name>
...
spec:
podCIDR:
providerID:
Upvotes: -1