user15224751

How to stop and restart nodes in Kubernetes

I have 3 nodes, as shown by kubectl get nodes:

NAME      STATUS   ROLES    AGE    VERSION
a1       Ready    master   133m   v1.18.6-gke.6600
a2       Ready    master   132m   v1.18.6-gke.6600
a3       Ready    master   132m   v1.18.6-gke.6600

So the status of those nodes is Ready. I want to stop the first node and then restart it again.

I tried:

kubectl cordon a1

NAME      STATUS                     ROLES    AGE    VERSION
a1        Ready,SchedulingDisabled   master   138m   v1.18.6-gke.6600
a2        Ready                      master   137m   v1.18.6-gke.6600
a3        Ready                      master   137m   v1.18.6-gke.6600

But my backend is still working, and even if I cordon all the nodes, my backend keeps working. I want my backend service to stop and then resume. I also tried:

kubectl drain a1

error: unable to drain node "abm-cp2", aborting command...

There are pending nodes to be drained:
 a2
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/anetd-4pr9j, kube-system/etcd-defrag-8fs99, kube-system/kube-proxy-8cgpf, kube-system/localpv-mlfnf, kube-system/metallb-speaker-ljsdv, kube-system/node-exporter-dfrnq, kube-system/stackdriver-log-forwarder-t5s88

Upvotes: 4

Views: 30015

Answers (1)

Harsh Manvar

Reputation: 30113

Maybe you are misunderstanding the meaning of cordoning and draining a node.

Cordon node:

It means no new containers will get scheduled on this node; however, the existing containers already running there will be kept on that same node.

Drain node:

Draining a node will evict all the containers from that specific node and schedule them onto other nodes.

As far as I understand, what you want to do is:

I want to stop the first node and then restart it again

If you can access the node, you can SSH into the worker node and run: systemctl restart kubelet
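The SSH-and-restart approach can be sketched as follows (the SSH user is a placeholder; adjust the hostname and user for your environment):

```shell
# SSH into the node (hostname "a1" taken from the question; "user" is a placeholder)
ssh user@a1

# on the node, restart the kubelet service
sudo systemctl restart kubelet

# verify that the kubelet came back up
sudo systemctl status kubelet
```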

OR

You can scale the deployment down to zero replicas, which stops the containers/pods, and scale it back up to resume them.
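A minimal sketch of the scale-down/scale-up approach, assuming a Deployment named "backend" (the name is a placeholder for your own workload):

```shell
# stop: scale the Deployment to zero replicas; all its pods terminate
kubectl scale deployment backend --replicas=0

# ... later, resume by scaling back up
kubectl scale deployment backend --replicas=1

# watch the pods come back
kubectl get pods -w
```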

You can also delete the node, and a new one will join the Kubernetes cluster:

kubectl delete node a1

This is similar to restarting the node; in this case you must be using node pools in GKE, AWS, or another cloud provider.

Note: if you are running a single replica of your application, you might face downtime if you delete the node or restart the kubelet.

I would suggest you cordon and drain the node before you restart it:

  1. kubectl cordon a1 (stop new pod scheduling)
  2. kubectl drain a1 (remove running containers)
  3. kubectl delete node a1 (remove the node from the cluster) or systemctl restart kubelet (restart the node)

Regarding the error:

There are pending nodes to be drained: a2 error: cannot delete DaemonSet-managed Pods

You need to use the --ignore-daemonsets flag when you drain Kubernetes nodes:

--ignore-daemonsets=false: Ignore DaemonSet-managed pods.

so the command will be something like:

kubectl drain <node-name> --ignore-daemonsets
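Putting it together, a sketch of the full stop/resume cycle on node a1 (kubectl uncordon, which re-enables scheduling once the node is back, completes the cycle):

```shell
# evict workload pods, skipping DaemonSet-managed ones
kubectl drain a1 --ignore-daemonsets

# ... restart the node or do your maintenance here ...

# allow pods to be scheduled onto the node again
kubectl uncordon a1
```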

Upvotes: 8
