hlwo jiv

Reputation: 1

Pod status when upgrading a k8s cluster

The k8s documentation says that kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but I don't understand what the status of the Pods is while the upgrade is running.

Upvotes: 0

Views: 519

Answers (2)

matt_j

Reputation: 4614

There are different upgrade strategies, but I assume you want to upgrade your cluster with zero downtime.
In this case, the upgrade procedure at a high level is the following:

  1. Upgrade control plane nodes - should be executed one node at a time.
  2. Upgrade worker nodes - should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads (see the command sketch below).
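
At the level of commands, the control plane part of the procedure looks roughly like this. This is a minimal sketch, assuming a Debian/Ubuntu node with the kubeadm packages installed via apt; the version numbers are placeholders, substitute your target version:

# 1. Upgrade the kubeadm binary on the first control plane node
$ apt-get update && apt-get install -y kubeadm=1.20.5-00

# 2. Validate the cluster and see which versions you can upgrade to
$ kubeadm upgrade plan

# 3. Upgrade the control plane components (this step does not touch your workloads)
$ kubeadm upgrade apply v1.20.5

# 4. Drain the node, upgrade kubelet/kubectl, restart the kubelet, then uncordon
$ kubectl drain <node-to-drain> --ignore-daemonsets
$ apt-get install -y kubelet=1.20.5-00 kubectl=1.20.5-00
$ systemctl daemon-reload && systemctl restart kubelet
$ kubectl uncordon <node-to-drain>

On the remaining control plane nodes you run kubeadm upgrade node instead of kubeadm upgrade apply; the drain/upgrade/uncordon steps are the same for worker nodes.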

It's important to prepare the node for maintenance by marking it 'unschedulable' and evicting its workloads (so they get recreated on other nodes):

$ kubectl drain <node-to-drain> --ignore-daemonsets

NOTE: If there are Pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet, the drain operation will be refused unless you use the --force option.

As you can see in the Safely Drain a Node documentation:

You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod's containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
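
If your application must always keep a minimum number of replicas available, you can define a PodDisruptionBudget before draining. A minimal sketch, assuming a workload whose Pods carry the label app=my-app (the PDB name and label are hypothetical):

$ kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=2

With such a budget in place, kubectl drain evicts the matching Pods only as long as at least 2 replicas remain available on other nodes; otherwise the eviction is retried until the budget allows it.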

Once you have finished the upgrade procedure on this node, you need to bring it back online by running:

$ kubectl uncordon <node name> 

To sum up: kubectl drain changes the status of the Pods (they are evicted and the workload is recreated on other nodes). Unlike kubectl drain, kubeadm upgrade does not touch/affect your workloads; it only modifies components internal to Kubernetes.
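
If you want to see this from the Pods' point of view, watch them in a second terminal while the node is being drained (the node name is a placeholder, as above):

# Terminal 1: watch Pod status and node placement
$ kubectl get pods -o wide -w

# Terminal 2: drain the node that is about to be upgraded
$ kubectl drain <node-to-drain> --ignore-daemonsets

You should see the Pods on the drained node go into Terminating, while their controllers (Deployment/ReplicaSet, StatefulSet, etc.) create replacement Pods that get scheduled onto other nodes. During kubeadm upgrade apply itself, the workload Pods simply stay Running.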

Using "kube-scheduler" as an example, we can see what exactly happens to the control plane components when we run the kubeadm upgrade apply command:

[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-07-15-42-15/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
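
You can watch the same thing happen from the API side while kubeadm upgrade apply is running; the control plane components are static Pods in the kube-system namespace, and the kubelet recreates them from the new manifests (the node name is a placeholder):

$ kubectl get pods -n kube-system -o wide -w
$ kubectl describe pod -n kube-system kube-scheduler-<control-plane-node-name>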

Upvotes: 1

hagrawal7777

Reputation: 14668

  • Just as Pods can be scaled up and down, the cluster can also be scaled up and down by increasing or decreasing the number of Nodes.
  • While scaling a cluster, if you increase the size of a node pool, existing Pods are not moved to the newer nodes.
  • While scaling a cluster, if you manually decrease the size of a node pool, Pods running on the deleted nodes are terminated; only Pods managed by a controller (ReplicaSet, Deployment, StatefulSet, etc.) are recreated on other nodes (see the sketch after this list).
  • If autoscaling decreases the size of a node pool, any Pods on the deleted nodes that aren't managed by a replication controller may be lost, since they are not restarted on other nodes.
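
As a minimal sketch of a manual scale-down, assuming a GKE cluster (the cluster and node pool names are hypothetical; other managed Kubernetes offerings have equivalent commands):

# Check where the Pods are running before the resize
$ kubectl get pods -o wide

# Shrink the node pool by one node
$ gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 2

# Controller-managed Pods reappear on the remaining nodes;
# bare Pods that were on the removed node are gone
$ kubectl get pods -o wide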

Upvotes: 0
