Vinson Xing

Reputation: 11

Kubernetes node affinity requires pod restart even when the pod already meets the node affinity rule

I have a running pod (pod-1), deployed from a k8s Deployment (deploy-1), on k8s node-1. Now I want to patch a node affinity rule into this Deployment. For example: the target node must have the label 'data=allowed'.

My steps (see the sketch after this list):

  1. Add the label 'data=allowed' to node-1 first
  2. Patch the node affinity definition into deploy-1
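
For reference, a minimal sketch of these two steps with kubectl, assuming the node and Deployment names from the question (node-1, deploy-1); the affinity block uses the standard requiredDuringSchedulingIgnoredDuringExecution form:

    # Step 1: label the node so it satisfies the rule before the patch
    kubectl label node node-1 data=allowed

    # Step 2: patch the node affinity into the Deployment's pod template
    kubectl patch deployment deploy-1 --patch '
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: data
                    operator: In
                    values:
                    - allowed
    '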

My expectation was that pod-1 would not be rescheduled by k8s, since it is already on node-1, which already meets the node affinity rule (Step 1). But the result is that pod-1 was recreated, although it still landed on node-1.

Is there any configuration to prevent the recreation when the running pod already meets the newly defined node affinity rule? Thanks.

Upvotes: 0

Views: 953

Answers (1)

Suresh Vishnoi

Reputation: 18413

By adding the node affinity rule to the Deployment, you change its desired state: node affinity lives in the pod template (spec.template), and any change to the pod template makes the Deployment controller roll out new pods so that the current state matches the desired state, regardless of whether the existing pods already satisfy the new rule. This is fundamental to the design of k8s.
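
You can watch this reconciliation happen. A hypothetical check, assuming deploy-1's pods carry the label app=deploy-1:

    # Watch the rollout triggered by the pod-template change
    kubectl rollout status deployment/deploy-1

    # A pod-template change produces a new ReplicaSet (new pod-template-hash),
    # so two ReplicaSets appear during the rollout
    kubectl get replicasets -l app=deploy-1

    # The recreated pod has a new name and a new pod-template-hash label
    kubectl get pods -l app=deploy-1 --show-labels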

To make the most of this reconciliation model, we need to use the declarative approach rather than the imperative one.

For instance, it's better to use the apply operation rather than the create operation in a k8s cluster. With apply, when you later change or modify other fields of a resource, only the fields you changed are patched; unrelated fields are left untouched, so you avoid unnecessary container restarts or changes to things like external IPs.
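
A minimal sketch of the declarative flow; deploy-1.yaml is a hypothetical manifest file holding the full Deployment, including the new nodeAffinity block:

    # Preview what would change before applying (server-side diff)
    kubectl diff -f deploy-1.yaml

    # Apply the manifest; apply patches only the fields that differ from the
    # last-applied-configuration, leaving unrelated fields untouched
    kubectl apply -f deploy-1.yaml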

I have added references for further research:

  - kubectl-apply-vs-kubectl-create
  - object-management/

Upvotes: 0
