Reputation: 11
I have a running pod (pod-1), deployed from a k8s Deployment (deploy-1), on k8s node-1. At some point I want to patch a node affinity rule into this deployment. For example, the target node must have the label 'data=allowed'.
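For concreteness, the affinity rule I am adding looks roughly like this (the field names follow the standard Pod spec; the label key/value are just my example):

```yaml
# Added to the pod template of deploy-1
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: data
                operator: In
                values:
                - allowed
```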
My steps:
My expectation is that pod-1 should not be rescheduled by k8s, since it is already on node-1, which already meets the node affinity rule (Step 1). But in fact pod-1 was recreated, although it still ended up on node-1.
Is there any configuration to prevent the recreation when the running pod already meets the newly defined node affinity rule? Thanks.
Upvotes: 0
Views: 953
Reputation: 18413
By adding the node affinity rule to the deployment you change its spec, which means your desired state has changed; Kubernetes then makes sure that the current state matches the desired state by replacing the pods. This is fundamental to its design.
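To sketch why the recreation happens: the node affinity rule lives under the deployment's `.spec.template`, and any change there produces a new pod-template-hash, so the Deployment controller rolls out a new ReplicaSet with new pods regardless of where the old pods happen to be running. The commands below assume the names from the question:

```shell
# Patching node affinity modifies .spec.template, which Kubernetes
# treats as a new desired state for the pods.
kubectl patch deployment deploy-1 --type merge -p '
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: data
                operator: In
                values: ["allowed"]'

# The rollout is visible as a new ReplicaSet / pod-template-hash:
kubectl rollout status deployment/deploy-1
kubectl get rs        # a new ReplicaSet appears for the new template
```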
To work with this reconciliation model, we need to use the declarative approach rather than the imperative one. For instance, it is better to use the apply operation rather than the create operation in a k8s cluster. When you later modify other fields of a resource, apply only updates what actually changed, so unrelated fields, containers, and external IPs are left untouched.
I have added a reference below for further research.
kubectl-apply-vs-kubectl-create
Upvotes: 0