Reputation: 518
The Kubernetes kind Deployment doesn't allow patch changes to spec.selector.matchLabels, so any new deployments (managed by Helm or otherwise) that want to change the selector labels can't use the RollingUpdate feature within a Deployment. What's the best way to achieve a rollout of a new deployment without causing downtime?
Minimum example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:latest
          command: ["/bin/bash", "-ec", "sleep infinity"]
Apply this, then edit the labels (both matchLabels and metadata.labels) to foo2. If you try to apply this new deployment, k8s will complain (by design):
The Deployment "foo" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"foo2"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
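For the record, the failure can be reproduced with two applies (assuming the manifest above is saved as deployment.yaml):

kubectl apply -f deployment.yaml   # first apply succeeds
# change app: foo to app: foo2 in both spec.selector.matchLabels and the template labels
kubectl apply -f deployment.yaml   # rejected: spec.selector is immutable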
The only way I can think of right now is to use a new Deployment name so the new deployment does not try to patch the old one, and then delete the old one, with the ingress/load balancer resources handling the transition. Then we can redeploy with the old name, and delete the new name, completing the migration.
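A sketch of that approach in kubectl terms (file and resource names hypothetical):

kubectl apply -f foo2-deployment.yaml     # new name, new labels; does not touch "foo"
kubectl rollout status deployment/foo2    # wait until the new pods are serving
kubectl delete deployment foo             # old pods drain while foo2 carries traffic
# then re-apply under the old name and delete foo2 to complete the migration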
Is there a way to do it with fewer k8s CLI steps? Perhaps I can edit/delete something that keeps the old pods alive while the new pods roll out under the same name?
Upvotes: 13
Views: 4468
Reputation: 1861
The easier way is to temporarily configure both the old label and the new label. Once your Pods have both labels, delete your Deployment and orphan the pods:
kubectl delete deployment YOUR-DEPLOYMENT --cascade=orphan
Now the Deployment is gone, but your pods are still running, and you can apply the Deployment again, this time with the new label selector. It will pick up the running Pods, since they also carry the new label.
Once this is done, you can finish by removing the old label from your pods.
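A minimal sketch of the whole sequence, assuming the Deployment from the question and a hypothetical second label app2: foo2:

# Step 1: add the new label to the pod template only. spec.selector is
# immutable, but template labels can grow, and this change triggers a
# normal rolling update.
  template:
    metadata:
      labels:
        app: foo    # old label, still matched by spec.selector
        app2: foo2  # new label (hypothetical key/value)

# Step 2: delete the Deployment but keep the pods running.
kubectl delete deployment foo --cascade=orphan

# Step 3: re-apply with spec.selector.matchLabels pointing at the new
# label; the re-created Deployment picks up the still-running pods.
kubectl apply -f deployment-new-selector.yaml

# Step 4: once everything is stable, remove the old app: foo label from
# the template.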
Upvotes: 1
Reputation: 5219
I just did this, and I followed the four-step process you describe. I think the answer is no, there is no better way.
My service was managed by Helm. For that I literally created four merge requests that had to be rolled out sequentially, one per step of the process you describe:
1. Add a copy of the deployment under a temporary name (foo-temp).
2. Delete the original deployment.
3. Re-create the deployment under its original name, with the new labels.
4. Delete foo-temp.
I tested shortcutting the process (combining steps 1 and 2), but it doesn't work: Helm deletes one deployment before it creates the other, and then you have downtime.
The good news is that in my case I didn't need to change any other descriptors (charts), so it was not so bad. All the relationships (traffic routing, etc.) were made via label matching. Since foo-temp had the same labels, the relationships worked automatically. The only issue was that my HPA referenced the deployment by name, not by labels. Instead of modifying it, I left foo-temp without an HPA and just specified a high replica count for it. The HPA didn't complain when its target didn't exist between steps 2 and 3.
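For context, this is roughly why the HPA is bound to the name: its scaleTargetRef points at a Deployment by name, not by labels (a sketch of a typical HPA spec; the numbers are made up):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo            # bound by name; renaming the Deployment orphans the HPA
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80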
Upvotes: 6
Reputation: 350
In my experience, when using Helm I do not get downtime with
helm upgrade release -f values.yaml .
I also noticed that Helm does not terminate the old deployment's pods until the new deployment reports ready (X/X). I can suggest using it; this way it is about as painless as it gets.
Also, the Updating a Deployment section of the Kubernetes docs says that "A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed."
Therefore, you can use label changes with Helm.
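As an illustration (a sketch; the track label key is made up), a template-only label change does trigger a rolling update while leaving the immutable selector untouched:

kubectl patch deployment foo -p '{"spec":{"template":{"metadata":{"labels":{"track":"v2"}}}}}'
kubectl rollout status deployment/foo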
Hopefully I was of a little help.
Beware, untried method: kubectl has an edit subcommand, which has let me update ConfigMaps, PersistentVolumeClaims, etc. Maybe you can use it to update your Deployment. Syntax:
kubectl edit [resource] [resource-name]
But before doing that, please choose a proper text editor, since you will be dealing with YAML-formatted files. Do so with:
export KUBE_EDITOR=/bin/{nano,vim,yourFavEditor}
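For example (a sketch; note that kubectl edit goes through the same API validation, so editing spec.selector would still be rejected as immutable):

export KUBE_EDITOR=vim
kubectl edit deployment foo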
Upvotes: -3