shoosh

Reputation: 78914

Whole-application-level rolling update

My Kubernetes application is made up of several flavors of nodes: a couple of “scheduler” nodes that send tasks to quite a few “worker” nodes. For this app to work correctly, all of the nodes must run exactly the same code version.

The deployment is performed using a standard ReplicaSet, and when my CI/CD pipeline kicks in it just does a simple rolling update. This causes a problem, though: during the rolling update, nodes of different code versions coexist for a few seconds, so some tasks processed during that window get wrong results.

Ideally, deploying a new version would create a completely new instance of the application that communicates only with itself and has time to warm its cache; then, at the flick of a switch, this new instance would become active and start receiving client requests. The old instance would remain active for a few more seconds and then shut down.

I’m using Istio sidecars for mesh communication.

Is there a standard way to do this? How is such a requirement usually handled?

Upvotes: 1

Views: 315

Answers (2)

P Ekambaram

Reputation: 17621

You should consider the Blue/Green deployment strategy.

Upvotes: 0

Vasilii Angapov

Reputation: 9022

I had the same situation. Kubernetes alone cannot satisfy this requirement, and I wasn't able to find any tool that can coordinate multiple Deployments together (although Flagger looks promising).

So the only way I found was to do it in CI/CD (Jenkins in my case). I don't have the code, but the idea is the following:

  1. Deploy all of the application's Deployments from a single Helm chart. Every Helm release name and the corresponding Kubernetes labels must be based on some sequential number, e.g. the Jenkins $BUILD_NUMBER. The Helm release can be named example-app-${BUILD_NUMBER} and all Deployments must carry the label version: $BUILD_NUMBER. The important part here is that your Services should not be part of the Helm chart, because they will be handled by Jenkins (see the manifest sketch after this list).

  2. Start your build by detecting the current version of the app (using a bash script, or you can store it in a ConfigMap).

  3. Run helm install example-app-${BUILD_NUMBER} with the --atomic flag set. The atomic flag makes sure the release is properly removed on failure. Don't delete the previous version of the app yet.

  4. Wait for Helm to complete and, on success, run kubectl set selector service/example-app version=$BUILD_NUMBER. That instantly switches the Kubernetes Service from one version to the other. If you have multiple Services, issue one set selector command per Service (each command takes effect immediately). The pipeline sketch after this list shows the full sequence.

  5. Delete the previous Helm release and, optionally, update the ConfigMap with the new app version.
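To make the moving parts concrete, here is a minimal manifest sketch of steps 1 and 4. The names (example-app-worker, registry.example.com/worker, the chart's version value) are made up for illustration; the chart templates a version that Jenkins sets from $BUILD_NUMBER. Note that kubectl set selector replaces the entire selector, so the Service below matches on the version label alone:

    # Deployment template inside the Helm chart (e.g. templates/worker.yaml).
    # .Values.version is set from the Jenkins $BUILD_NUMBER at install time.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app-worker-{{ .Values.version }}
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app-worker
          version: "{{ .Values.version }}"
      template:
        metadata:
          labels:
            app: example-app-worker
            version: "{{ .Values.version }}"
        spec:
          containers:
          - name: worker
            image: registry.example.com/worker:{{ .Values.version }}
    ---
    # Service created once, outside the chart. Because kubectl set selector
    # overwrites the whole selector, it selects on the version label only.
    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      selector:
        version: "41"        # previous build; flipped by Jenkins in step 4
      ports:
      - port: 80
        targetPort: 8080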
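And a minimal sketch of the Jenkins-side shell steps (2 through 5), assuming the chart lives in ./chart and the previous version is read back from the Service's live selector rather than a ConfigMap:

    # Step 2: detect the currently live version from the Service selector.
    PREVIOUS=$(kubectl get service example-app \
      -o jsonpath='{.spec.selector.version}')

    # Step 3: install the new release; --atomic removes it on failure.
    helm install "example-app-${BUILD_NUMBER}" ./chart \
      --atomic \
      --set version="${BUILD_NUMBER}"

    # Step 4: flip the Service to the new version (takes effect instantly).
    kubectl set selector service/example-app "version=${BUILD_NUMBER}"

    # Step 5: remove the previous release once traffic has switched.
    helm uninstall "example-app-${PREVIOUS}"

If anything fails before the selector flip, clients are still pointed at the old release, which is what makes this sequence safe to retry.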

Depending on your app, you may want to run tests against the non-user-facing Services as part of step 4 (after the Helm release succeeds).

Another good idea is to add preStop hooks to your worker pods, so that they can finish their current jobs before being deleted.
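For example, a pod spec fragment with such a hook, assuming a hypothetical /app/drain-and-wait.sh script baked into the worker image that blocks until in-flight tasks finish:

    spec:
      terminationGracePeriodSeconds: 120   # must outlast the drain script
      containers:
      - name: worker
        image: registry.example.com/worker:{{ .Values.version }}
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "/app/drain-and-wait.sh"]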

Upvotes: 2
