gautham

Reputation: 87

Kubernetes restart: how to restart terminated processes

I am new to Kubernetes; my question relates to Google Cloud Platform.

Given a scenario where we need to restart a Kubernetes cluster, and we have some services built with Spring Boot. Spring Boot services are individual JVMs, each running as an independent process. Once Kubernetes is restarted, I need help understanding what type of script or mechanism to use to restart all the services in Kubernetes. Please let me know; thank you, and I appreciate all your inputs.

Upvotes: 1

Views: 1315

Answers (1)

PjoterS

Reputation: 14112

I am not sure I fully understood your question, but I think the best approach for you would be to pack your Spring Boot app into a Docker container and then use it on GKE.

A good guide on packing a Spring Boot application into a container can be found in the CodeLabs tutorial.
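As a rough sketch of the packaging step (the base image, jar path, project ID, and image name below are placeholder assumptions, not taken from your setup), it could look like:

$ cat <<'EOF' > Dockerfile
# Run the Spring Boot fat jar on a plain JRE base image
FROM openjdk:11-jre-slim
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF
$ docker build -t gcr.io/your-project/spring-app:1.0 .
$ docker push gcr.io/your-project/spring-app:1.0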

Once you have your application in a container, you can reference it in a Deployment or StatefulSet configuration file and deploy it to your cluster.
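With the image pushed, a minimal Deployment manifest could look like the sketch below (again, the names and image are placeholders):

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
      - name: spring-app
        image: gcr.io/your-project/spring-app:1.0
        ports:
        - containerPort: 8080
EOF
deployment.apps/spring-app created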

As mentioned in the Deployment documentation:

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

In short, the Deployment controller ensures that your application is kept in your desired state.

For example, if you would like to restart your application, you could scale the Deployment down to 0 replicas and back up to 5 replicas, as shown below.
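For instance, using the nginx Deployment from the examples below (substitute your own Deployment name):

$ kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
$ kubectl scale deployment nginx --replicas=5
deployment.apps/nginx scaled

Note that scaling to 0 briefly takes the whole application down; the rollout restart shown in the examples below replaces pods gradually instead.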

Also, since GKE runs on Google Compute Engine VMs, you can scale the number of cluster nodes as well.

Examples

Restarting Application

For my test I used an Nginx container in a Deployment, but it should work similarly with your Spring Boot app container.

Let's say you have a 2-node cluster running an application with 5 replicas.

$ kubectl create deployment nginx --image=nginx --replicas=5
deployment.apps/nginx created
$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP          NODE                                       NOMINATED NODE   READINESS GATES
nginx-86c57db685-2x8tj   1/1     Running   0          2m45s   10.4.1.5    gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-86c57db685-6lpfg   1/1     Running   0          2m45s   10.4.1.6    gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-86c57db685-8lvqq   1/1     Running   0          2m45s   10.4.0.9    gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-86c57db685-lq6l7   1/1     Running   0          2m45s   10.4.0.11   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-86c57db685-xn7fn   1/1     Running   0          2m45s   10.4.0.10   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>

Now suppose you need to change some environment variables inside your application using a ConfigMap. Editing the ConfigMap's data alone does not restart running pods; to apply the change you can use a rollout restart, which recreates the pods so they pick up the new data from the ConfigMap.
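As a hedged sketch of that setup (the ConfigMap name app-config and the key SPRING_PROFILES_ACTIVE are placeholders, not taken from your application), you could wire the variables in like this:

$ kubectl create configmap app-config --from-literal=SPRING_PROFILES_ACTIVE=prod
configmap/app-config created
$ kubectl set env deployment/nginx --from=configmap/app-config
deployment.apps/nginx env updated

After you later edit the ConfigMap's values, a rollout restart rolls out fresh pods that read the updated data: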

$ kubectl rollout restart deployment nginx
deployment.apps/nginx restarted
$ kubectl get po -o wide
NAME                     READY   STATUS        RESTARTS   AGE     IP          NODE                                       NOMINATED NODE   READINESS GATES
nginx-6c98778485-2k98b   1/1     Running       0          6s      10.4.0.13   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-6c98778485-96qx7   1/1     Running       0          6s      10.4.1.7    gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-6c98778485-qb89l   1/1     Running       0          6s      10.4.0.12   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-6c98778485-qqs97   1/1     Running       0          4s      10.4.1.8    gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-6c98778485-skbwv   1/1     Running       0          4s      10.4.0.14   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-86c57db685-2x8tj   0/1     Terminating   0          4m38s   10.4.1.5    gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-86c57db685-6lpfg   0/1     Terminating   0          4m38s   <none>      gke-cluster-1-default-pool-faec7b51-6kc3   <none>           <none>
nginx-86c57db685-8lvqq   0/1     Terminating   0          4m38s   10.4.0.9    gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>
nginx-86c57db685-xn7fn   0/1     Terminating   0          4m38s   10.4.0.10   gke-cluster-1-default-pool-faec7b51-x07n   <none>           <none>

Draining node to perform node operations

Another example is when you need to do something with your VMs. You can do it by draining the node.

You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod's containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.

So it will reschedule all pods from this node onto other nodes.
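As an illustration with one of the nodes from the example above (the exact flags you need depend on your workloads, e.g. whether they use local storage):

$ kubectl drain gke-cluster-1-default-pool-faec7b51-6kc3 --ignore-daemonsets
node/gke-cluster-1-default-pool-faec7b51-6kc3 cordoned
...
$ kubectl uncordon gke-cluster-1-default-pool-faec7b51-6kc3
node/gke-cluster-1-default-pool-faec7b51-6kc3 uncordoned

Once maintenance is done, kubectl uncordon marks the node as schedulable again.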

Restarting Cluster

Keep in mind that GKE nodes are managed by Google, and you cannot simply restart a single machine, because it belongs to a Managed Instance Group. You can SSH into each node and change some settings, but when you scale the node pool to 0 and back up, you get fresh machines matching your node pool configuration, with new external IPs.

$ kubectl get nodes -o wide
NAME                                       STATUS   ROLES    AGE    VERSION             INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-6kc3   Ready    <none>   3d1h   v1.17.14-gke.1600   10.128.0.25   34.XX.176.56    Container-Optimized OS from Google   4.19.150+        docker://19.3.6
gke-cluster-1-default-pool-faec7b51-x07n   Ready    <none>   3d1h   v1.17.14-gke.1600   10.128.0.24   23.XXX.50.249   Container-Optimized OS from Google   4.19.150+        docker://19.3.6

$ gcloud container clusters resize cluster-1 --node-pool default-pool \
>     --num-nodes 0 \
>     --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 0.

$ kubectl get nodes -o wide
No resources found

$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 2 --zone us-central1-c
Pool [default-pool] for [cluster-1] will be resized to 2.

Do you want to continue (Y/n)?  y

$ kubectl get nodes -o wide
NAME                                       STATUS   ROLES    AGE   VERSION             INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm   Ready    <none>   68s   v1.17.14-gke.1600   10.128.0.26   23.XXX.50.249   Container-Optimized OS from Google   4.19.150+        docker://19.3.6
gke-cluster-1-default-pool-faec7b51-xx01   Ready    <none>   74s   v1.17.14-gke.1600   10.128.0.27   35.XXX.135.41   Container-Optimized OS from Google   4.19.150+        docker://19.3.6

Conclusion

When you use GKE you are using predefined nodes managed by Google, and those nodes are upgraded automatically (security patches, etc.). Because of that, changing the cluster's node capacity is easy.

When you pack your application into a container and use it in a Deployment, it is handled by the Deployment controller, which tries to keep the desired state at all times.

As mentioned in the Service documentation:

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them

A Service remains visible in your cluster even if you scale the cluster to 0 nodes, as it is an abstraction; you don't have to restart it. However, if you change some static Service configuration (such as a port), you need to recreate the Service with the new configuration.
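For example, assuming a Service exposing the nginx Deployment (the port numbers here are placeholders):

$ kubectl expose deployment nginx --port=80 --target-port=80
service/nginx exposed
$ kubectl delete service nginx
service "nginx" deleted
$ kubectl expose deployment nginx --port=8080 --target-port=80
service/nginx exposed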


Upvotes: 2
