Kent Ho

Reputation: 119

Kubernetes StatefulSet problem with Cluster Autoscaler and Multi-AZ

I have an EKS cluster with the Cluster Autoscaler set up, spanning three availability zones. I have deployed a Redis Cluster using Helm and it works fine. It is basically a StatefulSet of 6 replicas with dynamically provisioned PVCs.
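For reference, the storage side of the StatefulSet looks roughly like this (a simplified sketch rather than the exact Helm chart output; the image, storageClassName and size are assumptions):

```yaml
# Simplified sketch of the Redis Cluster StatefulSet (not the exact chart
# output; storageClassName "gp2" and the 8Gi size are assumptions).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:6.2
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2   # EBS-backed, so each provisioned PV lives in a single AZ
        resources:
          requests:
            storage: 8Gi
```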

Currently, my EKS cluster has two worker nodes, which I will call Worker-1A and Worker-1B (in AZs 1A and 1B respectively), and no worker node in AZ 1C. I am doing some testing to make sure the Redis Cluster can always spin up and attach its volumes properly. All the Redis Cluster pods were created on Worker-1B.

In my test, I kill all the pods in the Redis Cluster, and before it spins new pods up, I deploy some other workloads that use up all the resources on Worker-1A and Worker-1B. Since the existing worker nodes now have no spare resources for new pods, the Cluster Autoscaler creates a worker node in AZ 1C (to balance nodes across AZs); call it Worker-1C. Now the problem comes: when the Redis Cluster StatefulSet tries to recreate its pods, it cannot schedule them on Worker-1B because there are no resources left, so it tries Worker-1C instead, and the pods fail with the following error: node(s) had volume node affinity conflict.
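For context, the zone pinning behind that error comes from the nodeAffinity that the EBS provisioner writes onto each PersistentVolume. A trimmed example of what one of the PVs looks like (the PV name, volume ID and zone are placeholders, and the topology key depends on whether the in-tree provisioner or the EBS CSI driver created the volume):

```yaml
# Trimmed example of `kubectl get pv <name> -o yaml` (name, volume ID and
# zone are placeholders). The nodeAffinity pins the volume, and therefore
# the pod that claims it, to the AZ where the EBS volume was created.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-xxxxxxxx
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-xxxxxxxxxxxxxxxxx
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - us-east-1b   # volume lives in AZ 1B, so the pod cannot run on Worker-1C
```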

I know this situation might be rare, but how do I fix this issue if it ever happens? I am hoping there is an automated way to solve it instead of fixing it manually.

Upvotes: 2

Views: 359

Answers (0)
