codeaprendiz

Reputation: 3205

How can we limit the number of pods on a given worker node (VM) based on a memory utilization parameter, and trigger scaling of nodes?

Specs

I have a Kubernetes cluster with the following configuration (giving only the relevant info).

Requirement

  1. When the memory utilization of any node (VM) reaches 80%, I want the node to stop spawning any more pods.

  2. Naturally, I would want a new node (VM) to spawn automatically if both of my existing VMs are at the 80% memory threshold.

Already gone through

This question gives a way to achieve node scaling, but there is no memory parameter involved.

Upvotes: 0

Views: 298

Answers (1)

Sriram G

Reputation: 409

You need to scale at two levels.

  1. Pod-level autoscaling. This can be achieved with the HPA (Horizontal Pod Autoscaler). You can set a scaling policy based on a CPU and/or memory metric (see the example manifest after this list).

  2. Node-level autoscaling. You have to set this up separately. Ref: Cluster Autoscaler
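For the pod level, a minimal HPA manifest targeting average memory utilization might look like the sketch below. This is only an illustration: the Deployment name `my-app`, the replica bounds, and the `autoscaling/v2` API version (older clusters use `autoscaling/v2beta2`) are assumptions, not details from the question.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80   # add replicas when average memory use exceeds 80% of the pods' requests
```

Note that with `averageUtilization: 80` the HPA adds replicas when the pods' average memory usage exceeds 80% of their memory *requests*; it does not cap how many pods the scheduler places on a particular node.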

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to launch due to lack of resources or when nodes in the cluster are underutilized and their pods can be rescheduled onto other nodes in the cluster.

Also, note that the Cluster Autoscaler decides when to scale out and scale in based on resource availability (memory or CPU), i.e. whether pending pods' resource requests can be satisfied. I don't think you can set an 80% memory threshold as a scaling policy here.
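For the node level on AWS, the Cluster Autoscaler typically runs as a Deployment in the cluster, pointed at your Auto Scaling Group. A rough sketch of the relevant container spec follows; the ASG name `eks-worker-nodes`, the 2:5 node bounds, and the image tag are placeholders to replace with your own values.

```yaml
# Fragment of the cluster-autoscaler Deployment's pod spec (sketch only)
containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0  # match your cluster version
  command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --nodes=2:5:eks-worker-nodes       # min:max:ASG-name
  - --skip-nodes-with-system-pods=false
  - --balance-similar-node-groups
```

With this in place, a new node is added when pods stay Pending because no existing node has enough unreserved capacity for their requests, which is the closest equivalent to the "spawn a new VM when the existing ones are full" requirement.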

AWS EKS documentation for scaling: AWS EKS Autoscaling

Upvotes: 1
