blacktulip71

Reputation: 41

AWS ECS service on fargate does not scale in properly

2 out of my 26 ECS services in us-west-2 do not scale in to 1 task as desired.

The service configuration is as follows:

Behavior:

  1. The service had 1 task running.
  2. The service scaled out to 2 tasks when CPUUtilization > 40% for 3 datapoints within 3 minutes, based on the target tracking scale-out alarm in AWS CloudWatch. The desired task count showed 2.
  3. After a while, the target tracking scale-in alarm entered the alarm state when CPUUtilization < 36% for 15 datapoints within 15 minutes. At that point, MemoryUtilization was also < 54% for 15 datapoints within 15 minutes.
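The setup above can be sketched as two target tracking policies, expressed here as boto3 Application Auto Scaling `put_scaling_policy` arguments. This is a hedged reconstruction, not the poster's actual configuration: the policy names and the cluster/service in `ResourceId` are placeholders, and the target values are inferred from the alarm thresholds in the question (target tracking creates its scale-in alarm at 90% of the target, so CPU < 36% suggests a 40% CPU target and memory < 54% suggests a 60% memory target).

```python
# Hedged sketch (not the poster's actual config): two target tracking
# policies as boto3 Application Auto Scaling put_scaling_policy arguments.
# Policy names and the cluster/service are hypothetical placeholders.

cpu_policy = {
    "PolicyName": "cpu-target-tracking",            # hypothetical name
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",  # placeholder
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,  # scale-in alarm fires below 90% of 40 = 36%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}

memory_policy = {
    **cpu_policy,
    "PolicyName": "memory-target-tracking",         # hypothetical name
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # scale-in alarm fires below 90% of 60 = 54%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization"
        },
    },
}

# To apply (requires AWS credentials):
# import boto3
# boto3.client("application-autoscaling").put_scaling_policy(**cpu_policy)
```

With two policies attached to the same service, scale-out happens if either metric is above its target, while scale-in is more conservative, which matters for the behavior described below.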

At that point the service should have scaled in automatically. I expected the desired task count to be updated to 1, and the ECS service event history to show three entries indicating that a task had been deregistered, had begun draining, and was then stopped. Instead, the desired task count stayed at 2, and no entry appeared in the event history showing that a task had been stopped.

This problem does not affect all of my services. Is this a known AWS bug, or is it caused by a misconfiguration in the ECS service?

Upvotes: 3

Views: 1635

Answers (2)

Carsten Mohs

Reputation: 13

Still the same issue and the same solution: removing the ECSServiceAverageMemoryUtilization metric immediately 'enables' the scale-in activity, provided ECSServiceAverageCPUUtilization is below its threshold. In our case, memory utilization (around 58%) was constantly operating close to the ECSServiceAverageMemoryUtilization threshold of 60%, which might be a contributing factor as well. I would expect there to be a hysteresis in the scaling algorithm. Try iterating on the memory utilization threshold.
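A minimal sketch of the workaround described above, assuming the service has a separate memory-based target tracking policy registered with Application Auto Scaling that can simply be deleted. The policy, cluster, and service names here are hypothetical placeholders, not taken from the original setup:

```python
# Sketch of the workaround: drop the memory-based target tracking policy
# so that only the CPU policy drives scaling decisions.
# All names below are hypothetical placeholders.

def delete_memory_policy_request(cluster, service):
    """Build the arguments for Application Auto Scaling delete_scaling_policy."""
    return {
        "PolicyName": "memory-target-tracking",  # hypothetical policy name
        "ServiceNamespace": "ecs",
        "ResourceId": f"service/{cluster}/{service}",
        "ScalableDimension": "ecs:service:DesiredCount",
    }

# To apply (requires AWS credentials):
# import boto3
# boto3.client("application-autoscaling").delete_scaling_policy(
#     **delete_memory_policy_request("my-cluster", "my-service"))
```

Deleting a target tracking policy also removes the CloudWatch alarms it created, so after this only the CPU alarms remain.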

Upvotes: 1

blacktulip71

Reputation: 41

The auto scale-in works fine now, after I changed the scale-in rule to use the CPUUtilization metric only. But I still don't understand the root cause, and I couldn't find a satisfying answer on the internet.

Upvotes: 1
