Ashay Fernandes

Reputation: 383

Amazon ECS capacity provider is not scaling out instances at target_capacity=100

I have enabled my ECS cluster to use a capacity provider; the configuration details are below.

Initially, after deployment, 2 ECS tasks were running on 2 different EC2 instances. I then ran a load test to verify that the cluster and the ECS tasks scale as expected: the service's desired count changed to 8, but only 4 tasks were running. That part is expected, since there are 2 EC2 instances and I use t3.medium (3 ENIs) with awsvpc networking, so each instance can host only 2 tasks (the third ENI is the instance's primary one), i.e. 4 running tasks in total.

Why did the cluster not scale out, given that 4 tasks are already stuck in the PROVISIONING state and the capacity provider's target_capacity is 100?

I'm using Terraform to deploy my infrastructure; the capacity provider details are as follows:

resource "aws_autoscaling_group" "kong" {
  ...
  min_size              = var.asg_min_size
  max_size              = var.asg_max_size
  desired_capacity      = var.asg_desired_capacity
  protect_from_scale_in = true
  tags = [
    {
      "key"                 = "Name"
      "value"               = local.name
      "propagate_at_launch" = true
    },
    {
      "key"                 = "AmazonECSManaged"
      "value"               = ""
      "propagate_at_launch" = true 
    }
  ]
}

resource "aws_ecs_capacity_provider" "capacity_provider" {
   name = local.name

   auto_scaling_group_provider {
      auto_scaling_group_arn         = aws_autoscaling_group.kong.arn
      managed_termination_protection = "ENABLED"

      managed_scaling {
           maximum_scaling_step_size = 4
           minimum_scaling_step_size = 1
           status                    = "ENABLED"
           target_capacity           = 100
      }
   }

  
   provisioner "local-exec" {
      when = destroy

      command = "aws ecs put-cluster-capacity-providers --cluster ${self.name} --capacity-providers [] --default-capacity-provider-strategy []"
   }
}

resource "aws_ecs_cluster" "kong-reg-proxy" {
  name      = local.name
  capacity_providers = [
    aws_ecs_capacity_provider.capacity_provider.name,
  ]
  tags = merge(
    {
      "Name"        = local.name,
      "Environment" = var.environment,
      "Description" = var.description,
      "Service"     = var.service,
    },
    var.tags
  )
}

Upvotes: 6

Views: 2496

Answers (1)

martoskin

Reputation: 447

Reducing the target capacity below 100% is a dirty trick: it just keeps a few spare EC2 instances running. That may be something you want anyway, but this way ECS does not control the EC2 auto scaling. Most likely you have the service assigned with launch_type = "EC2", like this:

resource "aws_ecs_service" "foo" {
  ...
  launch_type = "EC2"
  ...
}

What you should do instead is assign the service a capacity provider strategy, so that ECS can manage tasks in the PROVISIONING state and populate the CapacityProviderReservation metric correctly:

resource "aws_ecs_service" "foo" {
  ...
  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.foo.name
    weight = 1
  }
  ...
}
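For completeness, here is a fuller sketch of what the service could look like once launch_type is dropped (you can't keep both, since launch_type conflicts with capacity_provider_strategy). The task definition, subnets and security group below are placeholders rather than resources from the question:

resource "aws_ecs_service" "kong" {
  name            = local.name
  cluster         = aws_ecs_cluster.kong-reg-proxy.id
  task_definition = aws_ecs_task_definition.kong.arn # placeholder task definition
  desired_count   = 2

  # launch_type is intentionally omitted; it conflicts with capacity_provider_strategy
  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.capacity_provider.name
    weight            = 1
  }

  # awsvpc networking, as in the question (one ENI per task)
  network_configuration {
    subnets         = var.private_subnet_ids       # placeholder
    security_groups = [aws_security_group.kong.id] # placeholder
  }
}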

See the deep dive on Amazon ECS cluster auto scaling for more information: https://aws.amazon.com/en/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/
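If you'd rather not repeat the strategy on every service, the cluster itself can also carry a default capacity provider strategy, so services that don't set one still run through the capacity provider. A minimal sketch based on the cluster resource from the question, assuming the same provider version that still accepts capacity_providers on aws_ecs_cluster:

resource "aws_ecs_cluster" "kong-reg-proxy" {
  name               = local.name
  capacity_providers = [aws_ecs_capacity_provider.capacity_provider.name]

  # services without their own capacity_provider_strategy fall back to this one
  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.capacity_provider.name
    weight            = 1
  }
}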

Upvotes: 3
