xinyi

Reputation: 51

How to limit the memory size of an EKS Fargate instance

I tried to limit the memory of my pod:

      resources:
        requests:
          cpu: 2000m
          memory: 100Mi
        limits:
          cpu: 2000m
          memory: 140Mi

However, when I run kubectl describe nodes, I still see a node with 2 vCPU and 16 GB of memory allocated.

Upvotes: 1

Views: 6401

Answers (2)

daniel

Reputation: 75

The output of kubectl describe nodes is not what counts in this case.

AWS sets an annotation, CapacityProvisioned, on the pod, which records the instance size actually provisioned. The annotations are shown in the AWS console under your cluster: Workloads, then Pods, at the bottom right.

It is possible that a node larger than requested is used; however, you are still limited to the resources you requested.
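If you prefer the CLI over the console, you can read that annotation directly with kubectl. This is a sketch: the pod name and namespace are placeholders, and the annotation key CapacityProvisioned is taken from the linked issue.

```shell
# Read the Fargate capacity annotation from a pod (placeholder names).
kubectl get pod my-pod -n my-namespace \
  -o jsonpath='{.metadata.annotations.CapacityProvisioned}'
# Prints something like "0.25vCPU 0.5GB" on a Fargate pod.
```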

Source: https://github.com/aws/containers-roadmap/issues/942#issuecomment-747416514

Upvotes: 3

70ny

Reputation: 741

It looks like the memory value is invalid. From the AWS documentation: "Fargate rounds up to the compute configuration \[...\] that most closely matches the sum of vCPU and memory requests in order to ensure pods always have the resources that they need to run." (reference here)

You are defining 2 vCPUs with 140 mebibytes of memory, which is far less than the 4 GB minimum for that CPU level (4 GB ≈ 3815 Mi; you can run the conversion here).

Reading the AWS documentation, I would personally expect a pod with 2 vCPUs and 4 GB of RAM to be provisioned. But maybe the 140Mi is considered invalid and is rounded up to the maximum value for that range.
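The rounding behaviour described above can be sketched as follows. The vCPU/memory combinations come from the AWS Fargate pod configuration table, and the ~256 MB added for Kubernetes components is per the AWS docs; the function itself is an illustrative simplification, not the real scheduler.

```python
# Sketch of Fargate's round-up behaviour, based on the documented
# vCPU/memory combinations. Assumption: ~0.25 GB overhead is added
# for Kubernetes components before matching a configuration.
FARGATE_CONFIGS = [
    (0.25, [0.5, 1, 2]),
    (0.5, list(range(1, 5))),    # 1-4 GB in 1 GB increments
    (1, list(range(2, 9))),      # 2-8 GB
    (2, list(range(4, 17))),     # 4-16 GB
    (4, list(range(8, 31))),     # 8-30 GB
]

def fargate_size(vcpu_request, mem_gb_request):
    """Return the (vCPU, memory GB) configuration Fargate would pick."""
    mem_needed = mem_gb_request + 0.25  # overhead for k8s components
    for vcpu, mem_options in FARGATE_CONFIGS:
        if vcpu >= vcpu_request:
            for mem in mem_options:
                if mem >= mem_needed:
                    return vcpu, mem
    raise ValueError("request exceeds the largest Fargate configuration")

# The question's request: 2 vCPU, 140Mi (~0.14 GB) -> smallest valid
# combination for 2 vCPU is 4 GB.
print(fargate_size(2, 0.14))
```

Running this prints `(2, 4)`, i.e. a 2 vCPU / 4 GB configuration, which matches the expectation above rather than the 16 GB node the asker observed.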

Did you perhaps mean 14000Mi (about 14.7 gigabytes) of RAM?

Upvotes: 1

Related Questions