Scott Decker

Reputation: 4297

AWS Lambda function needs more than max memory reported in CloudWatch

I have a Lambda function that reads messages off an SQS queue and inserts items into DynamoDB. At first, I had it configured with 512MB of memory, and CloudWatch reported a max memory used of around 58MB. I assumed I could then lower the memory to 128MB and see the same rate of processing SQS messages. However, that wasn't the case: things noticeably slowed. Can anyone explain?

Here is CloudWatch showing the max memory used with the 512MB Lambda: [screenshot]

Here is CloudWatch showing the max memory used with the 128MB Lambda: [screenshot]

Here you can see that the consumed capacity of the DynamoDB table really dropped: [screenshot]

Here you can see that the rate of messages being processed really slowed, as evidenced by the shallower slope: [screenshot]

Upvotes: 3

Views: 2446

Answers (1)

Michael - sqlbot

Reputation: 179044

This seems counter-intuitive, but there's a logical explanation:

Reducing memory also reduces the available CPU cycles. You're paying for very short-term use of a fixed fraction of the resources of an EC2 instance, which has a fixed ratio of CPU to memory.

Q: How are compute resources assigned to an AWS Lambda function?

In the AWS Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory and half as much CPU power as choosing 512MB of memory. You can set your memory in 64MB increments from 128MB to 1.5GB.

https://aws.amazon.com/lambda/faqs/
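To put the two settings from the question side by side, here is a minimal sketch of that linear scaling (plain Python arithmetic, not any AWS API; the function name is purely illustrative):

    # CPU power scales roughly linearly with the configured memory,
    # so a 512MB function gets about 4x the CPU of a 128MB function.

    def relative_cpu(memory_mb, baseline_mb=128):
        """Approximate CPU power relative to a baseline memory setting."""
        return memory_mb / baseline_mb

    print(relative_cpu(256))   # 2.0 -- twice the CPU of 128MB, per the FAQ example
    print(relative_cpu(512))   # 4.0 -- roughly the asker's original configuration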

So, how much CPU capacity are we talking about?

AWS Lambda allocates CPU power proportional to the memory by using the same ratio as a general purpose Amazon EC2 instance type, such as an M3 type.

http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction-function.html

We can extrapolate.

In the M3 class, regardless of instance size, the provisioning factors look like this:

CPU = Xeon E5-2670 v2 (Ivy Bridge) × 8 cores
Relative Compute Performance = 26 ECU
Memory = 30 GiB

An ECU is an EC2 Compute Unit, where 1.0 ECU is approximately equivalent to the compute capacity of a ~1GHz 2007-era Opteron. It's a dimensionless quantity for simplifying comparison of the relative CPU capacity of differing instance types.

So the provisioning ratios look like this:

8/30 Cores/GiB
26/30 ECU/GiB

So at 512 MiB memory, your Lambda function's container's share of this machine would be...

8 ÷ 30 ÷ (1024/512) = 0.133 of 1 core (~13.3% CPU)
26 ÷ 30 ÷ (1024/512) = 0.433 ECU (~433 MHz equivalent)

At 128 MiB, it's only about 1/4 of that.
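As a rough sketch of the same arithmetic for any memory setting (the M3-class constants are assumptions carried over from the ratios above, not values exposed by any AWS API):

    # Estimate a Lambda container's slice of an M3-class host.
    # Assumed ratios from the M3 specs above: 8 cores and 26 ECU per 30 GiB.

    M3_CORES_PER_GIB = 8 / 30.0
    M3_ECU_PER_GIB = 26 / 30.0

    def lambda_share(memory_mb):
        """Return (fraction of one core, ECU) for a given memory setting."""
        memory_gib = memory_mb / 1024.0
        return M3_CORES_PER_GIB * memory_gib, M3_ECU_PER_GIB * memory_gib

    for mb in (512, 128):
        cores, ecu = lambda_share(mb)
        print("%d MB -> ~%.3f of a core, ~%.3f ECU" % (mb, cores, ecu))

    # 512 MB -> ~0.133 of a core, ~0.433 ECU
    # 128 MB -> ~0.033 of a core, ~0.108 ECU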

These numbers seem really small, but they are not inappropriate for the typical Lambda use-case -- single-threaded, asynchronous actions that are not CPU intensive.

Upvotes: 5
