Tiago Ávila

Reputation: 2867

AWS Lambda function increasing Max Memory Used

I have an AWS Lambda function built in .NET Core 2.1, which is triggered by an SQS queue.

The function is configured with 512 MB of memory and a 2-minute timeout.
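
For reference, the handler looks roughly like the minimal sketch below (the names and the message processing are illustrative placeholders, not my actual code):

// Minimal sketch of an SQS-triggered .NET Core 2.1 handler (placeholder names).
// Requires the Amazon.Lambda.Core, Amazon.Lambda.SQSEvents and
// Amazon.Lambda.Serialization.Json NuGet packages.
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace SqsWorker
{
    public class Function
    {
        // Lambda keeps this instance warm and reuses it across invocations.
        public async Task FunctionHandler(SQSEvent sqsEvent, ILambdaContext context)
        {
            foreach (var record in sqsEvent.Records)
            {
                context.Logger.LogLine($"Processing message {record.MessageId}");
                // ... actual message processing happens here ...
            }

            await Task.CompletedTask;
        }
    }
}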

Looking at the CloudWatch logs, I'm seeing the Max Memory Used increase after a number of executions. See the images below:

[Screenshots of CloudWatch REPORT log lines showing Max Memory Used increasing across invocations]

It keeps increasing after some executions: it goes from 217 MB to 218 MB, then to 219 MB, and so on. This function runs many times and at a high frequency.

Has anyone faced this on AWS Lambda? Thanks in advance for any help.

Upvotes: 4

Views: 12321

Answers (2)

Adiii

Reputation: 59926

Today I was trying to optimize memory for a Lambda function and came across this post, so I'm sharing some thoughts about memory that might help others.

First, it is better to run the query below in CloudWatch Logs Insights to view the memory stats (memory used vs. total allocated memory) and then allocate memory to your Lambda function based on actual usage. From your logs, memory is not sized properly: roughly 250 MB, or about 50% of the allocation, is over-provisioned, and in your case you pay for it without utilizing it.

Run the query below to find the over-provisioned memory.

filter @type = "REPORT"
| stats max(@memorySize / 1024 / 1024) as provisionedMemoryMB,
  min(@maxMemoryUsed / 1024 / 1024) as smallestMemoryRequestMB,
  avg(@maxMemoryUsed / 1024 / 1024) as avgMemoryUsedMB,
  max(@maxMemoryUsed / 1024 / 1024) as maxMemoryUsedMB,
  provisionedMemoryMB - maxMemoryUsedMB as overProvisionedMB

Output:

Field                      Value
avgMemoryUsedMB            449.654
maxMemoryUsedMB            616.0736
overProvisionedMB          909.8053
provisionedMemoryMB        1525.8789
smallestMemoryRequestMB    324.2493

You can set something that is a bit bigger than maxMemoryUsedMB.
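
If you want to apply the new size from code rather than the console, here is a rough sketch using the AWS SDK for .NET (AWSSDK.Lambda); the function name and the 256 MB value are placeholders, pick something a bit above your observed maxMemoryUsedMB:

// Rough sketch: right-size a function's memory with the AWS SDK for .NET.
// "my-sqs-worker" and 256 MB are placeholders, not values from this question.
using System.Threading.Tasks;
using Amazon.Lambda;
using Amazon.Lambda.Model;

public static class MemoryTuner
{
    public static async Task RightSizeAsync()
    {
        using (var lambda = new AmazonLambdaClient())
        {
            await lambda.UpdateFunctionConfigurationAsync(new UpdateFunctionConfigurationRequest
            {
                FunctionName = "my-sqs-worker", // placeholder function name
                MemorySize = 256                // MB, keep it above maxMemoryUsedMB
            });
        }
    }
}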

it goes from 217 MB to 218 MB, then to 219 MB and so on. This function runs many times and at a high frequency

You are showing very limited statistics, just a few requests. One reason is the instance reuse mentioned by @Matthew. Beyond that, this is expected: not every SQS message is the same, and neither is the downstream work the Lambda does for each one (for example, pushing the message to RDS), so whatever your function does to process the SQS messages and interact with other resources can increase or decrease the memory used.

Also, the duration can affect the memory, so run the query below to find the expensive requests.

filter @type = "REPORT"
| stats avg(@duration), max(@duration), min(@duration),max(@memorySize / 1024 / 1024) by bin(5m)

As you can see, the memory changes with requests that took more time:

[Screenshot of Logs Insights results showing memory varying with request duration]

Or, alternatively:

filter @type = "REPORT"
| fields @requestId, @billedDuration, @maxMemoryUsed/1024/1024
| sort by @billedDuration desc
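
If you prefer to pull these results from code instead of the console, here is a rough sketch using the AWS SDK for .NET (AWSSDK.CloudWatchLogs); the log group name and the time range are placeholders:

// Rough sketch: run the Logs Insights query above programmatically and print the rows.
using System;
using System.Threading.Tasks;
using Amazon.CloudWatchLogs;
using Amazon.CloudWatchLogs.Model;

public static class ExpensiveRequestFinder
{
    public static async Task RunAsync()
    {
        using (var logs = new AmazonCloudWatchLogsClient())
        {
            // Start the query against the function's log group (placeholder name).
            var start = await logs.StartQueryAsync(new StartQueryRequest
            {
                LogGroupName = "/aws/lambda/my-sqs-worker",
                QueryString = "filter @type = \"REPORT\"\n" +
                              "| fields @requestId, @billedDuration, @maxMemoryUsed/1024/1024\n" +
                              "| sort by @billedDuration desc",
                StartTime = DateTimeOffset.UtcNow.AddDays(-1).ToUnixTimeSeconds(),
                EndTime = DateTimeOffset.UtcNow.ToUnixTimeSeconds()
            });

            // Poll until the query finishes, then print each result row.
            GetQueryResultsResponse results;
            do
            {
                await Task.Delay(1000);
                results = await logs.GetQueryResultsAsync(
                    new GetQueryResultsRequest { QueryId = start.QueryId });
            } while (results.Status == QueryStatus.Running || results.Status == QueryStatus.Scheduled);

            foreach (var row in results.Results)
            {
                Console.WriteLine(string.Join(" | ", row.ConvertAll(f => $"{f.Field}={f.Value}")));
            }
        }
    }
}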

Upvotes: 6

Matthew

Reputation: 25763

This is assuming there isn't a bug in your code somewhere and you're processing the same amount of work...

AWS Lambda keeps instances of your application code running for some time to ensure subsequent requests to it are speedy, so it could simply be that garbage collection hasn't run yet on the process running your code.
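
If you want to check whether the growth is just uncollected managed garbage, one quick diagnostic is to log the managed heap size before and after a forced collection; a rough sketch (not your actual handler, and note that the REPORT line measures the whole process, so the reported Max Memory Used may not drop even when the managed heap does):

// Rough diagnostic sketch: log the managed heap before and after a forced GC.
// If the second number is much smaller, the growth is mostly uncollected garbage.
using System;
using Amazon.Lambda.Core;

public static class MemoryDiagnostics
{
    public static void LogManagedHeap(ILambdaContext context)
    {
        long before = GC.GetTotalMemory(forceFullCollection: false);

        GC.Collect();
        GC.WaitForPendingFinalizers();

        long after = GC.GetTotalMemory(forceFullCollection: true);

        context.Logger.LogLine($"Managed heap: {before / 1024 / 1024} MB -> {after / 1024 / 1024} MB");
    }
}

Call it at the end of the handler for a few invocations and then remove it; forcing collections is only useful as a one-off check, not something to leave in production code.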

Instead, what I would be more concerned about is paying for 512MB when your application isn't even using 256MB. Keep in mind that you don't pay for what you use, you pay for what you allocate.

EDIT: As per cementblock's comment, keep in mind that changing memory allocation will affect your CPU and networking shares.

Upvotes: 5
