Reputation: 4378
I realized that I need to allocate much more memory to my AWS Lambda functions than they actually use; otherwise I get:
{
"errorMessage": "Metaspace",
"errorType": "java.lang.OutOfMemoryError"
}
For instance, I have a Lambda function with 128 MB allocated; it crashes every time with that error, even though the console reports "Max memory used: 56 MB".
When I allocate 256 MB instead, it no longer crashes, but it always reports a "Max memory used" between 75 and 85 MB.
How come? Thanks.
Upvotes: 26
Views: 5918
Reputation: 6515
What is happening here is that moving to a larger memory footprint also increases your metaspace allocation, which is what enables the function to run.
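If you want to watch this yourself, here is a minimal sketch (the class name is mine; the JMX APIs are standard Java, nothing Lambda-specific) that logs how close Metaspace is to its cap:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceProbe {
    public static void main(String[] args) {
        // Report the Metaspace pool; its cap comes from -XX:MaxMetaspaceSize,
        // which grows with the function's configured memory.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                long used = pool.getUsage().getUsed();
                long max = pool.getUsage().getMax(); // -1 means no explicit cap
                System.out.println("Metaspace used " + (used / 1024) + "k, max "
                        + (max < 0 ? "unbounded" : (max / 1024) + "k"));
            }
        }
    }
}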
Upvotes: 3
Reputation: 706
The memory you allocate to your Java Lambda function is shared by the heap, metaspace, and reserved code cache.
The java command executed by the container for a function allocated 256M is something like:
java -XX:MaxHeapSize=222823k -XX:MaxMetaspaceSize=26214k -XX:ReservedCodeCacheSize=13107k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
222823k + 26214k + 13107k = 262144k = 256M
The java command executed by the container for a function allocated 384M is something like:
java -XX:MaxHeapSize=334233k -XX:MaxMetaspaceSize=39322k -XX:ReservedCodeCacheSize=19661k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation -jar /var/runtime/lib/LambdaJavaRTEntry-1.0.jar
334233k + 39322k + 19661k = 393216k = 384M
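You can check the flags your own function was launched with. This sketch (the class name is mine; getInputArguments() is a standard java.lang.management API) prints the JVM's startup flags, which should include the three limits above:

import java.lang.management.ManagementFactory;

public class JvmArgsProbe {
    public static void main(String[] args) {
        // Input arguments are the -XX flags the JVM was started with, so
        // MaxHeapSize, MaxMetaspaceSize and ReservedCodeCacheSize should
        // show up here if they were set on the command line.
        for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
            System.out.println(arg);
        }
    }
}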
So the formula appears to be:
85% heap + 10% metaspace + 5% reserved code cache = 100% of configured function memory
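As a sanity check, here is a small sketch of that split (my own illustration; the exact rounding Lambda uses isn't documented, so the computed values can differ from the real flags by a kilobyte or so):

public class MemorySplit {
    // Hypothetical illustration of the apparent 85/10/5 split.
    static void printSplit(int memoryMb) {
        long totalKb = memoryMb * 1024L;
        long heapKb = totalKb * 85 / 100;        // ~85% heap
        long metaKb = totalKb * 10 / 100;        // ~10% metaspace
        long codeKb = totalKb - heapKb - metaKb; // remaining ~5% code cache
        System.out.println("-XX:MaxHeapSize=" + heapKb + "k -XX:MaxMetaspaceSize="
                + metaKb + "k -XX:ReservedCodeCacheSize=" + codeKb + "k");
    }

    public static void main(String[] args) {
        printSplit(256); // compare with the 256M command above
        printSplit(384); // compare with the 384M command above
    }
}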
I honestly don't know how the "Max Memory Used" value reported in CloudWatch Logs is calculated. It doesn't align with anything I'm seeing.
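If you want a number to compare against, you can at least log the JVM's own view from inside the handler (a sketch; my assumption is that the Lambda metric covers the whole container process, not just the JVM heap, which would explain the mismatch):

public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is what the heap has currently claimed from the OS;
        // maxMemory() reflects -XX:MaxHeapSize.
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("JVM heap used " + usedMb + "MB of " + maxMb + "MB max");
    }
}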
Upvotes: 18