BrainPermafrost

Reputation: 674

What affects GCP Cloud Function memory usage?

I recently redeployed a handful of Python GCP Cloud Functions and noticed they are using about 50 MB more memory, triggering memory limit errors (I had to increase the memory allocation from 256 MB to 512 MB to get them to run). Unfortunately, that is 2x the cost.

I am trying to figure out what caused the memory increase. The only thing I can think of is a recent Python package upgrade, so I pinned all package versions in requirements.txt based on my local virtual environment, which has not changed lately. The memory usage increase remained.

Are there other factors that could lead to a memory utilization increase? The Python runtime is still 3.7, and the data the functions process has not changed. It also doesn't seem to be a change GCP has made to Cloud Functions in general, because it has only happened to functions I have redeployed.

Upvotes: 2

Views: 3814

Answers (1)

Zeenath S N

Reputation: 1170

I can point out a few possibilities for memory limit errors:

  1. One reason for out-of-memory errors in Cloud Functions is temporary files, as discussed in the documentation (see the first sketch after this list):

Files that you write consume memory available to your function, and sometimes persist between invocations. Failing to explicitly delete these files may eventually lead to an out-of-memory error and a subsequent cold start.

  2. As mentioned in this StackOverflow answer, anything you allocate in the global scope without deallocating it stays in memory and counts against future invocations. To minimize memory usage, allocate objects locally so they are cleaned up when the function completes (see the second sketch below). Memory leaks like this are often difficult to detect.

  3. Also, Cloud Functions need to send a response when they're done; if they don't, their allocated resources won't be freed. An unhandled exception in a function can therefore lead to a memory limit error (see the third sketch below).

  4. You may also want to check Auto-scaling and Concurrency, which mentions another possibility:

Each instance of a function handles only one concurrent request at a time. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance. Thus the original request can use the full amount of resources (CPU and memory) that you requested.

  5. Lastly, this may be caused by issues with logging. If you log objects themselves, that can prevent them from being garbage collected. Try making the logging less verbose and logging string representations instead, to see if memory usage improves (see the fourth sketch below). Either way, you could use the Profiler to get more information about what's going on with your Cloud Function's memory.
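
First sketch, for point 1: a minimal example assuming an HTTP-triggered function that writes a scratch file to /tmp (the names `handler` and `scratch.dat` are just for illustration). /tmp is backed by memory in Cloud Functions, so anything written there counts against the memory allocation until it is deleted:

```python
import os
import tempfile

def handler(request):
    # /tmp is an in-memory filesystem in Cloud Functions, so this file
    # consumes part of the function's memory allocation while it exists.
    tmp_path = os.path.join(tempfile.gettempdir(), "scratch.dat")
    try:
        with open(tmp_path, "wb") as f:
            f.write(request.get_data())
        # ... process the file here ...
        return "OK"
    finally:
        # Delete explicitly; warm instances reuse the filesystem, so a
        # leftover file would still be consuming memory on the next call.
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
```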
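Second sketch, for point 2 (`_cache` and the handler names are hypothetical). The module-level list survives between invocations on a warm instance, so it grows with every request; the local variable in the second handler becomes collectible as soon as the function returns:

```python
_cache = []  # module scope: lives for the lifetime of the instance

def leaky_handler(request):
    # Anti-pattern: each request appends to the global list and nothing
    # ever removes entries, so memory grows across invocations.
    _cache.append(request.get_data())
    return "OK"

def clean_handler(request):
    # Better: the payload is local, so it is garbage-collected after
    # the function returns.
    payload = request.get_data()
    return f"processed {len(payload)} bytes"
```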
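Third sketch, for point 3: wrap the work in a try/except so the function always returns a response, even when the processing step raises (the processing itself is elided here):

```python
import logging

def handler(request):
    try:
        body = request.get_data()
        # ... processing that might raise goes here ...
        return "OK", 200
    except Exception:
        logging.exception("handler failed")
        # Respond with an error instead of letting the exception escape,
        # so the request terminates cleanly and its resources are released.
        return "internal error", 500
```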
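Fourth sketch, for point 5: log a short string summary rather than the object itself, so nothing in the logging pipeline holds a reference to the full payload:

```python
import logging

def handler(request):
    payload = request.get_json(silent=True) or {}
    # Log a compact summary string, not the payload object itself;
    # %-style arguments are only formatted if the log level is enabled.
    logging.info("received payload with %d keys", len(payload))
    return "OK"
```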

Upvotes: 6
