Reputation: 1461
I need help understanding how memory utilization is calculated while running a container.
Say we have a container based on Ubuntu 18.04, which we run on a host that is also Ubuntu 18.04 (i.e. the same OS), with the following options:
docker run -it -p 8080:8080 --cpus 2 --memory 2048m
When we run the same process in a container on our local machine, it works perfectly fine without any OOM. However, the moment we put the same container and process on Google Cloud Run, the container goes out of memory. So the question is: why does the container go OOM on Cloud Run but not locally, and is there a way we can tackle this issue by changing our system or platform while running on Cloud Run?
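For reference, the closest check we can do locally is to watch the container's memory usage against the 2048m cap while the process runs (my-container below is just a placeholder for the container name):
docker stats my-container   # shows MEM USAGE / LIMIT for the running container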
Upvotes: 4
Views: 1268
Reputation: 45282
As @Dustin said, if you write/modify files to the local disk on Cloud Run, it will count towards your available memory. This is most likely the problem.
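If you can reproduce the workload locally, a rough way to estimate how much it writes to disk (all of which would count against memory on Cloud Run) is to check the container's writable layer and /tmp; my-container is a placeholder for your container's name:
docker ps -s --filter name=my-container   # the SIZE column is data written to the writable layer
docker exec my-container du -sh /tmp      # data written under /tmp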
However, if your code (or modules you import, such as the Google Cloud client libraries) follows different code paths on your laptop vs. in the deployed app, that might be the reason for the OOMs as well.
Containers do not run operating systems. Basing your container image on an ubuntu:18.04 image doesn't provide any memory optimization (or make use of a shared dynamic library cache). It just means your app uses the binaries and dynamic libraries available in that base image, and that distro's package manager. Similarly, you have no control over the host machine Cloud Run runs your containers on.
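You can see this for yourself: the kernel version reported inside the container is the host's kernel, not something shipped by the base image (just an illustration, assuming you have the ubuntu:18.04 image pulled):
docker run --rm ubuntu:18.04 uname -r   # prints the host kernel version
uname -r                                # same value on the host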
Also note that the current memory limit on Cloud Run is 2 GB, though this will soon be increased to something like 4 GB.
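The per-instance limit is set at deploy time, so make sure you are actually requesting the maximum; a sketch with placeholder service and image names:
gcloud run deploy my-service --image gcr.io/my-project/my-image --memory 2Gi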
Upvotes: 4
Reputation: 21570
Without seeing any details about your image or application, it's hard to say for sure, but one big difference between Cloud Run and your local machine is that on Cloud Run, memory and "on disk" files consume the same quota.
From https://cloud.google.com/run/docs/tips/general#deleting_temporary_files:
In the Cloud Run (fully managed) environment disk storage is an in-memory filesystem. Files written to disk consume memory otherwise available to your service, and can persist between invocations. Failing to delete these files can eventually lead to an out-of-memory error and a subsequent cold start.
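In practice, that means deleting anything your service writes as soon as you are done with it; a minimal sketch (the path and file name are arbitrary):
scratch="$(mktemp /tmp/scratch.XXXXXX)"   # this file lives in the in-memory filesystem, so it counts against memory
# ... write intermediate data to "$scratch" ...
rm -f "$scratch"                          # deleting it returns that memory to the instance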
Upvotes: 4