Lee

Reputation: 3104

Sharing data across my gunicorn workers

I have a Flask app, served by Nginx and Gunicorn with 3 workers. My Flask app is an API microservice for doing NLP work, and I am using the spaCy library for it.

My problem is that the workers consume a huge amount of RAM: loading the spaCy pipeline with spacy.load('en') is very memory-intensive, and since I have 3 Gunicorn workers, each one takes about 400MB of RAM.
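The setup looks roughly like this (file and route names are just for illustration):

    # app.py - rough sketch of the current setup
    import spacy
    from flask import Flask

    app = Flask(__name__)
    nlp = spacy.load('en')  # ~400MB, loaded separately by each Gunicorn worker

    @app.route('/parse')
    def parse():
        doc = nlp("some request text")
        return {"tokens": [t.text for t in doc]}

    # started with: gunicorn app:app --workers 3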

My question is, is there a way to load the pipeline once and share it across all my gunicorn workers?

Upvotes: 28

Views: 8782

Answers (4)

thethiny

Reputation: 1248

This is an answer that works in 2021 with Python 3.6 and 3.9 (tested on both). I had the same setup as you, using Flask to deploy a spaCy NLU API. The solution was simply to append --preload to the gunicorn command, like so: gunicorn src.main:myFlaskApp --preload. This causes the fork to happen after the entire src/main.py file has been executed, and not right after myFlaskApp = Flask(__name__).
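A minimal sketch of the layout this describes; the route name and pipeline variable below are illustrative, not from the original post:

    # src/main.py
    import spacy
    from flask import Flask

    myFlaskApp = Flask(__name__)
    nlp = spacy.load('en')  # with --preload this runs once, in the master, before the fork

    @myFlaskApp.route('/nlu')
    def nlu():
        doc = nlp("some text")  # workers reuse the pre-fork pipeline via copy-on-write
        return {"tokens": [t.text for t in doc]}

    # started with: gunicorn src.main:myFlaskApp --workers 3 --preload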

Upvotes: 0

markus barth

Reputation: 63

I need to share gigabytes of data among instances and use a memory-mapped file (https://docs.python.org/3/library/mmap.html). If the amount of data you need to retrieve per request from the pool is small, this works fine. Otherwise, you can mount a ramdisk and put the mapped file there.

As I am not familiar with spaCy, I am not sure if this helps. I would have one worker actually process the data, loading it (spacy.load?) and writing the resulting doc (via pickling/marshalling) to the memory-mapped file, where the other workers can read it.

To get a better feel for mmap, have a look at https://realpython.com/python-mmap/
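Not spaCy-specific, but a minimal sketch of the idea; the file path, region size, and payload are illustrative. One process writes a pickled result into the mapped file and any worker can read it back:

    import mmap
    import pickle

    PATH = "/dev/shm/nlp_results.bin"   # tmpfs on most Linux systems; adjust as needed
    SIZE = 64 * 1024 * 1024             # 64 MiB region, sized for the data you expect

    # writer side (the single worker that does the heavy processing)
    with open(PATH, "wb") as f:
        f.truncate(SIZE)                # pre-size the backing file
    with open(PATH, "r+b") as f:
        mm = mmap.mmap(f.fileno(), SIZE)
        payload = pickle.dumps({"doc_id": 1, "tokens": ["hello", "world"]})
        mm[:8] = len(payload).to_bytes(8, "little")   # simple length prefix
        mm[8:8 + len(payload)] = payload
        mm.flush()
        mm.close()

    # reader side (any other worker)
    with open(PATH, "rb") as f:
        mm = mmap.mmap(f.fileno(), SIZE, access=mmap.ACCESS_READ)
        n = int.from_bytes(mm[:8], "little")
        result = pickle.loads(mm[8:8 + n])
        mm.close()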

Upvotes: 1

Nathan Hardy

Reputation: 308

Sharing the pipeline in memory between workers may help you.

Please check gc.freeze

I think you can just do this in your app.py:

  1. freeze the gc
  2. load the pipeline or any other resource that is going to use a large amount of memory
  3. unfreeze the gc

and,

  • make sure your workers will not modify (directly or indirectly) any object created while the gc was frozen
  • pass app.py to gunicorn

When the fork happens, the memory pages holding the big resources will not actually be copied by the OS, because you have made sure there are no write operations on them.

If you do not freeze the gc, those memory pages will still be written to, because the garbage collector writes bookkeeping data (such as reference counts and GC headers) into the objects. That is why freeze matters.

I only know of this approach, but I have not tried it myself.
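A minimal sketch of the idea, assuming Gunicorn's --preload so the load happens in the master before the fork; it follows the disable/load/freeze/enable order described in the CPython gc docs, and the route and variable names are illustrative:

    # app.py
    import gc
    import spacy
    from flask import Flask

    gc.disable()             # keep the collector quiet while the big objects are built
    nlp = spacy.load('en')   # heavy pipeline, loaded once in the Gunicorn master
    gc.freeze()              # move everything created so far into the permanent
                             # generation, so later collections never touch those pages
    gc.enable()

    app = Flask(__name__)

    @app.route('/parse')
    def parse():
        # read-only use only: writing to the shared objects would break copy-on-write
        doc = nlp("some text")
        return {"tokens": [t.text for t in doc]}

    # started with: gunicorn app:app --workers 3 --preload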

Upvotes: 0

Krithika Ramakrishnan

Reputation: 128

One workaround is to load the spaCy pipeline beforehand, pickle (or serialize in any convenient way) the resulting object, and store it in a DB or on the file system. Each worker can then fetch the serialized object and simply deserialize it.
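A rough sketch of the workaround, with an illustrative file path; whether the Language object pickles cleanly depends on the spaCy version, and nlp.to_bytes()/Language.from_bytes() is the library's own serialization API if pickle does not work:

    # build_pipeline.py - run once, ahead of time
    import pickle
    import spacy

    nlp = spacy.load('en')
    with open('/tmp/spacy_en.pkl', 'wb') as f:
        pickle.dump(nlp, f)

    # worker startup: deserialize instead of rebuilding from the model package
    with open('/tmp/spacy_en.pkl', 'rb') as f:
        nlp = pickle.load(f)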

Upvotes: 0
