user124114

Reputation: 8692

Python's multiprocessing and memory

I am using multiprocessing.imap_unordered to perform a computation on a list of values:

import multiprocessing

def process_parallel(fnc, some_list):
    pool = multiprocessing.Pool()
    # Flatten each worker's result and yield its items one by one
    for result in pool.imap_unordered(fnc, some_list):
        for x in result:
            yield x
    pool.terminate()

Each call to fnc returns a HUGE object as a result, by design. I can store N instances of such an object in RAM, where N ~ cpu_count, but not much more than that (not hundreds).

Now, using this function takes up too much memory. The memory is entirely spent in the main process, not in the workers.

How does imap_unordered store the finished results? I mean the results that have already been returned by workers but not yet passed on to the user. I thought it was smart and only computed them "lazily" as needed, but apparently not.

It looks like since I cannot consume the results of process_parallel fast enough, the pool keeps queueing these huge objects from fnc somewhere, internally, and then blows up. Is there a way to avoid this? Limit its internal queue somehow?


I'm using Python 2.7. Cheers.

Upvotes: 21

Views: 3685

Answers (2)

Eric des Courtis

Reputation: 5445

The simplest solution I can think of would be to wrap your fnc function in a closure that uses a semaphore to control the total number of jobs that can execute simultaneously (I assume the main process/thread would be incrementing the semaphore). The semaphore value could be calculated based on job size and available memory; a rough sketch follows.
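
A sketch of that idea under some assumptions of mine (the names process_parallel_bounded, max_in_flight and throttled are hypothetical, and this variant throttles the input iterable rather than fnc itself, so the acquire happens in the main process):

import multiprocessing
import threading

def process_parallel_bounded(fnc, some_list, max_in_flight=None):
    # Sketch only: bound the number of in-flight jobs/results with a
    # semaphore, sized to cpu_count by default.
    if max_in_flight is None:
        max_in_flight = multiprocessing.cpu_count()
    sem = threading.BoundedSemaphore(max_in_flight)
    pool = multiprocessing.Pool()

    def throttled(iterable):
        # The pool's task-handler thread pulls items from this
        # generator; each acquire blocks until the consumer below has
        # released a slot, so at most max_in_flight results can be
        # pending at any time.
        for item in iterable:
            sem.acquire()
            yield item

    for result in pool.imap_unordered(fnc, throttled(some_list)):
        for x in result:
            yield x
        sem.release()  # one result consumed, let one more task in
    pool.terminate()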

Upvotes: 2

rumpel

Reputation: 8290

As you can see by looking into the corresponding source file (python2.7/multiprocessing/pool.py), the IMapUnorderedIterator uses a collections.deque instance for storing the results. When a new result comes in, it is appended to the deque, and the consuming iteration pops it off again.
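
For reference, the relevant method looks roughly like this in the 2.7 source (paraphrased; the comments are mine):

class IMapUnorderedIterator(IMapIterator):

    def _set(self, i, obj):
        # Called by the pool's result-handler thread for every finished
        # task; results pile up in self._items (a collections.deque)
        # with no upper bound on its size.
        self._cond.acquire()
        try:
            self._items.append(obj)
            self._index += 1
            self._cond.notify()
        finally:
            self._cond.release()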

As you suspected, if more huge objects arrive while the main thread is still processing one, they will all be stored in memory too.

What you might try is something like this:

it = pool.imap_unordered(fnc, some_list)
for result in it:
    # Hold the iterator's internal condition lock while processing this
    # result; the result-handler thread blocks inside _set() if it
    # tries to append the next finished object to the deque meanwhile.
    it._cond.acquire()
    for x in result:
        yield x
    it._cond.release()

This should cause the result-handling thread to get blocked while you process an item, whenever it tries to put the next object into the deque. Thus there should never be more than two of the huge objects in memory at once. Whether that works for your case, I don't know ;)

Upvotes: 12
