curiouscientist

Reputation: 181

Python Multiprocessing within Jupyter Notebook

I am new to the multiprocessing module in Python and work with Jupyter notebooks. I have tried the following code snippet from PMOTW:

import multiprocessing

def worker():
    """worker function"""
    print('Worker')
    return

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

When I run this as is, there is no output.

I have also tried creating a module called worker.py and then importing that to run the code:

import multiprocessing
from worker import worker

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

There is still no output in that case. In the console, I see the following error (repeated multiple times):

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 116, in _main
    self = pickle.load(from_parent)
AttributeError: Can't get attribute 'worker' on <module '__main__' (built-in)>

However, I get the expected output when the code is saved as a Python script and executed.

What can I do to run this code directly from the notebook without creating a separate script?

Upvotes: 18

Views: 32261

Answers (5)

Sam

Reputation: 339

Much like you, I encountered the attribute error. The problem seems to be related to how Jupyter handles multithreading. The fastest result I got was to follow the Multi-processing example.

So the ThreadPool took care of my issue.

from multiprocessing.pool import ThreadPool as Pool

def worker(i):
    """worker function; pool.map passes each item of the iterable as an argument"""
    print('Worker %d\n' % i)
    return


pool = Pool(4)
for result in pool.map(worker, range(5)):
    pass    # or print diagnostics

Upvotes: 5

Berg

Reputation: 46

Save the function to a separate Python file then import the function back in. It should work fine that way.
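A minimal sketch of that approach (the file name `worker.py` and the write-then-import step are illustrative; in a notebook you could equally create the file with the `%%writefile worker.py` cell magic):

```python
# Write the worker to its own module so that spawned child processes
# can import it by name instead of unpickling it from __main__.
from pathlib import Path

Path("worker.py").write_text(
    "def worker():\n"
    '    """worker function"""\n'
    "    print('Worker')\n"
)

import multiprocessing
from worker import worker  # now importable by the spawned children

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()
```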

Upvotes: 1

sebtac

Reputation: 578

This works for me on macOS (I cannot make it work on Windows):

import multiprocessing as mp

if __name__ == '__main__':
    # set_start_method() raises RuntimeError if called a second time,
    # so only set it when no start method has been fixed yet
    if mp.get_start_method(allow_none=True) is None:
        mp.set_start_method('fork')

Upvotes: 0

Eden Trainor

Reputation: 591

I'm relatively new to parallel computing so I may be wrong with some technicalities. My understanding is this:

Jupyter notebooks don't work with multiprocessing because the module pickles (serialises) data to send to processes. multiprocess is a fork of multiprocessing that uses dill instead of pickle to serialise data, which allows it to work from within Jupyter notebooks. The API is identical, so the only thing you need to do is change

import multiprocessing

to...

import multiprocess

You can install multiprocess very easily with a simple

pip install multiprocess

You will, however, find that your processes still do not print to the output (although in JupyterLab they will print to the terminal the server is running in). I stumbled upon this post trying to work around this and will edit this post when I find out how.

Upvotes: 28

user5538922

Reputation:

I'm not an expert in either multiprocessing or ipykernel (which is used by Jupyter notebook), but since nobody else seems to have given an answer, I will tell you what I guessed. I hope somebody complements this later on.

I guess your Jupyter notebook server is running on a Windows host. In multiprocessing there are three different start methods. Let's focus on spawn, which is the default on Windows, and fork, the default on Unix.

Here is a quick overview.

  • spawn

    • (cpython) interactive shell - always raises an error
    • run as a script - okay only if you nested the multiprocessing code inside if __name__ == '__main__'
  • fork

    • always okay

For example,

import multiprocessing

def worker():
    """worker function"""
    print('Worker')
    return

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

This code works when it's saved and run as a script, but raises an error when entered in a Python interactive shell. Here is the implementation of the IPython kernel, and my guess is that it uses some kind of interactive shell, and so doesn't go well with spawn (but please don't trust me).


As a side note, I will give you a general idea of how spawn and fork differ. In multiprocessing, each subprocess runs a different Python interpreter. With spawn in particular, a child process starts a new interpreter and imports the necessary modules from scratch. It's hard to import code defined in an interactive shell, so it may raise an error.

fork is different. With fork, a child process copies the main process, including most of the running state of the Python interpreter, and then continues execution. This code will help you understand the concept.

import os


main_pid = os.getpid()

os.fork()
print("Hello world(%d)" % os.getpid())  # print twice. Hello world(id1) Hello world(id2)

if os.getpid() == main_pid:
    print("Hello world(main process)")  # print once. Hello world(main process)

Upvotes: 8
