ilovecp3

Reputation: 2905

Python Multiprocessing

I used multiprocessing in Python to run my code in parallel, like the following:

result1 = pool.apply_async(set1, (Q, n))
result2 = pool.apply_async(set2, (Q, n))

set1 and set2 are two independent functions, and this code is inside a while loop.

Then I tested the running time. For a particular parameter, running the code sequentially takes 10 seconds, but running it in parallel took only around 0.2 seconds. I used time.clock() to record the time. Why did the running time decrease so much? From an intuitive understanding of parallel programming, shouldn't the parallel time be somewhere between 5 and 10 seconds? I have no idea how to analyze this in my report... Can anyone help? Thanks

Upvotes: 1

Views: 258

Answers (1)

Tim Peters

Reputation: 70572

To get a definitive answer, you need to show all the code and say which operating system you're using.

My guess: you're running on a Linux-y system, so that time.clock() returns CPU time (not wall-clock time). Then you run all the real work in new, distinct processes. The CPU time consumed by those doesn't show up in the main program's time.clock() results at all. Try using time.time() instead for a quick sanity check.

Upvotes: 1
