phypho

Reputation: 33

Multiprocessing from joblib doesn't parallelize?

Since I moved from Python 3.5 to 3.6, parallel computation using joblib is no longer reducing the computation time. Here are the installed library versions: - python: 3.6.3 - joblib: 0.11 - numpy: 1.14.0

Based on a very well-known example, here is a code sample that reproduces the problem:

import time
import numpy as np
from joblib import Parallel, delayed

def square_int(i):
    return i * i

ndata = 1000000

# Serial baseline
ti = time.time()
results = []
for i in range(ndata):
    results.append(square_int(i))

duration = np.round(time.time() - ti, 4)
print(f"standard computation: {duration} s")

# Parallel runs with the multiprocessing backend
for njobs in [1, 2, 3, 4]:
    ti = time.time()
    results = Parallel(n_jobs=njobs, backend="multiprocessing")(
        delayed(square_int)(i) for i in range(ndata))
    duration = np.round(time.time() - ti, 4)
    print(f"{njobs} jobs computation: {duration} s")

I got the following output:

When I increase ndata by a factor of 10 and remove the single-core computation, I get these results:

Does anyone have an idea of which direction I should investigate?

Upvotes: 3

Views: 6626

Answers (1)

Diansheng

Reputation: 1141

I think the primary reason is that the overhead of parallelization outweighs its benefit. In other words, square_int is too simple to gain any speedup from parallel execution: passing each input and output between processes takes more time than executing the function itself.

I modified your code by adding a square_int_batch function that processes a whole range of integers per task. That reduces the computation time a lot, although it is still slower than the serial implementation:

import time
import numpy as np
from joblib import Parallel, delayed

def square_int(i):
    return i * i

def square_int_batch(a, b):
    # Square every integer in [a, b) inside a single worker call,
    # so one inter-process message covers many inputs.
    results = []
    for i in range(a, b):
        results.append(square_int(i))
    return results

ndata = 1000000 
ti = time.time()
results = []    
for i in range(ndata):
    results.append(square_int(i))

# results = [square_int(i) for i in range(ndata)]

duration = np.round(time.time() - ti,4)
print(f"standard computation: {duration} s" )

batch_num = 3
batch_size = int(ndata / batch_num)

# Dispatch one batch of indices per task instead of one index per task.
# Note that results is now a list of batch_num sub-lists.
for njobs in [2, 3, 4]:
    ti = time.time()
#     results = Parallel(n_jobs=njobs)(delayed(square_int)(i) for i in range(ndata))
#     results = Parallel(n_jobs=njobs, backend="multiprocessing")(delayed(
    results = Parallel(n_jobs=njobs)(delayed(
        square_int_batch)(i * batch_size, (i + 1) * batch_size) for i in range(batch_num))
    duration = np.round(time.time() - ti, 4)
    print(f"{njobs} jobs computation: {duration} s")

And the computation timings are:

standard computation: 0.3184 s
2 jobs computation: 0.5079 s
3 jobs computation: 0.6466 s
4 jobs computation: 0.4836 s

A few other suggestions that will help reduce the time:

  1. Use a list comprehension, results = [square_int(i) for i in range(ndata)], to replace the for loop; in your specific case it is faster. I tested it.
  2. Set batch_num to a reasonable size. The larger this value, the more dispatch overhead; in my case it started to get significantly slower once batch_num exceeded 1000. (joblib can also do this chunking for you; see the sketch after this list.)
  3. I used the default loky backend instead of multiprocessing. It is slightly faster, at least in my case.
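On the batching point: joblib's Parallel also exposes a batch_size argument (default "auto") that groups several calls into each worker dispatch, so you do not have to write square_int_batch by hand. A minimal sketch, assuming a joblib version that supports batch_size; the value 10000 is only illustrative, not a tuned choice:

import time
import numpy as np
from joblib import Parallel, delayed

def square_int(i):
    return i * i

ndata = 1000000

ti = time.time()
# batch_size groups many calls into each inter-process dispatch
results = Parallel(n_jobs=4, batch_size=10000)(
    delayed(square_int)(i) for i in range(ndata))
print(f"4 jobs, batch_size=10000: {np.round(time.time() - ti, 4)} s")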

From a few other SO questions, I read that multiprocessing is good for CPU-heavy tasks, though I don't have an official definition of "CPU-heavy"; you can explore that yourself.
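To get a feel for what "CPU-heavy" means here, the sketch below makes each task do enough numerical work that the per-task messaging cost becomes negligible, so the parallel runs should typically come out ahead of the serial loop. The heavy_task function and its sizes are illustrative choices, not from the original post, and NumPy's own BLAS threading can blur the comparison on some installs:

import time
import numpy as np
from joblib import Parallel, delayed

def heavy_task(seed):
    # Repeated matrix products keep one core busy for a noticeable time per task
    rng = np.random.RandomState(seed)
    m = rng.rand(300, 300)
    for _ in range(20):
        m = m @ m
        m /= np.linalg.norm(m)  # keep values bounded
    return m.sum()

ntasks = 16

ti = time.time()
serial = [heavy_task(s) for s in range(ntasks)]
print(f"serial: {np.round(time.time() - ti, 4)} s")

ti = time.time()
parallel = Parallel(n_jobs=4)(delayed(heavy_task)(s) for s in range(ntasks))
print(f"4 jobs: {np.round(time.time() - ti, 4)} s")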

Upvotes: 9
