Abhishek Bhatia

Reputation: 9806

Allocating a task to each separate core

Code:

import multiprocessing

def do_calculation(data):
    print "check %d" % data
    return data * 2

def start_process():
    print 'Starting', multiprocessing.current_process().name

if __name__ == '__main__':
    inputs = list(range(10)) 
    pool_size = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=pool_size,
                                initializer=start_process,)
    pool_outputs = pool.map(do_calculation, inputs)
    pool.close() # no more tasks
    pool.join()  # wrap up current tasks
    print 'Pool    :', pool_outputs

Output:

Starting PoolWorker-1
check 0
check 1
check 2
check 3
check 4
check 5
check 6
check 7
check 8
check 9
Starting PoolWorker-2
Starting PoolWorker-3
Starting PoolWorker-4
Starting PoolWorker-5
Starting PoolWorker-6
Starting PoolWorker-7
Starting PoolWorker-8
Pool    : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

I want to run a CPU-intensive task on each core, with every core executing its own instance. In the representative example above, I find that only one core is doing all the work. (I also care about the order of the outputs from the pool.)

Am I doing something wrong, or am I misinterpreting the output?

Upvotes: 1

Views: 66

Answers (1)

falsetru

Reputation: 369134

The workload in do_calculation is not realistic: it runs so fast that the first worker consumes all the inputs before the other workers have even started.

If you make the function do more work, you will see the difference. For example:

import time

def do_calculation(data):
    time.sleep(1)  # <----
    print "check %d" % data
    return data * 2
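As a sketch of the same idea in Python 3 (where print is a function), the example below uses a small sleep to stand in for real CPU work so the tasks outlast the pool startup and get spread across workers. Note that pool.map always returns results in the order of the inputs, no matter which worker handled each task, so the output order you care about is preserved either way. The 0.1-second delay is an arbitrary choice for illustration.

```python
import multiprocessing
import time

def do_calculation(data):
    # Simulate a non-trivial task so every worker gets a share of the inputs.
    time.sleep(0.1)
    return data * 2

if __name__ == '__main__':
    inputs = list(range(10))
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        # map() blocks until all results are ready and preserves the
        # order of `inputs`, regardless of which worker ran each task.
        outputs = pool.map(do_calculation, inputs)
    print(outputs)
```

With enough work per task, `Starting PoolWorker-N` lines from several workers appear interleaved with the results instead of one worker doing everything.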

Upvotes: 1
