Reputation: 2853
I have an array (called data_inputs) containing the names of hundreds of astronomy image files. These images are then manipulated. My code works and takes a few seconds to process each image. However, it can only do one image at a time because I'm running the array through a for loop:
    for name in data_inputs:
        sci = fits.open(name + '.fits')
        # image is manipulated
There is no reason why I have to modify an image before any other, so is it possible to utilise all 4 cores on my machine with each core running through the for loop on a different image?
I've read about the multiprocessing module but I'm unsure how to implement it in my case. I'm keen to get multiprocessing to work because eventually I'll have to run this on 10,000+ images.
Upvotes: 136
Views: 235879
Reputation: 566
I would suggest using imap_unordered with chunksize if you are only using a for loop to iterate over an iterable. It will return the result of each iteration as soon as it is calculated, whereas map waits for all results to be computed and is therefore blocking.
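A minimal sketch of the difference, using a stand-in worker function in place of the image manipulation (the function and values here are illustrative, not from the question):

```python
from multiprocessing import Pool

def square(x):
    # Stand-in for the per-image work (e.g. fits.open + manipulation)
    return x * x

if __name__ == '__main__':
    with Pool(4) as pool:
        # imap_unordered yields each result as soon as its worker finishes,
        # in completion order; chunksize batches tasks to reduce
        # inter-process communication overhead.
        results = sorted(pool.imap_unordered(square, range(10), chunksize=2))
    print(results)
```

Because the results arrive in completion order rather than input order, collect them with a key (or sort, as above) if order matters.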
Upvotes: 4
Reputation: 860
Alternatively:

    with Pool() as pool:
        pool.map(fits.open, [name + '.fits' for name in data_inputs])
Upvotes: 11
Reputation: 48297
You can simply use multiprocessing.Pool:

    from multiprocessing import Pool

    def process_image(name):
        sci = fits.open('{}.fits'.format(name))
        <process>

    if __name__ == '__main__':
        pool = Pool()                         # Create a multiprocessing Pool
        pool.map(process_image, data_inputs)  # process data_inputs iterable with pool
Upvotes: 149
Reputation: 359
You can use multiprocessing.Pool:

    from multiprocessing import Pool

    class Engine(object):
        def __init__(self, parameters):
            self.parameters = parameters
        def __call__(self, filename):
            sci = fits.open(filename + '.fits')
            manipulated = manipulate_image(sci, self.parameters)
            return manipulated

    pool = Pool(8)  # on 8 processors
    try:
        engine = Engine(my_parameters)
        data_outputs = pool.map(engine, data_inputs)
    finally:  # To make sure processes are closed in the end, even if errors happen
        pool.close()
        pool.join()
Upvotes: 35