Reputation: 131
I have a Python script that works great when run by itself. Based on a hardcoded input directory, it scans for all .mdb files, puts them into a list, and then iterates through the list in a for loop. Each iteration involves multiple table restrictions, joins, queries, and more.
The only problem: it takes about 36 hours to run on the input dataset. While this script will only ever be used for this dataset, I would still like to improve its performance, since I often edit field selections, the results to include, join methods, etc. I would like to say it takes a long time because my script is inefficient, but any inefficiency would be small, as nearly all of the processing time is spent in the geoprocessor object.
All I have of relevance in my main script is:
indir = "D:\\basil\\input"
mdblist = createDeepMdbList(indir)
for infile in mdblist:
processMdb(infile)
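For context, createDeepMdbList is nothing fancy; it just walks the directory tree and collects .mdb paths, roughly along these lines (a sketch, not my exact code):

import os

def createDeepMdbList(indir):
    """Recursively collect the full paths of all .mdb files under indir."""
    mdbs = []
    for root, dirs, files in os.walk(indir):
        for name in files:
            if name.lower().endswith(".mdb"):
                mdbs.append(os.path.join(root, name))
    return mdbs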
It also runs flawlessly when executed sequentially.
I have tried using Parallel Python:
ppservers = ()
job_server = pp.Server(ppservers=ppservers)

inputs = tuple(mdblist)
functions = (preparePointLayer, prepareInterTable, jointInterToPoint,
             prepareDataTable, exportElemTables, joinDatatoPoint, exportToShapefile)
modules = ("sys", "os", "arcgisscripting", "string", "time")

fn = pp.Template(job_server, processMdb, functions, modules)
jobs = [(input, fn.submit(input)) for input in inputs]
It succeeds in creating 8 processes and 8 geoprocessor objects... and then fails.
I have not experimented extensively with Python's built-in multithreading/multiprocessing tools, but I was hoping for some guidance on simply spawning up to 8 processes that work through the queue represented by mdblist, roughly along the lines of the sketch below. At no point would multiple processes attempt to read or write the same file. To keep things temporarily simpler, I have also removed all of my logging tools because of this concern; I have run this script enough times to know that it works, except for 4 of the 4104 input files, which have slightly different data formats.
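For concreteness, the shape of what I have in mind is something like this (an untested sketch; it assumes processMdb and createDeepMdbList are importable by the worker processes, and the worker function name is just a placeholder):

import multiprocessing

def worker(queue):
    # Pull .mdb paths off the shared queue until a sentinel (None) is seen.
    while True:
        infile = queue.get()
        if infile is None:
            break
        processMdb(infile)

if __name__ == '__main__':
    indir = "D:\\basil\\input"
    mdblist = createDeepMdbList(indir)

    queue = multiprocessing.Queue()
    for infile in mdblist:
        queue.put(infile)

    procs = [multiprocessing.Process(target=worker, args=(queue,))
             for _ in range(8)]
    for _ in procs:
        queue.put(None)  # one sentinel per worker so each one can exit
    for p in procs:
        p.start()
    for p in procs:
        p.join()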
Advice? Wisdom with trying to multithread Arc Python scripts?
Upvotes: 4
Views: 3152
Reputation: 1
I compared the above approaches on the same function; the results are below, followed by a sketch of the kind of timing harness that produces them.
Starting pp with 1 workers
Time elapsed: 4.625 s
Starting pp with 2 workers
Time elapsed: 2.43700003624 s
Starting pp with 4 workers
Time elapsed: 2.42100000381 s
Starting pp with 8 workers
Time elapsed: 2.375 s
Starting pp with 16 workers
Time elapsed: 2.43799996376 s
Starting mul_pool with 1 p
Time elapsed: 5.31299996376 s
Starting mul_pool with 2
Time elapsed: 3.125 s
Starting mul_pool with 4
Time elapsed: 3.56200003624 s
Starting mul_pool with 8
Time elapsed: 4.5 s
Starting mul_pool with 16
Time elapsed: 5.92199993134 s
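A harness that produces this kind of output might look roughly like the following (a sketch, not the exact code used; work() is a stand-in for the function actually being benchmarked):

import time
import multiprocessing
import pp

def work(x):
    # Stand-in for the real workload being benchmarked.
    total = 0
    for i in range(100000):
        total += i * x
    return total

if __name__ == '__main__':
    tasks = range(64)

    for ncpus in (1, 2, 4, 8, 16):
        print("Starting pp with %d workers" % ncpus)
        start = time.time()
        job_server = pp.Server(ncpus=ncpus)
        jobs = [job_server.submit(work, (t,)) for t in tasks]
        results = [job() for job in jobs]
        job_server.destroy()
        print("Time elapsed: %s s" % (time.time() - start))

    for nprocs in (1, 2, 4, 8, 16):
        print("Starting mul_pool with %d" % nprocs)
        start = time.time()
        pool = multiprocessing.Pool(processes=nprocs)
        results = pool.map(work, tasks)
        pool.close()
        pool.join()
        print("Time elapsed: %s s" % (time.time() - start))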
Upvotes: -1
Reputation: 131
Thought I'd share what ended up working for me and my experiences.
Using the backport of the multiprocessing module (code.google.com/p/python-multiprocessing), as per Joe's comment, worked well. I had to change a couple of things in my script to deal with local/global variables and logging (one way to handle the logging is sketched below).
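For the logging piece, one arrangement that keeps multiple processes from writing to the same file is to give each worker process its own log file; a minimal sketch (the getProcessLogger name and the file naming are just examples, not my actual code):

import logging
import multiprocessing
import os

def getProcessLogger(logdir):
    """Return a logger that writes to a file unique to this worker process."""
    name = multiprocessing.current_process().name
    logger = logging.getLogger(name)
    if not logger.handlers:  # configure the logger only once per process
        handler = logging.FileHandler(os.path.join(logdir, "%s.log" % name))
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger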
Main script is now:
if __name__ == '__main__':
    indir = r'C:\basil\rs_Rock_and_Sediment\DVD_Data\testdir'
    mdblist = createDeepMdbList(indir)

    processes = 6  # set num procs to use here
    pool = multiprocessing.Pool(processes)

    pool.map(processMdb, mdblist)
Total time went from ~36 hours to ~8 using 6 processes.
Some issues I encountered: because the workers are separate processes, each has its own memory space, so global variables are not shared between them at all. Queues could be used to pass data between processes, but I have not implemented that, so everything is just declared locally.
Furthermore, since pool.map passes only a single argument to the worker function, each iteration has to create and then delete its own geoprocessor object, rather than creating 8 gp's up front and handing an available one to each iteration (one way around this, using a Pool initializer, is sketched below). Each iteration takes about a minute, so the couple of seconds spent creating the gp is not a big deal, but it adds up. I have not done any concrete tests, but this could actually be good practice: anyone who has worked with ArcGIS and Python will know that scripts slow down drastically the longer the geoprocessor is active (e.g. one of my scripts was used by a co-worker who overloaded the input, and the estimated time to completion went from 50 hours after 1 hour of run time, to 350 hours after running overnight, to 800 hours after running for 2 days... it got cancelled and the input restricted).
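If creating one gp per worker process (rather than one per iteration) ever becomes worthwhile, a Pool initializer can do it: the initializer runs once in each worker and stashes the geoprocessor in a module-level global. A rough sketch, assuming the ArcGIS 9.3 arcgisscripting API; processMdbSharedGp is a hypothetical variant of processMdb, not my actual function:

import multiprocessing
import arcgisscripting

gp = None  # one geoprocessor per worker process, set up by the initializer

def initWorker():
    """Runs once in each worker process when the pool starts."""
    global gp
    gp = arcgisscripting.create(9.3)

def processMdbSharedGp(infile):
    # Hypothetical variant of processMdb that reuses the per-process gp
    # instead of creating and deleting one on every call.
    gp.OverwriteOutput = 1
    # ... table restrictions, joins, queries, exports against infile ...
    return infile

if __name__ == '__main__':
    indir = r'C:\basil\rs_Rock_and_Sediment\DVD_Data\testdir'
    mdblist = createDeepMdbList(indir)  # same helper as in the main script
    pool = multiprocessing.Pool(processes=6, initializer=initWorker)
    pool.map(processMdbSharedGp, mdblist)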
Hope that helps anyone else looking to multiprocess a large iterable input :). Next step: recursive, multiprocessed appends!
Upvotes: 5