Reputation: 33
I'm doing a Monte Carlo simulation with multiple processes, using Python's multiprocessing library. Each process repeatedly guesses some object, and if it meets some condition it is added to a shared list. The calculation is finished when this list meets some condition.
My current code looks like this (pseudocode, with unimportant details left out):
from multiprocessing import Manager

mgr = Manager()
ns = mgr.Namespace()
ns.mylist = []            # in-place mutations don't propagate through the Namespace proxy; reassign instead
ns.othersharedstuff = x
killsig = mgr.Event()
processes = [MyProcess(ns, killsig) for _ in range(8)]
for p in processes: p.start()
for p in processes: p.join()
data = ns.mylist          # get data from ns.mylist
class MyProcess(Process):
    def __init__(self, ns, killsig):
        super().__init__()
        self.ns, self.killsig = ns, killsig

    def run(self):
        localdata = y
        while not self.killsig.is_set():
            x = guessObject()
            if x.meetsCondition():
                self.ns.mylist = self.ns.mylist + [x]   # reassign; append() on the proxy's copy is lost
                self.ns.otherdata = localdata            # put local data into ns (attribute name is a placeholder)
                if mylistMeetsCondition(self.ns.mylist): # placeholder for the finish test
                    self.killsig.set()
        self.ns.otherdata = localdata                    # put local data into ns
When I replace 'while not self.killsig.is_set():' with 'while True:', the speed of my simulation increases by about 25%! (Except that it no longer terminates, of course.)
Is there a faster way than using signals? It is not important if the unsynchronized local data of each process is lost, so something involving process.terminate() would be fine too.
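For reference: mgr.Event() returns a proxy object, so every is_set() call is a round trip to the manager process, which likely explains the 25%. One way to amortize that cost is to check the event only every CHECK_EVERY guesses; here is a minimal self-contained sketch, where CHECK_EVERY is a made-up tuning knob and the guess step is left as a comment:

import time
from multiprocessing import Manager, Process

CHECK_EVERY = 1000  # made-up tuning knob: guesses between signal checks

def worker(killsig):
    guesses = 0
    while True:
        guesses += 1
        # ... one Monte Carlo guess would go here ...
        if guesses % CHECK_EVERY == 0 and killsig.is_set():
            break   # only one manager round trip per CHECK_EVERY guesses

if __name__ == '__main__':
    mgr = Manager()
    killsig = mgr.Event()
    workers = [Process(target=worker, args=(killsig,)) for _ in range(8)]
    for p in workers: p.start()
    time.sleep(0.1)
    killsig.set()           # every worker notices within CHECK_EVERY guesses
    for p in workers: p.join()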
Upvotes: 3
Views: 3605
Reputation: 1144
Since the original process already has a list of all your subprocesses, why not use it to terminate them? I'm picturing something like this:
import time

ns.othersharedstuff = x
killsig = mgr.Event()
processes = [MyProcess(ns, killsig) for _ in range(8)]
for p in processes: p.start()

while not killsig.is_set():
    time.sleep(0.01)        # poll every 10 milliseconds

for p in processes: p.terminate()
data = ns.mylist            # get data from ns.mylist
Then you can just set the loop in the workers to 'while True:'.
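Here is a minimal runnable sketch of the whole pattern. The guess step (accepting random points inside the unit circle) and the TARGET stop condition are stand-ins for the question's guessObject()/meetsCondition(), not part of the original code. Note that mgr.list() lives in the manager process, so the collected results survive terminate():

import random
import time
from multiprocessing import Manager, Process

TARGET = 10000   # made-up stop condition: collect this many accepted guesses

def worker(results, killsig):
    while True:   # no per-iteration signal check in the hot loop
        x, y = random.random(), random.random()
        if x * x + y * y < 1.0:          # stand-in for meetsCondition()
            results.append((x, y))
            if len(results) >= TARGET:
                killsig.set()
                return

if __name__ == '__main__':
    mgr = Manager()
    results = mgr.list()   # lives in the manager process, so it survives terminate()
    killsig = mgr.Event()
    processes = [Process(target=worker, args=(results, killsig)) for _ in range(8)]
    for p in processes: p.start()
    while not killsig.is_set():
        time.sleep(0.01)   # the parent polls; the workers never check
    for p in processes: p.terminate()
    print(len(results), 'accepted guesses collected')

Since each accepted guess still costs a round trip to the manager, this only pays off when accepted guesses are rare relative to total guesses, as in the question.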
Upvotes: 3