Reputation: 4352
I am testing my code with Docker for a multi-process implementation in which the processes need to run independently of their parent; they simply run to completion and do not need to communicate anything back to the parent.
I have implemented the application in two ways in case one breaks: one with the multiprocessing (mp) library, the other with a simple fork (f). The mp version works perfectly because it explicitly waits for the subprocesses to complete before the parent exits. With fork, however, I assumed the container would keep running until the subprocesses completed even after the parent exits, but it does not. Does anyone know of a workaround for this with Docker + fork?
multiprocess
imdata = [pools.apply_async(detector_optimized.run_full_detection,
                            (gpu_slot, batch_jobs[gpu_slot][0], batch_jobs[gpu_slot][1], src, dst, ext))
          for gpu_slot in xrange(num_jobs)]

# Wait for all subprocesses to complete
map(lambda x: x.get(), imdata)
pools.close()
pools.join()
fork
for gpu_slot in xrange(len(batch_jobs)):
    pid = os.fork()
    if pid == 0:
        # Child: run one job, then exit without falling back into the loop
        detector_optimized.run_full_detection(gpu_slot, batch_jobs[gpu_slot][0], batch_jobs[gpu_slot][1],
                                              src, dst, detector_optimized.vars.get_EXTENSION())
        os._exit(0)
If I remove the os._exit(0) line, each child falls through into the rest of the loop after its job finishes, and the jobs no longer run in parallel; they execute sequentially.
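For comparison, here is a minimal sketch of how the parent could block on the forked children the same way pools.join() does in the multiprocessing version above. It reuses the variable names from my snippet and is only an illustration, not my actual code:

child_pids = []
for gpu_slot in xrange(len(batch_jobs)):
    pid = os.fork()
    if pid == 0:
        detector_optimized.run_full_detection(gpu_slot, batch_jobs[gpu_slot][0], batch_jobs[gpu_slot][1],
                                              src, dst, detector_optimized.vars.get_EXTENSION())
        os._exit(0)
    child_pids.append(pid)

# Parent: reap every child before exiting, so the container stays up
for pid in child_pids:
    os.waitpid(pid, 0)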
Upvotes: 0
Views: 501
Reputation: 133
A Docker container exits when the process with PID 1 exits; any other processes in the container are ignored (and stopped along with it). If you have multiple processes in a container, you can use a tool that waits for all of them to complete and only then exits; that tool would run as PID 1. You can look at Supervisor, Monit, or Chaperone.
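To make the PID 1 point concrete, here is a minimal sketch (my own illustration, not taken from any of those tools; run_job is a stand-in for the real work) of a wrapper that could be the container's ENTRYPOINT: it forks the workers and then reaps every child before exiting, so the container only stops once all jobs are done:

import os
import time

def run_job(slot):
    # Stand-in for the real per-GPU job
    time.sleep(1)

def main(num_jobs):
    for slot in range(num_jobs):
        if os.fork() == 0:
            run_job(slot)
            os._exit(0)
    # This process is PID 1 inside the container: reap every child before
    # returning, otherwise Docker stops the container while jobs still run.
    while True:
        try:
            os.wait()
        except OSError:  # no children left to wait for
            break

if __name__ == '__main__':
    main(4)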
Upvotes: 1