Reputation: 4328
I'm trying to use Python to launch a command in multiple separate instances of a terminal simultaneously. What is the best way to do this? Right now I am trying to use the subprocess module with Popen, which works for one command but not for multiple.
Thanks in advance.
Edit:
Here is what I am doing:
from subprocess import *

Popen('ant -Dport=' + str(5555) + ' -Dhost=' + GetIP() +
      ' -DhubURL=http://192.168.1.113:4444'
      ' -Denvironment=*firefox launch-remote-control'
      ' $HOME/selenium-grid-1.0.8', shell=True)
The problem for me is that this launches a Java process in the terminal which I want to keep running indefinitely. Secondly, I want to run a similar command multiple times in multiple different processes.
Upvotes: 3
Views: 3242
Reputation: 7033
This should stay open as long as the process is running. If you want to launch multiple processes simultaneously, just wrap each one in a thread. Untested code, but you should get the general idea:
import threading
from subprocess import Popen

class PopenThread(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self)
        self.port = port

    def run(self):
        # GetIP() is the asker's helper from the question
        Popen('ant -Dport=' + str(self.port) + ' -Dhost=' + GetIP() +
              ' -DhubURL=http://192.168.1.113:4444'
              ' -Denvironment=*firefox launch-remote-control'
              ' $HOME/selenium-grid-1.0.8', shell=True)

if __name__ == '__main__':
    PopenThread(5555).start()
    PopenThread(5556).start()
    PopenThread(5557).start()
EDIT: The double-fork method described in Mike's answer at https://stackoverflow.com/a/3765162/450517 would be the proper way to launch a daemon, i.e. a long-running process which won't communicate over stdio.
Upvotes: 1
Reputation: 21218
Here is a crude version of a blocking queue. You can fancy it up with collections.deque or the like, or go even fancier with Twisted deferreds, or whatnot. It has some crummy parts; season to taste!
import logging
import subprocess
import time

basicConfig = dict(level=logging.INFO,
                   format='%(process)s %(asctime)s %(lineno)s %(levelname)s %(name)s %(message)s')
logging.basicConfig(**basicConfig)
# use the root logger when run as a script, the module logger otherwise
logger = logging.getLogger({"__main__": None}.get(__name__, __name__))

def wait_all(list_of_Popens, sleep_time=1):
    """Blocking wait for all jobs to return.

    Args:
        list_of_Popens: list of possibly still-running jobs.
    Returns:
        the same list, with every job complete.
    Side effect:
        blocks until all jobs complete.
    """
    jobs = list_of_Popens
    while None in [j.returncode for j in jobs]:
        for j in jobs:
            j.poll()
        logger.info("not all jobs complete, sleeping for %i", sleep_time)
        time.sleep(sleep_time)
    return jobs

jobs = [subprocess.Popen('sleep 1'.split()) for x in range(10)]
jobs = wait_all(jobs)
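If you don't need the periodic progress message, a simpler blocking wait is just a sketch over the standard Popen.wait() call:

# Popen.wait() blocks until that job exits, so looping over every job
# blocks until the slowest one has finished.
for j in jobs:
    j.wait()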
Upvotes: 0
Reputation: 42805
The simple answer I can come up with is to have Python use Popen to launch a shell script similar to:
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP1 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP2 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
# etc. ...
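If you'd rather drive that directly from Python, here is a minimal sketch of the same idea. It assumes gnome-terminal is installed (-e is its legacy "run this command" flag), and HOST is a stand-in for the asker's GetIP() helper:

import os
from subprocess import Popen

HUB = 'http://192.168.1.113:4444'
HOST = '192.168.1.100'  # stand-in for the asker's GetIP() helper
GRID = os.path.expandvars('$HOME/selenium-grid-1.0.8')

for port in (5555, 5556):
    cmd = ('ant -Dport=%d -Dhost=%s -DhubURL=%s -Denvironment=*firefox'
           ' launch-remote-control %s' % (port, HOST, HUB, GRID))
    # Each call opens a separate terminal window; Popen returns
    # immediately, so the windows launch in parallel.
    Popen(['gnome-terminal', '--window', '-e', cmd])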
There's a fully-Python way to do this, but it's ugly, only works on Unix-like OSes, and I don't have time to write the code out. Basically, subprocess.Popen doesn't support it because it assumes you want to either wait for the subprocess to finish, interact with the subprocess, or monitor the subprocess. It doesn't support the "just launch it and don't bother me with it ever again" case.

The way that's done in Unix-like OSes is to:

1. fork to spawn a subprocess
2. have that subprocess fork a subprocess of its own
3. have the grandchild redirect its standard I/O to /dev/null and then use one of the exec functions to launch the process you really want to start (might be able to use Popen for this part)
4. have the intermediate child exit, so the original process never has to deal with a SIGCHLD signal, and if the grandparent terminates it doesn't kill all the grandchildren

I might be off in the details, but that's the gist. Backgrounding (&) and disowning in bash are supposed to accomplish the same thing.
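For concreteness, here is a minimal, hedged sketch of that double-fork recipe (Unix only; spawn_detached and the shortened ant arguments are illustrative, not from the original answer):

import os

def spawn_detached(argv):
    # First fork: the parent returns immediately and reaps the child,
    # so no zombie is left behind.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)
        return
    # Child: start a new session to detach from the controlling terminal.
    os.setsid()
    # Second fork: the child exits at once, orphaning the grandchild so
    # the original process never sees its SIGCHLD.
    if os.fork() > 0:
        os._exit(0)
    # Grandchild: point stdin/stdout/stderr at /dev/null, then exec the
    # real program (this call never returns on success).
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.execvp(argv[0], argv)

if __name__ == '__main__':
    spawn_detached(['ant', '-Dport=5555', 'launch-remote-control'])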
Upvotes: 1