Reputation: 11494
I want to pass log output from a forked Python process back to the parent, so I am using a pipe pair for that:
rpipe, wpipe = os.pipe()
pid = os.fork()
if pid == -1:
    raise TestError("Failed to fork() in prepare_test_dir")
if pid == 0:
    # Child -- do the copy, print log to pipe and exit
    try:
        os.close(rpipe)
        os.dup2(wpipe, sys.stdout.fileno())
        os.dup2(wpipe, sys.stderr.fileno())
        os.close(wpipe)
        self._prepare_test_dir(test)
        sys.stdout.write(self.copy_log)
    finally:
        os._exit(1)
os.close(wpipe)
_, status = os.waitpid(pid, 0)
# XXX: if copy_log is larger than PIPE_BUF (4-8k), everything
# goes badly (see the sketch after this block)
outf = os.fdopen(rpipe)
self.copy_log = outf.read()
return os.WEXITSTATUS(status)
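As an aside on that XXX comment: as far as I understand, the parent should drain the pipe before calling waitpid, otherwise a child that writes more than the pipe buffer blocks in write() while the parent sits in waitpid(), and neither side makes progress. A rough sketch of that reordering on the parent side (same variables as above, illustrative rather than my actual code):

os.close(wpipe)
outf = os.fdopen(rpipe)
self.copy_log = outf.read()     # read to EOF; EOF arrives once the child exits and
                                # the last write end of the pipe is closed
outf.close()
_, status = os.waitpid(pid, 0)  # reap the child only after the pipe is drained
return os.WEXITSTATUS(status)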
The code doesn't work: nothing ever appears in self.copy_log. I have also tried to construct the stdout object explicitly with fdopen:
sys.stdout = os.fdopen(wpipe, 'w')
That doesn't work either. However, if I put a print before the dup2 calls:
if pid == 0:
    try:
        print 'HELLO'
        os.close(rpipe)
        os.dup2(wpipe, sys.stdout.fileno())
        os.dup2(wpipe, sys.stderr.fileno())
        ...
the copy log is successfully passed to the parent, and 'HELLO' is printed to the controlling terminal. I presume that print somehow affects sys.stdout (lazy initialization or something like that). Any ideas?
I use Python 2.6 and 2.7 on various Linux platforms.
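For reference, here is a stripped-down standalone script that reproduces the same behaviour for me (the names and strings are illustrative, not taken from my real code):

import os
import sys

rpipe, wpipe = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: point stdout at the pipe and write without a trailing newline,
    # so the data stays in the buffer of the sys.stdout file object.
    os.close(rpipe)
    os.dup2(wpipe, sys.stdout.fileno())
    os.close(wpipe)
    sys.stdout.write('child says hello')
    os._exit(1)                      # exits without flushing, like the code above

os.close(wpipe)
data = os.fdopen(rpipe).read()       # EOF once the child exits and closes its write end
os.waitpid(pid, 0)
print 'parent read: %r' % data       # prints '' here -- the buffered bytes never reach the pipe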
Upvotes: 1
Views: 2099
Reputation: 11494
The problem turns out to be not in the pipe/dup2-related code, but in os._exit. It is a brutal way to kill the Python interpreter (which is fine for forked processes, since it won't touch "shared" objects), but it causes stdout and stderr not to be flushed, so the data is lost.
I ended up with the following code in the child:
try:
    os.close(rpipe)
    os.dup2(wpipe, sys.stdout.fileno())
    os.dup2(wpipe, sys.stderr.fileno())
    os.close(wpipe)
    print 'aaaaaaaaa'
except:
    traceback.print_exc(20, sys.stderr)
finally:
    # Flush explicitly so buffered output reaches the pipe before os._exit()
    sys.stdout.flush()
    sys.stderr.flush()
    os._exit(1)
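Another option that should work for the same reason (an untested sketch, Python 2): reopen the redirected descriptors unbuffered, so every write goes straight to the pipe and there is nothing left for os._exit to lose:

os.close(rpipe)
os.dup2(wpipe, 1)
os.dup2(wpipe, 2)
os.close(wpipe)
# bufsize=0 makes the file objects unbuffered, so writes hit the pipe immediately
sys.stdout = os.fdopen(1, 'w', 0)
sys.stderr = os.fdopen(2, 'w', 0)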
Upvotes: 1