Reputation: 741
I am using Python and its subprocess library to check the output of calls to strace, along the lines of:
subprocess.check_output(["strace", str(processname)])
However, this only gives me the output after the subprocess has finished, which is very limiting for my use case.
I need a kind of "stream" or live output from the process, i.e. I need to read its output while it is still running rather than only after it has finished.
Is there a convenient way to achieve this with the subprocess library? I'm thinking of some kind of poll every x seconds, but found no hints in the documentation on how to implement this.
Many thanks in advance.
Upvotes: 14
Views: 16589
Reputation: 1577
If you want to treat stdout and stderr separately, you can spawn two threads that handle them concurrently (live as the output is produced).
Adapted from my more detailed answer:
import logging
from collections import deque
from concurrent.futures import ThreadPoolExecutor
from functools import partial
from subprocess import PIPE, CalledProcessError, CompletedProcess, Popen

def stream_command(
    args,
    *,
    stdout_handler=logging.info,
    stderr_handler=logging.error,
    check=True,
    text=True,
    stdout=PIPE,
    stderr=PIPE,
    **kwargs,
):
    """Mimic subprocess.run, while processing the command output in real time."""
    with (  # parenthesized multi-item with statements require Python 3.10+
        Popen(args, text=text, stdout=stdout, stderr=stderr, **kwargs) as process,
        ThreadPoolExecutor(2) as pool,  # two threads to handle the (live) streams separately
    ):
        exhaust = partial(deque, maxlen=0)  # collections recipe: exhaust an iterable at C-speed
        exhaust_async = partial(pool.submit, exhaust)  # exhaust non-blocking in a background thread
        exhaust_async(stdout_handler(line[:-1]) for line in process.stdout)
        exhaust_async(stderr_handler(line[:-1]) for line in process.stderr)
    retcode = process.poll()  # block until both iterables are exhausted (process finished)
    if check and retcode:
        raise CalledProcessError(retcode, process.args)
    return CompletedProcess(process.args, retcode)
Call with simple print handlers:
stream_command(["echo", "test"], stdout_handler=print, stderr_handler=print)
# test
Or with custom handlers:
outs, errs = [], []

def stdout_handler(line):
    outs.append(line)
    print(line)

def stderr_handler(line):
    errs.append(line)
    print(line)

stream_command(
    ["echo", "test"],
    stdout_handler=stdout_handler,
    stderr_handler=stderr_handler,
)
# test
print(outs)
# ['test']
Upvotes: 0
Reputation: 6723
As of Python 3.2 (when context manager support was added to Popen), I have found this to be the most straightforward way to continuously stream output from a subprocess:
import subprocess

def run(args):
    with subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as process:
        for line in process.stdout:
            print(line.decode('utf8'))
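A small variation on this, assuming Python 3.7+ for the text parameter: the pipe then yields str lines directly, so there is no manual decode and no doubled newlines from print:

```python
import subprocess

def run(args):
    # text=True makes process.stdout yield str lines (no manual decode);
    # each line keeps its trailing newline, hence print(..., end="")
    with subprocess.Popen(
        args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    ) as process:
        for line in process.stdout:
            print(line, end="")
```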
Upvotes: 18
Reputation: 6170
According to the documentation:
Popen.poll()
Check if child process has terminated. Set and return returncode attribute.
So based on this you can:
process = subprocess.Popen('your_command_here', stdout=subprocess.PIPE)
while True:
    output = process.stdout.readline()
    if process.poll() is not None and output == b'':
        break
    if output:
        print(output.strip())
retval = process.poll()
This will loop, reading stdout, and display the output in real time.
Note that stdout is opened in binary mode by default, so readline() returns bytes; on Python 3 the end-of-stream sentinel must therefore be compared against b'' (an empty bytes object), not the str '', which never matches.
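Alternatively, passing text=True sidesteps the bytes-vs-str issue entirely, since readline() then returns str. A self-contained sketch of the same loop (the child command here is a hypothetical stand-in for your_command_here):

```python
import subprocess
import sys

# Hypothetical child that emits three lines with short pauses,
# standing in for 'your_command_here'.
child = [
    sys.executable, "-u", "-c",
    "import time\nfor i in range(3): print('line', i); time.sleep(0.1)",
]

process = subprocess.Popen(child, stdout=subprocess.PIPE, text=True)
while True:
    output = process.stdout.readline()
    if process.poll() is not None and output == '':  # str sentinel works with text=True
        break
    if output:
        print(output.strip())
retval = process.poll()
```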
Upvotes: 5
Reputation: 1857
I had some problems applying the selected answer to stream output from a test runner. The following worked better for me:
import subprocess
from time import sleep

def stream_process(process):
    go = process.poll() is None
    for line in process.stdout:
        print(line)
    return go

process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while stream_process(process):
    sleep(0.1)
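Note that iterating process.stdout already blocks until the pipe closes, so the while loop mostly serves to re-check the return code. A hedged, self-contained variant (the child command here is a stand-in for cmd) that streams the same way and also reaps the process:

```python
import subprocess
import sys

# Stand-in for the real test-runner command.
cmd = [sys.executable, "-c", "print('hello')"]

process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
for line in process.stdout:
    print(line, end="")  # stream each line as it arrives
returncode = process.wait()  # reap the child once the pipe is exhausted
```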
Upvotes: 9