Reputation: 1581
I managed to call and run a terminal command in my Python script, and it works. But now I am trying to save the output of this command into a text file, and I am getting errors.
This is my initial code, which works correctly:
import os
os.system("someCmds --proj ARCH --all")
This is the code I used to try to save the output into a text file:
import os
import sys
sys.stdout=open("output.txt","w")
command = os.system("someCmds --proj ARCH --all")
print command
sys.stdout.close()
However, I got the following error: ValueError: I/O operation on closed file
I closed the file because that is what I found suggested online. So where am I going wrong?
Upvotes: 0
Views: 1732
Reputation: 487695
The Python programming part of this is more appropriate for stackoverflow.com. However, there's a Unix-oriented component as well.
Every process has three well-known file descriptors. While they have names—stdin, stdout, and stderr—they are actually known by their file numbers, which are 0, 1, and 2 respectively. To run a program and capture its stdout—its file-number-1 output—you must connect that process's file-descriptor-1 to a file or pipe.
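Just as an illustration (this is not part of the fix), you can see those numbers from inside Python itself:

import sys
# The standard streams wrap the three well-known descriptors:
print(sys.stdin.fileno())    # 0
print(sys.stdout.fileno())   # 1
print(sys.stderr.fileno())   # 2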
In the command-line shells, there is syntax for this:

prog > file

runs the program prog in a process whose stdout is connected to an open file descriptor that has been moved into descriptor-slot-1. Actually making all that happen at the system-call level is complicated:
1. The fork system call, or one of its variants: this makes a clone of yourself.
2. In the clone, open the file (or create the pipe) that is to receive the output.
3. Move that open descriptor into descriptor-slot-1 (the dup2 system call does this).
4. The exec family of calls, to terminate yourself (the clone) but, in the process, replace everything with the program prog. This maintains all your open file descriptors, including the one pointing to the file or pipe you moved in step 3. Once the exec succeeds, you no longer exist and cannot do anything. (If the exec fails, report the failure and exit.)

Python being what it is, this multi-step sequence is all wrapped up for you, with fancy error checking, in the subprocess module. Instead of using os.system, you can use subprocess.Popen. It's designed to work with pipes, which are actually more difficult than files, so if you really want to redirect to a file, rather than simply reading the program's output through a pipe, you will want to open the file first, then invoke subprocess.Popen; both approaches are sketched below.
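To make the four numbered steps concrete, here is a rough hand-rolled sketch in Python, using the command from your question. It is Unix-only and skips most error handling; it is exactly the work subprocess does for you, so treat it as illustration only:

import os

pid = os.fork()                          # step 1: clone yourself
if pid == 0:                             # we are the clone (child)
    # step 2: open the file that should receive the output
    fd = os.open("output.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.dup2(fd, 1)                       # step 3: move it into descriptor-slot-1 (stdout)
    os.close(fd)
    try:
        # step 4: replace ourselves with the program
        os.execvp("someCmds", ["someCmds", "--proj", "ARCH", "--all"])
    except OSError:
        os.write(2, b"exec failed\n")    # the exec failed: report it ...
        os._exit(127)                    # ... and exit the clone
else:
    os.waitpid(pid, 0)                   # the parent waits for the child to finish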
In any case, altering your Python process's sys.stdout is not helpful, as sys.stdout is a Python data structure quite independent of the underlying file descriptor system. By opening a Python stream, you do obtain a file descriptor (as well as a Python data structure), but it is not file-descriptor-number-1. The underlying file descriptor number, whatever it is, has to be moved to the slot-1 position after the fork call, in the clone. Even if you use subprocess.Popen, the only descriptor Python will move, post-fork, is the one you pass as the stdout= argument.
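For the command and file name in your question, the open-the-file-first approach might look like this minimal sketch:

import subprocess

# Open the target file ourselves; Popen moves its descriptor into
# slot 1 (stdout) in the child, after the fork and before the exec.
with open("output.txt", "w") as out:
    proc = subprocess.Popen(["someCmds", "--proj", "ARCH", "--all"], stdout=out)
    proc.wait()                          # wait for the command to finish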
(Subprocess's Popen accepts any of these for its stdin=, stdout=, and stderr= arguments:

- an open stream, in which case it calls stream.fileno() to get the descriptor number, or
- subprocess.PIPE or, for stderr=, subprocess.STDOUT: these tell it that Python should create a pipe, or re-use the previously-created stdout pipe for the special stderr=subprocess.STDOUT case.

The library is pretty fancy and knows how to report, with a Python traceback, a failure to exec, or various other failures that occur in the child. It does this using another auxiliary pipe with close-on-exec. EOF on this pipe means the exec succeeded; otherwise the data arriving on this extra pipe include the failure, converted to a byte stream using the pickle module.)
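A sketch of the pipe-based route, in case you would rather read the output back into Python and write the file yourself:

import subprocess

# stdout=subprocess.PIPE makes Popen create a pipe for the child's stdout;
# stderr=subprocess.STDOUT folds the child's stderr into that same pipe.
proc = subprocess.Popen(["someCmds", "--proj", "ARCH", "--all"],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = proc.communicate()           # bytes read from the pipe
with open("output.txt", "wb") as out:
    out.write(output)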
Upvotes: 2