Reputation: 2714
I am using a third-party program that is designed to be run as a command-line program and outputs files that I later need to use in my code. I am working in JupyterLab and want to integrate the calls to this program into my code. The typical way to run it is:
python create_files.py -a input_a -b input_b -c -d
I then want to call this within my Jupyter notebook. I have been able to get it to work by using !, i.e.:
! python create_files.py -a input_a -b input_b -c -d
The problem with this is that when I want to specify input_a or input_b using variables, it doesn't work, because it seems that ! expects a literal string, so to speak.
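For example, what I would like is something along these lines (a sketch; file_a is just a Python variable I define in the notebook):
file_a = "my_input"        # a normal Python variable in the notebook
! python create_files.py -a file_a -b input_b -c -d
# -> this passes the literal text "file_a" to the script, not the variable's value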
Is there a cleaner way of doing this without having to alter the source code of this program? (I have looked into that, and the code is written such that there is no simple way to call its main function.)
Upvotes: 3
Views: 3170
Reputation: 1
Your question is similar to this one:
How to execute a *.PY file from a *.IPYNB file on the Jupyter notebook?
You may use the following command, which is a little hacky:
%run -i 'create_files.py'
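For the command in the question, %run can also pass the arguments through to the script (a sketch, assuming create_files.py reads its options from sys.argv, e.g. with argparse):
%run create_files.py -a input_a -b input_b -c -d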
A "correct" way is to use the autoreload method. An example is as follows:
%load_ext autoreload
%autoreload 2
# some_function and input are placeholders for whatever create_files exposes
from create_files import some_function
output = some_function(input)
The autoreload documentation is available here: https://ipython.org/ipython-doc/3/config/extensions/autoreload.html
Hope it helps.
Upvotes: 0
Reputation: 18822
In a Jupyter notebook, using subprocess to run a command-line script goes like this:
The plain command-line version (here a Windows dir command):
dir *.txt /s /b
In the Jupyter notebook:
import subprocess
# dir is a built-in command of the Windows shell, so shell=True is required here
sp = subprocess.Popen(['dir', '*.txt', '/s', '/b'],
                      stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE,
                      shell=True)
(std_out, std_err) = sp.communicate()  # waits for the command and returns (stdout, stderr)
Printing out the error message, just in case:
print('std_err: ', std_err)
Printing out the captured output:
print('std_out: ', std_out)
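Adapting this to the command in the question, with the inputs held in Python variables (a sketch; input_a_value and input_b_value are hypothetical placeholder names):
import subprocess
input_a_value = 'some_input'     # placeholder values held in Python variables
input_b_value = 'other_input'
sp = subprocess.Popen(['python', 'create_files.py',
                       '-a', input_a_value,
                       '-b', input_b_value,
                       '-c', '-d'],
                      stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE)   # no shell=True needed to run an executable
(std_out, std_err) = sp.communicate()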
I think these examples are clear enough that you can adapt them to your needs. Hope it helps.
Upvotes: 1