Jellicle

Reputation: 30206

Communicating multiple times w/ subprocess (multiple calls to stdout)

There's a similar question to mine on [this thread][1].

I want to send a command to my subprocess, interpret the response, then send another command. It would seem a shame to have to start a new subprocess to accomplish this, particularly if subprocess2 must perform many of the same tasks as subprocess1 (e.g. ssh, open mysql).

I tried the following:

subprocess1.stdin.write([my commands])
subprocess1.stdin.flush()
subprocess1.stdout.read()

But without a byte-count argument, read() blocks until the stream reaches EOF, so the program gets stuck on that instruction — and I can't supply an argument to read() because I can't guess how many bytes are available in the stream.

I'm running WinXP, Py2.7.1

EDIT

Credit goes to @regularfry for giving me the best solution for my real intention (read the comments in his response, as they pertain to accomplishing my goal through an SSH tunnel). (His/her answer has been voted up.) For the benefit of any viewer who hereafter comes for an answer to the title question, however, I've accepted @Mike Pennington's answer.

Upvotes: 0

Views: 1932

Answers (3)

wberry

Reputation: 19347

This approach will work (I've done this), but it takes some effort and relies on Unix-specific calls. You'll have to abandon the subprocess module and roll your own equivalent based on fork/exec and os.pipe().

Use the fcntl.fcntl function to place the stdin/stdout file descriptors (read and write) for your child process into non-blocking mode (the O_NONBLOCK flag) after creating them with os.pipe().
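A minimal sketch of that setup (Unix-only; the bare pipe here stands in for the child's stdout):

```python
import fcntl
import os

# Create the pipe that will become the child's stdout.
r, w = os.pipe()

# Switch the parent's read end into non-blocking mode:
# os.read() on it will now raise EAGAIN instead of hanging
# when no data is available yet.
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)
```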

Use the select.select function to poll or wait for readiness on your file descriptors. To avoid deadlocks, use select() to ensure that writes will not block, just as you do for reads. Even then, you must catch OSError on your reads and writes and retry when you get EAGAIN: even after select() reports a descriptor ready, EAGAIN can still occur in non-blocking mode — a well-known kernel quirk that has proven difficult to fix.

If you are willing to implement on the Twisted framework, they have supposedly solved this problem for you; all you have to do is write a Process subclass. But I haven't tried that myself yet.

Upvotes: 1

Mike Pennington

Reputation: 43077

@JellicleCat, I'm following up on the comments. I believe wexpect is part of Sage... AFAIK it is not packaged separately, but you can download wexpect here.

Honestly, if you're going to drive programmatic ssh sessions, use paramiko. It is supported as an independent installation, has good packaging, and should install natively on Windows.

EDIT

Sample paramiko script to cd to a directory, execute an ls and exit... capturing all results...

import os
import sys
sys.stderr = open(os.devnull, 'w')   # Silence silly warnings from paramiko
import paramiko as pm
sys.stderr = sys.__stderr__

class AllowAllKeys(pm.MissingHostKeyPolicy):
    def missing_host_key(self, client, hostname, key):
        return

HOST = '127.0.0.1'
USER = ''
PASSWORD = ''

client = pm.SSHClient()
client.load_system_host_keys()
client.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
client.set_missing_host_key_policy(AllowAllKeys())
client.connect(HOST, username=USER, password=PASSWORD)

channel = client.invoke_shell()
stdin = channel.makefile('wb')
stdout = channel.makefile('rb')

stdin.write('''
cd tmp
ls
exit
''')
print stdout.read()

stdout.close()
stdin.close()
client.close()

Upvotes: 1

regularfry

Reputation: 3268

Your choices are:

  1. Use a line-oriented protocol (and use readline() rather than read()), and ensure that every possible line sent is a valid message;
  2. Use read(1) and a parser to tell you when you've read a full message; or
  3. Pickle message objects into the stream from the subprocess, then unpickle them in the parent. This handles the message length problem for you.
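For example, option 1 with the subprocess module and readline(); the child here is a stand-in that upper-cases each line (an assumption for illustration — any program that answers one newline-terminated line per command works the same way):

```python
import subprocess
import sys

# Stand-in child process: echoes each input line back upper-cased.
child = subprocess.Popen(
    [sys.executable, '-u', '-c',
     'import sys\n'
     'for line in iter(sys.stdin.readline, ""):\n'
     '    sys.stdout.write(line.upper())\n'
     '    sys.stdout.flush()'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True)

def send(cmd):
    # One command out, one newline-terminated reply back:
    # readline() returns as soon as the child prints its line,
    # so the parent never blocks waiting for EOF.
    child.stdin.write(cmd + '\n')
    child.stdin.flush()
    return child.stdout.readline().rstrip('\n')
```

Repeated send() calls reuse the same child process, which is exactly what the question asks for.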

Upvotes: 2
