James Wright

Reputation: 1435

Launching subprocesses on a resource-limited machine

Edit:

The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output to the terminal. I assumed this was an issue or limitation with the subprocess module rather than a problem somewhere else. That turned out not to be the case: when I ran the script on a machine without resource limits, it worked fine.

This then turned the question into: how can I run an interactive ssh session despite whatever resource limitations were preventing it from running?

Shoutout to Charles Duffy, who was a huge help in diagnosing all of this.

Below is the original question:

Background:

So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.

It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments to it. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.

The Actual Question:

Is it possible to have Python do stuff in a console and then put the user in that console?

Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs commands and then shuts the intermediate console down.

Specific Example because I'm not sure if what I'm describing makes sense:

So here's a basic rundown of the functionality I'm after:

  1. Run a python script
  2. Have that script run some console command and parse the output
  3. Run the following command:

    ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"

This command will ssh to $correctnode, change directory, and then leave a bash shell open on that node.

I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.

Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
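For context, what was originally attempted (per the edit above) was simply handing the whole ssh command to subprocess.call(), which does attach the child to the current terminal; that works fine on an unconstrained machine but got Killed here. A rough sketch, with placeholder values:

import subprocess

# Placeholder values standing in for whatever gets parsed in step 2
correctnode = 'node01'
jobid = '12345'

# subprocess.call() forks a child attached to the current terminal, so the
# interactive ssh session works, unless spawning the child trips the
# machine's resource limits.
subprocess.call(
    ['ssh', '-t', correctnode, 'cd /local_scratch/pbs.%s; bash -l' % jobid]
)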

Upvotes: 2

Views: 134

Answers (2)

Charles Duffy

Reputation: 295650

Context For Readers

The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of Python goes over a relevant limit (on the number of processes, perhaps?).
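As an aside (my illustration, not part of the original answer): if the limit is an rlimit such as the per-user process cap speculated about above, it can be inspected from Python itself. Which limit the jumphost actually enforces is a guess here.

import resource

# Print soft/hard values for a few limits that could kill a forked child.
# RLIMIT_NPROC (number of processes) is only a guess at the relevant one.
for name in ('RLIMIT_NPROC', 'RLIMIT_AS', 'RLIMIT_DATA'):
    soft, hard = resource.getrlimit(getattr(resource, name))
    print('%s: soft=%s hard=%s' % (name, soft, hard))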


Approach A: Replacing The Python Interpreter With Your Interactive Process

Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.

import argparse
import os

try:
    from shlex import quote
except ImportError:
    from pipes import quote

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

# Build the remote command, quoting the job id for the remote shell
remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid))
local_cmd = [
  '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str
]
# Replace the current Python process with ssh instead of forking a child
os.execv("/usr/bin/env", local_cmd)

Approach B: Generating Shell Commands From Python

If we use Python to generate a shell command, the shell can invoke that command only after the Python process has exited, so we stay under our externally enforced process limit.

First, a slightly more robust approach to generating eval-able output:

import argparse

try:
    from shlex import quote
except ImportError:
    from pipes import quote

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))

To run this from a shell, if the above script is named genSshCmd:

#!/bin/sh
eval "$(genSshCmd "$@")"

Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
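To make the two layers concrete, here is a quick illustration (mine, not part of the original answer) of what the generator prints for a hostile jobid; the inner quote() protects the remote shell, and the outer quote() protects the local shell running eval:

try:
    from shlex import quote
except ImportError:
    from pipes import quote

# A jobid that would be dangerous if either quoting layer were missing
jobid = '$(rm -rf ~)'

remoteCmdStr = ' '.join(quote(x) for x in ['cd', '/local_scratch/pbs.%s' % jobid]) + ' && bash -l'
cmd = ['ssh', '-t', 'somenode', remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
# Prints (roughly): ssh -t somenode 'cd '"'"'/local_scratch/pbs.$(rm -rf ~)'"'"' && bash -l'
# Neither the local shell nor the remote shell ever expands the $(...), so
# nothing is executed.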

Upvotes: 1

Marat

Reputation: 15738

This is in no way a real answer, just an illustration of my comment.

Let's say you have a Python script, test.py:

import argparse


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()

    print("echo Hello world! My arguments are: " + " ".join(args.myarg))

So, you create a bash wrapper around it, test.sh:

set -e
$(python test.py $*)

and this is what you get:

$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two

What is going on here:

  • The Python script does not execute commands. Instead, it prints the commands the bash script will run (echo in this example). In your case, the last command will be ssh blabla (a sketch of that variant follows below).
  • bash executes the output of the Python script (the $(...) part), passing on all of its arguments (the $* part).
  • You can use argparse inside the Python script; if anything is wrong with the arguments, the error message goes to stderr and is not executed by bash, and the bash script stops because of the set -e flag.
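A sketch of that ssh variant (my adaptation, not part of this answer); because the generated command now needs quoting, it is safer to consume it with eval "$(...)", as in the other answer's wrapper, rather than a bare $(...):

import argparse

try:
    from shlex import quote
except ImportError:
    from pipes import quote

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('node')
    parser.add_argument('jobid')
    args = parser.parse_args()

    # Only *print* the ssh command; the shell wrapper is what actually runs it,
    # after this Python process has already exited.
    remote = 'cd /local_scratch/pbs.%s && bash -l' % args.jobid
    print('ssh -t %s %s' % (quote(args.node), quote(remote)))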

Upvotes: 1
