Reputation: 87
I am using ssh to start a script on a remote host. After execution I need to come back to the original host. The issue is that script1 starts a tool in the background on the remote host, and until that process gets killed, my session remains stuck on the remote host.
ssh -Y -l "test" login 'path/to/script1'
[CTRL+C]
If I execute the command in a terminal I can come back by typing CTRL+C. But now I want to execute the command in Perl, where I cannot simply press CTRL+C:
system(qq{ssh -Y -l "testlogin" 'path/to/script1'});
Does anyone know how to kill this process on the remote host without knowing its PID?
Upvotes: 3
Views: 315
Reputation: 66883
I take it that you don't want the program to wait for that whole ssh thing to finish, but would rather have that operation non-blocking so that the program can start it and right away proceed to do other things.
There are various ways to do that, depending on what precisely is needed. I'd like to first show the canonical fork+exec way. Then I post a little program which also uses: system to put a job in the background, a thread, a pipe-open, and a module.
A basic way is to fork, and then exec in the child process the desired program.†
FORK_EXEC: {
    my @cmd = ('ssh', '-Y', '-l', 'testlogin', 'path/to/script1');

    local $SIG{CHLD} = 'IGNORE';  # Don't care about the child process here

    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {
        exec @cmd;
        die "Shouldn't return after 'exec', there were errors: $!";
    }
}
# the parent (main program) carries on
This assumes that there is no need to keep a watch or terminate the process at any point.
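If you do want to watch the child, don't set $SIG{CHLD} to 'IGNORE'; instead reap it yourself, for example by polling non-blockingly so the main program stays free. Here is a minimal sketch of that (my addition, not from the fork+exec example above; the two-second perl one-liner is a stand-in for the real command):

```perl
use warnings;
use strict;
use POSIX qw(WNOHANG);

my @cmd = ('perl', '-e', 'sleep 2');  # stand-in for the real command

my $pid = fork // die "Can't fork: $!";
if ($pid == 0) {
    exec @cmd;
    die "Shouldn't return after 'exec', there were errors: $!";
}

# Poll for the child without blocking, doing other work in between
my $done = 0;
until ($done) {
    my $reaped = waitpid($pid, WNOHANG);   # 0 while child still runs
    if ($reaped == $pid) { $done = 1 }
    else {
        # ... other work here ...
        select undef, undef, undef, 0.25;  # pause a quarter second
    }
}
print "child $pid finished, exit status ", $? >> 8, "\n";
```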
Next, here is a program that shows a few other ways to fire off a job in a non-blocking way.
For a demo it prints to the screen as it gets control back right after starting a job, and then sleeps a little so that the command (which prints Job done) can show its completion clearly. The program was tested to run the command on a remote host over ssh as well.
use warnings;
use strict;
use feature 'say';
# Will use array with command terms when possible, or string if needed
my @cmd = ('perl', '-E', 'sleep 5; say "Job done"');
my $cmd_str = join(' ', @cmd[0,1]) . qq('$cmd[-1]');
say "Command: $cmd_str";
# Command shown in question
#my @cmd = ('ssh', '-Y', '-l', 'testlogin', 'path/to/script1');
#my $cmd_str = join ' ', @cmd;
BACKGROUND_SYSTEM: {  #last;
    # Uses the shell. Probably simplest
    system("$cmd_str &") == 0 or die $!;
    say "\nRan in background via system. Sleep 10";
    sleep 10;
};
WITH_THREADS: {  #last;
    # Do it any way you want it in a thread
    use threads;

    my $thr = async { system(@cmd) };
    say "\nStarted a thread, id ", $thr->tid, ". Sleep 10";
    sleep 10;

    # At some point "join" the thread (once it completed, or this will wait)
    $thr->join;

    # Or, have the thread terminated at program exit if still running
    # $thr->detach;
}
FORK_EXEC: {  #last;
    # Again, have full freedom to do it any way needed
    local $SIG{CHLD} = 'IGNORE';

    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {
        exec @cmd or die "Can't be here, there were errors: $!";
    }
    say "\nIn parent, forked $pid and exec-ed the program in it. Sleep 10";
    sleep 10;
};
PIPE_OPEN: {  #last;
    # Strictly no shell. Normally used to read from the child as output comes
    my $pid = open(my $pipe, '-|', @cmd)
        // die "Can't open pipe: $!";
    say "\nPipe-opened a process $pid.";

    print while <$pipe>;  # demo, not necessary
    close $pipe or die "Error with pipe-open: $!";
};
# Uncomment after installing the module, if desired
#WITH_PROC_BACKGROUND: {  #last;
#    use Proc::Background;
#
#    my $proc = Proc::Background->new( @cmd );
#
#    say "\nStarted a process with Proc::Background, ", $proc->pid, ". Sleep 10";
#    sleep 10;
#}
All these start a process and are then free to continue doing other things, and they print to screen (and then wait for the process/thread to finish, as a demonstration). The approaches have different advantages and typical uses.
Brief explanation
system passes its command to a shell if the command is in one string and has shell metacharacters. Ours here does (&), and that tells the shell to put the command in the "background": fork another process and execute the command in it, thus returning control right away. It works much like our fork+exec example, which is *nix's venerable way to do multiprocessing
A separate thread runs independently, so we can proceed with other work in the main program. That thread needs to be managed in the end, either by join-ing it or perhaps by detach-ing it right away, in which case we don't care how it does (and it will be terminated at the end of the program)
The pipe-open also forks a separate process, which thus frees the main program to continue with other work. The idea is that the filehandle (what I named $pipe) is used to receive the process's output, as its STDOUT is hooked to that resource (the "filehandle"). However, we don't have to take the output, and this can also be used merely to spawn a separate process. It avoids the shell altogether, and that's usually a good thing. Also see perlipc
If you need to actually control that process, either the ssh (on your host) or the job on the remote host, then more (or something different) is needed. Please clarify if that's the case.
In the end the question says
... how to kill this process on the remote host without knowing the PID?
This seems to disagree with the comments, and above I addressed what seems to be the quest. But if it is indeed needed to terminate the remote process, then print its PID from the command on the remote host -- all methods shown above know the PID of the process they start -- and capture it in the script. Then later the script can ssh in and kill the process, once that is needed.
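A minimal local sketch of that idea, with a perl one-liner standing in for the remote script so it can run anywhere; with the real thing, @cmd would be the ssh command from the question and the final kill would itself go over ssh (the commented lines mark those assumed substitutions):

```perl
use warnings;
use strict;
use feature 'say';

# The (remote) script prints the PID of the tool it backgrounds; we capture
# that first line of output and can kill the tool later, on demand.
# For the real case:  my @cmd = ('ssh', '-l', 'testlogin', 'path/to/script1');
my @cmd = ('perl', '-E', 'say $$; sleep 300');   # local stand-in

my $pid = open(my $pipe, '-|', @cmd) // die "Can't open pipe: $!";

my $tool_pid = <$pipe>;   # the PID printed by the script
chomp $tool_pid;

# ... the main program is free to do other things here ...

# Later, terminate it. Over ssh that would be, roughly:
#   system('ssh', '-l', 'testlogin', "kill $tool_pid");
kill 'TERM', $tool_pid or die "Can't signal $tool_pid: $!";
say "terminated tool pid $tool_pid";
```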
† Another way to not have to worry about reaping the child process is to "disown" it with a "double fork":
{
    my $pid = fork // die "Can't fork: $!";

    if ($pid == 0) {
        my $grandkid_pid = fork // die "Cannot fork: $!";

        if ($grandkid_pid == 0) {
            # do here what you need
            exec @cmd;
            die "Should never get here, there were errors: $!";
        }

        exit;  # the first child (parent of the second) exits right away
    }

    waitpid $pid, 0;  # and is reaped
}
# the main program continues
The first child exits right away after fork-ing and is reaped, so its child process is taken over by init and there are no issues with zombies. This isn't the best way to go if that child process needs to be tracked or queried from outside.
Upvotes: 3
Reputation: 119
First of all you might want to consider using & at the end of the command to put the process in the background; it will remove the need for CTRL+C.
About killing the process:
pkill toolName
will kill it by name instead of by PID.
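From the Perl script, that could look like the following local demo; "sleep 300" stands in for the tool's name, and the ssh wrapping for the remote case is only shown in a comment (the host testlogin is the one from the question):

```perl
use warnings;
use strict;

# Kill-by-name demo: start a stand-in "tool" in the background, then kill it
# by matching its command line with pkill -f -- no PID needed.
# On the remote host it would be wrapped in ssh, e.g.:
#   system('ssh', '-l', 'testlogin', 'pkill', '-f', 'toolName');
system(q{perl -e 'sleep 300' &});   # stand-in for the backgrounded tool
sleep 1;                            # give it a moment to start

system('pkill', '-f', 'sleep 300') == 0
    or die "pkill matched nothing: $?";
print "killed by name\n";
```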
Upvotes: 0