Reputation: 879
I have a perl script which forks child processes.
sub my_exec {
    my ($args, $stdout, $stderr) = @_;
    my $processes = fork();
    die("Can't fork") unless defined($processes);
    if ($processes == 0) {
        if (defined $stdout) {
            close(STDOUT);
            open STDOUT, $stdout;
        }
        if (defined $stderr) {
            close(STDERR);
            open STDERR, $stderr;
        }
        exec @$args;
    }
    else {
        ...
    }
}
My main issue is that I want to add a timestamp to every line of output to STDERR, and I was wondering if it could be done here. As you can see, STDERR isn't always redirected. I am assuming I could do it via some sort of pipe? I would also like the parent script (a daemon with both STDOUT and STDERR redirected to files) to use timestamps as well.
Thanks
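Something along these lines is what I am imagining (sketch only; the helper name is made up, and the command is just a throwaway test):

```perl
#!/usr/bin/perl
# Sketch: fork with a pipe so the parent can prefix each of the child's
# STDERR lines with a timestamp. run_with_timestamped_stderr is made up.
use strict;
use warnings;

sub run_with_timestamped_stderr {
    my @cmd = @_;
    # "-|" forks and gives the parent a read handle on the child's STDOUT.
    my $pid = open(my $pipe, "-|") // die "fork: $!";
    if ($pid) {                          # parent: prefix and forward
        while (my $line = <$pipe>) {
            print STDERR scalar(localtime), ": ", $line;
        }
        close $pipe;
    }
    else {                               # child: route STDERR into the pipe
        open STDERR, ">&STDOUT" or die "dup: $!";
        exec @cmd or die "exec @cmd: $!";
    }
}

# Throwaway test command: a Perl one-liner that warns to STDERR.
run_with_timestamped_stderr($^X, "-e", 'warn "something went wrong\n"');
```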
Upvotes: 1
Views: 230
Reputation: 139631
Say you write my_exec as below.
sub my_exec {
    my ($args, $stdout, $stderr) = @_;  # caller untaints
    open my $oldout, ">&STDOUT" or die "$0: save STDOUT: $!";
    my $pid = open(my $pipe, "-|") // die "$0: fork: $!";
    if ($pid) {
        if (defined $stderr) {
            open STDERR, ">", $stderr or die "$0: open: $!";
        }
        while (<$pipe>) {
            print STDERR scalar(localtime), ": ", $_;
        }
        close $pipe or die $! ? "$0: error closing $args->[0] pipe: $!"
                              : "$0: exit status " . ($? >> 8) . " from $args->[0]";
    }
    else {
        open STDERR, ">&STDOUT" or die "$0: pipe STDERR: $!";
        if (defined $stdout) {
            open STDOUT, ">", $stdout or die "$0: open: $!";
        }
        else {
            open STDOUT, ">&", $oldout or die "$0: restore STDOUT: $!";
        }
        exec @$args or die "$0: exec @$args: $!";
    }
}
The main point is described in the documentation on open:

    If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
The point of the implicit fork is setting up a pipe between the parent and child processes. The filehandle behaves normally for the parent, but I/O to that filehandle is piped from the STDOUT of the child process. In the child process, the filehandle isn't opened: I/O happens from the new STDOUT.
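A tiny self-contained demonstration of the open-returns-twice behavior, with no external command involved:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# open with "-|" forks: it returns the child's pid in the parent and 0 in
# the child, and wires the child's STDOUT to $pipe in the parent.
my $pid = open(my $pipe, "-|") // die "fork: $!";
if ($pid) {
    my $line = <$pipe>;
    print "parent read: $line";
    close $pipe;
}
else {
    print "hello from the child\n";  # goes into the pipe, not the terminal
    exit 0;
}
```

Running this prints "parent read: hello from the child" exactly once, from the parent.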
That is almost perfect, except you want to modify the standard error, not the standard output. This means we need to save the parent's STDOUT so the child can restore it. This is what is happening with $oldout.

Duping the child's (redirected) STDOUT onto its STDERR arranges for the underlying daemon's standard error to run through the pipe, which the parent reads, modifies, and outputs.
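The ">&" dup forms used above can be seen in isolation. A short sketch (the log file name is just illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Save the current STDOUT, redirect it to a file, then restore it from
# the saved copy, mirroring what my_exec does with $oldout.
open my $saved, ">&STDOUT"       or die "save STDOUT: $!";
open STDOUT, ">", "redirect.log" or die "redirect: $!";   # illustrative file
print "captured in redirect.log\n";
open STDOUT, ">&", $saved        or die "restore STDOUT: $!";
print "back on the original STDOUT\n";
```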
One slightly tricky point is where the redirections are processed. If the caller wants to redirect STDOUT, that needs to happen in the child. But redirecting STDERR needs to happen in the parent, because this gives the parent the opportunity to modify the stream.
The code for a complete example is of the following form. You mentioned a daemon, so I enabled Perl’s dataflow analysis known as taint mode.
#! /usr/bin/perl -T
use strict;
use warnings;
use v5.10.0; # for defined-or //
$ENV{PATH} = "/bin:/usr/bin";
sub my_exec {
# paste code above
}
#my_exec ["./mydaemon"];
#my_exec ["./mydaemon"], "my-stdout";
my_exec ["./mydaemon"], "my-stdout", "my-stderr";
With a simple mydaemon of
#! /usr/bin/env perl
print "Hello world!\n";
warn "This is a warning.\n";
die "All done.\n";
the output goes to separate files.
1. my-stdout:

Hello world!

2. my-stderr:

Tue Nov 5 17:58:20 2013: This is a warning.
Tue Nov 5 17:58:20 2013: All done.
./wrapper: exit status 255 from ./mydaemon at ./wrapper line 23.
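If you would rather have a fixed-width timestamp than localtime's default format, POSIX::strftime (a core module) is one option; the format string below is just an example:

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# e.g. "2013-11-05 17:58:20" rather than "Tue Nov  5 17:58:20 2013"
my $ts = strftime("%Y-%m-%d %H:%M:%S", localtime);
print STDERR $ts, ": This is a warning.\n";
```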
Upvotes: 2
Reputation: 386396
fork is so low level. IPC::Open3 is the minimum you should use.
use IPC::Open3 qw( open3 );

# $prog, $args, $stdout and $stderr as in my_exec's interface.
open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
open(local *CHILD_STDOUT, '>', $stdout) or die $!;

my $pid = open3(
    '<&CHILD_STDIN',
    '>&CHILD_STDOUT',
    \local *CHILD_STDERR,
    $prog, @$args,
);

open(my $stderr_fh, '>', $stderr) or die $!;
while (<CHILD_STDERR>) {
    print $stderr_fh scalar(localtime) . ": " . $_;
}

waitpid($pid, 0);
Upvotes: 1