Bartender1382

Reputation: 403

Can I have a Perl script, initiated from a browser, fork itself, and not wait for the child to end?

Also posted on PerlMonks.

I have this very simple Perl script on my linux server.

What I would like to be able to do is:

1. Call the script from a browser on a separate machine.
2. Have the script fork.
3. Have the parent send an HTTP response (freeing up the browser).
4. Immediately end the parent.
5. Let the child do its job: heavy, complex database work that could take a minute or two.
6. Have the child end itself with no output whatsoever.

When I call this script from a browser, the browser does not receive the response until the child is complete.

Yes, it works when called from the command line.

Is what I want to do possible? P.S. I even tried it with ProcSimple, but I get the same hang-up.

#!/usr/bin/perl
use strict;
use warnings;
use lib '/var/www/cgi-bin';
use CGI;

local $SIG{CHLD} = 'IGNORE';   # auto-reap children; we never wait()

my $q = CGI->new;

my $pid = fork();
if (!defined $pid) {
    die "Cannot fork a child: $!";
} elsif ($pid == 0) {
    print $q->header();
    print "i am the child\n";
    sleep(10);
    print "child is done\n";
    exit 0;
} else {
    print $q->header();
    print "I am the parent\n";
    print "parent is done\n";
    exit 0;
}

Upvotes: 3

Views: 223

Answers (3)

Bruce Van Allen

Reputation: 177

Similarly to @mob's post, here's how my web apps do it:

    # fork long task
    if (my $pid = fork) {
        # parent: return with http response to web client
    } else {
        # child: suppress further IO to ensure termination
        # of the http connection to the client
        open STDIN,  '<', '/dev/null';
        open STDOUT, '>', '/dev/null';
        open STDERR, '>', '/dev/null';

        # child carries on from here
    }

Sometimes the (child) long process prints to a semaphore or status file that the web client may watch to see when the long process is complete.
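For example, the status file might be maintained like this; the file path and the status strings are placeholders for illustration, not taken from the original apps:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical status file for the web client to poll; the path is an
# assumption, not from the original application.
my $status_file = '/tmp/longjob.status';

sub write_status {
    my ($msg) = @_;
    open my $fh, '>', $status_file or die "Can't write $status_file: $!";
    print $fh "$msg\n";
    close $fh;
}

write_status('running');
# ... the long database work would go here ...
write_status('done');
```

The web client (or a small AJAX poller) can then fetch the file's contents to decide whether the long job has finished.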

I don't remember which Perl adept suggested this years ago, but it has served reliably in many situations, and it reads clearly from the "revisit it years later - what was I doing?" perspective.

Note that /dev/null may not exist outside of UNIX/Linux, in which case @mob's use of close might be more portable.

Upvotes: 1

zdim

Reputation: 66964

One way for a parent process to start another process that will go on its own is to "double fork." The child itself forks and it then exits right away, so its child is taken over by init and can't be a zombie.

This may help here, since there does seem to be blocking because file descriptors are shared between parent and child, as brought up in the comments. If the child exited quickly that might work, but since you need a process for a long-running job, fork twice.

use warnings;
use strict;
use feature 'say';

my $pid = fork // die "Can't fork: $!";

if ($pid == 0) { 
    say "\tChild. Fork";

    my $ch_pid = fork // die "Can't fork from child: $!";

    if ($ch_pid == 0) {
        # grandchild, run the long job
        sleep 10; 
        say "\t\tgrandkid done";
        exit;
    }   

    say "\tChild, which just forked, exiting right away.";
    exit;
}

say "Parent, and done";

I am not sure how to simulate your setup to test whether this helps, but since you say that the child produces "no output whatsoever" it may be enough. It should be worth trying since it's simpler than daemonizing the process (which I'd expect to do the trick).
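Daemonizing, for reference, usually amounts to a fork, a setsid to leave the controlling terminal, and redirecting the standard handles. A minimal sketch; the sub name, the /dev/null redirects, and the omission of chdir and pidfile handling are my simplifications:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

# Minimal daemonize sketch (illustrative; omits chdir '/', umask,
# and pidfile handling that a production daemon would want).
sub daemonize {
    defined(my $pid = fork()) or die "Can't fork: $!";
    exit 0 if $pid;    # parent exits; child is adopted by init
    setsid() or die "Can't start a new session: $!";
    open STDIN,  '<', '/dev/null' or die "Can't read /dev/null: $!";
    open STDOUT, '>', '/dev/null' or die "Can't write /dev/null: $!";
    open STDERR, '>', '/dev/null' or die "Can't write /dev/null: $!";
}
```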

Upvotes: 2

mob

Reputation: 118695

In general you must detach the child process from its parent to allow the parent to exit cleanly -- otherwise the parent can't assume that it won't need to handle more input/output.

} elsif ($pid == 0) {
   close STDIN;
   close STDERR;
   close STDOUT;   # or redirect
   do_long_running_task();
   exit;
}
In your example, the child process is making print statements until it exits. Where do those prints go if the parent process has been killed and closed its I/O handles?
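Putting that together with the question's setup, a complete minimal CGI script might look like this; the header is printed by hand here instead of via CGI.pm, and the sleep and the response text are stand-ins:

```perl
#!/usr/bin/perl
use strict;
use warnings;

defined(my $pid = fork()) or die "Cannot fork: $!";

if ($pid == 0) {
    # Child: detach from the web server so the parent's response
    # can flush and the connection can close.
    close STDIN;
    close STDOUT;
    close STDERR;
    sleep 2;    # stand-in for the long database work
    exit 0;
}

# Parent: answer the browser immediately; nothing more is printed after this.
print "Content-type: text/plain\r\n\r\n";
print "Request accepted; work continues in the background.\n";
```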

Upvotes: 2
