genx1mx6

Reputation: 453

How do I read a file exclusively in perl?

I have a Perl module that my library of collecting scripts uses. These scripts scan my network, perform various tasks on my network devices, etc.

There are about 15 users, and I only want one person at a time to run a collecting script. If a second user tries to run the script, I want them to wait until the first person is finished.

The code below is just a test bed, so I can get it working correctly before I put it into production. I have a module with a nap function, and I only want one person to nap at a time.

sub nap {
        my $program = shift;
        my @arr;

        #open file to check the queue
        open(IN, "<", $path); @arr=<IN>; close IN;

        #if someone is in the queue, print it out!
        $|++;
        if (@arr > 0) { print @arr; }

        #keep checking the queue, once the queue is empty it's my turn!
        while (@arr != 0) {
                open(IN, "<", $path); @arr=<IN>; close IN;
                sleep 1;
        }

        #it's my turn, put my name in the queue
        open(IN,">",$path);
        print IN "$ENV{USER},$program";
        close IN;

        #take a nap!
        print "\n Sleep starting \n";
        sleep 10;

        #I'm finished with my nap, clear the queue so others can go
        open(IN,">",$path);
        close IN;
        print "\nsleep over\n";
}

My issue is that it works if one user is waiting, but if two users are waiting, they both still take a nap at the same time (after the first user finishes).

Can I lock or block this file? I've looked at flock, but it appears that no matter how you lock the file, users can still read it.

Is this even a correct approach, or is there something better suited to this purpose?

Upvotes: 4

Views: 417

Answers (2)

dpw

Reputation: 1586

You can flock the DATA handle of a file to lock the file itself, so you can (ab)use that to control exclusive access to a script.

I put this in a library file nap.pl:

#!/usr/bin/env perl
use strict;
use Fcntl qw(LOCK_EX LOCK_NB);

sub nap {
    ## make sure this script only runs one copy of itself
    until ( flock DATA, LOCK_EX | LOCK_NB) {
        print "someone else has a lock\n";
        sleep 5;
    }
}

__DATA__
This exists to allow the locking code at the beginning of the file to work.
DO NOT REMOVE THESE LINES!

Then I opened 3 terminals and ran this in each one:

#!/usr/bin/env perl
use strict;
do 'nap.pl';

&nap;
print `ls /tmp/`;
sleep 5;

The first terminal printed the contents of my /tmp directory immediately. The second terminal printed "someone else has a lock" and then after 5 seconds, it printed the contents of /tmp. The third terminal printed "someone else has a lock" twice, once immediately and then once after 5 seconds, and then printed the contents of /tmp/.

Be careful about where you put this in your library, though; you'll want to make sure it isn't locking unrelated subroutines.

I would personally put the lock code in each collecting script, not in the library. The collecting script is what you're actually trying to run only one instance of. It seems your title isn't accurate: you're not trying to exclusively read a file, you're trying to exclusively run a script.
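For example, the top of each collecting script could look roughly like this (the messages and the stand-in work at the end are just placeholders):

#!/usr/bin/env perl
# Hypothetical collecting script: flock our own DATA handle so that only
# one copy of this particular script can run at a time.
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

until ( flock DATA, LOCK_EX | LOCK_NB ) {
    print "someone else is running $0, waiting...\n";
    sleep 5;
}

# ... the actual collecting work goes here ...
print "collecting...\n";
sleep 10;    # stand-in for real work; the lock is released when the script exits

__DATA__
This exists to allow the locking code at the beginning of the file to work.
DO NOT REMOVE THESE LINES!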

Upvotes: 1

user3967089

Reputation:

Your solution is not correct. For one thing, you're using plain open, which buffers reads and writes; that causes complications when you want several processes to communicate via one file.

As you seem to already suspect, and as others have commented, there is no (reasonable) way on a Unix-like operating system to forcibly ensure that only one process can read from a file. The correct way (in some sense) to handle this is to use a lock file, and only have the process currently holding the lock read from the data/communication file. Check perldoc -f flock for details on that.
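In outline, that pattern could look something like this (the file names here are just placeholders):

#!/usr/bin/env perl
# Sketch of the lock-file approach: take an exclusive flock on a shared
# lock file before touching the real data file, and release it when done.
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_UN);

my $lockfile = '/tmp/collect.lock';    # placeholder shared path
my $datafile = '/tmp/collect.dat';     # the file the scripts actually work on

open my $lock, '>>', $lockfile or die "can't open $lockfile: $!";
flock $lock, LOCK_EX or die "can't lock $lockfile: $!";    # blocks until it's our turn

# --- only one process at a time gets past this point ---
open my $data, '>>', $datafile or die "can't open $datafile: $!";
print {$data} "$ENV{USER} started at " . localtime() . "\n";
close $data;
sleep 10;    # stand-in for the real work
# --- end of the serialized section ---

flock $lock, LOCK_UN;    # or simply let the lock go away when the process exits
close $lock;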

File locks on Unix do have some drawbacks, unfortunately. They can be unreliable, particularly if the lock file resides on a network filesystem. With NFS, for example, functional locks depend on all machines mounting the filesystem having a lock daemon running.

One somewhat hacky but traditional way to work around this is to abuse the semantics of mkdir. If a bunch of processes all try to create a directory with the same name, it is guaranteed that only one of them will succeed (well, or none, but let's skip that for now). You can use that to synchronize the processes. Before a process starts doing the thing that only one at a time should be doing, it tries to create a directory with a pre-determined name. If it succeeds, fine, it can go on. If it fails, someone else is already working and it has to wait. When the active process is done working, it removes the directory so another process can then succeed in creating it.
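A bare-bones version of the mkdir trick could look like this (the directory name is made up):

#!/usr/bin/env perl
# Sketch of the mkdir-as-lock trick: mkdir is atomic, so only one process
# can succeed in creating the directory; the others wait and retry.
use strict;
use warnings;
use Errno qw(EEXIST);

my $lockdir = '/tmp/collect.lock.d';    # placeholder shared path

# spin until we are the process that manages to create the lock directory
until ( mkdir $lockdir ) {
    die "mkdir $lockdir failed: $!" unless $! == EEXIST;    # give up on real errors
    sleep 1;
}

# we hold the lock: do the work that must be serialized
print "it's my turn\n";
sleep 10;    # stand-in for the real work

# release the lock so another process can create the directory
rmdir $lockdir or warn "couldn't remove $lockdir: $!";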

Anyway, the basic message is that you need two filesystem thingies: one that your processes use to determine which one of them may work, and one for the actual work.

Upvotes: 4
