Reputation: 2300
A common way to prevent a bash (or other) script from running in parallel is to create a lock directory on start, remove it on finish, and check for it on start. However, there are cases where the script is killed or terminates abnormally, leaving the lock in place permanently.
One approach to this is to set a threshold: a lock created too long ago is considered stale and should be ignored. However, my bash script calls other programs and can run for an indeterminately long time (and it can hardly be split into smaller pieces of work), so this approach is not available to me...
Is there a way to solve this problem? Or is there another way to prevent a script from running in parallel?
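For reference, here is a minimal sketch of the lock-directory pattern described above (the lock path is an arbitrary example). It shows exactly where the problem arises: the EXIT trap never fires if the script is killed with SIGKILL, so the directory is left behind.

```bash
#!/usr/bin/env bash
# Basic lock-directory pattern; /tmp/myscript.lock is an arbitrary example path.
LOCKDIR=/tmp/myscript.lock

# mkdir is atomic: exactly one instance can create the directory.
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Another instance is already running." >&2
    exit 1
fi

# Removed on normal exit, but NOT if the script is killed with SIGKILL --
# this is the "permanent lock" problem described above.
trap 'rmdir "$LOCKDIR"' EXIT

# ... long-running work goes here ...
```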
Upvotes: 1
Views: 1259
Reputation: 19621
My first thought was to write just enough C to use Linux calls such as flock(), which are automatically released when the process dies. Looking to refresh my memory on this, I found http://linux.die.net/man/1/flock, which documents "flock - manage locks from shell scripts" - if you have access to this, I would use it. If not, something like it should be reasonably easy to implement.
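For example, the common shell idiom built on flock(1) looks roughly like this (the lock file path and descriptor number are arbitrary choices for the sketch); the kernel releases the lock automatically when the process exits, however it dies:

```bash
#!/usr/bin/env bash
# Take an exclusive, non-blocking lock via flock(1).
# File descriptor 200 and the lock path are just example conventions.
exec 200>/tmp/myscript.lockfile

if ! flock -n 200; then
    echo "Another instance is already running." >&2
    exit 1
fi

# The kernel drops the lock when this process exits, even on SIGKILL,
# so no stale lock is left behind.

# ... long-running work goes here ...
```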
Upvotes: 0
Reputation: 45115
In addition to the lock directory, you could also save the process ID of the instance that holds the lock. If another instance finds itself blocked, it can check whether that process is still running, and if not, delete the lock directory and PID record, then try to acquire the lock itself.
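A sketch of that idea, assuming the same lock-directory scheme as in the question (paths are arbitrary examples). Note that kill -0 only tests whether the process exists, and PIDs can be reused, so the staleness check is a heuristic:

```bash
#!/usr/bin/env bash
# Lock directory plus a PID file recording the current lock holder.
# Paths are arbitrary examples for this sketch.
LOCKDIR=/tmp/myscript.lock
PIDFILE="$LOCKDIR/pid"

acquire_lock() {
    mkdir "$LOCKDIR" 2>/dev/null && echo "$$" > "$PIDFILE"
}

if ! acquire_lock; then
    holder=$(cat "$PIDFILE" 2>/dev/null)
    # kill -0 sends no signal; it only checks whether the process exists.
    if [ -n "$holder" ] && kill -0 "$holder" 2>/dev/null; then
        echo "Another instance (PID $holder) is running." >&2
        exit 1
    fi
    # Holder is gone: treat the lock as stale, remove it, and retry once.
    # (Still mildly racy if several blocked instances clean up at the same time.)
    rm -rf "$LOCKDIR"
    if ! acquire_lock; then
        echo "Could not acquire the lock after removing a stale one." >&2
        exit 1
    fi
fi

trap 'rm -rf "$LOCKDIR"' EXIT

# ... long-running work goes here ...
```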
Editorial note: Using the filesystem's underlying serialization properties to implement higher-level synchronization is a common approach -- great for quick-and-dirty solutions, or if you absolutely must accomplish the whole thing in a shell script. If you need something more bulletproof than that, TCP/IP might offer a less fragile, more portable approach. You could write a lightweight TCP/IP server in C (or almost any other language): it just listens on some port for application launch requests (single-threaded, one connection at a time, letting the TCP layer resolve all the race conditions), while monitoring the job in progress to see when it's time to accept a new request.
Upvotes: 2