Reputation: 1084
I have found this solution for a process locking mechanism in Python:
import socket
import sys
import time

def get_lock(process_name):
    # Keep a reference to the socket for the lifetime of the process,
    # so the abstract-namespace name stays bound.
    global lock_socket
    lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        # A name starting with a null byte lives in the Linux abstract namespace:
        # no file is created and the lock disappears when the process exits.
        lock_socket.bind('\0' + process_name)
        print('I got the lock')
    except socket.error:
        print('lock exists')
        sys.exit()

get_lock('running_test')
while True:
    time.sleep(3)
This was found here: Check to see if python script is running
Can the same be achieved in bash? If so, could someone please post a working example?
Many thanks
Upvotes: 1
Views: 1443
Reputation: 1
I've used the following code fragment to avoid multiple concurrent invocations of a bash script:
#!/usr/bin/env bash
ScriptName=${0##*/}
FlockFile="/tmp/${ScriptName}.lck"

# Make sure the lock file exists and is writable.
touch "${FlockFile}" 2>/dev/null
[[ $? -ne 0 ]] && echo "Cannot access lock file ${FlockFile}" && exit 2

# Open the lock file on file descriptor 5 and try to take an exclusive,
# non-blocking lock; if another instance already holds it, give up.
exec 5>"${FlockFile}"
flock -nx 5
[[ $? -ne 0 ]] && echo "Another instance of ${ScriptName} is running" && exit 1

# Allow instances run by other users to reuse the same lock file.
chmod 666 "${FlockFile}" 2>/dev/null

# Main script here
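As a rough sanity check (assuming the script above is saved as my_job.sh, a hypothetical name, and that flock(1) from util-linux is available), a second copy started while the first one is still running should refuse to start:
# First shell: start an instance that holds the lock
# (put e.g. a long-running command or 'sleep 60' in the main section).
./my_job.sh &

# Second shell: the lock on fd 5 is already held, so this exits right away.
./my_job.sh
# Another instance of my_job.sh is running
echo $?    # 1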
Upvotes: 0
Reputation: 891
I wrote a bash script that edits a file, and multiple instances of the script may run simultaneously, which caused race conditions that could corrupt the file. I found a mutually exclusive lock to solve my problem:
# Add a mutually exclusive lock in case there's a race condition.
{
    flock 201
    echo 'replace this echo with actions for your case' >> /tmp/file_to_edit
} 201> /tmp/file_used_as_lock
Please note that flock has a -n option, which makes it fail immediately rather than wait. With this option you can print an error message instead of blocking.
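For example, a minimal sketch of the same block using -n (reusing the placeholder file names from the snippet above):
{
    # Give up immediately if another instance already holds the lock.
    flock -n 201 || { echo "Another instance holds the lock, exiting." >&2; exit 1; }
    echo 'replace this echo with actions for your case' >> /tmp/file_to_edit
} 201> /tmp/file_used_as_lock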
For more details, please refer to http://mywiki.wooledge.org/BashFAQ/045.
Upvotes: 3