Reputation: 951
I need to create some sort of fail-safe in one of my scripts to prevent it from being re-executed immediately after a failure. Typically, when a script fails, our support team reruns it using a 3rd-party tool. That is usually fine, but it must not happen for this particular script.
I was going to echo a time-stamp into the log, and then add a condition that checks whether the current time-stamp is at least 2 hrs later than the one in the log; if it is not, the script will exit. I'm sure this idea will work. However, it got me curious: is there a way to pull the last run time of the script from the system itself? Or is there an alternate method of preventing the script from being immediately rerun?
It's a SunOS Unix system, using the Ksh Shell.
Upvotes: 1
Views: 3207
Reputation: 63902
Just do it as you proposed: save the date to some file (for example, date > somefile) and check it at the start of the script.
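In ksh, that timestamp guard could look something like the following minimal sketch. The state-file path /var/tmp/myscript.last_run and the 2-hour window are assumptions; also note that older SunOS versions of date may not support %s, in which case something like perl -e 'print time' is a common substitute.

```shell
#!/bin/ksh
# Refuse to run if the last recorded start was under 2 hours ago.
STAMP=/var/tmp/myscript.last_run   # hypothetical state-file path
MIN_GAP=$((2 * 60 * 60))           # 2 hours, in seconds

now=$(date +%s)                    # SunOS date may lack %s; see note above
if [ -f "$STAMP" ] && last=$(cat "$STAMP" 2>/dev/null); then
    if [ $((now - last)) -lt "$MIN_GAP" ]; then
        echo "Ran less than 2 hours ago; exiting." >&2
        exit 1
    fi
fi
echo "$now" > "$STAMP"
# ... the real work goes here ...
```

A nice property of this approach is that the "last run" record survives reboots, unlike anything kept in memory.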
Another common method is to create a lock-file or pid-file, such as /var/run/script.pid. Its content is usually the PID (and the hostname, if needed) of the process that created it. The file's modification time tells you when it was created, and from its content you can check the recorded PID. If that PID no longer exists (i.e. the previous process died) and the file's modification time is more than X minutes old, you can start the script again.
This method is good mainly because it lets you use plain cron plus some script_starter.sh that periodically checks whether the script is running and restarts it when needed.
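Put together, a hypothetical script_starter.sh for cron might look like the sketch below; every name and path in it is illustrative.

```shell
#!/bin/ksh
# Run from cron, e.g.:  */10 * * * * /usr/local/bin/script_starter.sh
PIDFILE=/var/tmp/starter_demo.pid
SCRIPT=/usr/local/bin/myscript.ksh   # hypothetical script to supervise

# If the recorded PID is still alive, there is nothing to do.
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    exit 0
fi

# Not running (or the pid-file is stale): restart in the background
# and record the new PID.
"$SCRIPT" >/dev/null 2>&1 &
echo $! > "$PIDFILE"
```

For the original question, the starter would simply be the place to add the "do not restart within 2 hours" timestamp check as well.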
If you want to rely on the system itself (and have root access), you can use accton + lastcomm.
I don't know SunOS, but it probably has these programs too. accton starts system-wide accounting of all executed programs (it must be run as root), and lastcomm command_name | tail -n 1 shows when command_name was last executed. Check man lastcomm for the command-line switches.
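The accounting route, sketched below; the accounting-file location is the classic /var/adm/pacct and may differ on your system, and since everything here needs root and accounting enabled, treat it as illustrative rather than directly runnable:

```shell
# one-time setup, as root: start system-wide process accounting
touch /var/adm/pacct
accton /var/adm/pacct

# later, ask when myscript.ksh was last recorded
lastcomm myscript.ksh | tail -n 1
```

Be aware that some lastcomm implementations print the most recent entry first, in which case head -n 1 rather than tail -n 1 gives the latest run.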
Upvotes: 1