How to check if a script is running or not from the script itself? - linux

I have the sample script below, sample.sh:
#!/bin/bash
if ps aux | grep -o "sample.sh" >/dev/null
then
    echo "Already script running"
    exit 0
fi
echo "start script"
while true
do
    echo "script running"
    sleep 5
done
In the above script I want to check whether the script is already running and, if so, not start it again.
The problem is that the check is always true (the ps output always contains the running script itself, so the condition is satisfied by the very act of running it), and it always shows me the "Already script running" message.
Any idea how to solve it?

You need a proper lock. I'd do it using flock, like this:
exec 201> "/tmp/lock.$(basename "$0").file"
if ! flock -n 201; then
    echo "another instance of $0 is running"
    exit 1
fi
# cmds
exec 201>&-
rm -f "/tmp/lock.$(basename "$0").file"
This basically creates a lock for the script using a temporary file. The temporary file has no particular significance other than being used to tell whether your script has acquired the lock.
When there's an instance of this program running, the next run of the same program can't run as the lock will prevent it.

For me, it is safer to use a lock file: create it when the process starts and delete it after completion.
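A minimal sketch of that approach (the file path and the trap handling are my choices):
#!/bin/bash
lockfile="/tmp/$(basename "$0").lock"

if [ -e "$lockfile" ]; then
    echo "already running"
    exit 1
fi

touch "$lockfile"
trap 'rm -f "$lockfile"' EXIT   # delete the lock even on interrupted exits

# ... the actual work goes here ...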

Let the script record its own PID in a file. Before doing so, it first checks whether that file currently contains an active PID, in which case it exits.
pid=$(< "${PID_FILE:?}") || exit
kill -0 "$pid" && exit
The next exercise is to prevent race conditions when writing the file.
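One way to close that race is a sketch of mine using bash's noclobber option, so the redirection itself fails atomically when the file already exists (the path is an assumption):
#!/bin/bash
PID_FILE="/var/run/myscript.pid"   # assumed path

set -o noclobber
if ! echo $$ > "$PID_FILE" 2>/dev/null; then
    # the file already exists; is its owner still alive?
    pid=$(< "$PID_FILE")
    kill -0 "$pid" 2>/dev/null && exit
    echo $$ >| "$PID_FILE"   # stale file: force the overwrite
fi
set +o noclobber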

Try this; it gives the number of sample.sh processes run by the user (note that the count includes the instance doing the checking):
ps aux | awk -v app='sample.sh' '$0 ~ app { print $1 }' | grep "$USERNAME" | wc -l

Write a tmp file to the /tmp directory.
Have your script check to see if the file exists; if it does, don't run.
#!/bin/bash
# our tmpfile
tmpfile="/tmp/mytmpfile"
# check to see if it exists.
# if it does then exit script
if [[ -f ${tmpfile} ]]; then
    echo "script already running."
    exit
fi
# it doesn't exist at this point so let's make one
touch "${tmpfile}"
# do whatever now.
# end of script
rm "${tmpfile}"
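One caveat with this approach: if the script is killed before the final rm, the stale tmp file blocks every future run. Adding a trap right after the touch covers most exits:
trap 'rm -f "${tmpfile}"' EXIT   # clean up even if the script is interrupted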

Related

Shell scripts and how to avoid running the same script at the same time on a Linux machine

I have a centralized Linux server (Linux 5.x).
In some cases the get_hosts.ksh script on my Linux server can be started from several different hosts.
For example, get_hosts.ksh could run on my Linux machine three or more times at the same time.
My question:
How do I avoid running multiple instances of a process/script?
A common solution for your problem on *nix systems is to check for the existence of a lock file.
Usually the lock file contains the current process's PID.
This is an example ksh script:
#!/bin/ksh
pid="/var/run/get_hosts.pid"
trap "rm -f $pid" SIGSEGV
trap "rm -f $pid" SIGINT
if [ -e $pid ]; then
    exit # pid file exists, another instance is running, so now we politely exit
else
    echo $$ > $pid # pid file doesn't exist, create one and go on
fi
# your normal workflow here...
rm -f $pid # remove pid file just before exiting
exit
UPDATE: Answering the OP's comment, I added handling of program interruptions and segfaults with the trap command.
The normal way of doing this is to write the process ID into a file. The first thing the script does is check for the existence of the file, read the PID, check whether a process with that PID exists and, for extra paranoia points, whether that process actually runs the script. If yes, the script exits.
Here's a simple example. The process in question is a binary, and this script makes sure the binary runs only once. This is not exactly what you need, but you should be able to adapt this:
RUNNING=0
PIDFILE=$PATH_TO/var/run/example.pid
if [ -f "$PIDFILE" ]
then
    PID=$(cat "$PIDFILE")
    ps -eo pid | grep -w "$PID" >/dev/null 2>&1
    if [ $? -eq 0 ]
    then
        RUNNING=1
    fi
fi
if [ $RUNNING -ne 1 ]
then
    run_binary &   # must be started in the background so $! is set
    PID=$!
    echo $PID > "$PIDFILE"
fi
This is not very elaborate but should get you on the right track.
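For the "extra paranoia points" mentioned above, the check can also verify that the PID's command line really belongs to the expected program; a sketch (the matched name is a placeholder):
PID=$(cat "$PIDFILE")
# confirm the PID exists *and* belongs to our program, not a recycled PID
if ps -p "$PID" -o args= 2>/dev/null | grep -q "run_binary"; then
    RUNNING=1
fi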
You can use a pid file to keep track of whether the process is running. At the top of the script, check for the existence of the pid file; if it doesn't exist, create it and run the script, otherwise return.
Some sample code can be seen in this answer to a similar question.
You might consider using the (optional) lockfile(1) command (provided by the procmail package on Debian).
I have a lot of scripts, and I use the code below to prevent multiple/simultaneous runs:
PID="/var/scripts/PID.txt" # Temp file
if [ ! -f "$PID" ]; then
echo $$ > "$PID" # Print actual PID into a file
else
ps -p $(cat "$PID") > /dev/null && exit || echo $$ > "$PID"
fi
Building on wallenborn's answer, I also added a "staleness" check, just in case the PID lock file is beyond a certain expected age in seconds.
# prevent simultaneous executions within an hourish
pid_file="$HOME/.harness.pid"
max_stale_seconds=3600
if [ -f "$pid_file" ]; then
    pid="$(cat "$pid_file")"
    let age_in_seconds="$(date +%s) - $(date -r "$pid_file" +%s)"
    if ps -p "$pid" >/dev/null && [ "$age_in_seconds" -lt "$max_stale_seconds" ]; then
        exit 1
    fi
fi
echo $$ > "$pid_file"
trap "rm -f \"$pid_file\"" SIGSEGV
trap "rm -f \"$pid_file\"" SIGINT
This could be made "smarter" by killing off the other executions should the PID be valid, but that would be dangerous. Consider a sudden power failure and reset, after which the PID file contains a number that may now reference a completely different process.

How to run a bash script when a program opens in Linux

Is there a way to execute a bash script when I open a program like NetBeans or Dropbox on Ubuntu, and execute another bash script when I exit it?
My idea is to create a bash script in a cron job (@reboot) that checks every second whether the program exists in the current processes:
#!/bin/bash
NameOfprogram="NetBeans"
while true; do
    countOfprocess=$(ps -ef | grep "$NameOfprogram" | wc -l)
    if [[ $countOfprocess -gt 1 ]]; then
        : # execute bash script here
    fi
    sleep 1
done
But I think this idea is not the best. Is there a better way to achieve it?
A better approach is to wrap the executable in a script. That means you put a script with the name of the program in your path (probably $HOME/bin) and Linux will use that instead of the real executable.
Now you can execute the real program using:
/usr/bin/NetBeans "$@"
So to execute the real executable, you just put the absolute path in front of the name. The odd-looking "$@" passes on any arguments someone might have given the script.
Put a loop around this:
while true; do
    /usr/bin/NetBeans "$@"
done
But there is a problem: You can't exit this program anymore. As soon as you try, it restarts. So if you just want a restart when it crashes:
while true; do
    /usr/bin/NetBeans "$@" && exit 0
done
As long as the program exits because of an error, it will be restarted. If you quit it, the script will stop.
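Putting the pieces together for the original question, the wrapper could look like this (the hook paths are my placeholders):
#!/bin/bash
# save as $HOME/bin/NetBeans so it shadows the real executable in $PATH

/path/to/on_open.sh         # placeholder: script to run when the program is started
/usr/bin/NetBeans "$@"      # run the real program, forwarding all arguments
/path/to/on_exit.sh         # placeholder: script to run after the program exits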

Check if a file exists and make a sh script wait until the file is found

I'm working on a Linux server which is sometimes very slow, so when I add some jobs to run I have to wait a few hours just for a simple calculation.
I was wondering if I can start the next analysis but let it wait until the output of the previous analysis is there (the second analysis needs the first analysis's output).
I tried to make expect and other options work, but still no success (I found expect and the other options in a previous question on Stack Overflow):
expect {
'output/analysis_file1.txt'
}
Any ideas/hints are appreciated and will help me a lot.
The only thing I want is to make the second script wait until the text file from the first script exists.
The 4 scripts. 1:
#!/bin/bash
#$ -cwd
./script1.sh
. ./script2.sh $repla
. ./script3.sh $replac
2:
repla=''
for i in 'abcdefghijklmnopqrst'
do
    repla=`echo $i | sed 's/abc/xyz/g'`
    #echo $repla
done
3:
replac=''
for j in $1
do
    replac=`echo $j | sed 's/xyz/san/g'`
    #echo $replac
done
4:
replace=''
for h in $1
do
    replace=`echo $h | sed 's/san/sander/g'`
    #echo $replace
done
You can use the code below, with some modifications:
#!/bin/bash
while [ ! -f FILE_NAME ]
do
    sleep SOME_SECONDS
done
echo "file found"
You can use wait if you know the PID of the process running in the background. wait also returns the exit code of the process it waits for.
firstProcess & # Running in background
firstPid=$!
otherProcess # Concurrent with firstProcess
wait $firstPid # Wait firstProcess finish
anotherProcess
Instead of executing multiple scripts independently, you should create a master runner script like this:
#!/bin/bash
# sanity checks & parse arguments
./script1
ret=$?
# check for return value of script1 using $ret variable
./script2
ret=$?
# check for return value of script2 using $ret variable
./script3
ret=$?
# check for return value of script3 using $ret variable
...
# do cleanup and reporting
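If each script should only run when the previous one succeeded, the same runner collapses to a few lines (my variant):
#!/bin/bash
set -e   # abort the chain as soon as any script fails
./script1
./script2
./script3
echo "all scripts finished"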

Launch a process in the background and modify it from a bash script

I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing (the mode here is only an example; the original omitted it)
chmod 0770 /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busy-wait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist, so just loop on ls until it returns 0; when it does, you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
    echo -n "."
    sleep 2
done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once, wrap it in a function; otherwise, this should do what you need as is.
EDIT:
As pointed out in the comments, using ls like this is inferior to using -e in the test, so the rewritten script below is to be preferred. (I have also corrected the shell invocation, as echo -n is not supported on all platforms in sh emulation mode.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
    echo -n "."
    sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
    sleep 1
done
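If inotify-tools happens to be installed (an assumption on my part), the sleep-based polling can be replaced by an event-driven wait that wakes up the moment the directory changes:
#!/bin/bash
watch_dir="/tmp/myproc"   # assumed directory the socket appears in

# re-check after every create event until a .sock file shows up
while [ -z "$(ls "$watch_dir"/*.sock 2>/dev/null)" ]; do
    inotifywait -qq -e create "$watch_dir"
done
echo "socket found"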
If you set your umask (try umask 0) you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change them.

How to make sure only one instance of a Bash script is running at a time?

I want to make a sh script that will run at most once at any point in time.
Say I exec the script, and then exec it again; how do I make it so that if the first exec is still working, the second one fails with an error? I.e. I need to check whether the script is running elsewhere before doing anything. How would I go about doing this?
The script I have runs a long-running process (i.e. runs forever). I wanted to use something like cron to call the script every 15 minutes so that, in case the process fails, it will be restarted by the next cron run.
You want a pid file, maybe something like this:
pidfile=/path/to/pidfile
if [ -f "$pidfile" ] && kill -0 $(cat "$pidfile") 2>/dev/null; then
    echo still running
    exit 1
fi
echo $$ > "$pidfile"
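To keep the pid file from going stale (and later matching a recycled PID), I would also remove it when the script ends:
trap 'rm -f "$pidfile"' EXIT   # clean up the pid file when the script exits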
I think you need to use the lockfile command. See Using lockfiles in shell scripts (BASH) or http://www.davidpashley.com/articles/writing-robust-shell-scripts.html.
The second article uses a "hand-made" lock file and shows how to catch script termination and release the lock, although using lockfile -l <timeout seconds> will probably be a good enough alternative for most cases.
Example of usage without timeout:
lockfile script.lock
<do some stuff>
rm -f script.lock
This will ensure that any second script started while this one runs will wait indefinitely for the file to be removed before proceeding.
If we know that the script should not run for more than X seconds, and script.lock is still there, that probably means a previous instance of the script was killed before it could remove script.lock. In that case we can tell lockfile to force re-create the lock after a timeout (X = 10 below):
lockfile -l 10 /tmp/mylockfile
<do some stuff>
rm -f /tmp/mylockfile
Since lockfile can create multiple lock files, there is a parameter to tell it how long to wait before retrying to acquire the next file it needs (-<sleep before retry, seconds> and -r <number of retries>). There is also a parameter -s <suspend seconds> for the wait time after a lock has been removed by force (which complements the timeout used to decide when to force-break the lock).
You can use the run-one package, which provides run-one, run-this-one and keep-one-running.
The package: https://launchpad.net/ubuntu/+source/run-one
The blog introducing it: http://blog.dustinkirkland.com/2011/02/introducing-run-one-and-run-this-one.html
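Usage is a one-liner; just prefix the command with the tool (the script path here is my example):
run-one /path/to/myscript.sh            # refuses to start a second copy of the same command
keep-one-running /path/to/myscript.sh   # additionally restarts it if it dies, matching the cron use case above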
Write the process id into a file and then when a new instance starts, check the file to see if the old instance is still running.
(
    if ! flock -n 9
    then
        echo 'Not doing the critical operation (lock present).'
        exit 1
    fi
    # critical section goes here
) 9>'/run/lock/some_lock_file'
rm -f '/run/lock/some_lock_file'
From the example in the flock(1) man page. Very practical for use in shell scripts.
I just wrote a tool that does this:
https://github.com/ORESoftware/quicklock
Writing a good one takes about 15 LOC, so it's not something you want to include in every shell script.
Basically it works like this:
$ ql_acquire_lock
The above calls this bash function:
function ql_acquire_lock {
    set -e
    name="${1:-$PWD}" # the lock name is the first argument; if that is empty, use $PWD
    mkdir -p "$HOME/.quicklock/locks"
    fle=$(echo "${name}" | tr "/" _)
    qln="$HOME/.quicklock/locks/${fle}.lock"
    mkdir "${qln}" &> /dev/null || { echo "${ql_magenta}quicklock: could not acquire lock with name '${qln}'${ql_no_color}."; exit 1; }
    export quicklock_name="${qln}" # export the var *only if* the mkdir above succeeds
    trap on_ql_trap EXIT
}
When the script exits, it automatically releases the lock using the trap:
function on_ql_trap {
    echo "quicklock: process with pid $$ was trapped."
    ql_release_lock
}
To manually release the lock at will, use ql_release_lock:
function ql_maybe_fail {
    if [[ "$1" == "true" ]]; then
        echo -e "${ql_magenta}quicklock: exiting with 1 since fail flag was set for your 'ql_release_lock' command.${ql_no_color}"
        exit 1
    fi
}

function ql_release_lock () {
    if [[ -z "${quicklock_name}" ]]; then
        echo -e "quicklock: no lockname was defined. (\$quicklock_name was not set)."
        ql_maybe_fail "$1"
        return 0
    fi
    if [[ "$HOME" == "${quicklock_name}" ]]; then
        echo -e "quicklock: dangerous value set for \$quicklock_name variable..was equal to user home directory, not good."
        ql_maybe_fail "$1"
        return 0
    fi
    rm -r "${quicklock_name}" &> /dev/null &&
        { echo -e "quicklock: lock with name '${quicklock_name}' was released."; } ||
        { echo -e "quicklock: no lock existed for lockname '${quicklock_name}'."; ql_maybe_fail "$1"; }
    trap - EXIT # clear/unset trap
}
I suggest using flock, but in a different way than suggested by @Josef Kufner. I think this is quite easy and flock should be available on most systems by default:
flock -n lockfile myscript.sh
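The flock(1) man page also offers boilerplate for making a script lock itself, so callers don't need the wrapper invocation:
#!/bin/bash
# from the flock(1) man page: re-exec this script under an exclusive
# non-blocking lock on its own file, unless we already hold it
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :

# ... from here on, at most one instance runs at a time ...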
