Re-run bash script if another instance was invoked - linux

I have a bash script that may be invoked multiple times simultaneously. To protect the state information (saved in a /tmp file) that the script accesses, I am using file locking like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something
Now any other instance invoked while this script is running exits. If there were n simultaneous invocations, I want the script to run only one extra time, not n times, something like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something

# check if another instance was invoked; if so, run do_something again
if [ condition ]; then
    do_something
fi
How can I go about doing this? Touching a file inside the flock before quitting and having that file as the condition for the second if doesn't seem to work.

Have one flag (a request file) to signal that something needs doing, and always set it; the execution part unsets it. Have a separate lock flag so that only one instance runs the execution part.
REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something

# request running
touch $REQUEST_FILE

# lock and run
if ln -s /proc/$$ $LOCK_FILE 2>/dev/null ; then
    while [ -e $REQUEST_FILE ]; do
        do_something
        rm $REQUEST_FILE
    done
    rm $LOCK_FILE
fi
If you want to ensure that "do_something" is run exactly once for each time the whole script is run, then you need to create some kind of a queue. The overall structure is similar.
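If you would rather keep the flock-based structure from the question, the same request-flag idea can be grafted onto it. A rough, untested sketch (file names are illustrative); here the request flag is removed before do_something runs, so a request that arrives mid-run triggers exactly one more pass (a request that lands in the instant between the final loop check and the script's exit simply waits for the next invocation):

#!/bin/bash
LOCK=/tmp/do_something.lock
REQUEST=/tmp/do_something.request

do_something()
{
    : # work goes here
}

# always record that work is wanted
touch "$REQUEST"

# only one instance gets past this point; the others exit,
# but their request flag is already set
exec 8>"$LOCK"
if ! flock -n -x 8; then
    exit 1
fi

while [ -e "$REQUEST" ]; do
    rm "$REQUEST"      # consume the request before running, so a new
    do_something       # request made during the run causes one more pass
done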

They're not everyone's favourite, but I've always been a fan of symbolic links to make lockfiles, since they're atomic. For example:
lockfile=/var/run/`basename $0`.lock
if ! ln -s "pid=$$ when=`date '+%s'` status=$something" "$lockfile"; then
    echo "Can't set lock." >&2
    exit 1
fi
By encoding useful information directly into the link target, you eliminate the race condition introduced by writing to files.
That said, the link that Dennis posted provides much more useful information that you should probably try to understand before writing much more of your script. My example above is sort of related to BashFAQ/045 which suggests doing a similar thing with mkdir.
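For comparison, here is a minimal sketch of that mkdir variant from BashFAQ/045 (the lock path is illustrative):

lockdir=/tmp/$(basename "$0").lock

# mkdir is atomic: it fails if the directory already exists
if mkdir "$lockdir" 2>/dev/null; then
    trap 'rm -rf "$lockdir"' EXIT
    echo "pid=$$" > "$lockdir/pid"   # optional: record who holds the lock
else
    echo "Can't set lock." >&2
    exit 1
fi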
If I understand your question correctly, then what you want to do might be achieved (slightly unreliably) by using two lock files. If setting the first lock fails, we try the second lock. If setting the second lock fails, we exit. The race exists if the first lock is deleted after we check it but before we check the second, existing lock. If this level of unreliability is acceptable to you, that's great.
This is untested, but it looks reasonable to me.
#!/usr/local/bin/bash

lockbase="/tmp/test.lock"

setlock() {
    if ln -s "pid=$$" "$lockbase".$1 2>/dev/null; then
        trap "rm \"$lockbase\".$1" 0 1 2 5 15
    else
        return 1
    fi
}

if setlock 1 || setlock 2; then
    echo "I'm in!"
    do_something_amazing
else
    echo "No lock - aborting."
fi

Please see Process Management.

Related

How best to implement atomic update on a file inside a bash script [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
I have a script with multiple functions running in parallel that check a file and update it frequently. I don't want two functions to update the file at the same time and cause an issue. So what is the best way to do an atomic update? I have the following so far.
counter(){
    a=$1
    while true; do
        if [ ! -e /tmp/counter.lock ]; then
            touch /tmp/counter.lock
            curr_count=`cat /tmp/count.txt`
            n_count=`echo "${curr_count} + $a" | bc`
            echo ${n_count} > /tmp/count.txt
            rm -fv /tmp/counter.lock
            break
        fi
        sleep 1
    done
}
I am not sure how to convert my function to use flock, since it uses a file descriptor, and I think it might cause an issue if I call this function multiple times.
flock works by letting anyone open the lock file, but blocking if someone else locks it first. In your code, a second process could test for the existence of the lock after you see it doesn't exist but before you actually create it.
counter () {
    a=$1
    {
        flock -x 200    # exclusive lock, since we are going to modify the file
        read current_count < /tmp/count.txt
        ...
        echo "$new_count" > /tmp/count.txt
    } 200> /tmp/counter.lock
}
Here, two processes can open /tmp/counter.lock for writing. In one process, flock will get the lock and exit immediately. In the other, flock will block until the first process releases the lock by closing its file descriptor once the command block completes.
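Filling in the elided arithmetic from the question, a self-contained version of that counter might look like this (untested sketch; the paths and the bc arithmetic match the question):

counter () {
    a=$1
    {
        flock -x 200                                  # exclusive lock: we are about to write
        curr_count=$(cat /tmp/count.txt)
        new_count=$(echo "${curr_count} + $a" | bc)
        echo "$new_count" > /tmp/count.txt
    } 200> /tmp/counter.lock
}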

Prevent race condition when creating lock file

My script must not be run more than once concurrently. So it creates a lock file, and deletes it before exiting. It checks that the lock file doesn't exist before starting its work.
A very common approach to locking is something like this:
function setupLockFile() {
    if (set -o noclobber; echo "lock" > "$lockfile") 2>/dev/null; then
        trap "rm -f $lockfile; exit $?" INT TERM EXIT
    else
        echo "Script running... exiting!"
        exit 1
    fi
}
However there is a race condition - the if creates the file if it doesn't exist, and the script could be terminated before the trap is defined. Then the lockfile will not be deleted.
So what is a safe way to do this?
That's not a race condition - it's a question of resilience to failure. In situations where the script dies before it can remove the file, you need manual cleanup.
The usual way to try and automate this cleanup is to read the PID from any existing lock file, test whether that process still exists, and essentially ignore the lock if it doesn't. Unfortunately, without an atomic compare-and-set operation that's not trivial to do correctly, since it introduces a new race between reading the PID and acting on it, while another process may be trying to do the same thing.
Check out this question for more ideas around locking using just the file system.
My advice is to either store the lock file on a temporary filesystem (/var/run is usually tmpfs to permit pidfiles to disappear safely on reboot) so that things fix themselves after a reboot, or have the script throw up its hands and ask for manual intervention. Handling every failure case reliably increases complexity and thus probably introduces more probability of failure than asking a human for help.
And complexity isn't just today, it's for the lifetime of the code. It might be correct when you're done, but will the next person along break it?
Let's try another approach:
set up the trap before the lock file is created
store the PID in the lock file
make the trap check whether the PID of the current instance matches whatever is in the lock file
For example:
trap "cleanUp" INT TERM EXIT

function cleanUp {
    if [[ $$ -eq $(<$lockfile) ]]; then
        rm -f $lockfile
        exit $?
    fi
}

function setupLockFile {
    if ! (set -o noclobber; echo "$$" > "$lockfile") 2>/dev/null; then
        echo "Script running... exiting!"
        exit 1
    fi
}
This way you keep the check for lock file existence and its creation as a single operation, while also preventing the trap from deleting a lockfile of a previously running instance.
Additionally, as I mentioned in the comments below, in case the lock file already exists, I'd suggest checking whether a process with the given PID is running.
You never know whether, for whatever reason, the lock file has been left orphaned on disk.
So if you want to mitigate the need for manual removal of orphaned lock files, you can add additional logic to check whether the PID is stale or not.
For example, if no running process with the PID from the lock file is found, you can assume that this is an orphaned lock file from a previous instance, and you can overwrite it with your current PID and continue.
If a process is found, you can compare its name to see if it really is another instance of the same script or not - if not, you can overwrite the PID in the lock file and continue.
I did not include this in the code to keep it simple; you can try to create this logic yourself if you want. :)
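As a rough, untested illustration of that extra logic (it assumes the lock file contains only a PID, as written by setupLockFile above, and it still has the check-then-act window discussed earlier):

function checkStaleLock {
    local old_pid
    [ -f "$lockfile" ] || return 0
    old_pid=$(<"$lockfile")

    # kill -0 sends no signal; it only tests whether the process exists
    if [ -n "$old_pid" ] && ! kill -0 "$old_pid" 2>/dev/null; then
        echo "Removing orphaned lock file left by PID $old_pid." >&2
        rm -f "$lockfile"
    fi
}

# call it before trying to take the lock
checkStaleLock
setupLockFile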
First check for the lock file, then set the trap, then write to it:
function setupLockFile() {
    if [ -f "$lockfile" ]; then
        echo "Script running... exiting!"
        exit 1
    else
        trap "rm -f $lockfile; exit $?" INT TERM EXIT
        set -o noclobber; echo "lock" > "$lockfile" || exit 1
    fi
}
And there is an "official" way to manage lock files with the flock command, which is part of util-linux.
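A minimal sketch of that flock approach (untested; the lock path and the descriptor number are arbitrary choices):

#!/bin/bash
lockfile=/tmp/myscript.lock

# open (or create) the lock file on fd 200, then try to take an exclusive lock
exec 200>"$lockfile"
if ! flock -n 200; then        # -n: fail immediately instead of blocking
    echo "Script running... exiting!" >&2
    exit 1
fi

# ... rest of the script; the kernel releases the lock when the script exits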

Dual use bash script - source but also exec subshell? Dynamic return/exit?

My current setup starts with a function that is ostensibly in .bashrc (.bash_it/custom/funcs.bash to be precise)
#!/usr/bin/env bash
function proset() {
    . proset-core "$@";
}
proset-core does some decrypting of secrets and exports those secrets to the session, hence the need for the . instead of just running it as a script/subshell.
If something goes wrong in proset-core, I use return instead of exit since I don't want the SSH connection to be dropped.
if [ "${APP_JSON}" = "null" ] ; then
    echo -e "\n${redtext}App named $NAME not found in ${APPCONF}. Aborting.${resettext}\n";
    return;
fi
This makes sense in the context of the exported proset function, but precludes usage as a script since return isn't valid except from within a function.
Is there a way to detect how it's being called and return one or the other as appropriate?
Just try to return, and exit if it fails.
_retval=$?
return "$_retval" 2>/dev/null || exit "$_retval"
The only case where your code will continue past a return invoked at top level (outside of a function) is if the script was executed rather than sourced, and should that happen, exiting is the Right Thing.
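Applied to the snippet from the question, the guard might look like this (a sketch; the variable names are taken from the question):

if [ "${APP_JSON}" = "null" ] ; then
    echo -e "\n${redtext}App named $NAME not found in ${APPCONF}. Aborting.${resettext}\n";
    # 'return' succeeds when sourced; when executed, it fails (silently) and 'exit' runs
    return 1 2>/dev/null || exit 1;
fi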
Pass the builtin variable $SHLVL as the last argument in "$@". Then at the test point:
if [ "${@: -1}" -lt $SHLVL ]; then
    # SHLVL arg is less than current SHLVL
    # we are in a subshell
    exit
else
    return
fi
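The caller then appends its own $SHLVL when invoking the script; for example (illustrative, with myscript.sh standing for the script containing the test above):

# sourced: SHLVL is unchanged, so the test takes the 'return' branch
. ./myscript.sh "$@" "$SHLVL"

# executed: the child bash increments SHLVL, so the test takes the 'exit' branch
./myscript.sh "$@" "$SHLVL"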
Ended up using
calledBy="$(ps -o comm= $PPID)";
if [ "x${calledBy}" = "xsshd" ]; then
    return 1;
else
    exit 1;
fi
since it didn't require passing anything extra. If anything might make this problematic, please comment. I'm not too worried about being bash-specific or portable.
Credit: get the name of the caller script in bash script

Any way to exit bash script, but not quitting the terminal

When I use the exit command in a shell script, the script terminates the terminal (the prompt). Is there any way to terminate the script and stay in the terminal?
My script run.sh is expected to be executed by being sourced directly, or by being sourced from another script.
EDIT:
To be more specific, there are two scripts run2.sh as
...
. run.sh
echo "place A"
...
and run.sh as
...
exit
...
When I run it with . run2.sh and it hits the exit line in run.sh, I want it to stop and return to the terminal prompt and stay there. But with exit, the whole terminal gets closed.
PS: I have tried using return, but the echo line still gets executed...
The "problem" really is that you're sourcing and not executing the script. When you source a file, its contents will be executed in the current shell, instead of spawning a subshell. So everything, including exit, will affect the current shell.
Instead of using exit, you will want to use return.
Yes; you can use return instead of exit. Its main purpose is to return from a shell function, but if you use it within a source-d script, it returns from that script.
As §4.1 "Bourne Shell Builtins" of the Bash Reference Manual puts it:
return [n]
Cause a shell function to exit with the return value n.
If n is not supplied, the return value is the exit status of the
last command executed in the function.
This may also be used to terminate execution of a script being executed
with the . (or source) builtin, returning either n or
the exit status of the last command executed within the script as the exit
status of the script.
Any command associated with the RETURN trap is executed
before execution resumes after the function or script.
The return status is non-zero if return is used outside a function
and not during the execution of a script by . or source.
You can add an extra exit command after the return statement/command so that it works in both cases: executing the script from the command line and sourcing it from the terminal.
Example exit code in the script:
if [ $# -lt 2 ]; then
    echo "Needs at least two arguments"
    return 1 2>/dev/null
    exit 1
fi
When you source the script, the exit line is never reached, because the return command ends the script first.
When you execute the script, the return command gives an error, so we suppress the error message by redirecting it to /dev/null.
Instead of running the script with . run2.sh, you can run it with sh run2.sh or bash run2.sh.
A new subshell will be started to run the script; it closes at the end of the script, leaving the original shell open.
Actually, I think you might be confused by how you should run a script.
If you use sh to run a script, say, sh ./run2.sh, even if the embedded script ends with exit, your terminal window will still remain.
However, if you use . or source, your terminal window will also exit/close when the script calls exit.
for more detail, please refer to What is the difference between using sh and source?
This is just like putting a run function inside your script run2.sh. If you use exit inside run while sourcing run2.sh in your interactive shell, then run has the power to exit your script, run2.sh has the power to exit the terminal, and so of course run ends up with the power to exit your terminal.
#! /bin/sh
# use . run2.sh
run()
{
    echo "this is run"
    #return 0
    exit 0
}

echo "this is begin"
run
echo "this is end"
Anyway, I agree with Kaz that it's a design problem.
I had the same problem; from the answers above and from what I understood, what worked for me ultimately was:
Have a shebang line that invokes the intended interpreter, for example,
#!/bin/bash uses bash to execute the script
I have scripts with both kinds of shebangs. Because of this, using sh or . was not reliable, as it led to mis-execution (like when the script bails out having run incompletely)
The answer, therefore, was to:
Make sure the script has a shebang, so that there is no doubt about its intended handler.
chmod the .sh file so that it can be executed. (chmod +x file.sh)
Invoke it directly without any sh or .
(./myscript.sh)
Hope this helps someone with similar question or problem.
To write a script that is safe to be run as a shell script or sourced as an rc file, the script can compare $0 and $BASH_SOURCE and determine whether exit can safely be used.
Here is a short code snippet for that
[ "X$(basename "$0")" = "X$(basename "$BASH_SOURCE")" ] && \
    echo "***** executing $BASH_SOURCE as a shell script *****" || \
    echo "..... sourcing $BASH_SOURCE ....."
I think this happens because you are running it in source mode, with the dot:
. myscript.sh
You should run that in a subshell:
/full/path/to/script/myscript.sh
'source' http://ss64.com/bash/source.html
It's correct that sourced vs. executed scripts use return vs. exit to keep the same session open, as others have noted.
Here's a related tip, if you ever want a script that should keep the session open, regardless of whether or not it's sourced.
The following example can be run directly like foo.sh or sourced like . foo.sh / source foo.sh. Either way it will keep the session open after "exiting". The "$@" string is passed so that the function has access to the outer script's arguments.
#!/bin/sh
foo(){
    read -p "Would you like to XYZ? (Y/N): " response;
    [ "$response" != 'y' ] && return 1;
    echo "XYZ complete (args $@).";
    return 0;
    echo "This line will never execute.";
}
foo "$@";
Terminal result:
$ foo.sh
$ Would you like to XYZ? (Y/N): n
$ . foo.sh
$ Would you like to XYZ? (Y/N): n
$ |
(terminal window stays open and accepts additional input)
This can be useful for quickly testing script changes in a single terminal while keeping a bunch of scrap code underneath the main exit/return while you work. It could also make code more portable in a sense (if you have tons of scripts that may or may not be called in different ways), though it's much less clunky to just use return and exit where appropriate.
Also make sure to return with the expected return value. Otherwise, if you use exit, when the exit is encountered it will exit your base shell, since source does not create another process (instance).
Improved Tzunghsing's answer, with clearer results and error redirection, for silent usage:
#!/usr/bin/env bash
echo -e "Testing..."
if [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ]; then
    echo "***** You are Executing $0 in a sub-shell."
    exit 0
else
    echo "..... You are Sourcing $BASH_SOURCE in this terminal shell."
    return 0
fi
echo "This should never be seen!"
Or if you want to put this into a silent function:
function sExit() {
    # Safe Exit from script, not closing shell.
    [ "X$(basename $0 2>/dev/null)" = "X$(basename $BASH_SOURCE)" ] && exit 0 || return 0
}
...
# ..it has to be called with an error check, like this:
sExit && return 0
echo "This should never be seen!"
Please note that:
if you have enabled errexit in your script (set -e) and you return N with N != 0, your entire script will exit instantly. To see all your shell settings, use set -o.
when used in a function, the 1st return 0 exits the function, and the 2nd return 0 exits the script.
if your terminal emulator doesn't have -hold you can sanitize a sourced script and hold the terminal with:
#!/bin/sh
sed "s/exit/return/g" script >/tmp/script
. /tmp/script
read
otherwise you can use $TERM -hold -e script
If a command succeeds, its return value will be 0, and we can check that return value afterwards.
Is there a “goto” statement in bash?
Here is some dirty workaround using trap which jumps only backwards.
#!/bin/bash
set -eu
trap 'echo "E: failed with exitcode $?" 1>&2' ERR
my_function () {
    if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
        echo "this is run"
        return 0
    else
        echo "fatal: not a git repository (or any of the parent directories): .git"
        # 'goto' is not a real command; its error message is hidden by 2>/dev/null,
        # and the failure triggers the ERR trap and, via set -e, aborts the script
        goto trap 2> /dev/null
    fi
}
my_function
echo "Command succeeded" # If my_function failed this line is not printed
Related:
https://stackoverflow.com/a/19091823/2402577
How to use $? and test to check function?
I couldn't find a solution, so for those who want to leave a nested script without leaving the terminal window:
# this is just a script which changes to a directory if the path matches a regex
wpr(){
    leave=false
    pwd=$(pwd)
    if [[ "$pwd" =~ ddev.*web ]]; then
        # echo "you're in a wordpress installation"
        wpDir=$(echo "$pwd" | grep -o '.*\/web')
        cd "$wpDir"
        return
    fi
    echo 'please be in wordpress directory'
    # to leave from outside the scope
    leave=true
    return
}

wpt(){
    # nested function which sets the $leave variable
    wpr
    # interrupts the script if $leave is true
    if $leave; then
        return;
    fi
    echo 'here is the rest of the script, executes if leave is not true'
}
I have no idea whether this is useful for you or not, but in zsh, you can exit a script, but only to the prompt if there is one, by using parameter expansion on a variable that does not exist, as follows.
${missing_variable_ejector:?}
Though this does create an error message in your script, you can prevent it with something like the following.
{ ${missing_variable_ejector:?} } 2>/dev/null
1) exit 0 will leave the script, indicating success.
2) exit 1 will leave the script, indicating failure.
You can use either of the above two based on your requirement.

Multi-threaded BASH programming - generalized method?

Ok, I was running POV-Ray on all the demos, but POV's still single-threaded and wouldn't utilize more than one core. So, I started thinking about a solution in BASH.
I wrote a general function that takes a list of commands and runs them in the designated number of sub-shells. This actually works but I don't like the way it handles accessing the next command in a thread-safe multi-process way:
It takes, as an argument, a file with commands (one per line).
To get the "next" command, each process ("thread") will:
wait until it can create a lock file, with: ln $CMDFILE $LOCKFILE
read the command from the file,
modify $CMDFILE by removing the first line,
remove the $LOCKFILE.
Is there a cleaner way to do this? I couldn't get the sub-shells to read a single line from a FIFO correctly.
Incidentally, the point of this is to enhance what I can do on a BASH command line, and not to find non-bash solutions. I tend to perform a lot of complicated tasks from the command line and want another tool in the toolbox.
Meanwhile, here's the function that handles getting the next line from the file. As you can see, it modifies an on-disk file each time it reads/removes a line. That's what seems hackish, but I'm not coming up with anything better, since FIFOs didn't work without setvbuf() in bash.
#
# Get/remove the first line from FILE, using LOCK as a semaphore (with
# short sleep for collisions). Returns the text on standard output,
# returns zero on success, non-zero when file is empty.
#
parallel__nextLine()
{
    local line rest file=$1 lock=$2

    # Wait for lock...
    until ln "${file}" "${lock}" 2>/dev/null
    do
        sleep 1
        [ -s "${file}" ] || return $?
    done

    # Open, read one "line", save "rest" back to the file:
    exec 3<"$file"
    read line <&3 ; rest=$(cat<&3)
    exec 3<&-

    # After last line, make sure file is empty:
    ( [ -z "$rest" ] || echo "$rest" ) > "${file}"

    # Remove lock and 'return' the line read:
    rm -f "${lock}"
    [ -n "$line" ] && echo "$line"
}
# adjust these as required
args_per_proc=1        # 1 is fine for long running tasks
procs_in_parallel=4
xargs -n$args_per_proc -P$procs_in_parallel povray < list
Note: the nproc command (now part of coreutils) automatically determines
the number of available processing units, which can then be passed to -P.
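For example, sized to the machine (assuming the same list file as above):

# one argument per invocation, as many parallel povray processes as there are cores
xargs -n1 -P"$(nproc)" povray < list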
If you need real thread safety, I would recommend migrating to a better scripting system.
With python, for example, you can create real threads with safe synchronization using semaphores/queues.
Sorry to bump this after so long, but I pieced together a fairly good solution for this, IMO.
It doesn't work perfectly, but it will limit the script to a certain number of child tasks running, and then wait for all the rest at the end.
#!/bin/bash
pids=()

thread() {
    local this
    while [ ${#} -gt 6 ]; do
        this=${1}
        wait "$this"
        shift
    done
    pids=($1 $2 $3 $4 $5 $6)
}

for i in 1 2 3 4 5 6 7 8 9 10
do
    sleep 5 &
    pids=( ${pids[@]-} $(echo $!) )
    thread ${pids[@]}
done

for pid in ${pids[@]}
do
    wait "$pid"
done
It seems to work great for what I'm doing (handling parallel uploading of a bunch of files at once) and keeps it from overwhelming my server, while still making sure all the files get uploaded before the script finishes.
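For reference, on bash 4.3 or newer the same throttling can be done with wait -n; a minimal, untested sketch (the limit of 6 and the sleep 5 placeholder mirror the code above):

#!/bin/bash
max_jobs=6

for i in 1 2 3 4 5 6 7 8 9 10
do
    # once $max_jobs children are running, wait for any one of them to finish
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
    sleep 5 &
done

# wait for the remaining children
wait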
I believe you're actually forking processes here, and not threading. I would recommend looking for threading support in a different scripting language like perl, python, or ruby.
