Concurrency with shell scripts in failure-prone environments - linux

Good morning all,
I am trying to implement concurrency in a very specific environment, and keep getting stuck. Maybe you can help me.
This is the situation:
- I have N nodes that can read/write in a shared folder.
- I want to execute an application on one of them. This can be anything: a shell script, an installed program, or whatever.
- To do so, I have to send the same command to all of them. The first one should start the execution, and the rest should see that somebody else is running the desired application and exit.
- The execution of the application can be killed at any time. This is important because it means I cannot rely on any cleanup step after the execution.
- If the application gets killed, the user may want to execute it again, sending the very same command.
My current approach is a shell script that wraps the command to be executed. This could also be implemented in C, but not Python or other languages, to avoid library dependencies.
#!/bin/sh
# (folder structure simplified for legibility)
mutex(){
    lockdir=".lock"
    firstTask=1 # false
    # note: "&>" is a bashism, so redirect stdout and stderr explicitly for plain sh
    if mkdir "$lockdir" > /dev/null 2>&1
    then
        controlFile="controlFile"
        # if this is the first node, start the coordinator
        if [ ! -f "$controlFile" ]; then
            firstTask=0 # true
            # tell the rest of the nodes that I am in control
            echo "some info" > "$controlFile"
        fi
        # remove the control file when the script finishes
        trap 'rm $controlFile' EXIT
    fi
    return $firstTask
}
# The basic idea is that one task executes the desired command, given as arguments to this script. The rest do nothing.
if ! mutex ;
then
    exit 0
fi
# I am the first node and the only one reaching this point, so I execute whatever was passed in
"$@"
If there are no failures, this wrapper works great. The problem is that, if the script is killed before the execution finishes, the trap is not executed and the control file is not removed. Then, when we execute the wrapper again to restart the task, it won't work, as every node will think that somebody else is running the application.
A possible solution would be to remove the control file just before the "$@" call, but that would lead to a race condition.
Any suggestion or idea?
Thanks for your help.
Edit: updated with the correct solution, for future reference.

Your trap only fires on EXIT, which doesn't cover every way the script can be terminated. According to POSIX, the syntax is:
trap [action condition ...]
e.g.:
trap 'rm $controlFile' HUP INT TERM
trap 'rm $controlFile' 1 2 15
Note that $controlFile will not be expanded until the trap is executed if you use single quotes.
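Applied to the wrapper from the question, that might look like the sketch below. Note that no trap can fire on SIGKILL or a node crash, so this narrows the window for a stale control file rather than eliminating it; the explicit exit in the signal trap keeps the script from carrying on after cleanup.
#tell the rest of the nodes that I am in control
echo "some info" > "$controlFile"
# remove the control file on a normal exit ...
trap 'rm -f "$controlFile"' EXIT
# ... and on the signals that can be caught (SIGKILL never can be)
trap 'rm -f "$controlFile"; exit 1' HUP INT TERM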

Related

Can't get BASH script to wait for PID

I am building a script to make my life easier when setting up servers.
I am having an issue with this line:
# Code to MV/CP/CHOWN files (working as intended)
sudo su $INSTALL_USER -c \
"sh $SOFTWARE_DIR/soa/Disk1/runInstaller \
-silent -response $REPONSE_LOC/response_wls.rsp \
-invPtrLoc $ORA_LOC/oraInsta.loc \
-jreLoc /usr/java/latest" >&3
SOA_PID = pgrep java
wait $SOA_PID
# Code below this which requires this be completed before execution.
I am trying to get my script to wait for the process to complete before it continues on.
The script executes, but instead of waiting, it continues on, and the execution of the installer runs after the script finishes. I have other installer pieces that need this part installed before they start their own process, hence the wait.
I've tried using $! etc., but since this piece gets executed by a separate user, I don't know if that would work.
Thanks for any assistance.
The command SOA_PID = pgrep java should result in an error: because of the spaces, the shell tries to run a command named SOA_PID instead of making an assignment.
Try to capture the PID like this:
SOA_PID=$( pgrep java ) || exit
The || exit forces an exit if pgrep finds no matching process,
preventing nonsense from happening further on.
An alternative would be to rely on wait to return immediately,
but it's better to be explicit.
When using this in a function you'd use || return instead, depending
on circumstances.
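As a sketch of the corrected fragment: since wait can only wait for children of the current shell, and the java process here is started by the installer under another user, polling for the PID is a safer stand-in. The pgrep -u filter, the single-match assumption, and the five-second interval are assumptions, not part of the original script.
SOA_PID=$( pgrep -u "$INSTALL_USER" java ) || exit
# wait only works for children of this shell, so poll until the
# installer's java process disappears instead
while kill -0 "$SOA_PID" 2>/dev/null; do
    sleep 5
done
# Code below this point can now rely on the installer having finished.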

crash-stopping bash pipeline [duplicate]

This question already has answers here:
How do you catch error codes in a shell pipe?
(5 answers)
Closed 7 years ago.
I have a pipeline, say a|b where if a runs into a problem, I want to stop the whole pipeline.
a exiting with status 1 doesn't do this, as often b doesn't care about return codes.
e.g.
echo 1|grep 0|echo $? <-- this shows that grep did exit=1
but
echo 1|grep 0 | wc <--- wc is unfazed by grep's exit here
If I ran the pipeline as a subprocess of an owning process, any of the pipeline processes could kill the owning process. This seems a bit clumsy, but it would zap the whole pipeline.
Not possible with basic shell constructs, probably not possible in shell at all.
Your first example doesn't do what you think. echo doesn't use standard input, so putting it on the right side of a pipe is never a good idea. The $? that you're echoing is not the exit value of the grep 0. All commands in a pipeline run simultaneously. echo has already been started, with the existing value of $?, before the other commands in the pipeline have finished. It echoes the exit value of whatever you did before the pipeline.
# The first command is to set things up so that $? is 2 when the
# second command is parsed.
$ sh -c 'exit 2'
$ echo 1|grep 0|echo $?
2
Your second example is a little more interesting. It's correct to say that wc is unfazed by grep's exit status. All commands in the pipeline are children of the shell, so their exit statuses are reported to the shell. The wc process doesn't know anything about the grep process. The only communication between them is the data stream written to the pipe by grep and read from the pipe by wc.
There are ways to find all the exit statuses after the fact (the linked question in the comment by shx2 has examples) but a basic rule that you can't avoid is that the shell will always wait for all the commands to finish.
Early exits in a pipeline sometimes do have a cascade effect. If a command on the right side of a pipe exits without reading all the data from the pipe, the command on the left of that pipe will get a SIGPIPE signal the next time it tries to write, which by default terminates the process. (The two phrases to pay close attention to there are "the next time it tries to write" and "by default". If the writing process spends a long time doing other things between writes to the pipe, it won't die immediately. If it handles the SIGPIPE, it won't die at all.)
In the other direction, when a command on the left side of a pipe exits, the command on the right side of that pipe gets EOF, which does cause the exit to happen fairly soon when it's a simple command like wc that doesn't do much processing after reading its input.
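A quick way to see the SIGPIPE cascade in action (the commands here are only illustrative, not from the question):
# 'yes' writes lines forever; 'head -n 1' reads one line and exits.
# The next write by 'yes' then fails with SIGPIPE and 'yes' dies,
# so the pipeline finishes instead of running forever.
yes | head -n 1
echo "pipeline status: $?"   # reports head's status, i.e. 0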
With direct use of pipe(), fork(), and wait3(), it would be possible to construct a pipeline, notice when one child exits badly, and kill the rest of them immediately. This requires a language more sophisticated than the shell.
I tried to come up with a way to do it in shell with a series of named pipes, but I don't see it. You can run all the processes as separate jobs and get their PIDs with $!, but the wait builtin isn't flexible enough to say "wait for any child in this set to exit, and tell me which one it was and what the exit status was".
If you're willing to mess with ps and/or /proc you can find out which processes have exited (they'll be zombies), but you can't distinguish successful exit from any other kind.
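One refinement to the above: bash 4.3 and later did add wait -n, which waits for any one job to finish and reports its status, so a sketch along these lines becomes possible (a and b are placeholders for the real pipeline stages):
#!/bin/bash
# requires bash >= 4.3 for "wait -n"
mkfifo pipe_ab
a > pipe_ab & pid_a=$!
b < pipe_ab & pid_b=$!
if ! wait -n; then                      # returns as soon as either job exits
    kill "$pid_a" "$pid_b" 2>/dev/null  # a failing stage zaps the other one
fi
wait                                    # reap whatever is left
rm -f pipe_ab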
Write
set -e
set -o pipefail
at the beginning of your file.
set -e makes the script exit on an error, and set -o pipefail makes a pipeline's exit status non-zero if any stage fails (it reports the status of the rightmost failing command) instead of only reflecting the last command.
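A small illustration, reusing the grep example from the question; note that all three commands still run to completion, pipefail only changes the reported status:
#!/bin/bash
set -e
set -o pipefail

# without pipefail this pipeline "succeeds" because wc exits 0;
# with pipefail the grep failure (no match, exit 1) becomes the pipeline's
# exit status, and set -e then stops the script right here
echo 1 | grep 0 | wc -l
echo "never reached"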

BASH: How to monitor a script for execution failure

I'm on Linux, watching a script's execution so that it can be respawned when it runs into an execution failure. Below is a simple script which should help demonstrate the problem.
Here's my script
#!/bin/bash
echo '**************************************'
echo '* Run IRC Bot *'
echo '**************************************'
echo '';
if [ -z "$1" ]
then
    echo 'Example usage: ' $0 'intelbot'
fi
until `php $1.php`;
do
    echo "IRC bot '$1' crashed with the code $?. Respawning.." >&2;
    sleep 5
done;
What kill option should I use to say to until, hey I want this process to be killed and I want you to get it working again!
Edit
The aim here was to manually check for a script-execution failure so the IRC Bot can be re-spawned. The posted answer is very detailed so +1 to the contributor - a supervisor is indeed the best way to tackle this problem.
First -- don't do this at all; use a proper process supervision system to automate restarting your program for you, not a shell script. Your operating system will ship with one, be it SysV init's /etc/inittab (which, yes, will restart programs so listed when they exit if given an appropriate flag), or the more modern upstart (shipped with Ubuntu), systemd (shipped with current Fedora and Arch Linux), runit, daemontools, supervisord, launchd (shipped with MacOS X), etc.
Second: The backticks actually make your code behave in unpredictable ways; so does the lack of quotes on an expansion.
`php $1.php`
...does the following:
Substitutes the value of $1 into a string; let's say it's my * code.php.
String-splits that value; in this case, it would change it into three separate arguments: my, *, and code.php
Glob-expands those arguments; in this case, the * would be replaced with a separate argument for each file in the current directory
Runs the resulting program
Reads the output that program wrote to stdout, and runs that output as a separate command
Returns the exit status of that separate command.
Instead:
until php "$1.php"; do
echo "IRC bot '$1' crashed with the code $?. Respawning.." >&2;
sleep 5
done;
Now, the exit status returned by PHP when it receives a SIGTERM is something that can be controlled by PHP's signal handler -- unless you tell us how your PHP code is written, only signals which can't be handled (such as SIGKILL) will behave in a manner that's entirely consistent, and because they can't be handled, they're dangerous if your program needs to do any kind of safe shutdown or cleanup.
If you want your PHP code to install a signal handler, so you can control its exit status when signaled, see http://php.net/manual/en/function.pcntl-signal.php
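To make the restart behaviour concrete, here is a sketch with a stand-in command in place of the PHP bot (./mybot is a placeholder): the loop respawns on any non-zero exit status, and a process killed by a signal reports 128 plus the signal number, so kill, kill -9 and an ordinary crash all trigger a respawn, while an exit status of 0 ends the loop.
#!/bin/bash
run_bot() {
    # stand-in for: php "$1.php"
    ./mybot
}

until run_bot; do
    echo "bot crashed with code $?. Respawning.." >&2
    sleep 5
done
echo "bot exited cleanly (status 0); not respawning"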

Bash Command Substitution Giving Weird Inconsistent Output

For reasons not relevant to this question, I am running a Java server from a bash script, not directly but via command substitution in a separate sub-shell, and in the background. The intent is for the subcommand to return the process id of the Java server on its standard output. The fragment in question is as follows:
launch_daemon()
{
/bin/bash <<EOF
$JAVA_HOME/bin/java $JAVA_OPTS -jar $JAR_FILE daemon $PWD/config/cl.yml <&- &
pid=\$!
echo \${pid} > $PID_FILE
echo \${pid}
EOF
}
daemon_pid=$(launch_daemon)
echo ${daemon_pid} > check.out
The Java daemon in question prints to standard error and quits if there is a problem in initialization, otherwise it closes standard out and standard err and continues on its way. Later in the script (not shown) I do a check to make sure the server process is running. Now on to the problem.
Whenever I check the $PID_FILE above, it contains the correct process id on one line.
But when I check the file check.out, it sometimes contains the correct id, and other times it contains the process id repeated twice on the same line, separated by a space character, as in:
34056 34056
I use the variable $daemon_pid later in the script to check whether the server is running, so if it contains the pid repeated twice this totally throws off the test, and the script incorrectly thinks the server is not running. Fiddling with the script on my CentOS Linux server box (putting in more echo statements, etc.) seems to flip the behavior back to the correct one, with $daemon_pid containing the process id just once, but when I think that has fixed it, check the script into my source code repo, and build and deploy again, I start seeing the same bad behavior.
For now I have fixed this by assuming that $daemon_pid could be bad and passing it through awk as follows:
mypid=$(echo ${daemon_pid} | awk '{ gsub(" +.*",""); print $0 }')
Then $mypid always contains the correct process id and things are fine, but needless to say I'd like to understand why it behaves the way it does. And before you ask, I have looked and looked but the Java server in question does NOT print its process id to its standard out before closing standard out.
Would really appreciate expert input.
Following the hint by @WilliamPursell, I tracked this down in the bash source code. I honestly don't know whether it is a bug or not; all I can say is that it seems like an unfortunate interaction with a questionable use case.
TL;DR: You can fix the problem by removing <&- from the script.
Closing stdin is at best questionable, not just for the reason mentioned by @JonathanLeffler ("Programs are entitled to have a standard input that's open.") but more importantly because stdin is being used by the bash process itself and closing it in the background causes a race condition.
In order to see what's going on, consider the following rather odd script, which might be called Duff's Bash Device, except that I'm not sure that even Duff would approve: (also, as presented, it's not that useful. But someone somewhere has used it in some hack. Or, if not, they will now that they see it.)
/bin/bash <<EOF
if (($1<8)); then head -n-$1 > /dev/null; fi
echo eight
echo seven
echo six
echo five
echo four
echo three
echo two
echo one
EOF
For this to work, bash and head both have to be prepared to share stdin, including sharing the file position. That means that bash needs to make sure that it flushes its read buffer (or not buffer), and head needs to make sure that it seeks back to the end of the part of the input which it uses.
(The hack only works because bash handles here-documents by copying them into a temporary file. If it used a pipe, it wouldn't be possible for head to seek backwards.)
Now, what would have happened if head had run in the background? The answer is, "just about anything is possible", because bash and head are racing to read from the same file descriptor. Running head in the background would be a really bad idea, even worse than the original hack which is at least predictable.
Now, let's go back to the actual program at hand, simplified to its essentials:
/bin/bash <<EOF
cmd <&- &
echo \$!
EOF
Line 2 of this program (cmd <&- &) forks off a separate process (to run in the background). In that process, it closes stdin and then invokes cmd.
Meanwhile, the foreground process continues reading commands from stdin (its stdin fd hasn't been closed, so that's fine), which causes it to execute the echo command.
Now here's the rub: bash knows that it needs to share stdin, so it can't just close stdin. It needs to make sure that stdin's file position is pointing to the right place, even though it may have actually read ahead a buffer's worth of input. So just before it closes stdin, it seeks backwards to the end of the current command line. [1]
If that seek happens before the foreground bash executes echo, then there is no problem. And if it happens after the foreground bash is done with the here-document, also no problem. But what if it happens while the echo is working? In that case, after the echo is done, bash will reread the echo command because stdin has been rewound, and the echo will be executed again.
And that's precisely what is happening in the OP. Sometimes, the background seek completes at just the wrong time, and causes echo \${pid} to be executed twice. In fact, it also causes echo \${pid} > $PID_FILE to execute twice, but that line is idempotent; had it been echo \${pid} >> $PID_FILE, the double execution would have been visible.
So the solution is simple: remove <&- from the server start-up line, and optionally replace it with </dev/null if you want to make sure the server can't read from stdin.
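Applied to the fragment from the question, only the start-up line changes (this is just the original function with </dev/null substituted for <&-):
launch_daemon()
{
/bin/bash <<EOF
$JAVA_HOME/bin/java $JAVA_OPTS -jar $JAR_FILE daemon $PWD/config/cl.yml </dev/null &
pid=\$!
echo \${pid} > $PID_FILE
echo \${pid}
EOF
}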
Notes:
Note 1: For those more familiar with bash source code and its expected behaviour than I am, I believe that the seek and close takes place at the end of case r_close_this: in function do_redirection_internal in redir.c, at approximately line 1093:
check_bash_input (redirector);
close_buffered_fd (redirector);
The first call does the lseek and the second one does the close. I saw the behaviour using strace -f and then searched the code for a plausible looking lseek, but I didn't go to the trouble of verifying in a debugger.

How to handle error/exception in shell script?

Below is the script that I am executing in bash, and it works fine.
fileexist=0
for i in $( ls /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done); do
    mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done /data/read-only/clv/daily/archieve-wip/
    fileexist=1
done
Problem Statement:
My above shell script, which has to be run daily using a cron job, doesn't have any error/exception handling mechanism. If anything goes wrong, I have no way of knowing what happened.
Other scripts depend on the data produced by the script above, so I always get complaints from the people who depend on that data whenever something goes wrong.
So is there any way I can get notified if anything goes wrong in my script? For example, if the cluster is under maintenance while my script runs, it will certainly fail; can I be notified when it does, so that I know something went wrong?
Hope my question is clear enough.
Any thoughts will be appreciated.
You can check for the exit status of each command, as freetx answered, but this is manual error checking rather than exception handling. The standard way to get the equivalent of exception handling in sh is to start the script with set -e. That tells sh to exit with a non-zero status as soon as any executed command fails (i.e. exits with a non-zero exit status).
If it is intended for some command in such a script to (possibly) fail, you can use the construct COMMAND || true, which will force a zero exit status for that expression. For example:
#!/bin/sh
# if any of the following fails, the script fails
set -e
mkdir -p destdir/1/2
mv foo destdir/1/2
touch /done || true # allowed to fail
Another way to ensure that you are notified when things go wrong in a script invoked by cron is to adhere to the Unix convention of printing nothing unless an error occurred. Successful runs will then pass without notice, and unsuccessful runs will cause the cron daemon to notify you of the error via email. Note that local mail delivery must be correctly configured on your system for this to work.
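For instance, a crontab along these lines (the address and script path are placeholders) mails you whatever the job prints, and with the print-nothing-on-success convention that means mail arrives only when something went wrong:
# cron mails any output produced by the job to MAILTO
MAILTO=you@example.com
30 2 * * * /home/user/daily_export_archive.sh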
It's customary for every Unix command line utility to return 0 upon success and non-zero on failure. Therefore you can use $? to check the last command's return value and handle things accordingly.
For instance:
$ ls
file1 file2
$ echo $?
0
$ ls file.no.exist
$ echo $?
1
Therefore, you can use this as rudimentary error detection to see if something goes wrong. So the normal approach would be
some_command
if [ $? -gt 0 ]
then
    handle_error here
fi
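Combined with the move command from the question (keeping its &processDate# placeholder as-is), a failure notification might look like the following sketch; it assumes local mail delivery is configured, as noted in the other answer, and the address is a placeholder:
mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done /data/read-only/clv/daily/archieve-wip/
if [ $? -gt 0 ]
then
    # tell the people who depend on this data that something went wrong
    echo "daily export archive failed on $(hostname) at $(date)" \
        | mail -s "HADOOP export archive FAILED" you@example.com
    exit 1
fi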
Well, if the other scripts are on the same machine, they could do a pgrep for this script; if it is found, sleep for a while and recheck until the process is gone, then carry on.
If the script is on another machine, or even local, another method is to produce a temp file on that machine, reachable for example over HTTP, that the other scripts can check for status, i.e. running or complete.
You could also wrap the script in another one that looks for these errors and emails you if it finds any, and otherwise passes the result on as normal to whoever needs it.
go=0

check_running() {
    # count running instances of the producing script (name taken from the original example)
    running=$(pgrep -f your_script.sh | wc -l)
    if [ "$running" -gt 0 ]; then
        echo "still running -- instances found $running"
        go=1
    else
        go=0
    fi
}

check_running
while [ "$go" -ge 1 ]; do
    sleep 120
    check_running
done
# your_script.sh is no longer running, so execute your other script here
