Bash script - How to take the result of a command and use it - Linux

I would like to know how to do the following: when executing a command in a script, retrieve the message produced by that command.
For example, suppose "AZERTY already exists, please check the error" is the message printed after a (wrong) command. How do I make my bash script stop when it sees this message?

You almost certainly should not check the error message. Instead just do:
cmd || exit
If cmd fails and writes the message "AZERTY already exists, please check the error" to stderr, then that message will appear and the script will exit with whatever non-zero value cmd exited with. Some commands return non-zero values that have meaning, and you may want to suppress that (for consistent return values from your script) or change that with something like:
cmd || exit 1
On the other hand, if cmd is poorly written and returns a zero value when it "fails" (I'm putting that in quotes since a reasonable definition of "fail" is "return non-zero"), then that is a bug in cmd which should be fixed.
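For example, a minimal sketch (create_user is a hypothetical command standing in for cmd):
#!/bin/bash
# Exit with whatever non-zero status create_user returned:
create_user alice || exit
# Alternatively, normalize the status to 1 for callers of this script:
# create_user alice || exit 1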

I recommend using strict mode in all your bash scripts. http://redsymbol.net/articles/unofficial-bash-strict-mode/
Basically, put this at the top:
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'
The option you specifically asked about is -e.
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
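As a minimal illustration of what -e buys you (false stands in for any failing command):
#!/bin/bash
set -euo pipefail
echo "this line runs"
false                        # non-zero exit status: the script stops here
echo "this line is never reached"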

# The error message is printed to stderr (see above), so merge it into the pipe:
if ./command 2>&1 | grep -q 'AZERTY already exists, please check the error'; then
    echo "Error found, exiting"
    exit 1
fi
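Note that the if above tests grep's exit status, not the command's. If you also want to stop when ./command fails without printing that exact message, here is one sketch using bash's PIPESTATUS array:
./command 2>&1 | grep -q 'AZERTY already exists, please check the error'
pipe_status=("${PIPESTATUS[@]}")    # copy both statuses before they are overwritten
# pipe_status[0] is ./command's status, pipe_status[1] is grep's
if [ "${pipe_status[1]}" -eq 0 ] || [ "${pipe_status[0]}" -ne 0 ]; then
    echo "Error found, exiting"
    exit 1
fi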

Related

A way to specify a command to run if the previous fails

Is it possible to trap an error (e.g. an unknown command) from the CLI, and do something when an error occurs?
To be more precise, I am looking for a way to do something like this:
if [ previousCommandFails ]; then
    echo lastCommand >> somewhere.txt
fi
echo is just an example to show that I need access to this lastCommand.
I want this to be the default behaviour on my computer, so the code must be placed somewhere like ~/.bashrc.
You can try the following solution. I don't guarantee that it's a good solution but it may help with your case.
Create a small script which can test the previous command, i.e. test.sh with the content:
if [ $? -ne 0 ]
then
    history 1 >> /path/to/failed_commands.txt
fi
Then set this variable:
PROMPT_COMMAND+="source /path/to/test.sh"
PROMPT_COMMAND: If set, the value is executed as a command prior to issuing each primary prompt.
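One caveat: if PROMPT_COMMAND already holds a command, appending to it without a separator produces a broken string. A safer sketch for ~/.bashrc:
# Add a separating "; " only when PROMPT_COMMAND is already non-empty:
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }source /path/to/test.sh"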
It depends on what you call "fail". If it is just returning a non-zero value, I am afraid that you have to explicitly test it after each command, or use a specialized shell.
But trap can be used to execute a specific command when a signal is received:
trap action signal
If this is not enough, you will have to get the source of a shell (POSIX shell or bash) and tweak it to meet your needs...
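bash also provides the ERR pseudo-signal for trap, which fires whenever a command exits with a non-zero status (subject to the same exceptions as set -e). A minimal sketch for ~/.bashrc, reusing the log file from the answer above:
# $? is the failing status; BASH_COMMAND is the command that triggered the trap
trap 'echo "exit $?: $BASH_COMMAND" >> /path/to/failed_commands.txt' ERR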

How to exit a shell script when a function is not found? [duplicate]

I've been writing a shell script and would find it useful to be able to halt its execution if any of the commands fail. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, it certainly shouldn't run ./configure afterwards.
Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e, -u, -x and -o pipefail options, like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
(*) Note, from man bash:
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !.
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when some command that is not part of a test (like in an if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
See "Chapter 33. Options" in the Advanced Bash-Scripting Guide.
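A quick demonstration of what pipefail changes:
#!/bin/bash
false | true
echo "without pipefail: $?"   # prints 0: only true's status counts

set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1: the failing status wins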
Here is how to do it:
#!/bin/sh

abort()
{
    echo >&2 '
***************
*** ABORTED ***
***************
'
    echo "An error occurred. Exiting..." >&2
    exit 1
}

trap 'abort' 0
set -e

# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0

echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
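For instance, a sketch of that function-based split (directory and flags taken from the question):
#!/bin/bash
configure_build() {
    cd some_dir && ./configure --some-flags
}

compile_and_install() {
    make && make install
}

configure_build && compile_and_install || exit 1
Since functions run in the current shell, the cd carries over to the later make calls, matching the original sequence.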
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit the error traps. The bash shell provides an option for that, via set:
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
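A minimal sketch of -E together with an ERR trap:
#!/bin/bash
set -E    # shell functions now inherit the ERR trap
trap 'echo "error: status $? near line $LINENO" >&2' ERR

f() {
    false    # without set -E, the trap stays silent here
    true     # give f a zero exit status so the trap fires only once
}
f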
Adam Rosenfield's recommendation to use set -e is right in certain cases, but it has its own potential pitfalls. See GreyCat's BashFAQ - 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits "if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !".
This means set -e does not help in the following simple cases (detailed explanations can be found on the wiki):
Using the arithmetic command let or ((..)) (bash 4.1 onwards) to increment a variable value, as in:
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
If the offending command is not the last command in a && or || list. For example, in the script below set -e does not trigger on the failing test -d, because it is not the final command of the && list:
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When used in an if statement: the exit code of the if statement is the exit code of the last command executed in it, and an if whose condition fails (with no else) returns 0. In the example below, test -d failed, so nothing in the body ran, f returned 0, and set -e does not trigger:
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
When used with command substitution, failures inside the substitution are ignored, unless inherit_errexit is set (bash 4.4 onwards):
#!/usr/bin/env bash
set -e
foo=$(expr 1 - 1; true)   # expr exits non-zero, but the failure is swallowed
echo survived
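A sketch of the bash 4.4 fix mentioned above:
#!/usr/bin/env bash
set -e
shopt -s inherit_errexit    # bash 4.4+: $(...) subshells inherit set -e
foo=$(expr 1 - 1; true)     # expr exits non-zero, the substitution now aborts
echo survived               # never reached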
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here, calling f will not exit the script, because local swallows the exit code of the failing command substitution; g shows the safe pattern of separating declaration from assignment:
set -e
f() { local var=$(somecommand that fails); }       # failure status is lost
g() { local var; var=$(somecommand that fails); }  # failure status propagates
When used in a pipeline, where the offending command is not the last one. For example, the command below still goes through. One option is to enable pipefail, which makes the pipeline return the exit status of the rightmost command that failed:
set -e
somecommand that fails | cat -
echo survived
The ideal recommendation is to not use set -e, but to implement your own version of error checking instead. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.

Concurrency with shell scripts in failure-prone environments

Good morning all,
I am trying to implement concurrency in a very specific environment, and keep getting stuck. Maybe you can help me.
This is the situation:
- I have N nodes that can read/write in a shared folder.
- I want to execute an application on one of them. This can be anything, like a shell script, an installed program, or whatever.
- To do so, I have to send the same command to all of them. The first one should start the execution, and the rest should see that somebody else is running the desired application and exit.
- The execution of the application can be killed at any time. This is important because it rules out relying on any cleanup step after the execution.
- If the application gets killed, the user may want to execute it again. He would then send the very same command.
My current approach is to create a shell script that wraps the command to be executed. This could also be implemented in C, but not Python or other languages, to avoid library dependencies.
#!/bin/sh
# (folder structure simplified for legibility)
mutex(){
    lockdir=".lock"
    firstTask=1 # false
    if mkdir "$lockdir" > /dev/null 2>&1    # &> is bash-only; this form is portable sh
    then
        controlFile="controlFile"
        # if this is the first node, start coordinator
        if [ ! -f $controlFile ]; then
            firstTask=0 # true
            # tell the rest of the nodes that I am in control
            echo "some info" > $controlFile
        fi
        # remove the control file when the script finishes
        trap 'rm $controlFile' EXIT
    fi
    return $firstTask
}

# The basic idea is that one task executes the desired command, stated as
# arguments to this script. The rest do nothing.
if ! mutex
then
    exit 0
fi

# I am the first node and the only one reaching this point, so I execute whatever
"$@"
If there are no failures, this wrapper works great. The problem is that, if the script is killed before the execution, the trap is not executed and the control file is not removed. Then, when we execute the wrapper again to restart the task, it won't work as every node will think that somebody else is running the application.
A possible solution would be to remove the control file just before the "$@" call, but that would lead to a race condition.
Any suggestion or idea?
Thanks for your help.
Edit: updated with the correct solution for future reference.
Your trap syntax looks wrong: According to POSIX, it should be:
trap [action condition ...]
e.g.:
trap 'rm $controlFile' HUP INT TERM
trap 'rm $controlFile' 1 2 15
Note that $controlFile will not be expanded until the trap is executed if you use single quotes.
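To illustrate the quoting point (the two trap lines are alternatives; the second would replace the first):
controlFile="controlFile"
trap 'rm $controlFile' EXIT    # single quotes: $controlFile expands when the trap fires
trap "rm $controlFile" EXIT    # double quotes: expands now, at definition time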

Get output from executing an AppleScript within a bash script

I tried this technique for storing the output of a command in a bash variable. It works with "ls -l", but it doesn't work when I run an AppleScript. For example, below is my bash script calling an AppleScript.
I tried this:
OUTPUT="$(osascript myAppleScript.scpt)"
echo "Error is ${OUTPUT}"
I can see my AppleScript running on the command line, and I can see the error being output on the command line, but when it prints "Error is " it prints a blank, as if the AppleScript output isn't getting stored.
Note: My AppleScript is erroring out on purpose to test this. I'm trying to handle errors correctly by collecting the AppleScript's output.
Try this to redirect stderr to stdout:
OUTPUT="$(osascript myAppleScript.scpt 2>&1)"
echo "$OUTPUT"
On success, the script's output is written to STDOUT. On failure, the error message is written to STDERR, and a non-zero return code is set. You want to check the return code first, e.g. if [ $? -ne 0 ]; then..., and if you need the details then you'll need to capture osascript's STDERR.
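Putting that together, a minimal sketch (the script name is taken from the question):
# Capture stdout and stderr together, then branch on osascript's exit status:
OUTPUT="$(osascript myAppleScript.scpt 2>&1)"
if [ $? -ne 0 ]; then
    echo "AppleScript failed: $OUTPUT" >&2
    exit 1
fi
echo "Result: $OUTPUT"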
Or, depending what you're doing, it may just be simplest to put set -e at the top of your shell script so that it terminates as soon as any error occurs anywhere in it.
Frankly, bash and its ilk really are a POS. The only half-decent *nix shell I've ever seen is fish, but it isn't standard on anything (natch). For complex scripting, you'd probably be better using Perl/Python/Ruby instead.
You can also use the clipboard as a data bridge. For example, if you wanted to get the stdout into the clipboard you could use:
osascript myAppleScript.scpt | pbcopy
In fact, you can copy to the clipboard directly from your AppleScript, e.g. with:
set the clipboard to "precious data"
-- or to set the clipboard from a variable
set the clipboard to myPreciousVar
To get the data inside a bash script you can read the clipboard to a variable with:
data="$(pbpaste)"
See also man pbpaste.

How to handle error/exception in shell script?

Below is my script that I am executing in bash, and it works fine.
fileexist=0
for i in $( ls /data/read-only/clv/daily/Finished-HADOOP_EXPORT_${processDate}.done ); do
    mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_${processDate}.done /data/read-only/clv/daily/archieve-wip/
    fileexist=1
done
Problem statement:
In my above shell script, which has to be run daily using a cron job, I don't have any error/exception handling mechanism. If anything goes wrong, I don't know what has happened.
After the above script is executed, there are other scripts that depend on the data it provides, so I always get complaints from the people who depend on my script's data that something has gone wrong.
So is there any way I can get notified if anything wrong has happened in my script? Suppose the cluster is under maintenance while I am running my script; then it will definitely fail, so can I be notified that my script failed, so that I know something went wrong?
Hope my question is clear enough.
Any thoughts will be appreciated.
You can check for the exit status of each command, as freetx answered, but this is manual error checking rather than exception handling. The standard way to get the equivalent of exception handling in sh is to start the script with set -e. That tells sh to exit with a non-zero status as soon as any executed command fails (i.e. exits with a non-zero exit status).
If it is intended for some command in such a script to (possibly) fail, you can use the construct COMMAND || true, which will force a zero exit status for that expression. For example:
#!/bin/sh
# if any of the following fails, the script fails
set -e
mkdir -p destdir/1/2
mv foo destdir/1/2
touch /done || true # allowed to fail
Another way to ensure that you are notified when things go wrong in a script invoked by cron is to adhere to the Unix convention of printing nothing unless an error occurred. Successful runs will then pass without notice, and unsuccessful runs will cause the cron daemon to notify you of the error via email. Note that local mail delivery must be correctly configured on your system for this to work.
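A hedged sketch of that convention for a cron wrapper (the job path is illustrative, and mktemp is assumed to be available):
#!/bin/sh
# Stay silent on success; on failure, print the captured output so that
# cron mails it to the crontab's owner.
log=$(mktemp) || exit 1
if ! /path/to/daily_job.sh > "$log" 2>&1; then
    cat "$log"    # anything printed here ends up in cron's mail
    rm -f "$log"
    exit 1
fi
rm -f "$log"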
It's customary for every Unix command-line utility to return 0 upon success and non-zero on failure. Therefore you can use the $? pattern to inspect the last return value and handle things accordingly.
For instance:
$ ls
file1 file2
$ echo $?
0
$ ls file.no.exist
$ echo $?
1
Therefore, you can use this as rudimentary error detection to see if something goes wrong. So the normal approach would be
some_command
if [ $? -gt 0 ]
then
    handle_error here
fi
Well, if the other scripts are on the same machine, you could do a pgrep for this script in them; if it is found, sleep for a while and retry later, rechecking until the process is gone.
If the script is on another machine, or even local, another method is to produce a temp file on the remote machine, accessible via a running HTTP server, whose status (running or complete) the other scripts can check.
You could also wrap the script in another one that looks for these errors and emails you if it finds them, and otherwise sends the result as normal to whoever needs it.
go=0

check_running() {
    # count processes matching the watched script's name
    running=$(pgrep -f your_script.sh | wc -l)
    if [ "$running" -gt 1 ]; then
        echo "already running $0 -- instances found $running"
        go=1
    fi
}

check_running
if [ "$go" -ge 1 ]; then
    # found running: wait, then check again
    sleep 120
    check_running
else
    # not running: execute your other script here
    :
fi
