How to run a bash script while it returns code 0?

I have a bash script with many lines of code, and I need to run it for as long as it returns $? == 0; if it hits an error, I need it to stop and exit with code 1.
The question is: how do I do this?
I tried using the set -e command, but Jenkins does not mark the build as failed; to Jenkins it still looks like a success.
I also need to capture the error message so I can show it in my Jenkins log.
I managed to get the error code (in my case it is 126), but how do I get the error message?
main file:
./fileWithError.sh
rc=$?; if [[ $rc != 0 ]]; then
    echo "exit ${rc}"
fi
fileWithError.sh:
#!/bin/sh
set -e
echo "Test"
agjfsjgfshgd   # deliberately invalid command, triggers the error
echo "Test2"
echo "Test3"

Just add the command set -e to the beginning of the file.
This should look something like this:
#!/bin/sh
set -e
#...Your code...
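Note that Jenkins marks a build as failed only when the shell step itself exits non-zero, so the calling script must propagate the error. A minimal sketch of the main file with that exit added (the ./ path is an assumption about where the script lives):
#!/bin/bash
./fileWithError.sh
rc=$?
if [[ $rc != 0 ]]; then
    echo "fileWithError.sh failed with exit code $rc"
    exit $rc   # propagate the failure so Jenkins marks the build as failed
fi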

I think you just want:
#!/bin/sh
while ./fileWithError.sh; do
    sleep 1
done
echo fileWithError.sh failed!! >&2
Note that if the script is written well, the echo is redundant, as fileWithError.sh should have written a decent error message already. Also, the sleep may not be needed, but it is useful to prevent a fast loop if the script succeeds quickly.
You can get the explicit return value, but it requires a bit of refactoring.
#!/bin/sh
true
while test $? = 0; do ./fileWithError.sh; done
echo fileWithError.sh failed with status $?!! >&2
This refactoring is needed because, in the first construction, the value of $? after the loop would be the return value of sleep.

It's not quite easy to get only an error code.
How about this ...
#!/bin/bash
Msg=$(./fileWithError.sh 2>&1)   # redirect all error messages to stdout
if [ "$?" -ne 0 ]                # not equal
then
    echo "$Msg"
    exit 1
fi
exit 0
You catch all messages created by fileWithError.sh, and if the program returned an error code, you already have the error message saved in a variable.
The disadvantage is that you temporarily store all messages created by fileWithError.sh until the error appears.
You can filter the error message with echo "$Msg" | tail -n 1, but it's not 100% safe.
You should also make a change in fileWithError.sh:
replace set -e with trap "exit 1" ERR. This will terminate the script on errors.
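For example, a minimal sketch of fileWithError.sh rewritten with the trap (note the ERR trap is a bash feature, not POSIX sh, so the shebang changes too):
#!/bin/bash
trap 'exit 1' ERR        # terminate with status 1 when any command fails
echo "Test"
agjfsjgfshgd             # deliberately invalid command, triggers the trap
echo "Test2"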
Hope this helps.

Related

Check all commands exit code within a bash script

Consider the case where I have a very long bash script with several commands. Is there a simple way to check the exit status of ALL of them easily, so that if one fails I can show which command failed and its return code?
I mean, I don't want to test each one of them with checks like the following:
my_command
status=$?
if [ $status -ne 0 ]; then
    # error case
    echo "error while executing my_command, ret code: $status"
    exit 1
fi
You can do trap "cmd" ERR, which invokes cmd when a command fails. However, this solution has a couple of drawbacks; for example, it does not catch a failure inside a pipe.
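A short sketch of both the trap and the pipe caveat:
#!/bin/bash
trap 'echo "a command failed with status $?" >&2' ERR
false              # triggers the trap
false | true       # does NOT trigger the trap: the pipeline's status is that of true
# with set -o pipefail, the second line would trigger the trap as well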
In a nutshell, you are better off doing your error management properly, on a case by case basis.
You can write a function that wraps the commands you want to launch:
function test {   # note: this shadows the "test" builtin; rename it if you need the builtin
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}
test command1
test command2
One can test the value of $?, or simply put set -e at the top of the script, which will cause it to exit upon any command that errors.
#!/bin/sh
set -xe   # -x traces each command as it runs; -e exits on the first error
my_command1
# never makes it here if my_command1 fails
my_command2

Redirecting stdout only if command failed?

I'm writing a bash script that is supposed to be "transparent" to the user. It reads commands from the user and intercepts them, allowing only some of them to be executed by bash, depending on some criteria. It (basically) works like this:
while true; do
    read COMMAND
    can_be_done $COMMAND
    if [ $? == 0 ]; then
        eval $COMMAND
        if [ $? != 0 ]; then
            echo "Error: command not found"
        fi
    fi
done
The problem is, when the command fails, its output still gets printed to the console. BUT, if I keep the result in a variable and only print it when it doesn't fail, like so:
RESULT=$(eval $COMMAND)
Then there's another problem: The special formatting gets lost (for example, "ls --color" doesn't show colors anymore)
My question is: Is there a way to have the command print to STDOUT if successful, but to /dev/null if it fails?
Do you really need the second part, replacing the output of the command with an error message? Linux commands print their own error messages, which aren't necessarily "command not found". You'd be hiding the true error (permission denied, file not found, out of memory, segfault, etc.) with an oftentimes incorrect error message (command not found).
If you remove that check, you could simplify the loop to something like this:
while true; do
    read -e COMMAND
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done
read -e uses readline to obtain the command, making the prompt a lot more shell-like (↑ and ↓ for history, for instance).
command; if [ $? == 0 ]; then is more idiomatically written as if <command>; then.
Quoting makes sure special characters and whitespace are handled properly.
I would argue strongly that you should not do this. If you do not want to see output, redirect it to /dev/null. If you do want to see errors, do not redirect stderr. If you are using a program that prints its error messages on stdout instead of stderr, FIX THE PROGRAM! Error messages belong on stderr. Note that this means your program is broken, as it ought to read:
echo "Error: command not found" >&2
I'm not sure if it is rule number 1, but it certainly belongs in the top 10, and it may be the most often violated rule: Error messages belong on stderr. A program which prints error messages on stdout is broken.
if false > /dev/null; then echo 1; else echo 2; fi 2> /dev/null
will output 2.
if true > /dev/null; then echo 1; else echo 2; fi 2> /dev/null
will output 1.
Remove the > /dev/null to print the command's own output to stdout as well. For example:
if echo 123; then echo 1; else echo 2; fi 2> /dev/null
will output 123 and then 1.
Assuming that the command is not very expensive to run, you can do this:
test `ls /mooo 2>/dev/null` || echo moo not found
Note that test succeeds here when the command produces non-empty output (which ls only does when the listing succeeds); the command's own exit status is discarded by the substitution. You could put this in an if statement too, like so:
if [ `ls /moo 2>/dev/null` ]; then
    echo moo is a folder
fi
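If you care about the exit status rather than the output, a variant that tests the status directly (a small sketch):
if ls /moo >/dev/null 2>&1; then
    echo moo is a folder
fi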

shell to find a file, execute it - exit if 'error' and continue if 'no error'

I have to write a shell script and I don't know how to go about it.
Basically, I have to write a script that finds a file (which could be named differently: either file.1 or file.2). If either file exists it must be executed; if it returns 0 (no error) the build should continue, and if it returns non-zero (an error) the script should exit. If neither file is found, the build should continue.
Some of the conditions, to make it clearer:
1) If either file exists, it should be executed; if it has any errors the script should exit, if no errors it should continue.
2) Neither file may exist; in that case the build should continue.
3) Both files will not be present at the same time (additional info).
I have tried to write a script, but I doubt it's even close to what I am looking for.
if [-f /home/(file.1) or (file.2)]
then
    -exec /home/(file.1) or (file.2)
    if [ $! -eq 0 ]; then
        echo "no errors continuing build"
    fi
else
    if [ $! -ne 0 ]; then
        exit
    fi
else
    echo "/home/(file.1) or (file.2) not found, continuing build"
fi
Any help is much appreciated.
Thanks in advance.
DOIT=""
for f in file1.sh file2.sh; do
    if [ -x /home/$f ]; then DOIT="/home/$f"; break; fi
done
if [ -z "$DOIT" ]; then echo "Files not found, continuing build"; fi
if [ -n "$DOIT" ]; then $DOIT && echo "No Errors" || exit 1; fi
For those confused about my syntax, try running this:
true && echo "is true" || echo "is false"
false && echo "is true" || echo "is false"
Just putting the line
file.sh
in your script should work, if you set up your script to exit on errors.
For example, if your script was
#!/bin/bash -e
echo one
./file.sh
echo two
Then if file.sh exists and is executable, it will run and your whole script will run. If not, the script will fail when it tries to execute the non-existent file.
If you want to execute one file or the other, extend the idea to the following:
#!/bin/bash -e
echo one
./file1.sh || ./file2.sh
echo two
This means that if file1.sh does not exist, it will try file2.sh; if that one is there, it will run and your whole script will run.
This gives preference to file1.sh, of course: if both files exist, only file1.sh will run.

wget with errorlevel bash output

I want to create a bash file (.sh) which does the following:
I call the script like ./download.sh www.blabla.com/bla.jpg
The script then has to echo whether the file was downloaded or not.
How can I do this? I know I can use the error level, but I'm new to Linux so...
Thanks in advance!
On Linux, every command sets an exit status when it finishes, and the shell exposes it in the special parameter $? (it is set on success as well as on failure). You can examine this return code and see whether wget reported an error.
#!/bin/bash
wget "$1" 2>/dev/null
RC=$?   # exit status of wget; no need to export it
if [ "$RC" = "0" ]; then
    echo "$1 OK"
else
    echo "$1 FAILED"
fi
You could name this script download.sh and make it executable:
chmod 755 download.sh
Then call it with the URL of the file you wish to download:
./download.sh www.google.com
You could try something like:
#!/bin/sh
[ -n "$1" ] || {
    echo "Usage: $0 [url to file to get]" >&2
    exit 1
}
wget "$1"
[ $? -ne 0 ] && {
    echo "Could not download $1" | mail -s "Uh Oh" you@yourdomain.com
    echo "Aww snap ..." >&2
    exit 1
}
# If we're here, the download succeeded, and the script will exit with a normal status
When making a script that will (likely) be called by other scripts, it is important to do the following:
Ensure argument sanity
Send e-mail, write to a log, or do something else so someone knows what went wrong
The >&2 simply redirects error messages to stderr, which allows a calling script to do something like this:
foo-downloader >/dev/null 2>/some/log/file.txt
Since it is a short wrapper, there's no reason to forsake a bit of sanity :)
This also allows you to selectively direct the output of wget to /dev/null; you might actually want to see it when testing, especially if you get an e-mail saying it failed :)
wget can also execute in a non-interactive way (for example with -b, which runs it in the background), in which case you can't catch the return code with $?.
One solution is to use the --server-response option and search for an http 200 status code.
Example:
wget --server-response -q -o wgetOut http://www.someurl.com
sleep 5
_wgetHttpCode=`cat wgetOut | gawk '/HTTP/{ print $2 }'`
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] `cat wgetOut`"
fi
Note: wget needs some time to finish its work; that is the reason for the sleep 5. This is not the best way to do it, but it worked OK for testing the solution.
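If wget runs in the foreground, however, you can check its exit status directly. A small sketch using wget's documented exit codes (0 = success, 8 = server issued an error response, such as a 404):
wget -q "$1"
rc=$?
case $rc in
    0) echo "$1 OK" ;;
    8) echo "$1 FAILED: server issued an error response (e.g. 404)" ;;
    *) echo "$1 FAILED with wget exit status $rc" ;;
esac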

Aborting a shell script if any command returns a non-zero value

I have a Bash shell script that invokes a number of commands.
I would like to have the shell script automatically exit with a return value of 1 if any of the commands return a non-zero value.
Is this possible without explicitly checking the result of each command?
For example,
dosomething1
if [[ $? -ne 0 ]]; then
    exit 1
fi
dosomething2
if [[ $? -ne 0 ]]; then
    exit 1
fi
Add this to the beginning of the script:
set -e
This will cause the shell to exit immediately if a simple command exits with a nonzero exit value. A simple command is any command not part of an if, while, or until test, or part of an && or || list.
See the bash manual on the "set" internal command for more details.
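A small sketch of that caveat in action:
#!/bin/bash
set -e
if ! grep -q pattern /nonexistent 2>/dev/null; then   # a failure inside an if test does not abort
    echo "grep failed, but the script keeps going"
fi
false             # a failing simple command: set -e aborts the script here
echo "never reached"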
It's really annoying to have a script stubbornly continue when something fails in the middle and breaks assumptions for the rest of the script. I personally start almost all portable shell scripts with set -e.
If I'm working with bash specifically, I'll start with
set -Eeuo pipefail
This covers more error handling in a similar fashion. I consider these as sane defaults for new bash programs. Refer to the bash manual for more information on what these options do.
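An annotated sketch of what each flag does (see the bash manual for the full details):
#!/bin/bash
set -E          # ERR traps are inherited by functions, command substitutions, and subshells
set -e          # exit immediately when a simple command fails
set -u          # treat expansion of an unset variable as an error
set -o pipefail # a pipeline fails if any command in it fails, not just the last one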
To add to the accepted answer:
Bear in mind that set -e sometimes is not enough, especially if you have pipes.
For example, suppose you have this script
#!/bin/bash
set -e
./configure > configure.log
make
... which works as expected: an error in configure aborts the execution.
Tomorrow you make a seemingly trivial change:
#!/bin/bash
set -e
./configure | tee configure.log
make
... and now it does not work. This is explained here, and a workaround (Bash only) is provided:
#!/bin/bash
set -e
set -o pipefail
./configure | tee configure.log
make
The if statements in your example are unnecessary. Just do it like this:
dosomething1 || exit 1
If you take Ville Laurikari's advice and use set -e then for some commands you may need to use this:
dosomething || true
The || true will make the command pipeline have a true return value even if the command fails, so the -e option will not kill the script.
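A tiny sketch (file.txt is illustrative):
set -e
grep -q pattern file.txt || true   # grep returns 1 on no match (2 on a missing file); || true keeps -e from aborting
echo "still running"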
If you have cleanup you need to do on exit, you can also use 'trap' with the pseudo-signal ERR. This works the same way as trapping INT or any other signal; bash throws ERR if any command exits with a nonzero value:
# Create the trap with
# trap COMMAND SIGNAME [SIGNAME2 SIGNAME3...]
trap "rm -f /tmp/$MYTMPFILE; exit 1" ERR INT TERM
command1
command2
command3
# Partially turn off the trap.
trap - ERR
# Now a control-C will still cause cleanup, but
# a nonzero exit code won't:
ps aux | grep blahblahblah
Or, especially if you're using "set -e", you could trap EXIT; your trap will then be executed when the script exits for any reason, including a normal end, interrupts, an exit caused by the -e option, etc.
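A sketch of that EXIT-trap variant (the temp-file name is illustrative):
#!/bin/bash
set -e
MYTMPFILE=$(mktemp)
trap 'rm -f "$MYTMPFILE"' EXIT   # runs on normal exit, on a set -e abort, and on interrupts
command1
command2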
The $? variable is rarely needed. The pseudo-idiom command; if [ $? -eq 0 ]; then X; fi should always be written as if command; then X; fi.
The cases where $? is required is when it needs to be checked against multiple values:
command
case $? in
    (0) X;;
    (1) Y;;
    (2) Z;;
esac
or when $? needs to be reused or otherwise manipulated:
if command; then
    echo "command successful" >&2
else
    ret=$?
    echo "command failed with exit code $ret" >&2
    exit $ret
fi
Run it with -e or set -e at the top.
Also look at set -u.
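A one-line illustration of what set -u catches (the typo is deliberate):
#!/bin/bash -eu
name="world"
echo "Hello, $nmae"   # typo: with -u this dies with an "unbound variable" error instead of silently expanding to nothing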
On error, the script below will print a RED error message and exit.
Put this at the top of your bash script:
# BASH error handling:
# exit on command failure
set -e
# keep track of the last executed command
trap 'LAST_COMMAND=$CURRENT_COMMAND; CURRENT_COMMAND=$BASH_COMMAND' DEBUG
# on error: print the failed command
trap 'ERROR_CODE=$?; FAILED_COMMAND=$LAST_COMMAND; tput setaf 1; echo "ERROR: command \"$FAILED_COMMAND\" failed with exit code $ERROR_CODE"; tput sgr0;' ERR INT TERM
An expression like
dosomething1 && dosomething2 && dosomething3
will stop processing when one of the commands returns with a non-zero value. For example, the following command will never print "done":
cat nosuchfile && echo "done"
echo $?
1
#!/bin/bash -e
should suffice. (Note that options on the shebang line take effect only when the script is executed directly; running it as bash script.sh bypasses the -e.)
I am just throwing in another one for reference, since there was an additional question to Mark Edgar's input; here is an additional example that touches on the topic overall:
[[ `cmd` ]] && echo success_else_silence
Note that this tests whether cmd produced non-empty output, not its exit status, so it only behaves like an exit-code check for commands that print exactly when they succeed.
For example, I want to make sure a partition is unmounted if mounted:
[[ `mount | grep /dev/sda1` ]] && umount /dev/sda1
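A quick sketch of the difference between testing output and testing exit status:
[[ $(false) ]] && echo ran            # prints nothing: false produces no output
[[ $(echo x; false) ]] && echo ran    # prints "ran": the output is non-empty even though the exit status is 1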
