don't fail jenkins build if execute shell fails - linux

As part of my build process, I am running a git commit as an execute shell step. However, if there are no changes in the workspace, Jenkins is failing the build. This is because git is returning an error code when there are no changes to commit. I'd like to either abort the build, or just mark it as unstable if this is the case. Any ideas?

To stop further execution when a command fails:
command || exit 0
To continue execution when a command fails:
command || true

Jenkins executes shell build steps using /bin/sh -xe by default. -x means to print every command executed; -e means to exit with failure if any command in the script fails.
So I think what happened in your case is that your git command exits with 1, and because of the default -e option, the shell picks up the non-zero exit code, ignores the rest of the script, and marks the step as a failure. We can confirm this if you post your build step script here.
If that's the case, you can put #!/bin/sh at the top so the script is executed without those options, or do a set +e or anything similar at the top of the build step to override this behavior.
Edited: Another thing to note is that if the last command in your shell script returns a non-zero code, the whole build step will still be marked as failed even with this setup. In that case, you can simply put a true command at the end to avoid that.
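For example, a build step along these lines (a minimal sketch; the git add and commit message are illustrative) stays green even when there is nothing to commit:
#!/bin/sh
# custom shebang: the step no longer runs with Jenkins' default -xe options
git add -A
git commit -m "scheduled commit"  # exits 1 when there is nothing to commit
true                              # the last command exits 0, so the step passes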
Another related question

If there is nothing to commit, git returns exit status 1, and the Execute-shell build step is marked as failed accordingly. You can use an OR statement || (double pipe).
git commit -m 'some message' || echo 'Commit failed. There is probably nothing to commit.'
That means: execute the second command if the first one failed (returned exit status > 0). The second command always returns 0. When there is nothing to commit (exit status 1 -> execute second command), echo returns 0 and the build step continues.
To mark build as unstable you can use post-build step Jenkins Text Finder. It can go through console output, match pattern (your echo) and mark build as unstable.

There is another smooth way to tell Jenkins not to fail.
You can isolate your commit in a build step and set the shell to not fail:
set +e
git commit -m "Bla."
set -e

This answer is correct, but it doesn't specify that the || exit 0 or || true must go inside the shell command itself. Here's a more complete example:
sh "adb uninstall com.example.app || true"
The above will work, but the following will fail:
sh "adb uninstall com.example.app" || true
Perhaps it's obvious to others, but I wasted a lot of time before I realized this.

I was able to get this working using the answer found here:
How to git commit nothing without an error?
git diff --quiet --exit-code --cached || git commit -m 'bla'
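Note that --cached compares the staged index against HEAD, so stage your changes first. A minimal sketch:
# stage everything, then commit only if the index differs from HEAD
git add -A
git diff --quiet --exit-code --cached || git commit -m 'bla'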

https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#sh-shell-script
If you include a returnStatus: true property, the shell's exit status is returned to your script instead of failing the step.

Jenkins determines the success/failure of a step by the return value of the step. For a shell step, that is the exit code of the last command. For both Windows CMD and (POSIX) Bash shells, you can set the return value manually by using exit 0 as the last command.

On the (more general) question in the title: to prevent Jenkins from failing, you can prevent it from seeing exit code 1. Example for ping:
bash -c "ping 1.2.3.9999 -c 1; exit 0"
And now you can, for example, capture the output of ping:
output=`bash -c "ping 1.2.3.9999 -c 1; exit 0"`
Of course, instead of ping ... you can use any command(s), including git commit.

You can use the Text-finder Plugin. It will allow you to check the output console for an expression of your choice then mark the build as Unstable.

For multiple shell commands, I ignore failures by adding:
set +e
commands
true

If you put these commands into a shell block:
false
true
your build will be marked as failed (at least one non-zero exit code), so you can add set +e to ignore it:
set +e
false
true
will not fail. However, this will fail even with set +e in place:
set +e
false
because the last shell command must exit with 0.

The following works for Mercurial by committing only if there are changes (hg id prints a trailing + when the working directory is modified), so the build fails only if the commit itself fails.
hg id | grep "+" || exit 0
hg commit -m "scheduled commit"

One more answer, with some tips that may help somebody. Remember to separate your commands according to the following rule:
command1 && command2 - means that command2 will be executed only if command1 succeeds
command1 ; command2 - means that command2 will be executed regardless of command1's result
For example:
String run_tests = sh(script: "set +e && cd ~/development/tests/ && gmake test ;set -e;echo 0 ", returnStdout: true).trim()
println run_tests
still executes the set -e and echo 0 commands even if gmake test fails (i.e. your tests failed), while the following snippet:
String run_tests = sh(script: "set +e && cd ~/development/tests/ && gmake test && set -e && echo 0 ", returnStdout: true).trim()
println run_tests
is slightly wrong: the set -e and echo 0 commands in && gmake test && set -e && echo 0 will be skipped, along with the println run_tests statement, because a failed gmake test aborts the Jenkins build. As a workaround you can switch to returnStatus: true, but then you will lose the output of your command.
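A minimal sketch of the difference, with false standing in for the failing gmake test:
set +e && false ; set -e ; echo 0      # prints 0: ';' runs the rest regardless
set +e && false && set -e && echo 0    # prints nothing: '&&' short-circuits on the failure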

Related

How to exit a shell script when a function is not found? [duplicate]

This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 3 years ago.
I've been writing some shell script and I would find it useful if there was the ability to halt the execution of said shell script if any of the commands failed. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, I certainly wouldn't want it to run ./configure afterwards.
Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e, -u, -x and -o pipefail options, like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
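A quick sketch of what pipefail changes:
#!/usr/bin/env bash
set -eo pipefail
false | true        # exits 0 without pipefail (status of the last command), 1 with it
echo "not reached"  # pipefail makes set -e abort on the line above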
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash)
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when a command that is not part of a test (such as an if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
Chapter 33. Options
Here is how to do it:
#!/bin/sh
abort()
{
echo >&2 '
***************
*** ABORTED ***
***************
'
echo "An error occurred. Exiting..." >&2
exit 1
}
trap 'abort' 0
set -e
# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0
echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
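A minimal sketch of that factoring (build_all is an illustrative name):
build_all() {
    ./configure --some-flags &&
    make &&
    make install
}
cd some_dir && build_all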
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
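A minimal one-line sketch of such a trap on ERR:
trap 'echo "command failed near line $LINENO" >&2; exit 1' ERR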
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit the error traps. The bash shell provides one such option for that using set
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
Adam Rosenfield's recommendation to use set -e is right in certain cases, but it has its own potential pitfalls. See GreyCat's BashFAQ - 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits
if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !.
This means set -e does not work in the following simple cases (detailed explanations can be found on the wiki):
Using the arithmetic operator let or $((..)) (bash 4.1 onwards) to increment a variable whose value is 0, as below (i++ evaluates to the old value 0, so let returns a non-zero status and the script exits before the echo):
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
If the offending command is not the last command executed via && or ||. For example, the script below survives when you might expect it to exit:
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When used in an if statement, the exit code of the if statement is the exit code of the last executed command. In the example below, the last executed command was echo, which exits 0, so errexit does not fire even though test -d failed:
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
When used with command substitution, failures inside the substitution are ignored, unless inherit_errexit is set (bash 4.4 onwards):
#!/usr/bin/env bash
set -e
foo=$(expr 1-1; true)
echo survived
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here the function call to f will not exit, because local has swallowed the error code of the command substitution:
set -e
f() { local var=$(somecommand that fails); }
g() { local var; var=$(somecommand that fails); }
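A runnable sketch of the difference, with false standing in for the failing command:
set -e
f() { local var=$(false); }       # 'local' masks the substitution's exit status
g() { local var; var=$(false); }  # a plain assignment preserves it
f; echo "f survived: status $?"   # prints 0
g                                 # the failing assignment aborts under set -e
echo "never reached"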
When used in a pipeline, and the offending command is not the last one. For example, the command below still goes through. One option is to enable pipefail, which makes the pipeline return the exit status of the last (rightmost) command that failed:
set -e
somecommand that fails | cat -
echo survived
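With pipefail enabled, the same pipeline aborts the script. A minimal sketch, with false in place of the failing command:
set -eo pipefail
false | cat -
echo "not reached"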
The ideal recommendation is to not use set -e at all and to implement your own error checking instead. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.

capture compile error and terminate bash script

I've written a bash script which installs 3 packages from source. The script is fairly simple, with the ./configure, make, make install statements written thrice (after cding into each source folder). To make it look a little cleaner, I have redirected the output to another file, like so: ./configure >> /usr/local/the_packages/install.log 2>&1.
The question is: if any one package fails to compile for some reason (I'm not even sure of what reason, because it has always run successfully till now - this is just something I want to add), I'd like to terminate the script and roll back.
I figure rolling back would simply be deleting the destination folders specified in prefix=/install/path, but how do I terminate the script itself?
Perhaps something like this could work:
./configure && make && make install || rm -rf /install/path
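That line only cleans up, though. If you also want the script to stop, group the rollback with an exit (a minimal sketch):
./configure && make && make install || { rm -rf /install/path; exit 1; }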
Option 1
You can check the return code of something run from a script with the $? bash variable.
moo@cow:~$ false
moo@cow:~$ echo $?
1
moo@cow:~$ true
moo@cow:~$ echo $?
0
Option 2
You can also check the return code by putting the command directly into an if statement, like so.
moo@cow:~$ if echo a < bad_command; then echo "success"; else echo "fail"; fi
fail
Invert the return code
The return code of a command can be inverted with the ! character.
moo@cow:~$ if ! echo a < bad_command; then echo "success"; else echo "fail"; fi
success
Example Script
Just for fun, I decided to write this script based on your question.
#!/bin/bash
_installed=()
do_rollback() {
echo "rolling back..."
for i in "${_installed[@]}"; do
echo "removing $i"
rm -rf "$i"
done
}
install_pkg() {
local _src_dir="$1"
local _install_dir="$2"
local _prev_dir="$PWD"
local _res=0
# Switch to source directory
cd "$_src_dir"
# Try configuring
if ! ./configure --prefix "$_install_dir"; then
echo "error: could not configure pkg in $_src_dir"
do_rollback
exit 1
fi
# Try making
if ! make; then
echo "error: could not make pkg in $_src_dir"
do_rollback
exit 1
fi
# Try installing
if ! make install; then
echo "error: could not install pkg from $_src_dir"
do_rollback
exit 1
fi
# Update installed array
echo "installed pkg from $_src_dir"
_installed=("${_installed[@]}" "$_install_dir")
# Restore previous directory
cd "$_prev_dir"
}
install_pkg /my/source/directory1 /opt/install/dir1
install_pkg /my/source/directory2 /opt/install/dir2
install_pkg /my/source/directory3 /opt/install/dir3
In two parts:
To make the script abort as soon as any command returns an error, you want to use set -e. From the man page (BUILTINS section; description of the set builtin):
-e
Exit immediately if a pipeline (which may consist of a single simple command),
a subshell command enclosed in parentheses, or one of the commands executed as
part of a command list enclosed by braces (see SHELL GRAMMAR above) exits with
a non-zero status. The shell does not exit if the command that fails is part of
the command list immediately following a while or until keyword, part of the
test following the if or elif reserved words, part of any command executed in a
&& or || list except the command following the final && or ||, any command in a
pipeline but the last, or if the command's return value is being inverted with
!. A trap on ERR, if set, is executed before the shell exits. This option
applies to the shell environment and each subshell environment separately (see
COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before
executing all the commands in the subshell.
You can set this in three ways: change your shebang line to #!/bin/bash -e; call the script as bash -e scriptname; or simply use set -e near the top of your script.
The second part of the question is (to paraphrase) how to catch the exit and clean up before exiting. The answer is referenced above: you want to set a trap on ERR.
To show you how these work together, here's a simple script being run. Note that as soon as we have a non-zero exit code, execution transfers to the signal handler which takes care of doing the cleanup:
james@bodacious:tmp$ cat test.sh
#!/bin/bash -e
cleanup() {
echo I\'m cleaning up!
}
trap cleanup ERR
echo Hello
false
echo World
james@bodacious:tmp$ ./test.sh
Hello
I'm cleaning up!
james@bodacious:tmp$

Why doesn't a bash script with set -e exit when two pipelines concatenated by && fail, but does when concatenated by ||? [duplicate]

This question already has answers here:
set -e and short tests
(4 answers)
Closed 9 years ago.
Let's say we have a script like this.
#!/bin/bash
set -e
ls notexist && ls notexist
echo still here
won't exit because of set -e
but
#!/bin/bash
set -e
ls notexist || ls notexist
echo still here
will.
Why?
The bash manual says for set -e:
The shell does not exit if the command that fails is [...]
part of any command executed in a && or || list except the
command following the final && or ||
The dash manual says:
If not interactive, exit immediately if any untested command fails.
The exit status of a command is considered to be explicitly tested
if the command is used to control an if, elif, while, or until;
or if the command is the left hand operand of an “&&” or “||” operator.
For the AND test, the shell stops early, during the test of the "left hand operand". Because there are still tests, it considers the entire command to be "tested" and thus does not abort.
For the OR test, the shell has to run all (both) tests, and once the last test fails, it concludes that there has been an unchecked error and thus aborts.
I agree it's a bit counterintuitive.
Because, as the Bash manual says regarding set -e:
The shell does not exit if the command that fails is part of the command
list immediately following a while or until keyword, part of the test
following the if or elif reserved words, part of any command executed in a
&& or || list except the command following the final && or ||, any command
in a pipeline but the last, or if the command's return value is being
inverted with !.
The ls notexist || ls notexist command terminates the shell because the second (last) ls notexist exits unsuccessfully. The ls notexist && ls notexist doesn't terminate the shell, because execution of "&& list" is stopped after the first ls notexist fails and the second (last) one is never reached.
BTW, it's easier and more reliable to do such tests using true and false instead of specific commands such as ls notexist.
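A self-contained version of both cases, using true and false:
#!/bin/bash
set -e
false && true    # the first command fails inside a && list: no exit
echo "still here"
false || false   # the command after the final || fails: the shell exits
echo "never printed"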

How to handle error/exception in shell script?

Below is my script, which I am executing in bash, and it works fine.
fileexist=0
for i in $( ls /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done); do
mv /data/read-only/clv/daily/Finished-HADOOP_EXPORT_&processDate#.done /data/read-only/clv/daily/archieve-wip/
fileexist=1
done
Problem statement:
In my above shell script, which has to be run daily via a cron job, I don't have any error/exception handling mechanism. If anything goes wrong, I don't know what has happened.
After the above script is executed, some other scripts depend on the data it provides, so I always get complaints from the people who depend on my script's data that something has gone wrong.
So is there any way I can get notified if anything goes wrong in my script? Suppose the cluster is under maintenance while my script is running; then it will definitely fail, so can I be notified when it fails, so that I know something has gone wrong?
Hope my question is clear enough.
Any thoughts will be appreciated.
You can check for the exit status of each command, as freetx answered, but this is manual error checking rather than exception handling. The standard way to get the equivalent of exception handling in sh is to start the script with set -e. That tells sh to exit with a non-zero status as soon as any executed command fails (i.e. exits with a non-zero exit status).
If it is intended for some command in such a script to (possibly) fail, you can use the construct COMMAND || true, which will force a zero exit status for that expression. For example:
#!/bin/sh
# if any of the following fails, the script fails
set -e
mkdir -p destdir/1/2
mv foo destdir/1/2
touch /done || true # allowed to fail
Another way to ensure that you are notified when things go wrong in a script invoked by cron is to adhere to the Unix convention of printing nothing unless an error occurred. Successful runs will then pass without notice, and unsuccessful runs will cause the cron daemon to notify you of the error via email. Note that local mail delivery must be correctly configured on your system for this to work.
It's customary for every Unix command-line utility to return 0 upon success and non-zero on failure. You can therefore use the $? variable to inspect the last return value and handle things accordingly.
For instance:
> ls
> file1 file2
> echo $?
> 0
> ls file.no.exist
> echo $?
> 1
Therefore, you can use this as rudimentary error detection to see if something goes wrong. So the normal approach would be
some_command
if [ $? -gt 0 ]
then
handle_error here
fi
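Equivalently, you can test the command directly and skip the explicit $? check (a minimal sketch; some_command and handle_error are the placeholders from above):
if ! some_command; then
    handle_error here
fi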
Well, if the other scripts are on the same machine, you could pgrep for this script from them and, if it is found, sleep for a while and re-check until the process is gone.
If the script is on another machine, or even local, another method is to produce a temp file on the remote machine, accessible via a running HTTP server, whose status (running or complete) the other scripts can check.
You could also wrap the script in another one that looks for these errors and emails you if it finds them; if not, it sends the result as normal to whoever needs it.
go=0
function check_running() {
    running=`pgrep -f your_script.sh | wc -l`
    if [ $running -gt 1 ]; then
        echo "already running $0 -- instances found $running"
        go=1
    fi
}
check_running
if [ $go -ge 1 ]; then
    execute your other script
else
    sleep 120
    check_running
fi

How to write a shell script that runs next command even if current command fails?

I currently need to automate a task where I need to execute 5 commands; even if any command fails, all subsequent commands should still be executed. Currently, if the 4th command fails, the shell script exits and doesn't run the 5th command.
So, what should I do so that the shell script runs all the remaining commands even if the current command fails?
Assuming that you have bash as your script interpreter
(or /bin/sh is a symbolic link to bash),
please check if your script does not have
set -e
anywhere in the code.
Well, what's wrong with the code below?
#!/bin/bash
cmd1
cmd2
cmd3
cmd4
cmd5
I would do
|| true
Example:
( set -e; false ; echo not here; )
( set -e; false || true ; echo still here; )
Example:
command1 || true
command2 || true
command3 || true
command4 || true
command5 || true
That way you can keep the -e flag; it's a good flag.
The trap command may be helpful here.
