Is it possible to catch an error exit code in node.js from shell script? - node.js

As the title of the question says, here is the scenario.
A shell script file script.sh does some operations and at some point needs to launch a node file.
#! /bin/bash
node "$(dirname "$0")/script.js" "$input"
echo "${?}\n"
In the node file script.js there are some controls, and in case of error the script returns with an error exit code.
process.exit(1)
Is it possible to catch this error exit code so that the remaining commands in the shell script script.sh are still executed?
Currently the execution is interrupted with the error Command failed with exit code 1., which is expected.
But I would like to know if, in the shell script, I can catch this error and continue to execute the last part of the code, echo "${?}\n".
Thanks in advance

You can do something like this in your bash script, in case your node script returns 1. Note that $? only holds the exit code of the most recent command, so it must be saved into a variable before another command (such as echo) overwrites it:
node "$(dirname "$0")/script.js" "$input"
rc=$?
if [ $rc -ne 0 ]; then
    echo "Script failed with exit code ${rc}" 1>&2
    exit 1
fi
echo "Script: ${rc} - Successful"

Related

How to fail a bash script when while/if/etc has errors?

I run a Jenkins pipeline job with Groovy. The Groovy calls bash scripts for each step.
I want to fail the whole job when something in the way has errors.
For Groovy I use the returnStatus: true.
For Bash I use set -e.
But a bash script with set -e does not exit if, for example, a while statement has errors. This is what should actually happen, according to the Linux manual page for 'set'.
I would like to know how to exit immediately in that scenario.
The script:
[jenkins-user@jenkins ~]$ cat script.sh
#!/bin/bash
set -xe
FILE=commands.txt
echo "echos before while"
# Run the commands in the commands file
while read COMMAND
do
$COMMAND
done < $FILE
echo $?
echo "echos after faulty while"
Let's say 'commands.txt' doesn't exist.
Running script:
[jenkins-user@jenkins ~]$ sh script.sh
echos before while
script.sh: line 13: commands.txt: No such file or directory
1
echos after faulty while
[jenkins-user@jenkins ~]$ echo $?
0
Although the while statement returns exit code 1, the script continues and ends successfully, as checked right after, with echo $?.
This is how I force the Groovy to fail, after a step with a bash/python/etc command/script returns a non-zero exit code:
pipeline {
    agent any
    stages {
        stage("A") {
            steps {
                script {
                    def rc = sh(script: "sh A.sh", returnStatus: true)
                    if (rc != 0) {
                        error "Failed, exiting now..."
                    }
                }
            }
        }
    }
}
First question: how can I make the shell script fail when the while/if/etc statements have errors? I know I can use command || exit 1, but it doesn't seem elegant if I have dozens of statements like this in the script.
Second question: is my Groovy error handling correct? Can anyone suggest an even better way? Or maybe there is a Jenkins plugin/official way to do so?
For the first question, this link may be helpful: Aborting a shell script if any command returns a non-zero value
Second question: you can improve your error handling by using try and catch for exception handling.
try {
    def rc = sh(script: "sh A.sh", returnStatus: true)
    if (rc != 0) {
        error "Failed, exiting now..."
    }
}
catch (Exception er) {
    errorMessage = er.getMessage();
}
About the Bash script.
Your issue is that the failed redirection does not abort the bash script, despite the use of set -e. I was surprised myself. But it's my first disappointment with set -e, so now I tend not to trust it, and I overuse constructs like $command || exit 1 ...
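For what it's worth, that || exit 1 style looks like this (a sketch with placeholder commands):

```shell
#!/bin/bash
# Each command is followed by "|| exit 1", so any failure stops the
# script immediately, without relying on set -e.
workdir=$(mktemp -d) || exit 1
cd "$workdir"        || exit 1
echo "all steps succeeded"
```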
Here you can do the following:
set -xe -o pipefail
cat $FILE | while read command; do $command ; done
But the whole loop should be simplified into:
bash $FILE
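That simplification works because bash file returns the exit status of the last command in the file, so a failure in commands.txt surfaces directly. A sketch using a throwaway file:

```shell
#!/bin/bash
# Sketch: run a file of commands directly with bash instead of a read loop.
FILE=$(mktemp)
printf '%s\n' 'echo hello' 'false' > "$FILE"   # the last command fails

bash "$FILE"
status=$?                                      # exit status of the last command in the file
echo "commands file exited with status $status"
rm -f "$FILE"
```

This prints "hello" followed by "commands file exited with status 1".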
Why don't you just use the while exit code and return it? (See this modified version of your script, the last lines)
[jenkins-user@jenkins ~]$ cat script.sh
#!/bin/bash
set -xe
FILE=commands.txt
echo "echos before while"
# Run the commands in the commands file
while read COMMAND
do
$COMMAND
done < $FILE
status=$?
echo "echos after faulty while"
exit $status
[jenkins-user@jenkins ~]$ cat script.sh
#!/bin/bash
set -xe
FILE=commands.txt
echo "echos before while"
# Run the commands in the commands file
while read COMMAND
do
$COMMAND
done < $FILE
echo $?
echo "echos after faulty while"
When you run echo $? after this script it will always print 0, because the last command executed was echo "echos after faulty while". You can add an exit 1 at the end of your script. In exit 1, the number 1 is the error code; you can use another value. So the script will be:
[jenkins-user@jenkins ~]$ cat script.sh
#!/bin/bash
set -xe
FILE=commands.txt
echo "echos before while"
# Run the commands in the commands file
while read COMMAND
do
$COMMAND
done < $FILE
echo $?
exit 1

How to run bash script while it returns code 0?

I have a bash script with many lines of code and I need to run it while it returns $? == 0, but in case it has an error I need to stop it and exit with code 1.
The question is how to do it?
I tried to use the set -e command, but Jenkins does not mark the build as failed; to Jenkins it looks like a success.
I also need to get the error message to show it in my Jenkins log.
I managed to get the error code (in my case it will be 126), but how do I get the error message?
main file
fileWithError.sh
rc=$?
if [[ $rc != 0 ]]; then
    echo "exit ${rc}"
fi
fileWithError.sh
#!/bin/sh
set -e
echo "Test"
agjfsjgfshgd
echo "Test2"
echo "Test3"
Just add the command set -e to the beginning of the file
This should look something like this:
#!/bin/sh
set -e
#...Your code...
I think you just want:
#!/bin/sh
while fileWithError.sh; do
sleep 1;
done
echo fileWithError.sh failed!! >&2
Note that if the script is written well, then the echo is
redundant as fileWithError.sh should have written a decent
error message already. Also, the sleep may not be needed, but is useful to prevent a fast loop if the script succeeds quickly.
You can get the explicit return value, but it requires a bit of refactoring.
#!/bin/sh
true
while test $? = 0; do fileWithError.sh; done
echo fileWithError.sh failed with status $?!! >&2
since, in the first construction, the return value of the while loop would be the
return value of sleep.
It's not quite easy to get only the error code.
How about this ...
#!/bin/bash
Msg=$(fileWithError.sh 2>&1)  # redirect all error messages to stdout
if [ "$?" -ne 0 ]  # Not Equal
then
    echo "$Msg"
    exit 1
fi
exit 0
You catch all messages created by fileWithError.sh, and if the program returned an error code then you already have the error message saved in a variable.
But this has a disadvantage: you will temporarily store all messages created by fileWithError.sh until the error appears.
You can filter the error message with echo "$Msg" | tail -n 1, but it's not 100% safe.
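A sketch of that tail -n 1 filtering, assuming the interesting error happens to be the last line the script printed (which is exactly why it is not 100% safe):

```shell
#!/bin/bash
# Keep only the last line of the captured output; often, but not
# always, that is the error message.
Msg=$(printf 'some progress output\nError: something broke\n')
echo "$Msg" | tail -n 1
```

This prints only "Error: something broke".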
You should also make some changes in fileWithError.sh...
Replace set -e with trap "exit 1" ERR; this will exit the script on errors.
Hope this will help.

Check all commands exit code within a bash script

Consider the case where I have a very long bash script with several commands. Is there a simple way to check the exit status for ALL of them easily. So if there's some failure I can show which command has failed and its return code.
I mean, I don't want to use a test after each one of them, with checks like the following:
my_command
status=$?
if [ $status -ne 0 ]; then
    # error case
    echo "error while executing my_command, ret code: $status"
    exit 1
fi
You can do trap "cmd" ERR, which invokes cmd when a command fails. However this solution has a couple of drawbacks. For example, it does not catch a failure inside a pipe.
In a nutshell, you are better off doing your error management properly, on a case by case basis.
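To illustrate both the mechanism and the pipe caveat (a sketch; the failing commands are placeholders):

```shell
#!/bin/bash
# trap 'cmd' ERR runs cmd whenever a simple command fails...
trap 'echo "a command failed" >&2' ERR

false        # fails on its own: the ERR trap fires

# ...but a failure on the left of a pipe is masked by the pipeline's
# overall (successful) exit status, so the trap does NOT fire here:
false | true

echo "done"
```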
You can write a function that launches:
function test {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}
test command1
test command2
One can test the value of $? or simply put set -e at the top of the script which will cause it to exit upon any command which errors.
#!/bin/sh
set -xe
my_command1
# never makes it here if my_command1 fails
my_command2

Re-installing Linux O.S. and then running bunch of commands in a .sh script , how to stop the script if something fails?

If I copy and paste all the commands into the terminal, some do not even go through.
So the solution is perhaps to turn the file into an executable file and then execute it.
But what if some commands fail? The script keeps on executing the other commands.
Obviously there is no solution to this, right?
The easiest way to do this is to use the -e option in your shell. For example:
#!/bin/sh -e
command1
command2
In this script, if command1 fails, then the script as a whole will fail at that point without running any further commands.
You can check the error code from commands you run
#!/bin/bash
function test {
    "$@"
    status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1"
        exit 255
    fi
    return $status
}
test ls
test ps -ef
test not_a_command
Taken from here; for more information see Checking Bash exit status of several commands efficiently
@Terminal, you were almost there.
If you just stick && on the end of each command, then execution will stop with the first failure (ie. the first command that returns a non-zero exit code).
Example:
#!/bin/sh
true &&
echo 'got here' &&
echo 'got here too' &&
false &&
echo 'also got here'
produces the output
got here
got here too
(Actually, I thought it would also require line-continuation markers: && \, but a quick test showed otherwise.)
Note: All of the above assumes that your shell is bash; I can't speak for other shells.

Shell script continues to run even after exit command

My shell script is as shown below:
#!/bin/bash
# Make sure only root can run our script
[ $EUID -ne 0 ] && (echo "This script must be run as root" 1>&2) || (exit 1)
# other script continues here...
When I run the above script as a non-root user, it prints the message "This script..." but it does not exit there; it continues with the remaining script. What am I doing wrong?
Note: I don't want to use if condition.
You're running echo and exit in subshells. The exit call will only leave that subshell, which is a bit pointless.
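This is easy to demonstrate in isolation (a sketch):

```shell
#!/bin/bash
# exit inside ( ) only leaves the subshell, so the parent script continues.
(exit 1)
echo "still running, subshell exit status was $?"
```

This prints "still running, subshell exit status was 1".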
Try with:
#! /bin/sh
if [ $EUID -ne 0 ] ; then
    echo "This script must be run as root" 1>&2
    exit 1
fi
echo hello
If for some reason you don't want an if condition, just use:
#! /bin/sh
[ $EUID -ne 0 ] && echo "This script must be run as root" 1>&2 && exit 1
echo hello
Note: no () and fixed boolean condition. Warning: if echo fails, that test will also fail to exit. The if version is safer (and more readable, easier to maintain IMO).
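The warning can be seen by substituting a failing command for the echo (a sketch, with false standing in for an echo that fails):

```shell
#!/bin/bash
# In "cond && msg && exit 1", if the middle command fails the && chain
# stops before reaching exit 1, so the script keeps running.
[ 1 -ne 0 ] && false && exit 1
echo "oops, still running"
```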
I think you need && rather than ||, since you want to echo and exit (not echo or exit).
In addition, (exit 1) will run a sub-shell that exits, rather than exiting your current shell.
The following script shows what you need:
#!/bin/bash
[ $1 -ne 0 ] && (echo "This script must be run as root." 1>&2) && exit 1
echo Continuing...
Running this with ./myscript 0 gives you:
Continuing...
while ./myscript 1 gives you:
This script must be run as root.
I believe that's what you were looking for.
I would write that as:
(( $EUID != 0 )) && { echo "This script must be run as root" 1>&2; exit 1; }
Using { } for grouping, which executes in the current shell. Note that the spaces around the braces and the ending semi-colon are required.
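The difference between ( ) and { } can be sketched side by side (in bash):

```shell
#!/bin/bash
# ( ) runs in a subshell: variable assignments and exit stay inside it.
(x=subshell; exit 1)
echo "after subshell: x=${x:-unset}, status was $?"

# { } runs in the current shell: assignments persist, and an exit here
# would terminate the whole script (so this sketch only sets a variable).
{ x=current; }
echo "after group: x=$x"
```

The first echo reports x=unset and status 1; the second reports x=current.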
