Execute Shell script after other script got executed successfully - linux

Problem statement:
I have four shell scripts, and I want each one to run only if the previous one succeeded. Currently I am running them like this:
./verify-export-realtime.sh
sh -x lca_query.sh
sh -x liv_query.sh
sh -x lqu_query.sh
So, to make each script run only after the previous one succeeds, do I need to do something like the following? I am not sure whether this is right. If any script fails for any reason, it will print "Failed due to some reason", correct?
./verify-export-realtime.sh
RET_VAL_STATUS=$?
echo $RET_VAL_STATUS
if [ $RET_VAL_STATUS -ne 0 ]; then
    echo "Failed due to some reason"
    exit
fi
sh -x lca_query.sh
RET_VAL_STATUS=$?
echo $RET_VAL_STATUS
if [ $RET_VAL_STATUS -ne 0 ]; then
    echo "Failed due to some reason"
    exit
fi
sh -x liv_query.sh
RET_VAL_STATUS=$?
echo $RET_VAL_STATUS
if [ $RET_VAL_STATUS -ne 0 ]; then
    echo "Failed due to some reason"
    exit
fi
sh -x lqu_query.sh

The shell provides an operator && to do exactly this. So you could write:
./verify-export-realtime.sh && \
sh -x lca_query.sh && \
sh -x liv_query.sh && \
sh -x lqu_query.sh
Or you could drop the line continuations (\) and write it all on one line:
./verify-export-realtime.sh && sh -x lca_query.sh && sh -x liv_query.sh && sh -x lqu_query.sh
If you want to know how far it got, you can add extra commands that just set a variable:
done=0
./verify-export-realtime.sh && done=1 &&
sh -x lca_query.sh && done=2 &&
sh -x liv_query.sh && done=3 &&
sh -x lqu_query.sh && done=4
The value of $done at the end tells you how many commands completed successfully. $? is set to the exit status of the last command run (the one that failed), or 0 if all succeeded.

You can simply run a chain of scripts on the command line (or from another script) using the "&&" operator; the first failing command breaks the chain:
$ script1.sh && echo "First done, running the second" && script2.sh && echo "Second done, running the third" && script3.sh && echo "Third done, cool!"
And so on. The operation will break once one of the steps fails.

That should be right. You can also print the error code, if necessary, by echoing the $? variable. You can also define your own return codes by returning your own values in those scripts and checking them in this main one. That might be more helpful than "The script failed for some reason".
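For instance, a caller can map distinct exit codes to distinct messages. This is a minimal sketch: the backup function and its code meanings are hypothetical stand-ins for a real child script, just to keep the example self-contained:

```shell
#!/bin/sh
# Hypothetical stand-in for a child script that signals distinct failures:
#   exit 0 -> success, exit 2 -> config missing, exit 3 -> destination unreachable
backup() { return 3; }

backup
rc=$?
case $rc in
    0) echo "backup succeeded" ;;
    2) echo "config file missing" >&2 ;;
    3) echo "destination unreachable" >&2 ;;
    *) echo "backup failed with unknown code $rc" >&2 ;;
esac
```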

If you want more flexible error handling:
script1.sh
rc=$?
if [ ${rc} -eq 0 ]; then
    echo "script1 pass, starting script2"
    script2.sh
    rc=$?
    if [ ${rc} -eq 0 ]; then
        echo "script2 pass"
    else
        echo "script2 failed"
    fi
else
    echo "script1 failed"
fi

The standard way to do this is to simply add a shell option that causes the script to abort if any simple command fails. Simply write the interpreter line as:
#!/bin/sh -e
or add the command:
set -e
(It is also common to do cmd1 && cmd2 && cmd3 as mentioned in other solutions.)
You absolutely should not attempt to print an error message. The command should print a relevant error message before it exits if it encounters an error. If the commands are not well behaved and do not write useful error messages, you should fix them rather than trying to guess what error they encountered. If you do write an error message, at the very least write it to the correct place. Errors belong on stderr:
echo "Some error occurred" >&2

As @William Pursell said, your scripts really should report their own errors. If you also need error reporting in the calling script, the easiest way to do it is like this:
if ! ./verify-export-realtime.sh; then
    echo "Error running verify-export-realtime.sh; rest of script cancelled." >&2
elif ! sh -x lca_query.sh; then
    echo "Error running lca_query.sh; rest of script cancelled." >&2
elif ! sh -x liv_query.sh; then
    echo "Error running liv_query.sh; rest of script cancelled." >&2
elif ! sh -x lqu_query.sh; then
    echo "Error running lqu_query.sh." >&2
fi

Related

How can I execute more than one command in exec command of linux expect

I'm trying to detect whether a host is available using expect on Linux.
Currently, I can use the following commands via bash to get the return value and check the host's status:
#!/usr/bin/bash
nc -z $DUT_IP 22 && echo $?
I want to do the same thing via expect, but it seems I failed.
#!/usr/bin/expect --
set DUT_IP "192.168.1.9"
set result [exec nc -z $DUT_IP 22 && echo $?]
send_user "$result\n\n"
I got the following error messages:
invalid port &&
while executing
"exec nc -z $DUT_IP 22 && echo $?"
invoked from within
"set result [exec nc -z $DUT_IP 22 && echo $?] "
(file "reboot.exp" line 44)
Use of && to separate commands is a shell construct, not an expect construct. You can explicitly launch a shell for that command list:
set result [exec sh -c "nc -z $DUT_IP 22 && echo $?"]
Note that this will only print the exit status if the command succeeded, so result will be either "0" or empty. If you want the status printed regardless of success, use ; instead of &&.
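The difference is easy to see with plain shell commands, using true and false as stand-ins for nc:

```shell
# With &&, the status is printed only when the command succeeds:
false && echo $?    # prints nothing: false failed, so echo never runs
true  && echo $?    # prints 0

# With ;, the status is printed unconditionally:
false ; echo $?     # prints 1
true  ; echo $?     # prints 0
```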

$? in shell script

I'm trying to figure out what this means and how $? gets populated in Linux. I tried searching, but if someone could clarify, that would be great:
exitstat=$?
if [ $exitstat -ne 0 ]
then
    echo -e "Could Not Extract"
    echo -e "Aborting Script `date`"
    exit $exitstat
fi
The code above that is:
_xfile << %% 2> /files/thefile-7000.log | _afile -x -r 10 2> /files/thefile-7000.log > /files/thefile.7000
OperatorHigh = $finalnumber
%%
$? expands to the exit status of the most recent foreground command.
Since your prior command is a pipeline, the exit status is that of the last command in the pipeline -- in this case, _afile -- unless the pipefail shell option is set, in which case failures elsewhere in the pipeline can also make exit status nonzero.
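A quick demonstration of both behaviors (pipefail is a bash feature, not POSIX sh):

```shell
#!/bin/bash
# Without pipefail, the pipeline's status is that of its last command:
false | true
echo $?            # prints 0

# With pipefail, any failing stage makes the whole pipeline fail:
set -o pipefail
false | true
echo $?            # prints 1
```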

Check all commands exit code within a bash script

Consider the case where I have a very long bash script with several commands. Is there a simple way to check the exit status of ALL of them, so that if one fails I can show which command failed and its return code?
I mean, I don't want to write a check like the following after each one:
my_command
status=$?
if [ $status -ne 0 ]; then
    # error case
    echo "error while executing my_command, ret code: $status" >&2
    exit 1
fi
You can do trap "cmd" ERR, which invokes cmd when a command fails. However this solution has a couple of drawbacks. For example, it does not catch a failure inside a pipe.
In a nutshell, you are better off doing your error management properly, on a case by case basis.
You can write a wrapper function that runs a command and reports any failure (avoid naming it test, which would shadow the shell builtin of the same name):
check() {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}
check command1
check command2
One can test the value of $? or simply put set -e at the top of the script which will cause it to exit upon any command which errors.
#!/bin/sh
set -xe
my_command1
# never makes it here if my_command1 fails
my_command2

Shell script continues to run even after exit command

My shell script is as shown below:
#!/bin/bash
# Make sure only root can run our script
[ $EUID -ne 0 ] && (echo "This script must be run as root" 1>&2) || (exit 1)
# other script continues here...
When I run the above script as a non-root user, it prints the message "This script..." but it does not exit there; it continues with the remaining script. What am I doing wrong?
Note: I don't want to use if condition.
You're running echo and exit in subshells. The exit call will only leave that subshell, which is a bit pointless.
Try with:
#! /bin/sh
if [ $EUID -ne 0 ]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi
echo hello
If for some reason you don't want an if condition, just use:
#! /bin/sh
[ $EUID -ne 0 ] && echo "This script must be run as root" 1>&2 && exit 1
echo hello
Note: no () and fixed boolean condition. Warning: if echo fails, that test will also fail to exit. The if version is safer (and more readable, easier to maintain IMO).
I think you need && rather than ||, since you want to echo and exit (not echo or exit).
In addition (exit 1) will run a sub-shell that exits rather than exiting your current shell.
The following script shows what you need:
#!/bin/bash
[ $1 -ne 0 ] && (echo "This script must be run as root." 1>&2) && exit 1
echo Continuing...
Running this with ./myscript 0 gives you:
Continuing...
while ./myscript 1 gives you:
This script must be run as root.
I believe that's what you were looking for.
I would write that as:
(( $EUID != 0 )) && { echo "This script must be run as root" 1>&2; exit 1; }
Using { } for grouping, which executes in the current shell. Note that the spaces around the braces and the ending semi-colon are required.

Aborting a shell script if any command returns a non-zero value

I have a Bash shell script that invokes a number of commands.
I would like to have the shell script automatically exit with a return value of 1 if any of the commands return a non-zero value.
Is this possible without explicitly checking the result of each command?
For example,
dosomething1
if [[ $? -ne 0 ]]; then
exit 1
fi
dosomething2
if [[ $? -ne 0 ]]; then
exit 1
fi
Add this to the beginning of the script:
set -e
This will cause the shell to exit immediately if a simple command exits with a nonzero exit value. A simple command is any command not part of an if, while, or until test, or part of an && or || list.
See the bash manual on the "set" internal command for more details.
It's really annoying to have a script stubbornly continue when something fails in the middle and breaks assumptions for the rest of the script. I personally start almost all portable shell scripts with set -e.
If I'm working with bash specifically, I'll start with
set -Eeuo pipefail
This covers more error handling in a similar fashion. I consider these as sane defaults for new bash programs. Refer to the bash manual for more information on what these options do.
To add to the accepted answer:
Bear in mind that set -e sometimes is not enough, especially if you have pipes.
For example, suppose you have this script
#!/bin/bash
set -e
./configure > configure.log
make
... which works as expected: an error in configure aborts the execution.
Tomorrow you make a seemingly trivial change:
#!/bin/bash
set -e
./configure | tee configure.log
make
... and now it does not work. This is explained here, and a workaround (Bash only) is provided:
#!/bin/bash
set -e
set -o pipefail
./configure | tee configure.log
make
The if statements in your example are unnecessary. Just do it like this:
dosomething1 || exit 1
If you take Ville Laurikari's advice and use set -e then for some commands you may need to use this:
dosomething || true
The || true makes the command pipeline have a true return value even if the command fails, so the -e option will not kill the script.
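A common case is grep, which exits 1 when it finds no matches; that would kill a set -e script even though "no matches" may be perfectly fine. A minimal sketch (notes.txt is a hypothetical file name):

```shell
#!/bin/sh
set -e
# grep exits 1 on "no match"; || true keeps that from aborting the script:
matches=$(grep -c 'TODO' notes.txt 2>/dev/null || true)
echo "TODO count: ${matches:-0}"
echo "reached the end"
```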
If you have cleanup you need to do on exit, you can also use 'trap' with the pseudo-signal ERR. This works the same way as trapping INT or any other signal; bash throws ERR if any command exits with a nonzero value:
# Create the trap with
# trap COMMAND SIGNAME [SIGNAME2 SIGNAME3...]
trap "rm -f /tmp/$MYTMPFILE; exit 1" ERR INT TERM
command1
command2
command3
# Partially turn off the trap.
trap - ERR
# Now a control-C will still cause cleanup, but
# a nonzero exit code won't:
ps aux | grep blahblahblah
Or, especially if you're using "set -e", you could trap EXIT; your trap will then be executed when the script exits for any reason, including a normal end, interrupts, an exit caused by the -e option, etc.
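A minimal cleanup sketch using the EXIT trap:

```shell
#!/bin/sh
set -e
tmpfile=$(mktemp)
# The EXIT trap fires on normal completion, on a set -e abort,
# and on interrupts, so the temp file is removed in every case:
trap 'rm -f "$tmpfile"' EXIT

echo "scratch data" > "$tmpfile"
# ... real work here ...
echo "done"    # no explicit rm needed
```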
The $? variable is rarely needed. The pseudo-idiom command; if [ $? -eq 0 ]; then X; fi should always be written as if command; then X; fi.
A case where $? is required is when it needs to be checked against multiple values:
command
case $? in
(0) X;;
(1) Y;;
(2) Z;;
esac
or when $? needs to be reused or otherwise manipulated:
if command; then
echo "command successful" >&2
else
ret=$?
echo "command failed with exit code $ret" >&2
exit $ret
fi
Run it with -e or set -e at the top.
Also look at set -u.
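set -u (nounset) turns expansion of an unset variable into a fatal error instead of a silent empty string. A quick sketch, where $instdir is deliberately never set:

```shell
# Without -u, the unset variable silently expands to nothing:
sh -c 'echo "removing: $instdir/bin"'              # prints "removing: /bin"
# With -u, the same expansion aborts with an error:
sh -c 'set -u; echo "removing: $instdir/bin"' \
    || echo "caught unset variable"
```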
On error, the script below prints a RED error message and exits.
Put this at the top of your bash script:
# BASH error handling:
# exit on command failure
set -e
# keep track of the last executed command
trap 'LAST_COMMAND=$CURRENT_COMMAND; CURRENT_COMMAND=$BASH_COMMAND' DEBUG
# on error: print the failed command
trap 'ERROR_CODE=$?; FAILED_COMMAND=$LAST_COMMAND; tput setaf 1; echo "ERROR: command \"$FAILED_COMMAND\" failed with exit code $ERROR_CODE"; tput sgr0;' ERR INT TERM
An expression like
dosomething1 && dosomething2 && dosomething3
will stop processing when one of the commands returns with a non-zero value. For example, the following command will never print "done":
$ cat nosuchfile && echo "done"
cat: nosuchfile: No such file or directory
$ echo $?
1
#!/bin/bash -e
should suffice.
I am just throwing in another one for reference, since there was a follow-up question to Mark Edgar's answer; here is an additional example that touches on the topic overall:
[[ `cmd` ]] && echo success_else_silence
Note that [[ `cmd` ]] tests whether the command printed any output, not its exit status, so it only resembles cmd || exit errcode when success implies non-empty output.
For example, I want to make sure a partition is unmounted if mounted:
[[ `mount | grep /dev/sda1` ]] && umount /dev/sda1