capture exit code of child script called via ssh from parent script using shell script - linux

I have a child script called deployment.sh and a parent script called deploy_base.sh. From the parent deploy_base.sh, I invoke the child script via ssh on another server. The child script actually performs the deployment. If any command fails in the child script, the error status should be sent back to the parent script, so that I can send an email based on the success or failure exit code from the child script.
Right now, no matter which command fails in the child script, it always goes to the success block. Please help.
ssh user@10.0.0.1 "/home/scripts/deployment.sh" DEV 2>&1 | tee /home/release/DEV.log
if [ $? -ne 0 ]
then
mail -s "DEPLOYMENT FAILED" -a /home/DEV.log \
-r "Deployment_Log@example.com" user@example.com > /dev/null 2>&1
else
mail -s "DEPLOYMENT SUCCESS" -a /home/DEV.log \
-r "Deployment_Log@example.com" user@example.com > /dev/null 2>&1
fi

man bash
The return status of a pipeline is the exit status of the last command,
unless the pipefail option is enabled. If pipefail is enabled, the
pipeline's return status is the value of the last (rightmost) command
to exit with a non-zero status, or zero if all commands exit successfully.
In your case $? is the exit status of tee, but
set -o pipefail
solves the problem.
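For reference, here's a minimal sketch of the parent script with that fix applied, reusing the commands, paths, and addresses from the question (I'm assuming the log you attach should be the same /home/release/DEV.log that tee writes):
#!/bin/bash
set -o pipefail   # pipeline status now reflects ssh/deployment.sh, not tee

ssh user@10.0.0.1 "/home/scripts/deployment.sh" DEV 2>&1 | tee /home/release/DEV.log
status=$?

if [ $status -ne 0 ]; then
    mail -s "DEPLOYMENT FAILED" -a /home/release/DEV.log \
        -r "Deployment_Log@example.com" user@example.com > /dev/null 2>&1
else
    mail -s "DEPLOYMENT SUCCESS" -a /home/release/DEV.log \
        -r "Deployment_Log@example.com" user@example.com > /dev/null 2>&1
fi
Because ssh exits with the exit status of the remote command, a failure inside deployment.sh propagates through ssh, and with pipefail it survives the tee.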

Related

Is there a way in a shell script to do: if [ <script exits with non-0 value> ]; then <something>

In my shell script, I want to do the following:
if the shell script fails (exits with a non-zero value), then do something before the process exits.
How could I insert such an if block into my shell script?
Is that feasible?
For example,
set -e
echo $password > confidential.txt
rm <file-that-does-not-exist>
rm confidential.txt
I want to make sure that confidential.txt is removed in any case.
Use the trap command:
trap 'if [ $? -ne 0 ]; then echo failed; fi' EXIT
The EXIT trap is run when the script exits, and $? contains the status of the last command before it exited.
Note that a shell script's exit status is the status of the last command that it executed. So in your script, it will be the status of
rm confidential.txt
not the error from
rm filethatdoesnotexist
Unless you use set -e in the script, which makes it exit as soon as any command gets an error.
Use trap with the EXIT pseudo signal:
remove_secret () {
rm -f /path/to/confidential.txt
}
trap remove_secret EXIT
You probably don't want the file to remain if the script exits with 0, so EXIT happens regardless of the exit code.
Note that without set -e, rm on a non-existent file doesn't stop the script.
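Putting the pieces together, a minimal sketch under the question's setup (the command that uses the secret is hypothetical):
#!/bin/bash
set -e                                      # abort on the first failing command

remove_secret () {
    rm -f /path/to/confidential.txt
}
trap remove_secret EXIT                     # runs on success, failure, or set -e abort

printf '%s\n' "$password" > /path/to/confidential.txt
use-the-secret /path/to/confidential.txt    # hypothetical command that may fail
# confidential.txt is removed by the EXIT trap either way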
Assuming you're on Linux (or another operating system with /proc/*/fd), you have an even better option: Delete confidential.txt before putting the password into it at all.
That can look something like the following:
exec 3<>confidential.txt
rm -f -- confidential.txt
printf '%s\n' "$password" >&3
...and then, to read from that deleted file:
cat "/proc/$$/fd/3" ## where $$ is the PID of the shell that ran the exec command above
Because the file is already deleted, it's guaranteed to be eligible for garbage collection by your filesystem the moment your script (or the last program it started inheriting its file descriptors) exits or is killed, even if it's killed in a way that doesn't permit traps or signal processing to take place.
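As a self-contained sketch of that approach (the consumer command is hypothetical):
#!/bin/bash
exec 3<>confidential.txt              # open fd 3 read/write on the file
rm -f -- confidential.txt             # unlink it; fd 3 keeps the data reachable
printf '%s\n' "$password" >&3         # write the secret into the now-nameless file

# Hand the secret to a consumer by path; the path is valid only while fd 3 stays open.
some_tool --password-file "/proc/$$/fd/3"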

Get return code from command run on ssh tunnel [duplicate]

Even if mycode.sh has a non-zero exit code, this command returns 0 because the ssh connection was successful. How do I get the actual return code of the .sh on the remote server?
/home/mycode.sh '20'${ODATE} 1 | ssh -L 5432:localhost:5432 myuser@myremotehost cat
This is not related to SSH, but to how bash handles the exit status in pipelines. From the bash manual page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
If you want to check that there was an error in the pipeline due to any of the commands involved, just set the pipefail option:
set -o pipefail
your_pipeline_here
echo $? # Prints non-zero if something went wrong
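Applied to the command from the question, that would look something like this:
set -o pipefail
/home/mycode.sh "20${ODATE}" 1 | ssh -L 5432:localhost:5432 myuser@myremotehost cat
echo $?   # non-zero if mycode.sh (or ssh) failed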
It is not possible to actually send the exit status to the next command in the pipeline (in your case, ssh) without additional steps. If you really want to do that, the command will have to be split like this:
res="$(/home/mycode.sh '20'${ODATE} 1)"
if (( $? == 0 )); then
echo -n "$res" | ssh -L 5432:localhost:5432 myuser#myremotehost cat
else
# You can do anything with the exit status here - even pass it on as an argument to the remote command
echo "mycode.sh failed" >&2
fi
You may want to save the output of mycode.sh to a temporary file instead of the $res variable if it's too large.
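A sketch of that temp-file variant, keeping the same commands (the mktemp template is an assumption):
tmpfile=$(mktemp "${TMPDIR:-/tmp}/mycode_out.XXXXXX")
trap 'rm -f -- "$tmpfile"' EXIT

if /home/mycode.sh "20${ODATE}" 1 > "$tmpfile"; then
    ssh -L 5432:localhost:5432 myuser@myremotehost cat < "$tmpfile"
else
    echo "mycode.sh failed" >&2
fi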
/home/mycode.sh is located on the local host.
The ssh command runs cat on the remote server.
All text printed to the standard output of /home/mycode.sh is piped to cat's standard input.
The man ssh reads:
EXIT STATUS
ssh exits with the exit status of the remote command or with 255 if an error occurred.
Conclusion: ssh exits with the EXIT STATUS of cat, or with 255 if an error occurred.
If the /home/mycode.sh script prints commands to its standard output, they can be run on the remote server when cat is omitted:
/home/mycode.sh '20'${ODATE} 1 | ssh -L 5432:localhost:5432 myuser@myremotehost
In my test, the EXIT STATUS of the last command executed on the remote server is returned by ssh:
printf "%s\n" "uname -r" date "ls this_file_does_not_exist" |\
ssh -L 5432:localhost:5432 myuser@myremotehost ;\
printf "EXIT STATUS of the last command, executed remotely with ssh is %d\n" $?
4.4.0-119-generic
Wed Aug 29 02:55:04 EDT 2018
ls: cannot access 'this_file_does_not_exist': No such file or directory
EXIT STATUS of the last command, executed remotely with ssh is 2

Prompt not printed after redirecting bash script output to syslog

I found this article, that explains how to redirect output of a bash script to syslog. This is exactly what I needed, but there is a problem.
#!/bin/bash
# Don't ignore any error and return when first error occurs.
exec 1> >(logger -s -t $(basename $0)) 2>&1
set -e
# a list of command(s) that can fail:
chown -R user1:user1 /tmp/myappData/*
chown -R user1:user1 /tmp/myappTmp/*
chown -R user1:user1 /tmp/myappLog/*
#...
exit 0
When I execute the above script and an error occurs, I see that sometimes the prompt doesn't return after the script has executed. I can't figure out why this is happening. The prompt doesn't return unless I hit enter.
I am concerned that if an app uses this script, it may not get the proper exit code back.
If I comment out "set -e", then the prompt always returns properly after the script has executed.
So my question is, what is the proper way to setup a script so that it exits on error, and logs the corresponding message to syslog?
Thank you for your help and suggestions!
The problem here is that the logger pipeline is still running after your script exits, so some of the last content to be logged prints after the parent shell has emitted its prompt. If you scroll up, you'll find the prompt hidden somewhere in that prior output.
If you have a very, very new bash, you can collect the PID of the process substitution, and wait for it later.
exec {orig_out}>&1 {orig_err}>&2 1> >(logger -s -t "${0##*/}") 2>&1; logger_pid=$!
[[ $logger_pid ]] || { echo "ERROR: Needs a newer bash" >&2; exit 1; }
cleanup() {
exec >&$orig_out 2>&$orig_err
wait "$logger_pid"
}
trap cleanup EXIT
With an older bash, you can consider other tricks. For example, on Linux, you can use the flock command to try to grab exclusive access to a lockfile before exiting, after ensuring that that lock is held for as long as the logger is running:
log_lock=$(mktemp "${TMPDIR:-/tmp}/logger.XXXXXX")
exec > >(flock -x "$log_lock" logger -s -t "${0##*/}") 2>&1
cleanup() {
exec >/dev/tty 2>&1 || exec >/dev/null 2>&1
flock -x "$log_lock" true
}
trap cleanup EXIT

ksh su -c return value

inside of my script I need to run two scripts as another user, so I used the following line:
su otherUser -c "firstScript;secondScript;status=$?"
echo "returning $status"
return $status
The problem is that $status will always return 0. I did test with secondScript failing (wrong argument). Not sure if it's because I exited otherUser or if $status is actually the result of the su command. Any suggestions?
You need to capture status inside your outer shell, not in the inner shell invoked by su; otherwise, the captured value is thrown away as soon as that inner shell exits.
This is made much easier because su passes through the exit status of the command it runs -- if that command exits with a nonzero status, so will su.
su otherUser -c 'firstScript; secondScript'; status=$?
echo "returning $status"
return $status
Note that this only returns the exit status of secondScript (as would your original code have done, were it working correctly). You might think about what you want to do if firstScript fails.
Now, it's a little more interesting if you only want to return the exit code of firstScript; in that case, you need to capture the exit status in both shells:
su otherUser -c 'firstScript; status=$?; secondScript; exit $status'; status=$?
echo "returning $status"
return $status
If you want to run secondScript only if firstScript succeeds, and return a nonzero value if either of them fails, it becomes easy again:
su otherUser -c 'firstScript && secondScript'; status=$?
echo "returning $status"
return $status

Bash not trapping interrupts during rsync/subshell exec statements

Context:
I have a bash script that contains a subshell and a trap for the EXIT pseudosignal, and it's not properly trapping interrupts during an rsync. Here's an example:
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT
(
#main script logic, including the following lines:
(exec sleep 10;);
(exec rsync --progress -av --delete $directory1 /var/tmp/$directory2;);
) | 2>&1 tee -a $logfile
trap - EXIT #just in case cleanup isn't called for some reason
The idea of the script is this: most of the important logic runs in a subshell which is piped through tee and to a logfile, so I don't have to tee every single line of the main logic to get it all logged. Whenever the subshell ends, or the script is stopped for any reason (the EXIT pseudosignal should capture all of these cases), the trap will intercept it and run the cleanup() function, and then remove the trap. The rsync and sleep commands (the sleep is just an example) are run through exec to prevent the creation of zombie processes if I kill the parent script while they're running, and each potentially-long-running command is wrapped in its own subshell so that when exec finishes, it won't terminate the whole script.
The problem:
If I interrupt the script (via kill or CTRL+C) during the exec/subshell wrapped sleep command, the trap works properly, and I see "Cleaning up!" echoed and logged. If I interrupt the script during the rsync command, I see rsync end, and write rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(544) [sender=3.0.6] to the screen, and then the script just dies; no cleanup, no trapping. Why doesn't interrupting or killing rsync trigger the trap?
I've tried using the --no-detach switch with rsync, but it didn't change anything.
I have bash 4.1.2, rsync 3.0.6, centOS 6.2.
How about just having all the output from a given point onward redirected to tee, without having to repeat it everywhere and mess with all the sub-shells and execs... (hope I didn't miss something)
#!/bin/bash
logfile=/path/to/file;
directory1=/path/to/dir
directory2=/path/to/dir
exec > >(exec tee -a $logfile) 2>&1
cleanup () {
echo "Cleaning up!"
#do stuff
trap - EXIT
}
trap cleanup EXIT
sleep 10
rsync --progress -av --delete $directory1 /var/tmp/$directory2
In addition to set -e, I think you want set -E:
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a sub‐shell environment. The ERR trap is normally not inherited in such cases.
Alternatively, instead of wrapping your commands in subshells use curly braces which will still give you the ability to redirect command outputs but will execute them in the current shell.
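For example, a brace group lets you redirect the whole block's output while it still runs in the current shell (appending straight to the log here; piping the group through tee would again put it in a subshell):
{
    sleep 10
    rsync --progress -av --delete "$directory1" "/var/tmp/$directory2"
} >> "$logfile" 2>&1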
The interrupt will be properly caught if you add INT to the trap:
trap '{
(cleanup;) | 2>&1 tee -a $logfile
}' EXIT INT
Bash is trapping interrupts correctly. However, this does not answer the question of why the trap fires on exit when sleep is interrupted but not when rsync is; it just makes the script work as it is supposed to. Hope this helps.
Your shell might be configured to exit on error:
bash # enter subshell
set -e
trap "echo woah" EXIT
sleep 4
If you interrupt sleep (^C) then the subshell will exit due to set -e and print woah in the process.
Also, slightly unrelated: your trap - EXIT is in a subshell (explicitly), so it won't have an effect after the cleanup function returns
It's pretty clear from experimentation that rsync behaves like other tools such as ping and does not inherit signals from the calling Bash parent.
So you have to get a little creative with this and do something like the following:
$ cat rsync.bash
#!/bin/sh
set -m
trap '' SIGINT SIGTERM EXIT
rsync -avz LargeTestFile.500M root@host.mydom.com:/tmp/. &
wait
echo FIN
Now when I run it:
$ ./rsync.bash
X11 forwarding request failed
building file list ... done
LargeTestFile.500M
^C^C^C^C^C^C^C^C^C^C
sent 509984 bytes received 42 bytes 92732.00 bytes/sec
total size is 524288000 speedup is 1027.96
FIN
And we can see the file did fully transfer:
$ ll -h | grep Large
-rw-------. 1 501 games 500M Jul 9 21:44 LargeTestFile.500M
How it works
The trick here is that we're telling Bash via set -m to enable job control for background jobs, so the backgrounded rsync runs in its own process group and doesn't receive the terminal's interrupt. We then background the rsync and run a wait command, which waits on the last run command, rsync, until it's complete.
We then guard the entire script with the trap '' SIGINT SIGTERM EXIT.
References
https://access.redhat.com/solutions/360713
https://access.redhat.com/solutions/1539283
