closing stdout and stderr (1>&- 2>&-) modifies exit status? - linux

Is ls -l /path/to/dir/*.extension 1>&- 2>&-
really giving an exit status of 2 even when files with .extension exist, or am I making some mistake?
What I know: by doing 1>&- 2>&- I am closing stdout and stderr for the command, and that should have nothing to do with the command's exit status!
But, as one would expect, the following always works fine:
ls -l /path/to/dir/*.extension &>/dev/null
exit status is 0 as expected.
Just looking for an explanation for this behaviour.
#UNDERSTANDING
On the basis of the answer given by Simon Richter:
If we do something like
jordan-a@hosties:exp$ echo "Life" 1>&-
-bash: echo: write error: Bad file descriptor
it throws an error since echo couldn't write to stdout, and
if we do
jordan-a@hosties:exp$ echo "Life" 1>&- 2>&-
we would not even get to know about the error until we check $?.

You are closing the file descriptors, so the program will get an error trying to write to them, and it communicates that error to you.
If you redirect the output to /dev/null, the write succeeds, and ls doesn't know that the output got discarded.
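For example, a quick check (the directory and pattern are the question's placeholders; the exit values are the ones reported above, GNU ls using 2 for serious trouble such as a write failure):
ls -l /path/to/dir/*.extension 1>&- 2>&-
echo $?    # 2 - ls could not write its output, even though the files exist
ls -l /path/to/dir/*.extension &>/dev/null
echo $?    # 0 - the write to /dev/null succeeded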

Related

No Error from Script for Non-Existent File

I have a shell script that reads a text file and uses its content. So far so good. But now I'm trying to make the script exit if the file is not found. The script looks like this
#!/bin/bash
function errorcatcher() {
    errorcode=$?
    echo "ERROR CODE : ${errorcode}"
    exit ${errorcode}
}
trap errorcatcher ERR
MYFILE=$1
IFS='|'
while read line; do
    echo ${line}
done < ${MYFILE}
echo "Execution complete"
And I run the script as
sh myscript.sh /home/mydir/ABC.txt
and it works fine. But if I try this
sh myscript.sh /home/mydir/nonexisting.file
I get
myscript.sh: line 17: /home/mydir/nonexisting.file: No such file or directory
Execution complete
The function errorcatcher does not get invoked, and instead of exiting with an error code, execution continues and I get the line Execution complete even though the file in question doesn't exist. My guess is that no error is generated here, so I added this line before reading the text file:
ls ${MYFILE}
The errorcatcher gets invoked this time. But if I try
sh myscript.sh /home/mydir/ABC.tx
Instead of the existing file ABC.txt, I pass its incomplete name ABC.tx, and again the errorcatcher function is not invoked and the script completes successfully (Execution complete gets echoed).
Could someone help me with this? I'm curious as to why errorcatcher doesn't get invoked:
for a non-existing file without ls
for an incomplete file name (ABC.tx) with ls
Function errorcatcher does not get invoked …
Indeed, with an error in the redirection of a loop like
while read line; do
…
done < ${MYFILE}
the ERR trap is not invoked. You have discovered an undocumented exception in the implementation of the trap command, or, if you prefer, a bug.
You can work around that by adding an additional test of the redirection before the while, e.g. the line
<$MYFILE
on its own will invoke the error trap.
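A sketch of that workaround applied to the script from the question (run it with bash rather than plain sh so the ERR trap is honoured; the quoting and read -r are tidy-ups, not part of the original):
#!/bin/bash
function errorcatcher() {
    errorcode=$?
    echo "ERROR CODE : ${errorcode}"
    exit "${errorcode}"
}
trap errorcatcher ERR
MYFILE=$1
< "${MYFILE}"        # redirection test on its own line; a missing file fires the ERR trap
IFS='|'
while read -r line; do
    echo "${line}"
done < "${MYFILE}"
echo "Execution complete"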

How to capture a Linux command's log into a file?

Let's say I have the below command.
STATE_NOT_C_COUNT=`mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" --eval "db.$MONGO_DATABASE.count({\"state\" : {"'"$ne"'":\"C\"},\"physicalTableName\":\"table_name\"},{nolock:true})" | tail -1`
When I run the above command, I get an exception like
exception: connect failed
I want to capture this exception into a file via the error function:
error(){
    if [ "$?" -ne "0" ]; then
        echo "$1" 2>&1 error_log
        exit 1
    fi
}
I'm using the above function like this:
error $STATE_NOT_C_COUNT
But I'm not able to capture the exception in the file through the function.
What you are doing is terrible. Let the program that fails print its error messages to stderr, and ensure that stderr is pointed at the right thing. However, the major issue you are having is just a lack of quotes. Try:
error "$STATE_NOT_C_COUNT"
The issue is that the command error $STATE_NOT_C_COUNT is subject to field splitting, so if $STATE_NOT_C_COUNT contains any whitespace it is split into arguments, and you are only writing the first one. Another alternative is to write echo "$@" in the function, but this will squash whitespace. However, it cannot be stressed enough that this is a terrible approach, completely against the Unix philosophy. The program should write its errors to stderr, and you should let them go there. Just make sure stderr is pointed where you want it. The only possible reason to capture stderr is if you want to write it to multiple locations, so you might pipe it to tee or to a syslogger, or some other message bus, but doing such a thing is questionable.
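A sketch of that direction rather than a drop-in fix (error_log is the file name from the question; the mongo invocation is abbreviated to "..."): let the program write its own message to stderr, point stderr at the log file, and pass the result to the function with quotes.
STATE_NOT_C_COUNT=$(mongo --host "${DB_HOST}" --port 27017 "${MONGO_DATABASE}" \
    --eval "..." 2>>error_log | tail -1)
error "$STATE_NOT_C_COUNT"    # quoted, so whitespace in the value is preserved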

Chronologically capturing STDOUT and STDERR

This may very well fall under the KISS (keep it simple) principle, but I am still curious and wish to be educated as to why I didn't receive the expected results. So, here we go...
I have a shell script to capture STDOUT and STDERR without disturbing the original file descriptors. This is in hopes of preserving the original order of output (see test.pl below) as seen by a user on the terminal.
Unfortunately, I am limited to using sh, instead of bash (but I welcome examples), as I am calling this from another suite and I may wish to use it in a cron in the future (I know cron has the SHELL environment variable).
wrapper.sh contains:
#!/bin/sh
stdout_and_stderr=$1
shift
command="$@"
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}
tee -a ${stdout_and_stderr} < ${out} &
tee -a ${stdout_and_stderr} < ${err} >&2 &
${command} >${out} 2>${err}
test.pl contains:
#!/usr/bin/perl
print "1: stdout1\n";
print STDERR "2: stderr1\n";
print "3: stdout2\n";
In the scenario:
sh wrapper.sh /tmp/xxx perl test.pl
STDOUT contains:
1: stdout1
3: stdout2
STDERR contains:
2: stderr1
All good so far...
/tmp/xxx contains:
2: stderr1
1: stdout1
3: stdout2
However, I was expecting /tmp/xxx to contain:
1: stdout1
2: stderr1
3: stdout2
Can anyone explain to me why STDOUT and STDERR are not appended to /tmp/xxx in the order that I expected? My guess would be that the backgrounded tee processes are blocking the /tmp/xxx resource from one another since they have the same "destination". How would you solve this?
related: How do I write stderr to a file while using "tee" with a pipe?
It is a feature of the C runtime library (and probably is imitated by other runtime libraries) that stderr is not buffered. As soon as it is written to, stderr pushes all of its characters to the destination device.
By default stdout has a 512-byte buffer.
The buffering for both stderr and stdout can be changed with the setbuf or setvbuf calls.
From the Linux man page for stdout:
NOTES: The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output. The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call. Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1), and termios(3).
After a little bit more searching, inspired by @wallyk, I made the following modification to wrapper.sh:
#!/bin/sh
stdout_and_stderr=$1
shift
command="$@"
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}
tee -a ${stdout_and_stderr} < ${out} &
tee -a ${stdout_and_stderr} < ${err} >&2 &
script -q -F 2 ${command} >${out} 2>${err}
Which now produces the expected:
1: stdout1
2: stderr1
3: stdout2
The solution was to prefix the $command with script -q -F 2, which makes script quiet (-q) and forces file descriptor 2 to flush immediately (-F 2).
I am now researching how portable this is. I think -F pipe may be Mac and FreeBSD, and -f or --flush may be what other distros use...
related: How to make output of any shell command unbuffered?
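An alternative worth noting, not part of the answer above: GNU coreutils' stdbuf can force line-buffered stdout for programs that use C stdio buffering (it does not help programs, such as many Perl builds, that do their own buffering), so a hedged variant of the last line of wrapper.sh would be:
stdbuf -oL ${command} >${out} 2>${err}    # line-buffer the child's stdout, if it uses C stdio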

Raise error in a Bash script

I want to raise an error in a Bash script with the message "Test cases Failed !!!". How can I do this in Bash?
For example:
if [ condition ]; then
raise error "Test cases failed !!!"
fi
This depends on where you want the error message to be stored.
You can do the following:
echo "Error!" > logfile.log
exit 125
Or the following:
echo "Error!" 1>&2
exit 64
When you raise an exception you stop the program's execution.
You can also use something like exit xxx, where xxx is the error code you may want to return to the operating system (from 0 to 255). Here 125 and 64 are just random codes you can exit with. When you need to indicate to the OS that the program stopped abnormally (e.g. an error occurred), you need to pass a non-zero exit code to exit.
As @chepner pointed out, you can do exit 1, which will mean an unspecified error.
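As a small illustration of how a caller sees the code passed to exit (tests.sh is a hypothetical script using one of the snippets above):
./tests.sh
status=$?
if [ "$status" -ne 0 ]; then
    echo "tests.sh failed with status $status" >&2
fi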
Basic error handling
If your test case runner returns a non-zero code for failed tests, you can simply write:
test_handler test_case_x; test_result=$?
if ((test_result != 0)); then
    printf '%s\n' "Test case x failed" >&2    # write error message to stderr
    exit 1                                    # or exit $test_result
fi
Or even shorter:
if ! test_handler test_case_x; then
    printf '%s\n' "Test case x failed" >&2
    exit 1
fi
Or the shortest:
test_handler test_case_x || { printf '%s\n' "Test case x failed" >&2; exit 1; }
To exit with test_handler's exit code:
test_handler test_case_x || { ec=$?; printf '%s\n' "Test case x failed" >&2; exit $ec; }
Advanced error handling
If you want to take a more comprehensive approach, you can have an error handler:
exit_if_error() {
    local exit_code=$1
    shift
    [[ $exit_code ]] &&            # do nothing if no error code passed
        ((exit_code != 0)) && {    # do nothing if error code is 0
            printf 'ERROR: %s\n' "$@" >&2    # we can use better logging here
            exit "$exit_code"      # we could also check to make sure
                                   # error code is numeric when passed
        }
}
then invoke it after running your test case:
run_test_case test_case_x
exit_if_error $? "Test case x failed"
or
run_test_case test_case_x || exit_if_error $? "Test case x failed"
The advantages of having an error handler like exit_if_error are:
we can standardize all the error handling logic such as logging, printing a stack trace, notification, doing cleanup etc., in one place
by making the error handler get the error code as an argument, we can spare the caller from the clutter of if blocks that test exit codes for errors
if we have a signal handler (using trap), we can invoke the error handler from there, as sketched below
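For instance, a minimal sketch (not part of the original answer) of wiring the handler into an ERR trap so that every failing command funnels through it:
trap 'exit_if_error $? "command failed: $BASH_COMMAND"' ERR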
Error handling and logging library
Here is a complete implementation of error handling and logging:
https://github.com/codeforester/base/blob/master/lib/stdlib.sh
Related posts
Error handling in Bash
The 'caller' builtin command on Bash Hackers Wiki
Are there any standard exit status codes in Linux?
BashFAQ/105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
Equivalent of __FILE__, __LINE__ in Bash
Is there a TRY CATCH command in Bash
To add a stack trace to the error handler, you may want to look at this post: Trace of executed programs called by a Bash script
Ignoring specific errors in a shell script
Catching error codes in a shell pipe
How do I manage log verbosity inside a shell script?
How to log function name and line number in Bash?
Is double square brackets [[ ]] preferable over single square brackets [ ] in Bash?
There are a couple more ways to approach this problem, assuming one of your requirements is to run a shell script/function containing a few shell commands, check whether the script ran successfully, and throw errors in case of failures.
Shell commands generally rely on the exit codes they return to let the shell know whether they succeeded or failed due to some unexpected event.
So what you want to do falls into these two categories:
exit on error
exit and clean up on error
Depending on which one you want, there are shell options available. For the first case, the shell provides the set -e option, and for the second you can trap on EXIT.
Should I use exit in my script/function?
Using exit generally enhances readability. In certain routines, once you know the answer, you want to exit to the calling routine immediately. If the routine is defined in such a way that it doesn't require any further cleanup once it detects an error, not exiting immediately means that you have to write more code.
So if you need to perform clean-up actions to make the termination of the script clean, it is preferable not to use exit on its own; an EXIT trap, as sketched below, can take care of the clean-up instead.
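A minimal sketch of clean-up via an EXIT trap (the temp file is hypothetical); the handler runs whether the script ends normally or because of an error:
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT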
Should I use set -e for error on exit?
No!
set -e was an attempt to add "automatic error detection" to the shell. Its goal was to cause the shell to abort any time an error occurred, but it comes with a lot of potential pitfalls. For example:
Commands that are part of an if test are immune. In the example below, if you expect it to break on the failed test of the non-existing directory, it doesn't; it goes through and echoes survived.
set -e
f() { test -d nosuchdir && echo no dir; }
f
echo survived
Commands in a pipeline other than the last one are immune. In the example below, only the most recently executed (rightmost) command's exit code is considered (cat), and it was successful. This can be avoided with the set -o pipefail option (see the sketch after the example), but it's still a caveat.
set -e
somecommand that fails | cat -
echo survived
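Sketch of the pipefail fix mentioned above (somecommand_that_fails stands in for any failing command): the pipeline's exit status now reflects the failure, so set -e aborts before the final echo.
set -e
set -o pipefail
somecommand_that_fails | cat -
echo "not reached"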
Recommended for use - trap on ERR
The verdict is: if you want to be able to handle an error instead of blindly exiting, don't use set -e; use a trap on the ERR pseudo signal instead.
The ERR trap is not to run code when the shell itself exits with a non-zero error code, but when any command run by that shell that is not part of a condition (like in if cmd, or cmd ||) exits with a non-zero exit status.
The general practice is to define a trap handler that provides additional debug information about which line and what command caused the exit. Remember that the exit code of the last command, the one that raised the ERR signal, is still available at this point.
cleanup() {
    exitcode=$?
    printf 'error condition hit\n' 1>&2
    printf 'exit code returned: %s\n' "$exitcode"
    printf 'the command executing at the time of the error was: %s\n' "$BASH_COMMAND"
    printf 'command present on line: %d' "${BASH_LINENO[0]}"
    # Some more clean up code can be added here before exiting
    exit $exitcode
}
and we just use this handler as below on top of the script that is failing
trap cleanup ERR
Putting this together in a simple script that contained false on line 15, the information you would get is:
error condition hit
exit code returned: 1
the command executing at the time of the error was: false
command present on line: 15
trap also lets you run the clean-up on shell completion regardless of errors (e.g. when your shell script exits), by trapping on the EXIT signal. You can also trap on multiple signals at the same time. The list of supported signals to trap on can be found in the trap.1p - Linux manual page.
Another thing to note is that none of the provided methods work when sub-shells are involved, in which case you might need to add your own error handling.
In a sub-shell, set -e wouldn't work. The false is restricted to the sub-shell and never gets propagated to the parent shell. To do the error handling here, add your own logic to do (false) || false, as in the sketch after the example below.
set -e
(false)
echo survived
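Sketch of the suggested workaround: re-raise the sub-shell's failure in the parent so that set -e (or an ERR trap) can react to it.
set -e
(false) || false
echo survived    # not reached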
The same happens with trap. The logic below wouldn't work for the reasons mentioned above.
trap 'echo error' ERR
(false)
Here's a simple trap that prints the last argument of whatever failed to STDERR, reports the line it failed on, and exits the script with the line number as the exit code. Note these are not always great ideas, but this demonstrates some creative application you could build on.
trap 'echo >&2 "$_ at $LINENO"; exit $LINENO;' ERR
I put that in a script with a loop to test it. I just check for a hit on some random numbers; you might use actual tests. If I need to bail, I call false (which triggers the trap) with the message I want to throw.
For more elaborate functionality, have the trap call a processing function. You can always use a case statement on your arg ($_) if you need to do more cleanup, etc. Assign to a var for a little syntactic sugar:
trap 'echo >&2 "$_ at $LINENO"; exit $LINENO;' ERR
throw=false
raise=false
while :
do  x=$(( $RANDOM % 10 ))
    case "$x" in
        0) $throw "DIVISION BY ZERO" ;;
        3) $raise "MAGIC NUMBER" ;;
        *) echo got $x ;;
    esac
done
Sample output:
# bash tst
got 2
got 8
DIVISION BY ZERO at 6
# echo $?
6
Obviously, you could
runTest1 "Test1 fails" # message not used if it succeeds
Lots of room for design improvement.
The drawbacks include the fact that false isn't pretty (thus the sugar), and other things tripping the trap might look a little stupid. Still, I like this method.
You have two options: redirect the output of the script to a file, or introduce a log file in the script.
Redirecting output to a file:
Here you assume that the script outputs all necessary info, including warning and error messages. You can then redirect the output to a file of your choice.
./runTests &> output.log
The above command redirects both the standard output and the error output to your log file.
Using this approach you don't have to introduce a log file in the script, and so the logic is a tiny bit easier.
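If you also want to watch the output while logging it (an addition, not part of the original approach), tee does the same job:
./runTests 2>&1 | tee output.log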
Introduce a log file to the script:
In your script add a log file either by hard coding it:
logFile='./path/to/log/file.log'
or passing it by a parameter:
logFile="${1}" # This assumes the first parameter to the script is the log file
It's a good idea to add the timestamp at the time of execution to the log file at the top of the script:
date '+%Y%m%d-%H%M%S' >> "${logFile}"
You can then redirect your error messages to the log file
if [ condition ]; then
echo "Test cases failed!!" >> "${logFile}";
fi
This will append the error to the log file and continue execution. If you want to stop execution when critical errors occur, you can exit the script:
if [ condition ]; then
echo "Test cases failed!!" >> "${logFile}";
# Clean up if needed
exit 1;
fi
Note that exit 1 indicates that the program stopped execution due to an unspecified error. You can customize this if you like.
Using this approach you can customize your logs and have a different log file for each component of your script.
If you have a relatively small script, or want to execute somebody else's script without modifying it, the first approach is more suitable.
If you always want the log file to be at the same location, the second is the better option of the two. Also, if you have created a big script with multiple components, then you may want to log each part differently, and the second approach is your only option.
I often find it useful to write a function to handle error messages so the code is cleaner overall.
# Usage: die [exit_code] [error message]
die() {
    local code=$? now=$(date +%T.%N)
    if [ "$1" -ge 0 ] 2>/dev/null; then  # assume $1 is an error code if numeric
        code="$1"
        shift
    fi
    echo "$0: ERROR at ${now%???}${1:+: $*}" >&2
    exit $code
}
This takes the error code from the previous command and uses it as the default error code when exiting the whole script. It also notes the time, with microseconds where supported (GNU date's %N is nanoseconds, which we truncate to microseconds later).
If the first argument is zero or a positive integer, it becomes the exit code and we remove it from the list of arguments. We then report the message to standard error, with the name of the script, the word "ERROR", and the time (we use parameter expansion to truncate nanoseconds to microseconds, or for non-GNU date, to truncate e.g. 12:34:56.%N to 12:34:56). A colon and space are added after the word ERROR, but only when there is a provided error message. Finally, we exit the script using the previously determined exit code, triggering any traps as normal.
Some examples (assume the code lives in script.sh):
if [ condition ]; then die 123 "condition not met"; fi
# exit code 123, message "script.sh: ERROR at 14:58:01.234564: condition not met"
$command |grep -q condition || die 1 "'$command' lacked 'condition'"
# exit code 1, "script.sh: ERROR at 14:58:55.825626: 'foo' lacked 'condition'"
$command || die
# exit code comes from the command, message "script.sh: ERROR at 14:59:15.575089"

What does this if-statement from a bash script do?

I am new to bash scripting and learning through some examples. One of the examples that I saw is using an if-statement to test if a previously assigned output file is valid, like this:
if [ -n "$outputFile" ] && ! 2>/dev/null : >> $outputFile ; then
exit 1
fi
I understand what [ -n "$outputFile" ] does but not the rest of the conditional. Can someone explain what ! 2>/dev/null : >> $outputFile mean/does?
I have googled for answers, but most links found were explanations of I/O redirection, which are definitely relevant but still leave the ! : >> structure unclear.
That's some oddly written code!
The : command is built into bash. It's equivalent to true; it does nothing, successfully.
: >> $outputFile
attempts to do nothing, and appends the (empty) output to $outputFile -- which has already been confirmed to be a non-empty string. The >> redirection operator will create the file if it doesn't already exist.
I/O redirections such as 2>/dev/null can appear anywhere in a simple command; they don't have to be at the end. So the stdout of the : command (which is empty) is appended to $outputFile, and any stderr output is redirected to /dev/null. Any such stderr output would be the result of a failure in the redirection, since the : command itself does nothing and won't fail to do so. I don't know why the redirection of stdout (onto the end of $outputFile) and the redirection of stderr (to /dev/null) are on opposite sides of the : command.
The ! operator is a logical "not"; it checks whether the following command succeeded, and inverts the result.
The net result, written in English-ish text is:
if "$outputFile" is set and is not an empty string, and if we don't have permission to write to it, then terminate the script with a status of 1.
In short, it tests whether we're able to write to $outputFile, and bails out if we can't.
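A small usage sketch (the path is hypothetical and assumed to be unwritable for the current user):
outputFile=/var/log/protected.log
if [ -n "$outputFile" ] && ! 2>/dev/null : >> $outputFile ; then
    echo "cannot write to $outputFile" >&2
    exit 1
fi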
The script is attempting to make sure $outputFile is writable in a not-so-obvious way.
: is the null command in bash; it does nothing. The fact that stderr is redirected to /dev/null is simply to suppress the permission-denied error, should one occur.
If the file is not writable, the command fails, which makes the condition true since it's negated with !, and the script exits.

Resources