Wrapper script that writes logs of arguments, stdin and stdout - Linux

I want to create a wrapper script which writes logs of arguments, stdin and stdout.
I have written the following script wrapper.sh, which works almost fine.
#! /bin/bash
wrapped_command="/path/to/command" # Set the path to the command which we want to wrap
log_dir="/tmp/stdio-log"
mkdir -p $log_dir
args_logfile=$log_dir/args
stdin_logfile=$log_dir/stdin
stdout_logfile=$log_dir/stdout
stderr_logfile=$log_dir/stderr
echo "$#" > "$args_logfile"
tee -a "$stdin_logfile" |
"$wrapped_command" "$#" > >(tee -a "$stdout_logfile") 2> >(tee -a "$stderr_logfile" >&2)
I expect that ./wrapper.sh arg1 arg2 gives the same result as /path/to/command arg1 arg2 with logs in /tmp/stdio-log/.
But it gives a slightly different result in Example 2 below.
Example 1: a command that accepts standard inputs
#! /bin/bash
while read line
do
echo "input: $line"
done
The above wrapper script works as expected with this example.
Example 2: a command that does not accept standard inputs
#! /bin/bash
echo "Example command"
With this example, I got the following different behavior:
/path/to/command exits immediately.
./wrapper.sh does not exit immediately. I must type <Return> once to finish wrapper.sh.
Question
How can I fix the wrapper script (or rewrite with different methods) so that it works as expected with both examples simultaneously?

The tee -a "$stdin_logfile" invoked in the foreground waits for input.
How can I fix the wrapper script …?
It works as you expect if stdin is handled just like stdout with Process Substitution.
"$wrapped_command" "$#" < <(tee -a "$stdin_logfile")\
> >(tee -a "$stdout_logfile") 2> >(tee -a "$stderr_logfile" >&2)
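Putting it together, a minimal corrected wrapper.sh might look like this (paths as in the question):
#!/bin/bash
wrapped_command="/path/to/command"   # path to the command being wrapped
log_dir="/tmp/stdio-log"
mkdir -p "$log_dir"
echo "$@" > "$log_dir/args"
# log stdin, stdout and stderr via process substitution on all three streams
"$wrapped_command" "$@" < <(tee -a "$log_dir/stdin") \
  > >(tee -a "$log_dir/stdout") 2> >(tee -a "$log_dir/stderr" >&2)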

Related

Bash command with pipe ('|') always returns exit code of 0, even in error case [duplicate]

I want to execute a long running command in Bash, and both capture its exit status, and tee its output.
So I do this:
command | tee out.txt
ST=$?
The problem is that the variable ST captures the exit status of tee and not of command. How can I solve this?
Note that command is long running and redirecting the output to a file to view it later is not a good solution for me.
There is an internal Bash variable called $PIPESTATUS; it’s an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
The first option does not work with zsh due to slightly different syntax.
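To illustrate the pipefail behaviour (this works in both bash and zsh):
set -o pipefail
false | tee out.txt
echo $?   # prints 1, the status of false, rather than tee's 0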
Dumb solution: Connecting them through a named pipe (mkfifo). Then the command can be run second.
mkfifo pipe
tee out.txt < pipe &
command > pipe
echo $?
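A slightly fuller sketch of the same named-pipe idea, with cleanup (here command stands in for the real command):
mkfifo pipe
tee out.txt < pipe &
teepid=$!
command > pipe
rc=$?
wait "$teepid"   # let tee finish flushing
rm pipe
echo "$rc"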
Using bash's set -o pipefail is helpful:
pipefail: the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
There's an array that gives you the exit status of each command in a pipe.
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo $?
0
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo ${PIPESTATUS[*]}
1 0
$ touch x
$ cat x| sed 's'
sed: 1: "s": substitute pattern can not be delimited by newline or backslash
$ echo ${PIPESTATUS[*]}
0 1
This solution works without using bash specific features or temporary files. Bonus: in the end the exit status is actually an exit status and not some string in a file.
Situation:
someprog | filter
you want the exit status from someprog and the output from filter.
Here is my solution:
((((someprog; echo $? >&3) | filter >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?
See my answer for the same question on unix.stackexchange.com for a detailed explanation and an alternative without subshells and some caveats.
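A quick hedged demonstration, with sh -c ... standing in for someprog and tr for filter:
( ( ( ( sh -c 'echo data; exit 42'; echo $? >&3 ) | tr a-z A-Z >&4 ) 3>&1 ) | ( read xs; exit $xs ) ) 4>&1
echo $?   # prints DATA (the filter's output), then 42: someprog's status, even though filter ran last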
By combining PIPESTATUS[0] and the result of executing the exit command in a subshell, you can directly access the return value of your initial command:
command | tee ; ( exit ${PIPESTATUS[0]} )
Here's an example:
# the "false" shell built-in command returns 1
false | tee ; ( exit ${PIPESTATUS[0]} )
echo "return value: $?"
will give you:
return value: 1
So I wanted to contribute an answer like lesmana's, but I think mine is perhaps a little simpler and slightly more advantageous pure-Bourne-shell solution:
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; printf $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out - command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.
While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 - because we want file descriptor 1 free for a little bit later, because we will bring the printf output on file descriptor 3 back down into file descriptor 1 - because that's what the command substitution (the backticks), will capture and that's what will get placed into the variable.
The final bit of magic is that first exec 4>&1 we did as a separate command - it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it - but since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it - however once it gets "out" of the command substitution it is effectively still going to the script's overall file descriptor 1.
(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write to a file descriptor inside a command substitution, that is opened in the "external" command that is using the substitution. So this is the simplest portable way to do it.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command 2 so that command2 doesn't catch it, and then command 2's output jumps over and out of the command substitution just as printf lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its merry way being written to the standard output, just as in a normal pipe.
Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out - this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.
Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; printf $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though may perhaps be less efficient.
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
I'm not sure how often things use file descriptor three and four directly - I think most of the time programs use syscalls that return not-used-at-the-moment file descriptors, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently accordingly if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.
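As a concrete sketch of the robust variant, with stand-ins (sh -c ... as command1, wc -c as command2):
exec 4>&1
exitstatus=`{ { sh -c 'echo data; exit 3' 3>&-; printf $? 1>&3; } 4>&- | wc -c 1>&4; } 3>&1`
exec 4>&-
echo "command1 exited with $exitstatus"   # prints 3; wc's byte count went straight to the terminal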
(command | tee out.txt; exit ${PIPESTATUS[0]})
Unlike @cODAR's answer this returns the original exit code of the first command and not only 0 for success and 127 for failure. But as @Chaoran pointed out you can just call ${PIPESTATUS[0]}. It is important however that all is put into brackets.
In Ubuntu and Debian, you can apt-get install moreutils. This contains a utility called mispipe that returns the exit status of the first command in the pipe.
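A quick sketch, assuming moreutils is installed:
mispipe 'false' 'tee out.txt'
echo $?   # 1: the status of false, even though tee succeeded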
Outside of bash, you can do:
bash -o pipefail -c "command1 | tee output"
This is useful for example in ninja scripts where the shell is expected to be /bin/sh.
The simplest way to do this in plain bash is to use process substitution instead of a pipeline. There are several differences, but they probably don't matter very much for your use case:
When running a pipeline, bash waits until all processes complete.
Sending Ctrl-C to bash makes it kill all the processes of a pipeline, not just the main one.
The pipefail option and the PIPESTATUS variable are irrelevant to process substitution.
Possibly more
With process substitution, bash just starts the process and forgets about it; it's not even visible in jobs.
Mentioned differences aside, consumer < <(producer) and producer | consumer are essentially equivalent.
If you want to flip which one is the "main" process, you just flip the commands and the direction of the substitution to producer > >(consumer). In your case:
command > >(tee out.txt)
Example:
$ { echo "hello world"; false; } > >(tee out.txt)
hello world
$ echo $?
1
$ cat out.txt
hello world
$ echo "hello world" > >(tee out.txt)
hello world
$ echo $?
0
$ cat out.txt
hello world
As I said, there are differences from the pipe expression. The process may never stop running, unless it is sensitive to the pipe closing. In particular, it may keep writing things to your stdout, which may be confusing.
PIPESTATUS[@] must be copied to an array immediately after the pipe command returns.
Any reads of PIPESTATUS[@] will erase the contents.
Copy it to another array if you plan on checking the status of all pipe commands.
"$?" is the same value as the last element of "${PIPESTATUS[@]}",
and reading it seems to destroy "${PIPESTATUS[@]}", but I haven't absolutely verified this.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
This will not work if the pipe is in a sub-shell. For a solution to that problem,
see bash pipestatus in backticked command?
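For example, to report every stage's status (false and true as stand-in commands):
false | true | false
psa=( "${PIPESTATUS[@]}" )
for i in "${!psa[@]}"; do
  echo "stage $((i+1)) exited with ${psa[i]}"
done
# stage 1 exited with 1
# stage 2 exited with 0
# stage 3 exited with 1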
Based on @brian-s-wilson's answer; this bash helper function:
pipestatus() {
local S=("${PIPESTATUS[@]}")
if test -n "$*"
then test "$*" = "${S[*]}"
else ! [[ "${S[*]}" =~ [^0\ ] ]]
fi
}
used thus:
1: get_bad_things must succeed, but it should produce no output; but we want to see output that it does produce
get_bad_things | grep '^'
pipestatus 0 1 || return
2: all pipeline must succeed
thing | something -q | thingy
pipestatus || return
Pure shell solution:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (cat || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
hello world
And now with the second cat replaced by false:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (false || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
Some command failed:
Second command failed: 1
First command failed: 141
Please note the first cat fails as well, because its stdout gets closed on it. The order of the failed commands in the log is correct in this example, but don't rely on it.
This method allows for capturing stdout and stderr for the individual commands so you can then dump that as well into a log file if an error occurs, or just delete it if no error (like the output of dd).
It may sometimes be simpler and clearer to use an external command, rather than digging into the details of bash. pipeline, from the minimal process scripting language execline, exits with the return code of the second command*, just like a sh pipeline does, but unlike sh, it allows reversing the direction of the pipe, so that we can capture the return code of the producer process (the below is all on the sh command line, but with execline installed):
$ # using the full execline grammar with the execlineb parser:
$ execlineb -c 'pipeline { echo "hello world" } tee out.txt'
hello world
$ cat out.txt
hello world
$ # for these simple examples, one can forego the parser and just use "" as a separator
$ # traditional order
$ pipeline echo "hello world" "" tee out.txt
hello world
$ # "write" order (second command writes rather than reads)
$ pipeline -w tee out.txt "" echo "hello world"
hello world
$ # pipeline execs into the second command, so that's the RC we get
$ pipeline -w tee out.txt "" false; echo $?
1
$ pipeline -w tee out.txt "" true; echo $?
0
$ # output and exit status
$ pipeline -w tee out.txt "" sh -c "echo 'hello world'; exit 42"; echo "RC: $?"
hello world
RC: 42
$ cat out.txt
hello world
Using pipeline has the same differences to native bash pipelines as the bash process substitution used in answer #43972501.
* Actually pipeline doesn't exit at all unless there is an error. It executes into the second command, so it's the second command that does the returning.
Why not use stderr? Like so:
(
# Our long-running process that exits abnormally
( for i in {1..100} ; do echo ploop ; sleep 0.5 ; done ; exit 5 )
echo $? 1>&2 # We pass the exit status of our long-running process to stderr (fd 2).
) | tee ploop.out
So ploop.out receives the stdout. stderr receives the exit status of the long running process. This has the benefit of being completely POSIX-compatible.
(Well, with the exception of the range expression in the example long-running process, but that's not really relevant.)
Here's what this looks like:
...
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
5
Note that the return code 5 does not get output to the file ploop.out.
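If you also want that status in a variable, here is a hedged sketch; it assumes the command itself writes nothing to stderr, and it silences tee's terminal copy:
status=$( { ( sh -c 'echo ploop; exit 5'; echo $? 1>&2 ) | tee ploop.out >/dev/null; } 2>&1 )
echo "status: $status"   # status: 5
cat ploop.out            # ploop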

Command to redirect output to console and to a file at the same time works fine in bash. But how do I make it work in Korn shell (ksh)?

Command to redirect output to console and to a file at the same time works fine in bash. But how do I make it work in Korn shell (ksh)?
All my scripts run on Korn shell, so I can't change them to bash for this particular command to work.
exec > >(tee -a $LOGFILE) 2>&1
In the code beneath I use the variable logfile; lowercase is better.
You can try something like
touch "${logfile}"
tail -f "${logfile}"&
tailpid=$!
trap 'kill -9 ${tailpid}' EXIT INT TERM
exec 1>"${logfile}" 2>&1
A not too unreasonable technique is to re-exec the shell with output to tee. That is, at the top of the script, do something like:
#!/bin/sh
test -z "$REXEC" && { REXEC=1 exec "$0" "$#" | tee -a $LOGFILE; exit; }

In bash tee is making function variables local, how do I escape this?

I am stuck with a Bash script which should write both to stdout and into a file. I'm using functions and some variables inside them. Whenever I try to redirect the function to a file and print on the screen with tee, I can't use the variables that I used in the function; they somehow become local.
Here is simple example:
#!/bin/bash
LOGV=/root/log
function var()
{
echo -e "Please, insert VAR value:\n"
read -re VAR
}
var 2>&1 | tee $LOGV
echo "This is VAR:$VAR"
Output:
[root@testbox ~]# ./var.sh
Please, insert VAR value:
foo
This is VAR:
[root@testbox ~]#
Thanks in advance!
EDIT:
Responding to @Etan Reisner's suggestion to use
var 2>&1 > >(tee $LOGV)
The only problem with this construction is that the log file doesn't receive everything...
[root@testbox~]# ./var.sh
Please, insert VAR value:
foo
This is VAR:foo
[root@testbox ~]# cat log
Please, insert VAR value:
This is a variant of BashFAQ #24.
var 2>&1 | tee $LOGV
...like any shell pipeline, has the option to run the function var inside a subprocess -- and, in practice, behaves this way in bash. (The POSIX sh specification leaves the details of which pipeline components, if any, run inside the parent shell undefined).
Avoiding this is as simple as not using a pipeline.
var > >(tee "$LOGV") 2>&1
...uses process substitution (a ksh extension adopted by bash, not present in POSIX sh) to represent the tee subprocess through a filename (in the form /dev/fd/## on modern Linux) to which output can be redirected without moving the function into a pipeline.
If you want to ensure that tee exits before other commands run, use a lock:
#!/bin/bash
logv=/tmp/log
collect_var() {
echo "value for var:"
read -re var
}
collect_var > >(logv="$logv" flock "$logv" -c 'exec tee "$logv"') 2>&1
flock "$logv" -c true # wait for tee to exit
echo "This is var: $var"
Incidentally, if you want to run multiple commands with their output being piped in this way, you should invoke the tee only once, and feed into it as appropriate:
#!/bin/bash
logv=/tmp/log
collect_var() { echo "value for var:"; read -re var; }
exec 3> >(logv="$logv" flock "$logv" -c 'exec tee "$logv"') # open output to log
collect_var >&3 2>&3 # run function, sending stdout/stderr to log
echo "This is var: $var" >&3 # ...and optionally run other commands the same way
exec 3>&- # close output
flock "$logv" -c true # ...and wait for tee to finish flushing and exit.

Force `tee` to run for every command in a shell script?

I would like to have a script wherein all commands are tee'd to a log file.
Right now I am running every command in the script thusly:
<command> | tee -a $LOGFILE
Is there a way to force every command in a shell script to pipe to tee?
I cannot force users to add appropriate teeing when running the script, and want to ensure it logs properly even if the calling user doesn't add a logging call of their own.
You can do a wrapper inside your script:
#!/bin/bash
{
echo 'hello'
some_more_commands
echo 'goodbye'
} | tee -a /path/to/logfile
Edit:
Here's another way:
#!/bin/bash
exec > >(tee -a /path/to/logfile)
echo 'hello'
some_more_commands
echo 'goodbye'
Why not expose a wrapper that's simply:
/path/to/yourOriginalScript.sh | tee -a $LOGFILE
Your users should not execute (nor even know about) yourOriginalScript.sh.
Assuming that your script doesn't take a --tee argument, you can do this (if you do use that argument, just replace --tee below with an argument you don't use):
#!/bin/bash
if [ -z "$1" ] || [ "$1" != --tee ]; then
$0 --tee "$#" | tee $LOGFILE
exit $?
else
shift
fi
# rest of script follows
This just has the script re-run itself, using the special argument --tee to prevent infinite recursion, piping its output into tee.
One approach would be to create a runner script "run_it" through which all users invoke their own scripts.
run_it my_script
All the magic would be done within it; e.g., it could look like this:
LOG_DIR=/var/log/
$@ | tee -a $LOG_DIR/
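A fleshed-out sketch of such a runner; deriving the log name from the wrapped script's basename is my own assumption:
#!/bin/bash
log_dir=/var/log
script_name=$(basename "$1")
# run the user's script with its arguments, appending all output to its own log
"$@" | tee -a "$log_dir/$script_name.log"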

How do I write standard error to a file while using "tee" with a pipe?

I know how to use tee to write the output (standard output) of aaa.sh to bbb.out, while still displaying it in the terminal:
./aaa.sh | tee bbb.out
How would I now also write standard error to a file named ccc.out, while still having it displayed?
I'm assuming you want to still see standard error and standard output on the terminal. You could go for Josh Kelley's answer, but I find keeping a tail around in the background which outputs your log file very hackish and kludgy. Notice how you need to keep an extra file descriptor and do cleanup afterward by killing it, and technically you should be doing that in a trap '...' EXIT.
There is a better way to do this, and you've already discovered it: tee.
Only, instead of just using it for your standard output, have a tee for standard output and one for standard error. How will you accomplish this? Process substitution and file redirection:
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Let's split it up and explain:
> >(..)
>(...) (process substitution) creates a FIFO and lets tee listen on it. Then, it uses > (file redirection) to redirect the standard output of command to the FIFO that your first tee is listening on.
The same thing for the second:
2> >(tee -a stderr.log >&2)
We use process substitution again to make a tee process that reads from standard input and dumps it into stderr.log. tee outputs its input back on standard output, but since its input is our standard error, we want to redirect tee's standard output to our standard error again. Then we use file redirection to redirect command's standard error to the FIFO's input (tee's standard input).
See Input And Output
Process substitution is one of those really lovely things you get as a bonus of choosing Bash as your shell as opposed to sh (POSIX or Bourne).
In sh, you'd have to do things manually:
out="${TMPDIR:-/tmp}/out.$$" err="${TMPDIR:-/tmp}/err.$$"
mkfifo "$out" "$err"
trap 'rm "$out" "$err"' EXIT
tee -a stdout.log < "$out" &
tee -a stderr.log < "$err" >&2 &
command >"$out" 2>"$err"
Simply:
./aaa.sh 2>&1 | tee -a log
This simply redirects standard error to standard output, so tee echoes both to log and to the screen. Maybe I'm missing something, because some of the other solutions seem really complicated.
Note: Since Bash version 4 you may use |& as an abbreviation for 2>&1 |:
./aaa.sh |& tee -a log
This may be useful for people finding this via Google. Simply uncomment the example you want to try out. Of course, feel free to rename the output files.
#!/bin/bash
STATUSFILE=x.out
LOGFILE=x.log
### All output to screen
### Do nothing, this is the default
### All Output to one file, nothing to the screen
#exec > ${LOGFILE} 2>&1
### All output to one file and all output to the screen
#exec > >(tee ${LOGFILE}) 2>&1
### All output to one file, STDOUT to the screen
#exec > >(tee -a ${LOGFILE}) 2> >(tee -a ${LOGFILE} >/dev/null)
### All output to one file, STDERR to the screen
### Note you need both of these lines for this to work
#exec 3>&1
#exec > >(tee -a ${LOGFILE} >/dev/null) 2> >(tee -a ${LOGFILE} >&3)
### STDOUT to STATUSFILE, stderr to LOGFILE, nothing to the screen
#exec > ${STATUSFILE} 2>${LOGFILE}
### STDOUT to STATUSFILE, stderr to LOGFILE and all output to the screen
#exec > >(tee ${STATUSFILE}) 2> >(tee ${LOGFILE} >&2)
### STDOUT to STATUSFILE and screen, STDERR to LOGFILE
#exec > >(tee ${STATUSFILE}) 2>${LOGFILE}
### STDOUT to STATUSFILE, STDERR to LOGFILE and screen
#exec > ${STATUSFILE} 2> >(tee ${LOGFILE} >&2)
echo "This is a test"
ls -l sdgshgswogswghthb_this_file_will_not_exist_so_we_get_output_to_stderr_aronkjegralhfaff
ls -l ${0}
In other words, you want to pipe stdout into one filter (tee bbb.out) and stderr into another filter (tee ccc.out). There is no standard way to pipe anything other than stdout into another command, but you can work around that by juggling file descriptors.
{ { ./aaa.sh | tee bbb.out; } 2>&1 1>&3 | tee ccc.out; } 3>&1 1>&2
See also How to grep standard error stream (stderr)? and When would you use an additional file descriptor?
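The same descriptor juggling spread over several lines with comments, as a sketch:
{
  {
    ./aaa.sh | tee bbb.out    # aaa.sh's stdout goes through the first tee
  } 2>&1 1>&3 |               # swap: stderr now feeds the pipe; stdout is parked on fd 3
    tee ccc.out               # the second tee logs what was originally stderr
} 3>&1 1>&2                   # unswap: fd 3 back to stdout, tee's copy back to stderr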
In bash (and ksh and zsh), but not in other POSIX shells such as dash, you can use process substitution:
./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out)
Beware that in bash, this command returns as soon as ./aaa.sh finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses). This may be a problem if you do something like ./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out); process_logs bbb.out ccc.out. In that case, use file descriptor juggling or ksh/zsh instead.
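If you need to wait in bash, my recollection is that bash 4.4 and newer expose the last process substitution's PID in $!, so you can at least wait for that one:
./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out)
wait $!   # waits only for the most recent process substitution (the stderr tee here)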
To redirect standard error to a file, display standard output to the screen, and also save standard output to a file:
./aaa.sh 2>ccc.out | tee ./bbb.out
To display both standard error and standard output to screen and also save both to a file, you can use Bash's I/O redirection:
#!/bin/bash
# Create a new file descriptor 4, pointed at the file
# which will receive standard error.
exec 4<>ccc.out
# Also print the contents of this file to screen.
tail -f ccc.out &
# Run the command; tee standard output as normal, and send standard error
# to our file descriptor 4.
./aaa.sh 2>&4 | tee bbb.out
# Clean up: Close file descriptor 4 and kill tail -f.
exec 4>&-
kill %1
If using Bash:
# Redirect standard out and standard error separately
% cmd >stdout-redirect 2>stderr-redirect
# Redirect standard error and out together
% cmd >stdout-redirect 2>&1
# Merge standard error with standard out and pipe
% cmd 2>&1 |cmd2
Credit (not answering from the top of my head) goes here: Re: bash : stderr & more (pipe for stderr)
If you're using Z shell (zsh), you can use multiple redirections, so you don't even need tee:
./cmd 1>&1 2>&2 1>out_file 2>err_file
Here you're simply redirecting each stream to itself and the target file.
Full example
% (echo "out"; echo "err">/dev/stderr) 1>&1 2>&2 1>/tmp/out_file 2>/tmp/err_file
out
err
% cat /tmp/out_file
out
% cat /tmp/err_file
err
Note that this requires the MULTIOS option to be set (which is the default).
MULTIOS
Perform implicit tees or cats when multiple redirections are attempted (see Redirection).
Like the accepted answer, well explained by lhunath, you can use
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Beware that if you use bash you could have some issues.
Let me take the matthew-wilcoxson example.
And for those who "seeing is believing", a quick test:
(echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
Personally, when I try, I have this result:
user#computer:~$ (echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
user#computer:~$ Test Out
Test Err
Both messages do not appear at the same level. Why does Test Out appear as if it came from my previous command?
The prompt is on a blank line, letting me think the process is not finished, and when I press Enter this fixes it.
When I check the content of the files, it is ok, and redirection works.
Let’s take another test.
function outerr() {
echo "out" # stdout
echo >&2 "err" # stderr
}
user@computer:~$ outerr
out
err
user@computer:~$ outerr >/dev/null
err
user@computer:~$ outerr 2>/dev/null
out
Trying the redirection again, but with this function:
function test_redirect() {
fout="stdout.log"
ferr="stderr.log"
echo "$ outerr"
(outerr) > >(tee "$fout") 2> >(tee "$ferr" >&2)
echo "# $fout content: "
cat "$fout"
echo "# $ferr content: "
cat "$ferr"
}
Personally, I have this result:
user@computer:~$ test_redirect
$ outerr
# stdout.log content:
out
out
err
# stderr.log content:
err
user@computer:~$
No prompt on a blank line, but I don't see normal output. The stdout.log content seems to be wrong, and only stderr.log seems to be OK.
If I relaunch it, the output can be different...
So, why?
Because, as explained here:
Beware that in bash, this command returns as soon as [first command] finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses)
So, if you use Bash, prefer to use the better example given in this other answer:
{ { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
It will fix the previous issues.
Now, the question is: how to retrieve the exit status code?
$? does not work.
I have not found a better solution than switching on pipefail with set -o pipefail (set +o pipefail to switch it off) and using ${PIPESTATUS[0]} like this:
function outerr() {
echo "out"
echo >&2 "err"
return 11
}
function test_outerr() {
local - # To preserve set option
! [[ -o pipefail ]] && set -o pipefail; # Or use second part directly
local fout="stdout.log"
local ferr="stderr.log"
echo "$ outerr"
{ { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
# First save the status or it will be lost
local status="${PIPESTATUS[0]}" # Save first, the second is 0, perhaps tee status code.
echo "==="
echo "# $fout content :"
echo "<==="
cat "$fout"
echo "===>"
echo "# $ferr content :"
echo "<==="
cat "$ferr"
echo "===>"
if (( status > 0 )); then
echo "Fail $status > 0"
return "$status" # or whatever
fi
}
user@computer:~$ test_outerr
$ outerr
err
out
===
# stdout.log content:
<===
out
===>
# stderr.log content:
<===
err
===>
Fail 11 > 0
In my case, a script was running a command while redirecting both stdout and stderr to a file, something like:
cmd > log 2>&1
I needed to update it such that when there is a failure, take some actions based on the error messages. I could of course remove the dup 2>&1 and capture the stderr from the script, but then the error messages won't go into the log file for reference. While the accepted answer from lhunath is supposed to do the same, it redirects stdout and stderr to different files, which is not what I want, but it helped me to come up with the exact solution that I need:
(cmd 2> >(tee /dev/stderr)) > log
With the above, log will have a copy of both stdout and stderr and I can capture stderr from my script without having to worry about stdout.
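A quick check of this construct with a stand-in command:
( sh -c 'echo out; echo err >&2' 2> >(tee /dev/stderr) ) > log
# the terminal shows "err" (on stderr); log ends up with both "out" and "err"
cat log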
The following will work for KornShell (ksh), where process substitution is not available:
# create a combined (standard output and standard error) collector
exec 3<> combined.log
# stream standard error instead of standard output to tee, while draining all standard output to the collector
./aaa.sh 2>&1 1>&3 | tee -a stderr.log 1>&3
# cleanup collector
exec 3>&-
The real trick here is the sequence 2>&1 1>&3, which in our case redirects the standard error to standard output and redirects the standard output to file descriptor 3. At this point the standard error and standard output are not combined yet.
In effect, the standard error (as standard input) is passed to tee where it logs to stderr.log and also redirects to file descriptor 3.
And file descriptor 3 is logging it to combined.log all the time. So the combined.log contains both standard output and standard error.
Thanks lhunath for the answer in POSIX.
Here's a more complex situation I needed in POSIX with the proper fix:
# Start script main() function
# - We redirect standard output to file_out AND terminal
# - We redirect standard error to file_err, file_out AND terminal
# - Terminal and file_out have both standard output and standard error, while file_err only holds standard error
main() {
# my main function
}
log_path="/my_temp_dir"
pfout_fifo="${log_path:-/tmp}/pfout_fifo.$$"
pferr_fifo="${log_path:-/tmp}/pferr_fifo.$$"
mkfifo "$pfout_fifo" "$pferr_fifo"
trap 'rm "$pfout_fifo" "$pferr_fifo"' EXIT
tee -a "file_out" < "$pfout_fifo" &
tee -a "file_err" < "$pferr_fifo" >>"$pfout_fifo" &
main "$#" >"$pfout_fifo" 2>"$pferr_fifo"; exit
Compilation errors which are sent to standard error (STDERR) can be redirected or saved to a file by:
Bash:
gcc temp.c &> error.log
C shell (csh):
% gcc temp.c |& tee error.log
See: How can I redirect compilation/build error to a file?
