Bash output to screen and logfile differently - linux

I have been trying to get a bash script to output different things to the terminal and to a logfile, but I am unsure of what command to use.
For example,
#!/bin/bash
freespace=$(df -h / | grep -E "/" | awk '{print $4}')
greentext="\033[32m"
bold="\033[1m"
normal="\033[0m"
logdate=$(date +"%Y%m%d")
logfile="$logdate"_report.log
exec > >(tee -i $logfile)
echo -e $bold"Quick system report for "$greentext"$HOSTNAME"$normal
printf "\tSystem type:\t%s\n" $MACHTYPE
printf "\tBash Version:\t%s\n" $BASH_VERSION
printf "\tFree Space:\t%s\n" $freespace
printf "\tFiles in dir:\t%s\n" $(ls | wc -l)
printf "\tGenerated on:\t%s\n" $(date +"%m/%d/%y") # US date format
echo -e $greentext"A summary of this info has been saved to $logfile"$normal
I want to omit the last output (echo "A summary...") from the logfile while still displaying it in the terminal. Is there a command to do so? It would be great if a general solution could be provided rather than a specific one, because I want to apply this to other scripts.
EDIT 1 (after applying >&6):
Files in dir: 7
A summary of this info has been saved to 20160915_report.log
Generated on: 09/15/16

One option:
exec 6>&1 # save the existing stdout
exec > >(tee -i $logfile) # like you had it
#... all your outputs
echo -e $greentext"A summary of this info has been saved to $logfile"$normal >&6
# writes to the original stdout, saved in file descriptor 6 ------------^^^
The >&6 sends echo's output to the saved file descriptor 6 (the terminal, if you're running this from an interactive shell) rather than to the output path set up by tee (which is on file descriptor 1). Tested on bash 4.3.46.
References: "Using exec" and "I/O Redirection"
Edit: As the OP found, the >&6 message is not guaranteed to appear after the lines printed by tee off stdout. One option is to use script, e.g., as in the answers to this question, instead of tee, and then print the final message outside of the script session. Per the docs, the stdbuf answers to that question won't work with tee.
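For illustration, a minimal sketch of that script-based approach, assuming the util-linux script command and that the report commands live in a hypothetical separate file report.sh (note that script may add its own header lines to the logfile):
script -q -c ./report.sh "$logfile"   # report output goes to both the terminal and $logfile, in order
echo -e "${greentext}A summary of this info has been saved to $logfile${normal}"   # terminal only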
Try a dirty hack:
#... all your outputs
echo >&6 # <-- New line
echo -e $greentext ... >&6
Or, equally hackish (note that, per the OP, this worked):
#... all your outputs
sleep 0.25s # or whatever time you want <-- New line
echo -e ... >&6

Related

Log command line plus its output in a bash script

Is there a way for a script to log both the command line being run (including piped ones) and its output, without duplicating the command line?
The intention is that the script should have clean output, but should log verbosely into a log file (so no set -x). Apart from the output, it should also log the command line causing that output, which could be a piped one-liner.
The most basic approach is to duplicate the command line in the script and then dump it into the log followed by the captured output of the actual command being run:
echo "command argument1 \"quoted argument2\" | grep -oE \"some output\"" >> file.log
output="$(command argument1 "quoted argument2" 2>&1 | grep -oE "some output")"
echo "${output}" >> file.log
This has the side effect that quoted sections would need to be escaped for the log, which can lead to errors resulting in confusion.
If none of the commands were piped, one could store the command line in an array and then "run" the array.
command=(command argument1 "quoted argument2")
echo "${command[#]}" >> file.log
output="$("${command[#]}" 2>&1)"
echo "${output}" >> file.log
Though with this approach "quoted argument2" would become quoted argument2 in the log.
Is there a way (in bash) to realize this without having to duplicate the commands?
You could play with redirections, switch the x option on and off on demand, unset PS4 to get rid of the leading + , and define log_on and log_off functions for easier coding. Something like this:
$ cat script.sh
#!/usr/bin/env bash
function log_on {
exec 3>&1 4>&2
exec &> >( sed -E '/^(set \+x|log_off)$/d' >> file.log )
ps4=$PS4
PS4=
set -x
}
function log_off {
set +x
exec 1>&3 2>&4
PS4=$ps4
}
echo something not logged
log_on
echo something logged
log_off
echo something else not logged
$ rm -f file.log
$ ./script.sh
something not logged
something else not logged
$ cat file.log
echo something logged
something logged
The exec <redirection> commands look a bit cryptic (as most redirections) but they are rather simple:
exec 3>&1 4>&2 makes copies of file descriptors fd1 and fd2 (stdout and stderr by default) to be able to restore these in log_off. After this fd3 and fd4 are copies of fd1 and fd2, respectively. Pick other fd than 3 or 4 if you already use them.
exec &> >( sed ... ) redirect fd1 and fd2 to the standard input of a sed command.
The sed command sed -E '/^(set \+x|log_off)$/d' >> file.log deletes lines containing only set +x or log_off and appends its output to file.log. Without this sed command you would always see the two following lines:
log_off
set +x
in your logs, after a group of logged commands.
exec 1>&3 2>&4 restores fd1 and fd2 from their copies in fd3 and fd4.
The rest is straightforward: save PS4 in ps4 such that it can be restored, enable/disable the x option. This should be easy to adapt or extend if needed.
The x option traces each simple command separately; a pipeline, for instance, is broken up into its component commands in the trace. If you prefer a command log that looks more like the commands you wrote, you can replace set -/+x with set -/+v.
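A rough illustration of the difference, assuming the log_on/log_off setup above (data.txt is a hypothetical input file):
log_on
grep -c foo data.txt | tr -d '\n'
log_off
# With set -x the log shows the pipeline as two separately traced commands:
#     grep -c foo data.txt
#     tr -d '\n'
# With set -v it shows the single line exactly as written:
#     grep -c foo data.txt | tr -d '\n'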
IMHO this has already been answered here:
For simplicity, the set shell builtin is what you need.
set -x or set -v

Different results when running commands in braces within a bash script

I was editing a script and, as it was getting a bit long, I decided to enclose the main part of the script in braces and divert the output to a log file, instead of having individual log redirects for each command. Then I noticed that a command block that checks for a running copy of the script gives two different results depending on whether it is enclosed in braces.
I run the script as:
$ /bin/bash scriptname.bash
My question is why the same command block returns two different results, and whether it is possible to have the command block work inside the braces.
Below is the command block:
#!/bin/bash
#set -x # Uncomment to debug this shell script
#
##########################################################
# DEFINE FILES AND VARIABLES HERE
##########################################################
THIS_SCRIPT=$(basename $0)
TIMESTAMP=$(date +%Y-%m-%d_%H%M%S)
LOGFILE=process_check_$TIMESTAMP.log
##########################################################
# BEGINNING OF MAIN
##########################################################
{
printf "%s\n" "Checking for currently runnning versions of this script"
MYPID=$$ # Capture this scripts PID
MYOTHERPROCESSES=$(ps -ef | \grep $THIS_SCRIPT | \grep -v $MYPID | \grep -v grep | awk '{print $2}')
if [[ "$MYOTHERPROCESSES" != "" ]]
then
printf "%s\n" "ERROR: Another version of this script is running...exiting!"
exit 2
else
printf "%s\n" "No other versions running...proceeding"
fi
printf "%s\n" "Doing some script stuff..."
exit 0
} | tee -a $LOGFILE 2>&1
# End of script
This is not due to the braces; it is due to the pipe.
When you combine commands with a pipe, like command | tee, each side of the pipe is executed in a separate sub-process. The commands inside the braces are therefore executed in a sub-shell, and it is this sub-shell that you detect.
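A small illustration of that extra process, using Bash's BASHPID (which gives the PID of the current bash process, while $$ keeps the parent script's PID):
{
    echo "\$\$=$$  BASHPID=$BASHPID"   # inside the piped braces: BASHPID differs from $$
} | cat
echo "\$\$=$$  BASHPID=$BASHPID"       # outside the pipe: the two values match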
PS: avoid constructs like ps | grep -v grep, use pidof or pgrep instead
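A hedged sketch of the pgrep-based check (assuming it is run before the piped braces, so that $$ is the PID that will show up in the process list; pgrep -f matches against the full command line):
MYOTHERPROCESSES=$(pgrep -f "$THIS_SCRIPT" | grep -v "^$$\$")
if [ -n "$MYOTHERPROCESSES" ]; then
    printf "%s\n" "ERROR: Another version of this script is running...exiting!"
    exit 2
fi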

In Bash, how to not create the redirect output file once the command fails

Usually we may redirect a command output to a file, as following:
cat a.txt >> output.txt
As I tried it, if cat fails, output.txt is still created, which isn't what I expected. I know I could test like this:
if [ "$?" -ne 0 ]; then
rm output.txt
fi
But this causes problems when output.txt already exists prior to my cat execution.
So I would also need to record the state of output.txt before the cat: if output.txt already existed before the cat execution, I should not rm it by mistake... and there may still be a race condition: what if some other process creates output.txt just before my cat runs?
So is there any simple way to ensure that, if the command fails, output.txt is removed, or better, never created?
Fixed output file names are bad news; don't use them.
You should probably redesign the processing so that you have a date-stamped file name. Failing that, you should use the mktemp command to create a temporary file, have the command you want executed write to that, and when the command is successful, you can move the temporary to the 'final' output — and you can automatically clean up the temporary on failure.
outfile="./output-$(date +%Y-%m-%d.%H:%M:%S).txt"
tmpfile="$(mktemp ./gadget-maker.XXXXXXXX)"
trap "rm -f '$tmpfile'; exit 1" 0 1 2 3 13 15
if cat a.txt > "$tmpfile"
then mv "$tmpfile" "$outfile"
else rm "$tmpfile"
fi
trap 0
You can simplify the outfile to output.txt if you insist (but it isn't safe). You can use any prefix you like with the mktemp command. Note that by creating the temporary file in the current directory, where the final output file will be created too, you avoid cross-device file copying at the mv phase of operations — it is a link() and an unlink() system call (or maybe even a rename() system call if such a thing exists on your machine; it does on Mac OS X) only.
You can't tell that the command has failed until it terminates, and by then it might have produced some output.
Probably a more useful condition is to avoid creating the output file until the command actually produces some output, and not worry about its status code.
This comes close:
command | { IFS= read -rn1 -d '' a &&
  { printf %s "$a" >> output.txt
    cat >> output.txt
  }
}
However, if the first character output by command is a NUL byte, the NUL won't be written to the output file. Since the extension of the output file is .txt, that's unlikely in this particular case, but it could be handled by adding the command
[[ -z $a ]] && printf '\0' >> output.txt
after the printf and before the cat.
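Putting the pieces together, the whole thing would look roughly like this:
command | { IFS= read -rn1 -d '' a &&
  { printf %s "$a" >> output.txt
    [[ -z $a ]] && printf '\0' >> output.txt
    cat >> output.txt
  }
}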
I think this will work; check this out.
[ -e output.txt ] && (mv output.txt output.txt_bkp)
cat a.txt > /dev/null 2>&1; [ $? -eq 0 ] && (cat a.txt > output.txt)
Another way, as suggested by Jonathan:
[ -e output.txt ] && (mv output.txt output.txt_bkp)
if cat a.txt > /dev/null 2>&1
then
cat a.txt > output.txt
fi

Bash script does not continue to read the next line of file

I have a shell script that saves the output of each command it executes to a CSV file. It reads the commands it has to execute from a file in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands, but it was missing half of them for some reason. I edited the text file that contained the commands, and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once line is read it will run processLine() function
#Function processLine
processLine(){
line="$#"
START=$(date +%s.%N)
eval $line > /dev/null 2>&1
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
echo "It took $DIFF seconds"
echo $line
}
# Store file name
FILE=""
# get file name as command line argument
# Else read it from standard input device
if [ "$1" == "" ]; then
FILE="/dev/stdin"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b", as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$#" | while read line; do
echo "Executing '$line'"
START=$(date +%s)
eval $line &> /dev/null
END=$(date +%s)
let DIFF=$END-$START
echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
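A minimal sketch of the fix applied to a read loop like the one in the question (commands.txt is a hypothetical file holding one ffmpeg command per line):
while read -r line; do
    eval "$line" </dev/null >/dev/null 2>&1   # </dev/null keeps ffmpeg from consuming the loop's stdin
done < commands.txt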
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line.
2) Since the script will now not wait for the ffmpeg process to complete, we have to prevent our script from starting several ffmpeg processes. We achieve this by delaying the next loop pass while there is at least one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
<place your ffmpeg command line here> &
FFMPEGStillRunning="true"
while [ "$FFMPEGStillRunning" = "true" ]; do
Process=$(ps -C ffmpeg | grep -o -e "ffmpeg" )
if [ -n "$Process" ]; then
FFMPEGStillRunning="true"
else
FFMPEGStillRunning="false"
fi
sleep 2s
done
done
I would add echoes before and after the eval: one before, to see what it's about to eval (in case it's treating the whole file as one big long line), and one after (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
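A small illustration of that splitting rule:
read -r var1 var2 var3 <<< "alpha beta gamma delta"
echo "$var1"   # alpha
echo "$var2"   # beta
echo "$var3"   # gamma delta  (the rest of the line)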
Inside the processLine function, you probably don't want to throw away error output from the executed command, and you probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the errors. Is the command 'chatty'? If so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
START=$(date +%s.%N)
eval "$#" > /dev/null
STATUS=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
echo "${DIFF}s: $STATUS: $line"
}
cat "$#" |
while read line
do
processLine "$line"
done

How do I write standard error to a file while using "tee" with a pipe?

I know how to use tee to write the output (standard output) of aaa.sh to bbb.out, while still displaying it in the terminal:
./aaa.sh | tee bbb.out
How would I now also write standard error to a file named ccc.out, while still having it displayed?
I'm assuming you want to still see standard error and standard output on the terminal. You could go for Josh Kelley's answer, but I find keeping a tail around in the background which outputs your log file very hackish and kludgy. Notice how you need to keep an extra file descriptor around and do cleanup afterward by killing it, and technically you should be doing that in a trap '...' EXIT.
There is a better way to do this, and you've already discovered it: tee.
Only, instead of just using it for your standard output, have a tee for standard output and one for standard error. How will you accomplish this? Process substitution and file redirection:
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Let's split it up and explain:
> >(..)
>(...) (process substitution) creates a FIFO and lets tee listen on it. Then, it uses > (file redirection) to redirect the standard output of command to the FIFO that your first tee is listening on.
The same thing for the second:
2> >(tee -a stderr.log >&2)
We use process substitution again to make a tee process that reads from standard input and dumps it into stderr.log. tee outputs its input back on standard output, but since its input is our standard error, we want to redirect tee's standard output to our standard error again. Then we use file redirection to redirect command's standard error to the FIFO's input (tee's standard input).
See Input And Output
Process substitution is one of those really lovely things you get as a bonus of choosing Bash as your shell as opposed to sh (POSIX or Bourne).
In sh, you'd have to do things manually:
out="${TMPDIR:-/tmp}/out.$$" err="${TMPDIR:-/tmp}/err.$$"
mkfifo "$out" "$err"
trap 'rm "$out" "$err"' EXIT
tee -a stdout.log < "$out" &
tee -a stderr.log < "$err" >&2 &
command >"$out" 2>"$err"
Simply:
./aaa.sh 2>&1 | tee -a log
This simply redirects standard error to standard output, so tee echoes both to log and to the screen. Maybe I'm missing something, because some of the other solutions seem really complicated.
Note: Since Bash version 4 you may use |& as an abbreviation for 2>&1 |:
./aaa.sh |& tee -a log
This may be useful for people finding this via Google. Simply uncomment the example you want to try out. Of course, feel free to rename the output files.
#!/bin/bash
STATUSFILE=x.out
LOGFILE=x.log
### All output to screen
### Do nothing, this is the default
### All Output to one file, nothing to the screen
#exec > ${LOGFILE} 2>&1
### All output to one file and all output to the screen
#exec > >(tee ${LOGFILE}) 2>&1
### All output to one file, STDOUT to the screen
#exec > >(tee -a ${LOGFILE}) 2> >(tee -a ${LOGFILE} >/dev/null)
### All output to one file, STDERR to the screen
### Note you need both of these lines for this to work
#exec 3>&1
#exec > >(tee -a ${LOGFILE} >/dev/null) 2> >(tee -a ${LOGFILE} >&3)
### STDOUT to STATUSFILE, stderr to LOGFILE, nothing to the screen
#exec > ${STATUSFILE} 2>${LOGFILE}
### STDOUT to STATUSFILE, stderr to LOGFILE and all output to the screen
#exec > >(tee ${STATUSFILE}) 2> >(tee ${LOGFILE} >&2)
### STDOUT to STATUSFILE and screen, STDERR to LOGFILE
#exec > >(tee ${STATUSFILE}) 2>${LOGFILE}
### STDOUT to STATUSFILE, STDERR to LOGFILE and screen
#exec > ${STATUSFILE} 2> >(tee ${LOGFILE} >&2)
echo "This is a test"
ls -l sdgshgswogswghthb_this_file_will_not_exist_so_we_get_output_to_stderr_aronkjegralhfaff
ls -l ${0}
In other words, you want to pipe stdout into one filter (tee bbb.out) and stderr into another filter (tee ccc.out). There is no standard way to pipe anything other than stdout into another command, but you can work around that by juggling file descriptors.
{ { ./aaa.sh | tee bbb.out; } 2>&1 1>&3 | tee ccc.out; } 3>&1 1>&2
See also How to grep standard error stream (stderr)? and When would you use an additional file descriptor?
In bash (and ksh and zsh), but not in other POSIX shells such as dash, you can use process substitution:
./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out)
Beware that in bash, this command returns as soon as ./aaa.sh finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses). This may be a problem if you do something like ./aaa.sh > >(tee bbb.out) 2> >(tee ccc.out); process_logs bbb.out ccc.out. In that case, use file descriptor juggling or ksh/zsh instead.
To redirect standard error to a file, display standard output to the screen, and also save standard output to a file:
./aaa.sh 2>ccc.out | tee ./bbb.out
To display both standard error and standard output to screen and also save both to a file, you can use Bash's I/O redirection:
#!/bin/bash
# Create a new file descriptor 4, pointed at the file
# which will receive standard error.
exec 4<>ccc.out
# Also print the contents of this file to screen.
tail -f ccc.out &
# Run the command; tee standard output as normal, and send standard error
# to our file descriptor 4.
./aaa.sh 2>&4 | tee bbb.out
# Clean up: Close file descriptor 4 and kill tail -f.
exec 4>&-
kill %1
If using Bash:
# Redirect standard out and standard error separately
% cmd >stdout-redirect 2>stderr-redirect
# Redirect standard error and out together
% cmd >stdout-redirect 2>&1
# Merge standard error with standard out and pipe
% cmd 2>&1 |cmd2
Credit (not answering from the top of my head) goes here: Re: bash : stderr & more (pipe for stderr)
If you're using Z shell (zsh), you can use multiple redirections, so you don't even need tee:
./cmd 1>&1 2>&2 1>out_file 2>err_file
Here you're simply redirecting each stream to itself and the target file.
Full example
% (echo "out"; echo "err">/dev/stderr) 1>&1 2>&2 1>/tmp/out_file 2>/tmp/err_file
out
err
% cat /tmp/out_file
out
% cat /tmp/err_file
err
Note that this requires the MULTIOS option to be set (which is the default).
MULTIOS
Perform implicit tees or cats when multiple redirections are attempted (see Redirection).
As in the accepted answer, well explained by lhunath, you can use
command > >(tee -a stdout.log) 2> >(tee -a stderr.log >&2)
Beware that if you use Bash you could run into some issues.
Let me take matthew-wilcoxson's example.
And for those for whom "seeing is believing", a quick test:
(echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
Personally, when I try, I have this result:
user@computer:~$ (echo "Test Out";>&2 echo "Test Err") > >(tee stdout.log) 2> >(tee stderr.log >&2)
user@computer:~$ Test Out
Test Err
The two messages do not appear at the same level. Why does Test Out appear as if it were part of my previous command?
The prompt is on a blank line, which makes me think the process is not finished, and pressing Enter fixes this.
When I check the content of the files, they are OK, and the redirection works.
Let’s take another test.
function outerr() {
echo "out" # stdout
echo >&2 "err" # stderr
}
user@computer:~$ outerr
out
err
user@computer:~$ outerr >/dev/null
err
user@computer:~$ outerr 2>/dev/null
out
Trying the redirection again, but with this function:
function test_redirect() {
fout="stdout.log"
ferr="stderr.log"
echo "$ outerr"
(outerr) > >(tee "$fout") 2> >(tee "$ferr" >&2)
echo "# $fout content: "
cat "$fout"
echo "# $ferr content: "
cat "$ferr"
}
Personally, I have this result:
user@computer:~$ test_redirect
$ outerr
# stdout.log content:
out
out
err
# stderr.log content:
err
user@computer:~$
No prompt on a blank line, but I don't see the normal output: the stdout.log content seems to be wrong, and only stderr.log seems to be OK.
If I relaunch it, the output can be different...
So, why?
Because, as explained here:
Beware that in bash, this command returns as soon as [first command] finishes, even if the tee commands are still executed (ksh and zsh do wait for the subprocesses)
So, if you use Bash, prefer use the better example given in this other answer:
{ { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
It will fix the previous issues.
Now, the question is: how do you retrieve the exit status code?
$? does not work.
I have found no better solution than switching on pipefail with set -o pipefail (set +o pipefail to switch it off) and using ${PIPESTATUS[0]}, like this:
function outerr() {
echo "out"
echo >&2 "err"
return 11
}
function test_outerr() {
local - # To preserve set option
! [[ -o pipefail ]] && set -o pipefail; # Or use second part directly
local fout="stdout.log"
local ferr="stderr.log"
echo "$ outerr"
{ { outerr | tee "$fout"; } 2>&1 1>&3 | tee "$ferr"; } 3>&1 1>&2
# First save the status or it will be lost
local status="${PIPESTATUS[0]}" # Save first, the second is 0, perhaps tee status code.
echo "==="
echo "# $fout content :"
echo "<==="
cat "$fout"
echo "===>"
echo "# $ferr content :"
echo "<==="
cat "$ferr"
echo "===>"
if (( status > 0 )); then
echo "Fail $status > 0"
return "$status" # or whatever
fi
}
user@computer:~$ test_outerr
$ outerr
err
out
===
# stdout.log content:
<===
out
===>
# stderr.log content:
<===
err
===>
Fail 11 > 0
In my case, a script was running a command while redirecting both stdout and stderr to a file, something like:
cmd > log 2>&1
I needed to update it such that when there is a failure, some actions are taken based on the error messages. I could of course remove the dup 2>&1 and capture stderr from the script, but then the error messages wouldn't go into the log file for reference. While the accepted answer from lhunath is supposed to do the same, it redirects stdout and stderr to different files, which is not what I want, but it helped me come up with the exact solution I needed:
(cmd 2> >(tee /dev/stderr)) > log
With the above, log will have a copy of both stdout and stderr and I can capture stderr from my script without having to worry about stdout.
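A hedged usage sketch of that pattern (mycmd and log are placeholder names): wrapping the whole thing in a group whose stderr is captured by a command substitution gives the script the stderr text while log still receives both streams.
errors=$( { (mycmd 2> >(tee /dev/stderr)) > log; } 2>&1 )
if [ -n "$errors" ]; then
    echo "mycmd reported errors (full output is in log):" >&2
    printf '%s\n' "$errors" >&2
fi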
The following will work for KornShell (ksh), where process substitution is not available:
# create a combined (standard output and standard error) collector
exec 3<> combined.log
# stream standard error instead of standard output to tee, while draining all standard output to the collector
./aaa.sh 2>&1 1>&3 | tee -a stderr.log 1>&3
# cleanup collector
exec 3>&-
The real trick here is the sequence 2>&1 1>&3, which in our case redirects standard error to standard output and redirects standard output to file descriptor 3. At this point standard error and standard output are not combined yet.
In effect, the standard error (as standard input) is passed to tee where it logs to stderr.log and also redirects to file descriptor 3.
And file descriptor 3 is logging it to combined.log all the time. So the combined.log contains both standard output and standard error.
Thanks lhunath for the answer in POSIX.
Here's a more complex situation I needed in POSIX with the proper fix:
# Start script main() function
# - We redirect standard output to file_out AND terminal
# - We redirect standard error to file_err, file_out AND terminal
# - Terminal and file_out have both standard output and standard error, while file_err only holds standard error
main() {
# my main function
}
log_path="/my_temp_dir"
pfout_fifo="${log_path:-/tmp}/pfout_fifo.$$"
pferr_fifo="${log_path:-/tmp}/pferr_fifo.$$"
mkfifo "$pfout_fifo" "$pferr_fifo"
trap 'rm "$pfout_fifo" "$pferr_fifo"' EXIT
tee -a "file_out" < "$pfout_fifo" &
tee -a "file_err" < "$pferr_fifo" >>"$pfout_fifo" &
main "$#" >"$pfout_fifo" 2>"$pferr_fifo"; exit
Compilation errors, which are sent to standard error (STDERR), can be redirected or saved to a file by:
Bash:
gcc temp.c &> error.log
C shell (csh):
% gcc temp.c |& tee error.log
See: How can I redirect compilation/build error to a file?
