What's the difference between redirecting output using >, &>, >& and 2&>?
> redirects stdout to a file
2>& redirects file handle "2" (almost always stderr) to some other file handle (it's generally written as 2>&1, which redirects stderr to the same place as stdout).
&> and >& redirect both stdout and stderr to a file. It's normally written as &>file (or >&file). It's functionally the same as >file 2>&1.
2> redirects output on file handle 2 (usually stderr) to a file.
1> (or >) is for stdout, the output of a command.
2> is for stderr, the error output of the command.
This page is a bit wordy, but has good explanations and examples of the different command combinations.
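A quick illustration, using ls with one path that (usually) exists and one that doesn't; the file names here are arbitrary:
ls /etc/hostname /nonexistent > out.txt        # listing goes to out.txt; the error still prints to the terminal
ls /etc/hostname /nonexistent 2> err.txt       # error goes to err.txt; the listing still prints to the terminal
ls /etc/hostname /nonexistent > both.txt 2>&1  # both streams land in both.txt
ls /etc/hostname /nonexistent &> both.txt      # bash shorthand for the line above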
Related
Hi experts, I want the command's stdout and stderr appended to one file, like this: command > logOutErr.txt 2>&1
but I also want stderr appended on its own to a second file, as in: command 2> logErrOnly.txt
# This is a non-working redirection
exec 1>> logOutErr.txt 2>> logOutErr.txt 2>> logErrOnly.txt
# This should be in Out log only
echo ten/two: $((10/2))
# This should be in both Out and Out+Err log files
echo ten/zero: $((10/0))
I understand that the last redirect 2>> overrides the preceding one... so what then? tee? But how?
I have to do this once at the beginning of the script, without modifying the rest of the script (because it is dynamically generated and any modification is too complicated)
Please don't answer only with links to the theory; I have already spent two days reading everything with no good results. I would like a working example.
Thanks
With the understanding that you lose ordering guarantees when doing this:
#!/usr/bin/env bash
exec >>logOutErr.txt 2> >(tee -a logErrOnly.txt)
# This should be in OutErr
echo "ten/two: $((10/2))"
# This should be in Err and OutErr
echo "ten/zero: $((10/0))"
This works because redirections are processed left to right: by the time tee is started, its stdout already points at logOutErr.txt, so whatever tee forwards is appended there in addition to being written to logErrOnly.txt.
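To see why the order matters, compare a hypothetical variant with the two redirections swapped; tee would then inherit the shell's original stdout (the terminal) rather than the log file:
exec 2> >(tee -a logErrOnly.txt) >>logOutErr.txt   # tee starts before stdout is redirected, so errors echo to the terminal instead of logOutErr.txt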
This may very well fall under the KISS (keep it simple) principle, but I am still curious and wish to be educated as to why I didn't receive the expected results. So, here we go...
I have a shell script to capture STDOUT and STDERR without disturbing the original file descriptors. This is in hopes of preserving the original order of output (see test.pl below) as seen by a user on the terminal.
Unfortunately, I am limited to using sh instead of bash (but I welcome examples), as I am calling this from another suite, and I may wish to use it in a cron job in the future (I know cron has the SHELL environment variable).
wrapper.sh contains:
#!/bin/sh
stdout_and_stderr=$1
shift
command=$@                        # everything after the log file is the command to run
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}              # one named pipe per stream
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}            # truncate the combined log
tee -a ${stdout_and_stderr} < ${out} &       # copy stdout to the log and on to our stdout
tee -a ${stdout_and_stderr} < ${err} >&2 &   # copy stderr to the log and on to our stderr
${command} >${out} 2>${err}
test.pl contains:
#!/usr/bin/perl
print "1: stdout1\n";
print STDERR "2: stderr1\n";
print "3: stdout2\n";
In the scenario:
sh wrapper.sh /tmp/xxx perl test.pl
STDOUT contains:
1: stdout1
3: stdout2
STDERR contains:
2: stderr1
All good so far...
/tmp/xxx contains:
2: stderr1
1: stdout1
3: stdout2
However, I was expecting /tmp/xxx to contain:
1: stdout1
2: stderr1
3: stdout2
Can anyone explain to me why STDOUT and STDERR are not appended to /tmp/xxx in the order that I expected? My guess would be that the backgrounded tee processes are blocking the /tmp/xxx resource from one another, since they have the same "destination". How would you solve this?
related: How do I write stderr to a file while using "tee" with a pipe?
It is a feature of the C runtime library (and is imitated by most other runtime libraries) that stderr is not buffered: as soon as it is written to, stderr pushes all of its characters to the destination device.
By default stdout is buffered; historically the buffer was 512 bytes, and modern C libraries typically use a few kilobytes when stdout is not a terminal (for instance, when it feeds a pipe or a file).
The buffering for both stderr and stdout can be changed with the setbuf or setvbuf calls.
From the Linux man page for stdout:
NOTES: The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output.

The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call.

Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1) and termios(3).
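As an aside, if all you need from the shell is line-buffered output, there are ready-made wrappers; this is a minimal sketch assuming GNU coreutils (for stdbuf) and the expect package (for unbuffer), and noting that both only influence programs that use C stdio buffering (mycommand is a placeholder):
stdbuf -oL mycommand 2>&1 | tee log.txt   # coreutils stdbuf: force line-buffered stdout via a preloaded shim
unbuffer mycommand 2>&1 | tee log.txt     # expect's unbuffer: run the command on a pseudo-terminal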
After a little bit more searching, inspired by @wallyk, I made the following modification to wrapper.sh:
#!/bin/sh
stdout_and_stderr=$1
shift
command=$@
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}
tee -a ${stdout_and_stderr} < ${out} &
tee -a ${stdout_and_stderr} < ${err} >&2 &
script -q -F 2 ${command} >${out} 2>${err}   # run the command under script(1) so it writes to a pseudo-terminal
Which now produces the expected:
1: stdout1
2: stderr1
3: stdout2
The solution was to prefix the $command with script -q -F 2, which makes script quiet (-q) and flush its output immediately (-F). The bigger effect, though, is that script runs the command on a pseudo-terminal: the command's stdout becomes line-buffered instead of fully buffered, so the lines interleave in the order they were written.
I am now researching how portable this is. I think -F pipe may be macOS and FreeBSD, while -f or --flush is what the util-linux script on other distros uses...
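For reference, a hypothetical Linux equivalent of that last wrapper line using util-linux script, which takes the command as a string via -c, spells the flush flag -f, and lets you send the typescript itself to /dev/null:
script -q -f -c "${command}" /dev/null >${out} 2>${err}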
related: How to make output of any shell command unbuffered?
I have C code that has been compiled. Then I have to execute it from the command line:
../../../PStomo_eq665/pstomo_eq par=syn.par >& log.syn
What does >& mean in this context? Both files syn.par and log.syn contain parameters for pstomo_eq.
Redirect stderr and stdout
>& is equivalent to &> and redirects both standard error and standard output.
From the bash man page:
There are two formats for redirecting standard output and standard error:

&>word

and

>&word

Of the two forms, the first is preferred. This is semantically equivalent to

>word 2>&1
In your question, stderr and stdout are redirected to log.syn. That means log.syn is written by pstomo_eq rather than read: syn.par is the input parameter file, and log.syn merely collects the program's output.
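A quick way to convince yourself, with ls and a bad path standing in for pstomo_eq:
ls /nonexistent >& log.syn   # both stdout and stderr go into log.syn, overwriting it
cat log.syn                  # now contains only the ls error message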
Related
When I run the command haizea -c simulated.conf > result.txt, the program (haizea) still prints its output to the screen. But when I try haizea -c simulated.conf 1>& result.txt, the output is now on the file result.txt. I'm quite confused about this situation. What is the difference between > and 1>&, then?
What you're seeing on the terminal is the standard error of your process. Both stdout and stderr are directed to the same terminal device by default (assuming no redirection is in effect).
The redirection >&xyz redirects both standard output and error to the file xyz.
I've never used it but I would think, by extension, that N>&xyz would redirect file handle N and standard error to your file. So 1>&xyz is equivalent to >&xyz which is also equivalent to >xyz 2>&1.
The number before the > stands for the file descriptor:
Standard Input - 0
Standard Output - 1
Standard Error - 2
The & will direct both standard output and standard error.
http://linuxdevcenter.com/pub/a/linux/lpt/13_01.html#doc2ac15b1c13
> redirects standard output alone.
>& or &> or 1>& redirect both standard output and standard error.
Your program is printing on standard error, which is not getting redirected in the first case.
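You can reproduce this with any command that writes to both streams; here ls with a nonexistent path stands in for haizea:
ls /nonexistent > result.txt        # the error still appears on screen because it is on stderr
ls /nonexistent > result.txt 2>&1   # the screen stays quiet; both streams went into result.txt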