I would like to execute a shell script, and pass the stderr to the second script. How do I access the stderr from shell1.sh within shell2.sh and read the data to find some text in there?
sh shell1.sh | sh shell2.sh
Simply redirect the stderr to stdout (using 2>&1) before piping it into your other program:
sh shell1.sh 2>&1 | sh shell2.sh
Whatever enters the pipe through stdout will come out on the next command's stdin, so your other program will then see both the original stdout and the original stderr on its stdin.
If you are not interested in the (original) stdout of the first program, just redirect it to /dev/null:
sh shell1.sh 2>&1 >/dev/null | sh shell2.sh
Note that there is only a single input channel for a program (stdin), so once you have redirected stderr to stdout there is no way to differentiate between the original stdout and stderr in the piped-to program.
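For example, with a hypothetical shell1.sh that writes to both streams and a shell2.sh that just greps its stdin (the file contents below are made up for illustration):

# shell1.sh -- writes one line to each stream
echo "normal output"
echo "ERROR: something broke" >&2

# shell2.sh -- reads its stdin and searches it
grep "ERROR"

# sh shell1.sh 2>&1 | sh shell2.sh   prints: ERROR: something broke
# sh shell1.sh | sh shell2.sh        grep matches nothing, because the ERROR line
#                                    bypasses the pipe (it still shows on the terminal via stderr)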
You can use this to pass only stderr to the right-hand command while discarding stdout:
sh shell1.sh 2>&1 >/dev/null | sh shell2.sh
2>&1 redirects stderr to (the current) stdout, and >/dev/null then redirects stdout to /dev/null.
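A quick way to convince yourself that only stderr makes it across the pipe (the echoed messages are just placeholders):

( echo "to stdout"; echo "to stderr" >&2 ) 2>&1 >/dev/null | cat
# prints only: to stderr
# "to stdout" was already pointed at /dev/null before the pipe saw anything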
Related
I have a shell script whose stdout and stderr I want to write to a logfile. I know that this can be achieved via
sh script.sh >> both.log 2>&1
However, I also want to simultaneously write the stderr to a separate file, "error.log". Is this achievable?
You can use tee to duplicate output to two locations. Combine that with some tricky redirections, and...
script.sh 2>&1 >> both.log | tee -a both.log >> error.log
The 2>&1 points stderr at what stdout is connected to at that moment (the pipe), and >> both.log then sends stdout to both.log. Only stderr therefore enters the pipe; tee appends it to both.log and passes it on to error.log, so stderr ends up in both files while stdout ends up only in both.log.
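As a quick sanity check, using a hypothetical script.sh that writes one line to each stream:

# script.sh (hypothetical contents)
echo "out line"
echo "err line" >&2

sh script.sh 2>&1 >> both.log | tee -a both.log >> error.log
cat both.log    # "out line" and "err line" (order may vary, since two writers append to it)
cat error.log   # only "err line"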
For this you need to first switch stdout and stderr, which requires an additional file descriptor:
sh script.sh 3>&2 2>&1 1>&3 3>&-
The last operator closes the auxiliary file descriptor.
After that you can use tee to duplicate the error stream (which now arrives on tee's stdin through the pipe) and append it to your error log:
sh script.sh 3>&2 2>&1 1>&3 3>&- | tee -a error.log
And after that you can direct both stdout and stderr to your combined log:
(sh script.sh 3>&2 2>&1 1>&3 3>&- | tee -a error.log) >> both.log 2>&1
The parentheses around the command are important to capture the error stream of the whole command. Without them only the (empty) error stream of the tee command would be captured and the rest would still go to the terminal.
Note: this does not check whether file descriptor 3 was already in use (open). In bash you can use the following to let the shell choose a previously unused file descriptor and close it on the last redirection:
sh script.sh {tmpfd}>&2 2>&1 1>&${tmpfd}-
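To see the swap in action, here is a sketch with a hypothetical script.sh writing to both streams:

# script.sh (hypothetical contents)
echo "out line"
echo "err line" >&2

(sh script.sh 3>&2 2>&1 1>&3 3>&- | tee -a error.log) >> both.log 2>&1
cat error.log   # "err line"
cat both.log    # "out line" and "err line" (order may vary)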
I'm executing a program that dumps a crash report to STDERR, from which I have to filter out some necessary information. The problem is that I'm unable to redirect STDERR to STDOUT and pipe it into grep.
command 2>&1 >/dev/null | grep "^[^-]" >& /tmp/fl
Getting error: Ambiguous output redirect.
The same command works in a bash terminal.
What should I change to make it work?
csh is significantly more limited than bash when it comes to file redirection. In csh, you can redirect stdout with the usual > operator, you can redirect both stdout and stderr with the >& operator, you can pipe stdout and stderr with the |& operator, but there is no single operator to redirect stderr alone.
The usual workaround is to execute the command in a sub-shell, redirecting stdout in that sub-shell to whatever file you want (/dev/null in this case), and then use the |& operator to redirect stdout and stderr of the sub-shell to the next command in the main shell.
In your case, this means something like:
( command >/dev/null ) |& grep "^[^-]" >&/tmp/fl
Because stdout is redirected to /dev/null inside the sub-shell, the |& operator ends up acting like bash's 2>&1 |: since stdout is discarded in the sub-shell, nothing written to stdout will ever reach the pipe.
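A sketch you can paste into a csh or tcsh prompt, with the crashing program stood in for by a small sh command (the messages are invented):

( sh -c 'echo "-- banner --" >&2; echo "useful stderr line" >&2; echo "stdout noise"' > /dev/null ) |& grep "^[^-]" >& /tmp/fl
cat /tmp/fl
# useful stderr line   <- only stderr lines not starting with "-" reach the file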
If you don't mind mixing stdout and stderr in the pipe, you can use
command |& grep "^[^-]" >& /tmp/fl
Otherwise you can do the hack:
(command >/dev/null) |& grep "^[^-]" >& /tmp/fl
which sends stdout to /dev/null inside the sub-shell, so piping both streams of the sub-shell leaves only stderr as content.
I have a question about redirection.
I always use anycommands > /dev/null 2>&1 when I don't want any output, but I have never used anycommands 2> /dev/null >&2.
Question: which one is the better way to suppress all output? What's the difference between anycommands > /dev/null 2>&1 and anycommands 2> /dev/null >&2?
case #1: (echo stdout; echo stderr >&2) >/dev/null 2>&1
stdout(1) is replaced by an fd to /dev/null
stderr(2) descriptor is copied from &1 which now is an fd to /dev/null
result: no output at all
case #2: (echo stdout; echo stderr >&2) 2>&1 >/dev/null
stderr(2) descriptor is copied from &1 which is the default stdout
stdout(1) is replaced by an fd to /dev/null
result: stdout is discarded, nothing is written to the real stderr, and the stderr text appears on stdout (typically the terminal)
case #3: (echo stdout; echo stderr >&2) 2> /dev/null >&2
same as case #1, stderr and stdout have switched roles
Effectively, the two are equivalent. cmd > /dev/null 2>&1 connects stdout of the command to /dev/null, and then connects stderr to the same file. cmd 2>/dev/null >&2 connects stderr to /dev/null, and then connects stdout to it. The only difference is the order in which the two streams are associated with /dev/null, which has no effect on the command's behaviour. In both cases, both streams are redirected to the bit bucket.
If you're using only BASH, use &> to redirect both stdout and stderr. That's the most compact, safe and simple solution.
Regarding your question, the first one is equivalent to &> (it redirects both stdout and stderr to /dev/null).
The second connects stderr to /dev/null and redirects stdout to the new stderr, so it's equivalent as far as the output is concerned; just the order of the file descriptor operations is reversed.
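For example, the following two lines both discard all output; the first is bash-only shorthand, the second is the portable spelling:

anycommands &> /dev/null          # bash: send both stdout and stderr to /dev/null
anycommands > /dev/null 2>&1      # POSIX sh equivalent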
The following writes stdout to a logfile and prints stderr:
bash script.sh >> out.log
And this writes both stdout and stderr to the logfile:
bash script.sh >> out.log 2>&1
How can I combine the two, so that stdout and stderr are logged to a file and stderr is also emailed to my inbox?
bash script.sh 2>&1 >> out.log | tee -a out.log
First, I'm redirecting stderr to where stdout currently points (the pipe) and then redirecting stdout to the file, so stdout lines go to out.log and stderr goes into the pipe.
The tee command copies its stdin to both stdout and a file (hence the resemblance to the letter T). So, second, the original stderr is written to both stdout and the out.log file (the -a flag means append).
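If the goal is to actually mail the captured stderr, one option is to collect tee's output into a variable and only send mail when it is non-empty. This is just a sketch: the mail -s invocation and the address are placeholder assumptions, and your system may use mailx or sendmail instead:

err=$(bash script.sh 2>&1 >> out.log | tee -a out.log)
if [ -n "$err" ]; then
    printf '%s\n' "$err" | mail -s "script.sh errors" you@example.com   # hypothetical recipient
fi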
You can keep stdout in one file and stderr in a separate file:
0 * * * * bash script.sh > out.log 2> err.log
and then email yourself the err.log file.
Here is a working solution:
./myscript.sh &> all.txt 2> stderr.txt
&> all.txt points both stdout and stderr at all.txt
2> stderr.txt then re-points stderr at stderr.txt, so all.txt ends up holding stdout and stderr.txt holds stderr alone (the later redirection wins)
And then just do whatever you want with those files, such as email logging for instance!
Using process substitution you can try:
0 * * * * bash script.sh >> out.log 2> >(exec tee >(exec cat >> mail))
Or
0 * * * * bash -c 'exec bash script.sh >> out.log 2> >(exec tee >(exec cat >> mail))'
exec cat >> mail imitates mailing. Replace it with a command that actually does the mailing.
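To try it without a real mailer, you can generate a stand-in script and watch where each stream ends up (script.sh here is a throwaway example; the mail file is just the stand-in from above):

printf '#!/bin/bash\necho out; echo err >&2\n' > script.sh
bash script.sh >> out.log 2> >(exec tee >(exec cat >> mail))
# terminal: err        (tee echoes stderr back; it may appear slightly after the
#                       prompt because the substituted processes run asynchronously)
# out.log:  out
# mail:     err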
I have checked a couple of relevant posts on Stack Overflow and other sources regarding the usage of 2>&1.
Unfortunately, so far I have not got my head around it completely.
I understand that 2 is the stderr and 1 is the stdout and we are combining with the 2>&1.
But my question is what is difference between:
1. mycommand > /dev/null
2. mycommand 2> /dev/null
3. mycommand > /dev/null 2>&1
I was thinking:
1. will redirect stdout and stderr to /dev/null
2. will redirect stderr to /dev/null
3. will redirect stdout and stderr to /dev/null
Relevant posts:
What does "/dev/null" mean at the end of shell commands?
i/o stream redirection on linux shell: how does the shell process a command with redirection?
What does "> /dev/null 2>&1" mean? (http://www.xaprb.com/blog/2006/06/06/what-does-devnull-21-mean/)
See this:
mycommand > /dev/null
it will redirect channel 1 (which is stdout) of mycommand to /dev/null
mycommand 2> /dev/null
it will redirect channel 2 (which is stderr) to /dev/null
mycommand > /dev/null 2>&1
it will redirect channel 1 to /dev/null and then bind channel 2 (stderr) to channel 1 (stdout). Both will go into /dev/null
There is another one (just to complete the picture):
mycommand 2>&1 > /dev/null
In this case, the child's stderr is first bound to the parent's stdout, and the child's stdout is then sent to /dev/null. The result is that you now get the child's stderr output on stdout, while the child's stdout is discarded. This is useful for processing stderr in a pipe, for example (as in the first question above).
(errfile doesn't exist)
$ cat errfile
cat: 0652-050 Cannot open errfile.
$ cat errfile > /tmp/stream.out
cat: 0652-050 Cannot open errfile.
$ cat errfile > /tmp/stream.out 2>&1
$ cat /tmp/stream.out
cat: 0652-050 Cannot open errfile.
($ rm /tmp/stream.out)
$ cat errfile 2>&1 > /tmp/stream.out
cat: 0652-050 Cannot open errfile.
$ cat /tmp/stream.out
$
Order is thus important: 2>&1 1>out is different from 1>out 2>&1, because redirections are applied from left to right as the shell interprets the command. You should redirect in "reverse" order: first point stdout at its final destination, then point the source stream at stdout.
Try these to get the differences:
echo "stderr" > /dev/fd/2 | >/dev/null
stderr
echo "stdout" > /dev/fd/1 | >/dev/null
In both commands stdout is piped into a command that produces nothing, but in the first one the text is written to stderr, which bypasses the pipe and is printed, while in the second it is written to stdout and swallowed by the pipe, so nothing is printed.
1: redirects STDOUT to /dev/null. You are using the default file descriptor here: in command [n]> filename the descriptor n defaults to 1, i.e. STDOUT.
2: redirects STDERR to /dev/null.
3: redirects STDOUT to /dev/null and then redirects STDERR to STDOUT, which means both STDOUT and STDERR end up in /dev/null.
Hope these tips make it clear.
0 through 9 are file descriptors in bash: 0 stands for stdin, 1 for stdout, 2 for stderr, and 3 through 9 are spare, available for any temporary use.
Any file descriptor can be redirected to another file descriptor or to a file using the > or >> (append) operator, as in the sketch below.
For details, please refer to http://www.tldp.org/LDP/abs/html/io-redirection.html
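For instance, a spare descriptor can be opened on a file, written through, and closed again (a minimal sketch; the file name is arbitrary):

exec 3> /tmp/extra.log      # open fd 3 for writing
echo "goes to fd 3" >&3     # write through the spare descriptor
echo "goes to stdout"       # fd 1 is untouched
exec 3>&-                   # close fd 3 again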