This question already has answers here:
What's the difference of redirect an output using ">", "&>", ">&" and "2&>"?
(2 answers)
Closed 9 years ago.
What does this do? I cannot find any information about what &1 and &2 are in bash.
$ echo "Hello" >&1
Hello
$ echo "Hello" >&2
Hello
$ echo "Hello" >&3
bash: syntax error near unexpected token `&'
I know that you can redirect standard and error output to a file, but what does the ampersand do in these cases?
>& is a variant of the > redirection operator. While > redirects output into a named file (or a device's representation in the filesystem, such as /dev/console), >& redirects output to another file descriptor. The most commonly used examples are descriptor 1 (stdout) and descriptor 2 (stderr). Note that these are the same numbers you can put in front of the > or >& operators to say which stream's output you want to redirect.
echo "hello" >&1 is redundant; you're redirecting stdout to stdout.
(echo "hello" 1>&1 would mean the same thing, of course.)
A more interesting example: echo "hello" 2>&1 redirects the echo command's error output from stderr to stdout, combining them into a single stream. This one is often useful if you want to run all the output through a single pipe rather than having them treated separately. echo "hello" 2>&1 | tee log, for example, captures a copy of both normal output and error messages into a single log file.
BroSlow points out that, in version 4 and later of the bash shell, |& can be used as a shorthand for combining the streams, equivalent to 2>&1. Good to know. (I still move between shells too frequently to have started using that one.)
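A quick sketch of both forms (the demo function is a stand-in for any command that writes to both streams):

```shell
# Stand-in for a command that writes to both stdout and stderr.
demo() { echo "out"; echo "err" >&2; }

# Classic form: merge stderr into stdout, then pipe the combined stream.
demo 2>&1 | sort > combined1.txt

# Bash 4+ shorthand: |& is equivalent to 2>&1 |.
demo |& sort > combined2.txt
```

Both files end up with the same two lines; sort is only there to give the pipe something to consume.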
Bash has several file descriptors used for input and output.
1 is stdout
2 is stderr
3-9 can be used for other i/o (for example, if a loop is already reading a file on stdin, you can use 3 to read user input)
10+ is generally used by bash internally, though can be requested from bash if needed
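For example, a loop can take its data on descriptor 3 instead of stdin, leaving stdin free for user prompts inside the loop body (a minimal sketch; items.txt is a made-up file name):

```shell
printf 'a\nb\n' > items.txt

# Read the file on descriptor 3 so that stdin (0) stays available
# for other input, such as a read prompt inside the loop body.
while read -r line <&3; do
    echo "got: $line"
done 3< items.txt
```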
Now for your examples, it's probably easier to read if you split the & from the file descriptor. Both >&1 and >& 1 are functionally equivalent and redirect a given file descriptor to another. So in your examples:
The echo command will print whatever you give it to stdout (1), which in this case you redirect to stdout, so the redirection doesn't change anything.
$ echo "Hello" >& 1
The output of the echo command is redirected, and instead of going to stdout goes to stderr.
$ echo "Hello" >& 2
The output of the echo command is redirected, and instead of going to stdout bash tries to send it to file descriptor 3, however it hasn't been allocated yet, so bash throws an error.
$ echo "Hello" >& 3
You can allocate it to /dev/stdout for example, and then get the same result as your first redirection example.
$ exec 3<> /dev/stdout
$ echo "Hello" >& 3
Hello
1 is stdout (Standard Output)
2 is stderr (Standard Error)
& indicates that what follows is a file descriptor and not a filename ("fold a file descriptor into another")
A common idiom is redirecting stderr to stdout:
2>&1
then
mycommand 2>&1 1>logfile
(here stderr goes to the original stdout, while stdout itself goes to logfile) or
mycommand > afile.txt 2>&1
(here both streams end up in afile.txt; the two forms are not equivalent, since redirections are processed left to right).
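A sketch that makes the difference visible, with a demo function standing in for mycommand:

```shell
# Stand-in for mycommand: one line to stdout, one to stderr.
demo() { echo "out"; echo "err" >&2; }

# stderr is duplicated onto the *original* stdout (the terminal) first,
# and only then is stdout redirected, so "err" does NOT land in the log.
demo 2>&1 1>log1.txt

# Here stdout goes to the file first and stderr then follows it,
# so both lines end up in the file.
demo > log2.txt 2>&1
```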
Related
I have a question from the book "Unix/Linux: Your Ultimate Guide". It asks:
Suppose there is a program named "prog" that outputs on both stderr and stdout. Give a single command to run "prog" with the 'o' option and the string 'arg' passed as its only argument, where it takes its stdin from the output of the program "progBefore", where "prog"s stdout is ignored, and "prog"s stderr is given to the program "progAfter" through "progAfter"s stdin. Do not use any temporary files.
Here is what I tried:
prog -o 'arg' < `progBefore` 1>/dev/null 2> progAfter
Any help would be appreciated thank you
What is this doing?
prog -o 'arg' < progBefore 1>/dev/null 2> progAfter
It is calling the program prog, taking input from the file progBefore, passing stdout to /dev/null (which ignores it) and passing stderr to the file progAfter. You are using file redirection when you should be using pipes:
progBefore | prog -o 'arg' 2>&1 1>/dev/null | progAfter
A pipe (more correctly, an anonymous pipe) indicated by | takes the stdout from the program on the left and sends it to the stdin of the program on the right.
2>&1 redirects stderr to whatever stdout is pointing at; note that the order is important.
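A self-contained sketch of the same pipeline, with shell functions standing in for the three hypothetical programs from the exercise:

```shell
progBefore() { echo "input-line"; }
prog() { cat > /dev/null; echo "normal"; echo "error" >&2; }
progAfter() { tr '[:lower:]' '[:upper:]'; }

# 2>&1 first points stderr at the pipe (the current stdout),
# then 1>/dev/null discards stdout, so only stderr reaches progAfter.
progBefore | prog 2>&1 1>/dev/null | progAfter   # prints "ERROR"
```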
I'm reading up on redirecting data to /dev/null and so I tried a simple test:
ping a.b.c # which results in an address not found
If I try this:
ping a.b.c > /dev/null # prints the same error message as the one above
However, if I do this:
ping a.b.c > /dev/null 2>&1 # The error message is gone
That last solution is the desired solution, but what is happening with this 2>&1? My research so far suggests that 2 represents stderr and 1 represents stdout. So if I read it that way, it looks like I'm creating a stderr file and redirecting stdout to it?
If that is the case, what does the & in that command do?
You are right, 2 is STDERR, 1 is STDOUT. When you do 2>&1 you are saying: "print to STDOUT (1) the things that would go to STDERR (2)". And before that, you said your STDOUT would go to /dev/null. Therefore, nothing is seen. In your first two examples you get the error message because it is printed to STDERR, and a plain redirection only redirects STDOUT.
And when you do the redirection, you are not creating a STDERR, the processes always have a STDERR and a STDOUT when they are created.
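You can reproduce the ping behaviour with a stand-in function that writes to STDERR:

```shell
# Stand-in for a command that fails and complains on STDERR.
fail() { echo "oops" >&2; }

fail > /dev/null        # "oops" still appears: only STDOUT was redirected
fail > /dev/null 2>&1   # silent: STDERR now follows STDOUT into /dev/null
```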
Consider the following code, which prints the word "stdout" to stdout and the word "stderror" to stderr.
$ (echo "stdout"; echo "stderror" >&2)
stdout
stderror
Note that the '&' operator tells bash that 2 is a file descriptor (which points to stderr) and not a file name. If we left out the '&', this command would print "stdout" to stdout, create a file named "2", and write "stderror" there.
By experimenting with the code above, you can see for yourself exactly how redirection operators work. For instance, by changing which of the two descriptors (1 or 2) is redirected to /dev/null, the following two lines of code discard everything sent to stdout, and everything sent to stderr, respectively (printing what remains).
$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null
stderror
$ (echo "stdout"; echo "stderror" >&2) 2>/dev/null
stdout
Now, we approach the crux of the question (substituting my example for yours), why does
(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1
produce no output? To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null), and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash then moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself! See this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.
To test if you really understand the concept, try to guess the output when we switch the redirection order:
(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null
stderror
The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus discarding everything that would usually be sent to standard out. All we are left with is what was not sent to stdout in the subshell (the code in the parentheses): i.e. "stderror".
The interesting thing to note there is that even though 1 is just a pointer to the stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, as a result of redirecting 1 to /dev/null, the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and thus the code would generate nothing, in contrast to what we saw above.
Finally, I'd note that there is a simpler way to do this:
From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type
$ command &> /dev/null
or in case of my example:
$ (echo "stdout"; echo "stderror" >&2) &>/dev/null
Key takeaways:
File descriptors behave like pointers (although file descriptors are not the same as file pointers)
Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f", causes file descriptor "a" to point to the same place as file descriptor b - file "f". It DOES NOT form a chain of pointers a -> b -> f
Because of the above, order matters: 2>&1 >/dev/null is not the same as >/dev/null 2>&1. One generates output and the other does not!
Finally have a look at these great resources:
Bash Documentation on Redirection, An Explanation of File Descriptor Tables, Introduction to Pointers
(Firstly I've been looking for an hour so I'm pretty sure this isn't a repeat)
I need to write a script that executes 1 command, 1 time, and then does the following:
Saves both the stdout and stderr to a file (while maintaining their proper order)
Saves stderr only to a variable.
Elaboration on point 1, if I have a file like so
echo "one"
thisisanerrror
echo "two"
thisisanotherError
I should expect to see output, followed by error, followed by output, followed by more error (thus concatenating is not sufficient).
The closest I've come is the following, which seems to corrupt the log file:
errs=`((./someCommand.sh 2>&1 1>&3) | tee /dev/stderr ) 3>file.log 2>&3 `
This might be a starting point:
How do I write stderr to a file while using "tee" with a pipe?
Edit:
This seems to work:
((./foo.sh) 2> >(tee >(cat) >&2)) > foo.log
Split stderr with tee, write one copy to stdout (cat) and write the other to stderr. Afterwards you can grab all the stdout and write it to a file.
Edit: to store the output in a variable
varx=`((./foo.sh) 2> >(tee >(cat) >&2))`
I also saw the command enclosed in additional double quotes, but I have no clue what that might be good for.
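If the interleaving requirement (point 1) is dropped, the variable-capture part (point 2) alone is much simpler. A sketch, with a generated stand-in for ./foo.sh:

```shell
# Generate a stand-in script that writes to both streams.
cat > foo.sh <<'EOF'
echo "one"
echo "oops" >&2
echo "two"
EOF

# 2>&1 points stderr at the capturing pipe, then 1>foo.log moves
# stdout to the log file, so the variable receives only stderr.
errs=$(bash foo.sh 2>&1 1>foo.log)
```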
This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 6 years ago.
I know that in Linux, to redirect output from the screen to a file, I can either use the > or tee. However, I'm not sure why part of the output is still output to the screen and not written to the file.
Is there a way to redirect all output to file?
That part is written to stderr, use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want in same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
All POSIX operating systems have 3 standard streams: stdin, stdout, and stderr. stdin is the input stream. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output, which is handled separately so that errors do not get piped into another command or written into a file they might corrupt; normally it goes to a log of some kind, or is dumped directly to the terminal, even when stdout is redirected. To redirect both to the same place, use:
$command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable--use instead:
$command > file 2>&1
If you want to silence the error, do:
$command 2> /dev/null
To get the output on the console AND in a file file.txt for example.
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
Use this: "command here" > log_file_name 2>&1
Detail description of redirection operator in Unix/Linux.
The > operator redirects the output usually to a file but it can be to a device. You can also use >> to append.
If you don't specify a number then the standard output stream is assumed but you can also redirect errors
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device it takes any input you want and throws it away. It can be used to suppress any output.
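A quick sketch exercising each operator (the file names are arbitrary):

```shell
echo "first"  >  notes.txt                     # > creates/truncates the file
echo "second" >> notes.txt                     # >> appends to it
{ echo "oops" >&2; } 2> errs.txt               # 2> captures only stderr
{ echo "both"; echo "more" >&2; } &> all.txt   # &> captures both (bash)
```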
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a BASH script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use exec command to redirect all stdout/stderr output of any commands later.
sample script:
exec 2> your_file2 > your_file1
your other commands.....
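For instance (the log file names are just examples):

```shell
# Generate a script whose later commands are all redirected by exec.
cat > logdemo.sh <<'EOF'
#!/bin/bash
# exec with no command applies the redirections to the shell itself.
exec > out.log 2> err.log
echo "normal message"          # lands in out.log
echo "error message" >&2       # lands in err.log
EOF
bash logdemo.sh
```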
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file
In bash, calling foo would display any output from that command on the stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout. From there, tee also writes the combined stream to the given output file.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
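The difference between overwriting and appending is easy to check (demo is a stand-in command; tee's screen copy is discarded here so only the file matters):

```shell
demo() { echo "run output"; echo "run error" >&2; }

: > run.log                               # start with an empty log
demo 2>&1 | tee -a run.log > /dev/null    # first run appends
demo 2>&1 | tee -a run.log > /dev/null    # second run appends again
```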
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into stdout.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer redirection
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, you should use tee with the -a option as follows:
ls -lR / | tee -a output.file
Something to add ...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had the Java process with output logs. The simplest solution to display output logs and redirect them into the file(named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and appended to the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile and lets everything go to stdout at the same time.
It uses some stupid tricks:
use exec without command to setup redirections,
use tee to duplicates outputs,
restart the script with the wanted redirections,
use a special first parameter (a simple NUL character specified by the $'string' special bash notation) to signal that the script has been restarted (your original invocation is unlikely to use an equivalent parameter),
try to preserve the original exit status when restarting the script using the pipefail option.
Ugly but useful for me in certain situations.
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports append with the -a flag, if you need to replace a line in a file as another user you could execute sed as the desired user.
< command > |& tee filename # this creates a file "filename" with the command's output as its content; if the file already exists, its previous content is overwritten.
< command > | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen).
I want to print something by using "echo" on screen and append that echoed data to a file
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job:
ls -lr / > output; cat output