Redirecting everything in a file created by an executable? - linux

I have a bunch of executable files and I want to store the output of each one of them in a separate file.
For that purpose I am using the following command, but "2>&1" does not work every time, and sometimes the output files remain empty even though the executable does print output when run from the shell.
What should I use instead of 2>&1?
./$file 2>&1 | tee "$outputFile"

Some executables don't just write to stdout and stderr, but instead open /dev/tty and write to that.
So to redirect those it is necessary to do something more complicated involving a pseudo-tty. See the script command for a tool that can do this.
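If your script supports running a single command directly (the util-linux version does, via -c), a sketch for capturing one executable's output, terminal writes included, might look like this:
script -q -c "./$file" "$outputFile"    # -q suppresses the "Script started/done" banners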

If you want to capture both stdout and stderr use:
./$file > $outputFile 2>&1
However, some programs are smart and detect whether their output is going to a terminal or being redirected. They might generate different output if you send it to a file...
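Putting it together for a whole directory of executables, a minimal sketch (the glob and the log-file naming are just examples) could be:
for file in ./*; do
    [ -f "$file" ] || continue          # skip directories and other non-regular files
    [ -x "$file" ] || continue          # skip anything that is not executable
    outputFile="${file##*/}.log"        # e.g. ./myprog -> myprog.log
    "$file" > "$outputFile" 2>&1        # capture stdout and stderr per executable
done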

Just to elaborate on Darron's answer, you can use the script command to capture all of the output. Here's an example that writes to stdout, stderr, and /dev/tty using a python script, and captures all three outputs:
brent@battlecruiser:~$ cat test.py
import sys
sys.stdout.write('o\n')
sys.stderr.write('e\n')
with open('/dev/tty', 'w') as tty:
    tty.write('t\n')
brent@battlecruiser:~$ script testout
Script started, file is testout
brent@battlecruiser:~$ python test.py
o
e
t
brent@battlecruiser:~$ exit
Script done, file is testout
brent@battlecruiser:~$ head -n -3 testout | tail -n +3
o
e
t
Contrast this with what happens when you use 2>&1 and tee:
brent@battlecruiser:~$ python test.py 2>&1 | tee testout
e
t
o
brent@battlecruiser:~$ cat testout
e
o
As you can see, the write to /dev/tty is not captured. Try executing your command within the script command as shown and see if it captures all the results.

Related

Linux/bash Piping and redirection

I have a question from the book "unix/linux your ultimate guide". It asks:
Suppose there is a program named "prog" that outputs on both stderr and stdout. Give a single command to run "prog" with the 'o' option and the string 'arg' passed as its only argument, where it takes its stdin from the output of the program "progBefore", where "prog"s stdout is ignored, and "prog"s stderr is given to the program "progAfter" through "progAfter"s stdin. Do not use any temporary files.
Here is what I tried:
prog -o 'arg' < `progBefore` 1>/dev/null 2> progAfter
Any help would be appreciated, thank you.
What is this doing?
prog -o 'arg' < progBefore 1>/dev/null 2> progAfter
It is calling the program prog, taking input from the file progBefore, passing stdout to /dev/null (which ignores it) and passing stderr to the file progAfter. You are using file redirection when you should be using pipes:
progBefore | prog -o 'arg' 2>&1 1>/dev/null | progAfter
A pipe (more correctly, an anonymous pipe) indicated by | takes the stdout from the program on the left and sends it to the stdin of the program on the right.
2>&1 redirects stderr to whatever stdout currently points at. Note that the order is important: 2>&1 has to come before 1>/dev/null, so that stderr is duplicated onto the pipe before stdout is re-pointed at /dev/null.
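A quick sketch you can run to see why the order matters (listing a nonexistent file is just a convenient way to produce something on stderr):
{ ls /nonexistent; echo on-stdout; } 2>&1 1>/dev/null | cat    # only the ls error message reaches the pipe
{ ls /nonexistent; echo on-stdout; } 1>/dev/null 2>&1 | cat    # nothing reaches the pipe; both streams end up in /dev/null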

Redirect output of command to file and stdout [Only once, not at each command]

I want to have a snapshot of all the commands I executed and output (stdout, stderr) in a file to view later. Often I need to view it for debugging purposes.
This link gives the output for single command:
How to redirect output to a file and stdout
However, I do not want to write it for each command since it is time consuming. Also, this does not log the command I executed.
Example functionality:
bash> start-logging logger.txt
bash> echo "Hi"
Hi
bash> cat 1.txt
Contents of 1.txt
bash> stop-logging
Contents of logger.txt
bash> echo "Hi"
Hi
bash> cat 1.txt
Contents of 1.txt
Ideally, I want the above behavior.
Any other command with some of this functionality (even if it misses logging the commands I executed) would also help.
You can use
$ bash | tee log
to start logging and then
$ ^D
(that's Ctrl-D)
to stop logging.
Create a new bash session that will log, then terminate it when it's not needed.
But unfortunately, bash does not write its prompt or the commands you type to stdout, so you will only catch command output.
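If you also want the prompt and the commands you typed in the log, the script command mentioned earlier is a closer match to the start-logging / stop-logging behaviour; a sketch of such a session (the file name is just an example):
script -a logger.txt    # start logging; -a appends instead of overwriting
echo "Hi"
cat 1.txt
exit                    # stop logging ("Script done" is printed)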

How to use the output of a command as an input to other program in linux?

I have a program running on linux
This program take its input from the stdin
So I can launch it with input file in this way
myprogram < file
in order to avoid typing input to the program
Now I want the program to take its input from the output of a command, something like this:
myprogram < anycommand
but this does not work because it's expecting a file and not a command.
How can I make this work? Is there a shell syntax to make it work?
Note: I cannot use a pipe like anycommand | myprogram
Normally (IMHO) myprogram does not know anything about the file. The shell starts myprogram with the file attached to its stdin, so myprogram should not know (or care) that its stdin is a file.
So anycommand | myprogram should work.
If it doesn't work with ash, maybe you can make a named pipe (mkfifo /tmp/testpipe)
Now you can start your program with myprogram < /tmp/testpipe and you can write your input to /tmp/testpipe
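A sketch of that named-pipe approach (the pipe path is just an example; the reader is started in the background because opening a FIFO blocks until both ends are open):
mkfifo /tmp/testpipe
myprogram < /tmp/testpipe &    # reader: blocks until something opens the pipe for writing
anycommand > /tmp/testpipe     # writer: its output becomes myprogram's stdin
rm /tmp/testpipe               # clean up when done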
On my Linux system, ash is a symbolic link to dash and that handles pipes just fine:
pax> ls -ld $(which ash)
lrwxrwxrwx 1 root root 4 Mar 1 2012 /bin/ash -> dash
pax> ash
$ echo hello | tr '[a-z]' '[A-Z]'
HELLO
So I'd give the anycommand | myprogram another shot just in case.
If your ash has no piping capability, you can always revert to using temporary files, provided anycommand isn't a long-lived process that you need to handle the output of in an incremental fashion:
anycommand >/tmp/tempfile
myprogram </tmp/tempfile
You need to use it like this:
myprogram < <(anycommand)
This is called process substitution
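A small illustrative example (note that <(...) is a bash/ksh/zsh feature; it is not available in plain ash or dash, which matters if ash is the constraint here):
wc -l < <(ls /etc)    # wc reads its stdin from the output of ls, with no temporary file involved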

Bash standard output display and redirection at the same time

In terminal, sometimes I would like to display the standard output and also save it as a backup. but if I use redirection ( > &> etc), it does not display the output in the terminal anymore.
I think I can do for example ls > localbackup.txt | cat localbackup.txt. But it just doesn't feel right. Is there any shortcut to achieve this?
Thank you!
tee is the command you are looking for:
ls | tee localbackup.txt
In addition to using tee to duplicate the output, you can also use tail -f to "follow" the output file from a parallel process (e.g. a separate terminal). It's worth mentioning that tee can append to the file instead of overwriting it (tee -a), so you can run several commands in sequence and retain all of the output:
command1 >localbackup.txt # create output file
command2 >>localbackup.txt # append to output
and from a separate terminal, at the same time:
tail -f localbackup.txt # this will keep outputting as text is appended to the file

How to redirect output to a file and stdout

In Bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout. Both are then also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
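If unbuffer is not available, stdbuf from GNU coreutils can give a similar effect by forcing line-buffered output (this only helps programs that use the default C stdio buffering, which is an assumption about the program):
$ stdbuf -oL program [arguments...] 2>&1 | tee outfile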
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Bash manual's section on redirection.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, use tee with the -a option as follows:
ls -lR / | tee -a output.file
Something to add...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases. Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process producing output logs. The simplest solution to display the output logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile and lets everything go to stdout at the same time.
It uses a few crude tricks:
use exec without a command to set up redirections,
use tee to duplicate the output,
restart the script with the wanted redirections,
use a special first parameter (a NUL character specified with the $'string' bash notation) to mark that the script has been restarted (your original script must not use an equivalent parameter),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
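A shorter sketch of the same idea, using process substitution instead of restarting the script (this assumes bash; one known caveat is that tee runs asynchronously, so the last lines of output can appear after the script has already returned):
#!/usr/bin/env bash
exec > >(tee mylogfile) 2>&1    # from here on, stdout and stderr go to both the terminal and mylogfile
# do whatever you want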
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports appending with the -a flag. If you need to replace a line in a file as another user, you could execute sed as the desired user.
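A hypothetical example of that sed hint (the user, path, and pattern are made up; -i is GNU sed's in-place flag, and some_user needs write access to the file and its directory):
sudo -u some_user sed -i 's/old text/new text/' /some/path/some_file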
<command> |& tee filename # this creates the file "filename" with the command's output as its content; if the file already exists, its existing content is overwritten.
<command> | tee >> filename # this appends the output to the file, but it does not print the command's output on standard output (the screen), because tee's own stdout is what gets redirected.
I want to print something by using "echo" on screen and append that echoed data to a file
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but this will also do the job
ls -lr / > output | cat output
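For reference, the tee version of that echo-and-append case (combining the -a hint from above) would be:
echo "hi there, Have to print this on screen and append to a file" | tee -a output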
