Continuously print stdout and write to file - linux

I have a program that will intermittently print to stdout. I want to see the output on the screen AND redirect it to a file.
I can use tee as follows:
foo | tee ./log.txt
However, this prints the output to the screen only when foo has terminated, not allowing me to observe the progress of my program.
Is there a way to continuously show the output of the program and redirect it to a log file?

Is it acceptable to write the output to a file and display it live?
$> foo > ./log.txt & tail -f ./log.txt
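A minimal sketch of that approach, with a short loop standing in for the hypothetical foo. The --pid option used here is GNU tail-specific; it makes tail stop following once the writer has exited:

```shell
# Stand-in for "foo": a loop that prints intermittently, redirected to the log.
(for i in 1 2 3; do echo "step $i"; sleep 0.2; done) > ./log.txt &
writer=$!
# Follow the log live; GNU tail's --pid makes tail exit once the writer dies.
tail -f --pid="$writer" ./log.txt
wait "$writer"
```

Without --pid, tail -f keeps running after the program finishes and has to be interrupted by hand.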

Related

Reading FIFO doesn't show for the first time

In Unix, I've made a FIFO and I tried to read it with tail:
mkfifo fifo.file
tail -f fifo.file
Then I try to write messages into it from another process so I do as below:
cat > fifo.file
Then I type messages such as:
abc
def
Before I type Ctrl-D, nothing is printed at the first process (tail -f fifo.file).
Then I type Ctrl-D, the two lines above are printed.
Now if I do cat > fifo.file again and type one line such as qwe followed by Enter, that string is printed immediately by the first process.
I'm wondering why I get two different behaviors from the same command.
Is it possible to get the second behavior without the first, i.e. so that the first time I cat, messages are printed as soon as I press Enter instead of only after Ctrl-D?
This is just how tail works: it outputs the specified file's contents only once EOF occurs, which Ctrl-D effectively sends from the terminal. The -f switch just makes tail keep reading instead of exiting when that happens.
In other words, no matter the switches, tail still needs an EOF before it outputs anything at all.
To test this, you can use plain cat instead of tail:
term_1$ mkfifo fifo.file
term_1$ cat < fifo.file
...
term_2$ cat > fifo.file
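The same two-terminal experiment can be scripted in one shot, with a background cat playing the role of term_1 (fifo.file and out.txt are throwaway names):

```shell
mkfifo fifo.file
# Reader (term_1): blocks until a writer opens the FIFO, then reads until EOF.
cat < fifo.file > out.txt &
# Writer (term_2): opening, writing, and closing delivers EOF to the reader.
printf 'abc\ndef\n' > fifo.file
wait
cat out.txt   # prints abc and def
```

The two opens rendezvous: the writer's open blocks until the reader has opened the FIFO, and closing the write end is what delivers EOF to the reader.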

"cat a | cat b" ignoring contents of a

The formal definition of a pipe states that the STDOUT of the left command will be immediately piped to the STDIN of the right command. I have two files, hello.txt and human.txt. cat hello.txt returns Hello and cat human.txt returns I am human. Now if I do cat hello.txt | cat human.txt, shouldn't that return Hello I am human? Instead I'm seeing command not found. I am new to shell scripting. Can someone explain?
Background: A pipe arranges for the output of the command on the left (that is, contents written to FD 1, stdout) to be delivered as input to the command on the right (on FD 0, stdin). It does this by connecting the processes with a "pipe", or FIFO, and executing them at the same time; attempts to read from the FIFO will wait until the other process has written something, and attempts to write to the FIFO will wait until the other process is ready to read.
cat hello.txt | cat human.txt
...feeds the content of hello.txt into the stdin of cat human.txt, but cat human.txt isn't reading from its stdin; instead, it's been directed by its command line arguments to read only from human.txt.
Thus, that content on the stdin of cat human.txt is ignored and never read, and cat hello.txt receives a SIGPIPE when cat human.txt exits, and thereafter exits as well.
cat hello.txt | cat - human.txt
...by contrast will have the second cat read first from stdin (you could also use /dev/stdin in place of - on many operating systems, including Linux), then from a file.
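A self-contained sketch of the difference, creating the two files from the question first:

```shell
printf 'Hello\n' > hello.txt
printf 'I am human\n' > human.txt
# "-" makes the second cat read its stdin first, then human.txt.
cat hello.txt | cat - human.txt
# prints:
#   Hello
#   I am human
```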
You don't need to pipe them; rather, cat can read multiple files at once, which concatenates their contents:
cat hello.txt human.txt
| is generally used when you want to feed the output of the first command into the second command. Here your second command reads from a file, so it doesn't need to be piped into. If you want to use a pipe anyway, you can do something like:
echo "Hello" | cat - human.txt
First of all, the command will not give an error; it will print I am human, i.e. the contents of human.txt.
You are right about the definition of a pipe, but the command on the right side of the pipe has to actually read the piped input. If the command consumes its input and produces output, you will see that output; otherwise the command just performs its own behaviour.
Here there is a command on the right, cat human.txt, but it prints its own file's contents and does nothing with the received input.
The command not found error appears when you write something like:
cat hello.txt | human.txt
bash will then give you this error:
human.txt: command not found
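That error is easy to reproduce: the shell looks human.txt up as a command and fails (the exact message wording varies between shells, but it contains "not found"):

```shell
printf 'Hello\n' > hello.txt
printf 'I am human\n' > human.txt
# human.txt is not executable and not on PATH, so the shell rejects it;
# "|| true" just keeps the failing pipeline from aborting the sketch.
cat hello.txt | human.txt 2> err.txt || true
cat err.txt   # something like: human.txt: command not found
```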

Avoid writing progress bars to file in linux

I have a process that prints output, but this output also includes loading bars.
I would like to both write this output to a file and display this output.
Normally I could just do:
./my_process.sh | tee -a my_log_file.txt
or
./my_process.sh >> my_log_file.txt
tail -f my_log_file.txt
This prints everything to my terminal; however, it also prints EVERYTHING to the log file, including each step of the progress bar!
I would like to exclude progress bar iterations from getting printed to the log file.
For my purposes, any line with a carriage return can be excluded from the log file. How can I exclude carriage return lines from getting appended to the log file while still printing them to stdout on the terminal?
You can filter what tee writes to the log before it reaches the file.
For example:
$ echo -e "progress_ignore\r\nlog this\nprogress_ignore\r" | tee >(awk '!/\r/' >> output.log)
progress_ignore
log this
progress_ignore
$ cat output.log
log this

Bash standard output display and redirection at the same time

In the terminal, I sometimes want to display the standard output and also save it as a backup. But if I use redirection (>, &>, etc.), the output no longer appears in the terminal.
I think I could do, for example, ls > localbackup.txt | cat localbackup.txt, but that just doesn't feel right. Is there a shortcut to achieve this?
Thank you!
tee is the command you are looking for:
ls | tee localbackup.txt
In addition to using tee to duplicate the output (and it's worth mentioning that tee is able to append to the file instead of overwriting it, by using tee -a, so that you can run several commands in sequence and retain all of the output), you can also use tail -f to "follow" the output file from a parallel process (e.g. a separate terminal):
command1 >localbackup.txt # create output file
command2 >>localbackup.txt # append to output
and from a separate terminal, at the same time:
tail -f localbackup.txt # this will keep outputting as text is appended to the file

How to redirect output to a file and stdout

In bash, calling foo would display any output from that command on the stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout; tee then copies that combined stream both to the terminal and to the given output file.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
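A quick self-check that 2>&1 really routes both streams through tee into the file; the brace group just emits one line on each stream:

```shell
# One line on stdout, one on stderr; 2>&1 merges them before the pipe,
# so tee sees (and records) both.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee outfile
```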
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr and stdout streams into one.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
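If unbuffer is not installed, stdbuf from GNU coreutils is a common lighter-weight alternative; -oL and -eL switch the child's stdout and stderr to line buffering. It only helps programs that rely on stdio's default buffering, and sh -c here is just a stand-in for the real program:

```shell
# Line-buffer the child's stdout and stderr so lines reach tee promptly.
stdbuf -oL -eL sh -c 'echo out-line; echo err-line >&2' 2>&1 | tee outfile
```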
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, see the Bash manual's section on redirection.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add: the unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process producing output logs. The simplest way to display the logs while also redirecting them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do this for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while still letting everything go to stdout at the same time.
It uses a few silly tricks:
use exec without a command to set up the redirections,
use tee to duplicate the outputs,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, written with bash's $'string' notation) to mark that the script has been restarted (your original script must not take an equivalent parameter),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly, but useful for me in certain situations.
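A shorter, bash-specific variant of the same idea uses process substitution instead of restarting the script. This is a sketch assuming the same mylogfile name; note that the tee runs asynchronously, so the last lines may land in the file slightly after the script exits:

```shell
#!/usr/bin/env bash
# Duplicate this script's stdout and stderr into mylogfile while still
# printing to the terminal (process substitution is a bash feature).
exec > >(tee mylogfile) 2>&1
echo "this goes to the terminal and to mylogfile"
```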
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, while the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports appending with the -a flag; if you need to replace a line in a file as another user, you could execute sed as the desired user.
< command > |& tee filename # creates the file "filename" with the command's output as its content; if the file already exists, its previous content is overwritten.
< command > | tee >> filename # appends the output to the file, but does not print it to standard output (the screen).
I want to print something with echo on the screen and append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job (using ; rather than a pipe, so cat only runs after ls has finished writing the file):
ls -lr / > output; cat output
