Using tee to show output promptly, even for one command - linux

I am new to using the tee command.
I am trying to run one of my programs, which takes a long time to finish but prints information as it progresses. I am using 'tee' to save the output to a file as well as to see the output in the shell (bash).
But the problem is that tee doesn't forward the output to the shell until the end of my command.
Is there any way to do that?
I am using Debian and bash.

This actually depends on the amount of output and on the implementation of whatever command you are running. No program is obliged to print straight to stdout or stderr and flush all the time. Most C runtime implementations flush after a certain amount of data has been written through one of the stdio routines, such as printf, but this may not be true for every implementation.
If tee doesn't output anything right away, it is likely only receiving the input at the very end of your command's run. It might be helpful to mention which exact command it is.

The problem you are experiencing is most probably related to buffering.
You may have a look at the stdbuf command, which does the following:
stdbuf - Run COMMAND, with modified buffering operations for its standard streams.
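For instance, assuming your long-running program is an external binary that block-buffers its stdout when writing to a pipe (./my_long_program and output.log are placeholder names, not from the question), forcing line buffering lets tee pass each line on as soon as it is printed:
# Force line-buffered stdout for the command feeding the pipe, so tee
# writes each line to the terminal and to the file as it arrives.
stdbuf -oL ./my_long_program | tee output.log
Note that this only helps for programs that rely on the C stdio buffering that stdbuf can adjust; a program that manages its own buffers is unaffected.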

If you were to post your usage I could give a better answer, but as it is
(for i in `seq 10`; do echo $i; sleep 1s; done) | tee ./tmp
Is proper usage of the tee command and seems to work. Replace the part before the pipe with your command and you should be good to go.

Related

Redirect output from subshell and processing through pipe at the same time

tl;dr: I need a way to process (with grep) output inside a subshell AND redirect all of the original output to the main stdout/stderr at the same time. I am looking for a shell-independent (!) way.
In detail
There is a proprietary binary whose output I want to grep for some value.
The proprietary binary might from time to time ask interactively for a password (depending on its internal logic).
I want to grep the output of the binary AND be able to enter the password if it is required to proceed further.
So the script which is supposed to achieve my task might look like:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
action_item=$(${list_actions_cmd} | grep '^Main:')
Here proprietary-binary might ask for a password through stdin. Since the subshell inside $() captures all the output, an end user won't realize that list_actions_cmd is waiting for input. What I want is either to show all the output of list_actions_cmd AND grep it at the same time, or at least to catch the keyword that signals the user is about to be asked for a password and let him know about that.
Currently what I figured out is to tee the output and grep there:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
$list_actions_cmd 2>&1 | tee /tmp/.proprietary-binary.log
action_item=$(grep "^Main" /tmp/.proprietary-binary.log)
But I wonder: is there any elegant, shell-independent (not limited to bash, which is quite powerful) solution without any intermediate temporary file? Thanks.
What about duplicating output to stderr if executed in a terminal:
item=$(your_command | tee /dev/stderr | grep 'regexp')
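Applied to the script from the question, that could look something like this (just a sketch reusing the list_actions_cmd variable; the tee'd copy goes to the user's terminal via the script's stderr while grep captures the matching line):
# The full output, including any password prompt, is duplicated to stderr
# so the user can see it; grep keeps only the line we are interested in.
action_item=$(${list_actions_cmd} 2>&1 | tee /dev/stderr | grep '^Main:')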

How to stop writing to log file in BASH?

I'm using this code to write all of the terminal's output to a file:
exec > >(tee -a "${LOG_FILE}" )
exec 2> >(tee -a "${LOG_FILE}" >&2)
How do I tell it to stop if, for example, I don't want some output to get into the log?
Thank you!
It's not completely clear what your goal is here, but here is something to try.
Do you know about the script utility? You can run script myTerminalOutputFile and any commands and output after that will be captured to myTerminalOutputFile. The downside is that it captures everything. You'll see funny screen control chars like ^[[5m;27j. So do a quick test to see if that works for you.
To finish capturing output just type exit and you are returned to your parent shell command line.
Warning: check man script for details about the inherent funkyness of your particular OS's version of script. Some are better than others. script -k myTerminalOutputFile may work better in this case.
IHTH
You may pipe the output through "sed" and then tee it to the log file.
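One reading of that suggestion, sketched under the assumption that lines you want kept out of the log carry a made-up [nolog] tag (both the tag and the filter are illustrative, not from the original answer):
# Lines containing [nolog] are dropped by sed before tee sees them, so they
# end up neither in the log file nor in tee's copy of the stream.
exec > >(sed '/\[nolog\]/d' | tee -a "${LOG_FILE}")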

Why do some programs write their output to stderr instead of stdout?

I've recently added to my .bashrc file an ssh-add command. I found that
ssh-add $HOME/.ssh/id_rsa_github > /dev/null
results in an "identity added" message (and something else) every time I open a shell.
While
ssh-add $HOME/.ssh/id_rsa_github > /dev/null 2>&1
did the trick and my shell is now 'clean'.
Reading on the internet, I found that other commands do it too (for example time). Could you please explain why it's done?
When you redirect output from one process to another e.g. via pipes
$ procA | procB | procC
this is traditionally done using stdout. I would expect time and similar commands to output to stderr to avoid corrupting this stream. If you're using time then you're diagnosing issues and you don't want to inadvertently provide extra input to a downstream process.
This article includes some further detail and some history surrounding this.
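A quick way to see this in practice (a small illustration, not part of the original answer): because time reports on stderr, a pipeline only receives the timed command's own stdout:
# wc -l counts the lines that ls prints on stdout; the timing report
# from `time` is written to stderr and never travels down the pipe.
time ls | wc -l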

Force a shell script to fflush

I was wondering if it was possible to tell bash that all calls to echo or printf should be followed up by a subsequent call to fflush() on stdout/stderr respectively?
A quick and dirty solution would be to write my own printf implementation that did this and use it in lieu of either built in, but it occurred to me that I might not need to.
I'm writing several build scripts that run at once; for debugging I really need to see the messages they write, in order.
If commands use stdio and are connected to a terminal, their output will be flushed per line.
Otherwise you'll need to use something like stdbuf on the commands in a pipeline:
http://www.pixelbeat.org/programming/stdio_buffering/
tl;dr: instead of printf ... try putting stdbuf -o0 printf ... or stdbuf -oL printf ... in the script.
If you force the file to be read, it seems to cause the buffer to flush. These work for me.
Either read the data into a useless variable:
x=$(<$logfile)
Or do a UUOC (useless use of cat):
cat $logfile > /dev/null
Maybe "stty raw" can help with some other tricks for end-of-lines handling. AFAIK "raw" mode turns off line based buffering, at least when used for serial port ("stty raw < /dev/ttyS0").

flush output in Bourne Shell

I use echo in Upstart scripts to log things:
script
echo "main: some data" >> log
end script
post-start script
echo "post-start: another data" >> log
end script
Now these two run in parallel, so in the logs I often see:
main: post-start: some data another data
This is not critical, so I won't employ proper syncing, but I thought I'd turn auto flush ON to at least reduce this effect. Is there an easy way to do that?
Update: yes, flushing will not properly fix it, but I've seen it help such situations to some degree, and that is all I need in this case. It's just that I don't know how to do it in shell.
Try changing:
echo "text"
To:
cat << EOF
text
EOF
I usually use trailing |grep -F --line-buffered '' as a cheap and reliable way to enforce line buffering. There's also sponge from moreutils to delay all output until input is finished and closed.
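As a small illustration of that trick (some_long_running_command is a placeholder name):
# The empty fixed-string pattern matches every line, so this grep is just a
# pass-through that flushes after each line instead of block-buffering.
some_long_running_command | grep -F --line-buffered '' >> log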
Why do you assume that flushing would help? The write done by echo includes a newline, so if the first script ran to completion before the second, the newline would already be there.
In the output it isn't, which indicates that the second script was run before the first was complete, thus interleaving their outputs.
Flushing won't help with this, it's a "proper" parallel race condition.
