I was wondering if it's possible to tell bash that all calls to echo or printf should be followed by a call to fflush() on stdout/stderr, respectively?
A quick and dirty solution would be to write my own printf implementation that did this and use it in lieu of either built in, but it occurred to me that I might not need to.
I'm writing several build scripts that run at once, and for debugging I really need to see the messages they write in order.
If commands use stdio and are connected to a terminal, they'll be flushed per line.
Otherwise you'll need to use something like stdbuf on the commands in a pipeline.
http://www.pixelbeat.org/programming/stdio_buffering/
tl;dr: instead of a bare printf ..., put stdbuf -o0 printf ... (unbuffered) or stdbuf -oL printf ... (line-buffered) in the script.
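For example, a minimal sketch (assuming GNU coreutils' stdbuf is available; note that stdbuf execs the external printf from coreutils, since it cannot affect shell builtins):
stdbuf -oL printf '%s\n' "building foo"   # line-buffered stdout, usually enough for ordered logs
stdbuf -o0 printf '%s\n' "building foo"   # completely unbuffered, at some performance cost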
If you force the file to be read, it seems to cause the buffer to flush. These work for me.
Either read the data into a useless variable:
x=$(<"$logfile")
Or do a UUOC (useless use of cat):
cat "$logfile" > /dev/null
Maybe "stty raw" can help with some other tricks for end-of-lines handling. AFAIK "raw" mode turns off line based buffering, at least when used for serial port ("stty raw < /dev/ttyS0").
I'm using this code to write all terminal's output to the file
exec > >(tee -a "${LOG_FILE}" )
exec 2> >(tee -a "${LOG_FILE}" >&2)
How do I tell it to stop if, for example, I don't want some output to get into the log?
Thank you!
It's not completely clear what your goal is here, but here is something to try.
Do you know about the script utility? You can run script myTerminalOutputFile, and any commands and output after that will be captured to myTerminalOutputFile. The downside is that it captures everything, so you'll see funny screen-control characters like ^[[5m;27j. Do a quick test to see if that works for you.
To finish capturing output, just type exit and you are returned to your parent shell command line.
Warning: check man script for details about the inherent funkiness of your particular OS's version of script. Some are better than others. script -k myTerminalOutputFile may work better in this case.
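A minimal session sketch (myTerminalOutputFile is just an example name):
script myTerminalOutputFile   # start capturing; everything shown on the terminal is also written to the file
make                          # ... run whatever commands you want captured ...
exit                          # stop capturing and return to the parent shell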
IHTH
You may also pipe through sed and then tee to a log file.
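For instance, a sketch (assuming GNU sed for the unbuffered -u flag; some_command and the [build] tag are made up):
some_command 2>&1 | sed -u 's/^/[build] /' | tee -a "${LOG_FILE}"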
Suppose someone is writing a bash script in which stdout and stderr need to be silenced and a custom message provided instead.
Is it advisable to use a function like the one below?
dump() {
    "$@" > /dev/null 2>&1
}
And, then
dump rm filename || echo "custom-message"
What are the possible cases where it fails to function as expected?
This is a good technique. I use something like it all the time. Pros:
Preserves the exit code of the command.
Hides the output of almost every program unless it writes directly to /dev/tty or /dev/console, which is rare and probably for good reason anyway.
Works on shell builtins just as well as binaries. You can use this for cd, pushd/popd, etc.
Doesn't stop the command from reading from stdin. dump can be used at the end of a pipeline if you wish.
"$#" properly handles command names and arguments with whitespace, globs, and other special characters.
It looks good to me!
The only nitpick I have is that the name dump isn't the clearest.
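For illustration, a couple of usage sketches (the file and directory names are hypothetical):
dump rm -rf ./build || echo "cleanup failed"            # exit status preserved, output hidden
dump cd /nonexistent || echo "no such directory"        # works on builtins too
printf '%s\n' one two | dump grep one && echo "found"   # reads stdin at the end of a pipeline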
I am new to using the tee command.
I am trying to run one of my programs, which takes a long time to finish but prints out information as it progresses. I am using tee to save the output to a file as well as to see the output in the shell (bash).
But the problem is that tee doesn't forward the output to the shell until the end of my command.
Is there any way to do that?
I am using Debian and bash.
This actually depends on the amount of output and the implementation of whatever command you are running. No program is obliged to print straight to stdout or stderr and flush it all the time. So even though most C runtime implementations flush after a certain amount of data has been written through one of the runtime routines, such as printf, this may not be true depending on the implementation.
If tee doesn't output it right away, it is likely only receiving the input at the very end of the run of your command. It might be helpful to mention which exact command it is.
The problem you are experiencing is most probably related to buffering.
You may have a look at stdbuf command, which does the following:
stdbuf - Run COMMAND, with modified buffering operations for its standard streams.
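A minimal sketch, assuming your long-running program is ./myprogram and produces line-oriented output via stdio:
stdbuf -oL ./myprogram | tee output.log   # force line buffering so tee sees each line as it is printed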
If you were to post your usage I could give a better answer, but as it is
(for i in `seq 10`; do echo $i; sleep 1s; done) | tee ./tmp
is proper usage of the tee command and seems to work. Replace the part before the pipe with your command and you should be good to go.
I'm building an open-source project from source (C++) on Linux. This is the procedure:
$ CFLAGS="-g -Wall" CXXFLAGS="-g -Wall" ../trunk/configure --prefix=/somepath/ --host=i386-pc --target=i386-pc
$ make
While compiling I'm getting a lot of compiler warnings. I want to start fixing them. My question is: how do I capture all the compiler output in a file?
$ make > file is not doing the job; it just saves the compiler commands, like g++ -someoptions /asdf/xyz.cpp. I want the output of those command executions.
The compiler warnings happen on stderr, not stdout, which is why you don't see them when you just redirect make somewhere else. Instead, try this if you're using Bash:
$ make &> results.txt
The & means "redirect stdout and stderr to this location". Other shells often have similar constructs.
In a Bourne shell:
make > my.log 2>&1
i.e. > redirects stdout, and 2>&1 redirects stderr to the same place as stdout.
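Note that the order of the redirections matters; a quick sketch:
make > my.log 2>&1   # stdout goes to my.log first, then stderr is duplicated onto it: both captured
make 2>&1 > my.log   # stderr is duplicated onto the old stdout (the terminal), then only stdout is redirected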
Lots of good answers so far. Here's a frill:
$ make 2>&1 | tee filetokeepitin.txt
will let you watch the output scroll past.
The output went to stderr. Use 2> to capture that.
$ make 2> file
Assume you want to highlight warnings and errors from the build output:
make |& grep -E "warning|error"
Based on an earlier reply by @dmckee:
make 2>&1 | tee makelog.txt
This gives you real-time scrolling output while compiling and simultaneously writes to the makelog.txt file (the 2>&1 is needed because the warnings arrive on stderr).
Try make 2> file. Compiler warnings come out on the standard error stream, not the standard output stream. If my suggestion doesn't work, check your shell manual for how to divert standard error.
From http://www.oreillynet.com/linux/cmd/cmd.csp?path=g/gcc
The > character does not redirect the standard error. It's useful when you want to save legitimate output without mucking up a file with error messages. But what if the error messages are what you want to save? This is quite common during troubleshooting. The solution is to use a greater-than sign followed by an ampersand. (This construct works in almost every modern UNIX shell.) It redirects both the standard output and the standard error. For instance:
$ gcc invinitjig.c >& error-msg
In C shell, the ampersand is placed after the greater-than symbol:
make >& filename
This is typically not what you want to do. You want to run your compilation in an editor that has support for reading the output of the compiler and jumping to the file and line that has the problem. That works in all editors worth considering. Here is the Emacs setup:
https://www.gnu.org/software/emacs/manual/html_node/emacs/Compilation.html
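For reference, a quick sketch of the stock Emacs workflow (default key bindings):
M-x compile RET make -k RET   (runs make in a *compilation* buffer)
M-g M-n                       (next-error: jump to the next warning or error in the source)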
I use echo in Upstart scripts to log things:
script
echo "main: some data" >> log
end script
post-start script
echo "post-start: another data" >> log
end script
Now these two run in parallel, so in the logs I often see:
main: post-start: some data another data
This is not critical, so I won't employ proper synchronization, but I thought I'd turn auto-flush on to at least reduce the effect. Is there an easy way to do that?
Update: yes, flushing will not properly fix it, but I've seen it help such situations to some degree, and that is all I need in this case. It's just that I don't know how to do it in shell.
Try changing:
echo "text"
To:
cat << EOF
text
EOF
I usually use a trailing | grep -F --line-buffered '' as a cheap and reliable way to enforce line buffering. There's also sponge from moreutils, which delays all output until its input is finished and closed.
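A sketch of both approaches (task.sh is a made-up script name):
./task.sh | grep -F --line-buffered '' >> log   # pass everything through, flushed per line
./task.sh | sponge >> log                       # delay: write the whole output in one go at the end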
Why do you assume that flushing would help? The write done by echo includes a newline, so if the first script ran to completion before the second, the newline would already be there.
In the output it isn't, which indicates that the second script ran before the first was complete, thus interleaving their outputs.
Flushing won't help with this; it's a "proper" parallel race condition.