Better way to output to both console and output file than tee? - linux

What I need to display is a log that refreshes periodically - a block of about 10 lines of text. I'm using | tee and it works right now. However, the performance is unsatisfying: the output waits a while and then several blocks of text from multiple refreshes appear at once (especially when the program just starts, it takes quite a while before anything shows up on the console, and the first time I saw this I thought the program was hanging). In addition, the output breaks off randomly in the middle of the last block, so it's quite ugly to present.
Is there a way to improve this? (Maybe output less each time and switch between output file and console more frequently?)

Solved by flushing stdout after printing each block. Credit to Kenneth L!
https://superuser.com/questions/889019/bash-better-way-to-output-to-both-console-and-output-file-than-tee

Assuming you can monitor the log as a file directly [update: turned out not to be the case]:
The usual way of monitoring a [log] file for new lines is to use tail -f, which - from what I can tell - prints new data added to the log file as it is being added, without buffering.
Similarly, tee passes the data it receives on stdin through without buffering.
Thus, you should be able to combine the two:
tail -f logFile | tee newLogEntriesFile
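As a side note (not from the original thread): if the program producing the log cannot be changed to flush stdout after each block, you can often force line-buffered output externally with coreutils' stdbuf. A minimal sketch, where the program and file names are placeholders:

# Force the producer to line-buffer its stdout before handing it to tee.
# This only helps if the program relies on stdio's default buffering.
stdbuf -oL ./refresh-log | tee output.log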

Related

How can I select() (i.e., simultaneously read from) standard input *and* a file in bash?

I have a program that accepts input on one FIFO and emits output to another FIFO. I want to write a small script to control this program. The script needs to listen both to standard input (so I can input commands to adjust things in real time) and the program's output FIFO (so it can respond to events happening there as well).
Essentially my control program needs to select between standard input and a file (my FIFO).
I like figuring out how to develop simple and elegant bash-based solutions to complex problems, and after a little head-scratching I remembered that tail -f will happily select on multiple files and tell you when one of them changes in real time, so I initially tried
tail -f <(od -An -vtd1 -w1) <(cat fifo)
to read both standard input (I'd previously run stty icanon min 1; this od invocation shows each stdin character on a separate line alongside its ASCII code, and is great for escape sequence parsing) and my FIFO. This failed epically (as does cat <(cat)): od gets run here as a backgrounded task, so it doesn't get access to the controlling TTY, and fails with a cryptic "I/O error" that was explained incredibly well here.
So now I'm a bit stumped. I realize that I can use any scripting language like Perl/Python/Ruby/Tcl to solve this; my compsci/engineering question is whether/how I might be able to solve this using (Linux) shell scripting.
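One possible pure-bash approximation, added here as a hedged sketch rather than an answer from the thread, is to poll standard input and the FIFO alternately with read -t; the FIFO path and the echo handlers are placeholders:

#!/usr/bin/env bash
# Sketch: alternately poll stdin and a FIFO with short read timeouts.
# Assumes bash >= 4 (fractional -t). Opening the FIFO blocks until a
# writer opens its end, and the loop busy-polls once the writer exits.
fifo=./fifo                     # placeholder path

exec 3<"$fifo"                  # keep the FIFO open on fd 3

while true; do
    # Commands typed on stdin
    if IFS= read -r -t 0.1 cmd; then
        echo "stdin: $cmd"
    fi
    # Events arriving on the program's output FIFO
    if IFS= read -r -t 0.1 -u 3 event; then
        echo "fifo:  $event"
    fi
done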

Linux capture window of a large stream of data

Let's say I have a ton of data flowing through stdout over a long period of time, maybe an hour, and I want to capture a 30 second window of that data based on a trigger that occurs in the middle of that window. For instance, maybe something like
$ program-that-outputs-lots-of-data | program-that-captures-a-window-of-data
At some point, a line that contains "A-unique-string" will be output by the program, and at that point I want to save the 15 seconds worth of data before and after that string, discarding everything before that. Immediately afterward, I want to start monitoring again for the same string and capture another window when it comes in and save it to a new file. Any idea how I can do something like this with Linux tools?
The fact that you are trying to use time as the unit for buffering makes your problem quite unusual. On the Unix command line, everything tends to be designed around the concept of text lines.
For example, if instead of 15 seconds of data you would like to capture 15 lines of text (before and after the special token), you could simply do:
$ program-that-outputs-lots-of-data | grep -C 15 A-unique-string
In your case, even if you develop your own tailored filtering tool, deciding how much input to save and how much to discard is a fairly complex problem. I'd say that multimedia streaming is the area where ready-to-use tools are most likely to exist.
I don't think anything exists that approaches these goals. Aside from the fact that your requirements are fairly specific, you also ask that the window be time-based, whereas most Unix-style text filters are line-oriented (e.g. grep -C 100 to get the hundred lines surrounding a match).
It should be fairly straightforward to do this in Python or Perl or Ruby or a similar scripting language.
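Since both answers point at a small custom script, here is a rough bash sketch of the time-based window (my own illustration, not from the thread; the trigger string and the 15-second window come from the question, while the output file naming is made up):

#!/usr/bin/env bash
# Sketch: keep a rolling buffer of the last $window seconds of lines; when
# the trigger appears, dump the buffer and keep capturing for $window more
# seconds, writing one file per trigger (window-1.log, window-2.log, ...).
window=15
trigger='A-unique-string'
declare -a times lines
capture_until=0
out=""
n=0

while IFS= read -r line; do
    now=$(date +%s)

    # Discard buffered lines older than $window seconds
    while ((${#times[@]})) && (( times[0] < now - window )); do
        times=("${times[@]:1}")
        lines=("${lines[@]:1}")
    done
    times+=("$now"); lines+=("$line")

    if (( capture_until == 0 )) && [[ $line == *"$trigger"* ]]; then
        out="window-$((++n)).log"
        printf '%s\n' "${lines[@]}" > "$out"    # the 15 s before the trigger
        capture_until=$(( now + window ))
    elif (( capture_until > 0 )); then
        printf '%s\n' "$line" >> "$out"         # the 15 s after it
        (( now >= capture_until )) && capture_until=0
    fi
done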

Performance of sort command in unix

I am writing a custom apache log parser for my company and I noticed a performance issue that I can't explain. I have a text file log.txt with size 1.2GB.
The command: sort log.txt is up to 3 sec slower than the command: cat log.txt | sort
Does anybody know why this is happening?
cat file | sort is a Useless Use of Cat. The purpose of cat is to concatenate (or "catenate") files. If it's only one file, concatenating it with nothing at all is a waste of time, and costs you a process.
It shouldn't take longer. Are you sure your timings are right?
Please post the output of:
time sort file
and
time cat file | sort
You need to run the commands a few times and get the average.
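For example, a quick way to repeat each measurement a few times while keeping the terminal out of the picture (the file name is a placeholder):

# Time each variant three times, discarding sort's output so terminal
# rendering doesn't skew the numbers
for i in 1 2 3; do time sort log.txt > /dev/null; done
for i in 1 2 3; do time cat log.txt | sort > /dev/null; done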
Instead of worrying about the performance of sort, you should change your logging:
Eliminate unnecessarily verbose output to your log.
Periodically roll the log (based on either date or size).
...fix the errors that are being output to the log. ;)
Also, are you sure cat is reading the entire file? It may have a read buffer etc.

Time taken by `less` command to show output

I have a script that produces a lot of output. The script pauses for a few seconds at point T.
Now I am using the less command to analyze the output of the script.
So I execute ./script | less. I leave it running for sufficient time so that the script would have finished executing.
Now I go through the output in less by pressing the Page Down key. Surprisingly, while scrolling past point T of the output, I notice the pause of a few seconds again.
The script does not expect any input and would have definitely completed by the time I start analyzing the output of less.
Can someone explain how the pause of a few seconds is noticeable in the output of less when the script should already have finished executing?
Your script is communicating with less via a pipe. A pipe is an in-memory stream of bytes connecting two endpoints: your script and the less program; the former writes output to it, the latter reads from it.
As pipes live in memory, it would be unpleasant if they could grow arbitrarily large. So there's a limit on how much data can be inside the pipe (written, but not yet read) at any given moment; on Linux it's 64 KB by default. If the pipe is full and your script tries to write to it, the write blocks. So your script isn't actually working: it stopped at some point inside a write() call.
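You can see this blocking with a quick experiment (my own illustration, not from the answer): the writer below cannot finish its 200 KB until the reader, which sleeps for 5 seconds first, starts draining the pipe:

# "writer finished" only appears after ~5 s, once the reader drains the pipe
{ yes | head -c 200000; echo 'writer finished' >&2; } | { sleep 5; cat > /dev/null; }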
How to overcome this? Adjusting the defaults is a bad option; the usual approach is for the reader to allocate its own buffer, reading into it to drain the pipe and thus letting the writing program continue, while showing you (or handling) only part of the output. less has such a buffer and, by default, expands it automatically. However, it doesn't fill it in the background; it only fills it as you read through the input.
So what would solve your problem is reading the input to the end (as if you had pressed G) and then going back to the beginning (as if you had pressed g). You can specify these commands on the command line like this:
./script | less +Gg
Note, however, that you will have to wait until the whole of the script's output has been loaded into memory, so you won't be able to start viewing it immediately; less isn't sophisticated enough for that. If that is what you really need (browsing the beginning of the output while ./script is still computing the end), you might want to use a temporary file:
./script >x & less x ; rm x
The pipe is full at the OS level, so script blocks until less consumes some of it.
Flow control. Your script is effectively being paused while less is paging.
If you want to make sure that your command completes before you use less interactively, invoke less as less +G and it will read to the end of the input; you can then return to the start by typing 1G into less.
For some background information there's also a nice article by Alexander Sandler called "How less processes its input"!
http://www.alexonlinux.com/how-less-processes-its-input
Can I externally enforce line buffering on the script?
Is there an off the shelf pseudo tty utility I could use?
You can try using the script command to turn on line-buffered output mode.
script -q /dev/null ./script | less # FreeBSD, Mac OS X
script -c "./script" /dev/null | less # Linux
For more alternatives in this respect please see: Turn off buffering in pipe.
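Another off-the-shelf pseudo-tty wrapper, if it happens to be installed, is unbuffer from the expect package (an alternative not covered above):

unbuffer ./script | less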

Syncing two files when one is still being written to

I have an application (video stream capture) which constantly writes its data to a single file. The application typically runs for several hours, creating a file of roughly 1 gigabyte. Soon (within a few seconds) after it quits, I'd like to have 2 copies of the file it was writing - say, one in /mnt/disk1 and another in /mnt/disk2 (the latter is a USB flash drive with a FAT32 filesystem).
I don't really like the idea of modifying the application to write 2 copies simultaneously, so I thought of the following:
Application starts and begins to write the file (let's call it /mnt/disk1/file.mkv)
Some utility starts, copies what's already there in /mnt/disk1/file.mkv to /mnt/disk2/file.mkv
After reaching the initial synced state, it continues to follow the file being written, much as tail -f does, copying everything it gets from /mnt/disk1/file.mkv to /mnt/disk2/file.mkv
Several hours pass
Application quits, we stop our syncing utility
Afterwards, we run a quick rsync /mnt/disk1/file.mkv /mnt/disk2/file.mkv just to make sure they're the same. If they already are, it should just run a quick check and finish fairly soon.
What is the best approach for syncing the 2 files, preferably using simple utilities available from the Linux shell? Maybe I could use some clever trick with FUSE / an md device / tee / tail -f?
Solution
The best possible solution for my case seems to be
mencoder ... -o >(
tee /mnt/disk1/file.mkv |
tee /mnt/disk2/file.mkv |
mplayer -
)
This one uses the bash/zsh-specific magic called "process substitution", eliminating the need to create named pipes manually with mkfifo, and displays what's being encoded as a bonus :)
Hmmm... the file is not usable while it's being written, so why don't you "trick" your program into writing through a pipe/FIFO and use a second, very simple program to create the 2 copies?
This way, you have your two copies as soon as the original process ends.
Read the manual page on tee(1).
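A minimal sketch of that pipe/FIFO idea, assuming the capture application can be pointed at an arbitrary output path (the application name and its --output flag are hypothetical):

# One tee reads from the FIFO and writes both copies
mkfifo /tmp/capture.fifo
tee /mnt/disk1/file.mkv /mnt/disk2/file.mkv < /tmp/capture.fifo > /dev/null &
video-capture-app --output /tmp/capture.fifo    # hypothetical application and flag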
