tail -f OR less +F how to highlight new lines - linux

Is there any way to highlight, i.e. bold or colorize, newly added lines since the last change?
For example, I am watching a log file with multiple similar errors in a PHP error_log (differing only in line number or function name, etc.), and I have to look at timestamps to see where one set of errors ends and another begins (page refresh).
It would be very helpful if there were a way to highlight only the most recently added lines.
I am looking for a solution that runs in a console on macOS and Linux.

Check out the watch command, if your system has it. The command:
watch -d tail /your/file/here
will display the file and highlight the differences character by character. Note that you do not want to use the -f option in this case.
Ubuntu has it. For OSX, you can use brew install watch if you have Homebrew installed, or sudo port install watch if you use MacPorts.
Another bonus is that it works for any command that has output that changes over time. We have even used it with ls -l to watch the progress of backups and file compressions.
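For instance, a two-second refresh with change highlighting (the interval and path here are arbitrary placeholders):
watch -n 2 -d 'ls -l /path/to/backups'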

"tail" itself does not offer a serious way to do this. But give "multitail" a closer look:
https://www.vanheusden.com/multitail/
And for Mac OSX:
http://macappstore.org/multitail/
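A minimal invocation is just multitail with the file to follow (the path below is only an example):
multitail /var/log/php_errors.log
By default multitail follows the file like tail -f, can apply color schemes to known log formats, and can show several logs in split windows.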

Turn on grep's line buffering mode.
Using tail
tail -f fileName | grep --line-buffered my_pattern
Using less
less +F fileName | grep --line-buffered my_pattern
Using watch & tail to highlight new lines
watch -d tail fileName
Note: for Linux-based systems.
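If you want highlighting without filtering anything out, a common trick is to alternate the pattern with $ (which matches every line, so nothing is dropped) and force color on; my_pattern is a placeholder:
tail -f fileName | grep --line-buffered --color=always -E 'my_pattern|$'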

Related

CLI colors disappear when piping into a text file [duplicate]

Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone -- presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
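For example (the output file name is arbitrary):
script mysession.txt
# ... run your color-producing commands inside the session, then:
exit
less -R mysession.txt   # view the capture with colors intact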
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, to see them you need to use a tool that leaves them intact. less has a -r flag to show file data in "raw" mode, which displays the color codes. Slightly newer versions also have a -R flag which is specifically aware of color codes and displays them properly, with better support for things like line wrapping and trimming than raw mode, because less can tell which things are control codes and which are actual characters going to the screen.
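Putting the two together, a minimal round trip might look like this (the file name is arbitrary):
ls --color=always > listing.txt
less -R listing.txt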
Inspired by the other answers, I started using script. I had to use -c to get it working, though; the other suggestions, including tee and the various script examples, did not work for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the test with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output is not observable live; it arrives in big chunks
--quiet suppresses the script tool's own output
-c, --command directly provides the command to execute; piping from my command to script did not work for me (no colors)
--return makes script propagate the exit code of my command, so I know whether my command has failed (see the example below)
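To see --return in action, a minimal sketch (my-command stands in for whatever you run):
script --flush --quiet --return /tmp/output.txt --command "my-command"
echo $?    # exit status of my-command, propagated by --return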
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is all messed up too), because less is interactive. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they detect that the output is not a TTY (i.e. when you redirect them into another program). You can tell some of those to use color forcefully, and tell the pager to turn colorization on, for example with less -R.
This question over on Super User helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's writing to a terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
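The general shape is just unbuffer in front of the command whose colors you want to keep, for example (the pattern and file names are placeholders):
unbuffer grep --color=auto 'pattern' file.txt | less -R
Because unbuffer runs the command under a pseudo-terminal, the command's own TTY detection turns color on as if it were interactive.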
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), then just send the output of tee to /dev/null:
command | tee filename > /dev/null

Linux tail on rotating log file using busybox

In my bash script I am trying to monitor the output from the /var/log/messages log file, and to keep doing so even when the file rotates (is re-created and started again). I tried using tail -f filename but quickly realised this is no good for when the file rotates.
So there are lots of answers for using tail -F filename or tail -f --retry filename (and a few other variants).
But on my embedded Linux I am using busybox which has a lightweight version of tail:
tail [OPTIONS] [FILE]...
Print last 10 lines of each FILE to standard output. With more than one
FILE, precede each with a header giving the file name. With no FILE, or
when FILE is -, read standard input.
Options:
-c N[kbm] Output the last N bytes
-n N[kbm] Print last N lines instead of last 10
-f Output data as the file grows
-q Never output headers giving file names
-s SEC Wait SEC seconds between reads with -f
-v Always output headers giving file names
If the first character of N (bytes or lines) is a '+', output begins with
the Nth item from the start of each file, otherwise, print the last N items
in the file. N bytes may be suffixed by k (x1024), b (x512), or m (1024^2).
So I can't do the usual tail -F ... since that option is not implemented. The document snippet above is from the latest busybox version; mine is a bit older.
So I need another way of logging /var/log/messages since the file gets overwritten at a certain size.
I was thinking of some simple bash one-liner. I saw things like inotifywait, but busybox does not have that. I looked here:
busybox docs, and there is an inotifyd, but my version does not have that particular command either. So I am wondering if there is a clever way of doing this with simple Linux commands or a combination of commands like watch, tail -f, and cat/less/more etc. I can't quite figure out what I need to do with the limited commands that I have :(
How are the logs rotated? Are you using a logrotate utility?
If yes, have you tried adding your line to the postrotate section in the config file?
from man logrotate
postrotate/endscript
The lines between postrotate and endscript (both of which must appear on lines by themselves) are executed after the log file is rotated. These directives may only appear inside of a log file definition. See prerotate as well.
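A sketch of what that could look like for /var/log/messages; the size, rotation count, and the command in postrotate are all assumptions to replace with your own:
/var/log/messages {
    size 100k
    rotate 5
    postrotate
        # e.g. signal your watcher script to reopen the file
        /usr/bin/killall -HUP my-log-watcher
    endscript
}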

Read stdout from a process (linux embedded)

Before flagging the question as a duplicate, please read about the various issues I encountered.
A bit of background: we are developing a C++ application running on an embedded ARM SBC using a lite variant of Debian Linux. The application starts at boot, launched by the boot script, and prints various information to stdout. What we would like is the ability to connect over SSH/Telnet and read the application's output, without having to kill the process and restart it for the current bash session. I want to create a simple .sh script for non-tech-savvy people to use.
The first solution for the similar question posted here is to use gdb. First, it's not user-friendly (you need to type multiple commands manually), and, I don't know why, it doesn't seem to output anything into the file.
The second solution, strace -ewrite -p PID, works perfectly; that's what I want. Problem is, there's a lot more information than just the stdout, and it's badly formatted.
I managed to get an "acceptable" result with strace -e write=1 -s 1024 -p 20049 2>&1 | grep "write(1," but it still has the superfluous write(1, "...", 19) = 19 text. Up to this point it's simply a bit of string formatting, and I've found on multiple other pages this line which supposedly achieves good formatting: strace -ff -e write=1,2 -s 1024 -p PID 2>&1 | grep "^ |" | cut -c11-60 | sed -e 's/ //g' | xxd -r -p
There are some things I find strange in this command (why -ff? why grep "^ |"? why use xxd there?), and it just doesn't output anything when I try it.
Unfortunately, we use an old, buggy version of busybox (1.7.1) that has some problems with multiple pipes, and that bug gives me bad results. For example, if I only do grep it works, and if I only do cut it also works, but grep "write(1," | cut -c11-60 returns nothing.
I know the real solution would simply be to update busybox and use these multiple pipes to format the string, but we can't update it, since the OS distribution is already installed on thousands of boards shipped to our clients worldwide.
Anyone have a miraculous solution? Thanks
Screen can be connected to an existing process using reptyr (http://blog.nelhage.com/2011/01/reptyr-attach-a-running-process-to-a-new-terminal/), or you can use neercs (http://caca.zoy.org/wiki/neercs) which I haven't used but apparently is like screen but supports attaching to an existing process all by itself.
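A rough sketch of the reptyr route, assuming the application binary is called myapp (a hypothetical name):
screen                     # start a screen session over your SSH connection
reptyr $(pidof myapp)      # pull the running process into this terminal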

Limit output of all Linux commands

I'm looking for a way to limit the amount of output produced by all command-line programs in Linux, and preferably to be told when it is being limited.
I'm working over a connection to a server which has a lag on the display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc to prevent too much output having to be printed to terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
hard-link the existing utility under a new name
write a tiny bash function that calls the renamed utility and pipes to head (or wc, or whatever)
alias the name of the utility to call your function.
So along these lines (utterly untested):
$ ln "$(which cat)" ~/bin/old_cat    # keep the real cat reachable under a new name
function trunc_cat () {
    old_cat "$@" | head -n 100       # pass all arguments through, cap output at 100 lines
}
alias cat=trunc_cat
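With that in place, cat some_large_file prints at most 100 lines; when you do need the full output, command cat (or \cat) bypasses the alias.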
Making aliases of all your commands would be a good start. Something like
alias lm="ls -al | more"
Note that an alias cannot take arguments, so for a command like cat you would use a small function instead:
cam () { cat "$@" | more; }
Perhaps using screen could help?
This makes me think of bash's command-not-found handling.
Since bash lets you define a handler that runs when a program is not found (command_not_found_handle), what about writing your own handler and clearing $PATH, in order to execute every command with its output redirected to a filtering pipe?
(I did not try this myself.)
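A very rough sketch of that idea, assuming bash 4+ (which provides the command_not_found_handle hook); like the suggestion itself, this is untested and fragile:
command_not_found_handle () {
    # PATH was emptied, so every external command lands here;
    # re-run it with a restored PATH and cap the output at 100 lines
    PATH=/usr/bin:/bin "$@" | /usr/bin/head -n 100
}
PATH=""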
Assuming you're working over a network connection, like ssh, into a remote server then try piping the output of the command to less. That way you can manage and navigate the output from the program on the server better. Use 'j' and 'k' to move up and down per line and 'ctrl-u' and 'ctrl-d' to move 1/2 a page up and down. When you do this only the relevant text (i.e. what fits on the screen) will be transmitted over the network.

Is there a way to configure Vim grepprg option to avoid waiting until the external tool has finished searching?

I am a long-time Vimmer. However, I keep switching to the shell to do searches, which keeps me from using the quickfix functionality.
The main reason for switching to the shell is that when I use grep from inside Vim (with :grep), I cannot follow its progress.
Because the code base I search is usually large, I really appreciate immediate feedback.
It gives me a chance to find out that my search expression is wrong before the full results have been displayed.
That allows me to cancel the search, refine the expression, then relaunch it.
Any hint on how to reproduce this pattern inside Vim would be appreciated.
I don't see the same vim behaviour as you. When I run :grep, I still see the results in vim (not in the quickfix) before the search completes (but I cannot do anything until the search is done).
I even tried using no vim settings or plugins:
gvim -u NONE -U NONE
If that's not your behaviour, check your grepprg. Mine is the default:
:verbose set grepprg
grepprg=grep -n $* /dev/null
When I run :grep -e "score" -R /etc I see this output in vim:
:!grep -n -e "score" -R /etc /dev/null 2>&1| tee /tmp/voLcaNS/232
It's possible that your system is missing tee or your vim doesn't use it (I'm using Vim 7.2 on Ubuntu 10.10). tee takes the text passed to it and writes it to a file and to stdout.
If you're looking for a way to have the quickfix get updated with your search results and have vim not block while you're searching, then you could write a script that:
searches with grep as a background process and redirects to a file
every second until grep completes, have vim load the file into the quickfix list (cgetfile); you can tell vim to do something from another process with --remote-expr (see the sketch below)
You can try my AsyncCommand plugin to get your code started. It does the above, except that it only loads the file when the search is complete.
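A bare-bones sketch of that loop; the server name GREPVIM, the pattern, and the paths are all placeholders, vim must have been started with --servername GREPVIM, and this needs vim's +clientserver feature (using --remote-send here, which older vims handle more gracefully than --remote-expr):
grep -Rn 'pattern' src/ > /tmp/results.txt &
gpid=$!
while kill -0 "$gpid" 2>/dev/null; do
    # tell the running vim to (re)load the partial results into quickfix
    vim --servername GREPVIM --remote-send ':cgetfile /tmp/results.txt<CR>'
    sleep 1
done
# one final load once grep has finished
vim --servername GREPVIM --remote-send ':cgetfile /tmp/results.txt<CR>'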
Are you familiar with ack.vim at all? It doesn't use the quickfix window, but a separate buffer in a split. However, it's rather fast, and the results come right back into the vim frame.
This may be due to buffering between grep and tee, not vim itself. To test this theory, run grep from the command-line and pipe the output through tee (i.e. grep <pattern> <files> | tee temp.out). If it behaves the same as you observe within vim, then buffering is occurring.
To work around this, install expect (sudo apt-get install expect-dev on Ubuntu 10.10) and set grepprg to unbuffer grep -n $* /dev/null. (See Turn off buffering in pipe.)
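In .vimrc syntax the spaces have to be escaped, so that setting would look something like:
:set grepprg=unbuffer\ grep\ -n\ $*\ /dev/null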
Take a look at :vimgrep in the online documentation. It displays the file name being searched and updates as it goes.
There are three ways to search an entire project.
System command grep (fast, but does not play well with the Quickfix list)
=>$ grep -n Example *
Vim's internal grep (slow, but with strong pattern support)
:vim[grep] /{pattern}/[g][j] {file} ...
System plugin ack (perfect)
1 install ack
brew install ack
2 add the configs below to your .vimrc
:set grepprg=ack\ --nongroup\ --column\ $*
:set grepformat=%f:%l:%c:%m
3 then you can use :grep to call ack in vim, like
:grep "object\." app/**/*.rb
