I'd like to know if I can colour highlight the output of a shell command that matches certain strings.
For example, if I run myCommand, with the output below:
> myCommand
DEBUG foo bar
INFO bla bla
ERROR yak yak
I'd like all lines matching ^ERROR\s.* to be highlighted red.
Similarly, I'd like the same highlighting to be applied to the output of grep, less etc...
EDIT: I probably should mention that ideally I'd like to enable this feature globally via a 'profile' option in my .bashrc.
There is an answer on superuser.com:
your-command | grep -E --color 'pattern|$'
or
your-command | grep --color 'pattern\|$'
This will "match your pattern or the end-of-line on each line. Only the pattern is highlighted..."
You can use programs such as:
spc (Supercat)
grc (Generic Colouriser)
highlight
histring
pygmentize
grep --color
You can do something like this, but the commands won't see a tty (some will refuse to run or behave differently or do weird things):
exec > >(histring -fEi error) # Bash
If you want to enable this globally, you'll want a terminal feature, not a process that you pipe output into, because a pipe would be disruptive to some commands (two problems are that stdout and stderr would appear out-of-order and buffered, and that some commands just behave differently when outputting to a terminal).
I don't know of any “conventional” terminal with this feature. It's easily done in Emacs, in a term buffer: configure font-lock-keywords for term-mode.
However, you should think carefully whether you really want that feature all the time. What if the command has its own colors (e.g. grep --color, ls --color)? Maybe it would be better to define a short alias to a colorizer command and run myCommand 2>&1|c when you want to colorize myCommand's output. You could also alias some specific always-colorize commands.
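As a rough sketch of such a colorizer, c could be a small function in your .bashrc; the name c and the patterns here are just placeholders, and it assumes a terminal that understands ANSI escapes:
c () {
    # highlight whole ERROR lines in red and WARN lines in yellow, pass everything else through
    sed -e $'s/^ERROR.*/\e[31m&\e[0m/' -e $'s/^WARN.*/\e[33m&\e[0m/'
}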
Note that the return status of a pipeline is that of its last command, so if you run myCommand | c, you'll get the status of c, not myCommand. Here's a bash wrapper that avoids this problem, which you can use as w myCommand:
w () {
    "$@" | c
    return "${PIPESTATUS[0]}"
}
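Used as w myCommand, a subsequent echo $? then reports myCommand's exit status rather than the colorizer's.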
You could try (maybe needs a bit more escaping):
BLUE="$(tput setaf 4)"
BLACK="$(tput sgr0)"
command | sed "s/^ERROR /${BLUE}ERROR ${BLACK}/g"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
You can use the hl command available on GitHub:
git clone http://github.com/mbornet-hl/hl
Then:
myCommand | hl -r '^ERROR.*'
You can use the $HOME/.hl.cfg configuration file to simplify the command line.
hl is written in C (source is available).
You can use up to 42 different colors of text.
Use awk.
COLORIZE_AWK_COMMAND='{ print $0 }'
if [ -n "$COLORIZE" ]; then
COLORIZE_AWK_COMMAND='
/pattern1/ { printf "\033[1;30m" }
/pattern2/ { printf "\033[1;31m" }
// { print $0 "\033[0m"; }'
fi
then later you can pipe your output
... | awk "$COLORIZE_AWK_COMMAND"
printf is used in the patterns so we don't print a newline, just set the color.
You could probably enable it for specific commands using aliases and user-defined shell functions without too much trouble. If you're coloring errors, I assume you want to process stderr. Since stderr is unbuffered, you would probably want to line-buffer it by sending it through a FIFO.
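A minimal sketch of that idea, using a process substitution rather than an explicit FIFO (the function name is just a placeholder, and stdout and stderr may still interleave out of order):
# hypothetical wrapper: color a command's stderr red, line by line
color_stderr () {
    "$@" 2> >(sed $'s/.*/\e[31m&\e[0m/' >&2)
}
color_stderr myCommand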
Related
I remember doing magic with vi by "programming" it with input commands but I don't remember exactly how.
My special requests are:
launch vi in a script with command to be executed.
do an insert in one file.
search for a string in the file.
use $VARIABLE in the vi command line to replace something in the command.
finish with :wq.
From memory, I sent the commands exactly as in vi, and the ESC key was emulated by '[' or something close to that.
I used these commands in a script to edit and change files.
I'm going to look at the -c option, but so far I cannot use a $VARIABLE in it, and inserting (with 'i' or 'o') was impossible.
#!/bin/sh
#
cp ../data/* .
# Retrieve filename.
MODNAME=$(pwd | awk -F'-' '{ print $2 }')
mv minimod.c $MODNAME.c
# Edit the file and change the name inside it (from minimod to the content of $MODNAME).
vi $MODENAME.c -c ":1,$s/minimod/$MODNAME/" -c ':wq!'
This example is not functioning (it seems the $VARIABLE is not expanded properly in the -c command).
Could you help me refresh my memory ;) ?
Thanks a lot.
Joe.
You should not use vi for non-interactive editing. There's already another tool for that: sed, the stream editor.
What you want to do is
sed -i "s/minimod/$MODNAME/g" "$MODNAME.c"
to replace your vi command.
Maybe you are looking for the ex command, which can be run non-interactively:
ex $MODNAME.c <<< "1,\$s/minimod/$MODNAME/
wq!
"
Or if you use an old shell that does not implement here strings:
ex $MODNAME.c << EOF
1,\$s/minimod/$MODNAME/
wq!
EOF
Or if you do not want to use here documents either:
echo "1,\$s/minimod/$MODNAME/" > cmds.txt
echo "wq!" >> cmds.txt
ex $MODNAME.c < cmds.txt
rm cmds.txt
One command per line in the standard input. Just remember not to write the leading :. Take a look at this page for a quick review of ex commands.
Granted, it would be better to use the proper tool for the job: sed, as @IsaA's answer suggests, or awk for more complex commands.
I need to highlight certain keywords like "fail, failed, error, fatal, missing" over my terminal.
I need this with the output of ALL the commands, not any specific command. I assume I need to tweak my bashrc file for this.
To color I can use:
<input coming to terminal>|grep -P --color=auto 'fail|failed|error|fatal|missing|$'
I tried the following command, but it did not help:
tail -f $(tty) |grep -P --color=auto 'fail|failed|error|fatal|missing|$' &
[1]+ Stopped(SIGTTIN) tail -f $(tty) | grep -P --color=auto 'fail|failed|error|fatal|missing|$'
I searched SO for answers but could not find any question which provides the desired answer.
I don't think there's really an elegant way to do this using the shell. Ideally, you'd get a terminal emulator with this kind of keyword highlighting built in. You can get some of the way by piping the output of bash through a filter that adds ANSI colour escapes. Here is a sed script that replaces "fail" with (red)fail(normal):
s/fail/\x1B[31m&\x1B[0m/
t done
:done
Run bash with its output piped through sed like this:
$ bash | sed -f color.sed
This mechanism is not without problems, but it works in some cases. Usually it's better just to collect up the output you want, and then pipe it through sed, rather than working directly with the bash output.
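A sketch of a slightly fuller color.sed covering the keywords from the question (GNU sed syntax, since \x1B and \| alternation are GNU extensions; "failed" is already covered by the "fail" pattern):
s/fail\|error\|fatal\|missing/\x1B[31m&\x1B[0m/g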
I've been trying to make tail a little more readable for server startups. My current command filters out most of the INFO and DEBUG messages from the startup:
tail -F ../server/durango/log/server.log | grep -e "ERROR" -e "WARN" -e "Shutdown" -e "MicroKernel" | grep --color=auto -E 'MicroKernel|$'
What I would like to do is craft something that would highlight WARN in yellow and ERROR in red, and MicroKernel in green. I tried just piping grep --color=auto multiple times, but the only color that survives is the last command in the pipe.
Is there a one liner to do this? Or even a many-liner?
Yes, there is a way to do this, as long as your terminal supports ANSI escape sequences, which most terminals do.
I don't think I need to explain how to grep, sed, etc.; the point is the color, right?
See below; this will make
WARN yellow
ERROR red
foo green
Here is an example:
kent$ echo "WARN
ERROR
foo"|sed 's#WARN#\x1b[33m&#; s#ERROR#\x1b[31m&#; s#foo#\x1b[32m&#'
Note: \x1b is hexadecimal for the ESC character (^VEsc).
Run it in a terminal to see the colored result.
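Applied to the pipeline in the question, a sketch in the same style (with a reset added so only the keyword stays colored; server.log stands in for your real log path) could be:
tail -F server.log | sed 's#WARN#\x1b[33m&\x1b[0m#; s#ERROR#\x1b[31m&\x1b[0m#; s#MicroKernel#\x1b[32m&\x1b[0m#'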
I wrote a script for this years ago. You can easily cover the case of multiple colors by piping successive invocations of highlight to each other.
From the README:
Usage: ./highlight [-i] [--color=COLOR_STRING] [--] <PATTERN0> [PATTERN1...]
This is highlight version 1.0.
This program takes text via standard input and outputs it with the given
perlre(1) pattern(s) highlighted with the given color. If no color option
is specified, it defaults to 'bold red'. Colors may be anything
that Perl's Term::ANSIColor understands. This program is similar to
"grep --color PATTERN" except both matching and non-matching lines are
printed.
The default color can be selected via the $HIGHLIGHT_COLOR environment
variable. The command-line option takes precedence.
Passing -i or --ignore-case will enable case-insensitive matching.
If your pattern begins with a dash ('-'), you can pass a '--' argument
after any options and before your pattern to distinguish it from an
option.
I have been using a tool called grc for this for years. It works like a charm. It comes with some quite good templates for many standard log outputs and formats, and it is easy to define your own.
A command I use often is
grc tail -f /var/log/syslog
It colorizes the syslog output so it is easy to spot errors (typically marked red).
Find the tool here:
https://github.com/garabik/grc
(It is also available as a package for most common Linux flavours.)
I wrote TxtStyle, a small utility for colorising logs. You define regular expressions to highlight in ~/.txts.conf file:
[Style="example"]
!red: regex("error")
green: regex("\d{4}-\d\d-\d\d")
# ...
And then apply the styles:
txts -n example example.log
or you can also pipe the output
tail -f example.log | txts -n example
You can create a colored log instead of using a complex command.
For PHP, it looks like this:
echo "^[[30;43m".$ip."^[[0m";
The key point is to use Ctrl-V Ctrl-[ in vim's insert mode to enter a literal ^[ (ESC) character; typing the two characters ^ and [ directly does not work.
More info here
My sample uses awk. It matches a log format like: xxxx [debug] xxxxx xxxx xxxx
black=30m
red=31m
green=32m
yellow=33m
blue=34m
magenta=35m
cyan=36m
white=37m
blacklog="\"\033[$black\" \$0 \"\033[39m\""
redlog="\"\033[$red\" \$0 \"\033[39m\""
greenlog="\"\033[$green\" \$0 \"\033[39m\""
yellowlog="\"\033[$yellow\" \$0 \"\033[39m\""
bluelog="\"\033[$blue\" \$0 \"\033[39m\""
magentalog="\"\033[$magenta\" \$0 \"\033[39m\""
cyanlog="\"\033[$cyan\" \$0 \"\033[39m\""
whitelog="\"\033[$white\" \$0 \"\033[39m\""
trace="/\[trace\]/ {print $redlog}"
debug="/\[debug\]/ {print $magentalog}"
info="/\[info\]/ {print $greenlog}"
warning="/\[warning\]/ {print $bluelog}"
error="/\[error\]/ {print $yellowlog}"
yourcommand | awk "$trace $debug $info $warning $error"
Normally when one wants to look at specific output lines from running something, one can do something like:
./a.out | grep IHaveThisString
but what if IHaveThisString is something which changes every time, so you need to first run it, watch the output to catch what IHaveThisString is on that particular run, and then grep it out? I could just dump to a file and grep it later, but is it possible to background the program and then bring it back to the foreground, now piped to some grep? Something akin to:
./a.out
Ctrl-Z
fg | grep NowIKnowThisString
just wondering..
No, it is only in your screen buffer if you didn't save it in some other way.
Short form: You can do this, but you need to know that you need to do it ahead-of-time; it's not something that can be put into place interactively after-the-fact.
Write your script to determine what the string is. We'd need a more detailed example of the output format to give a better example of usage, but here's one for the trivial case where the entire first line is the filter target:
run_my_command | { read -r string_to_filter_for; fgrep -e "$string_to_filter_for"; }
Replace the read string_to_filter_for with as many commands as necessary to read enough input to determine what the target string is; this could be a loop if necessary.
For instance, let's say that the output contains the following:
Session id: foobar
and thereafter, you want to grep for lines containing foobar.
...then you can pipe through the following script:
re='Session id: (.*)'
while read -r; do
  if [[ $REPLY =~ $re ]] ; then
    target=${BASH_REMATCH[1]}
    break
  else
    # if you want to print the preamble; leave this out otherwise
    printf '%s\n' "$REPLY"
  fi
done
[[ $target ]] && grep -F -e "$target"
If you want to manually specify the filter target, this can be done by having the loop check for a file being created with filter contents, and using that when starting up grep afterwards.
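A rough sketch of that manual variant (the target.txt filename is just an example; from another shell you would echo the string into it once you know it):
while read -r; do
  printf '%s\n' "$REPLY"            # keep echoing output until the filter file shows up
  [[ -e target.txt ]] && break
done
[[ -e target.txt ]] && grep -F -e "$(cat target.txt)"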
What you need is a little bit strange, but you can do it this way:
you must start a script session first;
then you use the shell as usual;
then you start and interrupt your program;
then run grep over the typescript file.
Example:
$ script
$ ./a.out
Ctrl-Z
$ fg
$ grep NowIKnowThisString typescript
You could use a stream editor such as sed instead of grep. Here's an example of what I mean:
$ cat list
Name to look for: Mike
Dora 1
John 2
Mike 3
Helen 4
Here we find the name to look for in the first line and want to grep for it. Now, piping the command to sed:
$ cat list | sed -ne '1{s/Name to look for: //;h}' \
> -e ':r;n;G;/^.*\(.\+\).*\n\1$/P;s/\n.*//;br'
Mike 3
Note: sed itself can take a file as a parameter, but since you're not working with text files here, this is how you'd use it.
Of course, you'd need to modify the command for your case.
I have a program that writes to fd3 and I want to process that data with grep and sed. Here is how the code looks so far:
exec 3> >(grep "good:"|sed -u "s/.*:\(.*\)/I got: \1/")
echo "bad:data1">&3
echo "good:data2">&3
Nothing is output until I do a
exec 3>&-
Then, everything that I wanted finally arrives as I expected:
I got: data2
It seems to reply immediately if I use only a grep or only a sed, but mixing them seems to cause some sort of buffering. How can I get immediate output from fd3?
I think I found it. For some reason, grep doesn't automatically do line buffering. I added a --line-buffered option to grep and now it responds immediately.
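With that option, the example from the question becomes:
exec 3> >(grep --line-buffered "good:" | sed -u "s/.*:\(.*\)/I got: \1/")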
You only need to tell grep and sed not to buffer lines:
grep --line-buffered
and
sed -u
An alternate means to stop sed from buffering is to run it through the s2p sed-to-Perl translator and insert a directive to have it command-buffered, perhaps like
BEGIN { $| = 1 }
The other reason to do this is that it gives you the more convenient notation from EREs instead of the backslash-annoying legacy BREs. You also get the full complement of Unicode properties, which is often critical.
But you don’t need the translator for such a simple sed command. And you do not need both grep and sed, either. These all work:
perl -nle 'BEGIN{$|=1} if (/good:/) { s/.*:(.*)/I got: $1/; print }'
perl -nle 'BEGIN{$|=1} next unless /good:/; s/.*:(.*)/I got: $1/; print'
perl -nle 'BEGIN{$|=1} next unless /good:/; s/.*:/I got: /; print'
Now you also have access to the minimal quantifier, *?, +?, ??, {N,}?, and {N,M}?. These now allow things like .*? or \S+? or [\p{Pd}.]??, which may well be preferable.
You can merge the grep into the sed like so:
exec 3> >(sed -une '/^good:/s//I got: /p')
echo "bad:data1">&3
echo "good:data2">&3
Unpacking that a bit: You can put a regexp (between slashes as usual) before any sed command, which makes it only be applied to lines that match that regexp. If the first regexp argument to the s command is the empty string (s//whatever/) then it will reuse the last regexp that matched, which in this case is the prefix, so that saves having to repeat yourself. And finally, the -n option tells sed to print only what it is specifically told to print, and the /p suffix on the s command tells it to print the result of the substitution.
The -e option is not strictly necessary but is good style, it just means "the next argument is the sed script, not a filename".
Always put sed scripts in single quotes unless you need to substitute a shell variable in there, and even then I would put everything but the shell variable in single quotes (the shell variable is, of course, double-quoted). You avoid a bunch of backslash-related grief that way.
On a Mac, brew install coreutils and use gstdbuf to control buffering of grep and sed.
Turning off buffering in the pipe seems to be the easiest and most generic answer, using stdbuf (coreutils):
exec 3> >(stdbuf -oL grep "good:" | sed -u "s/.*:\(.*\)/I got: \1/")
echo "bad:data1">&3
echo "good:data2">&3
I got: data2
Buffering has other dependencies as well, for example on whether mawk or gawk is reading the pipe:
exec 3> >(stdbuf -oL grep "good:" | awk '{ sub(".*:", "I got: "); print }')
In that case, mawk would retain the input, gawk wouldn't.
See also How to fix stdio buffering