Sed operation on streaming data - linux

Need to perform sed operation on streaming data
tail -f sourcefile | sed -n 's/text1/text2/p' >~/destinationfile
I tried the above command but could not get it to work.

Both programs are linked against libc, and libc performs internal buffering on input/output operations. Buffering is line-based when stdout(!) is a terminal, but block-based when stdout is a pipe. Block-based buffering uses larger buffers, so the consuming application has to wait until the buffer is filled, the end of the stream is reached, or the program explicitly flushes the stream. With their default command-line options, neither tail nor sed performs such a flush.
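A quick way to observe the difference yourself (a sketch, assuming GNU coreutils; the while loop just produces one line per second):
while sleep 1; do date; done | sed -n 'p'        # stdout is a terminal: line-buffered, one line per second
while sleep 1; do date; done | sed -n 'p' | cat  # stdout is a pipe: block-buffered, output arrives in ~4 KiB chunks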
In your case you can see this block-based buffering in effect, because tail's stdout is going into a pipe to sed rather than to a terminal.
Solution: you can use the stdbuf command to disable the buffering:
If you only want to see sed's output in the terminal:
stdbuf -i0 tail -f /var/log/syslog | sed -n 's/CRON/cron/p'
If you are redirecting to a file, sed's output buffering now needs to be disabled as well:
touch output.txt
tail -f output.txt & # tail output in background in order to see
# file changes immediately
stdbuf -i0 tail -f /var/log/syslog | stdbuf -o0 sed -n 's/CRON/cron/p' > output.txt
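If you have GNU sed, an alternative to stdbuf -o0 is sed's own -u (--unbuffered) option, which flushes after every line (a sketch; BSD sed lacks this option):
stdbuf -i0 tail -f /var/log/syslog | sed -un 's/CRON/cron/p' > output.txt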

Related

Find a user defined keyword in real-time log (text) file using shell script

I am running a process in Linux which logs status and other info to a log file /home/path/Detail_File.log in text format. The file is continuously written to by a background process. I can display the file contents using tail -f /home/path/Detail_File.log.
I want to check the file for a keyword, say keyword. How do I continuously check whether the keyword is found and, if found, get that line into a variable?
I am not sure what I should search for on the internet, so I am posting this question directly without prior research. Any suggestion would be appreciated.
EDIT 1: Adding a screenshot of the while-loop suggestion by @jhnc
The echo "Time elapsed: $curr_time ..." is executed and then it's silent.
If you are using GNU utilities, grep has a --line-buffered option:
tail -n +1 -f /home/path/Detail_File.log |\
grep --line-buffered -F keyword |\
while IFS= read -r matchedline; do
    dosomething "$matchedline"
done
This is a pipeline: cmd1 | cmd2 | cmd3
A backslash (\) at the end of a line allows breaking the pipeline across lines for readability.
tail output is fed into grep.
grep output is fed into the while loop.
read gets its input from grep, not from the user.
Usually, grep's output is buffered: it only becomes available to the next command once grep's output buffer is full. The --line-buffered option causes that buffer to be flushed after every line. For example, consider the difference between the output of:
while sleep 1; do date; done |\
grep --line-buffered . |\
while read line; do echo "$(date) :: $line"; done |\
head
versus (may take a few minutes before you see anything):
while sleep 1; do date; done |\
grep . |\
while read line; do echo "$(date) :: $line"; done |\
head
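Back to the question's requirement of getting the line into a variable: if only the first match is needed, grep's -m 1 option can end the pipeline for you (a sketch; note that tail itself only exits on its next write, when it receives SIGPIPE):
matchedline=$(tail -n +1 -f /home/path/Detail_File.log | grep --line-buffered -m 1 -F keyword)
dosomething "$matchedline"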

Problems with tail -f and awk? [duplicate]

Is that possible to use grep on a continuous stream?
What I mean is sort of a tail -f <file> command, but with grep on the output in order to keep only the lines that interest me.
I've tried tail -f <file> | grep pattern but it seems that grep can only be executed once tail finishes, that is to say never.
Turn on grep's line-buffering mode when using BSD grep (FreeBSD, Mac OS X etc.):
tail -f file | grep --line-buffered my_pattern
It looks like a while ago --line-buffered didn't matter for GNU grep (used on pretty much any Linux) as it flushed by default (YMMV for other Unix-likes such as SmartOS, AIX or QNX). However, as of November 2020, --line-buffered is needed (at least with GNU grep 3.5 in openSUSE, but it seems generally needed based on comments below).
I use tail -f <file> | grep <pattern> all the time.
It will wait till grep flushes, not till it finishes (I'm using Ubuntu).
I think that your problem is that grep uses some output buffering. Try
tail -f file | stdbuf -o0 grep my_pattern
This sets grep's output buffering mode to unbuffered.
If you want to find matches in the entire file (not just the tail), and you want it to sit and wait for any new matches, this works nicely:
tail -c +0 -f <file> | grep --line-buffered <pattern>
The -c +0 flag says that the output should start 0 bytes (-c) from the beginning (+) of the file.
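An equivalent line-oriented form, if you prefer counting lines to bytes:
tail -n +1 -f <file> | grep --line-buffered <pattern>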
In most cases, you can tail -f /var/log/some.log |grep foo and it will work just fine.
If you need to use multiple greps on a running log file and you find that you get no output, you may need to stick the --line-buffered switch into your middle grep(s), like so:
tail -f /var/log/some.log | grep --line-buffered foo | grep bar
You may consider this answer an enhancement. Usually I am using:
tail -F <fileName> | grep --line-buffered <pattern> -A 3 -B 5
-F is better in case of file rotation (-f will not work properly if the file is rotated).
-A and -B are useful to get the lines just before and after the pattern occurrence; these blocks will appear between dashed-line separators.
But as for me, I prefer doing the following:
tail -F <file> | less
This is very useful if you want to search inside streamed logs, that is, to go back and forth and look deeply.
Didn't see anyone offer my usual go-to for this:
less +F <file>
ctrl + c
/<search term>
<enter>
shift + f
I prefer this, because you can use ctrl + c to stop and navigate through the file whenever, and then just hit shift + f to return to the live, streaming search.
sed would be a better choice (stream editor)
tail -n0 -f <file> | sed -n '/search string/p'
and then if you wanted the tail command to exit once you found a particular string:
tail --pid=$(($BASHPID+1)) -n0 -f <file> | sed -n '/search string/{p; q}'
Obviously a bashism: $BASHPID will be the process id of the tail command. The sed command is next after tail in the pipe, so the sed process id will be $BASHPID+1.
Yes, this will actually work just fine. Grep and most Unix commands operate on streams one line at a time. Each line that comes out of tail will be analyzed and passed on if it matches.
This one command works for me (SUSE):
mail-srv:/var/log # tail -f /var/log/mail.info | grep --line-buffered LOGIN >> logins_to_mail
collecting logins to the mail service.
Coming somewhat late to this question, and considering this kind of work an important part of the monitoring job, here is my (not so short) answer...
Following logs using bash
1. Command tail
This command is a little more powerful than the already published answers suggest.
Difference between follow option tail -f and tail -F, from manpage:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
...
-F same as --follow=name --retry
...
--retry
keep trying to open a file if it is inaccessible
This means: by using -F instead of -f, tail will re-open the file(s) when they are removed (on log rotation, for example).
This is useful for watching logfiles over many days.
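A quick way to see the difference (a sketch using a hypothetical /tmp/demo.log and simulating a rotation by hand):
tail -F /tmp/demo.log &                 # -F keeps retrying by name
echo one >> /tmp/demo.log               # printed
mv /tmp/demo.log /tmp/demo.log.1        # "rotate" the file away
echo two >> /tmp/demo.log               # -F re-opens the new file and prints this;
                                        # plain -f would stay silent from here on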
Ability to follow more than one file simultaneously
I've already used:
tail -F /var/www/clients/client*/web*/log/{error,access}.log /var/log/{mail,auth}.log \
/var/log/apache2/{,ssl_,other_vhosts_}access.log \
/var/log/pure-ftpd/transfer.log
For following events through hundreds of files... (consider the rest of this answer to understand how to make it readable... ;)
Use the -n switch (don't use -c, which counts bytes rather than lines!). By default tail shows the last 10 lines. This can be tuned:
tail -n 0 -F file
This will follow the file, but only new lines will be printed.
tail -n +0 -F file
This will print the whole file before following its progression.
2. Buffer issues when piping:
If you plan to filter the output, consider buffering! See the -u option for sed, --line-buffered for grep, or the stdbuf command:
tail -F /some/files | sed -une '/Regular Expression/p'
This is (besides being a lot more efficient than using grep) a lot more reactive than it would be without the -u switch on the sed command.
tail -F /some/files |
sed -une '/Regular Expression/p' |
stdbuf -i0 -o0 tee /some/resultfile
3. Recent journaling system
On recent systems, instead of tail -f /var/log/syslog you have to run journalctl -xf, in nearly the same way...
journalctl -axf | sed -une '/Regular Expression/p'
But read the man page; this tool was built for log analysis!
4. Integrating this in a bash script
Colored output of two files (or more)
Here is a sample script that watches many files, coloring output from the first file differently from the others:
#!/bin/bash
tail -F "$@" |
    sed -une "
        /^==> /{h;};
        //!{
            G;
            s/^\\(.*\\)\\n==>.*${1//\//\\\/}.*<==/\\o33[47m\\1\\o33[0m/;
            s/^\\(.*\\)\\n==> .* <==/\\o33[47;31m\\1\\o33[0m/;
            p;}"
This works fine on my host, running:
sudo ./myColoredTail /var/log/{kern.,sys}log
Interactive script
Are you watching logs in order to react to events?
Here is a little script that plays a sound when some USB device appears or disappears, but the same script could send mail, or do any other interaction, like powering on the coffee machine...
#!/bin/bash
exec {tailF}< <(tail -F /var/log/kern.log)
tailPid=$!
while :; do
    read -rsn 1 -t .3 keyboard
    [ "${keyboard,}" = "q" ] && break
    if read -ru $tailF -t 0 _; then
        read -ru $tailF line
        case $line in
            *New\ USB\ device\ found* ) play /some/sound.ogg ;;
            *USB\ disconnect* ) play /some/othersound.ogg ;;
        esac
        printf "\r%s\e[K" "$line"
    fi
done
echo
exec {tailF}<&-
kill $tailPid
You can quit by pressing the Q key.
You certainly won't succeed with
tail -f /var/log/foo.log |grep --line-buffered string2search
when you use colortail as an alias for tail, e.g. in bash:
alias tail='colortail -n 30'
You can check by running
type tail
If this outputs something like
tail is aliased to `colortail -n 30'
then you have your culprit :)
Solution:
remove the alias with
unalias tail
ensure that you're using the 'real' tail binary with this command:
type tail
which should output something like:
tail is /usr/bin/tail
and then you can run your command
tail -f foo.log |grep --line-buffered something
Good luck.
Use awk (another great Unix utility) instead of grep where you don't have the line-buffered option! It will continuously stream your data from tail.
This is how you would use grep:
tail -f <file> | grep pattern
This is how you would use awk:
tail -f <file> | awk '/pattern/{print $0}'
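One caveat: when awk's own stdout goes into a pipe or a file, it buffers too, just like grep. POSIX awk provides fflush() to force a flush after each match (a sketch):
tail -f <file> | awk '/pattern/ { print; fflush() }' >> logfile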

piped sed does not output to file using ngrep

I am using ngrep to filter some TCP packets to stdout.
It has now become more important to log the output (after changing the result a bit using sed) to a file.
Piping it through sed looks OK on stdout, but no content is written to dump.log.
Below is the command:
ngrep -l -q -W none -i "^POST /somefile.php" tcp and port 80 | sed -e 's/^T/IP/g' >> dump.log
I have the impression that either sed or ngrep is blocking the content from being pushed through.
Add -u to GNU sed to make it load minimal amounts of data from the input and flush the output buffers more often.
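Applied to the command above, that would look like this (a sketch, assuming GNU sed):
ngrep -l -q -W none -i "^POST /somefile.php" tcp and port 80 | sed -u -e 's/^T/IP/g' >> dump.log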

How can I cat some continuous logs and grep a word in real time?

In Linux, I want to monitor the output of some tool, e.g. dbus-monitor's output. I want to grab some special keyword from its output, and then use that keyword as an input argument to another program. Something like below, but it is not good:
dbus-monitor --system > d.log &
var=`cat d.log | grep some-key-word`
my_script.sh $var
I want to monitor the output flow in real time, not cat the whole log from the beginning; just its latest changes. E.g. dmesg provides an option, dmesg -w, which does what I want:
-w, --follow wait for new messages
So how do I make such a script, one that catches the latest new output and uses it continuously?
Instead of cat, use tail -F <file> | grep <something>. The -F option makes tail wait for and output all incoming data. Most likely, you will also need to modify the buffering mode of the standard streams with stdbuf -oL (by default, stdout into a pipe or file is fully buffered, meaning that data is written every few kilobytes and not after each line).
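Putting that together for the dbus-monitor example (a sketch; my_script.sh and some-key-word are placeholders from the question):
dbus-monitor --system > d.log &
tail -n 0 -F d.log | stdbuf -oL grep 'some-key-word' |
while IFS= read -r line; do
    my_script.sh "$line"
done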

bash standard output can not be redirected into file

I am reading 'Advanced Bash Script', and in Chapter 31 there is a problem I cannot figure out.
tail -f /var/log/msg | grep 'error' >> logfile
Why does nothing get written to logfile?
Can you offer me an explanation?
Thank you in advance.
As @chepner comments, grep is using a larger buffer (perhaps 4k or more) to buffer its stdout. Most of the standard utilities do this when piping or redirecting to a file; they typically only switch to line-buffered mode when outputting directly to a terminal.
You can use the stdbuf utility to force grep to do line buffering of its output:
tail -f /var/log/msg | stdbuf -oL grep 'error' >> logfile
As an easily observable demonstration of this effect, you can try the following two commands:
for ((i=0;;i++)); do echo $i; sleep 0.001; done | grep . | cat
and
for ((i=0;;i++)); do echo $i; sleep 0.001; done | stdbuf -oL grep . | cat
In the first command, the output from grep . (i.e. match all lines) will be buffered going into the pipe to cat. On my machine the buffer appears to be about 4k. You will see the ascending numbers output in chunks as the buffer gets filled and then flushed.
In the second command, grep's output into the pipe to cat is line-buffered, so you should see terminal output for every line, i.e. more-or-less continuous output.
