I want to tail multiple files (and follow them) in CentOS, I've tried this:
tail -f file1 file2 file3
but the output is very unfriendly: the files' lines come interleaved under ==> file <== headers
I've also had a look at multitail but can't find a CentOS version.
What other choices do I have?
Multitail is available for CentOS in the rpmforge repos. To add the rpmforge repository, check the documentation on 3rd Party Repositories.
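With the repository enabled, the install itself is a single yum command (a sketch, assuming rpmforge is already configured):
$ sudo yum install multitail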
I found the solution described here works well on CentOS:
The link is http://www.thegeekstuff.com/2009/09/multitail-to-view-tail-f-output-of-multiple-log-files-in-one-terminal/
Thanks to Ramesh Natarajan
$ vi multi-tail.sh
#!/bin/sh
# When this script exits, kill all background processes too.
trap 'kill $(jobs -p)' EXIT

# Iterate through each of the given file names,
for file in "$@"
do
    # and show the tail of each one in the background.
    tail -f "$file" &
done

# wait ... until CTRL+C
wait
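A quick usage sketch (the log file names are just examples):
$ chmod +x multi-tail.sh
$ ./multi-tail.sh /var/log/messages /var/log/secure
Pressing CTRL+C triggers the trap, which kills all the background tail processes before the script exits.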
You could simulate multitail by opening multiple instances of tail -f in Emacs subwindows.
I usually just open another xterm and run a separate 'tail -f' there.
Otherwise if I'm using the 'screen' tool, I'll set up separate 'tail -f' commands there. I don't like that as much because it takes a few keystrokes to enable scrolling in screen before using the Page Up and Page Down keys. I prefer to just use xterm's scroll bar.
You can use the watch command; I use it to tail two files at the same time:
watch -n0 tail -n30 file1 file2
A better answer to an old question...
I create a shell function in my .bashrc (obviously assumes you're using bash as your shell) and use tmux. You can probably complicate this a whole lot and do it without the tempfile, but the quoting is just ugly if you're trying to ensure that files with spaces or other weird characters in the name still work.
multitail ()
{
    local cmdfile
    cmdfile=$(mktemp)
    echo "new-session -d \"tail -f '$1'\"" >"$cmdfile"
    shift
    for file in "$@"
    do
        echo "split-window -d \"tail -f '$file'\"" >>"$cmdfile"
    done
    echo "select-layout even-vertical" >>"$cmdfile"
    tmux source-file "$cmdfile" \; attach && rm -f "$cmdfile"
}
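A usage sketch, assuming the function has been sourced from your .bashrc (the log paths are just examples):
$ multitail /var/log/messages /var/log/secure /var/log/cron
Each file gets its own evenly sized tmux pane, and killing the session stops all the tails at once.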
I am monitoring the new files created in a folder in Linux. Every now and then I issue an "ls -ltr" in it, but I wish there were a program/script that would automatically print only the latest entries. I wrote a short while loop to list them, but it would repeat the entries that were not new, and it would keep my screen scrolling even when there were no new files. I've learned about "watch", which does show what I want and refreshes every N seconds, but I don't want an ncurses interface; I'm looking for something like tail:
continuous
shows only the new stuff
prints in my terminal, so I can run it in the background and do other things and see the output every now and then getting mixed with whatever I'm doing :D
Summarizing: get the input, compare to a previous input, output only what is new.
Something that does that doesn't sound like such an odd tool; I can see it being used in other situations as well, so I would expect it to already exist, but I couldn't find anything. Suggestions?
You can use the very handy command watch
watch -n 10 "ls -ltr"
And you will get an ls listing every 10 seconds.
If you add a tail -10, you will only get the 10 newest entries:
watch -n 10 "ls -ltr|tail -10"
If you have access to inotifywait (available from the inotify-tools package if you are on Debian/Ubuntu) you could write a script like this:
#!/bin/bash
WATCH=/tmp
inotifywait -q -m -e create --format %f "$WATCH" | while read -r event
do
    ls -ltr "$WATCH/$event"
done
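Since the goal was to keep working while new files get reported, a sketch of running it in the background (assuming you saved the script as watch-new.sh):
$ ./watch-new.sh &
Its output mixes into your terminal as files appear; bring it back with fg and press Ctrl+C to stop it.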
This is a one-liner that won't give you the same information that ls does, but it will print out the filename:
inotifywait -q -m -e create --format %w%f /some/directory
This works in Cygwin and Linux. Some of the previous solutions, which write to a file, will cause the disk to thrash.
This script does not have that problem:
# Poll the directory: compute a checksum of the listing and
# print the newest entry whenever the checksum changes.
SIG=1
SIG0=$SIG
while [ "$SIG" != 0 ] ; do
    while [ "$SIG" = "$SIG0" ] ; do
        SIG=$(ls -1 | md5sum | cut -c1-32)
        sleep 10
    done
    SIG0=$SIG
    ls -lrt | tail -n 1
done
In the terminal, sometimes I would like to display the standard output and also save it as a backup, but if I use redirection (>, &>, etc.) it does not display the output in the terminal anymore.
I think I can do, for example, ls > localbackup.txt | cat localbackup.txt, but it just doesn't feel right. Is there any shortcut to achieve this?
Thank you!
tee is the command you are looking for:
ls | tee localbackup.txt
In addition to using tee to duplicate the output, you can also use tail -f to "follow" the output file from a parallel process (e.g. a separate terminal). It's worth mentioning that tee is able to append to the file instead of overwriting it (tee -a), so that you can run several commands in sequence and retain all of the output:
command1 >localbackup.txt # create output file
command2 >>localbackup.txt # append to output
and from a separate terminal, at the same time:
tail -f localbackup.txt # this will keep outputting as text is appended to the file
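For the append behaviour mentioned above, a minimal sketch (command1 and command2 are placeholders):
command1 | tee localbackup.txt # show the output and create the file
command2 | tee -a localbackup.txt # show the output and append to the file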
I am working on a Java EE application whose logs are generated on a Linux server.
I have used the command tail -f -n -10000 MyLog
It displayed the last 10000 lines of that log file.
Then I pressed Ctrl+C in PuTTY to stop the log updates (as I feared the log would keep updating with new requests and I would lose my data).
In the displayed result, how can I search for a particular keyword? (I used / followed by the string to search, but it's not working.)
Pipe your output to a pager:
tail -f -n LINE_CNT LOG_FILE | less
then you can use
/SEARCH_STRING
Two ways:
tail -n 10000 MyLog | grep -i "search phrase"
tail -f -n 10000 MyLog | less
The 2nd method will allow you to search with /. It will only search down, but you can press g to go back to the top.
Edit: On testing, it seems method 2 doesn't work all that well... if you hit the end of the file, it will freeze until you Ctrl+C the tail command.
You need to redirect the output from tail into a search utility (e.g. grep). You could do this in two steps: save the output to a file, then search in the file; or in one go: pipe the output to the search utility.
To see what goes into the file (so you can hit Ctrl+C) you can use the tee command, which duplicates the output to the screen and to a file:
tail -f -n -10000 MyLog | tee <filename>
Then search within the file.
If you want to pipe the result into the search utility, you can use the same trick as above, but use your search program instead of tee.
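For example, a minimal sketch with grep as the search utility ("keyword" is a placeholder; --line-buffered is a GNU grep option that makes matches appear promptly when reading from a pipe):
tail -f -n 10000 MyLog | grep --line-buffered "keyword"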
Controlling terminal output on the fly
While running any command in a terminal such as PuTTY, you can use Ctrl-S and Ctrl-Q to stop and resume output to the terminal.
Excluding lines using grep
If you want to exclude lines that contain a specific pattern, use grep -v. The following would remove all lines that contain the string INFO:
tail -f logfile | grep -v INFO
Show lines that do not contain the words INFO or DEBUG
tail -f logfile | grep -v -E 'INFO|DEBUG'
Finally, the MOTHER AND FATHER of all tailing tools is xtail.pl
If you have Perl on your host, xtail.pl is a very nice tool to learn; in a nutshell, you can use it to tail multiple files. Very handy.
You can just open it with the less command:
less logfile_name
When you open the file you can use this guide here.
Tip: I suggest first pressing G to go to the end of the file, and then using a backward search (? followed by your pattern).
I want to mass-edit a ton of files that are returned in a grep. (I know, I should get better at sed).
So if I do:
grep -rnI 'xg_icon-*'
How do I pipe all of those files into vi?
The easiest way is to have grep return just the filenames (-l instead of -n) that match the pattern. Run that in a subshell and feed the results to Vim.
vim $(grep -rIl 'xg_icon-*' *)
A nice general solution to this is to use xargs to convert the stdout of a process like grep into an argument list.
A la:
grep -rIl 'xg_icon-*' | xargs vi
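If some of the matched file names might contain spaces, a safer variant is to pass them NUL-separated (a sketch using GNU grep's -Z option together with xargs -0):
grep -rIlZ 'xg_icon-*' | xargs -0 vi
Note that Vim may warn that input is not from a terminal when started via xargs; the $(...) form above avoids that.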
If you use Vim with the -p option, it will open each file in a tab; you can switch between them using gt or gT, or even the mouse if you have mouse support in the terminal.
You can do it without any processing of the grep output! This will even enable you to go to the right line (using :help quickfix commands, e.g. :cn or :cw). So, if you are using bash or zsh:
vim -q <(grep foo *.c)
If what you want to edit is similar across all files, then there is no point in using vi to do it manually (although vi can be scripted as well). Hypothetically, it would look something like this, since you never mentioned what you want to edit:
grep -rIl 'xg_icon-*' | while read -r FILE
do
    sed -i.bak 's/old/new/g' "$FILE" # (or other editing commands, e.g. awk ...)
done
vi `grep -l -i findthisword *`
How can I randomize the lines in a file using standard tools on Red Hat Linux?
I don't have the shuf command, so I am looking for something like a perl or awk one-liner that accomplishes the same task.
Um, let's not forget
sort --random-sort
shuf is the best way.
sort -R is painfully slow. I just tried to shuffle a 5 GB file; I gave up after 2.5 hours. Then shuf did it in a minute.
And a Perl one-liner you get!
perl -MList::Util -e 'print List::Util::shuffle <>'
It uses a module, but the module is part of the core Perl distribution. If that's not good enough, you may consider rolling your own.
I tried using this one-liner with the -i flag ("edit-in-place") to have it edit the file. The documentation suggests it should work, but it doesn't: it still displays the shuffled file to stdout, and this time it deletes the original. I suggest you don't use it that way.
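If you do want to roll your own, as suggested above, here is a minimal Fisher-Yates sketch in awk (seeded from the clock via srand(); myFile is a placeholder):
awk 'BEGIN{srand()} {a[NR]=$0} END{for(i=NR;i>1;i--){j=int(rand()*i)+1;t=a[i];a[i]=a[j];a[j]=t};for(i=1;i<=NR;i++)print a[i]}' myFile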
Consider a shell script:
#!/bin/bash
if [[ $# -eq 0 ]]
then
    echo "Usage: $0 [file ...]"
    exit 1
fi

for i in "$@"
do
    perl -MList::Util -e 'print List::Util::shuffle <>' "$i" > "$i.new"
    if [[ $(wc -c < "$i") -eq $(wc -c < "$i.new") ]]
    then
        mv "$i.new" "$i"
    else
        echo "Error for file $i!"
    fi
done
Untested, but hopefully works.
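Assuming you save it as shuffle.sh and make it executable, usage would look like:
$ ./shuffle.sh file1.txt file2.txt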
cat yourfile.txt | while IFS= read -r f; do printf "%05d %s\n" "$RANDOM" "$f"; done | sort -n | cut -c7-
Read the file, prepend every line with a random number, sort the file on those random prefixes, cut the prefixes afterwards. One-liner which should work in any semi-modern shell.
EDIT: incorporated Richard Hansen's remarks.
A one-liner for Python:
python -c "import random, sys; lines = open(sys.argv[1]).readlines(); random.shuffle(lines); print ''.join(lines)," myFile
And for printing just a single random line:
python -c "import random, sys; print random.choice(open(sys.argv[1]).readlines())," myFile
But see this post for the drawbacks of python's random.shuffle(). It won't work well with many (more than 2080) elements.
Related to Jim's answer:
My ~/.bashrc contains the following:
unsort ()
{
LC_ALL=C sort -R "$#"
}
With GNU coreutils's sort, -R = --random-sort, which generates a random hash of each line and sorts by it. The randomized hash wouldn't actually be used in some locales in some older (buggy) versions, causing it to return normal sorted output, which is why I set LC_ALL=C.
Related to Chris's answer:
perl -MList::Util=shuffle -e'print shuffle<>'
is a slightly shorter one-liner. (-Mmodule=a,b,c is shorthand for -e 'use module qw(a b c);'.)
The reason giving it a simple -i doesn't work for shuffling in-place is because Perl expects that the print happens in the same loop the file is being read, and print shuffle <> doesn't output until after all input files have been read and closed.
As a shorter workaround,
perl -MList::Util=shuffle -i -ne'BEGIN{undef$/}print shuffle split/^/m'
will shuffle files in-place. (-n means "wrap the code in a while (<>) {...} loop"; BEGIN{undef$/} makes Perl operate on whole files at a time instead of lines at a time, and split/^/m is needed because $_ = <> has implicitly read an entire file instead of a single line.)
When I install coreutils with Homebrew
brew install coreutils
shuf becomes available as gshuf.
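Usage then mirrors plain shuf, just with the g prefix:
$ gshuf myFile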
Mac OS X with DarwinPorts:
sudo port install unsort
cat $file | unsort | ...
FreeBSD has its own random utility:
cat $file | random | ...
It's in /usr/games/random, so if you have not installed games, you are out of luck.
You could consider installing ports like textproc/rand or textproc/msort. These might well be available on Linux and/or Mac OS X, if portability is a concern.
On OSX, grabbing the latest from http://ftp.gnu.org/gnu/coreutils/ and running something like
./configure
make
sudo make install
...should give you /usr/local/bin/sort --random-sort
without messing up /usr/bin/sort
Or get it from MacPorts:
$ sudo port install coreutils
and/or
$ /opt/local/libexec/gnubin/sort --random-sort