How do I get an output from Linux Top in Batch Mode on every iteration?

I'm trying to log CPU and memory stats into a file using top on Arch Linux. I'm only interested in one specific process, and I extract the parameters I want as shown below:
top -b -n1 -p 310 | tail -fn 1 | awk '{printf "%s,%s,%s,%s\n",$1,$12,$9,$10}'
This gives me an output to command line like:
310,name,0.0,10.5
Now, to run this command 10 times with a delay of 1 s and write the output to a logfile, I use:
top -b -n10 -p 310 -d 1 | tail -fn 1 | awk '{printf "%s,%s,%s,%s\n",$1,$12,$9,$10}' >> log.txt
But instead of printing to the logfile line by line, I only get the last output. My logfile contains just one line, although top must have run 10 times.
What am I doing wrong here?
PS: Printing to the command line instead of a logfile also produces only one line (the last output)...

The problem is the tail command you use: when its input is a pipe, tail reads the whole input and -n 1 keeps only the last line (-f is ignored on a pipe), so only the final iteration survives. Try something like this:
top -p 310 -b -n2 -d 1 | grep -w 310 | awk '{printf "%s,%s,%s,%s\n",$1,$12,$9,$10}'
I use grep -w to keep only the lines containing the info you are interested in.
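Applied to your original goal of 10 samples at a 1-second interval appended to log.txt, the fix might look like this (a sketch reusing the PID, field numbers, and filename from the question):
top -b -n10 -d 1 -p 310 | grep -w 310 | awk '{printf "%s,%s,%s,%s\n",$1,$12,$9,$10}' >> log.txt
Because top exits after the tenth iteration, any output grep has buffered is flushed when the pipeline finishes.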

Related

shell script gives output in 2 lines I need only 1

I have the following commands saved in a .sh file
prog=$1
ps axf | grep $prog | grep -v grep | awk '{print "kill -9 " $1}'
I get the following output when I execute it
kill -9 3184
kill -9 20359
But I just need the first line of it, as that is the only valid PID. How can I remove the second line from the output?
There are a few issues with what you want to do:
You're building a chain of 4 commands for something relatively simple
The result will be only the first line of a list of processes matching $prog (excluding the grep $prog you filtered out); how can you be sure that's the process you want?
The correct command to use is
pkill $prog
as suggested in the comments, which probably will do what you want.
Just for information, and to answer your question, you can pipe an output to head -n 1 to return only the first line:
<list of commands> | head -n 1
However, in your case this would add a fifth command to the chain, so I recommend you don't do it this way.
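If you need the first PID itself rather than to kill anything, pgrep collapses the whole chain and never matches its own invocation (a sketch reusing $prog from the script; -x requires an exact name match):
prog=$1
pgrep -x "$prog" | head -n 1    # first matching PID only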

Count lines of CLI output in linux

Hi, I have the following command:
lsscsi | grep HITACHI | awk '{print $6}'
I want the output to be the number of lines in the original output.
For example, if the original output is:
/dev/sda
/dev/sdb
/dev/sdc
The final output will be 3.
Basically, wc -l counts the lines in a file or pipe. However, since you want to count lines after a filter has been applied, I would recommend letting grep do it:
lsscsi | grep -c 'HITACHI'
-c just prints the number of matching lines.
Another thing: your example uses grep .. | awk, which is a useless use of grep. It should be
lsscsi | awk '/HITACHI/{print $6}'
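And if you prefer to keep the counting inside awk as well, this prints just the number of matching lines (a sketch; printing n+0 ensures you get 0 rather than an empty line when nothing matches):
lsscsi | awk '/HITACHI/ { n++ } END { print n+0 }'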

Retrieve last 100 lines logs

I need to retrieve last 100 lines of logs from the log file.
I tried the sed command
sed -n -e '100,$p' logfilename
Please let me know how I can change this command to retrieve just the last 100 lines.
You can use tail command as follows:
tail -100 <log file> > newLogfile
The last 100 lines will now be in newLogfile.
EDIT:
As mentioned by twalberg, more recent versions of tail use the command:
tail -n 100 <log file> > newLogfile
"tail" is command to display the last part of a file, using proper available switches helps us to get more specific output. the most used switch for me is -n and -f
SYNOPSIS
tail [-F | -f | -r] [-q] [-b number | -c number | -n number] [file ...]
Here:
-n number : The location is number lines.
-f : The -f option causes tail not to stop when end of file is reached, but rather to wait for additional data to be appended to the input. The -f option is ignored if the standard input is a pipe, but not if it is a FIFO.
To get the last 100 lines statically:
tail -n 100 <file path>
To follow the last 100 lines in real time:
tail -f -n 100 <file path>
You can simply use the following command:
tail -NUMBER_OF_LINES FILE_NAME
e.g. tail -100 test.log
will fetch the last 100 lines from test.log.
If you want to store that output in a separate file, redirect it as follows:
tail -NUMBER_OF_LINES FILE_NAME > OUTPUT_FILE_NAME
e.g. tail -100 test.log > output.log
will fetch the last 100 lines from test.log and store them in a new file, output.log.
Look, the sed script that prints the last 100 lines can be found in the sed documentation (https://www.gnu.org/software/sed/manual/sed.html#tail):
$ cat sed.cmd
1! {; H; g; }
1,100 !s/[^\n]*\n//
$p
h
$ sed -nf sed.cmd logfilename
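(In short: H appends each new line to sed's hold space, g and the final h keep the pattern and hold spaces in sync, the s/[^\n]*\n// deletes the oldest held line once more than 100 have been read, and $p prints the remaining 100-line window when the last line arrives.)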
To me that is far harder to follow than your script, so
tail -n 100 logfilename
is much, much simpler. It is also quite efficient: it will not read the whole file if it doesn't need to. See my answer with an strace report for tail ./huge-file: https://unix.stackexchange.com/questions/102905/does-tail-read-the-whole-file/102910#102910
I know this is very old, but for whomever it may help:
less +F my_log_file.log
That's just the basics; less can do far more powerful things. Once you are viewing the log you can search, jump to a line number, match patterns, and much more, and it is faster for large files.
It's like vim for logs (totally my opinion).
less's original documentation: https://linux.die.net/man/1/less
less cheatsheet : https://gist.github.com/glnds/8862214
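A few standard less keys worth knowing once you are following a log:
less +F my_log_file.log
# Ctrl-C  stop following so you can scroll and search
# /ERROR  search forward for "ERROR"; n repeats the search
# F       resume following, like tail -f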
len=$(wc -l < filename)    # total number of lines in the file
l=$(( len - 99 ))          # start line, 99 before the last
sed -n "${l},${len}p" filename
The first line takes the total line count of the file. Since we want to fetch 100 records, we subtract 99 from the total to get the starting line, then substitute both variables into the sed command to print the last 100 lines of the file.
I hope this will help you.

Perl script log to file, output lag

I have a Perl script which does some logfile parsing and sometimes executes a shell command:
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
I start my Perl script like this: ./perlscript.pl > logfile.log
Now I do a tail on the logfile to watch the progress, but the output gets stuck every time at the line I described above.
The output stops there for a few seconds and then continues. Why?
To profile the problem I wrapped it like this:
print `date`;
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
print `date`;
The output shows that the command does not consume a lot of time:
So 6. Okt 22:35:04 CEST 2013
So 6. Okt 22:35:04 CEST 2013
If I run the script without redirecting the output to a file, there is no lag.
Any idea why?
I haven't tried to duplicate your behaviour, but it might be a stdout buffering problem. Try with:
$| = 1;
$messagePath = `ls -t -d -1 $dir | head -n 5 | xargs grep -l "$messageSearchString"`;
Update
I have tried to duplicate the behaviour you observe: I've had to make some assumptions but I believe my suspicion was correct. Here I'm piping, but it's the same as redirecting to a file and tailing that file:
./test.pl | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'
Without $| = 1, output is buffered and aggregated:
2013-10-06 23:08:27 Saluton, mondo: /home/lserni/test.sh
2013-10-06 23:08:27
2013-10-06 23:08:27 Waiting 10s...
2013-10-06 23:08:27 Saluton denove!
With the modification, each line is printed as it is generated:
2013-10-06 23:09:09 Saluton, mondo: /home/lserni/test.sh
2013-10-06 23:09:09
2013-10-06 23:09:09 Waiting 10s...
2013-10-06 23:09:19 Saluton denove!
I expect your script is doing something that takes a few seconds and doesn't involve that messagePath line at all; the output is simply delayed until Perl has a sizeable chunk of data to send along, giving the impression that it's that line that is stalling.
I forgot: the timing pipe comes from here.
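To see the effect in isolation, here is a minimal self-contained demo (my own sketch, not the asker's script). Run it as ./demo.pl | cat: with the $| = 1 line the first print appears immediately; without it, both lines arrive together after the sleep.
#!/usr/bin/perl
use strict;
use warnings;

$| = 1;        # autoflush STDOUT; comment this out to see the buffered behaviour

print "before the slow step\n";
sleep 10;      # stands in for the slow work in the real script
print "after the slow step\n";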
In situations like yours, I've had some success using the unbuffer command. It runs a command in an environment that looks to the command like it's outputting to a tty so it doesn't buffer its output. I don't know how to apply it exactly in your case, so if you want to try it, you will have to experiment a little.
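A plausible starting point might be the following (my guess, assuming unbuffer from the expect package is installed; not tested against the asker's script):
unbuffer ./perlscript.pl > logfile.log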

Why does 'top | grep > file' not work?

I tested the following command, but it doesn't work.
$> top -b -d 1 | grep java > top.log
top doesn't write to standard error; I checked that it writes to standard output. Yet top.log is always empty. Why is this?
When its output is not a terminal, grep buffers it, which means nothing is written to top.log until the accumulated output exceeds the buffer size (which may vary across systems).
Tell grep to use line buffering on output. Try:
top -b -d 1 | grep --line-buffered java > top.log
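If your grep lacks --line-buffered, GNU coreutils' stdbuf can impose line buffering from outside the program (a sketch, assuming stdbuf is available):
top -b -d 1 | stdbuf -oL grep java > top.log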
On my embedded machine, grep didn't have the --line-buffered option, so I used this workaround:
while :; do top -b -n 1 | grep java >> top.log; done &
This way I could keep a monitor running in the background for a program like "java" and collect all results in top.log.
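Since that loop respawns top as fast as the system allows, adding a sleep keeps it close to the one-second cadence of the original command (a sketch):
while :; do
  top -b -n 1 | grep java >> top.log
  sleep 1    # roughly one sample per second
done &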
