How to control the top command in Linux using a script?

Running top interactively you can do different things. Is there a way to write a bash script that would interact with top without using programs like xdotool?

Your best option, if you want to interact with top from a script, would be to have a script that runs it once and captures the output (top -n1 will make it run once and then terminate). You can have your script run top -n1 with appropriate other parameters every time it wants to capture new output, and control it accordingly.
Trying to run an interactive top session, and have a script send it keystrokes, would be very fragile and quite a mess.
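For instance, a minimal sketch of that approach (assuming procps top; the exact layout of the summary block varies slightly between versions):
# Take one fresh batch-mode snapshot each time the script needs current data,
# then parse it with the usual text tools.
snapshot=$(top -b -n 1)
# e.g. the summary block (uptime/load, tasks, CPU, memory, swap) is typically
# the first five lines:
printf '%s\n' "$snapshot" | head -n 5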

top isn't the best tool to run from scripts. Foremost, its output isn't designed to be processed by scripts. But if you tell us what you want to achieve, then we can tell you commands that do similar things:
ps lists active processes. iostat gives you disk and other I/O data. sar reports system activity (network, interrupts, ...)
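For example, a rough, script-friendly stand-in for top's process list, sorted by CPU usage (a sketch assuming procps ps):
# The 15 most CPU-hungry processes, in a format that is easy to parse.
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 15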

top -b -n n1 -d d1
where,
n1 = number of iterations, i.e. how many snapshots of top output to collect before top exits
d1 = delay between snapshots, in seconds (fractional values such as 0.5 are accepted)
so the total collection time is roughly n1 × d1 seconds.
Example:
top -b -n $n1 -d $d1 | grep "Cpu" > top.txt (n1 and d1 accepted from user)

Related

How can I select() (ie, simultaneously read from) standard input *and* a file in bash?

I have a program that accepts input on one FIFO and emits output to another FIFO. I want to write a small script to control this program. The script needs to listen both to standard input (so I can input commands to adjust things in real time) and the program's output FIFO (so it can respond to events happening there as well).
Essentially my control program needs to select between standard input and a file (my FIFO).
I like figuring out simple and elegant bash-based solutions to complex problems, and after a little head-scratching I remembered that tail -f will happily select on multiple files and tell you when one of them changes in real time, so I initially tried
tail -f <(od -An -vtd1 -w1) <(cat fifo)
to read both standard input (I'd previously run stty icanon min 1; this od invocation shows each stdin character on a separate line alongside its ASCII code, and is great for escape sequence parsing) and my FIFO. This failed epically (as does cat <(cat)): od gets run here as a backgrounded task, so it doesn't get access to the controlling TTY, and fails with a cryptic "I/O error" that was explained incredibly well here.
So now I'm a bit stumped. I realize that I can use any scripting language like Perl/Python/Ruby/Tcl to solve this; my compsci/engineering question is whether/how I might be able to solve this using (Linux) shell scripting.
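One sketch of a plain-bash workaround (not a true select(), just polling both descriptors with short read timeouts; the FIFO name fifo is taken from the question, and fractional -t timeouts need bash 4+):
# Open the FIFO for reading on fd 3 (this blocks until some writer opens it).
exec 3<fifo
while true; do
    if read -r -t 0.1 line; then          # anything typed on stdin?
        echo "stdin: $line"
    fi
    if read -r -t 0.1 -u 3 line; then     # anything arriving on the FIFO?
        echo "fifo:  $line"
    fi
done
It burns a few cycles on the timeouts, but it sidesteps the controlling-TTY problem because nothing is pushed into the background.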

How to get output of top command

I'm wondering how to get the output of the 'top' command. I'm looking to obtain the number of currently running processes.
I tried writing it to a text file and got garbage. I'm not sure how to approach this. My assignment is to determine the number of processes considered by the short-term scheduler for process allocation (processes currently ready to run) at any given time.
Use top in batch mode:
top -bn1 > output.txt
See also this previous answer.
Easier than parsing top, you could count the lines in the process list (subtract one for the ps header line):
ps -ef | wc -l
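If the assignment is specifically about processes that are ready to run, counting processes in the R state may be closer to what is wanted (a sketch assuming a Linux ps):
# Count processes whose state field starts with R (running or runnable).
ps -eo stat= | grep -c '^R'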

Bash: How to record highest memory/cpu consumption during execution of a bash script?

I have a function in a bash script that executes a long process called runBatch. Basically runBatch takes a file as an argument and loads the contents into a db. (runBatch is just a wrapper function for a database command that loads the content of the file)
My function has a loop that looks something like the below, where I am currently recording start time and elapsed time for the process to variables.
for batchFile in `ls $batchFilesDir`
do
    echo "Batch file is $batchFile"
    START_TIME=$(($(date +%s%N)/1000000))
    runBatch $batchFile
    ELAPSED_TIME=$(($(($(date +%s%N)/1000000))-START_TIME))
    IN_SECONDS=$(awk "BEGIN {printf \"%.2f\",${ELAPSED_TIME}/1000}")
done
Then I am writing some information on each batch (such as time, etc.) to a table in a html page I am generating.
How would I go about recording the highest memory/cpu usage while the runBatch is running, along with the time, etc?
Any help appreciated.
Edit: I managed to get this done. I added a wrapper script around this script that runs this script in the background. I pass its PID with $! to another script in the wrapper script that monitors the process's CPU and memory usage with top every second. I compile everything into a html page at the end when the PID is no longer alive. Cheers for the pointers.
You should be able to get the PID of the process using $!,
runBatch $batchFile &
myPID=$!
and then you can run a top -b -p $myPID to print out a ticking summary of CPU.
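A minimal sketch of that sampling idea, using ps instead of top because its output is easier to parse (the one-second interval and the peak-tracking variables are only illustrative):
runBatch "$batchFile" &
myPID=$!
maxCpu=0
maxRssKB=0
while kill -0 "$myPID" 2>/dev/null; do
    # %cpu and rss (resident set size in kB) for this PID, without headers
    read -r cpu rss < <(ps -o %cpu= -o rss= -p "$myPID")
    [ -n "$cpu" ] || break
    # keep the running maxima; awk handles the floating-point comparison
    maxCpu=$(awk -v a="$maxCpu" -v b="$cpu" 'BEGIN {print (b > a) ? b : a}')
    maxRssKB=$(awk -v a="$maxRssKB" -v b="$rss" 'BEGIN {print (b > a) ? b : a}')
    sleep 1
done
wait "$myPID"
echo "Peak CPU: ${maxCpu}%  Peak RSS: ${maxRssKB} kB"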
Memory:
cat /proc/meminfo
Then grep for whatever you want.
CPU is more complicated - see "/proc/stat explained".
Average load:
cat /proc/loadavg
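For the meminfo grep mentioned above, for example:
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo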
For timing "runBatch" use
time runBatch
like
time sleep 10
Once you've got the pid of your process (e.g. like answered here) you can use (with watch(1) & cat(1) or grep(1)) the proc(5) file system, e.g.
watch cat /proc/$myPID/stat
(or use /proc/$myPID/status or /proc/$myPID/statm, or /proc/$myPID/maps for the address space, etc...)
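Worth knowing for the "highest memory" part of the question: /proc/$myPID/status already tracks peak values, so a cheap check is, for example:
# VmPeak = peak virtual memory size, VmHWM = peak resident set size ("high water mark")
grep -E 'VmPeak|VmHWM' /proc/$myPID/status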
BTW, to run batch jobs you should consider batch (and you might look into crontab(5) to run things periodically)

Time taken by `less` command to show output

I have a script that produces a lot of output. The script pauses for a few seconds at point T.
Now I am using the less command to analyze the output of the script.
So I execute ./script | less. I leave it running long enough that the script should have finished executing.
Now I go through the output of the less command by pressing the Pg Down key. Surprisingly, while scrolling at point T of the output I notice the pause of a few seconds again.
The script does not expect any input and would have definitely completed by the time I start analyzing the output of less.
Can someone explain how the pause of a few seconds is noticeable in the output of less when the script would have finished executing?
Your script is communicating with less via a pipe. A pipe is an in-memory stream of bytes that connects two endpoints: your script and the less program, the former writing output to it, the latter reading from it.
Since pipes live in memory, it would not be pleasant if they could grow arbitrarily large. So, by default, there's a limit on how much data can be inside the pipe (written, but not yet read) at any given moment; on Linux it's 64 KiB. If the pipe is full and your script tries to write to it, the write blocks. So your script isn't actually working; it is stopped somewhere inside a write() call.
How to overcome this? Adjusting the defaults is a bad option; what is used instead is allocating a buffer in the reader, so that it reads into the buffer, freeing the pipe and thus letting the writing program continue, but shows you (or handles) only a part of the output. less has such a buffer and, by default, expands it automatically. However, it doesn't fill it in the background; it only fills it as you read the input.
So what would solve your problem is reading the file until the end (like you would normally press G), and then going back to the beginning (like you would normally press g). The thing is that you may specify these commands via command line like this:
./script | less +Gg
You should note, however, that you will have to wait until the whole script's output loads into memory, so you won't be able to view it at once. less is insufficiently sophisticated for that. But if that's what you really need (browsing the beginning of the output while the ./script is still computing its end), you might want to use a temporary file:
./script >x & less x ; rm x
The pipe is full at the OS level, so script blocks until less consumes some of it.
Flow control. Your script is effectively being paused while less is paging.
If you want to make sure that your command completes before you use less interactively, invoke less as less +G and it will read to the end of the input; you can then return to the start by typing 1G into less.
For some background information there's also a nice article by Alexander Sandler called "How less processes its input"!
http://www.alexonlinux.com/how-less-processes-its-input
Can I externally enforce line buffering on the script?
Is there an off-the-shelf pseudo-tty utility I could use?
You may try using the script command to turn on line-buffered output mode.
script -q /dev/null ./script | less # FreeBSD, Mac OS X
script -c "./script" /dev/null | less # Linux
For more alternatives in this respect please see: Turn off buffering in pipe.
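Another option worth trying, if ./script buffers its output through stdio, is GNU coreutils' stdbuf, which requests line-buffered standard output:
stdbuf -oL ./script | less
(It won't help with programs that manage their own buffering.)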

Get CPU usage in shell script?

I'm running some JMeter tests against a Java process to determine how responsive a web application is under load (500+ users). JMeter will give the response time for each web request, and I've written a script to ping the Tomcat Manager every X seconds which will get me the current size of the JVM heap.
I'd like to collect stats on the server of the % of CPU being used by Tomcat. I tried to do it in a shell script using ps like this:
PS_RESULTS=`ps -o pcpu,pmem,nlwp -p $PID`
...running the command every X seconds and appending the results to a text file. (for anyone wondering, pmem = % mem usage and nlwp is number of threads)
However I've found that this gives a different definition of "% of CPU Utilization" than I'd like - according to the manpages for ps, pcpu is defined as:
cpu utilization of the process in "##.#" format. It is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage.
In other words, pcpu gives me the % CPU utilization for the process for the lifetime of the process.
Since I want to take a sample every X seconds, I'd like to be collecting the CPU utilization of the process at the current time only - similar to what top would give me
(CPU utilization of the process since the last update).
How can I collect this from within a shell script?
Use top -b (and other switches if you want different outputs). It will just dump to stdout instead of jumping into a curses window.
The most useful tool I've found for monitoring a server while performing a test such as JMeter on it is dstat. It not only gives you a range of stats from the server, it outputs to csv for easy import into a spreadsheet and lets you extend the tool with modules written in Python.
User load: top -b -n 2 | grep Cpu | tail -n 1 | awk '{print $2}' | sed 's/.[^.]*$//'
System load: top -b -n 2 | grep Cpu | tail -n 1 | awk '{print $3}' | sed 's/.[^.]*$//'
Idle: top -b -n 2 | grep Cpu | tail -n 1 | awk '{print $5}' | sed 's/.[^.]*$//'
Each of these prints a whole-number percentage (the sed strips everything from the last decimal point onwards). Two iterations (-n 2) with tail -n 1 are used because top's first sample reports averages since boot; note that the awk field positions depend on how your version of top formats its Cpu(s) line.
Off the top of my head, I'd use the /proc filesystem view of the system state - Look at man 5 proc to see the format of the entry for /proc/PID/stat, which contains total CPU usage information, and use /proc/stat to get global system information. To obtain "current time" usage, you probably really mean "CPU used in the last N seconds"; take two samples a short distance apart to see the current rate of CPU consumption. You can then munge these values into something useful. Really though, this is probably more a Perl/Ruby/Python job than a pure shell script.
You might be able to get the rough data you're after with /proc/PID/status, which gives a Sleep average for the process. Pretty coarse data though.
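As a rough sketch of that two-sample idea (field numbers taken from proc(5): utime and stime are fields 14 and 15 of /proc/PID/stat; the naive awk field counting breaks if the command name in field 2 contains spaces):
#!/usr/bin/env bash
# %CPU of process $1 over the last $2 seconds (default 1), from two samples.
pid=$1
interval=${2:-1}

read_ticks() {
    # user + system jiffies consumed by the process so far
    awk '{print $14 + $15}' "/proc/$1/stat"
}

t1=$(read_ticks "$pid")
sleep "$interval"
t2=$(read_ticks "$pid")

hz=$(getconf CLK_TCK)   # clock ticks (jiffies) per second, usually 100
awk -v d=$((t2 - t1)) -v hz="$hz" -v s="$interval" \
    'BEGIN { printf "%.1f%%\n", 100 * d / (hz * s) }'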
Also use 1 as the iteration count, so you get the current snapshot without waiting $delay for another one:
top -b -n 1
This will not give you a per-process metric, but the Stress Terminal UI is super useful to know how badly you're punishing your boxes. Add -c flag to make it dump the data to a CSV file.
