Is there a better way to display CPU usage in tmux?

Here's a solution which may itself consume a lot of CPU (stolen from this article):
There's a difference on my Ubuntu 12 EC2 server: I have to use top -bn1 instead of top -ln.
Here's my related .tmux.conf file:
set -g status-right '#[fg=yellow]#(getCpuUsage.sh)'
It actually calls top every 2 seconds and top outputs a whole lot of information each time. I think there should be a way that consumes less CPU, or some flag to limit top's output to only the CPU figures.
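For what it's worth, here is a minimal sketch of what a cheaper getCpuUsage.sh could look like, reading /proc/stat directly instead of forking top at all (the script name comes from the question; the math ignores iowait and the remaining fields, so treat the result as an approximation):
#!/bin/sh
# Sample the aggregate "cpu" line of /proc/stat twice, one second apart.
# Fields after "cpu" are cumulative jiffies: user, nice, system, idle, ...
# Note this blocks for the one-second sample window.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
total=$(( busy + (i2 - i1) ))
echo "$(( 100 * busy / total ))%"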

I use the small tmux-mem-cpu-load C++ program. It's at least one fork/exec per update either way, but probably better than invoking a shell.
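For reference, once built it drops into .tmux.conf the same way as the script above (a minimal example; see its README for the available options):
set -g status-right '#(tmux-mem-cpu-load)'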

Had I known about tmux-mem-cpu-load, I would have been too lazy to write my own rainbarf:
It has a fancier look, but it is a Perl script, so running it every 2 seconds is not a good idea (in my experience, every 15 seconds suffices).

You can try vmstat(1). It displays the CPU load averaged over all CPUs, with user, system, idle, and I/O wait in the last four fields:
vmstat | while read s; do
    [[ "$s" =~ ([[:space:]]+[0-9]+){4}$ ]] && echo $BASH_REMATCH
done
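If you just want a single busy percentage for the status line, a common variation is to take two samples and subtract the idle column from 100 (this assumes idle is the 15th field, which holds for typical Linux vmstat output but is worth checking against your header line; the second sample reflects current rather than since-boot activity):
vmstat 1 2 | tail -1 | awk '{print 100 - $15 "%"}'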

Start the top command.
Press 1 to show each CPU on its own line, then press "t" twice.
It will display a bar graph of the CPU usage.
You can change the color with Shift+Z, then pick a color number from the list.

Related

Linux: get amount of memory swapped in/out over a time period

Is there an (easy?) way to get the amount of data moved to/from swap over a certain time? Maybe either integrated over all processes and time, or integrated over specific processes and time?
Story: I have a machine which tends to swap. However, I do not know if swap is 'actively' used, i.e. whether it is constantly swapping, or whether, say, shared libraries that are not really used just get swapped out after some time while 'active' memory usage ends up happening in RAM.
Thus, I am looking for a way to comfort myself that the swap usage may not be serious...
Cheers and thanks for ideas,
Thomas
This can be done relatively easily (if you know the kernel MM subsystem) via SystemTap.
You need to know the names of the functions which do swap-in/swap-out, create corresponding probes, and keep two counters incremented from the probes. Finally, you need a timer which fires every N seconds, dumps the current counters, and resets them.
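A sketch of that structure, assuming the older swap_readpage/swap_writepage kernel function names (probe points are kernel-version dependent, so check your kernel source for the actual symbols):
global swapin, swapout

probe kernel.function("swap_readpage") { swapin++ }
probe kernel.function("swap_writepage") { swapout++ }

# Fire every 5 seconds: dump the counters and reset them.
probe timer.s(5) {
    printf("swapped in: %d, swapped out: %d\n", swapin, swapout)
    swapin = 0
    swapout = 0
}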
Here is my temporary solution to get the overall number of pages swapped in/out between two calls, using vmstat:
#!/bin/sh
# Save the values from the previous run (0 on the first run).
OLDSWAPPEDIN=${SWAPPEDIN:-0}
OLDSWAPPEDOUT=${SWAPPEDOUT:-0}
# "vmstat -s" prints "N pages swapped in" and "M pages swapped out";
# the unquoted echo joins the two lines, so fields 1 and 5 are the counters.
PAGEINOUT=$(vmstat -s | grep swapped)
SWAPPEDIN=$(echo $PAGEINOUT | awk '{print $1}')
SWAPPEDOUT=$(echo $PAGEINOUT | awk '{print $5}')
SWAPPEDINDIFF=$((SWAPPEDIN - OLDSWAPPEDIN))
SWAPPEDOUTDIFF=$((SWAPPEDOUT - OLDSWAPPEDOUT))
I tried to avoid temporary files for storing the variables (so either sourcing the script or creating the variables at login is necessary).
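For example, assuming the script is saved as swapdiff.sh (a hypothetical name), you would source it each time rather than executing it, so the counters survive in the current shell:
. ./swapdiff.sh
echo "$SWAPPEDINDIFF pages in, $SWAPPEDOUTDIFF pages out"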

See how much memory my program used in Terminal

I know that when I type time ./myProgram <input.txt >output.txt in the terminal, I can conveniently see the runtime once my program finishes. There must be some similar command to see the memory usage of the program, right? Please inform me concisely. Thanks!
One way to tell how much memory the terminal is using at any one time is to use top and:
find the line that corresponds to the terminal's process.
find the column that corresponds to memory usage (man top could likely tell you -- I'm on a Windows machine right now, and can't easily look it up.)
Perhaps try this command:
top | grep terminal
('terminal' can be replaced with 'xterm' or whatever terminal program you use)
Again, this will tell you the memory usage of a program at a particular time.
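As a lighter one-shot alternative to grepping top's live output, ps can print the resident and virtual sizes (in KiB) for a specific process directly; myProgram here stands in for whatever you launched:
ps -o rss=,vsz= -p "$(pidof myProgram)"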

View multi-core or multi-CPU utilization on Linux

I have a program running on Linux and I need to determine how it is utilizing all the CPUs/cores. Is there any program for viewing this information?
Run the 'top' command and press '1' to see the individual cores.
When running the top command, press f then j to display the P column (last CPU used by the process). Together with the 1 command in top, you should see some multi-core occupancy information :)
htop shows you the CPU usage of each core in a graphical manner (ncurses).
mpstat -P ALL 5 5 >>your.file
You may need to parse the output to use it for a presentation, or sum it up, but read the man page, as mpstat has some useful options.
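For instance, the Average lines at the end can be reduced with awk; this assumes %idle is the last field, which holds for recent sysstat releases but not all older ones (some put intr/s there instead), so check the header first:
mpstat -P ALL 5 5 | awk '/^Average/ && $2 ~ /^[0-9]/ {print "cpu" $2 ": " 100 - $NF "%"}'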
Running the top command and pressing '1' to see the individual cores is the best way to see the CPU core usage.
Another option is to run cat /proc/stat to see the per-core counters.
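If you'd rather script it, the per-core counters that top aggregates can be read directly; there is one cpuN line per core, and the fields are cumulative jiffies (user, nice, system, idle, and so on):
grep '^cpu[0-9]' /proc/stat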

How to set CPU load on a Red Hat Linux box?

I have a RHEL box that I need to put under a moderate and variable amount of CPU load (50%-75%).
What is the best way to go about this? Is there a program that can do this that I am not aware of? I am happy to write some C code to make this happen, I just don't know what system calls will help.
This is exactly what you need (Internet Archive link):
https://web.archive.org/web/20120512025754/http://weather.ou.edu/~apw/projects/stress/stress-1.0.4.tar.gz
From the homepage:
"stress is a simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPL."
Find a simple prime-number search program that has source code. Modify the source to add a nanosleep call to the main loop with whichever delay gives you the desired CPU load.
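A shell take on the same duty-cycle idea, with hypothetical numbers (busy roughly 75 ms out of every 100 ms gives about 75% on one core; GNU timeout and sleep both accept fractional seconds):
while true; do
    timeout 0.075 sh -c 'while :; do :; done'   # burn CPU for ~75 ms
    sleep 0.025                                 # idle for ~25 ms
done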
One common way to get some load on a system is to compile a large software package over and over again. Something like the Linux kernel.
Get a copy of the source code, extract the tar.bz2, go into the top-level source directory, copy your kernel config from /boot to .config (or zcat /proc/config.gz > .config), then run make oldconfig, then while true; do make clean && make bzImage; done
If you have an SMP system, then make -j bzImage is fun: it will spawn make tasks in parallel.
One problem with this is adjusting the CPU load. It will be a maximum CPU load except for when waiting on disk I/O.
You could possibly do this using a Bash script. Use ps -o pcpu | grep -v CPU to get the CPU usage of all the processes, and add those values together to get the current usage. Then have a busy while loop that keeps checking those values, figuring out the current CPU usage, and waiting a calculated amount of time to keep the processor at a certain threshold. More detail is needed, but hopefully this gives you a good starting point.
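A rough sketch of that feedback loop (the 60% target and 0.2 s step are arbitrary; note that summing %CPU over all processes can exceed 100 on multi-core machines):
#!/bin/bash
TARGET=60
while true; do
    # Sum the %CPU column over all processes (header suppressed by pcpu=).
    load=$(ps -eo pcpu= | awk '{s += $1} END {printf "%d", s}')
    if [ "$load" -lt "$TARGET" ]; then
        timeout 0.2 sh -c 'while :; do :; done'  # below target: burn cycles
    else
        sleep 0.2                                # at/above target: back off
    fi
done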
Take a look at this CPU Monitor script I found and try to get some other ideas on how you can accomplish this.
It really depends what you're trying to test. If you're just testing CPU load, simple scripts to eat empty CPU cycles will work fine. I personally had to test the performance of a RAID array recently and I relied on Bonnie++ and IOZone. IOZone will put a decent load on the box, particularly if you set the file size higher than the RAM.
You may also be interested in this Article.
Lookbusy lets you set a target value for the CPU load.
Project site
lookbusy -c util[-high_util], --cpu-util util[-high_util]
e.g., for 60% load:
lookbusy -c 60
Use the "nice" command.
a) Highest priority:
$ nice -n -20 my_command
or
b) Lowest priority:
$ nice -n 20 my_command
A simple script to load and hammer the CPU using awk. The script does mathematical calculations, and thus the CPU load peaks at higher values passed to loadserver.sh.
Check out the script at http://unixfoo.blogspot.com/2008/11/linux-cpu-hammer-script.html
You can probably use some load-generating tool to accomplish this, or run a script to take all the CPU cycles and then use nice and renice on the process to vary the percentage of cycles that the process gets.
Here is a sample bash script that will occupy all the free CPU cycles:
#!/bin/bash
while true; do
    true
done
Not sure what your goal is here. I believe glxgears will use 100% CPU.
So find any process that you know will max out the CPU to 100%.
If you have four CPU cores (0 1 2 3), you could use taskset to bind this process to, say, CPUs 0 and 1. That should load your box 50%. To load it 75%, bind the process to CPUs 0, 1, and 2.
Disclaimer: I haven't tested this. Please let us know your results. Even if this works, I'm not sure what you will achieve out of it.
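If you try it, the taskset invocations themselves would look like the following; note that a single-threaded busy loop only occupies one core, so you would start one pinned instance per core you want loaded (busy.sh is a stand-in for the loop script above):
taskset -c 0 ./busy.sh &
taskset -c 1 ./busy.sh &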

Using "top" in Linux as semi-permanent instrumentation

I'm trying to find the best way to use 'top' as semi-permanent instrumentation in the development of a box running embedded Linux. (The instrumentation will be removed from the final-test and production releases.)
My first pass is to simply add this to init.d:
top -b -d 15 >/tmp/toploop.out &
This runs top in "batch" mode every 15 seconds. Let's assume that /tmp has plenty of space…
Questions:
Is 15 seconds a good value to choose for general-purpose monitoring?
Other than disk space, how seriously is this perturbing the state of the system?
What other (perhaps better) tools could be used like this?
Look at collectd. It's a very lightweight system monitoring framework coded for performance.
We use sysstat to monitor things like this.
You might find that vmstat and iostat with a delay and no repeat counter are a better option.
I suspect 15 seconds would be more than adequate unless you actually want to watch what's happening in real time, but that doesn't appear to be the case here.
As far as load goes, on an idling PIII 900MHz with 768MB of RAM running Ubuntu (not sure which version, but not more than a year old), I have top updating every 0.5 seconds and it's about 2% CPU utilization. At 15-second updates, I'm seeing 0.1% CPU utilization.
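If you go the vmstat route suggested above, the equivalent of the top line from the question would be something like this; with a delay and no count argument, vmstat keeps appending one summary line per interval until killed:
vmstat 15 >/tmp/vmstatloop.out &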
Depending upon what exactly you want, you could use the output of uptime, free, and ps to get most, if not all, of top's information.
If you are looking for overall load, uptime is probably sufficient. However, if you want specific information about processes, you are adventurous, and have the /proc filesystem enabled, you may want to write your own tools. The primary benefit in this environment is that you can focus on exactly what you want and minimize the load introduced to the system.
The proc filesystem gives your application read access to the kernel memory that keeps track of many of the interesting variables. Reading from /proc is one of the lightest ways to get this information. Additionally, you may be able to get more information than top provides. I've done this in the past to get the amount of time a process spent in user and system mode. You can also use it to get information about the number of file descriptors a process has open, or detailed information about how the network system is working.
Much of this information is pre-processed by other applications, which you can use if they give you the information you need. However, it is rather straightforward to read the raw information. Do a man proc for more information.
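As a taste of what's available, here are a couple of one-liners against the current shell; utime and stime are the 14th and 15th fields of /proc/PID/stat (in clock ticks), and /proc/PID/fd holds one entry per open descriptor:
awk '{print "utime:", $14, "stime:", $15}' /proc/$$/stat
ls /proc/$$/fd | wc -l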
Pity you haven't said what you are monitoring for.
You should decide whether 15 seconds is ok or not. Feel free to drop it way lower if you wish (and have a fast HDD)
No worries unless you are running a soft real-time system
Have a look at the tools suggested in other answers. I'll add another suggestion: iotop, for answering the "who is thrashing the HDD" question.
At work for system monitoring during stress tests we use a tool called nmon.
What I love about nmon is that it can export to XLS and generate beautiful graphs for you.
It generates statistics for:
Memory Usage
CPU Usage
Network Usage
Disk I/O
Good luck :)
