Using /proc/*/stat for profiling - Linux

On Linux, a process's (main thread's) last program-counter value is exposed in /proc/$PID/stat. This seems like a really simple and easy way to do some sampled profiling without having to instrument a program in any way whatsoever.
I'm wondering whether this has any caveats when it comes to sampling quality, however. I'm assuming this value is updated whenever the process runs out of its timeslice, which should happen at effectively random points in the program code, and that samples taken at intervals longer than the timeslice should therefore be distributed in proportion to where the program actually spends its time. But that's just an assumption, and I realize it could be wrong in any number of ways.
Does anyone know?

Why not try a modern built-in Linux tool like perf (https://perf.wiki.kernel.org/index.php/Main_Page)?
It has a record mode with adjustable sampling frequency (-F 100 for 100 Hz) and many events; for example, the software event task-clock works without using hardware performance counters. (Stop perf with Ctrl-C, or append sleep 10 to the command to sample for 10 seconds.)
perf record -p $PID -e task-clock -o perf.output.file
Perf works for all threads without any instrumentation (no recompilation or code editing) and will not interfere with program execution (only the timer interrupt is slightly modified). (There is also some support for stack-trace sampling with the -g option.)
The output can be parsed offline with perf report (only this command tries to parse the binary and shared libraries):
perf report -i perf.output.file
or converted to raw PC (EIP) samples with perf script -i perf.output.file.
PS: The EIP pointer in the /proc/$pid/stat file is mentioned in the official Linux man page proc(5) (http://man7.org/linux/man-pages/man5/proc.5.html) as kstkeip - "The current EIP (instruction pointer)." It is read in fs/proc/array.c:do_task_stat as eip = KSTK_EIP(task);, but I'm not sure where and when it is filled. It may be written on a task switch (both involuntary, when the timeslice ends, and voluntary, when the task does something like sched_yield) or on blocking syscalls, so it is probably not the best choice as a sampling source.
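For concreteness, here is a quick sketch of how one might poll that field from a shell; per proc(5), kstkeip is the 30th field of /proc/$PID/stat, and the comm field is stripped first because it can contain spaces that would throw off the field count (kernel hardening may also restrict or zero the value):
while true ; do
    sed 's/^[^)]*) //' "/proc/$PID/stat" | awk '{print $28}'   # kstkeip after stripping pid and comm
    sleep 0.01
done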

If it works, which it might, it will have the shortcomings of prof, which gprof was supposed to remedy. Then gprof has its own shortcomings, which have led to numerous more modern profilers. Some of us consider manual stack sampling (pausing the program at random and reading the call stack) to be the most effective method, and it can be done with a tool as simple as pstack or lsstack.

Related

Is there a way to see what happens during program execution in detail on Linux?

I am trying to debug the performance of my program. What would be ideal is a way to see in detail when the thread was doing useful work, when it was blocked by page faults, when it was executing memory writes and reads, and so on.
I would simply like to have a detailed understanding of what's going on. Is that possible?
The Linux kernel sources come with the perf tool, which can measure a large number of performance counters (all of those you listed included), print statistics about them, annotate symbols, instructions, and source lines with them (if debug symbols are available), and track any process or individual logical CPU cores.
Your Linux distribution probably ships the tool in a standalone package. Some kernel hardening options may limit what information root or non-root users can collect with it.
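For example, counting and sampling a running process might look like this (the 10-second sleep is just a convenient measurement window):
perf stat -p $PID -- sleep 10       # aggregate counter statistics
perf record -g -p $PID -- sleep 10  # sampled profile with call graphs
perf report                         # browse annotated symbols and source lines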
You can use perf and visualize a perf output file graphically with hotspot.

How to collect some readable stack traces with perf?

I want to profile a C++ program on Linux using the random sampling that is described in this answer:
However, if you're in a hurry and you can manually interrupt your
program under the debugger while it's being subjectively slow, there's
a simple way to find performance problems.
The problem is that I can't use the gdb debugger, because I want to profile in production under heavy load, and a debugger is too intrusive and considerably slows down the program. However, I can use perf record and perf report to find bottlenecks without affecting program performance. Is there a way to collect a number of readable (gdb-like) stack traces with perf instead of gdb?
perf does offer call-stack recording with three different techniques:
By default it uses the frame pointer (fp). This is widely supported and performs well, but it doesn't work with certain optimizations; compile your applications with -fno-omit-frame-pointer and the like to make sure it works well.
dwarf uses a dump of the stack for each sample for post-processing, which carries a significant performance penalty.
Modern systems can use the hardware-supported last branch record, lbr.
The stack is accessible in perf analysis tools such as perf report or perf script.
For more details check out man perf-record.
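Putting that together, a minimal sketch for collecting readable stacks (the 10-second sleep is just a measurement window):
perf record --call-graph dwarf -p $PID -- sleep 10
perf script | less    # prints gdb-like stack traces, one block per sample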

How to monitor a process in Linux (CPU, memory and time)

How can I benchmark a process in Linux? I need something like "top" and "time" put together for a particular process name (it is a multiprocess program, so many PIDs will be given).
Moreover, I would like to have a plot over time of memory and CPU usage for these processes, and not just final numbers.
Any ideas?
I typically throw together a simple script for this type of work.
Take a look at the kernel documentation for the proc filesystem (Google 'linux proc.txt').
The first line of /proc/stat (Section 1.8 in proc.txt) gives you cumulative CPU usage stats (i.e. user, nice, system, idle, ...). For each process, the file /proc/$PID/stat (Table 1-4 in proc.txt) provides both process-specific CPU usage stats and memory usage stats (see rss).
If you google a bit you'll find plenty of detailed info on these files, and pointers to libraries / apps / code snippets that can help you obtain / derive the values you need. With that in mind, I'll focus on the high-level strategy.
For CPU stats, use your favorite scripting language to create an executable that takes a set of process ids for monitoring. At a fixed interval (ex: 1 second) poll / calculate the cumulative totals for each process and the system as a whole. During each poll interval, write all results on a single line to stdout.
For memory stats, write a similar script, but simply log the per-process memory usage. Memory is a bit easier as we directly obtain the instantaneous values.
Run these script for the duration of your test, passing the set of processes ids that you'd like to monitor and redirecting its output to a log file.
./logcpu $(pidof foo) $(pidof bar) > cpustats
./logmem $(pidof foo) $(pidof bar) > memstats
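For illustration, a minimal sketch of such a logcpu script; the field positions are from proc(5), and the comm field is stripped first because it can contain spaces:
#!/bin/bash
# logcpu - poll cumulative CPU ticks (utime+stime) for each PID argument,
# one line per second; values are in clock ticks (see getconf CLK_TCK)
while true ; do
    line=$(date +%s)
    for pid in "$@" ; do
        if [ -r "/proc/$pid/stat" ] ; then
            # after stripping "pid (comm) ", utime and stime are fields 12 and 13
            ticks=$(sed 's/^[^)]*) //' "/proc/$pid/stat" | awk '{print $12 + $13}')
        else
            ticks="-"   # process has exited
        fi
        line="$line $ticks"
    done
    echo "$line"
    sleep 1
done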
Import the contents of these files into a spreadsheet (for certain applications this is as easy as copy/paste). For CPU, you are after instantaneous values but have cumulative values, so you'll need to do some minor spreadsheet work to derive them (it's just the delta t(x+1) - t(x)). Of course you could have your CPU logger write the delta, but you'd spend a bit more time up front on the script.
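Alternatively, a one-liner can derive the deltas before import (a sketch, assuming the timestamp and tick count are the first two columns of the log):
awk 'NR > 1 { print $1, $2 - prev } { prev = $2 }' cpustats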
Finally, use your spreadsheet to generate a nice plot.
The following are tools for monitoring a Linux system:
1. System commands like top, free -m, vmstat, iostat, iotop, sar, netstat, etc. Nothing comes close to these Linux utilities when you are debugging a problem; they give you a clear picture of what is going on inside your server.
2. SeaLion: An agent executes all the commands mentioned in #1 (plus user-defined ones), and their output can be accessed in a web interface. This tool comes in handy when you are debugging across hundreds of servers, as installation is simple. And it's free.
3. Nagios: The mother of all monitoring/alerting tools. It is very customizable but also difficult for beginners to set up. There is a set of tools called Nagios plugins that covers pretty much all important Linux metrics.
4. Munin
5. Server Density: A cloud-based paid service that collects important Linux metrics and gives users the ability to write their own plugins.
6. New Relic: Another well-known hosted monitoring service.
7. Zabbix

Benchmark a Linux Bash script

Is there a way to benchmark a Bash script's performance? The script downloads a remote file and then makes calls to multiple command-line programs to manipulate it. I would like to know (as much of the following as possible):
Total time
Time spent downloading
Time spent on each command called
-=[ I think these could be wrapped in "time" calls right? ]=-
Average download speed
uses wget
Total Memory used
Total CPU usage
CPU usage per command called
I'm able to make edits to the bash script to insert any benchmark commands needed at specific points (i.e., between app calls). Not sure if some "top" ninja-ry could solve this or not. I wasn't able to find anything useful (at least to my limited understanding) in the man file.
Will be running the benchmarks on OSX Terminal as well as Ubuntu (if either matter).
strace -o trace -c -Ttt ./script
-c counts the time, calls, and errors for each system call and reports a summary.
-T shows the time spent in each system call, and -tt prefixes each line with a microsecond-resolution timestamp.
-o saves the output in the file "trace".
You should be able to achieve this in a number of ways. One way is to use the time built-in for each command of interest and capture the results. You may have to be careful about any pipes and redirects.
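A minimal sketch of that approach; $URL and some_converter are placeholders for the question's download and processing steps (wget as in the question):
#!/bin/bash
TIMEFORMAT='%R real  %U user  %S sys'   # bash's output format for the time keyword
time wget -q "$URL" -O input.dat        # time spent downloading
time some_converter input.dat out.dat   # time spent on each command called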
You may also consider trapping SIGCHLD and the DEBUG, RETURN, ERR and EXIT pseudo-signals and putting timing information in the handlers, but you may not capture every detail that way.
The concept of per-command CPU usage won't give you anything useful; a CPU-bound command simply uses 100% of a core while it runs. Memory usage is something you can pull out, but you should look at
If you want to get deep process statistics, then you want to use strace... See the strace(1) man page for details. I doubt that -Ttt, as suggested elsewhere, is useful; all that tells you is system-call timing, and you want other process trace info as well.
You may also want to look at the ltrace and dstat tools.
A similar question is answered here: Linux benchmarking tools

How to set CPU load on a Red Hat Linux box?

I have a RHEL box that I need to put under a moderate and variable amount of CPU load (50%-75%).
What is the best way to go about this? Is there a program that can do this that I am not aware of? I am happy to write some C code to make this happen; I just don't know which system calls will help.
This is exactly what you need (internet archive link):
https://web.archive.org/web/20120512025754/http://weather.ou.edu/~apw/projects/stress/stress-1.0.4.tar.gz
From the homepage:
"stress is a simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPL."
Find a simple prime-number search program that has source code. Modify the source to add a nanosleep call in the main loop, with whatever delay gives you the desired CPU load.
One common way to get some load on a system is to compile a large software package over and over again. Something like the Linux kernel.
Get a copy of the source code, extract the tar.bz2, go into the top-level source directory, copy your kernel config from /boot to .config or zcat /proc/config.gz > .config, then do make oldconfig, then while true; do make clean && make bzImage; done
If you have an SMP system, then make -j bzImage is fun: it will spawn make tasks in parallel.
One problem with this approach is adjusting the CPU load: it will be maximum CPU load except while waiting on disk I/O.
You could possibly do this using a Bash script. Use "ps -o pcpu | grep -v CPU" to get the CPU usage of all the processes, and add those values together to get the current usage. Then have a busy while-loop that keeps checking those values, figuring out the current CPU usage, and waiting a calculated amount of time to keep the processor at a certain threshold. More detail is needed, but hopefully this gives you a good starting point.
Take a look at this CPU Monitor script I found and try to get some other ideas on how you can accomplish this.
It really depends what you're trying to test. If you're just testing CPU load, simple scripts to eat empty CPU cycles will work fine. I personally had to test the performance of a RAID array recently and I relied on Bonnie++ and IOZone. IOZone will put a decent load on the box, particularly if you set the file size higher than the RAM.
You may also be interested in this Article.
Lookbusy lets you set a target value of CPU load.
Project site
lookbusy -c util[-high_util], --cpu-util util[-high_util]
e.g., 60% load:
lookbusy -c 60
Use the "nice" command.
a) Highest priority:
$ nice -n -20 my_command
or
b) Lowest priority:
$ nice -n 20 my_command
A simple script to load and hammer the CPU using awk. The script does mathematical calculations, so CPU load peaks at higher values passed to loadserver.sh.
Check out the script at http://unixfoo.blogspot.com/2008/11/linux-cpu-hammer-script.html
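In the same spirit, a one-liner that pins one core at 100% until it finishes (a sketch, not the linked script itself):
awk 'BEGIN { for (i = 0; i < 100000000; i++) x = sqrt(i) }'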
You can probably use some load-generating tool to accomplish this, or run a script to take all the CPU cycles and then use nice and renice on the process to vary the percentage of cycles that the process gets.
Here is a sample bash script that will keep one CPU core fully busy:
#!/bin/bash
# spin forever; this keeps one core at 100% until the script is killed
while true ; do
    true
done
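If you need a partial load rather than 100%, a rough duty-cycle variant of the same loop alternates busy-spinning and sleeping (a sketch, assuming GNU date with nanosecond support and a sleep that accepts fractional seconds):
#!/bin/bash
# busy for ~50 ms, idle for ~50 ms: roughly 50% load on one core
while true ; do
    end=$(( $(date +%s%N) + 50000000 ))
    while [ "$(date +%s%N)" -lt "$end" ] ; do : ; done
    sleep 0.05
done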
Not sure what your goal is here. I believe glxgears will use 100% CPU.
So find any process that you know will max out the CPU to 100%.
If you have four CPU cores (0 1 2 3), you could use taskset to bind this process to, say, CPUs 0 and 1. That should load your box 50%. To load it 75%, bind the process to CPUs 0, 1 and 2.
Disclaimer: I haven't tested this. Please let us know your results. Even if this works, I'm not sure what you would achieve with it.
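For reference, the taskset invocations would look something like this (cpu_hog stands in for whatever CPU-bound process you pick):
taskset -c 0,1 ./cpu_hog     # start a process pinned to cores 0 and 1
taskset -cp 0,1 $PID         # or re-pin an already-running process by PID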
