How to Configure and Sample Intel Performance Counters In-Process - linux

In a nutshell, I'm trying to achieve the following inside a userland benchmark process (pseudo-code, assuming x86_64 and a UNIX system):
results[] = ...
for (iteration = 0; iteration < num_iterations; iteration++) {
    pctr_start = sample_pctr();
    the_benchmark();
    pctr_stop = sample_pctr();
    results[iteration] = pctr_stop - pctr_start;
}
FWIW, the performance counter I am thinking of using is CPU_CLK_UNHALTED.THREAD_ALL, to read the number of core cycles independently of clock frequency changes (in an earlier question I had planned to use the TSC register for this, but alas, that is not what that register measures at all).
My initial intention was to use inline assembler to first configure a counter using WRMSR, then to read the counter using RDPMC inside sample_pctr().
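Something like the following sketch is what I had in mind for sample_pctr() (assuming GCC inline asm, and that the counter is already programmed and user-space RDPMC is permitted via CR4.PCE, which is exactly the part I cannot do myself):
#include <stdint.h>

/* Sketch only: read programmable counter `counter` with RDPMC.
 * ECX selects the counter; the value comes back in EDX:EAX.
 * Requires the counter to have been configured via its event-select MSR
 * and CR4.PCE set so RDPMC is allowed from user space. */
static inline uint64_t sample_pctr(uint32_t counter)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdpmc" : "=a"(lo), "=d"(hi) : "c"(counter));
    return ((uint64_t)hi << 32) | lo;
}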
I stumbled at the first hurdle, as writing MSRs requires kernel privileges. It seems like you can in fact read the counters from user space (if configured correctly), but the act of configuring the counter (with an MSR) needs to be undertaken by the kernel.
Does anyone know a lightweight way to ask the kernel to configure a performance counter from user space so that I can then use RDPMC from within my benchmark harness?
Stuff I've looked into/thought about:
Perf tools for Linux. Seems to be geared up for sampling over the whole lifetime of a process, not at specific points within a process (before and after each iteration).
Use perf syscalls directly (i.e. perf_event_open). Looks like the counter value will only update periodically (using a sample rate) or after the counter exceeds a threshold. I need the counter value precisely at the moment I ask. This is why RDPMC seemed so attractive. I imagine that sampling frequently will itself skew the performance counter readings.
PAPI builds on perf, so probably inherits the above problem.
Write a kernel module -- too much effort, too error prone.
Ideally I would like a solution which works on OpenBSD and Linux, but somehow I think that is a tall order. Perhaps just for Linux for now.
Any help is most appreciated. Thanks.
EDIT: I just found the Linux msr device node, which would probably suffice. I'll leave the question up in case a better answer shows up.

It seems the best way -- for Linux at least -- is to use the msr device node.
You simply open a device node, seek to the address of the MSR required, and read or write 8 bytes.
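A minimal sketch of that idea (the msr_addr and value arguments stand in for whichever event-select MSR and encoding you actually need; the msr module must be loaded, you need appropriate privileges, and error handling is mostly omitted):
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch: write then read an MSR on CPU 0 via /dev/cpu/0/msr. */
int wrmsr_cpu0(uint32_t msr_addr, uint64_t value)
{
    int fd = open("/dev/cpu/0/msr", O_WRONLY);
    if (fd < 0)
        return -1;
    lseek(fd, msr_addr, SEEK_SET);      /* seek to the MSR address */
    write(fd, &value, sizeof(value));   /* write exactly 8 bytes */
    close(fd);
    return 0;
}

uint64_t rdmsr_cpu0(uint32_t msr_addr)
{
    uint64_t value = 0;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    lseek(fd, msr_addr, SEEK_SET);      /* seek to the MSR address */
    read(fd, &value, sizeof(value));    /* read exactly 8 bytes */
    close(fd);
    return value;
}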
OpenBSD is harder, since (at the time of writing) there is no user-space proxy to the MSRs. So you would need to write a kernel module or implement a sysctl by hand.

Related

Linux Suspend To RAM from idle loop

I have a question regarding STR (Suspend To RAM) in the Linux kernel.
I am working on a small embedded Linux (Kernel 3.4.22) and I want to implement a mechanism that will put the system into sleep (suspend to ram) while it has nothing to do.
This is done in order to save power.
The HW supports RAM self-refresh, meaning its contents will stay persistent.
And I'll take care of everything else that needs to be done (e.g. keeping the CPU context, etc.).
I want to trigger the Kernel PM (power management) subsystem from within the idle loop.
When the system has nothing to do, it should go into sleep.
The HW also supports a way to wake up the system.
Doing some research, I found out that Linux gives user space an option to switch to STR by writing "mem" to /sys/power/state (i.e. echo mem > /sys/power/state).
This will trigger the PM subsystem and will perform the relevant callbacks.
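From user-space C that amounts to roughly the following sketch (error handling mostly omitted; requires sufficient privileges):
#include <fcntl.h>
#include <unistd.h>

/* Sketch: trigger suspend-to-RAM by writing "mem" to /sys/power/state,
 * the same thing the echo command above does. */
int suspend_to_ram(void)
{
    int fd = open("/sys/power/state", O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, "mem", 3);
    close(fd);
    return (n == 3) ? 0 : -1;
}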
My questions are:
Is there any other standard alternative to go into STR besides writing to the sysfs file above?
Has anyone tried to put the system into STR from the idle loop code?
Thanks,
Why would you need another method? Linux treats everything as a file. Is it any surprise that the contents of a pseudo-file dictate the state of the system? Check for yourself. pm-utils is a popular tool set for managing the state of the system. All the commands are just calls to /sys files.
This policy is actually platform-dependent. You would have to look at the cpuidle driver for your platform to understand what it is doing. For example, on Atmel platforms, it uses both RAM self-refresh and WFI.

Is there a way to disable CPU cache (L1/L2) on a Linux system?

I am profiling some code on a Linux system (running on an Intel Core i7 4500U) to obtain ONLY the execution costs in terms of time. The application is the demo mpeg2dec from libmpeg2. I am trying to obtain a probability distribution for the mpeg2 execution times. However, we want to see the raw execution cost when the cache is switched off.
Is there a way I can disable the CPU cache of my system via a Linux command, or via a gcc flag? Or even set the CPU (L1/L2) cache size to 0 KB? Or even add some code changes to disable the cache? Of course, without modifying or rebuilding the kernel.
See this 2012 thread, where someone posted a tiny kernel module that disables the cache through asm (the gist is sketched below).
http://www.linuxquestions.org/questions/linux-kernel-70/disabling-cpu-caches-936077/
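The gist of that approach, as a rough sketch: set the CD bit in CR0 and flush the caches. This is privileged, so it has to run in ring 0 (e.g. from a kernel module), it acts per-CPU, and it will make the machine extremely slow.
/* Kernel-mode sketch: disable caching by setting CR0.CD (bit 30),
 * then flush with WBINVD.  Clearing the bit re-enables caching. */
static inline void cache_disable(void)
{
    unsigned long cr0;
    __asm__ __volatile__("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= (1UL << 30);                        /* CR0.CD: cache disable */
    __asm__ __volatile__("mov %0, %%cr0" :: "r"(cr0));
    __asm__ __volatile__("wbinvd");            /* write back and invalidate */
}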
If disabling the cache is really necessary, then so be it.
Otherwise, to know how much time a process takes in terms of user or system "cycles", I would recommend the getrusage() function.
#include <sys/resource.h>

struct rusage usage;
getrusage(RUSAGE_SELF, &usage);
You can call it before/after your loop/test and subtract the values to get a good idea of how much time your process took, even if many other processes run in parallel on the same machine. The main problem you'd get is if your process starts swapping. In that case your timings will be off.
double user_usage = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1000000.0;
double system_usage = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1000000.0;
In my experience this is really precise. To increase precision, you could run your test as root and give it a negative priority (-1 or -2 is enough). Then it won't be swapped out until you call a function that may require it.
Of course, you still get the effect of the cache... assuming you do not handle a very large amount of data with code that goes on and on (as opposed to having a loop).
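Putting the above together, a minimal sketch of the before/after measurement (the_benchmark() stands in for the code under test):
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static double cpu_seconds(const struct timeval *tv)
{
    return tv->tv_sec + tv->tv_usec / 1000000.0;
}

/* Sketch: report user and system CPU time consumed by one run. */
void measure(void (*the_benchmark)(void))
{
    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    the_benchmark();
    getrusage(RUSAGE_SELF, &after);
    printf("user: %f s, system: %f s\n",
           cpu_seconds(&after.ru_utime) - cpu_seconds(&before.ru_utime),
           cpu_seconds(&after.ru_stime) - cpu_seconds(&before.ru_stime));
}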

linux - How to determine the time taken by each function in a C program

I want to check the time taken by each function and the system calls made by each function in my project. My code is part of user space as well as kernel space, so I need the time taken in both. I am interested in performance in terms of CPU time and disk I/O. Should I use a profiler tool? If yes, which would be preferable? Or what other options do I have?
Please help,
Thanks
As for kernel-level profiling, the time taken by some instructions or functions can be measured in clock ticks used. To find out how many clock ticks a given task actually took, you can do something like this in kernel code:
/* Kernel-space sketch: rdtscl() is an x86 macro from <asm/msr.h>
 * (present in the older kernels this targets); printk() needs kernel context. */
#include <asm/msr.h>

unsigned long ini, end;
rdtscl(ini);
/* ...your code... */
rdtscl(end);
printk("time lapse in cpu clock ticks: %lu\n", end - ini);
For more details see http://www.xml.com/ldd/chapter/book/ch06.html
If your code takes longer than that, you can also use jiffies effectively.
For user-space profiling you can use various timing functions which give the time in nanosecond resolution, or oprofile (http://oprofile.sourceforge.net/about/); also refer to this: Timer function to provide time in nanoseconds using C++
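For example, one such nanosecond-resolution timing function is clock_gettime() -- a minimal sketch, assuming POSIX (on older glibc you may need to link with -lrt):
#include <stdio.h>
#include <time.h>

/* Sketch: time a region of code with nanosecond resolution. */
void time_region(void (*fn)(void))
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    fn();                                 /* code under test */
    clock_gettime(CLOCK_MONOTONIC, &end);
    long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                 + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", ns);
}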
For kernel-space function tracing and profiling (which includes a call-graph format and the time taken by individual functions), consider using the Ftrace framework.
Specifically for function profiling (within the kernel), enable the CONFIG_FUNCTION_PROFILER kernel config: under Kernel Hacking / Tracing / Kernel function profiler.
Its help text:
CONFIG_FUNCTION_PROFILER:
This option enables the kernel function profiler. A file is created
in debugfs called function_profile_enabled which defaults to zero.
When a 1 is echoed into this file profiling begins, and when a
zero is entered, profiling stops. A "functions" file is created in
the trace_stats directory; this file shows the list of functions that
have been hit and their counters.
Some resources:
Documentation/trace/ftrace.txt
Secrets of the Ftrace function tracer
Using ftrace to Identify the Process Calling a Kernel Function
Well, I only develop in user space, so I don't know how much this will help you with disk I/O or kernel-space profiling, but I have profiled a lot with oprofile.
I haven't used it in a while, so I cannot give you a step-by-step guide, but you may find more information here:
http://oprofile.sourceforge.net/doc/results.html
Usually this helped me finding my problems.
You may have to play a bit with the opreport output, to get the results you want.

pinning a pthread to a single core

I am trying to measure the performance of some library calls. My primary measurement tool is the rdtsc call. After doing some reading I realize that I need to disable preemption and interrupts in order to get the most accurate readings. Can someone help me figure out how to do these? I know that pthreads have a 'set affinity' mechanism. Is that enough to get the job done?
I also read somewhere that I can make calls into the kernel of the sort
preempt_disable()
raw_local_irq_save(...)
Is there any benefit to using one approach over the other? I tried the latter approach and got this error.
error: 'preempt_disable' was not declared in this scope
which can be fixed by including linux/preempt.h but the compiler still complains.
linux/preempt.h: No such file or directory
Obviously I have not done any kernel hacking and I could not find this file on my system anywhere. I am really hoping I won't have to install a new Linux kernel. :)
Thanks for your input.
Pinning a pthread to a single CPU can be done using pthread_setaffinity_np.
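For example, a minimal sketch pinning the calling thread to CPU 0 (glibc-specific, hence the _np suffix and _GNU_SOURCE):
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Sketch: pin the calling thread to CPU 0.  Returns 0 on success. */
int pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}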
But what you want to achieve in the end is not so simple. I'll explain why.
preempt.h is part of the Linux kernel source. It's located here. You need to have the kernel sources with you. In any case, you need to write a kernel module to access it; you cannot use it from user space. Learn how to write a kernel module here. The same is the case with preempt_disable and the other interrupt-disabling kernel functions.
Now the point is, pthreads live in user space and your preemption-disabling functions live in kernel space. How do they interact?
Either you write a new system call of your own that does the preemption and interrupt disabling and call it from user space, or you resort to other kernel/user-space interfaces like procfs, sysfs, ioctl, etc.
But I am really skeptical as to how all this will help you benchmark library functions. You may want to have a look at how performance is typically measured using rdtsc.
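For reference, one common pattern is a sketch like the following (assuming x86-64 and GCC inline asm): CPUID is used as a serializing barrier so earlier instructions cannot drift into the measured region, then RDTSC reads the time-stamp counter.
#include <stdint.h>

/* Sketch: serialized TSC read.  CPUID serializes; RDTSC returns the
 * time-stamp counter in EDX:EAX. */
static inline uint64_t tsc_read(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("cpuid\n\t"
                         "rdtsc"
                         : "=a"(lo), "=d"(hi)
                         : "a"(0)
                         : "%rbx", "%rcx");
    return ((uint64_t)hi << 32) | lo;
}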

Accurate way of measuring overhead in kernel space

I recently implemented a security mechanism for Linux which hooks into system calls. Now I have to measure the overhead it causes. The project requires comparing the execution time of typical Linux apps with and without the mechanism. By typical Linux apps I mean, e.g., gzipping a 1 GB file, doing 'find /', grepping files. The main goal is to show the overhead in different types of tasks: CPU-bound, I/O-bound, etc.
The question is: how do I organise the tests so that they are reliable? The first important thing is that my mechanism works only in kernel space, so it is the system time that is relevant to compare. I can use the 'time' command for it, but is it the most accurate way of measuring system time? Another idea is to run those apps in long loops to minimize error. Should the loops be inside or outside the time command? If they are outside, I will get many results: should I choose the min, max, median, or average?
Thanks for any suggestions.
I think you want to measure more of a typical application payload (as Ninjajl's comment suggests, compiling the kernel could be a good payload). You probably don't want to measure the overhead inside each syscall itself, or even inside the kernel as a whole.
The reason for this is that most applications spend much more time and resources in user space than in kernel-land (i.e. syscalls), so the overhead inside syscalls is a "second-order" effect and probably doesn't matter as much. Of course, there are probably exceptions.
Perhaps the Phoronix Test Suite might be relevant.
You might be interested in oprofile.
See also this answer and this question.
