Perf output is less than the number of actual instructions - linux

I tried to count the number of instructions of an add-loop application on a RISC-V FPGA, using a very simple RV32IM core running a Linux 5.4.0 Buildroot system.
add.c:
#include <stdio.h>

int main()
{
    int a = 0;
    for (int i = 0; i < 1024*1024; i++)
        a++;
    printf("RESULT: %d\n", a);
    return a;
}
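(Aside: assuming a standard RISC-V cross toolchain, the binary and the dump below can be reproduced with something like the following; the toolchain prefix is a guess and may differ on your system.)

riscv32-unknown-linux-gnu-gcc -O0 add.c -o add.out
riscv32-unknown-linux-gnu-objdump -d add.out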
I used the -O0 compile option so that the loop really loops, and the resulting disassembly is as follows:
000103c8 <main>:
103c8: fe010113 addi sp,sp,-32
103cc: 00812e23 sw s0,28(sp)
103d0: 02010413 addi s0,sp,32
103d4: fe042623 sw zero,-20(s0)
103d8: fe042423 sw zero,-24(s0)
103dc: 01c0006f j 103f8 <main+0x30>
103e0: fec42783 lw a5,-20(s0)
103e4: 00178793 addi a5,a5,1 # 12001 <__TMC_END__+0x1>
103e8: fef42623 sw a5,-20(s0)
103ec: fe842783 lw a5,-24(s0)
103f0: 00178793 addi a5,a5,1
103f4: fef42423 sw a5,-24(s0)
103f8: fe842703 lw a4,-24(s0)
103fc: 001007b7 lui a5,0x100
10400: fef740e3 blt a4,a5,103e0 <main+0x18>
10404: fec42783 lw a5,-20(s0)
10408: 00078513 mv a0,a5
1040c: 01c12403 lw s0,28(sp)
10410: 02010113 addi sp,sp,32
10414: 00008067 ret
As you can see, the application loops over 103e0 ~ 10400, which is 9 instructions per iteration, so the total instruction count must be at least 9 * 1024^2 = 9,437,184.
But the result of perf stat is pretty weird:
RESULT: 1048576
Performance counter stats for './add.out':
3170.45 msec task-clock # 0.841 CPUs utilized
20 context-switches # 0.006 K/sec
0 cpu-migrations # 0.000 K/sec
38 page-faults # 0.012 K/sec
156192046 cycles # 0.049 GHz (11.17%)
8482441 instructions # 0.05 insn per cycle (11.12%)
1145775 branches # 0.361 M/sec (11.25%)
3.771031341 seconds time elapsed
0.075933000 seconds user
3.559385000 seconds sys
The total number of instructions perf counted (8,482,441) is lower than 9 * 1024^2; the difference is about 10%.
How can this happen? I would expect perf's figure to be larger than the static estimate, because perf measures not only add.out itself but also the overhead of perf and of context switching.

Related

Perf stat HW counters

perf stat ./myapp
and the result might look like this (it's just an example):
Performance counter stats for 'myapp':
83723.452481 task-clock:u (msec) # 1.004 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
3,228,188 page-faults:u # 0.039 M/sec
229,570,665,834 cycles:u # 2.742 GHz
313,163,853,778 instructions:u # 1.36 insn per cycle
69,704,684,856 branches:u # 832.559 M/sec
2,078,861,393 branch-misses:u # 2.98% of all branches
83.409183620 seconds time elapsed
74.684747000 seconds user
8.739217000 seconds sys
Perf stat prints user time and system time, and the HW counters are incremented no matter which application the CPU executes.
For HW counters like cycles or instructions, does perf count them only for "myapp"?
For instance ("cs" = context switch; the bottom row is the cumulative HW instruction count at each boundary):
|--------------------|-------|-------------------|------------|------------------|
myapp                cs      cs                  myapp        cs                 cs (end)
0                    10      20                  50           80                 100
"myapp" accounts for 60 instructions, but the raw value of the HW counter is 100; does perf stat then print 60?
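(If it helps: the kernel virtualizes these counters per task. A counter opened for a single process is saved when that process is scheduled out and restored when it is scheduled back in, so in the diagram above only the instructions executed during the myapp segments should be reported. Below is a minimal self-counting sketch using perf_event_open, the same syscall perf stat uses; error handling is trimmed and the measured workload is a placeholder.)

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    /* pid = 0, cpu = -1: count only this task, on whatever CPU it runs.
       The kernel stops the counter whenever this task is scheduled out. */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    /* ... the code to measure goes here ... */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count;
    read(fd, &count, sizeof(count));
    printf("instructions: %llu\n", (unsigned long long)count);
    return 0;
}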

Failed to reproduce the high-precision time measuring kernel module from Intel's white paper

I am trying to reproduce the white paper How to Benchmark Code Execution Times on Intel IA-32 and IA-64 Instruction Set Architectures. This white paper provides a kernel module to accurately measure the execution time of a piece of code, by disabling preemption, using RDTSC, etc.
However, I cannot get the low variance reported in the white paper when running its benchmark code, which suggests the technique from the white paper doesn't work for me, and I couldn't find out what's wrong.
The core of the kernel module is just a few lines:
unsigned long flags;                /* raw_local_irq_save() expects unsigned long */
unsigned cycles_high, cycles_low;   /* TSC halves read before the measured code */
unsigned cycles_high1, cycles_low1; /* TSC halves read after the measured code */

preempt_disable();          /* no preemption while measuring */
raw_local_irq_save(flags);  /* no interrupts while measuring */
asm volatile(
    "CPUID\n\t"             /* serializing: fences all earlier instructions */
    "RDTSC\n\t"             /* TSC into EDX:EAX */
    "mov %%edx, %0\n\t"
    "mov %%eax, %1\n\t"
    : "=r"(cycles_high), "=r"(cycles_low)::"%rax", "%rbx", "%rcx", "%rdx");
/* call the function to measure here */
asm volatile(
    "RDTSCP\n\t"            /* TSC after the measured code has retired */
    "mov %%edx, %0\n\t"
    "mov %%eax, %1\n\t"
    "CPUID\n\t"             /* keep later instructions from executing early */
    : "=r"(cycles_high1), "=r"(cycles_low1)::"%rax", "%rbx", "%rcx", "%rdx");
raw_local_irq_restore(flags);
preempt_enable();
The code is copied directly from the white paper, with the optimizations it recommends applied.
From the white paper, the expected output should be
loop_size:995 >>>> variance(cycles): 0; max_deviation: 0 ;min time: 2216
loop_size:996 >>>> variance(cycles): 28; max_deviation: 4 ;min time: 2216
loop_size:997 >>>> variance(cycles): 0; max_deviation: 112 ;min time: 2216
loop_size:998 >>>> variance(cycles): 28; max_deviation: 116 ;min time: 2220
loop_size:999 >>>> variance(cycles): 0; max_deviation: 0 ;min time: 2224
total number of spurious min values = 0
total variance = 1
absolute max deviation = 220
variance of variances = 2
variance of minimum values = 335757
However, what I get is
[1418048.049032] loop_size:42 >>>> variance(cycles): 104027;max_deviation: 92312 ;min time: 17
[1418048.049222] loop_size:43 >>>> variance(cycles): 18694;max_deviation: 43238 ;min time: 17
[1418048.049413] loop_size:44 >>>> variance(cycles): 1;max_deviation: 60 ;min time: 17
[1418048.049602] loop_size:45 >>>> variance(cycles): 1;max_deviation: 106 ;min time: 17
[1418048.049792] loop_size:46 >>>> variance(cycles): 69198;max_deviation: 83188 ;min time: 17
[1418048.049985] loop_size:47 >>>> variance(cycles): 1;max_deviation: 60 ;min time: 17
[1418048.050179] loop_size:48 >>>> variance(cycles): 1;max_deviation: 61 ;min time: 17
[1418048.050373] loop_size:49 >>>> variance(cycles): 1;max_deviation: 58 ;min time: 17
[1418048.050374]
total number of spurious min values = 2
[1418048.050374]
total variance = 28714
[1418048.050375]
absolute max deviation = 101796
[1418048.050375]
variance of variances = 1308070648
That is, a much higher max_deviation and variance(cycles) than in the white paper.
(Please ignore the different min time, since the white paper may be actually benchmarking something, but my code does not actually benchmark anything.)
Is there anything I missed from the paper? Or is the white paper out of date, and am I missing some techniques needed on modern x86 CPUs? How can I measure the execution time of a piece of code with the highest precision on a modern Intel x86 CPU?
P.S. The code I run is placed here.
Most Intel processors have a constant TSC, which implies that the core frequency and the TSC frequency may be different. If an operation takes a fixed number of core cycles to complete, it may take very different numbers of TSC cycles depending on the core frequency during the execution of the operation in different runs. When max_deviation is large, it indicates that the core frequency has changed significantly during the execution of that iteration. The solution is to fix the core frequency to the maximum non-turbo frequency of your processor. For more information on constant TSC, see: Can constant non-invariant tsc change frequency across cpu states?.
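(Tangentially, the related invariant-TSC capability can be queried from CPUID leaf 0x80000007, EDX bit 8. A tiny userspace sketch, assuming GCC/Clang's <cpuid.h>:)

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* Leaf 0x80000007 (Advanced Power Management): EDX bit 8 = Invariant TSC */
    if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
        printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
    return 0;
}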
Please ignore the different min time, since the white paper may be
actually benchmarking something, but my code does not actually
benchmark anything.
The minimum values depend on the microarchitecture, the core frequency (which can change dynamically), and the TSC frequency (which is some fixed value close to the base frequency). The authors of the white paper only said they were on a Core i7 processor. In 2010, that would be either a Nehalem or a Westmere processor.
The measurements you've copied from the paper are from Section 3.3.2 titled "Resolution with the Alternative Method." The alternative method uses mov cr0, rax for serialization instead of rdtscp. But your code is from Section 3.2.2.
Note that the if ((end - start) < 0) {...} is never true when end and start are unsigned integers because the result of the subtraction is unsigned and the constant 0 is also converted to an unsigned type. Change it to if (end < start) {...}.
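(A small illustration of that pitfall:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t start = 100, end = 40;  /* hypothetical inverted readings */
    /* end - start is computed as unsigned and wraps to a huge value,
       so this branch is never taken: */
    if ((end - start) < 0)
        puts("never reached");
    /* the correct check: */
    if (end < start)
        puts("inversion detected");
    return 0;
}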

Use linux perf utility to report counters every second like vmstat

There is the perf command-line utility in Linux for accessing hardware performance-monitoring counters; it works using the perf_events kernel subsystem.
perf itself basically has two modes: perf record/perf top to record a sampling profile (a sample is taken, for example, every 100,000th CPU clock cycle or executed instruction), and perf stat mode to report the total count of cycles/instructions executed for the application (or for the whole system).
Is there a mode of perf that prints a system-wide or per-CPU summary of total counts every second (or every 3, 5, 10 seconds), like vmstat and the sysstat-family tools (iostat, mpstat, sar -n DEV ... as listed in http://techblog.netflix.com/2015/11/linux-performance-analysis-in-60s.html) do? For example, with the cycles and instructions counters I would get the mean IPC for every second, for the whole system (or for every CPU).
Is there any non-perf tool (from https://perf.wiki.kernel.org/index.php/Tutorial or http://www.brendangregg.com/perf.html) that can get such statistics using the perf_events kernel subsystem? What about system-wide per-process IPC calculation with a resolution of seconds?
There is the perf stat option --interval-print / -I N, where N is the interval in milliseconds at which to print the counters (N >= 10): http://man7.org/linux/man-pages/man1/perf-stat.1.html
-I msecs, --interval-print msecs
Print count deltas every N milliseconds (minimum: 10ms) The
overhead percentage could be high in some cases, for instance
with small, sub 100ms intervals. Use with caution. example: perf
stat -I 1000 -e cycles -a sleep 5
For best results it is usually a good idea to use it with interval
mode like -I 1000, as the bottleneck of workloads can change often.
perf stat can also emit results in machine-readable form; with -I, the first field is a timestamp:
With -x, perf stat is able to output a not-quite-CSV format output ... optional usec time stamp in fractions of second (with -I xxx)
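For example (the exact field layout may vary between perf versions), something like perf stat -x, -I 1000 -a -e cycles,instructions sleep 5 should emit one comma-separated record per event per interval, with the interval timestamp in the first field, which is convenient to post-process into per-second IPC.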
The periodic printing of vmstat and the sysstat-family tools (iostat, mpstat, etc.) corresponds to perf stat's -I 1000 (every second), for example system-wide (add -A to keep per-CPU counters separate):
perf stat -a -I 1000
The option is implemented in builtin-stat.c (http://lxr.free-electrons.com/source/tools/perf/builtin-stat.c?v=4.8), in the __run_perf_stat function:
531 static int __run_perf_stat(int argc, const char **argv)
532 {
533 int interval = stat_config.interval;
For perf stat -I 1000 with some program argument (forks=1), for example perf stat -I 1000 sleep 10, there is an interval loop (ts is the millisecond interval converted to a struct timespec):
639 enable_counters();
641 if (interval) {
642 while (!waitpid(child_pid, &status, WNOHANG)) {
643 nanosleep(&ts, NULL);
644 process_interval();
645 }
646 }
666 disable_counters();
For the variant of system-wide hardware performance-monitor counting (forks=0) there is another interval loop:
658 enable_counters();
659 while (!done) {
660 nanosleep(&ts, NULL);
661 if (interval)
662 process_interval();
663 }
666 disable_counters();
process_interval() (http://lxr.free-electrons.com/source/tools/perf/builtin-stat.c?v=4.8#L347) from the same file uses read_counters(), which loops over the event list and invokes read_counter(), which loops over all known threads and all CPUs and calls the actual reading function:
306 for (thread = 0; thread < nthreads; thread++) {
307 for (cpu = 0; cpu < ncpus; cpu++) {
...
310 count = perf_counts(counter->counts, cpu, thread);
311 if (perf_evsel__read(counter, cpu, thread, count))
312 return -1;
perf_evsel__read does the actual counter read while the program is still running:
1207 int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
1208 struct perf_counts_values *count)
1209 {
1210 memset(count, 0, sizeof(*count));
1211
1212 if (FD(evsel, cpu, thread) < 0)
1213 return -EINVAL;
1214
1215 if (readn(FD(evsel, cpu, thread), count, sizeof(*count)) < 0)
1216 return -errno;
1217
1218 return 0;
1219 }
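(The same enable/sleep/read pattern can be reproduced directly against the perf_event_open syscall, outside of perf. A rough sketch that counts cycles on CPU 0 once per second; note that system-wide counting typically requires root or a relaxed perf_event_paranoid setting, and error handling is trimmed.)

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;

    /* pid = -1, cpu = 0: count everything that runs on CPU 0 */
    int fd = syscall(SYS_perf_event_open, &attr, -1, 0, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    uint64_t prev = 0;
    struct timespec ts = { 1, 0 };   /* 1-second interval, like -I 1000 */
    for (int i = 0; i < 5; i++) {
        nanosleep(&ts, NULL);
        uint64_t now;
        read(fd, &now, sizeof(now)); /* same kind of read as perf_evsel__read */
        printf("cycles in interval: %llu\n", (unsigned long long)(now - prev));
        prev = now;
    }
    return 0;
}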

Oprofile and gprof output varies for same code

I am running my code on an AMD Opteron 6270 machine. The OS is CentOS 6.2.
I have written a simple program:
#include <stdio.h>
#include <stdlib.h>

int Calling(long a);
int Calling1(long a);
int Calling2(long a);
int Calling3(long a);
int Calling4(long a);
int Calling5(long a);

int main(void)
{
    long a, b = 0;
    printf("hi");
    for (a = 0; a < 10000000; a++) b++;
    b = Calling(a);
    b = Calling5(a);
    b = Calling4(a);
    return 0;
}

int Calling(long a)
{
    long b = 0;
    for (a = 0; a < 100; a++) b = Calling1(a);
    return 0;
}

int Calling1(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling2(a);
    return 0;
}

int Calling2(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling3(a);
    return 0;
}

int Calling3(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling4(a);
    return 0;
}

int Calling4(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    return 0;
}

int Calling5(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    return 0;
}
While profiling this code with gprof and OProfile, I got different reports. Let's say I run main.exe with gprof two times:
1st report with gprof
Flat profile:
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 24.80      2.96     2.96      101     0.03     0.03  Calling4
 24.71      5.91     2.95      100     0.03     0.12  Calling1
 24.63      8.84     2.94      100     0.03     0.06  Calling3
 23.78     11.68     2.84      100     0.03     0.09  Calling2
  1.01     11.80     0.12        1     0.12     0.12  Calling5
  0.34     11.84     0.04                             main
  0.00     11.84     0.00        1     0.00    11.65  Calling
2nd report with gprof
Flat profile:
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 25.13      2.99     2.99      100     0.03     0.12  Calling1
 24.88      5.95     2.96      101     0.03     0.03  Calling4
 24.80      8.89     2.95      100     0.03     0.06  Calling3
 23.48     11.69     2.79      100     0.03     0.09  Calling2
  1.02     11.81     0.12        1     0.12     0.12  Calling5
  0.17     11.83     0.02                             main
  0.00     11.83     0.00        1     0.00    11.66  Calling
The two reports differ, and every time I run main.exe I get a different profiling report.
When I tried OProfile I also got differing results:
Oprofile report1
Using /var/lib/oprofile/samples/ for samples directory.
warning: /no-vmlinux could not be found.
CPU: AMD64 family15h, speed 2.2e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (CPU Clocks not Halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % image name symbol name
92552 24.7672 main Calling4
91610 24.5151 main Calling3
91566 24.5033 main Calling1
91469 24.4774 main Calling2
3665 0.9808 main Calling5
1892 0.5063 no-vmlinux /no-vmlinux
916 0.2451 main main
10 0.0027 libc-2.12.so profil_counter
1 2.7e-04 ld-2.12.so _dl_cache_libcmp
1 2.7e-04 ld-2.12.so _dl_relocate_object
1 2.7e-04 ld-2.12.so _dl_sysdep_start
1 2.7e-04 ld-2.12.so strcmp
1 2.7e-04 libc-2.12.so __libc_fini
1 2.7e-04 libc-2.12.so _dl_addr
1 2.7e-04 libc-2.12.so _int_malloc
1 2.7e-04 libc-2.12.so exit
Oprofile report2
Using /var/lib/oprofile/samples/ for samples directory.
warning: /no-vmlinux could not be found.
CPU: AMD64 family15h, speed 2.2e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (CPU Clocks not Halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % image name symbol name
92254 24.7719 main Calling4
91482 24.5646 main Calling1
91480 24.5641 main Calling3
91340 24.5265 main Calling2
3658 0.9822 main Calling5
1270 0.3410 no-vmlinux /no-vmlinux
916 0.2460 main main
6 0.0016 libc-2.12.so profil_counter
1 2.7e-04 ld-2.12.so _dl_lookup_symbol_x
1 2.7e-04 ld-2.12.so _dl_setup_hash
1 2.7e-04 ld-2.12.so _dl_sysdep_start
1 2.7e-04 ld-2.12.so bcmp
1 2.7e-04 libc-2.12.so __mcount_internal
1 2.7e-04 libc-2.12.so _dl_addr
1 2.7e-04 libc-2.12.so _int_free
1 2.7e-04 libc-2.12.so mcount
Can anyone tell me why this happens? What are the possible causes?
How can I avoid this situation, so that I get consistent profiling results?
I wouldn't be concerned about subsequent reports differing. Reports can vary drastically depending on how the program is executed. Moreover, it's difficult to say much about what occurs between the two profiles. Depending on what other processes are running, both your system's cache and TLB will most certainly be in a different state than they were during the first profile. Unless you can ensure a controlled machine state, don't expect consistent results.
It's simple to understand why reports from the two tools don't agree: the tools are fundamentally different. OProfile is a sampling-based profiler that, in essence, periodically interrupts the CPU. gprof is instrumentation-based; the instrumentation must be compiled into your program, which produces a different binary than would otherwise run had gprof not been used. As a result, gprof will overestimate timing. Use OProfile for CPU-bound processes, and gprof for I/O-bound processes.
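(For completeness, the two workflows differ accordingly: gprof needs the program rebuilt with instrumentation, while OProfile samples the unmodified binary. Something like the following, though the OProfile commands differ between older opcontrol-based and newer operf-based releases:

gcc -pg -O0 main.c -o main.exe && ./main.exe && gprof main.exe gmon.out
operf ./main.exe && opreport --symbols)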

Why doesn't perf report cache misses?

According to perf tutorials, perf stat is supposed to report cache misses using hardware counters. However, on my system (up-to-date Arch Linux), it doesn't:
[joel@panda goog]$ perf stat ./hash
Performance counter stats for './hash':
869.447863 task-clock # 0.997 CPUs utilized
92 context-switches # 0.106 K/sec
4 cpu-migrations # 0.005 K/sec
1,041 page-faults # 0.001 M/sec
2,628,646,296 cycles # 3.023 GHz
819,269,992 stalled-cycles-frontend # 31.17% frontend cycles idle
132,355,435 stalled-cycles-backend # 5.04% backend cycles idle
4,515,152,198 instructions # 1.72 insns per cycle
# 0.18 stalled cycles per insn
1,060,739,808 branches # 1220.015 M/sec
2,653,157 branch-misses # 0.25% of all branches
0.871766141 seconds time elapsed
What am I missing? I already searched the man page and the web, but didn't find anything obvious.
Edit: my CPU is an Intel i5 2300K, if that matters.
On my system, an Intel Xeon X5570 @ 2.93 GHz, I was able to get perf stat to report cache references and misses by requesting those events explicitly, like this:
perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations sleep 5
Performance counter stats for 'sleep 5':
10573 cache-references
1949 cache-misses # 18.434 % of all cache refs
1077328 cycles # 0.000 GHz
715248 instructions # 0.66 insns per cycle
151188 branches
154 faults
0 migrations
5.002776842 seconds time elapsed
The default set of events did not include cache events, which matches your results; I don't know why:
perf stat -B sleep 5
Performance counter stats for 'sleep 5':
0.344308 task-clock # 0.000 CPUs utilized
1 context-switches # 0.003 M/sec
0 CPU-migrations # 0.000 M/sec
154 page-faults # 0.447 M/sec
977183 cycles # 2.838 GHz
586878 stalled-cycles-frontend # 60.06% frontend cycles idle
430497 stalled-cycles-backend # 44.05% backend cycles idle
720815 instructions # 0.74 insns per cycle
# 0.81 stalled cycles per insn
152217 branches # 442.095 M/sec
7646 branch-misses # 5.02% of all branches
5.002763199 seconds time elapsed
In the latest source code, the default event list still does not include cache-misses and cache-references:
struct perf_event_attr default_attrs[] = {
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
{ .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CPU_CYCLES },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_INSTRUCTIONS },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },
};
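(Presumably, cache counting could be restored by adding the corresponding entries, which do exist as generic hardware events, to that array and rebuilding perf:

{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CACHE_REFERENCES },
{ .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CACHE_MISSES },

but requesting them with -e is the simpler route.)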
So the man page and most pages on the web are out of date, as of now.
I spent some time trying to understand perf, and found the cache misses by first recording and then reporting the data (both are perf subcommands).
To see a list of events:
perf list
For example, in order to check last-level-cache load misses, you need to use the event LLC-load-misses, like this:
perf record -e LLC-load-misses ./your_program
then report the results
perf report -v
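If you only want totals rather than per-symbol samples, the same events should also work with perf stat, e.g. perf stat -e LLC-loads,LLC-load-misses ./your_program (event availability varies by CPU).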
