OProfile and gprof output varies for the same code - Linux

I am running my code on an AMD Opteron 6270 machine. The OS is CentOS 6.2.
I have written a simple program:
#include <stdio.h>
#include <stdlib.h>

int Calling  (long a);
int Calling1 (long a);
int Calling2 (long a);
int Calling3 (long a);
int Calling4 (long a);
int Calling5 (long a);

int main()
{
    long a, b = 0;
    printf("hi");
    for (a = 0; a < 10000000; a++) b++;
    b = Calling(a);
    b = Calling5(a);
    b = Calling4(a);
    return 0;
}

int Calling(long a)
{
    long b = 0;
    for (a = 0; a < 100; a++) b = Calling1(a);
    return 0;
}

int Calling1(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling2(a);
    return 0;
}

int Calling2(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling3(a);
    return 0;
}

int Calling3(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = Calling4(a);
    return 0;
}

int Calling4(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    return 0;
}

int Calling5(long a)
{
    long b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    for (a = 0; a < 10000000; a++) b++;
    b = 0;
    return 0;
}
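For reference, gprof requires the binary to be built with -pg; the build and profile steps were presumably along these lines (the exact commands are not shown in the question, and the OProfile runs would have used opcontrol/opreport separately):
/* Presumed build & gprof steps (not shown in the question):
   gcc -pg -O0 main.c -o main.exe     # instrument for gprof
   ./main.exe                         # run; writes gmon.out
   gprof ./main.exe gmon.out          # prints the flat profiles below
*/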
While profiling this code with gprof and OProfile, I got different reports. Say I run main.exe twice with gprof:
1st report with gprof:
Flat profile:
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 24.80      2.96     2.96      101     0.03     0.03  Calling4
 24.71      5.91     2.95      100     0.03     0.12  Calling1
 24.63      8.84     2.94      100     0.03     0.06  Calling3
 23.78     11.68     2.84      100     0.03     0.09  Calling2
  1.01     11.80     0.12        1     0.12     0.12  Calling5
  0.34     11.84     0.04                             main
  0.00     11.84     0.00        1     0.00    11.65  Calling
2nd report with gprof:
Flat profile:
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 25.13      2.99     2.99      100     0.03     0.12  Calling1
 24.88      5.95     2.96      101     0.03     0.03  Calling4
 24.80      8.89     2.95      100     0.03     0.06  Calling3
 23.48     11.69     2.79      100     0.03     0.09  Calling2
  1.02     11.81     0.12        1     0.12     0.12  Calling5
  0.17     11.83     0.02                             main
  0.00     11.83     0.00        1     0.00    11.66  Calling
Both reports differ, and every time I run main.exe I get a different profiling report.
When I tried OProfile I also got different results:
OProfile report 1:
Using /var/lib/oprofile/samples/ for samples directory.
warning: /no-vmlinux could not be found.
CPU: AMD64 family15h, speed 2.2e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (CPU Clocks not Halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % image name symbol name
92552 24.7672 main Calling4
91610 24.5151 main Calling3
91566 24.5033 main Calling1
91469 24.4774 main Calling2
3665 0.9808 main Calling5
1892 0.5063 no-vmlinux /no-vmlinux
916 0.2451 main main
10 0.0027 libc-2.12.so profil_counter
1 2.7e-04 ld-2.12.so _dl_cache_libcmp
1 2.7e-04 ld-2.12.so _dl_relocate_object
1 2.7e-04 ld-2.12.so _dl_sysdep_start
1 2.7e-04 ld-2.12.so strcmp
1 2.7e-04 libc-2.12.so __libc_fini
1 2.7e-04 libc-2.12.so _dl_addr
1 2.7e-04 libc-2.12.so _int_malloc
1 2.7e-04 libc-2.12.so exit
OProfile report 2:
Using /var/lib/oprofile/samples/ for samples directory.
warning: /no-vmlinux could not be found.
CPU: AMD64 family15h, speed 2.2e+06 MHz (estimated)
Counted CPU_CLK_UNHALTED events (CPU Clocks not Halted) with a unit mask of 0x00 (No unit mask) count 100000
samples % image name symbol name
92254 24.7719 main Calling4
91482 24.5646 main Calling1
91480 24.5641 main Calling3
91340 24.5265 main Calling2
3658 0.9822 main Calling5
1270 0.3410 no-vmlinux /no-vmlinux
916 0.2460 main main
6 0.0016 libc-2.12.so profil_counter
1 2.7e-04 ld-2.12.so _dl_lookup_symbol_x
1 2.7e-04 ld-2.12.so _dl_setup_hash
1 2.7e-04 ld-2.12.so _dl_sysdep_start
1 2.7e-04 ld-2.12.so bcmp
1 2.7e-04 libc-2.12.so __mcount_internal
1 2.7e-04 libc-2.12.so _dl_addr
1 2.7e-04 libc-2.12.so _int_free
1 2.7e-04 libc-2.12.so mcount
Can anyone tell me why this happens? What are the possible causes?
How can I avoid this situation so that I get consistent profiling results?

I wouldn't be concerned about subsequent reports differing. Reports can vary drastically depending on how the program is executed. Moreover, it's difficult to say much about what occurs between the two profiles. Depending on what other processes are running, both your system's cache and TLB will most certainly be in a different state than they were during the first profile. Unless you can ensure a controlled machine state, don't expect consistent results.
It's also easy to see why reports from the two tools don't agree: they are fundamentally different. OProfile is a sampling-based profiler that, in essence, periodically interrupts the CPU. gprof is instrumentation-based; the instrumentation must be compiled into your program, which produces a different binary than would otherwise have run had gprof not been used. As a result, gprof will overestimate timings. Use OProfile for CPU-bound processes, and gprof for I/O-bound processes.
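To make the sampling idea concrete, here is a toy sketch of my own (not how OProfile itself is implemented): ITIMER_PROF delivers SIGPROF periodically while the process is consuming CPU time, and the handler simply counts how often the program was caught running; a real sampling profiler records the interrupted program counter instead of only counting.
/* Toy sampling sketch; compile with: gcc -O0 sampling_toy.c -o sampling_toy */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t samples;

static void on_sigprof(int sig) { (void)sig; samples++; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigprof;
    sigaction(SIGPROF, &sa, NULL);

    struct itimerval it = { { 0, 10000 }, { 0, 10000 } };  /* fire every 10 ms of CPU time */
    setitimer(ITIMER_PROF, &it, NULL);

    volatile long b = 0;                                   /* busy work to be "profiled"   */
    for (long a = 0; a < 2000000000L; a++) b++;

    printf("samples taken: %d\n", (int)samples);
    return 0;
}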

Related

Perf output is less than the number of actual instructions

I tried to count the number of instructions of an add-loop application on a RISC-V FPGA, using a very simple RV32IM core running a Linux 5.4.0 buildroot.
add.c:
#include <stdio.h>

int main()
{
    int a = 0;
    for (int i = 0; i < 1024*1024; i++)
        a++;
    printf("RESULT: %d\n", a);
    return a;
}
I used the -O0 compile option so that the loop really loops, and the resulting disassembly is the following:
000103c8 <main>:
103c8: fe010113 addi sp,sp,-32
103cc: 00812e23 sw s0,28(sp)
103d0: 02010413 addi s0,sp,32
103d4: fe042623 sw zero,-20(s0)
103d8: fe042423 sw zero,-24(s0)
103dc: 01c0006f j 103f8 <main+0x30>
103e0: fec42783 lw a5,-20(s0)
103e4: 00178793 addi a5,a5,1 # 12001 <__TMC_END__+0x1>
103e8: fef42623 sw a5,-20(s0)
103ec: fe842783 lw a5,-24(s0)
103f0: 00178793 addi a5,a5,1
103f4: fef42423 sw a5,-24(s0)
103f8: fe842703 lw a4,-24(s0)
103fc: 001007b7 lui a5,0x100
10400: fef740e3 blt a4,a5,103e0 <main+0x18>
10404: fec42783 lw a5,-20(s0)
10408: 00078513 mv a0,a5
1040c: 01c12403 lw s0,28(sp)
10410: 02010113 addi sp,sp,32
10414: 00008067 ret
As you can see, the application loops from 103e0 to 10400, which is 9 instructions, so the total number of instructions must be at least 9 * 1024^2 = 9,437,184.
But the result of perf stat is pretty weird:
RESULT: 1048576

 Performance counter stats for './add.out':

        3170.45 msec task-clock          #    0.841 CPUs utilized
             20      context-switches    #    0.006 K/sec
              0      cpu-migrations      #    0.000 K/sec
             38      page-faults         #    0.012 K/sec
      156192046      cycles              #    0.049 GHz              (11.17%)
        8482441      instructions        #    0.05  insn per cycle   (11.12%)
        1145775      branches            #    0.361 M/sec            (11.25%)

    3.771031341 seconds time elapsed

    0.075933000 seconds user
    3.559385000 seconds sys
The total number of instructions perf counted is lower than 9 * 1024^2; the difference is about 10%.
How is this happening? I would expect perf's count to be larger than that, because perf measures not only add.out itself but also the overhead of perf and of context switching.

C/C++ MPI speedup is not as expected

I am trying to write an MPI application to speed up a math algorithm on a computer cluster. Before that I am doing some benchmarking, but the first results are not as good as expected.
The test application scales roughly linearly up to 4 cores, but 5 and 6 cores do not speed it up any further. I am testing on an Odroid N2 platform; it has 6 cores, and nproc says 6 cores are available.
Am I missing some kind of configuration? Or is my code not well prepared (it is based on one of the basic MPI examples)?
Is there any response time or synchronization time that should be considered?
Here are some measurements from my MPI-based application. I measured the total calculation time of one function.
1 core:  0.838052 sec
2 cores: 0.438483 sec
3 cores: 0.405501 sec
4 cores: 0.416391 sec
5 cores: 0.514472 sec
6 cores: 0.435128 sec
12 cores (4 cores from 3 N2 boards): 0.06867 sec
18 cores (6 cores from 3 N2 boards): 0.152759 sec
I did a benchmark with a Raspberry Pi 4 with 4 cores:
1 core:  1.51 sec
2 cores: 0.75 sec
3 cores: 0.69 sec
4 cores: 0.67 sec
And this is my benchmark application:
int MyFun(int *array, int num_elements, int j)
{
    int result_overall = 0;
    for (int i = 0; i < num_elements; i++)
    {
        result_overall += array[i] / 1000;
    }
    return result_overall;
}

int compute_sum(int *sub_sums, int num_of_cpu)
{
    int sum = 0;
    for (int i = 0; i < num_of_cpu; i++)
    {
        sum += sub_sums[i];
    }
    return sum;
}

// Measuring performance from main(): num_elements_per_proc is equal to 604800
if (world_rank == 0)
{
    startTime = std::chrono::high_resolution_clock::now();
}
// Compute the sum of your subset
int sub_sum = 0;
for (int j = 0; j < 1000; j++)
{
    sub_sum += MyFun(sub_intArray, num_elements_per_proc, world_rank);
}
MPI_Allgather(&sub_sum, 1, MPI_INT, sub_sums, 1, MPI_INT, MPI_COMM_WORLD);
int total_sum = compute_sum(sub_sums, num_of_cpu);
if (world_rank == 0)
{
    elapsedTime = std::chrono::high_resolution_clock::now() - startTime;
    timer = elapsedTime.count();
}
I build it with -O3 optimization level.
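On the measurement side, here is a minimal sketch in plain C (the MPI calls are the standard API, but the loop bound and names are only illustrative, and MPI_Allreduce stands in for the Allgather plus manual summation) of timing the compute part and the communication part separately on each rank, with a barrier so that all ranks start together:
/* Compile with: mpicc -O3 timing_sketch.c -o timing_sketch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long local = 0;
    MPI_Barrier(MPI_COMM_WORLD);              /* start all ranks together       */
    double t0 = MPI_Wtime();

    for (long i = 0; i < 100000000L; i++)     /* stand-in for the MyFun() work  */
        local += i % 1000;

    double t1 = MPI_Wtime();                  /* compute time on this rank      */

    long total = 0;
    MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    double t2 = MPI_Wtime();                  /* compute + communication        */

    printf("rank %d: compute %.3f s, comm %.3f s, total sum %ld\n",
           rank, t1 - t0, t2 - t1, total);
    MPI_Finalize();
    return 0;
}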
UPDATE:
new measures:
60480 samples, MyFun called 100000 times:
1.47 -> 0.74 -> 0.48 -> 0.36
6048 samples, MyFun called 1000000 times:
1.43 -> 0.7 -> 0.47 -> 0.35
6048 samples, MyFun called 10000000 times:
14.43 -> 7.08 -> 4.72 -> 3.59
UPDATE2:
By the way, when I list the CPU info in Linux I get this:
Is this normal?
The quad-core A73 cluster is not shown, and it says there are two sockets with 3 cores each.
And here is the CPU utilization with sar:
It seems like all of the cores are utilized.
I created some plots of the speedup:
It seems like calculating on float instead of int helps a bit, but cores 5 and 6 do not help much. And I think memory bandwidth is okay. Is this normal behaviour when utilizing all CPUs equally on a big.LITTLE architecture?

any performance penalty to be expected with thread_local?

Using C++11 and/or C11 thread_local, should we expect any performance penalty over non-thread_local storage on x86 (32- or 64-bit) Linux, Red Hat 5 or newer, with a recent g++/gcc (say, version 4 or newer) or clang?
On Ubuntu 18.04 x86_64 with gcc-8.3 (options -pthread -m{arch,tune}=native -std=gnu++17 -g -O3 -ffast-math -falign-{functions,loops}=64 -DNDEBUG) the difference is almost imperceptible:
#include <benchmark/benchmark.h>
struct A { static unsigned n; };
unsigned A::n = 0;
struct B { static thread_local unsigned n; };
thread_local unsigned B::n = 0;
template<class T>
void bm(benchmark::State& state) {
    for (auto _ : state)
        benchmark::DoNotOptimize(++T::n);
}
BENCHMARK_TEMPLATE(bm, A);
BENCHMARK_TEMPLATE(bm, B);
BENCHMARK_MAIN();
Results:
Run on (16 X 5000 MHz CPU s)
CPU Caches:
L1 Data 32 KiB (x8)
L1 Instruction 32 KiB (x8)
L2 Unified 256 KiB (x8)
L3 Unified 16384 KiB (x1)
Load Average: 0.59, 0.49, 0.38
-----------------------------------------------------
Benchmark Time CPU Iterations
-----------------------------------------------------
bm<A> 1.09 ns 1.09 ns 642390002
bm<B> 1.09 ns 1.09 ns 633963210
On x86_64, thread_local variables are accessed relative to the fs segment register. Instructions with such an addressing mode are often 2 bytes longer, so in theory they can take more time.
On other platforms it depends on how access to thread_local variables is implemented. See ELF Handling For Thread-Local Storage for more details.
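As an illustration, here is a minimal C sketch (the names are made up; compile with something like gcc -O2 -S and inspect the generated assembly) contrasting the two kinds of access; the exact instruction sequence depends on the TLS model and on -fPIC/-fPIE:
#include <stdio.h>

unsigned plain_n;                     /* ordinary global                            */
_Thread_local unsigned tls_n;         /* C11 thread-local (thread_local in C++11)   */

void bump_plain(void) { ++plain_n; }  /* typically: addl $1, plain_n(%rip)          */
void bump_tls(void)   { ++tls_n;   }  /* typically: addl $1, %fs:tls_n@tpoff        */

int main(void)
{
    bump_plain();
    bump_tls();
    printf("%u %u\n", plain_n, tls_n);
    return 0;
}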

fast conversion from string time to milliseconds

For a vector or list of times, I'd like to go from a string time, e.g. 12:34:56.789 to milliseconds from midnight, which would be equal to 45296789.
This is what I do now:
toms = function(time) {
sapply(strsplit(time, ':', fixed = T),
function(x) sum(as.numeric(x)*c(3600000,60000,1000)))
}
and would like to do it faster.
Here's an example data set for benchmarking:
times = rep('12:34:56.789', 1e6)
system.time(toms(times))
# user system elapsed
# 9.00 0.04 9.05
You could use the fasttime package, which seems to be about an order of magnitude faster.
library(fasttime)
fasttoms <- function(time) {
1000*unclass(fastPOSIXct(paste("1970-01-01",time)))
}
times <- rep('12:34:56.789', 1e6)
system.time(toms(times))
# user system elapsed
# 6.61 0.03 6.68
system.time(fasttoms(times))
# user system elapsed
# 0.53 0.00 0.53
identical(fasttoms(times),toms(times))
# [1] TRUE

Why is clock_gettime so erratic?

Intro
Section Old Question contains the initial question (Further Investigation and Conclusion have been added since).
Skip to the section Further Investigation below for a detailed comparison of the different timing methods (rdtsc, clock_gettime and QueryThreadCycleTime).
I believe the erratic behaviour of CGT can be attributed to either a buggy kernel or a buggy CPU (see section Conclusion).
The code used for testing is at the bottom of this question (see section Appendix).
Apologies for the length.
Old Question
In short: I am using clock_gettime to measure the execution time of many code segments. I am experiencing very inconsistent measurements between separate runs. The method has an extremely high standard deviation when compared to other methods (see Explanation below).
Question: Is there a reason why clock_gettime would give so inconsistent measurements when compared to other methods? Is there an alternative method with the same resolution that accounts for thread idle time?
Explanation: I am trying to profile a number of small parts of C code. The execution time of each of the code segments is not more than a couple of microseconds. In a single run, each of the code segments will execute some hundreds of times, which produces runs × hundreds of measurements.
I also have to measure only the time the thread actually spends executing (which is why rdtsc is not suitable). I also need a high resolution (which is why times is not suitable).
I've tried the following methods:
rdtsc (on Linux and Windows),
clock_gettime (with 'CLOCK_THREAD_CPUTIME_ID'; on Linux), and
QueryThreadCycleTime (on Windows).
Methodology: The analysis was performed on 25 runs. In each run, each code segment is repeated 101 times, so I have 2525 measurements per segment. Then I look at a histogram of the measurements and also calculate some basic statistics (mean, std. dev., median, mode, min, and max).
I do not present how I measured the 'similarity' of the three methods, but it simply involved a basic comparison of the proportion of time spent in each code segment ('proportion' meaning that the times are normalised). I then look at the pure differences in these proportions. This comparison showed that rdtsc, QTCT, and CGT all measure the same proportions when averaged over the 25 runs. However, the results below show that CGT has a very large standard deviation, which makes it unusable in my use case.
Results:
A comparison of clock_gettime with rdtsc for the same code segment (25 runs of 101 measurements = 2525 readings):
clock_gettime:
1881 measurements of 11 ns,
595 measurements were (distributed almost normally) between 3369 and 3414 ns,
2 measurements of 11680 ns,
1 measurement of 1506022 ns, and
the rest is between 900 and 5000 ns.
Min: 11 ns
Max: 1506022 ns
Mean: 1471.862 ns
Median: 11 ns
Mode: 11 ns
Stddev: 29991.034
rdtsc (note: no context switches occurred during this run, but if it happens, it usually results in just a single measurement of 30000 ticks or so):
1178 measurements between 274 and 325 ticks,
306 measurements between 326 and 375 ticks,
910 measurements between 376 and 425 ticks,
129 measurements between 426 and 990 ticks,
1 measurement of 1240 ticks, and
1 measurement of 1256 ticks.
Min: 274 ticks
Max: 1256 ticks
Mean: 355.806 ticks
Median: 333 ticks
Mode: 376 ticks
Stddev: 83.896
Discussion:
rdtsc gives very similar results on both Linux and Windows. It has an acceptable standard deviation--it is actually quite consistent/stable. However, it does not account for thread idle time. Therefore, context switches make the measurements erratic (on Windows I have observed this quite often: a code segment with an average of 1000 ticks or so will take ~30000 ticks every now and then--definitely because of pre-emption).
QueryThreadCycleTime gives very consistent measurements--i.e. much lower standard deviation when compared to rdtsc. When no context switches happen, this method is almost identical to rdtsc.
clock_gettime, on the other hand, is producing extremely inconsistent results (not just between runs, but also between measurements). The standard deviations are extreme (when compared to rdtsc).
I hope the statistics are okay. But what could be the reason for such a discrepancy in the measurements between the two methods? Of course, there is caching, CPU/core migration, and other things. But none of this should be responsible for any such differences between 'rdtsc' and 'clock_gettime'. What is going on?
Further Investigation
I have investigated this a bit further. I have done two things:
Measured the overhead of just calling clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t) (see code 1 in Appendix), and
in a plain loop, called clock_gettime and stored the readings into an array (see code 2 in the Appendix). I measure the delta times (the difference between successive measurements, which should roughly correspond to the overhead of one clock_gettime call).
I have measured it on two different computers with two different Linux Kernel versions:
CGT:
CPU: Core 2 Duo L9400 @ 1.86GHz
Kernel: Linux 2.6.40-4.fc15.i686 #1 SMP Fri Jul 29 18:54:39 UTC 2011 i686 i686 i386
Results:
Estimated clock_gettime overhead: between 690-710 ns
Delta times:
Average: 815.22 ns
Median: 713 ns
Mode: 709 ns
Min: 698 ns
Max: 23359 ns
Histogram (left-out ranges have frequencies of 0):
Range | Frequency
------------------+-----------
697 < x ≤ 800 -> 78111 <-- cached?
800 < x ≤ 1000 -> 16412
1000 < x ≤ 1500 -> 3
1500 < x ≤ 2000 -> 4836 <-- uncached?
2000 < x ≤ 3000 -> 305
3000 < x ≤ 5000 -> 161
5000 < x ≤ 10000 -> 105
10000 < x ≤ 15000 -> 53
15000 < x ≤ 20000 -> 8
20000 < x -> 5
CPU: 4 × Dual Core AMD Opteron Processor 275
Kernel: Linux 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 x86_64 GNU/Linux
Results:
Estimated clock_gettime overhead: between 279-283 ns
Delta times:
Average: 320.00 ns
Median: 1 ns
Mode: 1 ns
Min: 1 ns
Max: 3495529 ns
Histogram (left-out ranges have frequencies of 0):
Range | Frequency
--------------------+-----------
x ≤ 1 -> 86738 <-- cached?
282 < x ≤ 300 -> 13118 <-- uncached?
300 < x ≤ 440 -> 78
2000 < x ≤ 5000 -> 52
5000 < x ≤ 30000 -> 5
3000000 < x -> 8
RDTSC:
Related code rdtsc_delta.c and rdtsc_overhead.c.
CPU: Core 2 Duo L9400 @ 1.86GHz
Kernel: Linux 2.6.40-4.fc15.i686 #1 SMP Fri Jul 29 18:54:39 UTC 2011 i686 i686 i386
Results:
Estimated overhead: between 39-42 ticks
Delta times:
Average: 52.46 ticks
Median: 42 ticks
Mode: 42 ticks
Min: 35 ticks
Max: 28700 ticks
Histogram (left-out ranges have frequencies of 0):
Range | Frequency
------------------+-----------
34 < x ≤ 35 -> 16240 <-- cached?
41 < x ≤ 42 -> 63585 <-- uncached? (small difference)
48 < x ≤ 49 -> 19779 <-- uncached?
49 < x ≤ 120 -> 195
3125 < x ≤ 5000 -> 144
5000 < x ≤ 10000 -> 45
10000 < x ≤ 20000 -> 9
20000 < x -> 2
CPU: 4 × Dual Core AMD Opteron Processor 275
Kernel: Linux 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 x86_64 GNU/Linux
Results:
Estimated overhead: between 13.7-17.0 ticks
Delta times:
Average: 35.44 ticks
Median: 16 ticks
Mode: 16 ticks
Min: 14 ticks
Max: 16372 ticks
Histogram (left-out ranges have frequencies of 0):
Range | Frequency
------------------+-----------
13 < x ≤ 14 -> 192
14 < x ≤ 21 -> 78172 <-- cached?
21 < x ≤ 50 -> 10818
50 < x ≤ 103 -> 10624 <-- uncached?
5825 < x ≤ 6500 -> 88
6500 < x ≤ 8000 -> 88
8000 < x ≤ 10000 -> 11
10000 < x ≤ 15000 -> 4
15000 < x ≤ 16372 -> 2
QTCT:
Related code qtct_delta.c and qtct_overhead.c.
CPU: Core 2 6700 @ 2.66GHz
Kernel: Windows 7 64-bit
Results:
Estimated overhead: between 890-940 ticks
Delta times:
Average: 1057.30 ticks
Median: 890 ticks
Mode: 890 ticks
Min: 880 ticks
Max: 29400 ticks
Histogram (left-out ranges have frequencies of 0):
Range | Frequency
------------------+-----------
879 < x ≤ 890 -> 71347 <-- cached?
895 < x ≤ 1469 -> 844
1469 < x ≤ 1600 -> 27613 <-- uncached?
1600 < x ≤ 2000 -> 55
2000 < x ≤ 4000 -> 86
4000 < x ≤ 8000 -> 43
8000 < x ≤ 16000 -> 10
16000 < x -> 1
Conclusion
I believe the answer to my question would be a buggy implementation on my machine (the one with AMD CPUs with an old Linux kernel).
The CGT results from the AMD machine with the old kernel show some extreme readings. If we look at the delta times, we see that the most frequent delta is 1 ns, which would mean that a call to clock_gettime took less than a nanosecond! Moreover, it also produced a number of extraordinarily large deltas (of more than 3000000 ns)! This looks like erroneous behaviour. (Maybe unaccounted-for core migrations?)
Remarks:
The overhead of CGT and QTCT is quite big.
It is also difficult to account for their overhead, because CPU caching seems to make quite a big difference.
Maybe sticking to RDTSC, locking the process to one core, and assigning real-time priority is the most accurate way to tell how many cycles a piece of code used...
Appendix
Code 1: clock_gettime_overhead.c
#include <time.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h> /* for atoi() */
/* Compiled & executed with:
gcc clock_gettime_overhead.c -O0 -lrt -o clock_gettime_overhead
./clock_gettime_overhead 100000
*/
int main(int argc, char **args) {
struct timespec tstart, tend, dummy;
int n, N;
N = atoi(args[1]);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tstart);
for (n = 0; n < N; ++n) {
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &dummy);
}
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tend);
printf("Estimated overhead: %lld ns\n",
((int64_t) tend.tv_sec * 1000000000 + (int64_t) tend.tv_nsec
- ((int64_t) tstart.tv_sec * 1000000000
+ (int64_t) tstart.tv_nsec)) / N / 10);
return 0;
}
Code 2: clock_gettime_delta.c
#include <time.h>
#include <stdio.h>
#include <stdint.h>
/* Compiled & executed with:
gcc clock_gettime_delta.c -O0 -lrt -o clock_gettime_delta
./clock_gettime_delta > results
*/
#define N 100000
int main(int argc, char **args) {
struct timespec sample, results[N];
int n;
for (n = 0; n < N; ++n) {
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &sample);
results[n] = sample;
}
printf("%s\t%s\n", "Absolute time", "Delta");
for (n = 1; n < N; ++n) {
printf("%lld\t%lld\n",
(int64_t) results[n].tv_sec * 1000000000 +
(int64_t)results[n].tv_nsec,
(int64_t) results[n].tv_sec * 1000000000 +
(int64_t) results[n].tv_nsec -
((int64_t) results[n-1].tv_sec * 1000000000 +
(int64_t)results[n-1].tv_nsec));
}
return 0;
}
Code 3: rdtsc.h
#include <stdint.h> /* so the header is self-contained */

static uint64_t rdtsc() {
#if defined(__GNUC__)
# if defined(__i386__)
uint64_t x;
__asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
return x;
# elif defined(__x86_64__)
uint32_t hi, lo;
__asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
return ((uint64_t)lo) | ((uint64_t)hi << 32);
# else
# error Unsupported architecture.
# endif
#elif defined(_MSC_VER)
return __rdtsc();
#else
# error Other compilers not supported...
#endif
}
Code 4: rdtsc_delta.c
#include <stdio.h>
#include <stdint.h>
#include "rdtsc.h"
/* Compiled & executed with:
gcc rdtsc_delta.c -O0 -o rdtsc_delta
./rdtsc_delta > rdtsc_delta_results
Windows:
cl -Od rdtsc_delta.c
rdtsc_delta.exe > windows_rdtsc_delta_results
*/
#define N 100000
int main(int argc, char **args) {
uint64_t results[N];
int n;
for (n = 0; n < N; ++n) {
results[n] = rdtsc();
}
printf("%s\t%s\n", "Absolute time", "Delta");
for (n = 1; n < N; ++n) {
printf("%lld\t%lld\n", results[n], results[n] - results[n-1]);
}
return 0;
}
Code 5: rdtsc_overhead.c
#include <time.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h> /* for atoi() */
#include "rdtsc.h"
/* Compiled & executed with:
gcc rdtsc_overhead.c -O0 -lrt -o rdtsc_overhead
./rdtsc_overhead 1000000 > rdtsc_overhead_results
Windows:
cl -Od rdtsc_overhead.c
rdtsc_overhead.exe 1000000 > windows_rdtsc_overhead_results
*/
int main(int argc, char **args) {
uint64_t tstart, tend, dummy;
int n, N;
N = atoi(args[1]);
tstart = rdtsc();
for (n = 0; n < N; ++n) {
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
dummy = rdtsc();
}
tend = rdtsc();
printf("%G\n", (double)(tend - tstart)/N/10);
return 0;
}
Code 6: qtct_delta.c
#include <stdio.h>
#include <stdint.h>
#include <Windows.h>
/* Compiled & executed with:
cl -Od qtct_delta.c
qtct_delta.exe > windows_qtct_delta_results
*/
#define N 100000
int main(int argc, char **args) {
uint64_t ticks, results[N];
int n;
for (n = 0; n < N; ++n) {
QueryThreadCycleTime(GetCurrentThread(), &ticks);
results[n] = ticks;
}
printf("%s\t%s\n", "Absolute time", "Delta");
for (n = 1; n < N; ++n) {
printf("%lld\t%lld\n", results[n], results[n] - results[n-1]);
}
return 0;
}
Code 7: qtct_overhead.c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h> /* for atoi() */
#include <Windows.h>
/* Compiled & executed with:
cl -Od qtct_overhead.c
qtct_overhead.exe 1000000
*/
int main(int argc, char **args) {
uint64_t tstart, tend, ticks;
int n, N;
N = atoi(args[1]);
QueryThreadCycleTime(GetCurrentThread(), &tstart);
for (n = 0; n < N; ++n) {
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
QueryThreadCycleTime(GetCurrentThread(), &ticks);
}
QueryThreadCycleTime(GetCurrentThread(), &tend);
printf("%G\n", (double)(tend - tstart)/N/10);
return 0;
}
Well, as CLOCK_THREAD_CPUTIME_ID is implemented using rdtsc, it will likely suffer from the same problems. The manual page for clock_gettime says:
The CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID clocks
are realized on many platforms using timers from the CPUs (TSC on
i386, AR.ITC on Itanium). These registers may differ between CPUs and
as a consequence these clocks may return bogus results if a
process is migrated to another CPU.
That sounds like it might explain your problem. Maybe you should lock your process to one CPU to get stable results?
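A minimal sketch of that idea (Linux-specific; this is only an outline, and the SCHED_FIFO call needs the appropriate privileges):
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* run only on CPU 0           */
    if (sched_setaffinity(0, sizeof set, &set) != 0)
        perror("sched_setaffinity");

    struct sched_param sp = { .sched_priority = 1 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");          /* fails without CAP_SYS_NICE  */

    /* ... run the clock_gettime/rdtsc measurement loops here ... */
    return 0;
}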
When you have a highly skewed distribution that cannot go negative, you're going to see large discrepancies between mean, median, and mode.
The standard deviation is fairly meaningless for such a distribution.
It's usually a good idea to log-transform it.
That will make it "more normal".

Resources