I want to create a simulation of an actual device on an x86 Linux kernel. Part of this will involve simulating timings as close as possible to the real device. Based on some research it seems I will need at least microsecond-resolution timing. I understand that on a non-realtime system it won't be possible to get perfect timing, but I don't need perfect, just as close as I can get, perhaps by hacking around with thread scheduling / preemption options.
What I actually want to do is perform an action every interval, i.e. run some code every X µs. I've been researching the best ways to do this from a kernel driver, as well as whether it's possible to do it reasonably accurately from user mode (keeping the above paragraph in mind). One of the first things that caught my eye was the HPET timer, which can be programmed to generate interrupts based on programmable comparators. Unfortunately, it seems to have been rather buggy on many chipsets in the past, and there's not much information on using it for anything other than obtaining a timestamp or using it as the main clock source. The Linux kernel provides an HPET driver that in the past seemed to offer both kernel-mode and user-mode interfaces, but more recent kernel versions seem to provide only a barely documented user-mode interface. I've also read about various other kernel functions and interfaces, such as the hrtimer interface and the various delay functions, though I'm having a bit of trouble understanding them and whether they are suited to my purpose.
Given my use case, what are the best options I have for running recurring events at µs resolution from, say, a kernel driver? Accuracy is probably my biggest criterion, with ease of use second.
Well, it's possible to achieve your accuracy in userspace -- clock_nanosleep is one ideal option, and it has both a relative and an absolute mode. Since clock_nanosleep is itself based on hrtimers in kernel mode, you may want to use the hrtimer API if you'd like to implement this in kernel space.
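For the userspace route, a minimal sketch of such a periodic loop with clock_nanosleep in absolute mode might look like this (the 100 µs period is just an example value; error handling is omitted):

#include <time.h>

#define PERIOD_NS (100 * 1000)  /* example: 100 us period */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int main(void)
{
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        timespec_add_ns(&next, PERIOD_NS);
        /* absolute-mode sleep, so errors don't accumulate across iterations */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* do_periodic_work();  placeholder for the actual work */
    }
    return 0;
}

Using TIMER_ABSTIME against a precomputed deadline avoids the drift you would get by sleeping a relative period each time around the loop.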
However, to make the timer work accurately, there are two IMPORTANT things worth mentioning.
You should set the timer slack of your process to a small nonzero value in nanoseconds (either by writing it to /proc/self/timerslack_ns or via prctl(PR_SET_TIMERSLACK, ...)). The kernel treats this value as the 'tolerance' of the timer; the default is 50 µs.
CPU power management also matters here. The CPU has many different C-states, each with a different exit latency, so you need to configure the cpuidle subsystem not to use any C-state other than C0. For example, on an Intel CPU you can write 1 to /sys/devices/system/cpu/cpu$c/cpuidle/state$s/disable to disable state $s of CPU $c. Or just add idle=poll to your kernel command line to keep the CPU active (in C0) while the kernel is idle. NOTE that this significantly increases power consumption and will make the cooling fans noisier.
You can get a timer with delays under 10 microseconds if the two things mentioned above are configured correctly. There is a trade-off between latency and power consumption that you have to make.
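And since the question mentions a kernel driver: an untested minimal sketch against the long-standing hrtimer API could look roughly like the following (the names and the 100 µs period are placeholders; the callback runs in interrupt context, so it must be short and must not sleep):

#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer sim_timer;   /* hypothetical names for illustration */
static ktime_t sim_period;

static enum hrtimer_restart sim_timer_fn(struct hrtimer *timer)
{
    /* do the periodic work here -- must not sleep */
    hrtimer_forward_now(timer, sim_period);   /* re-arm for the next period */
    return HRTIMER_RESTART;
}

static int __init sim_init(void)
{
    sim_period = ktime_set(0, 100 * 1000);    /* example: 100 us */
    hrtimer_init(&sim_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    sim_timer.function = sim_timer_fn;
    hrtimer_start(&sim_timer, sim_period, HRTIMER_MODE_REL);
    return 0;
}

static void __exit sim_exit(void)
{
    hrtimer_cancel(&sim_timer);
}

module_init(sim_init);
module_exit(sim_exit);
MODULE_LICENSE("GPL");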
I have been searching for an appropriate method to measure the cost of various syscalls on Linux. There have been many questions raised on this topic in the past, but none provide a detailed description of how to measure the cost accurately. Most of the answers arbitrarily claim that a syscall costs 1-2 µs, or a few hundred cycles if everything is in the CPU cache.
System calls overhead
Syscall overhead
The naive way I can think of to measure the syscall cost is to use the rdtscp instruction around a syscall such as getpid(). However, this is insufficient for measuring the cost of open(), read() or write() calls accurately. I could modify the kernel and insert specific timer code around these functions to measure them, but that would require kernel changes, which I don't want to make. I wonder if there is a simpler solution that would let me measure it from userspace itself.
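For reference, the naive rdtscp-based measurement I have in mind looks roughly like this (using the compiler intrinsics, and syscall(SYS_getpid) so libc caching can't get in the way; the result is in TSC cycles, not wall-clock time):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <x86intrin.h>   /* __rdtscp, _mm_lfence */

int main(void)
{
    unsigned int aux;
    uint64_t start, end;

    start = __rdtscp(&aux);
    _mm_lfence();                 /* keep the syscall from being reordered before the read */
    syscall(SYS_getpid);          /* the syscall under test */
    _mm_lfence();
    end = __rdtscp(&aux);

    printf("getpid() took about %llu cycles\n",
           (unsigned long long)(end - start));
    return 0;
}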
Update: July 14:
After a lot of searching, I found the libMicro benchmark suite from Red Hat: https://github.com/redhat-performance/libMicro
However, it was created a while ago and I am wondering how good it still is. Of course, it does not use rdtscp, which adds some measurement error. Is there anything else missing from how this benchmark was created?
strace and perf are generally used to trace and measure these kinds of (kernel) operations. More specifically, perf can be used to generate flame graphs, enabling you to see detailed in-kernel call chains. However, remember that the permissions may need to be adjusted via /proc/sys/kernel/perf_event_paranoid.
I advise you to put the syscall in a loop, since precisely measuring the cost of a single syscall -- with work possibly deferred asynchronously to kernel threads -- is either very hard from user space or simply inaccurate (on a non-customized kernel).
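A minimal sketch of the loop approach, timing a large number of getpid syscalls with CLOCK_MONOTONIC and reporting the mean (the iteration count is an arbitrary example; the mean hides per-call variance but averages out the measurement overhead):

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define N 1000000L

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        syscall(SYS_getpid);               /* the syscall under test */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("average per call: %.1f ns\n", ns / N);
    return 0;
}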
Additional information:
strace works at microsecond granularity. Some of the POSIX clocks (see clock_gettime) can reach a granularity of 100 ns. Beyond that, rdtscp is AFAIK one of the most accurate options (though one should take care with the reference frequency). As for perf, it makes use of hardware performance counters and kernel events. You may need to configure your kernel so that tracepoints can be generated and properly tracked by perf. perf can track one specific process or the whole system.
From: https://rt.wiki.kernel.org/articles/f/r/e/Frequently_Asked_Questions_7407.html
Real-time only has an impact on the kernel; userspace does not notice the difference except for better real-time behavior.
Does this mean that if we write applications in user space, they won't get the hard real-time effect?
It depends what you mean by "real-time effect". Usually you want guaranteed timing behavior in a real-time system. You won't get that. However, your application will run more "smoothly" and will be more responsive. For many best-effort systems, that will be sufficient.
No, that's not what it means.
It means that with PREEMPT_RT you get lower maximum latencies in user space without needing to adapt your code or use additional libraries/tools. In practice: PREEMPT_RT doesn't require user-level applications to use specific APIs.
The APIs within the kernel code, on the other hand, are changed significantly (e.g., most spinlocks become sleeping mutexes, etc.).
By the way, keep in mind that PREEMPT_RT reduces the maximum latency experienced by a task, but system throughput will be lower (e.g., due to more context switches) and the average latency will likely increase.
I believe that question is best answered in context -- asking whether there are any APIs introduced by that specific patchset that application authors can use -- and none are added by this patchset. You won't need to recompile your application and there is no benefit to recompiling. You also won't be locked into any specific API.
If you have a well-written userspace application that relies on being able to run as soon as possible when hardware conditions dictate it should respond, then yes, these patches can help. But you can still write poor applications that prevent good real-time behavior, and the patchset cannot help you there.
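For what it's worth, "well-written" here usually implies the standard real-time setup -- a real-time scheduling class plus locked memory -- which needs no PREEMPT_RT-specific API at all. A minimal sketch (the priority value 80 is an arbitrary example; this needs root or CAP_SYS_NICE):

#include <sched.h>
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };   /* example priority */

    /* run under the real-time FIFO scheduler */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* avoid page faults stalling the time-critical path */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* ... time-critical work ... */
    return 0;
}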
It means that the real-time patch modifies parts of the kernel code, and the effect of those modifications is a more finely-grained preemptible kernel.
All programs in user space will benefit from the real-time preemptive kernel without any modification -- not even a recompile is needed!
The goal of the PREEMPT_RT patch is to turn Linux into a hard real-time system, and it is good enough for most tasks. But in safety-critical domains such as military and aerospace applications, Linux has nothing to offer, and you should use another RTOS such as VxWorks, QNX or INTEGRITY.
Some of the things I want to measure are very short, and I can only repeat them so many times if I don't run any of the setup/dispose code in the middle.
note: on Linux, reading /proc/stat
Not very portable, and you'll have to take great care to make it reliable, but the Time Stamp Counter definitely has the highest resolution available (it increments on every CPU clock tick).
The time stamp counter has, until recently, been an excellent high-resolution, low-overhead way of getting CPU timing information. With the advent of multi-core/hyperthreaded CPUs, systems with multiple CPUs, and "hibernating" operating systems, the TSC cannot be relied on to provide accurate results - unless great care is taken to correct the possible flaws: rate of tick and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. In such cases, programmers can only get reliable results by locking their code to a single CPU. Even then, the CPU speed may change due to power-saving measures taken by the OS or BIOS, or the system may be hibernated and later resumed (resetting the time stamp counter). In those latter cases, to stay relevant, the counter must be recalibrated periodically (according to the time resolution your application requires).
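If you do go the TSC route, the "locking their code to a single CPU" part mentioned above comes down to setting the CPU affinity. A minimal sketch (pinning to CPU 0 is an arbitrary choice):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* pin to CPU 0 (example choice) */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* ... TSC-based timing on a single core ... */
    return 0;
}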
There are some notes about Linux-specific solutions on that page, too:
Under Linux, similar functionality is provided by reading the value of the CLOCK_MONOTONIC clock using the POSIX clock_gettime function.
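Concretely, a minimal sketch of using CLOCK_MONOTONIC together with clock_getres (which reports the clock's resolution, typically 1 ns on kernels with high-resolution timers):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, a, b;

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &a);
    /* ... code being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &b);

    long long elapsed_ns = (b.tv_sec - a.tv_sec) * 1000000000LL
                         + (b.tv_nsec - a.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}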
I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux).
I have implemented QueryPerformanceCounter by having it return the microseconds since process start-up:
BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    /* currentTimeVal and startTimeVal are struct timeval; startTimeVal is captured at process start-up */
    gettimeofday(&currentTimeVal, NULL);
    performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);
    performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);
    return true;
}
This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, giving me a 64-bit variable that contains uSeconds since the program's start-up.
So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however.
Maybe. But you have bigger problems. gettimeofday() can produce incorrect timings if there are processes on your system that change the clock (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 µs. It can consequently jump forward and backward in time, depending on the processes running on your system. This effectively makes the answer to your question no.
You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from fewer issues caused by things like multi-core systems and external clock adjustments.
Also, look into the clock_getres() function.
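Put together, the shim from the question could keep its fixed 1 MHz "frequency" but read CLOCK_MONOTONIC instead of gettimeofday(). A minimal sketch, reusing the question's Win32 compatibility types and assuming a hypothetical init helper that captures the start time:

#include <time.h>

static struct timespec startTime;    /* filled once at start-up */

void InitPerformanceCounter(void)    /* hypothetical init helper */
{
    clock_gettime(CLOCK_MONOTONIC, &startTime);
}

BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);
    performanceCount->QuadPart = (now.tv_sec - startTime.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);                    /* to microseconds */
    performanceCount->QuadPart += (now.tv_nsec - startTime.tv_nsec) / 1000;
    return true;
}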
High Resolution, Low Overhead Timing for Intel Processors
If you're on Intel hardware, here's how to read the CPU's time stamp counter. It tells you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement.
Note that this is the number of CPU cycles. On linux you can get the CPU speed from /proc/cpuinfo and divide to get the number of seconds. Converting this to a double is quite handy.
When I run this on my box, I get
11867927879484732
11867927879692217
it took this long to call printf: 207485
Here's the Intel developer's guide that gives tons of detail.
#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ (
        "xorl %%eax, %%eax\n"
        "cpuid\n"                 /* serialize to prevent out-of-order execution */
        "rdtsc\n"                 /* read the time stamp counter into edx:eax */
        : "=a" (lo), "=d" (hi)
        :
        : "%ebx", "%ecx");
    return (uint64_t)hi << 32 | lo;
}

int main(void)
{
    uint64_t x, y;

    x = rdtsc();
    printf("%llu\n", (unsigned long long)x);
    y = rdtsc();
    printf("%llu\n", (unsigned long long)y);
    printf("it took this long to call printf: %llu\n", (unsigned long long)(y - x));
    return 0;
}
#Bernard:
I have to admit, most of your example went straight over my head. It does compile, and seems to work, though. Is this safe for SMP systems or SpeedStep?
That's a good question... I think the code's OK. From a practical standpoint, we use it in my company every day, and we run on a pretty wide array of boxes, everything from 2 to 8 cores. Of course, YMMV, etc., but it seems to be a reliable and low-overhead (because it doesn't make a context switch into system space) method of timing.
Generally how it works is:
Declare the block of code to be assembler (and volatile, so the optimizer will leave it alone).
Execute the CPUID instruction. In addition to getting some CPU information (which we don't do anything with), it synchronizes the CPU's execution pipeline so that the timings aren't affected by out-of-order execution.
Execute the rdtsc (read time stamp counter) instruction. This fetches the number of machine cycles executed since the processor was reset. This is a 64-bit value, so with current CPU speeds it will wrap around every 194 years or so. Interestingly, the original Pentium reference notes that it wraps around every 5800 years or so.
The last couple of lines store the values from the registers into the variables hi and lo, and put that into the 64-bit return value.
Specific notes:
Out-of-order execution can cause incorrect results, so we execute the "cpuid" instruction, which in addition to giving you some information about the CPU also synchronizes any out-of-order instruction execution.
Most OSes synchronize the counters on the CPUs when they start, so the answer is good to within a couple of nanoseconds.
The hibernating comment is probably true, but in practice you probably don't care about timings across hibernation boundaries.
Regarding SpeedStep: newer Intel CPUs compensate for the speed changes and return an adjusted count. I did a quick scan over some of the boxes on our network and found only one box that didn't have it: a Pentium 3 running some old database server. (These are Linux boxes, so I checked with: grep constant_tsc /proc/cpuinfo)
I'm not sure about the AMD CPUs; we're primarily an Intel shop, although I know some of our low-level systems gurus did an AMD evaluation.
Hope this satisfies your curiosity; it's an interesting and (IMHO) under-studied area of programming. You know when Jeff and Joel were talking about whether or not a programmer should know C? I was shouting at them, "hey, forget that high-level C stuff... assembler is what you should learn if you want to know what the computer is doing!"
You may be interested in the Linux FAQ for clock_gettime(CLOCK_REALTIME).
Wine actually uses gettimeofday() to implement QueryPerformanceCounter(), and it is known to make many Windows games work on Linux and Mac.
It starts at http://source.winehq.org/source/dlls/kernel32/cpu.c#L312 which leads to http://source.winehq.org/source/dlls/ntdll/time.c#L448
So it says microseconds explicitly, but says the resolution of the system clock is unspecified. I suppose resolution in this context means the smallest amount by which it will ever be incremented?
The data structure is defined as having microseconds as a unit of measurement, but that doesn't mean that the clock or operating system is actually capable of measuring that finely.
As other people have suggested, gettimeofday() is bad because setting the system time can cause clock skew and throw off your calculation. clock_gettime(CLOCK_MONOTONIC) is what you want, and clock_getres() will tell you the precision of your clock.
The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate.
I obtained this answer from High Resolution Time Measurement and Timers, Part I
This answer mentions problems with the clock being adjusted. Both your problems guaranteeing tick units and the problems with the time being adjusted are solved in C++11 with the <chrono> library.
The clock std::chrono::steady_clock is guaranteed not to be adjusted, and furthermore it will advance at a constant rate relative to real time, so technologies like SpeedStep must not affect it.
You can get typesafe units by converting to one of the std::chrono::duration specializations, such as std::chrono::microseconds. With this type there's no ambiguity about the units used by the tick value. However, keep in mind that the clock doesn't necessarily have this resolution. You can convert a duration to attoseconds without actually having a clock that accurate.
From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc.
Reading the TSC is not reliable on SMP systems, since each CPU maintains its own counter and the counters are not guaranteed to be synchronized with one another.
I might suggest trying clock_gettime(CLOCK_REALTIME). The POSIX manual indicates that it should be implemented on all compliant systems. It can provide a nanosecond count, but you will probably want to check clock_getres(CLOCK_REALTIME) on your system to see what the actual resolution is.