Is clock_nanosleep affected by adjtime and NTP? - linux

Usually CLOCK_MONOTONIC_RAW is used for obtaining a clock that is not affected by NTP or adjtime(). However, clock_nanosleep() doesn't support CLOCK_MONOTONIC_RAW, and trying to use it anyway results in return code 95, Operation not supported (kernel 4.6.0).
Does clock_nanosleep() somehow take these clock adjustments into account or will the sleep time be affected by it?
What are the alternatives if a sleeping time is required which should not be affected by clock adjustments?

CLOCK_MONOTONIC_RAW has never had support for clock_nanosleep() since it was introduced in Linux 2.6.28, and it was explicitly fixed not to have this support in 2.6.32 because of oopses. The code has been refactored several times since then, but there is still no support for CLOCK_MONOTONIC_RAW in clock_nanosleep(), and I wasn't able to find any comments on why that is.
At the very minimum, the fact that there was a patch that explicitly disabled this functionality and it passed all reviews tells us that it doesn't look like a big problem for kernel developers. So, at the moment (4.7) the only things CLOCK_MONOTONIC_RAW supports are clock_getres() and clock_gettime().
Speaking of adjustments, as Rich already noted, CLOCK_MONOTONIC is subject to rate adjustments just by the nature of this clock. This happens because hrtimer_interrupt() runs its queues with an adjusted monotonic time value (ktime_get_update_offsets_now() -> timekeeping_get_ns() -> timekeeping_delta_to_ns(), which operates on xtime_nsec, and that is subject to adjustment). Actually, looking at this code, I'm probably no longer surprised that CLOCK_MONOTONIC_RAW has no support for clock_nanosleep() (and probably won't have it in the future): the adjusted monotonic clock seems to be the basis for hrtimers.
As for alternatives, I think there are none. nanosleep() uses the same CLOCK_MONOTONIC, setitimer() has its own set of timers, and alarm() uses ITIMER_REAL (same as setitimer()), which (with some indirection) is also our good old friend CLOCK_MONOTONIC. What else do we have? I guess nothing.
As an unrelated side note, it is interesting that if you call clock_nanosleep() with a relative interval (that is, without TIMER_ABSTIME), then CLOCK_REALTIME actually becomes a synonym for CLOCK_MONOTONIC.
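To make this concrete, here is a minimal sketch (my illustration, not from the original post) that tries clock_nanosleep() on CLOCK_MONOTONIC_RAW, prints the resulting error 95 (EOPNOTSUPP), and then falls back to an absolute sleep on CLOCK_MONOTONIC:
#define _GNU_SOURCE          /* CLOCK_MONOTONIC_RAW on some glibc versions */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct timespec req = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };  /* 100 ms */

    /* Expected to fail with 95 (EOPNOTSUPP), as described above. */
    int rc = clock_nanosleep(CLOCK_MONOTONIC_RAW, 0, &req, NULL);
    if (rc != 0)
        printf("CLOCK_MONOTONIC_RAW: error %d (%s)\n", rc, strerror(rc));

    /* This works, but the sleep length is subject to NTP/adjtime() rate corrections. */
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec += 1;
    rc = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
    printf("CLOCK_MONOTONIC absolute sleep returned %d\n", rc);
    return 0;
}
Note that clock_nanosleep() returns the error number directly rather than setting errno.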

Related

How to implement sleep utility in RISC-V?

I want to implement a sleep utility that receives a number of seconds as input and pauses for that many seconds, on an educational xv6 operating system that runs on RISC-V processors.
The OS already has a system call that takes a number of ticks and pauses: https://github.com/mit-pdos/xv6-riscv/blob/riscv/kernel/sysproc.c#L56
Timers are initialized using a timer vector: https://github.com/mit-pdos/xv6-riscv/blob/riscv/kernel/kernelvec.S#L93
The timer vector is set up using CLINT_MTIMECMP, which tells the timer controller when to raise the next interrupt.
What I do not understand is how to determine the time between ticks and how many ticks occur during one second.
Edit: A quick Google search for "qemu timebase riscv mtime" found a Google Groups thread which states that RDTIME is nanoseconds since boot and mtime is an emulated 10 MHz clock.
I haven't done a search to find the information you need, but I think I have some contextual information that would help you find it. I would recommend searching the QEMU documentation / code (probably via GitHub search) for how mtime and mtimecmp work.
In section 10.1 (Counter - Base Counter and Timers) of the specification [1], it is explained that the RDTIME pseudo-instruction should have some fixed tick rate that can be determined based on the implementation [2]. That tick rate would also be shared by mtimecmp and mtime as defined in the privileged specification [3].
I would presume the ticks used by the sleep system call are the same as these ticks from the specifications. In that case, xv6 is just a kernel and wouldn't itself define how many ticks per second there are. It seems that xv6 is made to run on top of QEMU, so the number of ticks per second should be defined somewhere in the QEMU code and might be documented.
From the old wiki for QEMU-riscv it should be clear that the SiFive CLINT defines the features xv6 needs to work, but I doubt that it specifies how to know the tick rate. Spike also supports the CLINT interface, so it may also be instructive to search for the code in Spike that handles it.
[1] I used version 20191213 of the unprivileged specification as a reference.
[2] "The RDTIME pseudoinstruction reads the low XLEN bits of the time CSR, which counts wall-clock real time that has passed from an arbitrary start time in the past. RDTIMEH is an RV32I-only instruction that reads bits 63–32 of the same real-time counter. The underlying 64-bit counter should never overflow in practice. The execution environment should provide a means of determining the period of the real-time counter (seconds/tick). The period must be constant. The real-time clocks of all harts in a single user application should be synchronized to within one tick of the real-time clock. The environment should provide a means to determine the accuracy of the clock."
[3] Section 3.1.10, Machine Timer Registers (mtime and mtimecmp): "Platforms provide a real-time counter, exposed as a memory-mapped machine-mode read-write register, mtime. mtime must run at constant frequency, and the platform must provide a mechanism for determining the timebase of mtime."
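Putting that together, a minimal sketch of the user-level utility on xv6-riscv might look like the following (my illustration, not part of the linked repository). The TICKS_PER_SECOND value is an assumption: in the stock QEMU setup the kernel programs a timer interval of 1000000 cycles of the 10 MHz mtime clock (about 1/10 second), so roughly 10 ticks per second; check kernel/start.c in your tree.
// user/sleep.c -- hypothetical sketch of the sleep utility
#include "kernel/types.h"
#include "user/user.h"

#define TICKS_PER_SECOND 10   // assumption: 10 MHz mtime / 1000000-cycle interval

int
main(int argc, char *argv[])
{
  if(argc != 2){
    fprintf(2, "usage: sleep seconds\n");
    exit(1);
  }
  int seconds = atoi(argv[1]);
  sleep(seconds * TICKS_PER_SECOND);   // sys_sleep counts kernel ticks, not seconds
  exit(0);
}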

CLOCK_MONOTONIC_RAW alternative in Linux versions older than 2.6.28

CLOCK_MONOTONIC_RAW is only supported as of Linux 2.6.28.
Is there another way I can get a monotonic time which isn't subject to NTP adjustments or the incremental adjustments performed by adjtime()?
I can't use CLOCK_MONOTONIC since it's affected by NTP and adjtime().
Take a closer look at CLOCK_MONOTONIC instead of just CLOCK_MONOTONIC_RAW. I wanted to use CLOCK_MONOTONIC_RAW with condition waits, but found that it was not supported (Fedora 25/Linux 4.10.17).
The situation is vaguely infuriating, but the upshot on Linux, to the best of my current understanding, is:
CLOCK_MONOTONIC_RAW is the closest you are going to get to a square-wave accumulator running at constant frequency. However, it is well suited to fewer purposes than you might think.
CLOCK_MONOTONIC is based on CLOCK_MONOTONIC_RAW with some gradual frequency corrections applied that allow it to eventually overtake or fall behind some other clock reference. NTP and adjtime can both make these corrections. However, to avoid doing things like breaking the hell out of software builds, the clock is still guaranteed to monotonically advance:
"The adjustment that adjtime() makes to the clock is carried out in such a manner that the clock is always monotonically increasing."
--adjtime man page
"You lying bastard." --me
Yup-- that was the plan, but bugs exist in kernel versions before 2.6.32.19; see discussion here: https://stackoverflow.com/a/3657433/3005946 which includes a link to a patch, if that affects you. It's hard for me to tell what the maximum error from that bug is (and I'd really, really like to know).
Even in the 4.x kernel, most POSIX synchronization objects don't seem to support CLOCK_MONOTONIC_RAW or CLOCK_MONOTONIC_COARSE. I have found this out the hard way. ALWAYS error-check your *_setclock calls.
POSIX semaphores (sem_t) don't support ANY monotonic clocks at all, which is infuriating. If you need this, you will have to roll your own using condition-waits. (As a bonus, doing this will allow you to have semaphores with negative initial levels, which can be handy.)
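For example, a roll-your-own counting semaphore along those lines might look like the sketch below (my illustration, assuming pthreads; the mono_sem_* names are made up). Note the error check on pthread_condattr_setclock(), per the warning above; the deadline passed to the wait is an absolute CLOCK_MONOTONIC time.
#include <errno.h>
#include <pthread.h>
#include <time.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    long            count;   /* may start negative if that suits your design */
} mono_sem_t;

static int mono_sem_init(mono_sem_t *s, long initial)
{
    pthread_condattr_t attr;
    int rc = pthread_condattr_init(&attr);
    if (rc) return rc;
    /* ALWAYS check this: CLOCK_MONOTONIC is widely supported here,
       CLOCK_MONOTONIC_RAW typically is not. */
    rc = pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    if (rc) { pthread_condattr_destroy(&attr); return rc; }
    pthread_mutex_init(&s->lock, NULL);
    rc = pthread_cond_init(&s->cond, &attr);
    pthread_condattr_destroy(&attr);
    s->count = initial;
    return rc;
}

/* Wait until count > 0 or the absolute CLOCK_MONOTONIC deadline passes. */
static int mono_sem_timedwait(mono_sem_t *s, const struct timespec *deadline)
{
    int rc = 0;
    pthread_mutex_lock(&s->lock);
    while (s->count <= 0 && rc == 0)
        rc = pthread_cond_timedwait(&s->cond, &s->lock, deadline);
    if (s->count > 0) {          /* take the slot even if we raced a timeout */
        s->count--;
        rc = 0;
    }
    pthread_mutex_unlock(&s->lock);
    return rc;                   /* 0 on success, ETIMEDOUT on timeout */
}

static void mono_sem_post(mono_sem_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}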
If you're just trying to keep a synchronization object's wait-function from deadlocking forever, and you've got something like a three-second bailout, you can just use CLOCK_MONOTONIC and call it a day-- the adjustments CLOCK_MONOTONIC is subject to are, effectively, jitter correction, well below your accuracy requirements. Even in the buggy implementations, CLOCK_MONOTONIC is not going to jump backwards an hour or something like that. Again, what happens instead is that things like adjtime tweak the frequency of the clock, so that it gradually overtakes, or falls behind, some other, parallel-running clock.
CLOCK_REALTIME is in fact CLOCK_MONOTONIC with some other correction factors applied. Or the other way around. Whatever, it's effectively the same thing. The important part is that, if there is any chance your application will change time zones (moving vehicle, aircraft, ship, cruise missile, rollerblading cyborg) or encounter administrative clock adjustments, you should definitely NOT use CLOCK_REALTIME, or anything that requires it. Same thing if the server uses daylight savings time instead of UTC. However, a stationary server using UTC may be able to get away with CLOCK_REALTIME for coarse deadlock-avoidance purposes, if necessary. Avoid this unless you are on a pre-2.6 kernel and have no choice, though.
CLOCK_MONOTONIC_RAW is NOT something you want to use for timestamping. It has not been jitter-corrected yet, etc. It may be appropriate for DACs and ADCs, etc., but is not what you want to use for logging events on human-discernible time scales. We have NTP for a reason.
Hope this is helpful, and I can certainly understand the frustration.

Lowering linux kernel timer frequency

When I run my Virtual Machine with Gentoo as guest, I have found that there is considerable overhead coming from the tick_periodic function. (This is the function which runs on every timer interrupt.) This function updates the global jiffies counter under a write_seqlock, which leads to the overhead.
Here's a grep of HZ and relevant stuff in my kernel config file.
sharan013#sitmac4:~$ cat /boot/config | egrep 'HZ|TIME'
# CONFIG_RCU_FAST_NO_HZ is not set
CONFIG_NO_HZ=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# CONFIG_MACHZ_WDT is not set
CONFIG_TIMERFD=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_X86_CYCLONE_TIMER=y
CONFIG_HPET_TIMER=y
Clearly it has set the configuration to 1000, but when I do sysconf(_SC_CLK_TCK), I get 100 as my timer frequency. So what is my system's timer frequency?
What I want to do is to bring the frequency down to 100, even lower if possible. Although it might affect the interactivity and precision of poll/select and the scheduler's time slice, I am ready to sacrifice these things for fewer timer interrupts, as it will speed up the VM.
When I tried to find out what has to be done, I read in one place that you can do so by changing the kernel configuration, elsewhere that adding divider=10 to the boot parameters does the job, and elsewhere that none of this is needed if you set CONFIG_HIGH_RES_TIMERS to achieve low-latency timers even without increasing the timer frequency, and that the same is possible with a tickless system (CONFIG_NO_HZ).
I am extremely confused about what the right approach is.
All I want is to bring down the timer interrupt to as low as possible.
Can I know the right way of doing this?
Don't worry! Your confusion is nothing but expected. Linux timer interrupts are very confusing and have had a long and quite exciting history.
CLK_TCK
Linux has no sysconf system call and glibc is just returning the constant value 100. Sorry.
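You can see this for yourself with a small check (my sketch, not from the original answer): the value reported is USER_HZ, the unit used by times() and /proc, and on x86 it stays at 100 regardless of the kernel's CONFIG_HZ.
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Reports USER_HZ, not the kernel's CONFIG_HZ. */
    printf("sysconf(_SC_CLK_TCK) = %ld\n", sysconf(_SC_CLK_TCK));
    return 0;
}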
HZ <-- what you probably want
When configuring your kernel you can choose a timer frequency of either 100Hz, 250Hz, 300Hz or 1000Hz. All of these are supported, and although 1000Hz is the default it's not always the best.
People will generally choose a high value when they value latency (a desktop or a webserver) and a low value when they value throughput (HPC).
CONFIG_HIGH_RES_TIMERS
This has nothing to do with timer interrupts, it's just a mechanism that allows you to have higher resolution timers. This basically means that timeouts on calls like select can be more accurate than 1/HZ seconds.
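A quick way to see what that buys you (again my sketch, not from the original answer): with CONFIG_HIGH_RES_TIMERS enabled, the reported resolution of CLOCK_MONOTONIC is typically 1 ns; without it, you would see roughly 1/HZ seconds.
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld.%09ld s\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}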
divider
This command line option is a patch provided by Red Hat. You can probably use this (if you're using Red Hat or CentOS), but I'd be careful. It's caused lots of bugs and you should probably just recompile with a different Hz value.
CONFIG_NO_HZ
This really doesn't do much; it's for power saving, and it means that the ticks will stop (or at least become less frequent) when nothing is executing. This is probably already enabled on your kernel. It doesn't make any difference when at least one task is runnable.
Frederic Weisbecker actually has a patch pending which generalizes this to cases where only a single task is running, but it's a little way off yet.

How can I prove __udelay() is working correctly on my ARM embedded system?

We have an ARM9 using the 3.2 kernel -- everything seems to work fine. Recently I was asked to add some code to add a 50ms pulse on some GPIO lines at startup. The pulse code is fine; I can see the lines go down and up, as expected. What does not work the way I expected is the udelay() function. Reading the docs makes me think the units are in microseconds, but as measured in the logic analyzer it was way too short. So I finally added this code to get 50ms.
// wait 50ms to be sure PCIE reset takes
for (i = 0; i < 6100; i++)   // measured on logic analyzer - seems wrong to me!!
{
        __udelay(2000);      // 2000 is max
}
I don't like it, but it works fine. There are some odd constants and instructions in the udelay code. Can someone enlighten me as to how this is supposed to work? This code is called after all the clocks are initialized, so everything else seems ok.
According to Linus in this thread:
If it's about 1% off, it's all fine. If somebody picked a delay value that is so sensitive to small errors in the delay that they notice that - or even notice something like 5% - then they have picked too short of a delay.
udelay() was never really meant to be some kind of precision instrument. Especially with CPU's running at different frequencies, we've historically had some rather wild fluctuation. The traditional busy loop ends up being affected not just by interrupts, but also by things like cache alignment (we used to inline it), and then later the TSC-based one obviously depended on TSC's being stable (which they weren't for a while).
So historically, we've seen udelay() being really off (ie 50% off etc), I wouldn't worry about things in the 1% range.
Linus
So it's not going to be perfect. It's going to be off. By how much depends on a lot of factors. Instead of using a for loop, consider using mdelay. It might be a bit more accurate. From the O'Reilly Linux Device Drivers book:
The udelay call should be called only for short time lapses because the precision of loops_per_second is only eight bits, and noticeable errors accumulate when calculating long delays. Even though the maximum allowable delay is nearly one second (since calculations overflow for longer delays), the suggested maximum value for udelay is 1000 microseconds (one millisecond). The function mdelay helps in cases where the delay must be longer than one millisecond.
It's also important to remember that udelay is a busy-waiting function (and thus mdelay is too); other tasks can't be run during the time lapse. You must therefore be very careful, especially with mdelay, and avoid using it unless there's no other way to meet your goal.
Currently, support for delays longer than a few microseconds and shorter than a timer tick is very inefficient. This is not usually an issue, because delays need to be just long enough to be noticed by humans or by the hardware. One hundredth of a second is a suitable precision for human-related time intervals, while one millisecond is a long enough delay for hardware activities.
Specifically the line "the suggested maximum value for udelay is 1000 microseconds (one millisecond)" sticks out at me since you state that 2000 is the max. From this document on inserting delays:
mdelay is macro wrapper around udelay, to account for possible overflow when passing large arguments to udelay
So it's possible you're running into an overflow error. Though I wouldn't normally consider 2000 to be a "large argument".
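If the goal is simply a dependable 50 ms pause rather than diagnosing __udelay() itself, one option (my suggestion, not from the quoted sources) is to let the helpers in linux/delay.h do the splitting:
#include <linux/delay.h>

/* wait 50 ms to be sure the PCIe reset takes */
mdelay(50);     /* busy-waits; wraps udelay() and chunks the delay itself */

/* or, if this code path is allowed to sleep: */
msleep(50);     /* schedules away; may sleep somewhat longer than requested */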
But if you need real accuracy in your timing, you'll need to deal with the offset like you have, roll your own or use a different kernel. For information on how to roll your own delay function using assembler or using hard real time kernels, see this article on High-resolution timing.
See also: Linux Kernel: udelay() returns too early?

Microsecond accurate (or better) process timing in Linux

I need a very accurate way to time parts of my program. I could use the regular high-resolution clock for this, but that will return wall-clock time, which is not what I need: I need the time spent running only my process.
I distinctly remember seeing a Linux kernel patch that would allow me to time my processes to nanosecond accuracy, except I forgot to bookmark it and I forgot the name of the patch as well :(.
I remember how it works though:
On every context switch, it will read out the value of a high-resolution clock, and add the delta of the last two values to the process time of the running process. This produces a high-resolution accurate view of the process' actual process time.
The regular process time is kept using the regular clock, which is I believe millisecond accurate (1000Hz), which is much too large for my purposes.
Does anyone know what kernel patch I'm talking about? I also remember it was like a word with a letter before or after it -- something like 'rtimer' or something, but I don't remember exactly.
(Other suggestions are welcome too)
The Completely Fair Scheduler suggested by Marko is not what I was looking for, but it looks promising. The problem I have with it is that the calls I can use to get process time are still not returning values that are granular enough.
times() is returning values 21, 22, in milliseconds.
clock() is returning values 21000, 22000, same granularity.
getrusage() is returning values like 210002, 22001 (and somesuch), they look to have a bit better accuracy but the values look conspicuously the same.
So now the problem I'm probably having is that the kernel has the information I need, I just don't know the system call that will return it.
If you are looking for this level of timing resolution, you are probably trying to do some micro-optimization. If that's the case, you should look at PAPI. Not only does it provide both wall-clock and virtual (process only) timing information, it also provides access to CPU event counters, which can be indispensable when you are trying to improve performance.
http://icl.cs.utk.edu/papi/
See this question for some more info.
Something I've used for such things is gettimeofday(). It provides a structure with seconds and microseconds. Call it before the code, and again after. Then just subtract the two structs using timersub, and you get the elapsed time: seconds from the tv_sec field and microseconds from the tv_usec field.
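A minimal sketch of that approach (my illustration; note that, like the suggestion above, it measures elapsed wall-clock time rather than per-process CPU time):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end, elapsed;

    gettimeofday(&start, NULL);
    /* ... code being timed ... */
    gettimeofday(&end, NULL);

    timersub(&end, &start, &elapsed);   /* elapsed = end - start */
    printf("elapsed: %ld.%06ld s\n",
           (long)elapsed.tv_sec, (long)elapsed.tv_usec);
    return 0;
}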
If you need very small time units for (I assume) testing the speed of your software, I would recommend just running the parts you want to time in a loop millions of times, taking the time before and after the loop, and calculating the average. A nice side effect of doing this (apart from not needing to figure out how to use nanoseconds) is that you get more consistent results, because the random overhead caused by the OS scheduler is averaged out.
Of course, unless your program actually needs to run millions of times per second, it's probably fast enough if you can't even measure a millisecond of running time.
I believe CFS (the Completely Fair Scheduler) is what you're looking for.
You can use the High Precision Event Timer (HPET) if you have a fairly recent 2.6 kernel. Check out Documentation/hpet.txt on how to use it. This solution is platform dependent though and I believe it is only available on newer x86 systems. HPET has at least a 10MHz timer so it should fit your requirements easily.
I believe several PowerPC implementations from Freescale support a cycle exact instruction counter as well. I used this a number of years ago to profile highly optimized code but I can't remember what it is called. I believe Freescale has a kernel patch you have to apply in order to access it from user space.
http://allmybrain.com/2008/06/10/timing-cc-code-on-linux/
might be of help to you (directly if you are doing it in C/C++, but I hope it will give you pointers even if you're not)... It claims to provide microsecond accuracy, which just passes your criterion. :)
I think I found the kernel patch I was looking for. Posting it here so I don't forget the link:
http://user.it.uu.se/~mikpe/linux/perfctr/
http://sourceforge.net/projects/perfctr/
Edit: It works for my purposes, though not very user-friendly.
Try the CPU's timestamp counter? Wikipedia seems to suggest using clock_gettime().
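For completeness, a hedged sketch of the clock_gettime() route: CLOCK_PROCESS_CPUTIME_ID (available since Linux 2.6.12) counts CPU time consumed by the calling process rather than wall-clock time, which is closer to what the question asks for.
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

    /* ... code being timed (a dummy loop here) ... */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 10 * 1000 * 1000UL; i++)
        sink += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                 + (end.tv_nsec - start.tv_nsec);
    printf("process CPU time: %lld ns\n", ns);
    return 0;
}
On older glibc versions you may need to link with -lrt for clock_gettime().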

Resources