I'm working on a Raspberry Pi based project that has a GPS module, which my boss wants me to use to set the system clock. However, we also need to take readings from different sensors while the GPS may not have a fix, and we need to know, to millisecond precision (a tolerance of 50-100ms is fine), when these readings were taken.
Personally I want a hardware RTC for this, but I've been instructed to work around it. My idea is to mark each reading with a relative time from system boot, since the system time is not reliable and is updated by NTP/satellite time when available (I can then fix up the records once a synchronized time is available, using the relative time).
So, how can I get a millisecond-precise uptime in Linux from user-space C code? Something like the jiffies value available in the kernel would be perfect.
I think you have to check the main controller (CPU) on your board. Usually there will be a hardware timer module integrated into the CPU, or a decrementer (DEC) register implemented in the CPU core.
If there is a hardware timer or DEC register on your CPU, use it to implement a periodic interrupt (the frequency can be 1000 Hz or whatever suits you). The interrupt handler can then notify/wake up the user-space process to do the necessary real-time work.
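For the user-space side, the usual way to get a jiffies-like, since-boot timestamp is clock_gettime(2) with CLOCK_MONOTONIC, which is not stepped when NTP or the GPS later corrects the wall clock. A minimal sketch:

/* Minimal sketch: stamp a reading with a since-boot value in milliseconds
 * using the standard POSIX CLOCK_MONOTONIC clock. */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t uptime_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)(ts.tv_nsec / 1000000L);
}

int main(void)
{
    printf("uptime: %llu ms\n", (unsigned long long)uptime_ms());
    return 0;
}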
I want to create a simulation of an actual device on an x86 Linux kernel. Part of this will involve simulating timings as closely as I can get. Based on some research it seems I will need at least microsecond-resolution timing. I understand that on a non-realtime system it won't be possible to get perfect timing, but I don't need perfect, just as close as I can get, perhaps by hacking around with thread scheduling / preemption options.
What I actually want to do is perform an action at a fixed interval, i.e. run some code every X µs. I've been researching the best ways to do this from a kernel driver, as well as whether it's possible to do it reasonably accurately from user mode (keeping the above paragraph in mind). One of the first things that caught my eye was the HPET timer, which can be programmed to generate interrupts based on programmable comparators. Unfortunately, it seems that on many chipsets it has been rather buggy in the past, and there's not much information on using it for anything other than obtaining a timestamp or using it as the main clock source. The Linux kernel provides an HPET driver that in the past seemed to offer both kernel-mode and user-mode interfaces, but in more recent kernel versions it seems to provide only a barely documented user-mode interface. I've also read about various other kernel functions and interfaces, such as the hrtimer interface and the various delay functions, though I'm having a bit of trouble understanding them and whether they are suited to my purpose.
Given my current use case, what are the best options I have for running recurring events at µs resolution from, say, a kernel driver? Obviously accuracy is my biggest criterion, but ease of use would be second.
Well, it's possible to achieve your accuracy in userspace -- clock_nanosleep is one good option, which has both relative and absolute modes. Since clock_nanosleep is based on hrtimers in the kernel, you may want to use an hrtimer directly if you'd like to implement it in kernel space.
However, to make the timer work accurately, there are two important things worth mentioning.
You should set the timer slack of your process (either by writing a nonzero value, in nanoseconds, to /proc/self/timerslack_ns, or via prctl(PR_SET_TIMERSLACK, ...)). This value is treated as the 'tolerance' of the timer.
The CPU's power management also matters here. The CPU has many different C-states, each of which has a different exit latency. So you need to configure your cpuidle module not to use C-states other than C0; e.g. on an Intel CPU you can simply write 1 to /sys/devices/system/cpu/cpu$c/cpuidle/state$s/disable to disable state $s of CPU $c, or just add idle=poll to your kernel command line to keep the CPU active (in C0) while the kernel is idle. NOTE that this significantly increases the machine's power consumption and will make the cooling fans noisy.
You can get a timer with delays under 10 microseconds if the two things mentioned above are configured correctly. There is a trade-off between latency and power consumption that you will have to make.
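A minimal user-space sketch putting both points together (the 100µs period and the 1ns slack are arbitrary example values):

/* Periodic microsecond-scale loop: absolute-mode clock_nanosleep plus
 * a reduced per-process timer slack. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/prctl.h>
#include <time.h>

#define PERIOD_NS 100000L   /* 100 us, example period */

int main(void)
{
    struct timespec next;

    /* shrink this process's timer slack to 1 ns */
    prctl(PR_SET_TIMERSLACK, 1UL, 0UL, 0UL, 0UL);

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* sleep until the absolute deadline, so errors don't accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* ... do the periodic work here ... */
    }
    return 0;
}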
I've read this link, Measure time in Linux - getrusage vs clock_gettime vs clock vs gettimeofday?, which provides a great breakdown of the timing functions available in C.
I'm very confused, however, as to how the different notions of "time" are maintained by the OS/hardware.
This is a quote from the Linux man pages:
RTCs should not be confused with the system clock, which is a
software clock maintained by the kernel and used to implement
gettimeofday(2) and time(2), as well as setting timestamps on files,
and so on. The system clock reports seconds and microseconds since a
start point, defined to be the POSIX Epoch: 1970-01-01 00:00:00 +0000
(UTC). (One common implementation counts timer interrupts, once per
"jiffy", at a frequency of 100, 250, or 1000 Hz.) That is, it is
supposed to report wall clock time, which RTCs also do.
A key difference between an RTC and the system clock is that RTCs run
even when the system is in a low power state (including "off"), and
the system clock can't. Until it is initialized, the system clock
can only report time since system boot ... not since the POSIX Epoch.
So at boot time, and after resuming from a system low power state,
the system clock will often be set to the current wall clock time
using an RTC. Systems without an RTC need to set the system clock
using another clock, maybe across the network or by entering that
data manually.
The Arch Linux docs indicate that the RTC and system clock are independent after bootup. My questions then are:
What causes the interrupts that increment the system clock?
If wall time is kept by the system clock, what does process time depend on?
Is any of this related to the CPU frequency, or is that a totally orthogonal time-keeping business?
On Linux, from the application point of view, the time(7) man page gives a good explanation.
Linux also provides the (Linux-specific) timerfd_create(2) and related syscalls.
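A minimal sketch of a periodic timerfd (the 10ms period is just an example); each expiration is consumed with read(2), so the descriptor also works under poll/epoll:

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }, /* 10 ms */
        .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
    };
    timerfd_settime(fd, 0, &its, NULL);

    for (int i = 0; i < 5; i++) {
        uint64_t expirations;
        read(fd, &expirations, sizeof expirations);   /* blocks until the timer fires */
        printf("tick (%llu expirations)\n", (unsigned long long)expirations);
    }
    close(fd);
    return 0;
}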
You should not care about interrupts (they are the kernel's business, and are configured dynamically, e.g. through application timers - timer_create(2), poll(2) and many other syscalls - and by the scheduler), but only about the time-related syscalls visible to applications.
Probably, if some process sets up a timer with a tiny period of e.g. 10ms, the kernel will raise the frequency of timer interrupts to 100Hz.
On recent kernels, you probably want the
CONFIG_HIGH_RES_TIMERS=y
CONFIG_TIMERFD=y
CONFIG_HPET_TIMER=y
CONFIG_PREEMPT=y
options in your kernel's .config file.
BTW, you could run cat /proc/interrupts twice, 10 seconds apart. On my laptop with a home-built 3.16 kernel (mostly idle processes, but a firefox browser and an emacs running), I'm getting about 25 interrupts per second. Try also cat /proc/timer_list and cat /proc/timer_stats.
Look also in the Documentation/timers/ directory of a recent (e.g. 3.16) Linux kernel tree.
The kernel probably uses hardware devices like (for PC laptops and desktops) the on-chip HPET or the TSC, which are much better than the old battery-backed RTC timer. Of course, the details are hardware specific, so on ARM-based Linux systems (e.g. your Android smartphone) it is different.
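To see the different notions of time side by side, a small sketch that prints a few of the clocks exposed by clock_gettime(2), together with their resolution:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id)
{
    struct timespec now, res;
    clock_gettime(id, &now);
    clock_getres(id, &res);
    printf("%-16s %11lld.%09ld s  (resolution %ld ns)\n",
           name, (long long)now.tv_sec, now.tv_nsec, res.tv_nsec);
}

int main(void)
{
    show("CLOCK_REALTIME",  CLOCK_REALTIME);   /* wall clock: set from the RTC/NTP, can jump */
    show("CLOCK_MONOTONIC", CLOCK_MONOTONIC);  /* since boot: never set, never steps backwards */
    show("CLOCK_BOOTTIME",  CLOCK_BOOTTIME);   /* like MONOTONIC, but also counts time spent suspended */
    return 0;
}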
I'm thinking about writing a simple 8088 emulator, but I can't understand how to connect the 8088 core to the video subsystem.
I'm thinking of a main loop like this:
while (TRUE)
{
    execute_cpu_cycles_per_scanline();
    paint_scanline();
}
Is this method suitable for CPU and graphics emulation? Are there other methods? Is there a good explanation of why I can't use different threads for the CPU and the video? How do emulators like QEMU or other x86 emulators deal with this problem?
Thanks.
Well, there are so many x86 processors, and as they have evolved over time the instruction-to-clock-period mapping has become somewhat non-deterministic. For older CPUs like the 8088 and 6502, if the documentation is accurate you can simply count the clock cycles for each instruction, and when the number of simulated clock cycles is equal to or greater than the scanline draw time (or some interrupt interval, or whatever), you do what you are suggesting. If you look at MAME, for example, or other emulators, that is basically how they do it: use the instructions' clock cycles to determine elapsed time, and from that manage emulated time in the peripherals.
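A hedged sketch of that cycle-budget loop; the helper names, the 4-clock instruction stub, and the 304-cycles-per-scanline / 262-lines-per-frame figures are illustrative placeholders, not timings to copy:

#include <stdio.h>

#define CYCLES_PER_SCANLINE 304   /* placeholder; depends on the machine being modelled */

static int fetch_decode_execute(void) { return 4; }    /* stub: pretend every op takes 4 clocks */
static void paint_scanline(int line) { (void)line; }   /* stub: the video side would draw here */

int main(void)
{
    long cycle_budget = 0;

    for (int line = 0; line < 262; line++) {            /* one NTSC-ish frame of scanlines */
        cycle_budget += CYCLES_PER_SCANLINE;
        while (cycle_budget > 0)
            cycle_budget -= fetch_decode_execute();      /* CPU runs until the line's budget is spent */
        paint_scanline(line);                            /* then the video side catches up */
    }
    printf("frame done\n");
    return 0;
}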
At the other extreme, let's say you want to run Linux on QEMU: you wouldn't want the emulated clock that tells the time to be determined by the execution of instructions; you would want to sync that clock with the hardware system clock. Likewise you might want to sync the refresh rates to the real hardware refresh rates rather than to simulated ones.
So those are the two extremes; you will need to do one or the other, or something in between.
I need to develop a Linux driver that generates a square wave with a period of about 1ms, on a MIPS platform (this is not i386).
I have tried some methods, but without success:
Using a timer/hrtimer --> the cycle comes out around 12ms and is unstable
I cannot use additional realtime packages such as RTLinux/RTAI, because they do not support MIPS
Using a kernel thread with a forever loop and udelay --> it takes too much of the CPU's resources --> performance is not acceptable
Can anyone help me with this? (Please!)
Thank you.
The Unix way would be not to do that at all. Maybe in the olden days, on single-task machines, you would have done it like this, but now, if you don't have a hardware circuit that gives you the proper frequency, you may never succeed: the hardware timers don't have the necessary resolution, and it can always happen that a more important task grabs your CPU time.
As FrankH said, the best solution involves relying on hardware. You should check your processor's reference manual to see if it has a timer.
I'll add this: if it happens to have an Output Compare or PWM subsystem (I'm not familiar with MIPS, but it's not at all uncommon in embedded devices) you can just write a few registers to set it all up, and then you don't need any more processor time.
It might be possible, but to get this from within Linux, the hardware must have certain characteristics:
you need a programmable timer device that can create an interrupt at a sufficiently high priority that other activity in the Linux kernel (such as scheduling, or even other interrupts) won't preempt or block the interrupt handler, and at sufficient granularity/frequency to meet your signal stability constraints
the "square wave" electrical line must also be programmable, and the operation that switches its state (a register write? a memory-mapped register write? a special CPU instruction? ...) must be guaranteed to be faster than the shortest cycle time used with the timer above (or else you could get "frequency moire")
If that's the case then your special timer device driver can toggle the line from within its high prio interrupt handler and create the square wave. Since it's both interrupt driven and separate from the normal timer interrupt sources / consumers (i.e. not shared - no latency from possibly dispatching multiple timer events per interrupt), you've got a much better chance of sufficient precision.
Since all this (the existence of a separately-programmable timer device, to start with) is hardware-specific, you need to start with the specs of your CPU/SoC/board and find out if there are multiple independent timers available.
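As a very rough sketch of that structure (the IRQ number, the register address and the timer programming below are hypothetical placeholders; the real values have to come from your SoC's reference manual):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>

#define SQW_IRQ        42           /* hypothetical IRQ of a dedicated hardware timer */
#define GPIO_DATA_REG  0x1f000000   /* hypothetical physical address of the GPIO data register */

static void __iomem *gpio_reg;
static u32 level;

static irqreturn_t sqw_isr(int irq, void *dev_id)
{
    level ^= 1;                      /* toggle the output line on every timer tick */
    writel(level, gpio_reg);
    /* acknowledge / re-arm the hardware timer here (device specific) */
    return IRQ_HANDLED;
}

static int __init sqw_init(void)
{
    int ret;

    gpio_reg = ioremap(GPIO_DATA_REG, 4);
    if (!gpio_reg)
        return -ENOMEM;

    /* program the dedicated timer for the desired half-period here
     * (device specific), then hook its interrupt: */
    ret = request_irq(SQW_IRQ, sqw_isr, 0, "sqw", NULL);
    if (ret)
        iounmap(gpio_reg);
    return ret;
}

static void __exit sqw_exit(void)
{
    free_irq(SQW_IRQ, NULL);
    iounmap(gpio_reg);
}

module_init(sqw_init);
module_exit(sqw_exit);
MODULE_LICENSE("GPL");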
I have a Fibre Optic link, with a proprietary Device Driver.
The link goes into a PCIe card, running on RHEL 5.2 (2.6.18-128~).
I have mmap'ed the interface on the card for setup and FIFO access etc, and these read/writes take a few µs to complete, so all good there.
But of course I cannot use this for interrupts, so I have to use the kernel module provided, with its user-space lib interface.
WaitForInterrupt(); // API lib interface to kernel module
// Interrupt occurs and am returned to my code in user space
time = CurrentTime() - LatchedTime(); // time to get to here
It takes around 70µs to return from WaitForInterrupt(). (The time the interrupt is raised is latched in the firmware; I read this, which as I say above takes ~2µs, and compare it against the current time in the firmware.)
What are the expected latencies between an interrupt occurring and the user-space API wait call returning?
What do network or other high-speed interfaces take?
500ms is many orders of magnitude larger than what a simple switch between user space and the kernel takes, but as someone mentioned in the comments, Linux is not a real-time OS, so there's no guarantee that 500ms "hiccups" won't show up now and then.
It's quite impossible to tell what the culprit is; the device driver could simply be trying to bundle up data to be more efficient.
That said, we've had endless trouble with some custom cards and their interactions with both APIC and ACPI, requiring a delicate balance of BIOS settings, which card goes into which PCI slot, and whether a particular video card screws everything up - likely caused by a dubious driver interacting with more or less buggy BIOSes/video cards.
If you're able to reliably exceed 500us on a system that's not heavily loaded, I think you're looking at a bad driver implementation (or its userspace wrapper/counterpart).
In my experience the latency to wake a user thread on interrupt should be less than 10us, though (as others have said) Linux provides no latency guarantees.
If you have a recent kernel, you can use the perf sched tool to measure the latency, and see where the time is being used. (500us does sound a tad on the high side, depending on your processor, how many tasks are running, ...)