Set Linux time to millisecond precision

I have an embedded Linux device that interfaces with another "master" device over a serial comm protocol. Periodically the master passes its date down to the slave device, because later the slave will return information to the master that needs to be accurately timestamped. However, the Linux 'date' command only sets the system date with one-second precision. That isn't enough for our uses.
Does anybody know how to set a Linux machine's time more precisely than 1 second?

The settimeofday(2) method given in other answers has a serious problem: it does exactly what you say you want. :)
The problem with directly changing a system's time, instantaneously, is that it can confuse programs that get the time of day before and after the change if the adjustment was negative. That is, they can perceive time to go backwards.
The fix for this is adjtime(3) which is simple and portable, or adjtimex(2) which is complicated, powerful and Linux-specific. Both of these calls use sophisticated algorithms to slowly adjust the system time over some period, forward only, until the desired change is achieved.
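For example, a minimal sketch of slewing the clock with adjtime(3); the 500 ms correction below is just an illustrative value, and the call needs the same privilege as settimeofday():

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* Ask the kernel to slew the clock forward by 500 ms; the correction
       is applied gradually, so observed time never jumps. */
    struct timeval delta = { .tv_sec = 0, .tv_usec = 500000 };
    struct timeval olddelta;

    if (adjtime(&delta, &olddelta) != 0) {
        perror("adjtime");
        return 1;
    }
    printf("adjustment still pending before this call: %ld.%06ld s\n",
           (long)olddelta.tv_sec, (long)olddelta.tv_usec);
    return 0;
}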
By the way, are you sure you aren't reinventing the wheel here? I recommend that you read Julien Ridoux and Darryl Veitch's ACM Queue paper Principles of Robust Timing over the Internet. You're working on embedded systems, so I would expect the ringing in Figure 5 to give you cold shivers. Can you say "damped oscillator?" adjtime() and adjtimex() use this troubled algorithm, so in some sense I am arguing against my own advice above, but the Mills algorithm is still better than the step adjustment non-algorithm. If you choose to implement RADclock instead, so much the better.

The settimeofday() system call takes and uses microsecond precision. You'll have to write a short program to use it, but that is quite straightforward.
#include <errno.h>
#include <sys/time.h>

struct timeval tv;
tv.tv_sec  = (some time_t value);
tv.tv_usec = (the number of microseconds after the second);
int rc = settimeofday(&tv, NULL);
if (rc)
    errormessage("error %d setting system time", errno);
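If it helps, here is a complete sketch along those lines that takes the timestamp as seconds and milliseconds on the command line; the argument format and program name are my own assumptions for illustration, so adapt the parsing to whatever your serial protocol actually delivers:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

/* Hypothetical usage: settime <seconds-since-epoch> <milliseconds> */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <seconds-since-epoch> <milliseconds>\n", argv[0]);
        return 1;
    }

    struct timeval tv;
    tv.tv_sec  = (time_t)strtoll(argv[1], NULL, 10);
    tv.tv_usec = (suseconds_t)(atol(argv[2]) * 1000);   /* milliseconds -> microseconds */

    if (settimeofday(&tv, NULL) != 0) {                  /* needs root / CAP_SYS_TIME */
        fprintf(stderr, "error %d (%s) setting system time\n", errno, strerror(errno));
        return 1;
    }
    return 0;
}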

You can use the settimeofday(2) system call; the interface supports microsecond resolution.
#include <sys/time.h>
int gettimeofday(struct timeval *tv, struct timezone *tz);
int settimeofday(const struct timeval *tv, const struct timezone *tz);
struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};
You can use the clock_settime(2) system call; the interface provides multiple clocks and the interface supports nanosecond resolution.
#include <time.h>
int clock_getres(clockid_t clk_id, struct timespec *res);
int clock_gettime(clockid_t clk_id, struct timespec *tp);
int clock_settime(clockid_t clk_id, const struct timespec *tp);
struct timespec {
    time_t tv_sec;   /* seconds */
    long   tv_nsec;  /* nanoseconds */
};
CLOCK_REALTIME
    System-wide real-time clock. Setting this clock requires appropriate privileges.
CLOCK_MONOTONIC
    Clock that cannot be set and represents monotonic time since some unspecified starting point.
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
    Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments.
CLOCK_PROCESS_CPUTIME_ID
    High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
    Thread-specific CPU-time clock.
This interface provides the nicety of the clock_getres(2) call, which can tell you exactly what the resolution is -- just because the interface accepts nanoseconds doesn't mean it can actually support nanosecond-resolution. (I've got a fuzzy memory that 20 ns is about the limits of many systems but no references to support this.)
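As a rough sketch of how the two calls fit together (setting the clock needs root / CAP_SYS_TIME, and the target time below is just a placeholder value):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* Report what CLOCK_REALTIME can actually resolve. */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);

    /* Set the clock with sub-second precision (placeholder timestamp). */
    struct timespec tp = { .tv_sec = 1700000000, .tv_nsec = 250000000 }; /* ....250 s */
    if (clock_settime(CLOCK_REALTIME, &tp) != 0)
        perror("clock_settime");

    return 0;
}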

If you're running an IP-capable networking protocol over the serial link (something like, ooh, PPP for example), you can just run an ntpd on the "master" host, then sync time using ntpd or ntpdate on the embedded device. NTP will take care of you.

Related

Millisecond resolution timer on LinkIt 7688

I am developing for the LinkIt Smart 7688 device by Mediatek. I need to do some timekeeping in a userspace application where I need at least 10ms resolution (preferably 1ms).
However, every syscall I have tried returns values with only 1-second resolution: clock_gettime (I tried all the different clocks) and gettimeofday, which should provide sub-second resolution, do not.
Running dmesg on the target shows that the kernel timestamps its messages with sub-second resolution, so I conclude that a clock source with sub-second resolution is available. (I would be very surprised if this were not the case :) )
How do I get a timestamp with sub-second resolution on the Linkit Smart 7688 device?
Perhaps I am missing some kernel configuration that makes the correct clock source available to userspace? I have not been able to find one.
Do not use only the seconds returned by gettimeofday; use the microseconds (tv_usec) as well:
#include <sys/time.h>

struct timeval t0, t1;
gettimeofday(&t0, 0);
/* ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec - t0.tv_sec) * 1000000 + (t1.tv_usec - t0.tv_usec);
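If the calls really seem to return only whole seconds, a quick sanity check is something like the sketch below, which prints the reported clock resolution together with one reading from each API; if tv_nsec/tv_usec always come back zero, the problem is in the kernel clock-source configuration rather than in the syscalls themselves:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timespec res, t;
    struct timeval tv;

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &t);
    printf("clock_gettime: %ld.%09ld\n", (long)t.tv_sec, t.tv_nsec);

    gettimeofday(&tv, NULL);
    printf("gettimeofday:  %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);

    return 0;
}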

In general, on uClinux, is ioctl faster than writing to the /sys filesystem?

I have an embedded system I'm working with, and it currently uses the sysfs to control certain features.
However, there is a function that we would like to speed up, if possible.
I discovered that this subsystem also supports an ioctl interface, but before rewriting the code, I decided to search for which interface is generally faster on uClinux: sysfs or ioctl.
Does anybody understand both implementations well enough to give me a rough idea of the difference in overhead for each? I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls". Or "they are roughly the same because sysfs has a very simple interface".
Update 10/24/2013:
The specific case I'm currently doing is as follows:
int fd = open("/sys/power/state",O_WRONLY);
write( fd, "standby", 7 );
close( fd );
In kernel/power/main.c, the code that handles this write looks like:
static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
                           const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
        suspend_state_t state = PM_SUSPEND_STANDBY;
        const char * const *s;
#endif
        char *p;
        int len;
        int error = -EINVAL;

        p = memchr(buf, '\n', n);
        len = p ? p - buf : n;

        /* First, check if we are requested to hibernate */
        if (len == 7 && !strncmp(buf, "standby", len)) {
                error = enter_standby();
                goto Exit;
((( snip )))
Can this be sped up by moving to a custom ioctl() where the code to handle the ioctl call looks something like:
case SNAPSHOT_STANDBY:
        if (!data->frozen) {
                error = -EPERM;
                break;
        }
        error = enter_standby();
        break;
(so the ioctl() calls the same low-level function that the sysfs function did).
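For reference, the userspace side of such a custom ioctl might look roughly like the sketch below; the device node, ioctl number and magic are hypothetical, since they would be defined by your own driver:

#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define SNAPSHOT_STANDBY  _IO('3', 0x42)     /* hypothetical ioctl number */

int main(void)
{
    int fd = open("/dev/mydriver", O_RDWR);  /* hypothetical device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, SNAPSHOT_STANDBY) < 0)     /* ends up in enter_standby() */
        perror("ioctl");
    close(fd);
    return 0;
}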
If by sysfs you mean the sysfs() library call, notice this in man 2 sysfs:
NOTES
This System-V derived system call is obsolete; don't use it. On systems with /proc, the same information can be obtained via
/proc/filesystems; use that interface instead.
I can't recall noticing stuff that had an ioctl() and a sysfs interface, but probably they exist. I'd use the proc or sys handle anyway, since that tends to be less cryptic and more flexible.
If by sysfs you mean accessing files in /sys, that's the preferred method.
I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls".
Accessing procfs or sysfs files does not entail an I/O bottleneck because they are not real files -- they are kernel interfaces. So no, accessing this stuff through "the file layer" does not affect performance. This is a not uncommon misconception in Linux systems programming, I think. Programmers can be squeamish about system calls that aren't, well, system calls, and paranoid that opening a file will somehow be slower. Of course, file I/O in the ABI is just system calls anyway. What makes a normal (disk) file read slow is not the calls to open, read, write, whatever; it's the hardware bottleneck.
I always use low level descriptor based functions (open(), read()) instead of high level streams when doing this because at some point some experience led me to believe they were more reliable for this specifically (reading from /proc). I can't say whether that's definitively true.
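A minimal example of that style, reading /proc/uptime with the raw descriptor calls rather than stdio streams (the path is just a convenient pseudo-file to demonstrate with):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    int fd = open("/proc/uptime", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("/proc/uptime: %s", buf);
    }
    close(fd);
    return 0;
}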
So, the question was interesting enough that I built a couple of modules, one exposing an ioctl and one exposing a sysfs attribute; the ioctl implements only a 4-byte copy_from_user and nothing more, and the sysfs write handler does nothing at all.
Then I ran a couple of userspace tests of up to 1 million iterations each; here are the results:
time ./sysfs /sys/kernel/kobject_example/bar
real 0m0.427s
user 0m0.056s
sys 0m0.368s
time ./ioctl /run/temp
real 0m0.236s
user 0m0.060s
sys 0m0.172s
edit
I agree with @goldilocks' answer: the hardware is the real bottleneck, and in a Linux environment with a well-written driver, choosing ioctl or sysfs doesn't make a big difference. But if you are using uClinux, even a few CPU cycles may matter on your hardware.
The test I did is for Linux, not uClinux, and it was never meant to be an absolute reference for profiling the two interfaces. My point is that you can write a book about how fast one or the other is, but only testing will tell you, and it took me only a few minutes to set this up.
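For what it's worth, the userspace side of such a test can be as small as the loop below; the attribute path, the 4-byte payload and the iteration count are my assumptions for illustration, and the ioctl variant simply swaps the write() for an ioctl() on the test device:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <sysfs-attribute-path>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Hammer the attribute a million times; time the whole run with `time`. */
    const char payload[4] = { '0', '1', '2', '3' };
    for (long i = 0; i < 1000000; i++) {
        lseek(fd, 0, SEEK_SET);              /* rewind so each write hits offset 0 */
        if (write(fd, payload, sizeof(payload)) < 0) {
            perror("write");
            break;
        }
    }
    close(fd);
    return 0;
}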

Starting point for CLOCK_MONOTONIC

As I understand it, on Linux the starting point for CLOCK_MONOTONIC is boot time. In my current work I prefer to use the monotonic clock instead of CLOCK_REALTIME (for calculations), but at the same time I need to provide human-friendly timestamps (with year/month/day) in reporting. They don't have to be very precise, so I was thinking of combining the monotonic counter with the boot time.
From where I can get this time on linux system using api calls?
Assuming the Linux kernel starts the uptime counter at the same time as it starts keeping track of the monotonic clock, you can derive the boot time (relative to the Epoch) by subtracting uptime from the current time.
Linux offers the system uptime in seconds via the sysinfo structure; the current time in seconds since the Epoch can be acquired on POSIX compliant libraries via the time function.
#include <stddef.h>
#include <stdio.h>
#include <time.h>
#include <sys/sysinfo.h>

int main(void) {
    /* get uptime in seconds */
    struct sysinfo info;
    sysinfo(&info);

    /* calculate boot time in seconds since the Epoch */
    const time_t boottime = time(NULL) - info.uptime;

    /* get monotonic clock time */
    struct timespec monotime;
    clock_gettime(CLOCK_MONOTONIC, &monotime);

    /* calculate current time in seconds since the Epoch */
    time_t curtime = boottime + monotime.tv_sec;

    /* get realtime clock time for comparison */
    struct timespec realtime;
    clock_gettime(CLOCK_REALTIME, &realtime);

    printf("Boot time    = %s", ctime(&boottime));
    printf("Current time = %s", ctime(&curtime));
    printf("Real Time    = %s", ctime(&realtime.tv_sec));
    return 0;
}
Unfortunately, the monotonic clock may not match up relative to boot time exactly. When I tested out the above code on my machine, the monotonic clock was a second off from the system uptime. However, you can still use the monotonic clock as long as you take the respective offset into account.
Portability note: although Linux may return current monotonic time relative to boot time, POSIX machines in general are permitted to return current monotonic time from any arbitrary -- yet consistent -- point in time (often the Epoch).
As a side note, you may not need to derive boot time as I did. I suspect there is a way to get the boot time via the Linux API, as there are many Linux utilities which display the boot time in a human-readable format. For example:
$ who -b
system boot 2013-06-21 12:56
I wasn't able to find such a call, but inspection of the source code for some of these common utilities may reveal how they determine the human-readable boot time.
In the case of the who utility, I suspect it utilizes the utmp file to acquire the system boot time.
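For instance, one plausible way to do what who -b does is to scan utmp for the BOOT_TIME record through the utmpx API; whether who actually works this way is my speculation, but the sketch below does print the boot time on a typical glibc system:

#include <stdio.h>
#include <time.h>
#include <utmpx.h>

int main(void)
{
    struct utmpx *ut;

    setutxent();
    while ((ut = getutxent()) != NULL) {
        if (ut->ut_type == BOOT_TIME) {
            time_t boot = ut->ut_tv.tv_sec;
            printf("system boot %s", ctime(&boot));
            break;
        }
    }
    endutxent();
    return 0;
}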
http://www.kernel.org/doc/man-pages/online/pages/man2/clock_getres.2.html:
CLOCK_MONOTONIC
    Clock that cannot be set and represents monotonic time since some unspecified starting point.
This means that you can use CLOCK_MONOTONIC for interval calculations and other things, but you can't really convert it to a human-readable representation.
Moreover, you probably want CLOCK_MONOTONIC_RAW instead of CLOCK_MONOTONIC:
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
    Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments.
Keep using CLOCK_REALTIME for human-readable times.
CLOCK_MONOTONIC is not affected by discontinuous jumps in the system time. For example, if the system clock is stepped by an administrator or by NTP, CLOCK_MONOTONIC does not jump with it (nor does it need to).
For this reason, don't use CLOCK_MONOTONIC if you need human-readable timestamps.
See Difference between CLOCK_REALTIME and CLOCK_MONOTONIC? for a discussion.
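If what you ultimately need for the reports is a human-readable timestamp with sub-second precision, formatting CLOCK_REALTIME directly is straightforward; a small sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    struct tm tm;
    char buf[32];

    clock_gettime(CLOCK_REALTIME, &ts);
    localtime_r(&ts.tv_sec, &tm);
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm);

    /* Append milliseconds derived from the nanosecond field. */
    printf("%s.%03ld\n", buf, ts.tv_nsec / 1000000);
    return 0;
}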

Is there a way to check whether the processor cache has been flushed recently?

On i386 linux. Preferably in c/(c/posix std libs)/proc if possible. If not is there any piece of assembly or third party library that can do this?
Edit: I'm trying to develop a test of whether a kernel module clears a single cache line or the whole processor cache (with wbinvd()). The program runs as root, but I'd prefer to stay in user space if possible.
Cache coherent systems do their utmost to hide such things from you. I think you will have to observe it indirectly, either by using performance counting registers to detect cache misses or by carefully measuring the time to read a memory location with a high resolution timer.
This program works on my x86_64 box to demonstrate the effects of clflush. It times how long it takes to read a global variable using rdtsc. Being a single instruction tied directly to the CPU clock makes direct use of rdtsc ideal for this.
Here is the output:
took 81 ticks
took 81 ticks
flush: took 387 ticks
took 72 ticks
You see 3 trials: The first ensures i is in the cache (which it is, because it was just zeroed as part of BSS), the second is a read of i that should be in the cache. Then clflush kicks i out of the cache (along with its neighbors) and shows that re-reading it takes significantly longer. A final read verifies it is back in the cache. The results are very reproducible and the difference is substantial enough to easily see the cache misses. If you cared to calibrate the overhead of rdtsc() you could make the difference even more pronounced.
If you can't read the memory address you want to test (although even mmap of /dev/mem should work for these purposes) you may be able to infer what you want if you know the cacheline size and associativity of the cache. Then you can use accessible memory locations to probe the activity in the set you're interested in.
Source code:
#include <stdio.h>
#include <stdint.h>

static inline void
clflush(volatile void *p)
{
    asm volatile ("clflush (%0)" :: "r"(p));
}

static inline uint64_t
rdtsc(void)
{
    unsigned long a, d;
    asm volatile ("rdtsc" : "=a" (a), "=d" (d));
    return a | ((uint64_t)d << 32);
}

volatile int i;

static void
test(void)
{
    uint64_t start, end;
    volatile int j;

    start = rdtsc();
    j = i;
    end = rdtsc();
    (void)j;

    printf("took %lu ticks\n", (unsigned long)(end - start));
}

int
main(int ac, char **av)
{
    test();
    test();
    printf("flush: ");
    clflush(&i);
    test();
    test();
    return 0;
}
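Following up on the calibration remark above, here is a standalone sketch (x86 only, same rdtsc idiom as in the listing) that estimates the overhead of rdtsc() itself so it can be subtracted from the measured read times:

#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
    unsigned int a, d;
    asm volatile ("rdtsc" : "=a" (a), "=d" (d));
    return a | ((uint64_t)d << 32);
}

int main(void)
{
    uint64_t min = (uint64_t)-1;

    /* Back-to-back reads: the smallest delta approximates the pure rdtsc cost. */
    for (int n = 0; n < 1000; n++) {
        uint64_t a = rdtsc();
        uint64_t b = rdtsc();
        if (b - a < min)
            min = b - a;
    }
    printf("rdtsc overhead: about %llu ticks\n", (unsigned long long)min);
    return 0;
}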
I don't know of any generic command to get the cache state, but there are a few approaches:
1. Probably the easiest: if you have the kernel module, just disassemble it and look for cache invalidation/flushing instructions (off the top of my head: WBINVD, CLFLUSH, INVD).
2. You said i386, but I assume you don't mean an actual 80386. The problem is that there are many different CPUs with different extensions and features. For example, recent Intel parts include performance/profiling counters for the cache system, which you can use to evaluate cache misses, hits, number of transfers and the like.
3. Similar to 2, and very system-dependent: in a multiprocessor configuration you could watch the cache coherence protocol (MESI) traffic from a second processor.
You mentioned WBINVD -- as far as I know, that always flushes the complete cache, i.e. all cache lines.
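As an example of point 2, the sketch below uses the perf_event_open(2) syscall to count hardware cache misses around a region of code; the workload loop is a placeholder, and counting kernel-side misses as well would mean clearing exclude_kernel and running with sufficient privilege:

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES;
    attr.disabled = 1;
    attr.exclude_kernel = 1;   /* clear this to also count kernel misses */

    /* Count cache misses for this process on any CPU. */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Placeholder workload: touch one byte per cache line of a 1 MiB buffer. */
    static volatile char buf[1 << 20];
    for (unsigned i = 0; i < sizeof(buf); i += 64)
        buf[i]++;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t misses = 0;
    read(fd, &misses, sizeof(misses));
    printf("cache misses: %llu\n", (unsigned long long)misses);

    close(fd);
    return 0;
}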
It may not be an answer to your specific question, but have you tried using a cache profiler such as Cachegrind? It can only be used to profile userspace code, but you might be able to use it nonetheless, by e.g. moving the code of your function to userspace if it does not depend on any kernel-specific interfaces.
It might actually be more effective than trying to ask the processor for information that may or may not exist, and that will probably be affected by the mere act of asking about it -- yes, Heisenberg was way ahead of his time :-)

What is the Linux version of GetTickCount? [duplicate]

I'm looking for an equivalent to GetTickCount() on Linux.
Presently I am using Python's time.time(), which presumably calls through to gettimeofday(). My concern is that the time returned (seconds since the Unix epoch) may change erratically if the clock is messed with, such as by NTP. A simple process or system wall time that only increases positively and at a constant rate would suffice.
Does any such time function in C or Python exist?
You can use CLOCK_MONOTONIC e.g. in C:
#include <time.h>

struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
    // error
}
See this question for a Python way - How do I get monotonic time durations in python?
This seems to work:
#include <stdint.h>
#include <time.h>

uint32_t getTick(void) {
    struct timespec ts;
    unsigned theTick = 0U;
    /* CLOCK_MONOTONIC, so the tick count is not disturbed by clock adjustments */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    theTick  = ts.tv_nsec / 1000000;
    theTick += ts.tv_sec * 1000;
    return theTick;
}
Yes -- getTick() is the backbone of my applications. They consist of one state machine per 'task', so I can multi-task without threads or inter-process communication, and I can implement non-blocking delays.
You should use clock_gettime(CLOCK_MONOTONIC, &tp);. Just like GetTickCount() on Windows, this call is not affected by adjustments to the system time.
Yes, the kernel has high-resolution timers, but the interface is different. I would recommend that you look at the sources of any project that wraps this in a portable manner.
From C/C++ I usually #ifdef this and use gettimeofday() on Linux, which gives me microsecond resolution. I often add the microseconds as a fraction to the seconds since the epoch, which I also receive, giving me a double.
