Starting point for CLOCK_MONOTONIC - linux

As I understand it, on Linux the starting point for CLOCK_MONOTONIC is boot time. In my current work I prefer to use the monotonic clock instead of CLOCK_REALTIME (for calculations), but at the same time I need to provide human-friendly timestamps (with year/month/day) in reporting. They don't have to be very precise, so I was thinking of joining the monotonic counter with the boot time.
Where can I get this boot time on a Linux system using API calls?

Assuming the Linux kernel starts the uptime counter at the same time as it starts keeping track of the monotonic clock, you can derive the boot time (relative to the Epoch) by subtracting uptime from the current time.
Linux offers the system uptime in seconds via the sysinfo structure; the current time in seconds since the Epoch can be acquired in POSIX-compliant libraries via the time function.
#include <stddef.h>
#include <stdio.h>
#include <time.h>
#include <sys/sysinfo.h>

int main(void) {
    /* get uptime in seconds */
    struct sysinfo info;
    sysinfo(&info);

    /* calculate boot time in seconds since the Epoch */
    const time_t boottime = time(NULL) - info.uptime;

    /* get monotonic clock time */
    struct timespec monotime;
    clock_gettime(CLOCK_MONOTONIC, &monotime);

    /* calculate current time in seconds since the Epoch */
    time_t curtime = boottime + monotime.tv_sec;

    /* get realtime clock time for comparison */
    struct timespec realtime;
    clock_gettime(CLOCK_REALTIME, &realtime);

    printf("Boot time = %s", ctime(&boottime));
    printf("Current time = %s", ctime(&curtime));
    printf("Real Time = %s", ctime(&realtime.tv_sec));
    return 0;
}
Unfortunately, the monotonic clock may not match up relative to boot time exactly. When I tested out the above code on my machine, the monotonic clock was a second off from the system uptime. However, you can still use the monotonic clock as long as you take the respective offset into account.
Portability note: although Linux may return current monotonic time relative to boot time, POSIX machines in general are permitted to return current monotonic time from any arbitrary -- yet consistent -- point in time (often the Epoch).
As a side note, you may not need to derive boot time as I did. I suspect there is a way to get the boot time via the Linux API, as there are many Linux utilities which display the boot time in a human-readable format. For example:
$ who -b
system boot 2013-06-21 12:56
I wasn't able to find such a call, but inspection of the source code for some of these common utilities may reveal how they determine the human-readable boot time.
In the case of the who utility, I suspect it utilizes the utmp file to acquire the system boot time.
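As a possible shortcut (an assumption on my part, not necessarily what who does), the kernel also exposes the boot time directly as the btime field of /proc/stat, in seconds since the Epoch. A minimal sketch that reads it:
#include <stdio.h>
#include <time.h>

int main(void) {
    FILE *f = fopen("/proc/stat", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    char line[256];
    time_t boottime = 0;
    while (fgets(line, sizeof(line), f)) {
        long long t;
        /* the line of interest looks like: "btime 1371821760" */
        if (sscanf(line, "btime %lld", &t) == 1) {
            boottime = (time_t)t;
            break;
        }
    }
    fclose(f);
    printf("Boot time = %s", ctime(&boottime));
    return 0;
}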

http://www.kernel.org/doc/man-pages/online/pages/man2/clock_getres.2.html:
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some
unspecified starting point.
This means that you can use CLOCK_MONOTONIC for interval calculations and other things, but you can't really convert it to a human-readable representation.
Moreover, you probably want CLOCK_MONOTONIC_RAW instead of CLOCK_MONOTONIC:
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments.
Keep using CLOCK_REALTIME for human-readable times.
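For example, a minimal sketch (plain POSIX calls) of turning CLOCK_REALTIME into a human-readable timestamp:
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec now;
    struct tm tm;
    char buf[64];
    clock_gettime(CLOCK_REALTIME, &now);
    localtime_r(&now.tv_sec, &tm);                        /* break down into local calendar time */
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm);
    printf("%s.%03ld\n", buf, now.tv_nsec / 1000000);     /* append milliseconds */
    return 0;
}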

CLOCK_MONOTONIC is generally not affected by any adjustments to system time. For example, if the system clock is adjusted via NTP, CLOCK_MONOTONIC has no way of knowing (nor does it need to).
For this reason, don't use CLOCK_MONOTONIC if you need human-readable timestamps.
See Difference between CLOCK_REALTIME and CLOCK_MONOTONIC? for a discussion.
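For example, a minimal sketch of the interval measurement CLOCK_MONOTONIC is meant for (the sleep(1) just stands in for whatever work is being timed):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    sleep(1);                                   /* the work being measured */
    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.6f s\n", elapsed);       /* an interval only, no calendar meaning */
    return 0;
}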


perf stat time elapsed for sys and user [duplicate]

$ time foo
real 0m0.003s
user 0m0.000s
sys 0m0.004s
$
What do real, user and sys mean in the output of time? Which one is meaningful when benchmarking my app?
Real, User and Sys process time statistics
One of these things is not like the other. Real refers to actual elapsed time; User and Sys refer to CPU time used only by the process.
Real is wall clock time - time from start to finish of the call. This is all elapsed time including time slices used by other processes and time the process spends blocked (for example if it is waiting for I/O to complete).
User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only actual CPU time used in executing the process. Other processes and time the process spends blocked do not count towards this figure.
Sys is the amount of CPU time spent in the kernel within the process. This means CPU time spent executing system calls within the kernel, as opposed to library code, which is still running in user-space. Like 'user', this is only CPU time used by the process. See below for a brief description of kernel mode (also known as 'supervisor' mode) and the system call mechanism.
User+Sys will tell you how much actual CPU time your process used. Note that this is across all CPUs, so if the process has multiple threads (and this process is running on a computer with more than one processor) it could potentially exceed the wall clock time reported by Real (which usually occurs). Note that in the output these figures include the User and Sys time of all child processes (and their descendants) as well when they could have been collected, e.g. by wait(2) or waitpid(2), although the underlying system calls return the statistics for the process and its children separately.
Origins of the statistics reported by time (1)
The statistics reported by time are gathered from various system calls. 'User' and 'Sys' come from wait (2) (POSIX) or times (2) (POSIX), depending on the particular system. 'Real' is calculated from a start and end time gathered from the gettimeofday (2) call. Depending on the version of the system, various other statistics such as the number of context switches may also be gathered by time.
On a multi-processor machine, a multi-threaded process or a process forking children could have an elapsed time smaller than the total CPU time - as different threads or processes may run in parallel. Also, the time statistics reported come from different origins, so times recorded for very short running tasks may be subject to rounding errors, as the example given by the original poster shows.
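As an illustration (a rough sketch, not the actual time(1) source), the same kind of raw numbers can be gathered inside a program with gettimeofday(2) for elapsed time and times(2) for user/system CPU time:
#include <stdio.h>
#include <sys/time.h>
#include <sys/times.h>
#include <unistd.h>

int main(void) {
    struct timeval start, end;
    struct tms cpu;
    gettimeofday(&start, NULL);

    /* some CPU-bound work to measure */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        x += i;

    gettimeofday(&end, NULL);
    times(&cpu);                     /* CPU time in clock ticks */

    long ticks = sysconf(_SC_CLK_TCK);
    double real = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;
    printf("real %.3f s\n", real);
    printf("user %.3f s\n", (double)cpu.tms_utime / ticks);
    printf("sys  %.3f s\n", (double)cpu.tms_stime / ticks);
    return 0;
}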
A brief primer on Kernel vs. User mode
On Unix, or any protected-memory operating system, 'Kernel' or 'Supervisor' mode refers to a privileged mode that the CPU can operate in. Certain privileged actions that could affect security or stability can only be done when the CPU is operating in this mode; these actions are not available to application code. An example of such an action might be manipulation of the MMU to gain access to the address space of another process. Normally, user-mode code cannot do this (with good reason), although it can request shared memory from the kernel, which could be read or written by more than one process. In this case, the shared memory is explicitly requested from the kernel through a secure mechanism and both processes have to explicitly attach to it in order to use it.
The privileged mode is usually referred to as 'kernel' mode because the kernel is executed by the CPU running in this mode. In order to switch to kernel mode you have to issue a specific instruction (often called a trap) that switches the CPU to running in kernel mode and runs code from a specific location held in a jump table. For security reasons, you cannot switch to kernel mode and execute arbitrary code - the traps are managed through a table of addresses that cannot be written to unless the CPU is running in supervisor mode. You trap with an explicit trap number and the address is looked up in the jump table; the kernel has a finite number of controlled entry points.
The 'system' calls in the C library (particularly those described in Section 2 of the man pages) have a user-mode component, which is what you actually call from your C program. Behind the scenes, they may issue one or more system calls to the kernel to do specific services such as I/O, but they still also have code running in user-mode. It is also quite possible to directly issue a trap to kernel mode from any user space code if desired, although you may need to write a snippet of assembly language to set up the registers correctly for the call.
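As a hedged illustration of that trap mechanism (x86-64 Linux only, and purely for demonstration), a write(2) call can be issued with the raw syscall instruction, bypassing the C library wrapper:
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a raw syscall\n";
    long ret;
    /* rax = syscall number, rdi = fd, rsi = buffer, rdx = length;
     * the syscall instruction clobbers rcx and r11 */
    __asm__ volatile ("syscall"
                      : "=a" (ret)
                      : "a" ((long)SYS_write), "D" (1L), "S" (msg),
                        "d" (sizeof(msg) - 1)
                      : "rcx", "r11", "memory");
    return ret < 0;
}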
More about 'sys'
There are things that your code cannot do from user mode - things like allocating memory or accessing hardware (HDD, network, etc.). These are under the supervision of the kernel, and it alone can do them. Some operations like malloc or fread/fwrite will invoke these kernel functions and that will then count as 'sys' time. Unfortunately it's not as simple as "every call to malloc will be counted in 'sys' time". The call to malloc will do some processing of its own (still counted in 'user' time) and then somewhere along the way it may call the function in the kernel (counted in 'sys' time). After returning from the kernel call, there will be some more time in 'user' and then malloc will return to your code. As for when the switch happens, and how much of it is spent in kernel mode... you cannot say. It depends on the implementation of the library. Also, other seemingly innocent functions might use malloc and the like in the background, which will again spend some time in 'sys'.
To expand on the accepted answer, I just wanted to provide another reason why real ≠ user + sys.
Keep in mind that real represents actual elapsed time, while user and sys values represent CPU execution time. As a result, on a multicore system, the user and/or sys time (as well as their sum) can actually exceed the real time. For example, on a Java app I'm running for class I get this set of values:
real 1m47.363s
user 2m41.318s
sys 0m4.013s
• real: The actual time spent in running the process from start to finish, as if it was measured by a human with a stopwatch
• user: The cumulative time spent by all the CPUs during the computation
• sys: The cumulative time spent by all the CPUs during system-related tasks such as memory allocation.
Notice that sometimes user + sys might be greater than real, as
multiple processors may work in parallel.
Minimal runnable POSIX C examples
To make things more concrete, I want to exemplify a few extreme cases of time with some minimal C test programs.
All programs can be compiled and run with:
gcc -ggdb3 -o main.out -pthread -std=c99 -pedantic-errors -Wall -Wextra main.c
time ./main.out
and have been tested in Ubuntu 18.10, GCC 8.2.0, glibc 2.28, Linux kernel 4.18, ThinkPad P51 laptop, Intel Core i7-7820HQ CPU (4 cores / 8 threads), 2x Samsung M471A2K43BB1-CRC RAM (2x 16GiB).
sleep syscall
Non-busy sleep as done by the sleep syscall only counts in real, but not for user or sys.
For example, a program that sleeps for a second:
#define _XOPEN_SOURCE 700
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    sleep(1);
    return EXIT_SUCCESS;
}
GitHub upstream.
outputs something like:
real 0m1.003s
user 0m0.001s
sys 0m0.003s
The same holds for programs blocked on IO becoming available.
For example, the following program waits for the user to enter a character and press enter:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("%c\n", getchar());
    return EXIT_SUCCESS;
}
GitHub upstream.
And if you wait for about one second, it outputs just like the sleep example something like:
real 0m1.003s
user 0m0.001s
sys 0m0.003s
For this reason time can help you distinguish between CPU and IO bound programs: What do the terms "CPU bound" and "I/O bound" mean?
Multiple threads
The following example does niters iterations of useless purely CPU-bound work on nthreads threads:
#define _XOPEN_SOURCE 700
#include <assert.h>
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

uint64_t niters;

void* my_thread(void *arg) {
    uint64_t *argument, i, result;
    argument = (uint64_t *)arg;
    result = *argument;
    for (i = 0; i < niters; ++i) {
        result = (result * result) - (3 * result) + 1;
    }
    *argument = result;
    return NULL;
}

int main(int argc, char **argv) {
    size_t nthreads;
    pthread_t *threads;
    uint64_t rc, i, *thread_args;

    /* CLI args. */
    if (argc > 1) {
        niters = strtoll(argv[1], NULL, 0);
    } else {
        niters = 1000000000;
    }
    if (argc > 2) {
        nthreads = strtoll(argv[2], NULL, 0);
    } else {
        nthreads = 1;
    }
    threads = malloc(nthreads * sizeof(*threads));
    thread_args = malloc(nthreads * sizeof(*thread_args));

    /* Create all threads */
    for (i = 0; i < nthreads; ++i) {
        thread_args[i] = i;
        rc = pthread_create(
            &threads[i],
            NULL,
            my_thread,
            (void*)&thread_args[i]
        );
        assert(rc == 0);
    }

    /* Wait for all threads to complete */
    for (i = 0; i < nthreads; ++i) {
        rc = pthread_join(threads[i], NULL);
        assert(rc == 0);
        printf("%" PRIu64 " %" PRIu64 "\n", i, thread_args[i]);
    }

    free(threads);
    free(thread_args);
    return EXIT_SUCCESS;
}
GitHub upstream + plot code.
Then we plot wall, user and sys as a function of the number of threads for a fixed 10^10 iterations on my 8 hyperthread CPU:
[Plot of wall, user and sys time vs. number of threads: https://i.stack.imgur.com/qAfEe.png]
Plot data.
From the graph, we see that:
for a CPU intensive single core application, wall and user are about the same
for 2 cores, user is about 2x wall, which means that the user time is counted across all threads:
user basically doubled, while wall stayed the same.
this continues up to 8 threads, which matches the number of hyperthreads on my computer.
After 8, wall starts to increase as well, because we don't have any extra CPUs to put more work in a given amount of time!
The ratio plateaus at this point.
Note that this graph is only so clear and simple because the work is purely CPU-bound: if it were memory bound, then we would get a fall in performance much earlier with less cores because the memory accesses would be a bottleneck as shown at What do the terms "CPU bound" and "I/O bound" mean?
Quickly checking that wall < user is a simple way to determine that a program is multithreaded, and the closer that ratio is to the number of cores, the more effective the parallelization is, e.g.:
multithreaded linkers: Can gcc use multiple cores when linking?
C++ parallel sort: Are C++17 Parallel Algorithms implemented already?
Sys heavy work with sendfile
The heaviest sys workload I could come up with was to use sendfile, which does a file copy operation in kernel space: Copy a file in a sane, safe and efficient way
So I imagined that this in-kernel memcpy would be a CPU-intensive operation.
First I initialize a large 10GiB random file with:
dd if=/dev/urandom of=sendfile.in.tmp bs=1K count=10M
Then run the code:
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv) {
    char *source_path, *dest_path;
    int source, dest;
    struct stat stat_source;

    if (argc > 1) {
        source_path = argv[1];
    } else {
        source_path = "sendfile.in.tmp";
    }
    if (argc > 2) {
        dest_path = argv[2];
    } else {
        dest_path = "sendfile.out.tmp";
    }
    source = open(source_path, O_RDONLY);
    assert(source != -1);
    dest = open(dest_path, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    assert(dest != -1);
    assert(fstat(source, &stat_source) != -1);
    assert(sendfile(dest, source, 0, stat_source.st_size) != -1);
    assert(close(source) != -1);
    assert(close(dest) != -1);
    return EXIT_SUCCESS;
}
GitHub upstream.
which gives mostly system time, as expected:
real 0m2.175s
user 0m0.001s
sys 0m1.476s
I was also curious to see if time would distinguish between syscalls of different processes, so I tried:
time ./sendfile.out sendfile.in1.tmp sendfile.out1.tmp &
time ./sendfile.out sendfile.in2.tmp sendfile.out2.tmp &
And the result was:
real 0m3.651s
user 0m0.000s
sys 0m1.516s
real 0m4.948s
user 0m0.000s
sys 0m1.562s
The sys time is about the same for both as for a single process, but the wall time is larger, likely because the processes are competing for disk read access.
So it seems that it does in fact account for which process started a given kernel work.
Bash source code
When you do just time <cmd> on Ubuntu, it uses the Bash keyword, as can be seen from:
type time
which outputs:
time is a shell keyword
So we grep source in the Bash 4.19 source code for the output string:
git grep '"user\b'
which leads us to execute_cmd.c function time_command, which uses:
gettimeofday() and getrusage() if both are available
times() otherwise
all of which are Linux system calls and POSIX functions.
GNU Coreutils source code
If we call it as:
/usr/bin/time
then it uses the GNU Coreutils implementation.
This one is a bit more complex, but the relevant source seems to be at resuse.c and it does:
a non-POSIX BSD wait3 call if that is available
times and gettimeofday otherwise
Real shows the total turn-around time for a process,
while User shows the execution time of the process's own user-space code,
and Sys is the time spent executing system calls.
Real time also includes waiting time (waiting for I/O, etc.).
In very simple terms, I like to think about it like this:
real is the actual amount of time it took to run the command (as if you had timed it with a stopwatch)
user and sys are how much 'work' the CPU had to do to execute the command. This 'work' is expressed in units of time.
Generally speaking:
user is how much work the CPU did to run the command's code
sys is how much work the CPU had to do to handle 'system overhead' type tasks (such as allocating memory, file I/O, etc.) in order to support the running command
Since these last two times are counting 'work' done, they don't include time a thread might have spent waiting (such as waiting on another process or for disk I/O to finish).
real, however, is a measure of actual runtime and not 'work', so it does include any time spent waiting (which is why sometimes real > usr+sys).
And finally, sometimes the reverse is true (usr+sys > real) for multi-threaded applications. This also arises because we are comparing 'work-time' to actual time. For example, if 3 processors each run continuously for 10 minutes to execute a command, you will get real = 10m but usr = 30m.
I want to mention another scenario, where real time is much, much bigger than user + sys. I've created a simple server which responds after a long time:
real 4.784
user 0.01s
sys 0.01s
The issue is that in this scenario the process waits for the response, which is neither on the user side nor in the system.
Something similar happens when you run the find command. In that case, the time is spent mostly on requesting and getting a response from the SSD.
I must mention that, at least on my AMD Ryzen CPU, user is always larger than real in a multi-threaded program (or a single-threaded program compiled with -O3).
eg.
real 0m5.815s
user 0m8.213s
sys 0m0.473s

What timeframe does the "perf sched record" use?

I've been trying to analyze the output of perf sched record, but I don't understand what frame of reference the "20624.983302 secs" values are in. It isn't Unix time for sure, so what is it? How would I go about converting this into Unix time?
*A0 20624.983302 secs A0 => migration/0:12
*. 20624.983311 secs . => swapper:0
*B0 20624.983318 secs B0 => IPC I/O Child:33924
*. 20624.983355 secs
*C0 20624.983485 secs C0 => WRScene~lder#15:39974
*. 20624.983581 secs
*D0 20624.983972 secs D0 => IPC I/O Parent:33780
These timestamps are captured using the kernel scheduler clock, which counts in nanoseconds since boot. The exact details depend on the compile-time parameters chosen to build a particular Linux distribution and the target architecture.
In general, the timestamp of a sample is captured around the same time when it's recorded. Timestamps on the same core are guaranteed to be monotonically increasing as long as the core remains in an active state. The samples you've shown were all captured on the same core and the core remained active from the first sample to the last, so the timestamps are guaranteed to be monotonic in this case irrespective of the platform and distribution. When profiling on multiple cores, there is no guarantee that the clocks on all cores are in sync.
All perf tools use the same clock to capture timestamps, but they may differ in the way timestamps are printed and it may happen that two tools print timestamps from the same sample file differently. This depends on the kernel version.
It's possible to specify a clock source when calling perf_event_open() by setting use_clockid to 1 and setting clockid to one of the clock sources defined in linux/time.h, such as CLOCK_MONOTONIC. perf record provides the -k or --clockid option to specify the clock source for capturing timestamps.
Modern distributions on x86 typically use TSC as the source for the scheduler clock (check /sys/devices/system/clocksource/clocksource0/current_clocksource). So if you're on an x86 processor, most probably the TSC of the profiled core was used to capture the current value of TSC cycles, which internally gets converted into nanoseconds. When a timestamp is printed, it may get converted to a different unit. In this case, timestamps are printed in the format "seconds.microseconds". A summary of the behavior of TSC on Intel processors can be found at: Can constant non-invariant tsc change frequency across cpu states?.
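For reference, a hedged sketch of how a profiler can request a specific clock through perf_event_open(2) (fields as documented in the man page; error handling kept minimal):
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_SOFTWARE;
    attr.config = PERF_COUNT_SW_CPU_CLOCK;
    attr.use_clockid = 1;              /* ask for an explicit clock source */
    attr.clockid = CLOCK_MONOTONIC;    /* timestamps will use CLOCK_MONOTONIC */
    /* perf_event_open(attr, pid, cpu, group_fd, flags): this process, any CPU */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd == -1) {
        perror("perf_event_open");
        return 1;
    }
    close(fd);
    return 0;
}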

Millisecond resolution timer on LinkIT 7688

I am developing for the LinkIt Smart 7688 device by Mediatek. I need to do some timekeeping in a userspace application where I need at least 10ms resolution (preferably 1ms).
However, every syscall I have tried returns values only with 1 second resolution. clock_gettime (I tried all the different clocks) and gettimeofday, which should provide sub-second resolution, do not.
Running dmesg on the target reveals that the kernel timestamps with sub-second resolution, so I conclude that a clock source with sub-second resolution is available. (I would be very surprised if this were not the case :) )
How do I get a timestamp with sub-second resolution on the Linkit Smart 7688 device?
Perhaps I could be missing some kernel configuration selecting the correct clock source to be available to userspace? I have not been able to find one.
Do not use only the seconds returned by gettimeofday; use the tv_usec field as well:
#include <sys/time.h>

struct timeval t0, t1;
gettimeofday(&t0, 0);
/* ... the code being timed ... */
gettimeofday(&t1, 0);
long elapsed = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);

Set linux time to millisecond precision

I have an embedded Linux device that interfaces with another "master" device over a serial comm protocol. Periodically the master passes its date down to the slave device, because later the slave will return information to the master that needs to be accurately timestamped. However, the Linux 'date' command only sets the system date to within a second accuracy. This isn't enough for our uses.
Does anybody know how to set a Linux machine's time more precisely than 1 second?
The settimeofday(2) method given in other answers has a serious problem: it does exactly what you say you want. :)
The problem with directly changing a system's time, instantaneously, is that it can confuse programs that get the time of day before and after the change if the adjustment was negative. That is, they can perceive time to go backwards.
The fix for this is adjtime(3) which is simple and portable, or adjtimex(2) which is complicated, powerful and Linux-specific. Both of these calls use sophisticated algorithms to slowly adjust the system time over some period, forward only, until the desired change is achieved.
By the way, are you sure you aren't reinventing the wheel here? I recommend that you read Julien Ridoux and Darryl Veitch's ACM Queue paper Principles of Robust Timing over the Internet. You're working on embedded systems, so I would expect the ringing in Figure 5 to give you cold shivers. Can you say "damped oscillator?" adjtime() and adjtimex() use this troubled algorithm, so in some sense I am arguing against my own advice above, but the Mills algorithm is still better than the step adjustment non-algorithm. If you choose to implement RADclock instead, so much the better.
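For illustration, a minimal sketch (assuming the required offset has already been computed, and that the program runs with sufficient privileges) that slews the clock forward by 250 ms with adjtime(3) instead of stepping it:
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval delta = { .tv_sec = 0, .tv_usec = 250000 };  /* +250 ms */
    if (adjtime(&delta, NULL) != 0) {   /* gradually slew, never step backwards */
        perror("adjtime");
        return 1;
    }
    return 0;
}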
The settimeofday() system call takes and uses microsecond precision. You'll have to write a short program to use it, but that is quite straightforward.
struct timeval tv;
tv.tv_sec = (some time_t value)
tv.tv_usec = (the number of microseconds after the second)

int rc = settimeofday(&tv, NULL);
if (rc)
    errormessage("error %d setting system time", errno);
You can use the settimeofday(2) system call; the interface supports microsecond resolution.
#include <sys/time.h>

int gettimeofday(struct timeval *tv, struct timezone *tz);
int settimeofday(const struct timeval *tv, const struct timezone *tz);

struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};
You can use the clock_settime(2) system call; the interface provides multiple clocks and the interface supports nanosecond resolution.
#include <time.h>

int clock_getres(clockid_t clk_id, struct timespec *res);
int clock_gettime(clockid_t clk_id, struct timespec *tp);
int clock_settime(clockid_t clk_id, const struct timespec *tp);

struct timespec {
    time_t tv_sec;      /* seconds */
    long   tv_nsec;     /* nanoseconds */
};
CLOCK_REALTIME
System-wide real-time clock. Setting this clock
requires appropriate privileges.
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time
since some unspecified starting point.
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
Similar to CLOCK_MONOTONIC, but provides access to a
raw hardware-based time that is not subject to NTP
adjustments.
CLOCK_PROCESS_CPUTIME_ID
High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
Thread-specific CPU-time clock.
This interface provides the nicety of the clock_getres(2) call, which can tell you exactly what the resolution is -- just because the interface accepts nanoseconds doesn't mean it can actually support nanosecond-resolution. (I've got a fuzzy memory that 20 ns is about the limits of many systems but no references to support this.)
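A minimal sketch of querying that resolution with clock_getres(2):
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) != 0) {
        perror("clock_getres");
        return 1;
    }
    /* typically prints something like 0 s 1 ns on modern Linux */
    printf("CLOCK_REALTIME resolution: %lld s %ld ns\n",
           (long long)res.tv_sec, res.tv_nsec);
    return 0;
}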
If you're running an IP-capable networking protocol over the serial link (something like, ooh, PPP for example), you can just run an ntpd on the "master" host, then sync time using ntpd or ntpdate on the embedded device. NTP will take care of you.

What is the Linux version of GetTickCount? [duplicate]

I'm looking for an equivalent to GetTickCount() on Linux.
Presently I am using Python's time.time(), which presumably calls through to gettimeofday(). My concern is that the time returned (seconds since the Unix Epoch) may change erratically if the clock is messed with, such as by NTP. A simple process or system wall time, that only increases positively at a constant rate, would suffice.
Does any such time function in C or Python exist?
You can use CLOCK_MONOTONIC e.g. in C:
struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
    // error
}
See this question for a Python way - How do I get monotonic time durations in python?
This seems to work:
#include <stdint.h>
#include <time.h>
#include <unistd.h>

uint32_t getTick() {
    struct timespec ts;
    unsigned theTick = 0U;
    clock_gettime(CLOCK_REALTIME, &ts);
    theTick  = ts.tv_nsec / 1000000;
    theTick += ts.tv_sec * 1000;
    return theTick;
}
Yes, getTick() is the backbone of my applications.
It consists of one state machine for each 'task'; e.g. it can multi-task without using threads and inter-process communication, and can implement non-blocking delays.
You should use: clock_gettime(CLOCK_MONOTONIC, &tp);. This call is not affected by the adjustment of the system time just like GetTickCount() on Windows.
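A minimal sketch of a GetTickCount-style helper built on CLOCK_MONOTONIC (a 64-bit return value is used here to avoid the roughly 49-day wrap of a 32-bit millisecond counter):
#include <stdint.h>
#include <time.h>

/* Milliseconds since some unspecified fixed point (boot time on Linux),
 * unaffected by NTP steps or manual clock changes. */
uint64_t getTickMonotonic(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}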
Yes, the kernel has high-resolution timers, but they are accessed differently. I would recommend that you look at the sources of any odd project that wraps this in a portable manner.
From C/C++ I usually #ifdef this and use gettimeofday() on Linux, which gives me microsecond resolution. I often add this as a fraction to the seconds since the Epoch that I also receive, giving me a double.
