Kernel mode clock_gettime() - Linux

I'm trying to use the POSIX clock functions in the kernel, but the compiler keeps giving me: error: implicit declaration of function ‘clock_gettime’
long __timer_end(struct timespec start_time)
{
        struct timespec end_time;

        clock_gettime(CLOCK_REALTIME_COARSE, &end_time);
        return end_time.tv_nsec - start_time.tv_nsec;
}

struct timespec __timer_start(void)
{
        struct timespec start_time;

        clock_gettime(CLOCK_REALTIME_COARSE, &start_time);
        return start_time;
}
The functions are defined in <linux/posix_clock.h> as part of a structure called posix_clock_operations, and there is a pair of functions, posix_clock_register() and posix_clock_unregister(). The comments lead one to believe that these functions will populate the posix_clock_operations structure. I've called both in my init and exit functions, hoping that their presence would magically make the forward declaration for clock_gettime() appear, but it doesn't.
Does anyone know what I need to do to make this one function work? Do I really need to define all my own functions and populate posix_clock_operations?
Thanks in advance,
Pete

It seems there is no clock_gettime() in the kernel; however, there is a nanosecond-resolution interface called current_kernel_time(). So my rewritten timer looks like this:
long timer_end(struct timespec start_time)
{
        struct timespec end_time = current_kernel_time();

        /* include the seconds field so intervals crossing a second
         * boundary don't come out negative */
        return (end_time.tv_sec - start_time.tv_sec) * NSEC_PER_SEC +
               (end_time.tv_nsec - start_time.tv_nsec);
}

struct timespec timer_start(void)
{
        return current_kernel_time();
}
It seems to work fine. A higher-resolution version, suitable for nanosecond-granularity performance testing, looks like this:
long timer_end(struct timespec start_time)
{
        struct timespec end_time;

        getrawmonotonic(&end_time);
        return (end_time.tv_sec - start_time.tv_sec) * NSEC_PER_SEC +
               (end_time.tv_nsec - start_time.tv_nsec);
}

struct timespec timer_start(void)
{
        struct timespec start_time;

        getrawmonotonic(&start_time);
        return start_time;
}
Thanks for the comments and pointers.
Pete

As pointed out in the answer by pjenney58, you can use current_kernel_time(), but do keep in mind that it gives you "wallclock" time (at HZ accuracy), which is not recommended if you want to profile time taken by some kernel code, simply because it also counts time spent in suspend. To profile timing in kernel code there is a kernel equivalent of the userland clock_gettime(CLOCK_MONOTONIC, ...), viz. do_posix_clock_monotonic_gettime(), which is defined as a macro in timekeeping.h. It takes a pointer to a struct timespec as its argument, in which the monotonic clock time is returned with second and nanosecond resolution.
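For example, the timer pair above could be rewritten against the monotonic clock. This is a minimal sketch, assuming an older kernel where the do_posix_clock_monotonic_gettime() macro still exists (it wrapped ktime_get_ts() and has since been removed):

#include <linux/timekeeping.h>

long timer_end_monotonic(struct timespec start_time)
{
        struct timespec end_time;

        do_posix_clock_monotonic_gettime(&end_time);
        return (end_time.tv_sec - start_time.tv_sec) * NSEC_PER_SEC +
               (end_time.tv_nsec - start_time.tv_nsec);
}

struct timespec timer_start_monotonic(void)
{
        struct timespec start_time;

        do_posix_clock_monotonic_gettime(&start_time);
        return start_time;
}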

Related

ff_replay substructure in ff_effect empty

I am developing a force-feedback driver (Linux) for an as-yet unsupported gamepad.
Whenever an application in userspace requests an FF effect (e.g. rumbling), a function in my driver is called:
static int foo_ff_play(struct input_dev *dev, void *data, struct ff_effect *effect)
This is set up by the following code inside my init function:
input_set_capability(dev, EV_FF, FF_RUMBLE);
input_ff_create_memless(dev, NULL, foo_ff_play);
I'm accessing the ff_effect struct (which is passed to my foo_ff_play function) like this:
static int foo_ff_play(struct input_dev *dev, void *data, struct ff_effect *effect)
{
        u16 length;

        length = effect->replay.length;
        printk(KERN_DEBUG "length: %i\n", length);
        return 0;
}
The problem is that the reported length (in ff_effect->replay) is always zero.
That's confusing, since I am running fftest on my device, and fftest definitely sets the length attribute: https://github.com/flosse/linuxconsole/blob/master/utils/fftest.c (line 308)
/* a strong rumbling effect */
effects[4].type = FF_RUMBLE;
effects[4].id = -1;
effects[4].u.rumble.strong_magnitude = 0x8000;
effects[4].u.rumble.weak_magnitude = 0;
effects[4].replay.length = 5000;
effects[4].replay.delay = 1000;
Does this have something to do with the "memlessness"? Why does the data in ff_replay seem to be zero if it isn't?
Thank you in advance
Why is the replay struct empty?
Taking a look at https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L406 we find:
static void ml_play_effects(struct ml_device *ml)
{
        struct ff_effect effect;
        DECLARE_BITMAP(handled_bm, FF_MEMLESS_EFFECTS);

        memset(handled_bm, 0, sizeof(handled_bm));
        while (ml_get_combo_effect(ml, handled_bm, &effect))
                ml->play_effect(ml->dev, ml->private, &effect);
        ml_schedule_timer(ml);
}
ml_get_combo_effect() sets the effect by calling ml_combine_effects(), but ml_combine_effects() simply does not copy replay.length into the ff_effect struct that is passed to our foo_ff_play (at least not if the effect type is FF_RUMBLE): https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L286
That's why we cannot read out the ff_replay data in our foo_ff_play function.
Okay, replay is empty - so how can we determine how long to play the effect (e.g. FF_RUMBLE)?
It looks like the replay structure is something we do not even need to care about. Yes, fftest sets the length and then uploads the effect to the driver, but if we take a look at ml_ff_upload (https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L481), we can see the following:
if (test_bit(FF_EFFECT_STARTED, &state->flags)) {
        __clear_bit(FF_EFFECT_PLAYING, &state->flags);
        state->play_at = jiffies +
                msecs_to_jiffies(state->effect->replay.delay);
        state->stop_at = state->play_at +
                msecs_to_jiffies(state->effect->replay.length);
        state->adj_at = state->play_at;
        ml_schedule_timer(ml);
}
That means that the duration is already handled by the input subsystem. It starts the effect and also stops it as needed.
Furthermore, we can see at https://elixir.free-electrons.com/linux/v4.4/source/include/uapi/linux/input.h#L279 that
/*
* All duration values are expressed in ms. Values above 32767 ms (0x7fff)
* should not be used and have unspecified results.
*/
That means an effect never needs to play for more than 32767 ms on its own. Everything else (stopping the effect earlier) is up to the scheduler in the memless core - which is not our part :D
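A minimal sketch of a play callback built on that observation (foo_start_rumble/foo_stop_rumble are hypothetical hardware helpers, not part of the input API): the memless core re-invokes the callback once the replay time has elapsed, with the rumble magnitudes combined down to zero, so the driver only ever starts or stops the motors.

#include <linux/input.h>

/* hypothetical hardware helpers */
static void foo_start_rumble(struct input_dev *dev, u16 strong, u16 weak);
static void foo_stop_rumble(struct input_dev *dev);

static int foo_ff_play(struct input_dev *dev, void *data,
                       struct ff_effect *effect)
{
        u16 strong = effect->u.rumble.strong_magnitude;
        u16 weak = effect->u.rumble.weak_magnitude;

        if (strong || weak)
                foo_start_rumble(dev, strong, weak);
        else
                foo_stop_rumble(dev);
        return 0;
}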

Identifying bug in linux kernel module

I am marking Michael's answer as accepted, as he was the first. Thank you to osgx and employee of the month for the additional information and assistance.
I am attempting to identify a bug in a consumer/producer kernel module. This is a problem given to me for a university course. My teaching assistant was not able to figure it out, and my professor said it was okay if I uploaded it online (he doesn't think Stack can figure it out!).
I have included the module, the makefile, and the Kbuild.
Running the program does not guarantee the bug will present itself.
I thought the issue was on line 30, since it is possible for a thread to rush to line 36 and starve the other threads. My professor said that is not what he is looking for.
Unrelated question: what is the purpose of line 40? It seems out of place to me, but my professor said it serves a purpose.
My professor said the bug is very subtle. The bug is not deadlock.
My approach was to identify critical sections and shared variables, but I'm stumped. I am not familiar with tracing (as a method of debugging), and was told that while it may help, it is not necessary to identify the issue.
File: final.c
#include <linux/completion.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/module.h>
static int actor_kthread(void *);
static int writer_kthread(void *);
static DECLARE_COMPLETION(episode_cv);
static DEFINE_SPINLOCK(lock);
static int episodes_written;
static const int MAX_EPISODES = 21;
static bool show_over;
static struct task_info {
        struct task_struct *task;
        const char *name;
        int (*threadfn) (void *);
} task_info[] = {
        {.name = "Liz", .threadfn = writer_kthread},
        {.name = "Tracy", .threadfn = actor_kthread},
        {.name = "Jenna", .threadfn = actor_kthread},
        {.name = "Josh", .threadfn = actor_kthread},
};

static int actor_kthread(void *data) {
        struct task_info *actor_info = (struct task_info *)data;

        spin_lock(&lock);
        while (!show_over) {
                spin_unlock(&lock);
                wait_for_completion_interruptible(&episode_cv); //Line 30
                spin_lock(&lock);
                while (episodes_written) {
                        pr_info("%s is in a skit\n", actor_info->name);
                        episodes_written--;
                }
                reinit_completion(&episode_cv); // Line 36
        }
        pr_info("%s is done for the season\n", actor_info->name);
        complete(&episode_cv); //Why do we need this line?
        actor_info->task = NULL;
        spin_unlock(&lock);
        return 0;
}

static int writer_kthread(void *data) {
        struct task_info *writer_info = (struct task_info *)data;
        size_t ep_num;

        spin_lock(&lock);
        for (ep_num = 0; ep_num < MAX_EPISODES && !show_over; ep_num++) {
                spin_unlock(&lock);
                /* spend some time writing the next episode */
                schedule_timeout_interruptible(2 * HZ);
                spin_lock(&lock);
                episodes_written++;
                complete_all(&episode_cv);
        }
        pr_info("%s wrote the last episode for the season\n", writer_info->name);
        show_over = true;
        complete_all(&episode_cv);
        writer_info->task = NULL;
        spin_unlock(&lock);
        return 0;
}

static int __init tgs_init(void) {
        size_t i;

        for (i = 0; i < ARRAY_SIZE(task_info); i++) {
                struct task_info *info = &task_info[i];

                info->task = kthread_run(info->threadfn, info, info->name);
        }
        return 0;
}

static void __exit tgs_exit(void) {
        size_t i;

        spin_lock(&lock);
        show_over = true;
        spin_unlock(&lock);
        for (i = 0; i < ARRAY_SIZE(task_info); i++)
                if (task_info[i].task)
                        kthread_stop(task_info[i].task);
}
module_init(tgs_init);
module_exit(tgs_exit);
MODULE_DESCRIPTION("CS421 Final");
MODULE_LICENSE("GPL");
File: Kbuild
obj-m := final.o
File: Makefile
# Basic Makefile to pull in kernel's KBuild to build an out-of-tree
# kernel module
KDIR ?= /lib/modules/$(shell uname -r)/build

all: modules

clean modules:
	$(MAKE) -C $(KDIR) M=$(shell pwd) $@
When cleaning up in tgs_exit(), the function executes the following without holding the spinlock:
if (task_info[i].task)
        kthread_stop(task_info[i].task);
It's possible for a thread that's ending to set its task_info[i].task to NULL between the check and the call to kthread_stop().
I'm quite confused here.
You claim this is a question from an upcoming exam and that it was released by the person delivering the course. Why would they do that? Then you say the TA failed to solve the problem. If the TA can't do it, who can expect students to pass?
(professor) doesn't think Stack can figure it out
If the claim is that the level on this website is bad, I definitely agree. But still, claiming it is below the level to be expected of a random university is a stretch. If there is no claim of the sort, I once more ask how students are expected to do it. And what if the problem gets solved?
The code itself is imho unsuitable for teaching as it deviates too much from common idioms.
Another answer here noted one side effect of the actual problem: namely, the loop in tgs_exit can race with threads exiting on their own, so that a ->task pointer tests non-NULL but becomes NULL just afterwards. The discussion of whether this can result in a kthread_stop(NULL) call is not really relevant.
Either a kernel thread exiting on its own cleans everything up, OR kthread_stop (and maybe something else) is necessary to do it.
If the former is true, the code suffers from a possible use-after-free. After tgs_exit tests the pointer, the target thread could have exited, maybe prior to the kthread_stop call or maybe just as it was executing. Either way, it is possible that the passed pointer is stale, as the area was already freed by the exiting thread.
If the latter is true, the code suffers from resource leaks due to insufficient cleanup - there are no kthread_stop calls if tgs_exit is executed after all the threads have exited.
The kthread_* API allows threads to just exit, hence the effects are as described in the first variant.
For the sake of argument, let's say the code is compiled into the kernel (as opposed to being loaded as a module), and say the exit func is called on shutdown.
There is a design problem in that there are two exit mechanisms, and it turns into a bug because they are not coordinated. A possible solution for this case would set a flag for writers to stop and then wait for a writer counter to drop to 0, as sketched below.
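A minimal sketch of that idea (the names are hypothetical, not from the module; this assumes the compiled-in case described above):

#include <linux/atomic.h>
#include <linux/completion.h>

static atomic_t writers_alive = ATOMIC_INIT(1); /* this module has one writer */
static DECLARE_COMPLETION(writers_gone);

static int writer_kthread(void *data)
{
        /* ... write episodes until show_over ... */
        if (atomic_dec_and_test(&writers_alive))
                complete(&writers_gone);
        return 0;
}

static void shutdown_writers(void)
{
        show_over = true;                   /* flag for the writers to stop */
        wait_for_completion(&writers_gone); /* wait for the count to drain */
}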
The fact that the code is in a module makes the problem more acute: unless you kthread_stop, you can't tell whether the target thread is gone. In particular, the "actor" threads do:
actor_info->task = NULL;
So the thread is skipped in the exit handler, which can then finish and let the kernel unload the module itself...
spin_unlock(&lock);
return 0;
... but this code (located in the module!) has possibly not executed yet.
This would not have happened if the code had followed the idiom of always using kthread_stop.
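For contrast, a minimal sketch of that idiom (not the intended course answer, just the standard pattern): the threads never clear their own task pointers and only return once asked to, which makes kthread_stop() in the exit handler always safe:

#include <linux/kthread.h>
#include <linux/sched.h>

static int actor_kthread(void *data)
{
        while (!kthread_should_stop()) {
                /* do the actual work, sleep interruptibly, etc. */
                schedule_timeout_interruptible(HZ);
        }
        return 0;
}

static void __exit tgs_exit(void)
{
        size_t i;

        /* kthread_stop() signals the thread and waits for it to exit,
         * so the task_struct cannot disappear underneath us */
        for (i = 0; i < ARRAY_SIZE(task_info); i++)
                kthread_stop(task_info[i].task);
}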
Another issue is that the writer wakes everyone up (the so-called "thundering herd" problem), as opposed to at most one actor.
Perhaps the bug one is supposed to find is that each episode has at most one actor? Or maybe that the module can exit while there are episodes written but not yet acted out?
The code is extremely weird, and if you were shown a reasonable implementation of a thread-safe queue in userspace, you should see how what's presented here does not fit. For instance, why does it block immediately without checking for episodes?
Also, fun fact: the locking around the write to show_over plays no role in correctness.
There are more issues, and it is quite likely I missed some. As it stands, I think the question is of poor quality; it does not look like anything real-world.

Inserting a PID in the Linux Hash-Table

Currently I'm working on a Linux kernel module that can hide any normal process.
The hiding works fine, but I haven't found a way to unhide the process yet.
First I delete the struct task_struct from the big list of task_structs inside the Kernel:
struct task_struct *p;

// Find the task_struct for the given pid
for_each_process(p)
        if (p->pid == pid) {
                // Unlink the task_struct from the task list
                struct list_head *next = p->tasks.next;
                struct list_head *prev = p->tasks.prev;

                next->prev = prev;
                prev->next = next;
        }
But the task_struct is still traceable, because it's inside a hash table containing every process's task_struct. In fact, most PID lookups are performed through this hash table. Removing the task_struct from there is a bit tricky:
struct pid *pid; // struct pid of the task_struct
int i;

// For every pid_namespace level, delete the pid_chain from the hash list
for (i = 0; i <= pid->level; i++) {
        struct upid *upid = pid->numbers + i;

        hlist_del_rcu(&upid->pid_chain);
}
The problem is restoring both structs: inserting the task_struct back into the list of task_structs is easy, but I haven't found a way to restore the link in the hash table. It's difficult, because the kernel doesn't expose the needed structures.
Inside the kernel, it's done with this line:
hlist_add_head_rcu(&upid->pid_chain, &pid_hash[pid_hashfn(upid->nr, upid->ns)]);
pid_hashfn and pid_hash are defined as follows:
#define pid_hashfn(nr, ns) hash_long((unsigned long)nr + (unsigned long)ns, pidhash_shift)
static struct hlist_head *pid_hash;
And the struct I need to insert is the pid_chain:
struct hlist_node pid_chain;
So my question is: how can I insert the pid_chain into the correct hash list?
Is there a way to obtain a reference to the hash-list array, even though it's declared static?
Or, maybe an uncommon idea: the hash list is allocated via
pid_hash = alloc_large_system_hash("PID", sizeof(*pid_hash), 0, 18,HASH_EARLY | HASH_SMALL, &pidhash_shift, NULL,0, 4096);
So, if I could get the starting address of the hash list's memory, could I scan the corresponding memory space for the pointer to my struct and then cast the surrounding memory region to a struct hlist_head?
Thanks for your help. Every solution or idea is appreciated :)
The hash list's address is available in the System.map file; you can check that.
pid_hash can also be located in /proc/kallsyms, and it is accessible programmatically via kallsyms_lookup_name().
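A minimal sketch of re-inserting the pid_chain that way, assuming a kernel where kallsyms_lookup_name() is still exported to modules (it no longer is on recent kernels); it re-derives the bucket from the pid_hashfn definition shown above:

#include <linux/kallsyms.h>
#include <linux/hash.h>
#include <linux/pid.h>

static void rehash_pid(struct pid *pid)
{
        /* pid_hash and pidhash_shift are static, so read them via kallsyms */
        struct hlist_head *hash =
                *(struct hlist_head **)kallsyms_lookup_name("pid_hash");
        unsigned int shift =
                *(unsigned int *)kallsyms_lookup_name("pidhash_shift");
        int i;

        for (i = 0; i <= pid->level; i++) {
                struct upid *upid = pid->numbers + i;
                unsigned long bucket = hash_long((unsigned long)upid->nr +
                                (unsigned long)upid->ns, shift);

                hlist_add_head_rcu(&upid->pid_chain, &hash[bucket]);
        }
}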

Is clock_gettime() adequate for submicrosecond timing?

I need a high-resolution timer for the embedded profiler in the Linux build of our application. Our profiler measures scopes as small as individual functions, so it needs a timer precision of better than 25 nanoseconds.
Previously our implementation used inline assembly and the rdtsc operation to query the high-frequency timer from the CPU directly, but this is problematic and requires frequent recalibration.
So I tried using the clock_gettime function instead to query CLOCK_PROCESS_CPUTIME_ID. The docs allege this gives me nanosecond timing, but I found that the overhead of a single call to clock_gettime() was over 250ns. That makes it impossible to time events 100ns long, and having such high overhead on the timer function seriously drags down app performance, distorting the profiles beyond value. (We have hundreds of thousands of profiling nodes per second.)
Is there a way to call clock_gettime() that has less than ¼μs overhead? Or is there some other way that I can reliably get the timestamp counter with <25ns overhead? Or am I stuck with using rdtsc?
Below is the code I used to time clock_gettime().
// calls gettimeofday() to return wall-clock time in seconds:
extern double Get_FloatTime();

enum { TESTRUNS = 1024*1024*4 };

// time the high-frequency timer against the wall clock
{
    double fa = Get_FloatTime();
    timespec spec;
    clock_getres( CLOCK_PROCESS_CPUTIME_ID, &spec );
    printf("CLOCK_PROCESS_CPUTIME_ID resolution: %ld sec %ld nano\n",
           spec.tv_sec, spec.tv_nsec );
    for ( int i = 0 ; i < TESTRUNS ; ++i )
    {
        clock_gettime( CLOCK_PROCESS_CPUTIME_ID, &spec );
    }
    double fb = Get_FloatTime();
    printf( "clock_gettime %d iterations : %.6f msec %.3f microsec / call\n",
            TESTRUNS, ( fb - fa ) * 1000.0, (( fb - fa ) * 1000000.0) / TESTRUNS );
}
// and so on for CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_THREAD_CPUTIME_ID.
Results:
CLOCK_PROCESS_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 3115.784947 msec 0.371 microsec / call
CLOCK_MONOTONIC resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2505.122119 msec 0.299 microsec / call
CLOCK_REALTIME resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2456.186031 msec 0.293 microsec / call
CLOCK_THREAD_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 8388608 iterations : 2956.633930 msec 0.352 microsec / call
This is on a standard Ubuntu kernel. The app is a port of a Windows app (where our rdtsc inline assembly works just fine).
Addendum:
Does x86-64 GCC have some intrinsic equivalent to __rdtsc(), so I can at least avoid inline assembly?
No. You'll have to use platform-specific code to do it. On x86 and x86-64, you can use 'rdtsc' to read the Time Stamp Counter. (Newer GCC versions do ship a __rdtsc() intrinsic in <x86intrin.h>, but it is equally platform-specific.)
Just port the rdtsc assembly you're using.
#include <stdint.h>

__inline__ uint64_t rdtsc(void) {
    uint32_t lo, hi;
    /* serialize the pipeline so earlier instructions don't skew the reading */
    __asm__ __volatile__ (
        "xorl %%eax,%%eax \n cpuid"
        ::: "%rax", "%rbx", "%rcx", "%rdx");
    /* We cannot use "=A", since this would use %rax on x86_64 and
       return only the lower 32 bits of the TSC */
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (uint64_t)hi << 32 | lo;
}
This is what happens when you call clock_gettime(): based on the clock you choose, it calls the corresponding function (from a vclock_gettime.c file in the kernel):
int clock_gettime(clockid_t, struct __kernel_old_timespec *)
        __attribute__((weak, alias("__vdso_clock_gettime")));

notrace int
__vdso_clock_gettime_stick(clockid_t clock, struct __kernel_old_timespec *ts)
{
        struct vvar_data *vvd = get_vvar_data();

        switch (clock) {
        case CLOCK_REALTIME:
                if (unlikely(vvd->vclock_mode == VCLOCK_NONE))
                        break;
                return do_realtime_stick(vvd, ts);
        case CLOCK_MONOTONIC:
                if (unlikely(vvd->vclock_mode == VCLOCK_NONE))
                        break;
                return do_monotonic_stick(vvd, ts);
        case CLOCK_REALTIME_COARSE:
                return do_realtime_coarse(vvd, ts);
        case CLOCK_MONOTONIC_COARSE:
                return do_monotonic_coarse(vvd, ts);
        }
        /*
         * Unknown clock ID ? Fall back to the syscall.
         */
        return vdso_fallback_gettime(clock, ts);
}
CLOCK_MONOTONIC is better (though I use CLOCK_MONOTONIC_RAW), since it is not affected by NTP time adjustments.
This is how do_monotonic_stick is implemented inside the kernel:
notrace static __always_inline int do_monotonic_stick(struct vvar_data *vvar,
                struct __kernel_old_timespec *ts)
{
        unsigned long seq;
        u64 ns;

        do {
                seq = vvar_read_begin(vvar);
                ts->tv_sec = vvar->monotonic_time_sec;
                ns = vvar->monotonic_time_snsec;
                ns += vgetsns_stick(vvar);
                ns >>= vvar->clock.shift;
        } while (unlikely(vvar_read_retry(vvar, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return 0;
}
And the vgetsns() function, which provides the nanosecond-resolution part, is implemented as:
notrace static __always_inline u64 vgetsns(struct vvar_data *vvar)
{
        u64 v;
        u64 cycles;

        cycles = vread_tick();
        v = (cycles - vvar->clock.cycle_last) & vvar->clock.mask;
        return v * vvar->clock.mult;
}
Where the function vread_tick() reads the cycle counter from a CPU register (this particular implementation is from the SPARC vDSO and reads the %tick register; the x86 equivalent uses rdtsc):
notrace static __always_inline u64 vread_tick(void)
{
        register unsigned long long ret asm("o4");

        __asm__ __volatile__("rd %%tick, %L0\n\t"
                             "srlx %L0, 32, %H0"
                             : "=r" (ret));
        return ret;
}
A single call to clock_gettime() takes around 20 to 100 nanoseconds. Reading the TSC via rdtsc and converting the cycles to time yourself is always faster.
I have done some experiments with CLOCK_MONOTONIC_RAW here: Unexpected periodic behaviour of an ultra low latency hard real time multi threaded x86 code
I need a high-resolution timer for the embedded profiler in the Linux build of our application. Our profiler measures scopes as small as individual functions, so it needs a timer precision of better than 25 nanoseconds.
Have you considered oprofile or perf? You can use the performance counter hardware on your CPU to get profiling data without adding instrumentation to the code itself. You can see data per-function, or even per-line-of-code. The "only" drawback is that it won't measure wall clock time consumed, it will measure CPU time consumed, so it's not appropriate for all investigations.
Give clockid_t CLOCK_MONOTONIC_RAW a try?
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
        Similar to CLOCK_MONOTONIC, but provides access to a
        raw hardware-based time that is not subject to NTP
        adjustments or the incremental adjustments performed by
        adjtime(3).
From Man7.org
It's hard to give a globally applicable answer because the hardware and software implementation will vary widely.
However, yes, most modern platforms will have a suitable clock_gettime call that is implemented purely in user-space using the VDSO mechanism, and will in my experience take 20 to 30 nanoseconds to complete (but see Wojciech's comment below about contention).
Internally, this is using rdtsc or rdtscp for the fine-grained portion of the time-keeping, plus adjustments to keep this in sync with wall-clock time (depending on the clock you choose) and a multiplication to convert from whatever units rdtsc has on your platform to nanoseconds.
Not all of the clocks offered by clock_gettime will implement this fast method, and it's not always obvious which ones do. Usually CLOCK_MONOTONIC is a good option, but you should test this on your own system.
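A minimal sketch for testing that on your own system: it times back-to-back clock_gettime() calls and reports the smallest delta seen, which approximates the per-call overhead:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;
    long long best = 1000000000LL;

    for (int i = 0; i < 1000000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &a);
        clock_gettime(CLOCK_MONOTONIC, &b);
        long long d = (b.tv_sec - a.tv_sec) * 1000000000LL
                    + (b.tv_nsec - a.tv_nsec);
        if (d < best)
            best = d;
    }
    printf("min back-to-back clock_gettime delta: %lld ns\n", best);
    return 0;
}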
You are calling clock_gettime() with a control parameter, which means the API is branching through an if-else tree to see what kind of time you want. I know you can't avoid that with this call, but see if you can dig into the system code and call what the kernel eventually calls directly. Also, I note that you are including the loop overhead (i++ and the conditional branch) in your measurement.

How to get the current time in native Android code?

I was wondering if there is an easy way to get the current time in native Android code. Optimally it would be something comparable to System.currentTimeMillis(). I will only be using it to see how long certain function calls take, so a long variable with the current time in milliseconds would be the optimal solution for me.
Thanks in advance!
For the lazy, add this to the top of your code:
#include <time.h>
// from android samples
/* return current time in milliseconds */
static double now_ms(void) {
    struct timespec res;
    clock_gettime(CLOCK_REALTIME, &res);
    return 1000.0 * res.tv_sec + (double) res.tv_nsec / 1e6;
}
Call it like this:
double start = now_ms(); // start time
// YOUR CODE HERE
double end = now_ms(); // finish time
double delta = end - start; // time your code took to exec in ms
For microsecond resolution you can use gettimeofday(). This uses "wall clock time", which continues to advance when the device is asleep, but is subject to sudden shifts forward or backward if the network updates the device's clock.
You can also use clock_gettime(CLOCK_MONOTONIC). This uses the monotonic clock, which never leaps forward or backward, but stops counting when the device sleeps.
The actual resolution of the timers is device-dependent.
Both of these are POSIX APIs, not Android-specific.
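For example, a minimal microsecond counterpart to the now_ms() helper above, using gettimeofday() (a sketch, not from the Android samples):

#include <stdint.h>
#include <sys/time.h>

/* return current wall-clock time in microseconds */
static int64_t now_us(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
}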
Another one for the lazy: this function returns the current time in nanoseconds, using CLOCK_MONOTONIC.
#include <stdint.h>
#include <time.h>

#define NANOS_IN_SECOND 1000000000LL

static int64_t currentTimeInNanos() {
    struct timespec res;
    clock_gettime(CLOCK_MONOTONIC, &res);
    /* widen tv_sec before multiplying, so the product doesn't overflow
       a 32-bit long */
    return (int64_t) res.tv_sec * NANOS_IN_SECOND + res.tv_nsec;
}
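Usage mirrors the millisecond example above:

int64_t start = currentTimeInNanos(); // start time
// YOUR CODE HERE
int64_t end = currentTimeInNanos();   // finish time
int64_t deltaNanos = end - start;     // time your code took, in ns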
