Is there an easy way to get the current time in native Android code? Ideally it would be something comparable to System.currentTimeMillis() in Java. I will only be using it to see how long certain function calls take, so a long variable holding the current time in milliseconds would be ideal for me.
Thanks in advance!
For the lazy, add this to the top of your code:
#include <time.h>
// from android samples
/* return current time in milliseconds */
static double now_ms(void) {
    struct timespec res;
    clock_gettime(CLOCK_REALTIME, &res);
    return 1000.0 * res.tv_sec + (double) res.tv_nsec / 1e6;
}
Call it like this:
double start = now_ms(); // start time
// YOUR CODE HERE
double end = now_ms(); // finish time
double delta = end - start; // time your code took to exec in ms
For microsecond resolution you can use gettimeofday(). This uses "wall clock time", which continues to advance when the device is asleep, but is subject to sudden shifts forward or backward if the network updates the device's clock.
You can also use clock_gettime(CLOCK_MONOTONIC). This uses the monotonic clock, which never leaps forward or backward, but stops counting when the device sleeps.
The actual resolution of the timers is device-dependent.
Both of these are POSIX APIs, not Android-specific.
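For example, a microsecond helper built on gettimeofday() could look like this (just a sketch; the name now_us is mine):
#include <sys/time.h>

/* Wall-clock time in microseconds; subject to clock adjustments as noted above. */
static long long now_us(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long) tv.tv_sec * 1000000LL + tv.tv_usec;
}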
Another one for the lazy: this function returns the current time in nanoseconds, using CLOCK_MONOTONIC. Note that the result needs a 64-bit integer, since a plain long overflows on 32-bit targets.
#include <stdint.h>
#include <time.h>
#define NANOS_IN_SECOND 1000000000LL
static int64_t currentTimeInNanos() {
    struct timespec res;
    clock_gettime(CLOCK_MONOTONIC, &res);
    return (int64_t) res.tv_sec * NANOS_IN_SECOND + res.tv_nsec;
}
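Usage follows the same pattern as the millisecond snippet earlier; just keep the delta in a 64-bit variable:
int64_t start = currentTimeInNanos();  // start time
// YOUR CODE HERE
int64_t end = currentTimeInNanos();    // finish time
int64_t delta = end - start;           // elapsed time in nanoseconds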
How can I calculate the amount of processing time used by a process in C on Linux? Specifically, I want to determine how much time elapses when encrypting a file using openssl.
The easiest way for you to do this is by using the clock() function from <time.h> to report the amount of CPU time used by the calling process.
From SUSv4:
The clock() function shall return the implementation's best
approximation to the processor time used by the process since the
beginning of an implementation-defined era related only to the process
invocation.
RETURN VALUE
To determine the time in seconds, the value returned by clock() should
be divided by the value of the macro CLOCKS_PER_SEC. If the processor
time used is not available or its value cannot be represented,
the function shall return the value (clock_t)-1.
Try the following:
clock_t start, end;
double cpu_time_used;
start = clock();
/* Do encrypting ... */
end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
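Here is the same measurement as a self-contained sketch; the busy loop is only a placeholder for the OpenSSL encryption call:
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start, end;
    double cpu_time_used;
    volatile unsigned long sink = 0;   /* volatile so the loop is not optimized away */
    unsigned long i;

    start = clock();
    for (i = 0; i < 100000000UL; i++)  /* placeholder workload; do the encryption here */
        sink += i;
    end = clock();

    cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
    printf("CPU time used: %f seconds\n", cpu_time_used);
    return 0;
}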
Could someone clear up the following doubts I have about this code?
1) What is this clock()? How does it work?
2) I have never seen a ; directly after a for statement. When do I need to use a semicolon after for, and will it then loop over only the next line of code, int tick= clock();?
3) Is he converting tick to float by doing this: (float)tick? Can I do this with every variable I first declare as short, int, long, or long long, and then convert to float just by writing (float)variable_name?
Thanks in advance
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
int main()
{
    int i;
    for(i=0; i<10000000; i++);
    int tick= clock();
    printf("%f", (float)tick/ CLOCKS_PER_SEC);
    return 0;
}
1) What is this clock()? How does it work?
It reads the CPU clock of your program, i.e. how much processor time your program has consumed so far (not wall-clock time). The value is scaled by CLOCKS_PER_SEC; the actual granularity is implementation-defined and is usually much coarser than a single CPU cycle.
2) I have never seen a ; directly after a for statement. When do I need to use a semicolon after for, and will it then loop over only the next line of code, int tick= clock();?
No. The ';' gives the for loop an empty body, so the loop just spins on its own and does not repeat the following line. This is fragile, though, because a loop with no observable effect can be optimized away entirely, in which case the number of cycles it takes drops to zero.
So "he" [hopes he] counts from 0 to 10 million.
3) Is he converting tick to float by doing this: (float)tick? Can I do this with every variable I first declare as short, int, long, or long long, and then convert to float just by writing (float)variable_name?
Yes. Three problems though:
a) he uses int for tick, which is wrong; he should use clock_t.
b) clock_t is likely wider than 32 bits, and a float cannot even represent every 32-bit integer exactly.
c) printf() takes a double for %f anyway, so converting to float gains nothing: the value is promoted right back to double after having already lost precision.
So this would be better:
clock_t tick = clock();
printf("%f", (double)tick / CLOCKS_PER_SEC);
Now, this is fine if you want to know the total CPU time your program has used so far. If you'd like to know how long a certain piece of code takes, read clock() once before and once after that code; the difference is the amount of CPU time your code took to run.
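A sketch of that before/after pattern (your_code() is just a placeholder for whatever you want to time):
#include <stdio.h>
#include <time.h>

void your_code(void);   /* placeholder: the code being measured */

void measure(void)
{
    clock_t before = clock();
    your_code();
    clock_t after = clock();

    /* CPU time consumed by your_code(), in seconds */
    printf("%f\n", (double) (after - before) / CLOCKS_PER_SEC);
}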
I am trying to set up a periodic timer that triggers a function every second, but there is a small drift between each call. After some investigation, I found that it is the add_timer() call that adds an offset of 2 to the expires field (~2 ms in my case).
Why is this drift added? Is there a clean way to prevent it? I am not after exact millisecond precision, and I have a vague understanding of the kernel's real-time limitations, but I would at least like to avoid this intentional delay being added on every call.
Here is the output from a test module. Each couple of numbers is the value of the expires field just before and after the call:
[100047.127123] Init timer 1000
[100048.127986] Expired timer 99790884 99790886
[100049.129578] Expired timer 99791886 99791888
[100050.131146] Expired timer 99792888 99792890
[100051.132728] Expired timer 99793890 99793892
[100052.134315] Expired timer 99794892 99794894
[100053.135882] Expired timer 99795894 99795896
[100054.137411] Expired timer 99796896 99796898
[...]
[100071.164276] Expired timer 99813930 99813932
[100071.529455] Exit timer
And here is the source:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/time.h>
static struct timer_list t;
static void timer_func(unsigned long data)
{
    unsigned long pre, post;

    t.expires = jiffies + HZ;
    pre = t.expires;
    add_timer(&t);
    post = t.expires;
    printk("Expired timer %lu %lu\n", pre, post);
}

static int __init timer_init(void)
{
    init_timer(&t);
    t.function = timer_func;
    t.expires = jiffies + HZ;
    add_timer(&t);
    printk("Init timer %d\n", HZ);
    return 0;
}

static void __exit timer_exit(void)
{
    del_timer(&t);
    printk("Exit timer\n");
}
module_init(timer_init);
module_exit(timer_exit);
I found the cause. Let's trace the add_timer function:
The add_timer function calls:
mod_timer(timer, timer->expires);
The mod_timer function calls:
expires = apply_slack(timer, expires);
and then goes on to actually modify the timer.
The apply_slack function says:
/*
* Decide where to put the timer while taking the slack into account
*
* Algorithm:
* 1) calculate the maximum (absolute) time
* 2) calculate the highest bit where the expires and new max are different
* 3) use this bit to make a mask
* 4) use the bitmask to round down the maximum time, so that all last
* bits are zeros
*/
Before continuing, let's see what the timer's slack is. The init_timer macro eventually calls do_init_timer, which sets the slack to -1 by default.
With this knowledge, let's reduce apply_slack and see what remains of it:
static inline
unsigned long apply_slack(struct timer_list *timer, unsigned long expires)
{
    unsigned long expires_limit, mask;
    int bit;

    if (timer->slack >= 0) {
        expires_limit = expires + timer->slack;
    } else {
        long delta = expires - jiffies;

        if (delta < 256)
            return expires;

        expires_limit = expires + delta / 256;
    }
    mask = expires ^ expires_limit;
    if (mask == 0)
        return expires;

    bit = find_last_bit(&mask, BITS_PER_LONG);
    mask = (1 << bit) - 1;

    expires_limit = expires_limit & ~(mask);

    return expires_limit;
}
The first if, checking for timer->slack >= 0, fails, so the else branch is taken. In that branch, the difference between expires and jiffies is slightly less than HZ (you just did t.expires = jiffies + HZ). With your data HZ is 1000 (see the "Init timer 1000" line), so delta is about 1000: it is not below 256, and delta / 256 is non-zero (about 3).
This in turn implies that mask (which is expires ^ expires_limit) is not zero. The rest depends on the exact value of expires, but the expiration time certainly gets changed.
So there you have it: since slack is automatically set to -1, apply_slack rounds your expires time, presumably so that timers with nearby deadlines can be batched together.
If you don't want this slack, you can set t.slack = 0; when you are initializing the timer in timer_init.
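For example, the init function above could become (a sketch, assuming a kernel version where struct timer_list still has the slack field and init_timer()):
static int __init timer_init(void)
{
    init_timer(&t);
    t.function = timer_func;
    t.expires = jiffies + HZ;
    t.slack = 0;    /* opt out of the rounding applied by apply_slack() */
    add_timer(&t);
    printk("Init timer %d\n", HZ);
    return 0;
}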
This is the old answer! It doesn't address the issue in your question, but it does point out a problem with what you are trying to achieve nonetheless: a periodic function.
Let's visualize your program in a timeline (assuming start time 1000 and HZ=50 with imaginary time units):
time (jiffies)   event
1000             in timer_init(): t.expires = jiffies + HZ; // t.expires == 1050
1050             timer_func() is called by timer
1052             in timer_func(): t.expires = jiffies + HZ; // t.expires == 1102
1102             timer_func() is called by timer
1104             in timer_func(): t.expires = jiffies + HZ; // t.expires == 1154
I hope you see where this is going! The problem is that there is a delay between the time the timer expires and the time you calculate when the next expiration should be. That's where the drift comes from. The drift could get even larger, by the way, if the system is busy and your function call is delayed.
The fix is very easy. The problem is that you update t.expires from jiffies, which is the current time. What you should do instead is advance t.expires from the last time it expired (a value that is already in t.expires!).
So, in your timer_func function, instead of:
t.expires = jiffies + HZ;
simply do:
t.expires += HZ;
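With that one-line change, the handler from the module above becomes:
static void timer_func(unsigned long data)
{
    unsigned long pre, post;

    t.expires += HZ;    /* re-arm relative to the previous deadline, not the current jiffies */
    pre = t.expires;
    add_timer(&t);
    post = t.expires;
    printk("Expired timer %lu %lu\n", pre, post);
}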
How can I check how long a process spends waiting for the CPU in a Linux box?
For example, in a loaded system I want to check how long a SQL*Loader (sqlldr) process waits.
It would be useful if there is a command line tool to do this.
I've quickly slapped this together. It prints out the smallest and largest "interferences" from task switching...
#include <sys/time.h>
#include <stdio.h>

double seconds(void)
{
    struct timeval t;
    gettimeofday(&t, NULL);
    return t.tv_sec + t.tv_usec / 1000000.0;
}

int main(void)
{
    double min = 999999999, max = 0;
    while (1)
    {
        /* time two back-to-back calls; the gap grows when the process is preempted */
        double before = seconds();
        double after = seconds();
        double c = after - before;
        if (c < min)
        {
            min = c;
            printf("%f\n", c);
            fflush(stdout);
        }
        if (c > max)
        {
            max = c;
            printf("%f\n", c);
            fflush(stdout);
        }
    }
    return 0;
}
Here's how you should go about measuring it. Have a number of processes, greater than your number of processors * cores * threading capability, wait (block) on an event that will wake them all up at the same time. One such event is a multicast network packet. Use an instrumentation library like PAPI (or one more suited to your needs) to measure the differences in real and virtual "wakeup" time between your processes. From several iterations of the experiment you can get an estimate of the CPU contention time for your processes. Obviously, it's not going to be at all accurate for multicore processors, but maybe it'll help you.
Cheers.
I had this problem some time back. I ended up using getrusage.
You can get detailed help at:
http://www.opengroup.org/onlinepubs/009695399/functions/getrusage.html
getrusage populates the rusage struct.
Measuring Wait Time with getrusage
You can call getrusage at the beginning of your code and then again at the end, or at any appropriate point during execution. You then have initial_rusage and final_rusage. The user time spent by your process is indicated by rusage->ru_utime.tv_sec and the system time by rusage->ru_stime.tv_sec (both fields are struct timeval, so tv_usec carries the sub-second part if you need more precision).
Thus the total user-time spent by the process will be:
user_time = final_rusage.ru_utime.tv_sec - initial_rusage.ru_utime.tv_sec
The total system-time spent by the process will be:
system_time = final_rusage.ru_stime.tv_sec - initial_rusage.ru_stime.tv_sec
If total_time is the time elapsed between the two calls of getrusage then the wait time will be
wait_time = total_time - (user_time + system_time)
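Putting those three formulas together as a minimal sketch for a process measuring itself (the wall-clock total comes from gettimeofday() here; the helper name tv_to_sec is mine):
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

/* Convert a struct timeval to seconds as a double. */
static double tv_to_sec(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    struct rusage initial_rusage, final_rusage;
    struct timeval wall_start, wall_end;

    getrusage(RUSAGE_SELF, &initial_rusage);
    gettimeofday(&wall_start, NULL);

    /* ... the work you want to measure goes here ... */

    gettimeofday(&wall_end, NULL);
    getrusage(RUSAGE_SELF, &final_rusage);

    double user_time   = tv_to_sec(final_rusage.ru_utime) - tv_to_sec(initial_rusage.ru_utime);
    double system_time = tv_to_sec(final_rusage.ru_stime) - tv_to_sec(initial_rusage.ru_stime);
    double total_time  = tv_to_sec(wall_end) - tv_to_sec(wall_start);
    double wait_time   = total_time - (user_time + system_time);

    printf("user %.3f s, system %.3f s, wait %.3f s\n", user_time, system_time, wait_time);
    return 0;
}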
Hope this helps
The Perl module Proc::ProcessTable occasionally reports the pctcpu attribute as 'inf', 'nan', or a value greater than 100. Why does it do this? And are there any guidelines on how to deal with this kind of information?
We have observed this on various platforms including Linux 2.4 running on 8 logical processors.
I would guess that 'inf' or 'nan' is the result of some impossibly large value or a divide by zero.
For values greater than 100, could this possibly mean that more than one processor was used?
And for dealing with this information, is the best practice merely marking the data point as untrustworthy and normalizing to 100%?
I do not know why that happens and I cannot stress test the module right now trying to generate such cases.
However, a principle I have followed in all my research is not to replace data I know to be nonsense with something that looks reasonable. You basically have missing observations and you should treat them as such. I would not attach a numerical value at all, so as not to pretend I have information when in fact I do not.
Then, your statistics for the non-missing points will be meaningful and you can look at any patterns in the missing observations separately.
UPDATE: Looking at the calc_prec() function in the source code:
/* calc_prec()
*
* calculate the two cpu/memory precentage values
*/
static void calc_prec(char *format_str, struct procstat *prs, struct obstack *mem_pool)
{
    float pctcpu = 100.0f * (prs->utime / 1e6) / (time(NULL) - prs->start_time);

    /* calculate pctcpu - NOTE: This assumes the cpu time is in microsecond units! */
    sprintf(prs->pctcpu, "%3.2f", pctcpu);
    field_enable(format_str, F_PCTCPU);

    /* calculate pctmem */
    if (system_memory > 0) {
        sprintf(prs->pctmem, "%3.2f", (float) prs->rss / system_memory * 100.f);
        field_enable(format_str, F_PCTMEM);
    }
}
First, IMHO, it would be better to just divide by 1e4 rather than multiplying by 100.0f after the division. Second, it is possible (if polled immediately after process start) for the time delta to be 0. Third, I would have just done the whole thing in double.
As an aside, this function looks like a good example of why you should not have comments in code.
#include <stdio.h>
#include <time.h>

volatile float calc_percent(
    unsigned long utime,
    time_t now,
    time_t start
) {
    return 100.0f * (utime / 1e6) / (now - start);
}

int main(void) {
    printf("%3.2f\n", calc_percent(1e6, time(NULL), time(NULL)));
    printf("%3.2f\n", calc_percent(0, time(NULL), time(NULL)));
    return 0;
}
This outputs inf in the first case and nan in the second case when compiled with Cygwin gcc-4 on Windows. I do not know if this behavior is standard or just what happens with this particular combination of OS+compiler.
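If you want to guard against these cases at the source, one possibility (a sketch, not the module's actual code) is to refuse to produce a number when the elapsed time is zero or the result is not finite, which matches the "treat it as a missing observation" advice above:
#include <math.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical defensive variant: returns -1.0 to mean "no valid observation". */
static double calc_percent_safe(unsigned long utime, time_t now, time_t start)
{
    double elapsed = difftime(now, start);
    if (elapsed <= 0.0)
        return -1.0;

    double pct = 100.0 * (utime / 1e6) / elapsed;
    if (!isfinite(pct))
        return -1.0;
    return pct;
}

int main(void)
{
    printf("%3.2f\n", calc_percent_safe(1e6, time(NULL), time(NULL)));       /* -1.00: zero elapsed time */
    printf("%3.2f\n", calc_percent_safe(1e6, time(NULL) + 10, time(NULL)));  /* roughly 10.00 */
    return 0;
}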