What does pcpu signify and why multiply by 1000? - linux

I was reading about calculating the cpu usage of a process.
seconds = utime / Hertz
total_time = utime + stime
IF include_dead_children
total_time = total_time + cutime + cstime
ENDIF
seconds = uptime - starttime / Hertz
pcpu = (total_time * 1000 / Hertz) / seconds
print: "%CPU" pcpu / 10 "." pcpu % 10
What I don't get is this: by 'seconds', the algorithm seems to mean the time the computer spent doing operations other than the process we're interested in, including time before it started, since uptime is the time the computer has been operational and starttime is the time our process of interest started.
Then why are we dividing total_time by seconds [the time the computer spent doing something else] to get pcpu? It doesn't make sense.
The standard meanings of the variables:
# Name Description
14 utime CPU time spent in user code, measured in jiffies
15 stime CPU time spent in kernel code, measured in jiffies
16 cutime CPU time spent in user code, including time from children
17 cstime CPU time spent in kernel code, including time from children
22 starttime Time when the process started, measured in jiffies
/proc/uptime: The uptime of the system (seconds), and the amount of time spent in the idle process (seconds).
Hertz: Number of clock ticks per second

Now that you've provided what each of the variables represents, here are some comments on the pseudo-code:
seconds = utime / Hertz
The above line is pointless, as the new value of seconds is never used before it's overwritten a few lines later.
total_time = utime + stime
Total running time (user + system) of the process, in jiffies, since both utime and stime are.
IF include_dead_children
total_time = total_time + cutime + cstime
ENDIF
This should probably just say total_time = cutime + cstime, since the definitions seem to indicate that, e.g. cutime already includes utime, plus the time spent by children in user mode. So, as written, this overstates the value by including the contribution from this process twice. Or, the definition is wrong... Regardless, the total_time is still in jiffies.
seconds = uptime - starttime / Hertz
uptime is already in seconds; starttime / Hertz converts starttime from jiffies to seconds, so seconds becomes essentially "the time in seconds since this process was started".
pcpu = (total_time * 1000 / Hertz) / seconds
total_time is still in jiffies, so total_time / Hertz converts it to seconds: the number of CPU seconds consumed by the process. Dividing that by seconds (the elapsed time since the process started) gives the fraction of a CPU the process has used since it started. Because this is integer arithmetic, the unscaled division would lose everything after the decimal point, so the value is scaled by 1000 first, which makes pcpu a value expressed in tenths of a percent. The parentheses force the scaling to happen before the final division, preserving accuracy.
print: "%CPU" pcpu / 10 "." pcpu % 10
And this undoes the scaling, by taking the quotient and the remainder when dividing pcpu by 10, and printing those values in a format that looks like a floating-point percentage with one decimal place.
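To make the bookkeeping concrete, here is a minimal C sketch of the same calculation (just a sketch: error handling is minimal and the dead-children variant is left as a comment). It reads utime, stime and starttime (fields 14, 15 and 22) from /proc/<pid>/stat and the first value of /proc/uptime, then prints the same tenth-of-a-percent value that the pseudo-code's final line formats:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    long hertz = sysconf(_SC_CLK_TCK);   /* clock ticks (jiffies) per second */

    char path[64], buf[1024];
    snprintf(path, sizeof path, "/proc/%s/stat", argv[1]);
    FILE *fp = fopen(path, "r");
    if (!fp || !fgets(buf, sizeof buf, fp)) {
        perror("read /proc/<pid>/stat");
        return 1;
    }
    fclose(fp);

    /* The comm field (2) can contain spaces, so parse from the last ')'. */
    char *p = strrchr(buf, ')');
    unsigned long utime, stime;
    unsigned long long starttime;
    if (!p || sscanf(p + 2,
                     "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u "
                     "%lu %lu %*d %*d %*d %*d %*d %*d %llu",
                     &utime, &stime, &starttime) != 3) {
        fprintf(stderr, "unexpected /proc/<pid>/stat format\n");
        return 1;
    }

    double uptime;
    fp = fopen("/proc/uptime", "r");
    if (!fp || fscanf(fp, "%lf", &uptime) != 1) {
        perror("read /proc/uptime");
        return 1;
    }
    fclose(fp);

    /* total_time in jiffies; add cutime + cstime (fields 16, 17) here
       for the include_dead_children case. */
    unsigned long long total_time = utime + stime;

    /* Elapsed wall-clock seconds since the process started. */
    double seconds = uptime - (double)starttime / hertz;

    /* Scale by 1000 so the printed value keeps 0.1% resolution. */
    long pcpu = seconds > 0 ? (long)(total_time * 1000 / hertz / seconds) : 0;

    printf("%%CPU %ld.%ld\n", pcpu / 10, pcpu % 10);
    return 0;
}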

Related

How to get a duration of 1 day with Rust chrono?

I am dealing with some Rust code that works with durations of days, but the implementation of Duration::days(n) is, per the documentation, n * 24 * 60 * 60 seconds, which isn't n days, because not all days are 24 * 60 * 60 seconds long.
This behaviour is well documented:
pub fn days(days: i64) -> Duration
Makes a new Duration with given number of days. Equivalent to
Duration::seconds(days * 24 * 60 * 60) with overflow checks. Panics
when the duration is out of bounds.
Is there a way with Rust Chrono to get a duration that is, strictly, 1 day rather than a fixed number of seconds, and is compatible with the DateTime types? Not all days are the same number of seconds; seconds and days are quite different units. If there were such a function, then the following would always give a result that is the same time of day on the following day:
let start = Local::now();
let one_day_later = start + function_that_returns_a_duration_of_days(1);
Again, Duration::days(1) is not such a function because it returns 1 * 24 * 60 * 60 seconds, rather than 1 day.
For example, with TZ set to America/Denver the following:
let start = Local.ymd(2019, 3, 10).and_hms(0, 0, 0);
println!("start: {}", start);
let end = Local.ymd(2019, 3, 11).and_hms(0, 0, 0);
println!("end: {}", end);
let elapsed_seconds = end.timestamp() - start.timestamp();
println!("elapsed_seconds: {}", elapsed_seconds);
let end2 = start + Duration::days(1);
println!("end2: {}", end2);
let elapsed_seconds2 = end2.timestamp() - start.timestamp();
println!("elapsed_seconds2: {}", elapsed_seconds2);
Returns:
start: 2019-03-10 00:00:00 -07:00
end: 2019-03-11 00:00:00 -06:00
elapsed_seconds: 82800
end2: 2019-03-11 01:00:00 -06:00
elapsed_seconds2: 86400
It adds 86400 seconds, rather than 1 day.
I can get the correct result with:
let one_day_later =
(start.date() + Duration::days(1)).and_hms(start.hour(), start.minute(), start.second());
But I would prefer a function that returns a duration of days, and in general would like to know more about Rust Chrono's capabilities for handling durations. Does it have durations with units other than seconds? What about weeks, months and years, which also have variable numbers of seconds?
I should probably say that I don't know Rust, having only worked with it for a few days, and I haven't read much of the source code. I did look at it, but found it difficult to understand due to my limited familiarity with the language.
A Duration is an amount of time. There is no amount of time that when added to an instant, always yields the same time on the next day, because as you have noticed, calendar days may have different amounts of time in them.
Not only years, weeks and days, but even hours and minutes do not always comprise the same amount of time (Leap second). A Duration is an amount of time, not a "calendar unit". So no, a Duration is not capable of expressing an idea like "same time next week".
The easiest way to express "same time next day" is with the succ and and_time methods on Date:
let one_day_later = start.date().succ().and_time(start.time());
and_time will panic if the time does not exist on the new date.
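For comparison only (this is C, not chrono): the distinction between "one calendar day" and "86400 seconds" is the same one the C library makes when you bump tm_mday and let mktime renormalize the result. A small sketch, assuming TZ=America/Denver as in the question; it reports 82800 elapsed seconds, matching the elapsed_seconds value above:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Use the same timezone as the question. */
    setenv("TZ", "America/Denver", 1);
    tzset();

    struct tm tm = {0};
    tm.tm_year = 2019 - 1900;   /* years since 1900 */
    tm.tm_mon  = 2;             /* March (0-based) */
    tm.tm_mday = 10;
    tm.tm_isdst = -1;           /* let mktime work out DST */
    time_t start = mktime(&tm); /* 2019-03-10 00:00:00 -07:00 */

    tm.tm_mday += 1;            /* "same local time, next calendar day" */
    tm.tm_isdst = -1;           /* re-resolve DST for the new date */
    time_t next = mktime(&tm);  /* 2019-03-11 00:00:00 -06:00 */

    printf("elapsed seconds: %.0f\n", difftime(next, start)); /* 82800, not 86400 */
    return 0;
}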

process.hrtime returns non-matching seconds and milliseconds

I use process.hrtime() to calculate the time a process takes, in seconds and milliseconds, as follows:
router.post(
  "/api/result-store/v1/indexing-analyzer/:searchID/:id",
  async (req, res) => {
    var hrstart = process.hrtime();
    // some code which takes time
    var hrend = process.hrtime(hrstart);
    console.info("Execution time (hr): %ds %dms", hrend[0], hrend[1] / 1000000);
  }
);
I followed this post for the code:
https://blog.abelotech.com/posts/measure-execution-time-nodejs-javascript/
So I expected to get matching times in milliseconds and seconds, but here is what I get:
Execution time (hr): 54s 105.970357ms
This is very strange, since 54 s converted to milliseconds is 54000, so I don't see where this "105.970357ms" comes from. Is there anything wrong with my code? Why do I see this mismatch?
According to the process.hrtime() documentation, it returns an array [seconds, nanoseconds], where nanoseconds is the remaining part of the real time that can't be represented in whole seconds.
1 second = 10^9 nanoseconds
1 millisecond = 10^6 nanoseconds
In your case the execution took 54 seconds and 105.970357 milliseconds or
54000 milliseconds + 105.970357 milliseconds.
Or, if you need it in seconds: hrend[0] + hrend[1] / Math.pow(10, 9)
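Under the hood this is simply a monotonic clock read as whole seconds plus leftover nanoseconds, so the two array entries must be combined rather than compared. A small C sketch of the same split using clock_gettime(CLOCK_MONOTONIC) (the busy loop is just a stand-in for the real work):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* some code which takes time */
    for (volatile long i = 0; i < 100000000L; i++)
        ;

    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Whole seconds and leftover nanoseconds, like hrend[0] and hrend[1]. */
    long sec  = end.tv_sec - start.tv_sec;
    long nsec = end.tv_nsec - start.tv_nsec;
    if (nsec < 0) {             /* borrow from the seconds part */
        sec  -= 1;
        nsec += 1000000000L;
    }

    /* Neither part alone is the elapsed time; they add together. */
    printf("Execution time: %lds %.6fms (total %.3f ms)\n",
           sec, nsec / 1e6, sec * 1e3 + nsec / 1e6);
    return 0;
}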

How can I measure elapsed time when encrypting using OpenSSL on Linux in C

How can I calculate the amount of processing time used by a process in C on Linux? Specifically, I want to determine how much time elapses when encrypting a file using OpenSSL.
The easiest way for you to do this is by using the clock() function from <time.h> to report the amount of CPU time used by the calling process.
From SUSv4:
The clock() function shall return the implementation's best
approximation to the processor time used by the process since the
beginning of an implementation-defined era related only to the process
invocation.
RETURN VALUE
To determine the time in seconds, the value returned by clock() should
be divided by the value of the macro CLOCKS_PER_SEC. If the processor
time used is not available or its value cannot be represented,
the function shall return the value (clock_t)-1.
Try the following:
#include <time.h>    /* clock(), clock_t, CLOCKS_PER_SEC */

clock_t start, end;  /* note: clock() returns clock_t, not time_t */
double cpu_time_used;

start = clock();
/* Do encrypting ... */
end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
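Wrapped in a complete program it looks like the sketch below; encrypt_file here is a hypothetical stand-in for whatever OpenSSL call you are timing. Keep in mind that clock() measures CPU time consumed by your process, not wall-clock time, so time spent blocked on disk I/O is not included:

#include <stdio.h>
#include <time.h>

/* Hypothetical placeholder for the real OpenSSL encryption work. */
static void encrypt_file(void)
{
    for (volatile long i = 0; i < 50000000L; i++)
        ;
}

int main(void)
{
    clock_t start = clock();
    encrypt_file();
    clock_t end = clock();

    double cpu_time_used = (double)(end - start) / CLOCKS_PER_SEC;
    printf("CPU time used: %.3f s\n", cpu_time_used);
    return 0;
}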

taskstats stats not adding up

I am trying to figure out how the stats in the taskstats struct add up. I wrote a simple C program that runs for some time doing IO and then exits. I monitor the stats of this program using the taskstats struct, which I get from the taskstats netlink multicast group. When I sum the values of cpu_delay_total, blkio_delay_total, swapin_delay_total, freepages_delay_total, ac_utime and ac_stime, I get a value that is about 0.5 seconds larger than the elapsed time (ac_etime).
Here are the statistics for a 3.5-second run:
ac_etime: 3536036
ac_utime: 172000
ac_stime: 3032000
cpu_delay_total: 792528445
blkio_delay_total: 46320128
swapin_delay_total: 0
freepages_delay_total: 0
Summing up the values for the delays, utime and stime yields 4042848.573 microseconds (the delays are in nanoseconds, so divide them by 1000 to convert to microseconds), while etime is only 3536036!
Interestingly, the wall-clock field gives a value that is practically equal to utime + stime: cpu_run_real_total is 3204000129 (nanoseconds), while ac_utime + ac_stime is 3204000 (microseconds).
Does the cpu_run_real_total field give the cpu time, despite that the comment in taskstats.h clearly states that this is a wall clock time? And what could be the reason that the sum of these fields is larger than the elapsed time?
My kernel version is 3.2.0-38.
(1) cpu_run_real_total = ac_utime + ac_stime. I checked the code in ./kernel/delayacct.c, in the function __delayacct_add_tsk():
tmp = (s64)d->cpu_run_real_total;
cputime_to_timespec(tsk->utime + tsk->stime, &ts);
tmp += timespec_to_ns(&ts);
d->cpu_run_real_total = (tmp < (s64)d->cpu_run_real_total) ? 0 : tmp;
From the above code, we can see that cpu_run_real_total is the sum of utime and stime (converted to nanoseconds), despite its name.
(2) Why is the sum of cpu_delay_total, blkio_delay_total, swapin_delay_total, freepages_delay_total, ac_utime and ac_stime larger than ac_etime?
I have not figured out why, but my guess is that stime may overlap somewhat with the various *_delay_total counters.
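For reference, here is the unit bookkeeping from the question as a small C program; the ac_* fields are in microseconds while the *_delay_total counters are in nanoseconds, which is where the divide-by-1000 comes from. The values are the ones reported in the question:

#include <stdio.h>

int main(void)
{
    /* ac_* fields are reported in microseconds. */
    double ac_etime = 3536036;
    double ac_utime = 172000;
    double ac_stime = 3032000;

    /* *_delay_total fields are reported in nanoseconds. */
    double cpu_delay_total       = 792528445;
    double blkio_delay_total     = 46320128;
    double swapin_delay_total    = 0;
    double freepages_delay_total = 0;

    double delays_us = (cpu_delay_total + blkio_delay_total +
                        swapin_delay_total + freepages_delay_total) / 1000.0;
    double sum_us = delays_us + ac_utime + ac_stime;

    printf("delays + utime + stime: %.3f us\n", sum_us);            /* 4042848.573 */
    printf("ac_etime:               %.3f us\n", ac_etime);          /* 3536036.000 */
    printf("difference:             %.3f us\n", sum_us - ac_etime); /* about 0.5 s */
    return 0;
}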

CPU contention (wait time) for a process in Linux

How can I check how long a process spends waiting for the CPU in a Linux box?
For example, in a loaded system I want to check how long a SQL*Loader (sqlldr) process waits.
It would be useful if there is a command line tool to do this.
I've quickly slapped this together. It prints out the smallest and largest "interferences" from task switching...
#include <sys/time.h>
#include <stdio.h>

/* Wall-clock time in seconds, with microsecond resolution. */
double seconds(void)
{
    struct timeval t;
    gettimeofday(&t, NULL);
    return t.tv_sec + t.tv_usec / 1000000.0;
}

int main(void)
{
    double min = 999999999, max = 0;
    for (;;)
    {
        /* Time two back-to-back clock reads; a large gap means the process
           was preempted (or otherwise delayed) between them. */
        double t1 = seconds();
        double t2 = seconds();
        double c = t2 - t1;
        if (c < min)
        {
            min = c;
            printf("%f\n", c);
            fflush(stdout);
        }
        if (c > max)
        {
            max = c;
            printf("%f\n", c);
            fflush(stdout);
        }
    }
    return 0;
}
Here's how you should go about measuring it. Have a number of processes, greater than the number of your processors * cores * threading capability, wait (block) on an event that will wake them all up at the same time. One such event is a multicast network packet. Use an instrumentation library like PAPI (or one more suited to your needs) to measure the differences in real and virtual "wakeup" time between your processes. From several iterations of the experiment you can get an estimate of the CPU contention time for your processes. Obviously, it's not going to be at all accurate for multicore processors, but maybe it'll help you.
Cheers.
I had this problem some time back. I ended up using getrusage.
You can get detailed help at:
http://www.opengroup.org/onlinepubs/009695399/functions/getrusage.html
getrusage populates the rusage struct.
Measuring Wait Time with getrusage
You can call getrusage at the beginning of your code and then call it again at the end, or at some appropriate point during execution. You then have initial_rusage and final_rusage. The user time spent by your process is indicated by rusage->ru_utime.tv_sec and the system time spent by the process is indicated by rusage->ru_stime.tv_sec.
Thus the total user-time spent by the process will be:
user_time = final_rusage.ru_utime.tv_sec - initial_rusage.ru_utime.tv_sec
The total system-time spent by the process will be:
system_time = final_rusage.ru_stime.tv_sec - initial_rusage.ru_stime.tv_sec
If total_time is the time elapsed between the two calls of getrusage then the wait time will be
wait_time = total_time - (user_time + system_time)
Hope this helps
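Here is a minimal sketch of that recipe; the busy loop is just a placeholder for the work you actually want to measure. Wall-clock time comes from gettimeofday and CPU time from getrusage, so wait_time also includes time blocked on I/O, not only time spent queued for the CPU:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Convert a struct timeval to seconds as a double. */
static double tv_seconds(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(void)
{
    struct timeval wall_start, wall_end;
    struct rusage initial_rusage, final_rusage;

    gettimeofday(&wall_start, NULL);
    getrusage(RUSAGE_SELF, &initial_rusage);

    /* ... the work to measure goes here ... */
    for (volatile long i = 0; i < 100000000L; i++)
        ;

    getrusage(RUSAGE_SELF, &final_rusage);
    gettimeofday(&wall_end, NULL);

    double user_time   = tv_seconds(final_rusage.ru_utime) - tv_seconds(initial_rusage.ru_utime);
    double system_time = tv_seconds(final_rusage.ru_stime) - tv_seconds(initial_rusage.ru_stime);
    double total_time  = tv_seconds(wall_end) - tv_seconds(wall_start);

    /* Elapsed time not accounted for by CPU time. */
    double wait_time = total_time - (user_time + system_time);

    printf("user %.6f s, system %.6f s, wall %.6f s, wait %.6f s\n",
           user_time, system_time, total_time, wait_time);
    return 0;
}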
