How to get a duration of 1 day with Rust chrono?

I am dealing with some Rust code that works with durations of days, but the implementation of Duration::days(n) is, per the documentation, n * 24 * 60 * 60 seconds, which isn't n days because not all days are 24 * 60 * 60 seconds long.
This behaviour is well documented:
pub fn days(days: i64) -> Duration
Makes a new Duration with given number of days. Equivalent to Duration::seconds(days * 24 * 60 * 60) with overflow checks. Panics when the duration is out of bounds.
Is there a way with Rust chrono to get a duration that is, strictly, 1 day, rather than some number of seconds, and that is compatible with the DateTime types? Not all days are the same number of seconds; seconds and days are quite different units. If there were such a function, then the following would always give a result that is the same time of day on the following day:
let start = Local::now();
let one_day_later = start + function_that_returns_a_duration_of_days(1);
Again, Duration::days(1) is not such a function, because it returns 1 * 24 * 60 * 60 seconds rather than 1 day.
For example, with TZ set to America/Denver the following:
let start = Local.ymd(2019, 3, 10).and_hms(0, 0, 0);
println!("start: {}", start);
let end = Local.ymd(2019, 3, 11).and_hms(0, 0, 0);
println!("end: {}", end);
let elapsed_seconds = end.timestamp() - start.timestamp();
println!("elapsed_seconds: {}", elapsed_seconds);
let end2 = start + Duration::days(1);
println!("end2: {}", end2);
let elapsed_seconds2 = end2.timestamp() - start.timestamp();
println!("elapsed_seconds2: {}", elapsed_seconds2);
Returns:
start: 2019-03-10 00:00:00 -07:00
end: 2019-03-11 00:00:00 -06:00
elapsed_seconds: 82800
end2: 2019-03-11 01:00:00 -06:00
elapsed_seconds2: 86400
It adds 86400 seconds, rather than 1 day.
I can get the correct result with:
let one_day_later =
    (start.date() + Duration::days(1)).and_hms(start.hour(), start.minute(), start.second());
But I would prefer a function that returns a duration of days, and in general I would like to know more about chrono's capabilities for handling durations. Does it have durations with units other than seconds? What about weeks, months and years, which also have variable numbers of seconds?
I should probably say that I don't know Rust well, having only worked with it for a few days, and I haven't read much of the source code. I did look at it, but I find it difficult to understand due to my limited familiarity with the language.

A Duration is an amount of time. There is no amount of time that, when added to an instant, always yields the same time on the next day because, as you have noticed, calendar days may contain different amounts of time.
Not only years, weeks and days, but even hours and minutes do not always comprise the same amount of time (see leap seconds). A Duration is an amount of time, not a "calendar unit". So no, a Duration is not capable of expressing an idea like "same time next week".
The easiest way to express "same time next day" is with the succ and and_time methods on Date:
let one_day_later = start.date().succ().and_time(start.time());
and_time returns an Option: it yields None rather than a DateTime if the time does not exist on the new date.
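For completeness, here is a minimal, self-contained sketch of that approach, assuming the chrono 0.4 Date API used above:

use chrono::prelude::*;

fn main() {
    let start = Local::now();
    // succ() moves to the next calendar day; and_time() re-attaches the original
    // wall-clock time, so a DST transition changes the elapsed duration rather
    // than the displayed time. If that time does not exist on the new date,
    // no valid DateTime is produced.
    let one_day_later = start.date().succ().and_time(start.time());
    println!("start:         {}", start);
    println!("one day later: {:?}", one_day_later);
}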

Related

Is there any Node.js class/function which is similar to Environment.TickCount in C#?

This code runs in C#:
int x = Environment.TickCount;
docs for Environment.TickCount
Gets the number of milliseconds elapsed since the system started. TickCount cycles between Int32.MinValue, which is a negative number, and Int32.MaxValue once every 49.8 days.
TickCount will increment from zero to 2147483647 over approximately 24.9 days, then jump back to -2147483648 (a negative number), then increment back up to zero during the next 24.9 days.
We can use int result = Environment.TickCount & Int32.MaxValue; to make it rotate between 0 and 2147483647 every 24.9 days.
I want an equivalent method in Node.js which would yield the same result.
I searched on npmjs but didn't find a similar function.
os.uptime() is the closest method to what you need, which:
Returns the system uptime in number of seconds
(Node.js docs)
But a valid question is: what is the max limit for the above method?
In Node.js the max safe integer is Number.MAX_SAFE_INTEGER, which is 9007199254740991. Measured in seconds (9007199254740991 / 31,536,000 seconds per year), that is roughly 285 million years, so I guess we will have to treat that as the max value for said method.
If you want the same functionality as C#'s TickCount, you will need to create your own custom method, maybe something like the ones given below:
const os = require('os');

// this method will cycle between 0 and 2147483647
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    return milliseconds_elapsed % 2147483647;
}

// this method will cycle between -2147483648 and 2147483647
// note: it will not start from 0
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    return (milliseconds_elapsed % 4294967296) - 2147483648;
}

// this method will cycle between -2147483648 and 2147483647
// note: it will start from 0, go up to 2147483647,
// then jump back to -2147483648 and continue the cycle
function TickCount() {
    const milliseconds_elapsed = os.uptime() * 1000; // convert the uptime to milliseconds
    if (milliseconds_elapsed <= 2147483647) {
        return milliseconds_elapsed;
    }
    return ((milliseconds_elapsed - 2147483648) % 4294967296) - 2147483648;
}
The Microsoft docs say Environment.TickCount is an integer that "contains the amount of time in milliseconds that has passed since the last time the computer was started".
When searching for that I found this question, and the answers suggest using process.uptime() or os.uptime().

Rust lang thread::sleep() sleeping for almost twice the specified time during game loop on windows

So I've written the following function to show what I mean:
use std::{thread, time};

const TARGET_FPS: u64 = 60;

fn main() {
    let mut frames = 0;
    let target_ft = time::Duration::from_micros(1000000 / TARGET_FPS);
    println!("target frame time: {:?}", target_ft);
    let mut time_slept = time::Duration::from_micros(0);
    let start = time::Instant::now();
    loop {
        let frame_time = time::Instant::now();
        frames += 1;
        if frames == 60 {
            break;
        }
        if let Some(i) = target_ft.checked_sub(frame_time.elapsed()) {
            time_slept += i;
            thread::sleep(i);
        }
    }
    println!("time elapsed: {:?}", start.elapsed());
    println!("time slept: {:?}", time_slept);
}
The idea of the function is to execute 60 cycles at 60 fps and then exit, reporting the time elapsed and the total time spent sleeping during the loop. Ideally, since I'm executing 60 cycles at 60 fps with no real calculations happening in between, it should take about one second to execute and spend basically the entire second sleeping. But instead, when I run it, it returns:
target frame time: 16.666ms
time elapsed: 1.8262798s
time slept: 983.2533ms
As you can see, even though it was only told to sleep for a total of 983ms, the 60 cycles took nearly 2 seconds to complete. Because of this nearly 50% inaccuracy, a loop told to run at 60fps instead runs at only 34fps.
The docs say "The thread may sleep longer than the duration specified due to scheduling specifics or platform-dependent functionality. It will never sleep less." But is this really just from that? Am I doing something wrong?
I switched to using spin_sleep::sleep(i) from https://crates.io/crates/spin_sleep and it seems to have fixed it. I guess it must just be Windows timer inaccuracy then... still strange that thread::sleep on Windows would be that far off for something as simple as a game loop.
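For reference, the change amounts to swapping the thread::sleep call inside the loop for spin_sleep::sleep. A minimal sketch, assuming spin_sleep has been added as a dependency in Cargo.toml:

use std::time;

const TARGET_FPS: u64 = 60;

fn main() {
    let target_ft = time::Duration::from_micros(1000000 / TARGET_FPS);
    let start = time::Instant::now();
    for _frame in 0..60 {
        let frame_time = time::Instant::now();
        // ... per-frame work would go here ...
        if let Some(i) = target_ft.checked_sub(frame_time.elapsed()) {
            // spin_sleep sleeps for most of the wait using the OS timer, then
            // busy-spins for the remainder, avoiding the coarse sleep granularity
            // seen with thread::sleep on Windows.
            spin_sleep::sleep(i);
        }
    }
    println!("time elapsed: {:?}", start.elapsed());
}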

process.hrtime returns non-matching seconds and milliseconds

I use process.hrtime() to calculate the time a process takes in sec and millisec as follows:
router.post(
  "/api/result-store/v1/indexing-analyzer/:searchID/:id",
  async (req, res) => {
    var hrstart = process.hrtime();
    // some code which takes time
    var hrend = process.hrtime(hrstart);
    console.info("Execution time (hr): %ds %dms", hrend[0], hrend[1] / 1000000);
  }
);
I followed this post for the code:
https://blog.abelotech.com/posts/measure-execution-time-nodejs-javascript/
So I expected the seconds and milliseconds to match, but here is what I get:
Execution time (hr): 54s 105.970357ms
This is very strange, since when I convert 54 s to milliseconds I get 54000, so I do not see where this "105.970357ms" comes from. Is there anything wrong with my code? Why do I see this mismatch?
According to the process.hrtime() documentation, it returns an array [seconds, nanoseconds], where nanoseconds is the remaining part of the real time that can't be represented with second precision.
1 second = 10^9 nanoseconds
1 millisecond = 10^6 nanoseconds
In your case the execution took 54 seconds and 105.970357 milliseconds or
54000 milliseconds + 105.970357 milliseconds.
Or if you need it in seconds: hrend[0] + hrend[1] / Math.pow(10, 9)

What does pcpu signify and why multiply by 1000?

I was reading about calculating the cpu usage of a process.
seconds = utime / Hertz
total_time = utime + stime
IF include_dead_children
total_time = total_time + cutime + cstime
ENDIF
seconds = uptime - starttime / Hertz
pcpu = (total_time * 1000 / Hertz) / seconds
print: "%CPU" pcpu / 10 "." pcpu % 10
What I don't get is this: by 'seconds', the algorithm seems to mean the time the computer spent doing operations other than the interested process, before it started, since uptime is the time the computer has been operational and starttime is the time our [interested] process started.
Then why are we dividing total_time by seconds [the time the computer spent doing something else] to get pcpu? It doesn't make sense.
The standard meanings of the variables:
# Name Description
14 utime CPU time spent in user code, measured in jiffies
15 stime CPU time spent in kernel code, measured in jiffies
16 cutime CPU time spent in user code, including time from children
17 cstime CPU time spent in kernel code, including time from children
22 starttime Time when the process started, measured in jiffies
/proc/uptime: the uptime of the system (seconds), and the amount of time spent in the idle process (seconds)
Hertz: number of clock ticks per second
Now that you've provided what each of the variables represents, here are some comments on the pseudo-code:
seconds = utime / Hertz
The above line is pointless, as the new value of seconds is never used before it's overwritten a few lines later.
total_time = utime + stime
Total running time (user + system) of the process, in jiffies, since both utime and stime are.
IF include_dead_children
total_time = total_time + cutime + cstime
ENDIF
This should probably just say total_time = cutime + cstime, since the definitions seem to indicate that, e.g. cutime already includes utime, plus the time spent by children in user mode. So, as written, this overstates the value by including the contribution from this process twice. Or, the definition is wrong... Regardless, the total_time is still in jiffies.
seconds = uptime - starttime / Hertz
uptime is already in seconds; starttime / Hertz converts starttime from jiffies to seconds, so seconds becomes essentially "the time in seconds since this process was started".
pcpu = (total_time * 1000 / Hertz) / seconds
total_time is still in jiffies, so total_time / Hertz converts that to seconds, which is the number of CPU seconds consumed by the process. Dividing that by seconds would give the fraction of CPU used since the process started if this were a floating-point operation. Since it's integer arithmetic, the numerator is scaled by 1000 first, so the result is the CPU-usage percentage times 10, i.e. with a resolution of 1/10 %. The scaling is forced to be done early by the use of parentheses, to preserve accuracy.
print: "%CPU" pcpu / 10 "." pcpu % 10
And this undoes the scaling, by finding the quotient and the remainder when dividing pcpu by 10, and printing those values in a format that looks like a floating-point value.
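To make the arithmetic concrete, here is a small Rust sketch of the same integer calculation; the field values below are invented purely for illustration, and Hertz is assumed to be 100:

fn main() {
    // Made-up example values: a process that has used 1234 jiffies of user time
    // and 567 jiffies of system time, on a system that has been up for one day.
    let hertz: u64 = 100;            // clock ticks (jiffies) per second, assumed
    let utime: u64 = 1234;           // jiffies spent in user code
    let stime: u64 = 567;            // jiffies spent in kernel code
    let uptime: u64 = 86_400;        // system uptime, seconds
    let starttime: u64 = 8_140_000;  // process start time, jiffies (81,400 s after boot)

    let total_time = utime + stime;                    // jiffies
    let seconds = uptime - starttime / hertz;          // seconds since the process started
    let pcpu = (total_time * 1000 / hertz) / seconds;  // CPU usage as a percentage, times 10

    // Undo the scaling for display: the quotient is the whole percent,
    // the remainder is the tenths digit.
    println!("%CPU {}.{}", pcpu / 10, pcpu % 10);      // prints "%CPU 0.3"
}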

ignore incoming logstash entries that are older than a given date

I want Logstash, when it's processing input entries, to simply drop entries that are older than N days.
I assume I'll use the date module and obviously drop, but I don't know how to connect them.
The only way that I know to do date level comparison is via Ruby code. You need the date filter to parse the timestamp (that's its own issue).
Once you parse the date into a field (e.g., event["@timestamp"]), then you can use it to determine if you want to ignore the event or not:
5.0:
ruby {
    code => "event.cancel if (Time.now.to_f - event.get('@timestamp').to_f) > (60 * 60 * 24 * 5)"
}
Pre-5.x:
ruby {
    code => "event.cancel if (Time.now.to_f - event['@timestamp'].to_f) > (60 * 60 * 24 * 5)"
}
In this case, 5 is N.
Also, it's worth pointing out that this is relative to the machine time where Logstash happens to be running. If it's inaccurate, then it will impact date math. Similarly, if the source machine's system clock is wrong, then it too can be a problem.
Drawing on Alain's good point, you could use this to store the lag time, in addition to just dropping based on it.
5.0:
ruby {
    code => "event.set('lag_seconds', Time.now.to_f - event.get('@timestamp').to_f)"
}

# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
    drop { }
}
Pre-5.x:
ruby {
    code => "event['lag_seconds'] = Time.now.to_f - event['@timestamp'].to_f"
}

# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
    drop { }
}
Using this approach, you would then be indexing lag_seconds, which is a fractional amount, thereby allowing you to analyze lag in your index if this goes into ES or some other data store.
