How can I get the current time in milliseconds? - rust

How can I get the current time in milliseconds like I can in Java?
System.currentTimeMillis()

Since Rust 1.8, you do not need to use a crate. Instead, you can use SystemTime and UNIX_EPOCH:
use std::time::{SystemTime, UNIX_EPOCH};
fn main() {
let start = SystemTime::now();
let since_the_epoch = start
.duration_since(UNIX_EPOCH)
.expect("Time went backwards");
println!("{:?}", since_the_epoch);
}
If you need exactly milliseconds, you can convert the Duration.
Rust 1.33
let in_ms = since_the_epoch.as_millis();
Rust 1.27
let in_ms = since_the_epoch.as_secs() as u128 * 1000 +
since_the_epoch.subsec_millis() as u128;
Rust 1.8
let in_ms = since_the_epoch.as_secs() * 1000 +
since_the_epoch.subsec_nanos() as u64 / 1_000_000;

If you just want to do simple timings in milliseconds, you can use std::time::Instant like this:
use std::time::Instant;
fn main() {
let start = Instant::now();
// do stuff
let elapsed = start.elapsed();
// Debug format
println!("Debug: {:?}", elapsed);
// Format as milliseconds rounded down
// Since Rust 1.33:
println!("Millis: {} ms", elapsed.as_millis());
// Before Rust 1.33:
println!("Millis: {} ms",
(elapsed.as_secs() * 1_000) + (elapsed.subsec_nanos() / 1_000_000) as u64);
}
Output:
Debug: 10.93993ms
Millis: 10 ms
Millis: 10 ms

You can use the time crate:
extern crate time;
fn main() {
println!("{}", time::now());
}
It returns a Tm from which you can get whatever precision you want.
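If you want milliseconds from that Tm, one option is to convert it to a Timespec first (a sketch against the old time 0.1 API; to_timespec and the sec/nsec fields come from that crate):
extern crate time;

fn main() {
    // Tm -> Timespec gives whole seconds plus a nanosecond offset (time 0.1 API)
    let ts = time::now().to_timespec();
    let ms = ts.sec * 1000 + ts.nsec as i64 / 1_000_000;
    println!("{} ms since the Unix epoch", ms);
}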

I've found a clear solution with chrono in coinnect:
use chrono::prelude::*;
pub fn get_unix_timestamp_ms() -> i64 {
let now = Utc::now();
now.timestamp_millis()
}
pub fn get_unix_timestamp_us() -> i64 {
let now = Utc::now();
// timestamp_nanos() returns nanoseconds, so divide to get microseconds
now.timestamp_nanos() / 1_000
}
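A minimal main calling these helpers (assuming chrono is listed as a dependency in Cargo.toml and the functions above are in the same module):
fn main() {
    println!("ms: {}", get_unix_timestamp_ms());
    println!("us: {}", get_unix_timestamp_us());
}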

As @Shepmaster mentioned, this is the equivalent of Java's System.currentTimeMillis() in Rust.
use std::time::{SystemTime, UNIX_EPOCH};
fn get_epoch_ms() -> u128 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis()
}

extern crate time;
fn timestamp() -> f64 {
let timespec = time::get_time();
// fractional seconds since the epoch, e.g. 1459440009.113178
let seconds: f64 = timespec.sec as f64 + (timespec.nsec as f64 / 1000.0 / 1000.0 / 1000.0);
seconds
}
fn main() {
let ts = timestamp();
println!("Time Stamp: {:?}", ts);
}
Rust Playground

System.currentTimeMillis() in Java returns the difference in milliseconds between the current time and midnight, January 1, 1970.
In Rust we have time::get_time() which returns a Timespec with the current time as seconds and the offset in nanoseconds since midnight, January 1, 1970.
Example (using Rust 1.13):
extern crate time; //Time library
fn main() {
//Get current time
let current_time = time::get_time();
//Print results
println!("Time in seconds {}\nOffset in nanoseconds {}",
current_time.sec,
current_time.nsec);
//Calculate milliseconds
let milliseconds = (current_time.sec as i64 * 1000) +
(current_time.nsec as i64 / 1000 / 1000);
println!("System.currentTimeMillis(): {}", milliseconds);
}
Reference: Time crate, System.currentTimeMillis()

Related

Accessing Vector elements gives me an error Rust

I'm trying to write a program that will find the median of any given list.
Eventually, in the FINAL FINAL stretch, an error was shot into my face.
I tried to access elements of a Vector through a variable.
Take a look at the calc_med() function.
use std::io;
use std::sync::Mutex;
#[macro_use]
extern crate lazy_static;
lazy_static! {
static ref num_list: Mutex<Vec<f64>> = Mutex::new(Vec::new());
}
fn main() {
loop {
println!("Enter: ");
let mut inp: String = String::new();
io::stdin().read_line(&mut inp).expect("Failure");
let upd_inp: f64 = match inp.trim().parse() {
Ok(num) => num,
Err(_) => {
if inp.trim() == String::from("q") {
break;
} else if inp.trim() == String::from("d") {
break {
println!("Done!");
calc_med();
};
} else {
continue;
}
}
};
num_list.lock().unwrap().push(upd_inp);
num_list
.lock()
.unwrap()
.sort_by(|a, b| a.partial_cmp(b).unwrap());
println!("{:?}", num_list.lock().unwrap());
}
}
fn calc_med() {
// FOR THE ATTENTION OF STACKOVERFLOW
let n: f64 = ((num_list.lock().unwrap().len()) as f64 + 1.0) / 2.0;
if n.fract() == 0.0 {
let n2: usize = n as usize;
} else {
let n3: u64 = n.round() as u64;
let n4: usize = n3 as usize;
let median: f64 = (num_list[n4] + num_list[n4 - 1]) / 2;
println!("{}", median);
}
}
The error is as follows:
Compiling FindTheMedian v0.1.0 (/home/isaak/Documents/Code/Rusty/FindTheMedian)
error[E0608]: cannot index into a value of type `num_list`
--> src/main.rs:50:28
|
50 | let median: f64 = (num_list[n4] + num_list[n4 - 1]) / 2;
| ^^^^^^^^^^^^
error[E0608]: cannot index into a value of type `num_list`
--> src/main.rs:50:43
|
50 | let median: f64 = (num_list[n4] + num_list[n4 - 1]) / 2;
| ^^^^^^^^^^^^^^^^
The current code is trying to index a variable of type Mutex<Vec<f64>>, which is not valid. The way you access the underlying data in a mutex is by calling .lock() on it, which returns a LockResult<MutexGuard<Vec<f64>>>: essentially a Result that you can unwrap to get a guard that dereferences to the Vec<f64>.
So, fixing only the line would look like this:
let num_list_vec = num_list.lock().unwrap();
let median: f64 = (num_list_vec[n4] + num_list_vec[n4 - 1]) / 2.0;
However, since you already lock at the start of the function, this will not work: the mutex would already be locked. The best way, then, is to do the locking and unwrapping once at the start of the function and use the resulting guard everywhere:
fn calc_med() {
let num_list_vec = num_list.lock().unwrap();
let n: f64 = ((num_list_vec.len()) as f64 + 1.0) / 2.0;
if n.fract() == 0.0 {
let n2: usize = n as usize;
} else {
let n3: u64 = n.round() as u64;
let n4: usize = n3 as usize;
let median: f64 = (num_list_vec[n4] + num_list_vec[n4 - 1]) / 2.0;
println!("{}", median);
}
}
Edit: Checking your main, I see you are also lock().unwrap()ing in sequence a lot, which is not the way Mutex should be used. Mutex is mainly needed for multi-threaded programming, so that different threads cannot access the same variable at the same time. It also incurs a performance hit, so you shouldn't really use it in single-threaded scenarios most of the time.
Unless there's a bigger picture we're missing, you should just define your Vec in main and pass it to calc_med as an argument, as sketched below. If the reason you did what you did was to have it as a global, there are other ways to do that in Rust without performance hits, but due to Rust's safety-oriented design these ways are not encouraged and should only be used if you know 100% what you want.
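For example, calc_med could simply take a slice (a sketch of the signature change; the median here uses the usual odd/even split rather than the question's exact arithmetic):
fn calc_med(nums: &[f64]) -> Option<f64> {
    // nums is expected to be sorted already, as it is in main after sort_by
    if nums.is_empty() {
        return None;
    }
    let mid = nums.len() / 2;
    if nums.len() % 2 == 1 {
        Some(nums[mid])
    } else {
        Some((nums[mid - 1] + nums[mid]) / 2.0)
    }
}

// In main, with a plain Vec<f64> instead of the lazy_static Mutex:
// num_list.sort_by(|a, b| a.partial_cmp(b).unwrap());
// if let Some(median) = calc_med(&num_list) { println!("{}", median); }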
Your error is that num_list is not a vector; it's a mutex with a vector inside it. To access the value inside a mutex, you must lock it and then unwrap the result. You do this correctly in main.
To avoid continually locking and unlocking, it is generally best practice to lock the mutex once, at the start of the function. Rust will automatically release the lock when the guard goes out of scope. See the updated example:
fn calc_med() { // FOR THE ATTENTION OF STACKOVERFLOW
let nums = num_list.lock().unwrap();
let n: f64 = (nums.len() as f64 + 1.0) / 2.0;
if n.fract() == 0.0 {
let n2: usize = n as usize;
} else {
let n3: u64 = n.round() as u64;
let n4: usize = n3 as usize;
let median: f64 = (nums[n4] + nums[n4 - 1]) / 2.0;
println!("{}", median);
}
}

How to create threads that last entire duration of program and pass immutable chunks for threads to operate on?

I have a bunch of math that has real-time constraints. My main loop will just call this function repeatedly, and it will always store results into an existing buffer. However, I want to be able to spawn the threads at init time, then let the threads run, do their work, and wait for more data. For synchronization I will use a Barrier, and I have that part working. What I can't get working, despite trying various iterations of Arc and crossbeam, is splitting up the thread spawning and the actual workload. This is what I have now.
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
for i in 0..WORK_SIZE {
work.push(i as f64);
}
crossbeam::scope(|scope| {
let threads: Vec<_> = work
.chunks(NUM_TASKS_PER_THREAD)
.map(|chunk| scope.spawn(move |_| chunk.iter().cloned().sum::<f64>()))
.collect();
let threaded_time = std::time::Instant::now();
let thread_sum: f64 = threads.into_iter().map(|t| t.join().unwrap()).sum();
let threaded_micros = threaded_time.elapsed().as_micros() as f64;
println!("threaded took: {:#?}", threaded_micros);
let serial_time = std::time::Instant::now();
let no_thread_sum: f64 = work.iter().cloned().sum();
let serial_micros = serial_time.elapsed().as_micros() as f64;
println!("serial took: {:#?}", serial_micros);
assert_eq!(thread_sum, no_thread_sum);
println!(
"Threaded performace was {:?}",
serial_micros / threaded_micros
);
})
.unwrap();
}
But I can't find a way to spin these threads up in an init function and then pass work into them from a do_work function. I attempted to do something like this with Arcs and Mutexes but couldn't get everything straight there either. What I want to turn this into is something like the following:
use std::sync::{Arc, Barrier, Mutex};
use std::{slice::Chunks, thread::JoinHandle};
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
loop {
barrier.wait();
let sum = data.into_iter().cloned().sum::<f64>();
let mut result = *result.lock().unwrap();
result += sum;
}
}
fn init(
mut data: Chunks<'_, f64>,
result: &Arc<Mutex<f64>>,
barrier: &Arc<Barrier>,
) -> Vec<std::thread::JoinHandle<()>> {
let mut handles = Vec::with_capacity(NUM_THREADS);
//spawn threads, in actual code these would be stored in a lib crate struct
for i in 0..NUM_THREADS {
let result = result.clone();
let barrier = barrier.clone();
let chunk = data.nth(i).unwrap();
handles.push(std::thread::spawn(|| {
//Pass the particular thread the particular chunk it will operate on.
do_work(chunk, result, barrier);
}));
}
handles
}
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
let mut result = Arc::new(Mutex::new(0.0));
for i in 0..WORK_SIZE {
work.push(i as f64);
}
let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
let threads = init(work.chunks(NUM_TASKS_PER_THREAD), &result, &work_barrier);
loop {
work_barrier.wait();
//actual code base would do something with summation stored in result.
println!("{:?}", result.lock().unwrap());
}
}
I hope this expresses the intent of what I need to do clearly enough. The issue with this specific implementation is that the chunks don't seem to live long enough, and when I tried wrapping them in an Arc, it just moved the "argument doesn't live long enough" error to the Arc::new(data.chunk(_)) line.
use std::sync::{Arc, Barrier, Mutex};
use std::thread;
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
loop {
barrier.wait();
let sum = data.iter().sum::<f64>();
*result.lock().unwrap() += sum;
}
}
fn init(
work: Vec<f64>,
result: Arc<Mutex<f64>>,
barrier: Arc<Barrier>,
) -> Vec<thread::JoinHandle<()>> {
let mut handles = Vec::with_capacity(NUM_THREADS);
//spawn threads, in actual code these would be stored in a lib crate struct
for i in 0..NUM_THREADS {
// to_owned() copies this thread's chunk, so the spawned thread owns its data and no borrow of `work` escapes
let slice = work[i * NUM_TASKS_PER_THREAD..(i + 1) * NUM_TASKS_PER_THREAD].to_owned();
let result = Arc::clone(&result);
let w = Arc::clone(&barrier);
handles.push(thread::spawn(move || {
do_work(&slice, result, w);
}));
}
handles
}
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
let result = Arc::new(Mutex::new(0.0));
for i in 0..WORK_SIZE {
work.push(i as f64);
}
let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
let _threads = init(work, Arc::clone(&result), Arc::clone(&work_barrier));
loop {
thread::sleep(std::time::Duration::from_secs(3));
work_barrier.wait();
//actual code base would do something with summation stored in result.
println!("{:?}", result.lock().unwrap());
}
}

Avoid locking on single threaded async

Let's say I have the following struct:
struct Time {
hour: u8,
minute: u8
}
impl Time {
pub fn set_time(&mut self, hour: u8, minute: u8) {
self.hour = hour;
self.minute = minute;
}
}
In a multithreaded program, having a mutable reference to it shared across multiple threads could cause race conditions, but in a single-threaded one that can't happen (there is no way for the task to yield inside set_time).
Is there any way to avoid having to use locks in such a situation?
Here is an example where two tasks are running on a single thread and could share the mutable reference without a problem:
use tokio::join;
struct Time {
hour: u8,
minute: u8
}
impl Time {
pub fn set_time(&mut self, hour: u8, minute: u8) {
self.hour = hour;
self.minute = minute;
}
}
fn main() {
let mut runtime_builder = tokio::runtime::Builder::new_current_thread();
runtime_builder.enable_time();
let runtime = runtime_builder.build().unwrap();
runtime.block_on(async_main());
}
async fn async_main() {
let mut time = Time {hour: 0, minute: 0};
join!(
task_1(&mut time),
task_2(&mut time) // <- Rust wont allow this
);
}
async fn task_1(time: &mut Time) {
loop {
// Do something
tokio::task::yield_now();
}
}
async fn task_2(time: &mut Time) {
loop {
// Do something
tokio::task::yield_now();
}
}
Mutable reference aliasing is never allowed by the borrow checker. This is one of the conditions that the borrow checker enforces to guarantee memory safety at compile time.
This is the reason why the following code doesn't compile: there are two mutable references at the same time.
join!(
task_1(&mut time), // <- first mutable reference
task_2(&mut time) // <- second mutable reference ERROR
);
This rule is not specific to a multithreaded context.
On the other hand, immutable reference aliasing is allowed by the borrow checker, but the following code would not compile because we can't mutate a field behind a shared reference.
use tokio::join;
struct Time {
hour: u8,
minute: u8
}
impl Time {
pub fn set_time(&self, hour: u8, minute: u8) {
self.hour = hour; // Error
self.minute = minute; // Error
}
}
[...]
async fn async_main() {
let mut time = Time {hour: 0, minute: 0};
join!(
task_1(&time),
task_2(&time)
);
}
[...]
}
To have both aliasing and mutation capabilities, you need to use the interior mutability pattern.
Interior mutability:
"A type has interior mutability if its internal state can be changed through a shared reference to it. This goes against the usual requirement that the value pointed to by a shared reference is not mutated." (see rust reference)
There are several types which implement the interior mutability pattern:
Cell
RefCell
Atomics kind
Mutex
RwLock
With a single-threaded runtime, you can use either RefCell or Cell.
With RefCell<T>, the rules still apply, but they are checked at runtime instead of compile time. With a single-threaded runtime, RefCell allows us to mutate the value behind a shared reference from separate tasks, since there is no parallelism involved.
use std::cell::RefCell;
use tokio::join;
#[derive(Debug)]
struct Time {
hour: RefCell<u8>,
minute: RefCell<u8>,
}
impl Time {
pub fn set_time(&self, hour: u8, minute: u8) {
*self.hour.borrow_mut() = hour;
*self.minute.borrow_mut() = minute;
}
}
async fn task_1(time: &Time) {
time.set_time(11, 54);
println!("Task 1: {:?}", time);
tokio::task::yield_now().await;
}
async fn task_2(time: &Time) {
time.set_time(8, 12);
println!("Task 2: {:?}", time);
tokio::task::yield_now().await;
}
fn main() {
let mut runtime_builder = tokio::runtime::Builder::new_current_thread();
runtime_builder.enable_time();
let runtime = runtime_builder.build().unwrap();
runtime.block_on(async {
let time = Time {
hour: RefCell::new(0),
minute: RefCell::new(0),
};
let _ = join!(task_1(&time), task_2(&time));
});
}
[based on trentcl's suggestion]
If you want to avoid the cost of the runtime check, you can use Cell<T>. The API is pretty convenient when T is Copy.
use std::cell::Cell;
use tokio::join;
#[derive(Debug)]
struct Time {
hour: Cell<u8>,
minute: Cell<u8>,
}
impl Time {
pub fn set_time(&self, hour: u8, minute: u8) {
self.hour.replace(hour);
self.minute.replace(minute);
}
}
async fn task_1(time: &Time) {
time.set_time(11, 54);
println!("Task 1: {:?}", time);
tokio::task::yield_now().await;
}
async fn task_2(time: &Time) {
time.set_time(8, 12);
println!("Task 2: {:?}", time);
tokio::task::yield_now().await;
}
fn main() {
let mut runtime_builder = tokio::runtime::Builder::new_current_thread();
runtime_builder.enable_time();
let runtime = runtime_builder.build().unwrap();
runtime.block_on(async {
let time = Time {
hour: Cell::new(0),
minute: Cell::new(0),
};
let _ = join!(task_1(&time), task_2(&time));
});
}
With a multithreaded runtime, you can use the atomic types, Mutex, RwLock, or message passing with channels.
If we take the specific case of AtomicU8, this type provides interior mutability and its store method is lock-free (at least on x86).
By using AtomicU8, we can alias and mutate our Time struct with no data races and no blocking.
AtomicU8 is Sync and Send, but we need to satisfy the 'static bound of tokio::spawn, so taking a shared reference is not an option. We need to wrap the structure in an Arc. (A Mutex-based alternative is sketched after the example below.)
use std::sync::atomic::{AtomicU8, Ordering};
use tokio::join;
use std::sync::Arc;
#[derive(Debug)]
struct Time {
hour: AtomicU8,
minute: AtomicU8,
}
impl Time {
pub fn set_time(&self, hour: u8, minute: u8) {
self.hour.store(hour, Ordering::SeqCst); // <-- mutation OK
self.minute.store(minute, Ordering::SeqCst); // <-- mutation OK
}
}
async fn task_1(time: Arc<Time>) {
time.set_time(11, 54);
println!("Task 1: {:?}", time);
tokio::task::yield_now().await;
}
async fn task_2(time: Arc<Time>) {
time.set_time(8, 12);
println!("Task 2: {:?}", time);
tokio::task::yield_now().await;
}
fn main() {
let mut runtime_builder = tokio::runtime::Builder::new_multi_thread();
runtime_builder.enable_time();
let runtime = runtime_builder.build().unwrap();
runtime.block_on(async {
let time = Arc::new(Time {
hour: AtomicU8::new(0),
minute: AtomicU8::new(0),
});
let h1 = tokio::spawn(task_1(time.clone()));
let h2 = tokio::spawn(task_2(time));
let _ = join!(
h1, // <-- aliasing Ok
h2 // <-- aliasing Ok
);
});
}
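For completeness, the Mutex option from the list above would look roughly like this (a sketch; each lock() blocks briefly instead of using lock-free stores, and the guard is always dropped before awaiting):
use std::sync::{Arc, Mutex};
use tokio::join;

#[derive(Debug)]
struct Time {
    hour: u8,
    minute: u8,
}

impl Time {
    pub fn set_time(&mut self, hour: u8, minute: u8) {
        self.hour = hour;
        self.minute = minute;
    }
}

async fn task_1(time: Arc<Mutex<Time>>) {
    // Lock, mutate, and release within one statement so the guard is never held across an await.
    time.lock().unwrap().set_time(11, 54);
    println!("Task 1: {:?}", time.lock().unwrap());
    tokio::task::yield_now().await;
}

async fn task_2(time: Arc<Mutex<Time>>) {
    time.lock().unwrap().set_time(8, 12);
    println!("Task 2: {:?}", time.lock().unwrap());
    tokio::task::yield_now().await;
}

fn main() {
    let mut runtime_builder = tokio::runtime::Builder::new_multi_thread();
    runtime_builder.enable_time();
    let runtime = runtime_builder.build().unwrap();
    runtime.block_on(async {
        let time = Arc::new(Mutex::new(Time { hour: 0, minute: 0 }));
        let h1 = tokio::spawn(task_1(time.clone()));
        let h2 = tokio::spawn(task_2(time));
        let _ = join!(h1, h2);
    });
}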

Rust futures / async - await strange behavior

Edit:
I swapped all of the future / async calls to instead just spawn a new thread, but the program still takes 15 seconds to run instead of the expected 1 second. What is the reason? total_send_time is the time taken to spawn a new thread plus the time spent sleeping before launching the next one. Note: I am trying to send requests uniformly at a certain rate.
Interval between calls: 0.001 s
Ran in 15391 ms, total time: 1.0015188 s
use std::time::Duration;
use std::ops::Sub;
use std::net::TcpStream;
use std::io::Read;
const NANOSECOND: u64 = 1000000000;
const SECOND: u64 = 1;
const RPS: u64 = 1000;
const N: u64 = 1000;
fn send_request() {
let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
let mut buffer = [0; 1024];
stream.read(&mut buffer).unwrap();
}
fn main() {
let duration: u64 = ((SECOND as f64 / RPS as f64) as f64 * NANOSECOND as f64) as u64;
println!("Interval between calls: {} s", (SECOND as f64 / RPS as f64));
let start = std::time::Instant::now();
let mut total_send_time: u128 = 0;
for i in 0..N {
let start_in = std::time::Instant::now();
std::thread::spawn(move || send_request());
let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
total_send_time += start_in.elapsed().as_nanos();
if time_to_sleep > 0 {
std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
total_send_time += time_to_sleep as u128;
}
}
println!("Ran in {} ms, total time: {} s", start.elapsed().as_millis(), total_send_time as f64 / NANOSECOND as f64);
}
Original:
I am new to Rust and was reading up on using futures and async / await, and built a simple TCP server using them. I then decided to write a quick benchmark by sending requests to the server at a constant rate, but I am having some strange issues.
The below code should send a request every 0.001 seconds, and it does, except the program reports strange run times. This is the output:
Interval between calls: 0.001 s
Ran in 15 s, total time: 1 s
Obviously getting system time and calculating time to sleep has some cost, but surely not 14 seconds. What have I done wrong?
use async_std::net::TcpStream;
use futures::AsyncReadExt;
use std::time::Duration;
use async_std::task::spawn;
use std::ops::Sub;
const RPS: u64 = 1000;
const N: u64 = 1000;
async fn send_request() {
let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();
let mut buffer = [0; 1024];
stream.read(&mut buffer).await.unwrap();
}
#[async_std::main]
async fn main() {
let duration: u64 = ((1 as f64 / RPS as f64) as f64 * 1000000000 as f64) as u64;
println!("Interval between calls: {} s", (1 as f64 / RPS as f64));
let start = std::time::Instant::now();
let mut total_send_time: u128 = 0;
for _ in 0..N {
let start_in = std::time::Instant::now();
spawn(send_request());
let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
total_send_time += start_in.elapsed().as_nanos();
if time_to_sleep > 0 {
std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
total_send_time += time_to_sleep as u128;
}
}
println!("Ran in {} s, total time: {} s", start.elapsed().as_secs(), total_send_time / 1000000000)
}
You are not measuring the elapsed time correctly:
total_send_time measures the duration of the spawn() call, but as the actual task is executed asynchronously, start_in.elapsed() does not give you any information about how much time the task actually takes.
The "Ran in" time, as measured by start.elapsed(), is not useful either. As you are using a blocking sleep operation, you are mostly measuring how much time your app has spent in std::thread::sleep().
Last but not least, your time_to_sleep calculation is completely incorrect because of the issue mentioned in point 1.
Edit
As I've already explained in my previous answer, your program takes 15 seconds to run because you do not calculate the time to sleep properly. There are also other mistakes, such as using a blocking sleep in an async function. Here is a corrected version:
use std::time::{Duration, Instant};
const TASKS: u64 = 1000;
const TASKS_PER_SECOND: u64 = 1000;
#[async_std::main]
async fn main() -> std::io::Result<()> {
let micros_per_task = Duration::from_micros(
Duration::from_secs(1).as_micros() as u64 / TASKS_PER_SECOND
);
let mut spawn_overhead = Duration::default();
let before_spawning = Instant::now();
for _ in 0..TASKS {
let task_start = Instant::now();
async_std::task::spawn(task());
let elapsed = task_start.elapsed();
spawn_overhead += elapsed;
if elapsed < micros_per_task {
let sleep = micros_per_task - elapsed;
async_std::task::sleep(sleep).await;
}
}
let elapsed_spawning = before_spawning.elapsed();
println!("Took {}ms to spawn {} tasks", elapsed_spawning.as_millis(), TASKS);
println!("Micros spent in spawn(): {}", spawn_overhead.as_micros());
Ok(())
}
async fn task() {
async_std::task::sleep(Duration::from_millis(1000)).await;
}

How do I get a Duration as a number of milliseconds in Rust

I have a time::Duration. How can I get the number of milliseconds represented by this duration as an integer? There used to be a num_milliseconds() function, but it is no longer available.
Since Rust 1.33.0, there is an as_millis() function:
use std::time::SystemTime;
fn main() {
let now = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).expect("get millis error");
println!("now millis: {}", now.as_millis());
}
Since Rust 1.27.0, there is a subsec_millis() function:
use std::time::SystemTime;
fn main() {
let since_the_epoch = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).expect("get millis error");
let seconds = since_the_epoch.as_secs();
let subsec_millis = since_the_epoch.subsec_millis() as u64;
println!("now millis: {}", seconds * 1000 + subsec_millis);
}
Since Rust 1.8, there is a subsec_nanos function:
let in_ms = since_the_epoch.as_secs() * 1000 +
since_the_epoch.subsec_nanos() as u64 / 1_000_000;
See also:
How can I get the current time in milliseconds?
Here is the solution I came up with: multiply the seconds by a billion to convert them to nanoseconds, add the subsecond nanoseconds, then divide by one million.
let nanos = timeout_duration.subsec_nanos() as u64;
let ms = (1000*1000*1000 * timeout_duration.as_secs() + nanos)/(1000 * 1000);
Use time::Duration from the time crate on crates.io, which provides a num_milliseconds() method.
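For example (a sketch assuming the old time 0.1 API; newer releases of that crate expose whole_milliseconds() instead):
extern crate time;

fn main() {
    // time 0.1's Duration, unlike std::time::Duration at the time, has num_milliseconds()
    let d = time::Duration::seconds(2) + time::Duration::milliseconds(345);
    println!("{} ms", d.num_milliseconds()); // 2345
}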
