Rust futures / async-await strange behavior

Edit:
I swapped all of the future/async calls for plain thread spawns, but the program still takes 15 seconds to run instead of the expected 1 second. Why? total_send_time is the time taken to spawn a new thread plus the time spent sleeping before launching the next one. Note: I am trying to send requests uniformly at a certain rate.
Interval between calls: 0.001 s
Ran in 15391 ms, total time: 1.0015188 s
use std::time::Duration;
use std::ops::Sub;
use std::net::TcpStream;
use std::io::Read;

const NANOSECOND: u64 = 1000000000;
const SECOND: u64 = 1;
const RPS: u64 = 1000;
const N: u64 = 1000;

fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).unwrap();
}

fn main() {
    let duration: u64 = ((SECOND as f64 / RPS as f64) * NANOSECOND as f64) as u64;
    println!("Interval between calls: {} s", SECOND as f64 / RPS as f64);
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for _ in 0..N {
        let start_in = std::time::Instant::now();
        std::thread::spawn(move || send_request());
        let time_to_sleep = (duration as i128 - start_in.elapsed().as_nanos() as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} ms, total time: {} s", start.elapsed().as_millis(), total_send_time as f64 / NANOSECOND as f64);
}
Original:
I am new to Rust and was reading up on using futures and async/await in Rust, and built a simple TCP server with them. I then decided to write a quick benchmark by sending requests to the server at a constant rate, but I am having some strange issues.
The code below should send a request every 0.001 seconds, and it does, except the program reports strange run times. This is the output:
Interval between calls: 0.001 s
Ran in 15 s, total time: 1 s
Obviously getting system time and calculating time to sleep has some cost, but surely not 14 seconds. What have I done wrong?
use async_std::net::TcpStream;
use futures::AsyncReadExt;
use std::time::Duration;
use async_std::task::spawn;
use std::ops::Sub;

const RPS: u64 = 1000;
const N: u64 = 1000;

async fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await.unwrap();
}

#[async_std::main]
async fn main() {
    let duration: u64 = ((1.0 / RPS as f64) * 1_000_000_000.0) as u64;
    println!("Interval between calls: {} s", 1.0 / RPS as f64);
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for _ in 0..N {
        let start_in = std::time::Instant::now();
        spawn(send_request());
        let time_to_sleep = (duration as i128 - start_in.elapsed().as_nanos() as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} s, total time: {} s", start.elapsed().as_secs(), total_send_time / 1_000_000_000);
}

You are not measuring the elapsed time correctly:
1. total_send_time measures the duration of the spawn() call, but since the actual task runs asynchronously, start_in.elapsed() tells you nothing about how long the task itself takes.
2. The "Ran in" time, as measured by start.elapsed(), is not useful either. Because you use a blocking sleep operation, you are mostly measuring how much time your app spends inside std::thread::sleep().
3. Last but not least, your time_to_sleep calculation is completely incorrect: besides the issue in point 1, the .abs() turns what should be a negative remainder (meaning "no sleep needed, you are already behind schedule") into a positive sleep duration.
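For point 3, a minimal sketch of the fix (written against the code above) is to clamp the remainder at zero instead of taking its absolute value:

    // Only sleep when there is actually time left in this interval;
    // a negative remainder means this iteration is already behind schedule.
    let remaining = duration as i128 - start_in.elapsed().as_nanos() as i128;
    if remaining > 0 {
        std::thread::sleep(Duration::from_nanos(remaining as u64));
    }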
Edit
As I've already explained in my previous answer, your program takes 15 seconds to run because you do not calculate the time to sleep properly. There are also other mistakes, such as using a blocking sleep in an async function. Here is a corrected version:
use std::time::{Duration, Instant};

const TASKS: u64 = 1000;
const TASKS_PER_SECOND: u64 = 1000;

#[async_std::main]
async fn main() -> std::io::Result<()> {
    let micros_per_task = Duration::from_micros(
        Duration::from_secs(1).as_micros() as u64 / TASKS_PER_SECOND,
    );
    let mut spawn_overhead = Duration::default();
    let before_spawning = Instant::now();
    for _ in 0..TASKS {
        let task_start = Instant::now();
        async_std::task::spawn(task());
        let elapsed = task_start.elapsed();
        spawn_overhead += elapsed;
        if elapsed < micros_per_task {
            let sleep = micros_per_task - elapsed;
            async_std::task::sleep(sleep).await;
        }
    }
    let elapsed_spawning = before_spawning.elapsed();
    println!("Took {}ms to spawn {} tasks", elapsed_spawning.as_millis(), TASKS);
    println!("Micros spent in spawn(): {}", spawn_overhead.as_micros());
    Ok(())
}

async fn task() {
    async_std::task::sleep(Duration::from_millis(1000)).await;
}
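Note that this corrected version only measures the spawning loop: the spawned tasks are detached, so the program does not wait for them to finish their one-second sleeps before main returns and the process exits.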

Related

How to create threads that last entire duration of program and pass immutable chunks for threads to operate on?

I have a bunch of math that has real-time constraints. My main loop will just call this function repeatedly, and it will always store results into an existing buffer. However, I want to be able to spawn the threads at init time, then let the threads run, do their work, and wait for more data. For the synchronization I will use a Barrier, and I have that part working. What I can't get working, despite trying various iterations of Arc and crossbeam, is splitting up the thread spawning and the actual workload. This is what I have now:
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    crossbeam::scope(|scope| {
        let threads: Vec<_> = work
            .chunks(NUM_TASKS_PER_THREAD)
            .map(|chunk| scope.spawn(move |_| chunk.iter().cloned().sum::<f64>()))
            .collect();
        let threaded_time = std::time::Instant::now();
        let thread_sum: f64 = threads.into_iter().map(|t| t.join().unwrap()).sum();
        let threaded_micros = threaded_time.elapsed().as_micros() as f64;
        println!("threaded took: {:#?}", threaded_micros);
        let serial_time = std::time::Instant::now();
        let no_thread_sum: f64 = work.iter().cloned().sum();
        let serial_micros = serial_time.elapsed().as_micros() as f64;
        println!("serial took: {:#?}", serial_micros);
        assert_eq!(thread_sum, no_thread_sum);
        println!(
            "Threaded performance was {:?}",
            serial_micros / threaded_micros
        );
    })
    .unwrap();
}
But I can't find a way to spin these threads up in an init function and then pass work into them from a do_work function. I attempted to do something like this with Arcs and Mutexes but couldn't get everything straight there either. What I want to turn this into is something like the following:
use std::sync::{Arc, Barrier, Mutex};
use std::{slice::Chunks, thread::JoinHandle};

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.into_iter().cloned().sum::<f64>();
        let mut result = *result.lock().unwrap();
        result += sum;
    }
}

fn init(
    mut data: Chunks<'_, f64>,
    result: &Arc<Mutex<f64>>,
    barrier: &Arc<Barrier>,
) -> Vec<std::thread::JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    //spawn threads, in actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        let result = result.clone();
        let barrier = barrier.clone();
        let chunk = data.nth(i).unwrap();
        handles.push(std::thread::spawn(|| {
            //Pass the particular thread the particular chunk it will operate on.
            do_work(chunk, result, barrier);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let mut result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let threads = init(work.chunks(NUM_TASKS_PER_THREAD), &result, &work_barrier);
    loop {
        work_barrier.wait();
        //actual code base would do something with summation stored in result.
        println!("{:?}", result.lock().unwrap());
    }
}
I hope this expresses the intent of what I need to do clearly enough. The issue with this specific implementation is that the chunks don't seem to live long enough, and when I tried wrapping them in an Arc, it just moved the "argument doesn't live long enough" error to the Arc::new(data.chunk(_)) line.
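The underlying problem is that work.chunks() borrows from the local Vec, while std::thread::spawn requires its closure (and everything it captures) to be 'static. The working version below sidesteps this by giving each thread an owned copy of its slice via to_owned(), at the cost of one upfront copy of the input: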
use std::sync::{Arc, Barrier, Mutex};
use std::thread;

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.iter().sum::<f64>();
        *result.lock().unwrap() += sum;
    }
}

fn init(
    work: Vec<f64>,
    result: Arc<Mutex<f64>>,
    barrier: Arc<Barrier>,
) -> Vec<thread::JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    //spawn threads, in actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        let slice = work[i * NUM_TASKS_PER_THREAD..(i + 1) * NUM_TASKS_PER_THREAD].to_owned();
        let result = Arc::clone(&result);
        let w = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            do_work(&slice, result, w);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let _threads = init(work, Arc::clone(&result), Arc::clone(&work_barrier));
    loop {
        thread::sleep(std::time::Duration::from_secs(3));
        work_barrier.wait();
        //actual code base would do something with summation stored in result.
        println!("{:?}", result.lock().unwrap());
    }
}
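Two details worth noting in this version: each to_owned() slice is moved into its thread, so no borrow of work escapes into the 'static closures; and the Barrier is created with NUM_THREADS + 1 parties so that main participates in each round of work alongside the worker threads.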

Rust, how to perform basic recursive async?

I am just doing some quick experimenting in an attempt to learn the Rust language. I have done a few successful async tests; this is my starting point:
use async_std::task;
use futures;
use std::time::SystemTime;

fn main() {
    let now = SystemTime::now();
    task::block_on(async {
        let mut fs = Vec::new();
        let sum = 100000000;
        let chunks: u64 = 5; // only valid for factors of sum
        let chunk_size: u64 = sum / chunks;
        for n in 1..=chunks {
            fs.push(task::spawn(async move {
                add_range((n - 1) * chunk_size + 1, n * chunk_size + 1)
            }));
        }
        let vals = futures::future::join_all(fs).await;
        // 5000000050000000 for this configuration of inputs
        println!("{}", vals.iter().sum::<u64>());
    });
    println!("{}ms", now.elapsed().unwrap().as_millis());
}

fn add_range(start: u64, end: u64) -> u64 {
    println!("{}, {}", start, end);
    let mut total: u64 = 0;
    for n in start..end {
        total += n;
    }
    return total;
}
By changing the value of chunks you can change how many task::spawns there are. Now, rather than a flat set of workers, I want the add_range function to be recursive and keep forking off workers based on the inputs. However, in following the compiler errors I have gotten myself quite tangled up:
use async_std::task;
use futures;
use std::future::Future;
use std::pin::Pin;

fn main() {
    let pin_box_u64 = task::block_on(add_range(0, 10, 10, 1, 1001));
    println!("{}", pin_box_u64 /* how do i get u64 out of this */)
}

// recursively calls itself in a branching tree structure
// forking off more worker threads
async fn add_range(
    depth: u64,
    chunk_split: u64,
    chunk_size: u64,
    start: u64,
    end: u64,
) -> Pin<Box<dyn Future<Output = u64>>> {
    println!("{}, {}, {}", depth, start, end);
    // if the range of start to end is more than the allowed
    // chunk_size then fork off more workers dividing
    // the work up further.
    if end - start > chunk_size {
        let mut fs = Vec::new();
        let next_chunk_size = (end - start) / chunk_split;
        for n in 0..chunk_split {
            let s = start + (next_chunk_size * n);
            let mut e = start + (next_chunk_size * (n + 1));
            if e > end {
                e = end;
            }
            // spawn more workers
            fs.push(task::spawn(add_range(depth + 1, chunk_split, chunk_size, s, e)));
        }
        return Box::pin(async move {
            // join workers back up and do joining sum.
            return futures::future::join_all(fs)
                .await
                .iter()
                .map(/* how do i get u64s out of here */)
                .sum::<u64>();
        });
    } else {
        // else the work is less than the allowed chunk_size
        // so lets now do the actual sum for my chunk
        let mut total: u64 = 0;
        for n in start..end {
            total += n;
        }
        return Box::pin(async move { total });
    }
}
I have played around with this for a while, but I feel like I'm just becoming more and more lost in the compiler errors.
You need to box the returned future, otherwise the compiler can't determine the size of the return type (a recursive async fn would otherwise have an infinitely sized future type).
Additional context can be found in the async book: https://rust-lang.github.io/async-book/07_workarounds/04_recursion.html
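As a minimal sketch of the pattern (using the futures crate's BoxFuture alias and FutureExt::boxed(), both of which the full solution below also relies on; sum_to is a made-up example function):

    use futures::future::{BoxFuture, FutureExt};

    // A recursive async computation must return a boxed future so the
    // return type has a known, finite size.
    fn sum_to(n: u64) -> BoxFuture<'static, u64> {
        async move {
            if n == 0 { 0 } else { n + sum_to(n - 1).await }
        }
        .boxed()
    }

    fn main() {
        let total = async_std::task::block_on(sum_to(10));
        assert_eq!(total, 55);
    }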
use std::pin::Pin;
use async_std::task;
use futures::Future;
use futures::FutureExt;

fn main() {
    let pin_box_u64 = task::block_on(add_range(0, 10, 10, 1, 1001));
    println!("{}", pin_box_u64)
}

// recursively calls itself in a branching tree structure
// forking off more worker threads
fn add_range(
    depth: u64,
    chunk_split: u64,
    chunk_size: u64,
    start: u64,
    end: u64,
) -> Pin<Box<dyn Future<Output = u64> + Send + 'static>> {
    println!("{}, {}, {}", depth, start, end);
    // if the range of start to end is more than the allowed
    // chunk_size then fork off more workers dividing
    // the work up further.
    if end - start > chunk_size {
        let mut fs = Vec::new();
        let next_chunk_size = (end - start) / chunk_split;
        for n in 0..chunk_split {
            let s = start + (next_chunk_size * n);
            let mut e = start + (next_chunk_size * (n + 1));
            if e > end {
                e = end;
            }
            // spawn more workers
            fs.push(task::spawn(add_range(
                depth + 1,
                chunk_split,
                chunk_size,
                s,
                e,
            )));
        }
        // join workers back up and do joining sum.
        return futures::future::join_all(fs)
            .map(|v| v.iter().sum::<u64>())
            .boxed();
    } else {
        // else the work is less than the allowed chunk_size
        // so lets now do the actual sum for my chunk
        let mut total: u64 = 0;
        for n in start..end {
            total += n;
        }
        return futures::future::ready(total).boxed();
    }
}
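Note the + Send bound in the return type: async_std::task::spawn requires the future it executes to be Send, because the task may migrate between the executor's worker threads.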

Is it normal to experience large overhead using the 1:1 threading that comes in the standard library?

While working through learning Rust, a friend asked me to see what kind of performance I could get out of Rust for generating the first 1 million prime numbers both single-threaded and multi-threaded. After trying several implementations, I'm just stumped. Here is the kind of performance that I'm seeing:
rust_primes --threads 8 --verbose --count 1000000
Options { verbose: true, count: 1000000, threads: 8 }
Non-concurrent using while (15485863): 2.814 seconds.
Concurrent using mutexes (15485863): 876.561 seconds.
Concurrent using channels (15485863): 798.217 seconds.
Without overloading the question with too much code, here are the methods responsible for each of the benchmarks:
fn non_concurrent(options: &Options) {
    let mut count = 0;
    let mut current = 0;
    let ts = Instant::now();
    while count < options.count {
        if is_prime(current) {
            count += 1;
        }
        current += 1;
    }
    let d = ts.elapsed();
    println!(
        "Non-concurrent using while ({}): {}.{} seconds.",
        current - 1,
        d.as_secs(),
        d.subsec_nanos() / 1_000_000
    );
}

fn concurrent_mutex(options: &Options) {
    let count = Arc::new(Mutex::new(0));
    let highest = Arc::new(Mutex::new(0));
    let mut cc = 0;
    let mut current = 0;
    let ts = Instant::now();
    while cc < options.count {
        let mut handles = vec![];
        for x in current..(current + options.threads) {
            let count = Arc::clone(&count);
            let highest = Arc::clone(&highest);
            let handle = thread::spawn(move || {
                if is_prime(x) {
                    let mut c = count.lock().unwrap();
                    let mut h = highest.lock().unwrap();
                    *c += 1;
                    if x > *h {
                        *h = x;
                    }
                }
            });
            handles.push(handle);
        }
        for handle in handles {
            handle.join().unwrap();
        }
        cc = *count.lock().unwrap();
        current += options.threads;
    }
    let d = ts.elapsed();
    println!(
        "Concurrent using mutexes ({}): {}.{} seconds.",
        *highest.lock().unwrap(),
        d.as_secs(),
        d.subsec_nanos() / 1_000_000
    );
}

fn concurrent_channel(options: &Options) {
    let mut count = 0;
    let mut current = 0;
    let mut highest = 0;
    let ts = Instant::now();
    while count < options.count {
        let (tx, rx) = mpsc::channel();
        for x in current..(current + options.threads) {
            let txc = mpsc::Sender::clone(&tx);
            thread::spawn(move || {
                if is_prime(x) {
                    txc.send(x).unwrap();
                }
            });
        }
        drop(tx);
        for message in rx {
            count += 1;
            if message > highest && count <= options.count {
                highest = message;
            }
        }
        current += options.threads;
    }
    let d = ts.elapsed();
    println!(
        "Concurrent using channels ({}): {}.{} seconds.",
        highest,
        d.as_secs(),
        d.subsec_nanos() / 1_000_000
    );
}
Am I doing something wrong, or is this normal performance with the 1:1 threading that comes in the standard library?
Here is an MCVE that shows the same problem. I didn't limit the number of threads it starts at once here like I did in the code above. The point is, threading seems to have a very significant overhead unless I'm doing something horribly wrong.
use std::thread;
use std::time::Instant;
use std::sync::{Mutex, Arc};
use std::time::Duration;

fn main() {
    let iterations = 100_000;
    non_threaded(iterations);
    threaded(iterations);
}

fn threaded(iterations: u32) {
    let tx = Instant::now();
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];
    for _ in 0..iterations {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num = test(*num);
        });
        handles.push(handle);
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let d = tx.elapsed();
    println!("Threaded in {}.", dur_to_string(d));
}

fn non_threaded(iterations: u32) {
    let tx = Instant::now();
    let mut _q = 0;
    for x in 0..iterations {
        _q = test(x + 1);
    }
    let d = tx.elapsed();
    println!("Non-threaded in {}.", dur_to_string(d));
}

fn dur_to_string(d: Duration) -> String {
    let mut s = d.as_secs().to_string();
    s.push_str(".");
    s.push_str(&(d.subsec_nanos() / 1_000_000).to_string());
    s
}

fn test(x: u32) -> u32 {
    x
}
Here are the results of this on my machine:
Non-threaded in 0.9.
Threaded in 5.785.
threading seems to have a very significant overhead
It's not the general concept of "threading", it's the concept of creating and destroying lots of threads.
By default in Rust 1.22.1, each spawned thread allocates 2MiB of memory to use as stack space. In the worst case, your MCVE could allocate ~200GiB of RAM. In reality, this is unlikely to happen as some threads will exit, memory will be reused, etc. I only saw it use ~400MiB.
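If a workload genuinely does need many short-lived threads, std::thread::Builder lets you request a smaller stack than the default; a minimal sketch, where the 64 KiB figure is an arbitrary illustrative value, not a recommendation:

    use std::thread;

    fn main() {
        let handle = thread::Builder::new()
            .stack_size(64 * 1024) // request a 64 KiB stack instead of the 2 MiB default
            .spawn(|| println!("hello from a small-stack thread"))
            .unwrap();
        handle.join().unwrap();
    }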
On top of that, there is overhead involved with inter-thread communication (Mutex, channels, Atomic*) compared to intra-thread variables. Some kind of locking needs to be performed to ensure that all threads see the same data. "Embarrassingly parallel" algorithms tend to not have a lot of communication required. There are also different amounts of time required for different communication primitives. Atomic variables tend to be faster than others in many cases, but aren't as widely usable.
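For a shared counter like the one in the MCVE, an atomic is usually the cheapest primitive; a minimal sketch using AtomicUsize in place of the Mutex-wrapped counter:

    use std::sync::atomic::{AtomicUsize, Ordering};

    static COUNTER: AtomicUsize = AtomicUsize::new(0);

    fn main() {
        // fetch_add is a single atomic read-modify-write; no lock required.
        COUNTER.fetch_add(1, Ordering::Relaxed);
        println!("{}", COUNTER.load(Ordering::Relaxed));
    }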
Then there are compiler optimizations to account for. Non-threaded code is far easier to optimize than threaded code. For example, running your code in release mode shows:
Non-threaded in 0.0.
Threaded in 142.775.
That's right, the non-threaded code took no time. The compiler can see through the code and realizes that nothing actually happens and removes it all. I don't know how you got 5 seconds for the threaded code as opposed to the 2+ minutes I saw.
Switching to a threadpool will reduce a lot of the unneeded creation of threads. We can also use a threadpool that provides scoped threads, which allows us to avoid the Arc as well:
extern crate scoped_threadpool;

use scoped_threadpool::Pool;

fn threaded(iterations: u32) {
    let tx = Instant::now();
    let counter = Mutex::new(0);
    let mut pool = Pool::new(8);
    pool.scoped(|scope| {
        for _ in 0..iterations {
            scope.execute(|| {
                let mut num = counter.lock().unwrap();
                *num = test(*num);
            });
        }
    });
    let d = tx.elapsed();
    println!("Threaded in {}.", dur_to_string(d));
}
Non-threaded in 0.0.
Threaded in 0.675.
As with most pieces of programming, it's crucial to understand the tools you have and to use them appropriately.

How do I get a Duration as a number of milliseconds in Rust

I have a time::Duration. How can I get the number of milliseconds represented by this duration as an integer? There used to be a num_milliseconds() function, but it is no longer available.
Since Rust 1.33.0, there is an as_millis() function:
use std::time::SystemTime;

fn main() {
    let now = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
        .expect("get millis error");
    println!("now millis: {}", now.as_millis());
}
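Since the question asks about an arbitrary Duration rather than a system timestamp, here is a minimal sketch of as_millis() on a hand-built Duration:

    use std::time::Duration;

    fn main() {
        let d = Duration::new(2, 500_000_000); // 2 s + 500,000,000 ns = 2.5 s
        assert_eq!(d.as_millis(), 2500);
        println!("{} ms", d.as_millis());
    }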
Since Rust 1.27.0, there is a subsec_millis() function:
use std::time::SystemTime;

fn main() {
    let since_the_epoch = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
        .expect("get millis error");
    let seconds = since_the_epoch.as_secs();
    let subsec_millis = since_the_epoch.subsec_millis() as u64;
    println!("now millis: {}", seconds * 1000 + subsec_millis);
}
Since Rust 1.8, there is a subsec_nanos function:
let in_ms = since_the_epoch.as_secs() * 1000 +
    since_the_epoch.subsec_nanos() as u64 / 1_000_000;
See also:
How can I get the current time in milliseconds?
Here is the solution I came up with, which is to multiply the seconds by a billion, add the nanoseconds, then divide by one million:
let nanos = timeout_duration.subsec_nanos() as u64;
let ms = (1_000_000_000 * timeout_duration.as_secs() + nanos) / 1_000_000;
Use time::Duration from the time crate on crates.io, which provides a num_milliseconds() method.
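A minimal sketch, assuming the 0.1.x series of the time crate (whose Duration API mirrors chrono's):

    extern crate time;

    fn main() {
        let d = time::Duration::seconds(2) + time::Duration::milliseconds(500);
        println!("{} ms", d.num_milliseconds()); // prints: 2500 ms
    }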

How can I get the current time in milliseconds?

How can I get the current time in milliseconds like I can in Java?
System.currentTimeMillis()
Since Rust 1.8, you do not need to use a crate. Instead, you can use SystemTime and UNIX_EPOCH:
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    let start = SystemTime::now();
    let since_the_epoch = start
        .duration_since(UNIX_EPOCH)
        .expect("Time went backwards");
    println!("{:?}", since_the_epoch);
}
If you need exactly milliseconds, you can convert the Duration.
Rust 1.33
let in_ms = since_the_epoch.as_millis();
Rust 1.27
let in_ms = since_the_epoch.as_secs() as u128 * 1000 +
    since_the_epoch.subsec_millis() as u128;
Rust 1.8
let in_ms = since_the_epoch.as_secs() * 1000 +
    since_the_epoch.subsec_nanos() as u64 / 1_000_000;
If you just want to do simple timings with the milliseconds, you can use std::time::Instant like this:
use std::time::Instant;

fn main() {
    let start = Instant::now();
    // do stuff
    let elapsed = start.elapsed();
    // Debug format
    println!("Debug: {:?}", elapsed);
    // Format as milliseconds rounded down
    // Since Rust 1.33:
    println!("Millis: {} ms", elapsed.as_millis());
    // Before Rust 1.33:
    println!("Millis: {} ms",
        (elapsed.as_secs() * 1_000) + (elapsed.subsec_nanos() / 1_000_000) as u64);
}
Output:
Debug: 10.93993ms
Millis: 10 ms
Millis: 10 ms
You can use the time crate:
extern crate time;

fn main() {
    println!("{}", time::now());
}
It returns a Tm, from which you can get whatever precision you want.
I've found a clear solution with chrono in coinnect:
use chrono::prelude::*;

pub fn get_unix_timestamp_ms() -> i64 {
    let now = Utc::now();
    now.timestamp_millis()
}

// timestamp_nanos() returns nanoseconds since the epoch.
pub fn get_unix_timestamp_ns() -> i64 {
    let now = Utc::now();
    now.timestamp_nanos()
}
As #Shepmaster mentioned, this is the equivalent of Java's System.currentTimeMillis() in Rust.
use std::time::{SystemTime, UNIX_EPOCH};

fn get_epoch_ms() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis()
}
extern crate time;

fn timestamp() -> f64 {
    let timespec = time::get_time();
    // seconds since the epoch, with a fractional part, e.g. 1459440009.113178
    let seconds: f64 = timespec.sec as f64 + (timespec.nsec as f64 / 1000.0 / 1000.0 / 1000.0);
    seconds
}

fn main() {
    let ts = timestamp();
    println!("Time Stamp: {:?}", ts);
}
Rust Playground
System.currentTimeMillis() in Java returns the difference in milliseconds between the current time and midnight, January 1, 1970.
In Rust we have time::get_time() which returns a Timespec with the current time as seconds and the offset in nanoseconds since midnight, January 1, 1970.
Example (using Rust 1.13):
extern crate time; //Time library

fn main() {
    //Get current time
    let current_time = time::get_time();
    //Print results
    println!("Time in seconds {}\nOffset in nanoseconds {}",
        current_time.sec,
        current_time.nsec);
    //Calculate milliseconds
    let milliseconds = (current_time.sec as i64 * 1000) +
        (current_time.nsec as i64 / 1000 / 1000);
    println!("System.currentTimeMillis(): {}", milliseconds);
}
Reference: Time crate, System.currentTimeMillis()
