Does Rust currently have a library implementing functions similar to JavaScript's setTimeout and setInterval?

Does Rust currently have a library implementing functions similar to JavaScript's setTimeout and setInterval? That is, a library where I can register multiple setTimeout and setInterval callbacks and have them managed as multiple tasks at the same time.
I feel that using tokio directly is not particularly convenient. I imagine it being used like this:
fn callback1() {
    println!("callback1");
}

fn callback2() {
    println!("callback2");
}

set_interval(callback1, 10);
set_interval(callback2, 20);
set_timeout(callback1, 30);
Of course, I can fake these functions to make it work:
// just for test, not what I wanted at all
type rust_listener_callback = fn();

fn set_interval(func: rust_listener_callback, duration: i32) {
    func()
}

fn set_timeout(func: rust_listener_callback, duration: i32) {
    func();
}
If set_interval is implemented in this way, combining multiple timers, adding and removing them dynamically, and cancelling them are not particularly convenient:
use std::time::Duration;
use tokio::time;

async fn set_interval(func: rust_listener_callback, duration: u64) {
    let mut interval = time::interval(Duration::from_millis(duration));
    tokio::spawn(async move {
        loop {
            interval.tick().await;
            func()
        }
    }).await;
}
// emm maybe loop can be removed, just a sample
What I want to know is whether there is a library that already does this, instead of writing it myself.
I have some ideas about how I would write it myself: generally, all the callbacks would be turned into a task queue or task tree, and then tokio::time::delay_for could be used to execute them one by one, but the details are actually more complicated.
However, I think this general capability may already have been implemented and I just haven't found it yet, so I want to ask here. Thank you very much.
And importantly, I hope it can support running on a single thread.

setTimeout can be done like this without the need for a crate:
tokio::spawn(async move {
    tokio::time::sleep(Duration::from_secs(5)).await;
    // code goes here
});
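For completeness, a setInterval analogue can be sketched in the same spirit with tokio::time::interval. This set_interval is just an illustrative helper, not a standard tokio API; cancelling it means calling abort() on the returned JoinHandle:
use std::time::Duration;

// Illustrative sketch only: spawn a task that ticks forever and runs the callback.
// Calling .abort() on the returned JoinHandle stops the interval.
fn set_interval<F: Fn() + Send + 'static>(f: F, ms: u64) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_millis(ms));
        loop {
            interval.tick().await;
            f();
        }
    })
}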

I asked myself the same question a few days ago, created a solution for this (for tokio runtimes), and found your stackoverflow post just now.
https://crates.io/crates/tokio-js-set-interval
code.rs
use std::time::Duration;
use tokio_js_set_interval::{set_interval, set_timeout};

#[tokio::main]
async fn main() {
    println!("hello1");
    set_timeout!(println!("hello2"), 0);
    println!("hello3");
    set_timeout!(println!("hello4"), 0);
    println!("hello5");
    // give enough time before tokio's runtime exits
    tokio::time::sleep(Duration::from_millis(1)).await;
}
But this must be used with caution. There is no guarantee that the futures will be executed (because tokio's runtime must run long enough). Use it only:
for educational purposes,
and if you have low-priority background tasks that you don't expect to always get executed

I created a library just for this. It allows setting many timeouts using only one tokio task (instead of spawning a new task for each timeout), which gives better performance and lower memory usage.
The library also supports cancelling timeouts and provides some ways to optimize their performance.
Check it out:
https://crates.io/crates/set_timeout
Usage example:
#[tokio::main]
async fn main() {
    let scheduler = TimeoutScheduler::new(None);

    // schedule a future which will run after 1.234 seconds from now.
    scheduler.set_timeout(Duration::from_secs_f32(1.234), async move {
        println!("It works!");
    });

    // make sure that the main task doesn't end before the timeout is executed, because if the
    // main task returns the runtime stops running.
    tokio::time::sleep(Duration::from_secs(2)).await;
}

Related

Rust: can tokio be understood as similar to JavaScript's event loop or be used like it?

I'm not sure whether tokio is similar to the event loop in JavaScript, i.e. also a non-blocking runtime, or whether it can be used to work in a similar way. In my understanding, tokio is a runtime for futures in Rust. Therefore it must implement some kind of userland threads or tasks, which can be achieved (at least in part) with an event loop that schedules new tasks.
Let's take the following Javascript code:
console.log('hello1');
setTimeout(() => console.log('hello2'), 0);
console.log('hello3');
setTimeout(() => console.log('hello4'), 0);
console.log('hello5');
The output will be
hello1
hello3
hello5
hello2
hello4
How can I do this in tokio? Is tokio meant to work like this overall? I tried the following code
async fn set_timeout(f: impl Fn(), ms: u64) {
    tokio::time::sleep(tokio::time::Duration::from_millis(ms)).await;
    f()
}

#[tokio::main]
async fn main() {
    println!("hello1");
    tokio::spawn(async {set_timeout(|| println!("hello2"), 0)}).await;
    println!("hello3");
    tokio::spawn(async {set_timeout(|| println!("hello4"), 0)}).await;
    println!("hello5");
}
The output is just
hello1
hello3
hello5
If I change the code to
println!("hello1");
tokio::spawn(async {set_timeout(|| println!("hello2"), 0)}.await).await;
println!("hello3");
tokio::spawn(async {set_timeout(|| println!("hello4"), 0)}.await).await;
println!("hello5");
The output is
hello1
hello2
hello3
hello4
hello5
but then I don't get the point of the whole async/await/future feature, because then my "async" set_timeout tasks are actually blocking the other println statements.
In short: yes, Tokio is meant to work much like the JavaScript event loop. However, there are three problems with your first snippet.
First, it returns from main() before waiting for things to play out. Unlike your JavaScript code, which presumably runs in the browser, and runs the timeouts even after the code you typed in the console has finished running, the Rust code is in a short-lived executable which terminates after main(). Whatever things were scheduled to happen later won't occur if the executable stops running because it returned from main().
The second issue is that the anonymous async block that calls the set_timeout() async function doesn't do anything with its return value. An important difference between async functions in Rust and JavaScript is that in Rust you can't just call an async function and be done with it. In JavaScript an async function returns a promise, and if you don't await that promise, the event loop will still execute the code of the async function in the background. In Rust, an async function returns a future, but it is not associated with any event loop, it is just prepared for someone to run it. You then need to either await it with .await (with the same meaning as in JavaScript) or explicitly pass it to tokio::spawn() to execute in the background (with the same meaning as calling but not awaiting the function in JavaScript). Your async block does neither, so the invocation of set_timeout() is a no-op.
Finally, the code immediately awaits the task created by spawn(), which defeats the purpose of calling spawn() in the first place - tokio::spawn(foo()).await is functionally equivalent to foo().await for any foo().
The first issue can be resolved by adding a tiny sleep at the end of main. (This is not the proper fix, but will serve to demonstrate what happens.) The second issue can be fixed by removing the async block and just passing the return value of set_timeout() to tokio::spawn(). The third issue is resolved by removing the unnecessary .await of the task.
#[tokio::main]
async fn main() {
    println!("hello1");
    tokio::spawn(set_timeout(|| println!("hello2"), 0));
    println!("hello3");
    tokio::spawn(set_timeout(|| println!("hello4"), 0));
    println!("hello5");
    tokio::time::sleep(tokio::time::Duration::from_millis(1)).await;
}
This code will print the "expected" 1, 3, 5, 4, 2 (although the order is not guaranteed in programs like this). Real code would not end with a sleep; instead, it would await the tasks it has created, as shown in Shivam's answer.
Unlike JavaScript, Rust does not start executing an async function until the future is awaited. That means set_timeout(|| println!("hello2"), 0) only creates a new future; it doesn't execute anything yet. Only when you await it is it executed, and awaiting it inline suspends the current task until the future has completed, which is not how a real asynchronous application should behave. To make your code concurrent like JavaScript, you can use the join! macro:
use tokio::join;
use tokio::time::*;

async fn set_timeout(f: impl Fn(), ms: u64) {
    sleep(Duration::from_millis(ms)).await;
    f()
}

#[tokio::main]
async fn main() {
    println!("hello1");
    let fut_1 = tokio::spawn(set_timeout(|| println!("hello2"), 0));
    println!("hello3");
    let fut_2 = tokio::spawn(set_timeout(|| println!("hello4"), 0));
    println!("hello5");
    join!(fut_1, fut_2);
}
You can use FuturesOrdered if you want something with the feel of Promise.all.
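As a rough sketch of that idea (assuming the futures 0.3 crate; push_back is called push in older releases), FuturesOrdered yields results in the order the futures were added:
use futures::stream::{FuturesOrdered, StreamExt}; // futures 0.3

#[tokio::main]
async fn main() {
    let mut futs = FuturesOrdered::new();
    futs.push_back(async { 1 }); // `push` in older futures releases
    futs.push_back(async { 2 });
    // Results come back in submission order, much like Promise.all.
    let results: Vec<i32> = futs.collect().await;
    println!("{:?}", results); // [1, 2]
}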
More info:
https://news.ycombinator.com/item?id=21473777
https://rust-lang.github.io/async-book/06_multiple_futures/01_chapter.html

What is the meaning of "await" as used in Rust?

This question may relate more to async programming in general than to Rust, but after googling a lot there are still some points I think are missing. Since I am learning Rust, I will put it in a Rust way.
Let me give my understanding of async programming first. After all, this is the basis, and maybe I am wrong:
To make a program run efficiently, dealing with tasks concurrently is essential. First threads are used, and a thread can be joined whenever the data from that thread is needed. But threads alone are not enough to handle as many tasks as, say, a server must. Then a thread pool is used, but how do you fetch data when it is needed without knowing which thread to wait on? Then callback functions (cb for short) come up; with callbacks, only what needs to happen inside the callback has to be considered. In addition, to reduce CPU overhead, green threads come up.
But what if the async steps need to happen one after another, which leads to "callback hell"? OK, the "future/promise" style comes up, which lets the code look like sync code, or perhaps like a chain (as in JavaScript). But the code still doesn't look very nice. Finally, the "async/await" style comes up as syntactic sugar over the "future/promise" style. And usually the combination of "async/await" with green threads is called a "coroutine", whether it runs on only one native thread or on multiple native threads over the async tasks.
=============================================
As far as I know at this point, the keyword "await" can only be used inside an "async" function, and only "async" functions can be "awaited". But why? And what is it for, given that there is already "async"? Anyway, I tested the code below:
use async_std::task;

// async fn easy_task() {
//     for i in 0..100 {
//         dbg!(i);
//     }
//     println!("finished easy task");
// }

async fn heavy_task(cnt1: i32, cnt2: i32) {
    for i in 0..cnt1 {
        println!("heavy_task1 cnt:{}", i);
    }
    println!("heavy task: waiting sub task");
    // normal_sub_task(cnt2);
    sub_task(cnt2).await;
    println!("heavy task: sub task finished");
    for i in 0..cnt1 {
        println!("heavy_task2 cnt:{}", i);
    }
    println!("finished heavy task");
}

fn normal_sub_task(cnt: i32) {
    println!("normal sub_task: start sub task");
    for i in 0..cnt {
        println!("normal sub task cnt:{}", i);
    }
    println!("normal sub_task: finished sub task");
}

async fn sub_task(cnt: i32) {
    println!("sub_task: start sub task");
    for i in 0..cnt {
        println!("sub task cnt:{}", i);
    }
    println!("sub_task: finished sub task");
}

fn outer_task(cnt: i32) {
    for i in 0..cnt {
        println!("outer task cnt:{}", i);
    }
    println!("finished outer task");
}

fn main() {
    // let _easy_f = easy_task();
    let heavy_f = heavy_task(3000, 500);
    let handle = task::spawn(heavy_f);
    print!("=================after spawn==============");
    outer_task(5000);
    // task::join_handle(handle);
    task::block_on(handle);
}
The conclusions I got from the test are:
1. Whether I await the async sub_task or just call normal_sub_task (the sync version) in the middle of the async heavy_task(), the code below it (the heavy_task2 loop) never cuts in line.
2. Whether I await the async sub_task or just call normal_sub_task (the sync version) in the middle of the async heavy_task(), outer_task sometimes does cut in line, interleaving with heavy_task1 or with sub_task/normal_sub_task.
Therefore, what is the meaning of "await"? It seems that only the keyword "async" is doing anything here.
References:
async_std
the sing/dance example from the Rust async book
the task module in the official Rust docs
a recommended This Week in Rust article about async programming
another article about Rust threads and async programming using the futures crate
Stack Overflow question: What is the purpose of async/await in Rust?
Conclusion 2 seems to contradict what Shepmaster said: "...we felt async functions should run synchronously to the first await."
The await keyword suspends the execution of an asynchronous function until the awaited future (future.await) produces a value.
It has the same meaning as in all the other languages that use the await concept.
When a future is awaited, the "state of execution" of the async function is persisted into an internal execution context, and other async tasks have the opportunity to progress if they are ready to run.
When the awaited future completes, the async function resumes at the exact point of suspension.
If you think "I only need async" and write something like
let result = future // instead of: let result = future.await
you don't get a value but something that represents a value that will be ready in the future.
And if you mark a function async without awaiting anything inside its body, you are handing the asynchronous engine a sequential task that, once started, will run to completion like a normal function, preventing asynchronous behavior.
Some more comments about your code.
The confusion probably arises from a misunderstanding of the task concept. When learning async in Rust I found the async book pretty useful. The book defines tasks as:
Tasks are the top-level futures that have been submitted to an executor
heavy_task is really the only task in your example, because it is the only future submitted to the async runtime (spawned with task::spawn and driven with task::block_on).
The function outer_task, for example, has nothing to do with the asynchronous world: it is not a task, and it gets executed immediately when called.
heavy_task behaves asynchronously and awaits the sub_task(cnt2) future, but the sub_task future, once executed, runs immediately to completion.
So, as it stands, your code behaves practically as if it were sequential.
But keep in mind that things are more subtle in reality: in the presence of other async tasks, the await inside heavy_task works as a suspension point and gives other tasks the opportunity to make progress toward completion.
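To see that suspension point in action, here is a small sketch (assuming async-std, as in the question) where each task awaits task::sleep, a future that actually suspends, so the output of the two tasks interleaves:
use async_std::task;
use std::time::Duration;

async fn worker(name: &'static str) {
    for i in 0..3 {
        println!("{} step {}", name, i);
        // A real suspension point: the task yields back to the executor here.
        task::sleep(Duration::from_millis(10)).await;
    }
}

fn main() {
    let a = task::spawn(worker("a"));
    let b = task::spawn(worker("b"));
    // The two tasks interleave because each sleep suspends the awaiting task.
    task::block_on(async {
        a.await;
        b.await;
    });
}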

Vector of futures in Rust doesn't execute concurrently [duplicate]

I'm trying to understand Future::select: in this example, the future with a longer time delay is returned first.
When I read this article with its example, I get cognitive dissonance. The author writes:
The select function runs two (or more in case of select_all) futures and returns the first one coming to completion. This is useful for implementing timeouts.
It seems I don't understand the point of select.
extern crate futures; // v0.1 (old)
extern crate tokio_core;

use std::thread;
use std::time::Duration;

use futures::{Async, Future};
use tokio_core::reactor::Core;

struct Timeout {
    time: u32,
}

impl Timeout {
    fn new(period: u32) -> Timeout {
        Timeout { time: period }
    }
}

impl Future for Timeout {
    type Item = u32;
    type Error = String;

    fn poll(&mut self) -> Result<Async<u32>, Self::Error> {
        thread::sleep(Duration::from_secs(self.time as u64));
        println!("Timeout is done with time {}.", self.time);
        Ok(Async::Ready(self.time))
    }
}

fn main() {
    let time_out1 = Timeout::new(5);
    let time_out2 = Timeout::new(1);
    let task = time_out1.select(time_out2);

    let mut reactor = Core::new().unwrap();
    reactor.run(task);
}
I need to process the early future with the smaller time delay, and then work with the future with a longer delay. How can I do it?
TL;DR: use tokio::time
If there's one thing to take away from this: never perform blocking or long-running operations inside of asynchronous operations.
If you want a timeout, use something from tokio::time, such as delay_for or timeout:
use futures::future::{self, Either}; // 0.3.1
use std::time::Duration;
use tokio::time; // 0.2.9

#[tokio::main]
async fn main() {
    let time_out1 = time::delay_for(Duration::from_secs(5));
    let time_out2 = time::delay_for(Duration::from_secs(1));

    match future::select(time_out1, time_out2).await {
        Either::Left(_) => println!("Timer 1 finished"),
        Either::Right(_) => println!("Timer 2 finished"),
    }
}
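And if the goal is specifically "give this future at most N seconds", tokio::time::timeout wraps another future directly. A minimal sketch (tokio 1.x names; in tokio 0.2 the sleep used here is called delay_for):
use std::time::Duration;
use tokio::time;

#[tokio::main]
async fn main() {
    // Wrap a slow future in a 1-second timeout; the Err branch fires
    // because the inner future takes longer than the allowed duration.
    let slow = time::sleep(Duration::from_secs(5));
    match time::timeout(Duration::from_secs(1), slow).await {
        Ok(_) => println!("finished in time"),
        Err(_) => println!("timed out"),
    }
}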
What's the problem?
To understand why you get the behavior you do, you have to understand the implementation of futures at a high level.
When you call run, there's a loop that calls poll on the passed-in future. It keeps looping until the future reports success or failure; any other result means the future isn't done yet.
Your implementation of poll "locks up" this loop for 5 seconds because nothing can break the call to sleep. By the time the sleep is done, the future is ready, thus that future is selected.
The implementation of an async timeout conceptually works by checking the clock every time it's polled, saying if enough time has passed or not.
The big difference is that when a future returns that it's not ready, another future can be checked. This is what select does!
A dramatic re-enactment:
sleep-based timer
core: Hey select, are you ready to go?
select: Hey future1, are you ready to go?
future1: Hold on a seconnnnnnnn [... 5 seconds pass ...] nnnnd. Yes!
simplistic async-based timer
core: Hey select, are you ready to go?
select: Hey future1, are you ready to go?
future1: Checks watch No.
select: Hey future2, are you ready to go?
future2: Checks watch No.
core: Hey select, are you ready to go?
[... polling continues ...]
[... 1 second passes ...]
core: Hey select, are you ready to go?
select: Hey future1, are you ready to go?
future1: Checks watch No.
select: Hey future2, are you ready to go?
future2: Checks watch Yes!
This simple implementation polls the futures over and over until they are all complete. This is not the most efficient, and not what most executors do.
See How do I execute an async/await function without using any external dependencies? for an implementation of this kind of executor.
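As a rough sketch of that "check the clock on every poll" idea, here is a hand-written future in that simplistic style; it wakes itself on every poll, so it busy-polls and is for illustration only (this is not how tokio's timers are implemented):
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
    time::{Duration, Instant},
};

// Naive timeout future: on every poll, just compare the clock to the deadline.
struct NaiveTimeout {
    deadline: Instant,
}

impl Future for NaiveTimeout {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if Instant::now() >= self.deadline {
            Poll::Ready(()) // "Checks watch: Yes!"
        } else {
            // Ask to be polled again immediately; this is what makes the
            // simplistic version a busy-wait.
            cx.waker().wake_by_ref();
            Poll::Pending // "Checks watch: No."
        }
    }
}

#[tokio::main]
async fn main() {
    NaiveTimeout { deadline: Instant::now() + Duration::from_millis(100) }.await;
    println!("naive timeout elapsed");
}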
smart async-based timer
core: Hey select, are you ready to go?
select: Hey future1, are you ready to go?
future1: Checks watch No, but I'll call you when something changes.
select: Hey future2, are you ready to go?
future2: Checks watch No, but I'll call you when something changes.
[... core stops polling ...]
[... 1 second passes ...]
future2: Hey core, something changed.
core: Hey select, are you ready to go?
select: Hey future1, are you ready to go?
future1: Checks watch No.
select: Hey future2, are you ready to go?
future2: Checks watch Yes!
This more efficient implementation hands a waker to each future when it is polled. When a future is not ready, it saves that waker for later. When something changes, the waker notifies the core of the executor that now would be a good time to re-check the futures. This allows the executor to not perform what is effectively a busy-wait.
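A sketch of that waker-based style, along the lines of the TimerFuture from the async book (the names here are illustrative; real timer implementations share one timer thread or wheel rather than spawning a thread per timer):
use std::{
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Context, Poll, Waker},
    thread,
    time::Duration,
};

// Shared state between the future and the thread that acts as the "timer".
struct Shared {
    completed: bool,
    waker: Option<Waker>,
}

struct TimerFuture {
    shared: Arc<Mutex<Shared>>,
}

impl TimerFuture {
    fn new(duration: Duration) -> Self {
        let shared = Arc::new(Mutex::new(Shared { completed: false, waker: None }));
        let thread_shared = shared.clone();
        // Stand-in for a real timer: a thread that sleeps, then wakes the task.
        thread::spawn(move || {
            thread::sleep(duration);
            let mut state = thread_shared.lock().unwrap();
            state.completed = true;
            if let Some(waker) = state.waker.take() {
                waker.wake(); // "Hey core, something changed."
            }
        });
        TimerFuture { shared }
    }
}

impl Future for TimerFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut state = self.shared.lock().unwrap();
        if state.completed {
            Poll::Ready(())
        } else {
            // Save the waker so the timer thread can notify the executor later,
            // instead of the executor polling in a loop.
            state.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

#[tokio::main]
async fn main() {
    TimerFuture::new(Duration::from_millis(100)).await;
    println!("timer fired");
}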
The generic solution
When you have an operation that is blocking or long-running, the appropriate thing to do is to move that work out of the async loop. See What is the best approach to encapsulate blocking I/O in future-rs? for details and examples.
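With tokio, for example, one way to move such work out of the async loop is tokio::task::spawn_blocking, which runs a closure on a dedicated blocking thread pool; a minimal sketch:
use std::{thread, time::Duration};

#[tokio::main]
async fn main() {
    // The closure runs on tokio's blocking thread pool, so the async worker
    // threads are never stalled by it.
    let value = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(1)); // stand-in for blocking work
        42
    })
    .await
    .unwrap();
    println!("blocking work returned {}", value);
}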

What is the standard way to get a Rust thread out of blocking operations?

Coming from Java, I am used to idioms along the lines of
while (true) {
    try {
        someBlockingOperation();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // re-set the interrupted flag
        cleanup(); // whatever is necessary
        break;
    }
}
This works, as far as I know, across the whole JDK for anything that might block, like reading from files, from sockets, from a queue and even for Thread.sleep().
Reading up on how this is done in Rust, I find lots of seemingly special-purpose solutions mentioned, like mio and tokio. I also found ErrorKind::Interrupted and tried to trigger this ErrorKind by sending SIGINT to the thread, but the thread seems to die immediately without leaving any (back)trace.
Here is the code I used (note: not very well versed in Rust yet, so it might look a bit strange, but it runs):
use std::io;
use std::io::Read;
use std::thread;

pub fn main() {
    let sub_thread = thread::spawn(|| {
        let mut buffer = [0; 10];
        loop {
            let d = io::stdin().read(&mut buffer);
            println!("{:?}", d);
            let n = d.unwrap();
            if n == 0 {
                break;
            }
            println!("-> {:?}", &buffer[0..n]);
        }
    });
    sub_thread.join().unwrap();
}
By "blocking operations", I mean:
sleep
socket IO
file IO
queue IO (not sure yet where the queues are in Rust)
What would be the respective means to signal to a thread, like Thread.interrupt() in Java, that it's time to pack up and go home?
There is no such thing. Blocking means blocking.
Instead, you deliberately use tools that are non-blocking. That's where libraries like mio, Tokio, or futures come in — they handle the architecture of sticking all of these non-blocking, asynchronous pieces together.
catch (InterruptedException e)
Rust doesn't have exceptions. If you expect to handle a failure case, that's better represented with a Result.
Thread.interrupt()
This doesn't actually do anything beyond setting a flag in the thread that some code may check and then throw an exception for. You could build the same structure yourself. One simple implementation:
use std::{
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
    },
    thread,
    time::Duration,
};

fn main() {
    let please_stop = Arc::new(AtomicBool::new(false));

    let t = thread::spawn({
        let should_i_stop = please_stop.clone();
        move || {
            while !should_i_stop.load(Ordering::SeqCst) {
                thread::sleep(Duration::from_millis(100));
                println!("Sleeping");
            }
        }
    });

    thread::sleep(Duration::from_secs(1));
    please_stop.store(true, Ordering::SeqCst);
    t.join().unwrap();
}
Sleep
No way of interrupting, as far as I know. The documentation even says:
On Unix platforms this function will not return early due to a signal
Socket IO
You put the socket into nonblocking mode using methods like set_nonblocking and then handle ErrorKind::WouldBlock.
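As a rough sketch of that pattern with std's TcpStream (the address is a placeholder, and a real program would do useful work or check a stop flag instead of spinning):
use std::io::{ErrorKind, Read};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?; // placeholder address
    stream.set_nonblocking(true)?;

    let mut buf = [0u8; 1024];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => break, // connection closed
            Ok(n) => println!("read {} bytes", n),
            // No data available right now; a real program would check a stop
            // flag or make progress on other work here instead of spinning.
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}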
See also:
Tokio
async-std
File IO
There isn't really a good cross-platform way of performing asynchronous file IO. Most implementations spin up a thread pool and perform the blocking operations there, sending the data back over something that is non-blocking.
See also:
Tokio
async-std
Queue IO
Perhaps you mean something like an MPSC channel, in which case you'd use tools like try_recv.
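For illustration, a small sketch with std's mpsc channel: the worker keeps checking for messages with try_recv instead of blocking forever on recv, so it can also notice when it should shut down:
use std::sync::mpsc::{self, TryRecvError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || loop {
        match rx.try_recv() {
            Ok(msg) => println!("got message: {}", msg),
            // Nothing queued yet; do other work or sleep briefly instead of
            // blocking on recv().
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(50)),
            // All senders dropped: time to pack up and go home.
            Err(TryRecvError::Disconnected) => break,
        }
    });

    tx.send("hello").unwrap();
    drop(tx); // closing the channel lets the worker exit its loop
    worker.join().unwrap();
}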
See also:
How to terminate or suspend a Rust thread from another thread?
What is the best approach to encapsulate blocking I/O in future-rs?
What does java.lang.Thread.interrupt() do?
