How to terminate a blocking tokio task?

In my application I have a blocking task that synchronously reads messages from a queue and feeds them to a running task.
All of this works fine, but the problem that I'm having is that the process does not terminate correctly, since the queue_reader task does not stop.
I've constructed a small example based on the tokio documentation at: https://docs.rs/tokio/1.20.1/tokio/task/fn.spawn_blocking.html
use tokio::sync::mpsc;
use tokio::task;

#[tokio::main]
async fn main() {
    let (incoming_tx, mut incoming_rx) = mpsc::channel(2);

    // Some blocking task that never ends
    let queue_reader = task::spawn_blocking(move || {
        loop {
            // Stand in for receiving messages from queue
            incoming_tx.blocking_send(5).unwrap();
        }
    });

    let mut acc = 0;
    // Some complex condition that determines whether the job is done
    while acc < 95 {
        tokio::select! {
            Some(v) = incoming_rx.recv() => {
                acc += v;
            }
        }
    }
    assert_eq!(acc, 95);

    println!("Finalizing thread");
    queue_reader.abort(); // This doesn't seem to terminate the queue_reader task
    queue_reader.await.unwrap(); // <-- The process hangs on this task.
    println!("Done");
}
At first I expected that queue_reader.abort() would terminate the task, however it doesn't. My expectation is that tokio can only do this for tasks that use .await internally, because that hands control over to tokio. Is this right?
In order to terminate the queue_reader task I introduced a oneshot channel, over which I signal the termination, as shown in the next snippet.
use tokio::task;
use tokio::sync::{oneshot, mpsc};

#[tokio::main]
async fn main() {
    let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
    // A new channel to communicate when the process must finish.
    let (term_tx, mut term_rx) = oneshot::channel();

    // Some blocking task that never ends
    let queue_reader = task::spawn_blocking(move || {
        // As long as termination is not signalled
        while term_rx.try_recv().is_err() {
            // Stand in for receiving messages from queue
            incoming_tx.blocking_send(5).unwrap();
        }
    });

    let mut acc = 0;
    // Some complex condition that determines whether the job is done
    while acc < 95 {
        tokio::select! {
            Some(v) = incoming_rx.recv() => {
                acc += v;
            }
        }
    }
    assert_eq!(acc, 95);

    // Signal termination
    term_tx.send(()).unwrap();
    println!("Finalizing thread");
    queue_reader.await.unwrap();
    println!("Done");
}
My question is, is this the canonical/best way to do this, or are there better alternatives?

Tokio cannot terminate CPU-bound/blocking tasks.
It is technically possible to kill OS threads, but generally it is not a good idea, as it's expensive to create new threads and it can leave your program in an invalid state. Even if Tokio decided this was something worth implementing, it would severely limit its implementation - it would be forced into a multithreaded model, just to support the possibility that you'd want to kill a blocking task before it's finished.
Your solution is pretty good: give your blocking task the responsibility for terminating itself and provide a way to tell it to do so. If this were part of a library, you could abstract the mechanism away by returning a "handle" to the task with a cancel() method.
Are there better alternatives? Maybe, but that would depend on other factors. Your solution is good and easily extended, for example if you later needed to send different types of signal to the task.
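For illustration, here is a minimal sketch of that "handle" idea, building on the oneshot channel from the question. The TaskHandle type and spawn_cancellable function are hypothetical names, not part of Tokio:

use tokio::sync::{mpsc, oneshot};
use tokio::task::{self, JoinHandle};

// Hypothetical wrapper that owns the termination sender and the join handle.
struct TaskHandle {
    term_tx: oneshot::Sender<()>,
    join: JoinHandle<()>,
}

impl TaskHandle {
    // Signal the blocking task to stop, then wait for it to finish.
    async fn cancel(self) {
        let _ = self.term_tx.send(());
        let _ = self.join.await;
    }
}

// Hypothetical library function: spawn the blocking reader and hand back a handle.
fn spawn_cancellable(incoming_tx: mpsc::Sender<i32>) -> TaskHandle {
    let (term_tx, mut term_rx) = oneshot::channel::<()>();
    let join = task::spawn_blocking(move || {
        while term_rx.try_recv().is_err() {
            // Stand in for receiving messages from the queue
            if incoming_tx.blocking_send(5).is_err() {
                break; // the receiver was dropped, nothing left to do
            }
        }
    });
    TaskHandle { term_tx, join }
}

#[tokio::main]
async fn main() {
    let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
    let handle = spawn_cancellable(incoming_tx);

    let mut acc = 0;
    while acc < 95 {
        if let Some(v) = incoming_rx.recv().await {
            acc += v;
        }
    }

    // Dropping the receiver also unblocks a sender stuck in blocking_send.
    drop(incoming_rx);
    handle.cancel().await;
    println!("Done");
}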

Related

Daemonize scheduled thread pool

I'm trying to create a daemon in Rust which runs a process on a schedule forever:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;

let pool = ScheduledThreadPool::new(1);

pool.execute_at_fixed_rate(
    Duration::new(5, 0),
    Duration::new(5, 0),
    move || do_business_logic(),
);
This stops the threads as soon as processing reaches the end of main(). How can I keep it running forever?
You should not use the loop { } suggested in the comments.
This has drawbacks:
it will burn CPU needlessly
your program will run forever, even if your do_business_logic() panics.
Let's look at the docs of scheduled_thread_pool::execute_at_fixed_rate; under Panics it says:
If the closure panics, it will not be run again.
That's not very helpful if you want your program to notice that do_business_logic has panicked.
So you could create a std::sync::mpsc channel and just wait to receive a value on it.
You move the sender into the closure you hand over to the thread pool.
If your closure panics, the sender will be dropped and the receiver will stop waiting, so you know something happened.
Some working code:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;
use std::sync::mpsc::*;

fn main() {
    let pool = ScheduledThreadPool::new(1);
    let (tx, rx): (Sender<u8>, Receiver<u8>) = channel();
    let killswitch = std::sync::Arc::new(std::sync::Mutex::new(false));
    let killswitch2 = killswitch.clone();

    pool.execute_at_fixed_rate(
        Duration::new(5, 0),
        Duration::new(5, 0),
        move || {
            if *(killswitch2.lock().unwrap()) {
                panic!("gotto go!");
            }
            println!("haha");
            tx.send(1).unwrap();
        }
    );

    for (count, _) in rx.iter().enumerate() {
        println!("got one");
        if count > 0 {
            println!("that's boring, killing it");
            *(killswitch.lock().unwrap()) = true;
        }
    }
    println!("Have a nice day");
}
Clippy yelled at me for using a Mutex around a bool and suggested using an AtomicBool instead, but I think that is a different rabbit hole.
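For completeness, here is a sketch of what that would look like, swapping the Mutex<bool> for an AtomicBool while keeping the same drop-the-sender structure:

use scheduled_thread_pool::ScheduledThreadPool;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
use std::time::Duration;

fn main() {
    let pool = ScheduledThreadPool::new(1);
    let (tx, rx) = channel::<u8>();

    // AtomicBool instead of Mutex<bool>: no locking, just load/store.
    let killswitch = Arc::new(AtomicBool::new(false));
    let killswitch2 = killswitch.clone();

    pool.execute_at_fixed_rate(
        Duration::new(5, 0),
        Duration::new(5, 0),
        move || {
            if killswitch2.load(Ordering::SeqCst) {
                panic!("gotta go!");
            }
            println!("haha");
            tx.send(1).unwrap();
        }
    );

    for (count, _) in rx.iter().enumerate() {
        println!("got one");
        if count > 0 {
            println!("that's boring, killing it");
            killswitch.store(true, Ordering::SeqCst);
        }
    }
    println!("Have a nice day");
}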

How do I spawn a long running Tokio task within another task without blocking the parent task?

I'm trying to construct an object that can manage a feed from a websocket but be able to switch between multiple feeds.
There is a Feed trait:
trait Feed {
    async fn start(&mut self);
    async fn stop(&mut self);
}
There are three structs that implement Feed: A, B, and C.
When start is called, it starts an infinite loop of listening for messages from a websocket and processing each one as it comes in.
I want to implement a FeedManager that maintains a single active feed but can receive commands to switch what feed source it is using.
enum FeedCommand {
    Start(String),
    Stop,
}

struct FeedManager {
    active_feed_handle: tokio::task::JoinHandle,
    controller: mpsc::Receiver<FeedCommand>,
}

impl FeedManager {
    async fn start(&self) {
        while let Some(command) = self.controller.recv().await {
            match command {
                FeedCommand::Start(feed_type) => {
                    // somehow tell the active feed to stop (need channel probably) or kill the task?
                    if feed_type == "A" {
                        // replace active feed task with a new tokio task for consuming feed A
                    } else if feed_type == "B" {
                        // replace active feed task with a new tokio task for consuming feed B
                    } else {
                        // replace active feed task with a new tokio task for consuming feed C
                    }
                }
            }
        }
    }
}
I'm struggling to understand how to manage all of the Tokio tasks properly. FeedManager's core loop is to listen forever for new commands that come in, but it needs to be able to spawn another long-lived task without blocking on it (so it can keep listening for commands).
My first attempt was:
if feed_type == "A" {
    self.active_feed_handle = tokio::spawn(async {
        A::new().start().await;
    });
    self.active_feed_handle.await
}
the .await on the handle would cause the core loop to no longer accept commands, right?
can I omit that last .await and have the task still run?
do I need to clean up the currently active task some way?
You spawn a long-running Tokio task without blocking the parent task by spawning a task; that's a primary reason tasks exist. If you don't .await the task, then you won't wait for it:
use std::time::Duration;
use tokio::{task, time}; // 1.3.0

#[tokio::main]
async fn main() {
    task::spawn(async {
        time::sleep(Duration::from_secs(100)).await;

        eprintln!(
            "You'll likely never see this printed \
             out because the parent task has exited \
             and so has the entire program"
        );
    });
}
See also:
What happens to an async task when it is aborted?
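On the cleanup question: one common pattern is to keep the JoinHandle of the active feed, call abort() on it, and overwrite it with the handle of the newly spawned task. A minimal sketch, with the hypothetical run_feed standing in for the real A/B/C feeds (abort() works here because the loop yields at an .await point):

use tokio::task::JoinHandle;
use tokio::time::{sleep, Duration};

// Hypothetical stand-in for starting one of the feeds (A, B, or C).
async fn run_feed(name: &'static str) {
    loop {
        println!("feed {} tick", name);
        sleep(Duration::from_millis(100)).await;
    }
}

#[tokio::main]
async fn main() {
    // Spawn the first feed without awaiting it and keep its handle around.
    let mut active: JoinHandle<()> = tokio::spawn(run_feed("A"));

    sleep(Duration::from_millis(300)).await;

    // Switching feeds: abort the old task (it is cancelled at its next .await),
    // then store the handle of the replacement.
    active.abort();
    active = tokio::spawn(run_feed("B"));

    sleep(Duration::from_millis(300)).await;
    active.abort();
}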
One way to do this would be to use Tokio's join!() macro, which takes multiple futures and awaits all of them. You could then create multiple futures and join!() them together to await them collectively.
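For reference, a tiny example of what join!-ing two futures looks like; note that join! runs them concurrently within the current task rather than spawning new tasks:

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Both futures make progress concurrently; join! returns once both are done.
    let (a, b) = tokio::join!(
        async {
            sleep(Duration::from_millis(10)).await;
            1
        },
        async {
            sleep(Duration::from_millis(20)).await;
            2
        },
    );
    assert_eq!((a, b), (1, 2));
}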

How can I release a std::io::StdinLock externally? [duplicate]

Coming from Java, I am used to idioms along the lines of
while (true) {
    try {
        someBlockingOperation();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // re-set the interrupted flag
        cleanup(); // whatever is necessary
        break;
    }
}
This works, as far as I know, across the whole JDK for anything that might block, like reading from files, from sockets, from a queue and even for Thread.sleep().
Reading up on how this is done in Rust, I find lots of seemingly special solutions mentioned, like mio and tokio. I also find ErrorKind::Interrupted and tried to trigger this ErrorKind by sending SIGINT to the thread, but the thread seems to die immediately without leaving any (back)trace.
Here is the code I used (note: not very well versed in Rust yet, so it might look a bit strange, but it runs):
use std::io;
use std::io::Read;
use std::thread;

pub fn main() {
    let sub_thread = thread::spawn(|| {
        let mut buffer = [0; 10];
        loop {
            let d = io::stdin().read(&mut buffer);
            println!("{:?}", d);
            let n = d.unwrap();
            if n == 0 {
                break;
            }
            println!("-> {:?}", &buffer[0..n]);
        }
    });

    sub_thread.join().unwrap();
}
By "blocking operations", I mean:
sleep
socket IO
file IO
queue IO (not sure yet where the queues are in Rust)
What would be the respective means to signal to a thread, like Thread.interrupt() in Java, that it's time to pack up and go home?
There is no such thing. Blocking means blocking.
Instead, you deliberately use tools that are non-blocking. That's where libraries like mio, Tokio, or futures come in — they handle the architecture of sticking all of these non-blocking, asynchronous pieces together.
catch (InterruptedException e)
Rust doesn't have exceptions. If you expect to handle a failure case, that's better represented with a Result.
Thread.interrupt()
This doesn't actually do anything beyond setting a flag in the thread that some code may check and then throw an exception for. You could build the same structure yourself. One simple implementation:
use std::{
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
    },
    thread,
    time::Duration,
};

fn main() {
    let please_stop = Arc::new(AtomicBool::new(false));

    let t = thread::spawn({
        let should_i_stop = please_stop.clone();
        move || {
            while !should_i_stop.load(Ordering::SeqCst) {
                thread::sleep(Duration::from_millis(100));
                println!("Sleeping");
            }
        }
    });

    thread::sleep(Duration::from_secs(1));
    please_stop.store(true, Ordering::SeqCst);
    t.join().unwrap();
}
Sleep
No way of interrupting, as far as I know. The documentation even says:
On Unix platforms this function will not return early due to a signal
Socket IO
You put the socket into nonblocking mode using methods like set_nonblocking and then handle ErrorKind::WouldBlock.
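For instance, a minimal sketch using std::net::TcpStream in non-blocking mode; the address and the 50 ms back-off are arbitrary choices for illustration, and the sketch assumes something is already listening on that port:

use std::io::{ErrorKind, Read};
use std::net::TcpStream;
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    stream.set_nonblocking(true)?;

    let mut buf = [0u8; 1024];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => break, // connection closed by the peer
            Ok(n) => println!("read {} bytes", n),
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // Nothing to read right now; check a stop flag, do other work, ...
                thread::sleep(Duration::from_millis(50));
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}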
See also:
Tokio
async-std
File IO
There isn't really a good cross-platform way of performing asynchronous file IO. Most implementations spin up a thread pool and perform blocking operations there, sending the data over something that does non-blocking.
See also:
Tokio
async-std
Queue IO
Perhaps you mean something like an MPSC channel, in which case you'd use tools like try_recv.
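For example, a sketch of polling a standard-library channel with try_recv, so the loop never blocks and can check other conditions between attempts:

use std::sync::mpsc::{self, TryRecvError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        thread::sleep(Duration::from_millis(200));
        tx.send("hello").unwrap();
    });

    loop {
        match rx.try_recv() {
            Ok(msg) => {
                println!("got: {}", msg);
                break;
            }
            Err(TryRecvError::Empty) => {
                // Nothing yet; do other work or check a stop flag here.
                thread::sleep(Duration::from_millis(50));
            }
            Err(TryRecvError::Disconnected) => break, // sender is gone
        }
    }
}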
See also:
How to terminate or suspend a Rust thread from another thread?
What is the best approach to encapsulate blocking I/O in future-rs?
What does java.lang.Thread.interrupt() do?

What's a good detailed explanation of Tokio's simple TCP echo server example (on GitHub and API reference)?

Tokio has the same example of a simple TCP echo server on its:
GitHub main page (https://github.com/tokio-rs/tokio)
API reference main page (https://docs.rs/tokio/0.2.18/tokio/)
However, in both pages, there is no explanation of what's actually going on. Here's the example, slightly modified so that the main function does not return Result<(), Box<dyn std::error::Error>>:
use tokio::net::TcpListener;
use tokio::prelude::*;

#[tokio::main]
async fn main() {
    if let Ok(mut tcp_listener) = TcpListener::bind("127.0.0.1:8080").await {
        while let Ok((mut tcp_stream, _socket_addr)) = tcp_listener.accept().await {
            tokio::spawn(async move {
                let mut buf = [0; 1024];

                // In a loop, read data from the socket and write the data back.
                loop {
                    let n = match tcp_stream.read(&mut buf).await {
                        // socket closed
                        Ok(n) if n == 0 => return,
                        Ok(n) => n,
                        Err(e) => {
                            eprintln!("failed to read from socket; err = {:?}", e);
                            return;
                        }
                    };

                    // Write the data back
                    if let Err(e) = tcp_stream.write_all(&buf[0..n]).await {
                        eprintln!("failed to write to socket; err = {:?}", e);
                        return;
                    }
                }
            });
        }
    }
}
After reading the Tokio documentation (https://tokio.rs/docs/overview/), here's my mental model of this example. A task is spawned for each new TCP connection. And a task is ended whenever a read/write error occurs, or when the client ends the connection (i.e. n == 0 case). Therefore, if there are 20 connected clients at a point in time, there would be 20 spawned tasks. However, under the hood, this is NOT equivalent to spawning 20 threads to handle the connected clients concurrently. As far as I understand, this is basically the problem that asynchronous runtimes are trying to solve. Correct so far?
Next, my mental model is that a tokio scheduler (e.g. the multi-threaded threaded_scheduler which is the default for apps, or the single-threaded basic_scheduler which is the default for tests) will schedule these tasks concurrently on 1-to-N threads. (Side question: for the threaded_scheduler, is N fixed during the app's lifetime? If so, is it equal to num_cpus::get()?). If one task is .awaiting for the read or write_all operations, then the scheduler can use the same thread to perform more work for one of the other 19 tasks. Still correct?
Finally, I'm curious whether the outer code (i.e. the code that is .awaiting for tcp_listener.accept()) is itself a task? Such that in the 20 connected clients example, there aren't really 20 tasks but 21: one to listen for new connections + one per connection. All of these 21 tasks could be scheduled concurrently on one or many threads, depending on the scheduler. In the following example, I wrap the outer code in a tokio::spawn and .await the handle. Is it completely equivalent to the example above?
use tokio::net::TcpListener;
use tokio::prelude::*;

#[tokio::main]
async fn main() {
    let main_task_handle = tokio::spawn(async move {
        if let Ok(mut tcp_listener) = TcpListener::bind("127.0.0.1:8080").await {
            while let Ok((mut tcp_stream, _socket_addr)) = tcp_listener.accept().await {
                tokio::spawn(async move {
                    // ... same as above ...
                });
            }
        }
    });
    main_task_handle.await.unwrap();
}
This answer is a summary of an answer I received on Tokio's Discord from Alice Ryhl. Big thank you!
First of all, indeed, for the multi-threaded scheduler, the number of OS threads is fixed to num_cpus.
Second, Tokio can swap the currently running task at every .await on a per-thread basis.
Third, the main function runs in its own task, which is spawned by the #[tokio::main] macro.
Therefore, for the first code block example, if there are 20 connected clients, there would be 21 tasks: one for the main macro plus one for each of the 20 open TCP streams. For the second code block example, there would be 22 tasks because of the extra outer tokio::spawn, but that extra task is needless and doesn't add any concurrency.
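As an aside, the question's examples target Tokio 0.2 (threaded_scheduler/basic_scheduler); on current Tokio 1.x the default number of worker threads is likewise the number of CPU cores, and it can be overridden by building the runtime by hand instead of using #[tokio::main]. A minimal sketch:

fn main() {
    // Build a multi-threaded runtime with an explicit number of worker threads
    // instead of the default of one per CPU core.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .unwrap();

    runtime.block_on(async {
        println!("running on a runtime with 4 worker threads");
    });
}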

Is it possible to force resume a sleeping thread?

Is it possible to force resume a sleeping thread which has been paused? For example, by calling sleep:
std::thread::sleep(std::time::Duration::from_secs(60 * 20));
I know that I can communicate between threads using std::sync::mpsc but if the thread is asleep, this does not force it to wake up before the time indicated.
I have thought about using std::sync::mpsc and maybe the Builder and .name associated with the thread, but I do not know how to make the thread wake up.
If you want to be woken up by an event, thread::sleep() is not the correct function to use, as it's not supposed to be stopped.
There are other methods of waiting while being able to be woken up by an event (this is usually called blocking). Probably the easiest way is to use a channel together with Receiver::recv_timeout(). Often it's also sufficient to send () through the channel. That way we just communicate a signal, but don't send actual data.
If you don't want to wake up after a specific timeout, but only when a signal arrives, just use Receiver::recv().
Example with timeout:
use std::thread;
use std::sync::mpsc::{self, RecvTimeoutError};
use std::time::Duration;
use std::io;

fn main() {
    let (sender, receiver) = mpsc::channel();

    thread::spawn(move || {
        loop {
            match receiver.recv_timeout(Duration::from_secs(2)) {
                Err(RecvTimeoutError::Timeout) => {
                    println!("Still waiting... I'm bored!");
                    // we'll try later...
                }
                Err(RecvTimeoutError::Disconnected) => {
                    // no point in waiting anymore :'(
                    break;
                }
                Ok(_) => {
                    println!("Finally got a signal! ♥♥♥");
                    // doing work now...
                }
            }
        }
    });

    loop {
        let mut s = String::new();
        io::stdin().read_line(&mut s).expect("reading from stdin failed");
        if s.trim() == "start" {
            sender.send(()).unwrap();
        }
    }
}
Here, the second thread is woken up at least every two seconds (the timeout), but also earlier once something was sent through the channel.
park_timeout allows timed sleeps with wakeups from unpark, but it can also wake up early.
See std::thread module documentation
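A small sketch of the park_timeout/unpark variant mentioned above:

use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        // Sleep for up to 20 minutes, but wake earlier if unparked.
        // park_timeout may also return spuriously, so real code should
        // re-check its condition and park again if needed.
        thread::park_timeout(Duration::from_secs(60 * 20));
        println!("Woke up");
    });

    thread::sleep(Duration::from_millis(100));
    handle.thread().unpark(); // wake the parked thread early
    handle.join().unwrap();
}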
