Why does tokio::select! choose the action with a higher frequency? - rust

I have two asynchronous functions. One of them writes to a channel every second, and the other reads from it:
#[derive(Debug)]
pub struct Message {
    content: String,
    id: i32,
}

impl Message {
    pub fn new(s: String, id: i32) -> Message {
        Message { content: s, id }
    }
}
async fn msg_stream(sender: mpsc::Sender<Message>) {
    loop {
        tokio::time::sleep(Duration::from_secs(1)).await;
        let m = Message::new("abc".to_string(), 1);
        println!("message = {:?}", m);
        if let Err(e) = sender.send(m).await {
            println!("channel is closed,{}", e);
            break;
        }
    }
}
async fn read_stream(mut receiver: mpsc::Receiver<Message>) {
    let (tx, mut rx) = oneshot::channel::<()>();
    loop {
        tokio::select! {
            Err(_) = tokio::time::timeout(Duration::from_secs(3), &mut rx) => {
                println!("time has elapsed");
                break;
            }
            message = receiver.recv() => {
                println!("was receiver message = {:?} ", message)
            }
        }
    }
    println!("END OF STREAM");
}
#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<Message>(8);
    tokio::join!(msg_stream(tx), read_stream(rx));
    println!("end of the program");
}
With these parameters (messages are sent every second and the reader should break after 3 seconds), the timeout message never comes. That is, the function receives messages indefinitely:
...
message = Message { content: "abc", id: 1 }
was receiver message = Some(Message { content: "abc", id: 1 })
message = Message { content: "abc", id: 1 }
was receiver message = Some(Message { content: "abc", id: 1 })
message = Message { content: "abc", id: 1 }
...
But if I change the parameters so that a message is sent every 3 seconds:
tokio::time::sleep(Duration::from_secs(3)).await;
then the output will be like this:
time has elapsed
END OF STREAM
message = Message { content: "abc", id: 1 }
channel is closed,channel closed
end of the program
It is unclear why this is happening. Shouldn't select pick branches randomly? Or is the problem not that, but rather that the timeout message is getting lost?

It is unclear why this is happening. Shouldn't select pick branches randomly?
That's only relevant when multiple branches are ready when select is invoked.
In the first snippet that's never the case: since a message is sent every second, there's always a message ready to read, and the timeout is never hit.
In the second snippet, you have a race between:
1. a 3s async timeout
2. a 3s async timeout + printing to stdout (a blocking IO call) + sending through a channel
The odds that (2) will ever win that race are basically non-existent. It might be possible due to the implementation details of tokio, but I wouldn't like to bet anything of value on ever witnessing it.
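As an aside, note that the timeout in the question is recreated on every loop iteration, so it measures "3 seconds since the last message", not "3 seconds since the stream started". If the intent was a hard stop after 3 seconds total, the timer has to be created once, outside the loop, so select! doesn't restart it. A minimal sketch, reusing the Message type from the question:

use tokio::sync::mpsc;
use tokio::time::{sleep, Duration};

async fn read_stream_deadline(mut receiver: mpsc::Receiver<Message>) {
    // Created once: the deadline keeps running across loop iterations.
    let deadline = sleep(Duration::from_secs(3));
    tokio::pin!(deadline);
    loop {
        tokio::select! {
            _ = &mut deadline => {
                println!("time has elapsed");
                break;
            }
            message = receiver.recv() => {
                match message {
                    Some(m) => println!("was receiver message = {:?}", m),
                    None => break, // channel closed
                }
            }
        }
    }
    println!("END OF STREAM");
}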

Related

No response while writing my first test for a `Quinn` and `Qp2p` based module

I have the following code that uses QP2P for network communication.
impl Broker {
    pub async fn new(config: Config) -> Result<Self, EndpointError> {
        let (main_endpoint, main_incoming, _) = Endpoint::new_peer(
            local_addr(),
            &[],
            config,
        ).await?;
        let mut broker = Self {
            main_endpoint,
            main_incoming,
        };
        broker.on_message();
        Ok(broker)
    }

    async fn on_message(&mut self) -> Result<(), RecvError> {
        // loop over incoming connections
        while let Some((connection, mut incoming_messages)) = self.main_incoming.next().await {
            let src = connection.remote_address();
            // loop over incoming messages
            while let Some(bytes) = incoming_messages.next().await? {
                println!("Received from {:?} --> {:?}", src, bytes);
                println!();
            }
        }
        Ok(())
    }
}
In the same file, I also want to test the above by sending a message and seeing whether on_message will receive it.
#[tokio::test]
async fn basic_usage() -> Result<()> {
    const MSG_HELLO: &str = "HELLO";
    let config = Config {
        idle_timeout: Duration::from_secs(60 * 60).into(), // 1 hour idle timeout.
        ..Default::default()
    };
    let broker = Broker::new(config.clone(), None).await?;
    let (node, mut incoming_conns, _contact) = Endpoint::new_peer(
        SocketAddr::from((Ipv4Addr::LOCALHOST, 0)),
        &[],
        config.clone(),
    ).await?;
    {
        let msg = Bytes::from(MSG_HELLO);
        println!("Sending to {:?} --> {:?}\n", broker.main_endpoint, msg);
        node.connect_to(&broker.main_endpoint.local_addr())
            .await?
            .0
            .send(msg.clone())
            .await?;
    }
    Ok(())
}
What ends up happening is that the broker's println never triggers. Is calling on_message during initialization and expecting it to receive messages correct? If not, how can I write the most basic test of checking whether a message is received, using qp2p endpoints?
I'm not familiar with the frameworks you're using to answer fully, but maybe I can get you pointed in the right direction. I see 2 (likely) issues:
1. Futures don't do anything until polled. You call await on most of your async functions, but you never await or poll() the Future returned by on_message(), so it's basically a no-op and the body of on_message() never runs.
2. I don't think this is structured correctly. Assuming you did await that call, by the time Broker::new() finished in your test, all of on_message() would have completed (meaning it wouldn't pick up later messages).
You may wish to spawn a thread for handling incoming messages. There are probably other ways you can do this with futures by adjusting how you poll them. At the least, you probably want to take the call to on_message() out of Broker::new() and await it after the message is sent in your code, similar to how the tests in qp2p are written:
#[tokio::test(flavor = "multi_thread")]
async fn single_message() -> Result<()> {
    let (peer1, mut peer1_incoming_connections, _) = new_endpoint().await?;
    let peer1_addr = peer1.public_addr();

    let (peer2, _, _) = new_endpoint().await?;
    let peer2_addr = peer2.public_addr();

    // Peer 2 connects and sends a message
    let (connection, _) = peer2.connect_to(&peer1_addr).await?;
    let msg_from_peer2 = random_msg(1024);
    connection.send(msg_from_peer2.clone()).await?;

    // Peer 1 gets an incoming connection
    let mut peer1_incoming_messages = if let Ok(Some((connection, incoming))) =
        peer1_incoming_connections.next().timeout().await
    {
        assert_eq!(connection.remote_address(), peer2_addr);
        incoming
    } else {
        bail!("No incoming connection");
    };

    // Peer 1 gets an incoming message
    if let Ok(message) = peer1_incoming_messages.next().timeout().await {
        assert_eq!(message?, Some(msg_from_peer2));
    } else {
        bail!("No incoming message");
    }

    Ok(())
}
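For the spawning suggestion, here is a sketch of how Broker::new() could be restructured (a hypothetical restructuring, not qp2p's documented API: it moves the incoming-connections stream into a background task and drops the main_incoming field from Broker):

impl Broker {
    pub async fn new(config: Config) -> Result<Self, EndpointError> {
        let (main_endpoint, mut main_incoming, _) =
            Endpoint::new_peer(local_addr(), &[], config).await?;
        // Spawn the receive loop so it is actually polled; new() returns
        // immediately while messages keep being handled in the background.
        tokio::spawn(async move {
            while let Some((connection, mut incoming_messages)) = main_incoming.next().await {
                let src = connection.remote_address();
                while let Ok(Some(bytes)) = incoming_messages.next().await {
                    println!("Received from {:?} --> {:?}", src, bytes);
                }
            }
        });
        Ok(Self { main_endpoint })
    }
}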

Tokio channel sends, but doesn't receive

TL;DR: I'm trying to have an ID'd background thread that is controlled via that ID and web calls, and the background thread doesn't seem to be getting the message through any of the channel types I've tried.
I've tried both the std channels as well as tokio's, and of those I've tried all but the watch type from tokio. All have the same result, which probably means that I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
    hash_map::Entry::{Occupied, Vacant},
    HashMap,
};
use std::sync::Arc;
use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};

#[derive(Default)]
pub struct Switcher {
    pub handle: Option<JoinHandle<bool>>,
    pub pipeline_end_tx: Option<UnboundedSender<String>>,
}

impl Switcher {
    pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
        self.pipeline_end_tx = Some(tx);
    }
    pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
        self.handle = Some(handle);
    }
}

const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;

type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;

#[tokio::main]
async fn main() {
    let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));

    let session_create = warp::post()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .then(|pipelines: RunningPipelines| async move {
            println!("session requested OK!");
            let id = Uuid::new_v4();
            let mut switcher = Switcher::default();

            let (tx, mut rx) = mpsc::unbounded_channel::<String>();
            switcher.set_sender(tx);
            let t = tokio::spawn(async move {
                println!("Background going...");
                // This would be something processing in the background until it received the end signal
                match rx.recv().await {
                    Some(v) => {
                        println!(
                            "Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
                            v
                        );
                    }
                    None => println!("Error receiving end signal:"),
                }
                println!("ABORTING HANDLE");
                true
            });

            let ret = HashMap::from([("session_id", id.to_string())]);
            switcher.set_handle(t);
            {
                pipelines
                    .write()
                    .await
                    .insert(id.to_string(), Arc::new(RwLock::new(switcher)));
            }
            Ok(warp::reply::json(&ret))
        });

    let session_end = warp::delete()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .and(warp::query::<HashMap<String, String>>())
        .then(
            |pipelines: RunningPipelines, p: HashMap<String, String>| async move {
                println!("session end requested OK!: {:?}", p);
                match p.get("session_id") {
                    None => Ok(warp::reply::with_status(
                        "Please specify session to end",
                        http::StatusCode::BAD_REQUEST,
                    )),
                    Some(id) => {
                        let mut pipe = pipelines.write().await;
                        match pipe.entry(String::from(id)) {
                            Occupied(handle) => {
                                println!("occupied");
                                let (k, v) = handle.remove_entry();
                                drop(pipe);
                                println!("removed from hashmap, key:{}", k);
                                let s = v.write().await;
                                if let Some(h) = &s.handle {
                                    if let Some(tx) = &s.pipeline_end_tx {
                                        match tx.send("goodbye".to_string()) {
                                            Ok(res) => {
                                                println!(
                                                    "sent end message|{:?}| to fpipeline: {}",
                                                    res, id
                                                );
                                                // Added this to try to get it to at least Error on the other side
                                                drop(tx);
                                            }
                                            Err(err) => println!(
                                                "ERROR sending end message to pipeline({}):{}",
                                                id, err
                                            ),
                                        };
                                    } else {
                                        println!("no sender channel found for pipeline: {}", id);
                                    };
                                    h.abort();
                                } else {
                                    println!(
                                        "no luck finding the value in handle in the switcher: {}",
                                        id
                                    );
                                };
                            }
                            Vacant(_) => {
                                println!("no luck finding the handle in the pipelines: {}", id)
                            }
                        };
                        Ok(warp::reply::with_status("done", http::StatusCode::OK))
                    }
                }
            },
        );

    let routes = session_create
        .or(session_end)
        .recover(handle_rejection)
        .with(warp::cors().allow_any_origin());
    println!("starting server...");
    warp::serve(routes).run((ADDR, PORT)).await;
}

async fn handle_rejection(
    err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
    Ok(warp::reply::json(&format!("{:?}", err)))
}

fn with_pipelines(
    pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
    warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background task starts (and doesn't end) when the "create" request is made, but when the "end" request is made, everything appears to complete successfully on the web side while the background task never receives the message. As I've said, I've tried all the different channel types and moved things around to get it into this configuration, i.e. flattened and made thread-safe as much as I could, or at least as much as I could think of. I'm greener than I would like in Rust, so any help would be VERY appreciated!
I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not have time to process the message, as the abort takes priority.
What you need is to join the task, not abort it.
Curiously, tokio task handles do not have a join() method; instead you await the handle itself. But for that you need to own the handle, so first you have to extract it from the Switcher:
let mut s = v.write().await;
// steal the task handle
if let Some(h) = s.handle.take() {
    //...
    tx.send("goodbye".to_string());
    //...
    // join the task
    h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. I'm just unwrapping (and thus panicking) in the code above, but you may want to do something different.
Or... you could simply not wait for the task at all. In tokio, if you drop a task handle, the task is detached: it keeps running and will finish when it finishes.
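Putting it together, the end-of-session branch could look roughly like this; a sketch reusing the question's Switcher, which also takes care to release the write lock before awaiting:

// Take ownership of the handle and the sender, then drop the guard
// so the lock isn't held across the await below.
let (handle, sender) = {
    let mut s = v.write().await;
    (s.handle.take(), s.pipeline_end_tx.take())
};
if let (Some(h), Some(tx)) = (handle, sender) {
    if tx.send("goodbye".to_string()).is_ok() {
        // Join instead of abort: the task gets to process the message
        // and run to completion.
        match h.await {
            Ok(result) => println!("pipeline finished: {}", result),
            Err(e) => println!("pipeline panicked or was aborted: {}", e),
        }
    }
}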

Worker threads send many messages through a channel to main but only the first one is delivered

I've been trying to extend the thread pool example from the Multi-Threaded Web Server chapter in The Book. The original example works fine and dispatches messages to workers properly through the mpsc channel (ingress), but now I want to return values (strings) from the worker threads through another mpsc channel (egress). Somehow the egress channel delivers only one message instead of 10. egress_tx.send() appears to be executed 10 times, but egress_rx.recv() gives me one message only and then the program finishes (i.e. no deadlocks etc.). The worker threads are terminated properly in the Drop trait implementation (that code is not shown). I'd appreciate any suggestions about debugging such a problem: putting a breakpoint at recv() and trying to find something meaningful in its internals hasn't helped much.
type Job = Box<dyn FnOnce(usize) -> String + Send + 'static>;

enum Message {
    Run(Job),
    Halt,
}

struct Worker {
    id: usize,
    thread: Option<thread::JoinHandle<()>>,
}

pub struct ThreadPool {
    workers: Vec<Worker>,
    ingress_tx: Sender<Message>,
    pub egress_rx: Receiver<String>,
}

impl Worker {
    fn new(id: usize, rx: Arc<Mutex<mpsc::Receiver<Message>>>, tx: mpsc::Sender<String>) -> Worker {
        let thread = thread::spawn(move || loop {
            let msg = rx.lock().unwrap().recv().unwrap();
            match msg {
                Message::Run(job) => {
                    let s = job(id);
                    println!("Sending \"{}\"", s);
                    tx.send(s).unwrap();
                }
                Message::Halt => break,
            }
        });
        Worker { id, thread: Some(thread) }
    }
}

impl ThreadPool {
    pub fn new(size: usize) -> Result<ThreadPool, ThreadPoolError> {
        if size == 0 {
            return Err(ThreadPoolError::ZeroSizedPool);
        }
        let (ingress_tx, ingress_rx) = mpsc::channel();
        let ingress_rx = Arc::new(Mutex::new(ingress_rx));
        let (egress_tx, egress_rx) = mpsc::channel();
        let mut workers = Vec::with_capacity(size);
        for id in 0..size {
            workers.push(Worker::new(id, ingress_rx.clone(), egress_tx.clone()));
        }
        Ok(ThreadPool { workers, ingress_tx, egress_rx })
    }

    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce(usize) -> String + Send + 'static,
    {
        let j = Box::new(f);
        self.ingress_tx.send(Message::Run(j)).unwrap();
    }
}

fn run_me(id: usize, i: usize) -> String {
    format!("Worker {} is processing tile {}...", id, i)
}

#[cfg(test)]
mod threadpool_tests {
    use super::*;

    #[test]
    fn tp_test() {
        let tpool = ThreadPool::new(4).expect("Cannot create threadpool");
        for i in 0..10 {
            let closure = move |worker_id| run_me(worker_id, i);
            tpool.execute(closure);
        }
        for s in tpool.egress_rx.recv() {
            println!("{}", s);
        }
    }
}
And the output is:
Sending "Worker 0 is processing tile 0..."
Sending "Worker 0 is processing tile 2..."
Sending "Worker 3 is processing tile 1..."
Sending "Worker 3 is processing tile 4..."
Sending "Worker 2 is processing tile 3..."
Sending "Worker 2 is processing tile 6..."
Sending "Worker 1 is processing tile 5..."
Sending "Worker 0 is processing tile 7..."
Sending "Worker 0 is processing tile 9..."
Sending "Worker 3 is processing tile 8..."
Receiving "Worker 0 is processing tile 0..."
Process finished with exit code 0
In your code, you have for s in tpool.egress_rx.recv(), which isn't doing quite what you want. Instead of iterating over the values received by the channel, you're receiving one element (wrapped in a Result) and then iterating over that, since Result implements IntoIterator to iterate over the success value (or nothing, if it contains an error).
Simply changing this to for s in tpool.egress_rx should fix it, since channels also implement IntoIterator.
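One caveat worth adding: iterating over a receiver only finishes once every sender has been dropped, and each worker still holds a clone of egress_tx until it halts. If the number of jobs is known, as in the test above, receiving exactly that many results sidesteps the question of sender lifetimes; a sketch:

// Receive exactly as many results as jobs submitted, so the loop
// terminates even while the workers (and their sender clones) are alive.
for _ in 0..10 {
    let s = tpool.egress_rx.recv().expect("a worker hung up");
    println!("{}", s);
}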

Unable to use asynchronous actors

I'm trying to use the actors as documented in the actix documentation. But even the doc example is not working for me. I tried the following code, which compiles but does not print the message "Received fibo message":
use actix::prelude::*;

// #[derive(Message)]
// #[rtype(result = "Result<u64, ()>")]
// struct Fibonacci(pub u32);

struct Fibonacci(pub u32);

impl Message for Fibonacci {
    type Result = Result<u64, ()>;
}

struct SyncActor;

impl Actor for SyncActor {
    // It's important to note that you use "SyncContext" here instead of "Context".
    type Context = SyncContext<Self>;
}

impl Handler<Fibonacci> for SyncActor {
    type Result = Result<u64, ()>;

    fn handle(&mut self, msg: Fibonacci, _: &mut Self::Context) -> Self::Result {
        println!("Received fibo message");
        if msg.0 == 0 {
            Err(())
        } else if msg.0 == 1 {
            Ok(1)
        } else {
            let mut i = 0;
            let mut sum = 0;
            let mut last = 0;
            let mut curr = 1;
            while i < msg.0 - 1 {
                sum = last + curr;
                last = curr;
                curr = sum;
                i += 1;
            }
            Ok(sum)
        }
    }
}

fn main() {
    System::new().block_on(async {
        // Start the SyncArbiter with 2 threads, and receive the address of the Actor pool.
        let addr = SyncArbiter::start(2, || SyncActor);
        // send 5 messages
        for n in 5..10 {
            // As there are 2 threads, there are at least 2 messages always being processed
            // concurrently by the SyncActor.
            println!("Sending fibo message");
            addr.do_send(Fibonacci(n));
        }
    });
}
This program displays the following 5 times:
Sending fibo message
Two remarks: first, I'm unable to use the rtype macro, so I implement Message myself. Second, the line addr.do_send(Fibonacci(n)) seems to not send anything to my actor. However, if I use addr.send(Fibonacci(n)).await; my message gets sent and received on the actor side. But since I'm awaiting the send function, it processes the messages synchronously instead of using the 2 threads I have theoretically defined.
I also tried to wait with a thread::sleep after my main loop but the messages were not arriving either.
I might be misunderstanding something but it seems strange to me.
Cargo.toml file:
[dependencies]
actix = "0.11.1"
actix-rt = "2.2.0"
I finally managed to make it work, though I can't understand exactly why. Simply using tokio to wait for a Ctrl-C made it possible for me to call do_send/try_send and work in parallel.
fn main() {
    System::new().block_on(async {
        // Start the SyncArbiter with 4 threads, and receive the address of the Actor pool.
        let addr = SyncArbiter::start(4, || SyncActor);
        // send 15 messages
        for n in 5..20 {
            // As there are 4 threads, there are at least 4 messages always being processed
            // concurrently by the SyncActor.
            println!("Sending fibo message");
            addr.do_send(Fibonacci(n));
        }
        // This did not work
        //thread::spawn(move || {
        //    thread::sleep(Duration::from_secs_f32(10f32));
        //}).join();
        // This made it work
        tokio::signal::ctrl_c().await.unwrap();
        println!("Ctrl-C received, shutting down");
        System::current().stop();
    });
}
You don't have to use the tokio crate explicitly here. In your loop, just change the last line to addr.send(Fibonacci(n)).await.unwrap(). Method send returns a future, and it must be awaited to resolve.
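If the worry is losing parallelism by awaiting each send in turn, one option (a sketch, reusing the SyncActor above) is to fire off all the requests first and await the responses afterwards; actix enqueues the message when send is called (deferring only if the mailbox is full), so the arbiter threads can work on several messages at once:

fn main() {
    System::new().block_on(async {
        let addr = SyncArbiter::start(2, || SyncActor);
        // Fire off every request before awaiting any response; each send
        // enqueues the message immediately and returns a future.
        let requests: Vec<_> = (5..10).map(|n| addr.send(Fibonacci(n))).collect();
        for req in requests {
            println!("result = {:?}", req.await);
        }
    });
}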

Read stdin triggered by key event without dropping first letter

Context
I am working on a pomodoro command line app written in Rust. Most of it works well, but now I want to edit the text of a pomodoro item in the database. All of the actions in the app are triggered by keystrokes: pausing/resuming, quitting, etc., as well as editing the text.
Now I want to read the text from stdin, but the key events are sourced from stdin as well, on a different thread. I came up with using stdin.lock(), which works almost fine.
The problem
How can I read a line from stdin in the main thread without dropping the first letter, given that the event listener in its own thread is triggered before the lock in the main thread is acquired?
expected behaviour:
press t => print Reading from stdin!
type abc<enter> => print You typed: Some("abc")
actual behaviour:
press t => print Reading from stdin!
type abc<enter> => print You typed: Some("bc")
Minimal non-working example
Here is an example that shows the described behaviour:
use failure;
use std::io::{stdin, stdout};
use std::sync::mpsc;
use std::thread;
use termion::event::Key;
use termion::input::TermRead;
use termion::raw::IntoRawMode;
use tui::backend::TermionBackend;
use tui::Terminal;

pub enum Event {
    Input(Key),
}

#[allow(dead_code)]
pub struct Events {
    rx: mpsc::Receiver<Event>,
    input_handle: thread::JoinHandle<()>,
}

impl Events {
    pub fn new() -> Events {
        let (tx, rx) = mpsc::channel();
        let input_handle = {
            let tx = tx.clone();
            thread::spawn(move || {
                let stdin = stdin();
                for evt in stdin.keys() {
                    match evt {
                        Ok(key) => {
                            if let Err(_) = tx.send(Event::Input(key)) {
                                return;
                            }
                        }
                        Err(_) => {}
                    }
                }
            })
        };
        Events { rx, input_handle }
    }

    pub fn next(&self) -> Result<Event, mpsc::RecvError> {
        self.rx.recv()
    }
}

pub fn key_handler(key: Key) -> bool {
    match key {
        Key::Char('t') => {
            println!("Reading from stdin!");
            let stdin = stdin();
            let mut handle = stdin.lock();
            let input = handle.read_line().unwrap();
            println!("You typed: {:?}", input);
        }
        _ => {
            println!("no thing!");
        }
    };
    key == Key::Char('q')
}

fn main() -> Result<(), failure::Error> {
    let stdout = stdout().into_raw_mode()?;
    let backend = TermionBackend::new(stdout);
    let mut terminal = Terminal::new(backend)?;
    terminal.clear()?;
    terminal.hide_cursor()?;
    let events = Events::new();
    loop {
        match events.next()? {
            Event::Input(key) => {
                if key_handler(key) {
                    break;
                }
            }
        }
    }
    terminal.clear()?;
    terminal.show_cursor()?;
    Ok(())
}
Update
Cargo.toml
[package]
name = "mnwe"
version = "1.1.0"
edition = "2018"
autotests = false
[[bin]]
bench = false
path = "app/main.rs"
name = "mnwe"
[dependencies]
failure = "0.1"
termion = "1.5.3"
tui = "0.7"
The problem, as correctly identified, is the race on stdin().lock() between stdin.keys() in the events thread and stdin.lock() in key_handler (a race which the events thread tends to win, eating one key).
For the sake of the bounty, I see four possible approaches:
At easiest, you can avoid having threads at all, and instead regularly poll for new input with termion::async_stdin(). (It's what I ended up doing for my own tui application. In the end, you're likely to be polling the event receiver anyway, so why not poll stdin directly; a minimal sketch follows at the end of this answer.)
If your problem allows it, you could do the stdin reading directly on the event thread. Instead of sending key events over the channel, you would send something you could call "user commands" over the channel:
// Channel data:
pub enum Command {
    StdinInput(String),
    Quit,
    // Add more commands
    Unknown(Key),
}

// Send side
thread::spawn(move || {
    for evt in stdin().keys() {
        match evt {
            Ok(key) => {
                let cmd = match key {
                    Key::Char('t') => {
                        println!("Reading from stdin!");
                        let input = stdin().lock().read_line().unwrap();
                        Command::StdinInput(input.unwrap_or(String::new()))
                    }
                    Key::Char('q') => Command::Quit,
                    _ => Command::Unknown(key),
                };
                if let Err(_) = tx.send(cmd) {
                    return;
                }
            }
            Err(_) => {}
        }
    }
})

// Receive side
loop {
    match events.next()? {
        Command::StdinInput(input) => println!("You typed: {}", input),
        Command::Quit => break,
        Command::Unknown(k) => println!("no thing: {:?}", k),
    }
}
If you absolutely must access stdin from two threads, I would not recommend using a CondVar, but rather passing the sender of another channel through the event channel. Why? Because it's much harder to get wrong. Any channel will do, but I think oneshot::channel() is the most suitable here:
// Channel data
pub enum Event {
    Input(Key, oneshot::Sender<()>),
}

// Send side
for evt in stdin.keys() {
    let (shot_send, shot_recv) = oneshot::channel();
    match evt {
        Ok(key) => {
            if let Err(_) = tx.send(Event::Input(key, shot_send)) {
                return;
            }
            shot_recv.recv().ok();
        }
        Err(_) => {}
    }
}

// Receive side
loop {
    match events.next()? {
        Event::Input(key, done) => {
            match key {
                Key::Char('t') => {
                    println!("Reading from stdin!");
                    let stdin = stdin();
                    let mut handle = stdin.lock();
                    let input = handle.read_line().unwrap();
                    println!("You typed: {:?}", input);
                    // It doesn't really matter whether we send anything,
                    // but using the channel here avoids mean surprises about when it gets dropped
                    done.send(()).ok();
                }
                Key::Char('q') => break,
                _ => println!("no thing!"),
            };
        }
    }
}
You could also skip stdin().lock().read_line() entirely and reassemble the user's input line from the keystroke events. I wouldn't do that.
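For completeness, a minimal sketch of the first approach (assuming termion's async_stdin(), with raw mode set up as in the question): poll the key iterator in the main loop instead of a dedicated input thread, so nothing ever competes for stdin.

use std::thread;
use std::time::Duration;
use termion::async_stdin;
use termion::event::Key;
use termion::input::TermRead;

fn main() {
    // async_stdin() does not block: the iterator yields None when no
    // key is pending, instead of waiting for input.
    let mut keys = async_stdin().keys();
    loop {
        match keys.next() {
            Some(Ok(Key::Char('q'))) => break,
            Some(Ok(key)) => println!("key: {:?}", key),
            _ => {
                // Nothing pending: do other work, then poll again.
                thread::sleep(Duration::from_millis(50));
            }
        }
    }
}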
