I am learning to use Rust and making my first small program with tokio.
I have async functions for sending and receiving messages using tokio::sync::mpsc:
The sender:
async fn msg_stream(sender: mpsc::Sender<Message>) {
    let is = sender.is_closed();
    println!("it is closed : {}", is);
    loop {
        tokio::time::sleep(Duration::from_secs(1)).await;
        let m = Message::new(get_random_string(), get_random_number());
        println!("message = {:?}", m);
        let is = sender.is_closed();
        println!("it is closed : {}", is);
        if let Err(e) = sender.send(m).await {
            println!("channel was closed,{}", e);
        }
    }
}
The receiver:
async fn read_stream(mut receiver: mpsc::Receiver<Message>) {
    let (_, mut rx) = oneshot::channel::<()>();
    loop {
        tokio::select! {
            _ = tokio::time::timeout(Duration::from_secs(10), &mut rx) => {
                return;
            }
            message = receiver.recv() => {
                println!("was receiver message = {:?} ", message)
            }
        }
    }
}
Now I create the channel in main and pass its two halves to these functions:
#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<Message>(8);
    tokio::join!(msg_stream(tx), read_stream(rx));
}
Now when I run it I get an error:
channel was closed,channel closed
Also, the is_closed checks show that at the start of the function the channel is still open, but by the first check inside the loop it is already closed.
it is closed : false
message = Message { content: "RIKiew", id: 96 }
it is closed : true
I can't figure out what's wrong. As I understand it, this can happen if rx gets dropped, but I don't see where that happens here. I would be glad of any help figuring out what is wrong.
Here's a minimal reproducible example of your problem:
use std::{
sync::atomic::{AtomicU32, Ordering},
time::Duration,
};
use tokio::sync::{mpsc, oneshot};
#[derive(Debug)]
struct Message {
number: u32,
}
impl Message {
fn new(number: u32) -> Self {
Self { number }
}
}
fn get_random_number() -> u32 {
static NUM: AtomicU32 = AtomicU32::new(1);
NUM.fetch_add(1, Ordering::Relaxed)
}
async fn msg_stream(sender: mpsc::Sender<Message>) {
let is = sender.is_closed();
println!("it is closed : {}", is);
loop {
tokio::time::sleep(Duration::from_secs(1)).await;
let m = Message::new(get_random_number());
println!("message = {:?}", m);
let is = sender.is_closed();
println!("it is closed : {}", is);
if let Err(e) = sender.send(m).await {
println!("channel was closed,{}", e);
}
}
}
async fn read_stream(mut receiver: mpsc::Receiver<Message>) {
let (_, mut rx) = oneshot::channel::<()>();
loop {
tokio::select! {
_ = tokio::time::timeout(Duration::from_secs(10), &mut rx) => {
println!("STOP RECEIVING");
return;
}
message = receiver.recv() =>{
println!("was receiver message = {:?} ", message)
}
}
}
}
#[tokio::main]
async fn main() {
let (tx, rx) = mpsc::channel::<Message>(8);
tokio::join!(msg_stream(tx), read_stream(rx));
}
Note the line where I added STOP RECEIVING.
The output is:
it is closed : false
STOP RECEIVING
message = Message { number: 1 }
it is closed : true
channel was closed,channel closed
...
So why does this happen?
Let's focus on this function:
async fn read_stream(mut receiver: mpsc::Receiver<Message>) {
let (_, mut rx) = oneshot::channel::<()>();
loop {
tokio::select! {
_ = tokio::time::timeout(Duration::from_secs(10), &mut rx) => {
println!("STOP RECEIVING");
return;
}
message = receiver.recv() =>{
println!("was receiver message = {:?} ", message)
}
}
}
}
A few explanations of the concepts used in this function:
tokio::select! runs the code of whichever branch's future finishes first, and cancels the other branches
tokio::time::timeout resolves when its given future (here &mut rx) completes, or when the timeout elapses
So the question is: why does it jump into the "STOP RECEIVING" branch? The 10 seconds are definitely not over yet.
That means that &mut rx completed. And that is exactly what happens, because a oneshot receiver completes for one of two reasons:
It receives a value
Its sender got dropped
And because you immediately drop the sender (by binding it to _), the receiver resolves immediately.
I'm unsure how to help you beyond pointing out the program flow, because it isn't clear what you are trying to achieve with the oneshot. If you intend it for cancellation purposes, just keep the sender around and this won't happen. You can achieve that by giving it a name.
The thing you might have stumbled over here is that _ is not a variable name. It is a wildcard pattern that doesn't bind the value at all, so the value gets dropped immediately. _rx would be a proper variable name (the leading underscore merely silences the unused-variable warning).
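Here is a stripped-down sketch of just that behaviour (separate from your program), showing that binding the oneshot sender to _ makes the receiver resolve right away with an error:
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    // Binding the sender to `_` drops it immediately...
    let (_, rx) = oneshot::channel::<()>();
    // ...so awaiting the receiver does not wait for a value;
    // it resolves right away with a RecvError.
    println!("{:?}", rx.await);
}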
Here is one possible working version:
use std::{
sync::atomic::{AtomicU32, Ordering},
time::Duration,
};
use tokio::sync::{mpsc, oneshot};
#[derive(Debug)]
struct Message {
number: u32,
}
impl Message {
fn new(number: u32) -> Self {
Self { number }
}
}
fn get_random_number() -> u32 {
static NUM: AtomicU32 = AtomicU32::new(1);
NUM.fetch_add(1, Ordering::Relaxed)
}
async fn msg_stream(sender: mpsc::Sender<Message>) {
let is = sender.is_closed();
println!("it is closed : {}", is);
loop {
tokio::time::sleep(Duration::from_secs(1)).await;
let m = Message::new(get_random_number());
println!("message = {:?}", m);
let is = sender.is_closed();
println!("it is closed : {}", is);
if let Err(e) = sender.send(m).await {
println!("channel was closed,{}", e);
}
}
}
async fn read_stream(mut receiver: mpsc::Receiver<Message>) {
let (_tx, mut rx) = oneshot::channel::<()>();
loop {
tokio::select! {
_ = tokio::time::timeout(Duration::from_secs(10), &mut rx) => {
println!("STOP RECEIVING");
return;
}
message = receiver.recv() =>{
println!("was receiver message = {:?} ", message)
}
}
}
}
#[tokio::main]
async fn main() {
let (tx, rx) = mpsc::channel::<Message>(8);
tokio::join!(msg_stream(tx), read_stream(rx));
}
it is closed : false
message = Message { number: 1 }
it is closed : false
was receiver message = Some(Message { number: 1 })
message = Message { number: 2 }
it is closed : false
was receiver message = Some(Message { number: 2 })
message = Message { number: 3 }
it is closed : false
was receiver message = Some(Message { number: 3 })
message = Message { number: 4 }
it is closed : false
was receiver message = Some(Message { number: 4 })
message = Message { number: 5 }
it is closed : false
was receiver message = Some(Message { number: 5 })
...
Related
The test code below considers a situation in which there are three different threads.
Each thread has to do certain asynchronous tasks that may take some time to finish.
This is "simulated" in the code below with a sleep.
On top of that, two of the threads collect information that they have to send to the third one for further processing. This is done using mpsc channels.
Because this information comes from outside the Rust application and is out of our control, the threads may get interrupted. This is emulated by generating a random number, and the loop in each thread breaks when that happens.
What I'm trying to achieve is a system in which whenever one of the threads has an error (simulated with the random number = 9), every other thread is cancelled too.
use std::sync::mpsc::channel;
use std::sync::mpsc::{Sender, Receiver, TryRecvError};
use std::thread::sleep;
use tokio::time::Duration;
use rand::distributions::{Uniform, Distribution};
#[tokio::main]
async fn main() {
execution_cycle().await;
}
async fn execution_cycle() {
let (tx_first, rx_first) = channel::<Message>();
let (tx_second, rx_second) = channel::<Message>();
let handle_sender_first = tokio::spawn(sender_thread(tx_first));
let handle_sender_second = tokio::spawn(sender_thread(tx_second));
let handle_receiver = tokio::spawn(receiver_thread(rx_first, rx_second));
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..10);
let mut cancel_from_cycle = rng_generator.sample(&mut thread_rng);
while !&handle_sender_first.is_finished() && !&handle_sender_second.is_finished() && !&handle_receiver.is_finished() {
cancel_from_cycle = rng_generator.sample(&mut thread_rng);
if (cancel_from_cycle == 9) {
println!("Aborting from the execution cycle.");
handle_receiver.abort();
handle_sender_first.abort();
handle_sender_second.abort();
}
}
if handle_sender_first.is_finished() {
println!("handle_sender_first finished.");
} else {
println!("handle_sender_first ongoing.");
}
if handle_sender_second.is_finished() {
println!("handle_sender_second finished.");
} else {
println!("handle_sender_second ongoing.");
}
if handle_receiver.is_finished() {
println!("handle_receiver finished.");
} else {
println!("handle_receiver ongoing.");
}
}
async fn sender_thread(tx: Sender<Message>) {
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..20);
let mut random_id = rng_generator.sample(&mut thread_rng);
while random_id != 9 {
let msg = Message {
id: random_id,
text: "hello".to_owned()
};
println!("Sending message {}.", msg.id);
random_id = rng_generator.sample(&mut thread_rng);
println!("Generated id {}.", random_id);
let result = tx.send(msg);
match result {
Ok(res) => {},
Err(error) => {
println!("Sending error {:?}", error);
random_id = 9;
}
}
sleep(Duration::from_millis(2000));
}
}
async fn receiver_thread(rx_first: Receiver<Message>, rx_second: Receiver<Message>) {
let mut channel_open_first = true;
let mut channel_open_second = true;
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..15);
let mut random_event = rng_generator.sample(&mut thread_rng);
while channel_open_first && channel_open_second && random_event != 9 {
channel_open_first = receiver_inner(&rx_first);
channel_open_second = receiver_inner(&rx_second);
random_event = rng_generator.sample(&mut thread_rng);
println!("Generated event {}.", random_event);
sleep(Duration::from_millis(800));
}
}
fn receiver_inner(rx: &Receiver<Message>) -> bool {
let value = rx.try_recv();
match value {
Ok(msg) => {
println!("Message {} received: {}", msg.id, msg.text);
},
Err(error) => {
if error != TryRecvError::Empty {
println!("{}", error);
return false;
} else { /* Channel is empty.*/ }
}
}
return true;
}
struct Message {
id: usize,
text: String,
}
In the working example here, it does exactly that, but only from inside the threads. I would like to add a "kill switch" in the execution_cycle() method that cancels all three threads when a certain event takes place (the random number cancel_from_cycle == 9), in the simplest way possible... I tried drop(handler_sender), and also panic!() from execution_cycle(), but the spawned threads keep running, preventing the application from finishing. I also tried handle_receiver.abort() without success.
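To illustrate the kind of cancellation I'm after, here is a minimal sketch (separate from my code above) of JoinHandle::abort cancelling a spawned task at its next .await point:
use std::time::Duration;
use tokio::time::sleep;

#[tokio::main]
async fn main() {
    // A task that loops forever but yields at every sleep().await.
    let worker = tokio::spawn(async {
        loop {
            println!("working...");
            sleep(Duration::from_millis(200)).await;
        }
    });

    sleep(Duration::from_millis(700)).await;
    // abort() cancels the task the next time it reaches an .await point.
    worker.abort();

    // The JoinHandle then resolves with a cancellation error.
    let result = worker.await;
    println!("cancelled: {}", result.unwrap_err().is_cancelled());
}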
How can I achieve the wished result?
I have a server that broadcasts messages to connected clients, but the messages don't get delivered and my test fails.
I'm using the following imports:
use anyhow::Result;
use std::path::{Path, PathBuf};
use std::process::Stdio;
use std::sync::Arc;
use tokio::io::AsyncWriteExt;
use tokio::net::{UnixListener, UnixStream};
use tokio::sync::broadcast::*;
use tokio::sync::Notify;
use tokio::task::JoinHandle;
This is how I set up and start my server:
pub struct Server {
#[allow(dead_code)]
tx: Sender<String>,
rx: Receiver<String>,
address: Arc<PathBuf>,
handle: Option<JoinHandle<Result<()>>>,
abort: Arc<Notify>,
}
impl Server {
pub fn new<P: AsRef<Path>>(address: P) -> Self {
let (tx, rx) = channel::<String>(400);
let address = Arc::new(address.as_ref().to_path_buf());
Self {
address,
handle: None,
tx,
rx,
abort: Arc::new(Notify::new()),
}
}
}
/// Start Server
pub async fn start(server: &mut Server) -> Result<()> {
tokio::fs::remove_file(server.address.as_path()).await.ok();
let listener = UnixListener::bind(server.address.as_path())?;
println!("[Server] Started");
let tx = server.tx.clone();
let abort = server.abort.clone();
server.handle = Some(tokio::spawn(async move {
loop {
let tx = tx.clone();
let abort1 = abort.clone();
tokio::select! {
_ = abort.notified() => break,
Ok((client, _)) = listener.accept() => {
tokio::spawn(async move { handle(client, tx, abort1).await });
}
}
}
println!("[Server] Aborted!");
Ok(())
}));
Ok(())
}
my handle function
/// Handle stream
async fn handle(mut stream: UnixStream, tx: Sender<String>, abort: Arc<Notify>) {
loop {
let mut rx = tx.subscribe();
let abort = abort.clone();
tokio::select! {
_ = abort.notified() => break,
result = rx.recv() => match result {
Ok(output) => {
stream.write_all(output.as_bytes()).await.unwrap();
stream.write(b"\n").await.unwrap();
continue;
}
Err(e) => {
println!("[Server] {e}");
break;
}
}
}
}
stream.write(b"").await.unwrap();
stream.flush().await.unwrap();
}
my connect function
/// Connect to server
async fn connect(address: Arc<PathBuf>, name: String) -> Vec<String> {
use tokio::io::{AsyncBufReadExt, BufReader};
let mut outputs = vec![];
let stream = UnixStream::connect(&*address).await.unwrap();
let mut breader = BufReader::new(stream);
let mut buf = vec![];
loop {
if let Ok(len) = breader.read_until(b'\n', &mut buf).await {
if len == 0 {
break;
} else {
let value = String::from_utf8(buf.clone()).unwrap();
print!("[{name}] {value}");
outputs.push(value)
};
buf.clear();
}
}
println!("[{name}] ENDED");
outputs
}
This is what I feed to the channel and want to have broadcast to all clients:
/// Feed data
pub fn feed(tx: Sender<String>, abort: Arc<Notify>) -> Result<JoinHandle<Result<()>>> {
use tokio::io::*;
use tokio::process::Command;
Ok(tokio::spawn(async move {
let mut child = Command::new("echo")
.args(&["1\n", "2\n", "3\n", "4\n"])
.stdout(Stdio::piped())
.stderr(Stdio::null())
.stdin(Stdio::null())
.spawn()?;
let mut stdout = BufReader::new(child.stdout.take().unwrap()).lines();
loop {
let sender = tx.clone();
tokio::select! {
result = stdout.next_line() => match result {
Err(e) => {
println!("[Server] FAILED to send an output to channel: {e}");
},
Ok(None) => break,
Ok(Some(output)) => {
let output = output.trim().to_string();
println!("[Server] {output}");
if !output.is_empty() {
if let Err(e) = sender.send(output) {
println!("[Server] FAILED to send an output to channel: {e}");
}
}
}
}
}
}
println!("[Server] Process Completed");
abort.notify_waiters();
Ok(())
}))
}
my failing test
#[tokio::test]
async fn test_server() -> Result<()> {
let mut server = Server::new("/tmp/testsock.socket");
start(&mut server).await?;
feed(server.tx.clone(), server.abort.clone()).unwrap();
let address = server.address.clone();
let client1 = connect(address.clone(), "Alpha".into());
let client2 = connect(address.clone(), "Beta".into());
let client3 = connect(address.clone(), "Delta".into());
let client4 = connect(address.clone(), "Gamma".into());
let (c1, c2, c3, c4) = tokio::join!(client1, client2, client3, client4,);
server.handle.unwrap().abort();
assert_eq!(c1.len(), 4, "Alpha");
assert_eq!(c2.len(), 4, "Beta");
assert_eq!(c3.len(), 4, "Delta");
assert_eq!(c4.len(), 4, "Gamma");
println!("ENDED");
Ok(())
}
Logs:
[Server] Started
[Server] 1
[Server] 2
[Server] 3
[Server] 4
[Server]
[Delta] 1
[Gamma] 1
[Alpha] 1
[Beta] 1
[Server] Process Completed
[Server] Aborted!
[Gamma] ENDED
[Alpha] ENDED
[Beta] ENDED
[Delta] ENDED
Well, not an answer, but I just want to suggest using task::spawn to produce a JoinHandle from a function; then, say, your handle could be:
fn handle(mut stream: UnixStream, tx: Sender<String>, abort: Arc<Notify>) -> JoinHandle<()> {
let mut rx = tx.subscribe();
let abort = abort.clone();
task::spawn( async move {
loop {
tokio::select! {
_ = abort.notified() => break,
result = rx.recv() => match result {
Ok(output) => {
stream.write_all(output.as_bytes()).await.unwrap();
stream.write(b"\n").await.unwrap();
continue;
}
Err(e) => {
println!("[Server] {e}");
break;
}
}
}
}
stream.write(b"").await.unwrap();
stream.flush().await.unwrap();
})
}
I haven't tested this, but I see a sort of duplication in the code above: two loops, two select!s, and the abort check done twice.
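As a self-contained toy of the same pattern (a broadcast channel and a Vec standing in for the UnixStream, so it is runnable on its own), the idea is just: subscribe and spawn inside the function, and hand the JoinHandle back to the caller:
use tokio::sync::broadcast;
use tokio::task::JoinHandle;

// Toy version of the suggested pattern: subscribe + spawn inside the
// function and return the JoinHandle to the caller.
fn handle(tx: broadcast::Sender<String>) -> JoinHandle<Vec<String>> {
    let mut rx = tx.subscribe();
    tokio::task::spawn(async move {
        let mut seen = Vec::new();
        // recv() returns Err(Closed) once all senders are gone and the
        // buffered messages have been drained, which ends the loop.
        while let Ok(msg) = rx.recv().await {
            seen.push(msg);
        }
        seen
    })
}

#[tokio::main]
async fn main() {
    let (tx, _) = broadcast::channel(16);
    let worker = handle(tx.clone());
    for i in 1..=4 {
        tx.send(i.to_string()).unwrap();
    }
    drop(tx); // closing the channel lets the worker finish
    println!("{:?}", worker.await.unwrap());
}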
I'd like to read and process messages from two channels, construct another message from them, and send that message via a third channel.
Messages from the two channels are received at different frequencies (as per sleep).
Example: "foo1" and "bar1" are received, so we process them and form "foo1bar1". "foo2" is received ("bar2" will be received in 2sec), so we will process it as "foo2bar1". "foo3" is received, so "foo3bar1" is constructed. When "bar2" is received, then we get "foo4bar2" and so on.
In the current implementation, since the two tasks don't communicate with one another, I cannot do the "fooNbarM" construction.
use std::time::Duration;
use tokio;
use tokio::sync::mpsc::{UnboundedReceiver, UnboundedSender};
use tokio::time::sleep;
use futures::future::join_all;
async fn message_sender(msg: &'static str, foo_tx: UnboundedSender<Result<&str, Box<dyn std::error::Error + Send>>>) {
loop {
match foo_tx.send(Ok(msg)) {
Ok(()) => {
if msg == "foo" {
sleep(Duration::from_millis(1000)).await;
} else {
sleep(Duration::from_millis(3000)).await;
}
}
Err(_) => {
println!("failed to send foo");
break;
}
}
}
}
#[tokio::main]
async fn main() {
let result: Vec<&str> = vec![];
let (foo_tx, mut foo_rx): (
UnboundedSender<Result<&str, Box<dyn std::error::Error + Send>>>,
UnboundedReceiver<Result<&str, Box<dyn std::error::Error + Send>>>,
) = tokio::sync::mpsc::unbounded_channel();
let (bar_tx, mut bar_rx): (
UnboundedSender<Result<&str, Box<dyn std::error::Error + Send>>>,
UnboundedReceiver<Result<&str, Box<dyn std::error::Error + Send>>>,
) = tokio::sync::mpsc::unbounded_channel();
let foo_sender_handle = tokio::spawn(async move {
message_sender("foo", foo_tx).await;
});
let foo_handle = tokio::spawn(async move {
while let Some(v) = foo_rx.recv().await {
println!("{:?}", v);
}
});
let bar_sender_handle = tokio::spawn(async move {
message_sender("bar", bar_tx).await;
});
let bar_handle = tokio::spawn(async move {
while let Some(v) = bar_rx.recv().await {
println!("{:?}", v);
}
});
let handles = vec![foo_sender_handle, foo_handle, bar_sender_handle, bar_handle];
join_all(handles.into_iter()).await;
}
Cargo.toml
[package]
name = "play"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
tokio = { version = "1.16.1", features = ["full"] }
futures = "0.3.21"
Use tokio::select to wait for either channel to become ready:
use futures::future; // 0.3.19
use std::time::Duration;
use tokio::{
sync::mpsc::{self, UnboundedSender},
time,
}; // 1.16.1
async fn message_sender(msg: &'static str, foo_tx: UnboundedSender<String>) {
for count in 0.. {
let message = format!("{msg}{count}");
foo_tx.send(message).unwrap();
if msg == "foo" {
time::sleep(Duration::from_millis(100)).await;
} else {
time::sleep(Duration::from_millis(300)).await;
}
}
}
#[tokio::main]
async fn main() {
let (foo_tx, mut foo_rx) = mpsc::unbounded_channel();
let (bar_tx, mut bar_rx) = mpsc::unbounded_channel();
let foo_sender_handle = tokio::spawn(message_sender("foo", foo_tx));
let bar_sender_handle = tokio::spawn(message_sender("bar", bar_tx));
let receive_handle = tokio::spawn(async move {
let mut foo = None;
let mut bar = None;
loop {
tokio::select! {
f = foo_rx.recv() => foo = f,
b = bar_rx.recv() => bar = b,
}
if let (Some(foo), Some(bar)) = (&foo, &bar) {
println!("{foo}{bar}");
}
}
});
future::join_all([foo_sender_handle, bar_sender_handle, receive_handle]).await;
}
You also have to handle the case where only one message has been received yet, so Option comes in useful.
I implemented the Future below and made a request against it, but it blocks my curl request, and the log shows that poll was only invoked once.
Did I implement anything wrong?
use failure::{format_err, Error};
use futures::{future, Async};
use hyper::rt::Future;
use hyper::service::{service_fn, service_fn_ok};
use hyper::{Body, Method, Request, Response, Server, StatusCode};
use log::{debug, error, info};
use std::{
sync::{Arc, Mutex},
task::Waker,
thread,
};
pub struct TimerFuture {
shared_state: Arc<Mutex<SharedState>>,
}
struct SharedState {
completed: bool,
resp: String,
}
impl Future for TimerFuture {
type Item = Response<Body>;
type Error = hyper::Error;
fn poll(&mut self) -> futures::Poll<Response<Body>, hyper::Error> {
let mut shared_state = self.shared_state.lock().unwrap();
if shared_state.completed {
return Ok(Async::Ready(Response::new(Body::from(
shared_state.resp.clone(),
))));
} else {
return Ok(Async::NotReady);
}
}
}
impl TimerFuture {
pub fn new(instance: String) -> Self {
let shared_state = Arc::new(Mutex::new(SharedState {
completed: false,
resp: String::new(),
}));
let thread_shared_state = shared_state.clone();
thread::spawn(move || {
let res = match request_health(instance) {
Ok(status) => status.clone(),
Err(err) => {
error!("{:?}", err);
format!("{}", err)
}
};
let mut shared_state = thread_shared_state.lock().unwrap();
shared_state.completed = true;
shared_state.resp = res;
});
TimerFuture { shared_state }
}
}
fn request_health(instance_name: String) -> Result<String, Error> {
std::thread::sleep(std::time::Duration::from_secs(1));
Ok("health".to_string())
}
type BoxFut = Box<dyn Future<Item = Response<Body>, Error = hyper::Error> + Send>;
fn serve_health(req: Request<Body>) -> BoxFut {
let mut response = Response::new(Body::empty());
let path = req.uri().path().to_owned();
match (req.method(), path) {
(&Method::GET, path) => {
return Box::new(TimerFuture::new(path.clone()));
}
_ => *response.status_mut() = StatusCode::NOT_FOUND,
}
Box::new(future::ok(response))
}
fn main() {
let endpoint_addr = "0.0.0.0:8080";
match std::thread::spawn(move || {
let addr = endpoint_addr.parse().unwrap();
info!("Server is running on {}", addr);
hyper::rt::run(
Server::bind(&addr)
.serve(move || service_fn(serve_health))
.map_err(|e| eprintln!("server error: {}", e)),
);
})
.join()
{
Ok(e) => e,
Err(e) => println!("{:?}", e),
}
}
After compiling and running this code, a server is listening on port 8080. Calling the server with curl blocks:
curl 127.0.0.1:8080/my-health-scope
Did I implement anything wrong?
Yes, you did not read and follow the documentation for the method you are implementing (emphasis mine):
When a future is not ready yet, the Async::NotReady value will be returned. In this situation the future will also register interest of the current task in the value being produced. This is done by calling task::park to retrieve a handle to the current Task. When the future is then ready to make progress (e.g. it should be polled again) the unpark method is called on the Task.
As a minimal, reproducible example, let's use this:
use futures::{future::Future, Async};
use std::{
mem,
sync::{Arc, Mutex},
thread,
time::Duration,
};
pub struct Timer {
data: Arc<Mutex<String>>,
}
impl Timer {
pub fn new(instance: String) -> Self {
let data = Arc::new(Mutex::new(String::new()));
thread::spawn({
let data = data.clone();
move || {
thread::sleep(Duration::from_secs(1));
*data.lock().unwrap() = instance;
}
});
Timer { data }
}
}
impl Future for Timer {
type Item = String;
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
let mut data = self.data.lock().unwrap();
eprintln!("poll was called");
if data.is_empty() {
Ok(Async::NotReady)
} else {
let data = mem::replace(&mut *data, String::new());
Ok(Async::Ready(data))
}
}
}
fn main() {
let v = Timer::new("Some text".into()).wait();
println!("{:?}", v);
}
It only prints out "poll was called" once.
You can call task::current (previously task::park) in the implementation of Future::poll, save the resulting value, then use the value with Task::notify (previously Task::unpark) whenever the future may be polled again:
use futures::{
future::Future,
task::{self, Task},
Async,
};
use std::{
mem,
sync::{Arc, Mutex},
thread,
time::Duration,
};
pub struct Timer {
data: Arc<Mutex<(String, Option<Task>)>>,
}
impl Timer {
pub fn new(instance: String) -> Self {
let data = Arc::new(Mutex::new((String::new(), None)));
let me = Timer { data };
thread::spawn({
let data = me.data.clone();
move || {
thread::sleep(Duration::from_secs(1));
let mut data = data.lock().unwrap();
data.0 = instance;
if let Some(task) = data.1.take() {
task.notify();
}
}
});
me
}
}
impl Future for Timer {
type Item = String;
type Error = ();
fn poll(&mut self) -> futures::Poll<Self::Item, Self::Error> {
let mut data = self.data.lock().unwrap();
eprintln!("poll was called");
if data.0.is_empty() {
let v = task::current();
data.1 = Some(v);
Ok(Async::NotReady)
} else {
let data = mem::replace(&mut data.0, String::new());
Ok(Async::Ready(data))
}
}
}
fn main() {
let v = Timer::new("Some text".into()).wait();
println!("{:?}", v);
}
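The same idea maps directly onto the modern std::future::Future trait: store the Waker from the Context and call wake when the data is ready. A sketch of that (driving it with futures::executor::block_on from futures 0.3):
use std::{
    future::Future,
    pin::Pin,
    sync::{Arc, Mutex},
    task::{Context, Poll, Waker},
    thread,
    time::Duration,
};

struct Shared {
    data: Option<String>,
    waker: Option<Waker>,
}

pub struct Timer {
    shared: Arc<Mutex<Shared>>,
}

impl Timer {
    pub fn new(instance: String) -> Self {
        let shared = Arc::new(Mutex::new(Shared {
            data: None,
            waker: None,
        }));
        thread::spawn({
            let shared = shared.clone();
            move || {
                thread::sleep(Duration::from_secs(1));
                let mut shared = shared.lock().unwrap();
                shared.data = Some(instance);
                // Wake whichever task polled us last, so it polls again.
                if let Some(waker) = shared.waker.take() {
                    waker.wake();
                }
            }
        });
        Timer { shared }
    }
}

impl Future for Timer {
    type Output = String;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<String> {
        let mut shared = self.shared.lock().unwrap();
        eprintln!("poll was called");
        match shared.data.take() {
            Some(data) => Poll::Ready(data),
            None => {
                // Register the current task's waker before returning Pending.
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

fn main() {
    let v = futures::executor::block_on(Timer::new("Some text".into()));
    println!("{:?}", v);
}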
See also:
Why does Future::select choose the future with a longer sleep period first?
Why is `Future::poll` not called repeatedly after returning `NotReady`?
What is the best approach to encapsulate blocking I/O in future-rs?
I have a Tokio application that should return when an error happens. This is implemented using one-shot channels shared between two tasks. When either task detects an error it signals the channel, which is received by the other task.
However, even after the error-detecting task signals the channel, the other task does not return -- the select! block simply doesn't realize that the channel is signalled. Full code:
#![recursion_limit = "256"]
extern crate futures;
extern crate tokio;
extern crate tokio_net;
extern crate tokio_sync;
use std::io::Write;
use std::net::SocketAddr;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::tcp::split::{TcpStreamReadHalf, TcpStreamWriteHalf};
use tokio::net::TcpStream;
use tokio_sync::oneshot;
use futures::select;
use futures::future::FutureExt;
#[derive(Debug)]
enum AppErr {
CantConnect(std::io::Error),
}
fn main() {
let executor = tokio::runtime::Runtime::new().unwrap();
executor.spawn(async {
match client_task().await {
Ok(()) => {}
Err(err) => {
println!("Error: {:?}", err);
}
}
});
executor.shutdown_on_idle();
}
async fn client_task() -> Result<(), AppErr> {
let addr: SocketAddr = "127.0.0.1:8080".parse().unwrap();
print!("Connecting... ");
let _ = std::io::stdout().flush();
let sock = TcpStream::connect(&addr)
.await
.map_err(AppErr::CantConnect)?;
println!("Connected.");
let (read, write) = sock.split();
let (abort_in_task_snd, abort_in_task_rcv) = oneshot::channel();
let (abort_out_task_snd, abort_out_task_rcv) = oneshot::channel();
tokio::spawn(handle_incoming(read, abort_in_task_rcv, abort_out_task_snd));
tokio::spawn(handle_outgoing(
write,
abort_out_task_rcv,
abort_in_task_snd,
));
Ok(())
}
async fn handle_incoming(
mut conn: TcpStreamReadHalf,
abort_in: oneshot::Receiver<()>,
abort_out: oneshot::Sender<()>,
) {
let mut read_buf: [u8; 1024] = [0; 1024];
let mut abort_in_fused = abort_in.fuse();
loop {
select! {
abort_ret = abort_in_fused => {
// TODO match abort_ret {..}
println!("abort signalled, handle_incoming returning");
return;
},
bytes = conn.read(&mut read_buf).fuse() => {
match bytes {
Err(io_err) => {
println!("IO error when reading input stream: {:?}", io_err);
println!("Aborting");
abort_out.send(()).unwrap();
return;
}
Ok(bytes) => {
if bytes == 0 {
println!("Connection closed from the other end. Aborting.");
abort_out.send(()).unwrap();
return;
}
println!("Read {} bytes: {:?}", bytes, &read_buf[0..bytes]);
}
}
}
}
}
}
async fn handle_outgoing(
mut conn: TcpStreamWriteHalf,
abort_in: oneshot::Receiver<()>,
abort_out: oneshot::Sender<()>,
) {
let mut stdin = tokio::io::stdin();
let mut read_buf: [u8; 1024] = [0; 1024];
let mut abort_in_fused = abort_in.fuse();
loop {
select! {
abort_ret = abort_in_fused => {
println!("Abort signalled, handle_outgoing returning");
return;
}
input = stdin.read(&mut read_buf).fuse() => {
match input {
Err(io_err) => {
println!("IO error when reading stdin: {:?}", io_err);
println!("Aborting");
abort_out.send(()).unwrap();
return;
}
Ok(bytes) => {
if bytes == 0 {
println!("stdin closed, aborting");
abort_out.send(()).unwrap();
return;
}
println!("handle_outgoing read {} bytes", bytes);
match conn.write_all(&read_buf[0..bytes]).await {
Ok(()) => {
},
Err(io_err) => {
println!("Error when sending: {:?}", io_err);
println!("Aborting");
abort_out.send(()).unwrap();
return;
}
}
}
}
},
}
}
}
Dependencies:
futures-preview = { version = "0.3.0-alpha.18", features = ["async-await", "nightly"] }
tokio = "0.2.0-alpha.2"
tokio-net = "0.2.0-alpha.2"
tokio-sync = "0.2.0-alpha.2"
So when the connection is closed on the other side, or there's an error, I
signal the channel:
println!("Connection closed from the other end. Aborting.");
abort_out.send(()).unwrap();
return;
But for some reason the other task never notices:
select! {
// Never runs:
abort_ret = abort_in_fused => {
// TODO match abort_ret {..}
println!("abort signalled, handle_incoming returning");
return;
},
...
}
This can be seen in two ways:
The print in the abort_ret case never runs.
After the connection is closed on the other end, the process prints "Connection closed from the other end. Aborting." but it doesn't return. When I attach gdb I see this backtrace:
...
#14 0x000055f10391ab7e in tokio_executor::enter::Enter::block_on (self=0x7ffc167ba5a0, f=...) at /home/omer/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-executor-0.2.0-alpha.2/src/enter.rs:121
#15 0x000055f103863a28 in tokio::runtime::threadpool::Runtime::shutdown_on_idle (self=...) at /home/omer/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.0-alpha.2/src/runtime/threadpool/mod.rs:219
#16 0x000055f10384c01b in chat0_client::main () at src/chat0_client.rs:36
So it's blocked in Tokio event loop.
In addition to a direct answer, I'm interested in pointers on how to debug this.
Thanks.