Tokio channel sends, but doesn't receive - multithreading

TL;DR: I'm trying to have a background task identified by an ID and controlled via that ID through web calls, but the background task never seems to receive the message, no matter which kind of channel I've tried.
I've tried both the std channels and tokio's, and of tokio's I've tried every flavor except the watch type. All give the same result, which probably means I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
hash_map::Entry::{Occupied, Vacant},
HashMap,
};
use std::sync::Arc;
use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};
#[derive(Default)]
pub struct Switcher {
pub handle: Option<JoinHandle<bool>>,
pub pipeline_end_tx: Option<UnboundedSender<String>>,
}
impl Switcher {
pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
self.pipeline_end_tx = Some(tx);
}
pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
self.handle = Some(handle);
}
}
const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;
type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;
#[tokio::main]
async fn main() {
let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));
let session_create = warp::post()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.then(|pipelines: RunningPipelines| async move {
println!("session requested OK!");
let id = Uuid::new_v4();
let mut switcher = Switcher::default();
let (tx, mut rx) = mpsc::unbounded_channel::<String>();
switcher.set_sender(tx);
let t = tokio::spawn(async move {
println!("Background going...");
//This would be something processing in the background until it received the end signal
match rx.recv().await {
Some(v) => {
println!(
"Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
v
);
}
None => println!("Error receiving end signal:"),
}
println!("ABORTING HANDLE");
true
});
let ret = HashMap::from([("session_id", id.to_string())]);
switcher.set_handle(t);
{
pipelines
.write()
.await
.insert(id.to_string(), Arc::new(RwLock::new(switcher)));
}
Ok(warp::reply::json(&ret))
});
let session_end = warp::delete()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.and(warp::query::<HashMap<String, String>>())
.then(
|pipelines: RunningPipelines, p: HashMap<String, String>| async move {
println!("session end requested OK!: {:?}", p);
match p.get("session_id") {
None => Ok(warp::reply::with_status(
"Please specify session to end",
http::StatusCode::BAD_REQUEST,
)),
Some(id) => {
let mut pipe = pipelines.write().await;
match pipe.entry(String::from(id)) {
Occupied(handle) => {
println!("occupied");
let (k, v) = handle.remove_entry();
drop(pipe);
println!("removed from hashmap, key:{}", k);
let s = v.write().await;
if let Some(h) = &s.handle {
if let Some(tx) = &s.pipeline_end_tx {
match tx.send("goodbye".to_string()) {
Ok(res) => {
println!(
"sent end message|{:?}| to fpipeline: {}",
res, id
);
//Added this to try to get it to at least Error on the other side
drop(tx);
},
Err(err) => println!(
"ERROR sending end message to pipeline({}):{}",
id, err
),
};
} else {
println!("no sender channel found for pipeline: {}", id);
};
h.abort();
} else {
println!(
"no luck finding the value in handle in the switcher: {}",
id
);
};
}
Vacant(_) => {
println!("no luck finding the handle in the pipelines: {}", id)
}
};
Ok(warp::reply::with_status("done", http::StatusCode::OK))
}
}
},
);
let routes = session_create
.or(session_end)
.recover(handle_rejection)
.with(warp::cors().allow_any_origin());
println!("starting server...");
warp::serve(routes).run((ADDR, PORT)).await;
}
async fn handle_rejection(
err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
Ok(warp::reply::json(&format!("{:?}", err)))
}
fn with_pipelines(
pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background task starts (and never ends) when the "create" request is made, but when the "end" request is made, everything appears to complete successfully on the web side while the background task never receives the message. As I said, I've tried all the different channel types and moved things around to get into this configuration, i.e. flattened and made things as thread-safe as I could think of. I'm greener in Rust than I would like, so any help would be VERY appreciated!

I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not get a chance to process the message, because the abort lands before the task is scheduled to run again.
What you need is to join the task, not abort it.
Curiously, tokio task handles do not have a join() method; instead you await the handle itself. But for that you need to own the handle, so first you have to take the handle out of the Switcher:
let mut s = v.write().await;
//steal the task handle
if let Some(h) = s.handle.take() {
//...
tx.send("goodbye".to_string());
//...
//join the task
h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. The unwrap() above just panics in that case, but you may want to do something different.
Or... you could choose not to wait for the task at all. In tokio, if you drop a task handle the task is detached: it will finish whenever it finishes.
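For illustration, here is a minimal, self-contained sketch of the send-then-await pattern, stripped of all the warp and HashMap machinery (assuming tokio 1 with the full feature set, as in the question's Cargo.toml):
use tokio::sync::mpsc;
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    let handle = tokio::spawn(async move {
        // Background task: runs until it receives the end signal.
        match rx.recv().await {
            Some(v) => println!("Got end message: {}", v),
            None => println!("Channel closed without a message"),
        }
        true
    });
    tx.send("goodbye".to_string()).unwrap();
    // Calling handle.abort() here would usually cancel the task before it ever
    // gets to process the message; awaiting the handle lets it run to completion.
    let finished = handle.await.unwrap();
    println!("Background task returned {}", finished);
}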

Related

deno_runtime running multiple invokes on single worker concurrently

I'm trying to run multiple invocations of the same script on a single deno MainWorker concurrently and wait for their results (since the scripts can be async). Conceptually, I want something like the loop in run_worker below.
type Tx = Sender<(String, Sender<String>)>;
type Rx = Receiver<(String, Sender<String>)>;
struct Runner {
worker: MainWorker,
futures: FuturesUnordered<Pin<Box<dyn Future<Output=(String, Result<Global<Value>, Error>)>>>>,
response_futures: FuturesUnordered<Pin<Box<dyn Future<Output=(String, Result<(), SendError<String>>)>>>>,
result_senders: HashMap<String, Sender<String>>,
}
impl Runner {
fn new() ...
async fn run_worker(&mut self, rx: &mut Rx, main_module: ModuleSpecifier, user_module: ModuleSpecifier) {
self.worker.execute_main_module(&main_module).await.unwrap();
self.worker.preload_side_module(&user_module).await.unwrap();
loop {
tokio::select! {
msg = rx.recv() => {
if let Some((id, sender)) = msg {
let global = self.worker.js_runtime.execute_script("test", "mod.entry()").unwrap();
self.result_senders.insert(id, sender);
self.futures.push(Box::pin(async {
let resolved = self.worker.js_runtime.resolve_value(global).await;
return (id, resolved);
}));
}
},
script_result = self.futures.next() => {
if let Some((id, out)) = script_result {
self.response_futures.push(Box::pin(async {
let value = deserialize_value(out.unwrap(), &mut self.worker);
let res = self.result_senders.remove(&id).unwrap().send(value).await;
return (id.clone(), res);
}));
}
},
// also handle response_futures here
else => break,
}
}
}
}
The worker can't be borrowed mutably multiple times, so this won't work. So the worker has to go into a RefCell, and I've created a BorrowingFuture:
struct BorrowingFuture {
worker: RefCell<MainWorker>,
global: Global<Value>,
id: String
}
And its poll implementation:
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
match Pin::new(&mut Box::pin(self.worker.borrow_mut().js_runtime.resolve_value(self.global.clone()))).poll(cx) {
Poll::Ready(result) => Poll::Ready((self.id.clone(), result)),
Poll::Pending => {
cx.waker().clone().wake_by_ref();
Poll::Pending
}
}
}
So the above
self.futures.push(Box::pin(async {
let resolved = self.worker.js_runtime.resolve_value(global).await;
return (id, resolved);
}));
would become
self.futures.push(Box::pin(BorrowingFuture{worker: self.worker, global: global.clone(), id: id.clone()}));
and this would have to be done for the response_futures above as well.
But I see a few issues with this.
Creating a new future on every poll and then polling that seems wrong, but it does work.
It probably has a performance impact because new objects are created constantly.
The same issue would happen for the response futures, which would call send on each poll, which seems completely wrong.
The waker.wake_by_ref is called on every poll, because there is no way to know when a script result will resolve. This results in the future being polled thousands of times (or more) per second (always creating a new object), which I guess amounts to the same thing as checking it in a loop.
Note: My current setup doesn't use select!, but an enum as the Output of multiple Future implementations, pushed into a single FuturesUnordered and then matched to handle the correct type (script, send, receive). I used select! here because it's far less verbose and gets the point across.
Is there a way to do this better/more efficiently? Or is it just not the way MainWorker was meant to be used?
main for completeness:
#[tokio::main]
async fn main() {
let main_module = deno_runtime::deno_core::resolve_url(MAIN_MODULE_SPECIFIER).unwrap();
let user_module = deno_runtime::deno_core::resolve_url(USER_MODULE_SPECIFIER).unwrap();
let (tx, mut rx) = channel(1);
let (result_tx, mut result_rx) = channel(1);
let handle = thread::spawn(move || {
let runtime = tokio::runtime::Builder::new_multi_thread().enable_all().build().unwrap();
let mut runner = Runner::new();
runtime.block_on(runner.run_worker(&mut rx, main_module, user_module));
});
tx.send(("test input".to_string(), result_tx)).await.unwrap();
let result = result_rx.recv().await.unwrap();
println!("result from worker {}", result);
handle.join().unwrap();
}

Receiver on tokio's mpsc channel only receives messages when buffer is full

I've spent a few hours trying to figure this out and I'm pretty much done. I found a question with a similar name, but there the problem was something blocking synchronously, which was messing with tokio. That may very well be the issue here too, but I have absolutely no idea what is causing it.
Here is a heavily stripped down version of my project which hopefully gets the issue across.
use std::io;
use futures_util::{
SinkExt,
stream::{SplitSink, SplitStream},
StreamExt,
};
use tokio::{
net::TcpStream,
sync::mpsc::{channel, Receiver, Sender},
};
use tokio_tungstenite::{
connect_async,
MaybeTlsStream,
tungstenite::Message,
WebSocketStream,
};
#[tokio::main]
async fn main() {
connect_to_server("wss://a_valid_domain.com".to_string()).await;
}
async fn read_line() -> String {
loop {
let mut str = String::new();
io::stdin().read_line(&mut str).unwrap();
str = str.trim().to_string();
if !str.is_empty() {
return str;
}
}
}
async fn connect_to_server(url: String) {
let (ws_stream, _) = connect_async(url).await.unwrap();
let (write, read) = ws_stream.split();
let (tx, rx) = channel::<ChannelMessage>(100);
tokio::spawn(channel_thread(write, rx));
tokio::spawn(handle_std_input(tx.clone()));
read_messages(read, tx).await;
}
#[derive(Debug)]
enum ChannelMessage {
Text(String),
Close,
}
// PROBLEMATIC FUNCTION
async fn channel_thread(
mut write: SplitSink<WebSocketStream<MaybeTlsStream<TcpStream>>, Message>,
mut rx: Receiver<ChannelMessage>,
) {
while let Some(msg) = rx.recv().await {
println!("{:?}", msg); // This only fires when buffer is full
match msg {
ChannelMessage::Text(text) => write.send(Message::Text(text)).await.unwrap(),
ChannelMessage::Close => {
write.close().await.unwrap();
rx.close();
return;
}
}
}
}
async fn read_messages(
mut read: SplitStream<WebSocketStream<MaybeTlsStream<TcpStream>>>,
tx: Sender<ChannelMessage>,
) {
while let Some(msg) = read.next().await {
let msg = match msg {
Ok(m) => m,
Err(_) => continue
};
match msg {
Message::Text(m) => println!("{}", m),
Message::Close(_) => break,
_ => {}
}
}
if !tx.is_closed() {
let _ = tx.send(ChannelMessage::Close).await;
}
}
async fn handle_std_input(tx: Sender<ChannelMessage>) {
loop {
let str = read_line().await;
if tx.is_closed() {
break;
}
tx.send(ChannelMessage::Text(str)).await.unwrap();
}
}
As you can see, what I'm trying to do is:
Connect to a websocket
Print outgoing messages from the websocket
Forward any input from stdin to the websocket
Also a custom heartbeat solution which was trimmed out
The issue lies in the channel_thread() function. I move the websocket writer into this function, as well as the channel receiver. The problem is that it only processes the sent messages once the channel's buffer is full.
I've spent a lot of time trying to solve this, any help is greatly appreciated.
Here, you make a blocking synchronous call in an async context:
async fn read_line() -> String {
loop {
let mut str = String::new();
io::stdin().read_line(&mut str).unwrap();
// ^^^^^^^^^^^^^^^^^^^
// This is sync+blocking
str = str.trim().to_string();
if !str.is_empty() {
return str;
}
}
}
You should never, ever make blocking synchronous calls in an async context, because that prevents the entire thread from running other async tasks. Your channel receiver task is likely assigned to the same thread, so it has to wait until all the blocking calls are done and whatever invokes this function yields back to the async runtime.
Tokio has its own async version of stdin, which you should use instead.
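A sketch of what the async version could look like (assuming tokio's io-std and io-util features are enabled; only read_line changes, the rest of the program stays the same):
use tokio::io::{AsyncBufReadExt, BufReader};
async fn read_line() -> String {
    // In a real program you would create this reader once and reuse it,
    // otherwise data buffered between calls can be lost.
    let mut lines = BufReader::new(tokio::io::stdin()).lines();
    loop {
        match lines.next_line().await {
            Ok(Some(line)) if !line.trim().is_empty() => return line.trim().to_string(),
            Ok(Some(_)) => continue,   // empty line, keep waiting
            _ => return String::new(), // EOF or I/O error
        }
    }
}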

Read stdin triggered by key event without dropping first letter

Context
I am working on a pomodoro command line app written in Rust. Most of it works well, but now I want to edit the text of a pomodoro item in the database. All of the actions in the app are triggered by keystrokes: pausing/resuming, quitting, etc., as well as editing the text.
Now I want to read the text from stdin, but the key events are sourced from stdin as well, on a different thread. I came up with using stdin.lock(), which almost works.
The problem
How can I read a line from stdin in the main thread without dropping the first letter, given that the event listener in its own thread fires and consumes it before the main thread acquires the lock?
expected behaviour:
press t => print Reading from stdin!
type abc<enter> => print You typed: Some("abc")
actual behaviour:
press t => print Reading from stdin!
type abc<enter> => print You typed: Some("bc")
Minimal non-working example
Here is an example that shows the described behaviour:
use failure;
use std::io::{stdin, stdout};
use std::sync::mpsc;
use std::thread;
use termion::event::Key;
use termion::input::TermRead;
use termion::raw::IntoRawMode;
use tui::backend::TermionBackend;
use tui::Terminal;
pub enum Event {
Input(Key),
}
#[allow(dead_code)]
pub struct Events {
rx: mpsc::Receiver<Event>,
input_handle: thread::JoinHandle<()>,
}
impl Events {
pub fn new() -> Events {
let (tx, rx) = mpsc::channel();
let input_handle = {
let tx = tx.clone();
thread::spawn(move || {
let stdin = stdin();
for evt in stdin.keys() {
match evt {
Ok(key) => {
if let Err(_) = tx.send(Event::Input(key)) {
return;
}
}
Err(_) => {}
}
}
})
};
Events {
rx,
input_handle,
}
}
pub fn next(&self) -> Result<Event, mpsc::RecvError> {
self.rx.recv()
}
}
pub fn key_handler(key: Key) -> bool {
match key {
Key::Char('t') => {
println!("Reading from stdin!");
let stdin = stdin();
let mut handle = stdin.lock();
let input = handle.read_line().unwrap();
println!("You typed: {:?}", input);
}
_ =>{
println!("no thing!");
}
};
key == Key::Char('q')
}
fn main() -> Result<(), failure::Error> {
let stdout = stdout().into_raw_mode()?;
let backend = TermionBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
terminal.clear()?;
terminal.hide_cursor()?;
let events = Events::new();
loop {
match events.next()? {
Event::Input(key) => {
if key_handler(key) {
break;
}
}
}
}
terminal.clear()?;
terminal.show_cursor()?;
Ok(())
}
Update
Cargo.toml
[package]
name = "mnwe"
version = "1.1.0"
edition = "2018"
autotests = false
[[bin]]
bench = false
path = "app/main.rs"
name = "mnwe"
[dependencies]
failure = "0.1"
termion = "1.5.3"
tui = "0.7"
The problem, as correctly identified, is the race on stdin().lock() between stdin.keys() in the events thread and stdin.lock() in key_handler (which the events thread tends to win, and eat one key).
For the sake of the bounty, I see four possible approaches:
The easiest option is to avoid having threads at all, and instead regularly poll for new input with termion::async_stdin(). (It's what I ended up doing for my own tui application. In the end, you're likely to be polling the event receiver anyway, so why not poll stdin directly; a sketch follows below.)
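A rough standalone sketch of that polling loop (assuming termion 1.5, where keys() on async_stdin() stops yielding once no more input is pending; adapt it into the tui draw loop as needed):
use std::io::stdout;
use std::{thread, time::Duration};
use termion::input::TermRead;
use termion::raw::IntoRawMode;
fn main() {
    // Raw mode (as in the original program) so keys arrive one at a time.
    let _stdout = stdout().into_raw_mode().unwrap();
    // Non-blocking handle to stdin; the iterator yields only keys already pending.
    let mut keys = termion::async_stdin().keys();
    loop {
        while let Some(Ok(key)) = keys.next() {
            if key == termion::event::Key::Char('q') {
                return;
            }
            // handle other keys / collect a line here
        }
        // Redraw the UI here, then sleep briefly before polling again.
        thread::sleep(Duration::from_millis(50));
    }
}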
If your problem allows it, you could do the stdin reading directly on the event thread. Instead of sending key events over the channel, you would send something you could call "user commands" over the channel:
// Channel data:
pub enum Command {
StdinInput(String),
Quit,
// Add more commands
Unknown(Key),
}
// Send side
thread::spawn(move || {
for evt in stdin().keys() {
match evt {
Ok(key) => {
let cmd = match key {
Key::Char('t') => {
println!("Reading from stdin!");
let input = stdin().lock().read_line().unwrap();
Command::StdinInput(input.unwrap_or(String::new()))
}
Key::Char('q') => Command::Quit,
_ => Command::Unknown(key),
};
if let Err(_) = tx.send(cmd) {
return;
}
}
Err(_) => {}
}
}
})
// Receive side
loop {
match events.next()? {
Command::StdinInput(input) => println!("You typed: {}", input),
Command::Quit => break,
Command::Unknown(k) => println!("no thing: {:?}", k),
}
}
If you absolutely must access stdin from two threads, I would not recommend using a Condvar, but rather passing the sender of another channel through the event channel. Why? Because it's much harder to get wrong. Any channel will do, but I think a oneshot::channel() is the most suitable here:
// Channel data
pub enum Event {
Input(Key, oneshot::Sender<()>),
}
// Send side
for evt in stdin.keys() {
let (shot_send, shot_recv) = oneshot::channel();
match evt {
Ok(key) => {
if let Err(_) = tx.send(Event::Input(key, shot_send)) {
return;
}
shot_recv.recv().ok();
}
Err(_) => {}
}
}
// Receive side
loop {
match events.next()? {
Event::Input(key, done) => {
match key {
Key::Char('t') => {
println!("Reading from stdin!");
let stdin = stdin();
let mut handle = stdin.lock();
let input = handle.read_line().unwrap();
println!("You typed: {:?}", input);
// It doesn't really matter whether we send anything
// but using the channel here avoids mean surprises about when it gets dropped
done.send(()).ok();
}
Key::Char('q') => break,
_ => println!("no thing!"),
};
}
}
}
You could also not do the stdin().lock().read_line() at all, and instead reassemble the user's input line from the keystroke events. I wouldn't do that, but for completeness a sketch follows.
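A rough sketch of what that reassembly might look like, reusing the Event/Events types from the question (hypothetical helper; no cursor movement or editing beyond backspace):
use termion::event::Key;
fn read_line_from_keys(events: &Events) -> Result<String, std::sync::mpsc::RecvError> {
    let mut line = String::new();
    loop {
        match events.next()? {
            // termion reports Enter as '\n' in raw mode
            Event::Input(Key::Char('\n')) => return Ok(line),
            Event::Input(Key::Char(c)) => line.push(c),
            Event::Input(Key::Backspace) => {
                line.pop();
            }
            Event::Input(_) => {}
        }
    }
}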

How to cleanly break tokio-core event loop and futures::Stream in Rust

I am dabbling in tokio-core and can figure out how to spawn an event loop. However, there are two things I am not sure of: how to gracefully exit the event loop, and how to exit a stream running inside an event loop. For example, consider this simple piece of code which spawns two listeners into the event loop and waits for another thread to indicate an exit condition:
extern crate tokio_core;
extern crate futures;
use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Stream, Future};
use std::thread;
use std::time::Duration;
use std::sync::mpsc::channel;
fn main() {
let (get_tx, get_rx) = channel();
let j = thread::spawn(move || {
let mut core = Core::new().unwrap();
let (tx, rx) = unbounded();
get_tx.send(tx).unwrap(); // <<<<<<<<<<<<<<< (1)
// Listener-0
{
let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:44444").unwrap(),
&core.handle())
.unwrap();
let fe = l.incoming()
.for_each(|(_sock, peer)| {
println!("Accepted from {}", peer);
Ok(())
})
.map_err(|e| println!("----- {:?}", e));
core.handle().spawn(fe);
}
// Listener1
{
let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:55555").unwrap(),
&core.handle())
.unwrap();
let fe = l.incoming()
.for_each(|(_sock, peer)| {
println!("Accepted from {}", peer);
Ok(())
})
.map_err(|e| println!("----- {:?}", e));
core.handle().spawn(fe);
}
let work = rx.for_each(|v| {
if v {
// (3) I want to shut down listener-0 above the release the resources
Ok(())
} else {
Err(()) // <<<<<<<<<<<<<<< (2)
}
});
let _ = core.run(work);
println!("Exiting event loop thread");
});
let tx = get_rx.recv().unwrap();
thread::sleep(Duration::from_secs(2));
println!("Want to terminate listener-0"); // <<<<<< (3)
tx.send(true).unwrap();
thread::sleep(Duration::from_secs(2));
println!("Want to exit event loop");
tx.send(false).unwrap();
j.join().unwrap();
}
So, say after the sleep in the main thread I want a clean exit of the event loop thread. Currently I send something to the event loop to make it exit and thus release the thread.
However, both (1) and (2) feel hacky: I am forcing an error as an exit condition. My questions are:
1) Am I doing it right? If not, what is the correct way to gracefully exit the event loop thread?
2) I don't even know how to do (3), i.e. indicate a condition externally to shut down listener-0 and free all its resources. How do I achieve this?
The event loop exits when the core is no longer being turned (e.g. by run()) or is forgotten (drop()ed); there is no synchronous exit. core.run() returns and stops turning the loop when the Future passed to it completes.
A Stream completes by yielding None (marked with (3) in the code below).
When, for example, a TCP connection is closed, the Stream representing it completes, and vice versa.
extern crate tokio_core;
extern crate futures;
use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Async, Stream, Future, Poll};
use std::thread;
use std::time::Duration;
struct CompletionPact<S, C>
where S: Stream,
C: Stream,
{
stream: S,
completer: C,
}
fn stream_completion_pact<S, C>(s: S, c: C) -> CompletionPact<S, C>
where S: Stream,
C: Stream,
{
CompletionPact {
stream: s,
completer: c,
}
}
impl<S, C> Stream for CompletionPact<S, C>
where S: Stream,
C: Stream,
{
type Item = S::Item;
type Error = S::Error;
fn poll(&mut self) -> Poll<Option<S::Item>, S::Error> {
match self.completer.poll() {
Ok(Async::Ready(None)) |
Err(_) |
Ok(Async::Ready(Some(_))) => {
// We are done, forget us
Ok(Async::Ready(None)) // <<<<<< (3)
},
Ok(Async::NotReady) => {
self.stream.poll()
},
}
}
}
fn main() {
// unbounded() is the equivalent of a Stream made from a channel()
// directly create it in this thread instead of receiving a Sender
let (tx, rx) = unbounded::<()>();
// A second one to cause forgetting the listener
let (l0tx, l0rx) = unbounded::<()>();
let j = thread::spawn(move || {
let mut core = Core::new().unwrap();
// Listener-0
{
let l = TcpListener::bind(
&SocketAddr::from_str("127.0.0.1:44444").unwrap(),
&core.handle())
.unwrap();
// wrap the Stream of incoming connections (which usually doesn't
// complete) into a Stream that completes when the
// other side is drop()ed or sent on
let fe = stream_completion_pact(l.incoming(), l0rx)
.for_each(|(_sock, peer)| {
println!("Accepted from {}", peer);
Ok(())
})
.map_err(|e| println!("----- {:?}", e));
core.handle().spawn(fe);
}
// Listener1
{
let l = TcpListener::bind(
&SocketAddr::from_str("127.0.0.1:55555").unwrap(),
&core.handle())
.unwrap();
let fe = l.incoming()
.for_each(|(_sock, peer)| {
println!("Accepted from {}", peer);
Ok(())
})
.map_err(|e| println!("----- {:?}", e));
core.handle().spawn(fe);
}
let _ = core.run(rx.into_future());
println!("Exiting event loop thread");
});
thread::sleep(Duration::from_secs(2));
println!("Want to terminate listener-0");
// A drop() will result in the rx side Stream being completed,
// which is indicated by Ok(Async::Ready(None)).
// Our wrapper behaves the same when something is received.
// When the event loop encounters a
// Stream that is complete it forgets about it. Which propagates to a
// drop() that close()es the file descriptor, which closes the port if
// nothing else uses it.
l0tx.send(()).unwrap(); // alternatively: drop(l0tx);
// Note that this is async and is only the signal
// that starts the forgetting.
thread::sleep(Duration::from_secs(2));
println!("Want to exit event loop");
// Same concept. The reception or drop() will cause Stream completion.
// A completed Future will cause run() to return.
tx.send(()).unwrap();
j.join().unwrap();
}
I implemented graceful shutdown via a oneshot channel.
The trick is to use a oneshot channel to cancel the TCP listener, combined with a select! over the two futures. Note that I'm using tokio 0.2 and futures 0.3 in the example below.
use futures::channel::oneshot;
use futures::{FutureExt, StreamExt};
use std::thread;
use tokio::net::TcpListener;
pub struct ServerHandle {
// This is the thread in which the server will block
thread: thread::JoinHandle<()>,
// This switch can be used to trigger shutdown of the server.
kill_switch: oneshot::Sender<()>,
}
impl ServerHandle {
pub fn stop(self) {
self.kill_switch.send(()).unwrap();
self.thread.join().unwrap();
}
}
pub fn run_server() -> ServerHandle {
let (kill_switch, kill_switch_receiver) = oneshot::channel::<()>();
let thread = thread::spawn(move || {
info!("Server thread begun!!!");
let mut runtime = tokio::runtime::Builder::new()
.basic_scheduler()
.enable_all()
.thread_name("Tokio-server-thread")
.build()
.unwrap();
runtime.block_on(async {
server_prog(kill_switch_receiver).await.unwrap();
});
info!("Server finished!!!");
});
ServerHandle {
thread,
kill_switch,
}
}
async fn server_prog(kill_switch_receiver: oneshot::Receiver<()>) -> std::io::Result<()> {
let addr = "127.0.0.1:12345";
let addr: std::net::SocketAddr = addr.parse().unwrap();
let mut listener = TcpListener::bind(&addr).await?;
let mut kill_switch_receiver = kill_switch_receiver.fuse();
let mut incoming = listener.incoming().fuse();
loop {
futures::select! {
x = kill_switch_receiver => {
break;
},
optional_new_client = incoming.next() => {
if let Some(new_client) = optional_new_client {
let peer_socket = new_client?;
info!("Client connected!");
let peer = process_client(peer_socket, db.clone());
peers.lock().unwrap().push(peer);
} else {
info!("No more incoming connections.");
break;
}
},
};
}
Ok(())
}
Hope this helps others (or future me ;)).
My code lives here:
https://github.com/windelbouwman/lognplot/blob/master/lognplot/src/server/server.rs

How can I use Server-Sent Events in Iron?

I have a small Rust application that receives some requests through a serial port, does some processing, and saves the results locally. I wanted to use a browser as a remote monitor so I can see everything that is happening, and as I understand it, SSEs are pretty good for that.
I tried using Iron for that but I can't find a way to keep the connection open. The request handlers all need to return a Response, so I can't keep sending data.
This was my (dumb) attempt:
fn monitor(req: &mut Request) -> IronResult<Response> {
let mut headers = Headers::new();
headers.set(ContentType(Mime(TopLevel::Text, SubLevel::EventStream, vec![])));
headers.set(CacheControl(vec![CacheDirective::NoCache]));
println!("{:?}", req);
let mut count = 0;
loop {
let mut response = Response::with((iron::status::Ok, format!("data: Count!:{}", count)));
response.headers = headers.clone();
return Ok(response); //obviously won't do what I want
count += 1;
std::thread::sleep_ms(1000);
}
}
I think the short answer is: you can't. The current version of Iron is built around a single request-response interaction. You can see this in your code: the only way to send a response is to return it, which terminates the handler thread.
There's an issue in Iron to utilize the new async support in Hyper, which itself was merged relatively recently. There are even other people trying to use Server-Sent Events in Hyper who haven't succeeded yet.
If you are willing to use the Hyper master branch, something like this seems to work. No guarantees that this is a good solution or that it doesn't eat up all your RAM or CPU. It seems to work in Chrome though.
extern crate hyper;
use std::time::{Duration, Instant};
use std::io::prelude::*;
use hyper::{Control, Encoder, Decoder, Next };
use hyper::server::{Server, HandlerFactory, Handler, Request, Response};
use hyper::status::StatusCode;
use hyper::header::ContentType;
use hyper::net::{HttpStream};
fn main() {
let address = "0.0.0.0:7777".parse().expect("Invalid address");
let server = Server::http(&address).expect("Invalid server");
let (_listen, server_loop) = server.handle(MyFactory).expect("Failed to handle");
println!("Starting...");
server_loop.run();
}
struct MyFactory;
impl HandlerFactory<HttpStream> for MyFactory {
type Output = MyHandler;
fn create(&mut self, ctrl: Control) -> Self::Output {
MyHandler {
control: ctrl,
}
}
}
struct MyHandler {
control: Control,
}
impl Handler<HttpStream> for MyHandler {
fn on_request(&mut self, _request: Request<HttpStream>) -> Next {
println!("A request was made");
Next::write()
}
fn on_request_readable(&mut self, _request: &mut Decoder<HttpStream>) -> Next {
println!("Request has data to read");
Next::write()
}
fn on_response(&mut self, response: &mut Response) -> Next {
println!("A response is ready to be sent");
response.set_status(StatusCode::Ok);
let mime = "text/event-stream".parse().expect("Invalid MIME");
response.headers_mut().set(ContentType(mime));
every_duration(Duration::from_secs(1), self.control.clone());
Next::wait()
}
fn on_response_writable(&mut self, response: &mut Encoder<HttpStream>) -> Next {
println!("A response can be written");
// Waited long enough, send some data
let fake_data = r#"event: userconnect
data: {"username": "bobby", "time": "02:33:48"}"#;
println!("Writing some data");
response.write_all(fake_data.as_bytes()).expect("Failed to write");
response.write_all(b"\n\n").expect("Failed to write");
Next::wait()
}
}
use std::thread;
fn every_duration(max_elapsed: Duration, control: Control) {
let mut last_sent: Option<Instant> = None;
let mut count = 0;
thread::spawn(move || {
loop {
// Terminate after a fixed number of messages
if count >= 5 {
println!("Maximum messages sent, ending");
control.ready(Next::end()).expect("Failed to trigger end");
return;
}
// Wait a little while between messages
if let Some(last) = last_sent {
let elapsed = last.elapsed();
println!("It's been {:?} since the last message", elapsed);
if elapsed < max_elapsed {
let remaining = max_elapsed - elapsed;
println!("There's {:?} remaining", remaining);
thread::sleep(remaining);
}
}
// Trigger a message
control.ready(Next::write()).expect("Failed to trigger write");
last_sent = Some(Instant::now());
count += 1;
}
});
}
And the client-side JS:
var evtSource = new EventSource("http://127.0.0.1:7777");
evtSource.addEventListener("userconnect", function(e) {
const obj = JSON.parse(e.data);
console.log(obj);
}, false);