A common pattern for Node.js apps is to split them into many "sub-apps" that share some state. Of course, all the "sub-apps" should be handled asynchronously.
Here's a simple example of such a Node app, with three "sub-apps":
An interval timer => Every 10 seconds, a shared itv_counter is incremented
A TCP server => For every TCP message received, a shared tcp_counter is incremented
A UDP server => For every UDP message received, a shared udp_counter is incremented
Every time one of the counters is incremented, all three counters must be printed (which is why the "sub-apps" need to share state).
Here's an implementation in Node. The nice thing about Node is that you can assume that pretty much all I/O operations are handled asynchronously by default. There's no cognitive overhead for the developer.
const dgram = require('dgram');
const net = require('net');
const tcp_port = 3000;
const udp_port = 3001;
const tcp_listener = net.createServer();
const udp_listener = dgram.createSocket('udp4');
// state shared by the 3 asynchronous applications
const shared_state = {
itv_counter: 0,
tcp_counter: 0,
udp_counter: 0,
};
// itv async app: increment itv_counter every 10 seconds and print shared state
setInterval(() => {
shared_state.itv_counter += 1;
console.log(`itv async app: ${JSON.stringify(shared_state)}`);
}, 10_000);
// tcp async app: increment tcp_counter every time a TCP message is received and print shared state
tcp_listener.on('connection', (client) => {
client.on('data', (_data) => {
shared_state.tcp_counter += 1;
console.log(`tcp async app: ${JSON.stringify(shared_state)}`);
});
});
tcp_listener.listen(tcp_port, () => {
console.log(`TCP listener on port ${tcp_port}`);
});
// udp async app: increment udp_counter every time a UDP message is received and print shared state
udp_listener.on('message', (_message, _client) => {
shared_state.udp_counter += 1;
console.log(`udp async app: ${JSON.stringify(shared_state)}`);
});
udp_listener.on('listening', () => {
console.log(`UDP listener on port ${udp_port}`);
});
udp_listener.bind(udp_port);
Now, here's an implementation in Rust with Tokio as the asynchronous runtime.
use std::sync::{Arc, Mutex};
use std::time::Duration;
use tokio::net::{TcpListener, UdpSocket};
use tokio::prelude::*;
// state shared by the 3 asynchronous applications
#[derive(Clone, Debug)]
struct SharedState {
state: Arc<Mutex<State>>,
}
#[derive(Debug)]
struct State {
itv_counter: usize,
tcp_counter: usize,
udp_counter: usize,
}
impl SharedState {
fn new() -> SharedState {
SharedState {
state: Arc::new(Mutex::new(State {
itv_counter: 0,
tcp_counter: 0,
udp_counter: 0,
})),
}
}
}
#[tokio::main]
async fn main() {
let shared_state = SharedState::new();
// itv async app: increment itv_counter every 10 seconds and print shared state
let itv_shared_state = shared_state.clone();
let itv_handle = tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(10));
interval.tick().await;
loop {
interval.tick().await;
let mut state = itv_shared_state.state.lock().unwrap();
state.itv_counter += 1;
println!("itv async app: {:?}", state);
}
});
// tcp async app: increment tcp_counter every time a TCP message is received and print shared state
let tcp_shared_state = shared_state.clone();
let tcp_handle = tokio::spawn(async move {
let mut tcp_listener = TcpListener::bind("127.0.0.1:3000").await.unwrap();
println!("TCP listener on port 3000");
while let Ok((mut tcp_stream, _)) = tcp_listener.accept().await {
let tcp_shared_state = tcp_shared_state.clone();
tokio::spawn(async move {
let mut buffer = [0; 1024];
while let Ok(byte_count) = tcp_stream.read(&mut buffer).await {
if byte_count == 0 {
break;
}
let mut state = tcp_shared_state.state.lock().unwrap();
state.tcp_counter += 1;
println!("tcp async app: {:?}", state);
}
});
}
});
// udp async app: increment udp_counter every time a UDP message is received and print shared state
let udp_shared_state = shared_state.clone();
let udp_handle = tokio::spawn(async move {
let mut udp_listener = UdpSocket::bind("127.0.0.1:3001").await.unwrap();
println!("UDP listener on port 3001");
let mut buffer = [0; 1024];
while let Ok(_byte_count) = udp_listener.recv(&mut buffer).await {
let mut state = udp_shared_state.state.lock().unwrap();
state.udp_counter += 1;
println!("udp async app: {:?}", state);
}
});
itv_handle.await.unwrap();
tcp_handle.await.unwrap();
udp_handle.await.unwrap();
}
First of all, as I'm not super comfortable with Tokio and async Rust yet, there might be things that are dead wrong in this implementation, or bad practice. Please let me know if that's the case (e.g. I have no clue if the three JoinHandle .await are necessary at the very end). That said, it behaves the same as the Node implementation for my simple tests.
But I'm still not sure if it's equivalent under the hood in terms of asynchronicity. Should there be a tokio::spawn for every callback in the Node app? In that case, I should wrap tcp_stream.read() and udp_listener.recv() in another tokio::spawn to mimic the Node callbacks for TCP's on('data') and UDP's on('message'), respectively. Not sure...
What would be the tokio implementation that would be totally equivalent to the Node.js app in terms of asynchronicity? In general, what's a good rule of thumb to know when something should be wrapped in a tokio::spawn?
Since you have three different counters, one per task, I think there is a meaningful way to treat your state struct as a token and pass it around between the tasks.
That way every task is responsible for updating only its own counter.
My suggestion is to use tokio::sync::mpsc::channel and set up three channels, each one directed from one task to the next, so the state value travels around the ring.
Of course, if the tasks update at different rates there is a risk that some values are printed a little late, but in most cases I think that can be ignored. A minimal sketch of this idea is shown below.
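Here is a minimal, self-contained sketch of that idea, assuming tokio 1.x. The State value acts as the token: it travels around a ring of three mpsc channels, each task waits for its own event, increments only its own counter, prints, and hands the token to the next task. The event sources are stubbed with sleeps of different lengths purely for illustration (in the real app they would be the interval tick, a TCP read and a UDP recv), and the channel names are made up for the sketch.
use std::time::Duration;
use tokio::sync::mpsc;

#[derive(Debug, Default)]
struct State {
    itv_counter: usize,
    tcp_counter: usize,
    udp_counter: usize,
}

#[tokio::main]
async fn main() {
    // one channel per hop of the ring: itv -> tcp -> udp -> itv
    let (to_itv, mut itv_rx) = mpsc::channel::<State>(1);
    let (to_tcp, mut tcp_rx) = mpsc::channel::<State>(1);
    let (to_udp, mut udp_rx) = mpsc::channel::<State>(1);
    let start = to_itv.clone(); // used once below to inject the token

    tokio::spawn(async move {
        while let Some(mut state) = itv_rx.recv().await {
            tokio::time::sleep(Duration::from_secs(10)).await; // stand-in for the interval event
            state.itv_counter += 1;
            println!("itv async app: {:?}", state);
            to_tcp.send(state).await.unwrap();
        }
    });

    tokio::spawn(async move {
        while let Some(mut state) = tcp_rx.recv().await {
            tokio::time::sleep(Duration::from_secs(3)).await; // stand-in for a TCP message
            state.tcp_counter += 1;
            println!("tcp async app: {:?}", state);
            to_udp.send(state).await.unwrap();
        }
    });

    let last = tokio::spawn(async move {
        while let Some(mut state) = udp_rx.recv().await {
            tokio::time::sleep(Duration::from_secs(5)).await; // stand-in for a UDP message
            state.udp_counter += 1;
            println!("udp async app: {:?}", state);
            to_itv.send(state).await.unwrap();
        }
    });

    start.send(State::default()).await.unwrap(); // inject the token to start the ring
    last.await.unwrap(); // runs forever, like the original app
}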
I am creating a server that stores TcpStream objects inside a Vec to be used later. The problem is that the function which listens for new connections and adds them to the Vec runs forever in a separate thread and doesn't allow other threads to read the Vec.
pub struct Server {
pub connections: Vec<TcpStream>,
}
impl Server {
fn listen(&mut self) {
println!("Server is listening on port 8080");
let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
loop {
let stream = listener.accept().unwrap().0;
println!("New client connected: {}", stream.peer_addr().unwrap());
//should block for write here
self.connections.push(stream);
//should release write lock
}
}
pub fn run(self) {
let arc_self = Arc::new(RwLock::new(self));
let arc_self_clone = arc_self.clone();
//blocks the lock for writing forever because of listen()
let listener_thread = thread::spawn(move || arc_self_clone.write().unwrap().listen());
loop {
let mut input = String::new();
io::stdin().read_line(&mut input).unwrap();
if input.trim() == "1" {
//can't read because lock blocked for writing
for c in &arc_self.read().unwrap().connections {
println!("testing...");
}
}
}
}
}
In the current example the server accepts connections but does not allow the main thread to read the connections vector. I thought about making the listen function run at a fixed interval (1-5 s) so that other threads could read the vector in between, but listener.accept() blocks the thread anyway, so I don't think that is a valid solution. I would also prefer that it run forever if possible, blocking access to the vector only when it needs to write (when a new client connects), and not blocking other threads from reading the connections vector while it waits for clients to connect.
You could just wrap connections in a RwLock instead of the entire Server, as shown below, but I would recommend using a proper synchronisation primitive like a channel (a sketch of that follows the code).
pub struct Server {
pub connections: RwLock<Vec<TcpStream>>,
}
impl Server {
fn listen(&self) {
println!("Server is listening on port 8080");
let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
loop {
let stream = listener.accept().unwrap().0;
println!("New client connected: {}", stream.peer_addr().unwrap());
//should block for write here
self.connections.write().unwrap().push(stream);
//should release write lock
}
}
pub fn run(self) {
let arc_self = Arc::new(self);
let arc_self_clone = arc_self.clone();
let listener_thread = thread::spawn(move || arc_self_clone.listen());
loop {
let mut input = String::new();
io::stdin().read_line(&mut input).unwrap();
if input.trim() == "1" {
for c in &*arc_self.connections.try_read().unwrap() {
println!("testing...");
}
}
}
}
}
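For completeness, the channel-based alternative could look roughly like this (a sketch only, with illustrative names): the listener thread owns nothing shared and simply sends each accepted TcpStream down an std::sync::mpsc channel, while the main thread keeps its own Vec and drains the channel whenever it wants to look at the connections, so no lock is ever held while blocking in accept().
use std::io;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<TcpStream>();

    // listener thread: accept forever and hand each stream to the main thread
    thread::spawn(move || {
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        println!("Server is listening on port 8080");
        loop {
            let (stream, addr) = listener.accept().unwrap();
            println!("New client connected: {}", addr);
            if tx.send(stream).is_err() {
                break; // main thread is gone
            }
        }
    });

    let mut connections: Vec<TcpStream> = Vec::new();
    loop {
        let mut input = String::new();
        io::stdin().read_line(&mut input).unwrap();
        if input.trim() == "1" {
            // pick up any connections accepted since the last check
            connections.extend(rx.try_iter());
            for c in &connections {
                println!("connection from {}", c.peer_addr().unwrap());
            }
        }
    }
}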
I'm learning Rust, and one thing I want to do is kill, or shut down, a web server on GET /.
Is this something you can't do with warp? Or is my implementation broken?
I've got the following code, but it just doesn't seem to want to respond to any HTTP requests.
pub async fn perform_oauth_flow(&self) {
let (tx, rx) = channel::unbounded();
let routes = warp::path::end().map(move || {
println!("handling");
tx.send("kill");
Ok(warp::reply::with_status("OK", http::StatusCode::CREATED))
});
println!("Spawning server");
let webserver_thread = thread::spawn(|| async {
spawn(warp::serve(routes).bind(([127, 0, 0, 1], 3000)))
.await
.unwrap();
});
println!("waiting for result");
let result = rx.recv().unwrap();
println!("Got result");
if result == "kill" {
webserver_thread.join().unwrap().await;
}
}
let webserver_thread = thread::spawn(|| async {
// ^^^^^
Creating an async block is not going to execute the code inside; it is just creating a Future you need to .await. Your server never actually runs.
In general, using threads with async code is not going to work well. It's better to use your runtime's tasks; in the case of warp the runtime is Tokio, so use tokio::spawn():
let webserver_thread = tokio::spawn(async move {
spawn(warp::serve(routes).bind(([127, 0, 0, 1], 3000)))
.await
.unwrap();
});
// ...
if result == "kill" {
webserver_thread.await;
}
You may also find it necessary to use tokio's async channels instead of synchronous channels.
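For example, a blocking recv() on a synchronous channel inside an async fn ties up one of the runtime's threads, whereas Tokio's channels can simply be awaited. A tiny standalone sketch of the difference (illustrative only, using an unbounded tokio channel, since the exact channel type in the question isn't shown):
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // unbounded, so the sending side never needs to .await
    let (tx, mut rx) = mpsc::unbounded_channel();

    // stand-in for the warp handler firing on GET /
    tokio::spawn(async move {
        tx.send("kill").ok();
    });

    // awaiting the receiver yields to the runtime instead of blocking a
    // whole thread the way a synchronous recv() would
    if rx.recv().await == Some("kill") {
        println!("shutting down");
    }
}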
There are two issues in your code:
As pointed out by #ChayimFriedman's answer, you never start the server because your async block never runs.
Even if you replace the threads with Tokio tasks, you never tell the server to exit. You need to use bind_with_graceful_shutdown so that you can notify the server to exit.
(Untested) complete example:
pub async fn perform_oauth_flow(&self) {
    let (tx, rx) = tokio::sync::oneshot::channel();
    // warp's map() needs a cloneable Fn closure, but a oneshot sender is
    // consumed on send, so park it in an Arc<Mutex<Option<...>>> and take it
    // out on first use
    let tx = std::sync::Arc::new(std::sync::Mutex::new(Some(tx)));
    let routes = warp::path::end().map(move || {
        println!("handling");
        if let Some(tx) = tx.lock().unwrap().take() {
            let _ = tx.send(());
        }
        warp::reply::with_status("OK", http::StatusCode::CREATED)
    });
    println!("Spawning server");
    let (_addr, server) = warp::serve(routes)
        .bind_with_graceful_shutdown(([127, 0, 0, 1], 3000), async {
            rx.await.ok();
        });
    println!("waiting for result");
    server.await;
}
I am using Rust and Tokio 1.6 to build an app which can interact with an Elgato StreamDeck via hidapi = "1.2". I want to poll the HID device for events (key down / key up) and send those events on an mpsc channel, while watching a separate mpsc channel for incoming commands to update the device state (reset, change brightness, update image, etc). Since the device handle is not thread safe, I need to do both things from a single thread.
major edits below
This is a rewrite of my original question. I've left my interim answer below, but in the interest of a more self-contained example, here is the basic process using device_query = "0.2":
use device_query::{DeviceState, Keycode};
use std::time::Duration;
use tokio;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio::time::timeout;
#[tokio::main]
async fn main() {
// channel for key press events coming from device loop
let (key_tx, mut key_rx) = tokio::sync::mpsc::channel(32);
// channel for commands sent to device loop
let (dev_tx, mut dev_rx) = tokio::sync::mpsc::channel(32);
start_device_loop(60, key_tx, dev_rx);
println!("Waiting for key presses");
while let Some(k) = key_rx.recv().await {
match k {
Some(ch) => match ch {
Keycode::Q => dev_tx.clone().try_send(String::from("Quit!")).expect("Could not send command"),
ch => println!("{}", ch),
},
_ => (),
}
}
println!("Done.")
}
/// Starts a tokio task, polling the supplied device and sending key events
/// on the supplied mpsc sender
pub fn start_device_loop(hz: u32, tx: Sender<Option<Keycode>>, mut rx: Receiver<String>) {
let poll_wait = 1000 / hz;
let poll_wait = Duration::from_millis(poll_wait as u64);
tokio::task::spawn(async move {
let dev = DeviceState::new();
loop {
let mut keys = dev.query_keymap();
match keys.len() {
0 => (),
1 => tx.clone().try_send(Some(keys.remove(0))).unwrap(),
_ => println!("So many keys..."),
}
match timeout(poll_wait, rx.recv()).await {
Ok(cmd) => println!("Command '{}' received.", cmd.unwrap()),
_ => (),
};
// std::thread::sleep(poll_wait);
}
});
}
Note this does not compile - I get an error: future created by async block is not 'Send', because within 'impl Future', the trait 'Send' is not implemented for '*mut x11::xlib::_XDisplay'. My understanding of the error is that because device_query is not thread-safe, and a task may be moved to another thread at an .await point, nothing that is not 'Send' may be in scope across an await. And indeed, if I comment out the block around match timeout... and uncomment the std::thread::sleep, everything compiles and runs.
Which brings me back to the original question; how can I both send and receive messages in a single thread without using await or the apparently forbidden fruit of poll_recv()?
After much hunting I found noop_waker in the futures crate which appears to do what I need in combination with poll_recv:
use std::task::Poll;

pub fn start_device_loop(hz: u32, tx: Sender<Option<Keycode>>, mut rx: Receiver<String>) {
let poll_wait = 1000 / hz;
let poll_wait = Duration::from_millis(poll_wait as u64);
tokio::task::spawn_blocking(move || {
let dev = DeviceState::new();
let waker = futures::task::noop_waker();
let mut cx = std::task::Context::from_waker(&waker);
loop {
let mut keys = dev.query_keymap();
match keys.len() {
0 => (),
1 => tx.clone().try_send(Some(keys.remove(0))).unwrap(),
_ => println!("So many keys..."),
}
match rx.poll_recv(&mut cx) {
Poll::Ready(cmd) => println!("Command '{}' received.", cmd.unwrap()),
_ => ()
};
std::thread::sleep(poll_wait);
}
});
}
After digging through docs and tokio source more I can't find anything that suggests poll_recv is supposed to be an internal-only function or that using it here would have any obvious side effects. Letting the process run at 125hz I'm not seeing any excess resource usage either.
I'm leaving the above code for posterity, but since asking this question the try_recv method has been added to Receivers, making this all much cleaner.
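For reference, here is roughly what the loop looks like with try_recv instead of the noop_waker/poll_recv dance (a sketch, assuming a tokio version that has Receiver::try_recv and the same channel types as above):
use std::time::Duration;
use device_query::{DeviceQuery, DeviceState, Keycode};
use tokio::sync::mpsc::{Receiver, Sender};

pub fn start_device_loop(hz: u32, tx: Sender<Option<Keycode>>, mut rx: Receiver<String>) {
    let poll_wait = Duration::from_millis((1000 / hz) as u64);
    tokio::task::spawn_blocking(move || {
        let dev = DeviceState::new();
        loop {
            let mut keys = dev.query_keymap();
            match keys.len() {
                0 => (),
                1 => tx.try_send(Some(keys.remove(0))).unwrap(),
                _ => println!("So many keys..."),
            }
            // non-blocking check for pending commands; an Err here just means
            // nothing has arrived since the last poll
            if let Ok(cmd) = rx.try_recv() {
                println!("Command '{}' received.", cmd);
            }
            std::thread::sleep(poll_wait);
        }
    });
}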
I'm new to Rust and I'm trying to build a simple TCP socket server which listens for connections and replies with the same message it received.
The thing is, this works as I want except when connecting with multiple clients. The first client that connects can send and receive messages, but if a second client connects, the first one keeps working while the second never receives messages; in fact its messages never even reach the code that handles them. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket, with the same content as the last message it sent.
I am pretty sure I did something wrong in my code, but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;
#[derive(Clone, Debug)]
pub struct Server {
id: Arc<RwLock<u32>>,
connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
url: String,
thread_pool: ThreadPool
}
impl Server {
pub fn new(url: String) -> Server {
let server = Server {
id: Arc::new(RwLock::new(0)),
connections: Arc::new(RwLock::new(HashMap::new())),
url,
thread_pool: ThreadPool::new(10)
};
server
}
pub fn start(&self) {
let listener = TcpListener::bind(&self.url).expect("Could not start the server");
println!("Server started succesfully");
for stream in listener.incoming() {
match stream {
Ok(stream) => {
let mut self_clone = self.clone();
self.thread_pool.execute(move || {
self_clone.on_client_connect(stream.try_clone().unwrap());
});
}
Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
}
}
}
fn on_client_connect(&mut self, stream: TcpStream) {
println!("Client connected from {}", stream.local_addr().unwrap());
let mut id = self.id.write().unwrap();
{
*id += 1;
}
self.connections
.write()
.unwrap()
.insert(*id, stream.try_clone().unwrap());
let mut stream = stream.try_clone().unwrap();
let mut buffer = [0; 1024];
while match stream.read(&mut buffer) {
Ok(size) => {
println!(
"Message received from {} - {}",
id,
str::from_utf8(&buffer).unwrap()
);
stream.write_all(&buffer[0..size]).unwrap();
true
}
Err(error) => {
println!(
"Error when reading message from socket. Error = {:?}",
error
);
stream.shutdown(Shutdown::Both).unwrap();
false
}
} { }
}
}
And in my main.rs I'm just creating the server and calling its start function, and the server starts working.
In this piece of code in your on_client_connect function, you're acquiring a write lock on self.id:
let mut id = self.id.write().unwrap();
{
*id += 1;
}
However, the id variable, which holds the lock, is not released until it drops at the end of the function. This means that all other clients will wait for this lock to be released, which won't happen until the function currently holding the lock has completed (which happens when that client disconnects).
You can solve this by rewriting the above code to only keep the lock while incrementing, and then storing the ID value in a variable:
let id: u32 = {
let mut id_lock = self.id.write().unwrap();
*id_lock += 1;
*id_lock
// id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};
struct Server {
id: Arc<AtomicU32>,
// ...
}
// Fetch previous value, then increment `self.id` by one, in a thread-safe and lock-free manner
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
Ok(0) => false, // handle connection closed...
Ok(size) => { /* ... */ }
Err(err) => { /* ... */ }
} {}
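Putting both points together, the read loop can also be written as a plain loop, which tends to read more naturally than the while match ... {} form (a sketch; handle_client is just an illustrative standalone helper, and it also echoes only the bytes that were actually read):
use std::io::{Read, Write};
use std::net::{Shutdown, TcpStream};
use std::str;

fn handle_client(id: u32, mut stream: TcpStream) {
    let mut buffer = [0u8; 1024];
    loop {
        match stream.read(&mut buffer) {
            Ok(0) => break, // client closed the connection cleanly
            Ok(size) => {
                // only the first `size` bytes are valid
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer[..size]).unwrap_or("<invalid utf-8>")
                );
                stream.write_all(&buffer[..size]).unwrap();
            }
            Err(error) => {
                println!("Error when reading message from socket. Error = {:?}", error);
                let _ = stream.shutdown(Shutdown::Both);
                break;
            }
        }
    }
}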
My server uses a Barrier to notify the client when it's safe to attempt to connect. Without the barrier, we risk failing randomly as there is no guarantee that the server socket would have been bound.
Now imagine that the server panics - for instance tries to bind the socket to port 80. The client will be left wait()-ing forever. We cannot join() the server thread in order to find out if it panicked, because join() is a blocking operation - if we join() we won't be able to connect().
What's the proper way to do this kind of synchronization, given that the std::sync APIs do not provide methods with timeouts?
This is just a MCVE to demonstrate the issue. I had a similar case in a unit test - it was left running forever.
use std::{
io::prelude::*,
net::{SocketAddr, TcpListener, TcpStream},
sync::{Arc, Barrier},
};
fn main() {
let port = 9090;
//let port = 80;
let barrier = Arc::new(Barrier::new(2));
let server_barrier = barrier.clone();
let client_sync = move || {
barrier.wait();
};
let server_sync = Box::new(move || {
server_barrier.wait();
});
server(server_sync, port);
//server(Box::new(|| { no_sync() }), port); //use to test without synchronisation
client(&client_sync, port);
//client(&no_sync, port); //use to test without synchronisation
}
fn no_sync() {
// do nothing in order to demonstrate the need for synchronization
}
fn server(sync: Box<dyn Fn() + Send + Sync>, port: u16) {
std::thread::spawn(move || {
std::thread::sleep(std::time::Duration::from_millis(100)); // there is no guarantee when the OS will schedule the thread; make the race 100% reproducible
let addr = SocketAddr::from(([127, 0, 0, 1], port));
let socket = TcpListener::bind(&addr).unwrap();
println!("server socket bound");
sync();
let (mut client, _) = socket.accept().unwrap();
client.write_all(b"hello mcve").unwrap();
});
}
fn client(sync: &dyn Fn(), port: u16) {
sync();
let addr = SocketAddr::from(([127, 0, 0, 1], port));
let mut socket = TcpStream::connect(&addr).unwrap();
println!("client socket connected");
let mut buf = String::new();
socket.read_to_string(&mut buf).unwrap();
println!("client received: {}", buf);
}
Instead of a Barrier I would use a Condvar here.
To actually solve your problem, I see at least three possible solutions:
Use Condvar::wait_timeout and set the timeout to a reasonable duration (e.g. 1 second which should be enough for binding to a port)
You could use the same method as above, but with a lower timeout (e.g. 10 msec) and check if the Mutex is poisoned.
Instead of a Condvar, you could use a plain Mutex (make sure that the Mutex is locked by the other thread first) and then use Mutex::try_lock to check if the Mutex is poisoned
I think one should prefer solution 1 or 2 over the third one, because then you avoid having to make sure that the other thread has locked the Mutex first. A minimal sketch of solution 1 is shown below.
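Here is roughly what solution 1 could look like (an untested sketch, assuming the same client/server split as the MCVE, and using the wait_timeout_while convenience method so spurious wakeups are handled): the server thread flips a flag under the Mutex once the socket is bound, and the client gives up after one second instead of waiting forever.
use std::sync::{Arc, Condvar, Mutex};
use std::time::Duration;

fn main() {
    let ready = Arc::new((Mutex::new(false), Condvar::new()));
    let server_ready = ready.clone();

    std::thread::spawn(move || {
        // ... bind the listener here; if bind() panics, the flag is never set ...
        let (lock, cvar) = &*server_ready;
        *lock.lock().unwrap() = true;
        cvar.notify_all();
    });

    let (lock, cvar) = &*ready;
    let (bound, timeout_result) = cvar
        .wait_timeout_while(lock.lock().unwrap(), Duration::from_secs(1), |bound| !*bound)
        .unwrap();
    if timeout_result.timed_out() || !*bound {
        panic!("server did not come up within 1 second");
    }
    // safe to connect() now
}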