I'm trying to implement a pool of 10 Redis connections using a SyncArbiter, for different actors to use. Say that we have an actor named Bob that has to use a Redis actor to accomplish its task.
This is achievable in the following manner:
// crate, use and mod statements have been omitted to lessen clutter
/// FILE main.rs
pub struct AppState {
pub redis: Addr<Redis>,
pub bob: Addr<Bob>
}
fn main() {
let system = actix::System::new("theatre");
server::new(move || {
let redis_addr = SyncArbiter::start(10, || Redis::new("redis://127.0.0.1").unwrap());
let bob_addr = SyncArbiter::start(10, || Bob::new());
let state = AppState {
redis: redis_addr,
bob: bob_addr
};
App::with_state(state).resource("/bob/eat", |r| {
r.method(http::Method::POST)
.with_async(controllers::bob::eat)
})
})
.bind("0.0.0.0:8080")
.unwrap()
.start();
println!("Server started.");
system.run();
}
/// FILE controllers/bob.rs
pub struct Food {
name: String,
kcal: u64
}
pub fn eat(
(req, state): (Json<Food>, State<AppState>),
) -> impl Future<Item = HttpResponse, Error = Error> {
state
.bob
.send(Eat::new(req.into_inner()))
.from_err()
.and_then(|res| match res {
Ok(val) => {
println!("==== BODY ==== {:?}", val);
Ok(HttpResponse::Ok().into())
}
Err(_) => Ok(HttpResponse::InternalServerError().into()),
})
}
/// FILE actors/redis.rs
#[derive(Debug)]
pub struct Redis {
pub client: Client
}
pub struct RunCommand(Cmd);
impl RunCommand {
pub fn new(cmd: Cmd) -> Self {
RunCommand(cmd)
}
}
impl Message for RunCommand {
type Result = Result<RedisResult<String>, ()>;
}
impl Actor for Redis {
type Context = SyncContext<Self>;
}
impl Handler<RunCommand> for Redis {
type Result = Result<RedisResult<String>, ()>;
fn handle(&mut self, msg: RunCommand, _context: &mut Self::Context) -> Self::Result {
println!("Redis received command!");
Ok(Ok("OK".to_string()))
}
}
impl Redis {
pub fn new(url: &str) -> Result<Self, RedisError> {
let client = match Client::open(url) {
Ok(client) => client,
Err(error) => return Err(error)
};
let redis = Redis {
client: client,
};
Ok(redis)
}
}
/// FILE actors/bob.rs
pub struct Bob;
pub struct Eat(Food);
impl Message for Eat {
type Result = Result<(), ()>;
}
impl Actor for Bob {
type Context = SyncContext<Self>;
}
impl Handler<Eat> for Bob {
type Result = Result<(), ()>;
fn handle(&mut self, msg: Eat, _context: &mut Self::Context) -> Self::Result {
println!("Bob received {:?}", &msg);
// How to get a Redis actor and pass data to it here?
Ok(())
}
}
impl Bob {
pub fn new() -> Bob {
Bob {}
}
}
From the above handle implementation in Bob, it's unclear how Bob could get the address of a Redis actor, or send any message to any actor running in a SyncArbiter.
The same could be achieved using a regular Arbiter and a Registry, but, as far as I'm aware, Actix doesn't allow multiple actors of the same type in a regular Arbiter (e.g. we can't start 10 Redis actors that way).
To formalize my questions:
Is there a Registry for SyncArbiter actors?
Can I start multiple actors of the same type in a regular Arbiter?
Is there a better / more canonical way to implement a connection pool?
EDIT
Versions:
actix 0.7.9
actix_web 0.7.19
futures = "0.1.26"
rust 1.33.0
I found the answer myself.
Out of the box there is no way for an actor with a SyncContext to be retrieved from the registry.
Given my above example: for the actor Bob to send any kind of message to the Redis actor, it needs to know the Redis actor's address. Bob can get that address explicitly - either contained in a message sent to it, or read from some kind of shared state.
I wanted a system similar to Erlang's, so I decided against passing actor addresses around in messages: it seemed too laborious and error-prone, and in my mind it defeats the purpose of having an actor-based concurrency model (since an arbitrary actor can no longer message any other actor).
Therefore I investigated the idea of shared state, and decided to implement my own SyncRegistry as an analog to the standard Actix Registry - which does exactly what I want, but not for actors with a SyncContext.
Here is the naive solution I coded up: https://gist.github.com/monorkin/c463f34764ab23af2fd0fb0c19716177
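In outline, the registry is nothing more than a process-global, type-keyed map that holds one address per actor type. The sketch below is only an illustration of that idea, not the gist's actual code; the names and the Arc<Mutex<Addr<A>>> wrapper simply mirror the usage shown further down.

// Illustrative sketch of a SyncRegistry: a global map from actor type to its Addr.
use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::marker::PhantomData;
use std::sync::{Arc, Mutex};

use actix::{Actor, Addr};
use lazy_static::lazy_static;

lazy_static! {
    // TypeId of the actor -> boxed Arc<Mutex<Addr<A>>> for that actor type.
    static ref SYNC_REGISTRY: Mutex<HashMap<TypeId, Box<dyn Any + Send>>> =
        Mutex::new(HashMap::new());
}

pub struct SyncRegistry<A: Actor> {
    _marker: PhantomData<A>,
}

impl<A: Actor> SyncRegistry<A>
where
    Addr<A>: Send,
{
    /// Remember the address of a started actor so any thread can look it up later.
    pub fn set(addr: Addr<A>) {
        SYNC_REGISTRY
            .lock()
            .unwrap()
            .insert(TypeId::of::<A>(), Box::new(Arc::new(Mutex::new(addr))));
    }

    /// Fetch the previously stored address for this actor type, if any.
    pub fn get() -> Option<Arc<Mutex<Addr<A>>>> {
        SYNC_REGISTRY
            .lock()
            .unwrap()
            .get(&TypeId::of::<A>())
            .and_then(|boxed| boxed.downcast_ref::<Arc<Mutex<Addr<A>>>>())
            .cloned()
    }
}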
With the following setup:
fn main() {
let system = actix::System::new("theatre");
let addr = SyncArbiter::start(10, || Redis::new("redis://redis").unwrap());
SyncRegistry::set(addr);
let addr = SyncArbiter::start(10, || Bob::new());
SyncRegistry::set(addr);
server::new(move || {
let state = AppState {};
App::with_state(state).resource("/foo", |r| {
r.method(http::Method::POST)
.with_async(controllers::foo::create)
})
})
.bind("0.0.0.0:8080")
.unwrap()
.start();
println!("Server started.");
system.run();
}
The actor Bob can get the address of Redis in the following manner, from any point in the program:
impl Handler<Eat> for Bob {
type Result = Result<(), ()>;
fn handle(&mut self, msg: Eat, _context: &mut Self::Context) -> Self::Result {
let redis = match SyncRegistry::<Redis>::get() {
Some(redis) => redis,
_ => return Err(())
};
let cmd = redis::cmd("XADD")
.arg("things_to_eat")
.arg("*")
.arg("data")
.arg(&msg.0)
.to_owned();
redis.clone().lock().unwrap().send(RunCommand::new(cmd)).wait().unwrap();
Ok(())
}
}
I am using actix-web to run a webserver and want to be able to mutate state through websocket messages.
My current way of using websockets is by implementing the handle method from actix::StreamHandler. However, this limits my ability to pass data to it. How can I access the data (actix_web::web::Data) in my handle method?
The only way I can think of to solve this is to somehow change the function signature of handle, but that doesn't seem possible.
Here are some important code snippets; we have app_name and nonces in app_data:
// main.rs
let nonces = Arc::new(Mutex::new(nonces::Nonces::new()));
HttpServer::new(move || {
App::new()
.app_data(web::Data::new(app_data::AppData {
app_name: String::from("Actix Web"),
nonces: Arc::clone(&nonces),
}))
...
// app_data.rs
pub struct AppData {
pub app_name: String,
pub nonces: Arc<Mutex<nonces::Nonces>>,
}
// ws.rs
struct Ws {
app_data: web::Data<app_data::AppData>,
}
impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for Ws {
fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
let app_name = &self.app_data.app_name;
let mut nonces = self.app_data.nonces.lock().unwrap();
println!(">>> {app_name}");
println!(">>> {:?}", nonces.nonces); // I have a nonces data in nonces
...
}
}
async fn index(
req: HttpRequest,
stream: web::Payload,
app_data: web::Data<app_data::AppData>,
) -> Result<HttpResponse, Error> {
ws::start(Ws { app_data: app_data.clone() }, &req, stream)
}
For threaded applications, the Rust standard library provides std::sync::mpsc::sync_channel, a buffered channel which blocks on the reading end when the buffer is empty and blocks on the writing end when the buffer is full. In particular, if you set the buffer size to 0, then any read or write will block until there is a matching write or read.
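For example, with a buffer size of 0 the send below cannot complete until the matching recv runs (a minimal, self-contained illustration):

use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;

fn main() {
    // A rendezvous channel: capacity 0 means every send waits for a recv.
    let (tx, rx) = sync_channel::<u32>(0);
    let sender = thread::spawn(move || {
        println!("sending...");
        tx.send(42).unwrap(); // blocks here until the recv below is reached
        println!("send completed");
    });
    // Delay only to make the blocking visible in the output.
    thread::sleep(Duration::from_millis(100));
    println!("got {}", rx.recv().unwrap());
    sender.join().unwrap();
}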
For async code, there is futures::channel::mpsc::channel, but this does not have the same behavior. Here the minimum capacity is the number of senders on the channel, which is greater than 0. The sending end can still block (because it implements Sink, so we can use SinkExt::send and await it), but only after there's at least one thing already in the buffer.
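To make the difference concrete, here is a small sketch using the futures crate: with channel(0) and a single sender, the first send completes even though the receiver has not been polled at all, because the effective capacity is buffer + number of senders = 1.

use futures::channel::mpsc;
use futures::executor::block_on;
use futures::SinkExt;

fn main() {
    let (mut tx, _rx) = mpsc::channel::<u32>(0);
    // Completes immediately: each sender gets one guaranteed slot,
    // so the item is buffered without any matching recv.
    block_on(tx.send(1)).unwrap();
    println!("first send finished without a receiver running");
}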
I took a look to see if there were any packages that provide the functionality I'm looking for, but I could not find anything. Tokio provides lots of nice async synchronization primitives, but none of them did quite what I'm looking for. Plus, my program is going to run in the browser, so I don't think I'm able to use a runtime like Tokio. Does anyone know of a package that fits my use case? I would try to implement this myself, since this almost feels like the most minimal use case for the Sink and Stream traits, but even a minimal implementation of these traits seems like it would be really complicated. Thoughts?
Edit: here's a minimal example of what I mean:
// Hypothetical API: blocking_channel() is the rendezvous channel I'm after.
fn main() {
let (tx, rx) = blocking_channel();
let ft = async move {
tx.send(3).await;
println!("message sent and received");
};
let fr = async move {
let msg = rx.recv().await;
println!("received {}", msg);
};
block_on(async { join!(ft, fr) });
}
In this example, whichever future runs first will yield to the other, and only print after both rx.recv and tx.send have been called. Obviously, the receiving end can only progress after tx.send has been called, but I want the less obvious behavior of the transmitting end also having to wait.
Interesting question. I don't think something like that already exists.
Here is a quick proof-of-concept prototype I wrote for that. It's not the prettiest, but it seems to work. There might be a better struct layout than just wrapping everything in a RefCell<Option<...>>, though. And I don't particularly like the sender_dropped and receiver_dropped variables.
Be sure to unit test it properly if used in production!!!
extern crate alloc;
use alloc::rc::Rc;
use core::cell::RefCell;
use core::pin::Pin;
use core::task::{Poll, Waker};
use futures::SinkExt;
use futures::StreamExt;
struct Pipe<T> {
send_waker: RefCell<Option<Waker>>,
receive_waker: RefCell<Option<Waker>>,
value: RefCell<Option<T>>,
sender_dropped: RefCell<bool>,
receiver_dropped: RefCell<bool>,
}
impl<T> Pipe<T> {
fn new() -> Rc<Pipe<T>> {
Rc::new(Self {
value: RefCell::new(None),
send_waker: RefCell::new(None),
receive_waker: RefCell::new(None),
sender_dropped: RefCell::new(false),
receiver_dropped: RefCell::new(false),
})
}
}
impl<T> Pipe<T> {
fn wake_sender(&self) {
if let Some(waker) = self.send_waker.replace(None) {
waker.wake();
}
}
fn wake_receiver(&self) {
if let Some(waker) = self.receive_waker.replace(None) {
waker.wake();
}
}
}
pub struct PipeSender<T> {
pipe: Rc<Pipe<T>>,
}
pub struct PipeReceiver<T> {
pipe: Rc<Pipe<T>>,
}
pub fn create_pipe<T>() -> (PipeSender<T>, PipeReceiver<T>) {
let pipe = Pipe::new();
(PipeSender { pipe: pipe.clone() }, PipeReceiver { pipe })
}
impl<T> futures::Sink<T> for PipeSender<T> {
type Error = ();
fn poll_ready(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
let result = if *self.pipe.receiver_dropped.borrow() {
Poll::Ready(Err(()))
} else if self.pipe.receive_waker.borrow().is_some() && self.pipe.value.borrow().is_none() {
Poll::Ready(Ok(()))
} else {
self.pipe.send_waker.replace(Some(cx.waker().clone()));
Poll::Pending
};
// Wake potential receiver
self.pipe.wake_receiver();
result
}
fn start_send(self: Pin<&mut Self>, item: T) -> Result<(), Self::Error> {
let prev = self.pipe.value.replace(Some(item));
assert!(prev.is_none(), "A value got lost in the pipe.");
Ok(())
}
fn poll_flush(
self: Pin<&mut Self>,
_: &mut futures::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
// Noop, start_send already completes the send
Poll::Ready(Ok(()))
}
fn poll_close(
self: Pin<&mut Self>,
_: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
// Noop, start_send already completes the send
Poll::Ready(Ok(()))
}
}
impl<T> futures::Stream for PipeReceiver<T> {
type Item = T;
fn poll_next(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Option<Self::Item>> {
let result = {
let value = self.pipe.value.replace(None);
if let Some(value) = value {
Poll::Ready(Some(value))
} else if *self.pipe.sender_dropped.borrow() {
Poll::Ready(None)
} else {
self.pipe.receive_waker.replace(Some(cx.waker().clone()));
Poll::Pending
}
};
// Wake potential sender
self.pipe.wake_sender();
result
}
}
impl<T> Drop for PipeSender<T> {
fn drop(&mut self) {
self.pipe.sender_dropped.replace(true);
self.pipe.wake_receiver();
}
}
impl<T> Drop for PipeReceiver<T> {
fn drop(&mut self) {
self.pipe.receiver_dropped.replace(true);
self.pipe.wake_sender();
}
}
#[tokio::main]
async fn main() {
use std::time::Duration;
let (mut sender, mut receiver) = create_pipe();
tokio::join!(
async move {
for i in 0..5u32 {
println!("Sending {i} ...");
if let Err(_) = sender.send(i).await {
println!("Stream closed.");
break;
}
println!("Sent {i}.");
}
println!("Sender closed.");
},
async move {
println!("Attempting to receive ...");
while let Some(val) = receiver.next().await {
println!("Received: {val}");
println!("\n=== Waiting ... ===\n");
tokio::time::sleep(Duration::from_secs(1)).await;
println!("Attempting to receive ...");
}
println!("Receiver closed.");
}
);
}
Sending 0 ...
Attempting to receive ...
Sent 0.
Sending 1 ...
Received: 0
=== Waiting ... ===
Attempting to receive ...
Sent 1.
Sending 2 ...
Received: 1
=== Waiting ... ===
Attempting to receive ...
Sent 2.
Sending 3 ...
Received: 2
=== Waiting ... ===
Attempting to receive ...
Sent 3.
Sending 4 ...
Received: 3
=== Waiting ... ===
Attempting to receive ...
Sent 4.
Sender closed.
Received: 4
=== Waiting ... ===
Attempting to receive ...
Receiver closed.
I'm quite new to Rust and Actix, but I'm trying to build a technology prototype where a server sends protobuf messages to clients via websockets. The protobuf part works and is not a problem, but I'm struggling with the websocket part.
I've tried to modify the official example of Actix-Websockets with the Actix-Broker (Chat-Broker-Example), but I'm having a hard time debugging it (not being familiar with VSCode, but that's another story).
The effect is that the broker instance is not started and does not receive any messages. If I start the server manually via a supervisor (the example does not need to do that), it still won't receive any messages.
Question:
Does anybody have any idea why the broker won't start automatically or doesn't receive the messages?
Do I have any fundamental misunderstandings?
Does anybody have any idea how to make the program work?
The code is uploaded publicly at github (GitHub Repository).
For your convenience I'll add the ws_client, ws_server and main.rs files below.
The changes I've made compared to the example are:
Removed #[derive(Default)] from WsChatServer and implemented it myself
Wrapped the WsChatServer rooms in Arc and RwLock to ensure memory safety. (Needs to be overhauled)
Removed Message ListRooms and corresponding functions
I'd highly appreciate any help, tips or suggestions!
ws_server.rs:
use std::{collections::HashMap, sync::{Arc, RwLock}};
use actix::prelude::*;
use actix_broker::BrokerSubscribe;
use crate::{messages::{ChatMessage, JoinRoom, LeaveRoom, SendMessage}};
type Client = Recipient<ChatMessage>;
type Room = HashMap<usize, Client>;
#[derive(Clone)]
pub struct WsChatServer {
rooms: Arc<RwLock<HashMap<String, Room>>>,
}
lazy_static! {
static ref ROOMS: Arc<RwLock<HashMap<String, Room>>> = Arc::new(RwLock::new(Default::default()));
}
impl Default for WsChatServer {
fn default() -> Self {
let ws = WsChatServer { rooms: ROOMS.clone() };
return ws;
}
}
impl SystemService for WsChatServer {}
impl Supervised for WsChatServer {}
impl WsChatServer {
pub fn create_room(room_name: &str) {
let mut rooms = match ROOMS.write() {
Ok(rooms) => rooms,
Err(err) => {
log::debug!("Error while requesting write lock. Error was: {}", err);
return;
},
};
if !rooms.contains_key(room_name) {
let room: HashMap<usize, Client> = HashMap::new();
rooms.insert(room_name.to_string(), room);
}
}
fn take_room(&mut self, room_name: &str) -> Option<Room> {
let mut guard = match self.rooms.write() {
Ok(guard) => guard,
Err(err) => {
log::debug!("Error waiting for write lock. Error was: {}", err);
return None;
},
};
let room = match guard.get_mut(room_name){
Some(room) => room,
None => {
log::debug!("Failed to get mutable reference of RW Guard");
return None;
},
};
let room = std::mem::take(room);
Some(room)
}
fn add_client_to_room(&mut self, room_name: &str, id: Option<usize>, client: Client) -> usize {
log::info!("In add_client_to_room Handler. Adding Client to room: {}", room_name);
let mut id = id.unwrap_or_else(rand::random::<usize>);
if let Some(room) = self.rooms.write().unwrap().get_mut(room_name) {
loop {
if room.contains_key(&id) {
id = rand::random::<usize>();
} else {
break;
}
}
room.insert(id, client);
return id;
}
// Create a new room for the first client
let mut room: Room = HashMap::new();
room.insert(id, client);
self.rooms.write().unwrap().insert(room_name.to_owned(), room);
id
}
pub fn send_chat_message(&mut self, room_name: &str, msg: &str, _src: usize) -> Option<()> {
let mut room = match self.take_room(room_name) {
Some(room) => room,
None => {
log::debug!("Error, could not take room.");
return None;
},
};
for (id, client) in room.drain() {
if client.try_send(ChatMessage(msg.to_owned())).is_ok() {
self.add_client_to_room(room_name, Some(id), client);
}
}
Some(())
}
}
impl Actor for WsChatServer {
type Context = Context<Self>;
fn started(&mut self, ctx: &mut Self::Context) {
log::info!("WsChatServer has started.");
self.subscribe_system_async::<LeaveRoom>(ctx);
self.subscribe_system_async::<SendMessage>(ctx);
}
}
impl Handler<JoinRoom> for WsChatServer {
type Result = MessageResult<JoinRoom>;
fn handle(&mut self, msg: JoinRoom, _ctx: &mut Self::Context) -> Self::Result {
log::info!("In Join Room Handler.");
let JoinRoom(room_name, client) = msg;
let id = self.add_client_to_room(&room_name, None, client);
MessageResult(id)
}
}
impl Handler<LeaveRoom> for WsChatServer {
type Result = ();
fn handle(&mut self, msg: LeaveRoom, _ctx: &mut Self::Context) {
log::info!("Removing ws client from room.");
if let Some(room) = self.rooms.write().unwrap().get_mut(&msg.0) {
room.remove(&msg.1);
}
}
}
impl Handler<SendMessage> for WsChatServer {
type Result = ();
fn handle(&mut self, msg: SendMessage, _ctx: &mut Self::Context) {
let SendMessage(room_name, id, msg) = msg;
self.send_chat_message(&room_name, &msg, id);
}
}
ws_client.rs:
use actix::{Actor, ActorContext, StreamHandler, Handler, SystemService, AsyncContext, WrapFuture, ActorFutureExt, fut, ContextFutureSpawner};
use actix_web_actors::ws;
use actix_broker::BrokerIssue;
use crate::messages::{ChatMessage, LeaveRoom, JoinRoom};
use crate::ws_server::WsChatServer;
pub struct WsConn {
room: String,
id: usize,
}
impl WsConn {
pub fn new(room: &str) -> WsConn {
WsConn {
room: room.to_string(),
id: rand::random::<usize>(),
}
}
pub fn join_room(&mut self, room_name: &str, ctx: &mut ws::WebsocketContext<Self>) {
let room_name = room_name.to_owned();
// First send a leave message for the current room
let leave_msg = LeaveRoom(self.room.clone(), self.id);
// issue_sync comes from having the `BrokerIssue` trait in scope.
self.issue_system_sync(leave_msg, ctx);
log::info!("Ws client sent leave msg.");
// Then send a join message for the new room
let join_msg = JoinRoom(
room_name.to_owned(),
ctx.address().recipient(),
);
WsChatServer::from_registry()
.send(join_msg)
.into_actor(self)
.then(move |id, act, _ctx| {
if let Ok(id) = id {
act.id = id;
act.room = room_name.clone().to_string();
}
fut::ready(())
})
.wait(ctx);
log::info!("Ws client sent join msg.");
}
}
impl Actor for WsConn {
type Context = ws::WebsocketContext<Self>;
fn started(&mut self, ctx: &mut Self::Context) {
log::info!("ws client started.");
self.join_room(self.room.to_owned().as_str(), ctx);
}
fn stopped(&mut self, _ctx: &mut Self::Context) {
log::info!(
"WsConn closed for {} in room {}",
self.id,
self.room
);
}
}
impl Handler<ChatMessage> for WsConn {
type Result = ();
fn handle(&mut self, msg: ChatMessage, ctx: &mut Self::Context) {
ctx.text(msg.0);
}
}
impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for WsConn {
fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
let msg = match msg {
Err(_) => {
ctx.stop();
return;
}
Ok(msg) => msg,
};
log::debug!("WEBSOCKET MESSAGE: {:?}", msg);
match msg {
ws::Message::Text(_) => (),
ws::Message::Close(reason) => {
ctx.close(reason);
ctx.stop();
}
_ => {}
}
}
}
main.rs:
use std::time::Duration;
use std::{env, thread::sleep};
use std::fs::File;
use std::io::Write;
use actix_web::{App, HttpServer, middleware::Logger};
use actix_cors::Cors;
use tokio::task;
#[macro_use]
extern crate lazy_static;
mod protobuf_messages;
mod actions;
mod data;
mod ws_clients;
mod messages;
mod ws_server;
pub async fn write_to_file(buf: &[u8], file_name: &str) -> Result<(), std::io::Error> {
let dir = env::current_dir().unwrap();
let file_handler = dir.join(file_name);
let mut file = File::create(file_handler).unwrap();
file.write_all(buf)
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
std::env::set_var("RUST_LOG", "debug");
env_logger::init();
data::MAIN_CONFIG.version = "1.0".to_string();
data::MAIN_CONFIG.mqtt_broker_address = "test".to_string();
data::MAIN_CONFIG.wildcard = "#".to_string();
data::MAIN_CONFIG.splitting_character = ".".to_string();
data::MAIN_CONFIG.active_configs = [].to_vec();
let ws_chat_server = ws_server::WsChatServer::default();
let mut ws_chat_server_2 = ws_chat_server.clone();
actix::Supervisor::start(|_| ws_chat_server);
ws_server::WsChatServer::create_room("Main");
let msg = serde_json::to_string_pretty(&data::MAIN_CONFIG).expect("Expected parsable string");
task::spawn(async move {
loop {
match ws_chat_server_2.send_chat_message("Main", &msg.clone(), 0) {
Some(()) => (),//log::debug!("Got a result from sending chat message"),
None => (),//log::debug!("Got no result from sending chat message"),
}
sleep(Duration::from_secs(1));
}
});
HttpServer::new(move|| App::new()
.wrap(Logger::default())
.wrap(Cors::default().allow_any_origin().allow_any_header().allow_any_method())
.service(actions::get_json)
.service(actions::get_protobuf)
.service(actions::start_ws_connection)
)
.bind(("127.0.0.1", 3000))?
.workers(2)
.run().await
}
I finally figured out a way to solve the problem.
First of all, I could not get the broker example to work.
My issue was that I tried to externally send a message to all ws clients via the ws server broker. To do that you need to get the Addr of the server. If you obtain it the same way the actor clients receive the lobby address, with from_registry, the result seems to be a different server instance, and therefore the messages were not sent to the clients.
I could not find a way to get the same broker Addr returned, so I switched to the actix websocket example called "chat-tcp". In that example a WS server actor is created in the main function and then added as app data to all requests; in the request handler the clients are set up with the server Addr.
To make sure I could send to the same broker, I wrapped the Addr in an Arc and RwLock and unwrapped it again in the request handler. When I started a tokio green thread that sends messages to the ws server (and moved an Arc clone into its scope), it worked.
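Roughly, the wiring looks like the hypothetical sketch below. It is not the actual repo code: ChatServer, Broadcast and the /send route are made-up stand-ins for WsChatServer, SendMessage and the real handlers, and it clones the Addr directly instead of wrapping it in Arc/RwLock, which works because Addr is itself cheap to clone and Send.

use std::time::Duration;

use actix::prelude::*;
use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};

#[derive(Message)]
#[rtype(result = "()")]
struct Broadcast(String);

#[derive(Default)]
struct ChatServer;

impl Actor for ChatServer {
    type Context = Context<Self>;
}

impl Handler<Broadcast> for ChatServer {
    type Result = ();
    fn handle(&mut self, msg: Broadcast, _ctx: &mut Self::Context) {
        // The real server would fan this out to the connected ws clients.
        println!("broadcasting: {}", msg.0);
    }
}

#[get("/send")]
async fn send(srv: web::Data<Addr<ChatServer>>) -> impl Responder {
    // Request handlers talk to the one server instance through the shared Addr.
    srv.do_send(Broadcast("hello from a request".into()));
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Exactly one server actor, started in main; keep its address.
    let addr = ChatServer::default().start();

    // Background task pushing messages through a clone of the same Addr.
    let push = addr.clone();
    actix_web::rt::spawn(async move {
        loop {
            push.do_send(Broadcast("tick".into()));
            actix_web::rt::time::sleep(Duration::from_secs(1)).await;
        }
    });

    HttpServer::new(move || {
        App::new()
            // Every worker/request sees the same Addr via web::Data.
            .app_data(web::Data::new(addr.clone()))
            .service(send)
    })
    .bind(("127.0.0.1", 3000))?
    .run()
    .await
}

The key point is that every handle on the server is a clone of the single Addr created in main, so all messages land in the same mailbox.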
Does anyone know why the registered broker addresses were different?
I'm new to Rust and I'm trying to create a Server struct which listens on an address and accepts TCP connections. The problem is that I want to store the client connections inside a hash map so I can use them later.
I tried writing this:
use std::collections::HashMap;
use std::net::TcpListener;
use std::net::TcpStream;
use std::sync::{Arc, RwLock};
use std::thread;
#[derive(Clone, Debug)]
pub struct Server {
id: Arc<RwLock<u32>>,
connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
url: String,
}
impl Server {
pub fn new(url: String) -> Server {
let server = Server {
id: Arc::new(RwLock::new(0)),
connections: Arc::new(RwLock::new(HashMap::new())),
url,
};
server
}
pub fn start(&self) {
thread::spawn(move || {
let mut listener =
TcpListener::bind(self.clone().url).expect("Could not start the server");
println!("Server started succesfully");
for stream in listener.incoming() {
match stream {
Ok(stream) => self.on_client_connect(stream),
Err(error) => eprintln!("Error when tried to use stream"),
}
}
});
}
fn on_client_connect(&mut self, stream: TcpStream) {
let id = self.id.read().unwrap();
self.connections.read().unwrap().insert(id, stream);
let id = self.id.write().unwrap();
*id += 1;
}
}
But of course this doesn't work. There are two things that I don't understand: first, how to pass the stream into my function and then store it in my connections hash map so I can use it later, and second, how to use my id inside my on_client_connect function.
You need to clone outside of thread::spawn and move the cloned instance into the thread's scope.
Also, on_client_connect does not need &mut self because the fields id and connections are already protected by the RwLocks.
use std::net::TcpListener;
use std::net::TcpStream;
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;
#[derive(Clone, Debug)]
pub struct Server {
id: Arc<RwLock<u32>>,
connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
url: String,
}
impl Server {
pub fn new(url: String) -> Server {
let server = Server {
id: Arc::new(RwLock::new(0)),
connections: Arc::new(RwLock::new(HashMap::new())),
url,
};
server
}
pub fn start(&self) {
let me = self.clone(); // Clone it outside
thread::spawn(move || {
let mut listener = TcpListener::bind(&me.url).expect("Could not start the server");
println!("Server started succesfully");
for stream in listener.incoming() {
match stream {
Ok(stream) => me.on_client_connect(stream),
Err(error) => eprintln!("Error when tried to use stream"),
}
}
});
}
fn on_client_connect(&self, stream: TcpStream) { // `&mut self` not needed as the id, connection are inside the lock
let mut id = self.id.write().unwrap();
self.connections.write().unwrap().insert(*id, stream);
*id += 1;
}
}
There are quite a few issues that need minor fixes in this code.
The first one I ran into was the usage of self in the thread::spawn closure.
thread::spawn needs its argument to have a 'static lifetime, but we have no guarantee that the Server object lives that long.
I solved it by cloning the Server object and moving that into the closure. This is OK as all its data is already behind Arcs.
The next problem was that self.connections.read().unwrap().insert(*id, stream); needs to take a write lock, not a read lock.
Finally, id += 1 needs to dereference id, i.e. *id += 1.
Once these were fixed, it seems that storing the TcpStream is not an issue (at least using nightly). I'd thought I'd need to box the TcpStream, but it seems OK as is.
You can see that it compiles in the playground
use std::collections::HashMap;
use std::net::TcpListener;
use std::net::TcpStream;
use std::sync::{Arc, RwLock};
use std::thread;
#[derive(Clone, Debug)]
pub struct Server {
id: Arc<RwLock<u32>>,
connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
url: String,
}
impl Server {
pub fn new(url: String) -> Server {
let server = Server {
id: Arc::new(RwLock::new(0)),
connections: Arc::new(RwLock::new(HashMap::new())),
url,
};
server
}
pub fn start(&self) {
let mut self_clone = self.clone();
thread::spawn(move || {
let mut listener =
TcpListener::bind(&self_clone.url).expect("Could not start the server");
println!("Server started succesfully");
for stream in listener.incoming() {
match stream {
Ok(stream) => self_clone.on_client_connect(stream),
Err(error) => eprintln!("Error when tried to use stream"),
}
}
});
}
fn on_client_connect(&mut self, stream: TcpStream) {
// Take the write lock on `id` once up front: taking a read lock and then a
// write lock on the same RwLock from the same thread can deadlock (or panic).
let mut id = self.id.write().unwrap();
self.connections.write().unwrap().insert(*id, stream);
*id += 1;
}
}
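For completeness, here is a hypothetical way to drive this Server (the bind address is made up; start() returns immediately because the listener runs on its own thread):

fn main() {
    let server = Server::new("127.0.0.1:7878".to_string());
    server.start();

    // Keep the main thread alive so the listener thread can keep accepting clients.
    loop {
        std::thread::park();
    }
}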
I am trying to make this code snippet run concurrently instead of sequentially, since the number of peers can be large. I am using async_std 1.4 and Rust 1.41.
pub struct Peer {
pub peer_id: String,
pub tcp_stream: Arc<TcpStream>,
pub public_key: [u8; 32],
}
async fn send_to_all_peers(message: Protocol, peers: &HashMap<String,Peer>) -> Result<()> {
for peer in peers.values() {
let mut stream = &*peer.tcp_stream;
stream.write_all(&bincode::serialize(&message)?).await?;
}
Ok(())
}
I've tried to use the futures::future::join_all method without any luck, since the wrapping future I created and used within async_std::task::spawn requires a 'static lifetime. Here is what I tried:
async fn send_to_all_peers(message: Protocol, peers: &HashMap<String,Peer>) {
let handles = peers.values().into_iter().map(|peer| {
task::spawn(
async {
let mut stream = &*peer.tcp_stream;
if let Err(err) = stream
.write_all(&bincode::serialize(&message).unwrap())
.await
{
error!("Error when writing to tcp_stream: {}", err);
}
}
)
});
futures::future::join_all(handles).await;
}
I'm sure there is some method I am missing, thanks for any help!
Since you are trying to send the message concurrently, each task has to have its own copy of it:
use async_std::{task, net::TcpStream};
use futures::{future, io::AsyncWriteExt};
use serde::Serialize;
use std::{
collections::HashMap,
error::Error,
sync::Arc,
};
pub struct Peer {
pub peer_id: String,
pub tcp_stream: Arc<TcpStream>,
pub public_key: [u8; 32],
}
#[derive(Serialize)]
struct Protocol;
async fn send_to_all_peers(
message: Protocol,
peers: &HashMap<String, Peer>)
-> Result<(), Box<dyn Error>>
{
let msg = bincode::serialize(&message)?;
let handles = peers.values()
.map(|peer| {
let msg = msg.clone();
let socket = peer.tcp_stream.clone();
task::spawn(async move {
let mut socket = &*socket;
socket.write_all(&msg).await
})
});
future::try_join_all(handles).await?;
Ok(())
}
Have you tried something like the following?
// Serialize once and share it by reference; each future borrows `peer` and `msg`,
// and nothing needs to be 'static because nothing is spawned.
let msg = bincode::serialize(&message).unwrap();
let handles = peers.values().map(|peer| {
let msg = &msg;
async move {
let mut stream = &*peer.tcp_stream;
stream.write_all(msg).await
}
});
let results = futures::future::join_all(handles).await;
Notice how the .map closure doesn’t await, but straight up returns a future, which is then passed to join_all, and then awaited.