I'm new to Rust and I'm trying to create a Server struct that listens on an address and accepts TCP socket connections. The problem is that I want to store each client connection inside a hash map so I can use it later.
I tried writing this:
use std::collections::HashMap;
use std::net::TcpListener;
use std::net::TcpStream;
use std::sync::{Arc, RwLock};
use std::thread;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
        };
        server
    }

    pub fn start(&self) {
        thread::spawn(move || {
            let mut listener =
                TcpListener::bind(self.clone().url).expect("Could not start the server");
            println!("Server started successfully");
            for stream in listener.incoming() {
                match stream {
                    Ok(stream) => self.on_client_connect(stream),
                    Err(error) => eprintln!("Error when tried to use stream"),
                }
            }
        });
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        let id = self.id.read().unwrap();
        self.connections.read().unwrap().insert(id, stream);
        let id = self.id.write().unwrap();
        *id += 1;
    }
}
But of course this doesn't work. There are two things that I don't understand: first, how to pass the stream into my function and store it in my connections hash map so I can use it later, and second, how to use my id inside my on_client_connect function.
You need to clone self outside of thread::spawn and move the cloned instance into the thread's scope.
Also, on_client_connect does not need &mut self, because the fields id and connections are already protected by the RwLock.
use std::net::TcpListener;
use std::net::TcpStream;
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
        };
        server
    }

    pub fn start(&self) {
        let me = self.clone(); // Clone it outside
        thread::spawn(move || {
            let mut listener = TcpListener::bind(&me.url).expect("Could not start the server");
            println!("Server started successfully");
            for stream in listener.incoming() {
                match stream {
                    Ok(stream) => me.on_client_connect(stream),
                    Err(error) => eprintln!("Error when tried to use stream"),
                }
            }
        });
    }

    fn on_client_connect(&self, stream: TcpStream) { // `&mut self` not needed as the id, connections are inside the lock
        let mut id = self.id.write().unwrap();
        self.connections.write().unwrap().insert(*id, stream);
        *id += 1;
    }
}
playground
There are quite a few issues that need minor fixes in this code.
The first one I ran into was the usage of self in the thread::spawn closure.
The thread::spawn needs its argument to have static lifetime, but we've no guarantee that the Server object lives that long.
I solved it by cloning the Server object and moving that into the closure. This is OK as all its data is already behind Arcs.
The next problem was that self.connections.read().unwrap().insert(*id, stream); needs to take a write lock, not a read lock.
Finally, id += 1 needs to dereference id (*id += 1). While doing that, it's simplest to take the write lock on id just once in on_client_connect, since requesting the write lock while still holding the read guard would deadlock.
Once these were fixed, it seems that storing the TcpStream is not an issue (at least using nightly). I'd thought I'd need to box the TcpStream, but it seems OK as is.
You can see that it compiles in the playground
use std::collections::HashMap;
use std::net::TcpListener;
use std::net::TcpStream;
use std::sync::{Arc, RwLock};
use std::thread;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
        };
        server
    }

    pub fn start(&self) {
        let mut self_clone = self.clone();
        thread::spawn(move || {
            let mut listener =
                TcpListener::bind(&self_clone.url).expect("Could not start the server");
            println!("Server started successfully");
            for stream in listener.incoming() {
                match stream {
                    Ok(stream) => self_clone.on_client_connect(stream),
                    Err(error) => eprintln!("Error when tried to use stream"),
                }
            }
        });
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        // Take the write lock once; asking for the write lock while still
        // holding a read guard on the same RwLock would deadlock.
        let mut id = self.id.write().unwrap();
        self.connections.write().unwrap().insert(*id, stream);
        *id += 1;
    }
}
I am using actix-web to run a webserver and want to be able to mutate state through websocket messages.
My current way of using websockets is through implementing the handle method from actix::StreamHandler. However, this limits my ability to pass data to it. How can I access the data (actix_web::web::Data) in my handle method?
The only way I can think of to solve this issue is to somehow override the function signature of handle; however, that doesn't seem possible.
Here are some important code snippets; we have app_name and nonces in app_data:
// main.rs
let nonces = Arc::new(Mutex::new(nonces::Nonces::new()));

HttpServer::new(move || {
    App::new()
        .app_data(web::Data::new(app_data::AppData {
            app_name: String::from("Actix Web"),
            nonces: Arc::clone(&nonces),
        }))
        ...

// app_data.rs
pub struct AppData {
    pub app_name: String,
    pub nonces: Arc<Mutex<nonces::Nonces>>,
}

// ws.rs
struct Ws {
    app_data: web::Data<app_data::AppData>,
}

impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for Ws {
    fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
        let app_name = &self.app_data.app_name;
        let mut nonces = self.app_data.nonces.lock().unwrap();
        println!(">>> {app_name}");
        println!(">>> {:?}", nonces.nonces); // I have a nonces data in nonces
        ...
    }
}

async fn index(
    req: HttpRequest,
    stream: web::Payload,
    app_data: web::Data<app_data::AppData>,
) -> Result<HttpResponse, Error> {
    ws::start(Ws { app_data: app_data.clone() }, &req, stream)
}
I'm quite new to Rust and actix, but I tried to build a technology prototype where a server sends protobuf messages to clients via websockets. The protobuf part works fine, but I struggle with the websocket part.
I've tried to modify the official example of Actix-Websockets with the Actix-Broker (Chat-Broker-Example), but I'm having a hard time debugging it (not being familiar with VSCode, but that's another story).
The effect is that the broker instance is not started and does not receive any messages. If I start the server manually via a supervisor (the example does not need to do that), it still won't receive any messages.
Question:
Does anybody have any idea why the broker won't start automatically or doesn't receive the messages?
Do I have any fundamental misunderstandings?
Does anybody have any idea how to make the program work?
The code is uploaded publicly at github (GitHub Repository).
For your convenience I'll add the ws_client, ws_server and main.rs files below.
The changes I've done compared to the example are:
Removed #[derive(Default)] from WsChatServer and implemented it myself
Wrapped WsChatServer rooms in Arc and RwLock to ensure memory safety. (Needs to be overhauled)
Removed Message ListRooms and corresponding functions
I'd highly appreciate any help, tips or suggestions!
ws_server.rs:
use std::{collections::HashMap, sync::{Arc, RwLock}};

use actix::prelude::*;
use actix_broker::BrokerSubscribe;

use crate::{messages::{ChatMessage, JoinRoom, LeaveRoom, SendMessage}};

type Client = Recipient<ChatMessage>;
type Room = HashMap<usize, Client>;

#[derive(Clone)]
pub struct WsChatServer {
    rooms: Arc<RwLock<HashMap<String, Room>>>,
}

lazy_static! {
    static ref ROOMS: Arc<RwLock<HashMap<String, Room>>> = Arc::new(RwLock::new(Default::default()));
}

impl Default for WsChatServer {
    fn default() -> Self {
        let ws = WsChatServer { rooms: ROOMS.clone() };
        return ws;
    }
}

impl SystemService for WsChatServer {}
impl Supervised for WsChatServer {}

impl WsChatServer {
    pub fn create_room(room_name: &str) {
        let mut rooms = match ROOMS.write() {
            Ok(rooms) => rooms,
            Err(err) => {
                log::debug!("Error while requesting write lock. Error was: {}", err);
                return;
            },
        };
        if !rooms.contains_key(room_name) {
            let room: HashMap<usize, Client> = HashMap::new();
            rooms.insert(room_name.to_string(), room);
        }
    }

    fn take_room(&mut self, room_name: &str) -> Option<Room> {
        let mut guard = match self.rooms.write() {
            Ok(guard) => guard,
            Err(err) => {
                log::debug!("Error waiting for write lock. Error was: {}", err);
                return None;
            },
        };
        let room = match guard.get_mut(room_name) {
            Some(room) => room,
            None => {
                log::debug!("Failed to get mutable reference of RW Guard");
                return None;
            },
        };
        let room = std::mem::take(room);
        Some(room)
    }

    fn add_client_to_room(&mut self, room_name: &str, id: Option<usize>, client: Client) -> usize {
        log::info!("In add_client_to_room Handler. Adding Client to room: {}", room_name);
        let mut id = id.unwrap_or_else(rand::random::<usize>);
        if let Some(room) = self.rooms.write().unwrap().get_mut(room_name) {
            loop {
                if room.contains_key(&id) {
                    id = rand::random::<usize>();
                } else {
                    break;
                }
            }
            room.insert(id, client);
            return id;
        }
        // Create a new room for the first client
        let mut room: Room = HashMap::new();
        room.insert(id, client);
        self.rooms.write().unwrap().insert(room_name.to_owned(), room);
        id
    }

    pub fn send_chat_message(&mut self, room_name: &str, msg: &str, _src: usize) -> Option<()> {
        let mut room = match self.take_room(room_name) {
            Some(room) => room,
            None => {
                log::debug!("Error, could not take room.");
                return None;
            },
        };
        for (id, client) in room.drain() {
            if client.try_send(ChatMessage(msg.to_owned())).is_ok() {
                self.add_client_to_room(room_name, Some(id), client);
            }
        }
        Some(())
    }
}

impl Actor for WsChatServer {
    type Context = Context<Self>;

    fn started(&mut self, ctx: &mut Self::Context) {
        log::info!("WsChatServer has started.");
        self.subscribe_system_async::<LeaveRoom>(ctx);
        self.subscribe_system_async::<SendMessage>(ctx);
    }
}

impl Handler<JoinRoom> for WsChatServer {
    type Result = MessageResult<JoinRoom>;

    fn handle(&mut self, msg: JoinRoom, _ctx: &mut Self::Context) -> Self::Result {
        log::info!("In Join Room Handler.");
        let JoinRoom(room_name, client) = msg;
        let id = self.add_client_to_room(&room_name, None, client);
        MessageResult(id)
    }
}

impl Handler<LeaveRoom> for WsChatServer {
    type Result = ();

    fn handle(&mut self, msg: LeaveRoom, _ctx: &mut Self::Context) {
        log::info!("Removing ws client from room.");
        if let Some(room) = self.rooms.write().unwrap().get_mut(&msg.0) {
            room.remove(&msg.1);
        }
    }
}

impl Handler<SendMessage> for WsChatServer {
    type Result = ();

    fn handle(&mut self, msg: SendMessage, _ctx: &mut Self::Context) {
        let SendMessage(room_name, id, msg) = msg;
        self.send_chat_message(&room_name, &msg, id);
    }
}
ws_client.rs:
use actix::{Actor, ActorContext, StreamHandler, Handler, SystemService, AsyncContext, WrapFuture, ActorFutureExt, fut, ContextFutureSpawner};
use actix_web_actors::ws;
use actix_broker::BrokerIssue;

use crate::messages::{ChatMessage, LeaveRoom, JoinRoom};
use crate::ws_server::WsChatServer;

pub struct WsConn {
    room: String,
    id: usize,
}

impl WsConn {
    pub fn new(room: &str) -> WsConn {
        WsConn {
            room: room.to_string(),
            id: rand::random::<usize>(),
        }
    }

    pub fn join_room(&mut self, room_name: &str, ctx: &mut ws::WebsocketContext<Self>) {
        let room_name = room_name.to_owned();
        // First send a leave message for the current room
        let leave_msg = LeaveRoom(self.room.clone(), self.id);
        // issue_sync comes from having the `BrokerIssue` trait in scope.
        self.issue_system_sync(leave_msg, ctx);
        log::info!("Ws client sent leave msg.");
        // Then send a join message for the new room
        let join_msg = JoinRoom(
            room_name.to_owned(),
            ctx.address().recipient(),
        );
        WsChatServer::from_registry()
            .send(join_msg)
            .into_actor(self)
            .then(move |id, act, _ctx| {
                if let Ok(id) = id {
                    act.id = id;
                    act.room = room_name.clone().to_string();
                }
                fut::ready(())
            })
            .wait(ctx);
        log::info!("Ws client sent join msg.");
    }
}

impl Actor for WsConn {
    type Context = ws::WebsocketContext<Self>;

    fn started(&mut self, ctx: &mut Self::Context) {
        log::info!("ws client started.");
        self.join_room(self.room.to_owned().as_str(), ctx);
    }

    fn stopped(&mut self, _ctx: &mut Self::Context) {
        log::info!(
            "WsConn closed for {} in room {}",
            self.id,
            self.room
        );
    }
}

impl Handler<ChatMessage> for WsConn {
    type Result = ();

    fn handle(&mut self, msg: ChatMessage, ctx: &mut Self::Context) {
        ctx.text(msg.0);
    }
}

impl StreamHandler<Result<ws::Message, ws::ProtocolError>> for WsConn {
    fn handle(&mut self, msg: Result<ws::Message, ws::ProtocolError>, ctx: &mut Self::Context) {
        let msg = match msg {
            Err(_) => {
                ctx.stop();
                return;
            }
            Ok(msg) => msg,
        };
        log::debug!("WEBSOCKET MESSAGE: {:?}", msg);
        match msg {
            ws::Message::Text(_) => (),
            ws::Message::Close(reason) => {
                ctx.close(reason);
                ctx.stop();
            }
            _ => {}
        }
    }
}
main.rs:
use std::time::Duration;
use std::{env, thread::sleep};
use std::fs::File;
use std::io::Write;

use actix_web::{App, HttpServer, middleware::Logger};
use actix_cors::Cors;
use tokio::task;

#[macro_use]
extern crate lazy_static;

mod protobuf_messages;
mod actions;
mod data;
mod ws_clients;
mod messages;
mod ws_server;

pub async fn write_to_file(buf: &[u8], file_name: &str) -> Result<(), std::io::Error> {
    let dir = env::current_dir().unwrap();
    let file_handler = dir.join(file_name);
    let mut file = File::create(file_handler).unwrap();
    file.write_all(buf)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    std::env::set_var("RUST_LOG", "debug");
    env_logger::init();

    data::MAIN_CONFIG.version = "1.0".to_string();
    data::MAIN_CONFIG.mqtt_broker_address = "test".to_string();
    data::MAIN_CONFIG.wildcard = "#".to_string();
    data::MAIN_CONFIG.splitting_character = ".".to_string();
    data::MAIN_CONFIG.active_configs = [].to_vec();

    let ws_chat_server = ws_server::WsChatServer::default();
    let mut ws_chat_server_2 = ws_chat_server.clone();
    actix::Supervisor::start(|_| ws_chat_server);
    ws_server::WsChatServer::create_room("Main");

    let msg = serde_json::to_string_pretty(&data::MAIN_CONFIG).expect("Expected parsable string");

    task::spawn(async move {
        loop {
            match ws_chat_server_2.send_chat_message("Main", &msg.clone(), 0) {
                Some(()) => (), //log::debug!("Got a result from sending chat message"),
                None => (), //log::debug!("Got no result from sending chat message"),
            }
            sleep(Duration::from_secs(1));
        }
    });

    HttpServer::new(move || App::new()
        .wrap(Logger::default())
        .wrap(Cors::default().allow_any_origin().allow_any_header().allow_any_method())
        .service(actions::get_json)
        .service(actions::get_protobuf)
        .service(actions::start_ws_connection)
    )
    .bind(("127.0.0.1", 3000))?
    .workers(2)
    .run().await
}
I finally figured out a way to solve the problem.
First of all, I could not get the broker example to work.
My issue was that I tried to externally send a message to all ws clients via the ws server broker. To do that you need to get the Addr of the server. If that is done the same way the actor clients receive the lobby address, with from_registry, the result seems to be a different server instance, and therefore the messages were not sent to the clients.
I could not find a way to get the same broker Addr returned, so I switched to the actix websocket example called "chat-tcp". In this example, a WS server actor is created in the main function and then added as a variable to all requests. In the request handler, the clients are set up with the server Addr.
To make sure I can send to the same broker, I encapsulated the Addr in an Arc and RwLock. I then needed to unpack it in the request handler. When I started a tokio green thread that sends messages to the ws server (and moved an Arc clone into its scope), it worked.
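A rough sketch of that setup (simplified, not my exact code; WsChatServer, WsConn and SendMessage are the types from the repo above, while the route, handler name and the "ping" payload are just placeholders):

// Illustrative sketch only: share one Addr<WsChatServer> between the HTTP
// handlers and a background task, instead of resolving it via from_registry.
use std::sync::{Arc, RwLock};
use std::time::Duration;

use actix::prelude::*;
use actix_web::{web, App, Error, HttpRequest, HttpResponse, HttpServer};
use actix_web_actors::ws;

// Module paths follow the repo layout above.
use crate::messages::SendMessage;
use crate::ws_clients::WsConn;
use crate::ws_server::WsChatServer;

async fn ws_index(
    req: HttpRequest,
    stream: web::Payload,
    srv: web::Data<Arc<RwLock<Addr<WsChatServer>>>>,
) -> Result<HttpResponse, Error> {
    // Every request sees the same Addr; the session actor would be set up
    // with it instead of calling from_registry (omitted here).
    let _server_addr = srv.read().unwrap().clone();
    ws::start(WsConn::new("Main"), &req, stream)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Start the server actor once and keep its address.
    let server_addr = Arc::new(RwLock::new(WsChatServer::default().start()));

    // Background task pushing messages to the same actor instance.
    let task_addr = Arc::clone(&server_addr);
    tokio::spawn(async move {
        loop {
            task_addr
                .read()
                .unwrap()
                .do_send(SendMessage("Main".to_string(), 0, "ping".to_string()));
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
    });

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(Arc::clone(&server_addr)))
            .route("/ws", web::get().to(ws_index))
    })
    .bind(("127.0.0.1", 3000))?
    .run()
    .await
}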
Does anyone know why the registered broker addresses were different?
I want to send events between the game client and server. I already got it working, but I do not know how to do it with bevy.
I depend on tokio's async TcpStream, because I have to be able to split the stream into an OwnedWriteHalf and an OwnedReadHalf using stream.into_split().
My first idea was to just spawn a thread that handles the connection and then send the received events to a queue using mpsc::channel.
Then I include this queue into a bevy resource using app.insert_resource(Queue) and pull events from it in the game loop.
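Roughly, the wiring looks like this (a simplified sketch, leaving out the connection handling; Queue is defined below and event_pull is the system shown further down):

// Illustrative sketch: the queue ends up inside a bevy resource that the
// game loop can read.
use bevy::prelude::*;

// Resource holding the queue (my real struct has more fields).
pub struct Communication {
    pub event_queue: Queue,
}

#[tokio::main]
async fn main() {
    let communication = Communication {
        event_queue: Queue::init(),
    };

    App::new()
        .insert_resource(communication)
        .add_system(event_pull)
        .add_plugins(DefaultPlugins)
        .run();
}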
the Queue:
use tokio::sync::mpsc;

pub enum Instruction {
    Push(GameEvent),
    Pull(mpsc::Sender<Option<GameEvent>>),
}

#[derive(Clone, Debug)]
pub struct Queue {
    sender: mpsc::Sender<Instruction>,
}

impl Queue {
    pub fn init() -> Self {
        let (tx, rx) = mpsc::channel(1024);
        init(rx);
        Self { sender: tx }
    }

    pub async fn send(&self, event: GameEvent) {
        self.sender.send(Instruction::Push(event)).await.unwrap();
    }

    pub async fn pull(&self) -> Option<GameEvent> {
        println!("new pull");
        let (tx, mut rx) = mpsc::channel(1);
        self.sender.send(Instruction::Pull(tx)).await.unwrap();
        rx.recv().await.unwrap()
    }
}

fn init(mut rx: mpsc::Receiver<Instruction>) {
    tokio::spawn(async move {
        let mut queue: Vec<GameEvent> = Vec::new();
        loop {
            match rx.recv().await.unwrap() {
                Instruction::Push(ev) => {
                    queue.push(ev);
                }
                Instruction::Pull(sender) => {
                    sender.send(queue.pop()).await.unwrap();
                }
            }
        }
    });
}
But because all of this has to be async, I have to block on the pull() function in the sync game loop.
I do this using the futures-lite crate:
fn event_pull(
    communication: Res<Communication>
) {
    let ev = future::block_on(communication.event_queue.pull());
    println!("got event: {:?}", ev);
}
And this works fine, BUT after around 5 seconds the whole program just halts and does not receive any more events.
It seems like future::block_on() blocks indefinitely.
Having the main function, in which bevy::prelude::App gets built and run, be the async tokio::main function might also be a problem here.
It would probably be best to wrap the async TcpStream initialisation and tokio::sync::mpsc::Sender and thus also Queue.pull into synchronous functions, but I do not know how to do this.
Can anyone help?
How to reproduce
The repo can be found here
Just compile both server and client and then run both, in that order.
I got it to work by replacing every tokio::sync::mpsc with crossbeam::channel (which might be a problem, as it does block) and by manually initializing the tokio runtime.
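The channel part of the swap looked roughly like this (a sketch rather than the exact code from the repo, assuming the same Instruction and GameEvent shapes as above, now fully synchronous):

// Hypothetical sketch of the Queue after swapping tokio::sync::mpsc for
// crossbeam::channel.
use crossbeam::channel::{unbounded, Receiver, Sender};

pub enum Instruction {
    Push(GameEvent),
    Pull(Sender<Option<GameEvent>>),
}

#[derive(Clone, Debug)]
pub struct Queue {
    sender: Sender<Instruction>,
}

impl Queue {
    pub fn init() -> Self {
        let (tx, rx) = unbounded();
        init(rx);
        Self { sender: tx }
    }

    pub fn send(&self, event: GameEvent) {
        self.sender.send(Instruction::Push(event)).unwrap();
    }

    pub fn pull(&self) -> Option<GameEvent> {
        let (tx, rx) = unbounded();
        self.sender.send(Instruction::Pull(tx)).unwrap();
        rx.recv().unwrap()
    }
}

fn init(rx: Receiver<Instruction>) {
    // A plain thread instead of tokio::spawn, since recv() now blocks.
    std::thread::spawn(move || {
        let mut queue: Vec<GameEvent> = Vec::new();
        for instruction in rx {
            match instruction {
                Instruction::Push(ev) => queue.push(ev),
                Instruction::Pull(sender) => sender.send(queue.pop()).unwrap(),
            }
        }
    });
}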
So the init code looks like this:
pub struct Communicator {
    pub event_bridge: bridge::Bridge,
    pub event_queue: event_queue::Queue,
    _runtime: Runtime,
}

impl Communicator {
    pub fn init(ip: &str) -> Self {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_io()
            .build()
            .unwrap();
        let (bridge, queue, game_rx) = rt.block_on(async move {
            let socket = TcpStream::connect(ip).await.unwrap();
            let (read, write) = socket.into_split();
            let reader = TcpReader::new(read);
            let writer = TcpWriter::new(write);
            let (bridge, tcp_rx, game_rx) = bridge::Bridge::init();
            reader::init(bridge.clone(), reader);
            writer::init(tcp_rx, writer);
            let event_queue = event_queue::Queue::init();
            return (bridge, event_queue, game_rx);
        });

        // Forward game_rx events to the queue for the game loop
        let eq_clone = queue.clone();
        rt.spawn(async move {
            loop {
                let event = game_rx.recv().unwrap();
                eq_clone.send(event);
            }
        });

        Self {
            event_bridge: bridge,
            event_queue: queue,
            _runtime: rt,
        }
    }
}
And main.rs looks like this:
fn main() {
    let communicator = communication::Communicator::init("0.0.0.0:8000");
    communicator.event_bridge.push_tcp(TcpEvent::Register { name: String::from("luca") });

    App::new()
        .insert_resource(communicator)
        .add_system(event_pull)
        .add_plugins(DefaultPlugins)
        .run();
}

fn event_pull(
    communication: Res<communication::Communicator>
) {
    let ev = communication.event_queue.pull();
    if let Some(ev) = ev {
        println!("ev");
    }
}
Perhaps there might be a better solution.
I'm new to Rust and I'm trying to configure a simple TCP socket server which will listen for connections and reply with the same message it received.
The thing is, this works as I want except when connecting with multiple clients. The first client that connects can send and receive messages, but if a second client connects, the first one keeps working while the second never receives messages; in fact, the message never reaches the code that handles it. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket with the same content as the last message it sent.
I am pretty sure I did something wrong in my code but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
    thread_pool: ThreadPool
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
            thread_pool: ThreadPool::new(10)
        };
        server
    }

    pub fn start(&self) {
        let listener = TcpListener::bind(&self.url).expect("Could not start the server");
        println!("Server started successfully");
        for stream in listener.incoming() {
            match stream {
                Ok(stream) => {
                    let mut self_clone = self.clone();
                    self.thread_pool.execute(move || {
                        self_clone.on_client_connect(stream.try_clone().unwrap());
                    });
                }
                Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
            }
        }
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        println!("Client connected from {}", stream.local_addr().unwrap());
        let mut id = self.id.write().unwrap();
        {
            *id += 1;
        }
        self.connections
            .write()
            .unwrap()
            .insert(*id, stream.try_clone().unwrap());
        let mut stream = stream.try_clone().unwrap();
        let mut buffer = [0; 1024];
        while match stream.read(&mut buffer) {
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
                true
            }
            Err(error) => {
                println!(
                    "Error when reading message from socket. Error = {:?}",
                    error
                );
                stream.shutdown(Shutdown::Both).unwrap();
                false
            }
        } {}
    }
}
And in my main.rs I'm just calling the connect function and the server starts working
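Simplified, main.rs boils down to something like this (the address is just an example):

// Simplified sketch of main.rs: create the server and start accepting connections.
fn main() {
    let server = Server::new(String::from("127.0.0.1:8080"));
    server.start();
}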
In this piece of code in your on_client_connect function, you're acquiring a write lock on self.id:
let mut id = self.id.write().unwrap();
{
    *id += 1;
}
However, the id variable, which holds the lock, is not released until it drops at the end of the function. This means that all other clients will wait for this lock to be released, which won't happen until the function currently holding the lock has completed (which happens when that client disconnects).
You can solve this by rewriting the above code to only keep the lock while incrementing, and then storing the ID value in a variable:
let id: u32 = {
    let mut id_lock = self.id.write().unwrap();
    *id_lock += 1;
    *id_lock
    // id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};

pub struct Server {
    id: Arc<AtomicU32>,
    // ...
}

// Fetch the previous value, then increment `self.id` by one, in a thread-safe and lock-free manner
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
    Ok(0) => false, // handle connection closed...
    Ok(size) => { /* ... */ }
    Err(err) => { /* ... */ }
} {}
I am trying to make this code snippet run concurrently instead of sequentially, since the number of peers can be large. I am using async_std 1.4 and Rust 1.41.
pub struct Peer {
    pub peer_id: String,
    pub tcp_stream: Arc<TcpStream>,
    pub public_key: [u8; 32],
}

async fn send_to_all_peers(message: Protocol, peers: &HashMap<String, Peer>) -> Result<()> {
    for peer in peers.values() {
        let mut stream = &*peer.tcp_stream;
        stream.write_all(&bincode::serialize(&message)?).await?;
    }
    Ok(())
}
I've tried to use the futures::future::join_all method without any luck, since the wrapping future I created and used within async_std::task::spawn requires a 'static lifetime. Here is what I tried:
async fn send_to_all_peers(message: Protocol, peers: &HashMap<String, Peer>) {
    let handles = peers.values().into_iter().map(|peer| {
        task::spawn(async {
            let mut stream = &*peer.tcp_stream;
            if let Err(err) = stream
                .write_all(&bincode::serialize(&message).unwrap())
                .await
            {
                error!("Error when writing to tcp_stream: {}", err);
            }
        })
    });
    futures::future::join_all(handles).await;
}
I'm sure there is some method I am missing, thanks for any help!
Since you are trying to send the message concurrently, each task has to have its own copy of the message:
use async_std::{task, net::TcpStream};
use futures::{future, io::AsyncWriteExt};
use serde::Serialize;
use std::{
    collections::HashMap,
    error::Error,
    sync::Arc,
};

pub struct Peer {
    pub peer_id: String,
    pub tcp_stream: Arc<TcpStream>,
    pub public_key: [u8; 32],
}

#[derive(Serialize)]
struct Protocol;

async fn send_to_all_peers(
    message: Protocol,
    peers: &HashMap<String, Peer>,
) -> Result<(), Box<dyn Error>> {
    let msg = bincode::serialize(&message)?;

    let handles = peers.values()
        .map(|peer| {
            let msg = msg.clone();
            let socket = peer.tcp_stream.clone();
            task::spawn(async move {
                let mut socket = &*socket;
                socket.write_all(&msg).await
            })
        });

    future::try_join_all(handles).await?;

    Ok(())
}
Have you tried something like
// Serialize once, outside the loop, so every future can borrow the same buffer.
let msg = bincode::serialize(&message).unwrap();
let handles = peers.values().map(|peer| {
    let msg = &msg;
    async move {
        let mut stream = &*peer.tcp_stream;
        stream.write_all(msg).await
    }
});
let results = futures::future::join_all(handles).await;
?
Notice how the .map closure doesn't await, but simply returns a future, which is then passed to join_all and awaited.
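join_all gives you back a Vec with one result per peer, so you can still log write errors afterwards, for example:

// Check the outcome of each write after they have all completed.
for res in results {
    if let Err(err) = res {
        error!("Error when writing to tcp_stream: {}", err);
    }
}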