I have four modules. The client sends messages and the server receives them. Once the server receives a message, it tries to send it over the MPSC channel. I put the receiver in the other .rs file, where I intend to receive the messages.
I am not getting any messages on the receiver side.
Maybe an infinite loop on the server side creates a problem, but is there a way to make this channel communication work?
client.rs
use std::io::prelude::*;
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::sync::mpsc;

pub fn tcp_datagram_client() {
    pub static FILE_PATH: &'static str = "/tmp/datagram.sock";
    let socket = UnixDatagram::unbound().unwrap();
    match socket.connect(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            println!("Couldn't connect: {:?}", e);
            return;
        }
    };
    println!("TCP client Connected to TCP Server {:?}", socket);
    loop {
        socket
            .send(b"Hello from client to server")
            .expect("send function failed");
    }
}

fn main() {
    tcp_datagram_client();
}
server.rs
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::str::from_utf8;
use std::sync::mpsc::Sender;

fn unlink_socket(path: impl AsRef<Path>) {
    let path = path.as_ref();
    if path.exists() {
        if let Err(e) = std::fs::remove_file(path) {
            eprintln!("Couldn't remove the file: {:?}", e);
        }
    }
}

static FILE_PATH: &'static str = "/tmp/datagram.sock";

pub fn tcp_datagram_server(tx: Sender<String>) {
    unlink_socket(FILE_PATH);
    let socket = match UnixDatagram::bind(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            eprintln!("Couldn't bind: {:?}", e);
            return;
        }
    };
    let mut buf = vec![0; 1024];
    println!("Waiting for client to connect...");
    loop {
        let received_bytes = socket.recv(&mut buf).expect("recv function failed");
        println!("Received {:?}", received_bytes);
        // Only decode the bytes that were actually received.
        let received_message = from_utf8(&buf[..received_bytes]).expect("utf-8 convert failed");
        tx.send(received_message.to_string());
    }
}
message_receiver.rs
use crate::server;
use std::sync::mpsc;

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    server::tcp_datagram_server(tx);
    let message_from_tcp_server = rx.recv().unwrap();
    println!("{:?}", message_from_tcp_server);
}
main.rs
mod server;
mod message_receiver;

fn main() {
    message_receiver::handle_messages();
}
Once the TCP client is connected:
TCP client Connected to TCP Server UnixDatagram { fd: 3, local: (unnamed), peer: "/tmp/datagram.sock" (pathname) }
I receive no messages on the channel receiver end:
Waiting for client to connect...
Maybe an infinite loop on the server side creates a problem
Yes, quite literally: your server code runs an infinite loop to continuously handle messages from the client(s), so the call to tcp_datagram_server never returns.
but is there a way to make this channel communication work?
Of course; it seems you are simply missing a second thread for your message receiver. Wrapping your tcp_datagram_server(tx) in std::thread::spawn should do it. You can also add a loop to keep processing messages, matching the one in tcp_datagram_server:
pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(move || server::tcp_datagram_server(tx));
    loop {
        let message_from_tcp_server = rx.recv().unwrap();
        println!("{}", message_from_tcp_server);
    }
}
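An equivalent, slightly more idiomatic sketch of the same fix: iterating the Receiver blocks for each message and ends once every Sender has been dropped (assuming the same module layout as above).

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    // Run the blocking server loop on its own thread.
    std::thread::spawn(move || server::tcp_datagram_server(tx));
    // The iterator yields each message as it arrives and finishes
    // once all senders are gone.
    for message_from_tcp_server in rx {
        println!("{}", message_from_tcp_server);
    }
}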
I took a really simple example of a Rust TCP server.
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;
use std::io::{Read, Write, Error};

fn handle_client(mut stream: TcpStream) -> Result<(), Error> {
    println!("incoming connection from: {}", stream.peer_addr()?);
    let mut buf = [0; 512];
    loop {
        let bytes_read = stream.read(&mut buf)?;
        if bytes_read == 0 {
            return Ok(());
        }
        let tmp = format!("{}", String::from_utf8_lossy(&buf).trim());
        eprintln!("getting {}", tmp);
        stream.write(&buf[..bytes_read])?;
    }
}

fn main() {
    let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
    for stream in listener.incoming() {
        match stream {
            Err(e) => eprintln!("failed: {}", e),
            Ok(stream) => {
                thread::spawn(move || {
                    handle_client(stream).unwrap_or_else(|error| eprintln!("{:?}", error));
                });
            }
        }
    }
}
It basically takes input, spits it back at the client, and prints to its own terminal.
I would like to be able to end this connection. Ending the connection should probably depend on something, but right now I just want to try to shut it down.
I tried looking through the docs and then tried adding the shutdown method.
Now I want to take a stream as input, do something with it, and then shut the connection down.
So I tried doing this:
fn main() {
    let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
    for stream in listener.incoming() {
        match stream {
            Err(e) => eprintln!("failed: {}", e),
            Ok(stream) => {
                thread::spawn(move || {
                    handle_client(stream).unwrap_or_else(|error| eprintln!("{:?}", error));
                });
                stream.shutdown(Shutdown::Both).expect("shutdown call failed");
            }
        }
    }
}
But this causes an issue with stream being a moved value.
So how can I shut down the connection right after receiving and doing something with the input?
(I still want to preserve this structure with the loop, since I actually want to receive many messages and then shut down depending on the input.)
You can clone the stream and pass the clone to the function you spawn. Then you can call shutdown inside your spawned function after you have handled the client.
This way, your original stream variable remains intact and you get the behavior you want.
The reason you get this issue is that the spawned closure moves everything it captures because of the move keyword.
fn main() {
    let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
    for stream in listener.incoming() {
        match stream {
            Err(e) => eprintln!("failed: {}", e),
            Ok(stream) => {
                // Clone the accepted stream; the clone is what gets shut down
                // inside the spawned thread once the client has been handled.
                let mut stream_clone = stream.try_clone().expect("clone failed");
                thread::spawn(move || {
                    handle_client(&mut stream_clone)
                        .unwrap_or_else(|error| eprintln!("{:?}", error));
                    stream_clone
                        .shutdown(Shutdown::Both)
                        .expect("shutdown call failed");
                });
            }
        }
    }
}
Also, you need to change the signature of handle_client to fn handle_client(stream: &mut TcpStream) -> Result<(), Error> so that it does not move the cloned stream variable but borrows it mutably instead.
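For reference, a sketch of handle_client with the adjusted signature; the body is the one from the question, except that the log line only decodes the bytes that were actually read:

// Borrow the stream mutably instead of taking ownership, so the caller
// can still shut it down afterwards.
fn handle_client(stream: &mut TcpStream) -> Result<(), Error> {
    println!("incoming connection from: {}", stream.peer_addr()?);
    let mut buf = [0; 512];
    loop {
        let bytes_read = stream.read(&mut buf)?;
        if bytes_read == 0 {
            return Ok(());
        }
        let tmp = format!("{}", String::from_utf8_lossy(&buf[..bytes_read]).trim());
        eprintln!("getting {}", tmp);
        stream.write(&buf[..bytes_read])?;
    }
}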
In my code snippet, the Tokio (v0.3) mpsc::channel receiver only receives a message once the buffer is full. It doesn't matter how big or small the buffer is.
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::Arc;
use std::time::Duration;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;
use tokio::time::sleep;

const MESSAGE_LENGTH: usize = 1024;

pub struct Peer {
    socket: Arc<UdpSocket>,
}

impl Peer {
    pub fn new<S: ToSocketAddrs>(addr: S) -> Peer {
        let socket = std::net::UdpSocket::bind(addr).expect("could not create socket");
        let peer = Peer {
            socket: Arc::new(UdpSocket::from_std(socket).unwrap()),
        };
        peer.start_inbound_message_handler();
        peer
    }

    pub fn local_addr(&self) -> SocketAddr {
        self.socket.local_addr().unwrap()
    }

    fn start_inbound_message_handler(&self) {
        let socket = self.socket.clone();
        let (tx, rx) = mpsc::channel(1);
        self.start_request_handler(rx);
        tokio::spawn(async move {
            let mut buf = [0u8; MESSAGE_LENGTH];
            loop {
                if let Ok((len, addr)) = socket.recv_from(&mut buf).await {
                    println!("received {} bytes from {}", len, addr);
                    if let Err(_) = tx.send(true).await {
                        println!("error sending msg to request handler");
                    }
                }
            }
        });
    }

    fn start_request_handler(&self, mut receiver: mpsc::Receiver<bool>) {
        tokio::spawn(async move {
            while let Some(msg) = receiver.recv().await {
                println!("got ping request: {:?}", msg);
            }
        });
    }

    pub async fn send_ping(&self, dest: String) -> Result<(), io::Error> {
        let buf = [255u8; MESSAGE_LENGTH];
        self.socket.send_to(&buf[..], &dest).await?;
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let peer1 = Peer::new("0.0.0.0:0");
    println!("peer1 started on: {}", peer1.local_addr().to_string());
    let peer2 = Peer::new("0.0.0.0:0");
    println!("peer2 started on: {}", peer2.local_addr().to_string());
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    sleep(Duration::from_secs(100)).await;
    Ok(())
}
Link to the Playground
In the start_inbound_message_handler function I start reading from the socket. If a message is received, a message is sent over the mpsc::channel to start_request_handler, where the processing happens; in this case a simple log line is written whenever the receiver gets anything.
In the main function I create two peers, peer1 and peer2. After both peers are created, I send a ping request to the first peer. In start_inbound_message_handler I receive the data from the UDP socket and send a message over the mpsc::channel; the send returns without error. The problem, as mentioned before, is that the receiver only receives a message once the buffer is full. In this case the buffer size is 1, so if I send a second ping, the first ping is received. I cannot find out why this happens.
The expected behavior is that if I send a message over the channel, the receiver starts receiving it immediately instead of waiting until the buffer is full.
According to the Tokio documentation of from_std():
Creates new UdpSocket from a previously bound std::net::UdpSocket.
This function is intended to be used to wrap a UDP socket from the standard library in the Tokio equivalent. The conversion assumes nothing about the underlying socket; it is left up to the user to set it in non-blocking mode.
This can be used in conjunction with socket2's Socket interface to configure a socket before it's handed off, such as setting options like reuse_address or binding to multiple addresses.
A socket that is not in non-blocking mode will prevent Tokio from working normally.
Just use Tokio's own bind() function; it is much simpler.
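As a sketch of both options, using a hypothetical bind_udp helper in place of the socket setup in Peer::new (only one of the two options would be kept):

use tokio::net::UdpSocket;

// Sketch: obtaining a Tokio UdpSocket that actually works with the reactor.
fn bind_udp(addr: std::net::SocketAddr) -> UdpSocket {
    // Option 1: keep the std socket, but switch it to non-blocking mode
    // before wrapping it, as from_std() expects.
    let std_socket = std::net::UdpSocket::bind(addr).expect("could not create socket");
    std_socket
        .set_nonblocking(true)
        .expect("could not set non-blocking mode");
    UdpSocket::from_std(std_socket).unwrap()

    // Option 2: let Tokio bind directly (this requires an async fn):
    // UdpSocket::bind(addr).await.expect("could not create socket")
}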
I'm new to Rust and I'm trying to write a simple TCP socket server that listens for connections and replies with the same message it received.
The thing is, this works as I want except when multiple clients connect. The first client that connects can send and receive messages, but if a second client connects, the first one keeps working while the second never receives messages; in fact, the message never reaches the code that handles it. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket, with the same content as the last message it sent.
I am pretty sure I did something wrong in my code, but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
    thread_pool: ThreadPool,
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
            thread_pool: ThreadPool::new(10),
        };
        server
    }

    pub fn start(&self) {
        let listener = TcpListener::bind(&self.url).expect("Could not start the server");
        println!("Server started succesfully");
        for stream in listener.incoming() {
            match stream {
                Ok(stream) => {
                    let mut self_clone = self.clone();
                    self.thread_pool.execute(move || {
                        self_clone.on_client_connect(stream.try_clone().unwrap());
                    });
                }
                Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
            }
        }
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        println!("Client connected from {}", stream.local_addr().unwrap());
        let mut id = self.id.write().unwrap();
        {
            *id += 1;
        }
        self.connections
            .write()
            .unwrap()
            .insert(*id, stream.try_clone().unwrap());
        let mut stream = stream.try_clone().unwrap();
        let mut buffer = [0; 1024];
        while match stream.read(&mut buffer) {
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
                true
            }
            Err(error) => {
                println!(
                    "Error when reading message from socket. Error = {:?}",
                    error
                );
                stream.shutdown(Shutdown::Both).unwrap();
                false
            }
        } {}
    }
}
And in my main.rs I'm just calling the start function, and the server starts working.
In this piece of code in your on_client_connect function, you're acquiring a write lock for self.id:
let mut id = self.id.write().unwrap();
{
    *id += 1;
}
However, the id variable holds that lock, and the lock is not released until id is dropped at the end of the function. This means that all other clients will wait for the lock to be released, which won't happen until the function currently holding it has completed (which happens when that client disconnects).
You can solve this by rewriting the above code so that the lock is only held while incrementing, storing the resulting ID in a variable:
let id: u32 = {
    let mut id_lock = self.id.write().unwrap();
    *id_lock += 1;
    *id_lock
    // id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};

struct Server {
    id: Arc<AtomicU32>,
    // ...
}

// Fetch the previous value, then increment `self.id` by one,
// in a thread-safe and lock-free manner
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
    Ok(0) => false, // handle connection closed...
    Ok(size) => { /* ... */ }
    Err(err) => { /* ... */ }
} {}
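Putting those pieces together, the read loop in on_client_connect might end up looking roughly like the sketch below (it keeps the question's while-match structure, and additionally only decodes the bytes that were actually read):

while match stream.read(&mut buffer) {
    // Ok(0) means the peer closed the connection: stop looping.
    Ok(0) => {
        println!("Client {} disconnected", id);
        false
    }
    Ok(size) => {
        println!(
            "Message received from {} - {}",
            id,
            str::from_utf8(&buffer[..size]).unwrap()
        );
        stream.write_all(&buffer[..size]).unwrap();
        true
    }
    Err(error) => {
        println!("Error when reading message from socket. Error = {:?}", error);
        stream.shutdown(Shutdown::Both).unwrap();
        false
    }
} {}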
I'm using this method as part of a WebSocket client implementation to read data from a serial port and send it to the server. I had to wrap the port in Arc<Mutex<_>> because I needed to share it with other methods in order to write to the serial port upon receiving a WebSocket message.
I'm using a 128-byte u8 buffer to store the data; is there any way to make the buffer dynamically sized? Wrapping in Arc<Mutex<_>> is a must.
extern crate env_logger;
extern crate ws;

use std::thread;
use ws::{listen, CloseCode, Handler, Handshake, Message, Result, Sender};

// Method of the ws::Handler impl; `self.port_handle` is an
// Arc<Mutex<SystemPort>> and `self.out` is the WebSocket sender.
fn on_open(&mut self, _: Handshake) -> Result<(), ws::Error> {
    let port_handle: Arc<Mutex<SystemPort>> = self.port_handle.clone();
    let out: Sender = self.out.clone();
    thread::spawn(move || {
        // read_from_serial(&mut port);
        let mut buffer = [0u8; 128];
        let mut msg: String;
        loop {
            let read_result: Result<usize, std::io::Error>;
            {
                read_result = port_handle
                    .lock()
                    .expect("Access port handle")
                    .read(&mut buffer);
            }
            if read_result.is_ok() {
                msg = buffer_to_string(&buffer);
                buffer = [0u8; 128];
                println!("Client sending message: '{}'", msg);
                out.send(msg).expect("Forward COM message to server");
            }
        }
    });
    Ok(())
}
Edit: This is the same code using read_to_string instead of read. The string length is completely dynamic, but the only problem is that read_result is always Err(Custom { kind: TimedOut, error: StringError("Operation timed out") }) here. As a temporary fix, I decided to leave the Result unused and used msg.len() > 0 to test for a new serial message.
extern crate env_logger;
extern crate ws;

use std::thread;
use ws::{listen, CloseCode, Handler, Handshake, Message, Result, Sender};

fn on_open(&mut self, _: Handshake) -> Result<(), ws::Error> {
    let port_handle = self.port_handle.clone();
    let out = self.out.clone();
    thread::spawn(move || {
        // read_from_serial(&mut port);
        let mut buffer = String::new();
        let mut msg;
        loop {
            {
                port_handle
                    .lock()
                    .expect("Access port handle")
                    .read_to_string(&mut buffer);
                msg = format!("{}", buffer);
            }
            if msg.len() > 0 {
                buffer.clear();
                println!("Client sending message: '{}'", msg);
                out.send(msg).expect("Forward COM message to server");
            }
        }
    });
    Ok(())
}
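One way to avoid silently discarding that Result is to treat the timeout as the expected "no data yet" case and only report other errors. A rough sketch of the read inside the loop above (same port_handle and buffer as in the snippet):

use std::io::ErrorKind;

match port_handle
    .lock()
    .expect("Access port handle")
    .read_to_string(&mut buffer)
{
    Ok(_) => {}
    // A timeout just means no more data arrived within the configured
    // port timeout; any data read so far may already be in `buffer`,
    // which is what the workaround above relies on.
    Err(ref e) if e.kind() == ErrorKind::TimedOut => {}
    Err(e) => eprintln!("serial read failed: {:?}", e),
}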
Edit: I've included the following minimal example and removed all other unnecessary code.
extern crate env_logger;
extern crate serial;

use serial::prelude::*;
use std::io::Read;
use std::thread;
use std::time::Duration;

fn main() {
    let client = thread::spawn(move || {
        // setting up serial port
        let mut port = serial::open("/dev/tnt0").expect("Open serial port");
        let new_timeout = Duration::from_millis(1000);
        port.set_timeout(new_timeout).expect("Set port timeout");
        // setting up buffer
        let mut buffer = String::new();
        // looping to continuously read serial data
        loop {
            // unused Result returned by read_to_string
            // because when used, it always is an Err variant
            // `Custom { kind: TimedOut, error: StringError("Operation timed out") }`
            port.read_to_string(&mut buffer);
            // using string length to test for valid data instead of using Ok/Err
            if buffer.len() > 0 {
                println!("Client sending message: '{}'", buffer);
                // clearing buffer for next iteration
                buffer.clear();
            }
        }
    });

    // Wait for the worker threads to finish what they are doing
    let _ = client.join();
    println!("All done.")
}
I am dabbling in tokio-core and have figured out how to spawn an event loop. However, there are two things I am not sure of: how to gracefully exit the event loop, and how to exit a stream running inside an event loop. For example, consider this simple piece of code, which spawns two listeners into the event loop and waits for another thread to indicate an exit condition:
extern crate tokio_core;
extern crate futures;

use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Stream, Future};
use std::thread;
use std::time::Duration;
use std::sync::mpsc::channel;

fn main() {
    let (get_tx, get_rx) = channel();

    let j = thread::spawn(move || {
        let mut core = Core::new().unwrap();
        let (tx, rx) = unbounded();
        get_tx.send(tx).unwrap(); // <<<<<<<<<<<<<<< (1)

        // Listener-0
        {
            let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:44444").unwrap(),
                                      &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        // Listener-1
        {
            let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:55555").unwrap(),
                                      &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        let work = rx.for_each(|v| {
            if v {
                // (3) I want to shut down listener-0 above and release its resources
                Ok(())
            } else {
                Err(()) // <<<<<<<<<<<<<<< (2)
            }
        });

        let _ = core.run(work);
        println!("Exiting event loop thread");
    });

    let tx = get_rx.recv().unwrap();
    thread::sleep(Duration::from_secs(2));
    println!("Want to terminate listener-0"); // <<<<<< (3)
    tx.send(true).unwrap();
    thread::sleep(Duration::from_secs(2));
    println!("Want to exit event loop");
    tx.send(false).unwrap();
    j.join().unwrap();
}
So, say, after the sleep in the main thread I want a clean exit of the event loop thread. Currently I send something to the event loop to make it exit, and thus release the thread.
However, both (1) and (2) feel hacky: I am forcing an error as an exit condition. My questions are:
1) Am I doing it right? If not, what is the correct way to gracefully exit the event loop thread?
2) I don't even know how to do (3), i.e. indicate a condition externally to shut down listener-0 and free all of its resources. How do I achieve this?
The event loop (core) exits when it is no longer being turned (e.g. by run()) or when it is forgotten (drop()ped); there is no synchronous exit. core.run() returns, and stops turning the loop, when the Future passed to it completes.
A Stream completes by yielding None (marked with (3) in the code below).
When, for example, a TCP connection is closed, the Stream representing it completes, and the other way around.
extern crate tokio_core;
extern crate futures;

use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Async, Stream, Future, Poll};
use std::thread;
use std::time::Duration;

struct CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    stream: S,
    completer: C,
}

fn stream_completion_pact<S, C>(s: S, c: C) -> CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    CompletionPact {
        stream: s,
        completer: c,
    }
}

impl<S, C> Stream for CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    type Item = S::Item;
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<S::Item>, S::Error> {
        match self.completer.poll() {
            Ok(Async::Ready(None)) |
            Err(_) |
            Ok(Async::Ready(Some(_))) => {
                // We are done, forget us
                Ok(Async::Ready(None)) // <<<<<< (3)
            },
            Ok(Async::NotReady) => {
                self.stream.poll()
            },
        }
    }
}

fn main() {
    // unbounded() is the equivalent of a Stream made from a channel();
    // directly create it in this thread instead of receiving a Sender
    let (tx, rx) = unbounded::<()>();

    // A second one to cause forgetting the listener
    let (l0tx, l0rx) = unbounded::<()>();

    let j = thread::spawn(move || {
        let mut core = Core::new().unwrap();

        // Listener-0
        {
            let l = TcpListener::bind(
                    &SocketAddr::from_str("127.0.0.1:44444").unwrap(),
                    &core.handle())
                .unwrap();

            // wrap the Stream of incoming connections (which usually doesn't
            // complete) into a Stream that completes when the
            // other side is drop()ed or sent on
            let fe = stream_completion_pact(l.incoming(), l0rx)
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        // Listener-1
        {
            let l = TcpListener::bind(
                    &SocketAddr::from_str("127.0.0.1:55555").unwrap(),
                    &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        let _ = core.run(rx.into_future());
        println!("Exiting event loop thread");
    });

    thread::sleep(Duration::from_secs(2));
    println!("Want to terminate listener-0");
    // A drop() will result in the rx side Stream being completed,
    // which is indicated by Ok(Async::Ready(None)).
    // Our wrapper behaves the same when something is received.
    // When the event loop encounters a Stream that is complete, it forgets
    // about it, which propagates to a drop() that close()es the file
    // descriptor, which closes the port if nothing else uses it.
    l0tx.send(()).unwrap(); // alternatively: drop(l0tx);
    // Note that this is async and is only the signal
    // that starts the forgetting.

    thread::sleep(Duration::from_secs(2));
    println!("Want to exit event loop");
    // Same concept. The reception or drop() will cause Stream completion.
    // A completed Future will cause run() to return.
    tx.send(()).unwrap();

    j.join().unwrap();
}
I implemented graceful shutdown via a oneshot channel.
The trick was to use a oneshot channel to cancel the TCP listener and to select! over the two futures. Note that I'm using tokio 0.2 and futures 0.3 in the example below.
use futures::channel::oneshot;
use futures::{FutureExt, StreamExt};
use log::info;
use std::thread;
use tokio::net::TcpListener;

pub struct ServerHandle {
    // This is the thread in which the server will block
    thread: thread::JoinHandle<()>,
    // This switch can be used to trigger shutdown of the server.
    kill_switch: oneshot::Sender<()>,
}

impl ServerHandle {
    pub fn stop(self) {
        self.kill_switch.send(()).unwrap();
        self.thread.join().unwrap();
    }
}

pub fn run_server() -> ServerHandle {
    let (kill_switch, kill_switch_receiver) = oneshot::channel::<()>();
    let thread = thread::spawn(move || {
        info!("Server thread begun!!!");
        let mut runtime = tokio::runtime::Builder::new()
            .basic_scheduler()
            .enable_all()
            .thread_name("Tokio-server-thread")
            .build()
            .unwrap();
        runtime.block_on(async {
            server_prog(kill_switch_receiver).await.unwrap();
        });
        info!("Server finished!!!");
    });
    ServerHandle {
        thread,
        kill_switch,
    }
}

async fn server_prog(kill_switch_receiver: oneshot::Receiver<()>) -> std::io::Result<()> {
    let addr = "127.0.0.1:12345";
    let addr: std::net::SocketAddr = addr.parse().unwrap();
    let mut listener = TcpListener::bind(&addr).await?;
    let mut kill_switch_receiver = kill_switch_receiver.fuse();
    let mut incoming = listener.incoming().fuse();
    loop {
        futures::select! {
            x = kill_switch_receiver => {
                break;
            },
            optional_new_client = incoming.next() => {
                if let Some(new_client) = optional_new_client {
                    let peer_socket = new_client?;
                    info!("Client connected!");
                    // `process_client`, `db` and `peers` come from the author's
                    // application (see the linked repository below).
                    let peer = process_client(peer_socket, db.clone());
                    peers.lock().unwrap().push(peer);
                } else {
                    info!("No more incoming connections.");
                    break;
                }
            },
        };
    }
    Ok(())
}
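A sketch of how the handle is driven from the caller's side (env_logger is just one assumed way to make the info! output visible; any logger works):

fn main() {
    // Initialize some logger so the info! calls above are visible.
    env_logger::init();
    // Start the server on its own thread with its own Tokio runtime.
    let handle = run_server();
    // ... do real work here, or wait for a shutdown condition ...
    std::thread::sleep(std::time::Duration::from_secs(5));
    // Fire the kill switch and join the server thread.
    handle.stop();
}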
Hope this helps others (or future me ;)).
My code lives here:
https://github.com/windelbouwman/lognplot/blob/master/lognplot/src/server/server.rs