How to test a method with an asynchronous infinite loop? - rust

I have a struct that works as a postmaster for a server application: since I don't know how many clients will connect I have the postmaster listen on a socket and start a new struct with business logic whenever a client opens a connection.
But this means I don't know how to implement integration tests for the Postmaster. There is a public "main" method that hangs indefinitely while waiting for connections:
#[tokio::main]
pub async fn start(self) -> Result<(), GenericError> {
    // https://stackoverflow.com/a/55874334/70600
    let mut this = self;
    loop {
        let tmp = this.configuration.clone().hostaddr();
        println!("{:?}", tmp);
        let listener = TcpListener::bind(tmp).await?;
        match listener.accept().await {
            Ok((stream, _addr)) => {
                let backend = Backend::new(&this.configuration, stream);
                this.backends.push(backend);
            }
            Err(e) => todo!("Log error accepting client connection."),
        }
    }
    Ok(())
}
This is my test:
#[test]
fn test_server_default_configuration() {
    let postmaster = Postmaster::default();
    let started = postmaster.start();
    assert!(started.is_ok())
}
Except the assert is obviously never reached. How can I test this async code?

You can start the postmaster in a separate thread, connect to it, send it some commands, and check the responses:
#[test]
fn test_server_default_configuration() {
    let postmaster = Postmaster::default();
    let thr = std::thread::spawn(move || postmaster.start());

    // connect to the configured address, test the responses...
    // ...
    // finally, send the postmaster a "quit" command

    let result = thr.join().unwrap();
    assert!(result.is_ok())
}
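
As a rough sketch of what the commented-out middle part could look like, assuming (these are assumptions, not part of the question) that the default configuration listens on 127.0.0.1:5432 and that the postmaster understands plain-text PING and QUIT commands:

use std::io::Write;
use std::net::TcpStream;
use std::thread;
use std::time::Duration;

#[test]
fn test_server_default_configuration() {
    let postmaster = Postmaster::default();
    let thr = std::thread::spawn(move || postmaster.start());

    // Give the listener a moment to bind before connecting.
    thread::sleep(Duration::from_millis(100));

    // Hypothetical address and protocol; adjust to whatever
    // Postmaster::default() actually binds and understands.
    let mut client = TcpStream::connect("127.0.0.1:5432").expect("could not reach postmaster");
    client.write_all(b"PING\n").unwrap();
    // ... read the response and assert on it here ...

    // Ask the postmaster to shut down so start() returns and join() completes.
    client.write_all(b"QUIT\n").unwrap();

    let result = thr.join().unwrap();
    assert!(result.is_ok())
}

Note that start() as written never breaks out of its loop, so the postmaster would need to handle the quit command and return for join() to complete.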

Related

How do I set up a listening socket using Rust without having it get dropped by garbage collector

I want to set up a socket to listen for incoming connections and do some logic based on incoming messages.
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() {
    set_listening_server("192.168.80.180", 2048);
}

fn set_listening_server(ip: &str, port: i32) {
    assert!(port > 1000 && ip.len() > 0);
    println!("Function is OK!");
    let addr = format!("{}:{}", ip, port);
    let listener = match TcpListener::bind(&addr) {
        Ok(listener) => {
            println!("Listening on {}", &addr);
            loop {}
        }
        Err(e) => {
            println!("Error binding to {}: {}", &addr, e);
            return;
        }
    };
}
The above is how I have set up the socket. At first I had the loop {} right after the set_listening_server call, but I figured that as soon as the function finished executing the socket was dropped. Moving the loop inside the function solves those issues, but is there perhaps a way to declare listener globally?
I want to have separate functions to handle the steps of the communication, for example something like:
fn service_connection() {
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("Connection established!");
    }
}
How can I access listener if it is declared inside set_listening_server?
Would calling the service_connection function inside the loop be the correct approach?
EDIT:
If I want to return the socket how can I do it correctly?
fn set_listening_server(ip: &str, port: i32) -> TcpListener {
    assert!(port > 1000 && ip.len() > 0);
    println!("Function is OK!");
    let addr = format!("{}:{}", ip, port);
    let listener = TcpListener::bind(&addr);
    return listener;
}
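
One way to make that compile, sketched here as a suggestion rather than a definitive answer: have set_listening_server return io::Result<TcpListener> (TcpListener::bind already returns a Result), and pass the listener to whatever function services connections:

use std::io;
use std::net::TcpListener;

fn set_listening_server(ip: &str, port: i32) -> io::Result<TcpListener> {
    assert!(port > 1000 && ip.len() > 0);
    let addr = format!("{}:{}", ip, port);
    // `?` hands a bind error back to the caller instead of swallowing it here.
    let listener = TcpListener::bind(&addr)?;
    println!("Listening on {}", addr);
    Ok(listener)
}

// Borrowing the listener keeps it alive for as long as main holds it,
// so nothing is dropped while connections are being serviced.
fn service_connection(listener: &TcpListener) {
    for stream in listener.incoming() {
        let _stream = stream.unwrap();
        println!("Connection established!");
    }
}

fn main() {
    let listener = set_listening_server("192.168.80.180", 2048)
        .expect("could not bind listening socket");
    service_connection(&listener);
}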

How to properly use self when creating a new thread from inside a method in Rust

I am creating a server that stores the TcpStream objects inside a Vec to be used later. The problem is that the function which listens for new connections and adds them to the Vec runs forever in a separate thread and doesn't allow other threads to read the Vec.
pub struct Server {
    pub connections: Vec<TcpStream>,
}

impl Server {
    fn listen(&mut self) {
        println!("Server is listening on port 8080");
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        loop {
            let stream = listener.accept().unwrap().0;
            println!("New client connected: {}", stream.peer_addr().unwrap());
            // should block for write here
            self.connections.push(stream);
            // should release write lock
        }
    }

    pub fn run(self) {
        let arc_self = Arc::new(RwLock::new(self));
        let arc_self_clone = arc_self.clone();
        // blocks the lock for writing forever because of listen()
        let listener_thread = thread::spawn(move || arc_self_clone.write().unwrap().listen());
        loop {
            let mut input = String::new();
            io::stdin().read_line(&mut input).unwrap();
            if input.trim() == "1" {
                // can't read because lock blocked for writing
                for c in &arc_self.read().unwrap().connections {
                    println!("testing...");
                }
            }
        }
    }
}
In the current example the server accepts connections but does not allow the main thread to read the connections vector. I thought about making the listen function run at a fixed interval (1-5 s) so that other threads can read the vector in the meantime, but listener.accept() blocks the thread anyway, so I don't think that is a valid solution. I would also prefer that it run forever if possible, blocking access to the vector only when it needs to write (when a new client connects), and not block other threads' read access to the connections vector while it waits for clients to connect.
You could just wrap connections in a RwLock instead of the entire self, as shown below, but I would recommend using a proper synchronisation primitive like a channel.
pub struct Server {
    pub connections: RwLock<Vec<TcpStream>>,
}

impl Server {
    fn listen(&self) {
        println!("Server is listening on port 8080");
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        loop {
            let stream = listener.accept().unwrap().0;
            println!("New client connected: {}", stream.peer_addr().unwrap());
            // should block for write here
            self.connections.write().unwrap().push(stream);
            // should release write lock
        }
    }

    pub fn run(self) {
        let arc_self = Arc::new(self);
        let arc_self_clone = arc_self.clone();
        let listener_thread = thread::spawn(move || arc_self_clone.listen());
        loop {
            let mut input = String::new();
            io::stdin().read_line(&mut input).unwrap();
            if input.trim() == "1" {
                for c in &*arc_self.connections.try_read().unwrap() {
                    println!("testing...");
                }
            }
        }
    }
}
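
For comparison, a minimal sketch of the channel-based design mentioned above: the accept loop owns only the Sender and the main loop owns the Vec outright, so no lock is needed at all. The Server struct is dropped here just to keep the sketch short:

use std::io;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<TcpStream>();

    // The accept loop hands each new connection over the channel
    // instead of pushing it into a shared Vec.
    thread::spawn(move || {
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        loop {
            let (stream, addr) = listener.accept().unwrap();
            println!("New client connected: {}", addr);
            tx.send(stream).unwrap();
        }
    });

    // The main loop owns the Vec, so reading it never blocks on the listener.
    let mut connections: Vec<TcpStream> = Vec::new();
    loop {
        let mut input = String::new();
        io::stdin().read_line(&mut input).unwrap();
        // Pick up any connections accepted since the last command.
        while let Ok(stream) = rx.try_recv() {
            connections.push(stream);
        }
        if input.trim() == "1" {
            for _c in &connections {
                println!("testing...");
            }
        }
    }
}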

No response while writing my first test for a `Quinn` and `Qp2p` based module

I have the following code that uses QP2P for network communication.
impl Broker {
    pub async fn new(
        config: Config
    ) -> Result<Self, EndpointError> {
        let (main_endpoint, main_incoming, _) = Endpoint::new_peer(
            local_addr(),
            &[],
            config,
        ).await?;
        let mut broker = Self {
            main_endpoint,
            main_incoming
        };
        broker.on_message();
        Ok(broker)
    }

    async fn on_message(&mut self) -> Result<(), RecvError> {
        // loop over incoming connections
        while let Some((connection, mut incoming_messages)) = self.main_incoming.next().await {
            let src = connection.remote_address();
            // loop over incoming messages
            while let Some(bytes) = incoming_messages.next().await? {
                println!("Received from {:?} --> {:?}", src, bytes);
                println!();
            }
        }
        Ok(())
    }
}
In the same file I also want to test the above by sending a message and seeing whether on_message receives it.
#[tokio::test]
async fn basic_usage() -> Result<()> {
    const MSG_HELLO: &str = "HELLO";

    let config = Config {
        idle_timeout: Duration::from_secs(60 * 60).into(), // 1 hour idle timeout.
        ..Default::default()
    };
    let broker = Broker::new(config.clone(), None).await?;
    let (node, mut incoming_conns, _contact) = Endpoint::new_peer(
        SocketAddr::from((Ipv4Addr::LOCALHOST, 0)),
        &[],
        config.clone(),
    ).await?;

    {
        let msg = Bytes::from(MSG_HELLO);
        println!("Sending to {:?} --> {:?}\n", broker.main_endpoint, msg);
        node.connect_to(&broker.main_endpoint.local_addr()).await?.0.send(msg.clone()).await?;
    }

    Ok(())
}
What ends up happening is that the broker's println never triggers. Is calling on_message during initialization and expecting it to receive messages correct? If not, how can I write the most basic test of checking whether a message is received, using qp2p endpoints?
I'm not familiar enough with the frameworks you're using to answer fully, but maybe I can get you pointed in the right direction. I see two likely issues:
Futures don't do anything until polled.
Basically, you call await on most of your async functions, but you never await or poll() the Future returned by on_message(), so it's essentially a no-op and the body of on_message() never runs.
I don't think this is structured correctly.
From looking at it, assuming you did await the above call, by the time Broker::new() finishes in your test, all of on_message() would have completed (meaning it wouldn't pick up later messages).
You may wish to spawn a thread for handling incoming messages. There are probably other ways you can do this with futures by adjusting how you poll them. At the least, you probably want to take the call to on_message() out of Broker::new() and await it after the message is sent in your code, similar to how the tests in qp2p are written:
#[tokio::test(flavor = "multi_thread")]
async fn single_message() -> Result<()> {
    let (peer1, mut peer1_incoming_connections, _) = new_endpoint().await?;
    let peer1_addr = peer1.public_addr();

    let (peer2, _, _) = new_endpoint().await?;
    let peer2_addr = peer2.public_addr();

    // Peer 2 connects and sends a message
    let (connection, _) = peer2.connect_to(&peer1_addr).await?;
    let msg_from_peer2 = random_msg(1024);
    connection.send(msg_from_peer2.clone()).await?;

    // Peer 1 gets an incoming connection
    let mut peer1_incoming_messages = if let Ok(Some((connection, incoming))) =
        peer1_incoming_connections.next().timeout().await
    {
        assert_eq!(connection.remote_address(), peer2_addr);
        incoming
    } else {
        bail!("No incoming connection");
    };

    // Peer 2 gets an incoming message
    if let Ok(message) = peer1_incoming_messages.next().timeout().await {
        assert_eq!(message?, Some(msg_from_peer2));
    } else {
        bail!("No incoming message");
    }

    Ok(())
}
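
If the broker should instead keep listening in the background, one option is to move the receive loop into a spawned Tokio task. The sketch below reuses only the types that appear in the question, so treat the exact qp2p calls as assumptions; note that main_incoming is moved into the task and can no longer be stored on Self:

impl Broker {
    pub async fn new(config: Config) -> Result<Self, EndpointError> {
        let (main_endpoint, mut main_incoming, _) =
            Endpoint::new_peer(local_addr(), &[], config).await?;

        // The receive loop runs concurrently; Broker::new() returns right away
        // while this task keeps polling for connections and messages.
        tokio::spawn(async move {
            while let Some((connection, mut incoming_messages)) = main_incoming.next().await {
                let src = connection.remote_address();
                while let Ok(Some(bytes)) = incoming_messages.next().await {
                    println!("Received from {:?} --> {:?}", src, bytes);
                }
            }
        });

        Ok(Self { main_endpoint })
    }
}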

Receive message from channel between modules

I have four modules. The client is sending messages and the server is receiving messages. Once the server receives the message, it tries to send the message to the MPSC channel. I put the receiver in the other .rs file where I intend to receive the message.
I am not getting any message on the receiver side.
Maybe an infinite loop on the server side creates a problem, but is there a way to make this channel communication work?
client.rs
use std::io::prelude::*;
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::sync::mpsc;

pub fn tcp_datagram_client() {
    pub static FILE_PATH: &'static str = "/tmp/datagram.sock";
    let socket = UnixDatagram::unbound().unwrap();
    match socket.connect(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            println!("Couldn't connect: {:?}", e);
            return;
        }
    };
    println!("TCP client Connected to TCP Server {:?}", socket);
    loop {
        socket
            .send(b"Hello from client to server")
            .expect("recv function failed");
    }
}

fn main() {
    tcp_datagram_client();
}
server.rs
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::str::from_utf8;
use std::sync::mpsc::Sender;

fn unlink_socket(path: impl AsRef<Path>) {
    let path = path.as_ref();
    if path.exists() {
        if let Err(e) = std::fs::remove_file(path) {
            eprintln!("Couldn't remove the file: {:?}", e);
        }
    }
}

static FILE_PATH: &'static str = "/tmp/datagram.sock";

pub fn tcp_datagram_server(tx: Sender<String>) {
    unlink_socket(FILE_PATH);
    let socket = match UnixDatagram::bind(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            eprintln!("Couldn't bind: {:?}", e);
            return;
        }
    };
    let mut buf = vec![0; 1024];
    println!("Waiting for client to connect...");
    loop {
        let received_bytes = socket.recv(&mut buf).expect("recv function failed");
        println!("Received {:?}", received_bytes);
        let received_message = from_utf8(&buf).expect("utf-8 convert failed");
        tx.send(received_message.to_string());
    }
}
message_receiver.rs
use crate::server;
use std::sync::mpsc;

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    server::tcp_datagram_server(tx);
    let message_from_tcp_server = rx.recv().unwrap();
    println!("{:?}", message_from_tcp_server);
}
main.rs
mod server;
mod message_receiver;

fn main() {
    message_receiver::handle_messages();
}
Once the TCP client is connected:
TCP client Connected to TCP Server UnixDatagram { fd: 3, local: (unnamed), peer: "/tmp/datagram.sock" (pathname) }
I receive no messages on the channel receiver end:
Waiting for client to connect...
Maybe an infinite loop on the server side creates a problem
Yes, quite literally: your server code runs an infinite loop to continuously handle messages from the client(s), so the call to tcp_datagram_server never returns.
but is there a way to make this channel communication work?
Of course; it seems you are simply missing a second thread for your message_receiver. Wrapping tcp_datagram_server(tx) in std::thread::spawn should do it. You could also add a loop that keeps processing requests, matching the one in tcp_datagram_server:
pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(move || server::tcp_datagram_server(tx));
    loop {
        let message_from_tcp_server = rx.recv().unwrap();
        println!("{}", message_from_tcp_server);
    }
}

Rust TCP socket server only working with one connection

I'm new to Rust and I'm trying to configure a simple TCP socket server which will listen for connections and reply with the same message it received.
The thing is, this works as I want except when connecting with multiple clients. The first client that connects sends and receives messages fine, but if a second client connects, the first one keeps working while the second never receives messages; in fact, the message never reaches the code that handles it. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket, with the same content as the last message it sent.
I am pretty sure I did something wrong in my code but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
    thread_pool: ThreadPool
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
            thread_pool: ThreadPool::new(10)
        };
        server
    }

    pub fn start(&self) {
        let listener = TcpListener::bind(&self.url).expect("Could not start the server");
        println!("Server started succesfully");
        for stream in listener.incoming() {
            match stream {
                Ok(stream) => {
                    let mut self_clone = self.clone();
                    self.thread_pool.execute(move || {
                        self_clone.on_client_connect(stream.try_clone().unwrap());
                    });
                }
                Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
            }
        }
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        println!("Client connected from {}", stream.local_addr().unwrap());
        let mut id = self.id.write().unwrap();
        {
            *id += 1;
        }
        self.connections
            .write()
            .unwrap()
            .insert(*id, stream.try_clone().unwrap());
        let mut stream = stream.try_clone().unwrap();
        let mut buffer = [0; 1024];
        while match stream.read(&mut buffer) {
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
                true
            }
            Err(error) => {
                println!(
                    "Error when reading message from socket. Error = {:?}",
                    error
                );
                stream.shutdown(Shutdown::Both).unwrap();
                false
            }
        } {}
    }
}
And in my main.rs I'm just calling the connect function and the server starts working
In this piece of code in your on_client_connect function, you're acquiring a write lock for self.id:
let mut id = self.id.write().unwrap();
{
    *id += 1;
}
However, the id variable, which holds the lock, is not released until it drops at the end of the function. This means that all other clients will wait for this lock to be released, which won't happen until the function currently holding the lock has completed (which happens when that client disconnects).
You can solve this by rewriting the above code to only keep the lock while incrementing, and then storing the ID value in a variable:
let id: u32 = {
    let mut id_lock = self.id.write().unwrap();
    *id_lock += 1;
    *id_lock
    // id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};

struct Server {
    id: Arc<AtomicU32>,
    // ...
}

// Fetch the previous value, then increment `self.id` by one, in a thread-safe and lock-free manner
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
    Ok(0) => false, // handle connection closed...
    Ok(size) => { /* ... */ }
    Err(err) => { /* ... */ }
} {}
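
Putting both fixes together, on_client_connect might look roughly like the sketch below (assuming the id field has been changed to Arc<AtomicU32> as suggested above, with Ordering imported):

fn on_client_connect(&mut self, stream: TcpStream) {
    println!("Client connected from {}", stream.local_addr().unwrap());

    // Lock-free id allocation: no guard is held while this client is serviced.
    let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);

    self.connections
        .write()
        .unwrap()
        .insert(id, stream.try_clone().unwrap());

    let mut stream = stream.try_clone().unwrap();
    let mut buffer = [0; 1024];
    loop {
        match stream.read(&mut buffer) {
            // The peer closed the connection; stop instead of spinning on zero-byte reads.
            Ok(0) => break,
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer[0..size]).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
            }
            Err(error) => {
                println!("Error when reading message from socket. Error = {:?}", error);
                stream.shutdown(Shutdown::Both).unwrap();
                break;
            }
        }
    }
}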
