web::Payload::next() hangs when websocket connection is interrupted - rust

I'm writing a websocket server without using Actix actors and am struggling with a strange hang on web::Payload::next() when the connection is aborted without the client sending a close message. Here is a simplified example:
async fn handle_connection(
    req: HttpRequest,
    stream: web::Payload,
) -> ActixResult<HttpResponse> {
    let mut res = ws::handshake(req.head())?;
    let (tx, rx) =
        mpsc::channel::<Result<Bytes, ActixError>>(FRAME_BUFFER_CAPACITY);
    spawn_local(handle_frames(stream));
    res.message_body(BodyStream::new(rx).boxed())
        .map(HttpResponse::from)
        .map_err(ActixError::from)
}

async fn handle_frames(mut stream: web::Payload) -> Result<()> {
    while let Some(chunk) = stream.next().await { // Hangs here.
        ...
    }
}
How can I resolve this issue?
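One mitigation to consider is bounding each read with a timeout so that an aborted connection cannot park the task forever. This is only a sketch, not a confirmed fix for this actix-web behavior; the 10-second heartbeat window and the decision to treat a timeout as a dead peer are assumptions:

use std::time::Duration;

use futures_util::StreamExt;
use tokio::time::timeout;

async fn handle_frames(mut stream: web::Payload) -> Result<()> {
    loop {
        match timeout(Duration::from_secs(10), stream.next()).await {
            // A frame arrived; process it as before.
            Ok(Some(_chunk)) => { /* process the chunk */ }
            // The payload stream ended cleanly.
            Ok(None) => break,
            // No data within the heartbeat window; assume the peer is gone.
            Err(_) => break,
        }
    }
    Ok(())
}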

Related

No response while writing my first test for a `Quinn` and `Qp2p` based module

I have the following code that uses QP2P for network communication.
impl Broker {
    pub async fn new(config: Config) -> Result<Self, EndpointError> {
        let (main_endpoint, main_incoming, _) = Endpoint::new_peer(
            local_addr(),
            &[],
            config,
        ).await?;
        let mut broker = Self {
            main_endpoint,
            main_incoming,
        };
        broker.on_message();
        Ok(broker)
    }

    async fn on_message(&mut self) -> Result<(), RecvError> {
        // loop over incoming connections
        while let Some((connection, mut incoming_messages)) = self.main_incoming.next().await {
            let src = connection.remote_address();
            // loop over incoming messages
            while let Some(bytes) = incoming_messages.next().await? {
                println!("Received from {:?} --> {:?}", src, bytes);
                println!();
            }
        }
        Ok(())
    }
}
In the same file I also want to test the above by sending a message and seeing whether on_message will receive it.
#[tokio::test]
async fn basic_usage() -> Result<()> {
    const MSG_HELLO: &str = "HELLO";
    let config = Config {
        idle_timeout: Duration::from_secs(60 * 60).into(), // 1 hour idle timeout.
        ..Default::default()
    };
    let broker = Broker::new(config.clone(), None).await?;
    let (node, mut incoming_conns, _contact) = Endpoint::new_peer(
        SocketAddr::from((Ipv4Addr::LOCALHOST, 0)),
        &[],
        config.clone(),
    ).await?;
    {
        let msg = Bytes::from(MSG_HELLO);
        println!("Sending to {:?} --> {:?}\n", broker.main_endpoint, msg);
        node.connect_to(&broker.main_endpoint.local_addr()).await?.0.send(msg.clone()).await?;
    }
    Ok(())
}
What ends up happening is that the broker's println never triggers. Is calling on_message during initialization and expecting it to receive messages correct? If not, how can I write the most basic test of checking whether a message is received, using qp2p endpoints?
I'm not familiar enough with the frameworks you're using to answer fully, but maybe I can point you in the right direction. I see two likely issues:
Futures don't do anything until polled.
Basically, you call await on most of your async functions, but you never await or poll() the Future returned by on_message(), so it's effectively a no-op and the body of on_message() never runs.
I don't think this is structured correctly.
Assuming you did await the above call, by the time Broker::new() finished in your test, all of on_message() would already have completed (meaning it wouldn't pick up later messages).
You may wish to spawn a thread (or tokio task; see the sketch after the test below) for handling incoming messages. There are probably other ways to do this with futures by adjusting how you poll them. At the least, you probably want to take the call to on_message() out of Broker::new() and await it after the message is sent, similar to how the tests in qp2p are written:
#[tokio::test(flavor = "multi_thread")]
async fn single_message() -> Result<()> {
    let (peer1, mut peer1_incoming_connections, _) = new_endpoint().await?;
    let peer1_addr = peer1.public_addr();
    let (peer2, _, _) = new_endpoint().await?;
    let peer2_addr = peer2.public_addr();

    // Peer 2 connects and sends a message
    let (connection, _) = peer2.connect_to(&peer1_addr).await?;
    let msg_from_peer2 = random_msg(1024);
    connection.send(msg_from_peer2.clone()).await?;

    // Peer 1 gets an incoming connection
    let mut peer1_incoming_messages = if let Ok(Some((connection, incoming))) =
        peer1_incoming_connections.next().timeout().await
    {
        assert_eq!(connection.remote_address(), peer2_addr);
        incoming
    } else {
        bail!("No incoming connection");
    };

    // Peer 1 gets an incoming message
    if let Ok(message) = peer1_incoming_messages.next().timeout().await {
        assert_eq!(message?, Some(msg_from_peer2));
    } else {
        bail!("No incoming message");
    }

    Ok(())
}
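As a sketch of the spawn suggestion above, reusing the question's qp2p types (Config, EndpointError, Endpoint, local_addr): move the incoming-connections stream into a background task so the loop keeps running after new returns. Note the main_incoming field moves into the task, so the struct would keep only the endpoint; error handling is elided.

impl Broker {
    pub async fn new(config: Config) -> Result<Self, EndpointError> {
        let (main_endpoint, mut main_incoming, _) =
            Endpoint::new_peer(local_addr(), &[], config).await?;

        // Spawn the receive loop instead of calling it un-awaited, so it
        // is actually polled and outlives `new`.
        tokio::spawn(async move {
            while let Some((connection, mut incoming_messages)) = main_incoming.next().await {
                let src = connection.remote_address();
                while let Ok(Some(bytes)) = incoming_messages.next().await {
                    println!("Received from {:?} --> {:?}", src, bytes);
                }
            }
        });

        Ok(Self { main_endpoint })
    }
}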

How to connect a bevy game to an external TCP server using tokio's async TcpStream?

I want to send events between the game client and server, and I already have that working, but I do not know how to do it with bevy.
I have to use tokio's async TcpStream, because I need to be able to split the stream into an OwnedWriteHalf and OwnedReadHalf using stream.into_split().
My first idea was to spawn a thread that handles the connection and sends the received events to a queue using mpsc::channel.
I then include this queue in a bevy resource using app.insert_resource(Queue) and pull events from it in the game loop.
the Queue:
use tokio::sync::mpsc;

pub enum Instruction {
    Push(GameEvent),
    Pull(mpsc::Sender<Option<GameEvent>>),
}

#[derive(Clone, Debug)]
pub struct Queue {
    sender: mpsc::Sender<Instruction>,
}

impl Queue {
    pub fn init() -> Self {
        let (tx, rx) = mpsc::channel(1024);
        init(rx);
        Self { sender: tx }
    }

    pub async fn send(&self, event: GameEvent) {
        self.sender.send(Instruction::Push(event)).await.unwrap();
    }

    pub async fn pull(&self) -> Option<GameEvent> {
        println!("new pull");
        let (tx, mut rx) = mpsc::channel(1);
        self.sender.send(Instruction::Pull(tx)).await.unwrap();
        rx.recv().await.unwrap()
    }
}

fn init(mut rx: mpsc::Receiver<Instruction>) {
    tokio::spawn(async move {
        let mut queue: Vec<GameEvent> = Vec::new();
        loop {
            match rx.recv().await.unwrap() {
                Instruction::Push(ev) => {
                    queue.push(ev);
                }
                Instruction::Pull(sender) => {
                    sender.send(queue.pop()).await.unwrap();
                }
            }
        }
    });
}
But because all of this has to be async, I have to block on the pull() function in the sync game loop.
I do this using the futures-lite crate:
fn event_pull(communication: Res<Communication>) {
    let ev = future::block_on(communication.event_queue.pull());
    println!("got event: {:?}", ev);
}
And this works fine, BUT after around 5 seconds the whole program just halts and does not receive any more events. It seems that future::block_on() blocks indefinitely.
Having the main function, in which bevy::prelude::App gets built and run, be the async tokio::main function might also be part of the problem here.
It would probably be best to wrap the async TcpStream initialisation and tokio::sync::mpsc::Sender, and thus also Queue::pull, into synchronous functions, but I do not know how to do this.
Can anyone help?
How to reproduce
The repo can be found here
Just compile both server and client, then run them in that order.
I got it to work by replacing every tokio::sync::mpsc with crossbeam::channel (which might be a problem, as it blocks) and by manually initializing the tokio runtime.
The init code now looks like this:
pub struct Communicator {
    pub event_bridge: bridge::Bridge,
    pub event_queue: event_queue::Queue,
    _runtime: Runtime,
}

impl Communicator {
    pub fn init(ip: &str) -> Self {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_io()
            .build()
            .unwrap();
        let (bridge, queue, game_rx) = rt.block_on(async move {
            let socket = TcpStream::connect(ip).await.unwrap();
            let (read, write) = socket.into_split();
            let reader = TcpReader::new(read);
            let writer = TcpWriter::new(write);
            let (bridge, tcp_rx, game_rx) = bridge::Bridge::init();
            reader::init(bridge.clone(), reader);
            writer::init(tcp_rx, writer);
            let event_queue = event_queue::Queue::init();
            (bridge, event_queue, game_rx)
        });
        // Forward game_rx events to the queue for the game loop.
        let eq_clone = queue.clone();
        rt.spawn(async move {
            loop {
                let event = game_rx.recv().unwrap();
                eq_clone.send(event);
            }
        });
        Self {
            event_bridge: bridge,
            event_queue: queue,
            _runtime: rt,
        }
    }
}
And main.rs looks like this:
fn main() {
    let communicator = communication::Communicator::init("0.0.0.0:8000");
    communicator.event_bridge.push_tcp(TcpEvent::Register { name: String::from("luca") });

    App::new()
        .insert_resource(communicator)
        .add_system(event_pull)
        .add_plugins(DefaultPlugins)
        .run();
}

fn event_pull(communication: Res<communication::Communicator>) {
    let ev = communication.event_queue.pull();
    if let Some(ev) = ev {
        println!("got event: {:?}", ev);
    }
}
Perhaps there might be a better solution.
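One possibly cleaner variant, sketched under the assumption of a pre-0.9 bevy (where any Send + Sync type can be inserted as a resource) and reusing the question's GameEvent type: expose the crossbeam receiver directly and drain it with try_recv(), which never blocks the game loop. The EventQueue resource name here is hypothetical.

use bevy::prelude::*;
use crossbeam::channel::Receiver;

// Hypothetical resource wrapping the crossbeam receiver; crossbeam's
// Receiver can be read through &self, so no mutex is needed.
pub struct EventQueue {
    pub rx: Receiver<GameEvent>,
}

fn event_pull(queue: Res<EventQueue>) {
    // try_recv returns immediately when no event is pending, so this
    // system never stalls the frame waiting on the network.
    while let Ok(ev) = queue.rx.try_recv() {
        println!("got event: {:?}", ev);
    }
}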

How to test a method with an asynchronous infinite loop?

I have a struct that works as a postmaster for a server application: since I don't know how many clients will connect, I have the postmaster listen on a socket and start a new struct with the business logic whenever a client opens a connection.
But this means I don't know how to implement integration tests for the Postmaster. There is a public "main" method that hangs indefinitely while waiting for connections:
#[tokio::main]
pub async fn start(self) -> Result<(), GenericError> {
    // https://stackoverflow.com/a/55874334/70600
    let mut this = self;
    loop {
        let tmp = this.configuration.clone().hostaddr();
        println!("{:?}", tmp);
        let listener = TcpListener::bind(tmp).await?;
        match listener.accept().await {
            Ok((stream, _addr)) => {
                let backend = Backend::new(&this.configuration, stream);
                this.backends.push(backend);
            }
            Err(e) => todo!("Log error accepting client connection."),
        }
    }
    Ok(())
}
This is my test:
#[test]
fn test_server_default_configuration() {
    let postmaster = Postmaster::default();
    let started = postmaster.start();
    assert!(started.is_ok())
}
Except the assert is obviously never reached. How can I test this async code?
You can start the postmaster in a separate thread, connect to it, give it some commands, and check the responses:
#[test]
fn test_server_default_configuration() {
    let postmaster = Postmaster::default();
    let thr = std::thread::spawn(move || postmaster.start());
    // connect to the configured address, test the responses...
    // ...
    // finally, send the postmaster a "quit" command
    let result = thr.join().unwrap();
    assert!(result.is_ok())
}
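An alternative sketch using tokio primitives instead of an OS thread: spawn the server as a task and end the test explicitly so it cannot hang. This assumes tokio 1.x, that start is rewritten as a plain async fn (dropping #[tokio::main]) so it can run inside the test runtime, and that the default configuration listens on a known address; 127.0.0.1:5432 below is a hypothetical placeholder.

#[tokio::test]
async fn test_server_accepts_connections() {
    let postmaster = Postmaster::default();
    // Run the accept loop in the background; it never returns on its own.
    let handle = tokio::spawn(postmaster.start());

    // Give the listener a moment to bind, then connect as a client would.
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    let conn = tokio::net::TcpStream::connect("127.0.0.1:5432").await;
    assert!(conn.is_ok());

    // Abort the infinite accept loop so the test terminates.
    handle.abort();
}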

Receiver on "tokio::mpsc::channel" only receives messages when buffer is full

In my code snippet, the tokio (v0.3) mpsc::channel receiver only receives a message when the buffer is full. It doesn't matter how big or small the buffer is.
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::Arc;
use std::time::Duration;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;
use tokio::time::sleep;

const MESSAGE_LENGTH: usize = 1024;

pub struct Peer {
    socket: Arc<UdpSocket>,
}

impl Peer {
    pub fn new<S: ToSocketAddrs>(addr: S) -> Peer {
        let socket = std::net::UdpSocket::bind(addr).expect("could not create socket");
        let peer = Peer {
            socket: Arc::new(UdpSocket::from_std(socket).unwrap()),
        };
        peer.start_inbound_message_handler();
        peer
    }

    pub fn local_addr(&self) -> SocketAddr {
        self.socket.local_addr().unwrap()
    }

    fn start_inbound_message_handler(&self) {
        let socket = self.socket.clone();
        let (tx, rx) = mpsc::channel(1);
        self.start_request_handler(rx);
        tokio::spawn(async move {
            let mut buf = [0u8; MESSAGE_LENGTH];
            loop {
                if let Ok((len, addr)) = socket.recv_from(&mut buf).await {
                    println!("received {} bytes from {}", len, addr);
                    if let Err(_) = tx.send(true).await {
                        println!("error sending msg to request handler");
                    }
                }
            }
        });
    }

    fn start_request_handler(&self, mut receiver: mpsc::Receiver<bool>) {
        tokio::spawn(async move {
            while let Some(msg) = receiver.recv().await {
                println!("got ping request: {:?}", msg);
            }
        });
    }

    pub async fn send_ping(&self, dest: String) -> Result<(), io::Error> {
        let buf = [255u8; MESSAGE_LENGTH];
        self.socket.send_to(&buf[..], &dest).await?;
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let peer1 = Peer::new("0.0.0.0:0");
    println!("peer1 started on: {}", peer1.local_addr().to_string());
    let peer2 = Peer::new("0.0.0.0:0");
    println!("peer2 started on: {}", peer2.local_addr().to_string());
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    sleep(Duration::from_secs(100)).await;
    Ok(())
}
Link to the Playground
In the start_inbound_message_handler function I start reading from the socket; when a message is received, a message is sent over the mpsc::channel to start_request_handler, where the processing happens (in this case, a simple log line is written whenever the receiver receives anything).
In the main function I create two peers, peer1 and peer2. After both peers are created, I send a ping request to the first peer. In start_inbound_message_handler I receive the data from the UDP socket and send a message over the mpsc::channel; the send returns without error. The problem, as mentioned before, is that the receiver only receives a message once the buffer is full. In this case the buffer size is 1, so if I send a second ping, the first ping is received. I cannot figure out why this happens.
The expected behavior is that when I send a message over the channel, the receiver starts receiving it immediately instead of waiting until the buffer is full.
According to the Tokio documentation of from_std():
Creates new UdpSocket from a previously bound std::net::UdpSocket.
This function is intended to be used to wrap a UDP socket from the standard library in the Tokio equivalent. The conversion assumes nothing about the underlying socket; it is left up to the user to set it in non-blocking mode.
This can be used in conjunction with socket2's Socket interface to configure a socket before it's handed off, such as setting options like reuse_address or binding to multiple addresses.
A socket that is not in non-blocking mode will prevent Tokio from working normally.
Just use the tokio function bind(); it is way simpler.
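A minimal sketch of the two possible fixes, keeping the question's socket setup and changing only how the UdpSocket is created:

use tokio::net::UdpSocket;

// Option 1: if a std socket is needed (e.g. to configure it first),
// switch it to non-blocking mode before handing it to tokio, as the
// from_std docs require.
fn wrap_std_socket(addr: &str) -> std::io::Result<UdpSocket> {
    let socket = std::net::UdpSocket::bind(addr)?;
    socket.set_nonblocking(true)?;
    UdpSocket::from_std(socket)
}

// Option 2: let tokio bind the socket itself; it is created in the
// right mode from the start. Note this makes the constructor async.
async fn bind_tokio_socket(addr: &str) -> std::io::Result<UdpSocket> {
    UdpSocket::bind(addr).await
}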

How do I terminate a hyper server after fulfilling one request?

I need a simple hyper server that serves a single request and then exits. This is my code so far; I believe that all I need is a way to get tx into hello, so I can use tx.send(()) and it should work the way I want. However, I can't quite work out a way to do that without having the compiler yell at me.
use std::convert::Infallible;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();
    let make_svc = make_service_fn(|_conn| {
        async { Ok::<_, Infallible>(service_fn(hello)) }
    });
    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);
    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });
    graceful.await?;
    Ok(())
}
Rust playground
Relevant crates:
tokio = { version = "0.2", features = ["full"] }
hyper = "0.13.7"
Since How to share mutable state for a Hyper handler? was answered, the hyper API has changed, and I am unable to compile the code when editing it to work with the current version.
A straightforward solution would be to use global state for this, made possible by tokio's Mutex type, like so:
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use lazy_static::lazy_static;
use std::convert::Infallible;
use std::sync::Arc;
use tokio::sync::oneshot::Sender;
use tokio::sync::Mutex;
lazy_static! {
/// Channel used to send shutdown signal - wrapped in an Option to allow
/// it to be taken by value (since oneshot channels consume themselves on
/// send) and an Arc<Mutex> to allow it to be safely shared between threads
static ref SHUTDOWN_TX: Arc<Mutex<Option<Sender<()>>>> = <_>::default();
}
async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
// Attempt to send a shutdown signal, if one hasn't already been sent
if let Some(tx) = SHUTDOWN_TX.lock().await.take() {
let _ = tx.send(());
}
Ok(Response::new(Body::from("Hello World!")))
}
#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let (tx, rx) = tokio::sync::oneshot::channel::<()>();
SHUTDOWN_TX.lock().await.replace(tx);
let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(hello)) });
let addr = ([127, 0, 0, 1], 3000).into();
let server = Server::bind(&addr).serve(make_svc);
println!("Listening on http://{}", addr);
let graceful = server.with_graceful_shutdown(async {
rx.await.ok();
});
graceful.await?;
Ok(())
}
In this version of the code, we store the sender half of the shutdown signal channel in a global variable protected by a mutex lock, and then attempt to consume the channel to send the signal on every request.
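If you'd rather avoid the global, here is a sketch of the same Arc<Mutex<Option<Sender>>> idea with the sender captured by the service closures instead, assuming the same hyper 0.13 / tokio 0.2 versions as above:

use std::convert::Infallible;
use std::sync::Arc;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Response, Server};
use tokio::sync::Mutex;

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();
    // Each connection and each request get their own clone of the Arc.
    let shutdown_tx = Arc::new(Mutex::new(Some(tx)));

    let make_svc = make_service_fn(move |_conn| {
        let shutdown_tx = shutdown_tx.clone();
        async move {
            Ok::<_, Infallible>(service_fn(move |_req| {
                let shutdown_tx = shutdown_tx.clone();
                async move {
                    // Take the sender out of the Option; only the first
                    // request actually sends the shutdown signal.
                    if let Some(tx) = shutdown_tx.lock().await.take() {
                        let _ = tx.send(());
                    }
                    Ok::<_, Infallible>(Response::new(Body::from("Hello World!")))
                }
            }))
        }
    });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });
    graceful.await?;
    Ok(())
}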
