Receiver on "tokio::mpsc::channel" only receives messages when buffer is full - rust

In my code snippet, the tokio (v0.3) mpsc::channel receiver only receives a message when the buffer is full. It doesn't matter how big or small the buffer is.
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::Arc;
use std::time::Duration;
use tokio::net::UdpSocket;
use tokio::sync::mpsc;
use tokio::time::sleep;

const MESSAGE_LENGTH: usize = 1024;

pub struct Peer {
    socket: Arc<UdpSocket>,
}

impl Peer {
    pub fn new<S: ToSocketAddrs>(addr: S) -> Peer {
        let socket = std::net::UdpSocket::bind(addr).expect("could not create socket");
        let peer = Peer {
            socket: Arc::new(UdpSocket::from_std(socket).unwrap()),
        };
        peer.start_inbound_message_handler();
        peer
    }

    pub fn local_addr(&self) -> SocketAddr {
        self.socket.local_addr().unwrap()
    }

    fn start_inbound_message_handler(&self) {
        let socket = self.socket.clone();
        let (tx, rx) = mpsc::channel(1);
        self.start_request_handler(rx);
        tokio::spawn(async move {
            let mut buf = [0u8; MESSAGE_LENGTH];
            loop {
                if let Ok((len, addr)) = socket.recv_from(&mut buf).await {
                    println!("received {} bytes from {}", len, addr);
                    if let Err(_) = tx.send(true).await {
                        println!("error sending msg to request handler");
                    }
                }
            }
        });
    }

    fn start_request_handler(&self, mut receiver: mpsc::Receiver<bool>) {
        tokio::spawn(async move {
            while let Some(msg) = receiver.recv().await {
                println!("got ping request: {:?}", msg);
            }
        });
    }

    pub async fn send_ping(&self, dest: String) -> Result<(), io::Error> {
        let buf = [255u8; MESSAGE_LENGTH];
        self.socket.send_to(&buf[..], &dest).await?;
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let peer1 = Peer::new("0.0.0.0:0");
    println!("peer1 started on: {}", peer1.local_addr().to_string());
    let peer2 = Peer::new("0.0.0.0:0");
    println!("peer2 started on: {}", peer2.local_addr().to_string());
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    peer2.send_ping(peer1.local_addr().to_string()).await?;
    sleep(Duration::from_secs(100)).await;
    Ok(())
}
Link to the Playground
In the start_inbound_message_handler function I start reading from the socket. If a message is received, a message is sent over the mpsc::channel to start_request_handler, where the processing happens; in this case a simple log line is written whenever the receiver receives anything.
In the main function I create two peers, peer1 and peer2. After both peers are created, I send a ping request to the first peer. In start_inbound_message_handler I receive the data from the UDP socket and send a message over the mpsc::channel; the send returns without error. The problem, as mentioned before, is that the receiver will only receive a message when the buffer is full. In this case the buffer is 1, so if I send a second ping, the first ping is received. I cannot figure out why this happens.
The expected behavior is that if I send a message over the channel, the receiver starts receiving it immediately instead of waiting until the buffer is full.

According to the Tokio documentation of from_std():
Creates new UdpSocket from a previously bound std::net::UdpSocket.
This function is intended to be used to wrap a UDP socket from the
standard library in the Tokio equivalent. The conversion assumes nothing
about the underlying socket; it is left up to the user to set it in
non-blocking mode.
This can be used in conjunction with socket2's Socket interface to
configure a socket before it's handed off, such as setting options like
reuse_address or binding to multiple addresses.
A socket that is not in non-blocking mode will prevent Tokio from working normally.
Your std socket is never put into non-blocking mode, so its blocking recv_from stalls the runtime. Just use the Tokio function bind() instead; it is way simpler, and the socket it returns is already non-blocking.
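A minimal sketch of the two possible fixes (the helper function names are made up for illustration): either set the std socket to non-blocking before handing it to from_std(), or let Tokio bind the socket directly:
use std::io;
use tokio::net::UdpSocket;

// Option 1: keep the std socket, but put it into non-blocking mode first.
fn wrap_std_socket(addr: &str) -> io::Result<UdpSocket> {
    let std_socket = std::net::UdpSocket::bind(addr)?;
    // Without this, recv_from blocks the runtime thread.
    std_socket.set_nonblocking(true)?;
    UdpSocket::from_std(std_socket)
}

// Option 2: let Tokio create the socket; it is already non-blocking.
async fn bind_tokio_socket(addr: &str) -> io::Result<UdpSocket> {
    UdpSocket::bind(addr).await
}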

Related

How to properly use self when creating a new thread from inside a method in Rust

I am creating a server that stores the TcpStream objects inside a Vec to be used later. The problem is that the function that listens for new connections and adds them to the Vec runs forever in a separate thread and doesn't allow other threads to read the Vec.
pub struct Server {
    pub connections: Vec<TcpStream>,
}

impl Server {
    fn listen(&mut self) {
        println!("Server is listening on port 8080");
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        loop {
            let stream = listener.accept().unwrap().0;
            println!("New client connected: {}", stream.peer_addr().unwrap());
            //should block for write here
            self.connections.push(stream);
            //should release write lock
        }
    }

    pub fn run(self) {
        let arc_self = Arc::new(RwLock::new(self));
        let arc_self_clone = arc_self.clone();
        //blocks the lock for writing forever because of listen()
        let listener_thread = thread::spawn(move || arc_self_clone.write().unwrap().listen());
        loop {
            let mut input = String::new();
            io::stdin().read_line(&mut input).unwrap();
            if input.trim() == "1" {
                //can't read because lock blocked for writing
                for c in &arc_self.read().unwrap().connections {
                    println!("testing...");
                }
            }
        }
    }
}
In the current example the server accepts connections but does not allow the main thread to read the connections vector. I thought about making the listen function run at a fixed interval (1-5 s) so it allows other threads to read the vector in that time, but listener.accept() blocks the thread anyway, so I don't think that is a valid solution. I would prefer it to run forever if possible, blocking access to the vector only when it needs to write (when a new client connects), and not blocking read access of other threads to the connections vector while it waits for clients to connect.
You could just wrap connections in a RwLock instead of the entire self, as shown below, but I would recommend using a proper synchronisation primitive like a channel (a sketch of that follows the code).
pub struct Server {
    pub connections: RwLock<Vec<TcpStream>>,
}

impl Server {
    fn listen(&self) {
        println!("Server is listening on port 8080");
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        loop {
            let stream = listener.accept().unwrap().0;
            println!("New client connected: {}", stream.peer_addr().unwrap());
            //should block for write here
            self.connections.write().unwrap().push(stream);
            //should release write lock
        }
    }

    pub fn run(self) {
        let arc_self = Arc::new(self);
        let arc_self_clone = arc_self.clone();
        let listener_thread = thread::spawn(move || arc_self_clone.listen());
        loop {
            let mut input = String::new();
            io::stdin().read_line(&mut input).unwrap();
            if input.trim() == "1" {
                for c in &*arc_self.connections.try_read().unwrap() {
                    println!("testing...");
                }
            }
        }
    }
}
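For completeness, here is a rough sketch of the channel-based alternative mentioned above (the structure is illustrative, not taken from the original code): the listener thread sends each accepted TcpStream over an mpsc channel to the thread that owns the Vec, so no lock is needed at all:
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc;
use std::thread;

fn main() -> std::io::Result<()> {
    let (tx, rx) = mpsc::channel::<TcpStream>();

    // Listener thread: accept connections and hand them to the owning thread.
    thread::spawn(move || {
        let listener = TcpListener::bind("127.0.0.1:8080").unwrap();
        for stream in listener.incoming().flatten() {
            if tx.send(stream).is_err() {
                break; // the receiving side was dropped, stop accepting
            }
        }
    });

    let mut connections: Vec<TcpStream> = Vec::new();
    loop {
        // Pick up any connections accepted since the last iteration,
        // without blocking this thread.
        while let Ok(stream) = rx.try_recv() {
            println!("New client connected: {}", stream.peer_addr()?);
            connections.push(stream);
        }
        // ... do other work here, e.g. read commands from stdin ...
        thread::sleep(std::time::Duration::from_millis(100));
    }
}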

No response while writing my first test for a `Quinn` and `Qp2p` based module

I have the following code that uses QP2P for network communication.
impl Broker {
    pub async fn new(
        config: Config
    ) -> Result<Self, EndpointError> {
        let (main_endpoint, main_incoming, _) = Endpoint::new_peer(
            local_addr(),
            &[],
            config,
        ).await?;
        let mut broker = Self {
            main_endpoint,
            main_incoming
        };
        broker.on_message();
        Ok(broker)
    }

    async fn on_message(&mut self) -> Result<(), RecvError> {
        // loop over incoming connections
        while let Some((connection, mut incoming_messages)) = self.main_incoming.next().await {
            let src = connection.remote_address();
            // loop over incoming messages
            while let Some(bytes) = incoming_messages.next().await? {
                println!("Received from {:?} --> {:?}", src, bytes);
                println!();
            }
        }
        Ok(())
    }
}
In the same file I also want to test the above by sending a message and seeing if on_message will get it.
#[tokio::test]
async fn basic_usage() -> Result<()> {
    const MSG_HELLO: &str = "HELLO";

    let config = Config {
        idle_timeout: Duration::from_secs(60 * 60).into(), // 1 hour idle timeout.
        ..Default::default()
    };
    let broker = Broker::new(config.clone(), None).await?;
    let (node, mut incoming_conns, _contact) = Endpoint::new_peer(
        SocketAddr::from((Ipv4Addr::LOCALHOST, 0)),
        &[],
        config.clone(),
    ).await?;

    {
        let msg = Bytes::from(MSG_HELLO);
        println!("Sending to {:?} --> {:?}\n", broker.main_endpoint, msg);
        node.connect_to(&broker.main_endpoint.local_addr()).await?.0.send(msg.clone()).await?;
    }

    Ok(())
}
What ends up happening is that the broker's println will not trigger at all. Is calling on_message during initialization and expecting it to receive messages correct? If not, how can I write the most basic test of checking whether a message is received, using qp2p endpoints?
I'm not familiar enough with the frameworks you're using to answer fully, but maybe I can point you in the right direction. I see two likely issues:
Futures don't do anything until polled.
Basically, you call await on most of your async functions, but you don't ever await or poll() the Future from on_message(), so it's basically a no-op and the contents of on_message() are never run.
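As a minimal, self-contained illustration of this point (hypothetical function, assuming a Tokio runtime), calling an async fn only constructs a Future; nothing in its body runs until that future is awaited or spawned:
async fn say_hello() {
    println!("hello");
}

#[tokio::main]
async fn main() {
    let fut = say_hello(); // nothing printed yet: the future has not been polled
    drop(fut);             // dropping it means the body never runs at all

    say_hello().await;                        // runs to completion here: prints "hello"
    tokio::spawn(say_hello()).await.unwrap(); // or drive it on a separate task
}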
I don't think this is structured correctly.
From looking at it, assuming you did await the above call, by the time Broker::new() finishes in your test, all of on_message() would have completed (meaning it wouldn't pick up later messages).
You may wish to spawn a thread for handling incoming messages. There are probably other ways you can do this with futures by adjusting how you poll them. At the least, you probably want to take the call to on_message() out of Broker::new() and await it after the message is sent in your code, similar to how the tests in qp2p are written:
#[tokio::test(flavor = "multi_thread")]
async fn single_message() -> Result<()> {
    let (peer1, mut peer1_incoming_connections, _) = new_endpoint().await?;
    let peer1_addr = peer1.public_addr();

    let (peer2, _, _) = new_endpoint().await?;
    let peer2_addr = peer2.public_addr();

    // Peer 2 connects and sends a message
    let (connection, _) = peer2.connect_to(&peer1_addr).await?;
    let msg_from_peer2 = random_msg(1024);
    connection.send(msg_from_peer2.clone()).await?;

    // Peer 1 gets an incoming connection
    let mut peer1_incoming_messages = if let Ok(Some((connection, incoming))) =
        peer1_incoming_connections.next().timeout().await
    {
        assert_eq!(connection.remote_address(), peer2_addr);
        incoming
    } else {
        bail!("No incoming connection");
    };

    // Peer 1 gets an incoming message
    if let Ok(message) = peer1_incoming_messages.next().timeout().await {
        assert_eq!(message?, Some(msg_from_peer2));
    } else {
        bail!("No incoming message");
    }

    Ok(())
}
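As a generic, hedged sketch of the first suggestion (this is not the qp2p API; a tokio mpsc receiver stands in for the incoming-connections stream): spawn the receive loop as its own task, so the constructor can return while messages keep being handled:
use tokio::sync::mpsc;

struct Broker {
    // Hypothetical stand-in for whatever endpoint handle the real Broker keeps.
    _endpoint: (),
}

impl Broker {
    fn new(mut incoming: mpsc::Receiver<Vec<u8>>) -> Self {
        // Spawn the message loop instead of calling an async fn and dropping
        // the future it returns.
        tokio::spawn(async move {
            while let Some(bytes) = incoming.recv().await {
                println!("Received --> {:?}", bytes);
            }
        });
        Self { _endpoint: () }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>(8);
    let _broker = Broker::new(rx);
    tx.send(b"HELLO".to_vec()).await.unwrap();
    // Give the spawned task a moment to run before the runtime shuts down.
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
}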

Receive message from channel between modules

I have four modules. The client sends messages and the server receives them. Once the server receives a message, it tries to send it over the MPSC channel. I put the receiver in another .rs file, where I intend to receive the message.
I am not getting any message on the receiver side.
Maybe an infinite loop on the server side creates a problem, but is there a way to make this channel communication work?
client.rs
use std::io::prelude::*;
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::sync::mpsc;

pub fn tcp_datagram_client() {
    pub static FILE_PATH: &'static str = "/tmp/datagram.sock";
    let socket = UnixDatagram::unbound().unwrap();
    match socket.connect(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            println!("Couldn't connect: {:?}", e);
            return;
        }
    };
    println!("TCP client Connected to TCP Server {:?}", socket);
    loop {
        socket
            .send(b"Hello from client to server")
            .expect("recv function failed");
    }
}

fn main() {
    tcp_datagram_client();
}
server.rs
use std::os::unix::net::UnixDatagram;
use std::path::Path;
use std::str::from_utf8;
use std::sync::mpsc::Sender;

fn unlink_socket(path: impl AsRef<Path>) {
    let path = path.as_ref();
    if path.exists() {
        if let Err(e) = std::fs::remove_file(path) {
            eprintln!("Couldn't remove the file: {:?}", e);
        }
    }
}

static FILE_PATH: &'static str = "/tmp/datagram.sock";

pub fn tcp_datagram_server(tx: Sender<String>) {
    unlink_socket(FILE_PATH);
    let socket = match UnixDatagram::bind(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            eprintln!("Couldn't bind: {:?}", e);
            return;
        }
    };
    let mut buf = vec![0; 1024];
    println!("Waiting for client to connect...");
    loop {
        let received_bytes = socket.recv(&mut buf).expect("recv function failed");
        println!("Received {:?}", received_bytes);
        let received_message = from_utf8(&buf).expect("utf-8 convert failed");
        tx.send(received_message.to_string());
    }
}
message_receiver.rs
use crate::server;
use std::sync::mpsc;

pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    server::tcp_datagram_server(tx);
    let message_from_tcp_server = rx.recv().unwrap();
    println!("{:?}", message_from_tcp_server);
}
main.rs
mod server;
mod message_receiver;

fn main() {
    message_receiver::handle_messages();
}
Once the TCP client is connected:
TCP client Connected to TCP Server UnixDatagram { fd: 3, local: (unnamed), peer: "/tmp/datagram.sock" (pathname) }
I receive no messages on the channel receiver end:
Waiting for client to connect...
Maybe an infinite loop on the server side creates a problem
Yes, quite literally: your server code runs an infinite loop to continuously handle messages from the client(s), so the call to tcp_datagram_server never returns.
but is there a way to make this channel communication working?
Of course, it seems you are simply missing a second thread for your message_receiver. Wrapping your tcp_datagram_server(tx) in std::thread::spawn should do it. You could also add a loop to keep processing requests to match the one in tcp_datagram_server:
pub fn handle_messages() {
    let (tx, rx) = mpsc::channel();
    std::thread::spawn(|| tcp_datagram_server(tx));
    loop {
        let message_from_tcp_server = rx.recv().unwrap();
        println!("{}", message_from_tcp_server);
    }
}

Rust TCP socket server only working with one connection

I'm new to Rust and I'm trying to configure a simple TCP socket server which will listen for connections and reply with the same message it received.
The thing is, this works as I want except when connecting with multiple clients. The first client that connects can send and receive messages, but if a second client connects, the first one keeps working while the second never receives messages; in fact, the message never reaches the code that would handle it. And if I disconnect the first socket, the server starts spamming forever that it received a message from the first socket with the same content as the last message it sent.
I am pretty sure I did something wrong in my code, but I can't find it.
This is my server struct:
use std::collections::HashMap;
use std::io::Read;
use std::io::Write;
use std::net::Shutdown;
use std::net::TcpListener;
use std::net::TcpStream;
use std::str;
use std::sync::{Arc, RwLock};
use threadpool::ThreadPool;

#[derive(Clone, Debug)]
pub struct Server {
    id: Arc<RwLock<u32>>,
    connections: Arc<RwLock<HashMap<u32, TcpStream>>>,
    url: String,
    thread_pool: ThreadPool
}

impl Server {
    pub fn new(url: String) -> Server {
        let server = Server {
            id: Arc::new(RwLock::new(0)),
            connections: Arc::new(RwLock::new(HashMap::new())),
            url,
            thread_pool: ThreadPool::new(10)
        };
        server
    }

    pub fn start(&self) {
        let listener = TcpListener::bind(&self.url).expect("Could not start the server");
        println!("Server started succesfully");
        for stream in listener.incoming() {
            match stream {
                Ok(stream) => {
                    let mut self_clone = self.clone();
                    self.thread_pool.execute(move || {
                        self_clone.on_client_connect(stream.try_clone().unwrap());
                    });
                }
                Err(error) => eprintln!("Error when tried to use stream. Error = {:?}", error),
            }
        }
    }

    fn on_client_connect(&mut self, stream: TcpStream) {
        println!("Client connected from {}", stream.local_addr().unwrap());
        let mut id = self.id.write().unwrap();
        {
            *id += 1;
        }
        self.connections
            .write()
            .unwrap()
            .insert(*id, stream.try_clone().unwrap());
        let mut stream = stream.try_clone().unwrap();
        let mut buffer = [0; 1024];
        while match stream.read(&mut buffer) {
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer).unwrap()
                );
                stream.write_all(&buffer[0..size]).unwrap();
                true
            }
            Err(error) => {
                println!(
                    "Error when reading message from socket. Error = {:?}",
                    error
                );
                stream.shutdown(Shutdown::Both).unwrap();
                false
            }
        } { }
    }
}
And in my main.rs I'm just calling the connect function and the server starts working
In this piece of code in your on_client_connect function, you're acquiring a write lock for self.id:
let mut id = self.id.write().unwrap();
{
    *id += 1;
}
However, the id variable holds the lock, and it is not released until id is dropped at the end of the function. This means that all other clients will wait for this lock to be released, which won't happen until the function currently holding it has completed (which happens when that client disconnects).
You can solve this by rewriting the above code to only keep the lock while incrementing, and then storing the ID value in a variable:
let id: u32 = {
    let mut id_lock = self.id.write().unwrap();
    *id_lock += 1;
    *id_lock
    // id_lock is dropped at the end of this block, so the lock is released
};
Even better, you can use AtomicU32, which is still thread-safe yet does not require locking at all:
use std::sync::atomic::{AtomicU32, Ordering};

struct Server {
    id: Arc<AtomicU32>,
    // ...
}

// Fetch the previous value, then increment `self.id` by one, in a thread-safe and lock-free manner:
let id: u32 = self.id.fetch_add(1, Ordering::Relaxed);
Also, when the connection is closed your code goes into an infinite loop because you're not handling the case where stream.read() returns Ok(0), which indicates that the connection was closed:
while match stream.read(&mut buffer) {
    Ok(0) => false, // handle connection closed...
    Ok(size) => { /* ... */ }
    Err(err) => { /* ... */ }
} {}
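Putting both fixes together, here is a hedged sketch of the per-client loop (handle_client is a made-up name; it assumes the id was already obtained, e.g. via fetch_add): Ok(0) ends the loop on disconnect, and only the bytes actually read are printed and echoed:
use std::io::{Read, Write};
use std::net::{Shutdown, TcpStream};
use std::str;

fn handle_client(id: u32, mut stream: TcpStream) {
    let mut buffer = [0u8; 1024];
    loop {
        match stream.read(&mut buffer) {
            // Ok(0) means the peer closed the connection.
            Ok(0) => {
                println!("Client {} disconnected", id);
                break;
            }
            Ok(size) => {
                println!(
                    "Message received from {} - {}",
                    id,
                    str::from_utf8(&buffer[..size]).unwrap_or("<invalid utf-8>")
                );
                // Echo back only the bytes that were actually read.
                stream.write_all(&buffer[..size]).unwrap();
            }
            Err(error) => {
                println!("Error when reading from socket: {:?}", error);
                let _ = stream.shutdown(Shutdown::Both);
                break;
            }
        }
    }
}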

How to cleanly break tokio-core event loop and futures::Stream in Rust

I am dabbling in tokio-core and can figure out how to spawn an event loop. However, there are two things I am not sure of: how to gracefully exit the event loop, and how to exit a stream running inside an event loop. For example, consider this simple piece of code which spawns two listeners into the event loop and waits for another thread to indicate an exit condition:
extern crate tokio_core;
extern crate futures;

use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Stream, Future};
use std::thread;
use std::time::Duration;
use std::sync::mpsc::channel;

fn main() {
    let (get_tx, get_rx) = channel();
    let j = thread::spawn(move || {
        let mut core = Core::new().unwrap();
        let (tx, rx) = unbounded();
        get_tx.send(tx).unwrap(); // <<<<<<<<<<<<<<< (1)

        // Listener-0
        {
            let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:44444").unwrap(),
                                      &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        // Listener-1
        {
            let l = TcpListener::bind(&SocketAddr::from_str("127.0.0.1:55555").unwrap(),
                                      &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        let work = rx.for_each(|v| {
            if v {
                // (3) I want to shut down listener-0 above to release its resources
                Ok(())
            } else {
                Err(()) // <<<<<<<<<<<<<<< (2)
            }
        });
        let _ = core.run(work);
        println!("Exiting event loop thread");
    });

    let tx = get_rx.recv().unwrap();
    thread::sleep(Duration::from_secs(2));
    println!("Want to terminate listener-0"); // <<<<<< (3)
    tx.send(true).unwrap();
    thread::sleep(Duration::from_secs(2));
    println!("Want to exit event loop");
    tx.send(false).unwrap();
    j.join().unwrap();
}
So, say after the sleep in the main thread I want a clean exit of the event loop thread. Currently I send something to the event loop to make it exit and thus release the thread.
However, both (1) and (2) feel hacky: I am forcing an error as an exit condition. My questions are:
1) Am I doing it right? If not, what is the correct way to gracefully exit the event loop thread?
2) I don't even know how to do (3), i.e. indicate a condition externally to shut down listener-0 and free all its resources. How do I achieve this?
The event loop exits when the Core is not being turned any more (e.g. by run()) or is forgotten (drop()ed); there is no synchronous exit. core.run() returns and stops turning the loop when the Future passed to it completes.
A Stream completes by yielding None (marked with (3) in the code below).
When, e.g., a TCP connection is closed, the Stream representing it completes, and vice versa.
extern crate tokio_core;
extern crate futures;

use tokio_core::reactor::Core;
use futures::sync::mpsc::unbounded;
use tokio_core::net::TcpListener;
use std::net::SocketAddr;
use std::str::FromStr;
use futures::{Async, Stream, Future, Poll};
use std::thread;
use std::time::Duration;

struct CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    stream: S,
    completer: C,
}

fn stream_completion_pact<S, C>(s: S, c: C) -> CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    CompletionPact {
        stream: s,
        completer: c,
    }
}

impl<S, C> Stream for CompletionPact<S, C>
    where S: Stream,
          C: Stream,
{
    type Item = S::Item;
    type Error = S::Error;

    fn poll(&mut self) -> Poll<Option<S::Item>, S::Error> {
        match self.completer.poll() {
            Ok(Async::Ready(None)) |
            Err(_) |
            Ok(Async::Ready(Some(_))) => {
                // We are done, forget us
                Ok(Async::Ready(None)) // <<<<<< (3)
            },
            Ok(Async::NotReady) => {
                self.stream.poll()
            },
        }
    }
}

fn main() {
    // unbounded() is the equivalent of a Stream made from a channel()
    // directly create it in this thread instead of receiving a Sender
    let (tx, rx) = unbounded::<()>();
    // A second one to cause forgetting the listener
    let (l0tx, l0rx) = unbounded::<()>();

    let j = thread::spawn(move || {
        let mut core = Core::new().unwrap();

        // Listener-0
        {
            let l = TcpListener::bind(
                    &SocketAddr::from_str("127.0.0.1:44444").unwrap(),
                    &core.handle())
                .unwrap();

            // wrap the Stream of incoming connections (which usually doesn't
            // complete) into a Stream that completes when the
            // other side is drop()ed or sent on
            let fe = stream_completion_pact(l.incoming(), l0rx)
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        // Listener-1
        {
            let l = TcpListener::bind(
                    &SocketAddr::from_str("127.0.0.1:55555").unwrap(),
                    &core.handle())
                .unwrap();
            let fe = l.incoming()
                .for_each(|(_sock, peer)| {
                    println!("Accepted from {}", peer);
                    Ok(())
                })
                .map_err(|e| println!("----- {:?}", e));
            core.handle().spawn(fe);
        }

        let _ = core.run(rx.into_future());
        println!("Exiting event loop thread");
    });

    thread::sleep(Duration::from_secs(2));
    println!("Want to terminate listener-0");
    // A drop() will result in the rx side Stream being completed,
    // which is indicated by Ok(Async::Ready(None)).
    // Our wrapper behaves the same when something is received.
    // When the event loop encounters a
    // Stream that is complete it forgets about it. Which propagates to a
    // drop() that close()es the file descriptor, which closes the port if
    // nothing else uses it.
    l0tx.send(()).unwrap(); // alternatively: drop(l0tx);
    // Note that this is async and is only the signal
    // that starts the forgetting.

    thread::sleep(Duration::from_secs(2));
    println!("Want to exit event loop");
    // Same concept. The reception or drop() will cause Stream completion.
    // A completed Future will cause run() to return.
    tx.send(()).unwrap();

    j.join().unwrap();
}
I implemented graceful shutdown via a oneshot channel.
The trick was to use a oneshot channel to cancel the TCP listener and a select! over the two futures. Note that I'm using tokio 0.2 and futures 0.3 in the example below.
use futures::channel::oneshot;
use futures::{FutureExt, StreamExt};
use std::thread;
use tokio::net::TcpListener;

pub struct ServerHandle {
    // This is the thread in which the server will block
    thread: thread::JoinHandle<()>,
    // This switch can be used to trigger shutdown of the server.
    kill_switch: oneshot::Sender<()>,
}

impl ServerHandle {
    pub fn stop(self) {
        self.kill_switch.send(()).unwrap();
        self.thread.join().unwrap();
    }
}

pub fn run_server() -> ServerHandle {
    let (kill_switch, kill_switch_receiver) = oneshot::channel::<()>();
    let thread = thread::spawn(move || {
        info!("Server thread begun!!!");
        let mut runtime = tokio::runtime::Builder::new()
            .basic_scheduler()
            .enable_all()
            .thread_name("Tokio-server-thread")
            .build()
            .unwrap();
        runtime.block_on(async {
            server_prog(kill_switch_receiver).await.unwrap();
        });
        info!("Server finished!!!");
    });

    ServerHandle {
        thread,
        kill_switch,
    }
}

async fn server_prog(kill_switch_receiver: oneshot::Receiver<()>) -> std::io::Result<()> {
    let addr = "127.0.0.1:12345";
    let addr: std::net::SocketAddr = addr.parse().unwrap();
    let mut listener = TcpListener::bind(&addr).await?;
    let mut kill_switch_receiver = kill_switch_receiver.fuse();
    let mut incoming = listener.incoming().fuse();
    loop {
        futures::select! {
            x = kill_switch_receiver => {
                break;
            },
            optional_new_client = incoming.next() => {
                if let Some(new_client) = optional_new_client {
                    let peer_socket = new_client?;
                    info!("Client connected!");
                    let peer = process_client(peer_socket, db.clone());
                    peers.lock().unwrap().push(peer);
                } else {
                    info!("No more incoming connections.");
                    break;
                }
            },
        };
    }
    Ok(())
}
Hope this helps others (or future me ;)).
My code lives here:
https://github.com/windelbouwman/lognplot/blob/master/lognplot/src/server/server.rs

Resources