How to send multiple messages over a TcpStream in Rust?

I have a list of string messages which I want to send to another machine by opening a TCP connection between the two machines. I'm not looking to use existing solutions like mpsc::channel.
I have seen examples of how to do the same thing in tokio using intervals and poll_write. But assuming we want to send the messages as fast as possible, how do we do that? I also tried using tokio::spawn and looping through the entire queue to write the required messages, but I always ended up getting errors from the socket (cannot be moved...).
let done = listener
    .incoming()
    .for_each(move |socket| {
        let server_queue = _cqueue.clone();
        let (reader, mut writer) = socket.split();
        let sender = Interval::new_interval(std::time::Duration::from_millis(1))
            .for_each(move |_| {
                writer
                    .poll_write(server_queue.pull().borrow())
                    .map_err(|_| {
                        tokio::timer::Error::shutdown();
                    })
                    .unwrap();
                return Ok(());
            })
            .map_err(|e| println!("{}", e));
        tokio::spawn(sender);
        return Ok(());
    })
    .map_err(|e| println!("Future_error {}", e));
tokio::run(done);
Using this I was able to get messages on the consumer side, but I feel like the interval slows us down because we wait before sending the next message. Is there another way to achieve something similar without using an interval?
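For what it's worth, with modern tokio (1.x) and async/await you can drop the interval entirely and just loop over the queue inside the spawned task. A minimal sketch, not the original code: the address and the messages vector are stand-ins for the server queue.

use tokio::io::AsyncWriteExt;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _) = listener.accept().await?;
        // Stand-in for the server queue: drain pending messages into a Vec.
        let messages: Vec<Vec<u8>> = vec![b"one".to_vec(), b"two".to_vec()];
        tokio::spawn(async move {
            for msg in messages {
                // write_all sends each message as soon as the socket can
                // accept it; there is no timer between writes.
                if socket.write_all(&msg).await.is_err() {
                    break;
                }
            }
        });
    }
}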

Related

Gracefully handling SIGTERM (Ctrl-c) and shutting down a ThreadPool

Currently, I am at the last chapter of the Rust book, implementing the graceful shutdown of the HTTP server.
Now I want to extend the logic a bit and trigger the graceful shutdown after pressing Ctrl-c. To do this, I use the ctrlc crate.
But I cannot make it work due to various borrow checker errors:
fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
    let pool = ThreadPool::new(4);

    ctrlc::set_handler(|| {
        // dropping will trigger ThreadPool::drop and gracefully shut down the running workers
        drop(pool); // compile error, variable is moved here
    })
    .unwrap();

    for stream in listener.incoming() {
        let stream = stream.unwrap();
        pool.execute(|| {
            handle_connection(stream);
        });
    }
}
I tried multiple approaches with Arc<> and additional mpsc channels, but without success.
What is the best practice here in order to make it work?
I ended up with:
fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
    let pool = ThreadPool::new(4);
    let (tx, rx) = channel();

    ctrlc::set_handler(move || tx.send(()).expect("Could not send signal on channel."))
        .expect("Error setting Ctrl-C handler");

    for stream in listener.incoming() {
        let stream = stream.unwrap();
        pool.execute(|| {
            handle_connection(stream);
        });
        match rx.try_recv() {
            Ok(_) => break,
            Err(_) => continue,
        }
    }
}
It has a flaw, though: after pressing Ctrl-c, the shutdown process is not triggered immediately, but only after receiving one more request. Only then does the loop break, so the ThreadPool goes out of scope and gets dropped, which triggers the graceful shutdown logic.
The solution is adequate for my learning purposes. In a production environment, one would rely on the graceful shutdown of a web framework like actix.
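One way to remove that flaw (not part of the original answer) is to make the listener non-blocking, so the loop can poll the shutdown channel even while no request is coming in. A sketch, assuming the ThreadPool and handle_connection from the Rust book:

use std::io::ErrorKind;
use std::net::TcpListener;
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
    listener.set_nonblocking(true).unwrap();
    let pool = ThreadPool::new(4);
    let (tx, rx) = channel();

    ctrlc::set_handler(move || tx.send(()).expect("Could not send signal on channel."))
        .expect("Error setting Ctrl-C handler");

    loop {
        match listener.accept() {
            Ok((stream, _)) => {
                // Accepted sockets may inherit non-blocking mode on some
                // platforms, so switch them back explicitly.
                stream.set_nonblocking(false).unwrap();
                pool.execute(|| handle_connection(stream));
            }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {
                // No pending connection: a safe moment to check for Ctrl-C.
                if rx.try_recv().is_ok() {
                    break;
                }
                thread::sleep(Duration::from_millis(50));
            }
            Err(e) => panic!("accept failed: {e}"),
        }
    }
    // `pool` goes out of scope here, triggering the graceful shutdown.
}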

Determining the End-of-stream

I made a loop for a web server.
With a Windows client I didn't have any problems, but with a Linux client the server didn't respond to requests.
The problem: I found out that if request_size % buffer_size == 0, the loop runs once more, waiting for more data.
The question: Is there an efficient way of reading data that takes into consideration slow connections and connections that drop packets? (Not just using non_blocking or nodelay.)
let listener = TcpListener::bind("127.0.0.1:80").unwrap();
while let Ok((mut stream, _)) = listener.accept() {
let mut data: Vec<u8> = Vec::new();
let mut buf = [0u8; 32];
while let Ok(size) = stream.read(&mut buf) {
data.extend(buf[..size].iter());
if size != buf.len() { break; }
}
// do something with the data
}
I could increase the buffer size but that wouldn't solve the problem.
First, to detect EOF reliably, you should test the size returned by Read::read against zero, not against your buffer size: on a slow connection you might not get enough data to fill the entire buffer at once, causing your loop to quit early with an incomplete message in data.
There are essentially 3 ways to make sure you received the entire message:
Read until EOF
Read a fixed-sized message
Encode some 'content length' and read that many bytes
Notice that only the last two variants allow your client to eventually send more data over the same stream. Also notice that these two variants can be implemented fairly easily via Read::read_exact; a sketch of the length-prefix variant follows below.
Besides, if you don't trust your client, it might be helpful to set up TcpStream::set_read_timeout with a reasonably long timeout (e.g. 2 min).
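For illustration, here is a minimal sketch of the 'content length' variant (not from the original answer), assuming both sides agree on a 4-byte big-endian length prefix:

use std::io::Read;
use std::net::TcpStream;

fn read_message(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
    // Read the fixed-size length header first.
    let mut len_buf = [0u8; 4];
    stream.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;

    // Then read exactly `len` payload bytes; read_exact loops internally,
    // so partial reads on slow connections are handled for you.
    let mut data = vec![0u8; len];
    stream.read_exact(&mut data)?;
    Ok(data)
}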
Read until EOF
This is probably the easiest and, judging by your title and code, probably the method you are aiming for. However, to generate an EOF, the client must shut down at least its write channel. So, if your server is stuck in read, I assume you forgot to shut down your client (though I have to guess here).
On the server side, if you really want to read until EOF, you don't need to write the loop yourself; you can simply use the Read::read_to_end utility function. Here is an example of a client and server, with the client sending a single message terminated by EOF:
use std::io::Read;
use std::io::Write;
use std::net::TcpListener;
use std::net::TcpStream;

// --- Client code

const SERVER_ADDR: &str = "localhost:1234";

pub fn client() {
    let mut socket = TcpStream::connect(SERVER_ADDR).expect("Failed to connect");

    // Send a 'single' message; the flushes kind of simulate a very slow connection
    for _ in 0..3 {
        socket.write_all(b"Hello").expect("Failed to send");
        socket.flush().unwrap();
    }

    // Instead of shutdown, you can also drop(socket), but then you can't read.
    socket.shutdown(std::net::Shutdown::Write).unwrap();

    // go reading, or whatever
}

// --- Server code

const SERVER_BIND: &str = "127.0.0.1:1234";

pub fn server() {
    let listener = TcpListener::bind(SERVER_BIND).expect("Failed to bind");
    while let Ok((stream, _)) = listener.accept() {
        let _ = handle_client(stream); // don't care if the client screwed up
    }
}

pub fn handle_client(mut socket: TcpStream) -> std::io::Result<()> {
    let mut data: Vec<u8> = Vec::new();

    // Read all bytes until EOF
    socket.read_to_end(&mut data)?;

    println!("Data: {:?}", data); // or whatever
    Ok(())
}

Wrapping blocking mpsc in async Rust (Tokio)

I am trying to wrap a synchronous MQTT client library using Tokio. The code needs to continuously receive messages via a std::sync::mpsc channel and send them into the async code. I understand how to use spawn_blocking to wrap code that returns a single value, but how can this be applied to a loop that continuously receives messages from a std::sync::mpsc channel?
Here is the code that I use to send messages into the channel.
let (mut tx, mut rx) = std::sync::mpsc::channel();
tokio::spawn(async move {
    let mut mqtt_options = MqttOptions::new("bot", settings.mqtt.host, settings.mqtt.port);
    let (mut mqtt_client, notifications) = MqttClient::start(mqtt_options).unwrap();
    mqtt_client.subscribe(settings.mqtt.topic_name, QoS::AtLeastOnce).unwrap();
    tokio::task::spawn_blocking(move || {
        println!("Waiting for notifications");
        for notification in notifications {
            match notification {
                rumqtt::Notification::Publish(publish) => {
                    let payload = Arc::try_unwrap(publish.payload).unwrap();
                    let text: String = String::from_utf8(payload).expect("Can't decode payload for notification");
                    println!("Received message: {}", text);
                    let msg: Message = serde_json::from_str(&text).expect("Error while deserializing message");
                    println!("Deserialized message: {:?}", msg);
                    println!("{}", msg);
                    tx.send(msg);
                }
                _ => println!("{:?}", notification),
            }
        }
    });
});
But I am unsure how I should use the Tokio API to receive these messages inside another async task.
tokio::task::spawn(async move {
    // How do I receive messages via `rx` here? I can't use tokio::sync::mpsc channels,
    // since the code that sends messages is blocking.
});
I posted a separate thread on the rust-lang community forum and got an answer there:
std::sync::mpsc::channel can be swapped for tokio::sync::mpsc::unbounded_channel, which has a non-async send method. That solves the issue.
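A minimal, self-contained sketch of that swap (tokio 1.x assumed; the MQTT notification loop is replaced by a stand-in blocking producer):

#[tokio::main]
async fn main() {
    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<String>();

    // Stand-in for the blocking `for notification in notifications` loop.
    tokio::task::spawn_blocking(move || {
        for i in 0..3 {
            std::thread::sleep(std::time::Duration::from_millis(100));
            // UnboundedSender::send is not async and never blocks,
            // so it is safe to call from blocking code.
            tx.send(format!("message {}", i)).expect("receiver dropped");
        }
    });

    // The async side awaits messages without blocking the runtime.
    while let Some(msg) = rx.recv().await {
        println!("Received: {}", msg);
    }
}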

Can I connect to a Tarpc RPC Service without using anything but the address from the server?

I am using Tarpc.
Client
let (_, mut auth_reactor) = auth::spawn_server(auth_server_address);
let auth_client: auth::FutureClient = auth_reactor
    .run(auth::FutureClient::connect(
        auth_server_address,
        client::Options::default(),
    ))
    .unwrap();
auth_reactor
    .run(
        auth_client
            .authme(byte_vector_auth.clone())
            .map_err(|e| println!("{}", e))
            .and_then(|i| {
                println!("{:?}", i);
                Ok(())
            }),
    )
    .unwrap();
Server
pub fn spawn_server(address: SocketAddr) -> (server::Handle, reactor::Core) {
    let reactor = reactor::Core::new().unwrap();
    client::Options::default().handle(reactor.handle());
    let (auth_handler, server) = AuthServer
        .listen(address, &reactor.handle(), server::Options::default())
        .unwrap();
    reactor.handle().spawn(server);
    return (auth_handler, reactor);
}
I'm returning the reactor because I need it for the client.
With tokio, you need a reactor to run your async client.
You don't need to use the same reactor as the server, but you can have only one reactor per thread.
So you can spawn a client thread, or even build a separate binary for your client; a sketch follows below.
You will have to get the server's address some other way, but that shouldn't be too hard ;)
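A rough sketch of the client-thread variant, reusing only the calls from the snippets above (so still the old tokio-core-based tarpc API) and treating the address as already known:

use std::thread;

let client_thread = thread::spawn(move || {
    // A reactor owned by this thread, independent of the server's.
    let mut reactor = reactor::Core::new().unwrap();
    let auth_client = reactor
        .run(auth::FutureClient::connect(
            auth_server_address,
            client::Options::default(),
        ))
        .unwrap();
    reactor
        .run(
            auth_client
                .authme(byte_vector_auth.clone())
                .map_err(|e| println!("{}", e)),
        )
        .unwrap();
});
client_thread.join().unwrap();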

Unexpectedly closed channel in sync::mpsc

I have a closure that uses a Sender from std::sync::mpsc:
let node = Arc::new(Mutex::new(node_sender));
let switch_callback = move |p| match Params::parse::<Value>(p) {
    Ok(ref v) if v.as_array().is_some() => {
        let chain = v.as_array()
            .and_then(|arr| arr[0].as_str())
            .and_then(|s| Some(s.to_owned()))
            .unwrap();
        let channel = node.lock().unwrap().clone();
        match channel.send(chain.clone()) {
            Ok(_) => futures::done(Ok(Value::String(chain))).boxed(),
            Err(err) => futures::failed(JsonRpcError::invalid_params(
                format!("Node not responding: {}", err.to_string())))
                .boxed(),
        }
    }
    Ok(_) | Err(_) => {
        futures::failed(JsonRpcError::invalid_params("Invalid chain label for node"))
            .boxed()
    }
};
This closure is used as a callback from another thread. I used clone() here to clone the Sender, so I expected the channel to stay active. But the channel is actually getting closed. Why is this happening?
One possibility is that your Receiver has been dropped. The channel only stays active while both the Sender and the Receiver are alive.
One of the examples for Sender::send shows that dropping the Receiver terminates the channel:
use std::sync::mpsc::channel;
let (tx, rx) = channel();
// This send is always successful
tx.send(1).unwrap();
// This send will fail because the receiver is gone
drop(rx);
assert_eq!(tx.send(1).unwrap_err().0, 1);
Make sure your Receiver is alive for as long as your Sender is and you should not see this error.
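As an illustration (stand-in types, not the original callback code), the fix is a matter of scope: clones of the Sender move into the callback threads while the Receiver stays alive in the owning thread.

use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx, rx) = channel::<String>();

    // The callback/worker thread owns a clone of the Sender...
    let worker = thread::spawn(move || {
        tx.send("chain-label".to_owned()).unwrap();
    });

    // ...while the Receiver stays alive in the owning scope for as long
    // as messages are expected, so send() cannot fail with a closed channel.
    println!("{}", rx.recv().unwrap());
    worker.join().unwrap();
}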
