Rust, libzmq: client only sends a message when followed by a receive - rust

I am testing the Rust libzmq client
https://crates.io/crates/libzmq
https://docs.rs/libzmq/0.2.5/libzmq/struct.ClientBuilder.html
and stumbled upon this weird behavior.
This client would not send the message:
use libzmq::{prelude::*, *, ServerBuilder, ClientBuilder, TcpAddr};

fn main() -> Result<(), String> {
    let saddr = format!("{}:{}", "192.168.1.206", "5540");
    println!("Starting client for {}", saddr);
    let addr2: TcpAddr = saddr.try_into().unwrap();
    let client = ClientBuilder::new()
        .connect(&addr2)
        .build()
        .unwrap();
    client.send("test").unwrap();
    println!("Finished");
    Ok(())
}
This one does send:
use libzmq::{prelude::*, *, ServerBuilder, ClientBuilder, TcpAddr};

fn main() -> Result<(), String> {
    let saddr = format!("{}:{}", "192.168.1.206", "5540");
    println!("Starting client for {}", saddr);
    let addr2: TcpAddr = saddr.try_into().unwrap();
    let client = ClientBuilder::new()
        .connect(&addr2)
        .build()
        .unwrap();
    client.send("test").unwrap();
    let mut msg = Msg::new();
    client.recv(&mut msg).unwrap();
    println!("Finished");
    Ok(())
}
For receiving I am using a Python server (but I tested with a Rust server as well):
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.SERVER)
    socket.bind("tcp://0.0.0.0:5540")
    print("waiting for hand shake")
    m = socket.recv(copy=False)
    print("got it...")
    socket.send(b'READY', routing_id=m.routing_id)
    print("sent reply")
    socket.close()
    context.term()
Output on server side from code 1:
waiting for hand shake
[blocked]
Output on server side from code 2:
waiting for hand shake
got it...
sent reply
I am using this crate version for Rust:
[dependencies]
libzmq = "0.2.5"

The libzmq crate is just a wrapper for the libzmq C library. In the documentation for the C library, you will find this note:
NOTE: A successful invocation of zmq_msg_send() does not indicate that the
message has been transmitted to the network, only that it has been queued on
the 'socket' and 0MQ has assumed responsibility for the message.
In your first example, since the program exits immediately after calling send, the message is still in the libzmq queue and has not been transmitted yet. In your second example, the message gets transmitted while your program is waiting in the recv call.
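A minimal sketch of the difference (my assumption: anything that keeps the process alive long enough for libzmq's I/O thread to flush the queue will do; blocking on recv for the server's reply, as in your second example, is the most reliable option):

use libzmq::{prelude::*, ClientBuilder, TcpAddr};
use std::{thread, time::Duration};

fn main() {
    let saddr = format!("{}:{}", "192.168.1.206", "5540");
    let addr: TcpAddr = saddr.try_into().unwrap();
    let client = ClientBuilder::new().connect(&addr).build().unwrap();
    client.send("test").unwrap();
    // Give the background I/O thread a moment to actually transmit the queued
    // message before main returns and the context is torn down. The 100 ms value
    // is arbitrary; waiting for the reply with client.recv() is the robust fix.
    thread::sleep(Duration::from_millis(100));
    println!("Finished");
}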

Related

How to access the connection status of an awc websocket client?

I have a basic Actix Web server set up, and I have successfully been creating websocket connections in my tests using awc::client::Client.
Now I am trying to test that my server is closing all of the websocket connections when I tell it to reset the status of the app.
My current planned test for this is:
#[test]
async fn reset_game_works_basic() {
    let server: TestServer = test_fixtures::get_test_server();
    let (_resp, mut chris_connection) = Client::new()
        .ws(server.url("/join-game?username=Chris"))
        .connect()
        .await
        .unwrap();

    let _ = server.post("/reset-game").send().await;

    let websocket_connected = chris_connection.websocket_connected_status();
    // ^^^^ Not a real function
    assert_eq!(websocket_connected, false);
}
From the awc websocket docs, I have only been able to find a CloseCode enum, but that seems to be used for the client to tell the server why it is closing the connection.
I also fruitlessly tried to check if the connection was open by using is_write_ready(), but that returned true.
I have done manual testing with Postman, and the clients are being disconnected when you send a post request to /reset-game.
How should I ask the client if it still has an open connection?
I ended up finding a working (albeit fragile) test for this.
When the websocket connection is closed, .next().await will return None, and when it is open, it will return Some, even if there is no next message!
So this test successfully passes with the working code, and fails if I remove the code that actually closes the websocket!
#[test]
async fn reset_game_works_basic() {
    let server: TestServer = test_fixtures::get_test_server();
    let (_resp, mut chris_connection) = Client::new()
        .ws(server.url("/join-game?username=Chris"))
        .connect()
        .await
        .unwrap();

    let _ = server.post("/reset-game").send().await;

    let _join = chris_connection.next().await;
    let no_message = chris_connection.next().await;
    let websocket_disconnected = no_message.is_none();
    assert_eq!(websocket_disconnected, true);
}
This approach is certainly fragile: if the server sends a second message before disconnecting, the test will break. But that is a separate problem from the one I was trying to solve here.
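A somewhat less fragile variant (a sketch only, replacing the last four lines of the test above; it assumes awc's ws::Frame type and that the stream yields Result<Frame, _> items): drain frames until the stream ends or an explicit Close frame arrives, ignoring anything else the server sends first.

use awc::ws::Frame;
use futures::StreamExt; // already needed for .next() in the test above

let disconnected = loop {
    match chris_connection.next().await {
        None => break true,                      // stream ended: connection is gone
        Some(Ok(Frame::Close(_))) => break true, // server sent an explicit close frame
        Some(Ok(_)) => continue,                 // skip other frames (e.g. the join message)
        Some(Err(_)) => break true,              // protocol error: treat as disconnected
    }
};
assert!(disconnected);

Note that this will block forever if the server keeps the connection open without sending anything, so a timeout around the loop would be sensible in a real test.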

Rust on ESP32 - How to send (and receive) data using the MQTT protocol to AWS IoT (Core)?

First off: I know running Rust on an ESP32 isn't a very common practice yet, and some (quite a bit of) trouble is to be expected. But I seem to have hit a roadblock.
What works:
flashing and running the code on an ESP32
passing along the certificates in the src/certificates directory
WiFi connection (simple WPA Personal, nothing fancy like WPA Enterprise)
publishing and subscribing to topics using MQTT
What doesn't work:
publishing and subscribing to AWS IoT (Core) using MQTT. This needs certificates, and as far as I'm aware I'm handling this properly (see code below).
Some additional info (see code below):
server.cert.crt is renamed from the AWS provided root-CA.crt
client.cert.pem is renamed from the AWS provided my-thing-rev1.cert.pem
client.private.key is renamed from the AWS provided my-thing-rev1.private.key
I also received my-thing-rev1.public.key and my-thing-rev1-Policy, but I don't think I need these...?
I know this is not the proper way of implementing this (I should not provide the certificates directly, instead use a service to get them, but this is a very basic POC)
the code works fine if I don't want to connect to AWS, but instead use my own broker or broker.emqx.io for testing (even with the certificates included)
This is the code I'm currently using (heavily based on Rust on ESP32 STD demo app):
use embedded_svc::httpd::Result;
use embedded_svc::mqtt::client::{Connection, MessageImpl, QoS};
use esp_idf_svc::mqtt::client::{EspMqttClient, MqttClientConfiguration};
use esp_idf_svc::tls::X509;
use esp_idf_sys::EspError;
// other needed imports (not relevant here)

extern crate dotenv_codegen;
extern crate core;

const AWS_IOT_ENDPOINT: &str = dotenv!("AWS_IOT_ENDPOINT");
const AWS_IOT_CLIENT_ID: &str = dotenv!("AWS_IOT_CLIENT_ID");
const AWS_IOT_TOPIC: &str = dotenv!("AWS_IOT_TOPIC");

fn main() -> Result<()> {
    esp_idf_sys::link_patches();

    // other code

    let mqtt_client: EspMqttClient<ConnState<MessageImpl, EspError>> = test_mqtt_client()?;

    // more code

    Ok(())
}

fn convert_certificate(mut certificate_bytes: Vec<u8>) -> X509<'static> {
    // append NUL
    certificate_bytes.push(0);

    // convert the certificate
    let certificate_slice: &[u8] = unsafe {
        let ptr: *const u8 = certificate_bytes.as_ptr();
        let len: usize = certificate_bytes.len();
        mem::forget(certificate_bytes);
        slice::from_raw_parts(ptr, len)
    };

    // return the certificate file in the correct format
    X509::pem_until_nul(certificate_slice)
}

fn test_mqtt_client() -> Result<EspMqttClient<ConnState<MessageImpl, EspError>>> {
    info!("About to start MQTT client");

    let server_cert_bytes: Vec<u8> = include_bytes!("certificates/server.cert.crt").to_vec();
    let client_cert_bytes: Vec<u8> = include_bytes!("certificates/client.cert.pem").to_vec();
    let private_key_bytes: Vec<u8> = include_bytes!("certificates/client.private.key").to_vec();

    let server_cert: X509 = convert_certificate(server_cert_bytes);
    let client_cert: X509 = convert_certificate(client_cert_bytes);
    let private_key: X509 = convert_certificate(private_key_bytes);

    // TODO: fix the following error: `E (16903) esp-tls-mbedtls: mbedtls_ssl_handshake returned -0x7280`
    let conf = MqttClientConfiguration {
        client_id: Some(AWS_IOT_CLIENT_ID),
        crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
        server_certificate: Some(server_cert),
        client_certificate: Some(client_cert),
        private_key: Some(private_key),
        ..Default::default()
    };

    let (mut client, mut connection) =
        EspMqttClient::new_with_conn(AWS_IOT_ENDPOINT, &conf)?;

    info!("MQTT client started");

    // Need to immediately start pumping the connection for messages, or else subscribe() and publish() below will not work.
    // Note that when using the alternative constructor - `EspMqttClient::new` - you don't need to
    // spawn a new thread, as the messages will be pumped with a backpressure into the callback you provide.
    // Yet, you still need to efficiently process each message in the callback without blocking for too long.
    //
    // Note also that if you go to http://tools.emqx.io/ and then connect and send a message to the specified topic,
    // the client configured here should receive it.
    thread::spawn(move || {
        info!("MQTT Listening for messages");

        while let Some(msg) = connection.next() {
            match msg {
                Err(e) => info!("MQTT Message ERROR: {}", e),
                Ok(msg) => info!("MQTT Message: {:?}", msg),
            }
        }

        info!("MQTT connection loop exit");
    });

    client.subscribe(AWS_IOT_TOPIC, QoS::AtMostOnce)?;

    info!("Subscribed to all topics ({})", AWS_IOT_TOPIC);

    client.publish(
        AWS_IOT_TOPIC,
        QoS::AtMostOnce,
        false,
        format!("Hello from {}!", AWS_IOT_TOPIC).as_bytes(),
    )?;

    info!("Published a hello message to topic \"{}\".", AWS_IOT_TOPIC);

    Ok(client)
}
Here are the final lines of output when I try to run this on the device (it's set up to compile, flash to the device, and monitor in debug mode when running cargo run):
I (16913) esp32_aws_iot_with_std: About to start MQTT client
I (16923) esp32_aws_iot_with_std: MQTT client started
I (16923) esp32_aws_iot_with_std: MQTT Listening for messages
I (16933) esp32_aws_iot_with_std: MQTT Message: BeforeConnect
I (17473) esp-x509-crt-bundle: Certificate validated
E (19403) MQTT_CLIENT: mqtt_message_receive: transport_read() error: errno=119 # <- This is the actual error
E (19403) MQTT_CLIENT: esp_mqtt_connect: mqtt_message_receive() returned -1
E (19413) MQTT_CLIENT: MQTT connect failed
I (19413) esp32_aws_iot_with_std: MQTT Message ERROR: ESP_FAIL
I (19423) esp32_aws_iot_with_std: MQTT Message: Disconnected
E (19433) MQTT_CLIENT: Client has not connected
I (19433) esp32_aws_iot_with_std: MQTT connection loop exit
I (24423) esp_idf_svc::eventloop: Dropped
I (24423) esp_idf_svc::wifi: Stop requested
I (24423) wifi:state: run -> init (0)
I (24423) wifi:pm stop, total sleep time: 10737262 us / 14862601 us
W (24423) wifi:<ba-del>idx
I (24433) wifi:new:<1,0>, old:<1,1>, ap:<1,1>, sta:<1,0>, prof:1
W (24443) wifi:hmac tx: ifx0 stop, discard
I (24473) wifi:flush txq
I (24473) wifi:stop sw txq
I (24473) wifi:lmac stop hw txq
I (24473) esp_idf_svc::wifi: Stopping
I (24473) esp_idf_svc::wifi: Disconnect requested
I (24473) esp_idf_svc::wifi: Stop requested
I (24483) esp_idf_svc::wifi: Stopping
I (24483) wifi:Deinit lldesc rx mblock:10
I (24503) esp_idf_svc::wifi: Driver deinitialized
I (24503) esp_idf_svc::wifi: Dropped
I (24503) esp_idf_svc::eventloop: Dropped
Error: ESP_FAIL
This error seems to indicate that the buffer holding the incoming data is full and can't hold any more data, but I'm not sure. And I definitely don't know how to fix it.
(I assume the actual certificate handling is done properly)
When I run the following command, I do get the message in AWS IoT (MQTT test client):
mosquitto_pub -h my.amazonawsIoT.com --cafile server.cert.crt --cert client.cert.pem --key client.private.key -i basicPubSub -t my/topic -m 'test'
Does anyone have some more experience with this who can point me in the right direction?
Is this actually a buffer error, and if so, how do I mitigate it? Do I need to increase the buffer size somehow (it is running on a basic ESP32 revision 1, ESP32_Devkitc_v4, if that helps)? As far as I can tell this version has 4 MB of flash, so that might explain a buffer overflow, although I think this should be enough. The total memory used is under 35% of the total storage (App/part. size: 1347344/4128768 bytes, 32.63%).
UPDATE 1: I have been made aware that this data is stored in RAM, not in flash memory (that didn't cross my mind at the time), but I'm not entirely sure how large the RAM on my specific device is (ESP32 revision 1, ESP32_Devkitc_v4). My best guess is 320 KB, but I'm not sure.
UPDATE 2: I've tried changing the buffer size like so:
let conf = MqttClientConfiguration {
client_id: Some(AWS_IOT_CLIENT_ID),
crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
server_certificate: Some(server_cert),
client_certificate: Some(client_cert),
private_key: Some(private_key),
buffer_size: 50, // added this (tried various sizes)
out_buffer_size: 50, // added this (tried various sizes)
..Default::default()
};
I've tried various combinations, but this doesn't seem to change much: either I get the exact same error, or this one (when choosing smaller numbers, for example 10):
E (18303) MQTT_CLIENT: Connect message cannot be created
E (18303) MQTT_CLIENT: MQTT connect failed
E (18313) MQTT_CLIENT: Client has not connected
I'm not sure how big this buffer size should be (when sending simple timestamps to AWS IoT), and I can't find any documentation on what this number represents: is it in bits, kilobits, ...? No idea.
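For reference, this is the variant I would try next (a sketch only; my assumption is that these fields map onto ESP-IDF's esp_mqtt_client_config_t buffer sizes, which are byte counts defaulting to roughly 1024 bytes, so 50 or 10 would be far too small to even hold the MQTT CONNECT packet):

// Sketch: same configuration as above, but with buffers that are comfortably
// larger than the (assumed ~1024-byte) default. The values are guesses.
let conf = MqttClientConfiguration {
    client_id: Some(AWS_IOT_CLIENT_ID),
    crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
    server_certificate: Some(server_cert),
    client_certificate: Some(client_cert),
    private_key: Some(private_key),
    buffer_size: 4096,     // incoming buffer, in bytes (assumption)
    out_buffer_size: 4096, // outgoing buffer, in bytes (assumption)
    ..Default::default()
};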

Rust thrussh library client example fails at channel_open_session

In thrussh's documentation they have server and client example code.
Code based on the example server code has been working fine in various projects. However, the client example fails at the line:
let mut channel = session.channel_open_session().await.unwrap();
The error I get is this:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Disconnect', src/main.rs:121:64
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I'm not sure what is causing the panic as everything works fine up until that point. The server calls both finished_auth() and channel_open_confirmation() but never gets to call channel_open_session(). I've been looking through the source but I can't seem to identify what's wrong.
Here's the full code:
use thrussh_keys::*;
use thrussh::*;

let ssh_config = thrussh::client::Config::default();
let ssh_config = Arc::new(ssh_config);

let sh = Client {};

let key = thrussh_keys::load_secret_key("server.key", passphrase).unwrap();
let mut agent = thrussh_keys::agent::client::AgentClient::connect_env().await.unwrap();
agent.add_identity(&key, &[]).await.unwrap();

let mut session = thrussh::client::connect(ssh_config, format!("localhost:{}", config.port), sh)
    .await
    .unwrap();
println!("connected");

if session
    .authenticate_future(config.user, key.clone_public_key(), agent)
    .await
    .1
    .unwrap()
{
    println!("session authenticated");
    let mut channel = session.channel_open_session().await.unwrap();
    channel.data(&b"Hello, world!"[..]).await.unwrap();
    if let Some(msg) = channel.wait().await {
        println!("{:?}", msg)
    }
}
The code on the server side is the same as the server example, aside from writing the key to disk so that it can be used in the client code.
I guess the channel-open response packet is consumed by your channel_open_confirmation implementation, so wait_channel_confirmation stays pending forever, waiting for a confirmation packet that never arrives.
If you comment out the channel_open_confirmation implementation on your Client handler, it should work.
The trait has a default implementation for this method, which forwards the OpenChannelMsg to the pending future.
So the demo code is flawed; unfortunately, I have no way to report this issue.
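For reference, here is roughly what the client handler from the thrussh example looks like with that override removed (a sketch reconstructed from the published example, so the exact signatures may differ between thrussh versions):

struct Client {}

impl client::Handler for Client {
    type Error = anyhow::Error;
    type FutureUnit = futures::future::Ready<Result<(Self, client::Session), anyhow::Error>>;
    type FutureBool = futures::future::Ready<Result<(Self, bool), anyhow::Error>>;

    fn finished_bool(self, b: bool) -> Self::FutureBool {
        futures::future::ready(Ok((self, b)))
    }
    fn finished(self, session: client::Session) -> Self::FutureUnit {
        futures::future::ready(Ok((self, session)))
    }
    fn check_server_key(self, server_public_key: &key::PublicKey) -> Self::FutureBool {
        println!("check_server_key: {:?}", server_public_key);
        self.finished_bool(true)
    }
    // Deliberately NOT overriding channel_open_confirmation here: the trait's
    // default implementation forwards the confirmation to the pending
    // channel_open_session() future, which is what the example's override
    // accidentally swallows.
}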

How to retrieve information from the tokio-proto connection handshake?

I'm figuring out how to use the tokio-proto crate, particularly the handshake made when a connection is established. I've got the example from the official documentation working:
impl<T: AsyncRead + AsyncWrite + 'static> ClientProto<T> for ClientLineProto {
    type Request = String;
    type Response = String;

    /// `Framed<T, LineCodec>` is the return value of `io.framed(LineCodec)`
    type Transport = Framed<T, line::LineCodec>;
    type BindTransport = Box<Future<Item = Self::Transport, Error = io::Error>>;

    fn bind_transport(&self, io: T) -> Self::BindTransport {
        // Construct the line-based transport
        let transport = io.framed(line::LineCodec);

        // Send the handshake frame to the server.
        let handshake = transport.send("You ready?".to_string())
            // Wait for a response from the server; if the transport errors out,
            // we don't care about the transport handle anymore, just the error
            .and_then(|transport| transport.into_future().map_err(|(e, _)| e))
            .and_then(|(line, transport)| {
                // The server sent back a line, check to see if it is the
                // expected handshake line.
                match line {
                    Some(ref msg) if msg == "Bring it!" => {
                        println!("CLIENT: received server handshake");
                        Ok(transport)
                    }
                    Some(ref msg) if msg == "No! Go away!" => {
                        // At this point, the server is at capacity. There are a
                        // few things that we could do. Set a backoff timer and
                        // try again in a bit. Or we could try a different
                        // remote server. However, we're just going to error out
                        // the connection.
                        println!("CLIENT: server is at capacity");
                        let err = io::Error::new(io::ErrorKind::Other, "server at capacity");
                        Err(err)
                    }
                    _ => {
                        println!("CLIENT: server handshake INVALID");
                        let err = io::Error::new(io::ErrorKind::Other, "invalid handshake");
                        Err(err)
                    }
                }
            });

        Box::new(handshake)
    }
}
But the official docs only mention a handshake without stateful information. Is there a common way to retrieve and store useful data from the handshake?
For example, if during the handshake (in the first message after the connection is established) the server sends some key that should be used later by the client, how should the ClientProto implementation look into that key? And where should it be stored?
You can add fields to ClientLineProto, so this should work:
pub struct ClientLineProto {
    handshakes: Arc<Mutex<HashMap<String, String>>>
}
And then you can reference it and store data as needed:
let mut handshakes = self.handshakes.lock().unwrap();
handshakes.insert(handshake_key, "Blah blah handshake data".to_string());
This sort of access works in bind_transport() for storing things. If you create the Arc<Mutex<HashMap>> in your main() function, you will also have access to it around the serve() call, which means you can pass it into the Service object instantiation so that the handshakes are available during call().
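A sketch of how bind_transport() might capture the map and store whatever the server sends back (assumptions on my side: the ClientLineProto shown above with a std::sync::Mutex, and that the server's first line is the value you want to keep):

fn bind_transport(&self, io: T) -> Self::BindTransport {
    // Clone the Arc so the closure below can own a handle to the shared map.
    let handshakes = self.handshakes.clone();

    let transport = io.framed(line::LineCodec);
    let handshake = transport.send("You ready?".to_string())
        .and_then(|transport| transport.into_future().map_err(|(e, _)| e))
        .and_then(move |(line, transport)| match line {
            Some(key) => {
                // Store the server-provided value for later use by the Service.
                handshakes.lock().unwrap().insert("server-key".to_string(), key);
                Ok(transport)
            }
            None => Err(io::Error::new(io::ErrorKind::Other, "connection closed during handshake")),
        });

    Box::new(handshake)
}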

Asynchronously reconnecting a client to a server in an infinite loop

I'm not able to create a client that tries to connect to a server and:
if the server is down it has to try again in an infinite loop
if the server is up and connection is successful, when the connection is lost (i.e. server disconnects the client) the client has to restart the infinite loop to try to connect to the server
Here's the code to connect to a server; currently when the connection is lost the program exits. I'm not sure what the best way to implement it is; maybe I have to create a Future with an infinite loop?
extern crate tokio_line;
use tokio_line::LineCodec;

fn get_connection(handle: &Handle) -> Box<Future<Item = (), Error = io::Error>> {
    let remote_addr = "127.0.0.1:9876".parse().unwrap();
    let tcp = TcpStream::connect(&remote_addr, handle);

    let client = tcp.and_then(|stream| {
        let (sink, from_server) = stream.framed(LineCodec).split();
        let reader = from_server.for_each(|message| {
            println!("{}", message);
            Ok(())
        });

        reader.map(|_| {
            println!("CLIENT DISCONNECTED");
            ()
        }).map_err(|err| err)
    });

    let client = client.map_err(|_| { panic!() });
    Box::new(client)
}

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = get_connection(&handle);

    let client = client.and_then(|c| {
        println!("Try to reconnect");
        get_connection(&handle);
        Ok(())
    });

    core.run(client).unwrap();
}
Add the tokio-line crate with:
tokio-line = { git = "https://github.com/tokio-rs/tokio-line" }
The key question seems to be: how do I implement an infinite loop using Tokio? By answering this question, we can tackle the problem of reconnecting infinitely upon disconnection. From my experience writing asynchronous code, recursion seems to be a straightforward solution to this problem.
UPDATE: as pointed out by Shepmaster (and the folks of the Tokio Gitter), my original answer leaks memory since we build a chain of futures that grows on each iteration. Here follows a new one:
Updated answer: use loop_fn
There is a function in the futures crate that does exactly what you need. It is called loop_fn. You can use it by changing your main function to the following:
fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = future::loop_fn((), |_| {
        // Run the get_connection function and loop again regardless of its result
        get_connection(&handle).map(|_| -> Loop<(), ()> {
            Loop::Continue(())
        })
    });

    core.run(client).unwrap();
}
The function resembles a for loop, which can continue or break depending on the result of get_connection (see the documentation for the Loop enum). In this case, we choose to always continue, so it will infinitely keep reconnecting.
Note that your version of get_connection will panic if there is an error (e.g. if the client cannot connect to the server). If you also want to retry after an error, you should remove the call to panic!.
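If you do remove it, one way to keep looping regardless of the outcome (a sketch, assuming get_connection no longer converts errors into panics) is to use then instead of map, so that both success and error become Loop::Continue:

let client = future::loop_fn((), |_| {
    get_connection(&handle).then(|result| -> Result<Loop<(), ()>, io::Error> {
        if let Err(e) = result {
            println!("Connection error, retrying: {}", e);
        }
        // Keep looping whether the connection ended cleanly or with an error.
        Ok(Loop::Continue(()))
    })
});

In a real client you would probably also add a backoff timer before continuing, but that is orthogonal to the looping itself.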
Old answer: use recursion
Here follows my old answer, in case anyone finds it interesting.
WARNING: using the code below results in unbounded memory growth.
Making get_connection loop infinitely
We want to call the get_connection function each time the client is disconnected, so that is exactly what we are going to do (look at the comment after reader.and_then):
fn get_connection(handle: &Handle) -> Box<Future<Item = (), Error = io::Error>> {
    let remote_addr = "127.0.0.1:9876".parse().unwrap();
    let tcp = TcpStream::connect(&remote_addr, handle);

    let handle_clone = handle.clone();
    let client = tcp.and_then(|stream| {
        let (sink, from_server) = stream.framed(LineCodec).split();
        let reader = from_server.for_each(|message| {
            println!("{}", message);
            Ok(())
        });

        reader.and_then(move |_| {
            println!("CLIENT DISCONNECTED");
            // Attempt to reconnect in the future
            get_connection(&handle_clone)
        })
    });

    let client = client.map_err(|_| { panic!() });
    Box::new(client)
}
Remember that get_connection is non-blocking. It just constructs a Box<Future>. This means that when calling it recursively, we still don't block. Instead, we get a new future, which we can link to the previous one by using and_then. As you can see, this is different from normal recursion since the stack doesn't grow on each iteration.
Note that we need to clone the handle (see handle_clone), and move it into the closure passed to reader.and_then. This is necessary because the closure is going to live longer than the function (it will be contained in the future we are returning).
Handling errors
The code you provided doesn't handle the case in which the client is unable to connect to the server (nor any other errors). Following the same principle shown above, we can handle errors by changing the end of get_connection to the following:
let handle_clone = handle.clone();
let client = client.or_else(move |err| {
    // Note: this code will infinitely retry, but you could pattern match on the error
    // to retry only on certain kinds of error
    println!("Error connecting to server: {}", err);
    get_connection(&handle_clone)
});

Box::new(client)
Note that or_else is like and_then, but it operates on the error produced by the future.
Removing unnecessary code from main
Finally, it is not necessary to use and_then in the main function. You can replace your main by the following code:
fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = get_connection(&handle);

    core.run(client).unwrap();
}
