How to configure tonic::transport::Endpoint to recover a broken connection? - rust

I have a problem with a client: if the connection between the client and the server is lost, the client can no longer restore it, and the program that uses it stops working. This happens, for example, after the machine sleeps for a long time or is physically disconnected from the server.
Further use of the gRPC service is then impossible; every call ends in a timeout error or a broken connection.
[2023-01-28T15:11:55Z DEBUG tower::buffer::worker] service.ready=true message=processing request
[2023-01-28T15:11:55Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(19), flags: (0x4: END_HEADERS) }
[2023-01-28T15:11:55Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(19) }
[2023-01-28T15:11:55Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(19), flags: (0x1: END_STREAM) }
[2023-01-28T15:12:01Z DEBUG hyper::proto::h2::server] stream error: connection error: broken pipe
[2023-01-28T15:12:01Z DEBUG h2::codec::framed_write] send frame=Reset { stream_id: StreamId(19), error_code: CANCEL }
or this:
[2023-01-28T16:11:55Z DEBUG client] send: interval(40)
[2023-01-28T16:11:55Z DEBUG h2::codec::framed_write] send frame=Reset { stream_id: StreamId(25), error_code: CANCEL }
[2023-01-28T16:11:55Z DEBUG tower::buffer::worker] service.ready=true message=processing request
[2023-01-28T16:11:55Z DEBUG h2::codec::framed_write] send frame=Headers { stream_id: StreamId(27), flags: (0x4: END_HEADERS) }
[2023-01-28T16:11:55Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(27) }
[2023-01-28T16:11:55Z DEBUG h2::codec::framed_write] send frame=Data { stream_id: StreamId(27), flags: (0x1: END_STREAM) }
[2023-01-28T16:11:56Z ERROR client] status: Cancelled, message: "Timeout expired", details: [], metadata: MetadataMap { headers: {} }
I encountered this problem in a working program in a finished product. To study it, I created a simple server and client based on the tonic examples, and I get the same errors.
I run the server on a remote machine so that I can physically break the connection between the client and the server.
Server
use std::error::Error;
use std::net::SocketAddr;
use std::str::FromStr;

use log::debug;
use tonic::metadata::MetadataValue;
use tonic::transport::Server;
use tonic::{Request, Response, Status};

// `test_grpc` is the module generated from the proto file
// (e.g. via tonic::include_proto!).
use test_grpc::{RequestSay, ResponseSay};

struct Service {}

#[tonic::async_trait]
impl test_grpc::say_server::Say for Service {
    async fn hello(&self, request: Request<RequestSay>) -> Result<Response<ResponseSay>, Status> {
        let r = request.into_inner().text;
        debug!("in request: {}", r);
        Ok(Response::new(ResponseSay {
            text: format!("hello {r}"),
        }))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
    env_logger::Builder::new()
        .filter_level(log::LevelFilter::from_str("debug").unwrap())
        .init();
    let s = Service {};
    let key = "secret token";
    // Reject requests whose `authorization` metadata does not match the key.
    let svc = test_grpc::say_server::SayServer::with_interceptor(
        s,
        move |req: Request<()>| -> Result<Request<()>, Status> {
            let token: MetadataValue<_> = key.parse().unwrap();
            match req.metadata().get("authorization") {
                Some(t) if token == t => Ok(req),
                _ => Err(Status::unauthenticated("No valid auth token")),
            }
        },
    );
    let addr = "0.0.0.0:8804".parse::<SocketAddr>().unwrap();
    Server::builder()
        .add_service(svc)
        .serve(addr)
        .await
        .unwrap();
    Ok(())
}
Client
use std::time::Duration;

use log::{debug, error};
use tokio::time;

async fn tester_client(sleep: Duration, uri: &str, key: &str) {
    let uri = uri.parse().unwrap();
    debug!("create connect");
    let chan = tonic::transport::Channel::builder(uri)
        .timeout(Duration::from_secs(20))
        .connect_timeout(Duration::from_secs(20))
        //.http2_keep_alive_interval(Duration::from_secs(5))
        //.keep_alive_while_idle(true)
        .connect_lazy();
    let key = key.parse::<tonic::metadata::MetadataValue<_>>().unwrap();
    let mut key = Some(key);
    // Attach the `authorization` metadata to every outgoing request.
    let mut service = test_grpc::say_client::SayClient::with_interceptor(
        chan,
        move |mut req: tonic::Request<()>| {
            if let Some(secret) = &mut key {
                req.metadata_mut().insert("authorization", secret.clone());
            }
            Ok(req)
        },
    );
    loop {
        let send_text = format!("interval({})", sleep.as_secs_f32() / 60.0);
        debug!("send: {send_text}");
        let res = match service
            .hello(tonic::Request::new(test_grpc::RequestSay {
                text: send_text.clone(),
            }))
            .await
        {
            Ok(r) => r,
            Err(e) => {
                error!("{e:#}");
                continue;
            }
        };
        debug!("recv: {}", res.into_inner().text);
        time::sleep(sleep).await;
        println!();
    }
}
I have tried several settings. For example, with .http2_keep_alive_interval(Duration::from_secs(5)) the connection does not break during idle time. But if I physically break the connection, it can no longer be restored.
Perhaps I need to specify some other settings so that a new connection is established when the old one breaks?
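One workaround I am experimenting with, in case it helps frame the question: rebuilding the channel and client when an RPC fails, since connect_lazy() makes constructing a channel cheap. This is only a sketch under my own assumptions (the make_channel helper and the single retry are mine, not a tonic API, and the authorization interceptor is omitted for brevity), not a solution I am sure of:

use std::time::Duration;
use tonic::transport::{Channel, Uri};

// Hypothetical helper: build a fresh lazy channel for the given URI.
fn make_channel(uri: Uri) -> Channel {
    Channel::builder(uri)
        .timeout(Duration::from_secs(20))
        .connect_timeout(Duration::from_secs(20))
        .connect_lazy()
}

async fn hello_with_retry(uri: Uri, text: String) -> Result<String, tonic::Status> {
    let mut client = test_grpc::say_client::SayClient::new(make_channel(uri.clone()));
    match client
        .hello(tonic::Request::new(test_grpc::RequestSay { text: text.clone() }))
        .await
    {
        Ok(resp) => Ok(resp.into_inner().text),
        Err(_first_err) => {
            // Drop the old channel entirely and retry once on a fresh one.
            let mut client = test_grpc::say_client::SayClient::new(make_channel(uri));
            Ok(client
                .hello(tonic::Request::new(test_grpc::RequestSay { text }))
                .await?
                .into_inner()
                .text)
        }
    }
}

This brute-force rebuild works for my test client, but I would prefer a proper Endpoint/Channel setting if one exists.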

Related

Rust: How to move child (CommandChild) to thread to be killed on window exit

I am trying to run a streamlit server behind the scenes of a Tauri application.
The streamlit server is packaged with PyInstaller into a single binary file and works as expected when standalone.
I have a Rust main.rs file that needs to run the streamlit binary using a Command (in this case it's a tauri::api::process::Command created with new_sidecar).
It spawns the server, but when the app closes, my streamlit server is not disposed of.
On window exit, I want to send a kill command to kill the server instance in the child.
Here is an example of my code:
#![cfg_attr(
    all(not(debug_assertions), target_os = "windows"),
    windows_subsystem = "windows"
)]
use async_std::task;
use std::sync::mpsc::sync_channel;
use std::thread;
use std::time::Duration;
use tauri::api::process::{Command, CommandEvent};
use tauri::{Manager, WindowEvent};

fn main() {
    let (tx_kill, rx_kill) = sync_channel(1);
    tauri::Builder::default()
        .setup(|app| {
            println!("App Setup Start");
            let t = Command::new_sidecar("streamlit").expect("failed to create sidecar");
            let (mut rx, child) = t.spawn().expect("Failed to spawn server");
            let splashscreen_window = app.get_window("splashscreen").unwrap();
            let main_window = app.get_window("main").unwrap();
            // Listen for server port then refresh main window
            tauri::async_runtime::spawn(async move {
                while let Some(event) = rx.recv().await {
                    if let CommandEvent::Stdout(output) = event {
                        if output.contains("Network URL:") {
                            let tokens: Vec<&str> = output.split(":").collect();
                            let port = tokens.last().unwrap();
                            println!("Connect to port {}", port);
                            main_window.eval(&format!(
                                "window.location.replace('http://localhost:{}')",
                                port
                            ));
                            task::sleep(Duration::from_secs(2)).await;
                            splashscreen_window.close().unwrap();
                            main_window.show().unwrap();
                        }
                    }
                }
            });
            // Listen for kill command
            thread::spawn(move || loop {
                let event = rx_kill.recv();
                if event.unwrap() == -1 {
                    child.kill().expect("Failed to close API");
                }
            });
            Ok(())
        })
        .on_window_event(move |event| match event.event() {
            WindowEvent::Destroyed => {
                println!("Window destroyed");
                tx_kill.send(-1).expect("Failed to send close signal");
            }
            _ => {}
        })
        .run(tauri::generate_context!())
        .expect("error while running application");
}
But I am getting an error when I try to kill the child instance:
thread::spawn(move || loop {
    let event = rx_kill.recv();
    if event.unwrap() == -1 {
        child.kill().expect("Failed to close API");
    }
});
use of moved value: `child`
move occurs because `child` has type `CommandChild`, which does not implement the `Copy` trait
Any ideas?
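One likely cause, offered as an assumption rather than a confirmed diagnosis: in tauri v1, CommandChild::kill(self) consumes the child, so calling it inside a loop moves child out on the first iteration, and the compiler rejects the later iterations. A sketch that receives the signal once instead of looping:

// Receive the kill signal a single time, so `child` is moved into
// kill() (which takes `self` by value) exactly once.
thread::spawn(move || {
    if let Ok(-1) = rx_kill.recv() {
        child.kill().expect("Failed to close API");
    }
});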

Too many connections from pinging in Rust?

I have code to check the network connection, like this:
fn network_connection() {
    loop {
        let output = Command::new("ping")
            .arg("-c")
            .arg("1")
            .arg("google.com")
            .output()
            .expect("[ ERR ] Failed to execute network check process");
        let network_status = output.status;
        if network_status.success() == false {
            Command::new("init")
                .arg("6");
        }
        thread::sleep(Duration::from_millis(60000));
    }
}

fn main() {
    thread::spawn(|| {
        network_connection();
    });
    ...
This should ping Google every 60 seconds, but when I look at the number of requests sent to the router, it is around 200 requests per 10 minutes.
Is this spawning more threads than one?
main() is running only once.
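A side note on the snippet itself, separate from the request-count question: as far as I can tell, Command::new("init").arg("6") only builds the command; nothing runs until a method such as status() or output() is called. A sketch of what the reboot branch would presumably need (assuming init 6 really is the intended action; the helper name is illustrative):

use std::process::Command;

fn reboot_if_down(ping_succeeded: bool) {
    if !ping_succeeded {
        // .status() actually executes the command and waits for it;
        // without it, the Command value is built and then dropped.
        Command::new("init")
            .arg("6")
            .status()
            .expect("[ ERR ] Failed to run init 6");
    }
}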

Reading a line in a BufReader creates an infinite loop

I'm trying to receive a message (server side) on the network using TcpListener in Rust.
Here's the server code:
// Instantiate TcpListener
let server = TcpListener::bind("0.0.0.0:52001").expect("Could not bind");
match server.accept() { // Waiting for a new connection from a client
    Ok((stream, addr)) => {
        println!("New connection from {}", addr);
        // Wrap the stream in a BufReader
        let mut reader = BufReader::new(&stream);
        let mut message = String::new();
        loop { // Entering the loop when a client is connected
            reader.read_line(&mut message).expect("Cannot read new line");
            println!("message received: {}", message);
            message.clear();
        }
    }
    Err(e) => {
        println!("Fail: {:?}", e)
    }
}
Here's my Kotlin client:
Socket("192.168.134.138", 52001).use { client ->
client.getOutputStream().use { out ->
out.write("test\n".toByteArray())
}
}
while(true) {
Thread.sleep(15_000)
}
The client sends the line test\n, which ends with a line break so the server can read it.
The intended behaviour would be that the server prints message received: test and then waits at the read_line() call for the next line.
It partly works, because I do receive the test, but read_line() does not seem to block or wait for another message, so it creates an infinite loop. In the terminal I get:
New connection from 192.168.134.123:7869
message received: test
message received:
message received:
message received:
message received:
Process finished with exit code 130 (interrupted by signal 2: SIGINT)
And I have to stop the program forcefully.
Any ideas?
To detect the end of the stream, you need to check whether read_line() returned Ok(0). In your case the Kotlin client's use block closes the socket right after writing, so the server reaches end-of-file, and from then on read_line() returns Ok(0) immediately instead of blocking.
From the docs:
If this function returns Ok(0), the stream has reached EOF.
loop { // Entering the loop when a client is connected
    let mut message = String::new();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}
Another option is to use the BufReader::lines() iterator:
for line in reader.lines() {
    let message = line.expect("Cannot read new line");
    println!("message received: {}", message);
}
This approach is a bit inefficient, as it allocates a new String on every iteration. For best performance, allocate a single String and reuse it, as @BlackBeans pointed out in a comment:
let mut message = String::new();
loop { // Entering the loop when a client is connected
    message.clear();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}

Fail to pass raw pointer to ioctl in Rust 1.56.1 using Nix 0.23.1

I'm trying to pass a file descriptor to the ioctl system call.
The goal is to link a loop device to a file and then set an offset so the device can be mounted correctly.
I have the following code snippet:
use std::fs::{File, OpenOptions};
use std::os::unix::io::{AsRawFd, RawFd};
use nix::{ioctl_none, ioctl_write_ptr};
use crate::consts::consts::{MAGIC_NUMBER_SIZE, SIGNATURE_SIZE};

const LOOP_MAGIC_BIT: u8 = 0x4C;
const LOOP_SET_FD: u32 = 0x4C00;
const LOOP_SET_STATUS64: u32 = 0x4C04;
const LOOP_CTL_GET_FREE: u32 = 0x4C82;

ioctl_none!(loopback_read_free_device, LOOP_MAGIC_BIT, LOOP_CTL_GET_FREE);
ioctl_write_ptr!(loopback_set_device_fd, LOOP_MAGIC_BIT, LOOP_SET_FD, RawFd);
ioctl_write_ptr!(loopback_set_device_info, LOOP_MAGIC_BIT, LOOP_SET_STATUS64, LoopbackInfo);

pub struct Loopback {}

pub struct LoopbackInfo {
    pub io_offset: u64
}

impl Loopback {
    pub fn mount_loopback_device(file_path: &str) -> String {
        // Open the loopback control device
        let loopback_control = File::open("/dev/loop-control");
        // Check if it opened correctly
        match loopback_control {
            Ok(control_fd) => {
                unsafe {
                    // Format the device path
                    let result = format!("/dev/loop{}", loopback_read_free_device(control_fd.as_raw_fd()).unwrap());
                    // Open the device and the container
                    let container_file = OpenOptions::new().read(true).write(false).open(file_path);
                    let device_file = OpenOptions::new().read(true).write(true).open(result.clone());
                    // Check if both files opened correctly
                    if container_file.is_err() {
                        panic!("[Error]: Failed to open the application file");
                    }
                    if device_file.is_err() {
                        panic!("[Error]: Failed to open the device file");
                    }
                    // Get the raw file descriptors from the files
                    let raw_device_fd = device_file.unwrap().as_raw_fd();
                    let raw_container_fd = container_file.unwrap().as_raw_fd();
                    // Match file and device
                    match loopback_set_device_fd(raw_device_fd, raw_container_fd as *const RawFd) {
                        Ok(_) => {
                            // Prepare the new offset
                            let loop_info = LoopbackInfo {
                                io_offset: (MAGIC_NUMBER_SIZE + SIGNATURE_SIZE) as u64 // Magic Number + Signature Offset
                            };
                            // Set the device information
                            loopback_set_device_info(raw_device_fd, &loop_info as *const LoopbackInfo).unwrap();
                            // Return the path of the prepared device
                            return result.clone();
                        }
                        Err(code) => panic!("[Error]: Failed to associate the file with the device, error code: {}", code)
                    }
                }
            }
            Err(_) => panic!("[Error]: It is impossible to get control over the loopback device")
        }
    }
}
When I try to execute these lines, I get the following panic:
[Error]: Failed to associate the file with the device, error code: EBADF: Bad file number
It seems to be a bug in the Rust nix library; I will try to report it shortly. I changed ioctl_write_ptr to ioctl_write_ptr_bad and it worked as it should.
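For reference, a sketch of the change that worked. My understanding (an assumption, not verified against the nix sources) is that LOOP_SET_FD is a legacy ioctl whose request number the kernel uses verbatim, so the *_bad macro variant, which skips the _IOW-style encoding, produces the number the kernel expects:

use nix::ioctl_write_ptr_bad;
use std::os::unix::io::RawFd;

const LOOP_SET_FD: u32 = 0x4C00;

// The `_bad` variant passes LOOP_SET_FD through unchanged instead of
// encoding direction/size bits into the request number.
ioctl_write_ptr_bad!(loopback_set_device_fd, LOOP_SET_FD, RawFd);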

How to retrieve information from the tokio-proto connection handshake?

I'm figuring out how to use the tokio-proto crate, particularly the handshake made when a connection is established. I've got the example from the official documentation working:
impl<T: AsyncRead + AsyncWrite + 'static> ClientProto<T> for ClientLineProto {
type Request = String;
type Response = String;
/// `Framed<T, LineCodec>` is the return value of `io.framed(LineCodec)`
type Transport = Framed<T, line::LineCodec>;
type BindTransport = Box<Future<Item = Self::Transport, Error = io::Error>>;
fn bind_transport(&self, io: T) -> Self::BindTransport {
// Construct the line-based transport
let transport = io.framed(line::LineCodec);
// Send the handshake frame to the server.
let handshake = transport.send("You ready?".to_string())
// Wait for a response from the server, if the transport errors out,
// we don't care about the transport handle anymore, just the error
.and_then(|transport| transport.into_future().map_err(|(e, _)| e))
.and_then(|(line, transport)| {
// The server sent back a line, check to see if it is the
// expected handshake line.
match line {
Some(ref msg) if msg == "Bring it!" => {
println!("CLIENT: received server handshake");
Ok(transport)
}
Some(ref msg) if msg == "No! Go away!" => {
// At this point, the server is at capacity. There are a
// few things that we could do. Set a backoff timer and
// try again in a bit. Or we could try a different
// remote server. However, we're just going to error out
// the connection.
println!("CLIENT: server is at capacity");
let err = io::Error::new(io::ErrorKind::Other, "server at capacity");
Err(err)
}
_ => {
println!("CLIENT: server handshake INVALID");
let err = io::Error::new(io::ErrorKind::Other, "invalid handshake");
Err(err)
}
}
});
Box::new(handshake)
}
}
But the official docs only mention a handshake without stateful information. Is there a common way to retrieve and store useful data from the handshake?
For example, if during the handshake (in the first message after the connection is established) the server sends some key that should be used later by the client, how should the ClientProto implementation look into that key? And where should it be stored?
You can add fields to ClientLineProto, so this should work:
pub struct ClientLineProto {
    handshakes: Arc<Mutex<HashMap<String, String>>>
}
And then you can reference it and store data as needed:
let mut handshakes = self.handshakes.lock().unwrap();
handshakes.insert(handshake_key, "Blah blah handshake data".to_string());
This sort of access works in bind_transport() for storing things. If you create the Arc<Mutex<HashMap>> in your main() function, you will also have access to it in the serve() method, which means you can pass it into the Service object instantiation, and the handshakes will then be available during call().
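A minimal sketch of the wiring described above (the build_proto helper and names are illustrative, not part of tokio-proto):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// One shared store: the proto holds a clone of the Arc, and main() keeps
// the original so the same map can also be handed to the Service.
fn build_proto() -> (ClientLineProto, Arc<Mutex<HashMap<String, String>>>) {
    let handshakes = Arc::new(Mutex::new(HashMap::new()));
    let proto = ClientLineProto {
        handshakes: Arc::clone(&handshakes),
    };
    (proto, handshakes)
}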
