Detecting a ConnectionReset in Rust, instead of having the thread panic

So I have a multi-threaded program in Rust which sends GET requests to my website, and I'm wondering how I can detect a ConnectionReset.
What I'm trying to do is, after the request, check whether there was a ConnectionReset, and if there was, wait for a minute so the thread doesn't panic.
The code I'm using right now:
let mut req = reqwest::get(&url).unwrap();
And after that was executed, I want to check whether there was a ConnectionReset, and then println!("Connection Error") instead of having the thread panic.
The error that I want to detect:
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value:
Error { kind: Io(Custom { kind: Other, error: Os { code: 10054, kind: ConnectionReset,
message: "An existing connection was forcibly closed by the remote host." } }),
url: Some("https://tayhay.vip") }', src\main.rs:22:43
I also read something about std::panic::catch_unwind, but I am not sure if that's the right way to go.

Calling .unwrap() means literally: "panic in case of error". If you don't want to panic, you will have to handle the error yourself. You have three solutions here, depending on code you haven't shown us:
Propagate the error up with the ? operator and let the calling function handle it.
Have some default value ready to use (or create on the fly) in case of error:
let mut req = reqwest::get(&url).unwrap_or(default);
or
let mut req = reqwest::get(&url).unwrap_or_else(|_| default);
(this probably doesn't apply in this specific case since I don't know what would make a sensible default here, but it applies in other error handling situations).
Have some specific error handling code:
match reqwest::get(&url) {
    Ok(mut req) => { /* Process the request */ },
    Err(e) => { /* Handle the error */ },
}
For more details, the Rust book has a full chapter on error handling.
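Applied to the original question, the Err arm can inspect the error's kind: reqwest errors wrap an underlying std::io::Error, and a reset surfaces as std::io::ErrorKind::ConnectionReset. A std-only sketch of that check (the io::Error is constructed by hand here to stand in for what reqwest would surface, and should_retry is a hypothetical helper, not reqwest API):

```rust
use std::io;

// Decide how to react to an I/O error: true means the caller should
// wait and retry (the connection was reset), false means something else.
fn should_retry(e: &io::Error) -> bool {
    matches!(e.kind(), io::ErrorKind::ConnectionReset)
}

fn main() {
    // Stand-in for the error reqwest would surface (os error 10054 on Windows).
    let reset = io::Error::new(io::ErrorKind::ConnectionReset, "forcibly closed");
    if should_retry(&reset) {
        println!("Connection Error");
        // In the real program, wait here instead of unwrapping:
        // std::thread::sleep(std::time::Duration::from_secs(60));
    }
}
```

In the real program you would run this check on the io::Error extracted from the reqwest error inside the Err arm of the match shown above.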


The right way to close a request with a large payload (an image) early, without an error

A minimal HTTP handler like
pub async fn new(mut payload: web::Payload) -> Result<impl Responder> {
    return Ok("ok");
}
will raise errors in the log:
[2022-06-03T01:39:58Z DEBUG actix_http::h1::dispatcher] cannot read request payload
[2022-06-03T01:39:58Z DEBUG actix_http::h1::dispatcher] handler dropped payload early; attempt to clean connection
[2022-06-03T01:39:58Z ERROR actix_http::h1::dispatcher] handler did not read whole payload and dispatcher could not drain read buf; return 500 and close connection
[2022-06-03T01:39:58Z ERROR actix_http::h1::dispatcher] stream error: Handler dropped payload before reading EOF
It seems this is caused by not consuming the payload.
Is there any way to fix this problem?
What I really want to do is to protect a handler like this:
pub async fn new(user: User, mut payload: web::Payload) -> Result<impl Responder> {
    /*
    Do something with payload.
    */
}
where User implements the FromRequest trait; in its from_request function it returns a User or the Unauthorized error.
So if an unauthorized user calls the handler, it will return ErrorUnauthorized early.
But this will cause the
stream error: Handler dropped payload before reading EOF.
This sounds similar to:
https://github.com/actix/actix-web/issues/2695
You likely need to drain the payload before returning your response/error. You can do so as:
payload.for_each(|_| ready(())).await;
or
while let Ok(Some(_)) = payload.try_next().await {}
I ran into a similar issue with processing multipart file uploads. Draining the payload as outlined above in option one worked in most cases. However, in some instances, such as when the user hits the 'stop' or 'refresh' button in their browser and the interruption occurs while looping on the stream, reading chunks of the payload would error with 'Incomplete'. Attempting to turn the stream into a future and running for_each() on it would hang indefinitely. Option two worked consistently regardless.
Assuming your issue is related, it should hopefully be resolved in v4.2, although you may still have to do some housekeeping with the payload when processing it like you are.
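The drain loop above is just the generic "pull until exhausted or error" pattern. A synchronous, std-only sketch with a fallible iterator standing in for web::Payload (drain_chunks and the error type are illustrative, not actix API):

```rust
// Drain a fallible source, discarding chunks. Stops on the first Err
// or when the source is exhausted, mirroring the async
// `while let Ok(Some(_)) = payload.try_next().await {}` loop.
fn drain_chunks<I>(source: I) -> usize
where
    I: IntoIterator<Item = Result<Vec<u8>, String>>,
{
    let mut drained = 0;
    for item in source {
        match item {
            Ok(chunk) => drained += chunk.len(),
            // An interrupted upload ends the drain instead of erroring out.
            Err(_) => break,
        }
    }
    drained
}

fn main() {
    let body = vec![Ok(vec![0u8; 4]), Ok(vec![0u8; 2]), Err(String::from("reset"))];
    assert_eq!(drain_chunks(body), 6);
}
```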
As far as the authorization comment, you can do this in a number of ways. One option is in middleware:
Rust Actix - early return from middleware

Why does variable binding affect lifetime inside a loop body?

In "The Rust Programming Language" in chapter 20 you go through an exercise of building a simple multi-threaded web server. In the exercise you use a single std::sync::mpsc channel. The worker threads all access a single Receiver which is contained like: Arc<Mutex<mpsc::Receiver<Message>>>.
If we write the worker thread like:
let thread = thread::spawn(move || loop {
    match receiver.lock().unwrap().recv().unwrap() {
        Message::NewJob(job) => {
            println!("Worker {} got a job; executing.", id);
            job.call_box();
            println!("Worker {} job complete.", id);
        }
        Message::Terminate => {
            println!("Worker {} was told to terminate.", id);
            break;
        }
    };
    println!("hello, loop");
});
Then we do not achieve concurrency: apparently the worker holds on to the mutex lock, I suppose, because no worker is able to pull off another job until the previous one is complete. However, if we simply change it to this (how the book shows the code):
let thread = thread::spawn(move || loop {
    let message = receiver.lock().unwrap().recv().unwrap();
    match message {
        Message::NewJob(job) => {
            println!("Worker {} got a job; executing.", id);
            job.call_box();
            println!("Worker {} job complete.", id);
        }
        Message::Terminate => {
            println!("Worker {} was told to terminate.", id);
            break;
        }
    };
    println!("hello, loop");
});
Then everything works fine. If you fire off 5 requests you'll see each thread gets one immediately. Concurrency!
The question is "why does variable binding affect lifetime" (I'm assuming that's the reason). Or if not, then I'm missing something, and what is that?! The book itself talks about how you cannot implement the worker loop with while let Ok(job) = receiver.lock().unwrap().recv() { because of the scope of the lock, but apparently even inside the loop there be dragons.
Because in Rust, "resource acquisition is initialization".
Specifically receiver.lock() returns a type which acquires the lock when it is initialized and releases the lock when it is dropped.
In your first example, the lifetime of the MutexGuard extends to the end of the match statement, so the lock will be held while job.call_box() is called.
match receiver.lock().unwrap().recv().unwrap() {
    // ...
};
// `MutexGuard` is dropped and lock is released here
In your second example, the lock guard is only kept alive long enough to read a message from your message queue; the lock guard is dropped at the end of the statement and the lock is released before the match is entered.
let message = receiver.lock().unwrap().recv().unwrap();
// `MutexGuard` is dropped and lock is released here
match message {
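The difference can be reproduced without threads or a real Mutex, since it is purely about how long a temporary in the match scrutinee lives. In this std-only sketch, a Drop impl records when a hypothetical Guard (standing in for MutexGuard) is released:

```rust
use std::cell::RefCell;

// Records events so we can observe when the "guard" is dropped.
struct Guard<'a>(&'a RefCell<Vec<&'static str>>);

impl<'a> Guard<'a> {
    // Stand-in for `recv()`: reads a message while the lock is held.
    fn recv(&self) -> i32 {
        self.0.borrow_mut().push("recv");
        1
    }
}

impl<'a> Drop for Guard<'a> {
    fn drop(&mut self) {
        self.0.borrow_mut().push("unlock");
    }
}

// The guard is a temporary inside the scrutinee: it lives until the end
// of the whole match, so "work" happens while the lock is still held.
fn match_on_temporary() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    match Guard(&log).recv() {
        _ => log.borrow_mut().push("work"),
    };
    log.into_inner()
}

// Binding the result first drops the temporary at the end of the `let`
// statement, so the lock is released before the match body runs.
fn bind_then_match() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    let msg = Guard(&log).recv();
    match msg {
        _ => log.borrow_mut().push("work"),
    };
    log.into_inner()
}

fn main() {
    assert_eq!(match_on_temporary(), ["recv", "work", "unlock"]);
    assert_eq!(bind_then_match(), ["recv", "unlock", "work"]);
}
```

The event order shows exactly the two behaviors described in the answer: with the guard in the scrutinee, the work happens before the unlock; with the extra binding, the unlock happens before the work.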

How to handle errors from parallel web requests using Retrofit + RxJava?

I have a situation like this where I make some web requests in parallel. Sometimes I make these calls and all requests see the same error (e.g. no-network):
void main() {
    Observable.just("a", "b", "c")
        .flatMap(s -> makeNetworkRequest())
        .subscribe(
            s -> {
                // TODO
            },
            error -> {
                // handle error
            });
}

Observable<String> makeNetworkRequest() {
    return Observable.error(new NoNetworkException());
}

class NoNetworkException extends Exception {
}
Depending on the timing, if one request emits the NoNetworkException before the others can, Retrofit/RxJava will dispose/interrupt** the others. I'll see one of the following logs (not all three) for each request remaining in progress++:
<-- HTTP FAILED: java.io.IOException: Canceled
<-- HTTP FAILED: java.io.InterruptedIOException
<-- HTTP FAILED: java.io.InterruptedIOException: thread interrupted
I'll be able to handle the NoNetworkException error in the subscriber and everything downstream will get disposed of and all is OK.
However based on timing, if two or more web requests emit NoNetworkException, then the first one will trigger the events above, disposing of everything down stream. The second NoNetworkException will have nowhere to go and I'll get the dreaded UndeliverableException. This is the same as example #1 documented here.
In the above article, the author suggested using an error handler. Obviously retry/retryWhen don't make sense if I expect to hear the same errors again. I don't understand how onErrorResumeNext/onErrorReturn help here, unless I map them to something recoverable to be handled downstream:
Observable.just("a", "b", "c")
    .flatMap(s ->
        makeNetworkRequest()
            .onErrorReturn(error -> {
                // eat actual error and return something else
                return "recoverable error";
            }))
    .subscribe(
        s -> {
            if (s.equals("recoverable error")) {
                // handle error
            } else {
                // TODO
            }
        },
        error -> {
            // handle error
        });
but this seems wonky.
I know another solution is to set a global error handler with RxJavaPlugins.setErrorHandler(). This doesn't seem like a great solution either. I may want to handle NoNetworkException differently in different parts of my app.
So what other options do I have? What do other people do in this case? This must be pretty common.
** I don't fully understand who is interrupting/disposing of whom. Is RxJava disposing of all other requests in flatMap, which in turn causes Retrofit to cancel requests? Or does Retrofit cancel requests, resulting in each request in flatMap emitting one of the above IOExceptions? I guess it doesn't really matter for answering the question; I'm just curious.
++ It's possible that not all a, b, and c requests are in flight depending on thread pool.
Have you tried using flatMap() with delayErrors = true?

Rust Type Inference Error

I'm writing a chat server over TCP as a learning project. I've been tinkering with the ws crate today, but I've come across an issue. This is the code I wrote, modifying their server example.
extern crate ws;
extern crate env_logger;

use ws::listen;

fn main() {
    // Setup logging
    env_logger::init().unwrap();
    // Listen on an address and call the closure for each connection
    if let Err(error) = listen("127.0.0.1:3012", |out| {
        let mut message: String;
        // The handler needs to take ownership of out, so we use move
        move |message| {
            message = message.trim();
            // Handle messages received on this connection
            println!("Server got message '{}'. ", message);
            // Use the out channel to send messages back
            out.send(message)
        }
    }) {
        // Inform the user of failure
        println!("Failed to create WebSocket due to {:?}", error);
    }
}
When I try compiling it I get an error:
error: the type of this value must be known in this context
--> src/main.rs:15:23
|
15 | message = message.trim();
| ^^^^^^^^^^^^^^
Why is this happening? How may I fix this?
move |message| shadows the message variable you've declared outside the closure. So within the closure, message is said to be a ws::Message... except you've done this:
message = message.trim();
The compiler goes "oh no! trim()? That doesn't exist for ws::Message!", and so now it doesn't quite know what to do.
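A minimal, std-only illustration of that shadowing, with String standing in for ws::Message (trimmed and handler are made-up names, not ws API):

```rust
// Free-function equivalent of the closure body: the parameter
// `message` shadows any outer binding with the same name.
fn trimmed(message: String) -> String {
    message.trim().to_string()
}

fn main() {
    // Outer binding, analogous to `let mut message: String;` above.
    let message = String::from("outer");

    // The closure's `message` parameter shadows the outer one; its type
    // comes from the annotation (or inference), not the outer binding.
    let handler = |message: String| trimmed(message);

    assert_eq!(handler(String::from("  hi  ")), "hi");
    // The outer binding is untouched by the closure.
    assert_eq!(message, "outer");
}
```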
Option 1
The first fix involves delegating the trim() call to the client who sends the message.
The fix is to not make any assumptions about what the message is inside this closure. If you keep this:
move |message|
..but remove the trim() call, the compiler happily infers its type as ws::Message and will build:
if let Err(error) = listen("127.0.0.1:3012", |out| {
    // The handler needs to take ownership of out, so we use move
    move |message| {
        // --- REMOVED trim() call ---
        // Handle messages received on this connection
        println!("Server got message '{}'. ", message);
        // Use the out channel to send messages back
        out.send(message)
    }
}) {
    // Inform the user of failure
    println!("Failed to create WebSocket due to {:?}", error);
}
This gives you the option of delegating the trim() call to the client instead.
Option 2
Option 2 involves inspecting the type of message you've received, and making sure you trim it only if it is text:
// The handler needs to take ownership of out, so we use move
move |mut message: ws::Message| {
    // Only do it if the Message is text
    if message.is_text() {
        message = ws::Message::Text(message.as_text().unwrap().trim().into());
    }
    // Handle messages received on this connection
    println!("Server got message '{}'. ", message);
    // Use the out channel to send messages back
    out.send(message)
}
This is perhaps a little more verbose than it needs to be.. but hopefully it shows you what the actual issue is with your original snippet of code.

Why do I get a Bad File Descriptor error when writing to opened File?

Calling write_all on a file returns an error with the description: os error. Debug printing the error outputs: Err(Error { repr: Os(9) })
What does the error mean?
You didn't include any code, so I had to make wild guesses about what you are doing. Here's one piece of code that reproduces your error:
use std::fs;
use std::io::Write;

fn main() {
    let mut f = fs::File::open("/").unwrap();

    // f.write_all(b"hello").unwrap();
    // Error { repr: Os(9) }

    match f.write_all(b"hello") {
        Ok(..) => {}
        Err(e) => println!("{}", e),
    }
    // Bad file descriptor (os error 9)
}
If you use the Display ({}) format instead of Debug ({:?}), you will see an error message that is nicer than just the error code. Note that unwrap will use the Debug formatter, so you have to use match in this case.
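You can also see both formats without touching the filesystem by building the error from its raw OS code (the exact Display text is platform-dependent, so treat the message as illustrative):

```rust
use std::io;

fn main() {
    // EBADF is errno 9 on Linux.
    let e = io::Error::from_raw_os_error(9);
    println!("{:?}", e); // Debug: the raw Os { code: 9, ... } representation
    println!("{}", e);   // Display: e.g. "Bad file descriptor (os error 9)" on Linux
    assert_eq!(e.raw_os_error(), Some(9));
}
```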
You could also look up the error code in the kernel source. You don't indicate if you are running Windows (unlikely), OS X or Linux, so I guessed Linux.
There are lots of SO questions that then explain what the code can mean, but I'm sure you know how to search through those, now that you have a handle on the problem.
