Is it possible to run Axum on a single thread?

I know that Axum is built on top of Tokio, and Tokio has both a multi-threaded scheduler and a current-thread scheduler.
Is it possible to configure the runtime so that it serves requests on a single thread?

Axum just runs inside a Tokio runtime, whether created by the #[tokio::main] macro or set up by hand, so its futures are handled however you configured that runtime.
If you're using the macro, just do this:
#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Axum code here...
}
Here are the docs with more information on the macro: https://docs.rs/tokio/0.3.3/tokio/attr.main.html#current-thread-runtime
And here is the non-sugared way to set up a Tokio runtime: https://docs.rs/tokio/latest/tokio/runtime/struct.Builder.html#method.new_current_thread

Related

Is there a performance difference between futures::executor::block_on and block_in_place

I have calls to async code inside a synchronous method (this method is part of a trait and I can't implement it asynchronously) so I use block_on to wait for the async calls to finish.
The sync method will be called from async code.
So the application is in #[tokio::main] and it calls the synchronous method when some event happens (endpoint hit), and the synchronous method will call some async code and wait on it to finish and return.
It turns out block_on can't be used inside async code. I found that tokio::task::block_in_place creates a kind of synchronous context inside the async context, which allows calling block_on inside it.
So the method now looks like this:
impl SomeTrait for MyStruct {
    fn some_sync_method(&self, handle: tokio::runtime::Handle) -> u32 {
        tokio::task::block_in_place(|| {
            handle.block_on(some_async_function())
        })
    }
}
Is this implementation better, or should I use futures::executor::block_on instead:
impl SomeTrait for MyStruct {
    fn some_sync_method(&self, handle: tokio::runtime::Handle) -> u32 {
        futures::executor::block_on(some_async_function())
    }
}
What is the underlying difference between the two implementations, and in which cases would each of them be more efficient?
Btw, this method gets called a lot. This is part of a web server.
Don't use futures::executor::block_on(). Even before comparing performance, there is something more important to consider: futures::executor::block_on() is simply wrong here, as it blocks the asynchronous runtime.
As explained in block_in_place() docs:
In general, issuing a blocking call or performing a lot of compute in a future without yielding is problematic, as it may prevent the executor from driving other tasks forward. Calling this function informs the executor that the currently executing task is about to block the thread, so the executor is able to hand off any other tasks it has to a new worker thread before that happens. See the CPU-bound tasks and blocking code section for more information.
futures::executor::block_on() is likely to be slightly more performant (I haven't benchmarked it, though) precisely because it doesn't inform the executor. But that is the point: you need to inform the executor. Otherwise, your code can get stuck until your blocking function completes, effectively serializing execution without utilizing the machine's resources.
If this pattern makes up most of your code, you may want to reconsider using an async runtime at all; it may be more efficient to just spawn threads. Or give up on using that library and use only async code.

Not using Async in Rocket 0.5+?

I read Rocket v0.5 now uses the Tokio runtime to support Async. I know Async can offer great scalability when we have lots of (like hundreds or thousands of) concurrent IO-bound requests. But many web/REST server apps simply don't fall into that category and in such cases, I feel like Async would only complicate stuff. Sorry if that sounds like a dumb question, but with Rocket 0.5+ will I still be able to write a traditional non-async code the same way as before? Does Async-support in Rocket 0.5+ mean that we will only get Async behaviour for async fn handlers? If so, will the Tokio runtime still play any role in non-async code?
Sure you can.
Look at the first examples in the web page:
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}
There is no async/await anywhere. The nicest thing about Rocket 0.5 is that you can choose which views are sync and which are async, simply by writing them that way, and you can mix them together as you see fit.
For example, this will just work:
#[get("/sync")]
fn index1() -> &'static str {
    "Hello, sync!"
}

#[get("/async")]
async fn index2() -> &'static str {
    "Hello, async!"
}
The Rocket runtime is all async under the hood, but that doesn't need to be exposed to your view handlers at all. When a non-async handler is run, it will be as if Rocket used spawn_blocking().

Is there a rust feature for async analogous to the recv_timeout function?

I'm trying to call an async function inside non-async context, and I'm having a really hard time with it.
Channels have been far easier to use for me - it's pretty simple and intuitive.
recv means block the thread until you receive something.
try_recv means see if something's there, otherwise error out.
recv_timeout means try for a certain amount of milliseconds, and then error out if nothing's there after the timeout.
I've been looking all over the documentation of std::future::Future, but I don't see any way to do something similar. None of the functions I've tried are simple solutions; they all either take or return awkward types that require even more unwrapping.
The Future trait in the standard library is very rudimentary and provides a stable foundation for others to build on.
Async runtimes (such as tokio, async-std, smol) include combinators that can take a future and turn it into another future. The tokio library has one such combinator called timeout.
Here is an example (playground link), which times out after 1 second while attempting to receive on a oneshot channel.
use std::time::Duration;
use tokio::{runtime::Runtime, sync::oneshot, time::{timeout, error::Elapsed}};

fn main() {
    // Create a oneshot channel; the receiver implements `Future`, so we can use it as an example.
    let (_tx, rx) = oneshot::channel::<()>();

    // Create a new tokio runtime. If you already have an async environment,
    // you probably want to use tokio::spawn instead in order to re-use the existing runtime.
    let rt = Runtime::new().unwrap();

    // block_on is a function on the runtime which makes the current thread poll the future
    // until it has completed. async move { } creates an async block, which implements `Future`.
    let output: Result<_, Elapsed> = rt.block_on(async move {
        // The timeout function returns another future, which outputs a `Result<T, Elapsed>`.
        // If the future times out, the `Elapsed` error is returned.
        timeout(Duration::from_secs(1), rx).await
    });
    println!("{:?}", output);
}

Do you always need an async fn main() if using async in Rust?

I'm researching and playing with Rust's async/.await to write a service in Rust that will pull from some websockets and do something with that data. A colleague of mine (who did this similar "data feed importing" in C#) has told me to handle these feeds asynchronously, since threads would be bad performance-wise.
It's my understanding that, to do any async in Rust, you need a runtime (e.g. Tokio). After inspecting most of the code I've found on the subject, it seems that a prerequisite is to have:
#[tokio::main]
async fn main() {
    // ...
}
which provides the necessary runtime which manages our async code. I came to this conclusion because you cannot use .await in scopes which are not async functions or blocks.
This leads me to my main question: if intending to use async in Rust, do you always need an async fn main() as described above? If so, how do you structure your synchronous code? Can structs have async methods and functions implemented (or should they even)?
All of this stems from my initial approach to writing this service, because the way I envisioned it is to have some sort of struct which would handle multiple websocket feeds and if they need to be done asynchronously, then by this logic, that struct would have to have async logic in it.
No. The #[tokio::main] attribute is just a convenience feature that creates a Tokio runtime and launches the main function inside it.
If you want to initialize a runtime instance explicitly, you can use the Builder. The runtime has a block_on method that takes a future and runs it to completion without itself being async, and a spawn method to run futures on the runtime in the background. This allows you to create a Tokio runtime anywhere in your non-async code.

How can I reliably clean up Rust threads performing blocking IO?

It seems to be a common idiom in Rust to spawn off a thread for blocking IO so you can use non-blocking channels:
use std::sync::mpsc::channel;
use std::thread;
use std::net::TcpListener;

fn main() {
    let (accept_tx, accept_rx) = channel();
    let listener_thread = thread::spawn(move || {
        let listener = TcpListener::bind(":::0").unwrap();
        for client in listener.incoming() {
            if let Err(_) = accept_tx.send(client.unwrap()) {
                break;
            }
        }
    });
}
The problem is, rejoining threads like this depends on the spawned thread "realizing" that the receiving end of the channel has been dropped (i.e., calling send(..) returns Err(_)):
drop(accept_rx);
listener_thread.join(); // blocks until listener thread reaches accept_tx.send(..)
You can make dummy connections for TcpListeners, and shutdown TcpStreams via a clone, but these seem like really hacky ways to clean up such threads, and as it stands, I don't even know of a hack to trigger a thread blocking on a read from stdin to join.
How can I clean up threads like these, or is my architecture just wrong?
One simply cannot safely cancel a thread reliably in Windows or Linux/Unix/POSIX, so it isn't available in the Rust standard library.
Here is an internals discussion about it.
There are a lot of unknowns that come from cancelling threads forcibly; it can get really messy. Beyond that, the combination of threads and blocking I/O will always face this issue: every blocking I/O call needs a timeout to have even a chance of being reliably interruptible. If you can't write async code, you need to either use processes (which have a defined boundary and can be ended forcibly by the OS, but obviously come with heavier weight and data-sharing challenges) or non-blocking I/O, which will land your thread back in an event loop that is interruptible.
mio is available for async code. Tokio is a higher-level crate based on mio which makes writing non-blocking async code even more straightforward.
