Is there something like Promise.race() when using Reqwest in Rust?

Given a collection of Futures, say a Vec<impl Future<..>>, how can I block and run all of the Futures concurrently until the first Future is ready?
The closest feature I can find is the select macro (which is also available in Tokio). Unfortunately, it only works with an explicitly listed set of Futures, rather than a collection of them.
JavaScript has an equivalent of this feature, called Promise.race. Is there a way to do this in Rust?
Or perhaps there's a way to fulfill this use case using another pattern, perhaps with channels?

I figured out a solution using the select_all function from the futures library.
Here is a simple example to demonstrate how it can be used to race a collection of futures:
use futures::future::select_all;
use futures::FutureExt;
use tokio::time::{delay_for, Duration};

async fn get_async_task(task_id: &str, seconds: u64) -> &str {
    println!("starting {}", task_id);
    let duration = Duration::new(seconds, 0);
    // tokio 0.2's `delay_for`; in tokio 1.x this is `tokio::time::sleep`
    delay_for(duration).await;
    println!("{} complete!", task_id);
    task_id
}
#[tokio::main]
async fn main() {
    let futures = vec![
        // `select_all` expects the Futures iterable to implement Unpin, so we
        // use `boxed` here to allocate on the heap:
        // https://users.rust-lang.org/t/the-trait-unpin-is-not-implemented-for-genfuture-error-when-using-join-all/23612/3
        // https://docs.rs/futures/0.3.5/futures/future/trait.FutureExt.html#method.boxed
        get_async_task("task 1", 5).boxed(),
        get_async_task("task 2", 4).boxed(),
        get_async_task("task 3", 1).boxed(),
        get_async_task("task 4", 2).boxed(),
        get_async_task("task 5", 3).boxed(),
    ];
    let (item_resolved, ready_future_index, _remaining_futures) =
        select_all(futures).await;
    assert_eq!("task 3", item_resolved);
    assert_eq!(2, ready_future_index);
}
Here's a link to the code above:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f32b2ed404624c1b0abe284914f8658d
Thanks to @Herohtar for suggesting select_all in the comments above!

Related

How to pass HashMap parameters to a function in Warp Rust

I'm fairly new to Rust programming, and I'm following the Rust book. Recently, however, I started doing some exercises on my own to deepen my understanding.
I ran into an issue with Warp, specifically with POST requests. Basically, when I make a POST request with two parameters (in this case two numbers), I want to return their sum as the response.
use std::collections::HashMap;
use warp::Filter;

async fn add_two_numbers(a: u32, b: u32) -> Result<impl warp::Reply, warp::Rejection> {
    Ok(format!("sum {}", a + b))
}

#[tokio::main]
async fn main() {
    let hello = warp::post()
        .and(warp::path("add_numbers"))
        .and(warp::query::<HashMap<u32, u32>>())
        .and(warp::path::end())
        .and_then(add_two_numbers);

    warp::serve(hello)
        .run(([127, 0, 0, 1], 3000))
        .await
}
However, I'm stuck: I don't know how to get the parameters out of the query and pass them into the function, or how to pass the whole HashMap to the function and extract the data I need there.

How can I use `tokio::spawn` in a third-party crate that does not implement the `Send` trait?

The struct named ResultSet from the Oracle crate does not implement the Send trait. But the definition of tokio::spawn requires that the result of the future it spawns implements the Send trait. Do I have to modify the ResultSet struct to implement Send? Is there a better way?
pub fn query_named(
    &self,
    sql: &str,
    params: &[(&str, &dyn ToSql)],
) -> Result<ResultSet<Row>, Error> {
    let rows = self.conn.query_named(sql, params)?;
    Ok(rows)
}

pub async fn translate_result(
    rm_result: Vec<sqlserver_mod2::MyResult>,
) -> Result<String, Error> {
    let res_get_ryxm = ordb
        .query_named(sql_getRYXM, &[("barcode", &"900421757188")])
        .unwrap();
    otherfun(res_get_ryxm);
}

spawn(async {
    translate_result().await;
});
You can use tokio::task::spawn_local if your async code is executing in the context of a LocalSet:
use std::future::Future;
use std::pin::Pin;
use tokio::task::LocalSet;

// This is explicitly a !Send future
fn f() -> Pin<Box<dyn Future<Output = ()>>> {
    Box::pin(async {
        println!("hey, it worked");
        // can use spawn_local() here
    })
}

// This creates a multi-threaded runtime
#[tokio::main]
async fn main() {
    let local = LocalSet::new();
    local.run_until(f()).await;
}
However, be sure to heed its documentation: there are many restrictions on where it can be used, since it goes against the grain of tokio's blocking and work-stealing model.
If this would be used extensively throughout your code-base, you may opt to use actix-rt instead of using tokio directly:
Tokio-based single-threaded async runtime for the Actix ecosystem.
In most parts of the Actix ecosystem, it has been chosen to use !Send futures. For this reason, a single-threaded runtime is appropriate since it is guaranteed that futures will not be moved between threads. This can result in small performance improvements over cases where atomics would otherwise be needed.
To achieve similar performance to multi-threaded, work-stealing runtimes, applications using actix-rt will create multiple, mostly disconnected, single-threaded runtimes. This approach has good performance characteristics for workloads where the majority of tasks have similar runtime expense.

Rust chunks method with owned values?

I'm trying to perform a parallel operation on several chunks of strings at a time, and I'm running into an issue with the borrow checker:
(For context: identifiers is a Vec<String> from a CSV file, client is reqwest, and target is an Arc<String> that is written once and read many times.)
use futures::{stream, StreamExt};
use std::sync::Arc;

async fn nop(
    person_ids: &[String],
    target: &str,
    url: &str,
) -> String {
    let noop = format!("{} {}", target, url);
    let noop2 = person_ids.iter().for_each(|f| {
        f.as_str();
    });
    "Some text".into()
}
#[tokio::main]
async fn main() {
    let target = Arc::new(String::from("sometext"));
    let url = "http://example.com";
    let identifiers = vec![
        "foo".into(), "bar".into(), "baz".into(), "qux".into(), "quux".into(),
        "quuz".into(), "corge".into(), "grault".into(), "garply".into(),
        "waldo".into(), "fred".into(), "plugh".into(), "xyzzy".into(),
    ];
    let id_sets: Vec<&[String]> = identifiers.chunks(2).collect();
    let responses = stream::iter(id_sets)
        .map(|person_ids| {
            let target = target.clone();
            tokio::spawn(async move {
                let resptext = nop(person_ids, target.as_str(), url).await;
            })
        })
        .buffer_unordered(2);
    responses
        .for_each(|b| async {})
        .await;
}
Playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e41c635e99e422fec8fc8a581c28c35e
Since chunks yields borrowed slices (collected here into a Vec<&[String]>), the compiler complains that identifiers doesn't live long enough: it could potentially go out of scope while the slices are still referenced. Realistically this won't happen because of the await, but the compiler can't see that. Is there a way to tell the compiler that this is safe, or is there another way of getting chunks as a set of owned Strings for each thread?
There was a similarly asked question that used into_owned() as a solution, but when I try that, rustc complains that the slice size is not known at compile time in the request_user function.
EDIT: Some other questions as well:
Is there a more direct way of using target in each thread without needing Arc? From the moment it is created, it never needs to be modified, just read from. If not, is there a way of pulling it out of the Arc that doesn't require the .as_str() method?
How do you handle multiple error types within the tokio::spawn() block? In real-world use, I'll be dealing with quick_xml::Error and reqwest::Error inside it. It works fine without tokio::spawn for concurrency.
Is there a way to tell the compiler that this is safe, or is there another way of getting chunks as a set of owned Strings for each thread?
You can chunk a Vec<T> into a Vec<Vec<T>> without cloning by using the itertools crate:
use itertools::Itertools;

fn main() {
    let items = vec![
        String::from("foo"),
        String::from("bar"),
        String::from("baz"),
    ];
    let chunked_items: Vec<Vec<String>> = items
        .into_iter()
        .chunks(2)
        .into_iter()
        .map(|chunk| chunk.collect())
        .collect();
    for chunk in chunked_items {
        println!("{:?}", chunk);
    }
}
["foo", "bar"]
["baz"]
This is based on the answers here.
Your issue here is that id_sets is a vector of slices borrowed from identifiers; those borrows are not guaranteed to still be alive once the async move blocks run (potentially on another thread).
Your solution to the immediate problem is to convert the Vec<&[String]> to a Vec<Vec<String>> type.
A way of accomplishing that would be:
let id_sets: Vec<Vec<String>> = identifiers
    .chunks(2)
    .map(|x: &[String]| x.to_vec())
    .collect();

Why do I get "panicked at 'not currently running on the Tokio runtime'" when using block_on from the futures crate?

I'm using the example code from Elasticsearch's blog post about their new crate, and I'm unable to get it working as intended. The thread panics with thread 'main' panicked at 'not currently running on the Tokio runtime'.
What is the Tokio runtime, how do I configure it, and why must I?
use futures::executor::block_on;

// (client, IndexParts, Refresh, json!, and Error come from the blog post's
// elasticsearch/serde_json example setup, omitted here)
async fn elastic_search_example() -> Result<(), Box<dyn Error>> {
    let index_response = client
        .index(IndexParts::IndexId("tweets", "1"))
        .body(json!({
            "user": "kimchy",
            "post_date": "2009-11-15T00:00:00Z",
            "message": "Trying out Elasticsearch, so far so good?"
        }))
        .refresh(Refresh::WaitFor)
        .send()
        .await?;
    if !index_response.status_code().is_success() {
        panic!("indexing document failed")
    }
    let index_response = client
        .index(IndexParts::IndexId("tweets", "2"))
        .body(json!({
            "user": "forloop",
            "post_date": "2020-01-08T00:00:00Z",
            "message": "Indexing with the rust client, yeah!"
        }))
        .refresh(Refresh::WaitFor)
        .send()
        .await?;
    if !index_response.status_code().is_success() {
        panic!("indexing document failed")
    }
    Ok(())
}

fn main() {
    block_on(elastic_search_example());
}
It looks like Elasticsearch's crate uses Tokio internally, so you must use it too to match their assumptions.
Searching their documentation for a block_on function turns up Runtime::block_on, so your main should look like this:
use tokio::runtime::Runtime;

fn main() {
    Runtime::new()
        .expect("Failed to create Tokio runtime")
        .block_on(elastic_search_example());
}
Or you can make your main function itself async with the attribute macro, which generates the runtime creation and block_on call for you:
#[tokio::main]
async fn main() {
    elastic_search_example().await;
}
I had the same error when I used tokio::run (from tokio 0.1) with a crate that uses tokio 0.2 internally (in my case it was reqwest).
First I just changed std::future::Future to futures01::future::Future with futures03::compat to make it compile. After running it, I got exactly your error.
Solution:
Adding tokio-compat resolved my problem.
More about tokio compat

Is there a zero-overhead consuming iterator?

I often find myself writing code like:
myvec.iter().map(|x| some_operation(x)).count()
The invocation of count triggers the iterator chain to be consumed, but it also produces a non-unit result, which is undesired.
I am looking for something like
myvec.iter().map(|x| some_operation(x)).consume()
which should be equivalent to
for _ in myvec.iter().map(|x| some_operation(x)) {}
Iterator::for_each does what you want:
struct Int(i32);

impl Int {
    fn print(&self) {
        println!("{}", self.0)
    }
}

fn main() {
    // `iter()` yields `&Int`, matching `Int::print`'s `&self` receiver
    // (on the 2021 edition, array `into_iter()` yields `Int` by value).
    [Int(1), Int(2), Int(3)].iter().for_each(Int::print);
}
No, Rust did not have this at the time of writing. There were several discussions and even an RFC about a for_each() operation on iterators that would execute a closure for each element, consuming the iterator. (Iterator::for_each has since been stabilized, as the other answer shows.)
Consider using for loop instead:
for x in myvec.iter() {
    some_operation(x);
}
In this particular case it does look better than iterator operations.
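As a footnote: now that Iterator::for_each is stable, another common idiom is for_each(drop), which drives the chain to completion purely for its side effects and discards every item. A small self-contained sketch (`consume_sum` is an illustrative name):

```rust
fn consume_sum(values: &[i32]) -> i32 {
    let mut sum = 0;
    // `for_each(drop)` drives the lazy `map` to completion purely for its
    // side effects, discarding each item instead of counting them.
    values.iter().map(|x| sum += x).for_each(drop);
    sum
}

fn main() {
    println!("{}", consume_sum(&[1, 2, 3])); // prints 6
}
```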
