Rust - Using map with mutable reference and async; possibly using Stream?

Is there a way with Rust to perform the following operation without making models mutable? Possibly by using Stream? The core issue with using uuids.iter().map(...) appears to be (a) passing/moving &mut conn into the closure and (b) the fact that DatabaseModel::load is async.
// assume:
// uuids: Vec<uuid::Uuid>
// conn: &mut PgConnection from `sqlx`
let mut models = Vec::<DatabaseModel>::new();
for uuid in &uuids {
    let model = DatabaseModel::load(conn, uuid).await;
    models.extend(model);
}
//.. do immutable stuff with `models`
A more basic toy example without (a) and (b) above may look like the following, which is closer to what I wish for:
let models = uuids.iter().map(|uuid| DatabaseModel::load(uuid));

Yes, what you're looking for is a Stream, a.k.a. "an asynchronous version of Iterator".
You can adapt an existing iterator into a stream by using futures::stream::iter and chain that with .then() to call an async function for each element. Here's an example (playground):
use futures::StreamExt;
let models: Vec<_> = futures::stream::iter(&uuids)
    .then(|uuid| DatabaseModel::load(conn, uuid))
    .collect()
    .await;
However, this won't work if conn is a mutable reference. Streams can't guarantee that their futures run strictly sequentially: through the stream it's possible to create multiple futures at once, all of which would need the &mut Connection to make progress, and that isn't allowed. You would need some form of interior mutability, likely an asynchronous Mutex, to ensure access to conn is exclusive (playground):
use futures::StreamExt;
use tokio::sync::Mutex;
let conn = Mutex::new(conn);
let models: Vec<_> = futures::stream::iter(&uuids)
    .then(|uuid| async {
        let mut conn = conn.lock().await;
        DatabaseModel::load(&mut conn, uuid).await
    })
    .collect()
    .await;
If that is unsavory, then you do need a for-loop with .await to ensure that uses of conn are exclusive. Since that is pretty much set in stone, any other method to create models without mutating it would simply be obtuse.
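For what it's worth, if the only real goal is that the final models binding be immutable, a minimal sketch (reusing the question's uuids, conn and DatabaseModel, and assuming load returns something that Vec::extend accepts, such as an Option) is to confine the mutation to a block:

// The `mut` is local to the block; the outer `models` binding is immutable.
let models = {
    let mut models = Vec::<DatabaseModel>::new();
    for uuid in &uuids {
        models.extend(DatabaseModel::load(conn, uuid).await);
    }
    models
};
// .. do immutable stuff with `models`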

Related

Waiting on multiple futures borrowing mutable self

Each of the methods below needs &mut self to operate. The following code gives the error:
cannot borrow *self as mutable more than once at a time
How can I achieve this correctly?
loop {
    let future1 = self.handle_new_connections(sender_to_connector.clone());
    let future2 = self.handle_incoming_message(&mut receiver_from_peers);
    let future3 = self.handle_outgoing_message();

    tokio::pin!(future1, future2, future3);

    tokio::select! {
        _ = future1 => {},
        _ = future2 => {},
        _ = future3 => {},
    }
}
You are not allowed to have multiple mutable references to an object, and there's a good reason for that.
Imagine you pass an object mutably to two different functions and they each modify it with no coordination between them: you'd end up with a race condition.
To prevent this class of bug, Rust allows only one mutable reference to an object at a time, but you can have many immutable references, which is why you often see people reach for interior-mutability patterns.
In your case you want the data to be protected from modification by two different threads at the same time, so you'd wrap it in a lock such as Mutex or RwLock, and since multiple threads need to own the value, you'd wrap that in an Arc.
Here you can read about interior mutability in more detail.
Alternatively, you could give the function's return type an explicit lifetime tied to &mut self, indicating that the resulting Future is awaited in the same context. Since your code awaits the futures before the next loop iteration, that would do the trick as well.
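As a rough illustration of the Arc-plus-lock idea (not the original code: the Shared struct and its peers field are hypothetical stand-ins for whatever self owns, and it assumes tokio):

use std::sync::Arc;
use tokio::sync::RwLock;

// Hypothetical shared state standing in for the fields of `self`.
struct Shared {
    peers: Vec<String>,
}

#[tokio::main]
async fn main() {
    let shared = Arc::new(RwLock::new(Shared { peers: Vec::new() }));

    // Writer task: clones the Arc and takes the write lock only while mutating.
    let writer = {
        let shared = Arc::clone(&shared);
        tokio::spawn(async move {
            shared.write().await.peers.push("peer-1".to_string());
        })
    };

    // Reader task: any number of these can hold the read lock at the same time.
    let reader = {
        let shared = Arc::clone(&shared);
        tokio::spawn(async move {
            println!("{} peer(s)", shared.read().await.peers.len());
        })
    };

    let _ = tokio::join!(writer, reader);
}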
I encountered the same problem when dealing with async code. Here is what I figured out:
Let's say you have an Engine, that contains both incoming and outgoing:
struct Engine {
    log: Arc<Mutex<Vec<String>>>,
    outgoing: UnboundedSender<String>,
    incoming: UnboundedReceiver<String>,
}
Our goal is to create two functions process_incoming and process_logic and then poll them simultaneously without messing up with the borrow checker in Rust.
What is important here is that:
You cannot pass &mut self to these async functions simultaneously.
incoming and outgoing are each held by at most one function.
The data accessed by both process_incoming and process_logic needs to be wrapped in a lock.
Any attempt to lock Engine directly will lead to a deadlock at runtime.
So that leaves us giving up on methods in favor of associated functions:
impl Engine {
    // ...
    async fn process_logic(outgoing: &mut UnboundedSender<String>, log: Arc<Mutex<Vec<String>>>) {
        loop {
            Delay::new(Duration::from_millis(1000)).await.unwrap();
            let msg: String = "ping".into();
            println!("outgoing: {}", msg);
            log.lock().push(msg.clone());
            outgoing.send(msg).await.unwrap();
        }
    }

    async fn process_incoming(
        incoming: &mut UnboundedReceiver<String>,
        log: Arc<Mutex<Vec<String>>>,
    ) {
        while let Some(msg) = incoming.next().await {
            println!("incoming: {}", msg);
            log.lock().push(msg);
        }
    }
}
And we can then write main as:
fn main() {
    futures::executor::block_on(async {
        let mut engine = Engine::new();

        let a = Engine::process_incoming(&mut engine.incoming, engine.log.clone()).fuse();
        let b = Engine::process_logic(&mut engine.outgoing, engine.log).fuse();
        futures::pin_mut!(a, b);

        select! {
            _ = a => {},
            _ = b => {},
        }
    });
}
I put the whole example here.
It's a workable solution; just be aware that you need to add futures and futures-timer to your dependencies.

`RefCell<std::string::String>` cannot be shared between threads safely?

This is a continuation of How to re-use a value from the outer scope inside a closure in Rust?; I opened a new question for better presentation.
// main.rs
// The value will be modified eventually inside `main`
// and a http request should respond with whatever "current" value it holds.
let mut test_for_closure: Arc<RefCell<String>> = Arc::new(RefCell::from("Foo".to_string()));
// ...

// Handler for HTTP requests
// From https://docs.rs/hyper/0.14.8/hyper/service/fn.service_fn.html
let make_svc = make_service_fn(|_conn| async {
    Ok::<_, Infallible>(service_fn(|req: Request<Body>| async move {
        if req.version() == Version::HTTP_11 {
            let foo: String = *test_for_closure.borrow();
            Ok(Response::new(Body::from(foo.as_str())))
        } else {
            Err("not HTTP/1.1, abort connection")
        }
    }))
});
Unfortunately, I get RefCell<std::string::String> cannot be shared between threads safely:
RefCell only works within a single thread. You will need to use Mutex, which is similar but can be shared between threads. You can read more about Mutex here: https://doc.rust-lang.org/std/sync/struct.Mutex.html.
Here is an example of moving an Arc<Mutex<>> into a closure:
use std::sync::{Arc, Mutex};
fn main() {
    let test: Arc<Mutex<String>> = Arc::new(Mutex::from("Foo".to_string()));
    let test_for_closure = Arc::clone(&test);

    let closure = || async move {
        // lock it so it can't be used by other threads at the same time
        let foo = test_for_closure.lock().unwrap();
        println!("{}", foo);
    };
}
The first error in your error message is that Sync is not implemented for RefCell<String>. This is by design, as stated by Sync's rustdoc:
Types that are not Sync are those that have “interior mutability” in a non-thread-safe form, such as Cell<T> and RefCell<T>. These types allow for mutation of their contents even through an immutable, shared reference. For example the set method on Cell<T> takes &self, so it requires only a shared reference &Cell<T>. The method performs no synchronization, thus Cell<T> cannot be Sync.
Thus it's not safe to share RefCells between threads, because you can cause a data race through a regular, shared reference.
But what if you wrap it in Arc? Well, the rustdoc is quite clear again:
Arc<T> will implement Send and Sync as long as the T implements Send and Sync. Why can’t you put a non-thread-safe type T in an Arc<T> to make it thread-safe? This may be a bit counter-intuitive at first: after all, isn’t the point of Arc<T> thread safety? The key is this: Arc<T> makes it thread safe to have multiple ownership of the same data, but it doesn’t add thread safety to its data. Consider Arc<RefCell<T>>. RefCell<T> isn’t Sync, and if Arc<T> was always Send, Arc<RefCell<T>> would be as well. But then we’d have a problem: RefCell<T> is not thread safe; it keeps track of the borrowing count using non-atomic operations.
In the end, this means that you may need to pair Arc<T> with some sort of std::sync type, usually Mutex<T>.
Arc<T> will not be Sync unless T is Sync, for the same reason. Given that, you should probably use a std or tokio Mutex instead of RefCell.
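A minimal sketch of that substitution (assuming tokio, and leaving out the hyper plumbing from the question):

use std::sync::Arc;
use tokio::sync::Mutex;

#[tokio::main]
async fn main() {
    // Shared, mutable string; each task clones the Arc, not the String.
    let current = Arc::new(Mutex::new(String::from("Foo")));

    // One task updates the value...
    let writer = {
        let current = Arc::clone(&current);
        tokio::spawn(async move {
            *current.lock().await = String::from("Bar");
        })
    };
    writer.await.unwrap();

    // ...and a reader (e.g. a request handler) sees whatever it currently holds.
    println!("{}", *current.lock().await);
}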

Execute concurrent async functions on each element of the hashmap using rayon::par_iter with for_each

Can I do something like this ?
I want to execute concurrent async functions on each element of the hashmap.
async fn b() {
    let hm: HashMap<String, String> = ...;

    hm.par_iter().for_each(async move |(k, v)| {
        // ...operations on k, v
        another_func(&v).await;
    })
}
Error on async move:

async move {    rustc E0658
mismatched types
    expected unit type `()`
    found opaque type `impl std::future::Future`    rustc E0308
mod.rs(61, 43): the found opaque type
rayon is not a good fit for async because it operates on ordinary non-async functions, and blocks in operations like for_each(). But since you're using async, you could avoid Rayon altogether and just spawn a task for each operation:
let tasks: Vec<_> = hm
    .iter()
    .map(|(k, v)| {
        let k = k.clone();
        let v = v.clone();
        tokio::spawn(async move {
            some_func(&k, &v);
            another_func(&v).await;
        })
    })
    .collect();

future::join_all(tasks).await;
Playground
Note that we clone the keys and the values because that allows them to be moved into the spawned task. Without the clone, the borrow checker cannot prove that a reference to data in the hash map passed to a spawned task won't outlive the hash map itself.
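If spawning one task per entry is heavier than you want, an alternative sketch (assuming tokio and the futures crate, with another_func as a hypothetical stand-in) keeps the work concurrent but bounded, and avoids the clones because nothing is moved into a 'static task:

use std::collections::HashMap;
use futures::{stream, StreamExt};

async fn another_func(v: &str) {
    // Hypothetical async work on the value.
    println!("processing {v}");
}

#[tokio::main]
async fn main() {
    let hm: HashMap<String, String> =
        HashMap::from([("a".into(), "1".into()), ("b".into(), "2".into())]);

    // Run at most 8 element operations at a time, borrowing straight from `hm`.
    stream::iter(hm.iter())
        .for_each_concurrent(8, |(k, v)| async move {
            println!("key {k}");
            another_func(v).await;
        })
        .await;
}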

Rust chunks method with owned values?

I'm trying to perform a parallel operation on several chunks of strings at a time, and I'm running into an issue with the borrow checker:
(for context, identifiers is a Vec<String> from a CSV file, client is reqwest and target is an Arc<String> that is write once read many)
use futures::{stream, StreamExt};
use std::sync::Arc;
async fn nop(
    person_ids: &[String],
    target: &str,
    url: &str,
) -> String {
    let noop = format!("{} {}", target, url);
    let noop2 = person_ids.iter().for_each(|f| { f.as_str(); });
    "Some text".into()
}

#[tokio::main]
async fn main() {
    let target = Arc::new(String::from("sometext"));
    let url = "http://example.com";
    let identifiers = vec!["foo".into(), "bar".into(), "baz".into(), "qux".into(), "quux".into(), "quuz".into(), "corge".into(), "grault".into(), "garply".into(), "waldo".into(), "fred".into(), "plugh".into(), "xyzzy".into()];
    let id_sets: Vec<&[String]> = identifiers.chunks(2).collect();

    let responses = stream::iter(id_sets)
        .map(|person_ids| {
            let target = target.clone();
            tokio::spawn(async move {
                let resptext = nop(person_ids, target.as_str(), url).await;
            })
        })
        .buffer_unordered(2);

    responses
        .for_each(|b| async { })
        .await;
}
Playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e41c635e99e422fec8fc8a581c28c35e
Since chunks yields slices (collected here into a Vec<&[String]>), the compiler complains that identifiers doesn't live long enough, because it could go out of scope while the slices are still referenced. Realistically this won't happen because there's an await. Is there a way to tell the compiler that this is safe, or is there another way of getting the chunks as sets of owned Strings for each task?
There was a similar question that used into_owned() as a solution, but when I try that, rustc complains that the slice size is not known at compile time in the request_user function.
EDIT: Some other questions as well:
Is there a more direct way of using target in each thread without needing Arc? From the moment it is created, it never needs to be modified, just read from. If not, is there a way of pulling it out of the Arc that doesn't require the .as_str() method?
How do you handle multiple error types within the tokio::spawn() block? In the real world use, I'm going to receive quick_xml::Error and reqwest::Error within it. It works fine without tokio spawn for concurrency.
Is there a way to tell the compiler that this is safe, or is there another way of getting chunks as a set of owned Strings for each thread?
You can chunk a Vec<T> into a Vec<Vec<T>> without cloning by using the itertools crate:
use itertools::Itertools;
fn main() {
    let items = vec![
        String::from("foo"),
        String::from("bar"),
        String::from("baz"),
    ];

    let chunked_items: Vec<Vec<String>> = items
        .into_iter()
        .chunks(2)
        .into_iter()
        .map(|chunk| chunk.collect())
        .collect();

    for chunk in chunked_items {
        println!("{:?}", chunk);
    }
}
["foo", "bar"]
["baz"]
This is based on the answers here.
Your issue here is that id_sets is a vector of slice references borrowed from identifiers, and the compiler cannot guarantee that the borrowed data is still around once the futures leave the scope of your function (which is what async move inside tokio::spawn has to assume).
The solution to the immediate problem is to convert the Vec<&[String]> into a Vec<Vec<String>>.
A way of accomplishing that would be:
let id_sets: Vec<Vec<String>> = identifiers
    .chunks(2)
    .map(|x: &[String]| x.to_vec())
    .collect();
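Either way, the owned chunks can then be moved into the spawned tasks without lifetime errors. A trimmed sketch of the question's pipeline with that change (nop reduced to a stub, the real HTTP work omitted):

use futures::{stream, StreamExt};
use std::sync::Arc;

// Stub standing in for the real request logic.
async fn nop(person_ids: &[String], target: &str, url: &str) -> String {
    format!("{} {} {:?}", target, url, person_ids)
}

#[tokio::main]
async fn main() {
    let target = Arc::new(String::from("sometext"));
    let url = "http://example.com";
    let identifiers: Vec<String> = vec!["foo".into(), "bar".into(), "baz".into(), "qux".into(), "quux".into()];

    // Owned chunks: each Vec<String> can be moved into its own task.
    let id_sets: Vec<Vec<String>> = identifiers.chunks(2).map(|c| c.to_vec()).collect();

    stream::iter(id_sets)
        .map(|person_ids| {
            let target = Arc::clone(&target);
            tokio::spawn(async move {
                let resptext = nop(&person_ids, target.as_str(), url).await;
                println!("{}", resptext);
            })
        })
        .buffer_unordered(2)
        .for_each(|_| async {})
        .await;
}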

What do I use to share an object with many threads and one writer in Rust?

What is the right approach to share a common object between many threads when the object may sometimes be written to by one owner?
I tried to create one Configuration trait object with several methods to get and set config keys. I'd like to pass this to other threads where configuration items may be read. Bonus points would be if it can be written and read by everyone.
I found a Reddit thread which talks about Rc and RefCell; would that be the right way? I think these would not allow me to borrow the object immutably multiple times and still mutate it.
Rust has a built-in concurrency primitive exactly for this task called RwLock. Together with Arc, it can be used to implement what you want:
use std::sync::{Arc, RwLock};
use std::sync::mpsc;
use std::thread;

const N: usize = 12;

fn main() {
    let shared_data = Arc::new(RwLock::new(Vec::new()));
    let (finished_tx, finished_rx) = mpsc::channel();

    for i in 0..N {
        let shared_data = shared_data.clone();
        let finished_tx = finished_tx.clone();

        if i % 4 == 0 {
            thread::spawn(move || {
                let mut guard = shared_data.write().expect("Unable to lock");
                guard.push(i);
                finished_tx.send(()).expect("Unable to send");
            });
        } else {
            thread::spawn(move || {
                let guard = shared_data.read().expect("Unable to lock");
                println!("From {}: {:?}", i, *guard);
                finished_tx.send(()).expect("Unable to send");
            });
        }
    }

    // wait until everything's done
    for _ in 0..N {
        let _ = finished_rx.recv();
    }

    println!("Done");
}
This example is very silly but it demonstrates what RwLock is and how to use it.
Also note that Rc and RefCell/Cell are not appropriate in a multithreaded environment because they are not synchronized; Rust won't even allow you to use them with thread::spawn(). To share data between threads you must use an Arc, and to share mutable data you must additionally use one of the synchronization primitives like RwLock or Mutex.
