I am developing an algorithm in Rust that I want to multi-thread. The algorithm produces solutions to overlapping subproblems, which is why I am looking for a way to achieve multi-threaded memoisation.
An implementation of (single-threaded) memoisation is presented by Pritchard in this article.
I would like to have this functionality extended such that:
Whenever the underlying function must be invoked, including recursively, the result is evaluated asynchronously on a new thread.
Continuing on from the previous point, suppose we have some memoised function f, and that evaluating f(x) requires recursively invoking f(x1), f(x2), …, f(xn). It should be possible for all of these recursive invocations to be evaluated concurrently on separate threads.
If the memoised function is called on an input whose result is currently being evaluated, the current thread should block on this thread, and somehow obtain the result after it is released. This ensures that we don't end up with multiple threads attempting to evaluate the same result.
There is a means of forcing f(x) to be evaluated and cached (if it isn't already) without blocking the current thread. This allows the programmer to preemptively begin the evaluation of a result on a particular value that they know will be (or is likely to be) needed later.
One way you could do this is by storing a HashMap, where the key is the parameters to f and the value is the receiver of a oneshot message containing the result. Then for any value that you need:
If there is already a receiver in the map, await it.
Otherwise, spawn a future to start calculating the result, and store the receiver in the map.
Here is a very contrived example that took way longer than it should have, but successfully runs (Playground):
use futures::{
    future::{self, BoxFuture},
    prelude::*,
    ready,
};
use std::{
    collections::HashMap,
    pin::Pin,
    sync::Arc,
    task::{Context, Poll},
};
use tokio::sync::{oneshot, Mutex};

#[derive(Clone, Debug, Eq, Hash, PartialEq)]
struct MemoInput(usize);

#[derive(Clone, Debug, Eq, Hash, PartialEq)]
struct MemoReturn(usize);

/// This is necessary in order to make a concrete type for the `HashMap`.
struct OneshotReceiverUnwrap<T>(oneshot::Receiver<T>);

impl<T> Future for OneshotReceiverUnwrap<T> {
    type Output = T;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Don't worry too much about this part
        Poll::Ready(ready!(Pin::new(&mut self.0).poll(cx)).unwrap())
    }
}

type MemoMap = Mutex<HashMap<MemoInput, future::Shared<OneshotReceiverUnwrap<MemoReturn>>>>;

/// Compute (2^n)-1, super inefficiently.
fn compute(map: Arc<MemoMap>, x: MemoInput) -> BoxFuture<'static, MemoReturn> {
    async move {
        // First, get all dependencies.
        let dependencies: Vec<MemoReturn> = future::join_all({
            let map2 = map.clone();
            let mut map_lock = map.lock().await;
            // This is an iterator of futures that resolve to the results of the
            // dependencies.
            (0..x.0).map(move |i| {
                let key = MemoInput(i);
                let key2 = key.clone();
                (*map_lock)
                    .entry(key)
                    .or_insert_with(|| {
                        // If the value is not currently being calculated (ie.
                        // is not in the map), start calculating it
                        let (tx, rx) = oneshot::channel();
                        let map3 = map2.clone();
                        tokio::spawn(async move {
                            // Compute the value, then send it to the receiver
                            // that we put in the map. This will awake all
                            // threads that were awaiting it.
                            tx.send(compute(map3, key2).await).unwrap();
                        });
                        // Return a shared future so that multiple threads at a
                        // time can await it
                        OneshotReceiverUnwrap(rx).shared()
                    })
                    .clone() // Clone one instance of the shared future for us
            })
        })
        .await;

        // At this point, all dependencies have been resolved!
        let result = dependencies.iter().map(|r| r.0).sum::<usize>() + x.0;
        MemoReturn(result)
    }
    .boxed() // Box in order to prevent a recursive type
}

#[tokio::main]
async fn main() {
    let map = Arc::new(MemoMap::default());
    let result = compute(map, MemoInput(10)).await.0;
    println!("{}", result); // 1023
}
Note: this could certainly be better optimized, this is just a POC example.
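The fourth requirement in the question (kicking off evaluation of a value without blocking on the result) isn't shown above, but a hedged sketch along the same lines could look like the following. prefetch is a hypothetical helper that reuses the types from the example; awaiting it only waits for the map lock, not for the computation.

/// Hypothetical helper: start evaluating `x` in the background (unless it is
/// already cached or in flight) without blocking on the result.
async fn prefetch(map: Arc<MemoMap>, x: MemoInput) {
    let map2 = map.clone();
    let mut lock = map.lock().await;
    (*lock).entry(x.clone()).or_insert_with(move || {
        let (tx, rx) = oneshot::channel();
        tokio::spawn(async move {
            // The result is delivered to the shared receiver stored in the
            // map, so any later caller awaiting the same input reuses it.
            tx.send(compute(map2, x).await).unwrap();
        });
        OneshotReceiverUnwrap(rx).shared()
    });
}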
I have a list of functions with variable arguments, and I want to randomly pick one of them at runtime and call it in a loop. I'm looking to enhance the performance of my solution.
I have a function that calculates the arguments based on some randomness, and should then return a function pointer, which I could then call.
pub async fn choose_random_endpoint(
    &self,
    rng: ThreadRng,
    endpoint_type: EndpointType,
) -> impl Future<Output = Result<std::string::String, MyError>> {
    match endpoint_type {
        EndpointType::Type1 => {
            let endpoint_arguments = self.choose_endpoint_arguments(rng);
            let endpoint = endpoint1(&self.arg1, &self.arg2, &endpoint_arguments.arg3);
            endpoint
        }
        EndpointType::Type2 => {
            let endpoint_arguments = self.choose_endpoint_arguments(rng);
            let endpoint = endpoint2(
                &self.arg1,
                &self.arg2,
                &endpoint_arguments.arg3,
                rng.clone(),
            );
            endpoint
        }
        EndpointType::Type3 => {
            let endpoint_arguments = self.choose_endpoint_arguments(rng);
            let endpoint = endpoint3(
                &self.arg1,
                &self.arg2,
                &endpoint_arguments.arg3,
                rng.clone(),
            );
            endpoint
        }
    }
}
The error I obtain is
expected opaque type `impl Future<Output = Result<std::string::String, MyError>>` (opaque type at <src/calls/type1.rs:14:6>)
found opaque type `impl Future<Output = Result<std::string::String, MyError>>` (opaque type at <src/type2.rs:19:6>)
The compiler advises me to await the endpoints, and this solves the issue, but is there a performance overhead to this?
Outer function:
Assume there is a loop calling this function:
pub async fn make_call(arg1: &str, arg2: &str) -> Result<String> {
    let mut rng = rand::thread_rng();
    let random_endpoint_type = choose_random_endpoint_type(&mut rng);
    let random_endpoint = choose_random_endpoint(&rng, random_endpoint_type);
    // call the endpoint
    Ok(response)
}
Now, I want to call make_call every X seconds, but I don't want my main thread to block during the endpoint calls, as those are expensive. I suppose the right way to approach this is to spawn a new thread every X seconds that calls make_call?
Also, performance-wise: having so many clones of the rng seems quite expensive. Is there a more performant way to do this?
The error you get is sort of unrelated to async. It's the same one you get when you try to return two different iterators from a function. Your function as written doesn't even need to be async. I'm going to remove async from it when it's not needed, but if you need async (like for implementing an async-trait) then you can add it back and it'll probably work the same.
I've reduced your code into a simpler example that has the same issue (playground):
use std::future::Future;

async fn a() -> &'static str {
    "a"
}

async fn b() -> &'static str {
    "b"
}

fn a_or_b() -> impl Future<Output = &'static str> {
    if rand::random() {
        a()
    } else {
        b()
    }
}
What you're trying to write
When you want to return a trait, but the specific type that implements that trait isn't known at compile time, you can return a trait object. A plain Box<dyn Future> does not itself implement Future (the inner future would have to be Unpin), so this uses a pinned box (playground).
fn a_or_b() -> Pin<Box<dyn Future<Output = &'static str>>> {
    if rand::random() {
        Box::pin(a())
    } else {
        Box::pin(b())
    }
}
You may need the type to be something like Pin<Box<dyn Future<Output = &'static str> + Send + Sync + 'static>> depending on the context.
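For example, if the chosen future is going to be handed to something like tokio::spawn (which needs Send and 'static), a sketch of the same function with those extra bounds, reusing a and b from above, might look like this:

use std::{future::Future, pin::Pin};

fn a_or_b_boxed() -> Pin<Box<dyn Future<Output = &'static str> + Send + 'static>> {
    if rand::random() {
        Box::pin(a())
    } else {
        Box::pin(b())
    }
}

// e.g.: tokio::spawn(async { println!("{}", a_or_b_boxed().await) });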
What you should write
I think the only reason you'd do the above is if you want to generate the future with some kind of async rng, then do something else, and then run the generated future after that. Otherwise there's no need to have nested futures; just await the inner futures when you call them (playground).
async fn a_or_b() -> &'static str {
    if rand::random() {
        a().await
    } else {
        b().await
    }
}
This is conceptually equivalent to the Pin<Box> method, just without having to allocate a Box. Instead, you have an opaque type that implements Future itself.
Blocking
The blocking behavior of these is only slightly different. Pin<Box> will block on non-async things when you call it, while the async one will block on non-async things where you await it. This is probably mostly the random generation.
The blocking behavior of the endpoint is the same and depends on what happens inside there. It'll block or not block wherever you await either way.
If you want to have multiple make_call calls happening at the same time, you'll need to do that outside the function anyway. Using the tokio runtime, it would look something like this:
use tokio::task;
use futures::future::join_all;
let tasks: Vec<_> = (0..100).map(|_| task::spawn(make_call())).collect();
let results = join_all(tasks).await;
This also lets you do other stuff while the futures are running, between the collect() and the join_all(tasks).await.
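For the "call make_call every X seconds without blocking" part of the question, one hedged sketch (assuming the arguments are available as owned Strings in scope) is to combine tokio::time::interval with tokio::spawn:

use std::time::Duration;

// inside an async context, e.g. main
let mut ticker = tokio::time::interval(Duration::from_secs(5));
loop {
    ticker.tick().await;
    let (a, b) = (arg1.clone(), arg2.clone());
    tokio::spawn(async move {
        // Errors are ignored here; handle them however is appropriate.
        let _ = make_call(&a, &b).await;
    });
}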
If something inside your function blocks, you'd want to spawn it with task::spawn_blocking (and then await that handle) so that the await call in make_call doesn't get blocked.
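As a minimal sketch of that, where expensive_sync_work is a hypothetical blocking function:

async fn make_call_nonblocking() -> String {
    // Run the blocking work on the dedicated blocking thread pool so the
    // async worker threads are not stalled.
    tokio::task::spawn_blocking(|| expensive_sync_work())
        .await
        .expect("blocking task panicked")
}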
RNG
If your runtime is multithreaded, the ThreadRng will be an issue. You could create a type that implements Rng + Send with from_entropy, and pass that into your functions. Or you can call thread_rng or even just rand::random where you need it. This makes a new rng per thread, but will reuse them on later calls since it's a thread-local static. On the other hand, if you don't need as much randomness, you can go with a Rng + Send type from the beginning.
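For instance (assuming rand 0.8), an StdRng created with from_entropy implements Rng + Send and can be moved into a spawned task, unlike ThreadRng:

use rand::{rngs::StdRng, Rng, SeedableRng};

let mut rng = StdRng::from_entropy();
tokio::spawn(async move {
    // `StdRng` is `Send`, so moving it into the task is fine.
    let n: u32 = rng.gen();
    println!("{}", n);
});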
If your runtime isn't multithreaded, you should be able to pass &mut ThreadRng all the way through, assuming the borrow checker is smart enough. You won't be able to pass it into an async function and then spawn it, though, so you'd have to create a new one inside that function.
I'm new to Rust and was trying to generate plenty of JSON data on the fly for a project, but I'm having deadlocks.
I've tried removing the serialization (json_serde) and sending the HashMaps in the channel instead but I still get deadlocks on my computer. If I however comment the send(generator.next()) line and send a string myself, code works flawlessly, thus the deadlock is caused by my DatasetGenerator, but I don't understand why.
Code summary:
Have a DatasetGenerator object that can generate sequences of "events" and serialize them to JSON.
generator.next() works like an "iterator" - It increments an internal atomic counter in the generator and then generates the i-th item in the sequence + serializes the JSON.
Have a generator threadpool generate these JSONs at high throughput (very large payloads each)
Send these JSONs through a channel to other thread (which will send them through network but irrelevant for this question)
Depending on whether I use tx_ref.send(generator_ref.next()) or tx_ref.send(some_new_string) below, my code deadlocks or succeeds:
src/main.rs:
extern crate threads_pool;
use threads_pool::*;

mod generator;

use std::sync::mpsc;
use std::sync::Arc;
use std::thread;

fn main() {
    // N will be an argument, and a very high number. For tests use this:
    const N: i64 = 12; // Increase this if you're not getting the deadlock yet, or run cargo run again until it happens.

    let (tx, rx) = mpsc::channel();
    let tx_producer = tx.clone();
    let producer_thread = thread::spawn(move || {
        let pool = ThreadPool::new(4);
        let generator = Arc::new(generator::data_generator::DatasetGenerator::new(3000));
        for i in 0..N {
            println!("Generating #{}", i);
            let tx_ref = tx_producer.clone();
            let generator_ref = generator.clone();
            pool.execute(move || {
                ////////// v !!!DEADLOCK HERE!!! v //////////
                tx_ref.send(generator_ref.next()).expect("tx failed."); // This locks!
                //tx_ref.send(format!(" {} ", i)).expect("tx failed."); // This works!
                ////////// ^ !!!DEADLOCK HERE!!! ^ //////////
            })
            .unwrap();
        }
        println!("Generator done!");
    });

    println!("-» Consumer consuming!");
    for j in 0..N {
        let s = rx.recv().expect("rx failed");
        println!("-» Consumed #{}: {} ... ", j, &s[..10]);
    }
    println!("Consumer done!!");

    producer_thread.join().unwrap();
    println!("Success. Exit!");
}
This is my DatasetGenerator, which seems to be causing all the trouble (even without serde, sending the HashMaps directly still deadlocks). src/generator/dataset_generator.rs:
use serde_json::Value;
use std::collections::HashMap;
use std::sync::atomic;

pub struct DatasetGenerator {
    num_features: usize,
    pub counter: atomic::AtomicI64,
    feature_names: Vec<String>,
}

type Datapoint = HashMap<String, Value>;
type Out = String;

impl DatasetGenerator {
    pub fn new(num_features: usize) -> DatasetGenerator {
        let mut feature_names = Vec::new();
        for i in 0..num_features {
            feature_names.push(format!("f_{}", i));
        }
        DatasetGenerator {
            num_features,
            counter: atomic::AtomicI64::new(0),
            feature_names,
        }
    }

    /// Generates the next item in the sequence (iterator-like).
    pub fn next(&self) -> Out {
        let value = self.counter.fetch_add(1, atomic::Ordering::SeqCst);
        self.gen(value)
    }

    /// Generates the ith item in the sequence. DEADLOCKS!!! ///////////////////////////
    pub fn gen(&self, ith: i64) -> Out {
        let mut data = Datapoint::with_capacity(self.num_features);
        for f in 0..self.num_features {
            let name = self.feature_names.get(f).unwrap();
            data.insert(name.to_string(), Value::from(ith));
        }
        serde_json::json!(data).to_string() // Tried without serialization and still deadlocks!
    }
}
Commit with deadlock code is here if you want to try out yourself with cargo run: https://github.com/AlbertoEAF/learn-rust/tree/dc5fa867e5a70b605553ef65796fdc9dd42d38a0/rest-injector
Deadlock on Windows with Rust 1.60.0:
Thank you for the help! it's greatly appreciated :)
Update
I've followed the suggestions from @kmdreko's answer below, and apparently the problem is in the generator: not all the items are generated. Even though pool.execute() is called N times, only a random number of closures c < N are executed, even if I place pool.close() before leaving the producer_thread. Why does that happen, and how can it be fixed?
Fix: Turns out this lockup is caused by the threads_pool library (0.2.6). I switched the thread pool to rayon's and it worked smoothly at the first try.
One thing you should change: an mpsc::Receiver will return an error from .recv() when it can tell that no value can possibly arrive anymore, namely once all the associated mpsc::Senders have been dropped, which is a good indicator that all the work is done. Your tx_refs and even tx_producer will be dropped when their respective tasks/threads complete; however, you still have tx in scope, which could theoretically send a value. This is what gives you the apparent deadlock. You should simply remove tx_producer and use tx directly so it is moved into the producer thread and dropped accordingly.
Now, you'll see either all N tasks complete, or you'll get an error indicating that some tasks did not complete. The reason not all tasks are completing is that you're creating the thread pool, spawning all the tasks, and then immediately destroying it. The threads_pool documentation says that the threads will finish their current job when the pool is destroyed, but you want to wait until all jobs have completed. For that, you need to call the .close() method provided by the PoolManager trait before the end of the closure.
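Putting both fixes together, the producer side might look roughly like this. This is only a sketch; the exact threads_pool API (for example whether close needs a mutable pool) may differ slightly.

let (tx, rx) = mpsc::channel();
let producer_thread = thread::spawn(move || {
    // `tx` itself is moved in here, so it is dropped when this closure ends.
    let mut pool = ThreadPool::new(4);
    let generator = Arc::new(generator::data_generator::DatasetGenerator::new(3000));
    for i in 0..N {
        println!("Generating #{}", i);
        let tx_ref = tx.clone();
        let generator_ref = generator.clone();
        pool.execute(move || {
            tx_ref.send(generator_ref.next()).expect("tx failed.");
        })
        .unwrap();
    }
    // Wait for all queued jobs to finish (method from the `PoolManager` trait).
    pool.close();
    println!("Generator done!");
});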
The reason you saw inconsistent behavior, and why returning a string directly happened to work, is that those jobs required less work, so the threads could get away with completing all of them before they saw their signal to exit. Your generator_ref.next() requires much more computation, so it's not surprising they'd only process four-and-a-bit jobs before they see they've been told to exit.
Playground if you want to jump directly into the code.
Problem
I'm trying to implement a function filter_con<T, F>(v: Vec<T>, predicate: F) that filters a Vec concurrently, via async predicates.
That is, instead of doing:
let arr = vec![...];
let arr_filtered = join_all(arr.into_iter().map(|it| async move {
if some_getter(&it).await > some_value {
Some(it)
} else {
None
}
}))
.await
.into_iter()
.filter_map(|it| it)
.collect::<Vec<T>>()
every time I need to filter a Vec, I want to be able to write:
let arr = vec![...];
let arr_filtered = filter_con(arr, |it| async move {
    some_getter(&it).await > some_value
})
.await
Tentative implementation
I've extracted the logic into its own function, but I am running into lifetime issues:
async fn filter_con<T, B, F>(arr: Vec<T>, predicate: F) -> Vec<T>
where
    F: FnMut(&T) -> B,
    B: futures::Future<Output = bool>,
{
    join_all(arr.into_iter().map(|it| async move {
        if predicate(&it).await {
            Some(it)
        } else {
            None
        }
    }))
    .await
    .into_iter()
    .filter_map(|p| p)
    .collect::<Vec<_>>()
}
error[E0507]: cannot move out of a shared reference
I don't understand what I'm moving out of; is it predicate?
For more details, see the playground.
You won't be able to make the predicate an FnOnce, because, if you have 10 items in your Vec, you'll need to call the predicate 10 times, but an FnOnce only guarantees it can be called once, which could lead to something like this:
let vec = vec![1, 2, 3];
let has_drop_impl = String::from("hello");
filter_con(vec, |&i| async {
    drop(has_drop_impl); // consumed on the first call, so the closure is `FnOnce`
    i < 5
});
So F must be either an FnMut or an Fn. The standard library Iterator::filter takes an FnMut, though this can be a source of confusion (it is the captured variables of the closure that need a mutable reference, not the elements of the iterator).
Because the predicate is an FnMut, any caller needs to be able to get an &mut F. For Iterator::filter, this can be used to do something like this:
let vec = vec![1, 2, 3];
let mut count = 0;
vec.into_iter().filter(|&x| {
    count += 1; // this line makes the closure an `FnMut`
    x < 2
})
However, by sending the iterator to join_all, you are essentially allowing your async runtime to schedule these calls however it wants, potentially at the same time, which would create aliased &mut F references, and that is always undefined behaviour. This GitHub issue shows a slightly more cut-down version of the same problem: https://github.com/rust-lang/rust/issues/69446.
I'm still not 100% on the details, but it seems the compiler is being conservative here and doesn't even let you create the closure in the first place to prevent soundness issues.
I'd recommend making your function only accept Fns. This way, your runtime is free to call the function however it wants. This does mean that your closure cannot have mutable state, but that is unlikely to be a problem in a tokio application. For the counting example, the "correct" solution is to use an AtomicUsize (or equivalent), which allows mutation via a shared reference (see the usage sketch after the code below). If you're referencing mutable state in your filter call, it should be thread safe, and thread-safe data structures generally allow mutation via shared reference.
Given that restriction, the following gives the answer you expect:
async fn filter_con<T, B, F>(arr: Vec<T>, predicate: F) -> Vec<T>
where
    F: Fn(&T) -> B,
    B: Future<Output = bool>,
{
    join_all(arr.into_iter().map(|it| async {
        if predicate(&it).await {
            Some(it)
        } else {
            None
        }
    }))
    .await
    .into_iter()
    .filter_map(|p| p)
    .collect::<Vec<_>>()
}
Playground
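As a usage sketch of the counting example against this Fn-based version (inside an async context), mutating a counter through a shared reference so the closure stays an Fn:

use std::sync::atomic::{AtomicUsize, Ordering};

let count = AtomicUsize::new(0);
let filtered = filter_con(vec![1_u64, 2, 3], |&x| {
    // Mutation through a shared reference keeps the closure an `Fn`.
    count.fetch_add(1, Ordering::Relaxed);
    async move { x < 2 }
})
.await;
assert_eq!(filtered, vec![1]);
println!("predicate ran {} times", count.load(Ordering::Relaxed));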
I have an async method that should execute some futures in parallel, and only return after all futures finished. However, it is passed some data by reference that does not live as long as 'static (it will be dropped at some point in the main method). Conceptually, it's similar to this (Playground):
async fn do_sth(with: &u64) {
    delay_for(Duration::new(*with, 0)).await;
    println!("{}", with);
}

async fn parallel_stuff(array: &[u64]) {
    let mut tasks: Vec<JoinHandle<()>> = Vec::new();
    for i in array {
        let task = spawn(do_sth(i));
        tasks.push(task);
    }
    for task in tasks {
        task.await;
    }
}

#[tokio::main]
async fn main() {
    parallel_stuff(&[3, 1, 4, 2]);
}
Now, tokio wants futures that are passed to spawn to be valid for the 'static lifetime, because I could drop the handle without the future stopping. That means that my example above produces this error message:
error[E0759]: `array` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
  --> src/main.rs:12:25
   |
12 | async fn parallel_stuff(array: &[u64]) {
   |                         ^^^^^  ------ this data with an anonymous lifetime `'_`...
   |                         |
   |                         ...is captured here...
...
15 |     let task = spawn(do_sth(i));
   |                ----- ...and is required to live as long as `'static` here
So my question is: How do I spawn futures that are only valid for the current context that I can then wait until all of them completed?
It is not possible to spawn a non-'static future from async Rust. This is because any async function might be cancelled at any time, so there is no way to guarantee that the caller really outlives the spawned tasks.
It is true that there are various crates that allow scoped spawns of async tasks, but these crates cannot be used from async code. What they do allow is spawning scoped async tasks from non-async code. This doesn't run into the problem above, because the non-async code that spawned them cannot be cancelled at any time, as it is not async.
Generally there are two approaches to this:
Spawn a 'static task by using Arc rather than ordinary references.
Use the concurrency primitives from the futures crate instead of spawning.
Generally to spawn a static task and use Arc, you must have ownership of the values in question. This means that since your function took the argument by reference, you cannot use this technique without cloning the data.
async fn do_sth(with: Arc<[u64]>, idx: usize) {
    delay_for(Duration::new(with[idx], 0)).await;
    println!("{}", with[idx]);
}

async fn parallel_stuff(array: &[u64]) {
    // Make a clone of the data so we can share it across tasks.
    let shared: Arc<[u64]> = Arc::from(array);
    let mut tasks: Vec<JoinHandle<()>> = Vec::new();
    for i in 0..array.len() {
        // Cloning an Arc does not clone the data.
        let shared_clone = shared.clone();
        let task = spawn(do_sth(shared_clone, i));
        tasks.push(task);
    }
    for task in tasks {
        task.await;
    }
}
Note that if you have a mutable reference to the data, and the data is Sized, i.e. not a slice, it is possible to temporarily take ownership of it.
async fn do_sth(with: Arc<Vec<u64>>, idx: usize) {
    delay_for(Duration::new(with[idx], 0)).await;
    println!("{}", with[idx]);
}

async fn parallel_stuff(array: &mut Vec<u64>) {
    // Swap the array with an empty one to temporarily take ownership.
    let vec = std::mem::take(array);
    let shared = Arc::new(vec);
    let mut tasks: Vec<JoinHandle<()>> = Vec::new();
    // Iterate over `shared`, since `array` now holds the empty vector.
    for i in 0..shared.len() {
        // Cloning an Arc does not clone the data.
        let shared_clone = shared.clone();
        let task = spawn(do_sth(shared_clone, i));
        tasks.push(task);
    }
    for task in tasks {
        task.await;
    }
    // Put back the vector where we took it from.
    // This works because there is only one Arc left.
    *array = Arc::try_unwrap(shared).unwrap();
}
Another option is to use the concurrency primitives from the futures crate. These have the advantage of working with non-'static data, but the disadvantage that the tasks will not be able to run on multiple threads at the same time.
For many workflows this is perfectly fine, as async code should spend most of its time waiting for IO anyway.
One approach is to use FuturesUnordered. This is a special collection that can store many different futures, and it has a next function that runs all of them concurrently and returns once the first of them finishes. (The next function is only available when StreamExt is imported.)
You can use it like this:
use futures::stream::{FuturesUnordered, StreamExt};

async fn do_sth(with: &u64) {
    delay_for(Duration::new(*with, 0)).await;
    println!("{}", with);
}

async fn parallel_stuff(array: &[u64]) {
    let mut tasks = FuturesUnordered::new();
    for i in array {
        let task = do_sth(i);
        tasks.push(task);
    }
    // This loop runs everything concurrently, and waits until they have
    // all finished.
    while let Some(()) = tasks.next().await { }
}
Note: The FuturesUnordered must be defined after the shared value. Otherwise you will get a borrow error that is caused by them being dropped in the wrong order.
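As a tiny sketch of that ordering requirement when the shared data is a local rather than a parameter:

async fn example() {
    let data: Vec<u64> = vec![3, 1, 4, 2]; // declared before `tasks`
    let mut tasks = FuturesUnordered::new();
    for i in &data {
        tasks.push(do_sth(i));
    }
    // `tasks` is dropped before `data`, so the borrows inside it stay valid.
    while let Some(()) = tasks.next().await {}
}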
Another approach is to use a Stream. With streams, you can use buffer_unordered. This is a utility that uses FuturesUnordered internally.
use futures::stream::StreamExt;

async fn do_sth(with: &u64) {
    delay_for(Duration::new(*with, 0)).await;
    println!("{}", with);
}

async fn parallel_stuff(array: &[u64]) {
    // Create a stream going through the array.
    futures::stream::iter(array)
        // For each item in the stream, create a future.
        .map(|i| do_sth(i))
        // Run at most 10 of the futures concurrently.
        .buffer_unordered(10)
        // Since Streams are lazy, we must use for_each or collect to run them.
        // Here we use for_each and do nothing with the return value from do_sth.
        .for_each(|()| async {})
        .await;
}
Note that in both cases, importing StreamExt is important as it provides various methods that are not available on streams without importing the extension trait.
In case of code that uses threads for parallelism, it is possible to avoid copying by extending a lifetime with transmute. An example:
fn main() {
    let now = std::time::Instant::now();
    let string = format!("{now:?}");
    println!(
        "{now:?} has length {}",
        parallel_len(&[&string, &string]) / 2
    );
}

fn parallel_len(input: &[&str]) -> usize {
    // SAFETY: this variable needs to be static, because it is passed into a thread,
    // but the thread does not live longer than this function, because we wait for
    // it to finish by calling `join` on it.
    let input: &[&'static str] = unsafe { std::mem::transmute(input) };
    let mut threads = vec![];
    for txt in input {
        threads.push(std::thread::spawn(|| txt.len()));
    }
    threads.into_iter().map(|t| t.join().unwrap()).sum()
}
It seems reasonable that this should also work for asynchronous code, but I do not know enough about that to say for sure.
I have code lined up as below:
let done = client_a
    .get_future(data)
    .then(move |result| {
        // further processing on result
        // spawn a future here
    });
tokio::run(done);
Now I have another future whose result I want to process along with the processing of result above.
However, that future is completely independent of client_a, which implies:
both could have different error types.
failure of one should not stop the other.
let done = client_a
    .get_future(data)
    .then(move |result| {
        // how to fit in
        // client_b.get_future
        // here
        // further processing on both results
        // spawn third future here
    });
tokio::run(done);
If both error and item types are heterogenous and you know how many futures you will chain, the simplest way to do so is to chain into an infallible Future (because that's what your remaining future really is) whose Item type is a tuple of all the intermediate results.
This can be implemented with straightforward chaining:
let future1_casted = future1.then(future::ok::<Result<_, _>, ()>);
let future2_casted = future2.then(future::ok::<Result<_, _>, ()>);
let chain = future1_casted
    .and_then(|result1| future2_casted.map(|result2| (result1, result2)));
Playground link
The final future's item type is a tuple containing the results of all the futures.
If you do not know how many futures you are chaining, you will need to strengthen your requirements and explicitly know ahead of time the possible return types of your futures. Since it is not possible without macros to generate an arbitrary-sized tuple, you're going to need to store the intermediate results into a structure requiring homogeneous types.
To solve this problem, you need to define wrapper types that can hold any of your types, for example an enum for the errors:
use std::{char, num};

#[derive(PartialEq, Debug)]
pub enum MyError {
    Utf16Error(char::DecodeUtf16Error),
    ParseError(num::ParseIntError),
}

impl From<char::DecodeUtf16Error> for MyError {
    fn from(e: char::DecodeUtf16Error) -> Self {
        MyError::Utf16Error(e)
    }
}

impl From<num::ParseIntError> for MyError {
    fn from(e: num::ParseIntError) -> Self {
        MyError::ParseError(e)
    }
}
From there, combining futures follows the same route as before: turn a fallible future into an infallible Result<_, _> return, and combine them into a structure like a Vec with future::loop_fn().
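As a rough sketch of that last step with futures 0.1, assuming two hypothetical futures f1 and f2 whose item types are both u32 and whose error types convert into MyError. It uses join_all rather than loop_fn, but the idea is the same: turn each future into an infallible one yielding a Result, then collect them into a Vec.

use futures::future::{self, Future};

// Box the futures so they share one concrete type and can live in a Vec.
type Boxed = Box<dyn Future<Item = Result<u32, MyError>, Error = ()>>;

let infallible: Vec<Boxed> = vec![
    Box::new(f1.map_err(MyError::from).then(future::ok::<_, ()>)),
    Box::new(f2.map_err(MyError::from).then(future::ok::<_, ()>)),
];

// Resolves to a Vec<Result<u32, MyError>>: every future runs, failures included.
let all_results = future::join_all(infallible);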