Rust: Safe multithreading with recursion - Linux

I'm new to Rust.
For learning purposes, I'm writing a simple program to search for files in Linux, and it uses a recursive function:
fn ffinder(base_dir: String, prmtr: &'static str, e: bool, h: bool) -> std::io::Result<()> {
    let mut handle_vec = vec![];
    let pth = std::fs::read_dir(&base_dir)?;
    for p in pth {
        let p2 = p?.path().clone();
        if p2.is_dir() {
            if !h { // search doesn't include hidden directories
                let sstring: String = get_fname(p2.display().to_string());
                let slice: String = sstring[..1].to_string();
                if slice != ".".to_string() {
                    let handle = thread::spawn(move || {
                        ffinder(p2.display().to_string(), prmtr, e, h);
                    });
                    handle_vec.push(handle);
                }
            } else { // search includes hidden directories
                let handle2 = thread::spawn(move || {
                    ffinder(p2.display().to_string(), prmtr, e, h);
                });
                handle_vec.push(handle2);
            }
        } else {
            let handle3 = thread::spawn(move || {
                if compare(rmv_underline(get_fname(p2.display().to_string())), rmv_underline(prmtr.to_string()), e) {
                    println!("File found at: {}", p2.display().to_string().blue());
                }
            });
            handle_vec.push(handle3);
        }
    }
    for h in handle_vec {
        h.join().unwrap();
    }
    Ok(())
}
I've tried to use multithreading (thread::spawn); however, it can create too many threads, blowing past the OS limit and breaking the program's execution.
Is there a way to combine multithreading with recursion while using a safe, limited (fixed) number of threads?

As one of the commenters mentioned, this is an absolutely perfect case for using Rayon. The blog post mentioned doesn't show how Rayon might be used in recursion, only making an allusion to crossbeam's scoped threads with a broken link. However, Rayon provides its own scoped threads implementation that solves your problem as well, in that it only uses as many threads as you have cores available, avoiding the error you ran into.
Here's the documentation for it:
https://docs.rs/rayon/1.0.1/rayon/fn.scope.html
Here's an example from some code I recently wrote. Basically, it recursively scans a folder, and each time it nests into a folder it creates a new job to scan that folder while the current thread continues. In my own tests it vastly outperforms a single-threaded approach.
use std::fs::{self, DirEntry};
use std::path::{Path, PathBuf};
use std::sync::mpsc::{self, Sender};
use rayon::Scope;

let source = PathBuf::from("/foo/bar/");
let (tx, rx) = mpsc::channel();
rayon::scope(|s| scan(&source, tx, s));

fn scan<'a, U: AsRef<Path>>(
    src: &U,
    tx: Sender<(Result<DirEntry, std::io::Error>, u64)>,
    scope: &Scope<'a>,
) {
    let dir = fs::read_dir(src).unwrap();
    dir.into_iter().for_each(|entry| {
        let info = entry.as_ref().unwrap();
        let path = info.path();
        if path.is_dir() {
            let tx = tx.clone();
            scope.spawn(move |s| scan(&path, tx, s)) // Recursive call here
        } else {
            // dbg!("{}", path.as_os_str().to_string_lossy());
            let size = info.metadata().unwrap().len();
            tx.send((entry, size)).unwrap();
        }
    });
}
I'm not an expert on Rayon, but I'm fairly certain the threading strategy works like this:
Rayon creates a pool of threads to match the number of logical cores you have available in your environment. The first call to the scoped function creates a job that the first available thread "steals" from the queue of jobs available. Each time we make another recursive call, it doesn't necessarily execute immediately, but it creates a new job that an idle thread can then "steal" from the queue. If all of the threads are busy, the job queue just fills up each time we make another recursive call, and each time a thread finishes its current job it steals another job from the queue.
The full code can be found here: https://github.com/1Dragoon/fcp
(Note that the repo is a work in progress and the code there is often broken, so it probably won't work at the time you're reading this.)
I'll leave it as an exercise to the reader whether this is the ideal approach; I'm more of a sysadmin than an actual developer, so I can't say for sure. From Rayon's documentation linked earlier:
scope() is a more flexible building block compared to join(), since a loop can be used to spawn any number of tasks without recursing
The wording there is a bit confusing, and I'm not sure what they mean by "without recursing". join seems to expect that you already know your tasks ahead of time and executes them in parallel as threads become available, whereas scope seems more aimed at creating jobs only as you discover you need them, rather than having to know everything you need to do in advance. Either that or I'm understanding their meaning backwards.
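To make that distinction concrete, here is a minimal sketch (my own illustration, not code from the repository above) of the recursive divide-and-conquer shape that join is built for: exactly two child tasks per call. scope, as in the directory scanner above, is the better fit when each level of recursion produces an arbitrary, data-dependent number of tasks.

// Each call splits the slice in two and asks rayon's pool to compute both
// halves, potentially in parallel; below a cutoff it just sums sequentially.
fn parallel_sum(data: &[u64]) -> u64 {
    if data.len() <= 1024 {
        return data.iter().sum();
    }
    let (left, right) = data.split_at(data.len() / 2);
    let (l, r) = rayon::join(|| parallel_sum(left), || parallel_sum(right));
    l + r
}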

Related

Is there any way to keep threads spawned in Rust - i.e., to "memoize" a thread object?

On my runtime, I have the following Rust code:
pub fn reduce(heap: &Heap, prog: &Program, tids: &[usize], root: u64, debug: bool) -> Ptr {
    // Halting flag
    let stop = &AtomicBool::new(false);
    let barr = &Barrier::new(tids.len());
    let locs = &tids.iter().map(|_| AtomicU64::new(u64::MAX)).collect::<Vec<AtomicU64>>();
    // Spawn a thread for each worker
    std::thread::scope(|s| {
        for tid in tids {
            s.spawn(move || {
                reducer(heap, prog, tids, stop, barr, locs, root, *tid, debug);
            });
        }
    });
    // Return whnf term ptr
    return load_ptr(heap, root);
}
This will spawn many threads in order to perform a parallel computation. The problem is, the reduce function is called thousands of times, and the overhead of spawning threads can be considerable. When I implemented the same thing in C, I just kept the threads open and sent a message to activate them. In Rust, with the std::thread::scope idiom, I'm not sure how to do so. Is it possible to keep the threads spawned after the first call to reduce, by just modifying that one function? That is, without changing anything else in my code?
Threads spawned using the std::thread::scope API can't outlive the calling function. For long-running threads you'll need to spawn them using std::thread::spawn.
Once you've made that change, rustc will be very upset with you due to lifetime errors, because you are sending non-static references into the spawned threads. Fixing those errors will require some changes. That path is long and full of learning.
If you want something that works really well and is simple, consider instead using the excellent rayon crate. Rayon maintains its own global thread pool so you don't have to.
Using rayon will look something like this:
tids.par_iter()
    .for_each(|tid| {
        reducer(heap, prog, tids, stop, barr, locs, root, *tid, debug);
    });
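For completeness, here's a minimal sketch of the whole function rewritten around rayon's global pool, assuming the same Heap, Program, Ptr, reducer, and load_ptr definitions from the question. One caveat worth flagging: the Barrier assumes all tids are running at the same time, so rayon's pool (which defaults to one thread per logical core, adjustable via rayon::ThreadPoolBuilder) must have at least tids.len() threads, otherwise the barrier can deadlock.

use rayon::prelude::*;
use std::sync::Barrier;
use std::sync::atomic::{AtomicBool, AtomicU64};

pub fn reduce(heap: &Heap, prog: &Program, tids: &[usize], root: u64, debug: bool) -> Ptr {
    let stop = AtomicBool::new(false);
    let barr = Barrier::new(tids.len());
    let locs = tids.iter().map(|_| AtomicU64::new(u64::MAX)).collect::<Vec<AtomicU64>>();
    // One job per tid; the jobs run on rayon's long-lived worker threads,
    // so nothing new is spawned on each call to reduce().
    tids.par_iter().for_each(|tid| {
        reducer(heap, prog, tids, &stop, &barr, &locs, root, *tid, debug);
    });
    load_ptr(heap, root)
}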

How to leverage concurrency for processing a large array of data in Rust?

I am a JavaScript developer and I am building a desktop application using Tauri. I am used to single-threaded languages and am trying to get comfortable with the concept of concurrency.
My problem can be summarized as follows. I receive a JSON array from the backend of length 500. I loop through this array and perform some asynchronous operations like making a network call. In the end, I aggregate, structure, and return the data. This entire process takes around 25-35 seconds on my machine.
I wanted to leverage concurrency to reduce the time required for this operation. One possible solution I thought of was to create n threads, let's say 8, and process the data in parallel.
async fn main() {
    // creating variables which will hold final structured data
    let app_data;
    let app_to_bundle_map;
    // create 8 threads
    let thread_1 = thread::spawn(|| {});
    let thread_2 = thread::spawn(|| {});
    // .. so on
    for i in 1..500 {
        let thread_assigned = i % 8;
        match thread_assigned {
            1 => {
                // process_data() on thread 1 and insert into app_data & app_to_bundle_map.
                // But how do I assign process_data() to the thread's closure?
                // How do I make sure thread 1 is available for use?
            }
            2 => {
                // process_data() on thread 2 and insert into app_data & app_to_bundle_map
            }
            _ => {
                // process_data() on thread _ and insert into app_data & app_to_bundle_map
            }
        }
    }
}

fn process_data(item: String, data: &Data) {
    // perform some heavy operations like
    make_a_network_call(item.url);
    // perform more operations and modify the function argument
    more_processing();
}

async fn make_a_network_call(url: String) -> String {
    let client = reqwest::Client::builder().build().unwrap();
    let mut _res: Result<Response, Error> = client.get(url).send().await;
    match _res {
        Ok(_res) => {
            // return response
        }
        Err(_res) => {
            format!(r#"{{"error": "{placeholder}"}}"#, placeholder = _res.to_string())
        }
    }
}
Another option I thought of was dividing my data of size 500 into 8 parts and then processing them in parallel. Is this a better approach? Or are both approaches wrong? If so, what do you suggest is the correct way to solve such problems in Rust? Overall, my goal is to reduce the time from 25-35 seconds to less than 10 seconds. Looking forward to everybody's insights. Thank you in advance.
Concurrency is hard [citation needed]. What you are trying to do here is handle it manually. That can of course work, but it will be a pain, especially if you are a beginner. Luckily, there are fantastic libraries out there that handle the concurrency part for you; you should check whether they already provide what you need. From your description I am not quite sure whether you are CPU-bound or IO-bound.
If you are CPU-bound, you should look at the rayon crate. It lets you easily iterate in parallel over an iterator.
If you are IO-bound, you should look at async Rust. There are many libraries that do many things, but I would recommend tokio to begin with. It is production-ready and puts a great emphasis on networking. You would, however, need to learn a bit about async Rust, as it requires a different mental model than normal synchronous code.
And regardless of which one you choose, you should familiarize yourself with channels. They are a great and easy tool for passing data around, including from one thread to another.
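Since most of the 25-35 seconds here is network calls, the IO-bound route likely applies. A minimal sketch with tokio plus the futures crate's stream combinators could look like the following; it assumes (hypothetically) that each item boils down to a URL to fetch, and it keeps at most 8 requests in flight, playing the role of the 8 threads in the question.

use futures::stream::{self, StreamExt};

// Hypothetical helper: fetch every URL with at most 8 requests in flight.
async fn process_all(urls: Vec<String>) -> Vec<String> {
    let client = reqwest::Client::new(); // reuse one client for every request
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                match client.get(url.as_str()).send().await {
                    Ok(res) => res.text().await.unwrap_or_default(),
                    Err(e) => format!(r#"{{"error": "{}"}}"#, e),
                }
            }
        })
        .buffer_unordered(8) // concurrency limit: at most 8 requests at a time
        .collect()
        .await
}

Raising or lowering the limit is just a matter of changing the buffer_unordered argument; with network-bound work, the total time should approach that of the slowest batch rather than the sum of all 500 sequential calls.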

When should you use tokio::join!() over tokio::spawn()?

Let's say I want to download two web pages concurrently with Tokio...
Either I could implement this with tokio::spawn():
async fn v1() {
    let t1 = tokio::spawn(reqwest::get("https://example.com"));
    let t2 = tokio::spawn(reqwest::get("https://example.org"));
    let (r1, r2) = (t1.await.unwrap(), t2.await.unwrap());
    println!("example.com = {}", r1.unwrap().status());
    println!("example.org = {}", r2.unwrap().status());
}
Or I could implement this with tokio::join!():
async fn v2() {
    let t1 = reqwest::get("https://example.com");
    let t2 = reqwest::get("https://example.org");
    let (r1, r2) = tokio::join!(t1, t2);
    println!("example.com = {}", r1.unwrap().status());
    println!("example.org = {}", r2.unwrap().status());
}
In both cases, the two requests are happening concurrently. However, in the second case, the two requests are running in the same task and therefore on the same thread.
So, my questions are:
Is there an advantage to tokio::join!() over tokio::spawn()?
If so, in which scenarios? (it doesn't have to do anything with downloading web pages)
I'm guessing there's a very small overhead to spawning a new task, but is that it?
The difference will depend on how you have configured the runtime. tokio::join! will run the futures concurrently on the same task, while tokio::spawn creates a new task for each one.
In a single-threaded runtime, these are effectively the same. In a multi-threaded runtime, using tokio::spawn twice like that may use two separate threads.
From the docs for tokio::join!:
By running all async expressions on the current task, the expressions are able to run concurrently but not in parallel. This means all expressions are run on the same thread and if one branch blocks the thread, all other expressions will be unable to continue. If parallelism is required, spawn each async expression using tokio::spawn and pass the join handle to join!.
For IO-bound tasks, like downloading web pages, you aren't going to notice the difference; most of the time will be spent waiting for packets, and each task can efficiently interleave its processing.
Use tokio::spawn when tasks are more CPU-bound and could block each other.
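To make that concrete, here's a small self-contained sketch (mine, not from the answer above) in which both branches do pure CPU work. With join! the two sums run back to back on one task; with spawn on a multi-threaded runtime they can run on separate worker threads.

use std::time::Instant;

fn busy_work() -> u64 {
    (0..200_000_000u64).sum() // stand-in for CPU-bound work
}

async fn with_join() {
    let start = Instant::now();
    let (a, b) = tokio::join!(async { busy_work() }, async { busy_work() });
    println!("join!: {} {} in {:?}", a, b, start.elapsed()); // roughly 2x one busy_work
}

async fn with_spawn() {
    let start = Instant::now();
    let t1 = tokio::spawn(async { busy_work() });
    let t2 = tokio::spawn(async { busy_work() });
    let (a, b) = (t1.await.unwrap(), t2.await.unwrap());
    println!("spawn: {} {} in {:?}", a, b, start.elapsed()); // roughly 1x with 2+ worker threads
}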
I would typically look at this from the other angle: why would I use tokio::spawn over tokio::join!? Spawning a new task has more constraints than joining two futures; the 'static requirement can be very annoying, and as such it is not my go-to choice.
In addition to the cost of spawning the task, which I would guess is fairly marginal, there is also the cost of signaling the original task when it's done. That I would also guess is marginal, but you'd have to measure them in your environment and async workloads to see whether they actually have an impact or not.
But you're right, the biggest boon to using two tasks is that they have the opportunity to work in parallel, not just concurrently. On the other hand, async is most suited to I/O-bound workloads where there is lots of waiting, so depending on your workload it is probably unlikely that this lack of parallelism would have much impact.
All in all, tokio::join! is nicer and more flexible to use, and I doubt the technical difference would make an impact on performance. But as always: measure!
@kmdreko's answer was great and I'd like to add some details to it!
As mentioned, using tokio::spawn has a 'static requirement, so the following snippet doesn't compile:
async fn v1() {
    let url = String::from("https://example.com");
    let t1 = tokio::spawn(reqwest::get(&url)); // `url` does not live long enough
    let t2 = tokio::spawn(reqwest::get(&url));
    let (r1, r2) = (t1.await.unwrap(), t2.await.unwrap());
}
However, the equivalent snippet with tokio::join! does compile:
async fn v2() {
    let url = String::from("https://example.com");
    let t1 = reqwest::get(&url);
    let t2 = reqwest::get(&url);
    let (r1, r2) = tokio::join!(t1, t2);
}
Also, that answer got me curious about the cost of spawning a new task so I wrote the following simple benchmark:
use std::time::Instant;

#[tokio::main]
async fn main() {
    let now = Instant::now();
    for _ in 0..100_000 {
        v1().await;
    }
    println!("tokio::spawn = {:?}", now.elapsed());

    let now = Instant::now();
    for _ in 0..100_000 {
        v2().await;
    }
    println!("tokio::join! = {:?}", now.elapsed());
}

async fn v1() {
    let t1 = tokio::spawn(do_nothing());
    let t2 = tokio::spawn(do_nothing());
    t1.await.unwrap();
    t2.await.unwrap();
}

async fn v2() {
    let t1 = do_nothing();
    let t2 = do_nothing();
    tokio::join!(t1, t2);
}

async fn do_nothing() {}
In release mode, I get the following output on my macOS laptop:
tokio::spawn = 862.155882ms
tokio::join! = 369.603µs
EDIT: This benchmark is flawed in many ways (see comments), so don't rely on it for the specific numbers. However, the conclusion that spawning is more expensive than joining 2 tasks seems to be true.

How can I work around not being able to export functions with lifetimes when using wasm-bindgen?

I'm trying to write a simple game that runs in the browser, and I'm having a hard time modeling a game loop given the combination of restrictions imposed by the browser, rust, and wasm-bindgen.
A typical game loop in the browser follows this general pattern:
function mainLoop() {
    update();
    draw();
    requestAnimationFrame(mainLoop);
}
If I were to model this exact pattern in rust/wasm-bindgen, it would look like this:
let main_loop = Closure::wrap(Box::new(move || {
    update();
    draw();
    window.request_animation_frame(main_loop.as_ref().unchecked_ref()); // Not legal
}) as Box<FnMut()>);
Unlike in JavaScript, I'm unable to reference main_loop from within itself, so this doesn't work.
An alternative approach that someone suggested is to follow the pattern illustrated in the game of life example. At a high level, it involves exporting a type that contains the game state and includes public tick() and render() functions that can be called from within a JavaScript game loop. This doesn't work for me because my game state requires lifetime parameters, since it effectively just wraps a specs World and Dispatcher struct, the latter of which has lifetime parameters. Ultimately, this means that I can't export it using #[wasm_bindgen].
I'm having a hard time finding ways to work around these restrictions, and am looking for suggestions.
The easiest way to model this is likely to leave invocations of requestAnimationFrame to JS and instead just implement the update/draw logic in Rust.
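For instance, a minimal sketch of that split (update and draw here are placeholders for your real per-frame logic, not functions from the question): Rust exports a single per-frame entry point, and JS owns the requestAnimationFrame loop.

use wasm_bindgen::prelude::*;

// Rust: run one frame of game logic per call.
#[wasm_bindgen]
pub fn tick() {
    update();
    draw();
}

// JS side, for reference:
//   function frame() { wasm.tick(); requestAnimationFrame(frame); }
//   requestAnimationFrame(frame);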
In Rust, however, what you can also do is exploit the fact that a closure which doesn't actually capture any variables is zero-sized, meaning that the Closure<T> wrapping it won't allocate memory and you can safely forget it. For example, something like this should work:
#[wasm_bindgen]
pub fn main_loop() {
    update();
    draw();
    let window = ...;
    let closure = Closure::wrap(Box::new(|| main_loop()) as Box<Fn()>);
    window.request_animation_frame(closure.as_ref().unchecked_ref());
    closure.forget(); // not actually leaking memory
}
If your state has lifetimes inside of it, that is unfortunately incompatible with returning back to JS, because when you return all the way back to the JS event loop, all WebAssembly stack frames have been popped, meaning that any lifetime is invalidated. This means that your game state persisted across iterations of the main_loop will need to be 'static.
I'm a Rust novice, but here's how I addressed the same issue.
You can eliminate the problematic window.request_animation_frame recursion and implement an FPS cap at the same time by invoking window.request_animation_frame from a window.set_interval callback which checks an Rc<RefCell<bool>> or something to see if there's an animation frame request still pending. I'm not sure whether the inactive-tab behavior will be any different in practice.
I put the bool into my application state since I'm using an Rc<RefCell<...>> for that anyway for other event handling. I haven't checked that the code below compiles as is, but here are the relevant parts of how I'm doing this:
pub struct MyGame {
    ...
    should_request_render: bool, // Don't request another render until the previous runs, init to false since we'll fire the first one immediately.
}

...

let window = web_sys::window().expect("should have a window in this context");
let application_reference = Rc::new(RefCell::new(MyGame::new()));
let request_animation_frame = { // request_animation_frame is not forgotten! Its ownership is moved into the timer callback.
    let application_reference = application_reference.clone();
    let request_animation_frame_callback = Closure::wrap(Box::new(move || {
        let mut application = application_reference.borrow_mut();
        application.should_request_render = true;
        application.handle_animation_frame(); // handle_animation_frame being your main loop.
    }) as Box<FnMut()>);
    let window = window.clone();
    move || {
        window
            .request_animation_frame(
                request_animation_frame_callback.as_ref().unchecked_ref(),
            )
            .unwrap();
    }
};
request_animation_frame(); // fire the first request immediately

let timer_closure = Closure::wrap(
    Box::new(move || { // move both request_animation_frame and application_reference here.
        let mut application = application_reference.borrow_mut();
        if application.should_request_render {
            application.should_request_render = false;
            request_animation_frame();
        }
    }) as Box<FnMut()>
);
window.set_interval_with_callback_and_timeout_and_arguments_0(
    timer_closure.as_ref().unchecked_ref(),
    25, // minimum ms per frame
)?;
timer_closure.forget(); // this leaks it, you could store it somewhere or whatever, depends if it's guaranteed to live as long as the page
You can store the result of set_interval and the timer_closure in Options in your game state so that your game can clean itself up if needed for some reason (maybe? I haven't tried this, and it would seem to cause a free of self?). The circular reference won't erase itself unless broken (you're then storing Rcs to the application inside the application effectively). It should also enable you to change the max fps while running, by stopping the interval and creating another using the same closure.

Join futures with limited concurrency

I have a large vector of Hyper HTTP request futures and want to resolve them into a vector of results. Since there is a limit of maximum open files, I want to limit concurrency to N futures.
I've experimented with Stream::buffer_unordered, but it seems like it executes futures one by one.
We've used code like this in a project to avoid opening too many TCP sockets. These futures have Hyper futures within them, so it seems like exactly the same case.
// Convert the iterator into a `Stream`. We will process
// `PARALLELISM` futures at the same time, but with no specified
// order.
let all_done =
    futures::stream::iter(iterator_of_futures.map(Ok))
        .buffer_unordered(PARALLELISM);

// Everything after here is just using the stream in
// some manner, not directly related
let mut successes = Vec::with_capacity(LIMIT);
let mut failures = Vec::with_capacity(LIMIT);

// Pull values off the stream, dividing them into success and
// failure buckets.
let mut all_done = all_done.into_future();
loop {
    match core.run(all_done) {
        Ok((None, _)) => break,
        Ok((Some(v), next_all_done)) => {
            successes.push(v);
            all_done = next_all_done.into_future();
        }
        Err((v, next_all_done)) => {
            failures.push(v);
            all_done = next_all_done.into_future();
        }
    }
}
This is used in a piece of example code, so the event loop (core) is explicitly driven. Watching the number of file handles used by the program showed that it was capped. Additionally, before this bottleneck was added, we quickly ran out of allowable file handles, whereas afterward we did not.
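For anyone reading this against newer library versions: the snippet above is futures 0.1 with an explicitly driven Core. A rough equivalent with futures 0.3 and async/await (my own sketch, not the original project's code) keeps the same buffer_unordered bottleneck but consumes the stream with a while-let loop:

use futures::stream::{self, StreamExt};

// `requests` is any collection of futures that resolve to Result<T, E>;
// at most `parallelism` of them are polled at once, in no particular order.
async fn run_with_limit<I, F, T, E>(requests: I, parallelism: usize) -> (Vec<T>, Vec<E>)
where
    I: IntoIterator<Item = F>,
    F: std::future::Future<Output = Result<T, E>>,
{
    let mut in_flight = stream::iter(requests).buffer_unordered(parallelism);
    let (mut successes, mut failures) = (Vec::new(), Vec::new());
    while let Some(result) = in_flight.next().await {
        match result {
            Ok(v) => successes.push(v),
            Err(e) => failures.push(e),
        }
    }
    (successes, failures)
}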
