Why isn't parallelization providing a larger speedup? - multithreading

I'm trying to make a variation of a HashMap that can be extended very quickly using multithreading. I'm partitioning the data using remainders. It is working, but the speedup compared to my sequential version is surprisingly small. Here's my code:
use rustc_hash::FxHashMap;
use rayon::prelude::*;
use std::time::Instant;

fn main() {
    const NUM_SUBMAPS: usize = 1_000;

    // initialize data for serial version
    let mut data_vecs = vec![Vec::new(); NUM_SUBMAPS];
    for i in 0..100_000_000 {
        data_vecs[i % NUM_SUBMAPS].push((i, i));
    }
    let mut maps = vec![FxHashMap::default(); NUM_SUBMAPS];

    // initialize clones for parallel version
    let (data_vecs_clone, mut maps_clone) = (data_vecs.clone(), maps.clone());

    // time sequential version
    let t = Instant::now();
    maps.iter_mut().zip(data_vecs).for_each(|(submap, vec)| {
        submap.extend(vec);
    });
    println!("time in sequential version: {}", t.elapsed().as_secs_f64());
    drop(maps);

    // time parallel version
    let t = Instant::now();
    maps_clone.par_iter_mut().zip(data_vecs_clone).for_each(|(submap, vec)| {
        submap.extend(vec);
    });
    println!("time in parallel version: {}", t.elapsed().as_secs_f64());
}
And here's the output on my machine:
time in sequential version: 1.9712106999999999
time in parallel version: 0.7583539
The parallel version is faster, but the speedup is much less than I typically get with Rayon. I'm using a 16-core Ryzen 5950x, so I typically get speedups over 10x using Rayon. Why is the speedup so much smaller in this case? Is there any way to improve the parallel version to use all the CPU's cores efficiently?
Edit:
I'm on Windows, in case that makes a difference.

Related

When should you use tokio::join!() over tokio::spawn()?

Let's say I want to download two web pages concurrently with Tokio...
Either I could implement this with tokio::spawn():
async fn v1() {
    let t1 = tokio::spawn(reqwest::get("https://example.com"));
    let t2 = tokio::spawn(reqwest::get("https://example.org"));
    let (r1, r2) = (t1.await.unwrap(), t2.await.unwrap());
    println!("example.com = {}", r1.unwrap().status());
    println!("example.org = {}", r2.unwrap().status());
}
Or I could implement this with tokio::join!():
async fn v2() {
    let t1 = reqwest::get("https://example.com");
    let t2 = reqwest::get("https://example.org");
    let (r1, r2) = tokio::join!(t1, t2);
    println!("example.com = {}", r1.unwrap().status());
    println!("example.org = {}", r2.unwrap().status());
}
In both cases, the two requests are happening concurrently. However, in the second case, the two requests are running in the same task and therefore on the same thread.
So, my questions are:
Is there an advantage to tokio::join!() over tokio::spawn()?
If so, in which scenarios? (it doesn't have to do anything with downloading web pages)
I'm guessing there's a very small overhead to spawning a new task, but is that it?
The difference will depend on how you have configured the runtime. tokio::join! will run the futures concurrently within the same task, while tokio::spawn creates a new task for each.
In a single-threaded runtime, these are effectively the same. In a multi-threaded runtime, using tokio::spawn twice like that may use two separate threads.
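To make that concrete, here is a minimal sketch of my own (not from the original answer), assuming a recent tokio 1.x with the rt, rt-multi-thread, and macros features; the work function is just a stand-in:

use tokio::runtime::Builder;

async fn work(n: u32) -> u32 {
    n * 10
}

fn main() {
    // Single-threaded (current-thread) runtime: join! and spawn are effectively
    // equivalent here, since every task runs on the one thread.
    let single = Builder::new_current_thread().build().unwrap();
    single.block_on(async {
        let (a, b) = tokio::join!(work(1), work(2));
        println!("current_thread runtime: {} {}", a, b);
    });

    // Multi-threaded runtime: separately spawned tasks may run on different
    // worker threads, i.e. in parallel.
    let multi = Builder::new_multi_thread().worker_threads(2).build().unwrap();
    multi.block_on(async {
        let t1 = tokio::spawn(work(1));
        let t2 = tokio::spawn(work(2));
        let (a, b) = (t1.await.unwrap(), t2.await.unwrap());
        println!("multi_thread runtime: {} {}", a, b);
    });
}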
From the docs for tokio::join!:
By running all async expressions on the current task, the expressions are able to run concurrently but not in parallel. This means all expressions are run on the same thread and if one branch blocks the thread, all other expressions will be unable to continue. If parallelism is required, spawn each async expression using tokio::spawn and pass the join handle to join!.
For IO-bound tasks, like downloading web pages, you aren't going to notice the difference; most of the time will be spent waiting for packets and each task can efficiently interleave their processing.
Use tokio::spawn when the tasks are more CPU-bound and could block each other.
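Here is a hypothetical illustration of that point (my own sketch, assuming tokio with its full feature set and a machine with more than one core): a CPU-heavy branch inside join! stalls the other branch, while spawning lets them overlap.

use std::time::{Duration, Instant};

fn busy_loop() {
    // Simulate CPU-bound work that never yields to the scheduler.
    let start = Instant::now();
    while start.elapsed() < Duration::from_millis(500) {}
}

#[tokio::main]
async fn main() {
    // join!: both futures share one task (and one thread at a time), so the
    // busy loop delays the sleep branch.
    let t = Instant::now();
    tokio::join!(async { busy_loop() }, tokio::time::sleep(Duration::from_millis(500)));
    println!("join!: {:?}", t.elapsed()); // roughly 1 second

    // spawn: each future gets its own task, so they can run on separate
    // worker threads and overlap.
    let t = Instant::now();
    let t1 = tokio::spawn(async { busy_loop() });
    let t2 = tokio::spawn(tokio::time::sleep(Duration::from_millis(500)));
    let _ = tokio::join!(t1, t2);
    println!("spawn: {:?}", t.elapsed()); // roughly 0.5 seconds
}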
I would typically look at this from the other angle: why would I use tokio::spawn over tokio::join!? Spawning a new task has more constraints than joining two futures; the 'static requirement can be very annoying, so it is not my go-to choice.
In addition to the cost of spawning the task, which I would guess is fairly marginal, there is also the cost of signaling the original task when it's done. I would guess that is marginal too, but you'd have to measure them in your environment with your async workloads to see whether they actually have an impact.
But you're right, the biggest boon of using two tasks is that they have the opportunity to work in parallel, not just concurrently. On the other hand, async is most suited to I/O-bound workloads where there is lots of waiting, so depending on your workload it is probably unlikely that this lack of parallelism would have much impact.
All in all, tokio::join! is nicer and more flexible to use, and I doubt the technical difference would make an impact on performance. But as always: measure!
@kmdreko's answer was great and I'd like to add some details to it!
As mentioned, using tokio::spawn has a 'static requirement, so the following snippet doesn't compile:
async fn v1() {
    let url = String::from("https://example.com");
    let t1 = tokio::spawn(reqwest::get(&url)); // `url` does not live long enough
    let t2 = tokio::spawn(reqwest::get(&url));
    let (r1, r2) = (t1.await.unwrap(), t2.await.unwrap());
}
However, the equivalent snippet with tokio::join! does compile:
async fn v2() {
    let url = String::from("https://example.com");
    let t1 = reqwest::get(&url);
    let t2 = reqwest::get(&url);
    let (r1, r2) = tokio::join!(t1, t2);
}
Also, that answer got me curious about the cost of spawning a new task so I wrote the following simple benchmark:
use std::time::Instant;

#[tokio::main]
async fn main() {
    let now = Instant::now();
    for _ in 0..100_000 {
        v1().await;
    }
    println!("tokio::spawn = {:?}", now.elapsed());

    let now = Instant::now();
    for _ in 0..100_000 {
        v2().await;
    }
    println!("tokio::join! = {:?}", now.elapsed());
}

async fn v1() {
    let t1 = tokio::spawn(do_nothing());
    let t2 = tokio::spawn(do_nothing());
    t1.await.unwrap();
    t2.await.unwrap();
}

async fn v2() {
    let t1 = do_nothing();
    let t2 = do_nothing();
    tokio::join!(t1, t2);
}

async fn do_nothing() {}
In release mode, I get the following output on my macOS laptop:
tokio::spawn = 862.155882ms
tokio::join! = 369.603µs
EDIT: This benchmark is flawed in many ways (see comments), so don't rely on it for the specific numbers. However, the conclusion that spawning is more expensive than joining 2 tasks seems to be true.

Rust: Safe multi threading with recursion

I'm new to Rust.
For learning purposes, I'm writing a simple program to search for files in Linux, and it uses a recursive function:
fn ffinder(base_dir: String, prmtr: &'static str, e: bool, h: bool) -> std::io::Result<()> {
    let mut handle_vec = vec![];
    let pth = std::fs::read_dir(&base_dir)?;
    for p in pth {
        let p2 = p?.path().clone();
        if p2.is_dir() {
            if !h { // search doesn't include hidden directories
                let sstring: String = get_fname(p2.display().to_string());
                let slice: String = sstring[..1].to_string();
                if slice != ".".to_string() {
                    let handle = thread::spawn(move || {
                        ffinder(p2.display().to_string(), prmtr, e, h);
                    });
                    handle_vec.push(handle);
                }
            } else { // search includes hidden directories
                let handle2 = thread::spawn(move || {
                    ffinder(p2.display().to_string(), prmtr, e, h);
                });
                handle_vec.push(handle2);
            }
        } else {
            let handle3 = thread::spawn(move || {
                if compare(rmv_underline(get_fname(p2.display().to_string())), rmv_underline(prmtr.to_string()), e) {
                    println!("File found at: {}", p2.display().to_string().blue());
                }
            });
            handle_vec.push(handle3);
        }
    }
    for h in handle_vec {
        h.join().unwrap();
    }
    Ok(())
}
I've tried to use multithreading (thread::spawn), however it can create too many threads, exceeding the OS limit, which breaks the program's execution.
Is there a way to multithread with recursion, using a safe, limited (fixed) number of threads?
As one of the commenters mentioned, this is an absolutely perfect case for using Rayon. The blog post they mentioned doesn't show how Rayon might be used in recursion, only alluding to crossbeam's scoped threads with a broken link. However, Rayon provides its own scoped-threads implementation that solves your problem as well, in that it only uses as many threads as you have cores available, avoiding the error you ran into.
Here's the documentation for it:
https://docs.rs/rayon/1.0.1/rayon/fn.scope.html
Here's an example from some code I recently wrote. Basically what it does is recursively scan a folder, and each time it nests into a folder it creates a new job to scan that folder while the current thread continues. In my own tests it vastly outperforms a single threaded approach.
use std::fs::{self, DirEntry};
use std::path::{Path, PathBuf};
use std::sync::mpsc::{self, Sender};
use rayon::Scope;

fn main() {
    let source = PathBuf::from("/foo/bar/");
    let (tx, rx) = mpsc::channel();
    rayon::scope(|s| scan(&source, tx, s));
}

fn scan<'a, U: AsRef<Path>>(
    src: &U,
    tx: Sender<(Result<DirEntry, std::io::Error>, u64)>,
    scope: &Scope<'a>,
) {
    let dir = fs::read_dir(src).unwrap();
    dir.into_iter().for_each(|entry| {
        let info = entry.as_ref().unwrap();
        let path = info.path();
        if path.is_dir() {
            let tx = tx.clone();
            scope.spawn(move |s| scan(&path, tx, s)) // Recursive call here
        } else {
            // dbg!("{}", path.as_os_str().to_string_lossy());
            let size = info.metadata().unwrap().len();
            tx.send((entry, size)).unwrap();
        }
    });
}
I'm not an expert on Rayon, but I'm fairly certain the threading strategy works like this:
Rayon creates a pool of threads to match the number of logical cores you have available in your environment. The first call to the scoped function creates a job that the first available thread "steals" from the queue of jobs available. Each time we make another recursive call, it doesn't necessarily execute immediately, but it creates a new job that an idle thread can then "steal" from the queue. If all of the threads are busy, the job queue just fills up each time we make another recursive call, and each time a thread finishes its current job it steals another job from the queue.
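To show that the thread count really is bounded, here is a small sketch of my own (not from the answer above; the job bodies are placeholders) using Rayon's default pool and an explicitly capped one built with ThreadPoolBuilder:

fn main() {
    // Default global pool: roughly one thread per logical core.
    println!("default rayon threads: {}", rayon::current_num_threads());

    // An explicitly bounded pool; spawned jobs queue up as work items
    // instead of creating new OS threads.
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(4)
        .build()
        .unwrap();
    pool.scope(|s| {
        for i in 0..16 {
            s.spawn(move |_| {
                // Each job runs on one of the 4 pool threads.
                println!("job {} on {:?}", i, std::thread::current().id());
            });
        }
    });
}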
The full code can be found here: https://github.com/1Dragoon/fcp
(Note that the repo is a work in progress; the code there is currently broken and probably won't work as-is at the time you're reading this.)
As a caveat, I'm more of a sysadmin than an actual developer, so I don't know whether this is the ideal approach. From Rayon's documentation linked earlier:
scope() is a more flexible building block compared to join(), since a loop can be used to spawn any number of tasks without recursing
The wording there is a bit confusing; I'm not sure what they mean by "without recursing". join() seems intended for when you already know the tasks ahead of time and want to execute them in parallel as threads become available, whereas scope() seems more aimed at creating jobs only as you need them, rather than having to know everything you need to do in advance. Either that or I'm understanding their meaning backwards.
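Here is a minimal sketch of my reading of that docs sentence (my own example, not from Rayon's documentation; the expensive function is just a stand-in): join() takes exactly two closures known up front, while scope() lets a loop spawn any number of tasks decided at runtime.

fn expensive(n: u64) -> u64 {
    // Placeholder for real work.
    (0..1_000_000u64).fold(n, |acc, x| acc.wrapping_add(x))
}

fn main() {
    // join: exactly two pieces of work, both known in advance.
    let (a, b) = rayon::join(|| expensive(1), || expensive(2));
    println!("join results: {} {}", a, b);

    // scope: spawn as many tasks as the loop decides to, without recursing.
    rayon::scope(|s| {
        for i in 0..8 {
            s.spawn(move |_| {
                let _ = expensive(i);
            });
        }
    });
}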

Runtime returning without completing all tasks despite using block_on in Tokio

I am writing a multi-threaded concurrent Kafka producer using Rust and Tokio. The project has 2 modes: an interactive mode that runs in an infinite loop, and a file mode which takes a file as an argument, reads the file, and sends its messages to Kafka via multiple threads. Interactive mode works fine, but file mode has issues.
To achieve this, I had initially started with Rayon, but then switched to a more flexible runtime: Tokio. Now, I am able to parallelize the task of sending data over a specified number of threads within Tokio; however, it seems that the runtime is getting dropped before all messages are produced. Here is my code:
pub fn worker(brokers: String, f: File, t: usize, topic: Arc<String>) {
    let reader = BufReader::new(f);
    let mut rt = runtime::Builder::new()
        .threaded_scheduler()
        .core_threads(t)
        .build()
        .unwrap();
    let producers: Arc<Vec<Mutex<BaseProducer>>> = Arc::new(
        (0..t)
            .map(|_| get_producer(&brokers))
            .collect::<Vec<Mutex<BaseProducer>>>(),
    );
    let acounter = atomic::AtomicUsize::new(0);
    let _results: Vec<_> = reader
        .lines()
        .map(|line| line.unwrap())
        .map(move |line| {
            let prods = producers.clone();
            let tp = topic.clone();
            let cnt = acounter.swap(
                (acounter.load(atomic::Ordering::SeqCst) + 1) % t,
                atomic::Ordering::SeqCst,
            );
            rt.block_on(async move {
                match prods[cnt]
                    .lock()
                    .unwrap()
                    .send(BaseRecord::to(&(*tp)).payload(&line).key(""))
                {
                    Ok(_) => (),
                    Err(e) => eprintln!("{:?}", e),
                };
            })
        })
        .collect();
}

fn get_producer(brokers: &String) -> Mutex<BaseProducer> {
    Mutex::new(
        BaseProducer::from_config(
            ClientConfig::new()
                .set("bootstrap.servers", &brokers)
                .set("message.timeout.ms", "5000"),
        )
        .expect("Producer creation error"),
    )
}
As a high-level walkthrough: I create mutable producers equal to the number of threads specified and every task within this thread will use one of these producers. The file is read line by line sequentially and every line is moved into the closure that produces it as a message to Kafka.
The code works fine for the most part, but there are issues with the runtime exiting before all tasks have completed, even though I am using the runtime's block_on function, which is supposed to block until the future (the async block, in my case) is complete.
I believe the issue is that the runtime is getting dropped before all of the tasks within Tokio have finished successfully.
I tried reading a file with 100,000 records using this approach. On a single thread, I was able to produce 28,000 records; on 2 threads, close to 46,000 records; and while utilising all 8 cores of my CPU, I was getting 99,000-100,000 messages, non-deterministically.
I have checked several answers on SO, but none help in my case. I also read through the documentation of tokio::runtime::Runtime here and tried to use spawn and then use futures::future::join, but that didn't work either.
Any help is appreciated!

What's a good detailed explanation of Tokio's simple TCP echo server example (on GitHub and API reference)?

Tokio has the same example of a simple TCP echo server on its:
GitHub main page (https://github.com/tokio-rs/tokio)
API reference main page (https://docs.rs/tokio/0.2.18/tokio/)
However, in both pages, there is no explanation of what's actually going on. Here's the example, slightly modified so that the main function does not return Result<(), Box<dyn std::error::Error>>:
use tokio::net::TcpListener;
use tokio::prelude::*;

#[tokio::main]
async fn main() {
    if let Ok(mut tcp_listener) = TcpListener::bind("127.0.0.1:8080").await {
        while let Ok((mut tcp_stream, _socket_addr)) = tcp_listener.accept().await {
            tokio::spawn(async move {
                let mut buf = [0; 1024];
                // In a loop, read data from the socket and write the data back.
                loop {
                    let n = match tcp_stream.read(&mut buf).await {
                        // socket closed
                        Ok(n) if n == 0 => return,
                        Ok(n) => n,
                        Err(e) => {
                            eprintln!("failed to read from socket; err = {:?}", e);
                            return;
                        }
                    };
                    // Write the data back
                    if let Err(e) = tcp_stream.write_all(&buf[0..n]).await {
                        eprintln!("failed to write to socket; err = {:?}", e);
                        return;
                    }
                }
            });
        }
    }
}
After reading the Tokio documentation (https://tokio.rs/docs/overview/), here's my mental model of this example. A task is spawned for each new TCP connection. And a task is ended whenever a read/write error occurs, or when the client ends the connection (i.e. n == 0 case). Therefore, if there are 20 connected clients at a point in time, there would be 20 spawned tasks. However, under the hood, this is NOT equivalent to spawning 20 threads to handle the connected clients concurrently. As far as I understand, this is basically the problem that asynchronous runtimes are trying to solve. Correct so far?
Next, my mental model is that a tokio scheduler (e.g. the multi-threaded threaded_scheduler which is the default for apps, or the single-threaded basic_scheduler which is the default for tests) will schedule these tasks concurrently on 1-to-N threads. (Side question: for the threaded_scheduler, is N fixed during the app's lifetime? If so, is it equal to num_cpus::get()?). If one task is .awaiting for the read or write_all operations, then the scheduler can use the same thread to perform more work for one of the other 19 tasks. Still correct?
Finally, I'm curious whether the outer code (i.e. the code that is .awaiting for tcp_listener.accept()) is itself a task? Such that in the 20 connected clients example, there aren't really 20 tasks but 21: one to listen for new connections + one per connection. All of these 21 tasks could be scheduled concurrently on one or many threads, depending on the scheduler. In the following example, I wrap the outer code in a tokio::spawn and .await the handle. Is it completely equivalent to the example above?
use tokio::net::TcpListener;
use tokio::prelude::*;

#[tokio::main]
async fn main() {
    let main_task_handle = tokio::spawn(async move {
        if let Ok(mut tcp_listener) = TcpListener::bind("127.0.0.1:8080").await {
            while let Ok((mut tcp_stream, _socket_addr)) = tcp_listener.accept().await {
                tokio::spawn(async move {
                    // ... same as above ...
                });
            }
        }
    });
    main_task_handle.await.unwrap();
}
This answer is a summary of an answer I received on Tokio's Discord from Alice Ryhl. Big thank you!
First of all, indeed, for the multi-threaded scheduler, the number of OS threads is fixed to num_cpus.
Second, Tokio can swap the currently running task at every .await on a per-thread basis.
Third, the main function runs in its own task, which is spawned by the #[tokio::main] macro.
Therefore, for the first code block example, if there are 20 connected clients, there would be 21 tasks: one for the main macro + one for each of the 20 open TCP streams. For the second code block example, there would be 22 tasks because of the extra outer tokio::spawn but it's needless and doesn't add any concurrency.
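For what it's worth, here is a small sketch of my own (not part of the Discord answer) showing how the worker-thread count either defaults to the number of logical cores or can be pinned explicitly, using the same tokio 0.2 builder API that appears in the Kafka question earlier on this page:

use tokio::runtime;

fn main() {
    // Default threaded scheduler: one worker thread per logical core.
    let mut default_rt = runtime::Builder::new()
        .threaded_scheduler()
        .build()
        .unwrap();

    // Explicitly fixed number of worker threads, independent of the core count.
    let mut fixed_rt = runtime::Builder::new()
        .threaded_scheduler()
        .core_threads(4)
        .build()
        .unwrap();

    default_rt.block_on(async { println!("on the default-sized runtime") });
    fixed_rt.block_on(async { println!("on the 4-worker runtime") });
}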

Join futures with limited concurrency

I have a large vector of Hyper HTTP request futures and want to resolve them into a vector of results. Since there is a limit of maximum open files, I want to limit concurrency to N futures.
I've experimented with Stream::buffer_unordered, but it seems like it executed the futures one by one.
We've used code like this in a project to avoid opening too many TCP sockets. These futures have Hyper futures within, so it seems exactly the same case.
// Convert the iterator into a `Stream`. We will process
// `PARALLELISM` futures at the same time, but with no specified
// order.
let all_done =
    futures::stream::iter(iterator_of_futures.map(Ok))
        .buffer_unordered(PARALLELISM);

// Everything after here is just using the stream in
// some manner, not directly related

let mut successes = Vec::with_capacity(LIMIT);
let mut failures = Vec::with_capacity(LIMIT);

// Pull values off the stream, dividing them into success and
// failure buckets.
let mut all_done = all_done.into_future();
loop {
    match core.run(all_done) {
        Ok((None, _)) => break,
        Ok((Some(v), next_all_done)) => {
            successes.push(v);
            all_done = next_all_done.into_future();
        }
        Err((v, next_all_done)) => {
            failures.push(v);
            all_done = next_all_done.into_future();
        }
    }
}
This is used in a piece of example code, so the event loop (core) is explicitly driven. Watching the number of file handles used by the program showed that it was capped. Additionally, before this bottleneck was added, we quickly ran out of allowable file handles, whereas afterward we did not.
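The snippet above uses the futures 0.1 API with an explicitly driven core. As a rough modern restatement (my own sketch, assuming futures 0.3 and tokio rather than Hyper; the sleep stands in for an HTTP request and PARALLELISM is reused from above), the same idea with buffer_unordered looks like this:

use futures::stream::{self, StreamExt};
use std::time::Duration;

#[tokio::main]
async fn main() {
    const PARALLELISM: usize = 8;

    let results: Vec<u64> = stream::iter(0..100u64)
        // Each item becomes a future; none of them run until polled.
        .map(|i| async move {
            // Stand-in for an HTTP request.
            tokio::time::sleep(Duration::from_millis(10)).await;
            i * 2
        })
        // At most PARALLELISM futures are in flight at once.
        .buffer_unordered(PARALLELISM)
        .collect()
        .await;

    println!("finished {} requests", results.len());
}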
