Concurrency in a Rust TUI app - lock starvation issue with multiple threads

I have a TUI app (built with the tui-rs crate) whose state lives in a global AppState struct.
The app has two threads:
The main thread draws the contents to the screen in a loop. It only needs an immutable reference to AppState to do the drawing.
A second thread is spawned occasionally to do some CPU-heavy work. It needs to update AppState as a result, and thus needs a mutable reference.
pub fn draw_screen() -> Result<(), Box<dyn Error>> {
    let mut terminal = init_terminal().unwrap();

    // prepare app state
    let mut state = AppState::fresh_state();
    let arc_state = Arc::new(Mutex::new(state));

    loop {
        terminal.draw(|f| {
            // needs an immutable reference to AppState
            do_some_drawing(arc_state.clone());
        });

        // needs a mutable reference to AppState
        thread::spawn(|| {
            do_some_calc(arc_state.clone());
        });
    }

    Ok(())
}
Currently this results in a hung app. The main thread keeps spinning in its loop, preventing the Mutex lock from ever being released. That means the second thread can never do its work, which in turn means the main thread (having checked whether the work is done yet) keeps spawning more worker threads, none of which can progress.
At least that's my best guess as to what's happening.
In case it matters, this is how I access the mutex in both immutable and mutable cases:
use std::ops::{Deref, DerefMut};

// when I need an immutable ref
let lock = thread_state.lock().unwrap();
let state = lock.deref();

// when I need a mutable ref
let mut lock = thread_state.lock().unwrap();
let state = lock.deref_mut();
What's the right way to resolve the above situation?
I've been reading about RefCell and channels but I'm not experienced enough to make a call. Would appreciate some guidance (any links to tutorials / docs super welcome).
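(Illustration, not from the original post: the usual shape of the fix is to drop the guard as soon as each draw is done, and to avoid respawning a worker while one is already in flight. A minimal sketch under those assumptions; the AtomicBool flag and the stub types standing in for the poster's code are invented.)

use std::{
    error::Error,
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc, Mutex,
    },
    thread,
};

// Assumed stand-ins for the poster's types and functions.
struct AppState;
impl AppState {
    fn fresh_state() -> Self { AppState }
}
fn do_some_calc(state: &Mutex<AppState>) {
    let _guard = state.lock().unwrap(); // mutate the state here
}

pub fn draw_screen() -> Result<(), Box<dyn Error>> {
    let arc_state = Arc::new(Mutex::new(AppState::fresh_state()));
    let worker_busy = Arc::new(AtomicBool::new(false));

    loop {
        {
            // Scope the guard so it is dropped as soon as drawing is
            // done, giving the worker a chance to take the lock.
            let state = arc_state.lock().unwrap();
            // do_some_drawing(&state);
            let _ = &*state;
        }

        // Spawn a worker only if one isn't already running.
        if !worker_busy.swap(true, Ordering::SeqCst) {
            let state = Arc::clone(&arc_state);
            let busy = Arc::clone(&worker_busy);
            thread::spawn(move || {
                do_some_calc(&state);
                busy.store(false, Ordering::SeqCst);
            });
        }
    }
}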

Related

Can the borrow checker know when an Arc is "released"? Can a 'static lifetime be granted temporarily?

I'm trying to speed up a computationally-heavy Rust function by making it concurrent using only the built-in thread support. In particular, I want to alternate between quick single-threaded phases (where the main thread has mutable access to a big structure) and concurrent phases (where many worker threads run with read-only access to the structure). I don't want to make extra copies of the structure or force it to be 'static. Where I'm having trouble is convincing the borrow checker that the worker threads have finished.
Ignoring the borrow checker, an Arc reference seems to do all that is needed. The reference count in the Arc increases with the .clone() for each worker, then decreases as the workers conclude and I join all the worker threads. If (and only if) the Arc reference count is 1, it should be safe for the main thread to resume. The borrow checker, however, doesn't seem to know about Arc reference counts, and insists that my structure needs to be 'static.
Here's some sample code which works fine if I don't use threads, but won't compile if I switch the comments to enable the multi-threaded case.
struct BigStruct {
    data: Vec<usize>,
    // Lots more
}

pub fn main() {
    let ref_bigstruct = &mut BigStruct { data: Vec::new() };
    for i in 0..3 {
        ref_bigstruct.data.push(i); // Phase where main thread has write access
        run_threads(ref_bigstruct); // Phase where worker threads have read-only access
    }
}

fn run_threads(ref_bigstruct: &BigStruct) {
    let arc_bigstruct = Arc::new(ref_bigstruct);
    {
        let arc_clone_for_worker = arc_bigstruct.clone();
        // SINGLE-THREADED WORKS:
        worker_thread(arc_clone_for_worker);
        // MULTI-THREADED DOES NOT COMPILE:
        // let handle = thread::spawn(move || { worker_thread(arc_clone_for_worker); });
        // handle.join();
    }
    assert!(Arc::strong_count(&arc_bigstruct) == 1);
    println!("??? How can I tell the borrow checker that all borrows of ref_bigstruct are done?")
}

fn worker_thread(my_struct: Arc<&BigStruct>) {
    println!("  worker says len()={}", my_struct.data.len());
}
I'm still learning about Rust lifetimes, but what I think (fear?) I need is an operation that takes an ordinary (not 'static) reference to my structure and gives me an Arc that I can clone into immutable references with a 'static lifetime for use by the workers. Once all the worker Arc references are dropped, the borrow checker needs to allow my thread-spawning function to return. For safety, I assume this would panic if the reference count is >1. While this seems like it would generally conform to Rust's safety requirements, I don't see how to do it.
The underlying problem is not the borrow checker failing to follow Arc, and the solution is not to use Arc. The problem is that the borrow checker cannot understand that the only reason a spawned thread must be 'static is that it may outlive the spawning thread, and thus that immediately .join()-ing it is fine.
The solution is to use scoped threads, that is, threads that allow you to use non-'static data because they are always joined before the scope ends, so the spawned thread cannot outlive the spawning thread. Problem is, there are no scoped threads in the standard library. Well, there are, but they're unstable.
So if you insist on not using crates, for some reason, you have no choice but to use unsafe code (don't, really). But if you can use external crates, then you can use the well-known crossbeam crate with its crossbeam::scope function, at least until std's scoped threads are stabilized.
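For example, here is a minimal sketch of the sample above rewritten with crossbeam::scope (the dependency version and the two-workers-per-phase count are assumptions):

// Assumes crossbeam = "0.8" in Cargo.toml.
struct BigStruct {
    data: Vec<usize>,
}

fn worker_thread(my_struct: &BigStruct) {
    println!("  worker says len()={}", my_struct.data.len());
}

pub fn main() {
    let mut big = BigStruct { data: Vec::new() };
    for i in 0..3 {
        big.data.push(i); // single-threaded phase: exclusive write access
        // Concurrent phase: workers borrow `big` immutably. The scope
        // joins every spawned thread before it returns, so the borrow
        // checker knows the borrows end here — no Arc, no 'static.
        crossbeam::scope(|s| {
            for _ in 0..2 {
                s.spawn(|_| worker_thread(&big));
            }
        })
        .unwrap();
    }
}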
In Rust's Arc<T>, T is by definition immutable. Which means that in order to use Arc to let threads access data that is going to change, you also need to wrap the data in some type that provides interior mutability.
Rust provides a type that is especially suited for a single writer or multiple parallel readers, called RwLock.
So for your simple example, this would probably look something like this:
use std::{sync::{Arc, RwLock}, thread};

struct BigStruct {
    data: Vec<usize>,
    // Lots more
}

pub fn main() {
    let arc_bigstruct = Arc::new(RwLock::new(BigStruct { data: Vec::new() }));
    for i in 0..3 {
        arc_bigstruct.write().unwrap().data.push(i); // Phase where main thread has write access
        run_threads(&arc_bigstruct); // Phase where worker threads have read-only access
    }
}

fn run_threads(ref_bigstruct: &Arc<RwLock<BigStruct>>) {
    {
        let arc_clone_for_worker = ref_bigstruct.clone();
        // MULTI-THREADED
        let handle = thread::spawn(move || { worker_thread(&arc_clone_for_worker); });
        handle.join().unwrap();
    }
    assert!(Arc::strong_count(ref_bigstruct) == 1);
}

fn worker_thread(my_struct: &Arc<RwLock<BigStruct>>) {
    println!("  worker says len()={}", my_struct.read().unwrap().data.len());
}
Which outputs
worker says len()=1
worker says len()=2
worker says len()=3
As for your question, the borrow checker does not know when an Arc is released, as far as I know. The references are counted at runtime.
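You can see that the count really is runtime data with a small experiment (not from the original answer, just an illustration):

use std::sync::Arc;

fn main() {
    let a = Arc::new(5);
    let b = Arc::clone(&a);
    // The count is ordinary runtime data, invisible to the borrow checker.
    assert_eq!(Arc::strong_count(&a), 2);
    drop(b);
    assert_eq!(Arc::strong_count(&a), 1);
}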

`RefCell<std::string::String>` cannot be shared between threads safely?

This is a continuation of How to re-use a value from the outer scope inside a closure in Rust?, opened as a new question for better presentation.
// main.rs

// The value will be modified eventually inside `main`
// and a http request should respond with whatever "current" value it holds.
let mut test_for_closure: Arc<RefCell<String>> = Arc::new(RefCell::from("Foo".to_string()));

// ...

// Handler for HTTP requests
// From https://docs.rs/hyper/0.14.8/hyper/service/fn.service_fn.html
let make_svc = make_service_fn(|_conn| async {
    Ok::<_, Infallible>(service_fn(|req: Request<Body>| async move {
        if req.version() == Version::HTTP_11 {
            let foo: String = *test_for_closure.borrow();
            Ok(Response::new(Body::from(foo.as_str())))
        } else {
            Err("not HTTP/1.1, abort connection")
        }
    }))
});
Unfortunately, I get RefCell<std::string::String> cannot be shared between threads safely:
RefCell only works within a single thread. You will need to use Mutex, which is similar but works across multiple threads. You can read more about Mutex here: https://doc.rust-lang.org/std/sync/struct.Mutex.html.
Here is an example of moving an Arc<Mutex<String>> into a closure:
use std::sync::{Arc, Mutex};

fn main() {
    let test: Arc<Mutex<String>> = Arc::new(Mutex::from("Foo".to_string()));
    let test_for_closure = Arc::clone(&test);
    let closure = || async move {
        // lock it so it can't be used in other threads
        let foo = test_for_closure.lock().unwrap();
        println!("{}", foo);
    };
}
The first error in your error message is that Sync is not implemented for RefCell<String>. This is by design, as stated by Sync's rustdoc:

Types that are not Sync are those that have "interior mutability" in a non-thread-safe form, such as Cell<T> and RefCell<T>. These types allow for mutation of their contents even through an immutable, shared reference. For example the set method on Cell<T> takes &self, so it requires only a shared reference &Cell<T>. The method performs no synchronization, thus Cell<T> cannot be Sync.

Thus it's not safe to share RefCells between threads, because you can cause a data race through a regular, shared reference.
But what if you wrap it in Arc? Well, the rustdoc is quite clear again:

Arc<T> will implement Send and Sync as long as the T implements Send and Sync. Why can't you put a non-thread-safe type T in an Arc<T> to make it thread-safe? This may be a bit counter-intuitive at first: after all, isn't the point of Arc<T> thread safety? The key is this: Arc<T> makes it thread safe to have multiple ownership of the same data, but it doesn't add thread safety to its data. Consider Arc<RefCell<T>>. RefCell<T> isn't Sync, and if Arc<T> was always Send, Arc<RefCell<T>> would be as well. But then we'd have a problem: RefCell<T> is not thread safe; it keeps track of the borrowing count using non-atomic operations.

In the end, this means that you may need to pair Arc<T> with some sort of std::sync type, usually Mutex<T>.
Arc<T> will not be Sync unless T is Sync, for the same reason. Given that, you should probably use a std or tokio Mutex instead of RefCell.
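Since the surrounding code is async (hyper), here is a minimal sketch of the tokio flavor; the task body and string values are made up for illustration, and it assumes tokio = { version = "1", features = ["full"] }:

use std::sync::Arc;
use tokio::sync::Mutex;

#[tokio::main]
async fn main() {
    let shared = Arc::new(Mutex::new("Foo".to_string()));

    let for_task = Arc::clone(&shared);
    let handle = tokio::spawn(async move {
        // Unlike std::sync::MutexGuard, this guard may be held
        // across .await points.
        let value = for_task.lock().await;
        println!("task sees: {}", value);
    });
    handle.await.unwrap();

    // The main task can still mutate the shared string afterwards.
    *shared.lock().await = "Bar".to_string();
}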

How can I move the data between threads safely?

I'm currently trying to call a function to which I pass multiple file names, expecting it to read the files, generate the appropriate structs, and return them in a Vec<Audit>. I've been able to accomplish this reading the files one by one, but I want to do it using threads.
This is the function:
fn generate_audits_from_files(files: Vec<String>) -> Vec<Audit> {
    let mut audits = Arc::new(Mutex::new(vec![]));
    let mut handlers = vec![];

    for file in files {
        let audits = Arc::clone(&audits);
        handlers.push(thread::spawn(move || {
            let mut audits = audits.lock().unwrap();
            audits.push(audit_from_xml_file(file.clone()));
            audits
        }));
    }

    for handle in handlers {
        let _ = handle.join();
    }

    audits
        .lock()
        .unwrap()
        .into_iter()
        .fold(vec![], |mut result, audit| {
            result.push(audit);
            result
        })
}
But it won't compile due to the following error:
error[E0277]: `MutexGuard<'_, Vec<Audit>>` cannot be sent between threads safely
--> src/main.rs:82:23
|
82 | handlers.push(thread::spawn(move || {
| ^^^^^^^^^^^^^ `MutexGuard<'_, Vec<Audit>>` cannot be sent between threads safely
|
::: /home/enthys/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:618:8
I have tried wrapping the generated Audit structs in Some(Audit) to avoid the MutexGuard, but then I stumble into poisoned-mutex issues.
The cause of the error is that after pushing the new Audit into the (locked) audits vec, you then try to return the vec's MutexGuard.
In Rust, a thread's function can actually return values; the point of doing that is to send the value back to whoever is join-ing the thread. This means the value is going to move between threads, so the value needs to be movable between threads (aka Send), which mutex guards have no reason to be[0].
The easy solution is to just... not do that: delete the last line of the spawned closure. Though the code doesn't work even then, as you still have a borrowing issue with the into_iter() call at the end.
An alternative is to lean into the feature (especially if Audit objects are not too big): drop the audits vec entirely and instead have each thread return its audit, then collect from the handlers when you join them:
pub fn generate_audits_from_files(files: Vec<String>) -> Vec<Audit> {
    let mut handlers = vec![];

    for file in files {
        handlers.push(thread::spawn(move || {
            audit_from_xml_file(file)
        }));
    }

    handlers.into_iter()
        .map(|handler| handler.join().unwrap())
        .collect()
}
Though at that point you might as well just let Rayon handle it:
use rayon::prelude::*;

pub fn generate_audits_from_files(files: Vec<String>) -> Vec<Audit> {
    files.into_par_iter().map(audit_from_xml_file).collect()
}
That also avoids crashing the program or bringing the machine to its knees if you happen to have millions of files, since rayon schedules the work onto a bounded thread pool instead of spawning one OS thread per file.
[0] and there are all the reasons for them not to be: locking on one thread and unlocking on another is not necessarily supported, e.g. ReleaseMutex:

The ReleaseMutex function fails if the calling thread does not own the mutex object.

(NB: in Windows lingo, "owning" a mutex means having acquired it via WaitForSingleObject, which translates to lock in POSIX lingo)

and can be plain UB, e.g. pthread_mutex_unlock:

If a thread attempts to unlock a mutex that it has not locked or a mutex which is unlocked, undefined behavior results.
Your problem is that you are passing your Vec<Audit> (or more precisely the MutexGuard<Vec<Audit>>) to the threads and back again, without really needing to.
And you don't need Mutex or Arc for this simpler task:
fn generate_audits_from_files(files: Vec<String>) -> Vec<Audit> {
    let mut handlers = vec![];

    for file in files {
        handlers.push(thread::spawn(move || {
            audit_from_xml_file(file)
        }));
    }

    handlers
        .into_iter()
        .flat_map(|x| x.join())
        .collect()
}

Accessing disjoint entries in global HashMap for lifetime of thread in Rust

My current project requires recording some information for various events that happen during the execution of a thread. These events are saved in a global struct, indexed by the thread id:
RECORDER1: HashMap<ThreadId, Vec<Entry>> = HashMap::new();
Every thread appends new Entry values to its own vector, so threads access "disjoint" vectors. Rust of course requires synchronization primitives to make the above work. So the real implementation looks like:
struct Entry {
    // ... not important.
}

#[derive(Clone, Eq, PartialEq, Hash)]
struct ThreadId;

// lazy_static necessary to initialize this data structure.
lazy_static! {
    /// Global data structure. Threads access disjoint entries based on their unique thread id.
    /// "Outer" mutex necessary as lazy_static requires Sync (so we cannot use RefCell).
    static ref RECORDER2: Mutex<HashMap<ThreadId, Vec<Entry>>> = Mutex::new(HashMap::new());
}
This works, but all threads contend on the same global lock. It would be nice if a thread could "borrow" its respective vector for the lifetime of the thread, so it could write all the entries it needs without locking every time (I understand the outer lock is necessary to ensure threads don't insert into the HashMap at the same time).
We can do this by adding an Arc and some more interior mutability via a Mutex for the values in the HashMap:
lazy_static! {
    static ref RECORDER: Mutex<HashMap<ThreadId, Arc<Mutex<Vec<Entry>>>>> = Mutex::new(HashMap::new());
}
Now we can "check out" our entry when a thread is spawned:
fn local_borrow() {
    std::thread::spawn(|| {
        let mut recorder = RECORDER.lock().expect("Unable to acquire outer mutex lock.");
        let my_thread_id: ThreadId = ThreadId {}; // Get thread id...

        // Insert entry in hashmap for our thread.
        // Omit logic to check if key-value pair already existed (it shouldn't).
        recorder.insert(my_thread_id.clone(), Arc::new(Mutex::new(Vec::new())));

        // Get "reference" to vector
        let local_entries: Arc<Mutex<Vec<Entry>>> = recorder
            .get(&my_thread_id)
            .unwrap() // We just inserted this entry, so unwrap.
            .clone(); // Clone the Arc to acquire a "copy".

        // Lock once, use multiple times.
        let mut local_entries: MutexGuard<_> = local_entries.lock().unwrap();
        local_entries.push(Entry {});
        local_entries.push(Entry {});
    });
}
This works and is what I want. However, due to API constraints I have to access the MutexGuard from widely different places across the code, without the ability to pass the MutexGuard as an argument to functions. So instead I use a thread-local variable:
thread_local! {
    /// This variable is initialized lazily. Due to API constraints, we use this thread_local! to
    /// "pass" LOCAL_ENTRIES around.
    static LOCAL_ENTRIES: Arc<Mutex<Vec<Entry>>> = {
        let mut recorder = RECORDER.lock().expect("Unable to acquire outer mutex lock.");
        let my_thread_id: ThreadId = ThreadId {}; // Get thread id...

        // Omit logic to check if key-value pair already existed (it shouldn't).
        recorder.insert(my_thread_id.clone(), Arc::new(Mutex::new(Vec::new())));

        // Get "reference" to vector
        recorder
            .get(&my_thread_id)
            .unwrap() // We just inserted this entry, so unwrap.
            .clone() // Clone the Arc to acquire a "copy".
    }
}
I cannot make LOCAL_ENTRIES: MutexGuard<_> since thread_local! requires a 'static lifetime. So currently I have to .lock() every time I want to access the thread-local variable:
fn main() {
    std::thread::spawn(|| {
        // Record important message.
        LOCAL_ENTRIES.with(|entries| {
            // We have to lock every time we want to write to LOCAL_ENTRIES. It would be nice
            // to lock once and hold on to the MutexGuard for the lifetime of the thread, but
            // this is not possible due to the lifetime on the MutexGuard.
            let mut entries = entries.lock().expect("Unable to acquire lock");
            entries.push(Entry {});
        });
    });
}
Sorry for all the code and explanation, but I'm really stuck and wanted to show why it doesn't work and what I'm trying to get working. How can one get around this in Rust?
Or am I getting hung up on the cost of the mutex locking? Since any given Arc<Mutex<Vec<Entry>>> is only ever locked by its own thread, the lock will always be uncontended, so the cost of the atomic locking should be tiny?
Thanks for any thoughts. Here is the complete example in Rust Playground.
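(Illustration, not from the original thread: if the entries only need to be readable by other threads once the owning thread is done with them, the per-event locking can be dropped entirely with a plain thread-local buffer that is flushed into a global sink once. The record/flush names and the flush-at-the-end policy are assumptions; requires Rust >= 1.63 for const Mutex::new.)

use std::cell::RefCell;
use std::sync::Mutex;
use std::thread;

#[derive(Debug)]
struct Entry;

// Global sink; each thread takes this lock exactly once, at flush time.
static RECORDER: Mutex<Vec<(thread::ThreadId, Vec<Entry>)>> = Mutex::new(Vec::new());

thread_local! {
    // Each thread owns its buffer outright; pushing is a plain
    // RefCell borrow with no atomics or locking involved.
    static LOCAL_BUFFER: RefCell<Vec<Entry>> = RefCell::new(Vec::new());
}

fn record(entry: Entry) {
    LOCAL_BUFFER.with(|buf| buf.borrow_mut().push(entry));
}

fn flush() {
    let batch = LOCAL_BUFFER.with(|buf| buf.take());
    RECORDER.lock().unwrap().push((thread::current().id(), batch));
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                record(Entry);
                record(Entry);
                flush(); // one lock per thread, not one per event
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{:?}", RECORDER.lock().unwrap());
}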

Is it possible to share a HashMap between threads without locking the entire HashMap?

I would like to have a struct shared between threads. The struct has many fields that are never modified and a HashMap, which is. I don't want to lock the whole HashMap for a single update/remove, so my HashMap looks something like HashMap<u8, Mutex<u8>>. This works, but it makes no sense, since the thread will lock the whole map anyway.
Here's the working version, without threads; I don't think they're necessary for the example.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

fn main() {
    let s = Arc::new(Mutex::new(S::new()));
    let z = s.clone();
    let _ = z.lock().unwrap();
}

struct S {
    x: HashMap<u8, Mutex<u8>>, // other non-mutable fields
}

impl S {
    pub fn new() -> S {
        S {
            x: HashMap::default(),
        }
    }
}
Playground
Is this possible in any way? Is there something obvious I missed in the documentation?
I've been trying to get this working, but I'm not sure how. Basically every example I see there's always a Mutex (or RwLock, or something like that) guarding the inner value.
I don't see how your request is possible, at least not without some exceedingly clever lock-free data structures; what should happen if multiple threads need to insert new values that hash to the same location?
In previous work, I've used a RwLock<HashMap<K, Mutex<V>>>. When inserting a value into the hash, you get an exclusive lock for a short period. The rest of the time, you can have multiple threads with reader locks to the HashMap and thus to a given element. If they need to mutate the data, they can get exclusive access to the Mutex.
Here's an example:
use std::{
    collections::HashMap,
    sync::{Arc, Mutex, RwLock},
    thread,
    time::Duration,
};

fn main() {
    let data = Arc::new(RwLock::new(HashMap::new()));

    let threads: Vec<_> = (0..10)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || worker_thread(i, data))
        })
        .collect();

    for t in threads {
        t.join().expect("Thread panicked");
    }

    println!("{:?}", data);
}
fn worker_thread(id: u8, data: Arc<RwLock<HashMap<u8, Mutex<i32>>>>) {
    loop {
        // Assume that the element already exists
        let map = data.read().expect("RwLock poisoned");

        if let Some(element) = map.get(&id) {
            let mut element = element.lock().expect("Mutex poisoned");

            // Perform our normal work updating a specific element.
            // The entire HashMap only has a read lock, which
            // means that other threads can access it.
            *element += 1;
            thread::sleep(Duration::from_secs(1));

            return;
        }

        // If we got this far, the element doesn't exist.
        // Get rid of our read lock and switch to a write lock.
        // You want to minimize the time we hold the writer lock.
        drop(map);
        let mut map = data.write().expect("RwLock poisoned");

        // We use HashMap::entry to handle the case where another thread
        // inserted the same key while we were unlocked.
        thread::sleep(Duration::from_millis(50));
        map.entry(id).or_insert_with(|| Mutex::new(0));

        // Let the loop start us over to try again
    }
}
This takes about 2.7 seconds to run on my machine, even though it starts 10 threads that each wait for 1 second while holding the exclusive lock to the element's data.
This solution isn't without issues, however. When there's a huge amount of contention for that one master lock, getting a write lock can take a while and completely kills parallelism.
In that case, you can switch to a RwLock<HashMap<K, Arc<Mutex<V>>>>. Once you have a read or write lock, you can then clone the Arc of the value, returning it and unlocking the hashmap.
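A minimal sketch of that variant (the checkout helper name and the i32 payload are invented for illustration):

use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

// Hypothetical helper: fetch (or insert) the value's Arc while holding
// the map lock only long enough to clone it.
fn checkout(map: &RwLock<HashMap<u8, Arc<Mutex<i32>>>>, key: u8) -> Arc<Mutex<i32>> {
    if let Some(value) = map.read().expect("RwLock poisoned").get(&key) {
        return Arc::clone(value);
    }
    let mut write = map.write().expect("RwLock poisoned");
    // entry() handles a racing insert between dropping the read lock
    // and acquiring the write lock.
    Arc::clone(write.entry(key).or_insert_with(|| Arc::new(Mutex::new(0))))
}

fn main() {
    let map = RwLock::new(HashMap::new());

    let value = checkout(&map, 7);
    // The map itself is unlocked here: holding the element's Mutex
    // does not block threads working on other keys.
    *value.lock().expect("Mutex poisoned") += 1;

    assert_eq!(*checkout(&map, 7).lock().unwrap(), 1);
}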
The next step up would be to use a crate like arc-swap, which says:
Then one would lock, clone the [RwLock<Arc<T>>] and unlock. This suffers from CPU-level contention (on the lock and on the reference count of the Arc) which makes it relatively slow. Depending on the implementation, an update may be blocked for arbitrary long time by a steady inflow of readers.
The ArcSwap can be used instead, which solves the above problems and has better performance characteristics than the RwLock, both in contended and non-contended scenarios.
I often advocate for performing some kind of smarter algorithm. For example, you could spin up N threads, each with its own HashMap, and then shard work among them: for the simple example above, route each key by id % N_THREADS. There are also more complicated sharding schemes that depend on your data; see the channel-based sketch after the quote below.
As Go has done a good job of evangelizing: do not communicate by sharing memory; instead, share memory by communicating.
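A minimal sketch of that share-by-communicating style (the shard count, u8 keys, and counter payload are all invented for illustration):

use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

const N_SHARDS: usize = 4;

fn main() {
    // One channel per shard thread; each thread owns its HashMap
    // outright, so no locking is needed at all.
    let mut senders = Vec::new();
    let mut handles = Vec::new();

    for _ in 0..N_SHARDS {
        let (tx, rx) = mpsc::channel::<(u8, i32)>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            let mut map: HashMap<u8, i32> = HashMap::new();
            for (key, delta) in rx {
                *map.entry(key).or_insert(0) += delta;
            }
            map // returned to the sharder when the channel closes
        }));
    }

    // Shard updates by key.
    for key in 0u8..100 {
        senders[key as usize % N_SHARDS].send((key, 1)).unwrap();
    }
    drop(senders); // close the channels so the shard threads finish

    for handle in handles {
        println!("{:?}", handle.join().unwrap());
    }
}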
Suppose the key of the data is mappable to a u8. Then you can have Arc<HashMap<u8, Mutex<HashMap<Key, Value>>>>.
When you initialize the data structure, you populate the whole first-level map before putting it in the Arc (it will be immutable after initialization).
When you want a value from the map you need to do a double get, something like:

data.get(&map_to_u8(&key)).unwrap().lock().expect("poison").get(&key)

where the unwrap is safe because we initialized the first map with all the values.
To write into the map, something like:

data.get(&map_to_u8(id)).unwrap().lock().expect("poison").entry(id).or_insert_with(|| value);

It's easy to see why contention is reduced: we now have 256 Mutexes, and the probability of multiple threads asking for the same Mutex is low.
Shepmaster's example with 100 threads takes about 10 seconds on my machine; the following example takes a little more than 1 second.
use std::{
    collections::HashMap,
    sync::{Arc, Mutex},
    thread,
    time::Duration,
};

fn main() {
    let mut inner = HashMap::new();
    for i in 0..=u8::max_value() {
        inner.insert(i, Mutex::new(HashMap::new()));
    }

    let data = Arc::new(inner);

    let threads: Vec<_> = (0..100)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || worker_thread(i, data))
        })
        .collect();

    for t in threads {
        t.join().expect("Thread panicked");
    }

    println!("{:?}", data);
}

fn worker_thread(id: u8, data: Arc<HashMap<u8, Mutex<HashMap<u8, Mutex<i32>>>>>) {
    loop {
        // The first unwrap is safe because we populated every `u8` key.
        if let Some(element) = data.get(&id).unwrap().lock().expect("poison").get(&id) {
            let mut element = element.lock().expect("Mutex poisoned");

            // Perform our normal work updating a specific element.
            // Only this shard's Mutex is held, so threads working on
            // other shards are unaffected.
            *element += 1;
            thread::sleep(Duration::from_secs(1));

            return;
        }

        // If we got this far, the element doesn't exist.
        // We use HashMap::entry to handle the case where another thread
        // inserted the same key while we were unlocked.
        thread::sleep(Duration::from_millis(50));
        data.get(&id).unwrap().lock().expect("poison").entry(id).or_insert_with(|| Mutex::new(0));

        // Let the loop start us over to try again
    }
}
Maybe you want to consider evmap:
A lock-free, eventually consistent, concurrent multi-value map.
The trade-off is eventual-consistency: Readers do not see changes until the writer refreshes the map. A refresh is atomic and the writer decides when to do it and expose new data to the readers.
