Wake another thread periodically with Rust - multithreading

Condvar in Rust is good for waking another thread, but in the case below I don't really need the boolean value; I just want to wake the other thread periodically.
use std::sync::{Arc, Mutex, Condvar};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    // Inside of our lock, spawn a new thread, and then wait for it to start.
    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        let mut started = lock.lock().unwrap();
        // We notify the condvar that the value has changed.
        loop {
            *started = true;
            cvar.notify_one();
            std::thread::sleep(std::time::Duration::from_millis(20));
        }
    });

    // Wait for the thread to start up.
    let (lock, cvar) = &*pair;
    let mut started = lock.lock().unwrap();
    loop {
        started = cvar.wait(started).unwrap();
        println!("done");
    }
    println!("end");
}
Also, this example does not even work, and I don't know why. It should wake the main thread every 20 ms.

The reason this doesn't work is that you're holding the mutex while you sleep. The main thread is only woken up after it has received a notify_one and the mutex can be re-acquired, but the spawned thread keeps the mutex locked forever.
In fact, you don't need the lock at all in your spawned thread, and you could make it contain no data by constructing it as Mutex::new(()). However, if you do that, it is possible that your main thread hasn't finished one loop by the time the spawned thread finishes its sleep. The mutex ensures that when you call notify_one, the main thread is definitely waiting to be notified. Whether you want that behavior is up to you. With locking the mutex in the spawned thread, the main thread is woken up immediately if its previous loop took longer than one tick. Without the locking, wake-ups may be skipped and the next wake-up is aligned to the next tick.
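If you do want to stick with the Condvar, here is a minimal sketch of the same structure with the guard released before sleeping, so the main thread can actually re-acquire the mutex (one possible fix, not the only one):
use std::sync::{Arc, Mutex, Condvar};
use std::thread;
use std::time::Duration;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        loop {
            // Lock only long enough to set the flag and notify,
            // then drop the guard before sleeping.
            {
                let mut started = lock.lock().unwrap();
                *started = true;
                cvar.notify_one();
            }
            thread::sleep(Duration::from_millis(20));
        }
    });

    let (lock, cvar) = &*pair;
    let mut started = lock.lock().unwrap();
    loop {
        // wait() releases the mutex while blocked and re-acquires it on wake-up.
        started = cvar.wait(started).unwrap();
        *started = false;
        println!("done");
    }
}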
But really, do what @Finomnis suggested in their answer and use channels.
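For completeness, a channel-based version could look roughly like this (a sketch of the idea, not necessarily the exact code from the suggested answer):
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || loop {
        // If the receiver is gone, just stop ticking.
        if tx.send(()).is_err() {
            break;
        }
        thread::sleep(Duration::from_millis(20));
    });

    // Each iteration blocks until the next tick arrives.
    for _ in rx {
        println!("done");
    }
}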

Related

Why does this use of Condvar wait and notify not deadlock?

https://doc.rust-lang.org/stable/std/sync/struct.Condvar.html
use std::sync::{Arc, Mutex, Condvar};
use std::thread;

let pair = Arc::new((Mutex::new(false), Condvar::new()));
let pair2 = Arc::clone(&pair);

// Inside of our lock, spawn a new thread, and then wait for it to start.
thread::spawn(move || {
    let (lock, cvar) = &*pair2;
    let mut started = lock.lock().unwrap(); // #1
    *started = true;
    // We notify the condvar that the value has changed.
    cvar.notify_one();
});

// Wait for the thread to start up.
let (lock, cvar) = &*pair;
let mut started = lock.lock().unwrap(); // #2
while !*started {
    started = cvar.wait(started).unwrap();
}
If I understand this correctly, the code in the spawned thread (#1) might run after the main thread has locked the mutex (#2). But in that case, the main thread will never unlock it, because the spawned thread can't lock it and change the value, so the loop keeps running forever... Or is it because of some Condvar mechanics?
wait's documentation says (emphasis added):
pub fn wait<'a, T>(
    &self,
    guard: MutexGuard<'a, T>
) -> LockResult<MutexGuard<'a, T>>
This function will atomically unlock the mutex specified (represented by guard) and block the current thread. This means that any calls to notify_one or notify_all which happen logically after the mutex is unlocked are candidates to wake this thread up. When this function call returns, the lock specified will have been re-acquired.
When you call cvar.wait inside the loop it unlocks the mutex, which allows the spawned thread to lock it and set started to true.

Concurrency in a rust tui app - lock starvation issue

I have a tui app (built with the tui-rs crate) with its state inside a global AppState struct.
The app has 2 threads:
The main thread draws the contents to the screen in a loop. It needs an immutable reference to AppState to do the drawing.
A second thread occasionally spawns to do some cpu-heavy work. As a result it needs to update AppState and thus needs a mutable reference.
pub fn draw_screen() -> Result<(), Box<dyn Error>> {
    let mut terminal = init_terminal().unwrap();

    // prepare app state
    let mut state = AppState::fresh_state();
    let arc_state = Arc::new(Mutex::new(state));

    loop {
        terminal.draw(|f| {
            // needs an immutable reference to AppState
            do_some_drawing(arc_state.clone());
        });

        // needs a mutable reference to AppState
        thread::spawn(|| {
            do_some_calc(arc_state.clone());
        });
    }

    Ok(())
}
Currently this results in a hung app. The main thread keeps spinning in its loop, preventing the Mutex lock from ever being released. This means the second thread can never do its work. That in turn means the main thread (having checked whether the work is done yet) keeps spawning more worker threads, none of which can progress.
At least that's my best guess as to what's happening.
In case it matters, this is how I access the mutex in both immutable and mutable cases:
// when I need an immutable ref
let lock = thread_state.lock().unwrap();
let state = lock.deref();
// when I need a mutable ref
let mut lock = thread_state.lock().unwrap();
let state = lock.deref_mut();
What's the right way to resolve the above situation?
I've been reading about RefCell and channels but I'm not experienced enough to make a call. Would appreciate some guidance (any links to tutorials / docs super welcome).
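To illustrate the diagnosis above, one common mitigation is to keep every critical section as short as possible and to spawn the worker once instead of once per frame. This is only a sketch with invented stand-ins (AppState reduced to a counter, expensive_calculation in place of the cpu-heavy work), not the question's real code:
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Invented stand-in for the question's AppState.
struct AppState {
    counter: u64,
}

fn expensive_calculation() -> u64 {
    // Placeholder for the cpu-heavy work.
    thread::sleep(Duration::from_millis(100));
    42
}

fn main() {
    let state = Arc::new(Mutex::new(AppState { counter: 0 }));

    // Spawn the worker once instead of once per frame.
    let worker_state = Arc::clone(&state);
    thread::spawn(move || loop {
        // Do the heavy work *without* holding the lock...
        let result = expensive_calculation();
        // ...and lock only for the brief moment of writing the result back.
        worker_state.lock().unwrap().counter = result;
    });

    // Draw loop: lock briefly, copy out what is needed, then release.
    loop {
        let snapshot = {
            let guard = state.lock().unwrap();
            guard.counter
        };
        println!("drawing with counter = {snapshot}");
        thread::sleep(Duration::from_millis(16));
    }
}
The drawing loop only holds the lock while copying out the data it needs, so the worker gets a chance to acquire it between frames.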

How can I send a message to a specific thread?

I need to create some threads where some of them are going to run until their runner variable value has been changed. This is my minimal code.
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let mut log_runner = Arc::new(Mutex::new(true));
    println!("{}", *log_runner.lock().unwrap());

    let mut threads = Vec::new();
    {
        let mut log_runner_ref = Arc::clone(&log_runner);
        // log runner thread
        let handle = thread::spawn(move || {
            while *log_runner_ref.lock().unwrap() == true {
                // DO SOME THINGS CONTINUOUSLY
                println!("I'm a separate thread!");
            }
        });
        threads.push(handle);
    }

    // let the main thread sleep for x time
    thread::sleep(Duration::from_millis(1));
    // stop the log_runner thread
    *log_runner.lock().unwrap() = false;

    // join all threads
    for handle in threads {
        handle.join().unwrap();
        println!("Thread joined!");
    }

    println!("{}", *log_runner.lock().unwrap());
}
It looks like I'm able to set log_runner_ref to false in the log runner thread after the sleep. Is there a way to mark the threads with some name / ID or something similar and send a message to a specific thread using its specific marker (name / ID)?
If I understand it correctly, let (tx, rx) = mpsc::channel(); can be used for sending messages to all the threads simultaneously rather than to a specific one. I could send some identifier with the messages and have each thread look for its own identifier to decide whether to act on a received message, but I would like to avoid the broadcasting effect.
MPSC stands for Multiple Producers, Single Consumer. As such, no, you cannot use that by itself to send a message to all threads, since for that you'd have to be able to duplicate the consumer. There are tools for this, but the choice of them requires a bit more info than just "MPMC" or "SPMC".
Honestly, if you can rely on channels for messaging (there are cases where it'd be a bad idea), you can create a channel per thread, assign the ID outside of the thread, and keep a HashMap instead of a Vec with the IDs associated to the threads. Receiver<T> can be moved into the thread (it implements Send if T implements Send), so you can quite literally move it in.
You then keep the Sender outside and send stuff to it :-)
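A rough sketch of that idea (the u32 IDs and String messages are made up for illustration):
use std::collections::HashMap;
use std::sync::mpsc::{self, Sender};
use std::thread;

fn main() {
    let mut senders: HashMap<u32, Sender<String>> = HashMap::new();
    let mut handles = Vec::new();

    for id in 0..3u32 {
        let (tx, rx) = mpsc::channel::<String>();
        senders.insert(id, tx);

        // The Receiver is moved into its thread; each thread owns exactly one.
        handles.push(thread::spawn(move || {
            for msg in rx {
                println!("thread {id} got: {msg}");
            }
            // The loop ends once every Sender for this channel is dropped.
            println!("thread {id} shutting down");
        }));
    }

    // Send a message to one specific thread via its ID.
    senders[&1].send("stop logging".to_string()).unwrap();

    // Dropping the map drops all Senders, so every thread exits its loop.
    drop(senders);

    for handle in handles {
        handle.join().unwrap();
    }
}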

Shouldn't a loop spawned in a thread print repeatedly?

Example code:
fn main() {
    use std::thread::spawn;
    spawn(|| { loop { println!("a") } });
    // `a` is never printed
}

fn main() {
    use std::thread::spawn;
    spawn(|| { loop { println!("a") } });
    loop { }
    // `a` is printed repeatedly
}
a prints to the standard output in the second case, but the same is not true in the first case. Why is that? Shouldn't a print repeatedly in the first case as well?
Shouldn't a print repeatedly in the first case as well?
No. The documentation of thread::spawn says (emphasis mine):
The join handle will implicitly detach the child thread upon being dropped. In this case, the child thread may outlive the parent (unless the parent thread is the main thread; the whole process is terminated when the main thread finishes.) Additionally, the join handle provides a join method that can be used to join the child thread. If the child thread panics, join will return an Err containing the argument given to panic.
Your entire program exits because the main thread has exited. There was never even a chance for the child thread to start, much less print anything.
In the second example, you prevent the main thread from exiting by also causing that to spin forever.
What happens when you spawn a loop?
That thread will spin in the loop, as long as the program executes.
Idiomatically, you don't need the extra curly braces in the spawn, and it's more standard to only import the std::thread and then call thread::spawn:
fn main() {
    use std::thread;

    thread::spawn(|| loop {
        println!("a")
    });
}
To have the main thread wait for the child, you need to keep the JoinHandle from thread::spawn and call join on it:
fn main() {
    use std::thread;

    let handle = thread::spawn(|| loop {
        println!("a")
    });

    handle.join().expect("The thread panicked");
}

How can I cause a panic on a thread to immediately end the main thread?

In Rust, a panic terminates the current thread but is not sent back to the main thread. The solution we are told is to use join. However, this blocks the currently executing thread. So if my main thread spawns 2 threads, I cannot join both of them and immediately get a panic back.
let jh1 = thread::spawn(|| { println!("thread 1"); sleep(1000000); });
let jh2 = thread::spawn(|| { panic!("thread 2") });
In the above, if I join on thread 1 and then on thread 2, I will be waiting for thread 1 before ever receiving the panic from thread 2.
Although in some cases I desire the current behavior, my goal is to default to Go's behavior where I can spawn a thread and have it panic on that thread and then immediately end the main thread. (The Go specification also documents a protect function, so it is easy to achieve Rust behavior in Go).
Updated for Rust 1.10+, see revision history for the previous version of the answer
Good point; in Go the main thread doesn't get unwound, the program just crashes, but the original panic is reported. This is in fact the behavior I want (although ideally resources would get cleaned up properly everywhere).
This you can achieve with the recently stable std::panic::set_hook() function. With it, you can set a hook which prints the panic info and then exits the whole process, something like this:
use std::thread;
use std::panic;
use std::process;

fn main() {
    // take_hook() returns the default hook in case when a custom one is not set
    let orig_hook = panic::take_hook();
    panic::set_hook(Box::new(move |panic_info| {
        // invoke the default handler and exit the process
        orig_hook(panic_info);
        process::exit(1);
    }));

    thread::spawn(move || {
        panic!("something bad happened");
    }).join();

    // this line won't ever be invoked because of process::exit()
    println!("Won't be printed");
}
Try commenting the set_hook() call out, and you'll see that the println!() line gets executed.
However, this approach, due to the use of process::exit(), will not allow resources allocated by other threads to be freed. In fact, I'm not sure that the Go runtime allows this either; it likely uses the same approach of aborting the process.
I tried to force my code to stop processing when any of the threads panicked. The only more-or-less clear solution without using unstable features was to implement the Drop trait on some struct. This can lead to a resource leak, but in my scenario I'm OK with that.
use std::process;
use std::thread;
use std::time::Duration;

static THREAD_ERROR_CODE: i32 = 0x1;
static NUM_THREADS: u32 = 17;
static PROBE_SLEEP_MILLIS: u64 = 500;

struct PoisonPill;

impl Drop for PoisonPill {
    fn drop(&mut self) {
        if thread::panicking() {
            println!("dropped while unwinding");
            process::exit(THREAD_ERROR_CODE);
        }
    }
}

fn main() {
    let mut thread_handles = vec![];
    for i in 0..NUM_THREADS {
        thread_handles.push(thread::spawn(move || {
            let b = PoisonPill;
            thread::sleep(Duration::from_millis(PROBE_SLEEP_MILLIS));
            if i % 2 == 0 {
                println!("kill {}", i);
                panic!();
            }
            println!("this is thread number {}", i);
        }));
    }
    for handle in thread_handles {
        let _ = handle.join();
    }
}
No matter how b = PoisonPill leaves its scope, normally or after panic!, its Drop implementation kicks in. You can distinguish whether the caller panicked using thread::panicking and take some action; in my case, killing the process.
Looks like exiting the whole process on a panic in any thread is now (Rust 1.62) as simple as adding this to your Cargo.toml:
[profile.release]
panic = 'abort'

[profile.dev]
panic = 'abort'
A panic in a thread then looks like this, with exit code 134:
thread '<unnamed>' panicked at 'panic in thread', src/main.rs:5:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted (core dumped)
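A minimal program consistent with that output could look like this (an illustrative guess, not necessarily the snippet that produced the quoted message):
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        panic!("panic in thread");
    });

    // With panic = 'abort' the process is aborted as soon as the spawned
    // thread panics (exit code 134 = 128 + SIGABRT); with the default
    // unwind setting, join() would return an Err here instead.
    let _ = handle.join();
    println!("only reached with the default unwind behavior");
}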

Resources