The mutex and thread examples I have found on the internet are not much help, because I cannot work out how to lock a block of code (or a whole method) with a mutex.
// mutex example
#include <iostream>   // std::cout
#include <thread>     // std::thread
#include <mutex>      // std::mutex

std::mutex mtx;       // mutex for critical section

void print_block (int n, char c) {
    // critical section (exclusive access to std::cout signaled by locking mtx):
    mtx.lock();
    for (int i = 0; i < n; ++i) { std::cout << c; }
    std::cout << '\n';
    mtx.unlock();
}

int main ()
{
    std::thread th1 (print_block, 50, '*');
    std::thread th2 (print_block, 50, '$');

    th1.join();
    th2.join();

    return 0;
}
What is the equivalent Rust code for this C++ snippet, locking around the loop and the printing? In the Rust examples I have found, the mutex has to wrap some data, such as:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(vec![1u32, 2, 3]));

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data[i] += 1;
        });
    }

    thread::sleep_ms(50);
}
I have written the following code along the same lines. Is it fine, or can it be written in a better way?
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let mtx = Arc::new(Mutex::new(""));
    let mtx1 = mtx.clone();
    let mtx2 = mtx.clone();
    let n = 50;

    let th1 = thread::spawn(move || {
        mtx1.lock().unwrap();
        printData(n, "*".to_string());
    });
    let th2 = thread::spawn(move || {
        mtx2.lock().unwrap();
        printData(n, "$".to_string());
    });

    th1.join();
    th2.join();
}

fn printData(n: u32, c: String) {
    let mut str_val: String = "".to_string();
    for i in 0..n {
        str_val.push_str(&c);
    }
    println!("{}", str_val);
}
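Note that in the snippet above, mtx1.lock().unwrap() returns a guard that is dropped at the end of that statement, so printData actually runs after the mutex has already been released. To mirror the C++ critical section, the guard has to be bound to a variable so the lock is held for the whole block. A minimal sketch of that idea (my own restructuring, not the only way to write it):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The value inside the mutex is irrelevant here; only the lock matters.
    let mtx = Arc::new(Mutex::new(()));
    let n = 50;

    let handles: Vec<_> = ['*', '$']
        .iter()
        .map(|&c| {
            let mtx = Arc::clone(&mtx);
            thread::spawn(move || {
                // Hold the guard for the whole block; it unlocks when dropped.
                let _guard = mtx.lock().unwrap();
                let line: String = std::iter::repeat(c).take(n).collect();
                println!("{}", line);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}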
I'm new to Rust. I'm supposed to use a Mutex and an Arc to create a critical section within the print_lots function to stop the race condition from happening. Any ideas?
fn main() {
    let num_of_threads = 4;
    let mut array_of_threads = vec![];

    for id in 0..num_of_threads {
        array_of_threads.push(std::thread::spawn(move || print_lots(id)));
    }

    for t in array_of_threads {
        t.join().expect("Thread join failure");
    }
}

fn print_lots(id: u32) {
    println!("Begin [{}]", id);
    for _i in 0..100 {
        print!("{} ", id);
    }
    println!("\nEnd [{}]", id);
}
A Mutex in Rust works a little differently from the locks you may be used to in other languages. Instead of tracking the lock independently of the value, a Rust Mutex owns the data and prevents it from being accessed without first obtaining a lock, which is enforced at compile time.
The warning you are getting is because you have locked the Mutex, but then done nothing with the value. The warning is there because this is almost certainly a mistake.
use std::sync::Mutex;

fn main() {
    let foo = Mutex::new(0);
    // It's often best to just unwrap and panic if the lock is poisoned
    if let Ok(mut lock) = foo.lock() {
        *lock = 2;
        // The mutex is unlocked automatically when lock goes out of scope here
    }
    println!("{:?}", foo); // Mutex { data: 2 }
}
I am guessing that your real problem is that you want to synchronise the print statements so that output from different threads is not intermingled.
One way to do that is to obtain a lock on stdout, which internally uses a lock and provides a similar API to Mutex:
use std::io::{self, Write};

fn print_lots(id: u32) {
    let stdout = io::stdout();
    println!("Begin [{}]", id);
    let mut handle = stdout.lock();
    for _i in 0..100 {
        write!(&mut handle, "{} ", id).unwrap();
    }
    println!("\nEnd [{}]", id);
    // handle is dropped here, unlocking stdout
}
In your simplified example, creating a long-lived lock in each thread is counterproductive since each thread will block the others and the result is sequential rather than concurrent. This might still make sense though if your real-world code has more going on.
use std::sync::{Arc, Mutex};

fn main() {
    let num_of_threads = 4;
    let mut array_of_threads = vec![];
    let counter = Arc::new(Mutex::new(0));

    for id in 0..num_of_threads {
        let counter_clone = counter.clone();
        array_of_threads.push(std::thread::spawn(move || print_lots(id, counter_clone)));
    }

    for t in array_of_threads {
        t.join().expect("Thread join failure");
    }
}

fn print_lots(id: u32, c: Arc<Mutex<u32>>) {
    println!("Begin [{}]", id);
    let _guard = c.lock().unwrap();
    for _i in 0..100 {
        print!("{} ", id);
    }
    println!("\nEnd [{}]", id);
}
I want to build a single-producer multiple-consumer example in Rust, where the producer is bounded to have no more than 10 outstanding items. I modeled a solution in C that uses a mutex and two condvars: one condvar for the consumers to wait on when there is nothing to consume, and one condvar for the producer to wait on when the count of unconsumed items exceeds 10. The C code is below.
As I understand it from the Rust docs, there must be a 1-1 connection between std::sync::Mutex and a std::sync::Condvar so I can't make an exact translation of my C solution.
Is there some other way (that I cannot see) to achieve the same end in Rust using std::sync::Mutex and std::sync::Condvar?
#define _GNU_SOURCE
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

//
// This is a simple example of using a mutex and 2 condition variables to
// sync a single writer and multiple readers interacting with a bounded (fixed max size) queue
//
// in this toy example a queue is simulated by an int counter n_resource
//
int n_resource;
pthread_cond_t rdr_cvar;
pthread_cond_t wrtr_cvar;
pthread_mutex_t mutex;

void* reader(void* data)
{
    long id = (long)data;
    for (;;) {
        pthread_mutex_lock(&mutex);
        while (n_resource <= 0) {
            pthread_cond_wait(&rdr_cvar, &mutex);
        }
        printf("Reader %ld n_resource = %d\n", id, n_resource);
        --n_resource;
        // if there are still things to read - signal one reader
        if (n_resource > 0) {
            pthread_cond_signal(&rdr_cvar);
        }
        // if there is space for the writer to add another item - signal the writer
        if (n_resource < 10) {
            pthread_cond_signal(&wrtr_cvar);
        }
        pthread_mutex_unlock(&mutex);
    }
}

void* writer(void* data)
{
    for (;;) {
        pthread_mutex_lock(&mutex);
        printf("Writer before while n_resource %d \n", n_resource);
        while (n_resource > 10) {
            pthread_cond_wait(&wrtr_cvar, &mutex);
        }
        printf("Writer after while n_resource %d \n", n_resource);
        ++n_resource;
        // if there is something for a reader to read - signal one of the readers
        if (n_resource > 0) {
            pthread_cond_signal(&rdr_cvar);
        }
        pthread_mutex_unlock(&mutex);
    }
}

int main()
{
    pthread_t rdr_thread_1;
    pthread_t rdr_thread_2;
    pthread_t wrtr_thread;

    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&rdr_cvar, NULL);
    pthread_cond_init(&wrtr_cvar, NULL);

    pthread_create(&rdr_thread_1, NULL, &reader, (void*)1L);
    pthread_create(&rdr_thread_2, NULL, &reader, (void*)2L);
    pthread_create(&wrtr_thread, NULL, &writer, NULL);

    pthread_join(wrtr_thread, NULL);
    pthread_join(rdr_thread_1, NULL);
    pthread_join(rdr_thread_2, NULL);
}
While a Condvar needs to be associated with only one Mutex, it is not necessary for a Mutex to be associated with only one Condvar.
For example, the following code seems to work just fine - you can run it on the playground.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Q {
    rdr_cvar: Condvar,
    wrtr_cvar: Condvar,
    mutex: Mutex<i32>,
}

impl Q {
    pub fn new() -> Q {
        Q {
            rdr_cvar: Condvar::new(),
            wrtr_cvar: Condvar::new(),
            mutex: Mutex::new(0),
        }
    }
}

fn writer(id: i32, qq: Arc<Q>) {
    let q = &*qq;
    for i in 0..10 {
        let guard = q.mutex.lock().unwrap();
        let mut guard = q.wrtr_cvar.wait_while(guard, |n| *n > 3).unwrap();
        println!("{}: Writer {} n_resource = {}\n", i, id, *guard);
        *guard += 1;
        if *guard > 0 {
            q.rdr_cvar.notify_one();
        }
        if *guard < 10 {
            q.wrtr_cvar.notify_one();
        }
    }
}

fn reader(id: i32, qq: Arc<Q>) {
    let q = &*qq;
    for i in 0..10 {
        let guard = q.mutex.lock().unwrap();
        let mut guard = q.rdr_cvar.wait_while(guard, |n| *n <= 0).unwrap();
        println!("{} Reader {} n_resource = {}\n", i, id, *guard);
        *guard -= 1;
        if *guard > 0 {
            q.rdr_cvar.notify_one();
        }
        if *guard < 10 {
            q.wrtr_cvar.notify_one();
        }
    }
}

fn main() {
    let data = Arc::new(Q::new());
    let data2 = data.clone();

    let t1 = thread::spawn(move || writer(0, data2));
    let t2 = thread::spawn(move || reader(1, data));

    t1.join().unwrap();
    t2.join().unwrap();
}
Because Rust does not have a built-in way to read from a file in a non-blocking manner, I have to spawn a thread which reads the file /dev/input/fs0 in order to get joystick events. Suppose the joystick is unused (nothing to read); the reading thread is then blocked while reading from the file.
Is there a way for the main thread to force the blocking read of the reading thread to resume, so the reading thread may exit cleanly?
In other languages, I would simply close the file in the main thread. This would force the blocking read to resume. But I have not found a way to do so in Rust, because reading requires a mutable reference to the file.
The idea is to call File::read only when there is available data. If there is no available data, we check a flag to see if the main thread requested to stop. If not, wait and try again.
Here is an example using the nonblock crate:
extern crate nonblock;

use std::fs::File;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

use nonblock::NonBlockingReader;

fn main() {
    let f = File::open("/dev/stdin").expect("open failed");
    let mut reader = NonBlockingReader::from_fd(f).expect("from_fd failed");

    let exit = Arc::new(Mutex::new(false));
    let texit = exit.clone();

    println!("start reading, type something and enter");

    thread::spawn(move || {
        let mut buf: Vec<u8> = Vec::new();
        while !*texit.lock().unwrap() {
            let s = reader.read_available(&mut buf).expect("io error");
            if s == 0 {
                if reader.is_eof() {
                    println!("eof");
                    break;
                }
            } else {
                println!("read {:?}", buf);
                buf.clear();
            }
            thread::sleep(Duration::from_millis(200));
        }
        println!("stop reading");
    });

    thread::sleep(Duration::from_secs(5));
    println!("closing file");
    *exit.lock().unwrap() = true;

    thread::sleep(Duration::from_secs(2));
    println!("\"stop reading\" was printed before the main exit!");
}
The reading loop can also be factored into a helper that takes a callback:

fn read_async<F>(file: File, fun: F) -> thread::JoinHandle<()>
where
    F: Send + 'static + Fn(&Vec<u8>),
{
    let mut reader = NonBlockingReader::from_fd(file).expect("from_fd failed");
    let mut buf: Vec<u8> = Vec::new();
    thread::spawn(move || {
        loop {
            let s = reader.read_available(&mut buf).expect("io error");
            if s == 0 {
                if reader.is_eof() {
                    break;
                }
            } else {
                fun(&buf);
                buf.clear();
            }
            thread::sleep(Duration::from_millis(100));
        }
    })
}
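A possible call site for read_async, assuming the same imports as the snippet above (the closure here is just an illustration):

fn main() {
    let f = File::open("/dev/stdin").expect("open failed");
    // Print whatever arrives; the spawned thread exits on EOF.
    let handle = read_async(f, |buf| println!("read {:?}", buf));
    handle.join().expect("reader thread panicked");
}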
Here is an example using the poll binding of the nix crate. The poll function waits (with a timeout) for specific events:
extern crate nix;

use std::io::Read;
use std::os::unix::io::AsRawFd;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

use nix::poll;

fn main() {
    let mut f = std::fs::File::open("/dev/stdin").expect("open failed");

    let mut pfd = poll::PollFd {
        fd: f.as_raw_fd(),
        events: poll::POLLIN, // is there input data?
        revents: poll::EventFlags::empty(),
    };

    let exit = Arc::new(Mutex::new(false));
    let texit = exit.clone();

    println!("start reading, type something and enter");

    thread::spawn(move || {
        let timeout = 100; // millisecs
        let mut s = unsafe { std::slice::from_raw_parts_mut(&mut pfd, 1) };
        let mut buffer = [0u8; 10];
        loop {
            if poll::poll(&mut s, timeout).expect("poll failed") != 0 {
                let s = f.read(&mut buffer).expect("read failed");
                println!("read {:?}", &buffer[..s]);
            }
            if *texit.lock().unwrap() {
                break;
            }
        }
        println!("stop reading");
    });

    thread::sleep(Duration::from_secs(5));
    println!("closing file");
    *exit.lock().unwrap() = true;

    thread::sleep(Duration::from_secs(2));
    println!("\"stop reading\" was printed before the main exit!");
}
I am new to Rust and struggling to deal with all those wrapper types. I am trying to write code that is semantically equivalent to the following C code. It creates a big bookkeeping table, but divides it so that every thread only accesses its own small slice of that table. The big table is not accessed again until the other threads have quit and no longer touch their slices.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void* write_slice(void* arg) {
    int* slice = (int*) arg;
    int i;
    for (i = 0; i < 10; i++)
        slice[i] = i;
    return NULL;
}

int main()
{
    int* table = (int*) malloc(100 * sizeof(int));
    int* slice[10];
    int i;
    for (i = 0; i < 10; i++) {
        slice[i] = table + i * 10;
    }

    // create a pthread for each slice
    pthread_t p[10];
    for (i = 0; i < 10; i++)
        pthread_create(&p[i], NULL, write_slice, slice[i]);
    for (i = 0; i < 10; i++)
        pthread_join(p[i], NULL);

    for (i = 0; i < 100; i++)
        printf("%d,", table[i]);
}
How do I use Rust's types and ownership to achieve this?
Let's start with the code:
// cargo-deps: crossbeam="0.7.3"
extern crate crossbeam;

const CHUNKS: usize = 10;
const CHUNK_SIZE: usize = 10;

fn main() {
    let mut table = [0; CHUNKS * CHUNK_SIZE];

    // Scoped threads allow the compiler to prove that no threads will outlive
    // table (which would be bad).
    let _ = crossbeam::scope(|scope| {
        // Chop `table` into disjoint sub-slices.
        for slice in table.chunks_mut(CHUNK_SIZE) {
            // Spawn a thread operating on that subslice.
            scope.spawn(move |_| write_slice(slice));
        }
        // `crossbeam::scope` ensures that *all* spawned threads join before
        // returning control back from this closure.
    });

    // At this point, all threads have joined, and we have exclusive access to
    // `table` again. Huzzah for 100% safe multi-threaded stack mutation!
    println!("{:?}", &table[..]);
}

fn write_slice(slice: &mut [i32]) {
    for (i, e) in slice.iter_mut().enumerate() {
        *e = i as i32;
    }
}
One thing to note is that this needs the crossbeam crate. Rust used to have a similar "scoped" construct, but a soundness hole was found right before 1.0, so it was deprecated with no time to replace it. crossbeam is basically the replacement.
What Rust lets you do here is express the idea that, whatever the code does, none of the threads created within the call to crossbeam::scope will survive that scope. As such, anything borrowed from outside that scope will live longer than the threads. Thus, the threads can freely access those borrows without having to worry about things like, say, a thread outliving the stack frame that table is defined in and scribbling over the stack.
So this should do more or less the same thing as the C code, though without that nagging worry that you might have missed something. :)
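Worth noting: since Rust 1.63 the standard library has scoped threads again, as std::thread::scope, so roughly the same pattern can be sketched without an external crate:

const CHUNKS: usize = 10;
const CHUNK_SIZE: usize = 10;

fn main() {
    let mut table = [0i32; CHUNKS * CHUNK_SIZE];

    // Every thread spawned inside the closure is joined before `scope` returns,
    // so handing out disjoint mutable chunks of `table` is safe.
    std::thread::scope(|scope| {
        for slice in table.chunks_mut(CHUNK_SIZE) {
            scope.spawn(move || write_slice(slice));
        }
    });

    println!("{:?}", &table[..]);
}

fn write_slice(slice: &mut [i32]) {
    for (i, e) in slice.iter_mut().enumerate() {
        *e = i as i32;
    }
}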
Finally, here's the same thing using scoped_threadpool instead. The only real practical difference is that this allows us to control how many threads are used.
// cargo-deps: scoped_threadpool="0.1.6"
extern crate scoped_threadpool;

const CHUNKS: usize = 10;
const CHUNK_SIZE: usize = 10;

fn main() {
    let mut table = [0; CHUNKS * CHUNK_SIZE];
    let mut pool = scoped_threadpool::Pool::new(CHUNKS as u32);

    pool.scoped(|scope| {
        for slice in table.chunks_mut(CHUNK_SIZE) {
            scope.execute(move || write_slice(slice));
        }
    });

    println!("{:?}", &table[..]);
}

fn write_slice(slice: &mut [i32]) {
    for (i, e) in slice.iter_mut().enumerate() {
        *e = i as i32;
    }
}
I'm trying to use a Condvar to limit the number of threads that are active at any given time. I'm having a hard time finding good examples on how to use Condvar. So far I have:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let &(ref num, ref cvar) = &*thread_count;
            {
                let mut start = num.lock().unwrap();
                if *start >= 20 {
                    cvar.wait(start);
                }
                *start += 1;
            }
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
The compiler error given is:
error[E0382]: use of moved value: `start`
  --> src/main.rs:16:18
   |
14 |                 cvar.wait(start);
   |                           ----- value moved here
15 |             }
16 |             *start += 1;
   |             ^^^^^ value used here after move
   |
   = note: move occurs because `start` has type `std::sync::MutexGuard<'_, i32>`, which does not implement the `Copy` trait
I'm entirely unsure if my use of Condvar is correct. I tried to stay as close as I could to the example in the Rust API documentation. What is the proper way to implement this?
Here's a version that compiles:
use std::{
    sync::{Arc, Condvar, Mutex},
    thread,
};

fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let (num, cvar) = &*thread_count;
            let mut start = cvar
                .wait_while(num.lock().unwrap(), |start| *start >= 20)
                .unwrap();
            // Before Rust 1.42, use this:
            //
            // let mut start = num.lock().unwrap();
            // while *start >= 20 {
            //     start = cvar.wait(start).unwrap()
            // }
            *start += 1;
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
The important part can be seen from the signature of Condvar::wait_while or Condvar::wait:
pub fn wait_while<'a, T, F>(
    &self,
    guard: MutexGuard<'a, T>,
    condition: F,
) -> LockResult<MutexGuard<'a, T>>
where
    F: FnMut(&mut T) -> bool,

pub fn wait<'a, T>(
    &self,
    guard: MutexGuard<'a, T>,
) -> LockResult<MutexGuard<'a, T>>
This says that wait_while / wait consumes the guard, which is why you get the error you did - you no longer own start, so you can't call any methods on it!
These functions are doing a great job of reflecting how Condvars work - you give up the lock on the Mutex (represented by start) for a while, and when the function returns you get the lock again.
The fix is to give up the lock and then grab the lock guard return value from wait_while / wait. I've also switched from an if to a while, as encouraged by huon.
For reference, the usual way to have a limited number of threads in a given scope is with a Semaphore.
Unfortunately, Semaphore was never stabilized, was deprecated in Rust 1.8 and was removed in Rust 1.9. There are crates available that add semaphores on top of other concurrency primitives.
let sema = Arc::new(Semaphore::new(20));

for i in 0..100 {
    let sema = sema.clone();
    thread::spawn(move || {
        let _guard = sema.acquire();
        println!("{}", i);
    });
}
This isn't quite doing the same thing, though, since each thread no longer prints the number of threads that were inside the scope when it entered.
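For illustration, here is a rough sketch of how such a crate might build a counting semaphore out of Mutex and Condvar; the type and method names are made up, not a real crate's API:

use std::sync::{Condvar, Mutex};

struct Semaphore {
    count: Mutex<usize>,
    cvar: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore {
            count: Mutex::new(permits),
            cvar: Condvar::new(),
        }
    }

    // Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut count = self
            .cvar
            .wait_while(self.count.lock().unwrap(), |c| *c == 0)
            .unwrap();
        *count -= 1;
    }

    // Return a permit and wake one waiter.
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

An RAII guard like the _guard in the snippet above could be layered on top by having acquire return a type whose Drop implementation calls release.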
I realized the code I provided didn't do exactly what I wanted it to, so I'm putting this edit of Shepmaster's code here for future reference.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
    let mut i = 0;
    while i < 150 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let x;
            let &(ref num, ref cvar) = &*thread_count;
            {
                let start = num.lock().unwrap();
                let mut start = if *start >= 20 {
                    cvar.wait(start).unwrap()
                } else {
                    start
                };
                *start += 1;
                x = *start;
            }
            println!("{}", x);
            {
                let mut counter = num.lock().unwrap();
                *counter -= 1;
            }
            cvar.notify_one();
        });
        i += 1;
    }
    println!("done");
}
Running this in the playground should show more or less expected behavior.
You want to use a while loop, and re-assign start at each iteration, like:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let &(ref num, ref cvar) = &*thread_count;
            let mut start = num.lock().unwrap();
            while *start >= 20 {
                let current = cvar.wait(start).unwrap();
                start = current;
            }
            *start += 1;
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
See also some articles on the topic:
https://medium.com/@polyglot_factotum/rust-concurrency-five-easy-pieces-871f1c62906a
https://medium.com/@polyglot_factotum/rust-concurrency-patterns-condvars-and-locks-e278f18db74f