How to "unlock" an RwLock? - multithreading

I'm trying to solve the thread-ring problem. In each thread I read the token value:
- If it is not mine, I check whether the program is over; if it is, I finish the thread; otherwise I read again and repeat.
- If it is mine (i.e. it has my id), I acquire the write lock, increase the value of the token, and check whether it's the end; if it is, I tell the main thread that I finished and break out of the thread's loop. If it is not over, I release the write lock and start reading again.
There is no unlock, though. Is there an unlock of the kind I need here?
It also seems that I should release the read lock as well, because the write lock won't be granted while someone is still reading the data. Is that necessary?
fn main() {
    use std::sync::{Arc, RwLock};
    use std::thread;
    use std::sync::mpsc::channel;
    const N: usize = 5; //503;
    const STOP_POINT: usize = 100;
    let n = Arc::new(RwLock::new(1));
    let (sender, reciever) = channel();
    for i in 1..N {
        let (n_c, channel) = (n.clone(), sender.clone());
        // println!("Thread n.{} beeing created!", i);
        let a = thread::Builder::new()
            .name(i.to_string())
            .spawn(move || -> () {
                loop {
                    let mut read_only = n_c.read().unwrap();
                    let say_my_name = (*thread::current().name().unwrap()).to_string();
                    // println!("Thread {} says: gonna try!", say_my_name);
                    while (*read_only % N) != i {
                        if *read_only == 0 {
                            break;
                        }
                        // println!("Thread {} says: aint mine!", say_my_name);
                        read_only = n_c.read().unwrap();
                    } // WAIT
                    println!("Thread {} says: my turn!", say_my_name);
                    let mut ref_to_num = n_c.write().unwrap();
                    *ref_to_num += 1;
                    if *ref_to_num == STOP_POINT {
                        channel.send(say_my_name).unwrap();
                        break;
                    }
                }
                ()
            });
        assert_eq!(a.is_ok(), true);
        // thread::spawn();
        // println!("Thread n.{} created!", i);
    }
    println!("{}", reciever.recv().unwrap());
}

To release a lock, you let it fall out of scope or explicitly invoke its destructor by calling drop.
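In miniature (a standalone sketch with its own RwLock, separate from your program), the two ways of releasing a guard look like this:
use std::sync::RwLock;
fn main() {
    let lock = RwLock::new(0);
    {
        let r = lock.read().unwrap();
        println!("read: {}", *r);
    } // `r` falls out of scope here, releasing the read lock
    let r = lock.read().unwrap();
    drop(r); // releases the read lock explicitly, before the end of the enclosing scope
    let mut w = lock.write().unwrap(); // fine: no read guard is alive any more
    *w += 1;
}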
Here's how your program could be written using drop in two places:
fn main() {
    use std::sync::{Arc, RwLock};
    use std::sync::mpsc::channel;
    use std::thread;
    use std::time::Duration;
    const N: usize = 503;
    const STOP_POINT: usize = 100;
    let n = Arc::new(RwLock::new(1));
    let (sender, receiver) = channel();
    for i in 1..N {
        let (n_c, channel) = (n.clone(), sender.clone());
        // println!("Thread n.{} beeing created!", i);
        thread::Builder::new()
            .name(i.to_string())
            .spawn(move || {
                loop {
                    let mut read_only = n_c.read().unwrap();
                    let say_my_name = (*thread::current().name().unwrap()).to_string();
                    // println!("Thread {} says: gonna try!", say_my_name);
                    while (*read_only % N) != i {
                        if *read_only == 0 {
                            break;
                        }
                        drop(read_only); // release the lock before sleeping
                        // println!("Thread {} says: aint mine!", say_my_name);
                        thread::sleep(Duration::from_millis(1));
                        read_only = n_c.read().unwrap();
                    }
                    println!("Thread {} says: my turn!", say_my_name);
                    drop(read_only); // release the read lock before taking a write lock
                    let mut ref_to_num = n_c.write().unwrap();
                    *ref_to_num += 1;
                    if *ref_to_num == STOP_POINT {
                        channel.send(say_my_name).unwrap();
                        break;
                    }
                }
            })
            .expect("failed to spawn a thread");
        // println!("Thread n.{} created!", i);
    }
    println!("{}", receiver.recv().unwrap());
}
Note that if we don't reassign read_only in the while loop, the compiler will give an error because read_only doesn't hold a valid value after we call drop(read_only). Rust is fine with local variables that are temporarily uninitialized, but of course we need to reinitialize them before we can use them again.
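As a minimal illustration of that rule, independent of locks:
fn main() {
    let mut s = String::from("first");
    drop(s); // `s` is moved out; it no longer holds a valid value
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    s = String::from("second"); // reinitialize it before the next use
    println!("{}", s); // fine again
}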
Here's how the thread's main loop could be written to use a scope to replace one of the drops:
loop {
    let say_my_name = (*thread::current().name().unwrap()).to_string();
    {
        let mut read_only = n_c.read().unwrap();
        // println!("Thread {} says: gonna try!", say_my_name);
        while (*read_only % N) != i {
            if *read_only == 0 {
                break;
            }
            drop(read_only);
            thread::sleep(Duration::from_millis(1));
            // println!("Thread {} says: aint mine!", say_my_name);
            read_only = n_c.read().unwrap();
        }
        println!("Thread {} says: my turn!", say_my_name);
    } // read_only is dropped here
    let mut ref_to_num = n_c.write().unwrap();
    *ref_to_num += 1;
    if *ref_to_num == STOP_POINT {
        channel.send(say_my_name).unwrap();
        break;
    }
}

Related

Terminal state not restored using termion

I am trying to get user input within a certain duration by using two threads: a duration thread and an editing thread. When the duration thread completes while the editing thread has not, the terminal state is not restored, which breaks the terminal. This happens when the user does not press "q" before the duration elapses.
The only way to restore the state of the terminal is to press "q", which breaks the loop in the first thread and calls drop on the termion raw terminal.
use std::io;
use std::io::Write;
use crossbeam_channel::{select, unbounded};
use std::thread;
use std::time;
use std::time::Duration;
use termion;
use termion::input::TermRead;
use termion::raw::IntoRawMode;
fn test() -> String {
    let (s1, r1) = unbounded();
    let (s2, r2) = unbounded();
    let terminal = io::stdout().into_raw_mode();
    let mut stdout = terminal.unwrap();
    let mut stdin = termion::async_stdin().keys();
    thread::spawn(move || {
        // Use asynchronous stdin
        let mut s = String::new();
        loop {
            // Read input (if any)
            let input = stdin.next();
            // If a key was pressed
            if let Some(Ok(key)) = input {
                match key {
                    // Exit if 'q' is pressed
                    termion::event::Key::Char('q') => {
                        s1.send('q');
                        break;
                    }
                    // Else print the pressed key
                    _ => {
                        if let termion::event::Key::Char(k) = key {
                            s1.send(k);
                        }
                        stdout.lock().flush().unwrap();
                    }
                }
            }
            thread::sleep(time::Duration::from_millis(50));
        }
    });
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(3000));
        s2.send(20).unwrap();
    });
    // None of the two operations will become ready within 100 milliseconds.
    let mut val: String = String::new();
    loop {
        select! {
            recv(r1) -> msg => val.push(msg.unwrap()),
            recv(r2) -> _msg => break,
            default(Duration::from_millis(3000)) => println!("timed out"),
        };
    }
    return val;
}
fn main() {
    println!("result {}", test());
}
In Rust, forcefully exiting a thread (such as by ending the main thread before the child threads finish) is almost never a good idea, for reasons you've seen here. The threads' destructors don't get run, which means things can be left in a broken state. The cleanest way is probably to keep an Arc<Mutex<bool>> that becomes true when threads should exit; the threads can check it of their own accord and exit gracefully. Then you should join the threads at the end of the function to ensure they run all the way through. I've documented my changes in the comments:
use std::io;
use std::io::Write;
use crossbeam_channel::{select, unbounded};
use std::thread;
use std::time;
use std::time::Duration;
// import Arc and Mutex
use std::sync::{Arc, Mutex};
use termion;
use termion::input::TermRead;
use termion::raw::IntoRawMode;
fn test() -> String {
    let (s1, r1) = unbounded();
    let (s2, r2) = unbounded();
    let terminal = io::stdout().into_raw_mode();
    let stdout = terminal.unwrap();
    let mut stdin = termion::async_stdin().keys();
    // keep a boolean flag of if we should exit
    let should_exit = Arc::new(Mutex::new(false));
    // clone the Arc for moving into the first thread
    let should_exit_t1 = Arc::clone(&should_exit);
    // keep a vec of handles for joining
    let mut handles = vec![];
    // push the handle onto the vec
    handles.push(thread::spawn(move || {
        loop {
            // if the flag is true then we should gracefully exit
            if *should_exit_t1.lock().unwrap() {
                break;
            }
            // Read input (if any)
            let input = stdin.next();
            // If a key was pressed
            if let Some(Ok(key)) = input {
                match key {
                    // Exit if 'q' is pressed
                    termion::event::Key::Char('q') => {
                        s1.send('q').unwrap();
                        break;
                    }
                    // Else print the pressed key
                    _ => {
                        if let termion::event::Key::Char(k) = key {
                            s1.send(k).unwrap();
                        }
                        stdout.lock().flush().unwrap();
                    }
                }
            }
            thread::sleep(time::Duration::from_millis(50));
        }
    }));
    // also push the handle onto the vec
    handles.push(thread::spawn(move || {
        thread::sleep(Duration::from_millis(3000));
        s2.send(20).unwrap();
    }));
    // None of the two operations will become ready within 100 milliseconds.
    let mut val: String = String::new();
    loop {
        select! {
            recv(r1) -> msg => val.push(msg.unwrap()),
            recv(r2) -> _msg => break,
            default(Duration::from_millis(3000)) => println!("timed out"),
        };
    }
    // before exiting, set the exit flag to true
    *should_exit.lock().unwrap() = true;
    // join all the threads so their destructors are run
    for handle in handles {
        handle.join().unwrap();
    }
    return val;
}
fn main() {
    println!("result {}", test());
}
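Stripped of the termion details, the flag-plus-join pattern looks like this (a minimal sketch; it uses an AtomicBool, but the Mutex<bool> above works the same way):
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;
fn main() {
    let should_exit = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&should_exit);
    let handle = thread::spawn(move || {
        while !flag.load(Ordering::Relaxed) {
            // do one bounded unit of work, then re-check the flag
            thread::sleep(Duration::from_millis(50));
        }
        println!("worker exiting cleanly");
    });
    thread::sleep(Duration::from_millis(200));
    should_exit.store(true, Ordering::Relaxed); // ask the worker to stop
    handle.join().unwrap(); // the thread returns normally, so its destructors run
}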

Condition variable not playing well with thread::sleep

I'm not sure I understand Rust's concurrency support with Mutexes and condition variables. In the following code, the main thread sets poll_thread to be idle for two seconds, then to "read a register" for two seconds, and then to return to "idle":
use std::thread;
use std::sync::{Arc, Mutex, Condvar};
use std::time;
#[derive(PartialEq, Debug)]
enum Command {
    Idle,
    ReadRegister(u32),
}
fn poll_thread(sync_pair: Arc<(Mutex<Command>, Condvar)>) {
    let &(ref mutex, ref cvar) = &*sync_pair;
    loop {
        let mut flag = mutex.lock().unwrap();
        while *flag == Command::Idle {
            flag = cvar.wait(flag).unwrap();
        }
        match *flag {
            Command::Idle => {
                println!("WHAT IMPOSSIBLE!");
                panic!();
            }
            Command::ReadRegister(i) => {
                println!("You want me to read {}?", i);
                // thread::sleep(time::Duration::from_millis(450));
                println!("Ok, here it is: {}", 42);
            }
        }
    }
}
pub fn main() {
    let pair = Arc::new((Mutex::new(Command::Idle), Condvar::new()));
    let pclone = pair.clone();
    let rx_thread = thread::spawn(|| poll_thread(pclone));
    let &(ref mutex, ref cvar) = &*pair;
    for i in 0..10 {
        thread::sleep(time::Duration::from_millis(500));
        if i == 4 {
            println!("Setting ReadRegister");
            let mut flag = mutex.lock().unwrap();
            *flag = Command::ReadRegister(5);
            println!("flag is = {:?}", *flag);
            cvar.notify_one();
        } else if i == 8 {
            println!("Setting Idle");
            let mut flag = mutex.lock().unwrap();
            *flag = Command::Idle;
            println!("flag is = {:?}", *flag);
            cvar.notify_one();
        }
    }
    println!("after notify_one()");
    rx_thread.join();
}
This works as expected, but when the line to sleep for 450 milliseconds is uncommented, the code will often remain in the "read" state and not return to waiting on the condition variable cvar.wait(). Sometimes it will return to idle after, say, 15 seconds!
I would think that when poll_thread reaches the bottom of the loop, it would release the lock, allowing main to acquire and set flag = Command::Idle, and within roughly half a second, poll_thread would return to idle, but it appears that isn't happening when poll_thread sleeps. Why?

How can I force a thread that is blocked reading from a file to resume in Rust?

Because Rust does not have the built-in ability to read from a file in a non-blocking manner, I have to spawn a thread which reads the file /dev/input/fs0 in order to get joystick events. Suppose the joystick is unused (nothing to read), so the reading thread is blocked while reading from the file.
Is there a way for the main thread to force the blocking read of the reading thread to resume, so the reading thread may exit cleanly?
In other languages, I would simply close the file in the main thread. This would force the blocking read to resume. But I have not found a way to do so in Rust, because reading requires a mutable reference to the file.
The idea is to call File::read only when there is available data. If there is no available data, we check a flag to see if the main thread requested to stop. If not, wait and try again.
Here is an example using the nonblock crate:
extern crate nonblock;
use std::fs::File;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
use nonblock::NonBlockingReader;
fn main() {
    let f = File::open("/dev/stdin").expect("open failed");
    let mut reader = NonBlockingReader::from_fd(f).expect("from_fd failed");
    let exit = Arc::new(Mutex::new(false));
    let texit = exit.clone();
    println!("start reading, type something and enter");
    thread::spawn(move || {
        let mut buf: Vec<u8> = Vec::new();
        while !*texit.lock().unwrap() {
            let s = reader.read_available(&mut buf).expect("io error");
            if s == 0 {
                if reader.is_eof() {
                    println!("eof");
                    break;
                }
            } else {
                println!("read {:?}", buf);
                buf.clear();
            }
            thread::sleep(Duration::from_millis(200));
        }
        println!("stop reading");
    });
    thread::sleep(Duration::from_secs(5));
    println!("closing file");
    *exit.lock().unwrap() = true;
    thread::sleep(Duration::from_secs(2));
    println!("\"stop reading\" was printed before the main exit!");
}
The reading loop can also be factored into a reusable helper that hands every chunk it reads to a callback:
fn read_async<F>(file: File, fun: F) -> thread::JoinHandle<()>
where F: Send + 'static + Fn(&Vec<u8>)
{
    let mut reader = NonBlockingReader::from_fd(file).expect("from_fd failed");
    let mut buf: Vec<u8> = Vec::new();
    thread::spawn(move || {
        loop {
            let s = reader.read_available(&mut buf).expect("io error");
            if s == 0 {
                if reader.is_eof() {
                    break;
                }
            } else {
                fun(&buf);
                buf.clear();
            }
            thread::sleep(Duration::from_millis(100));
        }
    })
}
Here is an example using the poll binding of the nix crate. The poll function waits (with a timeout) for specific events:
extern crate nix;
use std::io::Read;
use std::os::unix::io::AsRawFd;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
use nix::poll;
fn main() {
    let mut f = std::fs::File::open("/dev/stdin").expect("open failed");
    let mut pfd = poll::PollFd {
        fd: f.as_raw_fd(),
        events: poll::POLLIN, // is there input data?
        revents: poll::EventFlags::empty(),
    };
    let exit = Arc::new(Mutex::new(false));
    let texit = exit.clone();
    println!("start reading, type something and enter");
    thread::spawn(move || {
        let timeout = 100; // millisecs
        let mut s = unsafe { std::slice::from_raw_parts_mut(&mut pfd, 1) };
        let mut buffer = [0u8; 10];
        loop {
            if poll::poll(&mut s, timeout).expect("poll failed") != 0 {
                let s = f.read(&mut buffer).expect("read failed");
                println!("read {:?}", &buffer[..s]);
            }
            if *texit.lock().unwrap() {
                break;
            }
        }
        println!("stop reading");
    });
    thread::sleep(Duration::from_secs(5));
    println!("closing file");
    *exit.lock().unwrap() = true;
    thread::sleep(Duration::from_secs(2));
    println!("\"stop reading\" was printed before the main exit!");
}

Application on OSX cannot spawn more than 2048 threads

I have a Rust application on OSX firing up a large number of threads, as can be seen in the code below. However, after looking at how many max threads my version of OSX is allowed to create via the sysctl kern.num_taskthreads command, I can see that it is kern.num_taskthreads: 2048, which explains why I can't spin up over 2048 threads.
How do I go about getting past this hard limit?
let threads = 300000;
let requests = 1;
for _x in 0..threads {
    println!("{}", _x);
    let request_clone = request.clone();
    let handle = thread::spawn(move || {
        for _y in 0..requests {
            request_clone.lock().unwrap().push((request::Request::new(request::Request::create_request())));
        }
    });
    child_threads.push(handle);
}
Before starting, I'd encourage you to read about the C10K problem. When you get to this scale, there are a lot more things you need to keep in mind.
That being said, I'd suggest looking at mio...
a lightweight IO library for Rust with a focus on adding as little overhead as possible over the OS abstractions.
Specifically, mio provides an event loop, which allows you to handle a large number of connections without spawning threads. Unfortunately, I don't know of an HTTP library that currently supports mio. You could create one and be a hero to the Rust community!
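To give a flavor of the event-loop style, here is a minimal sketch using mio's current Poll API (note: this API is newer than what existed when this answer was written, the address below is just a placeholder, and it assumes mio's os-poll and net features are enabled):
use std::time::Duration;
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};
fn main() -> std::io::Result<()> {
    // Placeholder address; in practice you would register many sockets,
    // each with its own Token, on one Poll instance.
    let addr = "127.0.0.1:8080".parse().unwrap();
    let mut stream = TcpStream::connect(addr)?; // non-blocking connect
    let mut poll = Poll::new()?;
    poll.registry()
        .register(&mut stream, Token(0), Interest::WRITABLE)?;
    let mut events = Events::with_capacity(128);
    loop {
        // One thread drives all registered sockets instead of one thread per socket.
        poll.poll(&mut events, Some(Duration::from_millis(100)))?;
        for event in events.iter() {
            if event.token() == Token(0) && event.is_writable() {
                println!("socket is connected and ready to write");
                return Ok(());
            }
        }
    }
}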
Not sure how helpful this will be, but I was trying to create a small pool of threads that would create connections and then send them over to an event loop via a channel for reading.
I'm sure this code is probably pretty bad, but here it is anyway as an example. It uses the Hyper library, like you mentioned.
extern crate hyper;
use std::io::Read;
use std::thread;
use std::thread::{JoinHandle};
use std::sync::{Arc, Mutex};
use std::sync::mpsc::channel;
use hyper::Client;
use hyper::client::Response;
use hyper::header::Connection;
const TARGET: i32 = 100;
const THREADS: i32 = 10;
struct ResponseWithString {
    index: i32,
    response: Response,
    data: Vec<u8>,
    complete: bool
}
fn main() {
    // Create a client.
    let url: &'static str = "http://www.gooogle.com/";
    let mut threads = Vec::<JoinHandle<()>>::with_capacity((TARGET * 2) as usize);
    let conn_count = Arc::new(Mutex::new(0));
    let (tx, rx) = channel::<ResponseWithString>();
    for _ in 0..THREADS {
        // Move var references into thread context
        let conn_count = conn_count.clone();
        let tx = tx.clone();
        let t = thread::spawn(move || {
            loop {
                let idx: i32;
                {
                    // Lock, increment, and release
                    let mut count = conn_count.lock().unwrap();
                    *count += 1;
                    idx = *count;
                }
                if idx > TARGET {
                    break;
                }
                let mut client = Client::new();
                // Creating an outgoing request.
                println!("Creating connection {}...", idx);
                let res = client.get(url) // Get URL...
                    .header(Connection::close()) // Set headers...
                    .send().unwrap(); // Fire!
                println!("Pushing response {}...", idx);
                tx.send(ResponseWithString {
                    index: idx,
                    response: res,
                    data: Vec::<u8>::with_capacity(1024),
                    complete: false
                }).unwrap();
            }
        });
        threads.push(t);
    }
    let mut responses = Vec::<ResponseWithString>::with_capacity(TARGET as usize);
    let mut buf: [u8; 1024] = [0; 1024];
    let mut completed_count = 0;
    loop {
        if completed_count >= TARGET {
            break; // No more work!
        }
        match rx.try_recv() {
            Ok(r) => {
                println!("Incoming response! {}", r.index);
                responses.push(r)
            },
            _ => { }
        }
        for r in &mut responses {
            if r.complete {
                continue;
            }
            // Read the Response.
            let res = &mut r.response;
            let data = &mut r.data;
            let idx = &r.index;
            match res.read(&mut buf) {
                Ok(i) => {
                    if i == 0 {
                        println!("No more data! {}", idx);
                        r.complete = true;
                        completed_count += 1;
                    }
                    else {
                        println!("Got data! {} => {}", idx, i);
                        for x in 0..i {
                            data.push(buf[x]);
                        }
                    }
                }
                Err(e) => {
                    panic!("Oh no! {} {}", idx, e);
                }
            }
        }
    }
}

How do I use a Condvar to limit multithreading?

I'm trying to use a Condvar to limit the number of threads that are active at any given time. I'm having a hard time finding good examples on how to use Condvar. So far I have:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let &(ref num, ref cvar) = &*thread_count;
            {
                let mut start = num.lock().unwrap();
                if *start >= 20 {
                    cvar.wait(start);
                }
                *start += 1;
            }
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
The compiler error given is:
error[E0382]: use of moved value: `start`
  --> src/main.rs:16:18
   |
14 |                     cvar.wait(start);
   |                               ----- value moved here
15 |                 }
16 |                 *start += 1;
   |                  ^^^^^ value used here after move
   |
   = note: move occurs because `start` has type `std::sync::MutexGuard<'_, i32>`, which does not implement the `Copy` trait
I'm entirely unsure if my use of Condvar is correct. I tried staying as close as I could to the example in the Rust API documentation. What is the proper way to implement this?
Here's a version that compiles:
use std::{
    sync::{Arc, Condvar, Mutex},
    thread,
};
fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let (num, cvar) = &*thread_count;
            let mut start = cvar
                .wait_while(num.lock().unwrap(), |start| *start >= 20)
                .unwrap();
            // Before Rust 1.42, use this:
            //
            // let mut start = num.lock().unwrap();
            // while *start >= 20 {
            //     start = cvar.wait(start).unwrap()
            // }
            *start += 1;
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
The important part can be seen from the signature of Condvar::wait_while or Condvar::wait:
pub fn wait_while<'a, T, F>(
    &self,
    guard: MutexGuard<'a, T>,
    condition: F
) -> LockResult<MutexGuard<'a, T>>
where
    F: FnMut(&mut T) -> bool,

pub fn wait<'a, T>(
    &self,
    guard: MutexGuard<'a, T>
) -> LockResult<MutexGuard<'a, T>>
This says that wait_while / wait consumes the guard, which is why you get the error you did - you no longer own start, so you can't call any methods on it!
These functions are doing a great job of reflecting how Condvars work - you give up the lock on the Mutex (represented by start) for a while, and when the function returns you get the lock again.
The fix is to give up the lock and then grab the lock guard return value from wait_while / wait. I've also switched from an if to a while, as encouraged by huon.
For reference, the usual way to have a limited number of threads in a given scope is with a Semaphore.
Unfortunately, Semaphore was never stabilized, was deprecated in Rust 1.8 and was removed in Rust 1.9. There are crates available that add semaphores on top of other concurrency primitives.
let sema = Arc::new(Semaphore::new(20));
for i in 0..100 {
    let sema = sema.clone();
    thread::spawn(move || {
        let _guard = sema.acquire();
        println!("{}", i);
    });
}
This isn't quite doing the same thing, since each thread does not print the total number of threads that were inside the scope when it entered.
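As a rough sketch of what such a semaphore crate might do under the hood (illustrative only, not production code), a counting semaphore can be built from a Mutex and a Condvar:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
struct Semaphore {
    count: Mutex<usize>,
    cvar: Condvar,
}
impl Semaphore {
    fn new(count: usize) -> Self {
        Semaphore { count: Mutex::new(count), cvar: Condvar::new() }
    }
    fn acquire(&self) {
        let mut count = self.count.lock().unwrap();
        while *count == 0 {
            count = self.cvar.wait(count).unwrap();
        }
        *count -= 1;
    }
    fn release(&self) {
        *self.count.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}
fn main() {
    let sema = Arc::new(Semaphore::new(20));
    let handles: Vec<_> = (0..100)
        .map(|i| {
            let sema = Arc::clone(&sema);
            thread::spawn(move || {
                sema.acquire(); // at most 20 threads get past this point at once
                println!("{}", i);
                sema.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}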
I realized the code I provided didn't do exactly what I wanted it to, so I'm putting this edit of Shepmaster's code here for future reference.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
    let mut i = 0;
    while i < 150 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let x;
            let &(ref num, ref cvar) = &*thread_count;
            {
                let start = num.lock().unwrap();
                let mut start = if *start >= 20 {
                    cvar.wait(start).unwrap()
                } else {
                    start
                };
                *start += 1;
                x = *start;
            }
            println!("{}", x);
            {
                let mut counter = num.lock().unwrap();
                *counter -= 1;
            }
            cvar.notify_one();
        });
        i += 1;
    }
    println!("done");
}
Running this in the playground should show more or less expected behavior.
You want to use a while loop, and re-assign start at each iteration, like:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
    let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
    let mut i = 0;
    while i < 100 {
        let thread_count = thread_count_arc.clone();
        thread::spawn(move || {
            let &(ref num, ref cvar) = &*thread_count;
            let mut start = num.lock().unwrap();
            while *start >= 20 {
                let current = cvar.wait(start).unwrap();
                start = current;
            }
            *start += 1;
            println!("hello");
            cvar.notify_one();
        });
        i += 1;
    }
}
See also some articles on the topic:
https://medium.com/@polyglot_factotum/rust-concurrency-five-easy-pieces-871f1c62906a
https://medium.com/@polyglot_factotum/rust-concurrency-patterns-condvars-and-locks-e278f18db74f
