How to update in one thread and read from many?

I've failed to get this code past the borrow-checker:
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug, Clone)]
struct State {
    count: u64,
    not_copyable: Vec<u8>,
}

fn bar(thread_num: u8, arc_state: Arc<State>) {
    let state = arc_state.clone();
    loop {
        sleep(Duration::from_millis(1000));
        println!("thread_num: {}, state.count: {}", thread_num, state.count);
    }
}

fn main() -> std::io::Result<()> {
    let mut state = State {
        count: 0,
        not_copyable: vec![],
    };
    let arc_state = Arc::new(state);
    for i in 0..2 {
        spawn(move || {
            bar(i, arc_state.clone());
        });
    }
    loop {
        sleep(Duration::from_millis(300));
        state.count += 1;
    }
}
I'm probably trying the wrong thing.
I want one (main) thread which can update state and many threads which can read state.
How should I do this in Rust?
I have read the Rust book on shared state, but that uses mutexes which seem overly complex for a single writer / multiple reader situation.
In C I would achieve this with a generous sprinkling of _Atomic.

Atomics are indeed a proper way; there are plenty of them in std::sync::atomic. Your example needs two fixes.
First, the Arc must be cloned before it is moved into each closure, so your loop becomes:

for i in 0..2 {
    let arc_state = arc_state.clone();
    spawn(move || { bar(i, arc_state); });
}
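Each spawned thread now owns its own clone of the Arc, and the original arc_state stays available in main for the writer loop.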
Second, using AtomicU64 is fairly straightforward, though you have to use its methods explicitly and pick an Ordering for every access:
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug)]
struct State {
    count: AtomicU64,
    not_copyable: Vec<u8>,
}

fn bar(thread_num: u8, arc_state: Arc<State>) {
    let state = arc_state.clone();
    loop {
        sleep(Duration::from_millis(1000));
        println!(
            "thread_num: {}, state.count: {}",
            thread_num,
            state.count.load(Ordering::Relaxed)
        );
    }
}

fn main() -> std::io::Result<()> {
    let state = State {
        count: AtomicU64::new(0),
        not_copyable: vec![],
    };
    let arc_state = Arc::new(state);
    for i in 0..2 {
        let arc_state = arc_state.clone();
        spawn(move || {
            bar(i, arc_state);
        });
    }
    loop {
        sleep(Duration::from_millis(300));
        // you can't use `state` here, because it moved
        arc_state.count.fetch_add(1, Ordering::Relaxed);
    }
}
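If you also need to update the non-atomic field (not_copyable here), atomics alone won't cover it; the single-writer / many-readers case then maps onto RwLock. A minimal sketch of that variant (my own addition, not part of the two fixes above):

use std::sync::{Arc, RwLock};
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug)]
struct State {
    count: u64,
    not_copyable: Vec<u8>,
}

fn main() {
    let arc_state = Arc::new(RwLock::new(State {
        count: 0,
        not_copyable: vec![],
    }));
    for i in 0..2 {
        let arc_state = arc_state.clone();
        spawn(move || loop {
            sleep(Duration::from_millis(1000));
            // Any number of readers can hold the read lock at the same time.
            let state = arc_state.read().unwrap();
            println!("thread_num: {}, state.count: {}", i, state.count);
        });
    }
    loop {
        sleep(Duration::from_millis(300));
        // Only the writer takes the exclusive write lock, and only briefly.
        let mut state = arc_state.write().unwrap();
        state.count += 1;
        state.not_copyable.push(0);
    }
}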

Related

Separate thread for loop in struct implementation

I'm working with a struct where I need to read the GPIO pin of a Raspberry Pi, and increment a 'register' within the struct every time the pin goes high. Concurrently with this, I would like to be able to sample the register every now and then to see what the current value is.
When implementing this, my thought was to spawn a thread that continuously loops checking if the pin has gone from Low to High, and increment the register from within the thread. Then, from the parent thread, I can read the value of the register and report it.
After doing some research, it seems that a scoped thread would not be the correct implementation of this, because the child thread would never hand over ownership of the register to the parent thread.
Rather, I believe I should use an Arc/Mutex combination guarding the register and only momentarily take control over the lock to increment the register. Is this the correct interpretation of multithreading in Rust?
Assuming the above is correct, I'm unsure of how to implement this in Rust.
struct GpioReader {
    register: Arc<Mutex<i64>>,
    input_pin: Arc<Mutex<InputPin>>,
}

impl GpioReader {
    pub fn new(input_pin: InputPin) -> Self {
        Self {
            register: Arc::new(Mutex::from(0)),
            input_pin: Arc::new(Mutex::from(input_pin)),
        }
    }

    pub fn start(&self) {
        let pin = self.input_pin.lock().unwrap(); // ???
        let register = self.register.lock().unwrap(); // ???
        let handle = spawn(move || loop {
            match pin.read() { // ???
                High => register += 1, // ???
                Low => (),
            }
            sleep(Duration::from_millis(SLEEP_TIME));
        });
        handle.join().expect("Failed to join thread.");
    }

    pub fn get_register(&self) -> i64 {
        let reg_val = self.register.lock().unwrap();
        return reg_val;
    }
}
Given the above, how do I declare the pin and register variables in such a way that I can read off the pin and increment the register within the loop? My best guess is I'll have to instantiate some kind of reference to these members of the struct outside of the loop, and then pass the reference into the loop at which point I can use the lock() method of the Arc.
Edit: I'm using a Raspberry Pi 3A+ running Raspbian. The InputPin in question is from the rppal crate.
Mutex<i64> is an anti-pattern. Replace it with AtomicI64.
Arc is meant to be cloned with Arc::clone() to create new references to the same object.
Don't use shared ownership if not necessary. InputPin is only used from within the thread, so move it in instead.
I'm unsure why you do handle.join(). If you want it to continue in the background, don't wait for it with .join().
use std::{
    sync::{
        atomic::{AtomicI64, Ordering},
        Arc,
    },
    thread::{self, sleep},
    time::Duration,
};

// `Level` is the enum returned by `InputPin::read()`.
use rppal::gpio::{InputPin, Level};

struct GpioReader {
    register: Arc<AtomicI64>,
    input_pin: Option<InputPin>,
}

const SLEEP_TIME: Duration = Duration::from_millis(1000);

impl GpioReader {
    pub fn new(input_pin: InputPin) -> Self {
        Self {
            register: Arc::new(AtomicI64::new(0)),
            input_pin: Some(input_pin),
        }
    }

    pub fn start(&mut self) {
        let register = Arc::clone(&self.register);
        let pin = self.input_pin.take().expect("Thread already running!");
        // The thread is intentionally detached; there is nothing to join.
        thread::spawn(move || loop {
            match pin.read() {
                Level::High => {
                    register.fetch_add(1, Ordering::Relaxed);
                }
                Level::Low => (),
            }
            sleep(SLEEP_TIME);
        });
    }

    pub fn get_register(&self) -> i64 {
        self.register.load(Ordering::Relaxed)
    }
}
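For completeness, a hypothetical way to drive this from main (the pin number 17 and the Gpio::new()/get()/into_input() calls are assumptions based on the rppal API; adjust them for your wiring):

use rppal::gpio::Gpio;
use std::thread::sleep;
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // BCM pin 17 is an assumption; use whichever pin you wired up.
    let pin = Gpio::new()?.get(17)?.into_input();
    let mut reader = GpioReader::new(pin);
    reader.start();
    loop {
        sleep(Duration::from_secs(10));
        println!("register: {}", reader.get_register());
    }
}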
If you want to stop the thread automatically when the GpioReader object is dropped, you can use Weak to signal it to the thread:
use std::{
    sync::{
        atomic::{AtomicI64, Ordering},
        Arc,
    },
    thread::{self, sleep},
    time::Duration,
};

use rppal::gpio::{InputPin, Level};

struct GpioReader {
    register: Arc<AtomicI64>,
    input_pin: Option<InputPin>,
}

const SLEEP_TIME: Duration = Duration::from_millis(1000);

impl GpioReader {
    pub fn new(input_pin: InputPin) -> Self {
        Self {
            register: Arc::new(AtomicI64::new(0)),
            input_pin: Some(input_pin),
        }
    }

    pub fn start(&mut self) {
        // Hand the thread only a Weak reference; once the GpioReader (and with
        // it the strong Arc) is dropped, upgrade() fails and the thread exits.
        let register = Arc::downgrade(&self.register);
        let pin = self.input_pin.take().expect("Thread already running!");
        thread::spawn(move || loop {
            if let Some(register) = register.upgrade() {
                match pin.read() {
                    Level::High => {
                        register.fetch_add(1, Ordering::Relaxed);
                    }
                    Level::Low => (),
                }
                sleep(SLEEP_TIME);
            } else {
                // Original `register` got dropped, cancel the thread
                break;
            }
        });
    }

    pub fn get_register(&self) -> i64 {
        self.register.load(Ordering::Relaxed)
    }
}
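Note that the background thread only notices the drop on its next iteration, so it can take up to SLEEP_TIME to actually stop; since nothing joins it, the thread is simply detached.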

Maintaining a mutable reference to struct in HashMap

Is it possible to borrow a mutable reference to the contents of a HashMap and use it for an extended period of time without impeding read-only access?
This is for trying to maintain a window into the state of various components in a system that are running independently (via Tokio) and need to be monitored.
As an example:
use std::sync::Arc;
use std::collections::HashMap;

struct Container {
    running: bool,
    count: u8,
}

impl Container {
    fn run(&mut self) {
        for i in 1..100 {
            self.count = i;
        }
        self.running = false;
    }
}

fn main() {
    let mut map = HashMap::new();
    let mut container = Arc::new(
        Box::new(
            Container {
                running: true,
                count: 0,
            }
        )
    );
    map.insert(0, container.clone());
    container.run();
    map.remove(&0);
}
This is for a Tokio-driven program where multiple operations will be happening asynchronously and visibility into the overall state of them is required.
There's this question where a temporary mutable reference can be borrowed, but that won't work as the run() function needs time to complete.
Based on suggestions from Jmb and Stargateur, I reworked this to use an RwLock internally. These internals could be hidden behind methods that manipulate them, but the basics are here:
use std::sync::Arc;
use std::sync::RwLock;
use std::collections::HashMap;

#[derive(Debug)]
struct ContainerState {
    running: bool,
    count: u8,
}

struct Container {
    state: Arc<RwLock<ContainerState>>,
}

impl Container {
    fn run(&self) {
        for i in 1..100 {
            let mut state = self.state.write().unwrap();
            state.count = i;
        }
        {
            let mut state = self.state.write().unwrap();
            state.running = false;
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    let state = Arc::new(
        RwLock::new(
            ContainerState {
                running: true,
                count: 0,
            }
        )
    );
    map.insert(0, state);
    let container = Container {
        state: map[&0].clone(),
    };
    container.run();
    println!("Final state: {:?}", map[&0]);
    map.remove(&0);
}
The key thing I was missing is that you can have either one mutable reference or multiple immutable references, and the two are mutually exclusive; my initial understanding was that these two limits were independent.
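As a follow-up sketch (my own, reusing the ContainerState type defined above), the monitoring side can take read() locks, which coexist with each other and only block the writer in run() while a guard is alive:

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Assumes the ContainerState struct defined above.
fn monitor(map: &HashMap<i32, Arc<RwLock<ContainerState>>>) {
    for (id, state) in map {
        // A read lock: any number of monitors can hold one concurrently, and
        // the writer inside run() is only blocked for the duration of this read.
        let state = state.read().unwrap();
        println!("container {}: running={}, count={}", id, state.running, state.count);
    }
}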

Rust: concurrency error, program hangs after first thread

I have created a simplified version of my problem below. I have a Bag struct and an Item struct. I want to spawn 10 threads that execute the item_action method of Bag on each item in an item_list, and print a statement if both of the item's attributes are in the bag's attributes.
use std::sync::{Mutex, Arc};
use std::thread;

#[derive(Clone, Debug)]
struct Bag {
    attributes: Arc<Mutex<Vec<usize>>>,
}

impl Bag {
    fn new(n: usize) -> Self {
        let mut v = Vec::with_capacity(n);
        for _ in 0..n {
            v.push(0);
        }
        Bag {
            attributes: Arc::new(Mutex::new(v)),
        }
    }

    fn item_action(&self, item_attr1: usize, item_attr2: usize) -> Result<(), ()> {
        if self.attributes.lock().unwrap().contains(&item_attr1) ||
            self.attributes.lock().unwrap().contains(&item_attr2) {
            println!("Item attributes {} and {} are in Bag attribute list!", item_attr1, item_attr2);
            Ok(())
        } else {
            Err(())
        }
    }
}

#[derive(Clone, Debug)]
struct Item {
    item_attr1: usize,
    item_attr2: usize,
}

impl Item {
    pub fn new(item_attr1: usize, item_attr2: usize) -> Self {
        Item {
            item_attr1: item_attr1,
            item_attr2: item_attr2,
        }
    }
}

fn main() {
    let mut item_list: Vec<Item> = Vec::new();
    for i in 0..10 {
        item_list.push(Item::new(i, (i + 1) % 10));
    }

    let bag: Bag = Bag::new(10); // create 10 attributes

    let mut handles = Vec::with_capacity(10);
    for x in 0..10 {
        let bag2 = bag.clone();
        let item_list2 = item_list.clone();
        handles.push(
            thread::spawn(move || {
                bag2.item_action(item_list2[x].item_attr1, item_list2[x].item_attr2);
            })
        )
    }

    for h in handles {
        println!("Here");
        h.join().unwrap();
    }
}
When I run it, I only get one line of output, and the program just hangs there without returning.
Item attributes 0 and 1 are in Bag attribute list!
May I know what went wrong? Please see code in Playground
Updated:
With the suggestion from @loganfsmyth, the program now returns... but it still only prints one line, as above. I expected it to print 10 lines because my item_list has 10 items. I'm not sure whether my thread logic is correct.
I have added println!("Here"); when joining all the threads, and I can see Here printed 10 times, just not the actual log from item_action.
I believe this is because your

if self.attributes.lock().unwrap().contains(&item_attr1) ||
    self.attributes.lock().unwrap().contains(&item_attr2) {

expression does not release the first lock when you expect. The temporary MutexGuard returned by the first lock() lives until the whole condition has been evaluated, so when the first contains() returns false and the right-hand side of || has to run, what you essentially end up with is

let condition = {
    let lock1 = self.attributes.lock().unwrap();
    let lock2 = self.attributes.lock().unwrap(); // blocks forever: lock1 is still held
    lock1.contains(&item_attr1) || lock2.contains(&item_attr2)
};
if condition {

which is causing your code to deadlock, since std::sync::Mutex is not reentrant.
You should instead write:
let attributes = self.attributes.lock().unwrap();
if attributes.contains(&item_attr1) ||
    attributes.contains(&item_attr2) {
so that there is only one lock.
Your code would also work as-is if you used a ReentrantMutex (for example the one from the parking_lot crate) instead of a Mutex, since a reentrant mutex lets the same thread lock it several times for shared access. An RwLock usually works too, because multiple read locks can coexist, although std's RwLock does not guarantee that taking a second read lock on a thread that already holds one won't block (a waiting writer can cause a deadlock), so the single-lock version above is the more robust fix.
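For illustration, a hypothetical sketch of the reentrant variant (parking_lot is an extra dependency and not part of the original code):

use parking_lot::ReentrantMutex;

fn main() {
    let attributes = ReentrantMutex::new(vec![0usize; 10]);
    // The second lock() is taken while the guard from the first one is still
    // alive in the same expression; a reentrant mutex allows this, whereas
    // std::sync::Mutex would deadlock.
    let hit = attributes.lock().contains(&3) || attributes.lock().contains(&0);
    println!("hit: {}", hit);
}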

How to add special NotReady logic to tokio-io?

I'm trying to make a Stream that would wait until a specific character is in the buffer. I know there's read_until() on BufRead, but I actually need a custom solution, as this is a stepping stone to implementing waiting until a specific string is in the buffer (or, for example, until a regexp match happens).
In the project where I first encountered the problem, the issue was that future processing just hung when I got a Ready(_) from the inner future and returned NotReady from my function. I discovered I shouldn't do that per the docs (last paragraph). However, what I didn't get is what the actual alternative promised in that paragraph is. I've read all the published documentation on the Tokio site and it doesn't make sense to me at the moment.
So the following is my current code. Unfortunately I couldn't make it simpler or smaller, as it's already broken. The current result is this:
Err(Custom { kind: Other, error: Error(Shutdown) })
Err(Custom { kind: Other, error: Error(Shutdown) })
Err(Custom { kind: Other, error: Error(Shutdown) })
<ad infinitum>
The expected result is getting some Ok(Ready(_)) out of it, printing W and W', while waiting for the specific character to appear in the buffer.
extern crate futures;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_io_timeout;
extern crate tokio_process;

use futures::stream::poll_fn;
use futures::{Async, Poll, Stream};
use tokio_core::reactor::Core;
use tokio_io::AsyncRead;
use tokio_io_timeout::TimeoutReader;
use tokio_process::CommandExt;

use std::process::{Command, Stdio};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

struct Process {
    child: tokio_process::Child,
    stdout: Arc<Mutex<tokio_io_timeout::TimeoutReader<tokio_process::ChildStdout>>>,
}

impl Process {
    fn new(
        command: &str,
        reader_timeout: Option<Duration>,
        core: &tokio_core::reactor::Core,
    ) -> Self {
        let mut cmd = Command::new(command);
        let cat = cmd.stdout(Stdio::piped());
        let mut child = cat.spawn_async(&core.handle()).unwrap();

        let stdout = child.stdout().take().unwrap();
        let mut timeout_reader = TimeoutReader::new(stdout);
        timeout_reader.set_timeout(reader_timeout);
        let timeout_reader = Arc::new(Mutex::new(timeout_reader));

        Self {
            child,
            stdout: timeout_reader,
        }
    }
}

fn work() -> Result<(), ()> {
    let window = Arc::new(Mutex::new(Vec::new()));

    let mut core = Core::new().unwrap();

    let process = Process::new("cat", Some(Duration::from_secs(20)), &core);
    let mark = Arc::new(Mutex::new(b'c'));

    let read_until_stream = poll_fn({
        let window = window.clone();
        let timeout_reader = process.stdout.clone();
        move || -> Poll<Option<u8>, std::io::Error> {
            let mut buf = [0; 8];
            let poll;
            {
                let mut timeout_reader = timeout_reader.lock().unwrap();
                poll = timeout_reader.poll_read(&mut buf);
            }
            match poll {
                Ok(Async::Ready(0)) => Ok(Async::Ready(None)),
                Ok(Async::Ready(x)) => {
                    {
                        let mut window = window.lock().unwrap();
                        println!("W: {:?}", *window);
                        println!("buf: {:?}", &buf[0..x]);
                        window.extend(buf[0..x].into_iter().map(|x| *x));
                        println!("W': {:?}", *window);
                        if let Some(_) = window.iter().find(|c| **c == *mark.lock().unwrap()) {
                            Ok(Async::Ready(Some(1)))
                        } else {
                            Ok(Async::NotReady)
                        }
                    }
                }
                Ok(Async::NotReady) => Ok(Async::NotReady),
                Err(e) => Err(e),
            }
        }
    });

    let _stream_thread = thread::spawn(move || {
        for o in read_until_stream.wait() {
            println!("{:?}", o);
        }
    });

    match core.run(process.child) {
        Ok(_) => {}
        Err(e) => {
            println!("Child error: {:?}", e);
        }
    }

    Ok(())
}

fn main() {
    work().unwrap();
}
This is the complete example project.
If you need more data you need to call poll_read again until you either find what you were looking for or poll_read returns NotReady.
You might want to avoid looping in one task for too long, so you can build yourself a yield_task function to call instead if poll_read didn't return NotReady; it makes sure your task gets called again ASAP after other pending tasks were run.
To use it just run return yield_task();.
fn yield_inner() {
    use futures::task;
    task::current().notify();
}

#[inline(always)]
pub fn yield_task<T, E>() -> Poll<T, E> {
    yield_inner();
    Ok(Async::NotReady)
}
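For example, here is a toy futures 0.1 stream (a sketch of mine, with the helper repeated so it stands on its own) that yields twice before completing, which shows the task keeps getting polled instead of hanging:

extern crate futures;

use futures::stream::poll_fn;
use futures::{task, Async, Poll, Stream};

// The helper from above, repeated so this example is self-contained.
fn yield_task<T, E>() -> Poll<T, E> {
    task::current().notify();
    Ok(Async::NotReady)
}

fn main() {
    let mut polls = 0;
    let stream = poll_fn(move || -> Poll<Option<u32>, ()> {
        polls += 1;
        println!("polled {} times", polls);
        if polls < 3 {
            // Some work was done but we are not finished: yield instead of
            // returning NotReady without arranging a wakeup.
            return yield_task();
        }
        Ok(Async::Ready(None))
    });
    // wait() drives the stream on the current thread; without the notify()
    // inside yield_task() it would block forever on the first NotReady.
    for item in stream.wait() {
        println!("{:?}", item);
    }
}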
Also see futures-rs#354: Handle long-running, always-ready futures fairly.
With the new async/await API, futures::task::current is gone; instead you'll need a std::task::Context reference, which is provided as a parameter to the new std::future::Future::poll trait method.
If you're already manually implementing the std::future::Future trait you can simply insert:
context.waker().wake_by_ref();
return std::task::Poll::Pending;
Or build yourself a Future-implementing type that yields exactly once:
pub struct Yield {
    ready: bool,
}

impl core::future::Future for Yield {
    type Output = ();

    fn poll(self: core::pin::Pin<&mut Self>, cx: &mut core::task::Context<'_>) -> core::task::Poll<Self::Output> {
        let this = self.get_mut();
        if this.ready {
            core::task::Poll::Ready(())
        } else {
            cx.waker().wake_by_ref();
            this.ready = true; // ready next round
            core::task::Poll::Pending
        }
    }
}

pub fn yield_task() -> Yield {
    Yield { ready: false }
}
And then use it in async code like this:
yield_task().await;
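To tie this back to the original goal, here is a minimal async sketch (assuming the tokio crate with its io-util feature, rather than the tokio-core/tokio-io 0.1 stack from the question) that reads until a marker byte shows up in the accumulated window:

use tokio::io::{AsyncRead, AsyncReadExt};

async fn read_until_byte<R: AsyncRead + Unpin>(reader: &mut R, mark: u8) -> std::io::Result<Vec<u8>> {
    let mut window = Vec::new();
    let mut buf = [0u8; 8];
    loop {
        // Awaiting read() suspends this task until data arrives, so there is
        // no need to return Pending (or NotReady) by hand.
        let n = reader.read(&mut buf).await?;
        if n == 0 {
            return Ok(window); // EOF before the marker byte appeared
        }
        window.extend_from_slice(&buf[..n]);
        if window.contains(&mark) {
            return Ok(window);
        }
    }
}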

Implement a monitoring thread without lock?

I have a struct that sends messages to a channel as well as updating some of its own fields. How do I implement a monitoring thread that looks (read only) at its internal fields periodically?
I can write it using an Arc<Mutex<T>> wrapper, but I feel it is not that efficient, as A::x could have been a plain i32 stored and updated on the stack. Is there any better way to do it without the locks?
use std::sync::{Arc, Mutex};
use std::sync::mpsc::{channel, Sender};
use std::{thread, time};

struct A {
    x: Arc<Mutex<i32>>,
    y: Sender<i32>,
}

impl A {
    fn do_some_loop(&mut self) {
        let sleep_time = time::Duration::from_millis(200);
        // This is a long running thread.
        for x in 1..1000000 {
            *self.x.lock().unwrap() = x;
            self.y.send(x);
            thread::sleep(sleep_time);
        }
    }
}

fn test() {
    let (sender, receiver) = channel();
    let x = Arc::new(Mutex::new(1));
    let mut a = A { x: x.clone(), y: sender };
    thread::spawn(move || {
        // Monitor every 10 secs.
        let sleep_time = time::Duration::from_millis(10000);
        loop {
            thread::sleep(sleep_time);
            println!("{}", *x.lock().unwrap());
        }
    });
    a.do_some_loop();
}
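Following the same pattern as the earlier answers, a lock-free sketch (my own suggestion, not an accepted answer) would replace the Mutex with an AtomicI32:

use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::mpsc::{channel, Sender};
use std::sync::Arc;
use std::{thread, time};

struct A {
    x: Arc<AtomicI32>,
    y: Sender<i32>,
}

impl A {
    fn do_some_loop(&mut self) {
        let sleep_time = time::Duration::from_millis(200);
        for x in 1..1000000 {
            // A plain store; the monitor only needs an eventually-visible value.
            self.x.store(x, Ordering::Relaxed);
            let _ = self.y.send(x);
            thread::sleep(sleep_time);
        }
    }
}

fn test() {
    let (sender, _receiver) = channel();
    let x = Arc::new(AtomicI32::new(1));
    let monitor_x = Arc::clone(&x);
    thread::spawn(move || loop {
        // Monitor every 10 secs, without taking any lock.
        thread::sleep(time::Duration::from_millis(10000));
        println!("{}", monitor_x.load(Ordering::Relaxed));
    });
    let mut a = A { x, y: sender };
    a.do_some_loop();
}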

Resources