Separate thread for loop in struct implementation - multithreading

I'm working with a struct where I need to read the GPIO pin of a Raspberry Pi, and increment a 'register' within the struct every time the pin goes high. Concurrently with this, I would like to be able to sample the register every now and then to see what the current value is.
When implementing this, my thought was to spawn a thread that continuously loops, checking whether the pin has gone from Low to High and incrementing the register from within that thread. Then, from the parent thread, I can read the value of the register and report it.
After doing some research, it seems that a scoped thread would not be the correct implementation of this, because the child thread would never hand over ownership of the register to the parent thread.
Rather, I believe I should use an Arc/Mutex combination guarding the register and only momentarily take control over the lock to increment the register. Is this the correct interpretation of multithreading in Rust?
Assuming the above is correct, I'm unsure of how to implement this in Rust.
struct GpioReader {
register: Arc<Mutex<i64>>,
input_pin: Arc<Mutex<InputPin>>,
}
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(Mutex::from(0)),
input_pin: Arc::new(Mutex::from(input_pin))
}
}
pub fn start(&self) {
let pin = self.input_pin.lock().unwrap(); // ???
let register = self.register.lock().unwrap(); // ???
let handle = spawn(move || loop {
match pin.read() { // ???
High => register += 1, // ???
Low => (),
}
sleep(Duration::from_millis(SLEEP_TIME));
});
handle.join().expect("Failed to join thread.");
}
pub fn get_register(&self) -> i64 {
let reg_val = self.register.lock().unwrap();
return *reg_val;
}
}
Given the above, how do I declare the pin and register variables in such a way that I can read off the pin and increment the register within the loop? My best guess is that I'll have to create some kind of reference to these struct members outside of the loop and then pass the reference into the loop, at which point I can use the lock() method of the Arc.
Edit: Using RaspberryPi 3A+ running Raspbian. The InputPin in question is from the rppal crate.

Mutex<i64> is an anti-pattern. Replace it with AtomicI64.
Arc is meant to be cloned with Arc::clone() to create new references to the same object.
Don't use shared ownership if not necessary. InputPin is only used from within the thread, so move it in instead.
I'm not sure why you call handle.join(). If you want the loop to keep running in the background, don't wait for the thread with .join().
use std::{
sync::{
atomic::{AtomicI64, Ordering},
Arc,
},
thread::{self, sleep},
time::Duration,
};
use rppal::gpio::{InputPin, Level::{High, Low}};
struct GpioReader {
register: Arc<AtomicI64>,
input_pin: Option<InputPin>,
}
const SLEEP_TIME: Duration = Duration::from_millis(1000);
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(AtomicI64::new(0)),
input_pin: Some(input_pin),
}
}
pub fn start(&mut self) {
let register = Arc::clone(&self.register);
let pin = self.input_pin.take().expect("Thread already running!");
thread::spawn(move || loop {
match pin.read() {
High => {
register.fetch_add(1, Ordering::Relaxed);
}
Low => (),
}
sleep(SLEEP_TIME);
});
}
pub fn get_register(&self) -> i64 {
self.register.load(Ordering::Relaxed)
}
}
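For reference, a minimal usage sketch; the pin number is arbitrary, and the setup assumes rppal's usual Gpio::new()?.get(pin)?.into_input() chain:
use rppal::gpio::Gpio;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // GPIO 4 is an arbitrary choice for this sketch.
    let pin = Gpio::new()?.get(4)?.into_input();
    let mut reader = GpioReader::new(pin);
    reader.start();
    loop {
        println!("count so far: {}", reader.get_register());
        sleep(Duration::from_secs(5));
    }
}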
If you want to stop the thread automatically when the GpioReader object is dropped, you can use Weak to signal it to the thread:
use std::{
sync::{
atomic::{AtomicI64, Ordering},
Arc,
},
thread::{self, sleep},
time::Duration,
};
use rppal::gpio::{InputPin, Level::{High, Low}};
struct GpioReader {
register: Arc<AtomicI64>,
input_pin: Option<InputPin>,
}
const SLEEP_TIME: Duration = Duration::from_millis(1000);
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(AtomicI64::new(0)),
input_pin: Some(input_pin),
}
}
pub fn start(&mut self) {
let register = Arc::downgrade(&self.register);
let pin = self.input_pin.take().expect("Thread already running!");
thread::spawn(move || loop {
if let Some(register) = register.upgrade() {
match pin.read() {
High => {
register.fetch_add(1, Ordering::Relaxed);
}
Low => (),
}
sleep(SLEEP_TIME);
} else {
// Original `register` got dropped, cancel the thread
break;
}
});
}
pub fn get_register(&self) -> i64 {
self.register.load(Ordering::Relaxed)
}
}
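With the Weak variant, stopping is just a matter of dropping the GpioReader; a small sketch (reusing the imports from the listing above):
fn run_for_a_while(pin: InputPin) {
    let mut reader = GpioReader::new(pin);
    reader.start();
    sleep(Duration::from_secs(10));
    println!("final count: {}", reader.get_register());
    // `reader`, and with it the last strong Arc, is dropped here; the worker
    // thread sees the failed upgrade on its next wakeup (at most SLEEP_TIME
    // later) and breaks out of its loop.
}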

Related

How can I share a Vector between 2 threads?

I am pretty new to Rust and cannot manage to keep the values behind both Arcs updated in the two threads I'm spawning. The idea would be that one thread loops over received events and, when it receives one, updates the object, which the other thread constantly watches. How can I achieve that in Rust, or, if this method isn't adequate, would there be a better way to do it?
(The concrete idea would be one thread listening for MIDI events and the other one re-rendering the received notes on an LED strip.)
Here's what I currently have:
main.rs
mod functions;
mod structs;
use crate::functions::*;
use crate::structs::*;
use portmidi as pm;
use rs_ws281x::{ChannelBuilder, ControllerBuilder, StripType};
use std::sync::{Arc, Mutex};
use std::{fs, thread, time};
const MIDI_TIMEOUT: u64 = 10;
const MIDI_CHANNEL: usize = 0;
#[tokio::main]
async fn main() {
let config: Arc<std::sync::Mutex<Config>> = Arc::new(Mutex::new(
toml::from_str(&fs::read_to_string("config.toml").unwrap()).unwrap(),
));
let config_midi = config.clone();
let config_leds = config.clone();
let leds_status = Arc::new(Mutex::new(vec![0; config.lock().unwrap().leds.num_leds]));
let leds_status_midi = Arc::clone(&leds_status);
let leds_status_leds = Arc::clone(&leds_status);
thread::spawn(move || {
let config = config_midi.lock().unwrap();
let midi_context = pm::PortMidi::new().unwrap();
let device_info = midi_context
.device(config.midi.id)
.expect(format!("Could not find device with id {}", config.midi.id).as_str());
println!("Using device {}) {}", device_info.id(), device_info.name());
let input_port = midi_context
.input_port(device_info, config.midi.buffer_size)
.expect("Could not create input port");
let mut leds_status = leds_status_midi.lock().unwrap();
loop {
if let Ok(_) = input_port.poll() {
if let Ok(Some(events)) = input_port.read_n(config.midi.buffer_size) {
for event in events {
let event_type =
get_midi_event_type(event.message.status, event.message.data2);
match event_type {
MidiEventType::NoteOn => {
let key = get_note_position(event.message.data1, &config);
leds_status[key] = 1;
}
MidiEventType::NoteOff => {
let key = get_note_position(event.message.data1, &config);
leds_status[key] = 0;
}
_ => {}
}
}
}
}
thread::sleep(time::Duration::from_millis(MIDI_TIMEOUT));
}
});
thread::spawn(move || {
let config = config_leds.lock().unwrap();
let mut led_controller = ControllerBuilder::new()
.freq(800_000)
.dma(10)
.channel(
MIDI_CHANNEL,
ChannelBuilder::new()
.pin(config.leds.pin)
.count(config.leds.num_leds as i32)
.strip_type(StripType::Ws2812)
.brightness(config.leds.brightness)
.build(),
)
.build()
.unwrap();
loop {
let leds_status = leds_status_leds.lock().unwrap();
print!("\x1b[2J\x1b[1;1H");
println!(
"{:?}",
leds_status.iter().filter(|x| (**x) > 0).collect::<Vec<_>>()
);
}
});
}
functions.rs
use crate::structs::MidiEventType;
pub fn get_note_position(note: u8, config: &crate::structs::Config) -> usize {
let mut note_offset = 0;
for i in 0..config.leds.offsets.len() {
if note > config.leds.offsets[i][0] {
note_offset = config.leds.offsets[i][1];
break;
}
}
note_offset -= config.leds.shift;
let note_pos_raw = 2 * (note - 20) - note_offset;
config.leds.num_leds - (note_pos_raw as usize)
}
pub fn get_midi_event_type(status: u8, velocity: u8) -> MidiEventType {
if status == 144 && velocity > 0 {
MidiEventType::NoteOn
} else if status == 128 || (status == 144 && velocity == 0) {
MidiEventType::NoteOff
} else {
MidiEventType::ControlChange
}
}
structs.rs
use serde_derive::Deserialize;
#[derive(Deserialize, Debug)]
pub struct Config {
pub leds: LedsConfig,
pub midi: MidiConfig,
}
#[derive(Deserialize, Debug)]
pub struct LedsConfig {
pub pin: i32,
pub num_leds: usize,
pub brightness: u8,
pub offsets: Vec<Vec<u8>>,
pub shift: u8,
pub fade: i8,
}
#[derive(Deserialize, Debug)]
pub struct MidiConfig {
pub id: i32,
pub buffer_size: usize,
}
#[derive(Debug)]
pub enum MidiEventType {
NoteOn,
NoteOff,
ControlChange,
}
Thank you very much !
The idea would be that one thread loops over received events and when it receives one, updates the object, which the other thread constantly watches.
That's a good way to do it, particularly if one of the threads needs to be near-realtime (e.g. live audio processing). You can use channels to achieve this: you transfer the sender to one thread and the receiver to another. In a realtime scenario, the receiver can loop until try_recv returns Err(TryRecvError::Empty), limiting itself to some number of iterations to prevent starvation of the processing code. For example, something like this, given an r: Receiver:
// Process 100 messages max to not starve the thread of the other stuff
// it needs to be doing.
for _ in 0..100 {
match r.try_recv() {
Ok(msg) => { /* Process msg, applying it to the current state */ },
Err(TryRecvError::Empty) => break,
Err(TryRecvError::Disconnected) => {
// The sender is gone, maybe this is our signal to terminate?
return;
},
}
}
Alternatively, if one thread needs to act only when a message is received, it can simply iterate the receiver, which will continue to loop as long as messages are received and the channel is open:
for msg in r {
// Handle the message
}
It really is that simple. If the channel is empty but there are senders alive, it will block until a message is received. Once all senders are gone and the channel is empty, the loop will terminate.
A channel can convey messages of exactly one type; if only one kind of message needs to be sent, you can use a struct. Otherwise, an enum with variants for each kind of message works well.
Given the sending side of the channel, s: Sender, you just call s.send(your_message_value).
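To make that concrete, here is a minimal, self-contained sketch using std::sync::mpsc; the MidiMessage enum and the note numbers are invented for illustration:
use std::sync::mpsc;
use std::thread;

// Hypothetical message type; in the real program the variants would carry
// whatever the MIDI thread needs to communicate.
enum MidiMessage {
    NoteOn(u8),
    NoteOff(u8),
}

fn main() {
    let (s, r) = mpsc::channel();

    // Producer thread: would normally poll the MIDI input port.
    thread::spawn(move || {
        s.send(MidiMessage::NoteOn(60)).unwrap();
        s.send(MidiMessage::NoteOff(60)).unwrap();
        // `s` is dropped here, which closes the channel.
    });

    // Consumer: iterates until the channel is empty and all senders are gone.
    for msg in r {
        match msg {
            MidiMessage::NoteOn(note) => println!("note {} on", note),
            MidiMessage::NoteOff(note) => println!("note {} off", note),
        }
    }
}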
Another option would be to create an Arc<Mutex<_>>, which it looks like you are doing in your sample code. This way is fine if the lock contention is not too high, but this can inhibit the ability of both threads to run concurrently, which is often the goal of multithreading. Channels tend to work better in message-passing scenarios because there isn't a need for a mutual exclusion lock.
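If you do stay with Arc<Mutex<_>>, keep each lock as short-lived as possible. In the posted code the MIDI thread locks both config and leds_status before its loop and never releases them, so the LED thread blocks as soon as it tries to lock either one. A minimal sketch of the tighter scoping (the update itself is made up):
use std::sync::{Arc, Mutex};
use std::{thread, time::Duration};

fn main() {
    let leds_status = Arc::new(Mutex::new(vec![0u8; 8]));
    let writer = Arc::clone(&leds_status);

    thread::spawn(move || loop {
        {
            // Lock only for the duration of the update, then release.
            let mut leds = writer.lock().unwrap();
            leds[0] = 1;
        } // guard dropped here
        thread::sleep(Duration::from_millis(10));
    });

    loop {
        // Same on the reading side: copy what you need and drop the guard.
        let snapshot = leds_status.lock().unwrap().clone();
        println!("{:?}", snapshot);
        thread::sleep(Duration::from_millis(100));
    }
}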
As a side note, you are using Tokio with an async main(), but you never actually do anything with any futures, so there's no reason to even use Tokio in this code.

How to update in one thread and read from many?

I've failed to get this code past the borrow-checker:
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;
#[derive(Debug, Clone)]
struct State {
count: u64,
not_copyable: Vec<u8>,
}
fn bar(thread_num: u8, arc_state: Arc<State>) {
let state = arc_state.clone();
loop {
sleep(Duration::from_millis(1000));
println!("thread_num: {}, state.count: {}", thread_num, state.count);
}
}
fn main() -> std::io::Result<()> {
let mut state = State {
count: 0,
not_copyable: vec![],
};
let arc_state = Arc::new(state);
for i in 0..2 {
spawn(move || {
bar(i, arc_state.clone());
});
}
loop {
sleep(Duration::from_millis(300));
state.count += 1;
}
}
I'm probably trying the wrong thing.
I want one (main) thread which can update state and many threads which can read state.
How should I do this in Rust?
I have read the Rust book on shared state, but that uses mutexes which seem overly complex for a single writer / multiple reader situation.
In C I would achieve this with a generous sprinkling of _Atomic.
Atomics are indeed a proper way; there are plenty of them in std::sync::atomic. Your example needs two fixes.
Arc must be cloned before moving into the closure, so your loop becomes:
for i in 0..2 {
let arc_state = arc_state.clone();
spawn(move || { bar(i, arc_state); });
}
Using AtomicU64 is fairly straightforward, though you need to explicitly use the atomic type's methods with a specified Ordering:
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;
#[derive(Debug)]
struct State {
count: AtomicU64,
not_copyable: Vec<u8>,
}
fn bar(thread_num: u8, arc_state: Arc<State>) {
let state = arc_state.clone();
loop {
sleep(Duration::from_millis(1000));
println!(
"thread_num: {}, state.count: {}",
thread_num,
state.count.load(Ordering::Relaxed)
);
}
}
fn main() -> std::io::Result<()> {
let state = State {
count: AtomicU64::new(0),
not_copyable: vec![],
};
let arc_state = Arc::new(state);
for i in 0..2 {
let arc_state = arc_state.clone();
spawn(move || {
bar(i, arc_state);
});
}
loop {
sleep(Duration::from_millis(300));
// you can't use `state` here, because it moved
arc_state.count.fetch_add(1, Ordering::Relaxed);
}
}

How to let a struct hold a thread and destroy the thread as soon as it goes out of scope

struct ThreadHolder{
state: ???
thread: ???
}
impl ThreadHolder {
fn launch(&mut self) {
self.thread = ???
// in thread change self.state
}
}
#[test]
fn test() {
let mut th = ThreadHolder{...};
th.launch();
// thread will be destroy as soon as th go out of scope
}
I think there is something to deal with lifetime, but I don't know how to write it.
What you want is simple enough that the shared state doesn't even need to be mutable, and then it becomes trivial to share it across threads, unless you want to reset it. You said you need a separate thread for one reason or another, so I'll assume you don't care about resetting.
You can instead poll it every tick (most games run in ticks, so I don't think implementing that will be an issue).
I'll provide an example that uses sleep, so it's not the most accurate thing; that is painfully obvious on the last subsecond of the duration. But I'm not trying to do your work for you anyway, and there are enough resources on the internet that can help you deal with it.
Here it goes:
use std::{
sync::Arc,
thread::{self, Result},
time::{Duration, Instant},
};
struct Timer {
end: Instant,
}
impl Timer {
fn new(duration: Duration) -> Self {
// This is valid for now, but might break in a future so distant
// that you really don't need to care, unless you let your players
// draw for eternity.
let end = Instant::now().checked_add(duration).unwrap();
Timer { end }
}
fn left(&self) -> Duration {
self.end.saturating_duration_since(Instant::now())
}
// more usable than above with fractional value being accounted for
fn secs_left(&self) -> u64 {
let span = self.left();
span.as_secs() + if span.subsec_millis() > 0 { 1 } else { 0 }
}
}
fn main() -> Result<()> {
let timer = Timer::new(Duration::from_secs(10));
let timer_main = Arc::new(timer);
let timer = timer_main.clone();
let t = thread::spawn(move || loop {
let seconds_left = timer.secs_left();
println!("[Worker] Seconds left: {}", seconds_left);
if seconds_left == 0 {
break;
}
thread::sleep(Duration::from_secs(1));
});
loop {
let seconds_left = timer_main.secs_left();
println!("[Main] Seconds left: {}", seconds_left);
if seconds_left == 5 {
println!("[Main] 5 seconds left, waiting for worker thread to finish work.");
break;
}
thread::sleep(Duration::from_secs(1));
}
t.join()?;
println!("[Main] worker thread finished work, shutting down!");
Ok(())
}
By the way, this kind of implementation wouldn't be any different in any other language, so please don't blame Rust for it. It's not the easiest language, but it provides more than enough tools to build anything you want from scratch as long as you put effort into it.
Good luck :)
I think I got it to work:
use std::sync::{Arc, Mutex};
use std::thread::{sleep, spawn, JoinHandle};
use std::time::Duration;
struct Timer {
pub(crate) time: Arc<Mutex<u32>>,
jh_ticker: Option<JoinHandle<()>>,
}
impl Timer {
fn new<T>(i: T, duration: Duration) -> Self
where
T: Iterator<Item = u32> + Send + 'static,
{
let time = Arc::new(Mutex::new(0));
let arc_time = time.clone();
let jh_ticker = Some(spawn(move || {
for item in i {
let mut mg = arc_time.lock().unwrap();
*mg = item;
drop(mg); // needed, otherwise this thread will always hold lock
sleep(duration);
}
}));
Timer { time, jh_ticker }
}
}
impl Drop for Timer {
fn drop(&mut self) {
// Ignore the join result; the ticker thread doesn't panic in normal operation.
let _ = self.jh_ticker.take().unwrap().join();
}
}
#[test]
fn test_timer() {
let t = Timer::new(0..=10, Duration::from_secs(1));
let a = t.time.clone();
for _ in 0..100 {
let b = *a.lock().unwrap();
println!("{}", b);
sleep(Duration::from_millis(100));
}
}
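One caveat with this: the Drop impl joins the ticker thread, which waits for the whole iterator to finish, so dropping the Timer can block for the remaining duration. A rough sketch of one way to make drop return more promptly, using a shared stop flag (the stop field is an addition of this sketch, not part of the code above):
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::thread::{sleep, spawn, JoinHandle};
use std::time::Duration;

struct Timer {
    pub time: Arc<Mutex<u32>>,
    stop: Arc<AtomicBool>,
    jh_ticker: Option<JoinHandle<()>>,
}

impl Timer {
    fn new<T>(i: T, duration: Duration) -> Self
    where
        T: Iterator<Item = u32> + Send + 'static,
    {
        let time = Arc::new(Mutex::new(0));
        let stop = Arc::new(AtomicBool::new(false));
        let (arc_time, stop_flag) = (time.clone(), stop.clone());
        let jh_ticker = Some(spawn(move || {
            for item in i {
                if stop_flag.load(Ordering::Relaxed) {
                    break; // bail out once the owning Timer has been dropped
                }
                // The guard is a temporary here, so the lock is not held
                // during the sleep.
                *arc_time.lock().unwrap() = item;
                sleep(duration);
            }
        }));
        Timer { time, stop, jh_ticker }
    }
}

impl Drop for Timer {
    fn drop(&mut self) {
        self.stop.store(true, Ordering::Relaxed);
        let _ = self.jh_ticker.take().unwrap().join();
    }
}
Note that drop can still block for up to one duration because of the sleep; a channel with recv_timeout or a Condvar wait would remove even that delay.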

Retain reference to timer::guard in struct

I am trying to implement a struct that keeps track of a global tick. In an effort to refactor, I moved the timer into the struct, but now the timer guard goes out of scope and the scheduled callback is therefore cancelled. My thought was to add the guard as a struct member, but I am not sure how to do this.
use timer;
use chrono;
use futures::Future;
use std::{process, thread};
use std::sync::{Arc, Mutex};
struct GlobalTime {
tick_count: Arc<Mutex<u64>>,
millis: Arc<Mutex<i64>>,
timer: timer::Timer,
guard: timer::Guard,
}
impl GlobalTime {
fn new() -> GlobalTime {
GlobalTime {
tick_count: Arc::new(Mutex::new(0)),
millis: Arc::new(Mutex::new(200)),
timer: timer::Timer::new(),
guard: ???, // what do I do here to init the guard??
}
}
fn tick(&self) {
*self.guard = {
let global_tick = self.tick_count.clone();
self.timer.schedule_repeating(
chrono::Duration::milliseconds(*self.millis.lock().unwrap()),
move || {
*global_tick.lock().unwrap() += 1;
println!("timer callback");
},
);
}
}
}
Given that the timer is not always running for the lifetime of GlobalTime, there isn't always a valid value for guard. We usually model that idea with an Option:
struct GlobalTime {
tick_count: Arc<Mutex<u64>>,
millis: Arc<Mutex<i64>>,
timer: timer::Timer,
guard: Option<timer::Guard>,
}
Which also solves your problem of what the initial value is, because it's Option::None:
impl GlobalTime {
fn new() -> GlobalTime {
GlobalTime {
tick_count: Arc::new(Mutex::new(0)),
millis: Arc::new(Mutex::new(200)),
timer: timer::Timer::new(),
guard: None,
}
}
}
The tick method becomes:
fn tick(&mut self) {
let global_tick = self.tick_count.clone();
let guard = self.timer.schedule_repeating(
chrono::Duration::milliseconds(*self.millis.lock().unwrap()),
move || {
*global_tick.lock().unwrap() += 1;
println!("timer callback");
},
);
self.guard = Some(guard);
}
To stop the timer you can just set the guard value to Option::None:
fn stop(&mut self) {
self.guard = None;
}
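For completeness, a usage sketch, assuming this all lives in one module (tick_count is a private field here) and that the timer crate keeps invoking the callback for as long as the Guard returned by schedule_repeating is alive:
fn main() {
    let mut global_time = GlobalTime::new();
    global_time.tick();

    // Let the callback fire a few times; `millis` was initialised to 200 above.
    std::thread::sleep(std::time::Duration::from_secs(1));
    println!("ticks so far: {}", *global_time.tick_count.lock().unwrap());

    // Dropping the Guard cancels the repeating schedule.
    global_time.stop();
}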

Second lock on same variable in a single expression blocks indefinitely

I have a Node containing a Mutex on a shared Protocol which is in turn used among different threads within a thread pool:
use std::sync::{Arc, Mutex};
pub struct Node {
thread_pool: ThreadPool,
protocol: Arc<Mutex<Protocol>>,
}
pub struct Protocol {}
impl Protocol {
pub fn is_leader(&self) -> bool {
// Do stuff...
}
pub fn is_co_leader(&self) -> bool {
// Do stuff...
}
}
When I try to acquire a lock on the protocol of the Node within the same if-statement, the code within that statement is never executed.
impl Node {
pub fn sign(&mut self) {
let protocol_handler = Arc::clone(&self.protocol);
self.thread_pool.execute(move || {
if !protocol_handler.lock().unwrap().is_leader()
&& !protocol_handler.lock().unwrap().is_co_leader()
{
// This is never executed
}
// And this neither...
})
}
}
However, if the values of the method invocations are assigned to two variables, everything works as intended:
impl Node {
pub fn sign(&mut self) {
let protocol_handler = Arc::clone(&self.protocol);
self.thread_pool.execute(move || {
let is_leader = protocol_handler.lock().unwrap().is_leader();
let is_co_leader = protocol_handler.lock().unwrap().is_co_leader();
if !is_leader && !is_co_leader {
// Either this will be executed
}
// or this ...
})
}
}
Is there any specific cause for Rust's behaviour to wait indefinitely in the first case?
Here is an MCVE for your problem:
use std::sync::Mutex;
fn main() {
let foo = Mutex::new(42i32);
let f1 = (*foo.lock().unwrap()).count_ones();
println!("f1: {}", f1);
let f2 = (*foo.lock().unwrap()).count_zeros();
println!("f2: {}", f2);
let tot = (*foo.lock().unwrap()).count_ones() + (*foo.lock().unwrap()).count_zeros();
println!("tot: {}", tot);
}
When running this code it will print f1 and f2, then hang when trying to compute tot.
The problem is that Mutex::lock returns a MutexGuard which releases the lock automatically when it goes out of scope. In the example above, the guards go out of scope at the end of the expressions in which they are used. So when I write:
let f1 = (*foo.lock().unwrap()).count_ones();
I acquire the lock, read the value, and release the lock. Therefore the lock is free when computing f2.
However, when I write:
let tot = (*foo.lock().unwrap()).count_ones() + (*foo.lock().unwrap()).count_zeros();
I acquire the lock, read the value, try to acquire the lock again and only release both guards at the end of the line. This causes the code to deadlock when I try to acquire the lock for the second time without having released it first.
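The fix for the MCVE is therefore to take the lock once and reuse the guard for both reads:
let guard = foo.lock().unwrap();
let tot = (*guard).count_ones() + (*guard).count_zeros();
println!("tot: {}", tot); // the lock is released when `guard` goes out of scope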
Note, as commented by trentcl, that your two-step example is subject to race conditions if things change between the two times the mutex is locked. You should rather use something like this:
impl Node {
pub fn sign(&mut self) {
let protocol_handler = Arc::clone(&self.protocol);
self.thread_pool.execute(move || {
let handler = protocol_handler.lock().unwrap();
if !handler.is_leader() && !handler.is_co_leader() {
// Either this will be executed
}
// or this ...
})
}
}
