I'm new to Rust. As a learning exercise I'm trying to write a simple timer struct like one I once wrote in C++. The interface and implementation look something like this:
pub struct Timer {
handle: Option<std::thread::JoinHandle<()>>,
alive: bool,
}
impl Timer {
pub fn new() -> Timer {
Timer {
handle: None,
alive: false,
}
}
pub fn start(&'static mut self) {
// Oops! How do I do this?
self.handle = Some(std::thread::spawn(move || {
self.alive = true;
self.loop()
}));
}
pub fn stop(&mut self) {
self.alive = false;
self.handle.unwrap().join()
}
pub fn loop(&self) {
// while alive
}
}
I understand why this is an error (the compiler reports "use of moved value: self" inside the start function), but I'm wondering how I'm supposed to design my struct so that something like this would work. In every scenario I can think of, I'll always have a double-borrow situation.
I have a hunch that I need to learn more about interior mutability, but figured I would ask for design guidance before going down any more rabbit holes.
I think you are pretty close to getting it to work.
There are only two hurdles:
thread::spawn will not allow sharing references with the spawned thread
both alive and the state that loop works on would have to be shared between the two threads, and plain references cannot do that in this design
The solution is two-fold:
split up things between the controller (Timer) and the worker (the closure)
share state between the two using Arc since references are forbidden
Here is a minimal example for you to toy with:
use std::{sync, thread, time};
use std::sync::atomic::{AtomicBool, Ordering};
pub struct Timer {
handle: Option<thread::JoinHandle<()>>,
alive: sync::Arc<AtomicBool>,
}
impl Timer {
pub fn new() -> Timer {
Timer {
handle: None,
alive: sync::Arc::new(AtomicBool::new(false)),
}
}
pub fn start<F>(&mut self, fun: F)
where F: 'static + Send + FnMut() -> ()
{
self.alive.store(true, Ordering::SeqCst);
let alive = self.alive.clone();
self.handle = Some(thread::spawn(move || {
let mut fun = fun;
while alive.load(Ordering::SeqCst) {
fun();
thread::sleep(time::Duration::from_millis(10));
}
}));
}
pub fn stop(&mut self) {
self.alive.store(false, Ordering::SeqCst);
self.handle
.take().expect("Called stop on non-running thread")
.join().expect("Could not join spawned thread");
}
}
fn main() {
let mut timer = Timer::new();
timer.start(|| println!("Hello, World!") );
println!("Feeling sleepy...");
thread::sleep(time::Duration::from_millis(100));
println!("Time for dinner!");
timer.stop();
}
I invite you to poke holes at it one at a time (i.e., change one thing back toward your original example, check the error message, and try to understand how the difference solved it).
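For example, here is one such experiment (a sketch of what to expect, not code to keep): put unwrap() back into stop() in place of take(), and the borrow checker rejects it, because unwrap() tries to move the JoinHandle out of a struct you only hold through &mut self:
pub fn stop(&mut self) {
    self.alive.store(false, Ordering::SeqCst);
    self.handle          // error: cannot move out of `self.handle`,
        .unwrap()        // which is behind a mutable reference
        .join().expect("Could not join spawned thread");
}
take() avoids the move by swapping a None into the Option and handing you the JoinHandle by value.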
On the playground, it printed for me:
Feeling sleepy...
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Hello, World!
Time for dinner!
Though I would not rely on (1) the number of times "Hello, World!" appears and (2) "Feeling sleepy..." appearing first.
And damn, is Atomic verbose... I kinda wish there was a plain get/set with SeqCst (the strongest ordering) available.
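For what it's worth, nothing stops you from hiding the verbosity behind a tiny wrapper of your own; a minimal sketch (Flag, get, and set are made-up names, not part of std):
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical convenience wrapper around AtomicBool, always using SeqCst.
#[derive(Default)]
pub struct Flag(AtomicBool);

impl Flag {
    pub fn get(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
    pub fn set(&self, value: bool) {
        self.0.store(value, Ordering::SeqCst)
    }
}
The Timer above would then hold an Arc<Flag> instead of an Arc<AtomicBool>.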
Related
I'm working with a struct where I need to read the GPIO pin of a Raspberry Pi, and increment a 'register' within the struct every time the pin goes high. Concurrently with this, I would like to be able to sample the register every now and then to see what the current value is.
When implementing this, my thought was to spawn a thread that continuously loops checking if the pin has gone from Low to High, and increment the register from within the thread. Then, from the parent thread, I can read the value of the register and report it.
After doing some research, it seems that a scoped thread would not be the correct implementation of this, because the child thread would never hand over ownership of the register to the parent thread.
Rather, I believe I should use an Arc/Mutex combination guarding the register and only momentarily take control over the lock to increment the register. Is this the correct interpretation of multithreading in Rust?
Assuming the above is correct, I'm unsure of how to implement this in Rust.
struct GpioReader {
register: Arc<Mutex<i64>>,
input_pin: Arc<Mutex<InputPin>>,
}
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(Mutex::from(0)),
input_pin: Arc::new(Mutex::from(input_pin))
}
}
pub fn start(&self) {
let pin = self.input_pin.lock().unwrap(); // ???
let register = self.register.lock().unwrap(); // ???
let handle = spawn(move || loop {
match pin.read() { // ???
High => register += 1, // ???
Low => (),
}
sleep(Duration::from_millis(SLEEP_TIME));
});
handle.join().expect("Failed to join thread.");
}
pub fn get_register(&self) -> i64 {
let reg_val = self.register.lock().unwrap();
return reg_val;
}
}
Given the above, how do I declare the pin and register variables in such a way that I can read off the pin and increment the register within the loop? My best guess is I'll have to instantiate some kind of reference to these members of the struct outside of the loop, and then pass the reference into the loop at which point I can use the lock() method of the Arc.
Edit: I'm using a Raspberry Pi 3A+ running Raspbian. The InputPin in question is from the rppal crate.
Mutex<i64> is an anti-pattern. Replace it with AtomicI64.
Arc is meant to be cloned with Arc::clone() to create new references to the same object.
Don't use shared ownership if not necessary. InputPin is only used from within the thread, so move it in instead.
I'm unsure why you do handle.join(). If you want it to continue in the background, don't wait for it with .join().
use std::{
sync::{
atomic::{AtomicI64, Ordering},
Arc,
},
thread::{self, sleep},
time::Duration,
};
use rppal::gpio::{InputPin, Level::{High, Low}};
struct GpioReader {
register: Arc<AtomicI64>,
input_pin: Option<InputPin>,
}
const SLEEP_TIME: Duration = Duration::from_millis(1000);
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(AtomicI64::new(0)),
input_pin: Some(input_pin),
}
}
pub fn start(&mut self) {
let register = Arc::clone(&self.register);
let pin = self.input_pin.take().expect("Thread already running!");
let handle = thread::spawn(move || loop {
match pin.read() {
High => {
register.fetch_add(1, Ordering::Relaxed);
}
Low => (),
}
sleep(SLEEP_TIME);
});
}
pub fn get_register(&self) -> i64 {
self.register.load(Ordering::Relaxed)
}
}
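A usage sketch for completeness, reusing the imports from the listing above (the pin number, pull-down choice, and error handling are placeholders, and this assumes a recent rppal where Gpio::new(), get() and into_input_pulldown() are available):
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // BCM pin 17 is just an example; pick whatever pin you actually wired up.
    let pin = rppal::gpio::Gpio::new()?.get(17)?.into_input_pulldown();
    let mut reader = GpioReader::new(pin);
    reader.start();
    sleep(Duration::from_secs(5));
    // Note: with the loop above this counts High samples, not Low-to-High edges.
    println!("register value after 5s: {}", reader.get_register());
    Ok(())
}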
If you want to stop the thread automatically when the GpioReader object is dropped, you can use Weak to signal it to the thread:
use std::{
sync::{
atomic::{AtomicI64, Ordering},
Arc,
},
thread::{self, sleep},
time::Duration,
};
use rppal::gpio::{InputPin, Level::{High, Low}};
struct GpioReader {
register: Arc<AtomicI64>,
input_pin: Option<InputPin>,
}
const SLEEP_TIME: Duration = Duration::from_millis(1000);
impl GpioReader {
pub fn new(input_pin: InputPin) -> Self {
Self {
register: Arc::new(AtomicI64::new(0)),
input_pin: Some(input_pin),
}
}
pub fn start(&mut self) {
let register = Arc::downgrade(&self.register);
let pin = self.input_pin.take().expect("Thread already running!");
let handle = thread::spawn(move || loop {
if let Some(register) = register.upgrade() {
match pin.read() {
High => {
register.fetch_add(1, Ordering::Relaxed);
}
Low => (),
}
sleep(SLEEP_TIME);
} else {
// Original `register` got dropped, cancel the thread
break;
}
});
}
pub fn get_register(&self) -> i64 {
self.register.load(Ordering::Relaxed)
}
}
struct ThreadHolder{
state: ???
thread: ???
}
impl ThreadHolder {
fn launch(&mut self) {
self.thread = ???
// in thread change self.state
}
}
#[test]
fn test() {
let mut th = ThreadHolder{...};
th.launch();
// thread will be destroyed as soon as th goes out of scope
}
I think this has something to do with lifetimes, but I don't know how to write it.
What you want is simple enough that it doesn't need to be mutable at all, and an immutable value is trivial to share across threads, unless you want to reset the timer. You said you need to use a thread for one reason or another, so I'll assume you don't care about that.
Instead, you can poll the timer every tick (most games run in ticks, so I don't think there will be any issue implementing that).
I will provide an example that uses sleep, so it's not the most accurate thing (that becomes painfully obvious on the last subsecond duration), but I'm not trying to do your work for you anyway; there are enough resources on the internet that can help you deal with that.
Here it goes:
use std::{
sync::Arc,
thread::{self, Result},
time::{Duration, Instant},
};
struct Timer {
end: Instant,
}
impl Timer {
fn new(duration: Duration) -> Self {
// this code is valid for now, but might break in the future,
// a future so distant that you really don't need to care unless
// you let your players draw for eternity
let end = Instant::now().checked_add(duration).unwrap();
Timer { end }
}
fn left(&self) -> Duration {
self.end.saturating_duration_since(Instant::now())
}
// more usable than above with fractional value being accounted for
fn secs_left(&self) -> u64 {
let span = self.left();
span.as_secs() + if span.subsec_millis() > 0 { 1 } else { 0 }
}
}
fn main() -> Result<()> {
let timer = Timer::new(Duration::from_secs(10));
let timer_main = Arc::new(timer);
let timer = timer_main.clone();
let t = thread::spawn(move || loop {
let seconds_left = timer.secs_left();
println!("[Worker] Seconds left: {}", seconds_left);
if seconds_left == 0 {
break;
}
thread::sleep(Duration::from_secs(1));
});
loop {
let seconds_left = timer_main.secs_left();
println!("[Main] Seconds left: {}", seconds_left);
if seconds_left == 5 {
println!("[Main] 5 seconds left, waiting for worker thread to finish work.");
break;
}
thread::sleep(Duration::from_secs(1));
}
t.join()?;
println!("[Main] worker thread finished work, shutting down!");
Ok(())
}
By the way, this kind of implementation wouldn't be any different in any other language, so please don't blame Rust for it. It's not the easiest language, but it provides more than enough tools to build anything you want from scratch as long as you put effort into it.
Good luck :)
I think I got it to work:
use std::sync::{Arc, Mutex};
use std::thread::{sleep, spawn, JoinHandle};
use std::time::Duration;
struct Timer {
pub(crate) time: Arc<Mutex<u32>>,
jh_ticker: Option<JoinHandle<()>>,
}
impl Timer {
fn new<T>(i: T, duration: Duration) -> Self
where
T: Iterator<Item = u32> + Send + 'static,
{
let time = Arc::new(Mutex::new(0));
let arc_time = time.clone();
let jh_ticker = Some(spawn(move || {
for item in i {
let mut mg = arc_time.lock().unwrap();
*mg = item;
drop(mg); // needed, otherwise this thread will always hold lock
sleep(duration);
}
}));
Timer { time, jh_ticker }
}
}
impl Drop for Timer {
fn drop(&mut self) {
let _ = self.jh_ticker.take().unwrap().join();
}
}
#[test]
fn test_timer() {
let t = Timer::new(0..=10, Duration::from_secs(1));
let a = t.time.clone();
for _ in 0..100 {
let b = *a.lock().unwrap();
println!("{}", b);
sleep(Duration::from_millis(100));
}
}
I'm trying to create a method that can return a reference to Data that is either in a constant global array or inside an Option in an item. The lifetimes are certainly different, but it's safe to assume that the lifetime of the data is at least as long as the lifetime of the item. While doing this, I expected the compiler to warn if I did anything wrong, but it's instead generating wrong instructions and the program is crashing with SIGILL.
Concretely speaking, I have the following code failing in Rust 1.27.2:
#[derive(Debug)]
pub enum Type {
TYPE1,
TYPE2,
}
#[derive(Debug)]
pub struct Data {
pub ctype: Type,
pub int: i32,
}
#[derive(Debug)]
pub struct Entity {
pub idata: usize,
pub modifier: Option<Data>,
}
impl Entity {
pub fn data(&self) -> &Data {
if self.modifier.is_none() {
&DATA[self.idata]
} else {
self.modifier.as_ref().unwrap()
}
}
}
pub const DATA: [Data; 1] = [Data {
ctype: Type::TYPE2,
int: 1,
}];
fn main() {
let mut itemvec = vec![Entity {
idata: 0,
modifier: None,
}];
eprintln!("vec[0]: {:p} = {:?}", &itemvec[0], itemvec[0]);
eprintln!("removed item 0");
let item = itemvec.remove(0);
eprintln!("item: {:p} = {:?}", &item, item);
eprintln!("modifier: {:p} = {:?}", &item.modifier, item.modifier);
eprintln!("DATA: {:p} = {:?}", &DATA[0], DATA[0]);
let itemdata = item.data();
eprintln!("itemdata: {:p} = {:?}", itemdata, itemdata);
}
Complete code
I can't understand what I'm doing wrong. Why isn't the compiler generating a warning? Is it the removal of the (non-copy) item of the vector? Is it the ambiguous lifetimes?
How to return a reference to a global vector or an internal Option?
By using Option::unwrap_or_else:
impl Entity {
pub fn data(&self) -> &Data {
self.modifier.as_ref().unwrap_or_else(|| &DATA[self.idata])
}
}
but it's instead generating wrong instructions and the program is crashing with SIGILL
The code in your question does not have this behavior on macOS with Rust 1.27.2 or 1.28.0. On Ubuntu I see an issue when running the program in Valgrind, but the problem goes away in Rust 1.28.0.
See also:
Why should I prefer `Option::ok_or_else` instead of `Option::ok_or`?
What is this unwrap thing: sometimes it's unwrap sometimes it's unwrap_or
I am a beginner in Rust.
I have a long-running IO-bound process that I want to spawn and monitor via a REST API. I chose Iron for that, following this tutorial. Monitoring means getting its progress and its final result.
When I spawn it, I give it an id and map that id to a resource that I can GET to get the progress. I don't have to be exact with the progress; I can report the progress from 5 seconds ago.
My first attempt was to have a channel via which I send requests for progress and receive the status. I got stuck on where to store the receiver, as in my understanding it belongs to one thread only. I wanted to put it in the context of the request, but that won't work, as different threads handle subsequent requests.
What would be the idiomatic way to do this in Rust?
I have a sample project.
Later edit:
Here is a self-contained example which follows the same principle as the answer, namely a map where each thread updates its progress:
extern crate iron;
extern crate router;
extern crate rustc_serialize;
use iron::prelude::*;
use iron::status;
use router::Router;
use rustc_serialize::json;
use std::io::Read;
use std::sync::{Mutex, Arc};
use std::thread;
use std::time::Duration;
use std::collections::HashMap;
#[derive(Debug, Clone, RustcEncodable, RustcDecodable)]
pub struct Status {
pub progress: u64,
pub context: String
}
#[derive(RustcEncodable, RustcDecodable)]
struct StartTask {
id: u64
}
fn start_process(status: Arc<Mutex<HashMap<u64, Status>>>, task_id: u64) {
let c = status.clone();
thread::spawn(move || {
for i in 1..100 {
{
let m = &mut c.lock().unwrap();
m.insert(task_id, Status{ progress: i, context: "in progress".to_string()});
}
thread::sleep(Duration::from_secs(1));
}
let m = &mut c.lock().unwrap();
m.insert(task_id, Status{ progress: 100, context: "done".to_string()});
});
}
fn main() {
let status: Arc<Mutex<HashMap<u64, Status>>> = Arc::new(Mutex::new(HashMap::new()));
let status_clone: Arc<Mutex<HashMap<u64, Status>>> = status.clone();
let mut router = Router::new();
router.get("/:taskId", move |r: &mut Request| task_status(r, &status.lock().unwrap()));
router.post("/start", move |r: &mut Request|
start_task(r, status_clone.clone()));
fn task_status(req: &mut Request, statuses: & HashMap<u64,Status>) -> IronResult<Response> {
let ref task_id = req.extensions.get::<Router>().unwrap().find("taskId").unwrap_or("/").parse::<u64>().unwrap();
let payload = json::encode(&statuses.get(&task_id)).unwrap();
Ok(Response::with((status::Ok, payload)))
}
// Receive a message by POST and play it back.
fn start_task(request: &mut Request, statuses: Arc<Mutex<HashMap<u64, Status>>>) -> IronResult<Response> {
let mut payload = String::new();
request.body.read_to_string(&mut payload).unwrap();
let task_start_request: StartTask = json::decode(&payload).unwrap();
start_process(statuses, task_start_request.id);
Ok(Response::with((status::Ok, json::encode(&task_start_request).unwrap())))
}
Iron::new(router).http("localhost:3000").unwrap();
}
One possibility is to use a global HashMap that associates each worker id with its progress (and result). Here is a simple example (without the REST stuff):
#[macro_use]
extern crate lazy_static;
use std::sync::Mutex;
use std::collections::HashMap;
use std::thread;
use std::time::Duration;
lazy_static! {
static ref PROGRESS: Mutex<HashMap<usize, usize>> = Mutex::new(HashMap::new());
}
fn set_progress(id: usize, progress: usize) {
// insert replaces the old value if there was one.
PROGRESS.lock().unwrap().insert(id, progress);
}
fn get_progress(id: usize) -> Option<usize> {
PROGRESS.lock().unwrap().get(&id).cloned()
}
fn work(id: usize) {
println!("Creating {}", id);
set_progress(id, 0);
for i in 0..100 {
set_progress(id, i + 1);
// simulates work
thread::sleep(Duration::new(0, 50_000_000));
}
}
fn monitor(id: usize) {
loop {
if let Some(p) = get_progress(id) {
if p == 100 {
println!("Done {}", id);
// to avoid leaks, remove id from PROGRESS.
// maybe save that the task ends in a data base.
return
} else {
println!("Progress {}: {}", id, p);
}
}
thread::sleep(Duration::new(1, 0));
}
}
fn main() {
let w = thread::spawn(|| work(1));
let m = thread::spawn(|| monitor(1));
w.join().unwrap();
m.join().unwrap();
}
You need to register one channel per request thread, because if cloning Receivers were possible, responses could end up at the wrong thread when two requests are running at the same time.
Instead of having your thread create a channel for answering requests, use a future. A future allows you to have a handle to an object where the object doesn't exist yet. You can change the input channel to receive a promise, which the worker then fulfills; no output channel is necessary.
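A minimal sketch of that idea using only std::sync::mpsc (the Request enum and its field names are made up for illustration; a real futures library would give you nicer ergonomics):
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Each request carries the Sender half of a one-shot reply channel;
// fulfilling the "promise" is just sending on it.
enum Request {
    Progress { reply: Sender<u64> },
}

fn main() {
    let (req_tx, req_rx) = channel::<Request>();

    // The worker thread owns its state; nothing is shared.
    thread::spawn(move || {
        let mut progress = 0u64;
        for req in req_rx {
            match req {
                Request::Progress { reply } => {
                    let _ = reply.send(progress);
                }
            }
            progress += 1; // pretend some work happened in between
        }
    });

    // Any request-handling thread can ask by packing its own reply channel.
    let (reply_tx, reply_rx) = channel();
    req_tx.send(Request::Progress { reply: reply_tx }).unwrap();
    println!("progress: {}", reply_rx.recv().unwrap());
}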
I have an EventRegistry which people can use to register event listeners. It then calls the appropriate listeners when an event is broadcast. But, when I try to multithread it, it doesn't compile. How would I get this code working?
use std::collections::HashMap;
use std::thread;
struct EventRegistry<'a> {
event_listeners: HashMap<&'a str, Vec<Box<Fn() + Sync>>>
}
impl<'a> EventRegistry<'a> {
fn new() -> EventRegistry<'a> {
EventRegistry {
event_listeners: HashMap::new()
}
}
fn add_event_listener(&mut self, event: &'a str, listener: Box<Fn() + Sync>) {
match self.event_listeners.get_mut(event) {
Some(listeners) => {
listeners.push(listener);
return
},
None => {}
};
let mut listeners = Vec::with_capacity(1);
listeners.push(listener);
self.event_listeners.insert(event, listeners);
}
fn broadcast_event(&mut self, event: &str) {
match self.event_listeners.get(event) {
Some(listeners) => {
for listener in listeners.iter() {
let _ = thread::spawn(|| {
listener();
});
}
}
None => {}
}
}
}
fn main() {
let mut main_registry = EventRegistry::new();
main_registry.add_event_listener("player_move", Box::new(|| {
println!("Hey, look, the player moved!");
}));
main_registry.broadcast_event("player_move");
}
Playpen (not sure if it's minimal, but it produces the error)
If I use thread::scoped, it works too, but that's unstable, and I think it only works because it immediately joins back to the main thread.
Updated question
I meant "call them in their own thread"
The easiest thing to do is avoid the Fn* traits, if possible. If you know that you are only using full functions, then it's straightforward:
use std::thread;
fn a() { println!("a"); }
fn b() { println!("b"); }
fn main() {
let fns = vec![a as fn(), b as fn()];
for &f in &fns {
thread::spawn(move || f());
}
thread::sleep_ms(500);
}
If you can't use that for some reason (like you want to accept closures), then you will need to be a bit more explicit and use Arc:
use std::thread;
use std::sync::Arc;
fn a() { println!("a"); }
fn b() { println!("b"); }
fn main() {
let fns = vec![
Arc::new(Box::new(a) as Box<Fn() + Send + Sync>),
Arc::new(Box::new(b) as Box<Fn() + Send + Sync>),
];
for f in &fns {
let my_f = f.clone();
thread::spawn(move || my_f());
}
thread::sleep_ms(500);
}
Here, we can create a reference-counted trait object. We can clone the trait object (increasing the reference count) each time we spawn a new thread. Each thread gets its own reference to the trait object.
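On current Rust (2018 edition or later) you can also drop the Box layer and coerce straight to Arc<dyn Fn()>; a sketch along the same lines, written by me rather than taken from the answer above:
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn a() { println!("a"); }
fn b() { println!("b"); }

fn main() {
    // The `as` casts drive the unsizing coercion from Arc<fn item> to Arc<dyn Fn>.
    let fns = vec![
        Arc::new(a) as Arc<dyn Fn() + Send + Sync>,
        Arc::new(b) as Arc<dyn Fn() + Send + Sync>,
    ];
    for f in &fns {
        let my_f = Arc::clone(f);
        thread::spawn(move || my_f());
    }
    thread::sleep(Duration::from_millis(500));
}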
If I use thread::scoped, it works too
thread::scoped is pretty awesome; it's really unfortunate that it needed to be marked unstable due to some complex interactions that weren't the best.
One of the benefits of a scoped thread is that the thread is guaranteed to end by a specific time: when the JoinGuard is dropped. That means that scoped threads are allowed to contain non-'static references, so long as those references last longer than the thread!
A spawned thread has no such guarantees about how long they live; these threads may live "forever". Any references they take must also live "forever", thus the 'static restriction.
This serves to explain your original problem. You have a vector with a non-'static lifetime, but you are handing references that point into that vector to the thread. If the vector were to be deallocated before the thread exited, you could attempt to access undefined memory, which leads to crashes in C or C++ programs. This is Rust helping you out!
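As an aside that postdates this answer: modern Rust got scoped threads back as std::thread::scope (stable since 1.63), which restores exactly that guarantee. A minimal sketch:
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    thread::scope(|s| {
        // This closure may borrow `data` because every thread spawned in the
        // scope is joined before `scope` returns.
        s.spawn(|| println!("len = {}", data.len()));
    });
    println!("still usable afterwards: {:?}", data);
}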
Original question
Call functions in vector without consuming them
The answer is that you just call them:
fn a() { println!("a"); }
fn b() { println!("b"); }
fn main() {
let fns = vec![Box::new(a) as Box<Fn()>, Box::new(b) as Box<Fn()>];
fns[0]();
fns[1]();
fns[0]();
fns[1]();
}
Playpen