I'm quite new to Rust but I've run into a strange issue that is likely me misunderstanding how the defer-lite crate works.
If I have the code below, everything works as expected:
use defer_lite::defer;

fn main() {
    println!("Start");
    defer! {
        println!("Stop");
    }
    println!("Interval");
}
I get the output that I want:
Start
Interval
Stop
Then I try to include a mutable struct:
use defer_lite::defer;

struct Timer {
    start: i32, interval: i32, stop: i32
}

fn timer_initialize() -> Timer {
    let timer = Timer { start: 0, interval: 0, stop: 0 };
    return timer;
}

impl Timer {
    fn timer_start(&mut self) { self.start = 1; }
    fn timer_interval(&mut self) { self.interval = 2; }
    fn timer_stop(&mut self) { self.stop = 3; }
}

fn main() {
    let mut timer = timer_initialize();
    println!("Start");
    timer.timer_start();
    defer! {
        println!("Stop");
        timer.timer_stop();
    }
    println!("Interval");
    // uncommenting this line will cause a compiler error
    //timer.timer_interval();
}
Everything still works fine until I try to use the struct's methods below the defer!, at which point I get a compiler error:
timer.timer_interval();
^^^^^^^^^^^^^^^^^^^^^^ second mutable borrow occurs here
I'm not sure why this happens, or whether I'm using defer-lite wrong.
This is a limitation of the crate's design rather than a misuse on your part, and it cannot be worked around within defer-lite itself. To do its work, defer-lite creates a RAII guard that holds the cleanup code as a closure. That closure captures a mutable reference to timer, so you cannot use timer again until the guard is dropped at the end of the scope.
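A minimal sketch of the kind of guard involved (this is only an illustration of the borrow problem, not the crate's actual expansion):

struct Guard<F: FnMut()> {
    cleanup: F,
}

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.cleanup)();
    }
}

fn main() {
    let mut value = 0;
    // The closure mutably borrows `value`, and the guard keeps that borrow
    // alive until it is dropped at the end of the scope.
    let _guard = Guard { cleanup: || value += 1 };
    // println!("{}", value); // error: cannot use `value` while it is mutably borrowed
}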
If you need that, I'd recommend the scopeguard crate, which is also much more popular. It provides a guard() function for this case: it takes ownership of the value and returns a guard that dereferences to it, then runs the closure on the owned value when the guard is dropped. It works like this:
fn main() {
    let mut timer = timer_initialize();
    println!("Start");
    timer.timer_start();
    let mut timer = scopeguard::guard(timer, |mut timer| {
        println!("Stop");
        timer.timer_stop();
    });
    println!("Interval");
    timer.timer_interval();
}
Related
In my browser application, two closures access data stored in a Rc<RefCell<T>>. One closure mutably borrows the data, while the other immutably borrows it. The two closures are invoked independently of one another, and this will occasionally result in a BorrowError or BorrowMutError.
Here is my attempt at an MWE, though it uses a future to artificially inflate the likelihood of the error occurring:
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsValue;

#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_namespace = console)]
    pub fn log(s: &str);
    #[wasm_bindgen(js_name = setTimeout)]
    fn set_timeout(closure: &Closure<dyn FnMut()>, millis: u32) -> i32;
    #[wasm_bindgen(js_name = setInterval)]
    fn set_interval(closure: &Closure<dyn FnMut()>, millis: u32) -> i32;
}

pub struct Counter(u32);

#[wasm_bindgen(start)]
pub async fn main() -> Result<(), JsValue> {
    console_error_panic_hook::set_once();
    let counter = Rc::new(RefCell::new(Counter(0)));

    let counter_clone = counter.clone();
    let log_closure = Closure::wrap(Box::new(move || {
        let c = counter_clone.borrow();
        log(&c.0.to_string());
    }) as Box<dyn FnMut()>);
    set_interval(&log_closure, 1000);
    log_closure.forget();

    let counter_clone = counter.clone();
    let increment_closure = Closure::wrap(Box::new(move || {
        let counter_clone = counter_clone.clone();
        wasm_bindgen_futures::spawn_local(async move {
            let mut c = counter_clone.borrow_mut();
            // In reality this future would be replaced by some other
            // time-consuming operation manipulating the borrowed data
            SleepFuture::new(5000).await;
            c.0 += 1;
        });
    }) as Box<dyn FnMut()>);
    set_timeout(&increment_closure, 3000);
    increment_closure.forget();

    Ok(())
}

struct SleepSharedState {
    waker: Option<Waker>,
    completed: bool,
    closure: Option<Closure<dyn FnMut()>>,
}

struct SleepFuture {
    shared_state: Rc<RefCell<SleepSharedState>>,
}

impl Future for SleepFuture {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let mut shared_state = self.shared_state.borrow_mut();
        if shared_state.completed {
            Poll::Ready(())
        } else {
            shared_state.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

impl SleepFuture {
    fn new(duration: u32) -> Self {
        let shared_state = Rc::new(RefCell::new(SleepSharedState {
            waker: None,
            completed: false,
            closure: None,
        }));

        let state_clone = shared_state.clone();
        let closure = Closure::wrap(Box::new(move || {
            let mut state = state_clone.borrow_mut();
            state.completed = true;
            if let Some(waker) = state.waker.take() {
                waker.wake();
            }
        }) as Box<dyn FnMut()>);
        set_timeout(&closure, duration);

        shared_state.borrow_mut().closure = Some(closure);
        SleepFuture { shared_state }
    }
}
panicked at 'already mutably borrowed: BorrowError'
The error makes sense, but how should I go about resolving it?
My current solution is to have the closures use try_borrow or try_borrow_mut, and if unsuccessful, use setTimeout for an arbitrary amount of time before attempting to borrow again.
Think about this problem independently of Rust's borrow semantics. You have a long-running operation that's updating some shared state.
How would you do it if you were using threads? You would put the shared state behind a lock. RefCell is like a lock except that you can't block on unlocking it — but you can emulate blocking by using some kind of message-passing to wake up the reader.
How would you do it if you were using pure JavaScript? You don't automatically have anything like RefCell, so either:
The state can be safely read while the operation is still ongoing (in a concurrency-not-parallelism sense): in this case, emulate that by not holding a single RefMut (result of borrow_mut()) alive across an await boundary.
The state is not safe to be read: you'd either write something lock-like as described above, or perhaps arrange so that it's only written once when the operation is done, and until then, the long-running operation has its own private state not shared with the rest of the application (so there can be no BorrowError conflicts).
Think about what your application actually needs and pick a suitable solution. Implementing any of these solutions will most likely involve having additional interior-mutable objects used for communication.
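As a concrete example of the first option, the increment closure from the question can be rearranged so that the time-consuming future runs before the mutable borrow is taken (a sketch based on the code above):

let counter_clone = counter.clone();
let increment_closure = Closure::wrap(Box::new(move || {
    let counter_clone = counter_clone.clone();
    wasm_bindgen_futures::spawn_local(async move {
        // Do the time-consuming work first, without holding any borrow...
        SleepFuture::new(5000).await;
        // ...then take the mutable borrow only for the short synchronous update,
        // so the logging closure can never observe it held across an await.
        counter_clone.borrow_mut().0 += 1;
    });
}) as Box<dyn FnMut()>);
set_timeout(&increment_closure, 3000);
increment_closure.forget();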
Is it possible to borrow a mutable reference to the contents of a HashMap and use it for an extended period of time without impeding read-only access?
This is for trying to maintain a window into the state of various components in a system that are running independently (via Tokio) and need to be monitored.
As an example:
use std::sync::Arc;
use std::collections::HashMap;

struct Container {
    running: bool,
    count: u8
}

impl Container {
    fn run(&mut self) {
        for i in 1..100 {
            self.count = i;
        }
        self.running = false;
    }
}

fn main() {
    let mut map = HashMap::new();
    let mut container = Arc::new(
        Box::new(
            Container {
                running: true,
                count: 0
            }
        )
    );
    map.insert(0, container.clone());
    container.run();
    map.remove(&0);
}
This is for a Tokio-driven program where multiple operations will be happening asynchronously and visibility into the overall state of them is required.
There's this question where a temporary mutable reference can be borrowed, but that won't work as the run() function needs time to complete.
Based on suggestions from Jmb and Stargateur, I reworked this to use an RwLock internally. The internals could be tidied up by adding methods that manipulate them, but the basics are here:
use std::sync::Arc;
use std::sync::RwLock;
use std::collections::HashMap;

#[derive(Debug)]
struct ContainerState {
    running: bool,
    count: u8
}

struct Container {
    state: Arc<RwLock<ContainerState>>
}

impl Container {
    fn run(&self) {
        for i in 1..100 {
            let mut state = self.state.write().unwrap();
            state.count = i;
        }
        {
            let mut state = self.state.write().unwrap();
            state.running = false;
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    let state = Arc::new(
        RwLock::new(
            ContainerState {
                running: true,
                count: 0
            }
        )
    );
    map.insert(0, state);
    let container = Container {
        state: map[&0].clone()
    };
    container.run();
    println!("Final state: {:?}", map[&0]);
    map.remove(&0);
}
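Since the goal is monitoring, any other task that holds a clone of the Arc<RwLock<ContainerState>> can take a short read lock while run() is in progress, because run() only holds its write lock briefly inside the loop. A hypothetical monitoring snippet:

// `state` is a clone of the Arc<RwLock<ContainerState>> stored in the map.
let snapshot = state.read().unwrap();
println!("running: {}, count: {}", snapshot.running, snapshot.count);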
The key thing I was missing is that you can have either one mutable reference or multiple immutable references, but not both at the same time; my initial understanding was that these two limits were independent.
I am relatively new to Rust and wanted to learn it by building a game with Piston.
There I have a Renderer struct that renders the scene and a main loop that handles game-logic events. They both need a mutable borrow and I understand that multiple borrows can lead to undefined behaviour, but I don't understand how it can in my case, since I always use the object in different scopes and in the same thread.
I had a look at this question and feel like I am supposed to use Rc and/or RefCell to solve this, but this seems too extreme. I also don't think I need ownership in the renderer (or do I, then why?), because the renderer lives shorter than the main loop.
Another solution would be to pass the mutable object every time it is needed, but in the real case that would apply to almost every function. I come from Java/C++, where the equivalent of the code below worked.
struct Renderer<'a> {
    w: &'a mut Window,
}

impl<'a> Renderer<'a> {
    fn render(&mut self) {
        let w = &mut self.w;
        // complex render operation needing the window mutably
    }
}

struct Window {
    //...
}

impl Window {
    fn poll_event(&mut self) {
        //some internal code of piston
    }
}

fn main() {
    let mut w = Window {};
    let mut renderer = Renderer { w: &mut w };
    loop {
        {
            //first event handling
            w.poll_event();
        }
        {
            //AFTERWARDS rendering
            renderer.render();
        }
    }
}
With the error:
error[E0499]: cannot borrow `w` as mutable more than once at a time
--> src/main.rs:29:13
|
24 | let mut renderer = Renderer { w: &mut w };
| - first mutable borrow occurs here
...
29 | w.poll_event();
| ^ second mutable borrow occurs here
...
36 | }
| - first borrow ends here
My Rc approach compiles fine, but seems over the top:
use std::rc::Rc;

struct Renderer {
    w: Rc<Window>,
}

impl Renderer {
    fn render(&mut self) {
        let w = Rc::get_mut(&mut self.w).unwrap();
        // complex render operation needing the window mutably
    }
}

struct Window {
    //...
}

impl Window {
    fn poll_event(&mut self) {
        //some internal code of piston
    }
}

fn main() {
    let mut w = Rc::new(Window {});
    let mut renderer = Renderer { w: w.clone() };
    loop {
        {
            //first event handling
            Rc::get_mut(&mut w).unwrap().poll_event();
        }
        {
            //AFTERWARDS rendering
            renderer.render();
        }
    }
}
All I actually need to do is delay the &mut. This works with Rc, but I don't need ownership, so all the unwrap stuff will never fail since I (can) use it in different scopes.
Can this predicament be resolved or am I forced to use Rc in this case?
If there is a more "Rusty" way of doing this, please let me know.
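For reference, a rough sketch of the "pass the mutable object every time it is needed" alternative mentioned above, where the mutable borrow of the window only lasts for the duration of each render() call (this reuses the Window type from the example and is just one possible shape):

struct Renderer {
    // rendering state that does not borrow the window
}

impl Renderer {
    fn render(&mut self, w: &mut Window) {
        // complex render operation needing the window mutably,
        // but only for the duration of this call
    }
}

fn main() {
    let mut w = Window {};
    let mut renderer = Renderer {};
    loop {
        // first event handling
        w.poll_event();
        // AFTERWARDS rendering; `w` is borrowed mutably only for this call
        renderer.render(&mut w);
    }
}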
I'm trying to parallelize an algorithm I have. This is a sketch of how I would write it in C++:
void thread_func(std::vector<int>& results, int threadid) {
    results[threadid] = threadid;
}

std::vector<int> foo() {
    std::vector<int> results(4);
    for(int i = 0; i < 4; i++)
    {
        spawn_thread(thread_func, results, i);
    }
    join_threads();
    return results;
}
The point here is that each thread has a reference to a shared, mutable object that it does not own. It seems like this is difficult to do in Rust. Should I try to cobble it together in terms of (and I'm guessing here) Mutex, Cell and &mut, or is there a better pattern I should follow?
The proper way is to use Arc<Mutex<...>> or, for example, Arc<RwLock<...>>. Arc is a shared-ownership, concurrency-safe pointer to immutable data, and Mutex/RwLock introduce synchronized interior mutability. Your code would then look like this:
use std::sync::{Arc, Mutex};
use std::thread;

fn thread_func(results: Arc<Mutex<Vec<i32>>>, thread_id: i32) {
    let mut results = results.lock().unwrap();
    results[thread_id as usize] = thread_id;
}

fn foo() -> Arc<Mutex<Vec<i32>>> {
    let results = Arc::new(Mutex::new(vec![0; 4]));
    let guards: Vec<_> = (0..4).map(|i| {
        let results = results.clone();
        thread::spawn(move || thread_func(results, i))
    }).collect();
    for guard in guards {
        guard.join();
    }
    results
}
As written, this returns Arc<Mutex<Vec<i32>>> from the function. Once all of the spawned threads have been joined, though, you can recover the plain Vec<i32> by unwrapping the Arc and the Mutex, or you can simply clone the vector before returning.
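A sketch of that unwrapping (Arc::try_unwrap succeeds here because every clone of the Arc is dropped once its thread finishes and has been joined):

fn foo() -> Vec<i32> {
    let results = Arc::new(Mutex::new(vec![0; 4]));
    let guards: Vec<_> = (0..4).map(|i| {
        let results = results.clone();
        thread::spawn(move || thread_func(results, i))
    }).collect();
    for guard in guards {
        guard.join().unwrap();
    }
    // All other handles are gone, so take the Vec back out of the Arc and the Mutex.
    Arc::try_unwrap(results).unwrap().into_inner().unwrap()
}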
However, using a crate like scoped_threadpool (whose approach could only recently be made sound; something like it will probably make it into the standard library to replace the now-deprecated thread::scoped() function, which is unsafe), it can be done in a much nicer way:
extern crate scoped_threadpool;

use scoped_threadpool::Pool;

fn thread_func(result: &mut i32, thread_id: i32) {
    *result = thread_id;
}

fn foo() -> Vec<i32> {
    let mut results = vec![0; 4];
    let mut pool = Pool::new(4);
    pool.scoped(|scope| {
        for (i, e) in results.iter_mut().enumerate() {
            scope.execute(move || thread_func(e, i as i32));
        }
    });
    results
}
If your thread_func needs to access the whole vector, however, you can't get away without synchronization, so you would need a Mutex, and you would still get the unwrapping problem:
extern crate scoped_threadpool;

use std::sync::Mutex;
use scoped_threadpool::Pool;

fn thread_func(results: &Mutex<Vec<i32>>, thread_id: i32) {
    let mut results = results.lock().unwrap();
    results[thread_id as usize] = thread_id;
}

fn foo() -> Vec<i32> {
    let results = Mutex::new(vec![0; 4]);
    let mut pool = Pool::new(4);
    pool.scoped(|scope| {
        // Take a shared reference so each `move` closure captures the
        // (Copy) reference rather than moving the Mutex itself.
        let results = &results;
        for i in 0..4 {
            scope.execute(move || thread_func(results, i));
        }
    });
    results.lock().unwrap().clone()
}
But at least you don't need any Arcs here. Also, the execute() method is unsafe if you use a stable compiler that lacks the corresponding fix needed to make it safe; according to the crate's build script, it is safe on all compiler versions greater than 1.4.0.
I have an EventRegistry which people can use to register event listeners. It then calls the appropriate listeners when an event is broadcast. But, when I try to multithread it, it doesn't compile. How would I get this code working?
use std::collections::HashMap;
use std::thread;

struct EventRegistry<'a> {
    event_listeners: HashMap<&'a str, Vec<Box<Fn() + Sync>>>
}

impl<'a> EventRegistry<'a> {
    fn new() -> EventRegistry<'a> {
        EventRegistry {
            event_listeners: HashMap::new()
        }
    }

    fn add_event_listener(&mut self, event: &'a str, listener: Box<Fn() + Sync>) {
        match self.event_listeners.get_mut(event) {
            Some(listeners) => {
                listeners.push(listener);
                return
            },
            None => {}
        };
        let mut listeners = Vec::with_capacity(1);
        listeners.push(listener);
        self.event_listeners.insert(event, listeners);
    }

    fn broadcast_event(&mut self, event: &str) {
        match self.event_listeners.get(event) {
            Some(listeners) => {
                for listener in listeners.iter() {
                    let _ = thread::spawn(|| {
                        listener();
                    });
                }
            }
            None => {}
        }
    }
}

fn main() {
    let mut main_registry = EventRegistry::new();
    main_registry.add_event_listener("player_move", Box::new(|| {
        println!("Hey, look, the player moved!");
    }));
    main_registry.broadcast_event("player_move");
}
Playpen (not sure if it's minimal, but it produces the error)
If I use thread::scoped, it works too, but that's unstable, and I think it only works because it immediately joins back to the main thread.
Updated question
I meant "call them in their own thread"
The easiest thing to do is avoid the Fn* traits, if possible. If you know that you are only using full functions, then it's straightforward:
use std::thread;

fn a() { println!("a"); }
fn b() { println!("b"); }

fn main() {
    let fns = vec![a as fn(), b as fn()];

    for &f in &fns {
        thread::spawn(move || f());
    }

    thread::sleep_ms(500);
}
If you can't use that for some reason (like you want to accept closures), then you will need to be a bit more explicit and use Arc:
use std::thread;
use std::sync::Arc;

fn a() { println!("a"); }
fn b() { println!("b"); }

fn main() {
    let fns = vec![
        Arc::new(Box::new(a) as Box<Fn() + Send + Sync>),
        Arc::new(Box::new(b) as Box<Fn() + Send + Sync>),
    ];

    for f in &fns {
        let my_f = f.clone();
        thread::spawn(move || my_f());
    }

    thread::sleep_ms(500);
}
Here, we can create a reference-counted trait object. We can clone the trait object (increasing the reference count) each time we spawn a new thread. Each thread gets its own reference to the trait object.
If I use thread::scoped, it works too
thread::scoped is pretty awesome; it's really unfortunate that it had to be marked unstable because of a memory-safety hole: the JoinGuard it returns can be leaked (for example with mem::forget), which would let the thread keep running after the data it borrows has been freed.
One of the benefits of a scoped thread is that the thread is guaranteed to end by a specific time: when the JoinGuard is dropped. That means that scoped threads are allowed to contain non-'static references, so long as those references last longer than the thread!
A spawned thread has no such guarantee about how long it lives; it may live "forever". Any references it takes must therefore also live "forever", hence the 'static restriction.
This serves to explain your original problem. You have a vector with a non-'static lifetime, but you are handing references that point into that vector to the thread. If the vector were to be deallocated before the thread exited, you could attempt to access undefined memory, which leads to crashes in C or C++ programs. This is Rust helping you out!
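As a small, self-contained illustration of that restriction (separate from the event-registry code), moving ownership of the data into the spawned thread is one way to satisfy the 'static bound, because the thread then owns everything it touches:

use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // Without `move`, the closure would borrow `data`, and the compiler rejects
    // that because the spawned thread might outlive the stack frame owning it.
    // Moving the vector in gives the thread ownership instead of a borrow.
    let handle = thread::spawn(move || {
        println!("{:?}", data);
    });
    handle.join().unwrap();
}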
Original question
Call functions in vector without consuming them
The answer is that you just call them:
fn a() { println!("a"); }
fn b() { println!("b"); }

fn main() {
    let fns = vec![Box::new(a) as Box<Fn()>, Box::new(b) as Box<Fn()>];

    fns[0]();
    fns[1]();
    fns[0]();
    fns[1]();
}
Playpen