FnOnce inside Enum: cannot move out of borrowed content - rust

I'm new to Rust and still have some problems with ownership/borrowing. In this case, I want to store a FnOnce in an enum and then call it later from another thread. I tried a lot of different variants but always got stuck somewhere. Here is a reduced version of what I currently have:
#![feature(fnbox)]

use std::sync::{Arc, Mutex};
use std::boxed::{Box, FnBox};

enum Foo<T> {
    DoNothing,
    CallFunction(Box<FnBox(&T) + Send>)
}

struct FooInMutex<T> {
    foo: Arc<Mutex<Foo<T>>>
}

impl<T> FooInMutex<T> {
    fn put_fn(&self, f: Box<FnBox(&T) + Send>) {
        let mut foo = self.foo.lock().unwrap();
        let mut new_foo: Foo<T>;
        match *foo {
            Foo::DoNothing =>
                new_foo = Foo::CallFunction(f),
            _ =>
                new_foo = Foo::DoNothing
        }
        *foo = new_foo;
    }

    fn do_it(&self, t: T) {
        let mut foo = self.foo.lock().unwrap();
        let mut new_foo: Foo<T>;
        match *foo {
            Foo::CallFunction(ref mut f) => {
                //f(&t)
                f.call_box((&t,));
                new_foo = Foo::DoNothing;
            }
            _ =>
                panic!("...")
        }
        *foo = new_foo;
    }
}

#[test]
fn it_works() {
    let x = FooInMutex { foo: Arc::new(Mutex::new(Foo::DoNothing)) };
    x.put_fn(Box::new(|_| panic!("foo")));
    x.do_it(0);
}
I use "rustc 1.4.0-nightly (e35fd7481 2015-08-17)".
The error message:
src/lib.rs:35:17: 35:18 error: cannot move out of borrowed content
src/lib.rs:35 f.call_box((&t,));
^
As I understand it, f is owned by the enum in the mutex, and I only borrow it via *foo. But to call f, I need to move it out. How do I do that? Or what else do I have to change to make this example work?

std::mem::replace is what you should use here, like this:
use std::mem;

…

fn do_it(&self, t: T) {
    match mem::replace(&mut *self.foo.lock().unwrap(), Foo::DoNothing) {
        Foo::CallFunction(f) => {
            f.call_box((&t,));
        }
        _ => panic!("...")
    }
}
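On newer compilers the unstable fnbox feature should no longer be needed: since Rust 1.35 a boxed FnOnce is directly callable, so (assuming a recent stable toolchain) the enum can store a Box<dyn FnOnce(&T) + Send> and the same mem::replace pattern looks roughly like this:

use std::mem;
use std::sync::{Arc, Mutex};

// Sketch for modern stable Rust: no FnBox, the boxed closure is called directly.
enum Foo<T> {
    DoNothing,
    CallFunction(Box<dyn FnOnce(&T) + Send>),
}

struct FooInMutex<T> {
    foo: Arc<Mutex<Foo<T>>>,
}

impl<T> FooInMutex<T> {
    fn do_it(&self, t: T) {
        // Swap the stored value out of the mutex so we own it and can consume it.
        match mem::replace(&mut *self.foo.lock().unwrap(), Foo::DoNothing) {
            Foo::CallFunction(f) => f(&t),
            _ => panic!("..."),
        }
    }
}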

Related

How do I use PickleDB with Rocket/Juniper Context?

I'm trying to write a Rocket / Juniper / Rust based GraphQL Server using PickleDB - an in-memory key/value store.
The PickleDb instance is created/loaded at startup and given to Rocket to manage:
fn rocket() -> Rocket {
    let pickle_path = var_os(String::from("PICKLE_PATH")).unwrap_or(OsString::from("pickle.db"));
    let pickle_db_dump_policy = PickleDbDumpPolicy::PeriodicDump(Duration::from_secs(120));
    let pickle_serialization_method = SerializationMethod::Bin;
    let pickle_db: PickleDb = match Path::new(&pickle_path).exists() {
        false => PickleDb::new(pickle_path, pickle_db_dump_policy, pickle_serialization_method),
        true => PickleDb::load(pickle_path, pickle_db_dump_policy, pickle_serialization_method).unwrap(),
    };
    rocket::ignite()
        .manage(Schema::new(Query, Mutation))
        .manage(pickle_db)
        .mount(
            "/",
            routes![graphiql, get_graphql_handler, post_graphql_handler],
        )
}
And I want to retrieve the PickleDb instance from the Rocket State in my Guard:
pub struct Context {
    pickle_db: PickleDb,
}

impl juniper::Context for Context {}

impl<'a, 'r> FromRequest<'a, 'r> for Context {
    type Error = ();

    fn from_request(_request: &'a Request<'r>) -> request::Outcome<Context, ()> {
        let pickle_db = _request.guard::<State<PickleDb>>()?.inner();
        Outcome::Success(Context { pickle_db })
    }
}
This does not work because the State only gives me a reference:
26 | Outcome::Success(Context { pickle_db })
| ^^^^^^^^^ expected struct `pickledb::pickledb::PickleDb`, found `&pickledb::pickledb::PickleDb`
When I change my Context struct to contain a reference I get lifetime issues which I'm not yet familiar with:
15 | pickle_db: &PickleDb,
| ^ expected named lifetime parameter
I tried using 'static, which makes rustc quite unhappy, and I tried to use the request lifetime 'r from FromRequest, but that does not really work either...
How do I get this to work? And since I'm quite new to Rust: is this even the right way to do things?
I finally have a solution, although the need for unsafe indicates it is sub-optimal :)
#![allow(unsafe_code)]

use pickledb::{PickleDb, PickleDbDumpPolicy, SerializationMethod};
use serde::de::DeserializeOwned;
use serde::Serialize;
use std::env;
use std::path::Path;
use std::time::Duration;

pub static mut PICKLE_DB: Option<PickleDb> = None;

pub fn cache_init() {
    let pickle_path = env::var(String::from("PICKLE_PATH")).unwrap_or(String::from("pickle.db"));
    let pickle_db_dump_policy = PickleDbDumpPolicy::PeriodicDump(Duration::from_secs(120));
    let pickle_serialization_method = SerializationMethod::Json;
    let pickle_db = match Path::new(&pickle_path).exists() {
        false => PickleDb::new(
            pickle_path,
            pickle_db_dump_policy,
            pickle_serialization_method,
        ),
        true => PickleDb::load(
            pickle_path,
            pickle_db_dump_policy,
            pickle_serialization_method,
        )
        .unwrap(),
    };
    unsafe {
        PICKLE_DB = Some(pickle_db);
    }
}

pub fn cache_get<V>(key: &str) -> Option<V>
where
    V: DeserializeOwned + std::fmt::Debug,
{
    unsafe {
        let pickle_db = PICKLE_DB
            .as_ref()
            .expect("cache uninitialized - call cache_init()");
        pickle_db.get::<V>(key)
    }
}

pub fn cache_set<V>(key: &str, value: &V) -> Result<(), pickledb::error::Error>
where
    V: Serialize,
{
    unsafe {
        let pickle_db = PICKLE_DB
            .as_mut()
            .expect("cache uninitialized - call cache_init()");
        pickle_db.set::<V>(key, value)?;
        Ok(())
    }
}
This can simply be imported and used as expected, but I think I'll run into issues when the load gets too high...
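If the unsafe static becomes a problem, one direction to explore is keeping the global behind a Mutex so every access goes through safe locking. This is only a sketch, assuming the once_cell crate and that PickleDb is Send; it mirrors the PickleDb calls used above, and a cache_get counterpart would follow the same pattern:

use once_cell::sync::Lazy;
use pickledb::{PickleDb, PickleDbDumpPolicy, SerializationMethod};
use std::sync::Mutex;
use std::time::Duration;

// Global cache guarded by a Mutex instead of `static mut`
// (sketch: assumes once_cell and that PickleDb is Send).
static PICKLE_DB: Lazy<Mutex<PickleDb>> = Lazy::new(|| {
    Mutex::new(PickleDb::new(
        "pickle.db",
        PickleDbDumpPolicy::PeriodicDump(Duration::from_secs(120)),
        SerializationMethod::Json,
    ))
});

pub fn cache_set<V: serde::Serialize>(key: &str, value: &V) -> Result<(), pickledb::error::Error> {
    // Lock, write, and propagate any PickleDb error to the caller.
    PICKLE_DB.lock().unwrap().set(key, value)
}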

Rust: Implement AVL Tree and error: thread 'main' panicked at 'already borrowed: BorrowMutError'

I have the following tree structure:
use std::cell::RefCell;
use std::rc::Rc;
use std::cmp;
use std::cmp::Ordering;

type AVLTree<T> = Option<Rc<RefCell<TreeNode<T>>>>;

#[derive(Debug, PartialEq, Clone)]
struct TreeSet<T: Ord> {
    root: AVLTree<T>,
}

impl<T: Ord> TreeSet<T> {
    fn new() -> Self {
        Self {
            root: None
        }
    }

    fn insert(&mut self, value: T) -> bool {
        let current_tree = &mut self.root;

        while let Some(current_node) = current_tree {
            let node_key = &current_node.borrow().key;
            match node_key.cmp(&value) {
                Ordering::Less => { let current_tree = &mut current_node.borrow_mut().right; },
                Ordering::Equal => {
                    return false;
                }
                Ordering::Greater => { let current_tree = &mut current_node.borrow_mut().left; },
            }
        }

        *current_tree = Some(Rc::new(RefCell::new(TreeNode {
            key: value,
            left: None,
            right: None,
            parent: None
        })));

        true
    }
}

#[derive(Clone, Debug, PartialEq)]
struct TreeNode<T: Ord> {
    pub key: T,
    pub parent: AVLTree<T>,
    left: AVLTree<T>,
    right: AVLTree<T>,
}

fn main() {
    let mut new_avl_tree: TreeSet<u32> = TreeSet::new();
    new_avl_tree.insert(3);
    new_avl_tree.insert(5);
    println!("Tree: {:#?}", &new_avl_tree);
}
Building with cargo build is fine, but when I run cargo run, I get the error below:
thread 'main' panicked at 'already borrowed: BorrowMutError', src\libcore\result.rs:1165:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
error: process didn't exit successfully: `target\debug\avl-tree.exe` (exit code: 101)
If I just call insert(3), it works and my tree gets printed correctly. However, if I call insert(5) after insert(3), I get that error.
How do I fix that?
Manually implementing data structures such as linked lists, trees, and graphs is not a task for novices, because of the language's memory-safety rules. I suggest you read the Too Many Linked Lists tutorial, which discusses the right way to implement safe and unsafe linked lists in Rust.
Also read about name shadowing.
Your error is that inside the loop you try to borrow mutably something that is already borrowed immutably:
let node_key = &current_node.borrow().key; // borrow as immutable
match node_key.cmp(&value) {
    Ordering::Less => { let current_tree = &mut current_node.borrow_mut().right; }, // create a binding which is immediately dropped, and borrow as mutable
I also recommend reading the Rust book to learn Rust.
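As a tiny, self-contained illustration of the shadowing part (hypothetical values): the inner let creates a brand-new variable that vanishes at the end of its block, while the outer one is never reassigned:

fn main() {
    let x = 1;
    {
        let x = 2; // shadows the outer `x` only inside this block
        println!("inner: {}", x); // prints 2
    }
    println!("outer: {}", x); // still prints 1
}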
First let us correct your algorithm. The following lines are incorrect:
let current_tree = &mut current_node.borrow_mut().right;
...
let current_tree = &mut current_node.borrow_mut().left;
Neither reassigns current_tree; each creates a new (unused) binding (#Inline refers to this as name shadowing). Remove the let and make current_tree mut.
Now we get a compiler error: temporary value dropped while borrowed. The compiler error message probably misled you: it suggests using let to extend the lifetime, which would be right if you used the result in the same scope, but no let can extend a lifetime beyond its scope.
The problem is that you cannot pass a reference to a value owned by the loop body (such as current_node.borrow_mut().right) out of the loop. So it is better to use current_tree as an owned variable. Sadly this means that many clever tricks in your code will no longer work.
Another problem in the code is the multiple-borrow problem (your original runtime panic is about this). You cannot call borrow() and borrow_mut() on the same RefCell at the same time without a panic (that is the purpose of RefCell).
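The runtime half can be reproduced in isolation (a minimal, hypothetical example): the Ref returned by borrow() is still alive when borrow_mut() is called, so RefCell panics:

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    let shared = cell.borrow();        // immutable borrow, alive until the end of the scope
    let exclusive = cell.borrow_mut(); // panics: already borrowed: BorrowMutError
    println!("{} {}", shared, exclusive);
}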
So after finding the problems in your code, I got interested in how I would write the code. And now that it is written, I thought it would be fair to share it:
fn insert(&mut self, value: T) -> bool {
    if let None = self.root {
        self.root = TreeSet::root(value);
        return true;
    }

    let mut current_tree = self.root.clone();
    while let Some(current_node) = current_tree {
        let mut borrowed_node = current_node.borrow_mut();
        match borrowed_node.key.cmp(&value) {
            Ordering::Less => {
                if let Some(next_node) = &borrowed_node.right {
                    current_tree = Some(next_node.clone());
                } else {
                    borrowed_node.right = current_node.child(value);
                    return true;
                }
            }
            Ordering::Equal => {
                return false;
            }
            Ordering::Greater => {
                if let Some(next_node) = &borrowed_node.left {
                    current_tree = Some(next_node.clone());
                } else {
                    borrowed_node.left = current_node.child(value);
                    return true;
                }
            }
        };
    }
    true
}

//...

trait NewChild<T: Ord> {
    fn child(&self, value: T) -> AVLTree<T>;
}

impl<T: Ord> NewChild<T> for Rc<RefCell<TreeNode<T>>> {
    fn child(&self, value: T) -> AVLTree<T> {
        Some(Rc::new(RefCell::new(TreeNode {
            key: value,
            left: None,
            right: None,
            parent: Some(self.clone()),
        })))
    }
}
You will have to write the two methods child(value: T) and root(value: T) to make this compile (child is shown above; a sketch of root follows).
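A minimal sketch of root, assuming it is an associated function on TreeSet (which is how insert calls it above):

impl<T: Ord> TreeSet<T> {
    // Like child(), but the root node has no parent.
    fn root(value: T) -> AVLTree<T> {
        Some(Rc::new(RefCell::new(TreeNode {
            key: value,
            left: None,
            right: None,
            parent: None,
        })))
    }
}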

How can I return the combination of two borrowed RefCells?

I have a struct with two Vecs wrapped in RefCells. I want to have a method on that struct that combines the two vectors and returns them as a new RefCell or RefMut:
use std::cell::{RefCell, RefMut};

struct World {
    positions: RefCell<Vec<Option<Position>>>,
    velocities: RefCell<Vec<Option<Velocity>>>,
}

type Position = i32;
type Velocity = i32;

impl World {
    pub fn new() -> World {
        World {
            positions: RefCell::new(vec![Some(1), None, Some(2)]),
            velocities: RefCell::new(vec![None, None, Some(1)]),
        }
    }

    pub fn get_pos_vel(&self) -> RefMut<Vec<(Position, Velocity)>> {
        let mut poses = self.positions.borrow_mut();
        let mut vels = self.velocities.borrow_mut();
        poses
            .iter_mut()
            .zip(vels.iter_mut())
            .filter(|(e1, e2)| e1.is_some() && e2.is_some())
            .map(|(e1, e2)| (e1.unwrap(), e2.unwrap()))
            .for_each(|elem| println!("{:?}", elem));
    }
}

fn main() {
    let world = World::new();
    world.get_pos_vel();
}
How would I return the zipped contents of the vectors as a new RefCell? Is that possible?
I know there is RefMut::map() and I tried to nest two calls to map, but didn't succeed with that.
You want to be able to modify the positions and velocities. If these have to be stored in two separate RefCells, what about side-stepping the problem and using a callback to do the modification?
use std::cell::RefCell;

struct World {
    positions: RefCell<Vec<Option<Position>>>,
    velocities: RefCell<Vec<Option<Velocity>>>,
}

type Position = i32;
type Velocity = i32;

impl World {
    pub fn new() -> World {
        World {
            positions: RefCell::new(vec![Some(1), None, Some(2)]),
            velocities: RefCell::new(vec![None, None, Some(1)]),
        }
    }

    pub fn modify_pos_vel<F: FnMut(&mut Position, &mut Velocity)>(&self, mut f: F) {
        let mut poses = self.positions.borrow_mut();
        let mut vels = self.velocities.borrow_mut();
        poses
            .iter_mut()
            .zip(vels.iter_mut())
            .filter_map(|pair| match pair {
                (Some(e1), Some(e2)) => Some((e1, e2)),
                _ => None,
            })
            .for_each(|pair| f(pair.0, pair.1))
    }
}

fn main() {
    let world = World::new();
    world.modify_pos_vel(|position, velocity| {
        // Some modification goes here, for example:
        *position += *velocity;
    });
}
If you want to return a new Vec, then you don't need to wrap it in RefMut or RefCell:
Based on your code with filter and map
pub fn get_pos_vel(&self) -> Vec<(Position, Velocity)> {
    let mut poses = self.positions.borrow_mut();
    let mut vels = self.velocities.borrow_mut();
    poses.iter_mut()
        .zip(vels.iter_mut())
        .filter(|(e1, e2)| e1.is_some() && e2.is_some())
        .map(|(e1, e2)| (e1.unwrap(), e2.unwrap()))
        .collect()
}
Alternative with filter_map:
poses.iter_mut()
    .zip(vels.iter_mut())
    .filter_map(|pair| match pair {
        (Some(e1), Some(e2)) => Some((*e1, *e2)),
        _ => None,
    })
    .collect()
You can wrap it in RefCell with RefCell::new, if you really want to, but I would leave it up to the user of the function to wrap it in whatever they need.
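For example, at the call site (hypothetical usage), the caller can add the cell themselves if they need interior mutability:

let world = World::new();
// Wrap the freshly collected Vec only if interior mutability is actually needed.
let pos_vel = RefCell::new(world.get_pos_vel());
println!("{:?}", pos_vel.borrow());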

How do you convert an instance of generic T into a concrete instance in Rust?

I'm trying to implement this pattern:
use std::any::Any;
use std::fmt::Debug;

trait CommandHandler<TCommand> {
    fn execute(&self, data: TCommand);
}

#[derive(Debug)]
struct FooCommand {}

struct FooCommandHandler {}

impl CommandHandler<FooCommand> for FooCommandHandler {
    fn execute(&self, data: FooCommand) {
        println!("Foo");
    }
}

#[derive(Debug)]
struct BarCommand {}

struct BarCommandHandler {}

impl CommandHandler<BarCommand> for BarCommandHandler {
    fn execute(&self, data: BarCommand) {
        println!("Bar");
    }
}

fn execute<T>(command: T)
where
    T: Any + Debug,
{
    println!("Command: {:?}", command);
    match (&command as &Any).downcast_ref::<FooCommand>() {
        Some(c) => (FooCommandHandler {}).execute(c),
        None => {}
    };
    match (&command as &Any).downcast_ref::<BarCommand>() {
        Some(c) => (BarCommandHandler {}).execute(c),
        None => {}
    };
}

fn main() {
    (FooCommandHandler {}).execute(FooCommand {});
    (BarCommandHandler {}).execute(BarCommand {});
    execute(FooCommand {});
    execute(BarCommand {});
}
This doesn't work:
error[E0308]: mismatched types
--> src/main.rs:37:51
|
37 | Some(c) => (FooCommandHandler {}).execute(c),
| ^ expected struct `FooCommand`, found &FooCommand
|
= note: expected type `FooCommand`
found type `&FooCommand`
error[E0308]: mismatched types
--> src/main.rs:41:51
|
41 | Some(c) => (BarCommandHandler {}).execute(c),
| ^ expected struct `BarCommand`, found &BarCommand
|
= note: expected type `BarCommand`
found type `&BarCommand`
How can I implement the execute() method in a way that satisfies the following requirements?
The type XCommand should be totally naive of the XCommandHandler's that execute it.
Multiple implementations of CommandHandler<X> may exist.
The command handler receives (and consumes) the concrete command instance, not a reference to it (making duplicate dispatch of commands impossible).
In essence, I have a generic function fn foo<T>(v: T) and I wish to dispatch it to a number of concrete functions fn foo1(v: Foo), fn foo2(v: Bar); how do I do that?
Is transmute the only option?
Note that this is distinct from what Any::downcast_ref does, which is return an &Foo, not Foo from the generic value v.
You need to go via Box, like so:
fn execute<T>(command: T)
where
    T: Any + Debug,
{
    println!("Command: {:?}", command);

    let any: Box<Any> = Box::new(command);

    let any = match any.downcast() {
        Ok(c) => return (FooCommandHandler {}).execute(*c),
        Err(any) => any,
    };

    let any = match any.downcast() {
        Ok(c) => return (BarCommandHandler {}).execute(*c),
        Err(any) => any,
    };

    let _ = any; // avoid unused variable error
    panic!("could not downcast command");
}
"But I don't wanna use a Box!"
Just use Box.
"But it's an allocation! I've measured the above code and proven beyond a shadow of a doubt that it's a bottleneck!"
What? Really?
"You can't prove otherwise."
Oh fine. But I do not guarantee that this will work in all cases. This is treading into "blow yourself up" territory. Do not do this unless you know you need to:
fn execute<T>(command: T)
where
    T: Any + Debug,
{
    use std::any::TypeId;
    use std::mem;

    println!("Command: {:?}", command);

    macro_rules! do_cast {
        ($t:ty, $h:expr) => {
            if TypeId::of::<T>() == TypeId::of::<$t>() {
                let casted: $t = mem::transmute_copy(&command);
                mem::forget(command); // we CANNOT let command drop.
                $h.execute(casted);
                return;
            }
        };
    }

    unsafe {
        do_cast!(FooCommand, FooCommandHandler {});
        do_cast!(BarCommand, BarCommandHandler {});
    }

    panic!("could not downcast command");
}
Just as a quick summary of the accepted answer:
Where &Any only has:
pub fn downcast_ref<T>(&self) -> Option<&T> where T: Any
Box<Any> implements:
pub fn downcast<T>(self) -> Result<Box<T>, Box<Any + 'static>> where T: Any
However, for complicated reasons, the documentation is on Box not on Any.
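A small illustration of the difference, using plain values and the modern dyn syntax (not tied to the command types above):

use std::any::Any;

fn main() {
    let boxed: Box<dyn Any> = Box::new(42i32);

    // Through &dyn Any you only ever get a reference back.
    assert_eq!(boxed.downcast_ref::<i32>(), Some(&42));

    // Box<dyn Any>::downcast consumes the box and returns ownership of the value.
    let owned: Box<i32> = boxed.downcast::<i32>().unwrap();
    assert_eq!(*owned, 42);
}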

"Registering" trait implementations + factory method for trait objects

Say we want to be able to switch an object's implementation at runtime; we'd do something like this:
pub trait Methods {
    fn func(&self);
}

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}

pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}

pub struct Object<'a> {
    methods: &'a (Methods + 'a),
}

fn main() {
    let methods: [&Methods; 2] = [&Methods_0, &Methods_1];
    let mut obj = Object { methods: methods[0] };
    obj.methods.func();
    obj.methods = methods[1];
    obj.methods.func();
}
Now, what if there are hundreds of such implementations? E.g. imagine implementations of cards for a collectible card game, where every card does something completely different and is hard to generalize; or imagine implementations of opcodes for a huge state machine. Sure, you can argue that a different design pattern could be used -- but that's not the point of this question...
I wonder if there is any way for these impl structs to somehow "register" themselves so they can be looked up later by a factory method? I would be happy to end up with a magical macro or even a compiler plugin to accomplish that.
Say, in D you can use templates to register the implementations -- and if you can't for some reason, you can always inspect modules at compile time and generate new code via mixins; there are also user-defined attributes that can help with this. In Python, you would normally use a metaclass so that every time a new child class is created, a reference to it is stored in the metaclass's registry, which allows you to look up implementations by name or parameter; this can also be done via decorators if the implementations are simple functions.
Ideally, in the example above you would be able to create Object as
Object::new(0)
where the value 0 is only known at runtime, and it would magically return an Object { methods: &Methods_0 }. The body of new() should not have the implementations hard-coded like "methods: [&Methods; 2] = [&Methods_0, &Methods_1]"; instead they should somehow be discovered automatically.
So, this is probably extremely buggy, but it works as a proof of concept.
It is possible to use Cargo's code-generation support to do the introspection at compile time, by parsing (not exactly parsing in this case, but you get the idea) the present implementations and generating the boilerplate necessary to make Object::new() work.
The code is pretty convoluted and has no error handling whatsoever, but it works.
Tested on rustc 1.0.0-dev (2c0535421 2015-02-05 15:22:48 +0000)
(See on github)
src/main.rs:
pub mod implementations;

mod generated_glue {
    include!(concat!(env!("OUT_DIR"), "/generated_glue.rs"));
}

use generated_glue::Object;

pub trait Methods {
    fn func(&self);
}

pub struct Methods_2;
impl Methods for Methods_2 {
    fn func(&self) {
        println!("baz");
    }
}

fn main() {
    Object::new(2).func();
}
src/implementations.rs:
use super::Methods;

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}

pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}
build.rs:
#![feature(core, unicode, path, io, env)]

use std::env;
use std::old_io::{fs, File, BufferedReader};
use std::collections::HashMap;

fn main() {
    let target_dir = Path::new(env::var_string("OUT_DIR").unwrap());
    let mut target_file = File::create(&target_dir.join("generated_glue.rs")).unwrap();

    let source_code_path = Path::new(file!()).join_many(&["..", "src/"]);
    let source_files = fs::readdir(&source_code_path).unwrap().into_iter()
        .filter(|path| {
            match path.str_components().last() {
                Some(Some(filename)) => filename.split('.').last() == Some("rs"),
                _ => false
            }
        });

    let mut implementations = HashMap::new();
    for source_file_path in source_files {
        let relative_path = source_file_path.path_relative_from(&source_code_path).unwrap();
        let source_file_name = relative_path.as_str().unwrap();
        implementations.insert(source_file_name.to_string(), vec![]);
        let mut file_implementations = &mut implementations[*source_file_name];

        let mut source_file = BufferedReader::new(File::open(&source_file_path).unwrap());
        for line in source_file.lines() {
            let line_str = match line {
                Ok(line_str) => line_str,
                Err(_) => break,
            };

            if line_str.starts_with("impl Methods for Methods_") {
                const PREFIX_LEN: usize = 25;
                let number_len = line_str[PREFIX_LEN..].chars().take_while(|chr| {
                    chr.is_digit(10)
                }).count();
                let number: i32 = line_str[PREFIX_LEN..(PREFIX_LEN + number_len)].parse().unwrap();
                file_implementations.push(number);
            }
        }
    }

    writeln!(&mut target_file, "use super::Methods;").unwrap();

    for (source_file_name, impls) in &implementations {
        let module_name = match source_file_name.split('.').next() {
            Some("main") => "super",
            Some(name) => name,
            None => panic!(),
        };

        for impl_number in impls {
            writeln!(&mut target_file, "use {}::Methods_{};", module_name, impl_number).unwrap();
        }
    }

    let all_impls = implementations.values().flat_map(|impls| impls.iter());

    writeln!(&mut target_file, "
pub struct Object;

impl Object {{
    pub fn new(impl_number: i32) -> Box<Methods + 'static> {{
        match impl_number {{").unwrap();

    for impl_number in all_impls {
        writeln!(&mut target_file,
                 "            {} => Box::new(Methods_{}),", impl_number, impl_number).unwrap();
    }

    writeln!(&mut target_file, "
            _ => panic!(\"Unknown impl number: {{}}\", impl_number),
        }}
    }}
}}").unwrap();
}
The generated code:
use super::Methods;
use super::Methods_2;
use implementations::Methods_0;
use implementations::Methods_1;

pub struct Object;

impl Object {
    pub fn new(impl_number: i32) -> Box<Methods + 'static> {
        match impl_number {
            2 => Box::new(Methods_2),
            0 => Box::new(Methods_0),
            1 => Box::new(Methods_1),
            _ => panic!("Unknown impl number: {}", impl_number),
        }
    }
}
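For completeness: on modern Rust the same factory can also be kept entirely at runtime, without a build script, at the cost of one explicit registration line per implementation (crates like inventory or linkme can remove even that line, but their APIs are not shown here). A minimal std-only sketch:

#![allow(non_camel_case_types)]

use std::collections::HashMap;

pub trait Methods {
    fn func(&self);
}

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) { println!("foo"); }
}

pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) { println!("bar"); }
}

// Constructor functions that erase the concrete type behind the trait object.
fn make_methods_0() -> Box<dyn Methods> { Box::new(Methods_0) }
fn make_methods_1() -> Box<dyn Methods> { Box::new(Methods_1) }

// A runtime registry: implementation number -> constructor.
struct Registry {
    factories: HashMap<i32, fn() -> Box<dyn Methods>>,
}

impl Registry {
    fn new() -> Self {
        let mut factories: HashMap<i32, fn() -> Box<dyn Methods>> = HashMap::new();
        // The remaining hard-coded part: each implementation is registered once.
        factories.insert(0, make_methods_0);
        factories.insert(1, make_methods_1);
        Registry { factories }
    }

    fn create(&self, impl_number: i32) -> Box<dyn Methods> {
        // Panics on an unknown number, just like the generated match above.
        (self.factories[&impl_number])()
    }
}

fn main() {
    let registry = Registry::new();
    registry.create(0).func();
    registry.create(1).func();
}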
