"Registering" trait implementations + factory method for trait objects - metaprogramming

Say we want to be able to switch an object's implementation at runtime; we'd do something like this:
pub trait Methods {
    fn func(&self);
}
pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}
pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}
pub struct Object<'a> {
    methods: &'a (Methods + 'a),
}
fn main() {
    let methods: [&Methods; 2] = [&Methods_0, &Methods_1];
    let mut obj = Object { methods: methods[0] };
    obj.methods.func();
    obj.methods = methods[1];
    obj.methods.func();
}
Now, what if there are hundreds of such implementations? E.g. imagine implementations of cards for a collectible card game, where every card does something completely different and is hard to generalize; or imagine implementations of opcodes for a huge state machine. Sure, you can argue that a different design pattern could be used -- but that's not the point of this question...
I wonder whether there is any way for these impl structs to somehow "register" themselves so they can be looked up later by a factory method? I would be happy to end up with a magical macro or even a plugin to accomplish that.
Say, in D you can use templates to register the implementations -- and if you can't for some reason, you can always inspect modules at compile time and generate new code via mixins; there are also user-defined attributes that can help with this. In Python, you would normally use a metaclass so that every time a new child class is created, a reference to it is stored in the metaclass's registry, which lets you look up implementations by name or parameter; this can also be done via decorators if the implementations are simple functions.
Ideally, in the example above you would be able to create Object as
Object::new(0)
where the value 0 is only known at runtime. It would magically return an Object { methods: &Methods_0 }, and the body of new() would not hard-code the implementations like "methods: [&Methods; 2] = [&Methods_0, &Methods_1]"; instead the list should somehow be inferred automatically.

So, this is probably extremely buggy, but it works as a proof of concept.
It is possible to use Cargo's code-generation support (a build script) to do the introspection at compile time, by parsing (well, not exactly parsing in this case, but you get the idea) the present implementations and generating the boilerplate necessary to make Object::new() work.
The code is pretty convoluted and has no error handling whatsoever, but it works.
Tested on rustc 1.0.0-dev (2c0535421 2015-02-05 15:22:48 +0000)
(See on github)
src/main.rs:
pub mod implementations;
mod generated_glue {
    include!(concat!(env!("OUT_DIR"), "/generated_glue.rs"));
}
use generated_glue::Object;
pub trait Methods {
    fn func(&self);
}
pub struct Methods_2;
impl Methods for Methods_2 {
    fn func(&self) {
        println!("baz");
    }
}
fn main() {
    Object::new(2).func();
}
src/implementations.rs:
use super::Methods;
pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}
pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}
build.rs:
#![feature(core, unicode, path, io, env)]

use std::env;
use std::old_io::{fs, File, BufferedReader};
use std::collections::HashMap;

fn main() {
    let target_dir = Path::new(env::var_string("OUT_DIR").unwrap());
    let mut target_file = File::create(&target_dir.join("generated_glue.rs")).unwrap();
    let source_code_path = Path::new(file!()).join_many(&["..", "src/"]);
    let source_files = fs::readdir(&source_code_path).unwrap().into_iter()
        .filter(|path| {
            match path.str_components().last() {
                Some(Some(filename)) => filename.split('.').last() == Some("rs"),
                _ => false
            }
        });
    let mut implementations = HashMap::new();
    for source_file_path in source_files {
        let relative_path = source_file_path.path_relative_from(&source_code_path).unwrap();
        let source_file_name = relative_path.as_str().unwrap();
        implementations.insert(source_file_name.to_string(), vec![]);
        let mut file_implementations = &mut implementations[*source_file_name];
        let mut source_file = BufferedReader::new(File::open(&source_file_path).unwrap());
        for line in source_file.lines() {
            let line_str = match line {
                Ok(line_str) => line_str,
                Err(_) => break,
            };
            if line_str.starts_with("impl Methods for Methods_") {
                const PREFIX_LEN: usize = 25;
                let number_len = line_str[PREFIX_LEN..].chars().take_while(|chr| {
                    chr.is_digit(10)
                }).count();
                let number: i32 = line_str[PREFIX_LEN..(PREFIX_LEN + number_len)].parse().unwrap();
                file_implementations.push(number);
            }
        }
    }
    writeln!(&mut target_file, "use super::Methods;").unwrap();
    for (source_file_name, impls) in &implementations {
        let module_name = match source_file_name.split('.').next() {
            Some("main") => "super",
            Some(name) => name,
            None => panic!(),
        };
        for impl_number in impls {
            writeln!(&mut target_file, "use {}::Methods_{};", module_name, impl_number).unwrap();
        }
    }
    let all_impls = implementations.values().flat_map(|impls| impls.iter());
    writeln!(&mut target_file, "
pub struct Object;
impl Object {{
    pub fn new(impl_number: i32) -> Box<Methods + 'static> {{
        match impl_number {{
").unwrap();
    for impl_number in all_impls {
        writeln!(&mut target_file,
                 "            {} => Box::new(Methods_{}),", impl_number, impl_number).unwrap();
    }
    writeln!(&mut target_file, "
            _ => panic!(\"Unknown impl number: {{}}\", impl_number),
        }}
    }}
}}").unwrap();
}
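For the build script to run at all, Cargo.toml has to point at it. A minimal manifest sketch for that era (package name and version are placeholders, not from the original project):
[package]
name = "register-impls"   # placeholder; not the original project's name
version = "0.0.1"
build = "build.rs"        # this line is what makes Cargo compile and run build.rs and set OUT_DIR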
The generated code:
use super::Methods;
use super::Methods_2;
use implementations::Methods_0;
use implementations::Methods_1;

pub struct Object;
impl Object {
    pub fn new(impl_number: i32) -> Box<Methods + 'static> {
        match impl_number {
            2 => Box::new(Methods_2),
            0 => Box::new(Methods_0),
            1 => Box::new(Methods_1),
            _ => panic!("Unknown impl number: {}", impl_number),
        }
    }
}
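A present-day footnote (not part of the original answer): the "impls register themselves, a factory looks them up" pattern is nowadays usually handled by dtolnay's inventory crate (or linkme's distributed slices) rather than a hand-written build script. A rough sketch, with the inventory API written from memory of its documentation, so treat the details as approximate:
pub trait Methods {
    fn func(&self);
}

// Each implementation contributes an entry: a key plus a constructor.
pub struct MethodsEntry {
    pub number: i32,
    pub construct: fn() -> Box<dyn Methods>,
}

inventory::collect!(MethodsEntry);

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}
fn make_methods_0() -> Box<dyn Methods> {
    Box::new(Methods_0)
}
// The registration lives next to the impl; there is no central list to edit.
inventory::submit! {
    MethodsEntry { number: 0, construct: make_methods_0 }
}

// The factory just walks the collected entries at runtime.
pub fn new_object(number: i32) -> Box<dyn Methods> {
    for entry in inventory::iter::<MethodsEntry> {
        if entry.number == number {
            return (entry.construct)();
        }
    }
    panic!("Unknown impl number: {}", number)
}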

Related

How do I extract information about the type in a derive macro?

I am implementing a derive macro to reduce the amount of boilerplate I have to write for similar types.
I want the macro to operate on structs which have the following format:
#[derive(MyTrait)]
struct SomeStruct {
    records: HashMap<Id, Record>
}
Calling the macro should generate an implementation like so:
impl MyTrait for SomeStruct {
    fn foo(&self, id: Id) -> Record { ... }
}
So I understand how to generate the code using quote:
#[proc_macro_derive(MyTrait)]
pub fn derive_answer_fn(item: TokenStream) -> TokenStream {
    ...
    let generated = quote!{
        impl MyTrait for #struct_name {
            fn foo(&self, id: #id_type) -> #record_type { ... }
        }
    }
    ...
}
But what is the best way to get #struct_name, #id_type and #record_type from the input token stream?
One way is to use the venial crate to parse the TokenStream.
use proc_macro2;
use quote::quote;
use venial;

#[proc_macro_derive(MyTrait)]
pub fn derive_answer_fn(item: proc_macro::TokenStream) -> proc_macro::TokenStream {
    // Ensure it's deriving for a struct.
    let s = match venial::parse_declaration(proc_macro2::TokenStream::from(item)) {
        Ok(venial::Declaration::Struct(s)) => s,
        Ok(_) => panic!("Can only derive this trait on a struct"),
        Err(_) => panic!("Error parsing into valid Rust"),
    };
    let struct_name = s.name;

    // Get the struct's first field.
    let fields = s.fields;
    let named_fields = match fields {
        venial::StructFields::Named(named_fields) => named_fields,
        _ => panic!("Expected a named field"),
    };
    let inners: Vec<(venial::NamedField, proc_macro2::Punct)> = named_fields.fields.inner;
    if inners.len() != 1 {
        panic!("Expected exactly one named field");
    }

    // Get the name and type of the first field.
    let first_field_name = &inners[0].0.name;
    let first_field_type = &inners[0].0.ty;

    // Extract Id and Record from the type HashMap<Id, Record>
    if first_field_type.tokens.len() != 6 {
        panic!("Expected type T<R, S> for first named field");
    }
    let id = first_field_type.tokens[2].clone();
    let record = first_field_type.tokens[4].clone();

    // Implement MyTrait.
    let generated = quote! {
        impl MyTrait for #struct_name {
            fn foo(&self, id: #id) -> #record { *self.#first_field_name.get(&id).unwrap() }
        }
    };
    proc_macro::TokenStream::from(generated)
}
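For comparison, the same extraction is more commonly done with the syn crate. A sketch, assuming syn 2.x and quote as dependencies (the helper structure and panic messages are mine, not from the question):
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, Data, DeriveInput, Fields, GenericArgument, PathArguments, Type};

#[proc_macro_derive(MyTrait)]
pub fn derive_my_trait(item: TokenStream) -> TokenStream {
    let input = parse_macro_input!(item as DeriveInput);
    let struct_name = &input.ident;

    // Expect a struct with exactly one named field, e.g. `records: HashMap<Id, Record>`.
    let field = match &input.data {
        Data::Struct(s) => match &s.fields {
            Fields::Named(named) if named.named.len() == 1 => named.named.first().unwrap(),
            _ => panic!("expected exactly one named field"),
        },
        _ => panic!("can only derive this trait on a struct"),
    };
    let field_name = field.ident.as_ref().unwrap();

    // Pull the two generic arguments out of the field type, `HashMap<Id, Record>`.
    let (id_type, record_type) = match &field.ty {
        Type::Path(type_path) => {
            let last_segment = type_path.path.segments.last().unwrap();
            match &last_segment.arguments {
                PathArguments::AngleBracketed(args) => {
                    let mut types = args.args.iter().filter_map(|arg| match arg {
                        GenericArgument::Type(t) => Some(t),
                        _ => None,
                    });
                    (types.next().unwrap(), types.next().unwrap())
                }
                _ => panic!("expected a type of the form T<Id, Record>"),
            }
        }
        _ => panic!("expected a path type"),
    };

    let generated = quote! {
        impl MyTrait for #struct_name {
            fn foo(&self, id: #id_type) -> #record_type {
                *self.#field_name.get(&id).unwrap()
            }
        }
    };
    generated.into()
}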

Initialize a Vec with not-None values only

If I have variables like this:
let a: u32 = ...;
let b: Option<u32> = ...;
let c: u32 = ...;
, what is the shortest way to make a vector of those values, so that b is only included if it's Some?
In other words, is there something simpler than this:
let v = match b {
    None => vec![a, c],
    Some(x) => vec![a, x, c],
};
P.S. I would prefer a solution where we don't need to use the variables more than once. Consider this example:
let some_person: String = ...;
let best_man: Option<String> = ...;
let a_third_person: &str = ...;
let another_opt: Option<String> = ...;
...
As can be seen, we might have to use longer variable names, more than one Option (None), expressions (like a_third_person.to_string()), etc.
Yours is fine, but here's a more sophisticated one:
[Some(a), b, Some(c)].into_iter().flatten().collect::<Vec<_>>()
This works since Option impls IntoIterator.
If it depends on just one variable:
b.map(|b| vec![a, b, c]).unwrap_or_else(|| vec![a, c]);
Playground
After some thinking and investigating, I've come up with the following crazy thing.
The end goal is to have a macro, optional_vec![], to which you can pass either T or Option<T> values and which behaves as described in the question. However, I decided on a strong restriction: it should have the best possible performance. So, you write:
optional_vec![a, b, c]
And get at least the performance of a hand-written match, if not better. This forbids the use of the simple [Some(a), b, Some(c)].into_iter().flatten().collect::<Vec<_>>() suggested in my other answer (though even that solution needs some way to differentiate between Option<T> and plain T, which, as we'll see, is not an easy problem at all).
I will first warn that I've not found a way to make my macro work with Option. That is, if you want to build a vector of Option<T> from Option<T> and Option<Option<T>>, it will not work.
When I design a complex macro, I like to first think about what the expanded code will look like. And in this macro, we have several hard problems to solve.
First, the macro takes plain expressions, but somehow it needs to switch on whether their type is T or Option<T>. How can such a thing be done?
The feature we use to do such things is specialization.
#![feature(specialization)]

pub trait Optional {
    fn some_method(self);
}
impl<T> Optional for T {
    default fn some_method(self) {
        // Just T
    }
}
impl<T> Optional for Option<T> {
    fn some_method(self) {
        // Option<T>
    }
}
As you've probably noticed, we now have two problems: first, specialization is unstable, and I'd like to stay on stable. Second, what should be inside the trait? The second problem is easier to solve, so let's begin with it.
It turns out that the most performant way to fill the vector is to pre-allocate capacity (Vec::with_capacity), write into it using raw pointers (don't push(); it optimizes badly!), then set the length (Vec::set_len()).
We can get a pointer to the vector's internal buffer using Vec::as_mut_ptr(), and advance the pointer via <*mut T>::add(1).
So, we need two methods: one to hint at the required capacity (zero for None, one for Some() and non-Option elements), and a write_and_advance() method:
pub trait Optional {
    type Item;
    fn len(&self) -> usize;
    unsafe fn write_and_advance(self, place: &mut *mut Self::Item);
}
impl<T> Optional for T {
    default type Item = Self;
    default fn len(&self) -> usize { 1 }
    default unsafe fn write_and_advance(self, place: &mut *mut Self) {
        place.write(self);
        *place = place.add(1);
    }
}
impl<T> Optional for Option<T> {
    type Item = T;
    fn len(&self) -> usize { self.is_some() as usize }
    unsafe fn write_and_advance(self, place: &mut *mut T) {
        if let Some(value) = self {
            place.write(value);
            *place = place.add(1);
        }
    }
}
It doesn't even compile! For the why, see Mismatch between associated type and type parameter only when impl is marked `default`. Luckily for us, the trick we'll use to work around specialization not being stable does work in this situation. But for now, let's assume it works. What will the code using this trait look like?
match (a, b, c) { // The match is here because it's the best binding for lifetimes: see https://stackoverflow.com/a/54855986/7884305
    (a, b, c) => {
        let len = Optional::len(&a) + Optional::len(&b) + Optional::len(&c);
        let mut result = ::std::vec::Vec::with_capacity(len);
        let mut next_element = result.as_mut_ptr();
        unsafe {
            Optional::write_and_advance(a, &mut next_element);
            Optional::write_and_advance(b, &mut next_element);
            Optional::write_and_advance(c, &mut next_element);
            result.set_len(len);
        }
        result
    }
}
And it works! Except that it does not, because as I said the specialization does not compile, and we also don't want to repeat all of this boilerplate; we want a macro to insert it for us.
So, how do we solve the problems with specialization: being unstable and (here) not even working?
dtolnay has a very cool trick he calls autoref specialization (by the way, all of that repo is highly recommended reading!). This is a trick that can be used to emulate specialization. It only works in macros, but we're in a macro, so this is fine.
I will not elaborate on the trick here (I recommend reading his post; he also used it in the excellent and very widely used anyhow crate). In short, the idea is to exploit method resolution by implementing one trait for T under certain conditions (the specialized impl) and another trait for &T for the general case (this could be an inherent impl if not for coherence). Since Rust performs automatic referencing during method resolution, that is, it takes a reference to the receiver as needed, this works: the typechecker autorefs only if needed and stops at the first applicable impl - i.e. the specialized impl if it matches, or the general impl otherwise.
Here's an example:
use std::fmt;
pub trait Display {
    fn foo(&self);
}
// Level 1
impl<T: fmt::Display> Display for T {
    fn foo(&self) { println!("Display({}), {}", std::any::type_name::<T>(), self); }
}
pub trait Debug {
    fn foo(&self);
}
// Level 2
impl<T: fmt::Debug> Debug for &T {
    fn foo(&self) { println!("Debug({}), {:?}", std::any::type_name::<T>(), self); }
}
macro_rules! foo {
    ($e:expr) => ((&$e).foo());
}
Playground.
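A quick usage sketch of the foo! macro above (assuming the two helper traits are in scope); the comments describe which impl method resolution lands on:
fn main() {
    foo!(42);            // i32: fmt::Display holds, so the "level 1" impl is picked
    foo!(vec![1, 2, 3]); // Vec<i32>: only fmt::Debug, so autoref falls through to "level 2"
}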
We can use this trick in our case:
#[doc(hidden)]
pub mod autoref_specialization {
    #[derive(Copy, Clone)]
    pub struct OptionTag;
    pub trait OptionKind {
        fn optional_kind(&self) -> OptionTag;
    }
    impl<T> OptionKind for Option<T> {
        #[inline(always)]
        fn optional_kind(&self) -> OptionTag { OptionTag }
    }
    impl OptionTag {
        #[inline(always)]
        pub fn len<T>(self, this: &Option<T>) -> usize { this.is_some() as usize }
        #[inline(always)]
        pub unsafe fn write_and_advance<T>(self, this: Option<T>, place: &mut *mut T) {
            if let Some(value) = this {
                place.write(value);
                *place = place.add(1);
            }
        }
    }
    #[derive(Copy, Clone)]
    pub struct DefaultTag;
    pub trait DefaultKind {
        fn optional_kind(&self) -> DefaultTag;
    }
    impl<T> DefaultKind for &'_ T {
        #[inline(always)]
        fn optional_kind(&self) -> DefaultTag { DefaultTag }
    }
    impl DefaultTag {
        #[inline(always)]
        pub fn len<T>(self, _this: &T) -> usize { 1 }
        #[inline(always)]
        pub unsafe fn write_and_advance<T>(self, this: T, place: &mut *mut T) {
            place.write(this);
            *place = place.add(1);
        }
    }
}
And the expanded code will look like:
use autoref_specialization::{DefaultKind as _, OptionKind as _};
match (a, b, c) {
    (a, b, c) => {
        let (a_tag, b_tag, c_tag) = (
            (&a).optional_kind(),
            (&b).optional_kind(),
            (&c).optional_kind(),
        );
        let len = a_tag.len(&a) + b_tag.len(&b) + c_tag.len(&c);
        let mut result = ::std::vec::Vec::with_capacity(len);
        let mut next_element = result.as_mut_ptr();
        unsafe {
            a_tag.write_and_advance(a, &mut next_element);
            b_tag.write_and_advance(b, &mut next_element);
            c_tag.write_and_advance(c, &mut next_element);
            result.set_len(len);
        }
        result
    }
}
It may be tempting to try to convert this into a macro right away, but we still have one unsolved problem: our macro needs to generate identifiers. This may not be obvious, but what if we pass optional_vec![1, Some(2), 3]? We need to generate the bindings for the match (in our case, (a, b, c) => ...) and the tag names ((a_tag, b_tag, c_tag)).
Unfortunately, generating names is not something macro_rules! can do in today's Rust. Fortunately, there is an excellent crate, paste (another one from dtolnay!), a small proc macro that allows you to do just that. It is even available on the playground!
However, we need a series of identifiers. That can be done with tt-munching, by repeatedly adding some letter (I used a), so you get a, aa, aaa, ... you get the idea.
#[doc(hidden)]
pub mod reexports {
    pub use std::vec::Vec;
    pub use paste::paste;
}

#[macro_export]
macro_rules! optional_vec {
    // Empty case
    { #generate_idents
        exprs = []
        processed_exprs = [$($e:expr,)*]
        match_bindings = [$($binding:ident)*]
        tags = [$($tag:ident)*]
    } => {{
        use $crate::autoref_specialization::{DefaultKind as _, OptionKind as _};
        match ($($e,)*) {
            ($($binding,)*) => {
                let ($($tag,)*) = (
                    $((&$binding).optional_kind(),)*
                );
                let len = 0 $(+ $tag.len(&$binding))*;
                let mut result = $crate::reexports::Vec::with_capacity(len);
                let mut next_element = result.as_mut_ptr();
                unsafe {
                    $($tag.write_and_advance($binding, &mut next_element);)*
                    result.set_len(len);
                }
                result
            }
        }
    }};
    { #generate_idents
        exprs = [$e:expr, $($rest:expr,)*]
        processed_exprs = [$($processed_exprs:tt)*]
        match_bindings = [$first_binding:ident $($bindings:ident)*]
        tags = [$($tags:ident)*]
    } => {
        $crate::reexports::paste! {
            $crate::optional_vec! { #generate_idents
                exprs = [$($rest,)*]
                processed_exprs = [$($processed_exprs)* $e,]
                match_bindings = [
                    [< $first_binding a >]
                    $first_binding
                    $($bindings)*
                ]
                tags = [
                    [< $first_binding a_tag >]
                    $($tags)*
                ]
            }
        }
    };
    // Entry
    [$e:expr $(, $exprs:expr)* $(,)?] => {
        $crate::optional_vec! { #generate_idents
            exprs = [$($exprs,)+]
            processed_exprs = [$e,]
            match_bindings = [__optional_vec_a]
            tags = [__optional_vec_a_tag]
        }
    };
}
Playground.
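A small usage sketch, assuming the macro, the autoref_specialization module, and the paste reexport above all live in the same crate:
fn main() {
    let a: u32 = 1;
    let b: Option<u32> = None;
    let c: u32 = 3;
    // Expands to the capacity-hint + pointer-write code shown earlier.
    let v: Vec<u32> = optional_vec![a, b, c];
    assert_eq!(v, vec![1, 3]);
}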
I can also personally recommend
let mut v = vec![a, c];
v.extend(b);
Short and clear.
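The same idea scales to the question's longer example. A sketch (note that, as with the chain answer further down, the optional values end up at the back rather than in their original positions):
fn guest_list(
    some_person: String,
    best_man: Option<String>,
    a_third_person: &str,
    another_opt: Option<String>,
) -> Vec<String> {
    let mut v = vec![some_person, a_third_person.to_string()];
    v.extend(best_man);    // appended only if Some
    v.extend(another_opt); // likewise
    v
}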
Sometimes the straightforward solution is the best:
fn jim_power(a: u32, b: Option<u32>, c: u32) -> Vec<u32> {
    let mut acc = Vec::with_capacity(3);
    acc.push(a);
    if let Some(b) = b {
        acc.push(b);
    }
    acc.push(c);
    acc
}

fn ys_iii(
    some_person: String,
    best_man: Option<String>,
    a_third_person: String,
    another_opt: Option<String>,
) -> Vec<String> {
    let mut acc = Vec::with_capacity(4);
    acc.push(some_person);
    best_man.map(|x| acc.push(x));
    acc.push(a_third_person);
    another_opt.map(|x| acc.push(x));
    acc
}
If you don't care about the order of the values, another option is
Iterator::chain(
    [a, c].into_iter(),
    [b].into_iter().flatten()
).collect()
Playground

Implementing the Strategy pattern in Rust without knowing which strategy we are using at compile time

I've been trying to implement the Strategy pattern in Rust, but I'm having trouble understanding how to make it work.
So let's imagine we have the traits Adder and Element:
pub trait Element {
    fn to_string(&self) -> String;
}
pub trait Adder {
    type E: Element;
    fn add(&self, a: &Self::E, b: &Self::E) -> Self::E;
}
And we have two implementations: StringAdder with StringElement, and UsizeAdder with UsizeElement:
// usize
pub struct UsizeElement {
    pub value: usize
}
impl Element for UsizeElement {
    fn to_string(&self) -> String {
        self.value.to_string()
    }
}
pub struct UsizeAdder {
}
impl Adder for UsizeAdder {
    type E = UsizeElement;
    fn add(&self, a: &UsizeElement, b: &UsizeElement) -> UsizeElement {
        UsizeElement { value: a.value + b.value }
    }
}

// String
pub struct StringElement {
    pub value: String
}
impl Element for StringElement {
    fn to_string(&self) -> String {
        self.value.to_string()
    }
}
pub struct StringAdder {
}
impl Adder for StringAdder {
    type E = StringElement;
    fn add(&self, a: &StringElement, b: &StringElement) -> StringElement {
        let a: usize = a.value.parse().unwrap();
        let b: usize = b.value.parse().unwrap();
        StringElement {
            value: (a + b).to_string()
        }
    }
}
And I want to write code that uses the Adder trait's methods and its corresponding elements without knowing at compile time which strategy is going to be used.
fn main() {
    let policy = "usize";
    let element = "1";
    let adder = get_adder(&policy);
    let element_a = get_element(&policy, element);
    let result = adder.add(element_a, element_a);
}
To simplify, I'm going to assign a string to policy and element, but normally those would be read from a file.
Is dynamic dispatch the only way to implement get_adder and get_element? And, by extension, should I define the Adder and Element traits to use trait objects and/or the Any trait?
Edit: Here is what I managed to figure out so far.
An example of a possible implementation is using match to pin down the concrete types for the compiler.
fn main() {
    let policy = "string";
    let element = "1";
    let secret_key = "5";
    let result = cesar(policy, element, secret_key);
    dbg!(result.to_string());
}

fn cesar(policy: &str, element: &str, secret_key: &str) -> Box<dyn Element> {
    match policy {
        "usize" => {
            let adder = UsizeAdder {};
            let element = UsizeElement { value: element.parse().unwrap() };
            let secret_key = UsizeElement { value: secret_key.parse().unwrap() };
            Box::new(cesar_impl(&adder, &element, &secret_key))
        }
        "string" => {
            let adder = StringAdder {};
            let element = StringElement { value: element.to_string() };
            let secret_key = StringElement { value: secret_key.to_string() };
            Box::new(cesar_impl(&adder, &element, &secret_key))
        }
        _ => {
            panic!("Policy not supported!")
        }
    }
}

fn cesar_impl<A>(adder: &A, element: &A::E, secret_key: &A::E) -> A::E where A: Adder, A::E: Element {
    adder.add(&element, &secret_key)
}
However, the issue is that I have to wrap every function I want to implement in a match to determine the concrete types, with a case for every available policy.
That does not seem like the proper way of implementing it, as it will bloat the code and make it more error-prone and less maintainable, unless I end up using macros.
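One direction that keeps static dispatch inside the adders while confining the match to a single place (a sketch of my own, not the linked example below): fold the string parsing into the adder so the trait becomes object-safe, and choose the adder once at runtime:
pub trait DynAdder {
    fn add_strs(&self, a: &str, b: &str) -> Box<dyn Element>;
}

impl DynAdder for UsizeAdder {
    fn add_strs(&self, a: &str, b: &str) -> Box<dyn Element> {
        let a = UsizeElement { value: a.parse().unwrap() };
        let b = UsizeElement { value: b.parse().unwrap() };
        Box::new(self.add(&a, &b))
    }
}

impl DynAdder for StringAdder {
    fn add_strs(&self, a: &str, b: &str) -> Box<dyn Element> {
        let a = StringElement { value: a.to_string() };
        let b = StringElement { value: b.to_string() };
        Box::new(self.add(&a, &b))
    }
}

// The only match over the policy string lives here.
fn get_adder(policy: &str) -> Box<dyn DynAdder> {
    match policy {
        "usize" => Box::new(UsizeAdder {}),
        "string" => Box::new(StringAdder {}),
        _ => panic!("Policy not supported!"),
    }
}

fn main() {
    let adder = get_adder("string");
    dbg!(adder.add_strs("1", "5").to_string());
}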
Edit 2: Here you can find an example using dynamic dispatch. However, I'm not convinced it's the proper way to implement the solution.
Example using dynamic dispatch
Thank you for your help :)

How do I use PickleDB with Rocket/Juniper Context?

I'm trying to write a Rocket / Juniper / Rust based GraphQL Server using PickleDB - an in-memory key/value store.
The pickle db is created / loaded at the start and given to rocket to manage:
fn rocket() -> Rocket {
    let pickle_path = var_os(String::from("PICKLE_PATH")).unwrap_or(OsString::from("pickle.db"));
    let pickle_db_dump_policy = PickleDbDumpPolicy::PeriodicDump(Duration::from_secs(120));
    let pickle_serialization_method = SerializationMethod::Bin;
    let pickle_db: PickleDb = match Path::new(&pickle_path).exists() {
        false => PickleDb::new(pickle_path, pickle_db_dump_policy, pickle_serialization_method),
        true => PickleDb::load(pickle_path, pickle_db_dump_policy, pickle_serialization_method).unwrap(),
    };
    rocket::ignite()
        .manage(Schema::new(Query, Mutation))
        .manage(pickle_db)
        .mount(
            "/",
            routes![graphiql, get_graphql_handler, post_graphql_handler],
        )
}
And I want to retrieve the PickleDb instance from the Rocket State in my Guard:
pub struct Context {
    pickle_db: PickleDb,
}
impl juniper::Context for Context {}
impl<'a, 'r> FromRequest<'a, 'r> for Context {
    type Error = ();
    fn from_request(_request: &'a Request<'r>) -> request::Outcome<Context, ()> {
        let pickle_db = _request.guard::<State<PickleDb>>()?.inner();
        Outcome::Success(Context { pickle_db })
    }
}
This does not work because the State only gives me a reference:
26 | Outcome::Success(Context { pickle_db })
| ^^^^^^^^^ expected struct `pickledb::pickledb::PickleDb`, found `&pickledb::pickledb::PickleDb`
When I change my Context struct to contain a reference I get lifetime issues which I'm not yet familiar with:
15 | pickle_db: &PickleDb,
| ^ expected named lifetime parameter
I tried using 'static, which makes Rust quite unhappy, and I tried to use the request lifetime (?) 'r from FromRequest, but that does not really work either...
How do I get this to work? As I'm quite new to Rust, is this even the right way to do things?
I finally have a solution, although the need for unsafe indicates it is sub-optimal :)
#![allow(unsafe_code)]

use pickledb::{PickleDb, PickleDbDumpPolicy, SerializationMethod};
use serde::de::DeserializeOwned;
use serde::Serialize;
use std::env;
use std::path::Path;
use std::time::Duration;

pub static mut PICKLE_DB: Option<PickleDb> = None;

pub fn cache_init() {
    let pickle_path = env::var(String::from("PICKLE_PATH")).unwrap_or(String::from("pickle.db"));
    let pickle_db_dump_policy = PickleDbDumpPolicy::PeriodicDump(Duration::from_secs(120));
    let pickle_serialization_method = SerializationMethod::Json;
    let pickle_db = match Path::new(&pickle_path).exists() {
        false => PickleDb::new(
            pickle_path,
            pickle_db_dump_policy,
            pickle_serialization_method,
        ),
        true => PickleDb::load(
            pickle_path,
            pickle_db_dump_policy,
            pickle_serialization_method,
        )
        .unwrap(),
    };
    unsafe {
        PICKLE_DB = Some(pickle_db);
    }
}

pub fn cache_get<V>(key: &str) -> Option<V>
where
    V: DeserializeOwned + std::fmt::Debug,
{
    unsafe {
        let pickle_db = PICKLE_DB
            .as_ref()
            .expect("cache uninitialized - call cache_init()");
        pickle_db.get::<V>(key)
    }
}

pub fn cache_set<V>(key: &str, value: &V) -> Result<(), pickledb::error::Error>
where
    V: Serialize,
{
    unsafe {
        let pickle_db = PICKLE_DB
            .as_mut()
            .expect("cache uninitialized - call cache_init()");
        pickle_db.set::<V>(key, value)?;
        Ok(())
    }
}
This can simply be imported and used as expected, but I think I'll run into issues when the load gets too high...
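For what it's worth, the same global-cache shape can be had on recent stable Rust without static mut; a sketch using std's OnceLock and a Mutex (this assumes PickleDb is Send, which I have not verified):
use std::sync::{Mutex, OnceLock};

use pickledb::PickleDb;

static PICKLE_DB: OnceLock<Mutex<PickleDb>> = OnceLock::new();

/// Store an already-constructed PickleDb; panics if called twice.
pub fn cache_init(db: PickleDb) {
    if PICKLE_DB.set(Mutex::new(db)).is_err() {
        panic!("cache already initialized");
    }
}

/// Run a closure with exclusive access to the cache.
pub fn with_cache<R>(f: impl FnOnce(&mut PickleDb) -> R) -> R {
    let db = PICKLE_DB
        .get()
        .expect("cache uninitialized - call cache_init()");
    f(&mut db.lock().unwrap())
}
Besides removing the unsafe, the Mutex also prevents the data race that the static mut version allows if two requests write to the cache at the same time.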

FnOnce inside Enum: cannot move out of borrowed content

I'm new to Rust and still have some problems with ownership/borrowing. In this case, I want to store an FnOnce in an enum and then call it later from another thread. I tried a lot of different variants but always got stuck somewhere. Here is a reduced version of what I currently have:
#![feature(fnbox)]

use std::sync::{Arc, Mutex};
use std::boxed::{Box, FnBox};

enum Foo<T> {
    DoNothing,
    CallFunction(Box<FnBox(&T) + Send>)
}

struct FooInMutex<T> {
    foo: Arc<Mutex<Foo<T>>>
}

impl<T> FooInMutex<T> {
    fn put_fn(&self, f: Box<FnBox(&T) + Send>) {
        let mut foo = self.foo.lock().unwrap();
        let mut new_foo: Foo<T>;
        match *foo {
            Foo::DoNothing =>
                new_foo = Foo::CallFunction(f),
            _ =>
                new_foo = Foo::DoNothing
        }
        *foo = new_foo;
    }

    fn do_it(&self, t: T) {
        let mut foo = self.foo.lock().unwrap();
        let mut new_foo: Foo<T>;
        match *foo {
            Foo::CallFunction(ref mut f) => {
                //f(&t)
                f.call_box((&t,));
                new_foo = Foo::DoNothing;
            }
            _ =>
                panic!("...")
        }
        *foo = new_foo;
    }
}

#[test]
fn it_works() {
    let x = FooInMutex { foo: Arc::new(Mutex::new(Foo::DoNothing)) };
    x.put_fn(Box::new(|_| panic!("foo")));
    x.do_it(123);
}
I use "rustc 1.4.0-nightly (e35fd7481 2015-08-17)".
The error message:
src/lib.rs:35:17: 35:18 error: cannot move out of borrowed content
src/lib.rs:35 f.call_box((&t,));
^
As I understand it, f is owned by the enum in the mutex, and I only borrow it via *foo. But to call f, I need to move it out. How do I do this? Or what else do I have to change to make this example work?
std::mem::replace is what you should use there, like this:
use std::mem;
…
fn do_it(&self, t: T) {
    match mem::replace(&mut *self.foo.lock().unwrap(), Foo::DoNothing) {
        Foo::CallFunction(f) => {
            f.call_box((&t,));
        }
        _ => panic!("...")
    }
}
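As a present-day footnote: since Rust 1.35, Box<dyn FnOnce> can be called directly, so neither fnbox nor nightly is needed. A sketch of the same example on stable:
use std::mem;
use std::sync::{Arc, Mutex};

enum Foo<T> {
    DoNothing,
    CallFunction(Box<dyn FnOnce(&T) + Send>),
}

struct FooInMutex<T> {
    foo: Arc<Mutex<Foo<T>>>,
}

impl<T> FooInMutex<T> {
    fn put_fn(&self, f: Box<dyn FnOnce(&T) + Send>) {
        *self.foo.lock().unwrap() = Foo::CallFunction(f);
    }

    fn do_it(&self, t: T) {
        // Swap DoNothing in so we own the stored closure and can consume it.
        match mem::replace(&mut *self.foo.lock().unwrap(), Foo::DoNothing) {
            Foo::CallFunction(f) => f(&t),
            Foo::DoNothing => panic!("no function stored"),
        }
    }
}

fn main() {
    let x: FooInMutex<i32> = FooInMutex { foo: Arc::new(Mutex::new(Foo::DoNothing)) };
    x.put_fn(Box::new(|v| println!("got {}", v)));
    x.do_it(42);
}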
