Cannot get wrapping of filter to compile - rust

My goal is to wrap an Iterator<Item = rusb::Device<_>> into an Iterator<Item = LitraDevice>; the latter carries the device-specific implementation.
To make this work I tried the following code:
use std::iter::Filter;
use rusb;

const VENDOR: u16 = 0x046d;
const PRODUCT: u16 = 0xc900;

struct LitraDevice {
    dev: rusb::Device<rusb::GlobalContext>,
}

pub struct LitraDevices {
    unfiltered: rusb::DeviceList<rusb::GlobalContext>,
}

struct LitraDeviceIterator<'a> {
    it: Filter<rusb::Devices<'a, rusb::GlobalContext>, for<'r> fn(&'r rusb::Device<rusb::GlobalContext>) -> bool>,
}
impl LitraDevices {
    pub fn new() -> Self {
        let unfiltered = rusb::devices().unwrap();
        LitraDevices { unfiltered }
    }

    fn can_not_handle<'r>(dev: &'r rusb::Device<rusb::GlobalContext>) -> bool {
        let desc = dev.device_descriptor().unwrap();
        match (desc.vendor_id(), desc.product_id()) {
            (VENDOR, PRODUCT) => (),
            _ => return true,
        }
        match desc.class_code() {
            rusb::constants::LIBUSB_CLASS_HID => true, // Skip HID devices; they are handled directly by OS libraries
            _ => false,
        }
    }

    pub fn iter<'a>(self) -> LitraDeviceIterator<'a> {
        let it = self.unfiltered.iter().filter(Self::can_not_handle);
        LitraDeviceIterator { it }
    }
}
impl<'a> Iterator for LitraDeviceIterator<'a> {
    type Item = LitraDevice;

    fn next(&mut self) -> Option<Self::Item> {
        match self.it.next() {
            Some(dev) => Some(LitraDevice { dev }),
            None => None,
        }
    }
}
Now I really cannot figure out how to code LitraDeviceIterator so that it wraps the filtered iterator.
All code iterations I have tried so far turn into a generic nightmare very quickly.

I rewrote your iter() to yield LitraDevice; you can surely take it from there.
The first underlying issue is that filter()'s predicate only receives a reference to each item, while in cases like these you actually want to move the yielded items as you filter. That is what filter_map() can do. It lets you drop the references, greatly simplifying your code.
(This code does not work yet, read on)
pub fn iter(self) -> impl Iterator<Item = LitraDevice> {
    self.unfiltered.iter().filter_map(|dev| {
        (!Self::can_not_handle(&dev))
            .then_some(dev)
            .map(|dev| LitraDevice { dev })
    })
}
Now, there's a second little issue at play here: rusb::DeviceList<T: UsbContext>::iter(&self) returns rusb::Devices<'_, T>, where '_ is the anonymous lifetime derived from &self. That means that while you can drive rusb::Devices<'_, T> to yield Device<T>s, you cannot keep it around longer than self.unfiltered. More specifically, since iter() consumes self, you cannot return an iterator referencing that rusb::Devices<'_, T> from iter(). One solution is to collect immediately, then move into a new iterator.
pub fn iter(self) -> impl Iterator<Item = LitraDevice> {
    let devices = self.unfiltered.iter().collect::<Vec<_>>();
    devices.into_iter().filter_map(|dev| {
        (!Self::can_not_handle(&dev))
            .then_some(dev)
            .map(|dev| LitraDevice { dev })
    })
}
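For completeness, here is a hypothetical usage sketch of the rewritten iter(), assuming it runs in the same module as LitraDevice (bus_number() is just an arbitrary rusb::Device method used for illustration):
fn main() {
    for litra in LitraDevices::new().iter() {
        // Each item is a LitraDevice wrapping a rusb::Device<GlobalContext>.
        println!("Litra device on bus {}", litra.dev.bus_number());
    }
}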

Related

Derive macro generation

I'm making my own NetworkSerializable trait, in the context of a client/server system.
My idea is that the messages sent by the system are an enum defined by the user of the system, so it can be customized as needed.
To ease implementing the trait on the enum, I would like to use #[derive(NetworkSerializable)], as the implementation is always the same.
Here is the trait:
pub trait NetworkSerializable {
    fn id(&self) -> usize;
    fn size(&self) -> usize;
    fn serialize(self) -> Vec<u8>;
    fn deserialize(id: usize, data: Vec<u8>) -> Self;
}
Now, I've tried to look at the book (this one too) and this example to try to wrap my head around derive macros, but I'm really struggling to understand them and how to implement them. I've read about token streams and abstract syntax trees, and I think I understand the basics.
Let's take the example of the id() method: it should give a unique id for each variant of the enum, so that message headers can tell which message is incoming.
Let's say I have this enum as a message system:
enum NetworkMessages {
    ErrorMessage,
    SpawnPlayer(usize, bool, Transform), // player id, is_mine, position
    MovePlayer(usize, Transform),        // player id, new_position
    DestroyPlayer(usize),                // player_id
}
Then, the id() function should look like this:
fn id(&self) -> usize {
    match self {
        NetworkMessages::ErrorMessage => 0,
        NetworkMessages::SpawnPlayer(..) => 1,
        NetworkMessages::MovePlayer(..) => 2,
        NetworkMessages::DestroyPlayer(..) => 3,
    }
}
Here was my attempt at writing this with a derive macro:
use proc_macro::TokenStream;
use proc_macro2::TokenStream as TokenStream2;
use quote::{quote, quote_spanned};
use syn::{spanned::Spanned, Data};

#[proc_macro_derive(NetworkSerializable)]
pub fn network_serializable_derive(input: TokenStream) -> TokenStream {
    // Construct a representation of Rust code as a syntax tree
    // that we can manipulate
    let ast = syn::parse(input).unwrap();
    // Build the trait implementation
    impl_network_serializable_macro(&ast)
}
fn impl_network_serializable_macro(ast: &syn::DeriveInput) -> TokenStream {
    // get enum name
    let ref name = ast.ident;
    let ref data = ast.data;
    let (id_func, size_func, serialize_func, deserialize_func) = match data {
        // Only if data is an enum, we do parsing
        Data::Enum(data_enum) => {
            // Iterate over enum variants
            let mut id_func_internal = TokenStream2::new();
            let mut variant_id: usize = 0;
            for variant in &data_enum.variants {
                // add the branch for the variant
                id_func_internal.extend(quote_spanned! {
                    variant.span() => &variant_id,
                });
                variant_id += 1;
            }
            (id_func_internal, (), (), ())
        }
        _ => (TokenStream2::new(), (), (), ()),
    };
    let expanded = quote! {
        impl NetworkSerializable for #name {
            // variant_checker_functions gets replaced by all the functions
            // that were constructed above
            fn size(&self) -> usize {
                match &self {
                    #id_func
                }
            }
            /*
            #size_func
            #serialize_func
            #deserialize_func
            */
        }
    };
    expanded.into()
}
This generates quite a lot of errors, with "proc macro NetworkSerializable not expanded: no proc macro dylib present" being the first. So I'm guessing there is a lot of misunderstanding on my part here.
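For reference, a derive macro must be defined in its own crate compiled as a proc-macro library; the quoted "no proc macro dylib present" diagnostic usually comes from rust-analyzer before that library has been built. A minimal Cargo.toml sketch for such a crate (name and versions are illustrative):
# Cargo.toml of the derive crate (illustrative)
[package]
name = "network-serializable-derive"
version = "0.1.0"
edition = "2021"

[lib]
proc-macro = true

[dependencies]
syn = "1"
quote = "1"
proc-macro2 = "1"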

Initialize a Vec with not-None values only

If I have variables like this:
let a: u32 = ...;
let b: Option<u32> = ...;
let c: u32 = ...;
, what is the shortest way to make a vector of those values, so that b is only included if it's Some?
In other words, is there something simpler than this:
let v = match b {
    None => vec![a, c],
    Some(x) => vec![a, x, c],
};
P.S. I would prefer a solution where we don't need to use the variables more than once. Consider this example:
let some_person: String = ...;
let best_man: Option<String> = ...;
let a_third_person: &str = ...;
let another_opt: Option<String> = ...;
...
As can be seen, we might have to use longer variable names, more than one Option (None), expressions (like a_third_person.to_string()), etc.
Yours is fine, but here's a sophisticated one:
[Some(a), b, Some(c)].into_iter().flatten().collect::<Vec<_>>()
This works since Option impls IntoIterator.
If it depends on just one variable:
b.map(|b| vec![a, b, c]).unwrap_or_else(|| vec![a, c]);
Playground
After some thinking and investigating, I've come up with the following crazy thing.
The end goal is to have a macro, optional_vec![], that you can pass either T or Option<T> and that behaves as described in the question. However, I decided on a strong restriction: it should have the best performance possible. So, you write:
optional_vec![a, b, c]
And get at least the performance of a hand-written match, if not better. This forbids the use of the simple [Some(a), b, Some(c)].into_iter().flatten().collect::<Vec<_>>() suggested in my other answer (though even that solution needs some way to differentiate between Option<T> and plain T, which, as we'll see, is not an easy problem at all).
I will first warn that I've not found a way to make my macro work with Option itself. That is, if you want to build a vector of Option<T> from Option<T> and Option<Option<T>>, it will not work.
When I design a complex macro, I like to think first about what the expanded code will look like. And in this macro, we have several hard problems to solve.
First, the macro takes plain expressions, but somehow it needs to switch on whether their type is T or Option<T>. How can that be done?
The feature we use to do such things is specialization.
#![feature(specialization)]

pub trait Optional {
    fn some_method(self);
}

impl<T> Optional for T {
    default fn some_method(self) {
        // Just T
    }
}

impl<T> Optional for Option<T> {
    fn some_method(self) {
        // Option<T>
    }
}
Like you probably noticed, we now have two problems: first, specialization is unstable, and I'd like to stay on stable. Second, what should be inside the trait? The second problem is easier to solve, so let's begin with it.
It turns out that the most performant way to fill the vector is to pre-allocate the capacity (Vec::with_capacity), write into the vector using raw pointers (don't use push(); it optimizes badly!), and then set the length (Vec::set_len()).
We can get a pointer to the internal buffer of the vector using Vec::as_mut_ptr(), and advance the pointer via <*mut T>::add(1).
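To make that concrete, here is a minimal standalone illustration of the pattern, with made-up literal values (not part of the macro itself):
let mut v: Vec<u32> = Vec::with_capacity(3);
let mut p = v.as_mut_ptr();
unsafe {
    p.write(1);
    p = p.add(1);
    p.write(2);
    p = p.add(1);
    p.write(3);
    // All three slots are initialized now, so setting the length is sound.
    v.set_len(3);
}
assert_eq!(v, vec![1, 2, 3]);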
So, we need two methods: one that hints at the required capacity (zero for None, one for Some(_) and for non-Option elements), and a write_and_advance() method:
pub trait Optional {
    type Item;
    fn len(&self) -> usize;
    unsafe fn write_and_advance(self, place: &mut *mut Self::Item);
}

impl<T> Optional for T {
    default type Item = Self;
    default fn len(&self) -> usize { 1 }
    default unsafe fn write_and_advance(self, place: &mut *mut Self) {
        place.write(self);
        *place = place.add(1);
    }
}

impl<T> Optional for Option<T> {
    type Item = T;
    fn len(&self) -> usize { self.is_some() as usize }
    unsafe fn write_and_advance(self, place: &mut *mut T) {
        if let Some(value) = self {
            place.write(value);
            *place = place.add(1);
        }
    }
}
It doesn't even compile! For the why, see Mismatch between associated type and type parameter only when impl is marked `default`. Luckily for us, the trick we'll use to work around specialization not being stable does work in this situation. But for now, let's assume it works. What will the code using this trait look like?
match (a, b, c) { // The match is here because it's the best binding for lifetimes: see https://stackoverflow.com/a/54855986/7884305
    (a, b, c) => {
        let len = Optional::len(&a) + Optional::len(&b) + Optional::len(&c);
        let mut result = ::std::vec::Vec::with_capacity(len);
        let mut next_element = result.as_mut_ptr();
        unsafe {
            Optional::write_and_advance(a, &mut next_element);
            Optional::write_and_advance(b, &mut next_element);
            Optional::write_and_advance(c, &mut next_element);
            result.set_len(len);
        }
        result
    }
}
And it works! Except that it does not, because the specialization does not compile, as I said, and we also don't want to repeat all of this boilerplate; it should be generated by a macro.
So, how do we solve the problems with specialization, namely that it is unstable and does not work here?
dtolnay has a very cool trick he calls autoref specialization (by the way, all of that repo is highly recommended reading!). This is a trick that can be used to emulate specialization. It only works in macros, but we're in a macro, so this is fine.
I will not elaborate on the trick here (I recommend reading his post; he also used it in the excellent and very widely used anyhow crate). In short, the idea is to trick the typechecker by implementing one trait for T under certain conditions (the specialized impl) and another trait for &T for the general case (this could be an inherent impl if not for coherence). Since Rust performs automatic referencing during method resolution, that is, it takes a reference to the receiver as needed, this works: the typechecker autorefs if needed and stops at the first applicable impl, i.e. the specialized impl if it matches, or the general impl otherwise.
Here's an example:
use std::fmt;

pub trait Display {
    fn foo(&self);
}

// Level 1
impl<T: fmt::Display> Display for T {
    fn foo(&self) { println!("Display({}), {}", std::any::type_name::<T>(), self); }
}

pub trait Debug {
    fn foo(&self);
}

// Level 2
impl<T: fmt::Debug> Debug for &T {
    fn foo(&self) { println!("Debug({}), {:?}", std::any::type_name::<T>(), self); }
}

macro_rules! foo {
    ($e:expr) => ((&$e).foo());
}
Playground.
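Appending a quick usage sketch to the example above shows which impl gets picked (the exact printed type names may vary):
fn main() {
    foo!(42);            // i32 implements fmt::Display, so the Level 1 impl is chosen
    foo!(vec![1, 2, 3]); // Vec<i32> only implements fmt::Debug, so autoref falls back to Level 2
}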
We can use this trick in our case:
#[doc(hidden)]
pub mod autoref_specialization {
    #[derive(Copy, Clone)]
    pub struct OptionTag;

    pub trait OptionKind {
        fn optional_kind(&self) -> OptionTag;
    }

    impl<T> OptionKind for Option<T> {
        #[inline(always)]
        fn optional_kind(&self) -> OptionTag { OptionTag }
    }

    impl OptionTag {
        #[inline(always)]
        pub fn len<T>(self, this: &Option<T>) -> usize { this.is_some() as usize }
        #[inline(always)]
        pub unsafe fn write_and_advance<T>(self, this: Option<T>, place: &mut *mut T) {
            if let Some(value) = this {
                place.write(value);
                *place = place.add(1);
            }
        }
    }

    #[derive(Copy, Clone)]
    pub struct DefaultTag;

    pub trait DefaultKind {
        fn optional_kind(&self) -> DefaultTag;
    }

    impl<T> DefaultKind for &'_ T {
        #[inline(always)]
        fn optional_kind(&self) -> DefaultTag { DefaultTag }
    }

    impl DefaultTag {
        #[inline(always)]
        pub fn len<T>(self, _this: &T) -> usize { 1 }
        #[inline(always)]
        pub unsafe fn write_and_advance<T>(self, this: T, place: &mut *mut T) {
            place.write(this);
            *place = place.add(1);
        }
    }
}
And the expanded code will look like:
use autoref_specialization::{DefaultKind as _, OptionKind as _};

match (a, b, c) {
    (a, b, c) => {
        let (a_tag, b_tag, c_tag) = (
            (&a).optional_kind(),
            (&b).optional_kind(),
            (&c).optional_kind(),
        );
        let len = a_tag.len(&a) + b_tag.len(&b) + c_tag.len(&c);
        let mut result = ::std::vec::Vec::with_capacity(len);
        let mut next_element = result.as_mut_ptr();
        unsafe {
            a_tag.write_and_advance(a, &mut next_element);
            b_tag.write_and_advance(b, &mut next_element);
            c_tag.write_and_advance(c, &mut next_element);
            result.set_len(len);
        }
        result
    }
}
It may be tempting to try to convert this immediately into a macro, but we still have one unsolved problem: our macro needs to generate identifiers. This may not be obvious, but what if we pass optional_vec![1, Some(2), 3]? We need to generate the bindings for the match (in our case, (a, b, c) => ...) and the tag names ((a_tag, b_tag, c_tag)).
Unfortunately, generating names is not something macro_rules! can do in today's Rust. Fortunately, there is an excellent crate, paste (another one from dtolnay!), which is a small proc macro that allows you to do just that. It is even available on the playground!
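As a tiny illustration of what paste does (separate from the macro below), it concatenates the segments inside [<...>] into a single identifier:
use paste::paste;

paste! {
    // Expands to: const FOO_BAR: u32 = 1;
    const [<FOO _BAR>]: u32 = 1;
}

fn main() {
    assert_eq!(FOO_BAR, 1);
}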
However, we need a series of identifiers. That can be done with tt-munching, by repeatedly adding some letter (I used a), so you get a, aa, aaa, ... you get the idea.
#[doc(hidden)]
pub mod reexports {
    pub use std::vec::Vec;
    pub use paste::paste;
}
#[macro_export]
macro_rules! optional_vec {
    // Empty case
    { #generate_idents
        exprs = []
        processed_exprs = [$($e:expr,)*]
        match_bindings = [$($binding:ident)*]
        tags = [$($tag:ident)*]
    } => {{
        use $crate::autoref_specialization::{DefaultKind as _, OptionKind as _};
        match ($($e,)*) {
            ($($binding,)*) => {
                let ($($tag,)*) = (
                    $((&$binding).optional_kind(),)*
                );
                let len = 0 $(+ $tag.len(&$binding))*;
                let mut result = $crate::reexports::Vec::with_capacity(len);
                let mut next_element = result.as_mut_ptr();
                unsafe {
                    $($tag.write_and_advance($binding, &mut next_element);)*
                    result.set_len(len);
                }
                result
            }
        }
    }};
    { #generate_idents
        exprs = [$e:expr, $($rest:expr,)*]
        processed_exprs = [$($processed_exprs:tt)*]
        match_bindings = [$first_binding:ident $($bindings:ident)*]
        tags = [$($tags:ident)*]
    } => {
        $crate::reexports::paste! {
            $crate::optional_vec! { #generate_idents
                exprs = [$($rest,)*]
                processed_exprs = [$($processed_exprs)* $e,]
                match_bindings = [
                    [< $first_binding a >]
                    $first_binding
                    $($bindings)*
                ]
                tags = [
                    [< $first_binding a_tag >]
                    $($tags)*
                ]
            }
        }
    };
    // Entry
    [$e:expr $(, $exprs:expr)* $(,)?] => {
        $crate::optional_vec! { #generate_idents
            exprs = [$($exprs,)+]
            processed_exprs = [$e,]
            match_bindings = [__optional_vec_a]
            tags = [__optional_vec_a_tag]
        }
    };
}
Playground.
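A usage sketch, assuming the macro and its helper modules above live in the same file, with main defined after them:
fn main() {
    let a = 1_i32;
    let b: Option<i32> = Some(2);
    let c = 3_i32;
    assert_eq!(optional_vec![a, b, c], vec![1, 2, 3]);

    let none: Option<i32> = None;
    assert_eq!(optional_vec![a, none, c], vec![1, 3]);
}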
I can also personally recommend
let mut v = vec![a, c];
v.extend(b);
Short and clear.
Sometimes the straightforward solution is the best:
fn jim_power(a: u32, b: Option<u32>, c: u32) -> Vec<u32> {
    let mut acc = Vec::with_capacity(3);
    acc.push(a);
    if let Some(b) = b {
        acc.push(b);
    }
    acc.push(c);
    acc
}

fn ys_iii(
    some_person: String,
    best_man: Option<String>,
    a_third_person: String,
    another_opt: Option<String>,
) -> Vec<String> {
    let mut acc = Vec::with_capacity(4);
    acc.push(some_person);
    best_man.map(|x| acc.push(x));
    acc.push(a_third_person);
    another_opt.map(|x| acc.push(x));
    acc
}
If you don't care about the order of the values, another option is
Iterator::chain(
    [a, c].into_iter(),
    [b].into_iter().flatten(),
).collect()
Playground

Function that generates a HashMap of Enum variants

I'm working with apollo_parser to parse a GraphQL query. It defines an enum, apollo_parser::ast::Definition, that has several variants including apollo_parser::ast::OperationDefinition and apollo_parser::ast::FragmentDefinition. I'd like to have a single trait I can apply to apollo_parser::ast::Definition that provides a function definition_map returning a HashMap that maps the operation name to the variant instance.
I've got as far as the trait, but I don't know how to implement it. Also, I don't know how to constrain T to be a variant of Definition.
trait Mappable {
    fn definition_map<T>(&self) -> HashMap<String, T>;
}
EDIT:
Here's a Rust-ish pseudocode implementation.
impl Mappable for Document {
    fn definition_map<T>(&self) -> HashMap<String, T> {
        let defs: Vec<T> = self.definitions
            .filter_map(|def: Definition| match def {
                T(foo) => Some(foo),
                _ => None,
            })
            .collect();
        let map = HashMap::new();
        for def: T in defs {
            map.insert(def.name(), def);
        }
        map
    }
}
and it would output
// From a document consisting of OperationDefinitions "operation1" and "operation2"
// and FragmentDefinitions "fragment1" and "fragment2"
{
    "operation1": OperationDefinition(...),
    "operation2": OperationDefinition(...),
}
{
    "fragment1": FragmentDefinition(...),
    "fragment2": FragmentDefinition(...)
}
I don't know how to constrain T to be a variant of Definition.
There is no such thing in Rust. There's the name of the variant and the name of the type contained within that variant, and there is no relationship between the two. Variants can be named whatever you want, and multiple variants can contain the same type. So there's no shorthand for pulling a T out of an enum that has a variant containing a T.
You need to make your own trait that says how to get a T from a Definition:
trait TryFromDefinition {
    fn try_from_def(definition: Definition) -> Option<Self> where Self: Sized;
    fn name(&self) -> String;
}
And using that, your implementation is simple:
impl Mappable for Document {
    fn definition_map<T: TryFromDefinition>(&self) -> HashMap<String, T> {
        self.definitions()
            .filter_map(T::try_from_def)
            .map(|t| (t.name(), t))
            .collect()
    }
}
You just have to define TryFromDefinition for all the types you want to use:
impl TryFromDefinition for OperationDefinition {
    fn try_from_def(definition: Definition) -> Option<Self> {
        match definition {
            Definition::OperationDefinition(operation) => Some(operation),
            _ => None,
        }
    }

    fn name(&self) -> String {
        self.name().unwrap().ident_token().unwrap().text().into()
    }
}

impl TryFromDefinition for FragmentDefinition {
    fn try_from_def(definition: Definition) -> Option<Self> {
        match definition {
            Definition::FragmentDefinition(fragment) => Some(fragment),
            _ => None,
        }
    }

    fn name(&self) -> String {
        self.fragment_name().unwrap().name().unwrap().ident_token().unwrap().text().into()
    }
}
...
Some of this could probably be condensed using macros, but there's no normalized way that I can tell to get a name from a definition, so that would still have to be custom per type.
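For instance, a hypothetical helper macro along these lines could generate the repetitive part, while the name-extraction expression is still supplied per type (a sketch, assuming the TryFromDefinition trait above):
macro_rules! impl_try_from_definition {
    ($variant:ident, |$def:ident| $name:expr) => {
        impl TryFromDefinition for $variant {
            fn try_from_def(definition: Definition) -> Option<Self> {
                match definition {
                    Definition::$variant(inner) => Some(inner),
                    _ => None,
                }
            }
            fn name(&self) -> String {
                // Bind the receiver under the caller-supplied name, then
                // evaluate the caller-supplied expression.
                let $def = self;
                $name
            }
        }
    };
}

impl_try_from_definition!(OperationDefinition, |def| {
    def.name().unwrap().ident_token().unwrap().text().into()
});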
You should also decide how you want to handle definitions that don't have a name; you'd probably want to return Option<String> to avoid all those .unwrap()s, but I don't know how you'd want to put that in your HashMap.
Without knowing your whole workflow, I might suggest a different route instead:
struct Definitions {
    operations: HashMap<String, OperationDefinition>,
    fragments: HashMap<String, FragmentDefinition>,
    ...
}

impl Definitions {
    fn from_document(document: &Document) -> Self {
        let mut operations = HashMap::new();
        let mut fragments = HashMap::new();
        ...

        for definition in document.definitions() {
            match definition {
                Definition::OperationDefinition(operation) => {
                    let name: String = operation.name().unwrap().ident_token().unwrap().text().into();
                    operations.insert(name, operation);
                },
                Definition::FragmentDefinition(fragment) => {
                    let name: String = fragment.fragment_name().unwrap().name().unwrap().ident_token().unwrap().text().into();
                    fragments.insert(name, fragment);
                },
                ...
            }
        }

        Definitions {
            operations,
            fragments,
            ...
        }
    }
}

What's the most idiomatic way of working with an Iterator of Results? [duplicate]

This question already has answers here:
How do I stop iteration and return an error when Iterator::map returns a Result::Err?
(4 answers)
Closed 3 years ago.
I have code like this:
let things = vec![/* ... */]; // e.g. Vec<String>
things
    .map(|thing| {
        let a = try!(do_stuff(thing));
        Ok(other_stuff(a))
    })
    .filter(|thing_result| match *thing_result {
        Err(e) => true,
        Ok(a) => check(a),
    })
    .map(|thing_result| {
        let a = try!(thing_result);
        // do stuff
        b
    })
    .collect::<Result<Vec<_>, _>>()
In terms of semantics, I want to stop processing after the first error.
The above code works, but it feels quite cumbersome. Is there a better way? I've looked through the docs for something like filter_if_ok, but I haven't found anything.
I am aware of collect::<Result<Vec<_>, _>>, and it works great. I'm specifically trying to eliminate the following boilerplate:
In the filter's closure, I have to use match on thing_result. I feel like this should just be a one-liner, e.g. .filter_if_ok(|thing| check(a)).
Every time I use map, I have to include an extra statement let a = try!(thing_result); in order to deal with the possibility of an Err. Again, I feel like this could be abstracted away into .map_if_ok(|thing| ...).
Is there another approach I can use to get this level of conciseness, or do I just need to tough it out?
There are lots of ways you could mean this.
If you just want to panic, use .map(|x| x.unwrap()).
If you want all results or a single error, collect into a Result<X<T>>:
let results: Result<Vec<i32>, _> = result_i32_iter.collect();
If you want everything except the errors, use .filter_map(|x| x.ok()) or .flat_map(|x| x).
If you want everything up to the first error, use .scan((), |_, x| x.ok()).
let results: Vec<i32> = result_i32_iter.scan((), |_, x| x.ok()).collect();
Note that these operations can be combined with earlier operations in many cases.
Since Rust 1.27, Iterator::try_for_each could be of interest:
An iterator method that applies a fallible function to each item in the iterator, stopping at the first error and returning that error.
This can also be thought of as the fallible form of for_each() or as the stateless version of try_fold().
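A minimal sketch of using it (with made-up input strings): processing stops at the first item that fails to parse, and that error is returned.
fn main() -> Result<(), std::num::ParseIntError> {
    ["1", "2", "x", "4"]
        .iter()
        .try_for_each(|s| {
            let n: i32 = s.parse()?;
            println!("parsed {}", n);
            Ok(())
        })
}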
You can implement these iterators yourself. See how filter and map are implemented in the standard library.
map_ok implementation:
#[derive(Clone)]
pub struct MapOkIterator<I, F> {
    iter: I,
    f: F,
}

impl<A, B, E, I, F> Iterator for MapOkIterator<I, F>
where
    F: FnMut(A) -> B,
    I: Iterator<Item = Result<A, E>>,
{
    type Item = Result<B, E>;

    #[inline]
    fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().map(|x| x.map(&mut self.f))
    }
}

pub trait MapOkTrait {
    fn map_ok<F, A, B, E>(self, func: F) -> MapOkIterator<Self, F>
    where
        Self: Sized + Iterator<Item = Result<A, E>>,
        F: FnMut(A) -> B,
    {
        MapOkIterator {
            iter: self,
            f: func,
        }
    }
}

impl<I, T, E> MapOkTrait for I
where
    I: Sized + Iterator<Item = Result<T, E>>,
{
}
filter_ok is almost the same:
#[derive(Clone)]
pub struct FilterOkIterator<I, P> {
    iter: I,
    predicate: P,
}

impl<I, P, A, E> Iterator for FilterOkIterator<I, P>
where
    P: FnMut(&A) -> bool,
    I: Iterator<Item = Result<A, E>>,
{
    type Item = Result<A, E>;

    #[inline]
    fn next(&mut self) -> Option<Result<A, E>> {
        for x in self.iter.by_ref() {
            match x {
                Ok(xx) => {
                    if (self.predicate)(&xx) {
                        return Some(Ok(xx));
                    }
                }
                Err(_) => return Some(x),
            }
        }
        None
    }
}

pub trait FilterOkTrait {
    fn filter_ok<P, A, E>(self, predicate: P) -> FilterOkIterator<Self, P>
    where
        Self: Sized + Iterator<Item = Result<A, E>>,
        P: FnMut(&A) -> bool,
    {
        FilterOkIterator {
            iter: self,
            predicate,
        }
    }
}

impl<I, T, E> FilterOkTrait for I
where
    I: Sized + Iterator<Item = Result<T, E>>,
{
}
Your code may look like this:
["1", "2", "3", "4"]
.iter()
.map(|x| x.parse::<u16>().map(|a| a + 10))
.filter_ok(|x| x % 2 == 0)
.map_ok(|x| x + 100)
.collect::<Result<Vec<_>, std::num::ParseIntError>>()
playground
filter_map can be used to reduce simple cases of mapping then filtering. In your example there is some logic to the filter so I don't think it simplifies things. I don't see any useful functions in the documentation for Result either unfortunately. I think your example is as idiomatic as it could get, but here are some small improvements:
let things = vec![...]; // e.g. Vec<String>
things.iter().map(|thing| {
    // The ? operator can be used in place of try! in the nightly version of Rust
    let a = do_stuff(thing)?;
    Ok(other_stuff(a))
    // The closure braces can be removed if the code is a single expression
}).filter(|thing_result| match *thing_result {
    Err(e) => true,
    Ok(a) => check(a),
}).map(|thing_result| {
    let a = thing_result?;
    // do stuff
    b
})
The ? operator can be less readable in some cases, so you might not want to use it.
If you are able to change the check function to return Some(x) instead of true, and None instead of false, you can use filter_map:
let bar = things.iter().filter_map(|thing| {
    match do_stuff(thing) {
        Err(e) => Some(Err(e)),
        Ok(a) => {
            let x = other_stuff(a);
            if check_2(x) {
                Some(Ok(x))
            } else {
                None
            }
        }
    }
}).map(|thing_result| {
    let a = try!(thing_result);
    // do stuff
    b
}).collect::<Result<Vec<_>, _>>();
You can get rid of the let a = try!(thing); by using a match in some cases as well. However, using filter_map here doesn't seem to help.

"Registering" trait implementations + factory method for trait objects

Say we want to have object implementations switched at runtime; we'd do something like this:
pub trait Methods {
    fn func(&self);
}

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}

pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}

pub struct Object<'a> {
    methods: &'a (Methods + 'a),
}

fn main() {
    let methods: [&Methods; 2] = [&Methods_0, &Methods_1];
    let mut obj = Object { methods: methods[0] };
    obj.methods.func();
    obj.methods = methods[1];
    obj.methods.func();
}
Now, what if there are hundreds of such implementations? E.g. imagine implementations of cards for a collectible card game where every card does something completely different and is hard to generalize, or implementations of opcodes for a huge state machine. Sure, you can argue that a different design pattern could be used -- but that's not the point of this question...
I wonder if there is any way for these impl structs to somehow "register" themselves so they can be looked up later by a factory method? I would be happy to end up with a magical macro or even a plugin to accomplish that.
Say, in D you can use templates to register the implementations -- and if you can't for some reason, you can always inspect modules at compile-time and generate new code via mixins; there are also user-defined attributes that can help in this. In Python, you would normally use a metaclass so that every time a new child class is created, a ref to it is stored in the metaclass's registry which allows you to look up implementations by name or parameter; this can also be done via decorators if implementations are simple functions.
Ideally, in the example above you would be able to create Object as
Object::new(0)
where the value 0 is only known at runtime, and it would magically return an Object { methods: &Methods_0 }. The body of new() would not have the implementations hard-coded like "methods: [&Methods; 2] = [&Methods_0, &Methods_1]"; instead, they should somehow be inferred automatically.
So, this is probably extremely buggy, but it works as a proof of concept.
It is possible to use Cargo's code-generation support to do the introspection at compile time, by parsing (well, not exactly parsing in this case, but you get the idea) the implementations that are present and generating the boilerplate necessary to make Object::new() work.
The code is pretty convoluted and has no error handling whatsoever, but works.
Tested on rustc 1.0.0-dev (2c0535421 2015-02-05 15:22:48 +0000)
(See on github)
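Cargo.toml (a sketch with made-up package metadata; the manifest points Cargo at the build script, which newer Cargo also picks up automatically when it is named build.rs):
[package]
name = "register-impls"
version = "0.0.1"
build = "build.rs"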
src/main.rs:
pub mod implementations;

mod generated_glue {
    include!(concat!(env!("OUT_DIR"), "/generated_glue.rs"));
}

use generated_glue::Object;

pub trait Methods {
    fn func(&self);
}

pub struct Methods_2;
impl Methods for Methods_2 {
    fn func(&self) {
        println!("baz");
    }
}

fn main() {
    Object::new(2).func();
}
src/implementations.rs:
use super::Methods;

pub struct Methods_0;
impl Methods for Methods_0 {
    fn func(&self) {
        println!("foo");
    }
}

pub struct Methods_1;
impl Methods for Methods_1 {
    fn func(&self) {
        println!("bar");
    }
}
build.rs:
#![feature(core, unicode, path, io, env)]

use std::env;
use std::old_io::{fs, File, BufferedReader};
use std::collections::HashMap;

fn main() {
    let target_dir = Path::new(env::var_string("OUT_DIR").unwrap());
    let mut target_file = File::create(&target_dir.join("generated_glue.rs")).unwrap();
    let source_code_path = Path::new(file!()).join_many(&["..", "src/"]);
    let source_files = fs::readdir(&source_code_path).unwrap().into_iter()
        .filter(|path| {
            match path.str_components().last() {
                Some(Some(filename)) => filename.split('.').last() == Some("rs"),
                _ => false,
            }
        });
    let mut implementations = HashMap::new();
    for source_file_path in source_files {
        let relative_path = source_file_path.path_relative_from(&source_code_path).unwrap();
        let source_file_name = relative_path.as_str().unwrap();
        implementations.insert(source_file_name.to_string(), vec![]);
        let mut file_implementations = &mut implementations[*source_file_name];
        let mut source_file = BufferedReader::new(File::open(&source_file_path).unwrap());
        for line in source_file.lines() {
            let line_str = match line {
                Ok(line_str) => line_str,
                Err(_) => break,
            };
            if line_str.starts_with("impl Methods for Methods_") {
                const PREFIX_LEN: usize = 25;
                let number_len = line_str[PREFIX_LEN..].chars().take_while(|chr| {
                    chr.is_digit(10)
                }).count();
                let number: i32 = line_str[PREFIX_LEN..(PREFIX_LEN + number_len)].parse().unwrap();
                file_implementations.push(number);
            }
        }
    }
    writeln!(&mut target_file, "use super::Methods;").unwrap();
    for (source_file_name, impls) in &implementations {
        let module_name = match source_file_name.split('.').next() {
            Some("main") => "super",
            Some(name) => name,
            None => panic!(),
        };
        for impl_number in impls {
            writeln!(&mut target_file, "use {}::Methods_{};", module_name, impl_number).unwrap();
        }
    }
    let all_impls = implementations.values().flat_map(|impls| impls.iter());
    writeln!(&mut target_file, "
pub struct Object;
impl Object {{
pub fn new(impl_number: i32) -> Box<Methods + 'static> {{
match impl_number {{
").unwrap();
    for impl_number in all_impls {
        writeln!(&mut target_file,
                 " {} => Box::new(Methods_{}),", impl_number, impl_number).unwrap();
    }
    writeln!(&mut target_file, "
_ => panic!(\"Unknown impl number: {{}}\", impl_number),
}}
}}
}}").unwrap();
}
The generated code:
use super::Methods;
use super::Methods_2;
use implementations::Methods_0;
use implementations::Methods_1;

pub struct Object;

impl Object {
    pub fn new(impl_number: i32) -> Box<Methods + 'static> {
        match impl_number {
            2 => Box::new(Methods_2),
            0 => Box::new(Methods_0),
            1 => Box::new(Methods_1),
            _ => panic!("Unknown impl number: {}", impl_number),
        }
    }
}
