Back here while learning macros,
I'm wondering if a macro can generate a struct and return it.
I'm trying to iterate over multiple components in an ECS. I've managed to write the functions to iterate over one component type, and over all pairs of components, in a way that could be expanded to any number of component types.
So now, I want to make a macro that automates the process. The macro should:
create a struct that will be the iterator, and that will be returned
implement the Iterator trait on that struct
build an instance of that struct and return it.
You can see on the Git page the implementations of the iterator, the structs, and the functions for 1 and 2 components, and they can be extended very easily (at the end of the file).
Here is what I'm trying for now:
#[macro_export]
macro_rules! iterate_over_component {
    ($ecs:expr; $($comp:ty),+) => {
        // create the result struct that will act as an iterator
        struct ComponentIterator<$($comp),+> {
            current_iterator: usize,
            current_entity: usize,
            $(
                array_$comp: Option<&'a Vec<IndexedElem<$comp>>>,
                current_index_$comp: usize,
            )*
        }
    };
}
But I'm having lots of issues:
It doesn't seem the ty fragment specifier accepts types? I would like to call the macro like so: iterate_over_component!(ecs; Position, Velocity) (where Position and Velocity are struct types).
I'm getting errors when using the struct keyword at the beginning of the macro body, so can I really declare structs in there?
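As a point of comparison, here is a minimal compilable sketch (with illustrative names, not the ECS code above) showing that a macro_rules! macro can indeed declare a struct, provided the pieces spliced into identifier position are ident fragments:

```rust
// Minimal sketch (illustrative names): a macro can declare a struct,
// but anything used as an identifier must be an `ident` fragment.
macro_rules! make_struct {
    ($name:ident { $($field:ident : $ty:ty),+ $(,)? }) => {
        struct $name {
            $($field: $ty,)+
        }
    };
}

make_struct!(Point { x: u32, y: f32 });

fn main() {
    let p = Point { x: 1, y: 2.5 };
    println!("{} {}", p.x, p.y);
}
```

Note that ty fragments do match types, but a matched type cannot be glued onto a new identifier like array_$comp; identifier concatenation needs a helper such as the paste crate.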
Related
How can I downcast a trait to a struct like in this C# example?
I have a base trait and several derived structs that must be pushed into a single vector of base traits.
I have to check if each item of the vector is castable to a specific derived struct and, if yes, use it as a struct of that type.
This is my Rust code, I don't know how to implement the commented part.
trait Base {
    fn base_method(&self);
}

struct Derived1;

impl Derived1 {
    pub fn derived1_method(&self) {
        println!("Derived1");
    }
}

impl Base for Derived1 {
    fn base_method(&self) {
        println!("Base Derived1");
    }
}

struct Derived2;

impl Derived2 {
    pub fn derived2_method(&self) {
        println!("Derived2");
    }
}

impl Base for Derived2 {
    fn base_method(&self) {
        println!("Base Derived2");
    }
}

fn main() {
    let mut bases = Vec::<Box<dyn Base>>::new();
    let der1 = Derived1{};
    let der2 = Derived2{};
    bases.push(Box::new(der1));
    bases.push(Box::new(der2));
    for b in bases {
        b.base_method();
        //if (b is Derived1)
        //    (b as Derived1).derived1_method();
        //else if (b is Derived2)
        //    (b as Derived2).derived2_method();
    }
}
Technically you can use as_any, as explained in this answer:
How to get a reference to a concrete type from a trait object?
However, type-checking and downcasting when looping over a vector of trait objects is considered a code smell. If you put a bunch of objects into a vector and then loop over that vector, presumably the objects in that vector are supposed to play a similar role.
So then you should refactor your code such that you can call the same method on your object regardless of the underlying concrete type.
From your code, it seems you're purely checking the type (and downcasting) so that you can call the appropriate method. What you really should do, then, is introduce yet another trait that provides a unified interface that you then can call from your loop, so that the loop doesn't need to know the concrete type at all.
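For completeness, here is a minimal sketch of the as_any route applied to a trimmed version of the code above (the as_any method is an addition to the original trait, not part of the question's code):

```rust
use std::any::Any;

trait Base {
    fn base_method(&self);
    // Extra method so callers can reach the concrete type.
    fn as_any(&self) -> &dyn Any;
}

struct Derived1;

impl Derived1 {
    fn derived1_method(&self) {
        println!("Derived1");
    }
}

impl Base for Derived1 {
    fn base_method(&self) {
        println!("Base Derived1");
    }
    fn as_any(&self) -> &dyn Any {
        self
    }
}

fn main() {
    let bases: Vec<Box<dyn Base>> = vec![Box::new(Derived1)];
    for b in &bases {
        b.base_method();
        // The downcast only succeeds if the concrete type matches.
        if let Some(d) = b.as_any().downcast_ref::<Derived1>() {
            d.derived1_method();
        }
    }
}
```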
EDIT: Allow me to add a concrete example that highlights this. I'm going to use Python, because in Python it's very easy to do what you are asking, so we can focus on why it's not the best design choice:
class Dog:
    def bark(self):
        print("Woof woof")

class Cat:
    def meow(self):
        print("Meow meow")

list_of_animals = [Dog(), Dog(), Cat(), Dog()]

for animal in list_of_animals:
    if isinstance(animal, Dog):
        animal.bark()
    elif isinstance(animal, Cat):
        animal.meow()
Here Python's dynamic typing allows us to just slap all the objects into the same list, iterate over it, and then figure out the type at runtime so we can call the right method.
But really the whole point of well-designed object oriented code is to lift that burden from the caller to the object. This type of design is very inflexible, because as soon as you add another animal, you'll have to add another branch to your if block, and you better do that everywhere you had that branching.
The solution is of course to identify the common role that both bark and meow play, and abstract that behavior into an interface. In Python of course we don't need to formally declare such an interface, so we can just slap another method in:
class Dog:
    ...
    def make_sound(self):
        self.bark()

class Cat:
    ...
    def make_sound(self):
        self.meow()

...
for animal in list_of_animals:
    animal.make_sound()
In your Rust code, you actually have two options, and which to pick depends on the rest of your design. Either, as I suggested, add another trait that expresses the common role the objects play (why put them into a vector otherwise in the first place?) and implement it for all your derived structs. Or, alternatively, express all the various derived structs as different variants of the same enum, and add a method to the enum that handles the dispatch. The enum is of course more closed to outside extension than the trait version, which is why the right solution will depend on your needs.
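Translated back into Rust, the trait version of that refactor could look like this (trait and method names are illustrative):

```rust
// A common trait lifts the dispatch from the caller into the objects.
trait MakeSound {
    fn make_sound(&self);
}

struct Dog;
struct Cat;

impl MakeSound for Dog {
    fn make_sound(&self) {
        println!("Woof woof");
    }
}

impl MakeSound for Cat {
    fn make_sound(&self) {
        println!("Meow meow");
    }
}

fn main() {
    let animals: Vec<Box<dyn MakeSound>> = vec![Box::new(Dog), Box::new(Cat)];
    for animal in &animals {
        // No type checks needed; dynamic dispatch picks the right method.
        animal.make_sound();
    }
}
```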
Let's say I have a trait ShowStrings with a method show_strings. Calling show_strings will deeply find Strings and print them out (with println!()).
The trait has an accompanying derive macro which traverses all the fields of a structure and generates the recursive calls.
So, for example, given the structure struct Person { first_name: String, last_name: String, children: Vec<Person>, height: f32 }, Person::show_strings will print out the first and last name and then recurse to do the same for the children.
The trait ShowStrings implementation on String is to print out self.
The question I have is what to do about the height: f32 field. f32 is not a string neither does it have any fields that can be strings. There are two options for how to handle ShowStrings for f32:
Either the proc derive macro generates a ShowStrings::show_strings(self.height) call and f32 gets a blank implementation: impl ShowStrings for f32 { fn show_strings(&self) {} }
Or the proc macro sees the f32 type reference and decides to ignore that field.
The first method is what the gc crate does. It can be automated with a simple macro, so it is not too manual. It would mean ShowStrings gets implemented on types like (), which doesn't make sense and could confuse users of the API. I would also assume Rust compiles away a call like ShowStrings::show_strings(field_that_has_empty_implementation), so it would not actually call/jump.
The second method ends up with less generated code. But type references are weak: the type name may have been shadowed.
I think I will go for the first method. Are there any performance, binary size or other concerns around having lots of empty implementations like impl MyTrait for *inbuilt_type* { fn func() {} }?
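As a hand-written sketch of what option one amounts to (the derive macro would generate the Person impl; all names here are illustrative, not the real crate):

```rust
trait ShowStrings {
    fn show_strings(&self);
}

// Leaf case: a String prints itself.
impl ShowStrings for String {
    fn show_strings(&self) {
        println!("{}", self);
    }
}

// Blank implementation for non-string leaf types (option one).
impl ShowStrings for f32 {
    fn show_strings(&self) {}
}

// Containers recurse into their elements.
impl<T: ShowStrings> ShowStrings for Vec<T> {
    fn show_strings(&self) {
        for item in self {
            item.show_strings();
        }
    }
}

struct Person {
    first_name: String,
    last_name: String,
    children: Vec<Person>,
    height: f32,
}

// What the derive macro would generate: one call per field.
impl ShowStrings for Person {
    fn show_strings(&self) {
        self.first_name.show_strings();
        self.last_name.show_strings();
        self.children.show_strings();
        self.height.show_strings(); // compiles to a no-op
    }
}

fn main() {
    let p = Person {
        first_name: "Ada".into(),
        last_name: "Lovelace".into(),
        children: vec![],
        height: 1.65,
    };
    p.show_strings();
}
```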
The main goal is to implement a computation graph, that handles nodes with values and nodes with operators (think of simple arithmetic operators like add, subtract, multiply etc..). An operator node can take up to two value nodes, and "produces" a resulting value node.
Up to now, I'm using an enum to differentiate between a value and operator node:
pub enum Node<'a, T> where T : Copy + Clone {
    Value(ValueNode<'a, T>),
    Operator(OperatorNode),
}

pub struct ValueNode<'a, T> {
    id: usize,
    value_object: &'a dyn ValueType<T>,
}
Update: Node::Value contains a struct, which itself contains a reference to a trait object ValueType, which is being implemented by a variety of concrete types.
But here comes the problem. During compilation, the generic types will be monomorphized and replaced by the actual types. The generic type T is also being propagated throughout the computation graph (obviously):
pub struct ComputationGraph<T> where T : Copy + Clone {
    nodes: Vec<Node<T>>,
}
This actually restricts the usage of ComputationGraph to one specific ValueType.
Furthermore the generic type T cannot be Sized, since a value node can be an opaque type handled by a different backend not available through Rust (think of C opaque types made available through FFI).
One possible solution to this problem would be to introduce an additional enum type that "mirrors" the concrete implementations of the ValueType trait mentioned above. This approach would be similar to what enum_dispatch does.
Is there anything I haven't thought of that would allow using multiple implementations of ValueType?
update:
What I want to achieve is the following code:
pub struct Scalar<T> where T : Copy + Clone {
    data: T,
}

fn main() {
    let cg = ComputeGraph::new();
    // a new scalar type. doesn't have to be a tuple struct
    let a = Scalar::new::<f32>(1.0);
    let b_size = 32;
    let b = Container::new::<opaque_type>(32);
    let op = OperatorAdd::new();
    // cg.insert_operator_node constructs four nodes internally:
    // 3 value nodes and one operator node.
    let result = cg.insert_operator_node::<Container>(&op, &a, &b);
}
update
ValueType<T> looks like this
pub trait ValueType<T> {
    fn get_size(&self) -> usize;
    fn get_value(&self) -> T;
}
update
To further increase the clarity of my question, think of a small BLAS library backed by OpenCL. The memory management and device interaction shall be transparent to the user. A Matrix type allocates space on an OpenCL device as a primitive-type buffer, and the appropriate call will return a pointer to that specific region of memory. Think of an operation that will scale the matrix by a scalar, represented by a primitive value. Both the (pointer to the) buffer and the scalar can be passed to a kernel function. Going back to the ComputeGraph, it seems obvious that all BLAS operations form some type of computational graph, which can be reduced to a linear list of instructions (think here of setting kernel arguments, allocating buffers, enqueueing the kernel, storing the result, etc.). Having said all that, a computation graph needs to be able to store value nodes with a variety of types.
As always the answer to the problem posed in the question is obvious. The graph expects one generic type (with trait bounds). Using an enum to "cluster" various subtypes was the solution, as already sketched out in the question.
An example to illustrate the solution. Consider following "subtypes":
struct Buffer<T> {
    // fields
}

struct Scalar<T> {
    // fields
}

struct Kernel {
    // fields
}
The value containing types can be packed into an enum:
enum MemType {
    Buffer(Buffer<f32>),
    Scalar(Scalar<f32>),
    // more enum variants ..
}
MemType and Kernel can now be packed into an enum as well:
enum Node {
    Value(MemType),
    Operator(Kernel),
}
Node can now be used as the main type for nodes/vertices inside the graph. The solution might not be very elegant, but it does the trick for now. Maybe some code restructuring might be done in the future.
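To make the shape of the solution concrete, here is a minimal compilable sketch with placeholder field contents (not the real implementation):

```rust
// Placeholder fields; the real structs would hold device handles etc.
struct Buffer<T> {
    data: Vec<T>,
}

struct Scalar<T> {
    value: T,
}

struct Kernel {
    name: String,
}

// One enum clusters the value-carrying subtypes...
enum MemType {
    Buffer(Buffer<f32>),
    Scalar(Scalar<f32>),
}

// ...and another distinguishes value nodes from operator nodes.
enum Node {
    Value(MemType),
    Operator(Kernel),
}

struct ComputeGraph {
    nodes: Vec<Node>,
}

fn main() {
    let cg = ComputeGraph {
        nodes: vec![
            Node::Value(MemType::Scalar(Scalar { value: 2.0 })),
            Node::Operator(Kernel { name: "scale".into() }),
        ],
    };
    println!("graph has {} nodes", cg.nodes.len());
}
```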
I am very new to Rust and I was wondering if it's possible to define a tuple as a struct member. Something like:
struct MyStruct {
    (x, y) : (u32, f32)
}
The compiler complains about the first comma, so this obviously isn't the right syntax. Is it even possible? I can't find anything in the documentation, and if I search for tuple and struct I get results for tuple structs, which is not what I'm looking for.
For anyone interested why I want to know this, I have a function that returns a tuple and I want to store the result inside a member of a struct. Currently I am calling the function on two temporary variables and then moving the results into two different struct members, but not sure if this is the right way to do it.
A tuple is a single value that can hold multiple elements, so when you define it in your struct it is still a single variable/field:
struct MyStruct {
    x: (u32, f32),
}
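Since the motivation was storing a tuple-returning function's result, here is a minimal sketch (the function is hypothetical): the whole tuple can be assigned to the field in one step and destructured later when the parts are needed.

```rust
struct MyStruct {
    x: (u32, f32),
}

// Hypothetical stand-in for the function that returns a tuple.
fn compute() -> (u32, f32) {
    (1, 2.5)
}

fn main() {
    // Store the result directly; no temporary variables needed.
    let s = MyStruct { x: compute() };
    // Destructure when you want the individual parts.
    let (a, b) = s.x;
    println!("{} {}", a, b);
}
```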
I have two fragments of code which worked well until a few days ago. They look fairly simple:
1.
let mut mask = 0 as c_ulong;
for bit in bits.iter() {
    mask |= *bit as c_ulong;
}
2.
for bit in vec!(...).iter() {
    if res & *bit as c_ulong != 0 {
        bits.push(*bit);
    }
}
The iterated vector in both cases contains an enum like:
#[allow(non_camel_case_types)]
#[deriving(PartialEq, Show)]
pub enum SomeEnum {
    BLAH = 0x01,
    ...
}
But unfortunately now that code causes the following error on all *bit expressions.
cannot move out of dereference of `&`-pointer
I don't understand this error. Why is it now not allowed? I have a pointer to an immutable vector with enum variants that are practically just int values.
Also, how can I fix it? I know the first case can be transformed to use .into_iter() - that's ok, I don't need that vector afterwards. But in the second fragment, I actually can't use bit twice after .into_iter(). Even though I only compare it the first time!
Am I missing something trivial here?
Your type used to be Copy, but this trait is now opt-in.
Add #[deriving(Clone, Copy)] (or add Clone and Copy to the list of traits in an existing #[deriving] attribute) to your type. You need to implement both Clone and Copy, because Copy is a subtrait of Clone (as any type that implements Copy can also trivially implement Clone).
pub trait Copy: Clone { }
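A minimal sketch with the derives added (using the modern spelling derive/Debug; the question's deriving/Show is pre-1.0 syntax), showing that *bit compiles again once the enum is Copy:

```rust
// With Clone and Copy derived, `*bit` copies the value instead of
// trying to move it out of the reference.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum SomeEnum {
    BLAH = 0x01,
}

fn main() {
    let bits = vec![SomeEnum::BLAH];
    let mut mask: u64 = 0; // standing in for c_ulong
    for bit in bits.iter() {
        mask |= *bit as u64; // ok: dereferencing a Copy type copies it
    }
    println!("{:#x}", mask);
}
```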