Providing Blanket Trait Implementations for a Custom Trait - rust

I am implementing a quick geometry crate for practice, and I want to implement two structs, Vector and Normal (this is because standard vectors and normal vectors map through certain transformations differently). I've implemented the following trait:
trait Components {
    fn new(x: f32, y: f32, z: f32) -> Self;
    fn x(&self) -> f32;
    fn y(&self) -> f32;
    fn z(&self) -> f32;
}
I'd also like to be able to add two vectors together, as well as two normals, so I have blocks that look like this:
impl Add<Vector> for Vector {
    type Output = Vector;

    fn add(self, rhs: Vector) -> Vector {
        Vector { vals: [
            self.x() + rhs.x(),
            self.y() + rhs.y(),
            self.z() + rhs.z()] }
    }
}
And almost the exact same impl for Normals. What I really want is to provide a default Add impl for every struct that implements Components, since typically, they all will add the same way (e.g. a third struct called Point will do the same thing). Is there a way of doing this besides writing out three identical implementations for Point, Vector, and Normal? Something that might look like this:
impl Add<Components> for Components {
    type Output = Components;

    fn add(self, rhs: Components) -> Components {
        Components::new(
            self.x() + rhs.x(),
            self.y() + rhs.y(),
            self.z() + rhs.z())
    }
}
Where "Components" would automatically get replaced by the appropriate type. I suppose I could do it in a macro, but that seems a little hacky to me.

In Rust, it is possible to define generic impls, but there are some important restrictions that result from the coherence rules. You'd like an impl that goes like this:
impl<T: Components> Add<T> for T {
    type Output = T;

    fn add(self, rhs: T) -> T {
        T::new(
            self.x() + rhs.x(),
            self.y() + rhs.y(),
            self.z() + rhs.z())
    }
}
Unfortunately, this does not compile:
error: type parameter T must be used as the type parameter for some local type (e.g. MyStruct<T>); only traits defined in the current crate can be implemented for a type parameter [E0210]
Why? Suppose your Components trait were public. Now, a type in another crate could implement the Components trait. That type might also try to implement the Add trait. Whose implementation of Add should win, your crate's or that other crate's? By Rust's current coherence rules, the other crate gets this privilege.
For now, the only option, besides repeating the impls, is to use a macro. Rust's standard library uses macros in many places to avoid repeating impls (especially for the primitive types), so you don't have to feel dirty! :P
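For example, here is a minimal sketch of such a macro, built around the Components trait and the vals: [f32; 3] layout from the question (the impl_add! name is made up for illustration):

use std::ops::Add;

trait Components {
    fn new(x: f32, y: f32, z: f32) -> Self;
    fn x(&self) -> f32;
    fn y(&self) -> f32;
    fn z(&self) -> f32;
}

struct Vector { vals: [f32; 3] }
struct Normal { vals: [f32; 3] }

impl Components for Vector {
    fn new(x: f32, y: f32, z: f32) -> Self { Vector { vals: [x, y, z] } }
    fn x(&self) -> f32 { self.vals[0] }
    fn y(&self) -> f32 { self.vals[1] }
    fn z(&self) -> f32 { self.vals[2] }
}

impl Components for Normal {
    fn new(x: f32, y: f32, z: f32) -> Self { Normal { vals: [x, y, z] } }
    fn x(&self) -> f32 { self.vals[0] }
    fn y(&self) -> f32 { self.vals[1] }
    fn z(&self) -> f32 { self.vals[2] }
}

// One invocation stamps out the identical Add impl for each listed type.
macro_rules! impl_add {
    ($($t:ty),*) => {
        $(
            impl Add for $t {
                type Output = $t;

                fn add(self, rhs: $t) -> $t {
                    <$t as Components>::new(
                        self.x() + rhs.x(),
                        self.y() + rhs.y(),
                        self.z() + rhs.z(),
                    )
                }
            }
        )*
    };
}

impl_add!(Vector, Normal);

fn main() {
    let v = Vector::new(1.0, 2.0, 3.0) + Vector::new(4.0, 5.0, 6.0);
    assert_eq!((v.x(), v.y(), v.z()), (5.0, 7.0, 9.0));
}

Adding a Point later is then a one-word change to the impl_add! invocation.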

At present, macros are the only way to do this. Coherence rules prevent multiple implementations that could overlap, so you can’t use a generic solution.

Related

Why does impl RangeBounds<T> for Range<&T> require T to be Sized?

https://doc.rust-lang.org/src/core/ops/range.rs.html#979-986
impl<T> RangeBounds<T> for Range<&T> {
    fn start_bound(&self) -> Bound<&T> {
        Included(self.start)
    }
    fn end_bound(&self) -> Bound<&T> {
        Excluded(self.end)
    }
}
As you can see, T is not marked with ?Sized, which prevents me from passing a Range<&[u8]> to an argument that requires impl RangeBounds<[u8]>.
Are there some design considerations behind it? If this is intended, which is the proper way to pass a range of [u8]?
It's rather unfortunate, but adding the T: ?Sized bound breaks type inference in code like btreemap.range("from".."to") because the compiler can't choose between T = str and T = &str, both of which satisfy the bounds on range.
Some discussion of this has taken place on PR #64327. As far as I know there are no plans to relax the bounds on these impls in the future.
Instead of a Range<T>, you can use a tuple of Bound<T>s; for example, instead of takes_range("from".."to"), you can write the following instead:
use std::ops::Bound;
takes_range((Bound::Included("from"), Bound::Excluded("to")));
This will work because (Bound<&T>, Bound<&T>) does implement RangeBounds<T> even when T is !Sized.
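For instance, here is a small sketch against BTreeMap (the map contents are purely illustrative) showing that the tuple form satisfies RangeBounds<[u8]> even though [u8] is unsized:

use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut map: BTreeMap<Vec<u8>, u32> = BTreeMap::new();
    map.insert(b"apple".to_vec(), 1);
    map.insert(b"banana".to_vec(), 2);
    map.insert(b"cherry".to_vec(), 3);

    // (Bound<&[u8]>, Bound<&[u8]>) implements RangeBounds<[u8]>, and
    // Vec<u8>: Borrow<[u8]>, so this range query type-checks.
    let from: &[u8] = b"apple";
    let to: &[u8] = b"cherry";
    for (key, value) in map.range::<[u8], _>((Bound::Included(from), Bound::Excluded(to))) {
        println!("{:?} -> {}", key, value);
    }
}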

Should I implement AddAssign on a newtype?

I've got a newtype:
struct NanoSecond(u64);
I want to implement addition for this. (I'm actually using derive_more, but here's an MCVE.)
impl Add for NanoSecond {
    type Output = Self;

    fn add(self, other: Self) -> Self {
        Self(self.0 + other.0)
    }
}
But should I implement AddAssign? Is it required for this to work?
let mut x: NanoSecond = 0.to();
let y: NanoSecond = 5.to();
x += y;
Will implementing it cause unexpected effects?
Implementing AddAssign is indeed required for the += operator to work.
Whether to implement this trait depends greatly on the actual type and the semantics you are aiming for. This applies to any type of your own making, including newtypes. The most important criterion is predictability: an implementation should behave the way the corresponding mathematical operation is expected to behave. In this case, since addition through Add is already well defined for the type and nothing stops you from implementing the equivalent operation in-place, adding an AddAssign impl like the one below is the most predictable thing to do.
impl AddAssign for NanoSecond {
    fn add_assign(&mut self, other: Self) {
        self.0 += other.0
    }
}
One may also choose to provide additional implementations for reference types as the second operand (e.g. Add<&'a Self> and AddAssign<&'a Self>).
Note that Clippy has lints which check whether the implementation of the arithmetic operation is sound (suspicious_arithmetic_impl and suspicious_op_assign_impl). As part of being predictable, the trait should behave pretty much like the respective mathematical operation, regardless of whether + or += was used. To the best of my knowledge though, there is currently no lint or API guideline suggesting to implement -Assign traits alongside the respective operation.
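A minimal sketch of what the owned and reference-operand impls mentioned above could look like together (the derives on NanoSecond are added here only so the asserts compile):

use std::ops::{Add, AddAssign};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct NanoSecond(u64);

impl Add for NanoSecond {
    type Output = Self;

    fn add(self, other: Self) -> Self {
        NanoSecond(self.0 + other.0)
    }
}

impl<'a> Add<&'a NanoSecond> for NanoSecond {
    type Output = Self;

    fn add(self, other: &'a NanoSecond) -> Self {
        NanoSecond(self.0 + other.0)
    }
}

impl AddAssign for NanoSecond {
    fn add_assign(&mut self, other: Self) {
        self.0 += other.0;
    }
}

impl<'a> AddAssign<&'a NanoSecond> for NanoSecond {
    fn add_assign(&mut self, other: &'a NanoSecond) {
        self.0 += other.0;
    }
}

fn main() {
    let mut x = NanoSecond(0);
    let y = NanoSecond(5);
    x += y;         // AddAssign
    x += &y;        // AddAssign<&NanoSecond>
    x = x + y + &y; // Add and Add<&NanoSecond>
    assert_eq!(x, NanoSecond(20));
}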

Is it possible to make my own Box-like wrapper?

I noticed that Box<T> implements everything that T implements and can be used transparently. For Example:
let mut x: Box<Vec<u8>> = Box::new(Vec::new());
x.push(5);
I would like to be able to do the same.
This is one use case:
Imagine I'm writing functions that operate on an X axis and a Y axis. The values I use to move along those axes are numbers, but each value belongs to only one axis or the other.
I would like the compiler to fail if I attempt an operation that mixes values from different axes.
Example:
let x = AxisX(5);
let y = AxisY(3);
let result = x + y; // error: incompatible types
I can do this by making a struct that will wrap the numbers:
struct AxisX(i32);
struct AxisY(i32);
But that won't give me access to all the methods that i32 provides like abs(). Example:
x.abs() + 3 // error: abs() does not exist
// ...maybe another error because I don't implement the addition...
Another possible use case:
You can take a struct from another library and implement or derive anything extra you want on it. For example: a struct that doesn't derive Debug could be wrapped and given a Debug implementation.
You are looking for std::ops::Deref:
In addition to being used for explicit dereferencing operations with the (unary) * operator in immutable contexts, Deref is also used implicitly by the compiler in many circumstances. This mechanism is called 'Deref coercion'. In mutable contexts, DerefMut is used.
Further:
If T implements Deref<Target = U>, and x is a value of type T, then:
In immutable contexts, *x on non-pointer types is equivalent to *Deref::deref(&x).
Values of type &T are coerced to values of type &U
T implicitly implements all the (immutable) methods of the type U.
For more details, visit the chapter in The Rust Programming Language as well as the reference sections on the dereference operator, method resolution and type coercions.
By implementing Deref it will work:
use std::ops::Deref;

impl Deref for AxisX {
    type Target = i32;

    fn deref(&self) -> &i32 {
        &self.0
    }
}
x.abs() + 3
You can see this in action on the Playground.
However, if you call functions from your underlying type (i32 in this case), the return type will remain the underlying type. Therefore
assert_eq!(AxisX(10).abs() + AxisY(20).abs(), 30);
will pass. To solve this, you can override the methods you need:
impl AxisX {
    pub fn abs(&self) -> Self {
        // *self gets you `AxisX`
        // **self dereferences to i32
        AxisX((**self).abs())
    }
}
With this, the above code fails. Take a look at it in action.
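Putting the pieces together, here is a self-contained sketch with both axes (the AxisY half simply mirrors AxisX):

use std::ops::Deref;

struct AxisX(i32);
struct AxisY(i32);

impl Deref for AxisX {
    type Target = i32;

    fn deref(&self) -> &i32 {
        &self.0
    }
}

impl Deref for AxisY {
    type Target = i32;

    fn deref(&self) -> &i32 {
        &self.0
    }
}

impl AxisX {
    // Overridden so the result keeps the AxisX type instead of decaying to i32.
    pub fn abs(&self) -> Self {
        AxisX((**self).abs())
    }
}

fn main() {
    let x = AxisX(-5);
    let y = AxisY(3);

    // i32 methods are reachable through Deref coercion (the result is a plain i32):
    assert_eq!(y.pow(2), 9);

    // abs() now returns AxisX, so it cannot silently mix with AxisY:
    let x_abs: AxisX = x.abs();
    assert_eq!(*x_abs, 5);

    // let _ = x_abs + y; // does not compile: no Add between AxisX and AxisY
}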

How do I solve the error "the precise format of `Fn`-family traits' type parameters is subject to change"?

I have written a problem solver in Rust which as a subroutine needs to make calls to a function which is given as a black box (essentially I would like to give an argument of type Fn(f64) -> f64).
Essentially I have a function defined as fn solve<F>(f: F) where F : Fn(f64) -> f64 { ... } which means that I can call solve like this:
solve(|x| x);
What I would like to do is to pass a more complex function to the solver, i.e. a function which depends on multiple parameters etc.
I would like to be able to pass a struct with a suitable trait implementation to the solver. I tried the following:
struct Test;
impl Fn<(f64,)> for Test {}
This yield the following error:
error: the precise format of `Fn`-family traits' type parameters is subject to change. Use parenthetical notation (Fn(Foo, Bar) -> Baz) instead (see issue #29625)
I would also like to add a trait which includes the Fn trait (which I don't know how to define, unfortunately). Is that possible as well?
Edit:
Just to clarify: I have been developing in C++ for quite a while; the C++ solution would be to overload operator()(args). In that case I could use a struct or class like a function. I would like to be able to:
Pass both functions and structs to the solver as arguments.
Have an easy way to call the functions. Calling obj.method(args) is more complicated than obj(args) (in C++). But it seems that this behavior is not achievable currently.
The direct answer is to do exactly as the error message says:
Use parenthetical notation instead
That is, instead of Fn<(A, B)>, use Fn(A, B)
The real problem is that you are not allowed to implement the Fn* family of traits yourself in stable Rust.
The real question you are asking is harder to be sure of because you haven't provided an MCVE, so we are reduced to guessing. I'd say you should flip it around the other way: create a new trait, and implement it both for closures and for your type:
trait Solve {
    type Output;
    fn solve(&mut self) -> Self::Output;
}

impl<F, T> Solve for F
where
    F: FnMut() -> T,
{
    type Output = T;

    fn solve(&mut self) -> Self::Output {
        (self)()
    }
}

struct Test;

impl Solve for Test {
    // interesting things
}

fn main() {}
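For the original Fn(f64) -> f64 use case, the same pattern could look like this sketch (the Objective and Quadratic names are made up for illustration):

// A trait of our own that stands in for "callable with an f64, returns an f64".
trait Objective {
    fn eval(&self, x: f64) -> f64;
}

// Every closure or fn of the right shape gets the trait for free.
impl<F: Fn(f64) -> f64> Objective for F {
    fn eval(&self, x: f64) -> f64 {
        self(x)
    }
}

// A "function object" carrying extra parameters.
struct Quadratic {
    a: f64,
    b: f64,
    c: f64,
}

impl Objective for Quadratic {
    fn eval(&self, x: f64) -> f64 {
        self.a * x * x + self.b * x + self.c
    }
}

// The solver only needs the trait; a real one would iterate, of course.
fn solve<F: Objective>(f: F) -> f64 {
    f.eval(1.0)
}

fn main() {
    assert_eq!(solve(|x: f64| x), 1.0);
    assert_eq!(solve(Quadratic { a: 1.0, b: 0.0, c: 2.0 }), 3.0);
}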

How do I implement the Add trait for a reference to a struct?

I made a two element Vector struct and I want to overload the + operator.
I made all my functions and methods take references, rather than values, and I want the + operator to work the same way.
impl Add for Vector {
    fn add(&self, other: &Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
Depending on which variation I try, I either get lifetime problems or type mismatches. Specifically, the &self argument seems to not get treated as the right type.
I have seen examples with template arguments on impl as well as Add, but they just result in different errors.
I found How can an operator be overloaded for different RHS types and return values? but the code in the answer doesn't work even if I put a use std::ops::Mul; at the top.
I am using rustc 1.0.0-nightly (ed530d7a3 2015-01-16 22:41:16 +0000)
I won't accept "you only have two fields, why use a reference" as an answer; what if I wanted a 100 element struct? I will accept an answer that demonstrates that even with a large struct I should be passing by value, if that is the case (I don't think it is, though.) I am interested in knowing a good rule of thumb for struct size and passing by value vs struct, but that is not the current question.
You need to implement Add on &Vector rather than on Vector.
impl<'a, 'b> Add<&'b Vector> for &'a Vector {
    type Output = Vector;

    fn add(self, other: &'b Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
In its definition, Add::add always takes self by value. But references are types like any other1, so they can implement traits too. When a trait is implemented on a reference type, the type of self is a reference; the reference is passed by value. Normally, passing by value in Rust implies transferring ownership, but when references are passed by value, they're simply copied (or reborrowed/moved if it's a mutable reference), and that doesn't transfer ownership of the referent (because a reference doesn't own its referent in the first place). Considering all this, it makes sense for Add::add (and many other operators) to take self by value: if you need to take ownership of the operands, you can implement Add on structs/enums directly, and if you don't, you can implement Add on references.
Here, self is of type &'a Vector, because that's the type we're implementing Add on.
Note that I also specified the RHS type parameter with a different lifetime to emphasize the fact that the lifetimes of the two input parameters are unrelated.
1 Actually, reference types are special in that you can implement traits for references to types defined in your crate (i.e. if you're allowed to implement a trait for T, then you're also allowed to implement it for &T). &mut T and Box<T> have the same behavior, but that's not true in general for U<T> where U is not defined in the same crate.
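Here is the same impl in a self-contained form, with a quick check that both operands are only borrowed (f64 fields are assumed here):

use std::ops::Add;

#[derive(Debug, PartialEq)]
struct Vector {
    x: f64,
    y: f64,
}

impl<'a, 'b> Add<&'b Vector> for &'a Vector {
    type Output = Vector;

    fn add(self, other: &'b Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

fn main() {
    let a = Vector { x: 1.0, y: 2.0 };
    let b = Vector { x: 3.0, y: 4.0 };
    let c = &a + &b;
    assert_eq!(c, Vector { x: 4.0, y: 6.0 });
    // a and b were only borrowed, so they are still usable here.
    assert_eq!(a.x + b.y, 5.0);
}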
If you want to support all scenarios, you must support all the combinations:
&T op U
T op &U
&T op &U
T op U
In Rust itself, this is done through an internal macro.
Luckily, there is a Rust crate, impl_ops, that offers a macro to write that boilerplate for us: its impl_op_ex! macro generates all the combinations.
Here is their sample (with a minimal DonkeyKong definition filled in so it compiles):
#[macro_use] extern crate impl_ops;
use std::ops;

// Minimal DonkeyKong definition (assumed from the usage below; the crate docs define an equivalent type).
struct DonkeyKong {
    bananas: i32,
}
impl DonkeyKong {
    pub fn new(bananas: i32) -> Self {
        DonkeyKong { bananas }
    }
}

impl_op_ex!(+ |a: &DonkeyKong, b: &DonkeyKong| -> i32 { a.bananas + b.bananas });

fn main() {
    let total_bananas = &DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = &DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
}
Even better, they have an impl_op_ex_commutative! macro that will also generate the operators with the parameters reversed if your operator happens to be commutative.
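For example (assuming the DonkeyKong type above and the same closure-style syntax as impl_op_ex!), a commutative mixed-type operator might be declared like this:

// Generates DonkeyKong + i32 as well as i32 + DonkeyKong (plus the borrowed variants).
impl_op_ex_commutative!(+ |a: &DonkeyKong, b: i32| -> i32 { a.bananas + b });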
