I'm struggling with Matrix4::from_axis_angle from the cgmath crate.
It seems that the angle parameter is required to implement Into<Rad<S>>, so how could I implement this for my f32?
I want to be able to call the function with a plain f32, like so:
pub fn rotate(&mut self, axis: cgmath::Vector3<f32>, angle: f32) {
    self.local_pos = self.local_pos * cgmath::Matrix4::from_axis_angle(axis.normalize(), angle);
}
I'm still new to Rust and don't quite understand why this is required.
You can simply pass cgmath::Rad(angle) as the angle argument.
The Into<Rad> requirement that seemed like a hurdle is actually intended to be helpful. The idea is for angle to accept an actual Rad (which trivially implements Into<Rad>), but also other types that are convertible to Rad<S>, such as Deg<S>.
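To see why the bound helps, here is a minimal sketch of the same pattern using stand-in Rad and Deg newtypes (these are simplified imitations, not the real cgmath types): an angle parameter bounded by Into<Rad> accepts either unit.

```rust
use std::f32::consts::PI;

// Stand-in newtypes mimicking cgmath's Rad and Deg (not the real crate types).
#[derive(Debug, Clone, Copy, PartialEq)]
struct Rad(f32);

#[derive(Debug, Clone, Copy)]
struct Deg(f32);

impl From<Deg> for Rad {
    fn from(d: Deg) -> Rad {
        Rad(d.0 * PI / 180.0)
    }
}

// An angle parameter bounded like from_axis_angle's: any Into<Rad> works.
fn accept_angle<A: Into<Rad>>(angle: A) -> Rad {
    angle.into()
}

fn main() {
    // A Rad passes through unchanged; a Deg is converted via the From impl.
    assert_eq!(accept_angle(Rad(1.0)), Rad(1.0));
    assert!((accept_angle(Deg(180.0)).0 - PI).abs() < 1e-6);
}
```

The caller picks whichever unit is convenient, and the function body only ever sees radians.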
1.) I seem to have tied a big knot in my code. I defined my own structs, e.g.

struct State {
    // some float values
}

and require them to be multiplied by f64 or Complex64 and added together member-wise. I have tried to abstract the f64 and Complex64 away into a trait called "Weight", and I have two structs that need to implement it:
struct WeightReal {
    strength: f64,
}

struct WeightComplex {
    strength: num_complex::Complex<f64>,
}
Now it's more complicated, as I need custom multiplication of these Weights with my struct State AND with f64 itself (because I do other things as well). So I need Weight x State and Weight x f64 for both possible weight types. Do I have to define all of these multiplications myself now? I used the derive_more crate in the past, but I think it's at its limits here. Or have I fundamentally misunderstood something?

Another question: do I need to define a struct here at all? I tried type aliases before, but I couldn't define custom multiplication on type aliases (at least it seemed so to me); it could have just been me doing it incorrectly.
The Rust way of defining multiplication / overloading the "*" operator somehow flies right over my head. With "cargo expand" I looked at a multiplication impl derived through the derive_more crate:
impl<__RhsT: ::core::marker::Copy> ::core::ops::Mul<__RhsT> for State
where
    f64: ::core::ops::Mul<__RhsT, Output = f64>,
{
    type Output = State;

    #[inline]
    fn mul(self, rhs: __RhsT) -> State {
        State {
            value1: <f64 as ::core::ops::Mul<__RhsT>>::mul(self.value1, rhs),
            value2: <f64 as ::core::ops::Mul<__RhsT>>::mul(self.value2, rhs),
        }
    }
}
If someone could explain a few of the parts here: what does the "<f64 as ::core::ops::Mul<__RhsT>>::mul" part mean?
I understand that "__RhsT" means "right-hand-side type", but I don't understand why it is still generic; in this example, shouldn't it specifically be f64? The third line (the where clause bound f64: ::core::ops::Mul<__RhsT, Output = f64>) is also puzzling me: why is it necessary?
I am really confused. The Rust docs regarding multiplication are also unclear to me, as the implementations seem to be abstracted away in some macro.
There is a lot of noise in the generated code, which is fairly typical of macro output. The fully qualified paths reduce the chance of naming conflicts and remove ambiguity when several traits with the same method name are in scope.
This is a bit more readable:
use std::ops::Mul;

impl<Rhs: Copy> Mul<Rhs> for State
where
    f64: Mul<Rhs, Output = f64>,
{
    type Output = State;

    fn mul(self, rhs: Rhs) -> State {
        State {
            value1: <f64 as Mul<Rhs>>::mul(self.value1, rhs),
            value2: <f64 as Mul<Rhs>>::mul(self.value2, rhs),
        }
    }
}
f64::mul(a, b) is another way to call a method, a.mul(b), while being precise about exactly which mul function you mean. That's needed because it's possible for there to be multiple possible methods with the same name. These could be inherent, from different traits, or from different parametrisations of the same trait.
Rhs is a generic parameter rather than just f64 because it's possible to implement Mul several times for the same type, using different type parameters. For example, it is reasonable to multiply an f64 by another f64, but it could also make sense to multiply by an f32, u8, i32, etc. Implementing Mul<u8> for f64 would allow you to write 1.0f64 * 1u8.
<f64 as Mul<Rhs>>::mul(a, b) is specifying to call the mul method of Mul where the left hand side is an f64, but where the right hand side, Rhs, can be any type.
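A small self-contained illustration of that fully qualified syntax (the Dog, Loud, and Quiet names are made up for this example): two traits define a method with the same name, and <Type as Trait>::method picks exactly one of them.

```rust
// Two unrelated traits (made-up names) that both define a `speak` method.
trait Loud {
    fn speak(&self) -> String;
}

trait Quiet {
    fn speak(&self) -> String;
}

struct Dog;

impl Loud for Dog {
    fn speak(&self) -> String {
        "WOOF".to_string()
    }
}

impl Quiet for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

fn main() {
    let d = Dog;
    // d.speak() would be ambiguous here; fully qualified syntax names
    // exactly which trait's method is meant, just like <f64 as Mul<Rhs>>::mul.
    assert_eq!(<Dog as Loud>::speak(&d), "WOOF");
    assert_eq!(<Dog as Quiet>::speak(&d), "woof");
}
```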
As for your first question, it's hard to understand what you are actually attempting, but the difficulty may hint that implementing Mul is not the right thing to do in the first place. If you have several different ways to multiply, then perhaps you should just have a different method for each one. It will probably end up being clearer and simpler. There isn't a big advantage in being able to use the * operator.
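As one possible shape for that suggestion, here is a sketch using the question's Weight and State names (the method names scale_state and scale_f64 are invented here): a plain trait with one named method per kind of multiplication, and no operator overloading at all.

```rust
// Sketch: named methods instead of overloading `*`.
// The names scale_state and scale_f64 are hypothetical, not from any crate.

#[derive(Debug, PartialEq)]
struct State {
    value1: f64,
    value2: f64,
}

trait Weight {
    fn scale_state(&self, s: &State) -> State;
    fn scale_f64(&self, x: f64) -> f64;
}

struct WeightReal {
    strength: f64,
}

impl Weight for WeightReal {
    fn scale_state(&self, s: &State) -> State {
        State {
            value1: self.strength * s.value1,
            value2: self.strength * s.value2,
        }
    }

    fn scale_f64(&self, x: f64) -> f64 {
        self.strength * x
    }
}

// A WeightComplex wrapping num_complex::Complex<f64> would implement the
// same trait analogously, with a complex-valued result type as appropriate.

fn main() {
    let w = WeightReal { strength: 2.0 };
    let s = State { value1: 1.0, value2: 3.0 };
    assert_eq!(w.scale_state(&s), State { value1: 2.0, value2: 6.0 });
    assert_eq!(w.scale_f64(5.0), 10.0);
}
```

Each operation gets an unambiguous name, and generic code can take any W: Weight without touching the Mul machinery.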
I want to pass Iterators to a function, which then computes some value from these iterators.
I am not sure what a robust signature for such a function would look like.
Let's say I want to iterate over f64 values.
You can find the code in the playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=c614429c541f337adb102c14518cf39e
My first attempt was
fn dot(a: impl std::iter::Iterator<Item = f64>, b: impl std::iter::Iterator<Item = f64>) -> f64 {
    a.zip(b).map(|(x, y)| x * y).sum()
}
This fails to compile if we try to iterate over slices.
So you can do
fn dot<'a>(a: impl std::iter::Iterator<Item = &'a f64>, b: impl std::iter::Iterator<Item = &'a f64>) -> f64 {
    a.zip(b).map(|(x, y)| x * y).sum()
}
This fails to compile if I try to iterate over mapped ranges.
(Why does the compiler require the lifetime parameters here?)
So I tried to accept references and not references generically:
pub fn dot<T: Borrow<f64>, U: Borrow<f64>>(a: impl std::iter::Iterator<Item = T>, b: impl std::iter::Iterator<Item = U>) -> f64 {
    a.zip(b).map(|(x, y)| x.borrow() * y.borrow()).sum()
}
This works with all combinations I tried, but it is quite verbose and I don't really understand every aspect of it.
Are there more cases?
What would be the best practice of solving this problem?
There is no right way to write a function that can accept Iterators, but there are some general principles that we can apply to make your function general and easy to use.
Write functions that accept impl IntoIterator<...>. Because all Iterators implement IntoIterator, this is strictly more general than a function that accepts only impl Iterator<...>.
Borrow<T> is the right way to abstract over T and &T.
When trait bounds get verbose, it's often easier to read if you write them in where clauses instead of in-line.
With those in mind, here's how I would probably write dot:
fn dot<I, J>(a: I, b: J) -> f64
where
    I: IntoIterator,
    J: IntoIterator,
    I::Item: Borrow<f64>,
    J::Item: Borrow<f64>,
{
    a.into_iter()
        .zip(b)
        .map(|(x, y)| x.borrow() * y.borrow())
        .sum()
}
However, I also agree with TobiP64's answer in that this level of generality may not be necessary in every case. This dot is nice because it can accept a wide range of arguments, so you can call dot(&some_vec, some_iterator) and it just works. It's optimized for readability at the call site. On the other hand, if you find the Borrow trait complicates the definition too much, there's nothing wrong with optimizing for readability at the definition, and forcing the caller to add a .iter().copied() sometimes. The only thing I would definitely change about the first dot function is to replace Iterator with IntoIterator.
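To illustrate the flexibility at the call site, here is a quick check (the dot definition from above is repeated so the snippet stands alone):

```rust
use std::borrow::Borrow;

// Same dot as above, repeated so this snippet compiles on its own.
fn dot<I, J>(a: I, b: J) -> f64
where
    I: IntoIterator,
    J: IntoIterator,
    I::Item: Borrow<f64>,
    J::Item: Borrow<f64>,
{
    a.into_iter()
        .zip(b)
        .map(|(x, y)| x.borrow() * y.borrow())
        .sum()
}

fn main() {
    let v = vec![1.0, 2.0, 3.0];
    // Owned values from a mapped range on one side, references to a Vec on the other.
    assert_eq!(dot((1..=3).map(|i| i as f64), &v), 14.0);
    // Two borrowing iterators also work, with no .copied() needed.
    assert_eq!(dot(v.iter(), v.iter()), 14.0);
}
```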
You can iterate over slices with the first dot implementation like this:
dot([0.0, 1.0, 2.0].iter().cloned(), [0.0, 1.0, 2.0].iter().cloned());
(https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.cloned)
or
dot([0.0, 1.0, 2.0].iter().copied(), [0.0, 1.0, 2.0].iter().copied());
(https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.copied)
Why does the compiler require the lifetime parameters here?
As far as I know, every reference in Rust has a lifetime, but the compiler can infer it in simple cases. In this case, however, the compiler is not yet smart enough, so you need to tell it how long the references yielded by the iterators live.
Are there more cases?
You can always use iterator adapters, like the solution above, to get an iterator over f64 values, so you don't have to deal with lifetimes or generics.
What would be the best practice of solving this problem?
I would recommend the first version (thus leaving it to the caller to transform the iterator into an Iterator<Item = f64>), simply because it's the most readable.
I've got a newtype:
struct NanoSecond(u64);
I want to implement addition for this. (I'm actually using derive_more, but here's an MCVE.)
impl Add for NanoSecond {
    type Output = Self;

    fn add(self, other: Self) -> Self {
        Self(self.0 + other.0)
    }
}
But should I implement AddAssign? Is it required for this to work?
let mut x = NanoSecond(0);
let y = NanoSecond(5);
x += y;
Will implementing it cause unexpected effects?
Implementing AddAssign is indeed required for the += operator to work.
The decision of whether to implement this trait depends greatly on the actual type and the semantics you are aiming for. This applies to any type of your own making, including newtypes. The most important principle is predictability: an implementation should behave as expected from the corresponding mathematical operation. In this case, since addition through Add is already well defined for the type, and nothing stops you from implementing the equivalent operation in place, adding an AddAssign impl like the one below is the most predictable thing to do.
impl AddAssign for NanoSecond {
    fn add_assign(&mut self, other: Self) {
        self.0 += other.0
    }
}
One may also choose to provide additional implementations for reference types as the second operand (e.g. Add<&'a Self> and AddAssign<&'a Self>).
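A sketch of what such a reference impl could look like for this newtype (the by-value impls are repeated so the snippet compiles on its own):

```rust
use std::ops::{Add, AddAssign};

#[derive(Debug, Clone, Copy, PartialEq)]
struct NanoSecond(u64);

impl Add for NanoSecond {
    type Output = Self;

    fn add(self, other: Self) -> Self {
        NanoSecond(self.0 + other.0)
    }
}

impl AddAssign for NanoSecond {
    fn add_assign(&mut self, other: Self) {
        self.0 += other.0;
    }
}

// Reference as the second operand, so the rhs does not have to be moved.
impl AddAssign<&NanoSecond> for NanoSecond {
    fn add_assign(&mut self, other: &NanoSecond) {
        self.0 += other.0;
    }
}

fn main() {
    let mut x = NanoSecond(0);
    let y = NanoSecond(5);
    x += y;  // by value
    x += &y; // by reference
    assert_eq!(x, NanoSecond(10));
    assert_eq!(x + y, NanoSecond(15));
}
```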
Note that Clippy has lints which check whether the implementation of the arithmetic operation is sound (suspicious_arithmetic_impl and suspicious_op_assign_impl). As part of being predictable, the trait should behave pretty much like the respective mathematical operation, regardless of whether + or += was used. To the best of my knowledge though, there is currently no lint or API guideline suggesting to implement -Assign traits alongside the respective operation.
I have written a problem solver in Rust which as a subroutine needs to make calls to a function which is given as a black box (essentially I would like to give an argument of type Fn(f64) -> f64).
Essentially I have a function defined as fn solve<F>(f: F) where F : Fn(f64) -> f64 { ... } which means that I can call solve like this:
solve(|x| x);
What I would like to do is to pass a more complex function to the solver, i.e. a function which depends on multiple parameters etc.
I would like to be able to pass a struct with a suitable trait implementation to the solver. I tried the following:
struct Test;
impl Fn<(f64,)> for Test {}
This yields the following error:
error: the precise format of `Fn`-family traits' type parameters is subject to change. Use parenthetical notation (Fn(Foo, Bar) -> Baz) instead (see issue #29625)
I would also like to add a trait which includes the Fn trait (which I don't know how to define, unfortunately). Is that possible as well?
Edit:
Just to clarify: I have been developing in C++ for quite a while; there, the solution would be to overload operator()(args), so I could use a struct or class like a function. I would like to be able to:
Pass both functions and structs to the solver as arguments.
Have an easy way to call the functions. Calling obj.method(args) is more complicated than obj(args) (as in C++). But it seems that this behavior is not currently achievable.
The direct answer is to do exactly as the error message says:
Use parenthetical notation instead
That is, instead of Fn<(A, B)>, use Fn(A, B)
The real problem is that you are not allowed to implement the Fn* family of traits yourself in stable Rust.
The real question you are asking is harder to be sure of because you haven't provided a MCVE, so we are reduced to guessing. I'd say you should flip it around the other way; create a new trait, implement it for closures and your type:
trait Solve {
    type Output;
    fn solve(&mut self) -> Self::Output;
}

impl<F, T> Solve for F
where
    F: FnMut() -> T,
{
    type Output = T;

    fn solve(&mut self) -> Self::Output {
        (self)()
    }
}

struct Test;

impl Solve for Test {
    // interesting things
}

fn main() {}
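Putting it together, here is a usage sketch (the definitions above are repeated so the snippet stands alone, and Test's implementation is filled in with a placeholder value):

```rust
// Repeats the Solve trait from above so this snippet stands alone.
trait Solve {
    type Output;
    fn solve(&mut self) -> Self::Output;
}

impl<F, T> Solve for F
where
    F: FnMut() -> T,
{
    type Output = T;

    fn solve(&mut self) -> Self::Output {
        (self)()
    }
}

struct Test;

impl Solve for Test {
    type Output = f64;

    fn solve(&mut self) -> f64 {
        42.0 // placeholder for the interesting things
    }
}

// The solver only asks for the Solve bound, so closures and structs both fit.
fn run(mut s: impl Solve<Output = f64>) -> f64 {
    s.solve()
}

fn main() {
    assert_eq!(run(|| 1.5), 1.5);
    assert_eq!(run(Test), 42.0);
}
```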
I am implementing a quick geometry crate for practice, and I want to implement two structs, Vector and Normal (this is because standard vectors and normal vectors map through certain transformations differently). I've implemented the following trait:
trait Components {
    fn new(x: f32, y: f32, z: f32) -> Self;
    fn x(&self) -> f32;
    fn y(&self) -> f32;
    fn z(&self) -> f32;
}
I'd also like to be able to add two vectors together, as well as two normals, so I have blocks that look like this:
impl Add<Vector> for Vector {
    type Output = Vector;

    fn add(self, rhs: Vector) -> Vector {
        Vector {
            vals: [
                self.x() + rhs.x(),
                self.y() + rhs.y(),
                self.z() + rhs.z(),
            ],
        }
    }
}
And almost the exact same impl for Normals. What I really want is to provide a default Add impl for every struct that implements Components, since typically, they all will add the same way (e.g. a third struct called Point will do the same thing). Is there a way of doing this besides writing out three identical implementations for Point, Vector, and Normal? Something that might look like this:
impl Add<Components> for Components {
    type Output = Components;

    fn add(self, rhs: Components) -> Components {
        Components::new(
            self.x() + rhs.x(),
            self.y() + rhs.y(),
            self.z() + rhs.z(),
        )
    }
}
Where "Components" would automatically get replaced by the appropriate type. I suppose I could do it in a macro, but that seems a little hacky to me.
In Rust, it is possible to define generic impls, but there are some important restrictions that result from the coherence rules. You'd like an impl that goes like this:
impl<T: Components> Add<T> for T {
    type Output = T;

    fn add(self, rhs: T) -> T {
        T::new(
            self.x() + rhs.x(),
            self.y() + rhs.y(),
            self.z() + rhs.z(),
        )
    }
}
Unfortunately, this does not compile:
error: type parameter T must be used as the type parameter for some local type (e.g. MyStruct<T>); only traits defined in the current crate can be implemented for a type parameter [E0210]
Why? Suppose your Components trait were public. Now, a type in another crate could implement the Components trait. That type might also try to implement the Add trait. Whose implementation of Add should win, your crate's or that other crate's? By Rust's current coherence rules, the other crate gets this privilege.
For now, the only option, besides repeating the impls, is to use a macro. Rust's standard library uses macros in many places to avoid repeating impls (especially for the primitive types), so you don't have to feel dirty! :P
At present, macros are the only way to do this. Coherence rules prevent multiple implementations that could overlap, so you can’t use a generic solution.
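For completeness, here is one way such a macro could look, using the question's type names (the macro name component_add is made up for this sketch):

```rust
use std::ops::Add;

trait Components {
    fn new(x: f32, y: f32, z: f32) -> Self;
    fn x(&self) -> f32;
    fn y(&self) -> f32;
    fn z(&self) -> f32;
}

// Generates the identical Add impl for each listed type.
macro_rules! component_add {
    ($($t:ty),*) => {$(
        impl Add for $t {
            type Output = $t;

            fn add(self, rhs: $t) -> $t {
                <$t>::new(
                    self.x() + rhs.x(),
                    self.y() + rhs.y(),
                    self.z() + rhs.z(),
                )
            }
        }
    )*}
}

#[derive(Debug, PartialEq)]
struct Vector { vals: [f32; 3] }

#[derive(Debug, PartialEq)]
struct Normal { vals: [f32; 3] }

impl Components for Vector {
    fn new(x: f32, y: f32, z: f32) -> Self { Vector { vals: [x, y, z] } }
    fn x(&self) -> f32 { self.vals[0] }
    fn y(&self) -> f32 { self.vals[1] }
    fn z(&self) -> f32 { self.vals[2] }
}

impl Components for Normal {
    fn new(x: f32, y: f32, z: f32) -> Self { Normal { vals: [x, y, z] } }
    fn x(&self) -> f32 { self.vals[0] }
    fn y(&self) -> f32 { self.vals[1] }
    fn z(&self) -> f32 { self.vals[2] }
}

component_add!(Vector, Normal);

fn main() {
    let v = Vector::new(1.0, 2.0, 3.0) + Vector::new(4.0, 5.0, 6.0);
    assert_eq!(v, Vector::new(5.0, 7.0, 9.0));
}
```

Adding a Point later is then a one-word change to the component_add! invocation.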