I have a struct with a couple of operators implemented for it:
use std::ops;

/// Vector of 3 floats
#[derive(Debug, Copy, Clone)]
pub struct Vec3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

/// Add operator
impl ops::Add<&Vec3> for &Vec3 {
    type Output = Vec3;

    #[inline(always)]
    fn add(self, rhs: &Vec3) -> Self::Output {
        Vec3 {
            x: self.x + rhs.x,
            y: self.y + rhs.y,
            z: self.z + rhs.z,
        }
    }
}

/// Subtract operator
impl ops::Sub<&Vec3> for &Vec3 {
    type Output = Vec3;

    #[inline(always)]
    fn sub(self, rhs: &Vec3) -> Self::Output {
        Vec3 {
            x: self.x - rhs.x,
            y: self.y - rhs.y,
            z: self.z - rhs.z,
        }
    }
}

/// Scalar multiplication operator
impl ops::Mul<&Vec3> for f32 {
    type Output = Vec3;

    #[inline(always)]
    fn mul(self, rhs: &Vec3) -> Self::Output {
        Vec3 {
            x: self * rhs.x,
            y: self * rhs.y,
            z: self * rhs.z,
        }
    }
}
I want to use the operators:
let a = Vec3 { x: 0.0, y: 0.5, z: 1.0 };
let b = Vec3 { x: 1.0, y: 0.5, z: 0.0 };
let c = Vec3 { x: 1.0, y: 1.0, z: 0.0 };
let d = Vec3 { x: 0.0, y: 1.0, z: 1.0 };
let result = 2.0 * (a + b) - 3.0 * (c - d);
This code will not compile because the operators are implemented for &Vec3, not for Vec3. To fix the issue, the last line would have to look like this:
let result = &(2.0 * &(&a + &b)) - &(3.0 * &(&c - &d));
Which doesn't look that nice anymore.
I understand that I could implement the operators for Vec3 to avoid that problem, but what if I still want to use immutable references to these vectors on the stack? Is there perhaps a way to give Rust a hint that, if I write a + b and there is no operator for Vec3 + Vec3, it could look for a &Vec3 + &Vec3 operator instead and, if found, take immutable references to both arguments automatically?
No, there is no way of automatically taking a reference when adding two values.
You could write your own macro that does this, I suppose. In usage, it would look like:
thing!{ a + b }
// expands to
(&a + &b)
I'd expect that this macro would quickly become tiresome to write.
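As a sketch of what that macro could look like (the PartialEq derive is mine, added only so the example can check its result; the macro handles just a single `+` between two identifiers, not nested expressions like 2.0 * (a + b)):

```rust
use std::ops;

#[derive(Debug, Copy, Clone, PartialEq)]
pub struct Vec3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

impl ops::Add<&Vec3> for &Vec3 {
    type Output = Vec3;
    fn add(self, rhs: &Vec3) -> Vec3 {
        Vec3 { x: self.x + rhs.x, y: self.y + rhs.y, z: self.z + rhs.z }
    }
}

// Hypothetical `thing!` macro: rewrites `a + b` into `&a + &b`.
macro_rules! thing {
    ($a:ident + $b:ident) => {
        &$a + &$b
    };
}

fn main() {
    let a = Vec3 { x: 0.0, y: 0.5, z: 1.0 };
    let b = Vec3 { x: 1.0, y: 0.5, z: 0.0 };
    let sum = thing!(a + b); // expands to &a + &b
    assert_eq!(sum, Vec3 { x: 1.0, y: 1.0, z: 1.0 });
    println!("{:?}", sum);
}
```

Supporting every operator and arbitrary nesting would require a much more involved (token-munching) macro, which is why this approach gets tiresome quickly.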
See also:
Allow autoderef and autoref in operators — RFC #2147
Tracking issue: Allow autoderef and autoref in operators (experiment) #44762
Does println! borrow or own the variable?
How to implement idiomatic operator overloading for values and references in Rust?
Operator overloading by value results in use of moved value
How can I implement an operator like Add for a reference type so that I can add more than two values at once?
I'm a beginner in Rust. I created a trait named Floating; f32 and f64 implement it. A generic struct Vec requires that T implement the Floating trait. I would like to compute the norm of the vector, but this won't compile: the error message says no method named sqrt found for type parameter T in the current scope. Why is this happening, and how can I make it work?
use std::ops::{Mul, Add};

trait Floating: Sized + Copy + Clone + Mul<Output = Self> + Add<Output = Self> {}

impl Floating for f32 {}
impl Floating for f64 {}

struct Vec<T: Floating> {
    x: T,
    y: T,
    z: T,
}

impl<T: Floating> Vec<T> {
    fn norm(&self) -> T {
        (self.x * self.x + self.y * self.y + self.z * self.z).sqrt()
    }
}

fn main() {
    let v: Vec<f32> = Vec { x: 1.0, y: 1.0, z: 1.0 };
    println!("norm is {:?}", v.norm());
}
First of all, you probably shouldn't name your type Vec, to avoid conflicts and confusion with the std::vec::Vec container.
The issue is that (your) Vec only knows that T implements Floating, and Floating doesn't declare a sqrt method, so you need to define it yourself:
trait Floating: Sized + Copy + Clone + Mul<Output = Self> + Add<Output = Self> {
    fn sqrt(self) -> Self;
}

impl Floating for f32 {
    fn sqrt(self) -> Self {
        f32::sqrt(self)
    }
}

impl Floating for f64 {
    fn sqrt(self) -> Self {
        f64::sqrt(self)
    }
}
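With those definitions in place, norm compiles; a minimal self-contained check (I renamed the struct to Vec3, following the advice above, and the test values are my own):

```rust
use std::ops::{Add, Mul};

trait Floating: Sized + Copy + Clone + Mul<Output = Self> + Add<Output = Self> {
    // sqrt is declared on the trait so that norm() can call it on a generic T.
    fn sqrt(self) -> Self;
}

impl Floating for f32 {
    fn sqrt(self) -> Self {
        f32::sqrt(self)
    }
}

impl Floating for f64 {
    fn sqrt(self) -> Self {
        f64::sqrt(self)
    }
}

// Renamed from `Vec` to avoid clashing with std::vec::Vec.
struct Vec3<T: Floating> {
    x: T,
    y: T,
    z: T,
}

impl<T: Floating> Vec3<T> {
    fn norm(&self) -> T {
        (self.x * self.x + self.y * self.y + self.z * self.z).sqrt()
    }
}

fn main() {
    // A 3-4-0 vector has norm exactly 5.
    let v: Vec3<f64> = Vec3 { x: 3.0, y: 4.0, z: 0.0 };
    assert!((v.norm() - 5.0).abs() < 1e-9);
    println!("norm is {:?}", v.norm());
}
```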
This is where the num crate comes in handy, as it defines a Float trait, which includes among others a sqrt() method. Using num you can simplify your example to:
// num = "0.3"
use num::Float;

struct Vec<T: Float> {
    x: T,
    y: T,
    z: T,
}

impl<T: Float> Vec<T> {
    fn norm(&self) -> T {
        (self.x * self.x + self.y * self.y + self.z * self.z).sqrt()
    }
}

fn main() {
    let v: Vec<f32> = Vec { x: 1.0, y: 1.0, z: 1.0 };
    println!("norm is {:?}", v.norm());
}
I'm learning Rust, so this might be a duplicate; I'm still not sure how to search for it. I tried to make an enum that contains different structs. Since those structs have the same methods but different implementations, I can't figure out how to properly write the types for the trait and the implementation. This is what I have so far:
struct Vector2 {
    x: f32,
    y: f32,
}

struct Vector3 {
    x: f32,
    y: f32,
    z: f32,
}

enum Vector {
    Vector2(Vector2),
    Vector3(Vector3),
}

trait VectorAdd {
    fn add(&self, other: &Vector) -> Vector;
}

impl VectorAdd for Vector2 {
    fn add(&self, other: &Vector2) -> Vector2 {
        Vector2 {
            x: self.x + other.x,
            y: self.y + other.y
        }
    }
}
This code does not compile, and the error messages don't make it any clearer for me. Can anyone guide me on how to write this properly, or tell me whether it's even possible?
You don't need the enum to write the trait; use Self for the operand and return types:
struct Vector2 {
    x: f32,
    y: f32,
}

struct Vector3 {
    x: f32,
    y: f32,
    z: f32,
}

trait VectorAdd {
    fn add(&self, other: &Self) -> Self;
}

impl VectorAdd for Vector2 {
    fn add(&self, other: &Vector2) -> Vector2 {
        Vector2 {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

impl VectorAdd for Vector3 {
    fn add(&self, other: &Vector3) -> Vector3 {
        Vector3 {
            x: self.x + other.x,
            y: self.y + other.y,
            z: self.z + other.z,
        }
    }
}
(playground)
You can implement the enum based on this definition:
enum Vector {
    Vector2(Vector2),
    Vector3(Vector3),
}

impl VectorAdd for Vector {
    fn add(&self, other: &Vector) -> Vector {
        match (self, other) {
            (Self::Vector2(a), Self::Vector2(b)) => Self::Vector2(a.add(b)),
            (Self::Vector3(a), Self::Vector3(b)) => Self::Vector3(a.add(b)),
            _ => panic!("invalid operands to Vector::add"),
        }
    }
}
Macros may help you if the number of variants gets large.
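A sketch of that macro idea (the macro name impl_vector_add! is my own invention; the derives are added so the example can assert its result):

```rust
#[derive(Debug, PartialEq)]
struct Vector2 { x: f32, y: f32 }

#[derive(Debug, PartialEq)]
struct Vector3 { x: f32, y: f32, z: f32 }

#[derive(Debug, PartialEq)]
enum Vector {
    Vector2(Vector2),
    Vector3(Vector3),
}

trait VectorAdd {
    fn add(&self, other: &Self) -> Self;
}

impl VectorAdd for Vector2 {
    fn add(&self, other: &Vector2) -> Vector2 {
        Vector2 { x: self.x + other.x, y: self.y + other.y }
    }
}

impl VectorAdd for Vector3 {
    fn add(&self, other: &Vector3) -> Vector3 {
        Vector3 { x: self.x + other.x, y: self.y + other.y, z: self.z + other.z }
    }
}

// Hypothetical macro: generates the VectorAdd impl for the enum,
// stamping out one match arm per listed variant.
macro_rules! impl_vector_add {
    ($($variant:ident),*) => {
        impl VectorAdd for Vector {
            fn add(&self, other: &Vector) -> Vector {
                match (self, other) {
                    $((Vector::$variant(a), Vector::$variant(b)) =>
                        Vector::$variant(a.add(b)),)*
                    _ => panic!("invalid operands to Vector::add"),
                }
            }
        }
    };
}

impl_vector_add!(Vector2, Vector3);

fn main() {
    let a = Vector::Vector2(Vector2 { x: 1.0, y: 2.0 });
    let b = Vector::Vector2(Vector2 { x: 3.0, y: 4.0 });
    assert_eq!(a.add(&b), Vector::Vector2(Vector2 { x: 4.0, y: 6.0 }));
}
```

Adding a new variant then only requires appending its name to the impl_vector_add! invocation.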
I have implemented a Point3D struct:
use std::ops;

#[derive(Debug, PartialEq)]
pub struct Point3D {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

impl ops::Add<&Point3D> for &Point3D {
    type Output = Point3D;

    fn add(self, rhs: &Point3D) -> Point3D {
        Point3D {
            x: self.x + rhs.x,
            y: self.y + rhs.y,
            z: self.z + rhs.z,
        }
    }
}

impl ops::Sub<&Point3D> for &Point3D {
    type Output = Point3D;

    fn sub(self, rhs: &Point3D) -> Point3D {
        Point3D {
            x: self.x - rhs.x,
            y: self.y - rhs.y,
            z: self.z - rhs.z,
        }
    }
}

impl ops::Mul<&Point3D> for &Point3D {
    type Output = f32;

    fn mul(self, rhs: &Point3D) -> f32 {
        self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
    }
}

// Scalar impl of ops::Mul here
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn addition_point_3D() {
        let point1 = Point3D { x: 1.0, y: 2.0, z: 3.0 };
        let point2 = Point3D { x: 4.0, y: 5.0, z: 6.0 };
        let result = &point1 + &point2;
        assert_eq!(
            result,
            Point3D { x: 5.0, y: 7.0, z: 9.0 },
            "Testing Addition with {:?} and {:?}",
            point1,
            point2
        );
    }

    #[test]
    fn subtraction_point_3D() {
        let point1 = Point3D { x: 1.0, y: 2.0, z: 3.0 };
        let point2 = Point3D { x: 4.0, y: 5.0, z: 6.0 };
        let result = &point1 - &point2;
        assert_eq!(
            result,
            Point3D { x: -3.0, y: -3.0, z: -3.0 },
            "Testing Subtraction with {:?} and {:?}",
            point1,
            point2
        );
    }

    #[test]
    fn point3D_point3D_multiplication() {
        let point1 = Point3D { x: 1.0, y: 2.0, z: 3.0 };
        let point2 = Point3D { x: 4.0, y: 5.0, z: 6.0 };
        let result = &point1 * &point2;
        assert_eq!(
            result, 32.0,
            "Testing Multiplication with {:?} and {:?}",
            point1, point2
        );
    }

    /*
    #[test]
    fn point3D_scalar_multiplication() {
        let point1 = Point3D { x: 1.0, y: 2.0, z: 3.5 };
        let scalar = 3.5;
        let result = &point1 * &scalar;
        assert_eq!(
            result,
            Point3D { x: 3.5, y: 7.0, z: 10.5 },
            "Testing Multiplication with {:?} and {:?}",
            point1,
            scalar
        );
    }
    */
}
I would like to use generics in my multiplication trait so that if I pass it another Point3D it will compute the dot product, but if I pass it a basic numeric type (integer, f32, unsigned integer, f64) it will multiply x, y, and z by the scalar value. How would I do this?
Do you mean something like this?
impl ops::Mul<f32> for &Point3D {
    type Output = Point3D;

    fn mul(self, rhs: f32) -> Point3D {
        Point3D {
            x: self.x * rhs,
            y: self.y * rhs,
            z: self.z * rhs
        }
    }
}
This would allow you to do the following:
let point = Point3D { x: 1.0, y: 2.0, z: 3.0};
let result = &point * 4.0;
To do this with generics you first need to make your Point3D struct accept generics, like
use std::ops::{Mul, Add};

#[derive(Debug, PartialEq)]
pub struct Point3D<T> {
    pub x: T,
    pub y: T,
    pub z: T,
}
And your implementation of multiplication of Point3D with a numeric type would be
impl<T> Mul<T> for &Point3D<T>
where
    T: Mul<Output = T> + Copy,
{
    type Output = Point3D<T>;

    fn mul(self, rhs: T) -> Self::Output {
        Point3D {
            x: self.x * rhs,
            y: self.y * rhs,
            z: self.z * rhs,
        }
    }
}
The where clause is needed because the generic T must implement Mul and Copy: Copy because we need to copy rhs to use it in all three multiplications.
Your dot product implementation would also need to change according to
impl<T> Mul<&Point3D<T>> for &Point3D<T>
where
    T: Mul<Output = T> + Add<Output = T> + Copy,
{
    type Output = T;

    fn mul(self, rhs: &Point3D<T>) -> Self::Output {
        self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
    }
}
with Add added to the bounds because the dot product of course needs to sum the products of type T.
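Putting the two generic impls together, a self-contained check (the concrete test values are my own):

```rust
use std::ops::{Add, Mul};

#[derive(Debug, PartialEq)]
pub struct Point3D<T> {
    pub x: T,
    pub y: T,
    pub z: T,
}

// Scalar multiplication: Point3D<T> * T -> Point3D<T>.
impl<T> Mul<T> for &Point3D<T>
where
    T: Mul<Output = T> + Copy,
{
    type Output = Point3D<T>;

    fn mul(self, rhs: T) -> Point3D<T> {
        Point3D { x: self.x * rhs, y: self.y * rhs, z: self.z * rhs }
    }
}

// Dot product: Point3D<T> * Point3D<T> -> T.
impl<T> Mul<&Point3D<T>> for &Point3D<T>
where
    T: Mul<Output = T> + Add<Output = T> + Copy,
{
    type Output = T;

    fn mul(self, rhs: &Point3D<T>) -> T {
        self.x * rhs.x + self.y * rhs.y + self.z * rhs.z
    }
}

fn main() {
    let p = Point3D { x: 1.0_f32, y: 2.0, z: 3.0 };
    let q = Point3D { x: 4.0_f32, y: 5.0, z: 6.0 };
    let scaled = &p * 2.0; // selects the Mul<T> impl
    let dot = &p * &q;     // selects the Mul<&Point3D<T>> impl
    assert_eq!(scaled, Point3D { x: 2.0, y: 4.0, z: 6.0 });
    assert_eq!(dot, 32.0);
}
```

Note that both impls coexist without ambiguity: the compiler picks one or the other based on the type of the right-hand operand.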
I want to convert a value from {integer} to f32:
struct Vector3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

for x in -5..5 {
    for y in -5..5 {
        for z in -5..5 {
            let foo: Vector3 = Vector3 { x: x, y: y, z: z };
            // do stuff with foo
        }
    }
}
The compiler chokes on this with a type mismatch error (expecting f32 but getting {integer}). Unfortunately, I cannot simply change Vector3; I'm feeding a C API with this.
Is there any easy and concise way I can convert x, y and z from {integer} to f32?
I guess there is no built-in conversion from i32 or {integer} to f32 because it could be lossy in certain situations. However, in my case the range I'm using is so small that this wouldn't be an issue, so I would like to tell the compiler to convert the value anyway.
Interestingly, the following works:
for x in -5..5 {
    let tmp: i32 = x;
    let foo: f32 = tmp as f32;
}
I'm using a lot more than just one foo and one x, so this turns hideous really fast.
Also, this works:
for x in -5i32..5i32 {
    let foo: f32 = x as f32;
    // do stuff with foo here
}
But with my usecase this turns into:
for x in -5i32..5i32 {
    for y in -5i32..5i32 {
        for z in -5i32..5i32 {
            let foo: Vector3 = Vector3 {
                x: x as f32,
                y: y as f32,
                z: z as f32,
            };
            // do stuff with foo
        }
    }
}
Which I think is pretty unreadable and an unreasonable amount of cruft for a simple conversion.
What am I missing here?
It is not necessary to specify the i32s when using as, since this works fine (playground):
for x in -5..5 {
    for y in -5..5 {
        for z in -5..5 {
            let foo = Vector3 { // no need to specify the type of foo
                x: x as f32,
                y: y as f32,
                z: z as f32,
            };
            // etc.
        }
    }
}
As Klitos Kyriacou's answer observes, there is no such type as {integer}; the compiler gives that error message because it couldn't infer a concrete type for x. It doesn't actually matter, because there are no implicit conversions from integer types to floating-point types in Rust, or from integer types to other integer types, for that matter. In fact, Rust is quite short on implicit conversions of any sort (the most notable exception being Deref coercions).
Casting the type with as permits the compiler to reconcile the type mismatch, and it will eventually fill in {integer} with i32 (unconstrained integer literals always default to i32, not that the concrete type matters in this case).
Another option you may prefer, especially if you use x, y and z for other purposes in the loop, is to shadow them with f32 versions instead of creating new names:
for x in -5..5 {
    let x = x as f32;
    for y in -5..5 {
        let y = y as f32;
        for z in -5..5 {
            let z = z as f32;
            let foo = Vector3 { x, y, z };
            // etc.
        }
    }
}
(You don't have to write x: x, y: y, z: z -- Rust does the right thing when the variable name is the same as the struct member name.)
Another option (last one, I promise) is to convert the iterators instead using map:
for x in (-5..5).map(|x| x as f32) {
    for y in (-5..5).map(|y| y as f32) {
        for z in (-5..5).map(|z| z as f32) {
            let foo = Vector3 { x, y, z };
            // etc.
        }
    }
}
However, it is a little more dense and may be harder to read than the previous version.
The only integer types available are i8, i16, i32, etc. and their unsigned equivalents. There is no such type as {integer}. This is just a placeholder emitted by the compiler before it has determined the actual type by inference from the whole-method context.
The problem is that, at the point where you call Vector3 {x: x as f32, y: y as f32, z: z as f32}, it doesn't yet know exactly what x, y and z are, and therefore doesn't know what operations are available. It could use the operations given to determine the type, if it was a bit more intelligent; see bug report for details.
There is a conversion from i32 to f32, so you should be able to do this:
let foo = Vector3 {x: (x as i32) as f32, y: (y as i32) as f32, z: (z as i32) as f32};
Since everyone else is answering, I'll chime in with an iterator-flavored solution. This uses Itertools::cartesian_product instead of the for loops:
extern crate itertools;
use itertools::Itertools;

fn main() {
    fn conv(x: i32) -> f32 { x as f32 }
    let xx = (-5..5).map(conv);
    let yy = xx.clone();
    let zz = xx.clone();

    let coords = xx.cartesian_product(yy.clone().cartesian_product(zz));
    let vectors = coords.map(|(x, (y, z))| Vector3 { x, y, z });
}
Unfortunately, closures don't yet implement Clone, so I used a small function to perform the mapping. These do implement Clone.
If you wanted a helper method:
extern crate itertools;
use itertools::Itertools;
use std::ops::Range;

fn f32_range(r: Range<i32>) -> std::iter::Map<Range<i32>, fn(i32) -> f32> {
    fn c(x: i32) -> f32 { x as _ }
    r.map(c)
}

fn main() {
    let xx = f32_range(-5..5);
    let yy = f32_range(-5..5);
    let zz = f32_range(-5..5);

    let coords = xx.cartesian_product(yy.cartesian_product(zz));
    let vectors = coords.map(|(x, (y, z))| Vector3 { x, y, z });
}
From<i16> is implemented for f32.
So it should be possible to
for x in -5..5 {
    for y in -5..5 {
        for z in -5..5 {
            let foo: Vector3 = Vector3 {
                x: f32::from(x),
                y: f32::from(y),
                z: f32::from(z),
            };
            // do stuff with foo
        }
    }
}
Of course, if your iteration uses values bigger than i16 (i32 or i64), this is no longer possible in a lossless way and you have to try another approach.
As with many problems in computer science, it can be solved by applying another layer of indirection.
For example, defining a constructor for Vector3:
impl Vector3 {
    fn new(x: i16, y: i16, z: i16) -> Vector3 {
        Vector3 { x: x as f32, y: y as f32, z: z as f32 }
    }
}

fn main() {
    for x in -5..5 {
        for y in -5..5 {
            for z in -5..5 {
                let foo = Vector3::new(x, y, z);
                println!("{:?}", foo); // requires #[derive(Debug)] on Vector3
            }
        }
    }
}
You can use a plethora of other methods (generics, builders, etc.), but a good old constructor is the simplest.
Another solution, this time using a function and traits (playground):
struct Vector3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

impl Vector3 {
    pub fn new<T: Into<f32>>(a: T, b: T, c: T) -> Vector3 {
        Vector3 {
            x: a.into(),
            y: b.into(),
            z: c.into(),
        }
    }
}

fn main() {
    for x in -5..5i8 {
        for y in -5..5i8 {
            for z in -5..5i8 {
                let foo: Vector3 = Vector3::new(x, y, z);
                // do stuff with foo
            }
        }
    }
}
Given this code (also here):
struct Vector2 {
    x: int,
    y: int
}

impl Vector2 {
    pub fn new(xs: int, ys: int) -> Vector2 {
        Vector2 {
            x: xs,
            y: ys
        }
    }

    fn add(&self, otr: Vector2) -> &Vector2 {
        self.x += otr.x; // cannot assign to immutable field (line 15)
        self.y += otr.y; // cannot assign to immutable field (line 16)
        return self;     // lifetime mismatch (line 17)
    }
}

fn main() {
    let mut vec1 = Vector2::new(42, 12);
    println(fmt!("vec1 = [x: %?, y: %?]", vec1.x, vec1.y));
    let vec2 = Vector2::new(13, 34);
    println(fmt!("vec2 = [x: %?, y: %?]", vec2.x, vec2.y));
    let vec3 = vec1.add(vec2);
    println(fmt!("vec1 + vec2 = [x: %?, y: %?]", vec3.x, vec3.y))
}
I'm having issues with lines 15-17.
For lines 15 and 16, can someone explain what the best way to change those two variables would be? It seems I'm either not using self right or I'm missing a mut somewhere.
For line 17, it's giving me a lifetime mismatch, also saying:
mismatched types: expected '&Vector2' but found '&Vector2'...the anonymous lifetime #1 defined on the block at 14:41 does not necessarily outlive the anonymous lifetime #2 defined on the block at 14:41.
Does anyone know of any way to fix these two issues?
If you wish to have add being a mutating operation, it should take &mut self rather than &self.
If you wish to have add create a new Vector2, then don't try mutating self: clone it (assuming 0.8-pre; on 0.7, you'd copy it instead with copy self) and modify the clone, or create a new instance with the same type. (This will be just as fast in a case like add.)
While you're at it, don't just have a method called add: implement std::ops::Add, and + will work! (There is no += yet—see https://github.com/mozilla/rust/issues/5992.)
The final code:
struct Vector2 {
    x: int,
    y: int,
}

impl Vector2 {
    pub fn new(x: int, y: int) -> Vector2 {
        Vector2 {
            x: x,
            y: y,
        }
    }
}

impl Add<Vector2, Vector2> for Vector2 {
    fn add(&self, rhs: &Vector2) -> Vector2 {
        Vector2 {
            x: self.x + rhs.x,
            y: self.y + rhs.y,
        }
    }
}

fn main() {
    let vec1 = Vector2::new(42, 12);
    println(fmt!("vec1 = %?", vec1));
    // 0.8-pre hint: use printfln!(...) instead of println(fmt!(...))
    let vec2 = Vector2::new(13, 34);
    println(fmt!("vec2 = %?", vec2));
    let vec3 = vec1 + vec2;
    println(fmt!("vec1 + vec2 = %?", vec3));
}
And its output:
vec1 = {x: 42, y: 12}
vec2 = {x: 13, y: 34}
vec1 + vec2 = {x: 55, y: 46}
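(The answer above targets pre-1.0 Rust. For comparison, a sketch of the same program in current Rust; note that AddAssign has since been stabilized, so += is available now as well:)

```rust
use std::ops::Add;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Vector2 {
    x: i32,
    y: i32,
}

impl Vector2 {
    fn new(x: i32, y: i32) -> Vector2 {
        Vector2 { x, y }
    }
}

// Modern Add: one generic parameter for the right-hand side,
// an associated type for the output, and self taken by value.
impl Add for Vector2 {
    type Output = Vector2;

    fn add(self, rhs: Vector2) -> Vector2 {
        Vector2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

fn main() {
    let vec1 = Vector2::new(42, 12);
    let vec2 = Vector2::new(13, 34);
    let vec3 = vec1 + vec2;
    assert_eq!(vec3, Vector2::new(55, 46));
    println!("vec1 + vec2 = {:?}", vec3);
}
```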