Method `mul` has an incompatible type for trait

I'm creating a simple matrix struct in Rust and I'm trying to implement some basic operator methods:
use std::ops::Mul;

struct Matrix {
    cols: i32,
    rows: i32,
    data: Vec<f32>,
}

impl Matrix {
    fn new(cols: i32, rows: i32, data: Vec<f32>) -> Matrix {
        Matrix {
            cols: cols,
            rows: rows,
            data: data,
        }
    }
}

impl Mul<f32> for Matrix {
    type Output = Matrix;

    fn mul(&self, m: f32) -> Matrix {
        let mut new_data = Vec::with_capacity(self.cols * self.rows);
        for i in 0..self.cols * self.rows {
            new_data[i] = self.data[i] * m;
        }
        return Matrix {
            cols: *self.cols,
            rows: *self.rows,
            data: new_data,
        };
    }
}

fn main() {}
I'm still familiarizing myself with Rust and systems programming and I'm sure the error is pretty obvious. The compiler tells me:
error[E0053]: method `mul` has an incompatible type for trait
  --> src/main.rs:22:5
   |
22 |     fn mul(&self, m: f32) -> Matrix {
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `Matrix`, found &Matrix
   |
   = note: expected type `fn(Matrix, f32) -> Matrix`
              found type `fn(&Matrix, f32) -> Matrix`
It's referring to the contents of the for loop (I believe). I've tried playing around with a few other things but I can't get my head around it.

The error message is spot-on here:
error[E0053]: method `mul` has an incompatible type for trait
  --> src/main.rs:22:5
   |
22 |     fn mul(&self, m: f32) -> Matrix {
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `Matrix`, found &Matrix
   |
   = note: expected type `fn(Matrix, f32) -> Matrix`
              found type `fn(&Matrix, f32) -> Matrix`
Let's look at the Mul trait to see why your implementation doesn't match:
pub trait Mul<RHS = Self> {
    type Output;

    fn mul(self, rhs: RHS) -> Self::Output;
}
This says that unless you specify anything further, RHS will be the same type as Self. Self is the type that the trait will be implemented on. Let's look at your definition:
impl Mul<f32> for Matrix {
    type Output = Matrix;

    fn mul(&self, m: f32) -> Matrix { /* ... */ }
}
In your case, you have substituted f32 for RHS, and Matrix for Output. Also, Matrix is the implementing type. Let's take the trait definition and substitute in, producing some pseudo-Rust:
pub trait Mul {
    fn mul(self, rhs: f32) -> Matrix;
}
Now do you see what is different?
// Trait
fn mul(self, m: f32) -> Matrix;

// Your implementation
fn mul(&self, m: f32) -> Matrix;
You have incorrectly specified that you take &self instead of self.
For completeness, here's the implementation. I threw in style fixes at no charge!
impl Mul<f32> for Matrix {
    type Output = Matrix;

    fn mul(self, m: f32) -> Matrix {
        let new_data = self.data.into_iter().map(|v| v * m).collect();
        Matrix {
            cols: self.cols,
            rows: self.rows,
            data: new_data,
        }
    }
}
This can be a bit inefficient, as it may deallocate and reallocate the data vector. Since you are taking the Matrix by value, we can instead edit it in place:
impl Mul<f32> for Matrix {
    type Output = Matrix;

    fn mul(mut self, m: f32) -> Matrix {
        for v in &mut self.data {
            *v *= m;
        }
        self
    }
}
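For reference, a quick usage sketch (the dimensions and scale factor here are arbitrary):

let m = Matrix::new(2, 2, vec![1.0, 2.0, 3.0, 4.0]);
let scaled = m * 2.0; // consumes `m`; `scaled.data` is [2.0, 4.0, 6.0, 8.0]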

Related

expected struct `Vec3d`, found type parameter `u32` when implementing the Mul trait for a custom struct

I am trying to implement the Mul trait for the Vec3d struct. I want the mul function to multiply each element of Vec3d by a number and return a Vec3d. However, I am faced with this error:
error[E0053]: method `mul` has an incompatible type for trait
  --> src\vec_3d.rs:26:21
   |
23 | impl<u32> Mul for Vec3d<u32> {
   |      --- this type parameter
...
26 |     fn mul(self, t: u32) -> Self::Output {
   |                     ^^^
   |                     |
   |                     expected struct `Vec3d`, found type parameter `u32`
   |                     help: change the parameter type to match the trait: `Vec3d<u32>`
   |
   = note: expected fn pointer `fn(Vec3d<_>, Vec3d<u32>) -> Vec3d<_>`
              found fn pointer `fn(Vec3d<_>, u32) -> Vec3d<_>`
My code looks like this:
use std::ops::Add;
use std::ops::Mul;

#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub struct Vec3d<T> {
    pub x: T,
    pub y: T,
    pub z: T,
}

impl<u32> Mul for Vec3d<u32> {
    type Output = Self;

    fn mul(self, t: u32) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.x * t,
            z: self.x * t,
        }
    }
}
I had this originally with the generic type parameter T:
impl<T: Mul<Output = T>> Mul for Vec3d<T> {
    type Output = Self;

    fn mul(self, t: T) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.x * t,
            z: self.x * t,
        }
    }
}
I have seen this done the same way in some examples and other questions. What is wrong, and how can I implement this the correct way?
This is nearly the same as the example on the Mul documentation: Multiplying vectors by scalars as in linear algebra
Mul has a default generic, which you can see in the trait definition:
pub trait Mul<Rhs = Self> {
This offers convenience since most implementers of Mul will be multiplying against Self. You aren't, so you need to specify the generic:
impl Mul<u32> for Vec3d<u32> {
    type Output = Self;

    fn mul(self, t: u32) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.y * t,
            z: self.z * t,
        }
    }
}
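As a quick usage check (values arbitrary):

let v = Vec3d { x: 1u32, y: 2, z: 3 };
let doubled = v * 2; // Vec3d { x: 2, y: 4, z: 6 }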
To make this fully generic, you might try this:
impl<T: Mul<Output = T>> Mul<T> for Vec3d<T>
However, since you're using a single T 3 times, you would need to restrict it to Clone or Copy. The usual way to get around this is to implement Mul for references:
impl<'a, T> Mul<&'a T> for Vec3d<T>
where
    T: Mul<&'a T, Output = T>,
{
    type Output = Self;

    fn mul(self, t: &'a T) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.y * t,
            z: self.z * t,
        }
    }
}
Then you probably want a non-reference version for when the type implements Copy:
impl<T> Mul<T> for Vec3d<T>
where
    T: Mul<Output = T> + Copy,
{
    type Output = Self;

    fn mul(self, t: T) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.y * t,
            z: self.z * t,
        }
    }
}
But even better, you can do both of those in one impl:
impl<T, R> Mul<R> for Vec3d<T>
where
    R: Copy,
    T: Mul<R, Output = T>,
{
    type Output = Self;

    fn mul(self, t: R) -> Self::Output {
        Self {
            x: self.x * t,
            y: self.y * t,
            z: self.z * t,
        }
    }
}
When R is &T this covers the first impl, and when R is T and Copy it covers the second. Shared references (&T) are always Copy.
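A quick sketch showing that the single impl really does cover both forms (Vec3d<f64> is Copy via the derive in the question, so v can be used twice):

let v = Vec3d { x: 1.0_f64, y: 2.0, z: 3.0 };
let by_value = v * 2.0; // R = f64, which is Copy
let by_ref = v * &2.0;  // R = &f64; std provides f64: Mul<&f64, Output = f64>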
Another common approach is to implement Mul separately for every integer type, which gives you more control over each implementation, for example if you want to optimize for certain integer sizes.
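One way to write all those impls without repeating yourself is a small macro; a sketch (the macro name is made up here, and this replaces the blanket impl above, since the two would overlap):

macro_rules! impl_mul_scalar {
    ($($t:ty),*) => {$(
        impl Mul<$t> for Vec3d<$t> {
            type Output = Self;

            fn mul(self, t: $t) -> Self::Output {
                // Element-wise scaling; each integer type gets its own impl,
                // which you could then tune per width.
                Self { x: self.x * t, y: self.y * t, z: self.z * t }
            }
        }
    )*}
}

impl_mul_scalar!(u8, u16, u32, u64, i8, i16, i32, i64);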

Rust Type annotation Error in Index trait bounding

I am trying to implement a matrix data structure using the code below,
use std::ops::{Add, Index, IndexMut};
use num::{One, Zero};

type Idx2<'a> = &'a [usize];

pub trait MatrixTrait {
    fn zeros(shape: Idx2) -> Self;
    fn ones(shape: Idx2) -> Self;
}

#[derive(Default, Debug, Clone)]
pub struct Dense2DArray<T> {
    data: Vec<T>,
    shape: Vec<usize>,
}

impl<T> MatrixTrait for Dense2DArray<T>
where
    T: Zero + One + Clone,
{
    fn zeros(shape: Idx2) -> Self {
        let data = vec![T::zero(); shape[0] * shape[1]];
        Self { data, shape: shape.to_vec() }
    }

    fn ones(shape: Idx2) -> Self {
        let data = vec![T::one(); shape[0] * shape[1]];
        Self { data, shape: shape.to_vec() }
    }
}

impl<T> Index<Idx2<'_>> for Dense2DArray<T> {
    type Output = T;

    fn index(&self, index: Idx2) -> &T {
        &self.data[index[0] * self.shape[1] + index[1]]
    }
}

impl<T> IndexMut<Idx2<'_>> for Dense2DArray<T> {
    fn index_mut(&mut self, index: Idx2) -> &mut T {
        &mut self.data[index[0] * self.shape[1] + index[1]]
    }
}

pub fn create_and_add_generic_matrix<'a, 'b, M, T>(a0: M) -> M
where
    T: One + Add<Output = T> + Copy,
    M: MatrixTrait + Index<Idx2<'a>, Output = T> + IndexMut<Idx2<'b>>,
{
    let nx = 3;
    let ny = 2;
    let mut a1 = M::zeros(&[nx, ny]);
    for i in 0..nx {
        for j in 0..ny {
            a1[[i, j]] = a0[[i, j]] + T::one() + T::one();
        }
    }
    a1
}

fn main() {
    let nx = 3;
    let ny = 2;
    let a0 = Dense2DArray::<f64>::ones(&[nx, ny]);
    let b = create_and_add_generic_matrix(a0);
    println!("{:?}", b);
}
but I always get the error:
error[E0283]: type annotations needed
  --> linalg\src\matrices\mod.rs:56:26
   |
56 |     M: MatrixTrait + Index<Idx2<'a>, Output = T> + IndexMut<Idx2<'b>>,
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot infer type for type parameter `M`
   |
   = note: cannot satisfy `M: Index<&'a [usize]>`
Strangely, or maybe not, if I change the type to the following (and make the required changes to the code, such as removing the lifetimes):
type Idx2 = [usize; 2];
it works without problems, but I get restricted to 2D matrices.
I can't understand why the change in the way indexing is performed affects the type annotation of M, nor how I should resolve the issue. Can anyone please help me understand what is happening?
Thanks.
This bound says that M must implement Index<Idx2<'a>, Output = T> for some specific caller-specified lifetime 'a, but what you actually want is to say that it must implement this trait for any possible lifetime 'a.
You need a higher-rank trait bound:
pub fn create_and_add_generic_matrix<M, T>(a0: M) -> M
where
    T: One + Add<Output = T> + Copy,
    M: MatrixTrait + for<'a> Index<Idx2<'a>, Output = T> + for<'a> IndexMut<Idx2<'a>>,
Note that you also have to borrow the index expression in your for loop, because an array isn't a slice:
a1[&[i, j]] = a0[&[i, j]] + T::one() + T::one();
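Putting both fixes together, the whole function would look something like this:

pub fn create_and_add_generic_matrix<M, T>(a0: M) -> M
where
    T: One + Add<Output = T> + Copy,
    M: MatrixTrait + for<'a> Index<Idx2<'a>, Output = T> + for<'a> IndexMut<Idx2<'a>>,
{
    let nx = 3;
    let ny = 2;
    let mut a1 = M::zeros(&[nx, ny]);
    for i in 0..nx {
        for j in 0..ny {
            // `[i, j]` is a `[usize; 2]`; `&[i, j]` coerces to `&[usize]`, i.e. `Idx2`
            a1[&[i, j]] = a0[&[i, j]] + T::one() + T::one();
        }
    }
    a1
}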

Require commutative operation in Rust trait bound

Suppose I have a group of related non-scalar structs with a commutative arithmetic operation defined on them. For example,
use std::ops::Add;

struct Foo {
    a: f64,
    b: f64,
}

impl Add<f64> for Foo {
    type Output = Foo;

    fn add(self, v: f64) -> Self::Output {
        Foo {
            a: self.a + v,
            b: self.b + v,
        }
    }
}

impl Add<Foo> for f64 {
    type Output = Foo;

    fn add(self, foo: Foo) -> Self::Output {
        Foo {
            a: foo.a + self,
            b: foo.b + self,
        }
    }
}
I want to implement a trait on this group of structs, taking advantage of this operation. That is, I want something like the following:
trait Bar: Add<f64, Output = Self> + Sized {
    fn right_add(self, f: f64) -> Self {
        self + f
    }

    // Doesn't compile!
    fn left_add(self, f: f64) -> Self {
        f + self
    }
}
However, this currently doesn't compile, since the super-trait bound doesn't include the left addition of f64 to Self. My question is: How can I state this commutative trait bound?
(Playground link.)
Edit: To be clear, I'm aware that right_add and left_add have the same output. I'm mainly interested in the ergonomics of not having to remember which is "correct" according to the compiler. In addition, I'm curious to learn how to do this, even if it's not strictly necessary.
Inverted trait bounds like this are the exact use case for where syntax:
trait Bar
where
    f64: Add<Self, Output = Self>,
    Self: Add<f64, Output = Self> + Sized,
{
    fn right_add(self, f: f64) -> Self {
        self + f
    }

    fn left_add(self, f: f64) -> Self {
        f + self
    }
}
Playground link
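With those bounds on the trait itself, implementing it for the Foo struct from the question is a one-liner; a quick sketch (values arbitrary):

impl Bar for Foo {}

let foo = Foo { a: 1.0, b: 2.0 };
let sum = foo.left_add(3.0); // Foo { a: 4.0, b: 5.0 }

Both where clauses are satisfied by the two Add impls defined earlier, so no further code is needed.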

Setting the Type of a Struct Constructor in Rust

I have a matrix data type in Rust that supports a generic element data type.
pub struct Matrix<T> {
    data: Vec<T>, // row-major storage
    nrows: usize,
    ncols: usize,
}
I would like to create a family of different matrix constructors such as zero and eye which output the zero matrix and the identity matrix, respectively. A standard Matrix::new() constructor is straightforward:
impl<T> Matrix<T> {
    pub fn new(data: Vec<T>, nrows: usize, ncols: usize) -> Matrix<T> {
        assert!(data.len() == nrows * ncols);
        Matrix { data: data, nrows: nrows, ncols: ncols }
    }
}
The underlying type T is inferred from the type of the initializing vector. However, when I try to write a Matrix::zero() constructor, I run into issues figuring out how to infer the type, since the only parameters I want to pass are the dimensions.
impl<T> Matrix<T> {
    pub fn zero(nrows: usize, ncols: usize) -> Matrix<T>
    where
        T: Clone,
    {
        let data: Vec<T> = vec![0; nrows * ncols];
        Matrix::new(data, nrows, ncols)
    }
}
Trying to compile this results in the error message:
error[E0308]: mismatched types
  --> src/tools.rs:39:33
   |
39 |         let data: Vec<T> = vec![0; nrows*ncols];
   |                                 ^ expected type parameter, found integral variable
   |
   = note: expected type `T`
              found type `{integer}`
I tried 0 as T and T::from(0) and those don't resolve the issue. (To be honest, I don't yet understand why.) One possible solution is to change the function definition to zero(_value: T, nrows: usize, ncols: usize) and construct the data vector via vec![_value; ...] but that seems odd.
Whatever the solution, my end goal is to be able to simply write:
let a: Matrix<f32> = Matrix::zero(nrows, ncols);
// ... or ...
let b = Matrix<f32>::zero(nrows, ncols);
// ... or ...
let c = Matrix::zero<f32>(nrows, ncols);
// ... or something else?
You probably want to use the num crate, which adds traits for situations like this.
Something like:
extern crate num;
use num::Zero;
impl<T> Matrix<T> {
pub fn zero(nrows: usize, ncols: usize) -> Matrix<T>
where T : Clone + Zero
{
let data: Vec<T> = vec![T::zero(); nrows*ncols];
Matrix::new(data, nrows, ncols)
}
}
would probably work, because it bounds T to types that implement the num::Zero trait. This trait is implemented for all integer and floating-point primitives, and can be implemented for custom types as well.
If you don't want to import the num crate, you can define this trait manually like below, though it will require you to implement it for primitives yourself.
trait Zero {
    fn zero() -> Self;
}

impl Zero for f32 {
    fn zero() -> Self {
        0.0
    }
}
...
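With the Zero bound in place, the first of the call styles you listed works as written, and the second needs the turbofish spelling; the third form doesn't exist, because T is a parameter of the type rather than of zero:

let a: Matrix<f32> = Matrix::zero(3, 2);
let b = Matrix::<f32>::zero(3, 2); // turbofish; `Matrix<f32>::zero(...)` is not valid syntax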

How to pass functions for traits implemented with lifetimes?

Rust beginner here. I have this "binary calculator" that uses a couple of types...

pub enum Bit { Off, On }

pub struct Binary([Bit; 64]);
Ignoring everything else that is implemented for them, I overloaded the operators for the Binary like so...
impl Add for Binary {
    type Output = Binary;

    /// Basic implementation of full addition circuit
    fn add(self, other: Binary) -> Binary {
        // ...
    }
}
... Div, Sub, and Mul
... where each operator consumes the Binarys passed to them. I was then able to define a set of public functions that took care of converting, calling, and printing everything how I wanted to...
pub fn add(x: i64, y: i64) -> Calc {
    execute(Binary::add, x, y)
}

pub fn subtract(x: i64, y: i64) -> Calc {
    execute(Binary::sub, x, y)
}

// ...

fn execute(f: fn(Binary, Binary) -> Binary, x: i64, y: i64) -> Calc {
    let bx = Binary::from_int(x);
    println!("{:?}\n{}\n", bx, x);
    let by = Binary::from_int(y);
    println!("{:?}\n{}\n", by, y);
    let result = f(bx, by);
    println!("{:?}", result);
    result.to_int()
}
Problem
This worked, but the operations consumed the Binarys, which I didn't actually want. So I implemented the traits for references instead...
impl<'a, 'b> Add<&'b Binary> for &'a Binary {
    type Output = Binary;

    /// Basic implementation of full addition circuit
    fn add(self, other: &'b Binary) -> Binary {
        // ...
    }
}
Now, though, I cannot figure out how to pass those functions to execute as I did before. For example, execute(Binary::div, x, y) is giving the following error.
error[E0277]: cannot divide `types::binary::Binary` by `_`
  --> src/lib.rs:20:13
   |
20 |     execute(Binary::div, x, y)
   |             ^^^^^^^^^^^ no implementation for `types::binary::Binary / _`
   |
   = help: the trait `std::ops::Div<_>` is not implemented for `types::binary::Binary`
How can I pass that specific implementation with lifetimes? I assume that I need to update the signature for execute too, like...
fn execute<'a, 'b>(f: fn(&'a Binary, &'b Binary) -> Binary, ...
But I wind up also seeing...
error[E0308]: mismatched types
  --> src/lib.rs:20:13
   |
20 |     execute(Binary::div, x, y)
   |             ^^^^^^^^^^^ expected reference, found struct `types::binary::Binary`
   |
   = note: expected type `fn(&types::binary::Binary, &types::binary::Binary) -> types::binary::Binary`
              found type `fn(types::binary::Binary, _) -> <types::binary::Binary as std::ops::Div<_>>::Output {<types::binary::Binary as std::ops::Div<_>>::div}`
Being a complete beginner, I was able to follow all the error messages that got me to the "working" point (where operations consumed the values), but now I'm a bit out of my league, I think.
I made an exemplary implementation for addition (I made some assumptions regarding the return type of execute and others, you'd have to adapt this if my assumptions are wrong):
use std::ops::Add;

#[derive(Debug)]
pub enum Bit { Off, On }

#[derive(Debug)]
pub struct Binary([Bit; 32]);

impl Binary {
    fn to_int(&self) -> i64 { unimplemented!() }
    fn from_int(n: i64) -> Self { unimplemented!() }
}

impl<'a, 'b> Add<&'b Binary> for &'a Binary {
    type Output = Binary;

    fn add(self, other: &'b Binary) -> Binary {
        unimplemented!()
    }
}

pub fn add(x: i64, y: i64) -> i64 {
    execute(|a, b| a + b, x, y)
}

fn execute(f: fn(&Binary, &Binary) -> Binary, x: i64, y: i64) -> i64 {
    let bx = Binary::from_int(x);
    println!("{:?}\n{}\n", bx, x);
    let by = Binary::from_int(y);
    println!("{:?}\n{}\n", by, y);
    let result = f(&bx, &by);
    println!("{:?}", result);
    result.to_int()
}
Note that within execute, you'd have to call f(&bx, &by) (i.e. borrow them instead of consuming).
However: I wondered why you chose to have fn(&Binary, &Binary) -> Binary as argument type instead of making execute generic over F, constraining F to be a callable:
fn execute<F>(f: F, x: i64, y: i64) -> i64
where
    F: Fn(&Binary, &Binary) -> Binary,
{
    let bx = Binary::from_int(x);
    println!("{:?}\n{}\n", bx, x);
    let by = Binary::from_int(y);
    println!("{:?}\n{}\n", by, y);
    let result = f(&bx, &by);
    println!("{:?}", result);
    result.to_int()
}
This way, you are a bit more flexible (you can e.g. pass closures capturing variables in their scope).
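If you still want to pass the operator method itself rather than a closure, you can name the reference impl with a fully qualified path; the resulting function item coerces to the required fn type. A sketch using the Add impl above:

pub fn add(x: i64, y: i64) -> i64 {
    execute(<&Binary as Add<&Binary>>::add, x, y)
}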
