What are some ways of creating generic N-dimensional vectors in rust?
What I tried:
struct Vec<N,T> where N: u32{
dim: N,
data: T
}
pub type Vec2<T> = Vec<2,T>;
pub type Vec3<T> = Vec<3,T>;
That is at least how I did it in C++, but it is not working here for some reason:
error[E0404]: expected trait, found builtin type `u32`
--> main.rs:1:26
|
1 | struct Vec<N,T> where N: u32{
| ^^^ not a trait
error[E0747]: constant provided when a type was expected
--> main.rs:5:24
|
5 | pub type Vec2<T> = Vec<2,T>;
|
Maybe I am still thinking in C++ template style. What I don't want to do is this:
// pub type Vec2<T> = [T;2];
// pub type Vec3<T> = [T;3];
Because if I do it this way, I need separate functions for each dimension (vec2_add, vec3_add, vec2_cross, etc.).
How can I make this as generic as possible? In the end, I just want to define a vector like Vec2 or Vec3 and a single add or dot or cross method that works for both.
You can use const generics for this:
struct Vec<const N: usize, T> {
data: [T;N],
}
impl<const N: usize, T> Vec<N, T> {
// your functions here
}
let my_int_vec = Vec { data: [1, 2, 3] }; // Vec<3, i32>
let my_str_vec = Vec { data: ["foo", "bar"] }; // Vec<2, &str>
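For example, here is a sketch of what could go in that impl block: a single dot product that works for any N. The struct is repeated so the snippet is self-contained, and the Copy/Default/Add/Mul bounds on T are my own assumptions, not something the answer above requires.
use std::ops::{Add, Mul};

// Note: this local Vec shadows std's Vec, as in the question.
struct Vec<const N: usize, T> {
    data: [T; N],
}

impl<const N: usize, T> Vec<N, T>
where
    T: Copy + Default + Add<Output = T> + Mul<Output = T>,
{
    // One dot product that works for Vec<2, T>, Vec<3, T>, or any other N.
    fn dot(&self, other: &Self) -> T {
        let mut acc = T::default();
        for i in 0..N {
            acc = acc + self.data[i] * other.data[i];
        }
        acc
    }
}

type Vec2<T> = Vec<2, T>;
type Vec3<T> = Vec<3, T>;

fn main() {
    let a = Vec3 { data: [1, 2, 3] };
    let b = Vec3 { data: [4, 5, 6] };
    assert_eq!(a.dot(&b), 32); // i32 inferred

    let c = Vec2 { data: [1.0, 2.0] };
    let d = Vec2 { data: [3.0, 4.0] };
    assert_eq!(c.dot(&d), 11.0); // same method, different dimension and type
}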
I'm trying to be generic in C's implementation, but use individual implementations of div_array for its member T. For example, C::something(&self) should use the correct implementation for the array [T; 2]. Since T: A, only 2 possible T can exist: T = u32 and T = u64, so I implemented DivArray for [u32; 2] and [u64; 2] only. While I could write a generic implementation, I really want it to be specific to each array type because it could use hardware operations only available for some types, etc.
use core::marker::PhantomData;
use num_traits::Zero;
pub trait A: Zero + Copy {}
impl A for u32{}
impl A for u64{}
pub trait DivArray<'a, Rhs>: Sized + Copy {
type Output;
fn div_array(
self,
denominator: Rhs,
) -> Result<Self::Output, ()>;
}
impl<'a, Rhs: Into<Rhs>> DivArray<'a, Rhs> for [u32; 2] {
type Output = [u32; 2];
fn div_array(
self,
denominator: Rhs,
) -> Result<Self::Output, ()> {
unimplemented!();
}
}
impl<'a, Rhs: Into<Rhs>> DivArray<'a, Rhs> for [u64; 2] {
type Output = [u64; 2];
fn div_array(
self,
denominator: Rhs,
) -> Result<Self::Output, ()> {
unimplemented!();
}
}
pub struct C<T>{
_phantom: PhantomData<T>
}
impl<T: A> C<T>{
pub fn something(&self) {
let arr = [T::zero(); 2];
arr.div_array(1u64);
}
}
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=60694183a787b70e210a4612747a622c
error[E0599]: no method named `div_array` found for array `[T; 2]` in the current scope
--> src/lib.rs:44:13
|
44 | arr.div_array(1u64);
| ^^^^^^^^^ method not found in `[T; 2]`
|
= help: items from traits can only be used if the trait is implemented and in scope
note: `DivArray` defines an item `div_array`, perhaps you need to implement it
--> src/lib.rs:9:1
|
9 | pub trait DivArray<'a, Rhs>: Sized + Copy {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
What is happening here? I clearly implemented DivArray for all T such that T:A
What is happening here?
Traits are open (not "completable"), so there's no such thing as "implementing for all T: A": implementing DivArray for [u32; 2] and [u64; 2] does not make it implemented for [T; 2].
Not that the compiler would care even if there were, because it doesn't check that: Rust typechecks generic methods as-is, before any sort of expansion, so all it knows is that arr: [T; 2], and it needs an impl of DivArray for that type.
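In other words, the generic method would only compile if that requirement were spelled out as a bound. A sketch based on the question's code (the trait and the two array impls stay exactly as posted; only the impl of C changes, using a higher-ranked where clause):
impl<T: A> C<T>
where
    for<'a> [T; 2]: DivArray<'a, u64>,
{
    pub fn something(&self) {
        let arr = [T::zero(); 2];
        // The bound above is what lets this call typecheck for a generic T.
        let _ = arr.div_array(1u64);
    }
}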
However, all this mess seems completely unnecessary here: you can just implement your stuff for both types, possibly using a macro if you turn out to have more than two:
impl C<u32>{
pub fn something(&self) {
let arr = [0u32; 2];
arr.div_array(1u64);
}
}
impl C<u64>{
pub fn something(&self) {
let arr = [0u64; 2];
arr.div_array(1u64);
}
}
Though note that this will prevent a generic implementation: in the fullness of time an option might be specialisation (for both the original version and this one), but it's nowhere near done because it's full of unresolved soundness issues.
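If more types do show up, a small macro can stamp out those per-type impls. A sketch (impl_something! is a made-up name, and it would replace the two hand-written impl blocks above; the DivArray trait and the array impls stay as in the question):
macro_rules! impl_something {
    ($($t:ty),* $(,)?) => {
        $(
            impl C<$t> {
                pub fn something(&self) {
                    // The literal is typed per impl, so method resolution
                    // finds the matching DivArray implementation.
                    let arr: [$t; 2] = [0; 2];
                    let _ = arr.div_array(1u64);
                }
            }
        )*
    };
}

impl_something!(u32, u64);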
I have a matrix data type in Rust that supports a generic element data type.
pub struct Matrix<T> {
data: Vec<T>, // row-major storage
nrows: usize,
ncols: usize,
}
I would like to create a family of different matrix constructors such as zero and eye which output the zero matrix and the identity matrix, respectively. A standard Matrix::new() constructor is straightforward:
impl<T> Matrix<T> {
pub fn new(data: Vec<T>, nrows: usize, ncols: usize) -> Matrix<T> {
assert!(data.len() == nrows*ncols);
Matrix { data: data, nrows: nrows, ncols: ncols }
}
}
The underlying type T is inferred from the type of the initializing vector. However, when I try to write a Matrix::zero() constructor, I run into issues figuring out how the type should be inferred, since the only parameters I want to pass are the sizes.
impl<T> Matrix<T> {
pub fn zero(nrows: usize, ncols: usize) -> Matrix<T>
where T : Clone
{
let data: Vec<T> = vec![0; nrows*ncols];
Matrix::new(data, nrows, ncols)
}
}
Trying to compile this results in the error message:
error[E0308]: mismatched types
--> src/tools.rs:39:33
|
39 | let data: Vec<T> = vec![0; nrows*ncols];
| ^ expected type parameter, found integral variable
|
= note: expected type `T`
found type `{integer}`
I tried 0 as T and T::from(0) and those don't resolve the issue. (To be honest, I don't yet understand why.) One possible solution is to change the function definition to zero(_value: T, nrows: usize, ncols: usize) and construct the data vector via vec![_value; ...] but that seems odd.
Whatever the solution, my end goal is to be able to simply write:
let a: Matrix<f32> = Matrix::zero(nrows, ncols);
// ... or ...
let b = Matrix<f32>::zero(nrows, ncols);
// ... or ...
let c = Matrix::zero<f32>(nrows, ncols);
// ... or something else?
You probably want to use the num crate, which adds traits for situations like this.
Something like:
extern crate num;
use num::Zero;
impl<T> Matrix<T> {
pub fn zero(nrows: usize, ncols: usize) -> Matrix<T>
where T : Clone + Zero
{
let data: Vec<T> = vec![T::zero(); nrows*ncols];
Matrix::new(data, nrows, ncols)
}
}
would probably work, because it bounds the constructor to types that implement the num::Zero trait. Zero is implemented for all the integer and floating-point primitives, and can be implemented for custom types as well.
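With that bound in place, the call the question asked for compiles as written, either with a type annotation or with the turbofish:
let a: Matrix<f32> = Matrix::zero(2, 3);  // T inferred from the annotation
let b = Matrix::<f32>::zero(2, 3);        // T given explicitly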
If you don't want to import the num crate, you can define this trait manually like below, though it will require you to implement it for primitives yourself.
trait Zero {
fn zero() -> Self;
}
impl Zero for f32 {
fn zero() -> Self {
0.0
}
}
...
I tried to implement a generic matrix product function which should work for all primitive types and the BigInt/BigUint types from the num_bigint crate:
extern crate num_bigint;
extern crate num_traits;
use num_traits::identities::Zero;
use std::ops::{AddAssign, Mul};
#[derive(Debug, Clone)]
struct Matrix<T> {
n: usize,
m: usize,
data: Vec<Vec<T>>,
}
fn m_prd<T>(a: &Matrix<T>, b: &Matrix<T>) -> Matrix<T>
where
T: Clone + AddAssign + Mul<Output = T> + Zero,
{
let n = a.n;
let p = b.n;
let m = b.m;
let mut c = Matrix {
n: n,
m: m,
data: vec![vec![T::zero(); m]; n],
};
for i in 0..n {
for j in 0..m {
for k in 0..p {
c.data[i][j] += &a.data[i][k] * &b.data[k][j];
}
}
}
c
}
fn main() {}
I got the following error message:
error[E0369]: binary operation `*` cannot be applied to type `&T`
--> src/main.rs:29:33
|
29 | c.data[i][j] += &a.data[i][k] * &b.data[k][j];
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: an implementation of `std::ops::Mul` might be missing for `&T`
How can I implement the function for &T in this case?
Shepmaster marked this question as a duplicate, but the issue seems to be that applying an operator (AddAssign or Mul) to a reference (&T) requires a separate bound after the where keyword.
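For what it's worth, that is what such a bound looks like. A sketch of the question's function with only the where clause changed: the higher-ranked bound states that &T * &T is defined for every lifetime and yields a T, which holds for the primitive numeric types and for BigInt/BigUint, since those crates implement Mul for references.
use num_traits::Zero;
use std::ops::{AddAssign, Mul};

#[derive(Debug, Clone)]
struct Matrix<T> {
    n: usize,
    m: usize,
    data: Vec<Vec<T>>,
}

// Same body as in the question; only the bounds differ.
fn m_prd<T>(a: &Matrix<T>, b: &Matrix<T>) -> Matrix<T>
where
    T: Clone + AddAssign + Zero,
    for<'a> &'a T: Mul<&'a T, Output = T>,
{
    let n = a.n;
    let p = b.n;
    let m = b.m;
    let mut c = Matrix {
        n,
        m,
        data: vec![vec![T::zero(); m]; n],
    };
    for i in 0..n {
        for j in 0..m {
            for k in 0..p {
                c.data[i][j] += &a.data[i][k] * &b.data[k][j];
            }
        }
    }
    c
}

fn main() {}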
I am implementing matrix addition using traits. I am stuck on a generic type mismatch. My code is as follows:
use std::{ops, fmt};
#[derive(PartialEq, Debug)]
pub struct Matrix<T> {
data: Vec<T>,
row: usize,
col: usize,
}
impl<T: Copy> Matrix<T> {
/// Creates a new matrix of `row` rows and `col` columns, and initializes
/// the matrix with the elements in `values` in row-major order.
pub fn new(row: usize, col: usize, values: &[T]) -> Matrix<T> {
Matrix {
data: values.to_vec(), // make copy and convert &[T] to vector type
row: row,
col: col,
}
}
}
impl<T: ops::Add<Output = T> + Copy> ops::Add for Matrix<T> {
type Output = Self;
/// Returns the sum of `self` and `rhs`. If `self.row != rhs.row || self.col != rhs.col`, panic.
fn add(self, rhs: Self) -> Self::Output {
assert!(self.col == rhs.col);
assert!(self.row == rhs.row);
let mut newdata = Vec::new(); // make a new vector to store result
let mut sum: i32; // temp variable to record each addition
for num1 in self.data.iter() {
for num2 in rhs.data.iter() {
sum = *num1 + *num2;
newdata.push(sum)
}
}
Matrix {
data: newdata, // finally, return addition result using new_data
row: self.row,
col: self.col,
}
}
}
fn main() {
let x = Matrix::new(2, 3, &[-6, -5, 0, 1, 2, 3]);
let y = Matrix::new(2, 3, &[0, 1, 0, 0, 0, 0]);
// z = x + y;
}
Compiling the program, I got two errors about type mismatch:
error[E0308]: mismatched types
--> src/main.rs:36:23
|
36 | sum = *num1 + *num2;
| ^^^^^^^^^^^^^ expected i32, found type parameter
|
= note: expected type `i32`
= note: found type `T`
error[E0308]: mismatched types
--> src/main.rs:41:9
|
41 | Matrix {
| ^ expected type parameter, found i32
|
= note: expected type `Matrix<T>`
= note: found type `Matrix<i32>`
My thoughts:
num1 would deref the vector element and give an integer type; that's why I use sum to record the result.
I am trying to return a Matrix value at the end of the function.
What is going wrong?
This is the entire knowledge of your types that the code can rely on inside of the method:
impl<T: ops::Add<Output = T> + Copy> ops::Add for Matrix<T> {
type Output = Self;
fn add(self, rhs: Self) -> Self::Output {
// ...
}
}
Based on that, how would it be possible to make this assumption?
num1 would deref the vector element and give an integer type
There is no way to know what concrete type T will be!
Beyond that, even if it were some integer type, how would it be possible to assume that summing into an i32 is acceptable? What if T were an i64?
The solution is to remove any assumptions and let the compiler do its job. Remove the type annotation from sum and the code compiles. I find it good practice to always allow the compiler to infer my types when possible.
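For reference, a sketch of the add body once the annotation is gone, so sum is inferred as T. It also pairs the elements with zip; the question's nested loops combine every element of self with every element of rhs, which is a separate bug the quoted answer does not address.
impl<T: ops::Add<Output = T> + Copy> ops::Add for Matrix<T> {
    type Output = Self;

    fn add(self, rhs: Self) -> Self::Output {
        assert!(self.col == rhs.col);
        assert!(self.row == rhs.row);
        let mut newdata = Vec::new();
        // Walk the two element vectors in lockstep.
        for (num1, num2) in self.data.iter().zip(rhs.data.iter()) {
            let sum = *num1 + *num2; // inferred as T
            newdata.push(sum);
        }
        Matrix {
            data: newdata,
            row: self.row,
            col: self.col,
        }
    }
}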
See also:
Requiring implementation of Mul in generic function
"Expected type parameter" error in the constructor of a generic struct
struct Vector {
data: [f32; 2]
}
impl Vector {
//many methods
}
Now I want to create a Normal which will behave almost exactly like a Vector, but I need to differentiate the types, because, for example, transforming a normal is different from transforming a vector: you need to transform it with the transposed (inverse) matrix.
Now I could do it like this:
struct Normal {
v: Vector
}
And then reimplement all the functionality
impl Normal {
fn dot(self, other: Normal) -> f32 {
Vector::dot(self.v, other.v)
}
....
}
I think I could also do it with PhantomData
use std::marker::PhantomData;

struct VectorType;
struct NormalType;
struct PointType;
struct Vector<T = VectorType> {
data: [f32; 2],
_type: PhantomData<T>,
}
type Normal = Vector<NormalType>;
But then I also need a way to implement functionality for specific types.
It should be easy to implement, for example, add for everything so that it is possible to add point + vector.
Or functionality specific to some type
impl Vector<NormalType> {..} // Normal specific stuff
I'm not sure how I would implement functionality for a subset of the types. For example, maybe the dot product only makes sense for normals and vectors but not points.
Is it possible to express a boolean expression for trait bounds?
trait NormalTrait;
trait VectorTrait;
impl NormalTrait for NormalType {}
impl VectorTrait for VectorType {}
impl<T: PointTrait or VectorTrait> for Vector<T> {
fn dot(self, other: Vector<T>) -> f32 {..}
}
What are my alternatives?
Your question is pretty broad and touches many topics. But your PhantomData idea could be a good solution. You can write different impl blocks for different generic types. I added a few things to your code:
use std::marker::PhantomData;

struct VectorType;
struct NormalType;
struct PointType;
struct Vector<T = VectorType> {
data: [f32; 2],
_type: PhantomData<T>,
}
type Normal = Vector<NormalType>;
type Point = Vector<PointType>;
// --- above this line is old code --------------------
trait Pointy {}
impl Pointy for VectorType {}
impl Pointy for PointType {}
// implement for all vector types
impl<T> Vector<T> {
fn new() -> Self {
Vector {
data: [0.0, 0.0],
_type: PhantomData,
}
}
}
// impl for 'pointy' vector types
impl<T: Pointy> Vector<T> {
fn add<R>(&mut self, other: Vector<R>) {}
fn transform(&mut self) { /* standard matrix multiplication */ }
}
// impl for normals
impl Vector<NormalType> {
fn transform(&mut self) { /* tranposed inversed matrix stuff */ }
}
fn main() {
let mut n = Normal::new();
let mut p = Point::new();
n.transform();
p.transform();
// n.add(Normal::new()); // ERROR:
// no method named `add` found for type `Vector<NormalType>` in the current scope
p.add(Point::new());
}
Is it possible to express a boolean expression for trait bounds?
No (not yet). But you can solve it in this case as shown above: you create a new trait (Pointy) and implement it for the types in your "or"-condition. Then you bound with that trait.