Given a rather simple "small-vector" struct used for graphics or linear algebra:
pub struct VecN<T, const N: usize>
{
v: [T; N],
}
How would one create the typical "axis" vectors as compile-time constants?
I.e., something along the line of:
impl<T> VecN<T, 3> where T: From<f32> + Default
{
const AXES: [VecN<T, 3>; 3] = [VecN::new([T::from(1.0), T::default(), T::default()]),
VecN::new([T::default(), T::from(1.0), T::default()]),
VecN::new([T::default(), T::default(), T::from(1.0)])];
}
(The above does not work since the From and Default traits cannot be evaluated
at compile time.)
At best, the only "workaround" I've found is to create the constants for each
type I'm interested in, e.g.,
impl VecN<f32, 3>
{
const AXES: [VecN<f32, 3>; 3] = [VecN::new([1.0, 0.0, 0.0]),
VecN::new([0.0, 1.0, 0.0]),
VecN::new([0.0, 0.0, 1.0])];
}
This feels very redundant, even if it can be alleviated pretty easily using
macros. And more exotic types are of course not included. Is there some way to
make this work in a more generic way?
As you've mentioned, the problem here is that trait methods cannot be evaluated at compile time. However, trait constants can be, so you can define a trait with constants for ZERO and ONE, and use those instead:
trait ConstNum {
const ZERO: Self;
const ONE: Self;
}
impl ConstNum for f32 {
    const ZERO: Self = 0.0;
    const ONE: Self = 1.0;
}
impl ConstNum for f64 {
    const ZERO: Self = 0.0;
    const ONE: Self = 1.0;
}
// etc.
impl<T: ConstNum> VecN<T, 3> {
const AXES: [VecN<T, 3>; 3] = [VecN::new([T::ONE, T::ZERO, T::ZERO]),
VecN::new([T::ZERO, T::ONE, T::ZERO]),
VecN::new([T::ZERO, T::ZERO, T::ONE])];
}
Given a matrix type in Rust using const generic parameters, something like this:
pub struct Mat<T, const R: usize, const C: usize>
{
m: [[T; C]; R],
}
I am aware that specialization is not available (yet, and perhaps not for this kind of
specialization at all?), but it is possible to implement some 'specializations' for
specific instances of this type, e.g.:
impl<T> Mat<T, 2, 2>
{
pub fn invert(self) -> Self
where T: From<f32> + Clone + std::ops::Neg<Output=T> + std::ops::Sub<T, Output = T> +
std::ops::Mul<T, Output = T> + std::ops::Div<T, Output = T>
{
let r = T::from(1.0) / self.clone().determinant();
Mat::new([[ self[1][1]*r.clone(), -self[0][1]*r.clone()],
[-self[1][0]*r.clone(), self[0][0]*r]])
}
pub fn determinant(self) -> T
where T: std::ops::Sub<T, Output = T> + std::ops::Mul<T, Output = T>
{
self[0][0]*self[1][1] - self[0][1]*self[1][0]
}
}
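The 2×2 'specialization' above does work in isolation. Here is a self-contained version of it for reference; the Mat::new and Index impls are my own assumptions, since the question doesn't show them:

```rust
use std::ops::Index;

#[derive(Clone)]
pub struct Mat<T, const R: usize, const C: usize> {
    m: [[T; C]; R],
}

impl<T, const R: usize, const C: usize> Mat<T, R, C> {
    pub fn new(m: [[T; C]; R]) -> Self {
        Self { m }
    }
}

// Index a row, so `self[r][c]` works as in the question's snippets.
impl<T, const R: usize, const C: usize> Index<usize> for Mat<T, R, C> {
    type Output = [T; C];
    fn index(&self, r: usize) -> &[T; C] {
        &self.m[r]
    }
}

impl<T> Mat<T, 2, 2> {
    pub fn determinant(self) -> T
    where
        T: Copy + std::ops::Sub<Output = T> + std::ops::Mul<Output = T>,
    {
        self[0][0] * self[1][1] - self[0][1] * self[1][0]
    }

    pub fn invert(self) -> Self
    where
        T: Copy + From<f32> + std::ops::Neg<Output = T> + std::ops::Sub<Output = T>
            + std::ops::Mul<Output = T> + std::ops::Div<Output = T>,
    {
        let r = T::from(1.0) / self.clone().determinant();
        Mat::new([[ self[1][1] * r, -self[0][1] * r],
                  [-self[1][0] * r,  self[0][0] * r]])
    }
}

fn main() {
    // det = 4*6 - 7*2 = 10; inverse = 1/10 * [[6, -7], [-2, 4]].
    let m = Mat::new([[4.0f64, 7.0], [2.0, 6.0]]);
    assert!((m.clone().determinant() - 10.0).abs() < 1e-12);
    let inv = m.invert();
    assert!((inv[0][0] - 0.6).abs() < 1e-12);
    assert!((inv[1][1] - 0.4).abs() < 1e-12);
}
```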
Similar 'specializations' can be implemented for 3x3 and 4x4 matrices.
However, at some point I would like to also implement a generalized
determinant/inverse, something like this:
impl<T, const N: usize> Mat<T, N, N>
{
/// Generalized matrix inversion.
pub fn gen_invert(self) -> Self
{
// TODO: determinant(), matrix_minor(), cofactor(), adjugate()
Mat::default()
}
/// Generalized matrix determinant.
pub fn gen_det(self) -> T
{
match N
{
1 => self.determinant(), // <- Wrong type Mat<T, N, N>, not Mat<T, 1, 1>.
// 2 => // Same as above.
_ => general_fn(),
}
}
}
This turned out to be hard, however, since there does not seem to be any way to
cast, or even transmute, a Mat<T, N, N> to a Mat<T, 1, 1>, even though we know
the sizes match inside the appropriate match arm on the const parameter. Is
there some way to accomplish the above without an expensive from() function
that manually builds a Mat<T, 1, 1> from the Mat<T, N, N>?
Additionally, is there some way to avoid renaming the generalized versions of
these functions? I tried to do that using traits, similar to this question, but
since all functions have the same name this does not appear to work for this
case.
(There are even more complications with this though, since computing minor
matrices requires arithmetic on const generics and that is only in nightly Rust
right now, but that is beyond the scope of this question.)
I was experimenting with const generics when this strange error came up: error: unconstrained generic constant. What does it mean?
Maybe I should describe what I was trying to do before the actual code: I wanted to treat types as numbers, using Lisp-like Cons and Nil.
Here's an example:
use core::marker::PhantomData;
struct Nil;
struct Cons <N> (PhantomData <N>);
So '1' is Cons <Nil>, '2' is Cons <Cons <Nil>>, and so on.
Then I tried to implement extracting a normal number from this.
trait Len {
const N: usize;
}
impl <N: Len> Len for Cons <N> {
const N: usize = 1 + N::N;
}
impl Len for Nil {
const N: usize = 0;
}
And it works.
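To make "it works" concrete, the counting can be exercised like this (restating the definitions so the snippet is self-contained):

```rust
use core::marker::PhantomData;

struct Nil;
struct Cons<N>(PhantomData<N>);

trait Len {
    const N: usize;
}

impl<N: Len> Len for Cons<N> {
    const N: usize = 1 + N::N;
}

impl Len for Nil {
    const N: usize = 0;
}

fn main() {
    // Cons<Cons<Nil>> encodes the number 2; Nil encodes 0.
    assert_eq!(<Cons<Cons<Nil>> as Len>::N, 2);
    assert_eq!(<Nil as Len>::N, 0);
}
```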
Then I started doing the actual job: my true task was not just to experiment with generic types, but to implement mathematical vectors, just like (maybe you know) in shaders.
So I tried to implement such a powerful constructor:
vec3 i = vec3(1.0, 1.0, 1);
vec4 v = vec4(i);
I have decided to treat any value as 'Piece' and then just compound pieces together.
trait Piece <T> {
type Size: Len;
fn construct(self) -> [T; Self::Size::N];
}
No problems so far. The next step is to define a few auxiliary traits (since Rust does not yet support negative bounds):
pub auto trait NotTuple {}
impl <T> !NotTuple for (T,) {}
pub auto trait NotList {}
impl <T, const N: usize> !NotList for [T; N] {}
I am using nightly Rust, so a few #![feature]s are not a problem.
Then, we can use it:
type One = Cons <Nil>;
impl <T: Copy + From <U>, U: Copy + NotTuple + NotList> Piece <T> for U {
type Size = One;
fn construct(self) -> [T; Self::Size::N] {
[T::from(self)]
}
}
This one constructs piece from an argument.
Everything is still good.
impl <T: Copy, U: Piece <T>> Piece <T> for (U,) {
type Size = U::Size;
fn construct(self) -> [T; Self::Size::N] {
self.0.construct()
}
}
And this is where the problem occurs. The compiler says that Self::Size::N is an unconstrained generic constant and tries to help with this: try adding a 'where' bound using this expression: 'where [(); Self::Size::N]:', but this is just useless.
Can anyone explain what is going on and how to fix this issue?
I've recently taken on learning Rust and I'm trying to write a small expression evaluator. I've been practicing Rust for a few days now and thought this task would be cool to work with Rust's Traits. What I tried to do is make Sum & Number structs implement Expression trait, so that I could express (pun unintended) (1 + 2) as an expression where left and right hand sides are expressions too.
I've stumbled onto the problem that you can't just use traits as field types; instead you should use &dyn Trait or Box<dyn Trait>, as described in the Book. Following this notion I rewrote it, and now it compiles, but I can't get access to the values inside Sum.
Here's my code:
trait Expression {}
#[derive(Debug)]
struct Number {
pub val: i32
}
impl Expression for Number {}
struct Sum {
pub left: Box<dyn Expression>,
pub right: Box<dyn Expression>
}
impl Expression for Sum {}
fn main() {
let e = Sum{ left: Box::new(Number{ val: 2}),
right: Box::new(Number{ val: 2})
};
let sum = (2 + 2);
println!("{:#?}", sum);
}
What I want to be able to do is get to Number's value:
e.left.val
and use nested constructions like:
Sum{Sum{Number, Sum{Number, Number}}, Number}
I also tried to make explicit cast to Number:
let val = (e.left as Number).val;
But it fails with an error:
non-primitive cast: std::boxed::Box<(dyn Expression + 'static)> as Number
note: an as expression can only be used to convert between primitive types. Consider using the From trait.
Sorry for any language mistakes or messy explanation, English is not my first language.
I'm not an experienced programmer and very new to Rust so I would really appreciate any help, thanks!
Rust doesn't let you cast non-primitive types.
Reference:
https://doc.rust-lang.org/rust-by-example/types/cast.html
I think what you're trying to do is this (complete code in the playground):
trait Expression {
fn evaluate(&self) -> i32;
}
impl Expression for Number {
fn evaluate(&self) -> i32 {
self.val
}
}
impl Expression for Sum {
fn evaluate(&self) -> i32 {
self.left.evaluate() + self.right.evaluate()
}
}
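With evaluate in place, the nested construction from the question works directly. A self-contained check (restating the structs, and using a nested tree rather than downcasting to reach the inner values):

```rust
trait Expression {
    fn evaluate(&self) -> i32;
}

struct Number {
    val: i32,
}

struct Sum {
    left: Box<dyn Expression>,
    right: Box<dyn Expression>,
}

impl Expression for Number {
    fn evaluate(&self) -> i32 {
        self.val
    }
}

impl Expression for Sum {
    fn evaluate(&self) -> i32 {
        self.left.evaluate() + self.right.evaluate()
    }
}

fn main() {
    // (1 + (2 + 3)) as a nested expression tree.
    let e = Sum {
        left: Box::new(Number { val: 1 }),
        right: Box::new(Sum {
            left: Box::new(Number { val: 2 }),
            right: Box::new(Number { val: 3 }),
        }),
    };
    assert_eq!(e.evaluate(), 6);
}
```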
I'm a complete newbie in Rust and I'm trying to get some understanding of the basics of the language.
Consider the following trait
trait Function {
fn value(&self, arg: &[f64]) -> f64;
}
and two structs implementing it:
struct Add {}
struct Multiply {}
impl Function for Add {
fn value(&self, arg: &[f64]) -> f64 {
arg[0] + arg[1]
}
}
impl Function for Multiply {
fn value(&self, arg: &[f64]) -> f64 {
arg[0] * arg[1]
}
}
In my main() function I want to group two instances of Add and Multiply in a vector, and then call the value method. The following works:
fn main() {
let x = vec![1.0, 2.0];
let funcs: Vec<&dyn Function> = vec![&Add {}, &Multiply {}];
for f in funcs {
println!("{}", f.value(&x));
}
}
And so does:
fn main() {
let x = vec![1.0, 2.0];
let funcs: Vec<Box<dyn Function>> = vec![Box::new(Add {}), Box::new(Multiply {})];
for f in funcs {
println!("{}", f.value(&x));
}
}
Is there any better / less verbose way? Can I work around wrapping the instances in a Box? What is the takeaway with trait objects in this case?
Is there any better / less verbose way?
There isn't really a way to make this less verbose. Since you are using trait objects, you need to tell the compiler that the vector's items are dyn Function and not the concrete type. The compiler can't just infer that you meant dyn Function trait objects, because there could be other traits that Add and Multiply both implement.
You can't abstract out the calls to Box::new either. For that to work, you would have to somehow map over a heterogeneous collection, which isn't possible in Rust. However, if you are writing this a lot, you might consider adding helper constructor functions for each concrete impl:
impl Add {
fn new() -> Add {
Add {}
}
fn new_boxed() -> Box<Add> {
Box::new(Add::new())
}
}
It's idiomatic to include a new constructor wherever possible, but it's also common to include alternative convenience constructors.
This makes the construction of the vector a bit less noisy:
let funcs: Vec<Box<dyn Function>> = vec![Add::new_boxed(), Multiply::new_boxed()];
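Putting it all together (with a matching impl for Multiply, which the answer leaves implicit):

```rust
trait Function {
    fn value(&self, arg: &[f64]) -> f64;
}

struct Add {}
struct Multiply {}

impl Function for Add {
    fn value(&self, arg: &[f64]) -> f64 {
        arg[0] + arg[1]
    }
}

impl Function for Multiply {
    fn value(&self, arg: &[f64]) -> f64 {
        arg[0] * arg[1]
    }
}

impl Add {
    fn new() -> Add { Add {} }
    // Convenience constructor that boxes the value right away.
    fn new_boxed() -> Box<Add> { Box::new(Add::new()) }
}

impl Multiply {
    fn new() -> Multiply { Multiply {} }
    fn new_boxed() -> Box<Multiply> { Box::new(Multiply::new()) }
}

fn main() {
    let x = vec![1.0, 2.0];
    // Box<Add> and Box<Multiply> coerce to Box<dyn Function> here.
    let funcs: Vec<Box<dyn Function>> = vec![Add::new_boxed(), Multiply::new_boxed()];
    let results: Vec<f64> = funcs.iter().map(|f| f.value(&x)).collect();
    assert_eq!(results, vec![3.0, 2.0]);
}
```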
What is the takeaway with trait objects in this case?
There is always a small performance hit with using dynamic dispatch. If all of your objects are the same type, they can be densely packed in memory, which can be much faster for iteration. In general, I wouldn't worry too much about this unless you are creating a library crate, or if you really want to squeeze out the last nanosecond of performance.
I'm trying to integrate the cgmath library into my first experiments with glium, but I can't figure out how to pass my Matrix4 object to the draw() call.
My uniforms object is defined thus:
let uniforms = uniform! {
matrix: cgmath::Matrix4::from_scale(0.1)
};
and this is my draw call:
target.draw(&vertex_buffer, &index_slice, &program, &uniforms, &Default::default())
.unwrap();
which fails to compile with the message
error[E0277]: the trait bound `cgmath::Matrix4<{float}>: glium::uniforms::AsUniformValue` is not satisfied
I'm a total beginner with Rust, but I do believe I cannot implement this trait myself, as both it and the Matrix4 type are in a crate separate from mine.
Is there really no better option than to manually convert the matrix into an array of arrays of floats?
I do believe I cannot implement this trait myself, as both it and the Matrix4 type are in a crate separate from mine.
This is very true.
Is there really no better option than to manually convert the matrix into an array of arrays of floats?
Well, you don't have to do a lot manually.
First, it's useful to note that Matrix4<S> implements Into<[[S; 4]; 4]> (I can't link to that impl directly, so you have to use ctrl+f). That means that you can easily convert a Matrix4 into an array which is accepted by glium. Unfortunately, into() only works when the compiler knows exactly what type to convert to. So here is a non-working and a working version:
// Not working, the macro accepts many types, so the compiler can't be sure
let uniforms = uniform! {
matrix: cgmath::Matrix4::from_scale(0.1).into()
};
// Works, because we explicitly mention the type
let matrix: [[f64; 4]; 4] = cgmath::Matrix4::from_scale(0.1).into();
let uniforms = uniform! {
matrix: matrix,
};
But this solution might still be too much to write. When I worked with cgmath and glium, I created a helper trait to reduce the code size even more. This might not be the best solution, but it works and has no obvious downsides (AFAIK).
pub trait ToArr {
type Output;
fn to_arr(&self) -> Self::Output;
}
impl<T: BaseNum> ToArr for Matrix4<T> {
type Output = [[T; 4]; 4];
fn to_arr(&self) -> Self::Output {
(*self).into()
}
}
I hope this code explains itself. With this trait in scope near the draw() call, you can simply write:
let uniforms = uniform! {
matrix: cgmath::Matrix4::from_scale(0.1).to_arr(),
// ^^^^^^^^^
};