How would I add const generics? Let's say I have a type foo:
pub struct foo<const bar: i64> {
    value: f64,
}
and I want to implement Mul so I can multiply two foos together. I want to treat bar as a dimension, so foo<baz>{value: x} * foo<quux>{value: k} == foo<baz + quux>{value: x * k}, as follows:
impl<const baz: i64, const quux: i64> Mul<foo<quux>> for foo<baz> {
    type Output = foo<{baz + quux}>;
    fn mul(self, rhs: foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
I get an error telling me I need to add a where bound on {baz + quux} within the definition of the output type. What exactly does this mean, and how do I implement it? I can't find any relevant information on where.
The solution
I got a variation on your code to work here:
impl<const baz: i64, const quux: i64> Mul<Foo<quux>> for Foo<baz>
where
    Foo<{baz + quux}>: Sized,
{
    type Output = Foo<{baz + quux}>;
    fn mul(self, rhs: Foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
How I got there
I've reproduced the full error that you get without the added where clause below:
error: unconstrained generic constant
--> src/main.rs:11:5
|
11 | type Output = Foo<{baz + quux}>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: try adding a `where` bound using this expression: `where [u8; {baz + quux}]: Sized`
Now, the clause that it suggests is not very useful, for one reason: the length parameter of an array must be a usize, but our values baz and quux (and their sum) are i64. I imagine the compiler authors included that particular suggestion because the primary use case for const generics is embedding array sizes in types. I've opened an issue on GitHub about this diagnostic.
Why is this necessary?
A where clause specifies constraints on some generic code element---a function, type, trait, or in this case, implementation---based on the traits and lifetimes that one or more generic parameters, or a derivative thereof, must satisfy. There are equivalent shorthands for many cases, but the overall requirement is that the constraints are fully specified.
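To illustrate that equivalence between the shorthand and the long form on stable Rust, here is a minimal sketch of my own (the function names are made up): both declarations below express exactly the same constraint.

```rust
use std::fmt::Display;

// inline shorthand bound
fn describe_inline<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// equivalent long-form `where` clause
fn describe_where<T>(value: T) -> String
where
    T: Display,
{
    format!("value = {value}")
}

fn main() {
    assert_eq!(describe_inline(42), "value = 42");
    assert_eq!(describe_where("hi"), "value = hi");
}
```

The `where` form becomes necessary when the constraint involves something other than a bare type parameter, such as the `Foo<{baz + quux}>: Sized` bound above.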
In our case, it may superficially seem that this implementation works for any combination of baz and quux, but that is not the case, due to integer overflow: if we supply sufficiently large values of the same sign for both, their sum cannot be represented as an i64. In other words, i64 is not closed under addition.
The constraint that we add requires that the sum of the two values is in the set of possible values of an i64, indirectly, by requiring something of the type which consumes it. Hence, supplying 2^31 for both baz and quux is not valid, since the resulting type Foo<{baz + quux}> does not exist, so it cannot possibly implement the Sized trait. While this technically is a stricter constraint than we need (Sized is a stronger requirement than a type simply existing), all Foo<bar> which exist implement Sized, so in our case it is the same. On the other hand, without the constraint, no where clause, explicit or shorthand, specifies this constraint.
Related
The Rust docs give the following example of a blanket implementation:
impl<T: Display> ToString for T {
// --snip--
}
My somewhat more complicated trait does not allow me to write a blanket implementation.
/// a fixed size 2D array (WxH) that can be indexed by [usize;2]
pub trait Grid<C: Copy + Eq, const W: usize, const H: usize>:
    IndexMut<[usize; 2], Output = Option<C>>
{
}
// allow to transform one grid implementation into another
// as long as they contain the same elements and have the same size
impl<C, O, I, const W: usize, const H: usize> From<I> for O
where
    O: Default + Grid<C, W, H>,
    I: Grid<C, W, H>,
    C: Copy + Eq,
{
    fn from(input: I) -> O {
        let mut output = O::default();
        for i in 0..W {
            for j in 0..H {
                output[[i, j]] = input[[i, j]];
            }
        }
        output
    }
}
The errors say
the type parameter `C` is not constrained by the impl trait, self type, or predicates
the const parameter `W` is not constrained by the impl trait, self type, or predicates
expressions using a const parameter must map each value to a distinct output value
proving the result of expressions other than the parameter are unique is not supported
It feels like it is constrained via the O: Grid<Player, W, H> but that doesn't seem to be the right constraint.
The errors around the const generic parameters are secondary (a red herring?). When I replace them with constants, the error around C (the element type) remains.
The problem is that your parameters C, W, and H could each have more than one value, as far as the compiler is concerned. Consider what happens if I implements both Grid<Foo, 1, 2> and Grid<Foo, 500, 50>: it is ambiguous which Grid implementation your blanket From implementation should use. (C can't actually have multiple values for a given O or I, but the compiler doesn't reason that out.)
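A minimal sketch of that ambiguity (the Flexible type is hypothetical): nothing stops a single type from implementing a multi-parameter trait several times, so a blanket impl generic over C, W, and H has no way to pick one.

```rust
// A multi-parameter trait can be implemented several times for one type,
// with different parameter values each time.
trait Grid<C, const W: usize, const H: usize> {
    fn dims(&self) -> (usize, usize) {
        (W, H)
    }
}

struct Flexible;

// Both implementations are legal simultaneously.
impl Grid<u8, 1, 2> for Flexible {}
impl Grid<u8, 500, 50> for Flexible {}

fn main() {
    // The caller has to disambiguate explicitly:
    assert_eq!(<Flexible as Grid<u8, 1, 2>>::dims(&Flexible), (1, 2));
    assert_eq!(<Flexible as Grid<u8, 500, 50>>::dims(&Flexible), (500, 50));
}
```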
The solution to that problem is to change your trait's generics to associated types (and associated constants), which means that the trait cannot be implemented more than once for the same self type:
use std::ops::IndexMut;

/// a fixed size 2D array (WxH) that can be indexed by [usize;2]
pub trait Grid: IndexMut<[usize; 2], Output = Option<Self::Component>> {
    type Component: Copy + Eq;
    const WIDTH: usize;
    const HEIGHT: usize;
}

// allow to transform one grid implementation into another
// as long as they contain the same elements and have the same size
impl<O, I> From<I> for O
where
    O: Default + Grid,
    O::Component: Copy,
    I: Grid<Component = O::Component, WIDTH = O::WIDTH, HEIGHT = O::HEIGHT>,
{
    fn from(input: I) -> O {
        let mut output = O::default();
        for i in 0..O::WIDTH {
            for j in 0..O::HEIGHT {
                output[[i, j]] = input[[i, j]];
            }
        }
        output
    }
}
However, this still will not compile, for several reasons:
You can't write a From impl for any two types that meet some trait bounds, because it might overlap with some other crate's From impl for any two types that meet different bounds — and it always overlaps with the blanket From<T> for T in the standard library.
In general, you can only implement From<TypeYouDefined> for T or From<T> for TypeYouDefined — not a fully generic, blanket From impl. You can define your own conversion trait, though.
You can't actually write a bound that two traits' associated constants are equal, as my code suggests — the compiler assumes it must be a type constraint. I don't know if there's a solution to that problem.
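Putting those points together, here is a sketch of a variant that does compile, under two assumptions of mine: the conversion lives on a custom trait (hypothetically named FromGrid) instead of From, and the dimension-equality bound is dropped entirely, iterating over the output's own dimensions. TinyGrid is a toy implementor I made up for demonstration.

```rust
use std::ops::{Index, IndexMut};

pub trait Grid: IndexMut<[usize; 2], Output = Option<Self::Component>> {
    type Component: Copy + Eq;
    const WIDTH: usize;
    const HEIGHT: usize;
}

/// Hypothetical conversion trait; being our own trait, it avoids the
/// overlap with std's blanket `impl From<T> for T`.
pub trait FromGrid<I> {
    fn from_grid(input: &I) -> Self;
}

impl<O, I> FromGrid<I> for O
where
    O: Default + Grid,
    I: Grid<Component = O::Component>,
{
    fn from_grid(input: &I) -> O {
        let mut output = O::default();
        // We cannot express "I's dimensions equal O's" as a bound,
        // so we simply iterate over O's dimensions.
        for i in 0..O::WIDTH {
            for j in 0..O::HEIGHT {
                output[[i, j]] = input[[i, j]];
            }
        }
        output
    }
}

// A toy 2x2 grid of u8 values.
#[derive(Default)]
struct TinyGrid {
    cells: [[Option<u8>; 2]; 2],
}

impl Index<[usize; 2]> for TinyGrid {
    type Output = Option<u8>;
    fn index(&self, [i, j]: [usize; 2]) -> &Option<u8> {
        &self.cells[i][j]
    }
}

impl IndexMut<[usize; 2]> for TinyGrid {
    fn index_mut(&mut self, [i, j]: [usize; 2]) -> &mut Option<u8> {
        &mut self.cells[i][j]
    }
}

impl Grid for TinyGrid {
    type Component = u8;
    const WIDTH: usize = 2;
    const HEIGHT: usize = 2;
}

fn main() {
    let mut a = TinyGrid::default();
    a[[0, 1]] = Some(7);
    let b = TinyGrid::from_grid(&a);
    assert_eq!(b[[0, 1]], Some(7));
    assert_eq!(b[[0, 0]], None);
}
```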
From what I understand, when x implements trait Foo,
the following two lines should be equivalent.
x.foo();
Foo::foo(&x);
However, I am facing a problem where the compiler accepts the first one, and rejects the second one, with a rather strange error message.
As usual, this example is available on the playground.
Consider the following two related traits.
pub trait Bar<'a> {
    type BarError: Into<MyError>;
    fn bar(&self) -> Result<(), Self::BarError>;
}

pub trait Foo: for<'a> Bar<'a> {
    type FooError: Into<MyError>;
    fn foo(&self) -> Result<(), Self::FooError>
    where
        for<'a> <Self as Bar<'a>>::BarError: Into<<Self as Foo>::FooError>;
}
This example is a bit complex, but I do need the lifetime parameter on Bar, and I can't have it on Foo. As a consequence:
I have to resort to Higher-Rank Trait Bounds (HRTBs);
I cannot rely on Bar::BarError in Foo (there are actually an infinite number of types Bar<'_>::BarError), so Foo must have its own FooError;
and so I need the complex trait bound on the foo method to convert BarErrors into FooErrors.
Now, let's implement Bar and Foo for a concrete type, e.g. Vec<i32>.
impl<'a> Bar<'a> for Vec<i32> {
    type BarError = Never;
    fn bar(&self) /* ... */
}

impl Foo for Vec<i32> {
    type FooError = Never;
    fn foo(&self) /* ... */
}
Note that Never is an empty enum, indicating that these implementations never fail. In order to comply with the trait definitions, From<Never> is implemented for MyError.
We can now demonstrate the problem: the following compiles like a charm.
let x = vec![1, 2, 3];
let _ = x.foo();
But the following does not.
let x = vec![1, 2, 3];
let _ = Foo::foo(&x);
The error message says:
error[E0271]: type mismatch resolving `<std::vec::Vec<i32> as Foo>::FooError == MyError`
--> src/main.rs:49:13
|
49 | let _ = Foo::foo(&x);
| ^^^^^^^^ expected enum `Never`, found struct `MyError`
|
= note: expected type `Never`
found type `MyError`
The compiler seems to believe that I wrote something like this (NB: this is not correct Rust, but just to give the idea).
let _ = Foo::<FooError=MyError>::foo(&x);
And this does not work because x implements Foo<FooError=Never>.
Why does the compiler add this additional constraint? Is it a bug? If not, is it possible to write this differently so that it compiles?
NB: you may wonder why I don't just stick to the first version (x.foo()). In my actual situation, foo is actually named retain, which is also the name of a method on Vec. So I must use the second form to avoid the ambiguity.
NB2: if I remove the HRTB in the declaration of method foo, both lines compile. But then I cannot call Bar::bar in any implementation of Foo::foo, which is not an option for me. And changing foo to something like fn foo<'a>(&'a self) -> Result<(), <Self as Bar<'a>>::BarError> is not an option either, unfortunately.
From what I understand, when x implements trait Foo, the following two lines should be equivalent.
x.foo();
Foo::foo(&x);
This is true for an inherent method (one that is defined on the type of x itself), but not for a trait method. In your case the equivalent is <Vec<i32> as Foo>::foo(&x);.
Here is a playground link
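To illustrate the difference, here is a minimal sketch of my own (the MyLen trait is made up) where an inherent method and a trait method collide, as with retain on Vec; the fully qualified form picks the trait method unambiguously.

```rust
trait MyLen {
    fn len(&self) -> usize;
}

impl MyLen for Vec<i32> {
    // hypothetical: our trait's notion of "length" counts non-zero elements
    fn len(&self) -> usize {
        self.iter().filter(|&&x| x != 0).count()
    }
}

fn main() {
    let v = vec![1, 0, 3];
    assert_eq!(v.len(), 3);                      // inherent Vec::len wins
    assert_eq!(<Vec<i32> as MyLen>::len(&v), 2); // fully qualified trait call
}
```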
I noticed that Box<T> implements everything that T implements and can be used transparently. For Example:
let mut x: Box<Vec<u8>> = Box::new(Vec::new());
x.push(5);
I would like to be able to do the same.
This is one use case:
Imagine I'm writing functions that operate on an X axis and a Y axis. I use numeric values to move along those axes, but each value belongs to only one of the two axes.
I would like the compiler to fail if I attempt an operation with a value that doesn't belong to the right axis.
Example:
let x = AxisX(5);
let y = AxisY(3);
let result = x + y; // error: incompatible types
I can do this by making a struct that will wrap the numbers:
struct AxisX(i32);
struct AxisY(i32);
But that won't give me access to all the methods that i32 provides like abs(). Example:
x.abs() + 3 // error: abs() does not exist
// ...maybe another error because I don't implement the addition...
Another possible use case:
You can take a struct from another library and implement or derive anything extra you want on it. For example, a struct that doesn't derive Debug could be wrapped in a newtype that adds a Debug implementation.
You are looking for std::ops::Deref:
In addition to being used for explicit dereferencing operations with the (unary) * operator in immutable contexts, Deref is also used implicitly by the compiler in many circumstances. This mechanism is called 'Deref coercion'. In mutable contexts, DerefMut is used.
Further:
If T implements Deref<Target = U>, and x is a value of type T, then:
In immutable contexts, *x on non-pointer types is equivalent to *Deref::deref(&x).
Values of type &T are coerced to values of type &U
T implicitly implements all the (immutable) methods of the type U.
For more details, visit the chapter in The Rust Programming Language as well as the reference sections on the dereference operator, method resolution and type coercions.
By implementing Deref it will work:
use std::ops::Deref;

impl Deref for AxisX {
    type Target = i32;

    fn deref(&self) -> &i32 {
        &self.0
    }
}
x.abs() + 3
You can see this in action on the Playground.
However, if you call functions from your underlying type (i32 in this case), the return type will remain the underlying type. Therefore
assert_eq!(AxisX(10).abs() + AxisY(20).abs(), 30);
will pass. To solve this, you may override the methods you need:
impl AxisX {
    pub fn abs(&self) -> Self {
        // *self gets you `AxisX`
        // **self dereferences to i32
        AxisX((**self).abs())
    }
}
With this, the above code fails. Take a look at it in action.
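For reference, here is a self-contained version of the AxisX sketch, combining the Deref impl with the overridden abs (the pow call is my own addition, showing that other i32 methods still come through the coercion):

```rust
use std::ops::Deref;

struct AxisX(i32);

impl Deref for AxisX {
    type Target = i32;
    fn deref(&self) -> &i32 {
        &self.0
    }
}

impl AxisX {
    // Shadow i32::abs so the wrapper type is preserved.
    pub fn abs(&self) -> Self {
        AxisX((**self).abs())
    }
}

fn main() {
    let x = AxisX(-5);
    assert_eq!(*x.abs(), 5); // our abs returns an AxisX
    assert_eq!(x.pow(2), 25); // other i32 methods still work via Deref
}
```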
I have a type defined as follows (by another crate):
trait Bar {
    type Baz;
}

struct Foo<B: Bar, T> {
    baz: B::Baz,
    t: ::std::marker::PhantomData<T>,
}
The type parameter T serves to encode some data at compile-time, and no instances of it will ever exist.
I would like to store a number of Foos, all with the same B, but with different Ts, in a Vec. Any time I am adding to or removing from this Vec I will know the proper T for the item in question by other means.
I know I could have a Vec<Box<Any>>, but do not want to incur the overhead of dynamic dispatch here.
I decided to make it a Vec<Foo<B, ()>>, and transmute to the proper type whenever necessary. However, to my surprise, a function like the following is not allowed:
unsafe fn change_t<B: Bar, T, U>(foo: Foo<B, T>) -> Foo<B, U> {
    ::std::mem::transmute(foo)
}
This gives the following error:
error[E0512]: transmute called with types of different sizes
--> src/main.rs:13:5
|
13 | ::std::mem::transmute(foo)
| ^^^^^^^^^^^^^^^^^^^^^
|
= note: source type: Foo<B, T> (size can vary because of <B as Bar>::Baz)
= note: target type: Foo<B, U> (size can vary because of <B as Bar>::Baz)
I find this very confusing, as both types have the same B, so their B::Bazes must also be the same, and the type T should not affect the type's layout whatsoever. Why is this not allowed?
It's been brought to my attention that this sort of transmute results in an error even when the type parameter T is not present at all!
trait Bar {
    type Baz;
}

struct Foo<B: Bar> {
    baz: B::Baz,
}

unsafe fn change_t<B: Bar>(foo: B::Baz) -> B::Baz {
    ::std::mem::transmute(foo)
}
playground
Bizarrely, it is not possible to transmute even between B::Baz and B::Baz. Unless there is some extremely subtle reasoning that makes this unsafe, this seems very much like a compiler bug.
It seems that transmute's size compatibility can't be verified at compile time for generic types like these, so you have to transmute references instead:
unsafe fn change_t<B: Bar>(foo: &B::Baz) -> &B::Baz {
    ::std::mem::transmute(foo)
}
This related question contains some interesting details about transmuting.
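If you do need the by-value conversion, one possible workaround is a raw-pointer cast plus ptr::read, which sidesteps transmute's compile-time size check. This is only a sketch: with the default repr(Rust), the compiler does not formally guarantee that the two instantiations share a layout, so the caller takes responsibility for that (here only a PhantomData parameter differs, and the Meters/Feet/IntBar types are made up for the demonstration).

```rust
use std::marker::PhantomData;

trait Bar {
    type Baz;
}

struct Foo<B: Bar, T> {
    baz: B::Baz,
    t: PhantomData<T>,
}

// Reinterpret Foo<B, T> as Foo<B, U> via a raw-pointer cast.
// Safety: the caller must guarantee the two types share a layout.
unsafe fn change_t<B: Bar, T, U>(foo: Foo<B, T>) -> Foo<B, U> {
    let out = std::ptr::read(&foo as *const Foo<B, T> as *const Foo<B, U>);
    std::mem::forget(foo); // avoid double-dropping `baz`
    out
}

struct Meters;
struct Feet;

struct IntBar;
impl Bar for IntBar {
    type Baz = i64;
}

fn main() {
    let m: Foo<IntBar, Meters> = Foo { baz: 42, t: PhantomData };
    let f: Foo<IntBar, Feet> = unsafe { change_t(m) };
    assert_eq!(f.baz, 42);
}
```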
I released the cluFullTransmute crate (GitHub repository), which solves this problem, though it requires a nightly compiler. Give it a try.
I made a two element Vector struct and I want to overload the + operator.
I made all my functions and methods take references, rather than values, and I want the + operator to work the same way.
impl Add for Vector {
    fn add(&self, other: &Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
Depending on which variation I try, I either get lifetime problems or type mismatches. Specifically, the &self argument seems to not get treated as the right type.
I have seen examples with template arguments on impl as well as Add, but they just result in different errors.
I found How can an operator be overloaded for different RHS types and return values? but the code in the answer doesn't work even if I put a use std::ops::Mul; at the top.
I am using rustc 1.0.0-nightly (ed530d7a3 2015-01-16 22:41:16 +0000)
I won't accept "you only have two fields, why use a reference" as an answer; what if I wanted a 100-element struct? I will accept an answer demonstrating that even with a large struct I should be passing by value, if that is the case (I don't think it is, though). I am interested in a good rule of thumb for struct size and passing by value vs. by reference, but that is not the current question.
You need to implement Add on &Vector rather than on Vector.
impl<'a, 'b> Add<&'b Vector> for &'a Vector {
    type Output = Vector;

    fn add(self, other: &'b Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
In its definition, Add::add always takes self by value. But references are types like any other1, so they can implement traits too. When a trait is implemented on a reference type, the type of self is a reference; the reference is passed by value. Normally, passing by value in Rust implies transferring ownership, but when references are passed by value, they're simply copied (or reborrowed/moved if it's a mutable reference), and that doesn't transfer ownership of the referent (because a reference doesn't own its referent in the first place). Considering all this, it makes sense for Add::add (and many other operators) to take self by value: if you need to take ownership of the operands, you can implement Add on structs/enums directly, and if you don't, you can implement Add on references.
Here, self is of type &'a Vector, because that's the type we're implementing Add on.
Note that I also specified the RHS type parameter with a different lifetime to emphasize the fact that the lifetimes of the two input parameters are unrelated.
1 Actually, reference types are special in that you can implement traits for references to types defined in your crate (i.e. if you're allowed to implement a trait for T, then you're also allowed to implement it for &T). &mut T and Box<T> have the same behavior, but that's not true in general for U<T> where U is not defined in the same crate.
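For completeness, here is a self-contained version of the reference-based impl, with a minimal Vector definition assumed (the derives are mine, added so the assertions can compare values):

```rust
use std::ops::Add;

#[derive(Debug, PartialEq)]
struct Vector {
    x: f64,
    y: f64,
}

impl<'a, 'b> Add<&'b Vector> for &'a Vector {
    type Output = Vector;

    fn add(self, other: &'b Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

fn main() {
    let a = Vector { x: 1.0, y: 2.0 };
    let b = Vector { x: 3.0, y: 4.0 };
    let sum = &a + &b; // neither operand is consumed
    assert_eq!(sum, Vector { x: 4.0, y: 6.0 });
    assert_eq!(a.x, 1.0); // `a` is still usable afterwards
}
```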
If you want to support all scenarios, you must support all the combinations:
&T op U
T op &U
&T op &U
T op U
In Rust itself, this was done through an internal macro.
Luckily, there is a Rust crate, impl_ops, whose impl_op_ex! macro writes that boilerplate for us, generating all the combinations.
Here is their sample:
#[macro_use] extern crate impl_ops;
use std::ops;

impl_op_ex!(+ |a: &DonkeyKong, b: &DonkeyKong| -> i32 { a.bananas + b.bananas });

fn main() {
    let total_bananas = &DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = &DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
}
Even better, they have an impl_op_ex_commutative! macro that will also generate the operators with the operands reversed, if your operator happens to be commutative.
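If you'd rather not pull in a crate, the same boilerplate can be hand-rolled: make the reference-reference impl canonical and forward the other three combinations to it. A sketch using the same DonkeyKong example:

```rust
use std::ops::Add;

struct DonkeyKong {
    bananas: i32,
}

impl DonkeyKong {
    fn new(bananas: i32) -> Self {
        DonkeyKong { bananas }
    }
}

// The reference-reference case holds the real logic...
impl Add<&DonkeyKong> for &DonkeyKong {
    type Output = i32;
    fn add(self, rhs: &DonkeyKong) -> i32 {
        self.bananas + rhs.bananas
    }
}

// ...and the other three combinations forward to it.
impl Add<DonkeyKong> for &DonkeyKong {
    type Output = i32;
    fn add(self, rhs: DonkeyKong) -> i32 {
        self + &rhs
    }
}

impl Add<&DonkeyKong> for DonkeyKong {
    type Output = i32;
    fn add(self, rhs: &DonkeyKong) -> i32 {
        &self + rhs
    }
}

impl Add<DonkeyKong> for DonkeyKong {
    type Output = i32;
    fn add(self, rhs: DonkeyKong) -> i32 {
        &self + &rhs
    }
}

fn main() {
    assert_eq!(&DonkeyKong::new(2) + &DonkeyKong::new(4), 6);
    assert_eq!(&DonkeyKong::new(2) + DonkeyKong::new(4), 6);
    assert_eq!(DonkeyKong::new(2) + &DonkeyKong::new(4), 6);
    assert_eq!(DonkeyKong::new(2) + DonkeyKong::new(4), 6);
}
```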