Rust adding associated type with u32

I have an associated type MyType.
This type is going to be an unsigned integer, but I use an associated type because the size of the unsigned integer required for this variable may change in the future. So MyType is going to be one of u32, u64, or u128.
So MyType will look like this when defined: MyType = u32 (though of course it may not be u32).
In my code, I need to increment a variable of type MyType by one,
so I have to do this: let n: MyType = v + 1, where v is of type MyType.
How can I do this, and what trait bounds should MyType have?
I would want something like this: type MyType: UnsignedInt, but the problem is that there is no number trait in Rust, as far as I have seen.

Your description is rather vague and it would be much easier if you added a code example, but going by the phrase "associated type" I tried to reconstruct a minimal example:
trait Incrementor {
    type MyType;

    fn increment(&self, value: Self::MyType) -> Self::MyType {
        value + 1
    }
}

struct U32Incrementor;

impl Incrementor for U32Incrementor {
    type MyType = u32;
}

fn main() {
    let incrementor = U32Incrementor;
    println!("{}", incrementor.increment(10));
}
error[E0369]: cannot add `{integer}` to `<Self as Incrementor>::MyType`
--> src/main.rs:5:15
|
5 | value + 1
| ----- ^ - {integer}
| |
| <Self as Incrementor>::MyType
|
= note: the trait `std::ops::Add` is not implemented for `<Self as Incrementor>::MyType`
Is that about the problem you are having?
If yes, does this help?
use num_traits::{FromPrimitive, One, Unsigned};

trait Incrementor {
    type MyType: Unsigned + FromPrimitive;

    fn increment(&self, value: Self::MyType) -> Self::MyType {
        value + Self::MyType::one()
    }

    fn increment_ten(&self, value: Self::MyType) -> Self::MyType {
        value + Self::MyType::from_u8(10).unwrap()
    }
}

struct U32Incrementor;

impl Incrementor for U32Incrementor {
    type MyType = u32;
}

fn main() {
    let incrementor = U32Incrementor;
    println!("{}", incrementor.increment(10));
    println!("{}", incrementor.increment_ten(10));
}
11
20
It's based on the excellent num_traits crate.

The + operator behavior is specified by the trait std::ops::Add. This trait is generic over its output type, so if you want for example MyType to have the semantics MyType + MyType = MyType you can write:
trait MyTrait {
    type MyType: std::ops::Add<Output = Self::MyType>;
}
All integers will implement this. If you need additional operators you can use the traits from std::ops, but with multiple bounds this can become tedious. The crate num-traits can help you with pre-declared traits that bundle all the required ops and more. For example, the trait NumOps specifies all arithmetic operators, and there are further traits such as Num, which adds equality and zero/one, or PrimInt, which specifies basically every operation and method that Rust's integers have.
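As an illustration, here is a minimal sketch of what such a bound could look like with num-traits; the trait and method names (Counter, next) are made up for the example, and num-traits is assumed to be a dependency in Cargo.toml:

use num_traits::PrimInt;

trait Counter {
    // PrimInt bundles the arithmetic, bitwise and comparison operators
    // of the primitive integer types, plus Zero/One via its Num supertrait.
    type MyType: PrimInt;

    fn next(&self, value: Self::MyType) -> Self::MyType {
        // `one()` gives us a generic `1` of the right type.
        value + Self::MyType::one()
    }
}

struct U64Counter;

impl Counter for U64Counter {
    type MyType = u64;
}

fn main() {
    println!("{}", U64Counter.next(41)); // prints 42
}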

Related

From and Into with binary std::ops: cannot infer type for type parameter `T`?

When I do
seq += u64::from(rhs);
Everything works. But I'd prefer the rhs.into() syntax, and with that I'm currently getting:
error[E0283]: type annotations needed
--> src/sequence.rs:50:5
|
19 | seq += rhs.into();
| ^^ ---------- this method call resolves to `T`
| |
| cannot infer type for type parameter `T`
|
= note: cannot satisfy `_: Into<u64>`
= note: required because of the requirements on the impl of `AddAssign<_>` for `Sequence`
This .into() syntax normally works. Why doesn't type inference work on a binary operator like += such that, if the LHS only implements AddAssign<u64>, the RHS will coerce? And moreover, aside from using from, what is the syntax (if it is possible at all) to provide the type information that .into needs? I've tried things like .into::<u64>(rhs) and that also doesn't work.
I am implementing AddAssign like this,
impl<T: Into<u64>> AddAssign<T> for Sequence {
    fn add_assign(&mut self, rhs: T) { ... }
}
And From like this,
impl From<Sequence> for u64 {
    fn from(seq: Sequence) -> u64 { ... }
}
You have a double Into indirection, probably by mistake. Since your type already implements AddAssign<T> where T: Into<u64>, there is no need to add .into() to your right-hand operand. The implementation of add_assign (not provided in your example) is expected to call into underneath.
seq += rhs;
In fact, adding it only introduces ambiguity, because the compiler is then being told to call Into<X>::into(rhs) for some type X that is never mentioned nor constrained anywhere. The only constraint would be Into<u64>, but multiple types fulfill it.
A complete example:
use std::ops::AddAssign;

struct Sequence;

impl<T: Into<u64>> AddAssign<T> for Sequence {
    fn add_assign(&mut self, rhs: T) {
        let value: u64 = rhs.into();
        // use value
    }
}

fn main() {
    let mut x = Sequence;
    x += 6_u32;
}
what is the syntax (if possible) to provide this type information to .into that the compiler needs?
Again, this is not needed. But it would be possible with the so-called fully qualified syntax.
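For illustration only, here is a minimal sketch of what that could look like; the Sequence newtype around u64 is made up so the snippet is self-contained:

use std::ops::AddAssign;

struct Sequence(u64);

impl<T: Into<u64>> AddAssign<T> for Sequence {
    fn add_assign(&mut self, rhs: T) {
        self.0 += rhs.into();
    }
}

fn main() {
    let mut seq = Sequence(0);
    let rhs = 6_u32;

    // Fully qualified syntax: spell out which Into impl should be used.
    // This is redundant here, because `seq += rhs;` already compiles.
    seq += Into::<u64>::into(rhs);
    println!("{}", seq.0); // 6
}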
See also:
Why can't Rust infer the resulting type of Iterator::sum?

How can I create a trait/type to unify iterating over some set of integers from either a Range or a Vec?

I need the trait XYZ to define a method that allows iterating over some set of integers. This set of integers is defined either by a backing Vec or by a Range<usize>. However, I run into various (lifetime or type) issues depending on how I define the XYZIterator type that is supposed to unify these Iterators over Vec/Range.
The backup solution would be to allocate and return Vecs, but I wondered whether there was a way without cloning/allocating memory.
type XYZIterator = Box<dyn Iterator<Item = usize>>;

trait XYZ {
    fn stuff(&self) -> XYZIterator;
}

struct Test {
    objects: Vec<usize>,
}

impl XYZ for Test {
    fn stuff(&self) -> XYZIterator {
        Box::new(self.objects.iter())
    }
}

struct Test2 {}

impl XYZ for Test2 {
    fn stuff(&self) -> XYZIterator {
        Box::new((1..4).into_iter())
    }
}

fn main() {
    let t1 = Test {
        objects: vec![1, 2, 3],
    };
    let t2 = Test2 {};

    t1.stuff().for_each(|x| println!("{}", x));
    t2.stuff().for_each(|x| println!("{}", x));

    t1.stuff()
        .filter(|x| x % 2 == 0)
        .for_each(|x| println!("{}", x));
    t2.stuff()
        .filter(|x| x % 2 == 0)
        .for_each(|x| println!("{}", x));
}
error[E0271]: type mismatch resolving `<std::slice::Iter<'_, usize> as Iterator>::Item == usize`
--> src/main.rs:12:9
|
12 | Box::new(self.objects.iter())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `usize`, found reference
|
= note: expected type `usize`
found reference `&usize`
= note: required for the cast to the object type `dyn Iterator<Item = usize>`
Your code has two issues:
In the implementation of XYZ for Test, you return the iterator self.objects.iter(). Vec::iter iterates over references to the elements, not the elements themselves, so this is an iterator over &usize, which doesn't match the return type; that is the error you are seeing. It's easy to fix, though: self.objects.iter().copied() will copy each element out of its reference.
In type XYZIterator = Box<dyn Iterator<Item = usize>>;, since there is no lifetime on the trait object, it defaults to 'static, meaning your iterator must be able to live forever. That's not the case with the vector iterator: it holds a reference to the vector it is iterating over. This is where your lifetime issues come from.
The solution is to give the XYZIterator type a lifetime:
type XYZIterator<'a> = Box<dyn Iterator<Item = usize> + 'a>;
And alter the traits and trait implementations to use the lifetime.
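A minimal sketch of the adjusted code, keeping the names from the question:

type XYZIterator<'a> = Box<dyn Iterator<Item = usize> + 'a>;

trait XYZ {
    // The returned iterator may borrow from `self`, hence the lifetime link.
    fn stuff(&self) -> XYZIterator<'_>;
}

struct Test {
    objects: Vec<usize>,
}

impl XYZ for Test {
    fn stuff(&self) -> XYZIterator<'_> {
        // `copied()` turns the `&usize` items into `usize`.
        Box::new(self.objects.iter().copied())
    }
}

struct Test2 {}

impl XYZ for Test2 {
    fn stuff(&self) -> XYZIterator<'_> {
        Box::new(1..4)
    }
}

fn main() {
    let t1 = Test { objects: vec![1, 2, 3] };
    let t2 = Test2 {};

    t1.stuff().for_each(|x| println!("{}", x));
    t2.stuff()
        .filter(|x| x % 2 == 0)
        .for_each(|x| println!("{}", x));
}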
Also consider altering your type or function to accept any T: Iterator<Item = usize>; it will then accept any iterator that produces usizes, as in the sketch below.
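A small sketch of that alternative; the function name print_evens is made up for the example:

// Accepts any iterator of usize without boxing or allocation.
fn print_evens(iter: impl Iterator<Item = usize>) {
    iter.filter(|x| x % 2 == 0).for_each(|x| println!("{}", x));
}

fn main() {
    print_evens(vec![1, 2, 3].into_iter());
    print_evens(1..4);
}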

Coercing Arc<Mutex<Option<Box<MyStruct>>>> to Arc<Mutex<Option<Box<dyn Trait>>>> won't work

I'm trying to store a dyn trait inside an Arc<Mutex<Option<Box<dyn A>>>>, however for some reason it won't work:
use std::sync::{Arc, Mutex};

trait A {}

struct B {}
impl A for B {}

struct H {
    c: Arc<Mutex<Option<Box<dyn A>>>>,
}

fn main() {
    let c = Arc::new(Mutex::new(Some(Box::new(B {}))));
    H { c: c };
}
Error:
error[E0308]: mismatched types
--> src/main.rs:17:12
|
17 | c: c
| ^ expected trait object `dyn A`, found struct `B`
|
= note: expected struct `Arc<Mutex<Option<Box<(dyn A + 'static)>>>>`
found struct `Arc<Mutex<Option<Box<B>>>>`
Playground
It looks like it cannot store a Box<B> as a Box<dyn A>, which is strange because this works:
fn main() {
    let c: Arc<Mutex<Option<Box<dyn A>>>> =
        Arc::new(Mutex::new(Some(Box::new(B {}))));
}
What's the difference?
What's the difference?
There is a very special case for Box and other standard library types that can contain dynamically-sized values like dyn A.
let c = Arc::new(Mutex::new(Some(Box::new(B{}))));
H { c: c };
In this code, you have initialized the variable c, with no type declaration, to a value whose type is inferred as Arc<Mutex<Option<Box<B>>>>, and then try to store it in a field of type Arc<Mutex<Option<Box<dyn A>>>>. This cannot work, because the two types have different memory layouts.
let c: Arc<Mutex<Option<Box<dyn A>>>> =
Arc::new(Mutex::new(Some(Box::new(B{}))));
In this code, you have given c a type declaration, as a consequence of which the need for dyn is known at the point where the value is constructed, which allows the coercion to happen soon enough. You can coerce a Box<B> to a Box<dyn A> because Box implements the special trait CoerceUnsized. (The same mechanism applies to converting &B to &dyn A.) But this does not apply to arbitrary types containing a Box<B>, not even Option<Box<B>>, let alone your more complex type.
You can give c a type when you're constructing it:
let c: Arc<Mutex<Option<Box<dyn A>>>> = Arc::new(Mutex::new(Some(Box::new(B{}))));
H { c: c };
Or, slightly shorter but odder, you can annotate just the immediate container of the Box with the type it needs:
let c = Arc::new(Mutex::new(Some::<Box<dyn A>>(Box::new(B{}))));
H { c: c };
Or you can write an explicit coercion with the as operator:
let c = Arc::new(Mutex::new(Some(Box::new(B{}) as Box<dyn A>)));
H { c: c };

"Unconstrained generic constant" when adding const generics

How would I add const generics? Let's say I have a type foo:
pub struct foo<const bar: i64> {
    value: f64,
}
and I want to implement mul so I can multiply 2 foos together. I want to treat bar as a dimension, so foo<baz>{value: x} * foo<quux>{value: k} == foo<baz + quux>{value: x * k}, as follows:
impl<const baz: i64, const quux: i64> Mul<foo<quux>> for foo<baz> {
    type Output = foo<{baz + quux}>;

    fn mul(self, rhs: foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
I get an error telling me I need to add a where bound on {baz+quux} within the definition of the output type. What exactly does this mean and how do I implement it? I can't find any seemingly relevant information on where.
The solution
I got a variation on your code to work here:
impl<const baz: i64, const quux: i64> Mul<Foo<quux>> for Foo<baz>
where
    Foo<{baz + quux}>: Sized,
{
    type Output = Foo<{baz + quux}>;

    fn mul(self, rhs: Foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
How I got there
I've reproduced the full error that you get without the added where clause below:
error: unconstrained generic constant
--> src/main.rs:11:5
|
11 | type Output = Foo<{baz + quux}>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: try adding a `where` bound using this expression: `where [u8; {baz + quux}]: Sized`
Now, the clause that it suggests is not very useful, for one reason: the length parameter of an array must be a usize, but our values baz and quux (and their sum) are i64. I'd imagine that the compiler authors included that particular suggestion because the primary use case for const generics is embedding array sizes in types. I've opened an issue on GitHub about this diagnostic.
Why is this necessary?
A where clause specifies constraints on some generic code element (a function, type, trait, or, in this case, an implementation) based on the traits and lifetimes that one or more generic parameters, or a derivative thereof, must satisfy. There are equivalent shorthands for many cases, but the overall requirement is that the constraints are fully specified.
In our case, it may seem superficially that this implementation works for any combination of baz and quux, but this is not the case, due to integer overflow; if we supply sufficiently large values of the same sign for both, their sum cannot be represented by i64. This means that i64 is not closed under addition.
The constraint that we add requires that the sum of the two values is in the set of possible values of an i64, indirectly, by requiring something of the type which consumes it. Hence, supplying i64::MAX for both baz and quux is not valid, since the resulting type Foo<{baz + quux}> does not exist, so it cannot possibly implement the Sized trait. While this technically is a stricter constraint than we need (Sized is a stronger requirement than a type simply existing), all Foo<bar> which exist implement Sized, so in our case it is the same. On the other hand, without the constraint, no where clause, explicit or shorthand, specifies this requirement.
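For reference, a minimal sketch of the whole thing in use. This assumes a nightly compiler with the generic_const_exprs feature enabled (which the {baz + quux} expression relies on), uses upper-case const parameter names to avoid style warnings, and the dimension values in main are made up for illustration:

#![feature(generic_const_exprs)]

use std::ops::Mul;

pub struct Foo<const BAR: i64> {
    value: f64,
}

impl<const BAZ: i64, const QUUX: i64> Mul<Foo<QUUX>> for Foo<BAZ>
where
    Foo<{ BAZ + QUUX }>: Sized,
{
    type Output = Foo<{ BAZ + QUUX }>;

    fn mul(self, rhs: Foo<QUUX>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}

fn main() {
    // Treat the const parameter as a dimension, e.g. length (1) and inverse time (-1).
    let length = Foo::<1> { value: 3.0 };
    let inverse_time = Foo::<{ -1 }> { value: 2.0 };

    // The dimensions add up in the type: the product has type Foo<0>.
    let product = length * inverse_time;
    println!("{}", product.value);
}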

Why does the compiler treat those two equivalent(?) lines differently?

From what I understand, when x implements trait Foo,
the following two lines should be equivalent.
x.foo();
Foo::foo(&x);
However, I am facing a problem where the compiler accepts the first one, and rejects the second one, with a rather strange error message.
As usual, this example is available on the playground.
Consider the following two related traits.
pub trait Bar<'a> {
    type BarError: Into<MyError>;

    fn bar(&self) -> Result<(), Self::BarError>;
}

pub trait Foo: for<'a> Bar<'a> {
    type FooError: Into<MyError>;

    fn foo(&self) -> Result<(), Self::FooError>
    where
        for<'a> <Self as Bar<'a>>::BarError: Into<<Self as Foo>::FooError>;
}
This example is a bit complex, but I do need the lifetime parameter on Bar, and I can't have it on Foo. As a consequence:
I have to resort to Higher-Rank Trait Bounds (HRTBs);
I cannot rely on Bar::BarError in Foo (there are actually infinitely many types Bar<'_>::BarError), so Foo must have its own FooError;
and so I need the complex trait bound in the foo method to convert BarErrors to FooErrors.
Now, let's implement Bar and Foo for a concrete type, e.g. Vec<i32>.
impl<'a> Bar<'a> for Vec<i32> {
    type BarError = Never;

    fn bar(&self) /* ... */
}

impl Foo for Vec<i32> {
    type FooError = Never;

    fn foo(&self) /* ... */
}
Note that Never is an empty enum, indicating that these implementations never fail. In order to comply with the trait definitions, From<Never> is implemented for MyError.
We can now demonstrate the problem: the following compiles like a charm.
let x = vec![1, 2, 3];
let _ = x.foo();
But the following does not.
let x = vec![1, 2, 3];
let _ = Foo::foo(&x);
The error message says:
error[E0271]: type mismatch resolving `<std::vec::Vec<i32> as Foo>::FooError == MyError`
--> src/main.rs:49:13
|
49 | let _ = Foo::foo(&x);
| ^^^^^^^^ expected enum `Never`, found struct `MyError`
|
= note: expected type `Never`
found type `MyError`
The compiler seems to believe that I wrote something like this (NB: this is not correct Rust, but just to give the idea).
let _ = Foo::<FooError=MyError>::foo(&x);
And this does not work because x implements Foo<FooError=Never>.
Why does the compiler add this additional constraint? Is it a bug? If not, is it possible to write it otherwise so that it compiles?
NB: you may wonder why I don't just stick to the first version (x.foo()). In my actual situation, foo is actually named retain, which is also the name of a method on Vec. So I must use the second form to avoid the ambiguity.
NB2: if I remove the HRTB in the declaration of method foo, both lines compile. But then I cannot call Bar::bar in any implementation of Foo::foo, which is not an option for me. And changing foo to something like fn foo<'a>(&'a self) -> Result<(), <Self as Bar<'a>>::BarError> is not an option either, unfortunately.
From what I understand, when x implements trait Foo, the following two lines should be equivalent.
x.foo();
Foo::foo(&x);
This is true for an inherent method (one that is defined on the type of x itself), but not for a trait method. In your case the equivalent is <Vec<i32> as Foo>::foo(&x);.
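For completeness, a minimal illustration of that call, reusing the definitions from the question:

let x = vec![1, 2, 3];
// Fully qualified syntax: naming the implementing type avoids the spurious
// `FooError == MyError` obligation that plain `Foo::foo(&x)` ran into.
let _ = <Vec<i32> as Foo>::foo(&x);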
Here is a playground link
