I am working on a Rust program where I got stuck on a problem that can be reduced to the following case:
struct Pair<L, R> {
    left: L,
    right: R,
}

// Returns the first `u32` in the pair (only defined for pairs containing a `u32`)
trait GetU32 {
    fn get(&self) -> u32;
}

// This should also be used for `Pair<u32, u32>`
impl<R> GetU32 for Pair<u32, R> {
    fn get(&self) -> u32 {
        self.left
    }
}

impl<L> GetU32 for Pair<L, u32> {
    fn get(&self) -> u32 {
        self.right
    }
}

// impl GetU32 for Pair<u32, u32> {
//     fn get(&self) -> u32 {
//         self.left
//     }
// }

fn main() {
    let a: Pair<u8, u32> = Pair { left: 0u8, right: 999u32 };
    assert_eq!(999u32, a.get());

    let b: Pair<u32, u8> = Pair { left: 999u32, right: 0u8 };
    assert_eq!(999u32, b.get());

    let c: Pair<u32, u32> = Pair { left: 999u32, right: 0u32 };
    assert_eq!(999u32, c.get());
}
Playground link
I have a struct with two fields. If one (or both) of the fields is a u32, I want to return the first u32. The field to use should be picked statically at compile time.
The problem with the code above is that I can't express which implementation has higher priority, which causes a conflict for the case Pair<u32, u32>.
error[E0119]: conflicting implementations of trait `GetU32` for type `Pair<u32, u32>`:
  --> crates/etwin_simple_user_pg/src/main.rs:20:1
   |
12 | impl<R> GetU32 for Pair<u32, R> {
   | ------------------------------- first implementation here
...
18 | default impl<L> GetU32 for Pair<L, u32> {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `Pair<u32, u32>`
How can I solve this conflict by picking the preferred implementation? I am OK with using nightly features such as specialization.
I looked into defining the conflicting case explicitly (the commented code above), but it only led to more conflicts.
Another solution I pursued was trying to use specialization, but I wasn't able to apply it to my use case.
Another solution would be to specify the second implementation with a negative bound, as in impl<L: !u32> GetU32 for Pair<L, u32> (only defined for pairs where the only u32 is .right), but negative bounds don't exist.
I know that there are other questions on conflicting trait implementations, but I have not found a question where the conflict comes from being unable to pick the preferred implementation in such a simple case.
Edit: I'd like to expand my question to give more context on my real problem and the solution I am currently using.
I am creating a structure similar to a frunk::HList to build an API object bit by bit by adding (or overriding) services. This struct remembers which services were registered and allows retrieving them later. This all happens statically, so the compiler can force the service to be registered and know which field corresponds to it. (Similar to the minimal example above, where the compiler should know that the pair contains a u32 and in which field.)
Since I cannot express a negative bound, I am currently implementing the secondary getter for every type from the negative set that I care about (see LotB's answer). This requires manually updating the implementations for this struct whenever I need new types. In the example above, it would correspond to the following code if my types were the unsigned integers:
impl<R> GetU32 for Pair<u32, R> {
    fn get(&self) -> u32 {
        self.left
    }
}

impl GetU32 for Pair<u8, u32> {
    fn get(&self) -> u32 { self.right }
}

impl GetU32 for Pair<u16, u32> {
    fn get(&self) -> u32 { self.right }
}

impl GetU32 for Pair<u64, u32> {
    fn get(&self) -> u32 { self.right }
}
As mentioned in the question, this situation can be solved using a negative bound. This feature is not available yet, even on the nightly branch.
Luckily, there is a workaround to achieve a sufficient form of negative bounds by combining two existing nightly features: auto_traits and negative_impls.
Here is the code:
#![feature(auto_traits, negative_impls)]

auto trait NotU32 {}

// Double negation: `u32` is not a "not `u32`"
impl !NotU32 for u32 {}

struct Pair<L, R> {
    left: L,
    right: R,
}

// Returns the first `u32` in the pair (only defined for pairs containing a `u32`)
trait GetU32 {
    fn get(&self) -> u32;
}

// This should also be used for `Pair<u32, u32>`
impl<R> GetU32 for Pair<u32, R> {
    fn get(&self) -> u32 {
        self.left
    }
}

impl<L: NotU32> GetU32 for Pair<L, u32> {
    fn get(&self) -> u32 {
        self.right
    }
}

fn main() {
    let a: Pair<u8, u32> = Pair { left: 0u8, right: 999u32 };
    assert_eq!(999u32, dbg!(a.get()));

    let b: Pair<u32, u8> = Pair { left: 999u32, right: 0u8 };
    assert_eq!(999u32, dbg!(b.get()));

    let c: Pair<u32, u32> = Pair { left: 999u32, right: 0u32 };
    assert_eq!(999u32, dbg!(c.get()));
}
Playground link
Since we can't define the secondary getter with impl<L: !u32> GetU32 for Pair<L, u32> { ... } (a negative bound), we instead define it using the marker trait NotU32, as impl<L: NotU32> GetU32 for Pair<L, u32> { ... }. This moves the issue: we now have to implement this marker trait for all types except u32. This is where auto_traits (add a trait for all types) and negative_impls (remove it from some types) come in.
The limit of this answer is that you can't define a generic Not<T> trait with this method today.
An easy way to avoid the problem would be to implement the trait only for the concrete types you need, and not be generic.
So instead of a generic
impl<R> GetU32 for Pair<u32, R>
you implement on the specific type, so you control each specific combination:
impl GetU32 for Pair<u32, u8>
A macro can help define the various impls instead of writing them out by hand, to make this process of specific definition easier if you have a lot of types and combinations to handle.
macro_rules! impl_pair {
    ($left:ident, $right:ident, $side:ident) => {
        impl GetU32 for Pair<$left, $right> {
            fn get(&self) -> u32 {
                self.$side
            }
        }
    };
}
impl_pair!(u32, u8, left);
impl_pair!(u8, u32, right);
impl_pair!(u32, u32, left);
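For completeness, here is a self-contained sketch combining the macro with the Pair and GetU32 definitions from the question, showing that all three combinations resolve statically:

```rust
struct Pair<L, R> {
    left: L,
    right: R,
}

trait GetU32 {
    fn get(&self) -> u32;
}

// One impl per concrete combination, generated by the macro:
macro_rules! impl_pair {
    ($left:ident, $right:ident, $side:ident) => {
        impl GetU32 for Pair<$left, $right> {
            fn get(&self) -> u32 {
                self.$side
            }
        }
    };
}

impl_pair!(u32, u8, left);
impl_pair!(u8, u32, right);
impl_pair!(u32, u32, left); // the previously conflicting case, now explicit

fn main() {
    assert_eq!(999, Pair { left: 999u32, right: 0u8 }.get());
    assert_eq!(999, Pair { left: 0u8, right: 999u32 }.get());
    assert_eq!(999, Pair { left: 999u32, right: 0u32 }.get());
    println!("all ok");
}
```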
Related
I'm currently developing my own library for vectors and matrices, and to simplify my life, I defined my Matrix to be a Vec of RowVector and implemented the Deref trait as follows:
pub struct Matrix(Vec<RowVector>);

impl Deref for Matrix {
    type Target = [RowVector];

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl DerefMut for Matrix {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.0
    }
}
This works like a charm, but it has one flaw: you can overwrite one row with a RowVector of a different size than the rest, which is obviously VERY BAD.
Am I doomed? Is there a solution that disallows overwriting a row but still allows mutating the Vector?
You could implement Index and IndexMut over a pair (usize, usize):
use std::ops::{Index, IndexMut};

pub struct Matrix(Vec<Vec<usize>>);

impl Index<(usize, usize)> for Matrix {
    type Output = usize;

    fn index(&self, index: (usize, usize)) -> &Self::Output {
        self.0.get(index.0).unwrap().get(index.1).unwrap()
    }
}

impl IndexMut<(usize, usize)> for Matrix {
    fn index_mut(&mut self, index: (usize, usize)) -> &mut Self::Output {
        self.0.get_mut(index.0).unwrap().get_mut(index.1).unwrap()
    }
}
Playground
Disclaimer: please take into account that using unwrap here is not clean. Either assert the lengths, deal with the Options, or at least use expect, depending on your needs.
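As a usage sketch of the (usize, usize) approach (with the unwraps replaced by plain indexing, which also panics on out-of-bounds): since only single cells are ever exposed mutably, a row can never be swapped for one of a different length:

```rust
use std::ops::{Index, IndexMut};

pub struct Matrix(Vec<Vec<usize>>);

impl Index<(usize, usize)> for Matrix {
    type Output = usize;

    fn index(&self, index: (usize, usize)) -> &Self::Output {
        // Plain slice indexing panics on out-of-bounds, like unwrap would
        &self.0[index.0][index.1]
    }
}

impl IndexMut<(usize, usize)> for Matrix {
    fn index_mut(&mut self, index: (usize, usize)) -> &mut Self::Output {
        &mut self.0[index.0][index.1]
    }
}

fn main() {
    // 2 rows x 3 columns, all zeroes
    let mut m = Matrix(vec![vec![0; 3]; 2]);
    m[(1, 2)] = 7; // mutate a single cell; row shapes are untouchable
    println!("{}", m[(1, 2)]);
}
```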
How can I use related generic types? Here's what I've got (only the first line is giving me trouble):
impl<G, GS> TreeNode<GS> where G: Game, GS: GameState<G> {
    pub fn expand(&mut self, game: &G) {
        if !self.expanded {
            let child_states = self.data.generate_children(game);
            for state in child_states {
                self.add_child_with_value(state);
            }
        }
    }
}
GameState is a trait generic over a Game, and self.data implements GameState<G> for this type. The compiler tells me:
error[E0207]: the type parameter `G` is not constrained by the impl trait, self type, or predicates
  --> src/mcts.rs:42:6
   |
42 | impl<G, GS> TreeNode<GS> where G: Game, GS: GameState<G>{
   |      ^ unconstrained type parameter

error: aborting due to previous error
but it seems to me like I'm constraining G both in the expand function, and in the fact that G needs to belong to GS.
Here are some more definitions as of now:
trait GameState<G: Game>: std::marker::Sized + Debug {
    fn generate_children(&self, game: &G) -> Vec<Self>;
    fn get_initial_state(game: &G) -> Self;
}

trait Game {}

struct TreeNode<S> where S: Sized {
    parent: *mut TreeNode<S>,
    expanded: bool,
    pub children: Vec<TreeNode<S>>,
    pub data: S,
    pub n: u32,
}

impl<S> TreeNode<S> {
    pub fn new(data: S) -> Self {
        TreeNode {
            parent: null_mut(),
            expanded: false,
            children: vec![],
            data,
            n: 0,
        }
    }

    pub fn add_child(&mut self, mut node: TreeNode<S>) {
        node.parent = self;
        self.children.push(node);
    }

    pub fn add_child_with_value(&mut self, val: S) {
        let new_node = TreeNode::new(val);
        self.add_child(new_node);
    }

    pub fn parent(&self) -> &Self {
        unsafe { &*self.parent }
    }
}

impl<G, GS> TreeNode<GS> where G: Game, GS: GameState<G> {
    // ...
}
The problem is that G is not constrained, so there may be multiple (possibly conflicting) implementations in this block: GS may implement GameState<G> for multiple G, which makes the parameter G ambiguous.
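To make the ambiguity concrete, here is a minimal sketch (Chess, Checkers, and Board are invented names): one state type can legitimately implement GameState<G> for two different games, so an impl block parameterized over an unconstrained G cannot know which implementation is meant:

```rust
trait Game {}

trait GameState<G: Game> {
    fn label(&self) -> &'static str;
}

struct Chess;
impl Game for Chess {}

struct Checkers;
impl Game for Checkers {}

// One state type implementing GameState for two different games:
struct Board;

impl GameState<Chess> for Board {
    fn label(&self) -> &'static str { "chess" }
}

impl GameState<Checkers> for Board {
    fn label(&self) -> &'static str { "checkers" }
}

fn main() {
    // Callers must pick G explicitly; the compiler cannot guess:
    println!("{}", <Board as GameState<Chess>>::label(&Board));
}
```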
If you want to keep GameState<G> able to be implemented for multiple G, you should move the constraints from the impl block to the method instead:
// note: G is now a type parameter of the method, not the impl block, which is fine
impl<GS> TreeNode<GS> {
    pub fn expand<G>(&mut self, game: &G) where G: Game, GS: GameState<G> {
        if !self.expanded {
            let child_states = self.data.generate_children(game);
            for state in child_states {
                self.add_child_with_value(state);
            }
        }
    }
}
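A compilable sketch of this approach, with a minimal invented Game/GameState pair (Chess and State are placeholders, generate_children is reduced to producing two dummy children, and the sketch also marks the node expanded after the first call):

```rust
use std::fmt::Debug;

trait Game {}

trait GameState<G: Game>: Sized + Debug {
    fn generate_children(&self, game: &G) -> Vec<Self>;
}

struct Chess;
impl Game for Chess {}

#[derive(Debug)]
struct State(u32);

impl GameState<Chess> for State {
    fn generate_children(&self, _game: &Chess) -> Vec<State> {
        vec![State(self.0 + 1), State(self.0 + 2)]
    }
}

struct TreeNode<S> {
    children: Vec<TreeNode<S>>,
    data: S,
    expanded: bool,
}

impl<S> TreeNode<S> {
    fn new(data: S) -> Self {
        TreeNode { children: vec![], data, expanded: false }
    }

    // G is a parameter of the method, not the impl block,
    // so it is inferred independently at each call site:
    fn expand<G: Game>(&mut self, game: &G) where S: GameState<G> {
        if !self.expanded {
            for state in self.data.generate_children(game) {
                self.children.push(TreeNode::new(state));
            }
            self.expanded = true;
        }
    }
}

fn main() {
    let mut root = TreeNode::new(State(0));
    root.expand(&Chess);
    println!("{}", root.children.len());
}
```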
If you only want GameState to be implemented for a single G, you should make G an associated type of GameState instead of a generic type parameter:
trait GameState: std::marker::Sized + Debug {
    type G: Game;

    fn generate_children(&self, game: &Self::G) -> Vec<Self>;
    fn get_initial_state(game: &Self::G) -> Self;
}

// note: now G is given by the GameState implementation instead of
// being a free type parameter
impl<GS> TreeNode<GS> where GS: GameState {
    pub fn expand(&mut self, game: &GS::G) {
        if !self.expanded {
            let child_states = self.data.generate_children(game);
            for state in child_states {
                self.add_child_with_value(state);
            }
        }
    }
}
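A minimal sketch of the associated-type version (again with invented Chess and State types): each state type now names exactly one Game, so no free type parameter remains:

```rust
use std::fmt::Debug;

trait Game {}

trait GameState: Sized + Debug {
    type G: Game;
    fn generate_children(&self, game: &Self::G) -> Vec<Self>;
}

struct Chess;
impl Game for Chess {}

#[derive(Debug)]
struct State(u32);

impl GameState for State {
    type G = Chess; // exactly one Game per state type
    fn generate_children(&self, _game: &Chess) -> Vec<State> {
        vec![State(self.0 + 1)]
    }
}

fn main() {
    let children = State(0).generate_children(&Chess);
    println!("{}", children.len());
}
```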
The concrete type of G cannot be determined based on the type of TreeNode<GS>; it is only known when expand is called. Note that expand could be called twice with different types for G.
You can express this by constraining the type parameters for the method instead of the entire implementation block:
impl<GS> TreeNode<GS> {
    pub fn expand<G>(&mut self, game: &G)
    where
        G: Game,
        GS: GameState<G>,
    {
        if !self.expanded {
            let child_states = self.data.generate_children(game);
            for state in child_states {
                self.add_child_with_value(state);
            }
        }
    }
}
If it should not be possible for expand to be called with different Gs, then this is a problem with your modeling. Another way to fix it is to ensure that the type of G is known for all TreeNodes, e.g.:
struct TreeNode<G, S>
where
    S: Sized,
{
    parent: *mut TreeNode<G, S>,
    expanded: bool,
    pub children: Vec<TreeNode<G, S>>,
    pub data: S,
    pub n: u32,
}
And then your original implementation block should work as written, once you account for the extra type parameter.
I have a struct which manages several sensors. I have a gyroscope, accelerometer, magnetometer, barometer, and thermometer. All of which are traits.
pub struct SensorManager {
    barometer: Barometer + Sized,
    thermometer: Thermometer + Sized,
    gyroscope: Gyroscope + Sized,
    accelerometer: Accelerometer + Sized,
    magnetometer: Magnetometer + Sized,
}
I need to make it modular so in the configuration file you can specify which sensors you are using.
The problem is that some of the sensors overlap. For example: one person can have an LSM9DS0 which contains gyroscope, accelerometer, and magnetometer while another person can have an L3GD20 gyroscope and an LSM303D accelerometer, magnetometer.
In C++ I would store pointers or references, but I am not sure how to properly implement this safely in Rust.
Short version: Need to have references to each sensor as members of this struct. Some of these references are of the same object.
In C++ I would store pointers or references
Rust isn't that alien. You do the same thing. The main difference is that Rust prevents you from being able to mutate one thing via two different paths or to have a reference that dangles.
Your question has many potential solutions. For example, you don't describe whether you need to be able to mutate the sensors, whether the sensors will outlive the manager, whether threads are involved, etc. All of these things affect how micro-optimized the code can be.
The maximally-flexible solution is to:
- use shared ownership, such as that provided by Rc or Arc. This allows multiple things to own the sensor.
- use interior mutability, such as that provided by RefCell or Mutex. This moves enforcement of a single mutating reference at a time from compile time to run time.
- use trait objects to model dynamic dispatch, since the decision of which concrete objects to use is made at run time.
use std::{cell::RefCell, rc::Rc};

trait Barometer {
    fn get(&self) -> i32;
    fn set(&self, value: i32);
}

trait Thermometer {
    fn get(&self) -> i32;
    fn set(&self, value: i32);
}

trait Gyroscope {
    fn get(&self) -> i32;
    fn set(&self, value: i32);
}

struct Multitudes;

impl Barometer for Multitudes {
    fn get(&self) -> i32 {
        1
    }
    fn set(&self, value: i32) {
        println!("Multitudes barometer set to {}", value)
    }
}

impl Thermometer for Multitudes {
    fn get(&self) -> i32 {
        2
    }
    fn set(&self, value: i32) {
        println!("Multitudes thermometer set to {}", value)
    }
}

struct AutoGyro;

impl Gyroscope for AutoGyro {
    fn get(&self) -> i32 {
        3
    }
    fn set(&self, value: i32) {
        println!("AutoGyro gyroscope set to {}", value)
    }
}

struct SensorManager {
    barometer: Rc<RefCell<dyn Barometer>>,
    thermometer: Rc<RefCell<dyn Thermometer>>,
    gyroscope: Rc<RefCell<dyn Gyroscope>>,
}

impl SensorManager {
    fn new(
        barometer: Rc<RefCell<dyn Barometer>>,
        thermometer: Rc<RefCell<dyn Thermometer>>,
        gyroscope: Rc<RefCell<dyn Gyroscope>>,
    ) -> Self {
        Self {
            barometer,
            thermometer,
            gyroscope,
        }
    }

    fn dump_info(&self) {
        let barometer = self.barometer.borrow();
        let thermometer = self.thermometer.borrow();
        let gyroscope = self.gyroscope.borrow();

        println!(
            "{}, {}, {}",
            barometer.get(),
            thermometer.get(),
            gyroscope.get()
        );
    }

    fn update(&self) {
        self.barometer.borrow_mut().set(42);
        self.thermometer.borrow_mut().set(42);
        self.gyroscope.borrow_mut().set(42);
    }
}

fn main() {
    let multi = Rc::new(RefCell::new(Multitudes));
    let gyro = Rc::new(RefCell::new(AutoGyro));

    let manager = SensorManager::new(multi.clone(), multi, gyro);

    manager.dump_info();
    manager.update();
}
Complete example on the Playground
barometer: Barometer + Sized,
You really don't want to do this. Barometer is both a trait and a type, but the type doesn't have a size. It always needs to be behind a pointer (&Barometer, Box<Barometer>, RefCell<Barometer>, etc.).
See also:
What does "dyn" mean in a type?
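If the manager can own its sensors outright (no sharing between fields), a simpler option is to store boxed trait objects. Here is a hedged sketch of that layout, trimmed to one field; Bmp280 is an invented placeholder implementation:

```rust
trait Barometer {
    fn get(&self) -> i32;
}

// Hypothetical concrete sensor, for illustration only:
struct Bmp280;

impl Barometer for Bmp280 {
    fn get(&self) -> i32 {
        1013 // a plausible pressure reading in hPa, hard-coded here
    }
}

struct SensorManager {
    // A trait object is unsized, so it must live behind a pointer type:
    barometer: Box<dyn Barometer>,
}

fn main() {
    let m = SensorManager { barometer: Box::new(Bmp280) };
    println!("{}", m.barometer.get());
}
```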
I am using typenum in Rust to add compile-time dimension checking to some types I am working with. I would like to combine it with a dynamic type so that an expression with mismatched dimensions would fail at compile time if given two incompatible typenum types, but compile fine and fail at runtime if one or more of the types is Dynamic. Is this possible in Rust? If so, how would I combine Unsigned and Dynamic?
extern crate typenum;

use typenum::Unsigned;
use std::marker::PhantomData;

struct Dynamic {}

// N needs to be some kind of union type of Unsigned and Dynamic, but don't know how
struct Vector<E, N: Unsigned> {
    vec: Vec<E>,
    _marker: PhantomData<N>,
}

impl<E, N: Unsigned> Vector<E, N> {
    fn new(vec: Vec<E>) -> Self {
        assert!(N::to_usize() == vec.len());
        Vector {
            vec: vec,
            _marker: PhantomData,
        }
    }
}

fn add<E, N: Unsigned>(vector1: &Vector<E, N>, vector2: &Vector<E, N>) {
    print!("Implement addition here")
}

fn main() {
    use typenum::{U3, U4};

    let vector3 = Vector::<usize, U3>::new(vec![1, 2, 3]);
    let vector4 = Vector::<usize, U4>::new(vec![1, 2, 3, 4]);

    // Can I make the default be Dynamic here?
    let vector4_dynamic = Vector::new(vec![1, 2, 3, 4]);

    add(&vector3, &vector4); // should fail to compile
    add(&vector3, &vector4_dynamic); // should fail at runtime
}
Specifying a default for type parameters has, sadly, still not been stabilized, so you'll need to use a nightly compiler in order for the following to work.
If you're playing with defaulted type parameters, be aware that the compiler will first try to infer the types based on usage, and only fall back to the default when there's not enough information. For example, if you were to pass a vector declared with an explicit N and a vector declared without N to add, the compiler would infer that the second vector's N must be the same as the first vector's N, instead of selecting Dynamic for the second vector's N. Therefore, if the sizes don't match, the runtime error would happen when constructing the second vector, not when adding them together.
It's possible to define multiple impl blocks for different sets of type parameters. For example, we can have an implementation of new when N: Unsigned and another when N is Dynamic.
extern crate typenum;

use std::marker::PhantomData;
use typenum::Unsigned;

struct Dynamic;

struct Vector<E, N> {
    vec: Vec<E>,
    _marker: PhantomData<N>,
}

impl<E, N: Unsigned> Vector<E, N> {
    fn new(vec: Vec<E>) -> Self {
        assert!(N::to_usize() == vec.len());
        Vector {
            vec: vec,
            _marker: PhantomData,
        }
    }
}

impl<E> Vector<E, Dynamic> {
    fn new(vec: Vec<E>) -> Self {
        Vector {
            vec: vec,
            _marker: PhantomData,
        }
    }
}
However, this approach with two impls providing a new method doesn't work well with defaulted type parameters; the compiler will complain about the ambiguity instead of inferring the default when calling new. So instead, we need to define a trait that unifies N: Unsigned and Dynamic. This trait will contain a method to help us perform the assert in new correctly depending on whether the size is fixed or dynamic.
#![feature(default_type_parameter_fallback)]

use std::marker::PhantomData;
use std::ops::Add;
use typenum::Unsigned;

struct Dynamic;

trait FixedOrDynamic {
    fn is_valid_size(value: usize) -> bool;
}

impl<T: Unsigned> FixedOrDynamic for T {
    fn is_valid_size(value: usize) -> bool {
        Self::to_usize() == value
    }
}

impl FixedOrDynamic for Dynamic {
    fn is_valid_size(_value: usize) -> bool {
        true
    }
}

struct Vector<E, N: FixedOrDynamic = Dynamic> {
    vec: Vec<E>,
    _marker: PhantomData<N>,
}

impl<E, N: FixedOrDynamic> Vector<E, N> {
    fn new(vec: Vec<E>) -> Self {
        assert!(N::is_valid_size(vec.len()));
        Vector {
            vec: vec,
            _marker: PhantomData,
        }
    }
}
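Here is a stable-Rust usage sketch of the FixedOrDynamic idea; it substitutes a hand-written Fixed3 marker for typenum::U3 and passes N explicitly rather than relying on the unstable default fallback:

```rust
use std::marker::PhantomData;

struct Dynamic;
struct Fixed3; // stands in for typenum::U3 in this sketch

trait FixedOrDynamic {
    fn is_valid_size(value: usize) -> bool;
}

impl FixedOrDynamic for Fixed3 {
    fn is_valid_size(value: usize) -> bool {
        value == 3 // fixed sizes are checked against the actual length
    }
}

impl FixedOrDynamic for Dynamic {
    fn is_valid_size(_value: usize) -> bool {
        true // any length is acceptable for a dynamic vector
    }
}

struct Vector<E, N: FixedOrDynamic> {
    vec: Vec<E>,
    _marker: PhantomData<N>,
}

impl<E, N: FixedOrDynamic> Vector<E, N> {
    fn new(vec: Vec<E>) -> Self {
        assert!(N::is_valid_size(vec.len()));
        Vector { vec, _marker: PhantomData }
    }
}

fn main() {
    let fixed = Vector::<i32, Fixed3>::new(vec![1, 2, 3]);      // length checked
    let dynamic = Vector::<i32, Dynamic>::new(vec![1, 2, 3, 4]); // no length check
    println!("{} {}", fixed.vec.len(), dynamic.vec.len());
}
```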
In order to support add receiving a fixed and a dynamic vector, but not fixed vectors of different lengths, we need to introduce another trait. For each N: Unsigned, only N itself and Dynamic will implement the trait.
trait SameOrDynamic<N> {
    type Output: FixedOrDynamic;
    fn length_check(left_len: usize, right_len: usize) -> bool;
}

impl<N: Unsigned> SameOrDynamic<N> for N {
    type Output = N;
    fn length_check(_left_len: usize, _right_len: usize) -> bool {
        true
    }
}

impl<N: Unsigned> SameOrDynamic<Dynamic> for N {
    type Output = N;
    fn length_check(left_len: usize, right_len: usize) -> bool {
        left_len == right_len
    }
}

impl<N: Unsigned> SameOrDynamic<N> for Dynamic {
    type Output = N;
    fn length_check(left_len: usize, right_len: usize) -> bool {
        left_len == right_len
    }
}

impl SameOrDynamic<Dynamic> for Dynamic {
    type Output = Dynamic;
    fn length_check(left_len: usize, right_len: usize) -> bool {
        left_len == right_len
    }
}

fn add<E, N1, N2>(vector1: &Vector<E, N1>, vector2: &Vector<E, N2>) -> Vector<E, N2::Output>
where
    N1: FixedOrDynamic,
    N2: FixedOrDynamic + SameOrDynamic<N1>,
{
    assert!(N2::length_check(vector1.vec.len(), vector2.vec.len()));
    unimplemented!()
}
If you don't actually need to support calling add with a fixed and a dynamic vector, then you can simplify this drastically:
fn add<E, N: FixedOrDynamic>(vector1: &Vector<E, N>, vector2: &Vector<E, N>) -> Vector<E, N> {
    // TODO: perform length check when N is Dynamic
    unimplemented!()
}
You could just keep using Vec<T> for what it does best, and use your Vector<T, N> for checked length vectors. To accomplish that, you can define a trait for addition and implement it for different combinations of the two types of vector:
trait MyAdd<T> {
    type Output;
    fn add(&self, other: &T) -> Self::Output;
}

impl<T, N: Unsigned> MyAdd<Vector<T, N>> for Vector<T, N> {
    type Output = Vector<T, N>;
    fn add(&self, other: &Vector<T, N>) -> Self::Output {
        Vector::new(/* implement addition here */)
    }
}

impl<T, N: Unsigned> MyAdd<Vec<T>> for Vector<T, N> {
    type Output = Vector<T, N>;
    fn add(&self, other: &Vec<T>) -> Self::Output {
        Vector::new(/* implement addition here */)
    }
}

impl<T> MyAdd<Vec<T>> for Vec<T> {
    type Output = Vec<T>;
    fn add(&self, other: &Vec<T>) -> Self::Output {
        Vec::new(/* implement addition here */)
    }
}

impl<T, N: Unsigned> MyAdd<Vector<T, N>> for Vec<T> {
    type Output = Vector<T, N>;
    fn add(&self, other: &Vector<T, N>) -> Self::Output {
        Vector::new(/* implement addition here */)
    }
}
Now you can use it almost in the same way as you were trying to:
fn main() {
    use typenum::{U3, U4};

    let vector3 = Vector::<usize, U3>::new(vec![1, 2, 3]);
    let vector4 = Vector::<usize, U4>::new(vec![1, 2, 3, 4]);
    let vector4_dynamic = vec![1, 2, 3, 4];

    vector3.add(&vector4); // Compile error!
    vector3.add(&vector4_dynamic); // Runtime error on length assertion
}
You could avoid creating your own trait by using the built in std::ops::Add, but you wouldn't be able to implement it for Vec. The left side of the .add would always have to be Vector<E, N> with Vec limited to only being in the argument. You could get around that with another Vec "newtype" wrapper, similar to what you've done with Vector<E, T> but without the length check and phantom type.
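A hedged sketch of that last suggestion: DynVector is an invented newtype wrapping a Vec, for which std::ops::Add can then be implemented (the orphan rule forbids implementing it for Vec directly), with the length check deferred to run time:

```rust
use std::ops::Add;

// Invented newtype so that Add can be implemented for a plain, unchecked vector:
struct DynVector(Vec<i32>);

impl<'a> Add for &'a DynVector {
    type Output = DynVector;

    fn add(self, other: &'a DynVector) -> DynVector {
        // With no type-level length, a mismatch can only be caught at run time:
        assert_eq!(self.0.len(), other.0.len());
        DynVector(self.0.iter().zip(&other.0).map(|(x, y)| x + y).collect())
    }
}

fn main() {
    let a = DynVector(vec![1, 2, 3]);
    let b = DynVector(vec![10, 20, 30]);
    let c = &a + &b;
    println!("{:?}", c.0);
}
```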
I've been working on a multi-dimensional array library, toying around with different interfaces, and ran into an issue I can't seem to solve. This may be a simple misunderstanding of lifetimes, but I've tried just about every solution I can think of, to no success.
The goal: implement the Index and IndexMut traits to return a borrowed vector from a 2d matrix, so this syntax can be used mat[rowind][colind].
A (very simplified) version of the data structure definition is below.
pub struct Matrix<T> {
    shape: [uint, ..2],
    dat: Vec<T>
}

impl<T: FromPrimitive+Clone> Matrix<T> {
    pub fn new(shape: [uint, ..2]) -> Matrix<T> {
        let size = shape.iter().fold(1, |a, &b| { a * b });
        // println!("Creating MD array of size: {} and shape: {}", size, shape)
        Matrix{
            shape: shape,
            dat: Vec::<T>::from_elem(size, FromPrimitive::from_uint(0u).expect("0 must be convertible to parameter type"))
        }
    }

    pub fn mut_index(&mut self, index: uint) -> &mut [T] {
        let base = index*self.shape[1];
        self.dat.mut_slice(base, base + self.shape[1])
    }
}

fn main(){
    let mut m = Matrix::<f32>::new([4u,4]);
    println!("{}", m.dat)
    println!("{}", m.mut_index(3)[0])
}
The mut_index method works exactly as I would like the IndexMut trait to work, except of course that it doesn't have the syntactic sugar. My first attempt at implementing IndexMut made me wonder: since it returns a borrowed reference to the specified type, I really want to specify [T] as the type, but that isn't a valid type. So the only option is to specify &mut [T], like this.
impl<T: FromPrimitive+Clone> IndexMut<uint, &mut [T]> for Matrix<T> {
    fn index_mut(&mut self, index: &uint) -> &mut(&mut [T]) {
        let base = index*self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
This complains about a missing lifetime specifier on the trait impl line. So I try adding one.
impl<'a, T: FromPrimitive+Clone> IndexMut<uint, &'a mut [T]> for Matrix<T> {
    fn index_mut(&'a mut self, index: &uint) -> &mut(&'a mut [T]) {
        let base = index*self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
Now I get method `index_mut` has an incompatible type for trait: expected concrete lifetime, but found bound lifetime parameter 'a [E0053]. Aside from this, I've tried just about every combination of one and two lifetimes I can think of, as well as creating a secondary structure to hold a reference that is stored in the outer structure during the indexing operation, so a reference to that can be returned instead; but that's not possible for Index. The final answer may just be that this isn't possible, given the response on this old GitHub issue, but that would seem to be a problematic limitation of the Index and IndexMut traits. Is there something I'm missing?
At present, this is not possible, but when Dynamically Sized Types lands I believe it will become possible.
Let’s look at the signature:
pub trait IndexMut<Index, Result> {
    fn index_mut<'a>(&'a mut self, index: &Index) -> &'a mut Result;
}
(Note the addition of the <'a> compared with what the docs say; I’ve filed #16228 about that.)
'a is an arbitrary lifetime, but it is important that it is specified on the method, not on the impl as a whole: it is in absolute truth a generic parameter to the method. I’ll show how it all comes out here with the names 'ρ₀ and 'ρ₁. So then, in this attempt:
impl<'ρ₀, T: FromPrimitive + Clone> IndexMut<uint, &'ρ₀ mut [T]> for Matrix<T> {
    fn index_mut<'ρ₁>(&'ρ₁ mut self, index: &uint) -> &'ρ₁ mut &'ρ₀ mut [T] {
        let base = index * self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
This satisfies the requirements that (a) all lifetimes must be explicit in the impl header, and (b) that the method signature matches the trait definition: Index is uint and Result is &'ρ₀ mut [T]. Because 'ρ₀ is defined on the impl block (so that it can be used as a parameter there) and 'ρ₁ on the method (because that’s what the trait defines), 'ρ₀ and 'ρ₁ cannot be combined into a single named lifetime. (You could call them both 'a, but this is shadowing and does not change anything except for the introduction of a bit more confusion!)
However, this is not enough to have it all work, and it will indeed not compile, because 'ρ₀ is not tied to anything, nor is there to tie it to in the signature. And so you cannot cast self.data.mut_slice(…), which is of type &'ρ₁ mut [T], to &'ρ₀ mut [T] as the lifetimes do not match, nor is there any known subtyping relationship between them (that is, it cannot structurally be demonstrated that the lifetime 'ρ₀ is less than—a subtype of—'ρ₁; although the return type of the method would make that clear, it is not so at the basic type level, and so it is not permitted).
Now as it happens, IndexMut isn’t as useful as it should be anyway owing to #12825, as matrix[1] would always use IndexMut and never Index if you have implemented both. I’m not sure if that’s any consolation, though!
The solution comes in Dynamically Sized Types. When that is here, [T] will be a legitimate unsized type which can be used as the type for Result and so this will be the way to write it:
impl<T: FromPrimitive + Clone> IndexMut<uint, [T]> for Matrix<T> {
    fn index_mut<'a>(&'a mut self, index: &uint) -> &'a mut [T] {
        let base = index * self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
… but that’s not here yet.
This code works in Rust 1.25.0 (and probably has for quite a while)
extern crate num;

use num::Zero;

pub struct Matrix<T> {
    shape: [usize; 2],
    dat: Vec<T>,
}

impl<T: Zero + Clone> Matrix<T> {
    pub fn new(shape: [usize; 2]) -> Matrix<T> {
        let size = shape.iter().product();
        Matrix {
            shape: shape,
            dat: vec![T::zero(); size],
        }
    }

    pub fn mut_index(&mut self, index: usize) -> &mut [T] {
        let base = index * self.shape[1];
        &mut self.dat[base..][..self.shape[1]]
    }
}

fn main() {
    let mut m = Matrix::<f32>::new([4; 2]);
    println!("{:?}", m.dat);
    println!("{}", m.mut_index(3)[0]);
}
You can enhance it to support Index and IndexMut:
use std::ops::{Index, IndexMut};

impl<T> Index<usize> for Matrix<T> {
    type Output = [T];

    fn index(&self, index: usize) -> &[T] {
        let base = index * self.shape[1];
        &self.dat[base..][..self.shape[1]]
    }
}

impl<T> IndexMut<usize> for Matrix<T> {
    fn index_mut(&mut self, index: usize) -> &mut [T] {
        let base = index * self.shape[1];
        &mut self.dat[base..][..self.shape[1]]
    }
}

fn main() {
    let mut m = Matrix::<f32>::new([4; 2]);
    println!("{:?}", m.dat);
    println!("{}", m[3][0]);

    m[3][0] = 42.42;
    println!("{:?}", m.dat);
    println!("{}", m[3][0]);
}