Implementing Index/overloading the [] operator for a sparse vector - rust

This is literally the same question as this C++ question, but in Rust.
Suppose I have a "sparse vector" type that stores filled entries in a map of some kind. Unfilled entries are some kind of default value, like 0.
use std::ops::{Index, IndexMut};
use std::collections::BTreeMap;
use num_traits::Float;

struct SparseVector<T: Float, const N: usize>(BTreeMap<usize, T>);

// Returning a ref is easy when a value is present.
impl<T: Float, const N: usize> IndexMut<usize> for SparseVector<T, N> {
    fn index_mut(&mut self, index: usize) -> &mut Self::Output {
        self.0.entry(index).or_insert(T::zero())
    }
}
// What to return when we hit the default?
impl<T: Float, const N: usize> Index<usize> for SparseVector<T, N> {
    type Output = T;

    fn index(&self, index: usize) -> &T {
        match self.0.get(&index) {
            Some(value) => value,
            None => ???
        }
    }
}
To implement Index on this type, I need to return a &T. For filled entries, that's just a reference to the value. How do I return a ref to the default?
I obviously can't return a &0 for lifetime reasons.
I can't necessarily store the default as an associated const on the type; it might not come from a const function.
I'm trying to avoid storing the default value on every instance of the type. It's literally the same value, why must it be allocated on every instance, etc.
The C++ answer is to return an instance of a wrapper type that dereferences to the value. But in Rust, a type that implements Deref&lt;Target = T&gt; cannot be substituted for a &T, as far as I know.
How can I implement Index (override the [] operator) on this type?

You cannot.
There were some discussions around extending the Index and Deref traits to support situations similar to that (https://github.com/rust-lang/rfcs/issues/997), but today you cannot.
This particular case is even more problematic because it would require GATs (generic associated types); the trait would have to be defined like this:
use std::ops::{Deref, Index};

pub trait IndexGat<I> {
    type Output<'a>: Deref
    where
        Self: 'a;

    fn index(&self, index: I) -> Self::Output<'_>;
}

impl<I, T: Index<I>> IndexGat<I> for T {
    type Output<'a> = &'a <Self as Index<I>>::Output
    where
        Self: 'a;

    fn index(&self, index: I) -> Self::Output<'_> {
        <Self as Index<I>>::index(self, index)
    }
}
So you can implement it:
impl<T: Float, const N: usize> IndexGat<usize> for SparseVector<T, N> {
    type Output<'a> = Wrapper<'a, T>
    where
        Self: 'a;

    fn index(&self, index: usize) -> Self::Output<'_> { ... }
}
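For completeness, the elided body could look something like this (a sketch of my own; the default arm constructs the zero by value and relies on the Wrapper type defined just below):

fn index(&self, index: usize) -> Self::Output<'_> {
    match self.0.get(&index) {
        // A filled entry: hand out a borrow of the stored value.
        Some(value) => Wrapper::Ref(value),
        // An unfilled entry: materialize the default by value.
        None => Wrapper::Default(T::zero()),
    }
}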
pub enum Wrapper<'a, T> {
    Default(T),
    Ref(&'a T),
}

impl<T> Deref for Wrapper<'_, T> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        match self {
            Self::Default(v) => v,
            Self::Ref(v) => *v,
        }
    }
}
So it cannot be done with the standard Index trait. (Even now that GATs have been stabilized, std's Index is unchanged, so [] still cannot return a wrapper type.)
The best thing you can do is just use a regular method get() (or at(), or whatever).
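Since Float implies Copy, such a get method can simply return the value itself rather than a reference; a minimal sketch (mine, not part of the original answer):

impl<T: Float, const N: usize> SparseVector<T, N> {
    /// Returns the stored value, or zero for an unfilled entry.
    fn get(&self, index: usize) -> T {
        self.0.get(&index).copied().unwrap_or_else(T::zero)
    }
}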

Related

What is the idiomatic way to implement `IntoIterator` when some items need to be substituted?

I have a custom collection like this:
struct VecChoice<T> {
    v1: Vec<T>,
    v2: Vec<T>,
    use_v1: Vec<bool>,
}
In the impl, I can iterate over this collection like this:
fn foo(&self, ...) {
    let item_refs: Vec<_> = (0..self.v1.len())
        .map(|i| {
            if self.use_v1[i] {
                &self.v1[i]
            } else {
                &self.v2[i]
            }
        })
        .collect();
    // ... do whatever I want with chosen references
}
However, I am failing to make it iterable:
impl<'a, T> IntoIterator for &'a VecChoice<T> {
    type Item = &'a T;
    // this fails because the trait `Sized` is not implemented for
    // `(dyn FnMut(usize) -> Self::Item + 'static)`
    type IntoIter = Map<usize, dyn FnMut(usize) -> Self::Item>;

    fn into_iter(self) -> Self::IntoIter {
        (0..self.v1.len()).map(|i| {
            if self.use_v1[i] {
                &self.v1[i]
            } else {
                &self.v2[i]
            }
        })
    }
}
I could probably collect the results into a Vec&lt;&T&gt; as above and then use its into_iter, but I suspect there should be a way to do this without constructing an intermediate Vec.
The closure that you have passed to map actually does have a size. The problem is that its type isn't nameable. You've tried to work around that with dyn, which isn't quite the right solution here: the closure is Sized, but dyn erases its type and makes it unsized. dyn would be appropriate if there were different possible closure types, but then you'd have to put it behind a pointer of some kind so that the IntoIter type is Sized.
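For illustration (a sketch of my own, not part of the original answer), the boxed version would look like this; it compiles on stable, at the cost of a heap allocation and dynamic dispatch:

impl<'a, T> IntoIterator for &'a VecChoice<T> {
    type Item = &'a T;
    // Boxing puts the unnameable closure type behind a pointer,
    // so the IntoIter type is Sized and can be written down.
    type IntoIter = Box<dyn Iterator<Item = &'a T> + 'a>;

    fn into_iter(self) -> Self::IntoIter {
        Box::new((0..self.v1.len()).map(move |i| {
            if self.use_v1[i] {
                &self.v1[i]
            } else {
                &self.v2[i]
            }
        }))
    }
}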
This is one of those cases where it is probably better to implement the Iterator manually, rather than using combinators.
struct VecChoiceIter<'a, T> {
    index: usize,
    vec_choice: &'a VecChoice<T>,
}

impl<'a, T> Iterator for VecChoiceIter<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<Self::Item> {
        if self.index == self.vec_choice.v1.len() {
            None
        } else {
            let i = self.index;
            self.index += 1;
            let use_v1 = self.vec_choice.use_v1[i];
            if use_v1 {
                Some(&self.vec_choice.v1[i])
            } else {
                Some(&self.vec_choice.v2[i])
            }
        }
    }
}
This gives you a Sized and nameable type that you can use for the IntoIterator implementation:
impl<'a, T> IntoIterator for &'a VecChoice<T> {
    type Item = &'a T;
    type IntoIter = VecChoiceIter<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        VecChoiceIter { index: 0, vec_choice: self }
    }
}
There are some interesting RFCs in progress that could make this work more like how you originally wanted. In particular RFC-2515. This would let you write your IntoIterator implementation as you originally tried, but without having to name the type (playground - nightly):
// On nightly, with #![feature(type_alias_impl_trait)] at the crate root:
impl<'a, T> IntoIterator for &'a VecChoice<T> {
    type Item = &'a T;
    // This is an "existential" type. That is, tell the compiler that there is
    // exactly one possibility for what this type can be, which it can infer
    // from the usage.
    type IntoIter = impl Iterator<Item = Self::Item>;

    fn into_iter(self) -> Self::IntoIter {
        (0..self.v1.len()).map(move |i| {
            if self.use_v1[i] {
                &self.v1[i]
            } else {
                &self.v2[i]
            }
        })
    }
}
It's often very tempting to try to make an iterator out of a pre-made collection, but this tends to run into a practical problem: you need some way to store an offset into that collection, so that you serve the right chunk of data when next is called. Consequently, you almost always need to provide a custom iterator type.
In this case, you can do so like this:
struct VecChoice<T> {
    v1: Vec<T>,
    v2: Vec<T>,
    use_v1: Vec<bool>,
}

struct VecChoiceIter<'a, T> {
    off: usize,
    collection: &'a VecChoice<T>,
}

impl<'a, T> Iterator for VecChoiceIter<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<Self::Item> {
        let off = self.off;
        self.off += 1;
        if *self.collection.use_v1.get(off)? {
            self.collection.v1.get(off)
        } else {
            self.collection.v2.get(off)
        }
    }
}

impl<'a, T> IntoIterator for &'a VecChoice<T> {
    type Item = &'a T;
    type IntoIter = VecChoiceIter<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        VecChoiceIter {
            off: 0,
            collection: self,
        }
    }
}
Note that in this case use_v1 needs to be a Vec&lt;bool&gt;: this is not C, and only booleans can be used in conditionals.
You could also do the conversion up front and store it in its own Vec, but in my experience people don't expect creating an iterator, whether by calling iter or into_iter, to be expensive. Iterators are fundamental in Rust, so it's very common to create lots of them, often implicitly, and making those functions expensive would be undesirable in many cases.
Probably the simplest way is to use .zip() and return an opaque impl Iterator from a method on the type (so you don't have to write out the actual type):
struct VecChoice<T> {
    v1: Vec<T>,
    v2: Vec<T>,
    use_v1: Vec<bool>,
}

impl<T> VecChoice<T> {
    fn iter(&self) -> impl Iterator<Item = &T> {
        self.v1
            .iter()
            .zip(self.v2.iter())
            .zip(self.use_v1.iter())
            .map(|((v1, v2), use_v1)| if *use_v1 { v1 } else { v2 })
    }
}
This will iterate over all three Vecs in lockstep (stopping at the shortest of them) and yield the element from either v1 or v2.
Notice that use_v1 here is a Vec&lt;bool&gt; rather than a Vec&lt;T&gt;, which seems to be what you intended, given the way you use it.
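A quick usage example (the values are made up for illustration):

fn main() {
    let vc = VecChoice {
        v1: vec![1, 2, 3],
        v2: vec![10, 20, 30],
        use_v1: vec![true, false, true],
    };
    // Picks from v1 where the flag is true, from v2 otherwise.
    let chosen: Vec<&i32> = vc.iter().collect();
    assert_eq!(chosen, [&1, &20, &3]);
}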

How do I use a &HashSet<&T> as an IntoIterator<Item = &T>?

I have a function that takes a collection of &T (represented by IntoIterator) with the requirement that every element is unique.
fn foo<'a, 'b, T: std::fmt::Debug, I>(elements: &'b I)
where
    &'b I: IntoIterator<Item = &'a T>,
    T: 'a,
    'b: 'a,
I would like to also write a wrapper function which can work even if the elements are not unique, by using a HashSet to remove the duplicate elements first.
I tried the following implementation:
use std::collections::HashSet;

fn wrap<'a, 'b, T: std::fmt::Debug + Eq + std::hash::Hash, J>(elements: &'b J)
where
    &'b J: IntoIterator<Item = &'a T>,
    T: 'a,
    'b: 'a,
{
    let hashset: HashSet<&T> = elements.into_iter().collect();
    foo(&hashset);
}
playground
However, the compiler doesn't seem happy with my assumption that HashSet<&T> implements IntoIterator<Item = &'a T>:
error[E0308]: mismatched types
--> src/lib.rs:10:9
|
10 | foo(&hashset);
| ^^^^^^^^ expected type parameter, found struct `std::collections::HashSet`
|
= note: expected type `&J`
found type `&std::collections::HashSet<&T>`
= help: type parameters must be constrained to match other types
= note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters
I know I could use a HashSet<T> by cloning all the input elements, but I want to avoid unnecessary copying and memory use.
If you have a &HashSet<&T> and need an iterator of &T (not &&T) that you can process multiple times, then you can use Iterator::copied to convert the iterator's &&T to a &T:
use std::{collections::HashSet, fmt::Debug, hash::Hash, marker::PhantomData};

struct Collection<T> {
    item: PhantomData<T>,
}

impl<T> Collection<T>
where
    T: Debug,
{
    fn foo<'a, I>(elements: I) -> Self
    where
        I: IntoIterator<Item = &'a T> + Clone,
        T: 'a,
    {
        for element in elements.clone() {
            println!("{:?}", element);
        }
        for element in elements {
            println!("{:?}", element);
        }
        Self { item: PhantomData }
    }
}

impl<T> Collection<T>
where
    T: Debug + Eq + Hash,
{
    fn wrap<'a, I>(elements: I) -> Self
    where
        I: IntoIterator<Item = &'a T>,
        T: 'a,
    {
        let set: HashSet<_> = elements.into_iter().collect();
        Self::foo(set.iter().copied())
    }
}

#[derive(Debug, Hash, PartialEq, Eq)]
struct Foo(i32);

fn main() {
    let v = vec![Foo(1), Foo(2), Foo(4)];
    Collection::<Foo>::wrap(&v);
}
See also:
Using the same iterator multiple times in Rust
Does cloning an iterator copy the entire underlying vector?
Why does cloning my custom type result in &T instead of T?
Note that the rest of this answer made the assumption that a struct named Collection<T> was a collection of values of type T. OP has clarified that this is not true.
That's not your problem, as shown by your later examples. That can be boiled down to this:
struct Collection<T>(T);

impl<T> Collection<T> {
    fn new(value: &T) -> Self {
        Collection(value)
    }
}
You are taking a reference to a type (&T) and trying to store it where a T is required; these are different types and will generate an error. You are using PhantomData for some reason and accepting references via the iterator, but the problem is the same.
In fact, PhantomData makes the problem harder to see as you can just make up values that don't work. For example, we never have any kind of string here but we "successfully" created the struct:
use std::marker::PhantomData;

struct Collection<T>(PhantomData<T>);

impl Collection<String> {
    fn new<T>(value: &T) -> Self {
        Collection(PhantomData)
    }
}
Ultimately, your wrap function doesn't make sense, either:
impl<T: Eq + Hash> Collection<T> {
    fn wrap<I>(elements: I) -> Self
    where
        I: IntoIterator<Item = T>,
This is equivalent to
impl<T: Eq + Hash> Collection<T> {
    fn wrap<I>(elements: I) -> Collection<T>
    where
        I: IntoIterator<Item = T>,
Which says that, given an iterator of elements T, you will return a collection of those elements. However, you put them in a HashSet and iterate on a reference to it, which yields &T. Thus this function signature cannot be right.
It seems most likely that you want to accept an iterator of owned values instead:
use std::{collections::HashSet, fmt::Debug, hash::Hash};

struct Collection<T> {
    item: T,
}

impl<T> Collection<T> {
    fn foo<I>(elements: I) -> Self
    where
        I: IntoIterator<Item = T>,
        for<'a> &'a I: IntoIterator<Item = &'a T>,
        T: Debug,
    {
        for element in &elements {
            println!("{:?}", element);
        }
        for element in &elements {
            println!("{:?}", element);
        }
        Self {
            item: elements.into_iter().next().unwrap(),
        }
    }
}

impl<T> Collection<T>
where
    T: Eq + Hash,
{
    fn wrap<I>(elements: I) -> Self
    where
        I: IntoIterator<Item = T>,
        T: Debug,
    {
        let s: HashSet<_> = elements.into_iter().collect();
        Self::foo(s)
    }
}

#[derive(Debug, Hash, PartialEq, Eq)]
struct Foo(i32);

fn main() {
    let v = vec![Foo(1), Foo(2), Foo(4)];
    let c = Collection::wrap(v);
    println!("{:?}", c.item)
}
Here we place a trait bound on the generic iterator type directly and a second higher-ranked trait bound on a reference to the iterator. This allows us to use a reference to the iterator as an iterator itself.
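To see what the higher-ranked bound buys in isolation, here is a small self-contained sketch of my own (not from the original answer): the bound lets a generic function borrow-iterate its argument any number of times without consuming it:

fn print_twice<I, T: std::fmt::Debug>(elements: I)
where
    for<'a> &'a I: IntoIterator<Item = &'a T>,
{
    for e in &elements {
        println!("{:?}", e);
    }
    // `elements` was only borrowed above, so we can iterate it again.
    for e in &elements {
        println!("{:?}", e);
    }
}

fn main() {
    print_twice(vec![1, 2, 3]);
}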
See also:
How does one generically duplicate a value in Rust?
Is there any way to return a reference to a variable created in a function?
How do I write the lifetimes for references in a type constraint when one of them is a local reference?
There were a number of orthogonal issues with my code that Shepmaster pointed out, but the specific issue of using a HashSet&lt;&T&gt; as an IntoIterator&lt;Item = &T&gt; can be solved with a wrapper struct:
use std::collections::HashSet;
use std::ops::Deref;

struct Helper<T, D: Deref<Target = T>>(HashSet<D>);

struct HelperIter<'a, T, D: Deref<Target = T>>(std::collections::hash_set::Iter<'a, D>);

impl<'a, T, D: Deref<Target = T>> Iterator for HelperIter<'a, T, D>
where
    T: 'a,
{
    type Item = &'a T;

    fn next(&mut self) -> Option<Self::Item> {
        self.0.next().map(|x| x.deref())
    }
}

impl<'a, T, D: Deref<Target = T>> IntoIterator for &'a Helper<T, D> {
    type Item = &'a T;
    type IntoIter = HelperIter<'a, T, D>;

    fn into_iter(self) -> Self::IntoIter {
        HelperIter((&self.0).into_iter())
    }
}
Which is used as follows:
use std::fmt::Debug;
use std::hash::Hash;
use std::marker::PhantomData;

struct Collection<T> {
    item: PhantomData<T>,
}

impl<T: Debug> Collection<T> {
    fn foo<I>(elements: I) -> Self
    where
        I: IntoIterator + Copy,
        I::Item: Deref<Target = T>,
    {
        for element in elements {
            println!("{:?}", *element);
        }
        for element in elements {
            println!("{:?}", *element);
        }
        return Self { item: PhantomData };
    }
}

impl<T: Debug + Eq + Hash> Collection<T> {
    fn wrap<I>(elements: I) -> Self
    where
        I: IntoIterator + Copy,
        I::Item: Deref<Target = T> + Eq + Hash,
    {
        let helper = Helper(elements.into_iter().collect());
        Self::foo(&helper);
        return Self { item: PhantomData };
    }
}

#[derive(Debug, Hash, PartialEq, Eq)]
struct Foo(i32);

fn main() {
    let v = vec![Foo(1), Foo(2), Foo(4)];
    Collection::<Foo>::wrap(&v);
}
I'm guessing that some of this may be more complicated than it needs to be, but I'm not sure how.
full playground

How do I write a function that allows me to convert a type T into any Box<B> if Box<T> can be coerced into Box<B>?

I'm trying to write an extension trait that allows me to move any value of type T into any Box<B>, where Box<T> can be coerced into Box<B>. My first attempt is the following:
trait IntoBox<B: ?Sized> {
    fn into_box(self) -> Box<B>;
}

impl<T, B: ?Sized> IntoBox<B> for T
where
    Box<T>: Into<Box<B>>,
{
    fn into_box(self) -> Box<B> {
        Box::new(self).into()
    }
}

fn main() {
    // Ok
    let _: Box<u32> = 42.into_box();

    // Error: the trait bound `std::boxed::Box<std::fmt::Display>:
    // std::convert::From<std::boxed::Box<&str>>` is not satisfied
    let _: Box<std::fmt::Display> = "Hello World".into_box();
}
This code works for regular boxes, but not trait objects. I suspect Into is the wrong bound here. What should I use instead?
Edit: As explained in the answer to this question, this problem can be solved with respect to any number of concrete types T by providing a blanket impl for T: Unsize&lt;U&gt;. However, this does not work in the generic case because the impls would be conflicting:
impl<T, B> IntoBox<B> for T
where
    Box<T>: Into<Box<B>>,
{
    fn into_box(self) -> Box<B> {
        Box::new(self).into()
    }
}

impl<T, B: ?Sized> IntoBox<B> for T
where
    T: std::marker::Unsize<B>,
{
    fn into_box(self) -> Box<B> {
        Box::new(self)
    }
}
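For reference, here is a minimal nightly-only sketch of the Unsize-based impl on its own (my illustration of the approach mentioned in the edit; it relies on the unstable unsize feature):

#![feature(unsize)]
use std::marker::Unsize;

trait IntoBox<B: ?Sized> {
    fn into_box(self) -> Box<B>;
}

impl<T, B: ?Sized> IntoBox<B> for T
where
    T: Unsize<B>,
{
    fn into_box(self) -> Box<B> {
        // Box<T> coerces to Box<B> precisely because T: Unsize<B>.
        Box::new(self)
    }
}

fn main() {
    // Works: &str implements Display, so &str: Unsize<dyn Display> holds.
    let _: Box<dyn std::fmt::Display> = "Hello World".into_box();
    // Note: `let _: Box<u32> = 42.into_box();` would NOT compile here,
    // because Unsize is not reflexive. That is why the sized case needs
    // the separate, conflicting Into-based impl shown above.
}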

How can I explicitly specify a lifetime when implementing a trait?

Given the implementation below: essentially, I have a collection of items that can be looked up via either an i32 id field or a string field. To be able to use either interchangeably, an IntoKey trait is used, and a match dispatches to the appropriate lookup map. This all works fine for my definition of get within the MapCollection impl:
use std::collections::HashMap;
use std::ops::Index;

enum Key<'a> {
    I32Key(&'a i32),
    StringKey(&'a String),
}

trait IntoKey<'a> {
    fn into_key(&'a self) -> Key<'a>;
}

impl<'a> IntoKey<'a> for i32 {
    fn into_key(&'a self) -> Key<'a> { Key::I32Key(self) }
}

impl<'a> IntoKey<'a> for String {
    fn into_key(&'a self) -> Key<'a> { Key::StringKey(self) }
}

#[derive(Debug)]
struct Bar {
    i: i32,
    n: String,
}

struct MapCollection {
    items: Vec<Bar>,
    id_map: HashMap<i32, usize>,
    name_map: HashMap<String, usize>,
}

impl MapCollection {
    fn new(items: Vec<Bar>) -> MapCollection {
        let mut is = HashMap::new();
        let mut ns = HashMap::new();
        for (idx, item) in items.iter().enumerate() {
            is.insert(item.i, idx);
            ns.insert(item.n.clone(), idx);
        }
        MapCollection {
            items: items,
            id_map: is,
            name_map: ns,
        }
    }

    fn get<'a, K>(&self, key: &'a K) -> Option<&Bar>
    where
        K: IntoKey<'a>,
    {
        match key.into_key() {
            Key::I32Key(i) => self.id_map.get(i).and_then(|idx| self.items.get(*idx)),
            Key::StringKey(s) => self.name_map.get(s).and_then(|idx| self.items.get(*idx)),
        }
    }
}

fn main() {
    let bars = vec![Bar { i: 1, n: "foo".to_string() }, Bar { i: 2, n: "far".to_string() }];
    let map = MapCollection::new(bars);
    if let Some(bar) = map.get(&1) {
        println!("{:?}", bar);
    }
    if map.get(&3).is_none() {
        println!("no item numbered 3");
    }
    if let Some(bar) = map.get(&"far".to_string()) {
        println!("{:?}", bar);
    }
    if map.get(&"baz".to_string()).is_none() {
        println!("no item named baz");
    }
}
However, if I then want to implement std::ops::Index for this struct, if I attempt to do the below:
impl<'a, K> Index<K> for MapCollection
where
    K: IntoKey<'a>,
{
    type Output = Bar;

    fn index<'b>(&'b self, k: &K) -> &'b Bar {
        self.get(k).expect("no element")
    }
}
I hit a compiler error:
src/main.rs:70:18: 70:19 error: cannot infer an appropriate lifetime for automatic coercion due to conflicting requirements
src/main.rs:70 self.get(k).expect("no element")
^
src/main.rs:69:5: 71:6 help: consider using an explicit lifetime parameter as shown: fn index<'b>(&'b self, k: &'a K) -> &'b Bar
src/main.rs:69 fn index<'b>(&'b self, k: &K) -> &'b Bar {
src/main.rs:70 self.get(k).expect("no element")
src/main.rs:71 }
I can find no way to specify a distinct lifetime here; following the compiler's recommendation is not permitted as it changes the function signature and no longer matches the trait, and anything else I try fails to satisfy the lifetime specification.
I understand that I can implement the trait for each case (i32, String) separately instead of trying to implement it once for IntoKey, but I am more generally trying to understand lifetimes and appropriate usage. Essentially:
Is there actually an issue the compiler is preventing? Is there something unsound about this approach?
Am I specifying my lifetimes incorrectly? To me, the lifetime 'a in Key/IntoKey is dictating that the reference need only live long enough to do the lookup; the lifetime 'b associated with the index fn is stating that the reference resulting from the lookup will live as long as the containing MapCollection.
Or am I simply not utilizing the correct syntax to specify the needed information?
(using rustc 1.0.0-nightly (b63cee4a1 2015-02-14 17:01:11 +0000))
Do you intend to implement IntoKey on structs that are going to store references of lifetime 'a? If not, you can change your trait and its implementations to:
trait IntoKey {
    fn into_key<'a>(&'a self) -> Key<'a>;
}
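For concreteness, here is a sketch (not part of the original answer) of the corresponding impls and of how get simplifies under this definition:

impl IntoKey for i32 {
    fn into_key<'a>(&'a self) -> Key<'a> { Key::I32Key(self) }
}

impl IntoKey for String {
    fn into_key<'a>(&'a self) -> Key<'a> { Key::StringKey(self) }
}

// `get` then no longer ties the borrow of `key` to a caller-chosen
// lifetime parameter:
// fn get<K: IntoKey>(&self, key: &K) -> Option<&Bar>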
This is the generally recommended definition style, if you can use it. If you can't...
Let's look at this smaller reproduction:
use std::collections::HashMap;
use std::ops::Index;

struct Key<'a>(&'a u8);

trait IntoKey<'a> {
    fn into_key(&'a self) -> Key<'a>;
}

struct MapCollection;

impl MapCollection {
    fn get<'a, K>(&self, key: &'a K) -> &u8
    where
        K: IntoKey<'a>,
    {
        unimplemented!()
    }
}

impl<'a, K> Index<K> for MapCollection
where
    K: IntoKey<'a>,
{
    type Output = u8;

    fn index<'b>(&'b self, k: &K) -> &'b u8 {
        self.get(k)
    }
}

fn main() {}
The problem lies in get:
fn get<'a, K>(&self, key: &'a K) -> &u8
where K: IntoKey<'a>
Here, we are taking a reference to K that must live as long as the Key we get out of it. However, the Index trait doesn't guarantee that:
fn index<'b>(&'b self, k: &K) -> &'b u8
You can fix this by simply giving a fresh lifetime to key:
fn get<'a, 'b, K>(&self, key: &'b K) -> &u8
where K: IntoKey<'a>
Or more succinctly:
fn get<'a, K>(&self, key: &K) -> &u8
where K: IntoKey<'a>

Index and IndexMut implementations to return borrowed vectors

I've been working on a multi-dimensional array library, toying around with different interfaces, and ran into an issue I can't seem to solve. This may be a simple misunderstanding of lifetimes, but I've tried just about every solution I can think of, without success.
The goal: implement the Index and IndexMut traits to return a borrowed vector from a 2D matrix, so that the syntax mat[rowind][colind] can be used.
A (very simplified) version of the data structure definition is below.
pub struct Matrix<T> {
    shape: [uint, ..2],
    dat: Vec<T>
}

impl<T: FromPrimitive+Clone> Matrix<T> {
    pub fn new(shape: [uint, ..2]) -> Matrix<T> {
        let size = shape.iter().fold(1, |a, &b| { a * b });
        // println!("Creating MD array of size: {} and shape: {}", size, shape)
        Matrix {
            shape: shape,
            dat: Vec::<T>::from_elem(size, FromPrimitive::from_uint(0u).expect("0 must be convertible to parameter type"))
        }
    }

    pub fn mut_index(&mut self, index: uint) -> &mut [T] {
        let base = index*self.shape[1];
        self.dat.mut_slice(base, base + self.shape[1])
    }
}

fn main() {
    let mut m = Matrix::<f32>::new([4u, 4]);
    println!("{}", m.dat);
    println!("{}", m.mut_index(3)[0]);
}
The mut_index method works exactly as I would like the IndexMut trait to work, except of course that it doesn't have the syntax sugar. My first attempt at implementing IndexMut made me wonder: since it returns a borrowed reference to the specified type, I really want to specify [T] as the type, but that isn't a valid type. So the only option is to specify &mut [T], like this:
impl<T: FromPrimitive+Clone> IndexMut<uint, &mut [T]> for Matrix<T> {
    fn index_mut(&mut self, index: &uint) -> &mut(&mut [T]) {
        let base = index*self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
This complains about a missing lifetime specifier on the trait impl line. So I try adding one.
impl<'a, T: FromPrimitive+Clone> IndexMut<uint, &'a mut [T]> for Matrix<T> {
    fn index_mut(&'a mut self, index: &uint) -> &mut(&'a mut [T]) {
        let base = index*self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
Now I get: method `index_mut` has an incompatible type for trait: expected concrete lifetime, but found bound lifetime parameter 'a [E0053]. Aside from this, I've tried just about every combination of one and two lifetimes I can think of, as well as creating a secondary structure to hold a reference that is stored in the outer structure during the indexing operation, so that a reference to that can be returned instead; but that's not possible for Index. The final answer may just be that this isn't possible, given the response on this old github issue, but that would seem to be a problematic limitation of the Index and IndexMut traits. Is there something I'm missing?
At present, this is not possible, but when Dynamically Sized Types lands I believe it will become possible.
Let’s look at the signature:
pub trait IndexMut<Index, Result> {
    fn index_mut<'a>(&'a mut self, index: &Index) -> &'a mut Result;
}
(Note the addition of the <'a> compared with what the docs say; I’ve filed #16228 about that.)
'a is an arbitrary lifetime, but it is important that it is specified on the method, not on the impl as a whole: it is in absolute truth a generic parameter to the method. I’ll show how it all comes out here with the names 'ρ₀ and 'ρ₁. So then, in this attempt:
impl<'ρ₀, T: FromPrimitive + Clone> IndexMut<uint, &'ρ₀ mut [T]> for Matrix<T> {
    fn index_mut<'ρ₁>(&'ρ₁ mut self, index: &uint) -> &'ρ₁ mut &'ρ₀ mut [T] {
        let base = index * self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
This satisfies the requirements that (a) all lifetimes must be explicit in the impl header, and (b) that the method signature matches the trait definition: Index is uint and Result is &'ρ₀ mut [T]. Because 'ρ₀ is defined on the impl block (so that it can be used as a parameter there) and 'ρ₁ on the method (because that’s what the trait defines), 'ρ₀ and 'ρ₁ cannot be combined into a single named lifetime. (You could call them both 'a, but this is shadowing and does not change anything except for the introduction of a bit more confusion!)
However, this is not enough to have it all work, and it will indeed not compile, because 'ρ₀ is not tied to anything, nor is there to tie it to in the signature. And so you cannot cast self.data.mut_slice(…), which is of type &'ρ₁ mut [T], to &'ρ₀ mut [T] as the lifetimes do not match, nor is there any known subtyping relationship between them (that is, it cannot structurally be demonstrated that the lifetime 'ρ₀ is less than—a subtype of—'ρ₁; although the return type of the method would make that clear, it is not so at the basic type level, and so it is not permitted).
Now as it happens, IndexMut isn’t as useful as it should be anyway owing to #12825, as matrix[1] would always use IndexMut and never Index if you have implemented both. I’m not sure if that’s any consolation, though!
The solution comes in Dynamically Sized Types. When that is here, [T] will be a legitimate unsized type which can be used as the type for Result and so this will be the way to write it:
impl<T: FromPrimitive + Clone> IndexMut<uint, [T]> for Matrix<T> {
    fn index_mut<'a>(&'a mut self, index: &uint) -> &'a mut [T] {
        let base = index * self.shape[1];
        &mut self.dat.mut_slice(base, base + self.shape[1])
    }
}
… but that’s not here yet.
This code works in Rust 1.25.0 (and probably has for quite a while):
extern crate num;
use num::Zero;

pub struct Matrix<T> {
    shape: [usize; 2],
    dat: Vec<T>,
}

impl<T: Zero + Clone> Matrix<T> {
    pub fn new(shape: [usize; 2]) -> Matrix<T> {
        let size = shape.iter().product();
        Matrix {
            shape: shape,
            dat: vec![T::zero(); size],
        }
    }

    pub fn mut_index(&mut self, index: usize) -> &mut [T] {
        let base = index * self.shape[1];
        &mut self.dat[base..][..self.shape[1]]
    }
}

fn main() {
    let mut m = Matrix::<f32>::new([4; 2]);
    println!("{:?}", m.dat);
    println!("{}", m.mut_index(3)[0]);
}
You can enhance it to support Index and IndexMut:
use std::ops::{Index, IndexMut};

impl<T> Index<usize> for Matrix<T> {
    type Output = [T];

    fn index(&self, index: usize) -> &[T] {
        let base = index * self.shape[1];
        &self.dat[base..][..self.shape[1]]
    }
}

impl<T> IndexMut<usize> for Matrix<T> {
    fn index_mut(&mut self, index: usize) -> &mut [T] {
        let base = index * self.shape[1];
        &mut self.dat[base..][..self.shape[1]]
    }
}
fn main() {
    let mut m = Matrix::<f32>::new([4; 2]);
    println!("{:?}", m.dat);
    println!("{}", m[3][0]);
    m[3][0] = 42.42;
    println!("{:?}", m.dat);
    println!("{}", m[3][0]);
}
