Can someone explain this lifetime code snippet? - rust

Uncompilable version:
fn main() {
    let v = vec![1, 2, 3, 4, 5, 6];
    let mut b = Buffer::new(v.as_slice());
    let r1 = b.read();
    let r2 = b.read();
    out2(r1, r2);
}

struct Buffer<'a> {
    buf: &'a [u8],
    pos: usize,
}

fn out2(a: &[u8], b: &[u8]) {
    println!("{:#?} {:#?}", a, b);
}

impl<'a> Buffer<'a> {
    fn new(a: &'a [u8]) -> Buffer<'a> {
        Buffer { buf: a, pos: 0 }
    }

    fn read(&'a mut self) -> &'a [u8] {
        self.pos += 3;
        &self.buf[self.pos - 3..self.pos]
    }
}
Successfully compiling version:
fn main() {
    let v = vec![1, 2, 3, 4, 5, 6];
    let mut b = Buffer::new(v.as_slice());
    let r1 = b.read();
    let r2 = b.read();
    out2(r1, r2);
}

struct Buffer<'a> {
    buf: &'a [u8],
    pos: usize,
}

fn out2(a: &[u8], b: &[u8]) {
    println!("{:#?} {:#?}", a, b);
}
// 'a outlives 'b ('a: 'b)
impl<'b, 'a: 'b> Buffer<'a> {
    fn new(a: &'a [u8]) -> Buffer<'a> {
        Buffer { buf: a, pos: 0 }
    }

    fn read(&'b mut self) -> &'a [u8] {
        self.pos += 3;
        &self.buf[self.pos - 3..self.pos]
    }
}
Both r1 and r2 hold partial references into the buffer, and neither holds a mutable reference.
The main difference is that the read function's return lifetime is longer than that of &mut self.
But I can't understand why.

The second snippet is equivalent to the following:
impl<'a> Buffer<'a> {
    fn read<'b>(&'b mut self) -> &'a [u8] {
        self.pos += 3;
        &self.buf[self.pos - 3..self.pos]
    }
}
Basically, &'a mut self, where 'a is the lifetime defined on the struct, is almost always wrong. It says the struct must stay mutably borrowed for as long as the data it holds. Since the data it holds lives from the creation of the instance to its end, so does this borrow; effectively, you are saying the method can be used only once.
The second snippet, on the other hand, takes a fresh, smaller lifetime for self and can therefore be called multiple times.
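To see the "only once" part concretely, here is a minimal sketch; the names Holder, bad and good are made up for illustration and are not from the snippets above:
struct Holder<'a> {
    data: &'a [u8],
}

impl<'a> Holder<'a> {
    // Mutably borrows `self` for the whole lifetime 'a of the held data,
    // so the exclusive borrow never ends after the first call.
    fn bad(&'a mut self) -> &'a [u8] {
        self.data
    }

    // Borrows `self` only for a fresh, shorter lifetime; the returned slice
    // still lives for 'a because it points into the borrowed data, not into `self`.
    fn good(&mut self) -> &'a [u8] {
        self.data
    }
}

fn main() {
    let v = [1u8, 2, 3];
    let mut h = Holder { data: &v };
    let a = h.good();
    let b = h.good(); // fine: each call takes its own short mutable borrow
    println!("{:?} {:?}", a, b);
    // let c = h.bad();
    // let d = h.bad(); // error: cannot borrow `h` as mutable more than once
}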

Related

How to return an iterator for a tuple of slices that iterates the first slice then the second slice?

I have a function that splits a slice into three parts, a leading and trailing slice, and a reference to the middle element.
/// The leading and trailing parts of a slice.
struct LeadingTrailing<'a, T>(&'a mut [T], &'a mut [T]);

/// Divides one mutable slice into three parts, a leading and trailing slice,
/// and a reference to the middle element.
pub fn split_at_rest_mut<T>(x: &mut [T], index: usize) -> (&mut T, LeadingTrailing<T>) {
    debug_assert!(index < x.len());
    let (leading, trailing) = x.split_at_mut(index);
    let (val, trailing) = trailing.split_first_mut().unwrap();
    (val, LeadingTrailing(leading, trailing))
}
I would like to implement Iterator for LeadingTrailing<'a, T> so that it first iterates over the first slice, and then over the second. i.e., it will behave like:
let mut foo = [0, 1, 2, 3, 4, 5];
let (item, lt) = split_at_rest_mut(&mut foo, 2);
for num in lt.0 {
    // ...
}
for num in lt.1 {
    // ...
}
I have tried converting to a Chain:
struct LeadingTrailing<'a, T>(&'a mut [T], &'a mut [T]);

impl<'a, T> LeadingTrailing<'a, T> {
    fn to_chain(&mut self) -> std::iter::Chain<&'a mut [T], &'a mut [T]> {
        self.0.iter_mut().chain(self.1.iter_mut())
    }
}
But I get the error:
89 | self.0.iter_mut().chain(self.1.iter_mut())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `&mut [T]`, found struct `std::slice::IterMut`
I have also tried creating a custom Iterator
/// The leading and trailing parts of a slice.
struct LeadingTrailing<'a, T>(&'a mut [T], &'a mut [T]);

struct LTOthersIterator<'a, T> {
    data: LeadingTrailing<'a, T>,
    index: usize,
}

/// Iterates over the first slice, then the second slice.
impl<'a, T> Iterator for LTOthersIterator<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<Self::Item> {
        let leading_len = self.data.0.len();
        let trailing_len = self.data.1.len();
        let total_len = leading_len + trailing_len;

        match self.index {
            0..=leading_len => {
                self.index += 1;
                self.data.0.get(self.index - 1)
            }
            leading_len..=total_len => {
                self.index += 1;
                self.data.1.get(self.index - leading_len - 1)
            }
        }
    }
}
But I get the error:
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
--> src\main.rs:104:29
|
104 | self.data.0.get(self.index - 1)
^^^
What is the correct way to do this?
You either let the compiler do the work:
impl<'a, T> LeadingTrailing<'a, T> {
    fn to_chain(&mut self) -> impl Iterator<Item = &mut T> {
        self.0.iter_mut().chain(self.1.iter_mut())
    }
}
Or prescribe the correct type: Chain takes the iterators, not the things they were created from.
impl<'a, T> LeadingTrailing<'a, T> {
    fn to_chain(&'a mut self) -> std::iter::Chain<std::slice::IterMut<'a, T>, std::slice::IterMut<'a, T>> {
        self.0.iter_mut().chain(self.1.iter_mut())
    }
}
The return value of to_chain() is incorrect.
For simplicity, just use impl Iterator.
/// The leading and trailing parts of a slice.
#[derive(Debug)]
pub struct LeadingTrailing<'a, T>(&'a mut [T], &'a mut [T]);

/// Divides one mutable slice into three parts, a leading and trailing slice,
/// and a reference to the middle element.
pub fn split_at_rest_mut<T>(x: &mut [T], index: usize) -> (&mut T, LeadingTrailing<T>) {
    debug_assert!(index < x.len());
    let (leading, trailing) = x.split_at_mut(index);
    let (val, trailing) = trailing.split_first_mut().unwrap();
    (val, LeadingTrailing(leading, trailing))
}

impl<T> LeadingTrailing<'_, T> {
    fn to_chain(&mut self) -> impl Iterator<Item = &mut T> {
        self.0.iter_mut().chain(self.1.iter_mut())
    }
}
fn main() {
    let mut arr = [0, 1, 2, 3, 4, 5, 6, 7, 8];
    let (x, mut leadtrail) = split_at_rest_mut(&mut arr, 5);
    println!("x: {}", x);
    println!("leadtrail: {:?}", leadtrail);

    for el in leadtrail.to_chain() {
        *el *= 2;
    }
    println!("leadtrail: {:?}", leadtrail);
}
x: 5
leadtrail: LeadingTrailing([0, 1, 2, 3, 4], [6, 7, 8])
leadtrail: LeadingTrailing([0, 2, 4, 6, 8], [12, 14, 16])
The fully written out version would be:
impl<T> LeadingTrailing<'_, T> {
    fn to_chain(&mut self) -> std::iter::Chain<std::slice::IterMut<T>, std::slice::IterMut<T>> {
        self.0.iter_mut().chain(self.1.iter_mut())
    }
}

What does the mut do here?

See the comments: what does the mut in mut self_encoded mean here?
pub trait DecodeLength {
    fn len(self_encoded: &[u8]) -> Result<usize, Error>;
}

// This compiles.
impl DecodeLength for i32 {
    // here
    fn len(mut self_encoded: &[u8]) -> Result<usize, Error> {
        usize::try_from(u32::from(Compact::<u32>::decode(&mut self_encoded)?))
            .map_err(|_| "Failed convert decoded size into usize.".into())
    }
}

// This does not work, since the signature of this len is not correct.
// impl DecodeLength for i32 {
//     fn len(self_encoded: &mut [u8]) -> Result<usize, Error> {
//         Ok(2)
//     }
// }
You must remember that when you change your fn like this:
fn len(mut self_encoded: &[u8]) -> usize {
    2
}
you didn't change the actual input type; it is still &[u8]. You only told the compiler that the value of the input variable (the reference itself) can be reassigned inside the function, like this:
impl DecodeLength for i32 {
    // here
    fn len(mut self_encoded: &[u8]) -> usize {
        self_encoded = b"123";
        2
    }
}
But when you change the input type from &[u8] to &mut [u8], you choose a new type for the input.
Now the compiler gives you an error: "expected &[u8], found &mut [u8]".
// Error: expected fn pointer `fn(&[u8]) -> _`, found fn pointer `fn(&mut [u8]) -> _`
impl DecodeLength for i32 {
    fn len(self_encoded: &mut [u8]) -> usize {
        2
    }
}
Remember that &[u8] and &mut [u8] are different types with different uses.
[Edit]
You can't use ? in a function whose return type isn't Result.
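For instance, a small standalone sketch (not from the question's code) of what the compiler accepts and rejects:
// `?` propagates the error, so the enclosing function must return Result (or Option).
fn parsed(s: &str) -> Result<usize, std::num::ParseIntError> {
    let n: usize = s.parse()?; // ok: the error type matches the return type
    Ok(n)
}

// fn broken(s: &str) -> usize {
//     let n: usize = s.parse()?; // error: `?` can only be used in a function that returns `Result` or `Option`
//     n
// }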
[Edit 2]
Look at the following code:
impl DecodeLength for i32 {
    // here
    fn len(mut self_encoded: &[u8]) -> usize {
        let mut_ref = &mut self_encoded; // this gives you `&mut &[u8]`
        let mut_ref2 = self_encoded.as_mut(); // Error: `as_mut` is only valid for `&mut` references
        1
    }
}
You can't turn a &[u8] into a &mut [u8]; that will give you an error.
The simple way is to change your DecodeLength trait like this:
pub trait DecodeLength {
    fn len(self_encoded: &mut [u8]) -> Result<usize, Error>;
}
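With that change, the impl that the question had commented out matches the trait signature and compiles; a sketch, reusing the question's Error type:
impl DecodeLength for i32 {
    // The parameter type now matches the trait's &mut [u8] signature.
    fn len(_self_encoded: &mut [u8]) -> Result<usize, Error> {
        Ok(2)
    }
}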
&[u8]
It's a reference to an immutable byte slice: its contents can't be changed through it, you can only read them.
fn immutable_slice(input: &[u8]) {
    // input[0] = b'a'; // Give you an Error
    if input[0] == b'a' {
        println!("Index 0 is 'a'");
    }
}
&mut [u8]
It's a reference to a mutable byte slice: its contents can be changed inside the fn.
fn mutable_slice(input: &mut [u8]) {
    input[0] = b'a';
    println!("{:?}", input);
}
You can test it like this:
fn main() {
    let numbers1: Vec<u8> = vec![1, 2, 3, 4, 5];
    immutable_slice(&numbers1);

    let mut numbers2: Vec<u8> = vec![1, 2, 3, 4, 5];
    mutable_slice(&mut numbers2);
}

How can concatenated &[u8] slices implement the Read trait without additional copying?

The Read trait is implemented for &[u8]. How can I get a Read trait over several concatenated u8 slices without actually doing any concatenation first?
If I concatenate first, there will be two copies: first from the multiple slices into a single array, then from that single array to the destination via the Read trait. I would like to avoid the first copy.
I want a Read trait over &[&[u8]] that treats multiple slices as a single continuous slice.
fn foo<R: std::io::Read + Send>(data: R) {
    // ...
}
let a: &[u8] = &[1, 2, 3, 4, 5];
let b: &[u8] = &[1, 2];
let c: &[&[u8]] = &[a, b];
foo(c); // <- this won't compile because `c` is not a slice of bytes.
You could use the multi_reader crate, which can concatenate any number of values that implement Read:
let a: &[u8] = &[1, 2, 3, 4, 5];
let b: &[u8] = &[1, 2];
let c: &[&[u8]] = &[a, b];
foo(multi_reader::MultiReader::new(c.iter().copied()));
If you don't want to depend on an external crate, you can wrap the slices in a struct of your own and implement Read for it:
use std::io::Read;

struct MultiRead<'a> {
    sources: &'a [&'a [u8]],
    pos_in_current: usize,
}

impl<'a> MultiRead<'a> {
    fn new(sources: &'a [&'a [u8]]) -> MultiRead<'a> {
        MultiRead {
            sources,
            pos_in_current: 0,
        }
    }
}

impl Read for MultiRead<'_> {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        let current = loop {
            if self.sources.is_empty() {
                return Ok(0); // EOF
            }
            let current = self.sources[0];
            if self.pos_in_current < current.len() {
                break current;
            }
            self.pos_in_current = 0;
            self.sources = &self.sources[1..];
        };
        let read_size = buf.len().min(current.len() - self.pos_in_current);
        buf[..read_size].copy_from_slice(&current[self.pos_in_current..][..read_size]);
        self.pos_in_current += read_size;
        Ok(read_size)
    }
}
Playground
Create a wrapper type around the slices and implement Read for it. Compared to user4815162342's answer, I delegate down to the implementation of Read for slices:
use std::{io::Read, mem};

struct Wrapper<'a, 'b>(&'a mut [&'b [u8]]);

impl<'a, 'b> Read for Wrapper<'a, 'b> {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        let slices = mem::take(&mut self.0);
        match slices {
            [head, ..] => {
                let n_bytes = head.read(buf)?;
                if head.is_empty() {
                    // Advance the child slice
                    self.0 = &mut slices[1..];
                } else {
                    // More to read, put back all the child slices
                    self.0 = slices;
                }
                Ok(n_bytes)
            }
            _ => Ok(0),
        }
    }
}

fn main() {
    let parts: &mut [&[u8]] = &mut [b"hello ", b"world"];
    let mut w = Wrapper(parts);

    let mut buf = Vec::new();
    w.read_to_end(&mut buf).unwrap();

    assert_eq!(b"hello world", &*buf);
}
A more efficient implementation would implement further methods from Read, such as read_to_end or read_vectored.
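For example, read_to_end can append everything that is left in one pass instead of looping through read() with a fixed buffer. A sketch of such an override, to be added inside the impl Read for Wrapper block above (the method body is my own, not from the answer):
// Inside `impl<'a, 'b> Read for Wrapper<'a, 'b>`:
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> std::io::Result<usize> {
    let mut total = 0;
    // `read` shrinks the head slice in place, so whatever remains in each
    // child slice is exactly the data that has not been read yet.
    for &slice in mem::take(&mut self.0).iter() {
        buf.extend_from_slice(slice);
        total += slice.len();
    }
    Ok(total)
}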
See also:
How do I implement a trait I don't own for a type I don't own?

How to define a generic function that takes a function that converts a slice to an iterator

I want to write a function that processes some slices in different orders, so I decided to write a function that is generic over the iteration order, something like:
fn foo<'a, I: Iterator<Item = &'a mut i32>>(make_iter: impl Fn(&'a mut [i32]) -> I) {
    let mut data = [1, 2, 3, 4];
    make_iter(&mut data);
}

fn main() {
    foo(|x| x.iter_mut());
    foo(|x| x.iter_mut().rev());
}
This causes a “borrowed value does not live long enough” error.
I imagine something like
fn foo(make_iter: impl for<'a> Fn(&'a mut [i32]) -> impl Iterator<Item = &'a mut i32>) {
    let mut data = [1, 2, 3, 4];
    make_iter(&mut data);
}
should be used, but impl Iterator is not allowed in that position. So is there anything I can do?
Update:
The slices to be processed should be considered as dynamically generated inside the foo function, and they are dropped after processing.
Your function is mostly correct. The compile error "borrowed value does not live long enough" comes from the fact that you define your data inside foo rather than passing it in; the offending line is let mut data = [1, 2, 3, 4];
The lifetime of data is the same as the body of foo, because it is created inside foo. The closure, however, is passed in as an argument to foo, so its lifetime is longer than that of data. When foo goes out of scope, data is dropped, yet the closure's signature promises references into data that outlive it. That is why you get the "borrowed value does not live long enough" error.
You can make this compile by passing the data into foo as an argument; in that case you will not have the lifetime issue.
The below code will compile.
fn foo<'a, I: Iterator<Item = &'a mut i32>>(make_iter: impl Fn(&'a mut [i32]) -> I, data: &'a mut Vec<i32>) {
    // let mut data = [1, 2, 3, 4];
    make_iter(data);
}

fn main() {
    let mut data = vec![1, 2, 3, 4];
    foo(|x| x.iter_mut(), &mut data);
    foo(|x| x.iter_mut().rev(), &mut data);
}
Rust Playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=8b263451fcb01518b3f35bda8485af9c
Update: Sorry for misunderstanding your requirements. I was trying to come up with a clean way to write this, but the best I can come up with uses Box<dyn ...>. I know there is a runtime cost for Box<dyn ...>, but I can't come up with a better way using impl Iterator.
The implementation using Box<dyn ...> is:
fn foo<F>(make_iter: F)
where
    for<'a> F: Fn(&'a mut [i32]) -> Box<dyn Iterator<Item = &'a mut i32> + 'a>,
{
    let mut data = vec![1, 2, 3, 4];
    make_iter(&mut data);
}

fn main() {
    foo(|x| Box::new(x.iter_mut()));
    foo(|x| Box::new(x.iter_mut().rev()));
}
I came up with one solution:
use std::iter::Rev;
use std::slice::IterMut;

trait MakeIter<'a> {
    type Iter: Iterator<Item = &'a mut i32>;
    fn make_iter(&mut self, slice: &'a mut [i32]) -> Self::Iter;
}

fn foo(mut make_iter: impl for<'a> MakeIter<'a>) {
    let mut data = [1, 2, 3, 4];
    make_iter.make_iter(&mut data);
}

struct Forward;

impl<'a> MakeIter<'a> for Forward {
    type Iter = IterMut<'a, i32>;
    fn make_iter(&mut self, slice: &'a mut [i32]) -> Self::Iter {
        slice.iter_mut()
    }
}

struct Backward;

impl<'a> MakeIter<'a> for Backward {
    type Iter = Rev<IterMut<'a, i32>>;
    fn make_iter(&mut self, slice: &'a mut [i32]) -> Self::Iter {
        slice.iter_mut().rev()
    }
}

fn main() {
    foo(Forward);
    foo(Backward);
}
But I am not sure whether it can be simplified.
Update
Here is a simplification:
trait MakeIter<'a> {
    type Iter: Iterator<Item = &'a mut i32>;
    fn make_iter(&mut self, slice: &'a mut [i32]) -> Self::Iter;
}

fn foo(mut make_iter: impl for<'a> MakeIter<'a>) {
    let mut data = [1, 2, 3, 4];
    make_iter.make_iter(&mut data);
}

impl<'a, F, R> MakeIter<'a> for F
where
    F: FnMut(&'a mut [i32]) -> R,
    R: Iterator<Item = &'a mut i32>,
{
    type Iter = R;
    fn make_iter(&mut self, slice: &'a mut [i32]) -> Self::Iter {
        self(slice)
    }
}

fn iter_forward(slice: &mut [i32]) -> impl Iterator<Item = &mut i32> {
    slice.iter_mut()
}

fn iter_backward(slice: &mut [i32]) -> impl Iterator<Item = &mut i32> {
    slice.iter_mut().rev()
}

fn main() {
    foo(iter_forward);
    foo(iter_backward);
}

How do I upgrade my Slicable trait to satisfy the borrow checker in repeated calls

I have an IO library that has a big State struct and I am writing a function that requires two phases.
In the first phase, only the reader class is touched, but the call-site chooses a read-only table slice to pass in.
In the second phase, the whole State struct is modified but the read-only table is no longer needed.
I've split the function into two functions, and that works, but when I try to combine those functions the borrow checker breaks down once I replace the concrete Vector struct with a custom Slicable trait.
Is there any way to make process_with_table_fn operate on Slicable values in the State struct rather than directly on Vectors?
tl;dr: I'd like fn what_i_want_to_work to compile, but so far I have only gotten what_works to build. Is my trait definition for Slicable badly crafted for this use case? Why does the concrete type work better than the trait?
pub struct MemReader {
    buf: [u8; 1024],
    off: usize,
}

pub struct Vector<'a> {
    buf: &'a [u32],
}

trait Slicable {
    fn slice(self: &Self) -> &[u32];
}

impl<'a> Slicable for Vector<'a> {
    fn slice(self: &Self) -> &[u32] {
        return self.buf;
    }
}

impl MemReader {
    fn read(self: &mut Self, how_much: usize, output: &mut u8) -> bool {
        if self.off + how_much > self.buf.len() {
            return false;
        }
        self.off += how_much;
        *output = self.buf[self.off - 1];
        return true;
    }
}

pub struct State<'a> {
    pub mr: MemReader,
    pub translation_tables: [Vector<'a>; 4],
    pub other_tables: [Vector<'a>; 4],
    pub state: i32,
}

fn process_first(mr: &mut MemReader, table: &[u32]) -> (bool, u32) {
    let mut temp: u8 = 0;
    let ret = mr.read(8, &mut temp);
    if !ret {
        return (false, 0);
    }
    return (true, table[temp as usize]);
}

fn process_second(s: &mut State, ret_index: (bool, u32), mut outval: &mut u8) -> bool {
    let (ret, index) = ret_index;
    if !ret {
        return false;
    }
    s.state += 1;
    return s.mr.read(index as usize, &mut outval);
}

pub fn process_with_table_fn(mut s: &mut State, table: &[u32], mut outval: &mut u8) -> bool {
    let ret = process_first(&mut s.mr, table);
    return process_second(&mut s, ret, &mut outval);
}

macro_rules! process_with_table_mac(
    ($state : expr, $table : expr, $outval : expr) => {
        process_second(&mut $state, process_first(&mut $state.mr, &$table), &mut $outval)
    };
);

pub fn what_works(mut s: &mut State) {
    let mut outval0: u8 = 0;
    let _ret0 = process_with_table_fn(&mut s, &s.other_tables[2].buf[..], &mut outval0);
}

/*
pub fn what_i_want_to_work(mut s: &mut State) {
    let mut outval0: u8 = 0;
    let ret0 = process_with_table_fn(&mut s, s.other_tables[2].slice(), &mut outval0);
    // OR
    let mut outval1: u8 = 0;
    //let ret1 = process_with_table_mac!(s, s.other_tables[2].slice(), outval1);
}
*/

fn main() {
}
There are two things going on. Let's look at your trait implementation first:
impl<'a> Slicable for Vector<'a> {
    fn slice(self: &Self) -> &[u32] {
        return self.buf;
    }
}
The method's signature is expanded to
fn slice<'b>(self: &'b Self) -> &'b [u32]
which means that the lifetime of the resulting slice is shorter than the lifetime of self. At the call site, this means that s.other_tables[2].slice() borrows s while &s.other_tables[2].buf[..] borrows something that has lifetime 'a, completely ignoring the lifetime of s. To replicate this behavior, you can add a lifetime to your trait:
trait Slicable<'a> {
    fn slice(self: &Self) -> &'a [u32];
}

impl<'a> Slicable<'a> for Vector<'a> {
    fn slice(self: &Self) -> &'a [u32] {
        self.buf
    }
}
Now you should be set, but the compiler still has a minor limitation with respect to method call lifetimes, so you need to split your call into two lines:
let slice = s.other_tables[2].slice();
let ret0 = process_with_table_fn(&mut s, slice, &mut outval0);
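Putting it together, what_i_want_to_work would then look like this (a sketch, assuming the lifetime-carrying Slicable<'a> trait above):
pub fn what_i_want_to_work(mut s: &mut State) {
    let mut outval0: u8 = 0;
    // Take the slice first; with Slicable<'a> it borrows for 'a, not from `s`,
    // so the later mutable borrow of `s` is allowed.
    let slice = s.other_tables[2].slice();
    let _ret0 = process_with_table_fn(&mut s, slice, &mut outval0);
}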
