Mapping a struct to an underlying buffer - rust

I am attempting to map a simple struct to an underlying buffer as follows, where modifying the struct also modifies the buffer:
#[repr(C, packed)]
pub struct User {
id: u8,
username: [u8; 20],
}
fn main() {
let buf = [97u8; 21];
let mut u: User = unsafe { std::ptr::read(buf.as_ptr() as *const _) };
let buf_addr = &buf[0] as *const u8;
let id_addr = &u.id as *const u8;
println!("buf addr: {:p} id addr: {:p} id val: {}", buf_addr, id_addr, u.id);
assert_eq!(buf_addr, id_addr); // TODO addresses not equal
u.id = 10;
println!("id val: {}", u.id);
println!("{:?}", &buf);
assert_eq!(buf[0], u.id); // TODO buffer not updated
}
However, the starting address of the buffer is different from the address of the first member of the struct, and modifying the struct does not modify the buffer. What is wrong with the above example?

The struct only contains owned values. That means that, in order to construct one, you have to copy data into it, and that is exactly what ptr::read does: it makes a bitwise copy of the 21 bytes into a brand-new User.
What you want to do (at least with the code presented) is not possible. If you worked around Rust's safety checks with unsafe code so that the struct and the buffer overlapped, you would have two mutable references to the same data, which is Undefined Behaviour.
You can, however, create a safe API of mutable "views" onto the data, something like this:
#[repr(C, packed)]
pub struct User {
id: u8,
username: [u8; 20],
}
pub struct RawUser {
buf: [u8; 21],
}
impl RawUser {
pub fn as_bytes_mut(&mut self) -> &mut [u8; 21] {
&mut self.buf
}
pub fn as_user_mut(&mut self) -> &mut User {
unsafe { &mut *(self.buf.as_mut_ptr() as *mut _) }
}
}
These accessors let you view the same data in different ways, while allowing the Rust borrow checker to enforce memory safety. Usage looks like this:
fn main() {
let buf = [97u8; 21];
let mut u: RawUser = RawUser { buf };
let user = u.as_user_mut();
user.id = 10;
println!("id val: {}", user.id); // id val: 10
let bytes = u.as_bytes_mut();
// it would be a compile error to try to access `user` here
assert_eq!(bytes[0], 10);
}
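A read-only view can follow the same pattern. This is a small sketch in the same spirit (an addition alongside the answer above, not something it requires):
impl RawUser {
    // Shared view of the same bytes. This is sound for this particular layout
    // because User is #[repr(C, packed)]: it has alignment 1, occupies exactly
    // the 21 bytes of `buf`, and every byte pattern is a valid User.
    pub fn as_user(&self) -> &User {
        unsafe { &*(self.buf.as_ptr() as *const User) }
    }
}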

Related

Is there a way to emit a compile error if a struct contains padding?

I'm really after an opt-in derivable trait that safely returns an object's unique representation as bytes. In my application, I noticed a ~20x speedup from hashing the value as a byte array instead of using the derived implementation. AFAIK, this is safe for Copy types with a well-defined representation and no padding. The current implementation expands to something like this.
use core::mem::{size_of, transmute, MaybeUninit};
pub trait ByteValued: Copy {
fn byte_repr(&self) -> &[u8];
}
pub struct AssertByteValued<T: ByteValued> {
_phantom: ::core::marker::PhantomData<T>
}
macro_rules! impl_byte_repr {
() => {
fn byte_repr(&self) -> &[u8] {
let len = size_of::<Self>();
unsafe {
let self_ptr: *const u8 = transmute(self as *const Self);
core::slice::from_raw_parts(self_ptr, len)
}
}
}
}
// Manual implementations for builtin/std types
impl ByteValued for u32 { impl_byte_repr!{} }
impl ByteValued for usize { impl_byte_repr!{} }
impl<T: ByteValued> ByteValued for MaybeUninit<T> { impl_byte_repr!{} }
impl<T: ByteValued, const N: usize> ByteValued for [T; N] { impl_byte_repr!{} }
// Expanded version of a proc_macro generated derived implementation
pub struct ArrayVec<T, const CAP: usize> {
data: [MaybeUninit<T>; CAP],
len: usize,
}
impl<T: Clone, const CAP: usize> Clone for ArrayVec<T, CAP> {
fn clone(&self) -> Self { todo!() }
}
impl<T: Copy, const CAP: usize> Copy for ArrayVec<T, CAP> {}
// This is only valid if all unused capacity is always consistently represented
impl<T: ByteValued, const CAP: usize> ByteValued for ArrayVec<T, CAP> {
fn byte_repr(&self) -> &[u8] {
// Compiletime check all fields are also ByteValued
let _: AssertByteValued<[MaybeUninit<T>; CAP]>;
let _: AssertByteValued<usize>;
// Runtime check for no padding
let _self_size = size_of::<Self>();
let _field_size = size_of::<[MaybeUninit<T>; CAP]>() + size_of::<usize>();
assert!(_self_size == _field_size, "Must not contain padding");
let len = size_of::<Self>();
unsafe {
let self_ptr: *const u8 = transmute(self as *const Self);
::core::slice::from_raw_parts(self_ptr, len)
}
}
}
fn main() {
let x = ArrayVec::<u32, 4> {
data: unsafe { MaybeUninit::zeroed().assume_init() },
len: 0
};
let bytes = x.byte_repr();
assert_eq!(bytes, &[0; 24]);
// This unconditionally panics, but I want a compile error
let y = ArrayVec::<u32, 3> {
data: unsafe { MaybeUninit::zeroed().assume_init() },
len: 0
};
let _ = y.byte_repr();
}
The tricky bit here is asserting that there is no padding in byte_repr. As written, this checks the object's size against the sum of the sizes of its fields at runtime. I would like to make that assert const so that I get a compile error, but that doesn't work because the sizes depend on the generic parameters. So, is there a way to emit a compile error (potentially from a proc_macro) if a struct contains padding between its fields?
I suggest starting with bytemuck::NoUninit. This is a derivable trait which guarantees that the type has no uninitialized bytes of any sort (including padding). After implementing it, you can use bytemuck::bytes_of() to get the &[u8] you want to work with.
This cannot simply be derived for your ArrayVec, since you are explicitly using MaybeUninit, but you can add a T: NoUninit bound to ArrayVec and blanket-implement your ByteValued for all NoUninit types, which will both check the condition on T that you care about and reduce the number of impls you need to write.
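For illustration, a minimal sketch of that approach could look like this (assuming a recent bytemuck 1.x with its derive feature enabled; Point is just an example type, and ByteValued is repeated so the sketch stands on its own):
use bytemuck::NoUninit;

// The derive requires a well-defined layout (repr(C) here), checks that all
// fields are NoUninit, and fails to compile if the struct contains padding.
#[derive(Clone, Copy, NoUninit)]
#[repr(C)]
struct Point {
    x: u32,
    y: u32,
}

pub trait ByteValued: Copy {
    fn byte_repr(&self) -> &[u8];
}

// Blanket implementation: anything bytemuck certifies as NoUninit can be
// viewed as bytes via bytemuck::bytes_of.
impl<T: NoUninit + Copy> ByteValued for T {
    fn byte_repr(&self) -> &[u8] {
        bytemuck::bytes_of(self)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(p.byte_repr().len(), core::mem::size_of::<Point>());
}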

How can concatenated &[u8] slices implement the Read trait without additional copying?

The Read trait is implemented for &[u8]. How can I get a Read trait over several concatenated u8 slices without actually doing any concatenation first?
If I concatenate first, there will be two copies -- first from the multiple arrays into a single array, then from that single array to the destination via the Read trait. I would like to avoid the first copy.
I want a Read trait over &[&[u8]] that treats multiple slices as a single continuous slice.
fn foo<R: std::io::Read + Send>(data: R) {
// ...
}
let a: &[u8] = &[1, 2, 3, 4, 5];
let b: &[u8] = &[1, 2];
let c: &[&[u8]] = &[a, b];
foo(c); // <- this won't compile because `c` is not a slice of bytes.
You could use the multi_reader crate, which can concatenate any number of values that implement Read:
let a: &[u8] = &[1, 2, 3, 4, 5];
let b: &[u8] = &[1, 2];
let c: &[&[u8]] = &[a, b];
foo(multi_reader::MultiReader::new(c.iter().copied()));
If you don't want to depend on an external crate, you can wrap the slices in a struct of your own and implement Read for it:
use std::io::Read;
struct MultiRead<'a> {
sources: &'a [&'a [u8]],
pos_in_current: usize,
}
impl<'a> MultiRead<'a> {
fn new(sources: &'a [&'a [u8]]) -> MultiRead<'a> {
MultiRead {
sources,
pos_in_current: 0,
}
}
}
impl Read for MultiRead<'_> {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
let current = loop {
if self.sources.is_empty() {
return Ok(0); // EOF
}
let current = self.sources[0];
if self.pos_in_current < current.len() {
break current;
}
self.pos_in_current = 0;
self.sources = &self.sources[1..];
};
let read_size = buf.len().min(current.len() - self.pos_in_current);
buf[..read_size].copy_from_slice(&current[self.pos_in_current..][..read_size]);
self.pos_in_current += read_size;
Ok(read_size)
}
}
Playground
Create a wrapper type around the slices and implement Read for it. Compared to user4815162342's answer, I delegate down to the implementation of Read for slices:
use std::{io::Read, mem};
struct Wrapper<'a, 'b>(&'a mut [&'b [u8]]);
impl<'a, 'b> Read for Wrapper<'a, 'b> {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
let slices = mem::take(&mut self.0);
match slices {
[head, ..] => {
let n_bytes = head.read(buf)?;
if head.is_empty() {
// Advance the child slice
self.0 = &mut slices[1..];
} else {
// More to read, put back all the child slices
self.0 = slices;
}
Ok(n_bytes)
}
_ => Ok(0),
}
}
}
fn main() {
let parts: &mut [&[u8]] = &mut [b"hello ", b"world"];
let mut w = Wrapper(parts);
let mut buf = Vec::new();
w.read_to_end(&mut buf).unwrap();
assert_eq!(b"hello world", &*buf);
}
A more efficient implementation would implement further methods from Read, such as read_to_end or read_vectored.
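For instance, a read_to_end override for the Wrapper above could delegate to each child slice in turn. This is only a sketch; it relies on the fact that &[u8]'s own read_to_end appends the whole remaining slice and leaves it empty:
// Goes inside the `impl<'a, 'b> Read for Wrapper<'a, 'b>` block shown above.
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> std::io::Result<usize> {
    let mut total = 0;
    for slice in self.0.iter_mut() {
        // Each child appends all of its remaining bytes and becomes empty.
        total += slice.read_to_end(buf)?;
    }
    Ok(total)
}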
See also:
How do I implement a trait I don't own for a type I don't own?

In Rust, is it possible to have a mutable copy-able string allocated on the stack?

I need something that will make this work:
#[derive(Copy)]
struct TestData {
val : MutableString
}
Presumably, the MutableString will have to be of fixed size, and I am okay with that.
We can use const generics for this!
#[derive(Copy, Clone)]
struct TestData<const SIZE1: usize> {
val : MutableString<SIZE1>
}
#[derive(Copy, Clone)]
struct MutableString<const SIZE: usize> {
inner : [u8; SIZE],
len: usize
}
(note: both of these are Copy, but Clone must also be derived because Clone is a supertrait of Copy)
The SIZE parameter must be a value that is computable at compile time, so you are right about needing fixed-length strings.
We need a len field here because we need to know how long the actual string itself is. The alternative would be a null terminator, like C uses.
I defined a couple of methods to make one of these from a &str and turn one back into a &str:
impl<'a, const SIZE: usize> MutableString<SIZE> {
fn from_str(s: &str) -> Result<Self, &'static str> {
let slice = s.as_bytes();
if slice.len() > SIZE {
Err("String is too big")
} else {
let mut inner = [0; SIZE];
for (i, byte) in slice.iter().enumerate() { inner[i] = *byte; }
Ok(Self {
inner,
len: slice.len()
})
}
}
fn to_str(&'a self) -> &'a str {
std::str::from_utf8(&self.inner[..self.len]).unwrap()
}
}
In the real world you'd want more error handling around this, and you'd also offer methods that guard mutation, so that the contents can't grow beyond SIZE and len stays up to date.
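For example, a guarded append could look like the following sketch (push_str is just an illustrative name added alongside the impl above, not part of the answer):
impl<const SIZE: usize> MutableString<SIZE> {
    // Append a &str, refusing if it would overflow the fixed-size buffer,
    // and keep `len` consistent with the stored bytes.
    fn push_str(&mut self, s: &str) -> Result<(), &'static str> {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > SIZE {
            return Err("Not enough capacity");
        }
        self.inner[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}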
We can verify that the string is being copied:
fn main() {
let mut_str = MutableString::from_str("Hello, world!").unwrap();
let mut data: TestData<256> = TestData { val: mut_str };
let copy = data;
data.val.inner[12] = b'?'; // irl guard this around a method that updates len, etc.
println!("{}", data.val.to_str());
println!("{}", copy.val.to_str());
}

Lockless processing of non overlapping non contiguous indexes by multiple threads in Rust

I am practicing Rust and decided to create a matrix ops/factorization project.
Basically I want to be able to process the underlying vector in multiple threads. Since I will be providing each thread non-overlapping indexes (which may or may not be contiguous), and the threads will be joined before the end of whatever function created them, there is no need for a lock or any other synchronization.
I know that there are several crates that can do this, but I would like to know if there is a relatively idiomatic, crate-free way to implement it on my own.
The best I could come up with is this (I've simplified the code a bit):
use std::thread;
//This represents the Matrix
#[derive(Debug, Clone)]
pub struct MainStruct {
pub data: Vec<f64>,
}
//This is the bit that will be shared by the threads;
//ideally its lifetime should be tied to that of MainStruct,
//but I have no idea how to make PhantomData work in this case
#[derive(Debug, Clone)]
pub struct SliceTest {
pub data: Vec<SubSlice>,
}
//This struct is to hide *mut f64 to allow it to be shared to other threads
#[derive(Debug, Clone)]
pub struct SubSlice {
pub data: *mut f64,
}
impl MainStruct {
pub fn slice(&mut self) -> (SliceTest, SliceTest) {
let mut out_vec_odd: Vec<SubSlice> = Vec::new();
let mut out_vec_even: Vec<SubSlice> = Vec::new();
unsafe {
let ptr = self.data.as_mut_ptr();
for i in 0..self.data.len() {
let ptr_to_push = ptr.add(i);
//Non contiguous idxs
if i % 2 == 0 {
out_vec_even.push(SubSlice{data:ptr_to_push});
} else {
out_vec_odd.push(SubSlice{data:ptr_to_push});
}
}
}
(SliceTest{data: out_vec_even}, SliceTest{data: out_vec_odd})
}
}
impl SubSlice {
pub fn set(&self, val: f64) {
unsafe {*(self.data) = val;}
}
}
unsafe impl Send for SliceTest {}
unsafe impl Send for SubSlice {}
fn main() {
let mut maindata = MainStruct {
data: vec![0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
};
let (mut outvec1, mut outvec2) = maindata.slice();
let mut threads = Vec::new();
threads.push(
thread::spawn(move || {
for i in 0..outvec1.data.len() {
outvec1.data[i].set(999.9);
}
})
);
threads.push(
thread::spawn(move || {
for i in 0..outvec2.data.len() {
outvec2.data[i].set(999.9);
}
})
);
for handles in threads {
handles.join().unwrap();
}
println!("maindata = {:?}", maindata.data);
}
EDIT:
Following kmdreko's suggestion below, I got the code to work exactly how I wanted without any unsafe code, yay!
Of course, in terms of performance it may be cheaper to copy the f64 slices than to build vectors of mutable references, unless your struct is filled with other structs instead of f64s.
extern crate crossbeam;
use crossbeam::thread;
#[derive(Debug, Clone)]
pub struct Matrix {
data: Vec<f64>,
m: usize, //number of rows
n: usize, //number of cols
}
...
impl Matrix {
...
pub fn get_data_mut(&mut self) -> &mut Vec<f64> {
&mut self.data
}
pub fn calculate_idx(max_cols: usize, i: usize, j: usize) -> usize {
let actual_idx = j + max_cols * i;
actual_idx
}
//Get individual mutable references for contiguous indexes (rows)
pub fn get_all_row_slices(&mut self) -> Vec<Vec<&mut f64>> {
let max_cols = self.max_cols();
let max_rows = self.max_rows();
let inner_data = self.get_data_mut().chunks_mut(max_cols);
let mut out_vec: Vec<Vec<&mut f64>> = Vec::with_capacity(max_rows);
for chunk in inner_data {
let row_vec = chunk.iter_mut().collect();
out_vec.push(row_vec);
}
out_vec
}
//Get mutable references for disjoint indexes (columns)
pub fn get_all_col_slices(&mut self) -> Vec<Vec<&mut f64>> {
let max_cols = self.max_cols();
let max_rows = self.max_rows();
let inner_data = self.get_data_mut().chunks_mut(max_cols);
let mut out_vec: Vec<Vec<&mut f64>> = Vec::with_capacity(max_cols);
for _ in 0..max_cols {
out_vec.push(Vec::with_capacity(max_rows));
}
let mut inner_idx = 0;
for chunk in inner_data {
let row_vec_it = chunk.iter_mut();
for elem in row_vec_it {
out_vec[inner_idx].push(elem);
inner_idx += 1;
}
inner_idx = 0;
}
out_vec
}
...
}
fn test_multithreading() {
fn test(in_vec: Vec<&mut f64>) {
for elem in in_vec {
*elem = 33.3;
}
}
fn launch_task(mat: &mut Matrix, f: fn(Vec<&mut f64>)) {
let test_vec = mat.get_all_row_slices();
thread::scope(|s| {
for elem in test_vec.into_iter() {
s.spawn(move |_| {
println!("Spawning thread...");
f(elem);
});
}
}).unwrap();
}
let rows = 4;
let cols = 3;
//new function code omitted, returns Result<Self, MatrixError>
let mut mat = Matrix::new(rows, cols).unwrap();
launch_task(&mut mat, test);
for i in 0..rows {
for j in 0..cols {
//Requires index trait implemented for matrix
assert_eq!(mat[(i, j)], 33.3);
}
}
}
This API is unsound. Since there is no lifetime annotation binding SliceTest and SubSlice to the MainStruct, they can outlive the data they point into, and using them after that data has been destroyed would be a use-after-free.
It's easy to make this safe, though: you can use .iter_mut() to get distinct mutable references to your elements:
pub fn slice(&mut self) -> (Vec<&mut f64>, Vec<&mut f64>) {
let mut out_vec_even = vec![];
let mut out_vec_odd = vec![];
for (i, item_ref) in self.data.iter_mut().enumerate() {
if i % 2 == 0 {
out_vec_even.push(item_ref);
} else {
out_vec_odd.push(item_ref);
}
}
(out_vec_even, out_vec_odd)
}
However, this surfaces another problem: thread::spawn cannot hold references to local variables. The threads created are allowed to live beyond the scope they're created in, so even though you did .join() them, you aren't required to. This was a potential issue in your original code as well; the compiler just couldn't warn about it.
There's no easy way to solve this. You'd need a non-referential way to share the data with the other threads, which would mean Arc; but Arc doesn't allow mutating its data, so you'd have to resort to a Mutex, which is what you were trying to avoid.
I would suggest reaching for scope from the crossbeam crate, which does allow you to spawn threads that reference local data. I know you wanted to avoid using crates, but this is the best solution in my opinion.
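As a rough sketch of that suggestion (assuming crossbeam 0.8 and reusing the safe slice() above), the pieces fit together like this:
use crossbeam::thread;

pub struct MainStruct {
    pub data: Vec<f64>,
}

impl MainStruct {
    // The safe slice() from above: disjoint sets of &mut f64 (even/odd indexes).
    pub fn slice(&mut self) -> (Vec<&mut f64>, Vec<&mut f64>) {
        let (mut even, mut odd) = (Vec::new(), Vec::new());
        for (i, item) in self.data.iter_mut().enumerate() {
            if i % 2 == 0 {
                even.push(item);
            } else {
                odd.push(item);
            }
        }
        (even, odd)
    }
}

fn main() {
    let mut m = MainStruct { data: vec![0.0; 6] };
    let (evens, odds) = m.slice();
    // Scoped threads may borrow local data; the scope joins them before returning.
    thread::scope(|s| {
        s.spawn(move |_| {
            for x in evens {
                *x = 999.9;
            }
        });
        s.spawn(move |_| {
            for x in odds {
                *x = 111.1;
            }
        });
    })
    .unwrap();
    println!("{:?}", m.data); // even indexes are 999.9, odd indexes are 111.1
}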
See a working version on the playground.
See:
How to get multiple mutable references to elements in a Vec?
Can you specify a non-static lifetime for threads?

How do I efficiently build a vector and an index of that vector while processing a data stream?

I have a struct Foo:
struct Foo {
v: String,
// Other data not important for the question
}
I want to handle a data stream and save the result into Vec<Foo> and also create an index for this Vec<Foo> on the field Foo::v.
I want to use a HashMap<&str, usize> for the index, where the keys will be &Foo::v and the value is the position in the Vec<Foo>, but I'm open to other suggestions.
I want to do the data stream handling as fast as possible, which requires not doing obvious things twice.
For example, I want to:
allocate a String only once per one data stream reading
not search the index twice, once to check that the key does not exist, once for inserting new key.
not increase the run time by using Rc or RefCell.
The borrow checker does not allow this code:
let mut l = Vec::<Foo>::new();
{
let mut hash = HashMap::<&str, usize>::new();
//here is loop in real code, like:
//let mut s: String;
//while get_s(&mut s) {
let s = "aaa".to_string();
let idx: usize = match hash.entry(&s) { //a
Occupied(ent) => {
*ent.get()
}
Vacant(ent) => {
l.push(Foo { v: s }); //b
ent.insert(l.len() - 1);
l.len() - 1
}
};
// do something with idx
}
There are multiple problems:
hash.entry borrows the key so s must have a "bigger" lifetime than hash
I want to move s at line (b), while I have a read-only reference at line (a)
So how should I implement this simple algorithm without an extra call to String::clone or calling HashMap::get after calling HashMap::insert?
In general, what you are trying to accomplish is unsafe and Rust is correctly preventing you from doing something you shouldn't. For a simple example why, consider a Vec<u8>. If the vector has one item and a capacity of one, adding another value to the vector will cause a re-allocation and copying of all the values in the vector, invalidating any references into the vector. This would cause all of your keys in your index to point to arbitrary memory addresses, thus leading to unsafe behavior. The compiler prevents that.
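As a small illustration of that reallocation argument (a standalone sketch, not specific to the question's types):
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(1);
    v.push(1);
    let before = v.as_ptr();
    v.push(2); // exceeds the capacity: the buffer may be reallocated and moved
    let after = v.as_ptr();
    // Any reference derived from `before` could now be dangling; the two
    // addresses will usually (though not necessarily) differ.
    println!("{:p} -> {:p}", before, after);
}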
In this case, there are two extra pieces of information that the compiler is unaware of but the programmer isn't:
There's an extra indirection — String is heap-allocated, so moving the pointer to that heap allocation isn't really a problem.
The String will never be changed. If it were, then it might reallocate, invalidating the referred-to address. Using a Box<str> instead of a String would be a way to enforce this via the type system.
In cases like this, it is OK to use unsafe code, so long as you properly document why it's not unsafe.
use std::collections::HashMap;
#[derive(Debug)]
struct Player {
name: String,
}
fn main() {
let names = ["alice", "bob", "clarice", "danny", "eustice", "frank"];
let mut players = Vec::new();
let mut index = HashMap::new();
for &name in &names {
let player = Player { name: name.into() };
let idx = players.len();
// I copied this code from Stack Overflow without reading the prose
// that describes why this unsafe block is actually safe
let stable_name: &str = unsafe { &*(player.name.as_str() as *const str) };
players.push(player);
index.insert(stable_name, idx);
}
for (k, v) in &index {
println!("{:?} -> {:?}", k, v);
}
for v in &players {
println!("{:?}", v);
}
}
However, my guess is that you don't want this code in your main function but want to return it from some function. That will be a problem, as you will quickly run into Why can't I store a value and a reference to that value in the same struct?.
Honestly, there are styles of code that don't fit well within Rust's limitations. If you run into these, you could:
decide that Rust isn't a good fit for you or your problem.
use unsafe code, preferably thoroughly tested and only exposing a safe API.
investigate alternate representations.
For example, I'd probably rewrite the code to have the index be the primary owner of the key:
use std::collections::BTreeMap;
#[derive(Debug)]
struct Player<'a> {
name: &'a str,
data: &'a PlayerData,
}
#[derive(Debug)]
struct PlayerData {
hit_points: u8,
}
#[derive(Debug)]
struct Players(BTreeMap<String, PlayerData>);
impl Players {
fn new<I>(iter: I) -> Self
where
I: IntoIterator,
I::Item: Into<String>,
{
let players = iter
.into_iter()
.map(|name| (name.into(), PlayerData { hit_points: 100 }))
.collect();
Players(players)
}
fn get<'a>(&'a self, name: &'a str) -> Option<Player<'a>> {
self.0.get(name).map(|data| Player { name, data })
}
}
fn main() {
let names = ["alice", "bob", "clarice", "danny", "eustice", "frank"];
let players = Players::new(names.iter().copied());
for (k, v) in &players.0 {
println!("{:?} -> {:?}", k, v);
}
println!("{:?}", players.get("eustice"));
}
Alternatively, as shown in What's the idiomatic way to make a lookup table which uses field of the item as the key?, you could wrap your type and store it in a set container instead:
use std::collections::BTreeSet;
#[derive(Debug, PartialEq, Eq)]
struct Player {
name: String,
hit_points: u8,
}
#[derive(Debug, Eq)]
struct PlayerByName(Player);
impl PlayerByName {
fn key(&self) -> &str {
&self.0.name
}
}
impl PartialOrd for PlayerByName {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl Ord for PlayerByName {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.key().cmp(&other.key())
}
}
impl PartialEq for PlayerByName {
fn eq(&self, other: &Self) -> bool {
self.key() == other.key()
}
}
impl std::borrow::Borrow<str> for PlayerByName {
fn borrow(&self) -> &str {
self.key()
}
}
#[derive(Debug)]
struct Players(BTreeSet<PlayerByName>);
impl Players {
fn new<I>(iter: I) -> Self
where
I: IntoIterator,
I::Item: Into<String>,
{
let players = iter
.into_iter()
.map(|name| {
PlayerByName(Player {
name: name.into(),
hit_points: 100,
})
})
.collect();
Players(players)
}
fn get(&self, name: &str) -> Option<&Player> {
self.0.get(name).map(|pbn| &pbn.0)
}
}
fn main() {
let names = ["alice", "bob", "clarice", "danny", "eustice", "frank"];
let players = Players::new(names.iter().copied());
for player in &players.0 {
println!("{:?}", player.0);
}
println!("{:?}", players.get("eustice"));
}
not increase the run time by using Rc or RefCell
Guessing about performance characteristics without performing profiling is never a good idea. I honestly don't believe that there'd be a noticeable performance loss from incrementing an integer when a value is cloned or dropped. If the problem required both an index and a vector, then I would reach for some kind of shared ownership.
not increase the run time by using Rc or RefCell.
@Shepmaster already demonstrated accomplishing this using unsafe; once you have that, I would encourage you to check how much Rc would actually cost you. Here is a full version with Rc:
use std::{
collections::{hash_map::Entry, HashMap},
rc::Rc,
};
#[derive(Debug)]
struct Foo {
v: Rc<str>,
}
#[derive(Debug)]
struct Collection {
vec: Vec<Foo>,
index: HashMap<Rc<str>, usize>,
}
impl Foo {
fn new(s: &str) -> Foo {
Foo {
v: s.into(),
}
}
}
impl Collection {
fn new() -> Collection {
Collection {
vec: Vec::new(),
index: HashMap::new(),
}
}
fn insert(&mut self, foo: Foo) {
match self.index.entry(foo.v.clone()) {
Entry::Occupied(o) => panic!(
"Duplicate entry for: {}, {:?} inserted before {:?}",
foo.v,
o.get(),
foo
),
Entry::Vacant(v) => v.insert(self.vec.len()),
};
self.vec.push(foo)
}
}
fn main() {
let mut collection = Collection::new();
for foo in vec![Foo::new("Hello"), Foo::new("World"), Foo::new("Go!")] {
collection.insert(foo)
}
println!("{:?}", collection);
}
The error is:
error: `s` does not live long enough
--> <anon>:27:5
|
16 | let idx: usize = match hash.entry(&s) { //a
| - borrow occurs here
...
27 | }
| ^ `s` dropped here while still borrowed
|
= note: values in a scope are dropped in the opposite order they are created
The note: at the end is where the answer is.
s must outlive hash because you are using &s as a key in the HashMap. This reference will become invalid when s is dropped. But, as the note says, hash will be dropped after s. A quick fix is to swap the order of their declarations:
let s = "aaa".to_string();
let mut hash = HashMap::<&str, usize>::new();
But now you have another problem:
error[E0505]: cannot move out of `s` because it is borrowed
--> <anon>:22:33
|
17 | let idx: usize = match hash.entry(&s) { //a
| - borrow of `s` occurs here
...
22 | l.push(Foo { v: s }); //b
| ^ move out of `s` occurs here
This one is more obvious. s is borrowed by the Entry, which will live to the end of the block. Cloning s will fix that:
l.push(Foo { v: s.clone() }); //b
I only want to allocate s only once, not cloning it
But the type of Foo.v is String, so it will own its own copy of the str anyway. That type alone means you have to copy s.
You can replace it with a &str instead, which allows it to remain a reference into s:
struct Foo<'a> {
v: &'a str,
}
pub fn main() {
// s now lives longer than l
let s = "aaa".to_string();
let mut l = Vec::<Foo>::new();
{
let mut hash = HashMap::<&str, usize>::new();
let idx: usize = match hash.entry(&s) {
Occupied(ent) => {
*ent.get()
}
Vacant(ent) => {
l.push(Foo { v: &s });
ent.insert(l.len() - 1);
l.len() - 1
}
};
}
}
Note that previously I had to move the declaration of s to before hash, so that it would outlive it. But now l holds a reference into s, so s has to be declared even earlier, so that it outlives l.

Resources