How do I drop items in collections in Rust? - memory-leaks

I have a small crate called arr that is specifically designed for large fixed-size heap arrays (billions of elements could be stored in one). I have a problem, though, in understanding the proper way to implement Drop for this array.
pub struct Array<T> {
    size: usize,
    ptr: *mut T,
}
My original Drop looked like this:
impl<T> Drop for Array<T> {
    fn drop(&mut self) {
        let objsize = std::mem::size_of::<T>();
        let layout = Layout::from_size_align(self.size * objsize, 8).unwrap();
        unsafe {
            dealloc(self.ptr as *mut u8, layout);
        }
    }
}
However - this is clearly not right, because if T implements Drop then I am leaking memory - a fact which a kind GitHub member pointed out to me. But how best should I free this memory? Naively, I think I could loop through all the elements and call std::ptr::drop_in_place on them:
for i in 0..(self.size as isize) {
    std::ptr::drop_in_place(self.ptr.wrapping_offset(i));
}
But if the array is a billion u8s, isn't this a terrible idea, as that would be a billion no-ops? I guess the compiler should be smart enough to do dead-code elimination, so perhaps I'm falling prey to premature optimization.

Wrap your drop_in_place loop in an if statement checking std::mem::needs_drop::<T>().
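For illustration, here is a sketch of what the whole Drop could look like with that check. It assumes the buffer was allocated with Layout::array::<T>(size), so that allocation and deallocation use the same layout (the hard-coded alignment of 8 in the original is only correct when T's alignment is at most 8):

use std::alloc::{dealloc, Layout};

impl<T> Drop for Array<T> {
    fn drop(&mut self) {
        unsafe {
            // Run destructors only when T actually has one; for u8 and other
            // trivially droppable types this whole loop compiles away.
            if std::mem::needs_drop::<T>() {
                for i in 0..self.size {
                    std::ptr::drop_in_place(self.ptr.add(i));
                }
            }
            // Free the buffer with the same layout it was allocated with.
            let layout = Layout::array::<T>(self.size).unwrap();
            dealloc(self.ptr as *mut u8, layout);
        }
    }
}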

Related

Are fixed position buffers a good use case for Pin?

I'm writing some code that shares buffers between Rust and the Linux kernel. After the buffer is registered, its memory location should never move. Since the API takes a *mut u8 (actually a libc::iovec) but doesn't enforce the fixed memory location constraint, it works fine if I represent the buffer as RefCell<Vec<u8>>, Arc<RefCell<Vec<u8>>>, or Arc<Mutex<Vec<u8>>>. (But the Rust code actually never writes to the buffers.)
Does Pin offer additional safety against data moving in this use case? I think it may not, since one of the major risks is calling resize() and growing the vector, and I do intend to call resize(), just without growing the vector. The code works, but I'm interested in writing this in the most correct way.
I don't really know about Pin but it is mostly useful for self-referential structs and coroutines AFAIK.
For your case, I would just create a struct which enforces the invariant that the data never moves. E.g.:
use std::ops::{Deref, DerefMut};

struct MyData {
    data: Vec<u8>,
}

impl MyData {
    fn with_capacity(cap: usize) -> Self {
        Self { data: Vec::with_capacity(cap) }
    }

    fn resize(&mut self, new_len: usize, val_to_set: u8) {
        let old_ptr = self.data.as_ptr();
        // Enforce invariant that memory never moves.
        assert!(new_len <= self.data.capacity());
        self.data.resize(new_len, val_to_set);
        debug_assert_eq!(old_ptr, self.data.as_ptr());
    }
}

// To get access to bytes
impl Deref for MyData {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        &*self.data
    }
}

impl DerefMut for MyData {
    fn deref_mut(&mut self) -> &mut [u8] {
        &mut *self.data
    }
}
This struct enforces that the bytes never move, without involving Pin.
And you can wrap it in a RefCell.
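A hypothetical usage sketch (the buffer size and the way the pointer would be handed to the kernel API are illustrative, not from the original):

use std::cell::RefCell;

fn main() {
    let buf = RefCell::new(MyData::with_capacity(4096));
    buf.borrow_mut().resize(4096, 0);

    // The pointer stays stable for the buffer's lifetime because resize()
    // can never exceed the capacity reserved up front.
    let ptr: *const u8 = buf.borrow().as_ptr();
    println!("registering buffer at {:p}", ptr);
}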

How to create a pool of contiguous memory across multiple structs in Rust?

I'm trying to create a set of structs in Rust that use a contiguous block of memory. E.g.:
<------------ Memory Pool -------------->
[ Child | Child | Child | Child ]
These structs:
- may each contain slices of the pool of different sizes
- should allow access to their slice of the pool without any blocking operations once initialized (I intend to access them on an audio thread).
I'm very new to Rust but I'm well versed in C++ so the main hurdle thus far has been working with the ownership semantics - I'm guessing there's a trivial way to achieve this (without using unsafe) but the solution isn't clear to me. I've written a small (broken) example of what I'm trying to do:
pub struct Child<'a> {
    pub slice: &'a mut [f32],
}

impl Child<'_> {
    pub fn new<'a>(s: &mut [f32]) -> Child {
        Child {
            slice: s,
        }
    }
}

pub struct Parent<'a> {
    memory_pool: Vec<f32>,
    children: Vec<Child<'a>>,
}

impl Parent<'_> {
    pub fn new<'a>() -> Parent<'a> {
        const SIZE: usize = 100;
        let p = vec![0f32; SIZE];
        let mut p = Parent {
            memory_pool: p,
            children: Vec::new(),
        };
        // Two children using different parts of the memory pool:
        let (lower_pool, upper_pool) = p.memory_pool.split_at_mut(SIZE / 2);
        p.children = vec![Child::new(lower_pool), Child::new(upper_pool)];
        return p; // ERROR - p.memory_pool is borrowed 2 lines earlier
    }
}
I would prefer a solution that doesn't involve unsafe but I'm not entirely opposed to using it. Any suggestions would be very much appreciated, as would any corrections on how I'm (mis?)using Rust in my example.
Yes, it's currently impossible (or quite hard) in Rust to hold references into sibling data - for example, as you have here, a Vec and slices into that Vec as fields of the same struct. Depending on the architecture of your program, you might solve this by storing the original Vec at some higher level of your code (for example, it could live on the stack in main() if you're not writing a library) and keeping the slice references at some lower level, in a way that lets the compiler clearly see they can't outlive the Vec (creating them in main() after the Vec has been instantiated could work, for example).
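For illustration, a minimal sketch of that arrangement, reusing the Child type from the question (the sizes and the final assertion are just for demonstration):

pub struct Child<'a> {
    pub slice: &'a mut [f32],
}

fn main() {
    const SIZE: usize = 100;
    // The pool is owned at the top of the call stack...
    let mut memory_pool = vec![0f32; SIZE];

    // ...and the children only borrow disjoint halves of it, so the compiler
    // can see that the pool outlives every Child.
    let (lower, upper) = memory_pool.split_at_mut(SIZE / 2);
    let children = vec![Child { slice: lower }, Child { slice: upper }];

    assert_eq!(children[0].slice.len(), SIZE / 2);
}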
This is the perfect use case for an arena allocator. There are quite a few. The following demonstration uses bumpalo:
//# bumpalo = "2.6.0"
use bumpalo::Bump;
use std::mem::size_of;

struct Child1(u32, u32);
struct Child2(f32, f32, f32);

fn main() {
    let arena = Bump::new();
    let c1 = arena.alloc(Child1(1, 2));
    let c2 = arena.alloc(Child2(1.0, 2.0, 3.0));
    let c3 = arena.alloc(Child1(10, 11));

    // let's verify that they are indeed contiguous in memory
    let ptr1 = c1 as *mut _ as usize;
    let ptr2 = c2 as *mut _ as usize;
    let ptr3 = c3 as *mut _ as usize;
    assert_eq!(ptr1 + size_of::<Child1>(), ptr2);
    assert_eq!(ptr1 + size_of::<Child1>() + size_of::<Child2>(), ptr3);
}
There are caveats, too. The main concern is of course alignment; there may be some padding between two consecutive allocations. It is up to you to make sure that doesn't happen if it is a deal breaker.
The other caveat is allocator-specific. The bumpalo arena allocator used here, for example, doesn't drop objects when it is itself dropped.
Other than that, I do believe a higher-level abstraction like this will benefit your project. Otherwise, it'll just be pointer-manipulating C/C++ disguised as Rust.

Getting 'unordered' semantics in Rust

How do I create a fixed-length list of integers V with the "unordered" semantics of LLVM (see https://llvm.org/docs/Atomics.html)?
The "unordered" semantics mean that if you read a location in a thread, you will get a previously written value (not necessarily the most recent one, as the optimiser is allowed to rearrange/cache values from the array). This can be viewed as the "natural" behaviour of reading and writing the raw memory, as long as values are only written and read in a single CPU instruction (so other threads never see "half a written value").
It is important to me this is as close to the performance of a single-threaded array of integers as possible, because writes are extremely rare, and I am happy for them to be lost.
rustc exposes a fair number of LLVM intrinsics through the std::intrinsics module, which is permanently unstable.
Still, it is available in Nightly, and there you can find:
atomic_load_unordered,
atomic_store_unordered.
With those at hand, you can use UnsafeCell as a basic building block to build your own UnorderedAtomicXXX.
You can follow the std atomics to help with your implementation. The basics should look like:
#![feature(core_intrinsics)]
use std::cell::UnsafeCell;
use std::intrinsics::{atomic_load_unordered, atomic_store_unordered};

pub struct UnorderedAtomic(UnsafeCell<i32>);

impl UnorderedAtomic {
    pub fn new() -> Self {
        UnorderedAtomic(Default::default())
    }
    pub fn load(&self) -> i32 {
        unsafe { atomic_load_unordered(self.0.get()) }
    }
    pub fn store(&self, i: i32) {
        unsafe { atomic_store_unordered(self.0.get(), i) }
    }
    unsafe fn raw(&self) -> *mut i32 {
        self.0.get()
    }
}
It's unclear whether you can get unordered compare/exchange or fetch/add.
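For completeness, a hypothetical usage sketch building on the type above (a nightly toolchain with these intrinsics available is assumed; the names and sizes here are purely illustrative):

// UnsafeCell is !Sync, so sharing UnorderedAtomic across threads needs an
// explicit opt-in; this is sound only because every access goes through the
// unordered atomic intrinsics above.
unsafe impl Sync for UnorderedAtomic {}

fn main() {
    use std::sync::Arc;
    use std::thread;

    let values: Arc<Vec<UnorderedAtomic>> =
        Arc::new((0..1024).map(|_| UnorderedAtomic::new()).collect());

    let writer = {
        let values = Arc::clone(&values);
        thread::spawn(move || values[3].store(42))
    };
    writer.join().unwrap();

    // Before the join a concurrent reader could observe either 0 or 42; after
    // the join, the write is visible because join() synchronizes with the
    // writer thread.
    assert_eq!(values[3].load(), 42);
}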

Can a type know when a mutable borrow to itself has ended?

I have a struct and I want to call one of the struct's methods every time a mutable borrow to it has ended. To do so, I would need to know when the mutable borrow to it has been dropped. How can this be done?
Disclaimer: The answer that follows describes a possible solution, but it's not a very good one, as described by this comment from Sebastien Redl:
[T]his is a bad way of trying to maintain invariants. Mostly because dropping the reference can be suppressed with mem::forget. This is fine for RefCell, where if you don't drop the ref, you will simply eventually panic because you didn't release the dynamic borrow, but it is bad if violating the "fraction is in shortest form" invariant leads to weird results or subtle performance issues down the line, and it is catastrophic if you need to maintain the "thread doesn't outlive variables in the current scope" invariant.
Nevertheless, it's possible to use a temporary struct as a "staging area" that updates the referent when it's dropped, and thus maintain the invariant correctly; however, that version basically amounts to making a proper wrapper type and a kind of weird way to use it. The best way to solve this problem is through an opaque wrapper struct that doesn't expose its internals except through methods that definitely maintain the invariant.
Without further ado, the original answer:
Not exactly... but pretty close. We can use RefCell<T> as a model for how this can be done. It's a bit of an abstract question, but I'll use a concrete example to demonstrate. (This won't be a complete example, but something to show the general principles.)
Let's say you want to make a Fraction struct that is always in simplest form (fully reduced, e.g. 3/5 instead of 6/10). You write a struct RawFraction that will contain the bare data. RawFraction instances are not always in simplest form, but they have a method fn reduce(&mut self) that reduces them.
Now you need a smart pointer type that you will always use to mutate the RawFraction, which calls .reduce() on the pointed-to struct when it's dropped. Let's call it RefMut, because that's the naming scheme RefCell uses. You implement Deref<Target = RawFraction>, DerefMut, and Drop on it, something like this:
pub struct RefMut<'a>(&'a mut RawFraction);

impl<'a> Deref for RefMut<'a> {
    type Target = RawFraction;
    fn deref(&self) -> &RawFraction {
        self.0
    }
}

impl<'a> DerefMut for RefMut<'a> {
    fn deref_mut(&mut self) -> &mut RawFraction {
        self.0
    }
}

impl<'a> Drop for RefMut<'a> {
    fn drop(&mut self) {
        self.0.reduce();
    }
}
Now, whenever you have a RefMut to a RawFraction and drop it, you know the RawFraction will be in simplest form afterwards. All you need to do at this point is ensure that RefMut is the only way to get &mut access to the RawFraction part of a Fraction.
pub struct Fraction(RawFraction);

impl Fraction {
    pub fn new(numerator: i32, denominator: i32) -> Self {
        // create a RawFraction, reduce it and wrap it up
    }
    pub fn borrow_mut(&mut self) -> RefMut {
        RefMut(&mut self.0)
    }
}
Pay attention to the pub markings (and lack thereof): I'm using those to ensure the soundness of the exposed interface. All three types should be placed in a module by themselves. It would be incorrect to mark the RawFraction field pub inside Fraction, since then it would be possible (for code outside the module) to create an unreduced Fraction without using new or get a &mut RawFraction without going through RefMut.
Supposing all this code is placed in a module named frac, you can use it something like this (assuming Fraction implements Display):
let mut f = frac::Fraction::new(3, 10);
println!("{}", f); // prints 3/10
f.borrow_mut().numerator += 3;
println!("{}", f); // prints 3/5
The types encode the invariant: Wherever you have Fraction, you can know that it's fully reduced. When you have a RawFraction, &RawFraction, etc., you can't be sure. If you want, you may also make RawFraction's fields non-pub, so that you can't get an unreduced fraction at all except by calling borrow_mut on a Fraction.
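For reference, here is a self-contained version of the above that compiles as-is; the gcd-based reduce() and the Display impl are assumptions filled in for illustration, not part of the original answer:

mod frac {
    use std::fmt;
    use std::ops::{Deref, DerefMut};

    pub struct RawFraction {
        pub numerator: i32,
        pub denominator: i32,
    }

    impl RawFraction {
        // Bring the fraction to simplest form; an assumed implementation.
        fn reduce(&mut self) {
            let g = gcd(self.numerator.abs(), self.denominator.abs()).max(1);
            self.numerator /= g;
            self.denominator /= g;
        }
    }

    fn gcd(mut a: i32, mut b: i32) -> i32 {
        while b != 0 {
            let t = a % b;
            a = b;
            b = t;
        }
        a
    }

    pub struct RefMut<'a>(&'a mut RawFraction);

    impl<'a> Deref for RefMut<'a> {
        type Target = RawFraction;
        fn deref(&self) -> &RawFraction {
            self.0
        }
    }

    impl<'a> DerefMut for RefMut<'a> {
        fn deref_mut(&mut self) -> &mut RawFraction {
            self.0
        }
    }

    impl<'a> Drop for RefMut<'a> {
        fn drop(&mut self) {
            // Restore the invariant whenever a mutable borrow ends.
            self.0.reduce();
        }
    }

    pub struct Fraction(RawFraction);

    impl Fraction {
        pub fn new(numerator: i32, denominator: i32) -> Self {
            let mut raw = RawFraction { numerator, denominator };
            raw.reduce();
            Fraction(raw)
        }
        pub fn borrow_mut(&mut self) -> RefMut {
            RefMut(&mut self.0)
        }
    }

    impl fmt::Display for Fraction {
        fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
            write!(f, "{}/{}", self.0.numerator, self.0.denominator)
        }
    }
}

fn main() {
    let mut f = frac::Fraction::new(3, 10);
    println!("{}", f); // prints 3/10
    f.borrow_mut().numerator += 3;
    println!("{}", f); // prints 3/5
}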
Basically the same thing is done in RefCell. There you want to reduce the runtime borrow-count when a borrow ends. Here you want to perform an arbitrary action.
So let's re-use the concept of writing a function that returns a wrapped reference:
struct Data {
    content: i32,
}

impl Data {
    fn borrow_mut(&mut self) -> DataRef {
        println!("borrowing");
        DataRef { data: self }
    }

    fn check_after_borrow(&self) {
        if self.content > 50 {
            println!("Hey, content should be <= {:?}!", 50);
        }
    }
}

struct DataRef<'a> {
    data: &'a mut Data,
}

impl<'a> Drop for DataRef<'a> {
    fn drop(&mut self) {
        println!("borrow ends");
        self.data.check_after_borrow()
    }
}

fn main() {
    let mut d = Data { content: 42 };
    println!("content is {}", d.content);

    {
        let b = d.borrow_mut();
        //let c = &d; // Compiler won't let you have another borrow at the same time
        b.data.content = 123;
        println!("content set to {}", b.data.content);
    } // borrow ends here

    println!("content is now {}", d.content);
}
This results in the following output:
content is 42
borrowing
content set to 123
borrow ends
Hey, content should be <= 50!
content is now 123
Be aware that you can still obtain an unchecked mutable borrow with e.g. let c = &mut d;. This will be silently dropped without calling check_after_borrow.
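Continuing the example above, a short sketch of that bypass (illustrative only):

let mut d = Data { content: 42 };
{
    let c = &mut d; // a plain borrow, not a DataRef, so there is no Drop hook
    c.content = 999;
} // nothing is printed here
println!("content is now {}", d.content); // 999 - the check was silently skipped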

Vec<MyTrait> without N heap allocations?

I'm trying to port some C++ code to Rust. It composes a virtual (.mp4) file from a few kinds of slices (string reference, lazy-evaluated string reference, part of a physical file) and serves HTTP requests based on the result. (If you're curious, see Mp4File which takes advantage of the FileSlice interface and its concrete implementations in http.h.)
Here's the problem: I want to use as few heap allocations as possible. Let's say I have a few implementations of resource::Slice that I can hopefully figure out on my own. Then I want to make the one that composes them all:
pub trait Slice: Send + Sync {
    /// Returns the length of the slice in bytes.
    fn len(&self) -> u64;

    /// Writes bytes indicated by `range` to `out`.
    fn write_to(&self, range: &ByteRange, out: &mut io::Write) -> io::Result<()>;
}

// (used below)
struct SliceInfo<'a> {
    range: ByteRange,
    slice: &'a Slice,
}

/// A `Slice` composed of other `Slice`s.
pub struct Slices<'a> {
    len: u64,
    slices: Vec<SliceInfo<'a>>,
}

impl<'a> Slices<'a> {
    pub fn new() -> Slices<'a> { ... }
    pub fn append(&mut self, slice: &'a resource::Slice) { ... }
}

impl<'a> Slice for Slices<'a> { ... }
and use them to append lots and lots of slices with as few heap allocations as possible. Simplified, something like this:
struct ThingUsedWithinMp4Resource {
    slice_a: resource::LazySlice,
    slice_b: resource::LazySlice,
    slice_c: resource::LazySlice,
    slice_d: resource::FileSlice,
}

struct Mp4Resource {
    slice_a: resource::StringSlice,
    slice_b: resource::LazySlice,
    slice_c: resource::StringSlice,
    slice_d: resource::LazySlice,
    things: Vec<ThingUsedWithinMp4Resource>,
    slices: resource::Slices,
}

impl Mp4Resource {
    fn new() {
        let mut f = Mp4Resource {
            slice_a: ...,
            slice_b: ...,
            slice_c: ...,
            slices: resource::Slices::new(),
        };
        // ...fill `things` with hundreds of things...
        slices.append(&f.slice_a);
        for thing in f.things { slices.append(&thing.slice_a); }
        slices.append(&f.slice_b);
        for thing in f.things { slices.append(&thing.slice_b); }
        slices.append(&f.slice_c);
        for thing in f.things { slices.append(&thing.slice_c); }
        slices.append(&f.slice_d);
        for thing in f.things { slices.append(&thing.slice_d); }
        f;
    }
}
but this isn't working. The append lines cause errors "f.slice_* does not live long enough", "reference must be valid for the lifetime 'a as defined on the block at ...", "...but borrowed value is only valid for the block suffix following statement". I think this is similar to this question about the self-referencing struct. That's basically what this is, with more indirection. And apparently it's impossible.
So what can I do instead?
I think I'd be happy to give ownership to the resource::Slices in append, but I can't put a resource::Slice in the SliceInfo used in Vec<SliceInfo> because resource::Slice is a trait, and traits are unsized. I could do a Box<resource::Slice> instead but that means a separate heap allocation for each slice. I'd like to avoid that. (There can be thousands of slices per Mp4Resource.)
I'm thinking of doing an enum, something like:
enum BasicSlice {
    String(StringSlice),
    Lazy(LazySlice),
    File(FileSlice),
}
and using that in the SliceInfo. I think I can make this work. But it definitely limits the utility of my resource::Slices class. I want to allow it to be used easily in situations I didn't anticipate, preferably without having to define a new enum each time.
Any other options?
You can add a User variant to your BasicSlice enum, which takes a Box<Slice> trait object. This way only the specialized case of user-supplied slice types takes the extra allocation, while the normal path is optimized.
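A sketch of what that might look like (the variant name and the placeholder types are illustrative; dyn is just the modern spelling of the trait-object type):

// Placeholder stand-ins for the concrete slice types from the question.
trait Slice {
    fn len(&self) -> u64;
}

struct StringSlice;
struct LazySlice;
struct FileSlice;

enum BasicSlice {
    // The common cases live inline in the Vec, with no extra allocation...
    String(StringSlice),
    Lazy(LazySlice),
    File(FileSlice),
    // ...and only unanticipated, user-supplied slice types pay for a Box.
    User(Box<dyn Slice>),
}

fn main() {
    let slices: Vec<BasicSlice> = vec![
        BasicSlice::String(StringSlice),
        BasicSlice::File(FileSlice),
    ];
    // One contiguous Vec allocation holds all of these entries.
    assert_eq!(slices.len(), 2);
}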
