I am looking for a way to implement something like a memory pool in Rust.
I want to allocate a set of related small objects in chunks, and delete the set of objects at once. The objects won't be freed separately. There are several benefits to this approach:
It reduces fragmentation.
It saves memory.
Is there any way to create an allocator like this in Rust?
It sounds like you want the typed-arena crate, which is stable and can be used with Rust 1.0.
extern crate typed_arena;

#[derive(Debug)]
struct Foo {
    a: u8,
    b: u8,
}

fn main() {
    let allocator = typed_arena::Arena::new();
    let f = allocator.alloc(Foo { a: 42, b: 101 });
    println!("{:?}", f);
}
This does have one limitation: all the objects in a given arena must be of the same type. In my own usage I only need a very small set of types, so I have simply created a set of Arenas, one for each type, as sketched below.
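A minimal sketch of that arrangement (the Bar type and the Allocators wrapper are made up for illustration):

extern crate typed_arena;

struct Foo { a: u8 }
struct Bar { x: u32 }

// One arena per type; dropping `Allocators` frees every object at once.
struct Allocators {
    foos: typed_arena::Arena<Foo>,
    bars: typed_arena::Arena<Bar>,
}

fn main() {
    let allocators = Allocators {
        foos: typed_arena::Arena::new(),
        bars: typed_arena::Arena::new(),
    };
    let foo = allocators.foos.alloc(Foo { a: 42 });
    let bar = allocators.bars.alloc(Bar { x: 7 });
    println!("{} {}", foo.a, bar.x);
}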
If that isn't suitable, you can look at arena::Arena, which is unstable and slower than a typed arena.
The basic premise of both allocators is simple: you let the arena consume an item, and it moves the bits into its own memory allocation.
Another meaning for the word "allocator" is what is used when you box a value. It is planned that Rust will gain support for "placement new" at some point, and the box syntax is reserved for that.
In unstable versions of Rust you can write box Foo(42), and a (hypothetical) enhancement to that would let you write something like box my_arena Foo(42), which would use the specified allocator. It seems this capability is still a few versions away.
Funny thing is, the allocator you want is already available in the arena crate. It is unstable, so you have to use a nightly compiler to use it. You can look at its sources if you want to know how it is implemented.
You may want to look at arena::TypedArena in the standard library (Note: this is not stable and, as a result, is only available in nightly builds).
If this doesn't fit your needs, you can always examine the source code (you can click the [src] link in the top right of the documentation) to see how it's done.
I'm trying to load JSON files that refer to structs implementing a trait. When a JSON file is loaded, the corresponding struct is grabbed from a hashmap. The problem is that I'll probably end up with a lot of structs that have to be put into that hashmap, scattered all over my code. I would like that to be done automatically. To me this seems doable with procedural macros, something like:
#[my_proc_macro(type = ImplementedType)]
struct MyStruct {}

impl ImplementedType for MyStruct {}

fn load_implementors() {
    let mut implementors = HashMap::new();
    load_implementors!(implementors, ImplementedType);
}
Is there a way to do this?
No
There is a core issue that makes it difficult to skip manually inserting into a structure. Consider this simplified example, where we simply want to print values that are provided separately in the code-base:
my_register!(alice);
my_register!(bob);
fn main() {
    my_print(); // prints "alice" and "bob"
}
In typical Rust there is no mechanism to link the my_print() call to the multiple invocations of my_register. There is no support for declaration merging, compile-time or run-time reflection, or run-before-main execution that you might find in other languages, any of which might make this possible (unless of course there's something I'm missing).
But Also Yes
There are third party crates built around link-time or run-time tricks that can make this possible:
ctor allows you to define functions that are executed before main(). With it, you can have my_register!() create individual functions for alice and bob that, when executed, add themselves to some global structure which can then be read by my_print().
linkme allows you to define a slice that is made from elements defined separately, which are combined at compile time. The my_register!() simply needs to use this crate's attributes to add an element to the slice, which my_print() can easily access.
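A minimal sketch of the linkme flavour, assuming the registered items are plain strings (the expansion shown for my_register! is hypothetical):

use linkme::distributed_slice;

// The registry: elements are contributed from anywhere in the program
// and stitched together at link time.
#[distributed_slice]
pub static NAMES: [&'static str] = [..];

// What a my_register!(alice) expansion could look like:
#[distributed_slice(NAMES)]
static ALICE: &'static str = "alice";

#[distributed_slice(NAMES)]
static BOB: &'static str = "bob";

fn my_print() {
    for name in NAMES.iter() {
        println!("{}", name);
    }
}

fn main() {
    my_print(); // prints "alice" and "bob" (order is unspecified)
}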
I understand skepticism of these methods since the declarative approach is often clearer to me, but sometimes they are necessary or the ergonomic benefits outweigh the "magic".
So I've read Why can't I store a value and a reference to that value in the same struct? and I understand why my naive approach to this was not working, but I'm still very unclear how to better handle my situation.
I have a program I wanted to structure like follows (details omitted because I can't make this compile anyway):
use std::sync::Mutex;
struct Species {
    index: usize,
    population: Mutex<usize>,
}

struct Simulation<'a> {
    species: Vec<Species>,
    grid: Vec<&'a Species>,
}

impl<'a> Simulation<'a> {
    pub fn new() -> Self { ... } // I don't think there's any way to implement this
    pub fn run(&self) { ... }
}
The idea is that I create a vector of Species (which won't change for the lifetime of Simulation, except in specific mutex-guarded fields) and then a grid representing which species live where, which will change freely. This implementation won't work, at least not any way I've been able to figure out. As I understand it, the issue is that pretty much however I write my new method, the moment it returns, all of the references in grid would become invalid, as Simulation and therefore Simulation.species are moved to another location on the stack. Even if I could prove to the compiler that species and its contents would continue to exist, they won't actually be in the same place. Right?
I've looked into various ways around this, such as making species an Arc on the heap, or using usizes instead of references and implementing my own lookup function into the species vector, but these seem slower, messier, or worse. What I'm starting to think is that I need to really re-structure my code to look something like this (details filled in with placeholders, because now it actually runs):
use std::sync::Mutex;

struct Species {
    index: usize,
    population: Mutex<usize>,
}

struct Simulation<'a> {
    species: &'a Vec<Species>, // now just holds a reference rather than the data
    grid: Vec<&'a Species>,
}

impl<'a> Simulation<'a> {
    pub fn new(species: &'a Vec<Species>) -> Self { // has to be given pre-created species
        let grid = vec![species.first().unwrap(); 10];
        Self { species, grid }
    }

    pub fn run(&self) {
        let mut population = self.grid[0].population.lock().unwrap();
        println!("Population: {}", population);
        *population += 1;
    }
}

pub fn top_level() {
    let species = vec![Species { index: 0, population: Mutex::new(0) }];
    let simulation = Simulation::new(&species);
    simulation.run();
}
As far as I can tell this runs fine, and ticks off all the ideal boxes:
grid uses simple references with minimal boilerplate for me
these references are checked at compile time with minimal overhead for the system
Safety is guaranteed by the compiler (unlike a custom map based approach)
But this feels very weird to me: the two-step initialization process of creating owned memory and then references can't be abstracted in any way that I can see, which feels like I'm exposing an implementation detail to the calling function. top_level also has to be responsible for establishing any other functions or (scoped) threads to run the simulation, call draw/gui functions, etc. If I need multiple levels of references, I believe I will need to add additional initialization steps at that level.
So, my question is just "Am I doing this right?". While I can't exactly prove this is wrong, I feel like I'm losing a lot of near-universal abstraction of the call structure. Is there really no way to return species and simulation as a pair at the end (with some one-off update to make all references point to the "forever home" of the data)?
Phrasing my problem a second way: I do not like that I cannot have a function with a signature of () -> Simulation, when I can have a pair of function calls with that same effect. I want to be able to encapsulate the creation of this simulation. The fact that this approach cannot do so makes me feel I'm doing something wrong, and that there may be a more idiomatic approach I'm missing.
I've looked into various ways around this, such as making species as an Arc on the heap or using usizes instead of references and implementing my own lookup function into the species vector, but these seem slower, messier or worse.
Don't assume that, test it. I once had a self-referential (using ouroboros) structure much like yours, with a vector of things and a vector of references to them. I tried rewriting it to use indices instead of references, and it was faster.
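For comparison, here is a minimal sketch of the index-based variant, adapted from the question's own types:

use std::sync::Mutex;

struct Species {
    index: usize,
    population: Mutex<usize>,
}

struct Simulation {
    species: Vec<Species>,
    grid: Vec<usize>, // indices into `species` instead of references
}

impl Simulation {
    // `() -> Simulation` is now possible: the struct owns all of its data.
    pub fn new() -> Self {
        let species = vec![Species { index: 0, population: Mutex::new(0) }];
        let grid = vec![0; 10];
        Self { species, grid }
    }

    pub fn run(&self) {
        let mut population = self.species[self.grid[0]].population.lock().unwrap();
        *population += 1;
    }
}

Note that new() takes no arguments, which is exactly the encapsulated signature the question asks for.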
Rc/Arc is also an option worth trying out — note that there is only an extra cost to the reference counting when an Arc is cloned or dropped. Arc<Species> doesn't cost any more to dereference than &Species, and you can always get an &Species from an Arc<Species>. So the reference counting only matters if and when you're changing which Species is in an element of Grid.
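A short sketch of what the Arc variant could look like for the question's types:

use std::sync::{Arc, Mutex};

struct Species {
    index: usize,
    population: Mutex<usize>,
}

fn main() {
    let species: Vec<Arc<Species>> =
        vec![Arc::new(Species { index: 0, population: Mutex::new(0) })];

    // Cloning an Arc copies a pointer and bumps a count; dereferencing it is free.
    let grid: Vec<Arc<Species>> = vec![Arc::clone(&species[0]); 10];

    // An &Species is always available from an Arc<Species>:
    let slot: &Species = &grid[0];
    *slot.population.lock().unwrap() += 1;
}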
If you're owning a Vec of objects, then want to also keep track of references to particular objects in that Vec, a usize index is almost always the simplest design. It might feel like extra boilerplate to you now, but it's a hell of a lot better than properly dealing with keeping pointers in check in a self-referential struct (as somebody who's made this mistake in C++ more than I should have, trust me). Rust's rules are saving you from some real headaches, just not ones that are obvious to you necessarily.
If you want to get fancy and feel like a raw usize is too arbitrary, then I recommend you look at slotmap. For a simple SlotMap, internally it's not much more than an array of values; iteration is fast and storage is efficient. But it gives you generational indices (slotmap calls these "keys") to the values: each value is embellished with a "generation" and each index also internally keeps hold of its generation, therefore you can safely remove and replace items in the Vec without your references suddenly pointing at a different object. It's really cool.
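A small sketch of that generational behaviour, assuming slotmap's DefaultKey:

use slotmap::{DefaultKey, SlotMap};

fn main() {
    let mut map: SlotMap<DefaultKey, &str> = SlotMap::new();
    let alice = map.insert("alice");
    let bob = map.insert("bob");

    map.remove(alice);
    let carol = map.insert("carol"); // may reuse alice's slot internally

    // The stale key does not alias the slot's new occupant:
    assert!(map.get(alice).is_none()); // generation mismatch
    assert_eq!(map[carol], "carol");
    assert_eq!(map[bob], "bob");
}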
Please tell me why this doesn't result in a double free when the value is allocated on the stack? Thanks.
#[test]
fn read_value_that_allocated_in_stack_is_no_problem() {
    let origin = Value(1);
    let copied = unsafe { std::ptr::read(&origin) };
    assert_eq!(copied, Value(1));
    assert_eq!(copied, origin);
}

/// test failed as expected: double free detected
#[test]
fn read_value_that_allocated_in_heap_will_result_in_double_free_problem() {
    let origin = Box::new(Value(1));
    let copied = unsafe { std::ptr::read(&origin) };
    assert_eq!(copied, Box::new(Value(1)));
    assert_eq!(copied, origin);
}

#[derive(Debug, PartialEq)]
struct Value<T>(T);
The unsafe function you are using just creates a bitwise copy of the referenced value. For something like your Value struct containing an integer, that copy is okay: dropping an integer has no side effects. For a Box it is not, because dropping a Box calls into the global allocator and changes its state.
If you do not understand any term I used in this explanation, try searching for it or ask in the comments.
Those tests hide the fact that you use different types in them. It isn't really about stack or heap.
In the first one you use Value<i32> type, which is your custom type, presumably without custom Drop implemented. If so, then Rust will call Drop on each member, in this case the i32 member. Which does nothing. And so nothing happens when both objects go out of scope. Even if you implement Drop, it would have to have some serious side-effects (like call to free) for it to fail.
In the second one you actually use Box type, which does implement Drop. Internally it calls free (in addition to dropping the underlying object). And so free is called twice on drop, trying to free the same pointer (because of the unsafe copy).
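To make that concrete, here is a hedged sketch of how unsafe code normally avoids the double free: ensure only one of the two bitwise copies is ever dropped (std::mem::forget skips a destructor):

#[derive(Debug, PartialEq)]
struct Value<T>(T);

fn main() {
    let origin = Box::new(Value(1));
    // Bitwise copy: two Boxes now believe they own the same allocation.
    let copied = unsafe { std::ptr::read(&origin) };
    // Skip `origin`'s destructor so the allocation is freed exactly once,
    // by `copied` at the end of scope.
    std::mem::forget(origin);
    assert_eq!(copied, Box::new(Value(1)));
}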
The first test, by contrast, is not a double free: we do not free the memory twice, because we do not free memory at all.
However, whether this is valid or UB is another question. Miri does not flag it as UB, but Miri does not catch every possible UB. If Value<i32> were Copy, this would certainly be fine. As it is not, it comes down to whether std::ptr::read() invalidates the original data: is it always invalid to use data that has been std::ptr::read(), or only when doing so violates Stacked Borrows semantics, as in the case of copying the Box itself, where both destructors try to access the Box afterwards?
The answer is that it's not decided yet. As per UCG issue #307, "Are Copy implementations semantically relevant (besides specialization)?":
#steffahn
Overall, this still being an open question means that while miri doesn't complain, one should avoid code like this because it's not yet certain that it won't be UB, right?
#RalfJung
Yes.
In conclusion, you should avoid code like that.
At the beginning of my program, I read data from a file:
let file = std::fs::File::open("data/games.json").unwrap();
let data: Games = serde_json::from_reader(file).unwrap();
I would like to know how it would be possible to do this at compile time for the following reasons:
Performance: no need to deserialize at runtime
Portability: the program can be run on any machine without the need to have the json file containing the data with it.
It might also be useful to mention that the data is read-only, which means the solution can store it as a static.
This is straightforward, but it leads to some potential issues. First, we need to decide something: do we want to generate the tree of objects at compile time, or embed the file and parse it at runtime?
99% of the time, parsing on boot into a static ref is enough for people, so I'm going to give you that solution; I will point you to the "other" version at the end, but that requires a lot more work and is domain-specific.
The macro (because it has to be a macro) you are looking for to be able to include a file at compile-time is in the standard library: std::include_str!. As the name suggests, it takes your file at compile-time and generates a &'static str from it for you to use. You are then free to do whatever you like with it (such as parsing it).
From there, it is a simple matter to then use lazy_static! to generate a static ref to our JSON Value (or whatever it may be that you decide to go for) for every part of the program to use. In your case, for instance, it could look like this:
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};

const GAME_JSON: &str = include_str!("my/file.json");

#[derive(Serialize, Deserialize, Debug)]
struct Game {
    name: String,
}

lazy_static! {
    static ref GAMES: Vec<Game> = serde_json::from_str(GAME_JSON).unwrap();
}
You need to be aware of two things when doing this:
This will massively bloat your binary size, as the &str isn't compressed in any way. Consider gzip.
You'll need to worry about the usual concerns around multiple, threaded access to the same static ref, but since it isn't mutable you only really need to worry about a portion of it
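For completeness, a hypothetical access site using the GAMES static from the snippet above; the first dereference triggers the one-time parse:

fn main() {
    // lazy_static parses the JSON on first access, then caches the Vec.
    println!("{} games loaded", GAMES.len());
}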
The other way requires dynamically generating your objects at compile time using a procedural macro. As stated, I wouldn't recommend it unless you truly have an expensive startup cost when parsing that JSON; most people will not, and the last time I needed this was when dealing with deeply-nested, multi-GB JSON files.
The crates you want to look out for are proc_macro2 and syn for the code generation; the rest is very similar to how you would write a normal method.
When you are deserializing something at runtime, you're essentially building some representation in program memory from another representation on disk. But at compile time there's no notion of "program memory" yet, so where would this data deserialize to?
However, what you're trying to achieve is in fact possible. The main idea is the following: to create something in program memory, you must write some code which will create the data. What if you could generate that code automatically, based on the serialized data? That's what the uneval crate does (disclaimer: I'm the author, so you're encouraged to look through the source to see if you can do better).
To use this approach, you'll have to create build.rs with approximately the following content:
// somehow include the Games struct with its Serialize and Deserialize implementations

fn main() {
    let games: Games = serde_json::from_str(include_str!("data/games.json")).unwrap();
    uneval::to_out_dir(games, "games.rs");
}
And in your initialization code you'll have the following:
let data: Games = include!(concat!(env!("OUT_DIR"), "/games.rs"));
Note, however, that this might be fairly hard to do in an ergonomic way, since the necessary struct definitions must now be shared between build.rs and the crate itself, as I mentioned in the comment. It might be a little easier if you split your crate in two, keeping the struct definitions (and only them) in one crate and the logic which uses them in another. There are other ways, with include! trickery, or by using the fact that the build script is an ordinary Rust binary and can include other modules as well, but these complicate things even more.
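As a sketch of that last arrangement: the #[path] attribute lets build.rs pull in the crate's own type definitions (src/types.rs here is hypothetical):

// build.rs
#[path = "src/types.rs"]
mod types; // reuse the exact same struct definitions as the main crate
use types::Games;

fn main() {
    let games: Games = serde_json::from_str(include_str!("data/games.json")).unwrap();
    uneval::to_out_dir(games, "games.rs");
}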
I am writing a service that will collect a great number of values and build large structures around them. For some of these, lookup tables are needed, and due to memory constraints I do not want to copy the key or value passed to the HashMap. However, using references gets me into trouble with the borrow checker (see example below). What is the preferred way of working with run-time created instances?
use std::collections::HashMap;

#[derive(PartialEq, Eq, PartialOrd, Ord, Hash)]
struct LargeKey;
struct LargeValue;

fn main() {
    let mut lots_of_lookups: HashMap<&LargeKey, &LargeValue> = HashMap::new();

    let run_time_created_key = LargeKey;
    let run_time_created_value = LargeValue;
    lots_of_lookups.insert(&run_time_created_key, &run_time_created_value);

    lots_of_lookups.clear();
}
I was expecting clear() to release the borrows, but even if it actually does so, perhaps the compiler cannot figure that out?
Also, I was expecting clear() to release the borrows, but even if it actually does so, perhaps the compiler cannot figure that out?
At the moment, borrowing is purely scope based. Only a method which consumes the borrower can revoke the borrow, which is not always ideal.
What is the preferred way of working with shared run-time created instances?
The simplest way to express shared ownership is to use shared ownership. It does come with some syntactic overhead; however, it greatly simplifies reasoning.
In Rust, there are two simple standard ways of expressing shared ownership:
Rc<RefCell<T>>, for sharing within a thread,
Arc<Mutex<T>>, for sharing across threads.
There are some variations (using Cell instead of RefCell, or RwLock instead of Mutex), however those are the basics.
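Applied to the question's lookup table, the Rc flavour could look like this sketch (no RefCell is needed here, because the map never mutates the shared data):

use std::collections::HashMap;
use std::rc::Rc;

#[derive(PartialEq, Eq, Hash)]
struct LargeKey;
struct LargeValue;

fn main() {
    let mut lots_of_lookups: HashMap<Rc<LargeKey>, Rc<LargeValue>> = HashMap::new();

    let key = Rc::new(LargeKey);
    let value = Rc::new(LargeValue);
    // Cloning an Rc copies a pointer and bumps a count, not the large data.
    lots_of_lookups.insert(Rc::clone(&key), Rc::clone(&value));

    lots_of_lookups.clear();
    // `key` and `value` are still alive and usable here:
    let _still_usable: &LargeKey = &key;
}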
Beyond syntactic overhead, there's also some amount of run-time overhead going into (a) increasing/decreasing the reference count whenever you make a clone and (b) checking/marking/clearing the usage flag when accessing the wrapped instance of T.
There is one non-negligible downside to this approach, though: the borrowing rules are now checked at runtime instead of at compile time, so violations lead to panics instead of compile-time errors.
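For instance, a second mutable borrow through a RefCell compiles but panics when executed, where plain &mut references would have been rejected at compile time:

use std::cell::RefCell;

fn main() {
    let shared = RefCell::new(0);
    let _first = shared.borrow_mut();
    // Compiles fine, but panics at run time with a BorrowMutError:
    let _second = shared.borrow_mut();
}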