Why is Box called "Box" in Rust?

Box<> is explained like this in the Rust Book:
... allow you to store data on the heap rather than the stack. What remains on the stack is the pointer to the heap data.
With a description like that, I would expect the described object to be called Heap<> or somethingHeapsomethingelse (DerefHeap, perhaps?). Instead, we use Box.
Why was the name Box chosen?

First, Heap is a very overloaded term; importantly, a heap is an abstract data structure often used to implement things like priority queues. Having a type called Heap which is not a heap would be extremely confusing, which is a good reason to avoid that name.
Second, "box" is related to the concept of "boxing" or "boxed" objects, in languages which strongly distinguish between value and reference types e.g. Java or Javascript: https://en.wikipedia.org/wiki/Object_type_(object-oriented_programming), in those a "boxed" type is the heap-allocated version of a value type e.g. int/Integer in java, or number/Number in Javascript.
Rust's Box performs an operation which is similar in spirit. Rust also originally had a built-in "lifting" operator called box (it remained an internal operation and was at one point planned to be stabilised for placement new). As such, "box"/"boxing" makes sense linguistically in a way "heap"/"heaping" really does not ("heaping" suggests piling lots of things onto a heap).
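To make the analogy concrete, here is a minimal Rust sketch (written for this answer, not taken from the Rust Book) of the "boxing" idea: the value itself moves to the heap, and the variable on the stack holds only the pointer-sized Box.

fn main() {
    let on_stack: i64 = 5;             // the integer lives directly on the stack
    let boxed: Box<i64> = Box::new(5); // the integer lives on the heap; `boxed` is the box around it

    // Dereferencing the box gives the value back, much like unboxing Integer -> int in Java.
    assert_eq!(on_stack, *boxed);

    // The box itself is just a pointer-sized handle on the stack.
    assert_eq!(std::mem::size_of::<Box<i64>>(), std::mem::size_of::<usize>());
}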

Related

When should I use Pin<Arc<T>> in Rust?

I'm pretty new to Rust and have a couple different implementations of a method that includes a closure referencing self. To use the reference in the closure effectively, I've been using Arc<Self> (I am multithreading) and Pin<Arc<Self>>.
I would like to make this method as generally memory efficient as possible. I assume pinning the Arc in memory would help with this. However, (a) I've read that Arcs are pinned and (b) it seems like Pin<Arc<T>> may require additional allocations.
What is Pin<Arc<T>> good for?
Adding Pin around some pointer type does not change the behavior of the program. It only adds a restriction on what further code you can write (and even then, only if the T in Pin<Arc<T>> is not Unpin; most types are Unpin).
Therefore, there is no "memory efficiency" to be gained by adding Pin.
The only use of Pin is to allow working with types that require being pinned in order to be used, such as Futures.
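As a minimal sketch of that point (Data is a made-up type for illustration): because Data is Unpin, like most types, wrapping the Arc in Pin changes nothing about allocation or behaviour.

use std::pin::Pin;
use std::sync::Arc;

struct Data {
    value: u64,
}

fn main() {
    let shared: Arc<Data> = Arc::new(Data { value: 1 });

    // Pin::new compiles only because Data: Unpin; it wraps the existing Arc
    // in place, so nothing is copied or reallocated.
    let pinned: Pin<Arc<Data>> = Pin::new(Arc::clone(&shared));

    // The pinned handle dereferences exactly like the plain Arc.
    assert_eq!(pinned.value, shared.value);
}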

Can a garbage collected language allocate an object inline on the stack?

Say I have a value type Foo, and a method Bar which accepts a reference to a Foo. Most languages will allow me to allocate a new Foo on the stack, and will automatically box it when I try to pass it to Bar. However, as far as I am aware, this involves copying the Foo value onto the heap, and then using that reference.
Is it possible for a language to include a way of allocating a garbage collected object on the stack? When the method ends, the runtime could check if the object is still in use, and only then would it need to allocate the object on the heap, and update the references.
I imagine this would improve performance for methods that do not keep the reference, and it would hinder performance for methods that do.
Yes, Graal's partial escape analysis does that. While regular escape analysis (EA) can only stack-allocate (more precisely: decompose the object into fields and put the fields on the stack) when the object doesn't escape, partial EA can optimistically allocate on the stack and only reify the data into a heap object in the uncommon cases where the object must exist.
Also note that garbage collection is not a binary choice. You can have environments that mix and match garbage collection, reference counting, arena or scope-based allocators with automatic deallocation, and completely manual management. In such a case, stack allocation could be one of the latter mechanisms while part of the heap is garbage-collected.
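For contrast, here is a rough Rust sketch of the questioner's scenario (Foo and bar are illustrative names, and Rust has no GC, so this only shows the two outcomes escape analysis distinguishes): the non-escaping call needs no heap allocation at all, while the escaping case must move the value to the heap.

struct Foo {
    x: i32,
}

fn bar(foo: &Foo) -> i32 {
    foo.x // the reference never escapes, so no heap allocation is needed
}

fn escapes(foo: Foo) -> Box<Foo> {
    Box::new(foo) // the value outlives this call's stack frame, so it must move to the heap
}

fn main() {
    let local = Foo { x: 1 };          // lives on the stack
    let _ = bar(&local);               // borrowed in place; this is the layout partial EA tries to recover
    let _kept = escapes(Foo { x: 2 }); // the "uncommon" escaping case
}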

Stack allocation of `isbits` types in Julia

Summary of question and answers
Objects of a particular type, say
type Foo
a::A
b::B
end
can be stored in either of two ways:
Inlined (aka by value): in this case, the statement "variable foo::Foo is stored at location x" effectively means we have a variable foo.a::A at location x and a variable foo.b::B at location x + sizeof(A) (technically the addresses could be a bit more complicated, but that's irrelevant for our purposes).
Referenced (aka by reference): "foo::Foo is stored at location x" means the location x contains a pointer fooptr::Ptr{Foo} such that there is a variable foo.a::A at location fooptr and foo.b::B at location fooptr + sizeof(A).
Unlike other languages (I'm looking at you, C/C++), Julia decides by itself whether to store variables inlined or referenced, and it does so based on the properties of the type:
mutable types -> referenced,
immutable types -> referenced if at least one of its fields is referenced, inlined otherwise.
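For readers more used to languages where this choice is spelled out by the programmer, here is a rough Rust analogue of the two layouts (the type and field names are purely illustrative):

struct Inline {
    a: u64,
    b: [u64; 4],      // stored directly inside the struct, like Julia's inlined fields
}

struct Referenced {
    a: u64,
    b: Box<[u64; 4]>, // stored behind a pointer, like a field of a referenced (e.g. mutable) type
}

fn main() {
    // The inlined layout holds all the data in place (and copying it copies everything);
    // the referenced layout holds a pointer and keeps the array on the heap.
    println!("{}", std::mem::size_of::<Inline>());     // 40 on a typical 64-bit target
    println!("{}", std::mem::size_of::<Referenced>()); // 16: one u64 plus one pointer
}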
There are at least two reasons for this rule:
StefanKarpinski's answer: The garbage collector needs to be able to find all pointers to heap-allocated objects on the stack. Currently, Julia ensures this by storing all such pointers on a separate "shadow stack", but if we allowed composite types containing pointers to be placed on the stack then such a neat separation would no longer be possible. Instead, the compiler would need to look for pointers among other variables, which poses technical difficulties.
yuyichao's answer: Julia requires the inline/reference decision to be made on a per-type rather than per-object basis, which means a hypothetical type
immutable A
a::A
end
would have to be infinitely big if we insisted on inlining it. So we would either have to forbid such recursive immutable types, or we could at most allow non-recursive immutable types to be inlined.
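The same constraint exists in Rust, which may make it more familiar: a type stored inline cannot contain itself, so an indirection has to break the cycle (Node is an illustrative name):

// struct Node { next: Node }   // rejected: a recursive type would have infinite size

struct Node {
    next: Option<Box<Node>>,    // the pointer gives the type a finite, known size
}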
Original question
My understanding of memory management in Julia is:
mutable types -> heap-allocated,
immutable types and tuples -> stack-allocated unless one of their fields is heap-allocated (i.e. mutable).
I don't quite understand the rationale for this behaviour, however. I've read somewhere that the problem with stack-allocating immutables with pointers to mutables is that then the garbage collector might consider the mutables unreachable and destroy them prematurely. On the other hand, if we place the immutable on the heap then there will still be a pointer to the mutables, so it might seem like we avoided the problem, but actually we just shifted it to making sure that now the immutable itself will not be destroyed.
Can anyone explain this to someone like me, who has only very superficial knowledge of how garbage collection works?
The problem with stack allocation of objects which reference other objects is knowing that they need to be traced during garbage collection. The simplest way to do this is what Julia does: heap-allocate the objects and "root" them using a "shadow stack" which is pushed and popped in sync with the actual stack. This introduces a fair bit of overhead and forces these objects to be heap-allocated.
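To make the scheme concrete, here is a toy Rust sketch of such a shadow stack (all names here are hypothetical, and real GC rooting is considerably more involved): every rooted object is forced onto the heap, and the root list is pushed and popped alongside the call stack, which is exactly the overhead described above.

use std::cell::RefCell;

thread_local! {
    // The shadow stack: a side list of roots that mirrors the call stack.
    static SHADOW_STACK: RefCell<Vec<*const u8>> = RefCell::new(Vec::new());
}

struct Rooted<T> {
    value: Box<T>, // the scheme forces a heap allocation for every rooted object
}

impl<T> Rooted<T> {
    fn new(value: T) -> Self {
        let value = Box::new(value);
        SHADOW_STACK.with(|s| s.borrow_mut().push(&*value as *const T as *const u8));
        Rooted { value }
    }
}

impl<T> Drop for Rooted<T> {
    fn drop(&mut self) {
        // Popped in sync with the real stack as the owning frame unwinds.
        SHADOW_STACK.with(|s| { s.borrow_mut().pop(); });
    }
}

// A collector would start tracing from every pointer recorded here; it never
// has to scan the machine stack or registers.
fn roots() -> Vec<*const u8> {
    SHADOW_STACK.with(|s| s.borrow().clone())
}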
A more sophisticated approach that avoids the overhead of a shadow stack and heap allocation is to stack-allocate these objects and then scan the stack while doing garbage collection, following references from objects on the stack to objects on the heap. However, this requires knowing which objects on the stack are pointers to objects on the heap – in general, non-heap-allocated objects are not guaranteed to be kept intact or contiguous in registers or on the stack. One approach to doing this is called "conservative stack scanning", which entails assuming during GC that any value on the stack which looks like it could be a pointer to an object on the heap actually is one. That approach has been used successfully in applications like Safari's JavaScript engine, but it's not without its challenges. We've contemplated using conservative stack scanning in Julia, and an initial effort to do so was started, but it was never completed.
References:
https://github.com/JuliaLang/julia/issues/11714
https://github.com/JuliaLang/julia/pull/8134
There are multiple issues/concepts that are frequently mixed together whenever this is brought up.
mutable or non-pointer-free immutable doesn't necessarily mean heap allocation; we already have optimization passes that elide some of the allocations and we are working on improving them further.
The object layout ABI is a user-visible behavior and not something an optimization pass can easily change (unless it can prove that the object involved in the local optimization does not escape). The current ABI is that only isbits immutables will be stored inline (and "stack allocated" when used as local variables). There's a fundamental limitation to lifting the requirement of pointer-freeness for inlined objects, i.e. the necessity to handle recursive types. It is impossible to have all the types in a reference cycle stored inline, so the cycle has to be broken somewhere if we want to make some of them inlined. I believe we do have a consistent and predictable model to do this, though whether it is desirable is another issue.
This is somewhat related to performance, but not always. Stored inline means more copying, so it's hard to make sure there's no regression if we make the switch.
Edit: And I should also mention that being pointer-free is a sufficient condition for being cycle-free and is easier to compute, which is partly why we are currently using it to break inlining cycles.
GC support. This is basically the easiest part. It's very easy to make the GC recognize pointers on the stack. It just needs to be done if we decide to change the object layout ABI.
Edit: And I should add that "GC support" is needed because we currently only support a limited/simple stack layout for object references (i.e. an array of pointers). It's this that needs to be improved.

How do Rust's ownership semantics relate to uniqueness typing as found in Clean and Mercury?

I noticed that in Rust moving is applied to lvalues, and it's statically enforced that moved-from objects are not used.
How do these semantics relate to uniqueness typing as found in Clean and Mercury? Are they the same concept? If not, how do they differ?
The concept of ownership in Rust is not the same as uniqueness in Mercury and Clean, although they are related in that they both aim to provide safety via static checking, and they are both defined in terms of the number of references within a scope. The key differences are:
Uniqueness is a more abstract concept. While it can be interpreted as saying that a reference to a memory location is unique, like Rust's lvalues, it can also apply to abstract values such as the state of every object in the universe, to give an extreme but typical example. There is no pointer corresponding to such a value - it cannot be opened up and inspected within a debugger or anything like that - but it can be used through an interface just like any other abstract type. The aim is to give a value-oriented semantics that remains consistent in the presence of statefulness.
In Mercury, at least (I can't speak for Clean), uniqueness is a more limited concept than ownership, in that there must be exactly one reference. You can't share several copies of a reference on the proviso that they will not be written to, as can be done in Rust. You also can't lend a reference out for writing and then get it back later after the borrower has finished with it (see the Rust sketch at the end of this answer).
Declaring something unique in Mercury does not guarantee that writing to references will occur, just that the compiler will check that it would be safe to do so; it is still valid for an implementation to copy the contents of a unique reference rather than update in place. The compiler will arrange for the update in place if it deems it appropriate at its given optimization level. Alternatively, authors of abstract types may perform similar (or sometimes drastically better) optimizations manually, safe in the knowledge that users will be forced to use the abstract type in a way that is consistent with them. Ownership in Rust, on the other hand, is more directly connected to the memory model and gives stronger guarantees about behaviour.
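A small Rust example of the two capabilities contrasted above (plain uniqueness offers neither): several read-only references can coexist, and a mutable borrow can be handed out and the value used again once the borrower is done.

fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn push_one(values: &mut Vec<i32>) {
    values.push(1); // temporary exclusive access for writing
}

fn main() {
    let mut v = vec![1, 2, 3];

    // Several read-only references may coexist, on the guarantee that none writes.
    let a = &v;
    let b = &v;
    assert_eq!(sum(a), sum(b));

    // Lend the value out for writing...
    push_one(&mut v);

    // ...and use it again once the borrower has finished.
    assert_eq!(v.len(), 4);
}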

Dynamic Languages and Variable Allocation

How does a dynamic language decide how much memory to allocate for a variable?
e.g. how does the compiler change variable = 5 to variable = "xxx" without too much memory overhead? When does it use the hardware stack and when does it use the memory heap?
The compiler allocates enough memory for each variable to hold a pointer plus whatever metadata the language runtime requires. But I think you mean to be asking how much memory is allocated for each object. In that case the answer is that it depends on the type of object. When a variable gets assigned to a different object, the pointer associated with that variable changes what it points to.
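As a minimal sketch of that answer (using Rust as the implementation language; Value is a hypothetical tag-plus-payload type, not any particular interpreter's real layout): the variable is a fixed-size slot, and rebinding it from a number to a string only changes what the slot holds, with the string's character data living on the heap.

enum Value {
    Int(i64),    // small numbers fit inline in the slot
    Str(String), // string data lives on the heap; the slot holds the handle
}

fn describe(v: &Value) -> String {
    match v {
        Value::Int(n) => format!("int {}", n),
        Value::Str(s) => format!("str {}", s),
    }
}

fn main() {
    let mut variable = Value::Int(5);
    println!("{}", describe(&variable)); // int 5

    variable = Value::Str("xxx".to_string());
    println!("{}", describe(&variable)); // str xxx

    // The slot itself has one fixed size regardless of what it currently holds.
    println!("{}", std::mem::size_of::<Value>());
}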
The answer, of course, varies by language - both the hosted dynamic language and the lower-level implementation language. What applies to Perl does not necessarily apply to Python, nor does what applies in Tcl apply in Java or LISP or ... well, do those even count as dynamic languages?
In Perl, there's a C-level structure that goes by the name SV (scalar variable) that contains different storage for different versions of the variable's value. These are often heap-based; the storage for strings always ends up being heap-based, though a pure numeric value that has never been converted to a string might be in an SV that is strictly on the stack. In Perl, these things are reference-counted (and mortalized, or immortalized, and all sorts of other interesting terms). More complicated types (AV, HV, RV, etc.) are based on SV.
