Rust backpointers [duplicate]

This question already has answers here:
Why can't I store a value and a reference to that value in the same struct?
(4 answers)
Shared circular references in Rust
(1 answer)
Closed 3 years ago.
I am learning Rust from a C++/Java background, and I have the following pattern
struct Node<'a> {
network_manager: NetworkManager<'a>,
}
struct NetworkManager<'a> {
base_node: &'a Node<'a>,
}
The node contains the thread pool that the NetworkManager uses to hand off messages once they've been processed. Because of this circular reference, it is not possible to set the base_node field in the NetworkManager immediately. In Java, I would leave it as null and have a second method, initialise(BaseNode node), called after the constructor, that sets the base_node field (ensuring that there are no calls to the network manager before initialise is called).
What is the idiomatic way of doing this in Rust? The only way I can think of is to make base_node an Option type, but this seems suboptimal.
In general, what is the "right" way in Rust to deal with situations where A points to B and B points to A, and where (as in my case), refactoring is not possible?

In my experience, these situations are handled very differently than in other languages. In "safe, simple, everyday Rust", keeping backpointers inside a struct is complicated and leads to non-trivial problems. (Consider what would happen if you moved Node around in memory: how would you properly update the backpointer in NetworkManager?)
What I usually resort to is simply passing base_node as a parameter to the functions that need the backpointer. This is sometimes easier said than done, but leads to code that clearly states ownership.
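For concreteness, here is a minimal sketch of that approach (the handle_message and handoff methods are hypothetical stand-ins, not from the question):
struct Node {
    // thread pool, routing state, ...
}

struct NetworkManager {
    // sockets, buffers, ... but no back-reference to Node
}

impl Node {
    fn handoff(&self, _msg: &[u8]) {
        // e.g. submit the processed message to this node's thread pool
    }
}

impl NetworkManager {
    // The node is borrowed only for the duration of the call that needs it,
    // so ownership stays obvious and no lifetime plumbing is required.
    fn handle_message(&mut self, base_node: &Node, msg: &[u8]) {
        // ... process msg ...
        base_node.handoff(msg);
    }
}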

In Rust, what is !Unpin [duplicate]

This question already has answers here:
What does the exclamation point mean in a trait implementation?
(3 answers)
Closed 9 months ago.
I am unable to locate the documentation for !Unpin referred to here in the docs.
More generally, the ! operator seems to lack corresponding documentation where traits are concerned. Specifically, it seems to represent Not, as in Not Unpin (or perhaps Not Unpinnable) in this case. I suppose it is different from Pin in some way, otherwise it would be redundant. Currently, searching for the documentation is challenging since ! occurs so frequently in other contexts.
It would be good if the operator behavior of ! on traits could be included in Appendix B: Operators and Symbols of the docs.
Unpin is one of several auto traits, which are implemented automatically for any type that's compatible with them. And in the case of Unpin, that's, well, basically every type.
Auto-traits (and only auto-traits) can have negative implementations written by preceding the trait name with a !.
// Negative impls currently require a nightly compiler:
#![feature(negative_impls)]

// By default, A implements Unpin
struct A {}

// But wait! Don't do that! I know something you don't, compiler.
impl !Unpin for A {}
Unpin, specifically, indicates to Rust that it is safe to move values of the implementing type T out of a Pin. Normally, Pin indicates that the thing inside must not be moved. Unpin is a sort of opposite to that: it says "I know we just pinned this value, but I, as the writer of this type, know it's safe to move it anyway".
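As a small illustration of what Unpin buys you (my own example, not from the docs): for an Unpin type you can construct a Pin safely and still move the value back out:
use std::pin::Pin;

fn main() {
    let mut x = 5_i32; // i32 is Unpin, like almost every ordinary type

    // Pin::new is only available when the target type is Unpin.
    let mut pinned: Pin<&mut i32> = Pin::new(&mut x);

    // Because the target is Unpin, Pin<&mut i32> implements DerefMut,
    // so the pinned value can still be overwritten (i.e. moved).
    *pinned = 6;

    // Unpin also lets us unwrap the Pin entirely.
    let r: &mut i32 = Pin::into_inner(pinned);
    *r = 7;

    println!("{x}"); // prints 7
}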
Generally, anything in Safe Rust is Unpin. The only time you'd want to mark something as !Unpin is if it's interfacing with something in another language like C. If you have a datatype that you're storing pointers to in C, for instance, then the C code may be written on the assumption that the data never changes addresses. In that case, you'd want to negate the Unpin implementation.

What are the differences between Cell, RefCell, and UnsafeCell? [duplicate]

This question already has answers here:
Is this error due to the compiler's special knowledge about RefCell?
(1 answer)
How does the Rust compiler know `Cell` has internal mutability?
(3 answers)
When I can use either Cell or RefCell, which should I choose?
(3 answers)
Situations where Cell or RefCell is the best choice
(3 answers)
Closed 9 months ago.
What are the exact differences between Cell, RefCell, and UnsafeCell?
RefCell
Let's start with RefCell, which in my personal experience is the most commonly used of the three.
Normally, in Rust, borrowing rules are checked at compile time. You have to prove to the compiler (effectively, to a type-checker-like system called the borrow checker) that everything you're doing is fair game: that every object is either mutably borrowed once or immutably borrowed several times, and that the two kinds of borrow never overlap. RefCell moves this check to runtime. You can convert a RefCell<T> to a &T with borrow and you can convert a RefCell<T> to a &mut T with borrow_mut[1].
When you call these functions, at runtime, Rust checks that the usual borrow rules apply. You don't have to prove to the type checker that it works, but if it turns out that you're wrong, your program will panic[2]. We haven't changed the rules; we've just shoved them from compile time to runtime. This can be useful, for instance, if you're writing a recursive algorithm that manipulates data in some complicated way, and you've personally checked that the borrow rules are followed but you can't prove it to Rust.
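A minimal sketch of the runtime check in action:
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    {
        // Any number of shared borrows may coexist.
        let a = cell.borrow();
        let b = cell.borrow();
        println!("{} {}", a.len(), b.len());
    } // the shared borrows end here

    // Now an exclusive borrow is allowed.
    cell.borrow_mut().push(4);

    // Holding a shared borrow while asking for an exclusive one breaks the
    // rules, and the check fails at runtime instead of compile time.
    let _guard = cell.borrow();
    assert!(cell.try_borrow_mut().is_err()); // borrow_mut() here would panic
}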
Cell
Then there's Cell. A Cell is similar, but it's not reference-based. Instead, the fundamental operation of a Cell is replace. replace takes a new value for the cell (by value) and, as the name implies, replaces the contents of the cell. No borrow checks need to be done in this case, since the operation happens in one fell swoop: you call the function, the function does its magic, and it returns. It never gives you a reference back. There are other functions on Cell, like update in Nightly Rust, but conceptually they all boil down to replace.
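A quick sketch of Cell's value-in, value-out style:
use std::cell::Cell;

fn main() {
    let counter = Cell::new(0);

    // No references are ever handed out; whole values move in and out.
    counter.set(1);               // set is replace-and-discard
    let old = counter.replace(2); // swap in a new value, get the old one back
    assert_eq!(old, 1);

    // get hands back a copy, which is why it requires T: Copy.
    assert_eq!(counter.get(), 2);
}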
Note that none of the types we're talking about can be shared across threads. So, in the case of Cell, there's no concern that one thread is trying to replace the value while another reads it. And in the case of RefCell, there's no need for a complicated locking mechanism to coordinate the runtime checks.
UnsafeCell
Enter UnsafeCell. UnsafeCell is RefCell with all of the safety checks removed. We haven't pushed safety off to the runtime; we've taken it in the backyard and shot it. The fundamental operation of UnsafeCell is get, which is like borrow_mut for RefCell except that it always returns a raw mutable pointer. It won't fail, it won't complain about race conditions or shared data, it will just give you a pointer. Note that *mut T is a raw pointer type, and the only way to dereference a pointer of that type is with Unsafe Rust.
UnsafeCell should not be used directly. It's the compiler primitive used to implement the other two (safe) cell types. If you do decide that you need UnsafeCell for some reason, you need to be very careful to preserve compiler assumptions about data access, because it's very easy to make things go very wrong when you start dipping into Unsafe Rust. Trust me, I speak from experience on this. Last time I unsafe'd some of my code, it started causing enough problems that I eventually rewrote the whole thing using (safe) primitives and never actually figured out what was going wrong. It gets messy fast.
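To make the "primitive used to implement the other two" point concrete, here is a rough sketch (not the actual std source) of a Cell-like type built directly on UnsafeCell:
use std::cell::UnsafeCell;

struct MyCell<T> {
    value: UnsafeCell<T>,
}

impl<T> MyCell<T> {
    fn new(value: T) -> Self {
        MyCell { value: UnsafeCell::new(value) }
    }

    fn replace(&self, new: T) -> T {
        // SAFETY: UnsafeCell (and therefore MyCell) is !Sync, so no other
        // thread can observe the value, and we never hand out references
        // to the contents, so this exclusive access cannot alias.
        unsafe { std::mem::replace(&mut *self.value.get(), new) }
    }

    fn set(&self, new: T) {
        self.replace(new);
    }
}

fn main() {
    let c = MyCell::new(1);
    assert_eq!(c.replace(2), 1);
    c.set(3);
}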
[1] You're actually converting a RefCell<T> to specialized types called Ref<'_, T> and RefMut<'_, T>, respectively, but those types are designed to act like ordinary references, so I omit that detail for brevity.
[2] There are variants of these functions called try_borrow and try_borrow_mut which return a Result rather than panicking on failure.

Is it idiomatic rust to accept arguments that `impl Borrow<T>` to abstract over references and values of T? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I find myself writing functions that accept arguments as Borrow<T> so that they accept both values and references transparently.
Example:
use std::borrow::Borrow;

#[derive(Debug, Copy, Clone)]
struct Point {
    pub x: i32,
    pub y: i32,
}

pub fn manhattan<T, U>(p1: T, p2: U) -> i32
where
    T: Borrow<Point>,
    U: Borrow<Point>,
{
    let p1 = p1.borrow();
    let p2 = p2.borrow();
    (p1.x - p2.x).abs() + (p1.y - p2.y).abs()
}
That can be useful for implementing std::ops traits like Add, which would otherwise require a lot of repetition to support references transparently.
Is this idiomatic? Are there drawbacks?
I think there are two parts to this question.
1. Is the Borrow trait the idiomatic way to abstract over ownership in Rust?
Yes. If what you intend is to write a function that either takes a Foo or a &Foo, F: Borrow<Foo> is the right bound to use. AsRef, on the other hand, is usually only implemented for things that are reference-like, and not for owned values.
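A contrived illustration of that difference (my own example, not from the linked questions):
use std::borrow::Borrow;

struct Foo(u32);

fn value_of<F: Borrow<Foo>>(f: F) -> u32 {
    let f: &Foo = f.borrow();
    f.0
}

fn main() {
    let foo = Foo(7);
    assert_eq!(value_of(&foo), 7); // &Foo: Borrow<Foo> via the blanket impl for references
    assert_eq!(value_of(foo), 7);  // Foo: Borrow<Foo> via the reflexive blanket impl
    // An F: AsRef<Foo> bound would accept neither call: AsRef has no reflexive
    // blanket impl, and Foo defines no AsRef implementations of its own.
}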
2. Is it idiomatic in Rust to abstract over ownership at all?
Sometimes. This is an interesting question because there is a subtle but important distinction between a function like manhattan and how Borrow is idiomatically used.
In Rust, whether a function needs to own its arguments or merely borrow them is an important part of the function's interface. Rustaceans, as a rule, don't mind writing & in a function call because it's a syntactic marker of a relevant semantic fact about the function being called. A function that can accept either Point or &Point is no more generally useful than the one that can accept only &Point: if you have a Point, all you have to do is borrow it. So it's idiomatic to use the simpler signature that most accurately documents the type the function really needs: &Point.
But wait! There are other differences between those ways of accepting arguments. One difference is call overhead: a &Point will generally be passed in a single pointer-sized register, while a Point may be passed in multiple registers or on the stack, depending on the ABI. Another difference is code size: each unique instantiation of <T: Borrow<Point>> represents a monomorphization of the function, which bloats the binary. A third difference is drop order: if Point has a destructor, a function that accepts T: Borrow<Point> will run it internally when passed an owned Point, while a function that accepts &Point will leave the object in place for the caller to deal with. Whether this is good or bad depends on the context; for performance, though, it's usually irrelevant (assuming the Point will eventually be dropped anyway).
A function accepting T: Borrow<Point> suggests that it's doing something with T internally for which a mere &Point might be suboptimal. Drop order is probably the best reason for doing this (I wrote more about this in this answer, although the puts function I used as an example isn't a particularly strong one).
In the case of manhattan, drop order is irrelevant, because Point is Copy (trivially copied types may not have drop glue). So there is no performance advantage from accepting Point as well as &Point (and although a single function isn't likely to make much difference one way or another, if generics are used pervasively, the cost to code size may well be a disadvantage).
There is one more reason to avoid using generics unnecessarily: they interfere with type inference and can decrease the quality of error messages and suggestions from the compiler. For instance, imagine if Point only implemented Clone (not Copy) and you wrote manhattan(p, q) and then used p again later in the same function. The compiler would complain that p was used after being moved into the function and suggest adding a .clone(). In fact, the better solution is to borrow p, and if manhattan takes references the compiler will enforce that you do just that.
The fact Point is small (so overhead to using it as a function argument is probably minimal) and Copy (so has no drop glue to worry about) raises another question: should manhattan simply accept Point and not use references at all? This is an opinion-based question and really it comes down to which better fits your mental model. Either accept &Point, and use & when a caller has an owned value, or accept Point, and use * when a caller has a reference - there is no hard and fast rule.
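For comparison, a sketch of the two non-generic signatures being weighed here, with the question's Point repeated so the snippet stands alone:
#[derive(Debug, Copy, Clone)]
struct Point {
    pub x: i32,
    pub y: i32,
}

// Take references: a caller holding an owned Point just writes &p.
fn manhattan_ref(p1: &Point, p2: &Point) -> i32 {
    (p1.x - p2.x).abs() + (p1.y - p2.y).abs()
}

// Or take values: Point is small and Copy, so a caller holding a &Point just writes *p.
fn manhattan_val(p1: Point, p2: Point) -> i32 {
    (p1.x - p2.x).abs() + (p1.y - p2.y).abs()
}

fn main() {
    let a = Point { x: 0, y: 0 };
    let b = Point { x: 3, y: -4 };
    assert_eq!(manhattan_ref(&a, &b), 7);
    assert_eq!(manhattan_val(a, b), 7);
}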
What is an appropriate use of Borrow, then?
The argument above strongly depends on the fact that references are easy to take anywhere, so you may as well take them concretely in the caller as abstractly inside the generic function. One time this is not the case is when the borrowed-or-owned type is not passed directly to the function, but wrapped in another generic data structure. Consider sorting a slice of Point-like things by their distance from (0, 0):
fn sort_by_radius<T: Borrow<Point>>(points: &mut [T]) {
    points.sort_by_key(|p| {
        let Point { x, y } = p.borrow();
        x * x + y * y
    });
}
In this case it's definitely not the case that the caller with a &mut [Point] can simply borrow it to get a &mut [&Point]. Yet we would like sort_by_radius to be able to accept both kinds of slices (without writing two functions) so Borrow<Point> comes to the rescue. The difference between sort_by_radius and your version of manhattan is that T is not being passed directly to the function to be immediately borrowed, but is a part of the type that sort_by_radius needs to treat like a Point in order to perform a task ultimately unrelated to borrowing (sorting a slice).
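As a usage sketch (building on the Point and sort_by_radius definitions above), both kinds of slices now go through the one generic function:
fn main() {
    // A slice of owned Points...
    let mut owned = vec![Point { x: 3, y: 4 }, Point { x: 1, y: 1 }];
    sort_by_radius(&mut owned);

    // ...and a slice of references to Points, sorted by the same function.
    let a = Point { x: 3, y: 4 };
    let b = Point { x: 1, y: 1 };
    let mut borrowed = vec![&a, &b];
    sort_by_radius(&mut borrowed);
}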

Why does Rust require ownership annotations instead of inferring it? [duplicate]

This question already has answers here:
Why are explicit lifetimes needed in Rust?
(10 answers)
Closed 2 years ago.
How come Rust does not fully infer ownership of its variables? Why are annotations needed?
If that were even possible I believe it would be a terrible user experience because:
if the compiler cannot deduce ownership of an object, the error can barely be understood (much like the trial-and-error approach forced on you by C++ template errors);
the ownership policy doesn't seem to be easy to grasp (that's one opinion, though), and trying to understand which semantics the compiler has chosen may lead to unexpected behavior (compare JavaScript's weird implicit type conversions);
more bugs can be introduced during refactoring (implied by the point above);
full program inference would definitely take a huge amount of time, if it is even a solvable problem.
However, if you struggle with a lack of polymorphism, it is usually possible to parametrize a function over the kind of ownership it accepts, which might be considered a somewhat explicit alternative to inference, e.g.:
fn print_str(s: impl AsRef<str>) {
    println!("{}", s.as_ref());
}

fn main() {
    print_str("borrowed");
    print_str("owned".to_owned());
}

Why can't Rust do more complex type inference? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
In Chapter 3 of The Rust Programming Language, the following code is used as an example for a kind of type inference that Rust cannot manage:
fn main() {
    let condition = true;
    let number = if condition { 5 } else { "six" };
    println!("The value of number is: {}", number);
}
with the explanation that:
Rust needs to know at compile time what type the number variable is, definitively, so it can verify at compile time that its type is valid everywhere we use number. Rust wouldn’t be able to do that if the type of number was only determined at runtime; the compiler would be more complex and would make fewer guarantees about the code if it had to keep track of multiple hypothetical types for any variable.
I'm not certain I understand the rationale, because the example does seem like something where a simple compiler could infer the type.
What exactly makes this kind of type inference so difficult? In this case, the value of condition can clearly be inferred at compile time (it's true), and thus the type of number can be too (it's i32?).
I can see how things could become a lot more complicated, if you were trying to infer types across multiple compilation units for instance, but is there something about this specific example that would add a lot of complexity to the compiler?
There are three main reasons I can think of:
1. Action at a distance effects
Let's suppose the language worked that way. Since we're extending type inference, we might as well make the language even smarter and have it infer return types as well. This allows me to write something like:
pub fn get_flux_capacitor() {
    let is_prod = true;
    if is_prod { FluxCapacitor::new() } else { MovieProp::new() }
}
And elsewhere in my project, I can get a FluxCapacitor by calling that function. However, one day, I change is_prod to false. Now, instead of getting an error that my function is returning the wrong type, I will get errors at every call site. A small change inside one function has led to errors in entirely unchanged files! That's pretty weird.
(If we don't want to add inferred return types, just imagine it's a very long function instead.)
2. Compiler internals exposed
What happens in the case where it's not so simple? Surely this should be the same as the above example:
pub fn get_flux_capacitor() {
    let is_prod = (1 + 1) == 2;
    ...
}
But how far does that extend? The compiler's constant propagation is mostly an implementation detail. You don't want the types in your program to depend on how smart this version of the compiler is.
3. What did you actually mean?
As a human looking at this code, it looks like something is missing. Why are you branching on true at all? Why not just write FluxCapacitor::new()? Perhaps there's missing logic that checks for an env=DEV environment variable. Perhaps a trait object should actually be used so that you can take advantage of runtime polymorphism.
In this kind of situation where you're asking the computer to do something that doesn't seem quite right, Rust often chooses to throw its hands up and ask you to fix the code.
You're right that in this very specific case (where condition is statically known to be true), the compiler could be made to detect that the else branch is unreachable and therefore that number must be 5.
This is just a contrived example, though... in the more general case, the value of condition would only be dynamically known at runtime.
It's in that case, as others have said, that inference becomes hard to implement.
On that topic, there are two things I haven't seen mentioned yet:
1. The Rust language design tends to err on the side of doing things as explicitly as possible.
2. Rust type inference is only local.
On point #1, the explicit way for Rust to deal with the "this type can be one of multiple types" use case is an enum.
You can define something like this:
#[derive(Debug)]
enum Whatsit {
    Num(i32),
    Text(&'static str),
}
and then write let number = if condition { Whatsit::Num(5) } else { Whatsit::Text("six") };
On point #2, let's see why the enum (while wordier) is the preferred approach in the language. In the example from the book we just try printing the value of number.
In a more realistic scenario we would at some point use number for something other than printing.
This means passing it to another function or including it in another type. Or (even to enable the use of println!) implementing the Debug or Display traits on it. Local inference means that, if you couldn't name the type of number, you would not be able to do any of these things.
Suppose you want to create a function that does something with a number;
with the enum you would write:
fn do_something(number: Whatsit)
but without it...
fn do_something(number: /* what type is this? */)
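For completeness, a sketch of the enum-based version in full (the body of do_something is mine, just to show the value actually being consumed):
#[derive(Debug)]
enum Whatsit {
    Num(i32),
    Text(&'static str),
}

fn do_something(number: Whatsit) {
    // With a nameable type, we can take the value apart again.
    match number {
        Whatsit::Num(n) => println!("a number: {}", n),
        Whatsit::Text(s) => println!("some text: {}", s),
    }
}

fn main() {
    let condition = true;
    let number = if condition { Whatsit::Num(5) } else { Whatsit::Text("six") };
    do_something(number);
}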
In a nutshell, you're right that in principle it IS doable for the compiler to synthesize a type for number. For instance, the compiler might create an anonymous enum like Whatsit above when compiling that code.
But you - the programmer - would not know the name of that type, would not be able to refer to it, wouldn't even know what you can do with it (can I multiply two "numbers"?) and this would greatly limit its usefulness.
A similar approach was followed, for instance, to add closures to the language: the compiler knows the specific type of each closure, but you, the programmer, do not. If you're interested, I can try to dig up the discussions on the difficulties that this approach introduced into the design of the language.
