Given a Rust program, which compiles correctly, can I get the compiler to tell me what the elided lifetimes were inferred to be?
The cases where the compiler (currently[1]) can allow elided lifetimes are actually so simple that there isn't much the compiler could tell you about what it inferred:
Within a single function signature, each elided input lifetime becomes its own fresh lifetime parameter, and an elided output lifetime is only accepted when there is exactly one input lifetime it could possibly refer to.
The compiler doesn't accept elided lifetimes in cases where it would have a genuine choice to make. The exception is in methods, where elided output lifetimes are tied to the lifetime of self even if other inputs have lifetimes of their own; since that is nearly always what is intended, it makes sense for the compiler to make this assumption.
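As a minimal sketch (the function and type names here are invented for illustration), this is what two elided signatures desugar to:

// Elided form:
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// What the compiler infers: the single input lifetime is reused for the output.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

struct Config {
    name: String,
}

impl Config {
    // Elided form: the returned reference is tied to self, not to key.
    fn lookup(&self, _key: &str) -> &str {
        &self.name
    }

    // Equivalent explicit form:
    fn lookup_explicit<'s, 'k>(&'s self, _key: &'k str) -> &'s str {
        &self.name
    }
}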
[1] If a future version of Rust performed more sophisticated inference on elided lifetimes, then this question might have a far less trivial answer. For example the compiler could analyse the entire codebase to deduce a coherent set of lifetimes for all functions (or impls or structs if elision was permitted there too).
After some discussion, I'm now a little bit confused about the relation between auto-dereferencing and deref coercion.
It seems that the term "auto-dereferencing" applies only when the target to dereference is a method receiver,
whereas it seems that the term "deref coercion" applies to function arguments and any other context where a conversion is needed.
I thought that a dereference does not always involve deref coercion, but I'm not sure: does dereferencing always use some Deref::deref trait implementation?
If so, is the implementor of T: Deref<Target = U> where T: &U built into the compiler?
Finally, it sounds natural to use the term "autoderef" in all the cases where the compiler implicitly transforms &&&&x to &x:
pub fn foo(_v: &str) -> bool {
    false
}

let x = "hello world";
foo(&&&&x);
Is this the general consensus of the community?
The parallels between the two cases are rather superficial.
In a method call expression, the compiler first needs to determine which method to call. This decision is based on the type of the receiver. The compiler builds a list of candidate receiver types, which includes all types obtained by repeatedly dereferencing the receiver, but also &T and &mut T for every type T encountered. This is the reason why you can call a method taking &mut self directly as x.foo() instead of having to write (&mut x).foo(). For each type in the candidate list, the compiler then looks up inherent methods and methods on visible traits. See the language reference for further details.
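As a small sketch of what that candidate search buys you in practice (the Counter type here is made up for illustration):

struct Counter {
    value: u32,
}

impl Counter {
    fn bump(&mut self) -> u32 {
        self.value += 1;
        self.value
    }
}

fn main() {
    let mut c = Counter { value: 0 };
    // bump takes &mut self; the candidate list contains &mut Counter, so the
    // compiler inserts the mutable borrow for us:
    c.bump();
    // which is the same as the fully explicit call:
    (&mut c).bump();

    // The candidate list is also built by repeatedly dereferencing, so the
    // call works through a Box as well:
    let mut boxed = Box::new(Counter { value: 0 });
    boxed.bump();
}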
A deref coercion is rather different. It only occurs at a coercion site where the compiler knows exactly what type to expect. If the actual type differs from the expected type, the compiler can use any applicable coercion, including a deref coercion, to convert the actual type to the expected type. The list of possible coercions includes unsized coercions, pointer weakening and deref coercions. See the chapter on coercions in the Nomicon for further details.
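A minimal sketch of a deref coercion at a call site (takes_str is an assumed helper, not a standard function):

fn takes_str(_s: &str) {}

fn main() {
    let owned = String::from("hello");
    // &String is coerced to &str through String's Deref<Target = str> impl,
    // because the call site expects a &str:
    takes_str(&owned);

    let boxed: Box<String> = Box::new(String::from("world"));
    // Deref coercions can chain: &Box<String> -> &String -> &str.
    takes_str(&boxed);
}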
So these are really two quite different mechanisms – one for finding the right method, and one for converting types when the expected type is already known exactly. The first mechanism also automatically references the receiver, which can never happen in a coercion.
I thought that a dereference does not always involve deref coercion, but I'm not sure: does dereferencing always use some Deref::deref trait implementation?
Not every dereferencing is a deref coercion. If you write *x, you explicitly dereference x. A deref coercion in contrast is performed implicitly by the compiler, and only in places where the compiler knows the expected type.
The semantics of dereferencing depend on whether the type of x is a pointer type, i.e. a reference or a raw pointer, or not. For pointer types, *x denotes the object x points to, while for other types *x is equivalent to *Deref::deref(&x) (or the mutable analogue of this).
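A small illustration of that split, using Rc as a type whose dereferencing goes through Deref:

use std::ops::Deref;
use std::rc::Rc;

fn main() {
    let r: &i32 = &5;
    // Dereferencing a reference is built into the language:
    let a: i32 = *r;

    let rc: Rc<i32> = Rc::new(7);
    // Rc is not a built-in pointer type, so *rc goes through its Deref impl:
    let b: i32 = *rc;
    // which is the same as spelling it out by hand:
    let c: i32 = *Deref::deref(&rc);

    assert_eq!(a, 5);
    assert_eq!(b, c);
}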
If so, is the implementor of T: Deref<Target = U> where T: &U built into the compiler?
I'm not quite sure what your syntax is supposed to mean – it's certainly not valid Rust syntax – but I guess you are asking whether dereferencing an instance of &T to T is built into the compiler. As mentioned above, dereferencing of pointer types, including references, is built into the compiler, but there is also a blanket implementation of Deref for &T in the standard library. This blanket implementation is useful for generic code – the trait bound T: Deref<Target = U> otherwise wouldn't allow for T = &U.
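A sketch of why that blanket implementation matters in generic code (print_target is a made-up helper):

use std::fmt::Display;
use std::ops::Deref;

// Generic over anything that dereferences to something printable.
fn print_target<T, U>(value: T)
where
    T: Deref<Target = U>,
    U: Display + ?Sized,
{
    println!("{}", &*value);
}

fn main() {
    // Works for Box<i32> because Box has its own Deref implementation...
    print_target(Box::new(42));
    // ...and for &i32 only because of the blanket impl of Deref for &T.
    print_target(&42);
}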
Traits in Rust seem at least superficially similar to typeclasses in Haskell, however I've seen people write that there are some differences between them. I was wondering exactly what these differences are.
At the basic level there aren't many differences, but they're still there.
Haskell describes functions or values defined in a typeclass as 'methods', just as traits describe OOP methods in the objects they enclose. However, Haskell deals with these differently, treating them as individual values rather than pinning them to an object as OOP would lead one to do. This is about the most obvious surface-level difference there is.
The one thing that Rust cannot express is higher-kinded traits, such as the infamous Functor and Monad typeclasses.
This means that a Rust trait can only describe what's often called a 'concrete type', in other words, one without a generic argument. Haskell could define higher-kinded typeclasses from the start, which use types similarly to how higher-order functions use other functions: using one to describe another. Rust cannot abstract over type constructors in this way, but associated items let many of the same patterns be emulated, and traits written in that style have become commonplace and idiomatic.
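As a rough sketch of that associated-items style (the Container trait below is invented for illustration, it is not a standard library trait):

// A trait with an associated type: the trait still describes one concrete
// implementing type, but each implementor names the item type it works with.
trait Container {
    type Item;
    fn item_at(&self, index: usize) -> Option<&Self::Item>;
}

impl<T> Container for Vec<T> {
    type Item = T;
    fn item_at(&self, index: usize) -> Option<&T> {
        self.get(index)
    }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.item_at(1), Some(&2));
}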
So if we ignore extensions, they are not exactly the same, but each can approximate what the other can do.
It is also worth mentioning, as noted in the comments, that GHC (Haskell's principal compiler) supports further options for typeclasses, including multi-parameter (i.e. many types involved) typeclasses, and functional dependencies, a lovely option that allows for type-level computations and leads on to type families. To my knowledge, Rust has neither funDeps nor type families, though it may in the future.†
All in all, traits and typeclasses have fundamental differences which, due to the way they interact, make them act and seem quite similar in the end.
† A nice article on Haskell's typeclasses (including higher-kinded ones) can be found here, and the Rust by Example chapter on traits may be found here.
I think the current answers overlook the most fundamental differences between Rust traits and Haskell type classes. These differences have to do with the way traits are related to object oriented language constructs. For information about this, see the Rust book.
A trait declaration creates a trait type. This means that you can declare variables of such a type (or rather, references to the type). You can also use trait types as function parameters, struct fields and type parameter instantiations.
A trait reference variable can at runtime contain objects of different types, as long as the runtime type of the referenced object implements the trait.
// The shape variable might contain a Square or a Circle,
// we don't know until runtime
let shape: &Shape = get_unknown_shape();
// Might contain different kinds of shapes at the same time
let shapes: Vec<&Shape> = get_shapes();
This is not how type classes work. Type classes create no types, so you can't declare variables with the class name. Type classes act as bounds on type parameters, but the type parameters must be instantiated with a concrete type, not the type class itself.
You cannot have a list of things of different types which implement the same type class. (Instead, existential types are used in Haskell to express a similar thing.) Note 1
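A sketch of the contrast in Rust, using a hypothetical Shape trait along the lines of the snippets in this answer:

trait Shape {
    fn area(&self) -> f64;
}

struct Square { side: f64 }
struct Circle { radius: f64 }

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

// As a bound: T is a single concrete type per call, resolved at compile time.
fn print_area<T: Shape>(shape: &T) {
    println!("area = {}", shape.area());
}

fn main() {
    print_area(&Square { side: 2.0 });
    print_area(&Circle { radius: 1.0 });

    // As a type (a trait object): values of different concrete types can sit
    // in the same list, and the right area() is chosen at runtime.
    let square = Square { side: 2.0 };
    let circle = Circle { radius: 1.0 };
    let shapes: Vec<&dyn Shape> = vec![&square, &circle];
    for s in shapes {
        println!("area = {}", s.area());
    }
}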
Trait methods can be dynamically dispatched. This is strongly related to the things that are described in the section above.
Dynamic dispatch means that the runtime type of the object a reference points to is used to determine which method is called through the reference.
let shape: &Shape = get_unknown_shape();
// This calls a method, which might be Square::area or
// Circle::area depending on the runtime type of shape
print!("Area: {}", shape.area());
Again, existential types are used for this in Haskell.
In Conclusion
It seems to me like traits are in many aspects the same concept as type classes. In addition, they have the functionality of object oriented interfaces.
On the other hand, Haskell's type classes are more advanced. Haskell has, for example, higher-kinded types and extensions like multi-parameter type classes.
Note 1: Recent versions of Rust differentiate between using a trait name as a type and using it as a bound. When used as a type, the name is prefixed by the dyn keyword. See for example this answer for more information.
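For instance, with the dyn syntax the declarations from the snippets above become:

let shape: &dyn Shape = get_unknown_shape();
let shapes: Vec<&dyn Shape> = get_shapes();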
Rust's “traits” are analogous to Haskell's type classes.
The main difference from Haskell is that trait methods are usually invoked with method-call (dot) notation, i.e. expressions of the form a.foo(b), rather than as plain functions.
Haskell type classes extend to higher-kinded types. Rust traits don't support higher-kinded types only because they are missing from the language as a whole, i.e. it's not a philosophical difference between traits and type classes.
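A small illustration of the dot-notation point, using ToString from the standard library: the same trait method can be invoked with dot syntax or as a path-qualified function.

fn main() {
    let n = 42;
    // Trait methods are usually called with method (dot) syntax:
    let a = n.to_string();
    // but the very same method is an ordinary item that can be called by path,
    // much like a Haskell class method is a plain function:
    let b = ToString::to_string(&n);
    assert_eq!(a, b);
}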