Why is Rust's .expect() called expect? [closed] - rust

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Rust's .expect() is one of the most surprising names in the Result/Option space. While unwrap makes sense -- get a value from a Result by "unwrapping" it -- expect is surprisingly counterintuitive (to me).
Since so much of Rust is inspired by conventions from functional programming languages, I have been assuming that this is another example of "strange homage to obscurity," but when I asked the Duck it couldn't find an answer for me.
So, I give up. Why is .expect() the name for Rust's .unwrap_or_panic_with_this_message() function? Is this a reference to a feature in yet another functional language? Is it back-handed shade (or a compliment) to Tcl? The result of too many late nights and too much espresso at Mozilla?
What is the etymology of this plucky (but fierce!) little member of the standard library?
Edit:
Great news, everyone! After more than a year, we now have both an "explanation" in GManNickG's tremendously well-researched and documented answer, below, and a "mental model" provided by Peng's comment and link:
https://doc.rust-lang.org/std/error/index.html#common-message-styles
In short, it is now recommended that diagnostics for expect() should be written as "expect...should" or maybe "expect...should have". This makes more sense in the diagnostic output (see the link for examples) but also makes the source code scan better:
foo.expect("something bad happened") // NO!
foo.expect("gollywumpus should be current") // YES!

Summary:
No explicit reason for the name is given. However, it is incredibly likely the name comes from the world of parsers, where one "expects" to see a particular token (else the compilation fails).
Within rustc, the use of expect-like functions long predates their use within Option. These are functions like expect(p, token::SEMI), which expects to parse a semicolon, and expect_word(p, "let"), which expects to parse the let keyword. If the expectation isn't met, compilation fails with an error message.
Eventually a utility function was added within the compiler that would expect not a specific token or string, but that a given Option contained a value (else, fail compilation with the given error message). Over time this was moved to the Option type itself, where it remains today.
Personally, I don't find it unusual at all. It's just another verb you can do to the object, like unwrapping or taking or mapping its value. Expecting a value (else, fail) from your Option seems quite natural.
History:
The oldest commit of note is the following:
https://github.com/rust-lang/rust/commit/b06dc884e57644a0c7e9c5391af9e0392e5f49ac
Which adds this function within the compiler:
fn expect<T: copy>(sess: session, opt: option<T>, msg: fn() -> str) -> T {
    alt opt {
      some(t) { t }
      none { sess.bug(msg()); }
    }
}
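For readers unfamiliar with 2012-era syntax (alt, option<T>, fn() -> str), here is my own rough modern rendering of what that helper does; panicking stands in for the original sess.bug:

// My own rough modern sketch of the 2012-era helper above (not from the
// commit): pull the value out of an Option, or abort with a lazily built
// message. A panic stands in for the original `sess.bug`.
fn expect<T>(opt: Option<T>, msg: impl FnOnce() -> String) -> T {
    match opt {
        Some(t) => t,
        None => panic!("{}", msg()),
    }
}

fn main() {
    let field = expect(Some("field_a"), || "get_field_type: field not found".to_string());
    println!("{field}");
}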
As far as I can tell, this is the first function named "expect" that deals with inspecting an Option. Observe in particular this example use-case within the commit (which was implementing support for class methods!):
#debug("looking up %? : %?", def, class_doc);
let the_field = expect(tcx.sess,
                       decoder::maybe_find_item(def.node, class_doc),
                       {|| #fmt("get_field_type: in class %?, field ID %? not found",
                                class_id, def)});
If the result of decoder::maybe_find_item is None, compilation will fail with the given error.
I encourage you to look at the parser code in this commit - there is extensive use of other expect-esque functions: e.g., expect(p, token::RPAREN) and expect_word(p, "let"). The name of this new function is almost obvious in this environment.
Eventually, the utility of this function was extracted and placed within Option itself:
https://github.com/rust-lang/rust/commit/e000d1db0ab047b8d2949de4ab221718905ce3b1
Which looked like:
pure fn expect<T: copy>(opt: option<T>, reason: str) -> T {
    #[doc = "
    Gets the value out of an option, printing a specified message on failure
    # Failure
    Fails if the value equals `none`
    "];
    alt opt { some(x) { x } none { fail reason; } }
}
It's worth noting that sometime later, an (additional) function named unwrap_expect was added in:
https://github.com/rust-lang/rust/commit/be3a71a1aa36173ce2cd521f811d8010029aa46f
pure fn unwrap_expect<T>(-opt: option<T>, reason: ~str) -> T {
    //! As unwrap, but with a specified failure message.
    if opt.is_none() { fail reason; }
    unwrap(opt)
}
Over time these were both subsumed by an Expect trait, which Option implemented:
https://github.com/rust-lang/rust/commit/0d8f5fa618da00653897be2050980c800389be82
/// Extension trait for the `Option` type to add an `expect` method
// FIXME(#14008) should this trait even exist?
pub trait Expect<T> {
    /// Unwraps an option, yielding the content of a `Some`
    ///
    /// # Failure
    ///
    /// Fails if the value is a `None` with a custom failure message provided by
    /// `msg`.
    fn expect<M: Any + Send>(self, m: M) -> T;
}
Spoiler for that FIXME: the trait no longer exists. It was removed shortly afterwards per:
https://github.com/rust-lang/rust/issues/14008.
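Incidentally, the extension-trait pattern itself is still perfectly usable today. Here is a minimal modern sketch of my own (the trait and method names are mine, not the removed Expect trait), bolting an extra expect-like method onto Option from outside the standard library:

// A minimal modern extension-trait sketch of my own; the trait and method
// names are hypothetical, not the removed `Expect` trait.
trait ExpectWith<T> {
    /// Like `expect`, but builds the panic message lazily.
    fn expect_with<F: FnOnce() -> String>(self, msg: F) -> T;
}

impl<T> ExpectWith<T> for Option<T> {
    fn expect_with<F: FnOnce() -> String>(self, msg: F) -> T {
        match self {
            Some(t) => t,
            None => panic!("{}", msg()),
        }
    }
}

fn main() {
    let value = Some(3).expect_with(|| format!("value should exist at index {}", 3));
    println!("{value}");
}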
More or less this is where we are at today.
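For comparison, here is what everyday use looks like today, with the "expect ... should" message style mentioned at the top of the question (the values are made up for illustration):

// Today's Option::expect and Result::expect, using the "expect ... should"
// message style. Values are invented for the example.
fn main() {
    let port: Option<u16> = Some(8080);
    let port = port.expect("port should have been set by the config loader");

    let parsed: Result<u16, _> = "8080".parse();
    let parsed = parsed.expect("\"8080\" should parse as a valid u16");

    println!("{port} {parsed}");
}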
I think the most likely conclusion is that the use of expect as a meaningful function name long predates its use in Option. Given that it says what it does (expect a value or fail), there is little reason to break the pattern.

Related

Learning rust getting compile error when declaring None

I'm currently reading the official rust-lang book (the one on their website/documentation) and I'm taking notes by copying code and writing comments for everything. I'm currently on chapter 6, the Option enum type. Based on the book, and some Rustlings code I came across while googling, the following should be possible:
let none: Option<i32> = None;
I also have the following notes in comment form next to it :
If we use None rather than Some, we need to tell Rust what type of Option<T> we have, because the compiler can’t infer the type that the Some variant will hold by looking only at a None value. As far as I can tell my line satisfies that requirement, but I keep getting the following error:
mismatched types
expected enum `main::Option<i32>`
found enum `std::option::Option<_>`
I did come across this which works:
let _equivalent_none = None::<i32>;
Can anyone explain why one works but the other doesn't? The official book doesn't even mention the second variant (that doesn't throw an error). Is the newest version different from what's documented in the book?
It appears that you have defined your own enum called Option in your program. Thus, there are two different types called Option: yours (main::Option), and the standard one (std::option::Option). The variable none has type main::Option, but None is of type std::option::Option.
The obvious solution is to just delete your own enum. If, however, for the sake of experiment you do want to create an instance of your own enum called Option, and assign the value None to it, you'd need to qualify None:
let none: Option<i32> = Option::None;
The problem is that defining an enum Option { None, … } brings a new Option into scope, shadowing the std::option::Option imported by std::prelude by default. However, enum Option { None, … } does not bring a new None into scope, so the std::option::Option::None imported by prelude is still there.
So, you have two options:
Use let none: Option<i32> = Option::None;, specifying explicitly which None to use.
Add a use crate::Option::*; below your enum, bringing your own None into scope (see the sketch below).
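Here is a minimal, self-contained sketch of my own (the enum body is hypothetical, since your program isn't shown) that reproduces the shadowing and shows both fixes:

// A hypothetical reproduction: a local `Option` enum shadows
// `std::option::Option`, but a bare `None` still refers to the prelude's
// `std::option::Option::None`.
#[allow(dead_code)]
enum Option<T> {
    None,
    Some(T),
}

fn main() {
    // ERROR if uncommented: the annotation names the local `Option`,
    // while the bare `None` is still `std::option::Option::None`.
    // let broken: Option<i32> = None;

    // Fix 1: name the variant through the local enum explicitly.
    let _fix1: Option<i32> = Option::None;

    // Fix 2 (alternative): add `use crate::Option::*;` below the enum so a
    // bare `None` refers to the local enum instead.
}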

Is there a way to show a warning that a Result is redundant as a return type?

Is there an attribute like #[warn(redundant_result_as_return_type)] to show a warning that Result is redundant as a return type?
#[derive(Debug)]
struct SomeError();

fn process_some() -> Result<(), SomeError> {
    Ok(())
}

fn main() {
    process_some().unwrap();
}
(playground)
This code produces no warnings despite the fact that Result is not needed as the return type at all.
I've been deciding how to properly handle errors from a function in a crate used in my project. After digging into the function's implementation, it turned out that it never actually produces an error.
Since then, I have wanted to prevent such cases in my own code. Ideally it would be great to get such warnings from inside imported crates too, when using methods with redundant Results, but as far as I understand such checking is impossible for used crates without adjusting their code.
The question is in some way the opposite of Possible to declare functions that will warn on unused results in Rust?.
No, there is no such warning, either in the compiler or in Clippy. You could submit an issue to Clippy suggesting they add it, of course.
I question whether this is a common occurrence, or even one that is actually a problem. Even if it were, such a warning would have several disadvantages:
I use this pattern to scaffold out functions that I'm pretty sure will return an error once I've implemented them.
A method that is part of a trait cannot change the return type (see the sketch after this list).
You might be passing the function as a function pointer or closure argument and the return type cannot be changed.
Changing the return type breaks API backwards compatibility, so you have to wait until a major version bump.
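To illustrate the trait point, here is a minimal sketch with a hypothetical trait and types of my own: the second implementation below never fails, yet it cannot drop the Result without breaking the trait contract.

// Hypothetical trait and types: the trait fixes the return type for every
// implementor, so even an infallible implementation must return Result.
#[derive(Debug)]
struct ParseError;

trait Loader {
    // Some implementors genuinely fail, so the trait returns a Result.
    fn load(&self, input: &str) -> Result<String, ParseError>;
}

struct FailingLoader;

impl Loader for FailingLoader {
    // This implementor really can fail.
    fn load(&self, _input: &str) -> Result<String, ParseError> {
        Err(ParseError)
    }
}

struct EchoLoader;

impl Loader for EchoLoader {
    // Infallible, but it cannot drop the Result without breaking the trait.
    fn load(&self, input: &str) -> Result<String, ParseError> {
        Ok(input.to_owned())
    }
}

fn main() {
    println!("{:?}", FailingLoader.load("hello"));
    println!("{:?}", EchoLoader.load("hello"));
}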

Differences in syntax for invoking trait method on generic type

I'm trying to understand the need for the Trait:: and <T as Trait>:: method invocation syntax. In particular, I'm looking at the following function from this answer:
fn clone_into_array<A, T>(slice: &[T]) -> A
where
    A: Default + AsMut<[T]>,
    T: Clone,
{
    assert_eq!(
        slice.len(),
        std::mem::size_of::<A>() / std::mem::size_of::<T>()
    );
    let mut a = Default::default();
    <A as AsMut<[T]>>::as_mut(&mut a).clone_from_slice(slice);
    a
}
It seems that the middle two method invocation lines can be rewritten as:
let mut a = A::default();
a.as_mut().clone_from_slice(slice);
I believe this is more readable, as we already know that A implements Default and AsMut<[T]> and we can invoke as_mut directly as a method instead of having to pass a to it explicitly.
However, am I missing a good reason for the linked answer to have written it more verbosely? Is it considered good style? Are the two semantically different under certain conditions?
I agree with your rewrites — they are more clear and are what I would recommend for that function.
am I missing a good reason for the linked answer to have written it more verbosely?
My guess is that the author simply was tired of writing that function and stopped. If they took a look at it again, they might refine it to be shorter.
Is it considered good style?
I don't think there's a general community stylistic preference around this yet, other than the general "shorter is better until it isn't".
Are the two semantically different under certain conditions?
They shouldn't be.
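For completeness, here is the full function with your rewrites applied (my own sketch of the shorter form, plus a tiny usage example):

// The same clone_into_array, written with the shorter forms from the question.
fn clone_into_array<A, T>(slice: &[T]) -> A
where
    A: Default + AsMut<[T]>,
    T: Clone,
{
    assert_eq!(
        slice.len(),
        std::mem::size_of::<A>() / std::mem::size_of::<T>()
    );
    let mut a = A::default();
    a.as_mut().clone_from_slice(slice);
    a
}

fn main() {
    let arr: [u8; 3] = clone_into_array(&[1u8, 2, 3]);
    println!("{:?}", arr);
}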
There are times where the <>:: syntax is needed because otherwise it would be ambiguous. One example from a recent question:
let array = <&mut [u8; 3]>::try_from(slice);
Another time is when you don't have a nicely-named intermediate type or the intermediate type is ambiguous across multiple traits. One gross example is from a where clause but shows the same issue as an expression would:
<<Tbl as OrderDsl<Desc<Expr>>>::Output as LimitDsl>::Output: QueryId,
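And here is a small self-contained illustration, using hypothetical traits of my own, of the ambiguity case: two traits provide a method with the same name, so plain method syntax doesn't compile and fully qualified syntax is needed.

// Hypothetical traits sharing a method name: plain `b.length()` is ambiguous,
// so fully qualified (or trait-qualified) syntax resolves it.
trait Meters {
    fn length(&self) -> f64;
}

trait Feet {
    fn length(&self) -> f64;
}

struct Bridge {
    meters: f64,
}

impl Meters for Bridge {
    fn length(&self) -> f64 {
        self.meters
    }
}

impl Feet for Bridge {
    fn length(&self) -> f64 {
        self.meters * 3.28084
    }
}

fn main() {
    let b = Bridge { meters: 100.0 };
    // b.length(); // ERROR: multiple applicable items in scope
    println!("{}", <Bridge as Meters>::length(&b)); // fully qualified
    println!("{}", Feet::length(&b));               // also unambiguous, shorter form
}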
See also:
How to call a method when a trait and struct use the same name?

How to define and call a function in Jenkinsfile?

I've seen a bunch of questions related to this subject, but none of them offers anything that would be an acceptable solution (please, no loading external Groovy scripts, no calling the sh step, etc.).
The operation I need to perform is a one-liner, but pipeline limitations made it impossible to write anything useful in that unter-language...
So, here's a minimal example:
@NonCPS
def encodeProperties(Map properties) {
    properties.collect { k, v -> "$k=$v" }.join('|')
}

node('dockerized') {
    stage('Whatever') {
        properties = [foo: 123, bar: "foo"]
        echo encodeProperties(properties)
    }
}
Depending on whether I add or remove the @NonCPS annotation, or the type declaration of the argument, the error changes, but it never gives any reason for what happened. It's basically random noise that contradicts the reality of the situation: at times it would claim that some irrelevant object doesn't have a method encodeProperties, at other times it would say that it cannot find a method encodeProperties with a signature that nobody was trying to call it with (like two arguments instead of one), and so on.
From reading the documentation, which is of disastrous quality, I sort of understood that maybe functions in general aren't serializable, and that is why you need to explain this explicitly to the Groovy interpreter... I'm sorry, this makes no sense, but it is roughly what the documentation says.
Obviously, trying to use collect inside stage creates a load of new errors... which are at least understandable, in that the author confesses that their version of Groovy doesn't implement most of the Groovy standard...
It's just a typo. You defined encodeProperties but called encodeProprties.

Self-mutate Swift struct in background thread

Assume we have a struct capable of self-mutation that has to happen as part of a background operation:
struct Thing {
    var something = 0

    mutating func operation(block: () -> Void) {
        // Start some background operation
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
            // Mutate self upon background task completion
            self.something += 1
            block()
        }
    }
}
Now, when I use such a struct in context:
var myThing = Thing()
myThing.operation {
    println(myThing.something)
}
The println gives me 0, as if myThing was never mutated. Printing self.something from within the dispatch_async obviously yields 1.
How do I work around this issue, preferably without having to pass the updated struct's self in the operation completion block and overwriting the original variable in the main context?
// Ew
var myThing = Thing()
myThing.operation { (mutatedThing) in
    myThing = mutatedThing
    println(myThing.something)
}
I'm adding a second answer because my first answer addressed a different point.
I have just encountered this difficulty myself in circumstances almost identical to yours.
After working and working and working to try to find out what was going on, and fix it, I realized that the problem was, basically, using a value type where a reference type should be used.
The closure seems to create a duplicate of the struct and operate on that, leaving the original untouched--which is more in line with the behavior of a value type.
On the other hand, the desired behavior is for the actions performed in the closure to be retained by the environment outside the closure--in other words two different contexts (inside the closure and out of it) need to refer to the same objects--which is more in line with the behavior of a reference type.
Long story short, I changed the struct to a class. The problem vanished, no other code needed.
I have seen this exact problem many times, but without more detail I can't say if it was happening for the same reason that you are having it.
It drove me crazy until I realized that the dispatched operation was taking place at an illogical time, i.e. usually before the current time. In the dispatch block's frame of reference, it correctly updated the variable, which is why it prints out correctly from inside the block. But in the "real world", for some reason, the dispatch is considered to have never happened, and the value change is discarded. Or perhaps it's because the mutation implicitly creates a new struct and, because it was "time travelling", the reference to it never updated. Can't say.
In my case, the problem went away once I correctly scheduled the dispatch. I hope that helps!
