What is the difference between "Qed" and "Defined"?

In the interactive theorem prover Coq, any interactive proof or definition can be terminated with either Qed or Defined. There is some concept of "opacity" which Qed enforces but Defined does not. For instance, the book Certified Programming with Dependent Types, by Adam Chlipala, states:
We end the "proof" with Defined instead of Qed, so that the definition we constructed remains visible. This contrasts to the case of ending a proof with Qed, where the details of the proof are hidden afterward. (More formally, Defined marks an identifier as transparent, allowing it to be unfolded; while Qed marks an identifier as opaque, preventing unfolding.)
However, I'm not quite sure what this means in practice. There is a later example in which it is necessary to use Defined due to the need for Fix to inspect the structure of a certain proof, but I don't understand exactly what this "inspection" entails, or why it would fail if Qed were used instead. (Looking at the definition of Fix wasn't exactly enlightening either).
Superficially, it's hard to tell what Qed is actually doing. For instance, if I write:
Definition x : bool.
exact false.
Qed.
I can still see the value of x by executing the command Print x. In addition, I'm allowed later to pattern-match on the "opaque" value of x:
Definition not_x : bool :=
match x with
| true => false
| false => true
end.
Therefore it seems like I'm able to use the value of x just fine. What does Prof. Chlipala mean by "unfolding" here? What exactly is the difference between an opaque and a transparent definition? Most importantly, what is special about Fix that makes this matter?

You are not really able to use the value of x, only its type. For example, since x is false, try to prove that x = false or that x = true: you won't be able to. You can unfold the definition of not_x (its definition is the same as that of x, but ending with Defined), but you won't be able to inspect the value of x; you only know that it is a boolean.
Lemma not_x_is_true : not_x = true.
Proof.
unfold not_x. (* this one is fine *)
unfold x. (* This one is not. Error: Cannot coerce x to an evaluable reference. *)
Abort.
The idea behind Qed vs Defined is that in some cases you don't want to look at the content of a proof term (because it is not relevant, or because it is a really huge term you don't want to unfold), and all you need to know is that the statement is true, not why it is true. In the end, the question you have to ask before using Qed or Defined is: do I need to know why the theorem is true, or do I only need to know that it is true?


How to make a partial function?

I was thinking about how I could save myself from undefinedness, and one idea I had was to enumerate all possible sources of partiality. At least I would then know what to beware of. I have found three so far (a small sketch follows the list):
Incomplete pattern matches or guards.
Recursion. (Optionally excluding structural recursion on algebraic types.)
If a function is unsafe, any use of that function infects the user code. (Should I be saying "partiality is transitive"?)
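For illustration, here is a minimal sketch of the first two sources (unsafeHead and spin are names invented here):

-- 1. Incomplete pattern match: evaluating unsafeHead [] throws a
--    "Non-exhaustive patterns" exception at runtime.
unsafeHead :: [a] -> a
unsafeHead (x:_) = x

-- 2. General recursion: spin n never terminates for any n.
spin :: Int -> Int
spin n = spin (n + 1)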
I have heard of other ways to obtain a logical contradiction, for instance by using negative types, but I am not sure whether anything of that sort applies to Haskell. There are many logical paradoxes out there, and some of them can be encoded in Haskell, but could it be that any logical paradox requires the use of recursion, and is therefore covered by point 2 above?
For instance, if it were proven that a Haskell expression free of recursion can always be evaluated to normal form, then the three points I give would be a complete list. I fuzzily remember seeing something like a proof of this in one of Simon Peyton Jones's books, but that was written some 30 years ago, so even if I remember correctly and it applied to a prototype Haskell back then, it may be false today, given how many language extensions we have. Possibly some of them enable other ways to make a program undefined?
And then, if it were so easy to detect expressions that cannot be partial, why do we not do that? How much easier life would be!
This is a partial answer (pun intended), in which I'll only list a few arguably non-obvious ways one can achieve non-termination.
First, I'll confirm that negative recursive types can indeed cause non-termination. It is known that allowing a recursive type such as
data R a = R (R a -> a)
allows one to define fix, and to obtain non-termination from there.
{-# LANGUAGE ScopedTypeVariables #-}
{-# OPTIONS -Wall #-}
data R a = R (R a -> a)
selfApply :: R a -> a
selfApply t@(R x) = x t
-- Church's fixed point combinator Y
-- fix f = (\x. f (x x))(\x. f (x x))
fix :: forall a. (a -> a) -> a
fix f = selfApply (R (\x -> f (selfApply x)))
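For instance (loop is a name introduced here for illustration), this fix immediately yields a diverging term at any type:

-- Evaluating loop never finishes: fix id keeps unfolding to itself.
loop :: a
loop = fix id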
Total languages like Coq or Agda prohibit this by requiring recursive types to use only strictly-positive recursion.
Another potential source of non-termination is that Haskell allows Type :: Type. As far as I can see, that makes it possible to encode System U in Haskell, where Girard's paradox can be used to cause a logical inconsistency, constructing a term of type Void. That term (as far as I understand) would be non-terminating.
Girard's paradox is unfortunately rather complex to fully describe, and I have not completely studied it yet. I only know it is related to the so-called hypergame, a game where the first move is to choose a finite game to play. A finite game is one which causes every match to terminate after finitely many moves. The next moves after that would correspond to a match played according to the finite game chosen at step one. Here's the paradox: since the chosen game must be finite, no matter what it is, the whole hypergame match will always terminate after a finite number of moves. This makes hypergame itself a finite game, making the infinite sequence of moves "I choose hypergame, I choose hypergame, ..." a valid play, in turn proving that hypergame is not finite.
Apparently, this argument can be encoded in a rich enough pure type system like System U, and Type :: Type allows one to embed the same argument in Haskell.

Is there a simpler type system with the practical utilities of CoC?

The article Simpler, Easier! claims it could be possible to encode dependent type systems even without the presence of "Pi" - that is, you could reuse the "Lam" constructor for it. But how can that be true, if "Pi" and "Lam" are treated differently in some cases?
Moreover, could "Star" be removed? I think you could replace all occurrences of it by "λ x . x" (id).
That's just overloading, like (a, b) in Haskell: it can be both a type and a value. You can use the same binder for Π and λ, and the typechecker will decide from the context which one you mean. If you typecheck one binder against another, then the former is λ and the latter is Π. That's why you can't unambiguously replace * with λ x . x: the former binder could then be Π and the latter * (* as a binder doesn't make any sense to me). There is a bigger problem with ∀ = λ and * = λ x . x: by transitivity * = ∀ x . x, which is a common way to postulate False. This type must be uninhabited in a sound system, so you won't have any types at all.
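As a side note, here is a small Haskell sketch (assuming the RankNTypes extension; the names are invented) of why ∀ x . x plays the role of False, i.e. why it must be uninhabited in a sound system:

{-# LANGUAGE RankNTypes #-}

-- From a (hypothetical) inhabitant of 'forall a. a' we can produce a
-- value of any type whatsoever, so a sound system must leave it empty.
absurd' :: (forall a. a) -> b
absurd' x = x

-- The only Haskell inhabitants are divergent ones, e.g.:
falseTerm :: forall a. a
falseTerm = falseTerm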
There was a recent thread on Coq-club "Similarities between forall and fun" (gmane.org gives me "No such message", is it just me?), here are some excerpts:
Dominic Mulligan:
And here is another with a small bibliography pointing to similar work:
http://www.macs.hw.ac.uk/~fairouz/forest/papers/journals-publications/jfp05.pdf
Ironically, according to that paper Coquand first presented the Calculus of
Constructions with a single, unified binder, following a convention
established by De Bruijn in AutoMath.
Thorsten Altenkirch:
A function and its type are very different concepts even if they have
some superficial syntactic similarity.
Especially for the newcomer this identification is very confusing and
completely misleading. I do think that one should understand type
theoretical concepts from what they mean and not how they look like.
Andreas Abel:
My student Matthias Benkard also worked on such a system, see "Type
Checking without Types"
http://www.cse.chalmers.se/~abela/benkardThesis.pdf
Note that the system described at the first link has Π-reduction (i.e. you can apply pi-types just like lambdas); your system will have it too, if you unify Π and λ internally (as opposed to syntactically). And the system described at the second link unifies types and values:
One immediate consequence is the absence of any distinction between
types and their inhabitants: Every value is a type containing itself
and all of its parts; and conversely, every type is a composite value
consisting of its inhabitants.
so there is really just one binder (except for let and maybe fix).

Stripping out let in Haskell

I should probably first mention that I'm pretty new to Haskell. Is there a particular reason to keep the let expression in Haskell?
I know that Haskell got rid of the rec keyword that corresponds to the Y-combinator portion of a let statement that indicates it's recursive. Why didn't they get rid of the let statement altogether?
If they did, programs would seem more imperative to some degree. For example, something like:
let y = 1+2
    z = 4+6
in y+z
would just be:
y = 1+2
z = 4+6
y+z
Which is more readable and easier for someone new to functional programming to follow. The only reason I can think of to keep it around is something like this:
aaa = let y = 1+2
          z = 4+6
      in y+z
Which would look like this without the let, which I think ends up being ambiguous grammar:
aaa =
    y = 1+2
    z = 4+6
    y+z
But if Haskell didn't ignore whitespace, and code blocks/scope worked similarly to Python, would it be able to remove the let?
Is there a stronger reason to keep around let?
Sorry if this question seems stupid, I'm just trying to understand more about why it's in there.
Syntactically you can easily imagine a language without let. Immediately, we can produce this in Haskell by simply relying on where if we wanted. Beyond that are many possible syntaxes.
Semantically, you might think that let could translate away to something like this
let x = e in g ==> (\x -> g) e
and, indeed, at runtime these two expressions are identical (modulo recursive bindings, but those can be achieved with fix). Traditionally, however, let has special typing semantics (along with where and top-level name definitions, all of which are, effectively, syntactic sugar for let).
In particular, in the Hindley-Milner type system which forms the foundation of Haskell there's a notion of let-generalization. Intuitively, it regards situations where we upgrade functions to their most polymorphic form. In particular, if we have a function appearing in an expression somewhere with a type like
a -> b -> c
those variables, a, b, and c, may or may not already have meaning in that expression. In particular, they're assumed to be fixed yet unknown types. Compare that to the type
forall a b c. a -> b -> c
which includes the notion of polymorphism by stating, immediately, that even if there happen to be type variables a, b, and c available in the environment, these references are fresh.
This is an incredibly important step in the HM inference algorithm as it is how polymorphism is generated allowing HM to reach its more general types. Unfortunately, it's not possible to do this step whenever we please—it must be done at controlled points.
This is what let-generalization does: it says that types should be generalized to polymorphic types when they are let-bound to a particular name. Such generalization does not occur when they are merely passed into functions as arguments.
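A small sketch of that difference (usesLet is a name invented here; the rejected variant is left commented out):

-- Let-bound: i is generalized to forall a. a -> a, so it can be used
-- at two different types in the body.
usesLet :: (Bool, Char)
usesLet = let i x = x in (i True, i 'a')

-- Lambda-bound: i is assigned a single monomorphic type, so this is
-- rejected by the typechecker.
-- usesLambda = (\i -> (i True, i 'a')) (\x -> x)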
So, ultimately, you need a form of "let" in order to run the HM inference algorithm. Further, it cannot just be syntax sugar for function application despite them having equivalent runtime characteristics.
Syntactically, this "let" notion might be called let or where, or take the form of top-level name binding (all three are available in Haskell). So long as it exists and is a primary method for generating bound names where people expect polymorphism, it'll have the right behavior.
There are important reasons why Haskell and other functional languages use let. I'll try to describe them step by step:
Quantification of type variables
The Damas-Hindley-Milner type system used in Haskell and other functional languages allows polymorphic types, but the type quantifiers are allowed only in front of a given type expression. For example, if we write
const :: a -> b -> a
const x y = x
then the type of const is polymorphic, it is implicitly universally quantified as
∀a.∀b. a -> b -> a
and const can be specialized to any type that we obtain by substituting two type expressions for a and b.
However, the type system doesn't allow quantifiers inside type expressions, such as
(∀a. a -> a) -> (∀b. b -> b)
Such types are allowed in System F, but then type checking and type inference are undecidable, which means that the compiler wouldn't be able to infer types for us and we would have to annotate expressions with types explicitly.
(For a long time the decidability of type checking in System F was an open question, sometimes referred to as "an embarrassing open problem", because undecidability had been proven for many other systems but not for this one, until Joe Wells proved it undecidable in 1994.)
(GHC allows you to enable such explicit inner quantifiers using the RankNTypes extension, but as mentioned, the types can't be inferred automatically.)
Types of lambda abstractions
Consider the expression λx.M, or in Haskell notation \x -> M,
where M is some term containing x. If the type of x is a and the type of M is b, then the type of the whole expression will be λx.M : a → b. Because of the above restriction, a must not contain ∀; therefore the type of x can't contain type quantifiers, i.e. it can't be polymorphic (in other words, it must be monomorphic).
Why lambda abstraction isn't enough
Consider this simple Haskell program:
i :: a -> a
i x = x
foo :: a -> a
foo = i i
Let's disregard for now that foo isn't very useful. The main point is that i in the definition of foo is instantiated at two different types. The first one
i :: (a -> a) -> (a -> a)
and the second one
i :: a -> a
Now if we try to convert this program into the pure lambda calculus syntax without let, we'd end up with
(λi.i i)(λx.x)
where the first part is the definition of foo and the second part is the definition of i. But this term will not type check. The problem is that i must have a monomorphic type (as described above), but we need it polymorphic so that we can instantiate i to the two different types.
Indeed, if you try to typecheck \i -> i i in Haskell, it will fail. There is no monomorphic type we can assign to i so that i i would typecheck.
let solves the problem
If we write let i x = x in i i, the situation is different. Unlike in the previous paragraph, there is no self-contained lambda expression like λi.i i here, where we'd need a polymorphic type for the lambda-bound variable i. Therefore let can allow i to have a polymorphic type, in this case ∀a. a → a, and so i i typechecks.
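One can check this contrast directly in GHC (names invented for illustration; the rejected term is commented out):

-- Accepted: let generalizes i, so the outer occurrence of i can be
-- instantiated at (a -> a) and applied to the inner one.
selfId :: a -> a
selfId = let i x = x in i i

-- Rejected: a lambda-bound i must stay monomorphic, so i i fails here.
-- broken = (\i -> i i) (\x -> x)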
Without let, if we compiled a Haskell program and converted it to a single lambda term, every function would have to be assigned a single monomorphic type! This would be pretty useless.
So let is an essential construction that allows polymorphism in languages based on Damas-Hindley-Milner type systems.
The History of Haskell speaks a bit to the fact that Haskell has long since embraced a complex surface syntax.
It took some while to identify the stylistic choice as we have done here, but once we had done so, we engaged in furious debate about which style was “better.” An underlying assumption was that if possible there should be “just one way to do something,” so that, for example, having both let and where would be redundant and confusing.
In the end, we abandoned the underlying assumption, and provided full syntactic support for both styles. This may seem like a classic committee decision, but it is one that the present authors believe was a fine choice, and that we now regard as a strength of the language. Different constructs have different nuances, and real programmers do in practice employ both let and where, both guards and conditionals, both pattern-matching definitions and case expressions—not only in the same program but sometimes in the same function definition. It is certainly true that the additional syntactic sugar makes the language seem more elaborate, but it is a superficial sort of complexity, easily explained by purely syntactic transformations.
This is not a stupid question. It is completely reasonable.
First, let/in bindings are syntactically unambiguous and can be rewritten in a simple mechanical way into lambdas.
Second, and because of this, let ... in ... is an expression: that is, it can be written wherever expressions are allowed. In contrast, your suggested syntax is more similar to where, which is bound to a surrounding syntactic construct, like the pattern matching line of a function definition.
One might also make an argument that your suggested syntax is too imperative in style, but this is certainly subjective.
You might prefer using where to let. Many Haskell developers do. It's a reasonable choice.
There is a good reason why let is there:
let can be used within the do notation.
It can be used within list comprehension.
It can be used within a function definition, as conveniently mentioned here. (A short sketch of the first two uses follows this list.)
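A brief sketch of the first two uses (a hypothetical example):

squares :: [Int]
squares = [ sq | x <- [1 .. 5], let sq = x * x ]  -- let in a list comprehension

main :: IO ()
main = do
  let msg = "squares: "              -- let in do notation
  putStrLn (msg ++ show squares)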
You give the following example as an alternative to let:
y = 1+2
z = 4+6
y+z
The above example will not typecheck (y+z is not a valid top-level declaration), and y and z would also pollute the global namespace, which can be avoided by using let.
Part of the reason Haskell's let looks like it does is also the consistent way it manages its indentation sensitivity. Every indentation-sensitive construct works the same way: first there's an introducing keyword (let, where, do, of); then the next token's position determines what is the indentation level for this block; and subsequent lines that start at the same level are considered to be a new element in the block. That's why you can have
let a = 1
    b = 2
in a + b
or
let
    a = 1
    b = 2
in a + b
but not
let a = 1
      b = 2
in a + b
I think it might actually be possible to have keywordless indentation-based bindings without making the syntax technically ambiguous. But I think there is value in the current consistency, at least for the principle of least surprise. Once you see how one indentation-sensitive construct works, they all work the same. And as a bonus, they all have the same indentation-insensitive equivalent. This
keyword <element 1>
        <element 2>
        <element 3>
is always equivalent to
keyword { <element 1>; <element 2>; <element 3> }
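For example, a concrete instance of this equivalence (names invented):

-- Layout-sensitive form:
sumLayout :: Int
sumLayout = let a = 1
                b = 2
            in a + b

-- Equivalent explicit-brace form:
sumBraces :: Int
sumBraces = let { a = 1; b = 2 } in a + b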
In fact, as a mainly F# developer, this is something I envy from Haskell: F#'s indentation rules are more complex and not always consistent.

Can a pure function have free variables?

For example, a referentially transparent function with no free variables:
g op x y = x `op` y
And now a function with the free (from the point of view of f) variables op and x:
x = 1
op = (+)
f y = x `op` y
f is also referentially transparent. But is it a pure function?
If it's not a pure function, what is the name for a function that is referentially transparent, but makes use of 1 or more variables bound in an enclosing scope?
Motivation for this question:
It's not clear to me from Wikipedia's article:
The result value need not depend on all (or any) of the argument values. However, it must depend on nothing other than the argument values.
(emphasis mine)
nor from Google searches whether a pure function can depend on free (in the sense of being bound in an enclosing scope, and not being bound in the scope of the function) variables.
Also, this book says:
If functions without free variables are pure, are closures impure?
The function function (y) { return x } is interesting. It contains a
free variable, x. A free variable is one that is not bound within
the function. Up to now, we’ve only seen one way to “bind” a variable,
namely by passing in an argument with the same name. Since the
function function (y) { return x } doesn’t have an argument named x,
the variable x isn’t bound in this function, which makes it “free.”
Now that we know that variables used in a function are either bound or
free, we can bifurcate functions into those with free variables and
those without:
Functions containing no free variables are called pure functions.
Functions containing one or more free variables are called closures.
So what is the definition of a "pure function"?
To the best of my understanding, "purity" is defined at the level of semantics, while "referentially transparent" can take meaning both syntactically and embedded in lambda calculus substitution rules. Defining either one is a bit of a challenge in that we need a robust notion of equality of programs, which is itself challenging. Finally, it's important to note that the idea of a free variable is entirely syntactic: once you've gone to a value domain you can no longer have expressions with free variables; they must be bound, or else that's a syntax error.
But let's dive in and see if this becomes more clear.
Quinian Referential Transparency
We can define referential transparency very broadly as a property of a syntactic context. Per the original definition, this would be built from a sentence like
New York is an American city.
in which we've poked a hole
_ is an American city.
Such a holey-sentence, a "context", is said to be referentially transparent if, given two sentence fragments which both "refer" to the same thing, filling the context with either of those two does not change its meaning.
To be clear, two fragments with the same reference we can pick would be "New York" and "The Big Apple". Injecting those fragments we write
New York is an American city.
The Big Apple is an American city.
suggesting that
_ is an American city.
is referentially transparent. To demonstrate the quintessential counterexample, we might write
"The Big Apple" is an apple-themed epithet referring to New York.
and consider the context
"_" is an apple-themed epithet referring to New York.
and now when we inject the two referentially identical phrases we get one valid and one invalid sentence
"The Big Apple" is an apple-themed epithet referring to New York.
"New York" is an apple-themed epithet referring to New York.
In other words, quotations break referential transparency. We can see how this occurs by causing the sentence to refer to a syntactic construct instead of purely the meaning of that construct. This notion will return later.
Syntax v Semantics
There's something confusing going on here, in that the definition of referential transparency above applies directly to English sentences, from which we build contexts by literally stripping words out. While we can do that in a programming language and consider whether such a context is referentially transparent, we also might recognize that this idea of "substitution" is critical to the very notion of a computer language.
So, let's be clear: there are two kinds of referential transparency we can consider over lambda calculus—the syntactic one and the semantic one. The syntactic one requires we define "contexts" as holes in the literal words written in a programming language. That lets us consider holes like
let x = 3 in _
and fill it in with things like "x". We'll leave the analysis of that replacement for later. At the semantic level we use lambda terms to denote contexts
\x -> x + 3 -- similar to the context "_ + 3"
and are restricted to filling the hole not with syntax fragments but only with valid semantic values, the filling being performed by application
(\x -> x + 3) 5
==>
5 + 3
==>
8
So, when someone refers to referential transparency in Haskell it's important to figure out what kind of referential transparency they're referring to.
Which kind is being referred to in this question? Since it's about the notion of an expression containing a free variable, I'm going to suggest that it's syntactic. There are two major thrusts to my reasoning here. Firstly, converting a syntax to a semantics requires that the syntax be valid; in the case of Haskell this means both syntactic validity and a successful type check. However, we'll note that a program fragment like
x + 3
is actually a syntax error, since x is simply unknown and unbound, leaving us unable to consider its semantics as a Haskell program. Secondly, the very notion of a variable that can be let-bound (as opposed to a "variable" in the sense of a mutable "slot" such as an IORef) is entirely a syntactic construct; there's no way to even talk about such variables from inside the semantics of a Haskell program.
So let's refine the question to be:
Can an expression containing free variables be (syntactically) referentially transparent?
and the answer is, uninterestingly, no. Referential transparency is a property of "contexts", not expressions. So let's explore the notion of free variables in contexts instead.
Free variable contexts
How can a context meaningfully have a free variable? It could be beside the hole
E1 ... x ... _ ... E2
and so long as we cannot insert something into that syntactic hole which "reaches over" and affects x syntactically then we're fine. So, for instance, if we fill that hole with something like
E1 ... x ... let x = 3 in E ... E2
then we haven't "captured" the x and thus can perhaps consider that syntactic hole to be referentially transparent. However, we're being nice to our syntax. Let's consider a more dangerous example
do x <- foo
   let x = 3
   _
   return x
Now we see that the hole we've provided in some sense has dominion over the later phrase "return x". In fact, if we inject a fragment like "let x = 4" then it indeed changes the meaning of the whole. In that sense, the syntax here is not referentially transparent.
Another interesting interaction between referential transparency and free variables is the notion of an assigning context like
let x = 3 in _
where, from an outside perspective, both phrases "x" and "y" refer to the same thing, some named variable, but
let x = 3 in x ==/== let x = 3 in y
Progression from thorniness around equality and context
Now, hopefully the previous section explained a few ways for referential transparency to break under various kinds of syntactic contexts. It's worth asking harder questions about what kinds of contexts are valid and what kinds of expressions are equivalent. For instance, we might desugar our do notation in a previous example and end up noticing that we weren't working with a genuine context, but instead sort of a higher-order context
foo >>= \x -> (let x = 3 in ____(return x)_____)
Is this a valid notion of context? It depends a lot on what kind of meaning we're giving the program. The notion of desugaring the syntax already implies that the syntax must be well-defined enough to allow for such desugaring.
As a general rule, we must be very careful with defining both contexts and notions of equality. Further, the more meaning we demand the fragments of our language take on, the more ways they can be equal and the fewer valid contexts we can build.
Ultimately, this leads us all the way to what I called "semantic referential transparency" earlier where we can only substitute proper values into a proper, closed lambda expression and we take the resulting equality to be "equality as programs".
What this ends up meaning is that as we impute more and more meaning on our language, as we begin to accept fewer and fewer things as valid, we get stronger and stronger guarantees about referential transparency.
Purity
And so this finally leads to the notion of a pure function. My understanding here is (even) less complete, but it's worth noting that purity, as a concept, does not much exist until we've moved to a very rich semantic space—that of Haskell semantics as a category over lifted Complete Partial Orders.
If that doesn't make much sense, then just imagine purity is a concept that only exists when talking about Haskell values as functions and equality of programs. In particular, we examine the collection of Haskell functions
trivial :: a -> ()
trivial x = x `seq` ()
where we have a trivial function for every choice of a. We'll notate the specific choice using an underscore
trivial_Int :: Int -> ()
trivial_Int x = x `seq` ()
Now we can define a (very strictly) pure function to be a function f :: a -> b such that
trivial_b . f = trivial_a
In other words, if we throw out the result of computing our function, the b, then we may as well have never computed it in the first place.
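A sketch of this test with concrete, invented names; partialInc fails it because it diverges on an input where trivial_Int would not:

trivial_Int :: Int -> ()
trivial_Int x = x `seq` ()

partialInc :: Int -> Int
partialInc 0 = undefined   -- diverges on 0
partialInc n = n + 1

-- (trivial_Int . partialInc) 0 is ⊥, while trivial_Int 0 is (), so
-- trivial_Int . partialInc /= trivial_Int, and partialInc is not pure
-- in this very strict sense.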
Again, there's no notion of purity without having Haskell values and no notion of Haskell values when your expressions contain free variables (since it's a syntax error).
So what's the answer?
Ultimately, the answer is that you can't talk about purity around free variables, and you can break referential transparency in lots of ways whenever you are talking about syntax. At some point, as you convert your syntactic representation to its semantic denotation, you must forget the notion and names of free variables in order to have terms represent the reduction semantics of lambda terms, and by this point we've begun to have referential transparency.
Finally, purity is something even more stringent than referential transparency having to do with even the reduction characteristics of your (referentially transparent) lambda terms.
By the definition of purity given above, most of Haskell isn't pure itself as Haskell may represent non-termination. Many feel that this is a better definition of purity, however, as non-termination can be considered a side effect of computation instead of a meaningful resultant value.
The Wikipedia definition is incomplete, insofar as a pure function may use constants to compute its answer.
When we look at
increment n = 1+n
this is obvious. Perhaps it was not mentioned because it is that obvious.
Now the trick in Haskell is that not only are top-level values and functions constants, but, inside a closure, so are the variables(!) that are closed over:
add x = (\y -> x+y)
Here x stands for the value we applied add to; we call it a variable not because it could change within the right-hand side of add, but because it can be different each time we apply add. And yet, from the point of view of the lambda, x is a constant.
It follows that free variables always name constant values at the point where they are used and hence do not impact purity.
Short answer: YES, f is pure.
In Haskell, map can be defined with foldr. Would you agree that map is functional? If so, does it matter that it uses the global function foldr, which wasn't supplied to map as an argument?
In map, foldr is a free variable. There's no doubt about it. It makes no difference whether it's a function or something that evaluates to a value. It's the same.
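A sketch of that definition (map' to avoid clashing with the Prelude); note that foldr occurs free on the right-hand side:

-- foldr is a free variable here: it is bound at the top level of the
-- Prelude, not by map' itself.
map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x acc -> f x : acc) []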
Free variables, like the functions foldr and +, are essential for functional languages to exist. Without them you wouldn't have abstraction, and such languages would be worse off than Fortran.

Haskell pattern match "diverge" and ⊥

I'm trying to understand the Haskell 2010 Report section 3.17.2 "Informal Semantics of Pattern Matching". Most of it, relating to a pattern match succeeding or failing seems straightforward, however I'm having difficulty understanding the case which is described as the pattern match "diverging".
I'm semi-persuaded it means that the match algorithm does not "converge" to an answer (hence the match function never returns). But if it doesn't return, how can it return a value, as suggested by the parenthetical "i.e. return ⊥"? And what does it mean to "return ⊥" anyway? How does one handle that outcome?
Item 5 has the particularly confusing (to me) point "If the value is ⊥, the match diverges". Is this just saying that a value of ⊥ produces a match result of ⊥? (Setting aside that I don't know what that outcome means!)
Any illumination, possibly with an example, would be appreciated!
Addendum after a couple of lengthy answers:
Thanks Tikhon and all for your efforts.
It seems my confusion comes from there being two different realms of explanation: The realm of Haskell features and behaviors, and the realm of mathematics/semantics, and in Haskell literature these two are intermingled in an attempt to explain the former in terms of the latter, without sufficient signposts (to me) as to which elements belong to which.
Evidently "bottom" ⊥ is in the semantics domain, and does not exist as a value within Haskell (ie: you can't type it in, you never get a result that prints out as " ⊥").
So, where the explanation says a function "returns ⊥", this refers to a function that does any of a number of inconvenient things, like not terminate, throw an exception, or return "undefined". Is that right?
Further, those who commented that ⊥ actually is a value that can be passed around are really thinking of bindings to ordinary expressions that haven't yet actually been evaluated ("unexploded bombs", so to speak) and might never be, due to laziness, right?
The value is ⊥, usually pronounced "bottom". It is a value in the semantic sense; it is not a normal Haskell value per se. It represents computations that do not produce a normal Haskell value: exceptions and infinite loops, for example.
Semantics is about defining the "meaning" of a program. In Haskell, we usually talk about denotational semantics, where the value is a mathematical object of some sort. The most trivial example would be that the expressions 10 and 9 + 1 both have the number 10 as their denotation (rather than the Haskell value 10). We usually write ⟦9 + 1⟧ = 10, meaning that the denotation of the Haskell expression 9 + 1 is the number 10.
However, what do we do with an expression like let x = x in x? There is no Haskell value for this expression. If you tried to evaluate it, it would simply never finish. Moreover, it is not obvious what mathematical object this corresponds to. However, in order to reason about programs, we need to give some denotation for it. So, essentially, we just make up a value for all these computations, and we call the value ⊥ (bottom).
So ⊥ is just a way to define what a computation that doesn't return "means".
We also define other computations like undefined and error "some message" as ⊥ because they also do not have obvious normal values. So throwing an exception corresponds to ⊥. This is exactly what happens with a failed pattern match.
The usual way of thinking about this is that every Haskell type is "lifted": it contains ⊥. That is, Bool corresponds to {⊥, True, False} rather than just {True, False}. This represents the fact that Haskell programs are not guaranteed to terminate and can have exceptions. This is also true when you define your own type: the type contains every value you defined for it as well as ⊥.
Interestingly, since Haskell is non-strict, ⊥ can exist in normal code. So you could have a value like Just ⊥, and if you never evaluate it, everything will work fine. A good example of this is const: const 1 ⊥ evaluates to 1. This works for failed pattern matches as well:
const 1 (let Just x = Nothing in x) -- 1
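A runnable sketch of this, with undefined standing in for ⊥ (example invented):

main :: IO ()
main = do
  let boxed = Just (undefined :: Int)  -- builds Just ⊥; fine, ⊥ is never evaluated
  case boxed of
    Just _  -> putStrLn "matched Just without touching the ⊥ inside"
    Nothing -> putStrLn "unreachable"
  print (const 1 (undefined :: Int))   -- prints 1: const never forces ⊥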
You should read the section on denotational semantics in the Haskell WikiBook. It's a very approachable introduction to the subject, which I personally find very fascinating.
Denotational semantics
So, briefly: denotational semantics, which is where ⊥ lives, is a mapping from Haskell expressions to some other space of values. You do this to give meaning to programs in a more formal manner than just talking about what programs should do; you say that they must respect their denotational semantics.
So for Haskell, you often think about how Haskell expressions denote mathematical values. You often see Strachey brackets ⟦·⟧ to denote the "semantic mapping" from Haskell to Math. Finally, we want our semantic brackets to be compatible with semantic operations. For instance
⟦x + y⟧ = ⟦x⟧ + ⟦y⟧
where on the left side + is the Haskell function (+) :: Num a => a -> a -> a and on the right side it's the binary operation of a commutative group. This is cool, because then we know that we can use the properties from the semantic map to know how our Haskell functions should work. To wit, let's write the commutativity property "in Math":
⟦x⟧ + ⟦y⟧ == ⟦y⟧ + ⟦x⟧
= ⟦x + y⟧ == ⟦y + x⟧
= ⟦x + y == y + x⟧
where the third step also indicates that the Haskell (==) :: Eq a => a -> a -> a ought to have the properties of a mathematical equivalence relation.
Well, except...
Anyway, that's all well and good until we remember that computers are finite things and Maths don't much care about that (unless you're using intuitionistic logic, and then you get Coq). So, we have to take note of places where our semantics don't follow Math quite right. Here are three examples
⟦undefined⟧ = ??
⟦error "undefined"⟧ = ??
⟦let x = x in x⟧ = ??
This is where ⊥ comes into play. We just assert that, so far as the denotational semantics of Haskell are concerned, each of those examples might as well mean (the newly introduced Mathematical/semantic concept of) ⊥. What are the Mathematical properties of ⊥? Well, this is where we start to really dive into what the semantic domain is and start talking about monotonicity of functions and CPOs and the like. Essentially, though, ⊥ is a mathematical object which plays roughly the same game as non-termination does. From the point of view of the semantic model, ⊥ is toxic: it infects the expressions that contain it.
But it's not a Haskell-the-language concept, just a Semantic-domain-of-the-language-Haskell thing. In Haskell we have undefined, error and infinite looping. This is important.
Extra-semantic behavior (side note)
So the semantics of ⟦undefined⟧ = ⟦error "undefined"⟧ = ⟦let x = x in x⟧ = ⊥ are clear once we understand the mathematical meanings of ⊥, but it's also clear that those each have different effects "in reality". This is sort of like "undefined behavior" of C... it's behavior that's undefined so far as the semantic domain is concerned. You might call it semantically unobservable.
So how does pattern matching return ⊥?
So what does it mean "semantically" to return ⊥? Well, ⊥ is a perfectly valid semantic value, one with the infection property which models non-termination (or asynchronous error throwing). From the semantic point of view it's a value like any other and can be returned as is.
From the implementation point of view, you have a number of choices, each of which maps to the same semantic value. undefined isn't quite right, nor is entering an infinite loop, so if you're going to pick a semantically undefined behavior you might as well pick one that's useful and throw an error.
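For instance, given a function with a non-exhaustive match (cheers here is a hypothetical reconstruction of the definition behind the message below):

cheers :: Int -> String
cheers 1 = "hip hip"
cheers 2 = "hooray"

evaluating cheers 3 in GHCi produces: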
*** Exception: <interactive>:2:5-14: Non-exhaustive patterns in function cheers
