What would pattern matching look like in a strict Haskell?

As a research experiment, I've recently worked on implementing strict-by-default Haskell modules. Instead of being lazy-by-default and having ! as an escape hatch, we're strict-by-default and have ~ as an escape hatch. This behavior is enabled using a {-# LANGUAGE Strict #-} pragma.
While working on making patterns strict I came upon an interesting question: should patterns be strict only at the top level, or in all bound variables? For example, if we have
f x = case x of
  y -> ...
we will force y even though Haskell would not do so. The trickier case is
f x = case x of
  Just y -> ...
Should we interpret that as
f x = case x of
  Just y -> ...  -- already strict in 'x' but not in 'y'
or
f x = case x of
  Just !y -> ...  -- now also strict in 'y'
(Note that we're using the normal, lazy Haskell Just here.)
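To make the two readings concrete, here is a small sketch in ordinary lazy Haskell using BangPatterns (the names shallow and deep are just for illustration):
{-# LANGUAGE BangPatterns #-}

-- 'shallow' corresponds to the first reading (strict only in the scrutinee),
-- 'deep' to the second (also strict in the bound variable).
shallow :: Maybe Int -> Int
shallow x = case x of
  Just y  -> 0        -- y is never forced, so Just undefined is fine here
  Nothing -> -1

deep :: Maybe Int -> Int
deep x = case x of
  Just !y -> 0        -- y is forced, so Just undefined now blows up
  Nothing -> -1

main :: IO ()
main = do
  print (shallow (Just undefined))  -- prints 0
  print (deep    (Just undefined))  -- throws an exception from 'undefined'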
One design constraint that might be of value is this: I want the pragma to be modular. For example, even with Strict turned on we don't evaluate arguments to functions defined in other modules; doing so would make it non-modular.
Is there any prior art here?

As far as I understand things, refutable patterns are always strict at least at the outer level, which is another way of saying that the scrutinized expression must have been evaluated to WHNF; otherwise you couldn't see whether it is a 'Just' or a 'Nothing'.
Hence your
!(Just y) -> ...
notation appears useless.
OTOH, since in a strict language, the argument to Just must already have been evaluated, the notation
Just !y ->
doesn't make sense either.

Related

Pattern match over non-constructor functions

One of the most powerful ways pattern matching and lazy evaluation can come together is to bypass expensive computation. However I am still shocked that Haskell only permits the pattern matching of constructors, which is barely pattern matching at all!
Is there some way to implement the following functionality in Haskell:
exp :: Double -> Double
exp 0 = 1
exp (log a) = a
--...
log :: Double -> Double
log 1 = 0
log (exp a) = a
--...
The original problem I found this useful in was writing an associativity preference / rule in a Monoid class:
class Monoid m where
  iden :: m
  (+) :: m -> m -> m
  (+) iden a = a
  (+) a iden = a
  -- Line with issue
  (+) ((+) a b) c = (+) a ((+) b c)
There's no reason to be shocked about this. How would it be even remotely feasible to pattern match on arbitrary functions? Most functions aren't invertible, and even for those that are it is typically nontrivial to actually compute the inverses.
Of course the compiler could in principle handle trivial examples like replacing literal exp (log x) with x, but that would be almost completely useless in practice (in the unlikely event somebody were to literally write that, they could as well reduce it right there in the source), and would generally lead to very weird unpredictable behaviour if inlining order changes whether or not the compiler can see that a match applies.
(There is however a thing called rewrite rules, which is similar to what you proposed but is seen as only an optimisation tool.)
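For reference, a rewrite rule for the exp/log case from the question might look like the sketch below. GHC treats such rules purely as optimisation hints and does not check them for correctness; as written against the Prelude's exp and log it may well never fire (class methods are rewritten to dictionary selectors early), so it only shows the shape of a rule:
module ExpLog where

-- A GHC rewrite rule: the simplifier may replace literal occurrences of
-- 'exp (log x)' with 'x' when it happens to see them, nothing more.
{-# RULES
"exp/log"  forall x.  exp (log x) = x
  #-}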
Even the two lines from the Monoid class that don't error don't make sense, but for different reasons. First, when you write
(+) iden a = a
(+) a iden = a
this doesn't do what you seem to think. These are actually two redundant catch-all clauses, equivalent to
(+) x y = y
(+) x y = x
...which is an utterly nonsensical thing to write. What you want to state could in fact be written as
default (+) :: Eq m => m -> m -> m
x + y
  | x == iden = y
  | y == iden = x
  | otherwise = ...
...but this still doesn't accomplish anything useful, because this is never going to be a full definition of +. And as soon as a concrete instance even begins to define its own + operator, the complete default one is going to be ignored.
Moreover, if you were to have these kinds of clauses all over your Haskell project, in practice it would just mean you're performing a lot of unnecessary, redundant extra checks. A law-abiding Monoid instance needs to fulfill mempty <> a ≡ a anyway; there's no point explicitly special-casing it.
I think what you really want is tests. It would make sense to specify laws right in a class declaration in a way that they could automatically be checked, but standard Haskell has no syntax for this. Most projects just do it in a separate test suite, using QuickCheck to generate example inputs. I think there's also a tool that allows you to put the test cases right in your source file, but I forgot what it's called.
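For instance, a minimal QuickCheck sketch of the monoid laws at the concrete type [Int] (the property names are made up) could look like this:
import Test.QuickCheck

-- Monoid laws checked at the concrete type [Int].
prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity xs = mempty <> xs == xs

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity xs = xs <> mempty == xs

prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc xs ys zs = (xs <> ys) <> zs == xs <> (ys <> zs)

main :: IO ()
main = do
  quickCheck prop_leftIdentity
  quickCheck prop_rightIdentity
  quickCheck prop_assoc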

Can any recursive definition be rewritten using foldr?

Say I have a general recursive definition in Haskell like this:
foo a0 a1 ... = base_case
foo b0 b1 ...
  | cond1 = recursive_case_1
  | cond2 = recursive_case_2
  ...
Can it always be rewritten using foldr? Can this be proved?
If we interpret your question literally, we can write const value foldr to achieve any value, as @DanielWagner pointed out in a comment.
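Concretely, something like the following works; the type annotation just pins the discarded foldr to a monomorphic type so GHC doesn't complain about an ambiguous Foldable constraint:
-- 'const v foldr' is just v, yet the definition literally "uses foldr".
double :: Int -> Int
double = const (\n -> 2 * n) (foldr (+) 0 :: [Int] -> Int)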
A more interesting question is whether we can instead forbid general recursion from Haskell, and "recurse" only through the eliminators/catamorphisms associated to each user-defined data type, which are the natural generalization of foldr to inductively defined data types. This is, essentially, (higher-order) primitive recursion.
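For lists this means writing functions with foldr alone, with no explicit recursion; for example (primed names to avoid clashing with the Prelude):
-- Structural recursion on lists, expressed with the list eliminator foldr.
length' :: [a] -> Int
length' = foldr (\_ n -> 1 + n) 0

map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x ys -> f x : ys) []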
When this restriction is imposed, we can only compose terminating functions (the eliminators) together. This means that we can no longer define non-terminating functions.
As a first example, we lose the trivial recursion
f x = f x
-- or even
a = a
since, as said, the language becomes total.
More interestingly, the general fixed point operator is lost.
fix :: (a -> a) -> a
fix f = f (fix f)
A more intriguing question is: what about the total functions we can express in Haskell? We do lose all the non-total functions, but do we lose any of the total ones?
Computability theory states that, since the language becomes total (no more non-termination), we lose expressiveness even on the total fragment.
The proof is a standard diagonalization argument. Fix any enumeration of programs in the total fragment so that we can speak of "the i-th program".
Then, let eval i x be the result of running the i-th program on the natural x as input (for simplicity, assume this is well typed, and that the result is a natural). Note that, since the language is total, a result must exist. Moreover, eval can be implemented in the unrestricted Haskell language, since we can write an interpreter of Haskell in Haskell (left as an exercise :-P), and that would work just as well for the fragment. Then, we simply take
f n = succ $ eval n n
The above is a total function (a composition of total functions) which can be expressed in Haskell, but not in the fragment. Indeed, otherwise there would be a program to compute it, say the i-th program. In such case we would have
eval i x = f x
for all x. But then,
eval i i = f i = succ $ eval i i
which is impossible -- contradiction. QED.
In type theory, it is indeed the case that you can elaborate all definitions by dependent pattern-matching into ones only using eliminators (a more strongly-typed version of folds, the generalisation of lists' foldr).
See e.g. Eliminating Dependent Pattern Matching (pdf)

What's the reason of 'let rec' for impure functional language OCaml?

In the book Real World OCaml, the authors put why OCaml uses let rec for defining recursive functions.
OCaml distinguishes between nonrecursive definitions (using let) and recursive definitions (using let rec) largely for technical reasons: the type-inference algorithm needs to know when a set of function definitions are mutually recursive, and for reasons that don't apply to a pure language like Haskell, these have to be marked explicitly by the programmer.
What are the technical reasons that force let rec in OCaml, while pure functional languages don't need it?
When you define the semantics of function definition, as a language designer you have a choice: either make the name of the function visible in the scope of its own body, or not. Both choices are perfectly legal; for example, C-family languages, though far from functional, still make the names of definitions visible in their own scope (this extends to all definitions in C, making int x = x + 1 legal). The OCaml language decided to give us the extra flexibility of making the choice ourselves. And that's really great. They decided to make the name invisible by default, a fairly decent solution, since most of the functions that we write are non-recursive.
As for the quote, it doesn't really concern function definitions – the most common use of the rec keyword. It is mostly about "why the scope of a definition doesn't extend to the body of the module", which is a completely different question.
After some research I've found a very similar question with an answer that might satisfy you; a quote from it:
So, given that the type checker needs to know about which sets of
definitions are mutually recursive, what can it do? One possibility is
to simply do a dependency analysis on all the definitions in a scope,
and reorder them into the smallest possible groups. Haskell actually
does this, but in languages like F# (and OCaml and SML) which have
unrestricted side-effects, this is a bad idea because it might reorder
the side-effects too. So instead it asks the user to explicitly mark
which definitions are mutually recursive, and thus by extension where
generalization should occur.
Even without any reordering, with arbitrary impure expressions that can occur in a function definition (a side effect of the definition, not of its evaluation), it is impossible to build the dependency graph. Consider demarshaling a function from a file and executing it.
To summarize, we have two usages of the let rec construct. One is to create a self-recursive function, like
let rec seq acc = function
  | 0 -> acc
  | n -> seq (acc+1) (n-1)
Another is to define mutually recursive functions:
let rec odd n =
  if n = 0 then false
  else if n = 1 then true else even (n - 1)
and even n =
  if n = 0 then true
  else if n = 1 then false else odd (n - 1)
In the first case, there is no technical reason to prefer one solution over the other. It is just a matter of taste.
The second case is harder. When inferring types you need to split all function definitions into clusters consisting of mutually dependent definitions, in order to narrow the typing environment. In OCaml this is harder to do, since you need to take side effects into account. (Or you can continue without splitting definitions into mutually dependent groups, but this leads to another issue – your type system becomes more restrictive, i.e., it disallows more valid programs.)
But, revisiting the original question and the quote from RWO, I'm still pretty sure that there is no technical reason for requiring the rec flag. Consider SML, which has the same problems but still makes definitions recursive by default. There is a technical reason for the let ... and ... syntax for defining a set of mutually recursive functions. In SML this syntax doesn't require the rec flag; in OCaml it does, thus giving us more flexibility, like the ability to swap two values with a let x = y and y = x expression.
What are the technical reasons that force let rec in OCaml, while pure functional languages don't need it?
Recursiveness is a strange beast. It has a relation to purity, but it's a little more oblique than this. To be clear, you could write "alterna-Haskell" which retains its purity, its laziness but does not have recursively bound lets by default and demands some kind of rec marker just as OCaml does. Some would even prefer this.
In essence, there are just many different kinds of "let"s possible. If we compare let and let rec in OCaml we'll see a small difference. In static formal semantics, we might write
Γ ⊢ E : A        Γ, x : A ⊢ F : B
---------------------------------
Γ ⊢ let x = E in F : B
which says that if we can prove in a variable environment Γ that E has type A and if we can prove in the same variable environment Γ augmented with x : A that F : B then we can prove that in the variable environment Γ let x = E in F has type B.
The thing to watch is the Γ argument. This is just a list of ("variable name", "value") pairs like [(x, 3); (y, "hello")] and augmenting the list like Γ, x : A just means consing (x, A) on to it (sorry that the syntax is flipped).
In particular, let's write the same formalism for let rec
Γ, x : A ⊢ E : A        Γ, x : A ⊢ F : B
----------------------------------------
Γ ⊢ let rec x = E in F : B
In particular, the only difference is that neither of our premises works in the plain Γ environment; both are allowed to assume the existence of the x variable.
In this sense, let and let rec are simply different beasts.
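As a toy illustration of the two rules (written in Haskell to match the rest of the thread; all the names and the tiny Ty type are made up), the environment really is just a list of pairs, and the only difference between the two checks is which environment the right-hand side sees:
-- The environment Γ is literally a list of (name, type) pairs; the only
-- difference between Let and LetRec is whether the bound name is visible
-- while checking its own right-hand side.
type Name = String

data Ty = TInt | TArr Ty Ty
  deriving (Eq, Show)

type Env = [(Name, Ty)]

data Expr
  = Var Name
  | Let    Name Ty Expr Expr   -- let x : A = E in F
  | LetRec Name Ty Expr Expr   -- let rec x : A = E in F

check :: Env -> Expr -> Maybe Ty
check env (Var x) = lookup x env
check env (Let x a e body) = do
  a' <- check env e                      -- E checked in plain Γ
  if a' == a then check ((x, a) : env) body else Nothing
check env (LetRec x a e body) = do
  a' <- check ((x, a) : env) e           -- E may already mention x
  if a' == a then check ((x, a) : env) body else Nothing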
So what does it mean to be pure? At the strictest definition, of which Haskell doesn't even participate, we must eliminate all effects including non-termination. The only way to achieve this is to pull away our ability to write unrestricted recursion and replace it only carefully.
There exist plenty of languages without recursion. Perhaps the most important one is the Simply Typed Lambda Calculus. In its basic form it is the regular lambda calculus, but augmented with a typing discipline where the types are a bit like
type ty =
| Base
| Arr of ty * ty
It turns out that the STLC cannot represent recursion---the Y combinator, and all other fixed-point cousin combinators, cannot be typed. Thus, the STLC is not Turing Complete.
It is however uncompromisingly pure. It achieves that purity with the bluntest of instruments, however, by completely outlawing recursion. What we'd really like is some kind of balanced, careful recursion which doesn't lead to non-termination---we'll still be Turing Incomplete, but not so crippled.
Some languages try this game. There are clever ways of adding typed recursion back along a division between data and codata which ensures that you cannot write non-terminating functions. If you're interested, I suggest learning a bit of Coq.
But OCaml's goal (and Haskell's as well) is not to be delicate here. Both languages are uncompromisingly Turing Complete (and therefore "practical"). So let's discuss some more blunt ways of augmenting the STLC with recursion.
The bluntest of the bunch is to add a single built-in function called fix
val fix : ('a -> 'a) -> 'a
or, in more genuine OCaml-y notation which requires eta-expansion
val fix : (('a -> 'b) -> ('a -> 'b)) -> ('a -> 'b)
Now, remember that we're only considering a primitive STLC with fix added. We can indeed write fix (the latter one at least) in OCaml, but that's cheating at the moment. What does fix buy the STLC as a primitive?
It turns out that the answer is: "everything". STLC + Fix (basically a language called PCF) is impure and Turing Complete. It's also simply tremendously difficult to use.
So this is the final hurdle to jump: how do we make fix easier to work with? By adding recursive bindings!
Already, STLC has a let construction. You can think of it as just syntax sugar:
let x = E in F ----> (fun x -> F) (E)
but once we've added fix we also have the power to introduce let rec bindings
let rec x a = E in F ----> (fun x -> F) (fix (fun x a -> E))
At this point it should again be clear: let and let rec are very different beasts. They embody different levels of linguistic power and let rec is a window to allow fundamental impurity through Turing Completeness and its partner-effect non-termination.
So, at the end of the day, it's a little amusing that Haskell, the purer of the two languages, made the interesting choice of abolishing plain let bindings. That's really the only difference: there is no syntax for representing a non-recursive binding in Haskell.
At this point it's essentially just a style decision. The authors of Haskell determined that recursive bindings were so useful that one might as well assume that every binding is recursive (and mutually so, a can of worms ignored in this answer so far).
On the other hand, OCaml gives you the ability to be totally explicit about the kind of binding you choose, let or let rec!
I think this has nothing to do with being purely functional, it is just a design decision that in Haskell you are not allowed to do
let a = 0;;
let a = a + 1;;
whereas you can do it in Caml.
In Haskell this code won't work because let a = a + 1 is interpreted as a recursive definition and will not terminate.
In Haskell you don't have to specify that a definition is recursive simply because you can't create a non-recursive one (so the keyword rec is everywhere but is not written).
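A quick sketch of both sides of that coin (names are mine):
-- Every Haskell let is recursive, so this 'a' refers to itself;
-- evaluating 'loopy' never terminates (it denotes bottom).
loopy :: Int
loopy = let a = a + 1 in a

-- To get the OCaml-style "rebind the same name", you simply use fresh names:
stepped :: Int
stepped = let a0 = 0
              a1 = a0 + 1
          in a1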
I am not an expert, but I'll make a guess until the truly knowledgeable guys show up. In OCaml there can be side effects that happen during the definition of a function:
let rec f =
  let () = Printf.printf "hello\n" in
  fun x -> if x <= 0 then 12 else 1 + f (x - 1)
This means that the order of function definitions must be preserved in some sense. Now imagine that two distinct sets of mutually recursive functions are interleaved. It doesn't seem at all easy for the compiler to preserve the order while processing them as two separate mutually recursive sets of definitions.
The use of `let rec ... and` means that distinct sets of mutually recursive function definitions can't be interleaved in OCaml as they can in Haskell. Haskell doesn't have side effects (in some sense), so definitions can be freely reordered.
It's not a question of purity, it's a question of specifying what environment the typechecker should check an expression in. It actually gives you more power than you would have otherwise. For example (I'm going to write Standard ML here because I know it better than OCaml, but I believe the typechecking process is pretty much the same for the two languages), it lets you distinguish between these cases:
val foo : int = 5
val foo = fn (x) => if x = foo then 0 else 1
Now as of the second redefinition, foo has the type int -> int. On the other hand,
val foo : int = 5
val rec foo = fn (x) => if x = foo then 0 else 1
does not typecheck, because the rec means that the typechecker has already decided that foo has been rebound to the type 'a -> int, and when it tries to figure out what that 'a needs to be, there is a unification failure because x = foo forces foo to have a numeric type, which it doesn't.
It can certainly "look" more imperative, because the case without rec allows you to do things like this:
val foo : int = 5
val foo = foo + 1
val foo = foo + 1
and now foo has the value 7. That's not because it's been mutated, however --- the name foo has been bound three times, and it just so happens that each of those bindings shadowed a previous binding of a variable named foo. It's the same as this:
val foo : int = 5
val foo' = foo + 1
val foo'' = foo' + 1
except that foo and foo' are no longer available in the environment after the identifier foo has been rebound. The following are also legal:
val foo : int = 5
val foo : real = 5.0
which makes it clearer that what's happening is shadowing of the original definition, rather than a side effect.
Whether or not it's stylistically a good idea to rebind identifiers is questionable -- it can get confusing. It can be useful in some situations (e.g. rebinding a function name to a version of itself that prints debugging output).
I'd say that in OCaml they are trying to make REPL and source files work the same way. So, it's perfectly reasonable to redefine some function in REPL; therefore, they have to allow it in the source as well. Now, if you use the (redefined) function in itself, OCaml needs some way of knowing which of the definitions to use: the previous one or the new one.
In Haskell they've just given up and accepted that the REPL works differently from source files.

Why should I use case expressions if I can use "equations"?

I'm learning Haskell, from the book "Real World Haskell". In pages 66 and 67, they show the case expressions with this example:
fromMaybe defval wrapped =
  case wrapped of
    Nothing    -> defval
    Just value -> value
I remember a similar thing in F#, but (as shown earlier in the book) Haskell can define functions as series of equations, while AFAIK F# cannot. So I tried to define this in such a way:
fromMaybe2 defval Nothing = defval
fromMaybe2 defval (Just value) = value
I loaded it in GHCi and after a couple of results I convinced myself it was the same. However, this makes me wonder: why should there be case expressions when equations:
are more comprehensible (it's Mathematics; why use case something of, who says that?);
are less verbose (2 vs 4 lines);
require much less structuring and syntactic sugar (-> could be an operator, look what they've done!);
only use variables when needed (in basic cases, such as this wrapped just takes up space).
What's good about case expressions? Do they exist only because similar FP-based languages (such as F#) have them? Am I missing something?
Edit:
I see from @freyrs's answer that the compiler makes these exactly the same. So, equations can always be turned into case expressions (as expected). My next question is the converse: can one go the opposite route of the compiler and use equations with let/where expressions to express any case expression?
This comes from a culture of having small "kernel" expression-oriented languages. Haskell grows from Lisp's roots (i.e. lambda calculus and combinatory logic); it's basically Lisp plus syntax plus explicit data type definitions plus pattern matching minus mutation plus lazy evaluation (lazy evaluation was itself first described in Lisp, AFAIK, in the '70s).
Lisp-like languages are expression-oriented, i.e. everything is an expression, and a language's semantics is given as a set of reduction rules, turning more complex expressions into simpler ones, and ultimately into "values".
Equations are not expressions. Several equations could be somehow mashed into one expression; you'd have to introduce some syntax for that; case is that syntax.
Haskell's rich syntax gets translated into a smaller "core" language that has case as one of its basic building blocks. case has to be a basic construct, because pattern matching in Haskell is made to be such a basic, core feature of the language.
To your new question, yes you can, by introducing auxiliary functions as Luis Casillas shows in his answer, or with the use of pattern guards, so his example becomes:
foo x y | (Meh o p) <- z = baz y p o
        | (Gah t q) <- z = quux x t q
  where
    z = bar x
The two functions compile into exactly the same internal code in Haskell (called Core), which you can dump out by passing the flags -ddump-simpl -dsuppress-all to GHC.
It may look a bit intimidating with the variable names, but it's effectively just an explicitly typed version of the code you wrote above. The only difference is the variable names.
fromMaybe2
fromMaybe2 =
  \ @ t_aBC defval_aB6 ds_dCK ->
    case ds_dCK of _ {
      Nothing -> defval_aB6;
      Just value_aB8 -> value_aB8
    }
fromMaybe
fromMaybe =
  \ @ t_aBJ defval_aB3 wrapped_aB4 ->
    case wrapped_aB4 of _ {
      Nothing -> defval_aB3;
      Just value_aB5 -> value_aB5
    }
The paper "A History of Haskell: Being Lazy with Class" (PDF) provides some useful perspective on this question. Section 4.4 ("Declaration style vs. expression style," p.13) is about this topic. The money quote:
[W]e engaged in furious debate about which style was “better.” An underlying assumption was that if possible there should be “just one way to do something,” so that, for example, having both let and where would be redundant and confusing. [...] In the end, we abandoned the underlying assumption, and provided full syntactic support for both styles.
Basically they couldn't agree on one so they threw both in. (Note that quote is explicitly about let and where, but they treat both that choice and the case vs. equations choice as two manifestations of the same basic choice—what they call "declaration style" vs. "expression style.")
In modern practice, the declaration style (your "series of equations") has become the more common one. case is often seen in this situation, where you need to match on a value that is computed from one of the arguments:
foo x y = case bar x of
  Meh o p -> baz y p o
  Gah t q -> quux x t q
You can always rewrite this to use an auxiliary function:
foo x y = go (bar x)
  where go (Meh o p) = baz y p o
        go (Gah t q) = quux x t q
This has the very minor disadvantage that you need to name your auxiliary function—but go is normally a perfectly fine name in this situation.
Case expressions can be used anywhere an expression is expected, while equations can't. Example:
1 + (case even 9 of True -> 2; _ -> 3)
You can even nest case expressions, and I've seen code that does that. However I tend to stay away from case expressions, and try to solve the problem with equations, even if I have to introduce a local function using where/let.
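For instance, the nested-expression example above can be rewritten with a small local helper defined by equations (the helper name pick is made up):
-- Same value as 1 + (case even 9 of True -> 2; _ -> 3), but the branching
-- now lives in a locally defined function written as equations.
value :: Int
value = 1 + pick (even 9)
  where
    pick True  = 2
    pick False = 3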
Every definition using equations is equivalent to one using case. For instance
negate True = False
negate False = True
stands for
negate x = case x of
  True  -> False
  False -> True
That is to say, these are two ways of expressing the same thing, and the former is translated to the latter by GHC.
From the Haskell code that I've read, it seems canonical to use the first style wherever possible.
See section 4.4.3.1 of the Haskell '98 report.
The answer to your added question is yes, but it's pretty ugly.
case exp of
  pat1 -> e1
  pat2 -> e2
  etc.
can, I believe, be emulated by
let
  f pat1 = e1
  f pat2 = e2
  etc.
in f exp
as long as f is not free in exp, e1, e2, etc. But you shouldn't do that because it's horrible.

what is the meaning of "let x = x in x" and "data Float#" in GHC.Prim in Haskell

I looked at the module GHC.Prim and found that it seems all data types in GHC.Prim are defined like data Float#, without anything like = A | B, and all functions in GHC.Prim are defined like gtFloat# = let x = x in x.
My question is whether these definitions make sense and what they mean.
I checked the header of GHC.Prim, which reads:
{-
This is a generated file (generated by genprimopcode).
It is not code to actually be used. Its only purpose is to be
consumed by haddock.
-}
I guess this may be related to my questions; could someone please explain it to me?
It's magic :)
These are the "primitive operators and operations". They are hardwired into the compiler, hence there are no data constructors for primitives and all functions are bottom since they are necessarily not expressable in pure haskell.
(Bottom represents a "hole" in a haskell program, an infinite loop or undefined are examples of bottom)
To put it another way
These data declarations/functions are there to provide access to the raw compiler internals. GHC.Prim exists to export these primitives; it doesn't actually implement them or anything (e.g. its code isn't actually useful). All of that is done in the compiler.
It's meant for code that needs to be extremely optimized. If you think you might need it, there is some useful reading about the primitives in GHC.
A brief expansion of jozefg's answer ...
Primops are precisely those operations that are supplied by the runtime because they can't be defined within the language (or shouldn't be, for reasons of efficiency, say). The true purpose of GHC.Prim is not to define anything, but merely to export some operations so that Haddock can document their existence.
The construction let x = x in x is used at this point in GHC's codebase because the value undefined has not yet been, um, "defined". (That waits until the Prelude.) But notice that the circular let construction, just like undefined, is both syntactically correct and can have any type. That is, it's an infinite loop with the semantics of ⊥, just as undefined is.
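A tiny demonstration of that claim; nothing GHC.Prim-specific here, just ordinary Haskell:
-- Like 'undefined', the circular let typechecks at any type...
bottomInt :: Int
bottomInt = let x = x in x

bottomFun :: a -> b
bottomFun = let x = x in x

-- ...and forcing it never produces a value (GHC's runtime typically
-- reports it as <<loop>>).
main :: IO ()
main = print bottomInt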
... and an aside
Also note that in general the Haskell expression let x = z in y means "change the variable x to the expression z wherever x occurs in the expression y". If you're familiar with the lambda calculus, you should recognize this as the reduction rule for the application of the lambda abstraction \x -> y to the term z. So is the Haskell expression let x = x in x nothing more than some syntax on top of the pure lambda calculus? Let's take a look.
First, we need to account for the recursiveness of Haskell's let expressions. The lambda calculus does not admit recursive definitions, but given a primitive fixed-point operator fix, we can encode recursiveness explicitly. For example, the Haskell expression let x = x in x has the same meaning as (fix \r x -> r x) z. (I've renamed the x on the right side of the application to z to emphasize that it has no implicit relation to the x inside the lambda.)
Applying the usual definition of a fixed-point operator, fix f = f (fix f), our translation of let x = x in x reduces (or rather doesn't) like this:
(fix \r x -> r x) z ==>
(\s y -> s y) (fix \r x -> r x) z ==>
(\y -> (fix \r x -> r x) y) z ==>
(fix \r x -> r x) z ==> ...
So at this point in the development of the language, we've introduced the semantics of ⊥ from the foundation of the (typed) lambda calculus with a built-in fixed-point operator. Lovely!
We need a primitive fixed-point operation (that is, one that is built into the language) because it's impossible to define a fixed-point combinator in the simply typed lambda calculus and its close cousins. (The definition of fix in Haskell, in Data.Function, doesn't contradict this—it's defined recursively, but we need a fixed-point operator to implement recursion.)
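For completeness, here is fix from Data.Function put to work, with recursion recovered from the fixed point rather than from a recursive binding (the factorial example is mine):
import Data.Function (fix)   -- fix f = f (fix f)

-- Factorial written without a recursive binding: the recursion happens
-- through fix, by abstracting over the recursive call.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)   -- 120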
If you haven't seen this before, you should read up on fixed-point recursion in the lambda calculus. A text on the lambda calculus is best (there are some free ones online), but some Googling should get you going. The basic idea is that we can convert a recursive definition into a non-recursive one by abstracting over the recursive call, then use a fixed-point combinator to pass our function (lambda abstraction) to itself. The base-case of a well-defined recursive definition corresponds to a fixed point of our function, so the function executes, calling itself over and over again until it hits a fixed point, at which point the function returns its result. Pretty damn neat, huh?
