Why should I use case expressions if I can use "equations"?

I'm learning Haskell, from the book "Real World Haskell". In pages 66 and 67, they show the case expressions with this example:
fromMaybe defval wrapped =
    case wrapped of
      Nothing    -> defval
      Just value -> value
I remember a similar thing in F#, but (as shown earlier in the book) Haskell can define functions as a series of equations, while AFAIK F# cannot. So I tried to define this in such a way:
fromMaybe2 defval Nothing = defval
fromMaybe2 defval (Just value) = value
I loaded it in GHCi and, after a couple of results, I convinced myself it was the same. However, this makes me wonder: why should there be case expressions when equations:
are more comprehensible (it's Mathematics; why use case something of, who says that?);
are less verbose (2 vs 4 lines);
require much less structuring and syntactic sugar (-> could be an operator, look what they've done!);
only use variables when needed (in basic cases, such as this wrapped just takes up space).
What's good about case expressions? Do they exist only because similar FP-based languages (such as F#) have them? Am I missing something?
Edit:
I see from #freyrs's answer that the compiler makes these exactly the same. So, equations can always be turned into case expressions (as expected). My next question is the converse; can one go the opposite route of the compiler and use equations with let/where expressions to express any case expression?

This comes from a culture of having small "kernel" expression-oriented languages. Haskell grows from Lisp's roots (i.e. lambda calculus and combinatory logic); it's basically Lisp plus syntax plus explicit data type definitions plus pattern matching minus mutation plus lazy evaluation (lazy evaluation was itself first described in Lisp AFAIK, i.e. in the '70s).
Lisp-like languages are expression-oriented, i.e. everything is an expression, and a language's semantics is given as a set of reduction rules, turning more complex expressions into simpler ones, and ultimately into "values".
Equations are not expressions. Several equations could be somehow mashed into one expression; you'd have to introduce some syntax for that; case is that syntax.
Haskell's rich syntax gets translated into a smaller "core" language that has case as one of its basic building blocks. case has to be a basic construct, because pattern matching in Haskell is made to be such a basic, core feature of the language.
To your new question, yes you can, by introducing auxiliary functions as Luis Casillas shows in his answer, or with the use of pattern guards, so his example becomes:
foo x y | (Meh o p) <- z = baz y p o
        | (Gah t q) <- z = quux x t q
  where
    z = bar x

The two functions compile into exactly the same internal code in Haskell (called Core), which you can dump out by passing the flags -ddump-simpl -dsuppress-all to GHC.
It may look a bit intimidating with the variable names, but it's effectively just an explicitly typed version of the code you wrote above. The only difference is the variable names.
fromMaybe2
fromMaybe2 =
  \ @ t_aBC defval_aB6 ds_dCK ->
    case ds_dCK of _ {
      Nothing -> defval_aB6;
      Just value_aB8 -> value_aB8
    }

fromMaybe
fromMaybe =
  \ @ t_aBJ defval_aB3 wrapped_aB4 ->
    case wrapped_aB4 of _ {
      Nothing -> defval_aB3;
      Just value_aB5 -> value_aB5
    }

The paper "A History of Haskell: Being Lazy with Class" (PDF) provides some useful perspective on this question. Section 4.4 ("Declaration style vs. expression style," p.13) is about this topic. The money quote:
[W]e engaged in furious debate about which style was “better.” An underlying assumption was that if possible there should be “just one way to do something,” so that, for example, having both let and where would be redundant and confusing. [...] In the end, we abandoned the underlying assumption, and provided full syntactic support for both styles.
Basically they couldn't agree on one so they threw both in. (Note that quote is explicitly about let and where, but they treat both that choice and the case vs. equations choice as two manifestations of the same basic choice—what they call "declaration style" vs. "expression style.")
In modern practice, the declaration style (your "series of equations") has become the more common one. case is often seen in this situation, where you need to match on a value that is computed from one of the arguments:
foo x y = case bar x of
            Meh o p -> baz y p o
            Gah t q -> quux x t q
You can always rewrite this to use an auxiliary function:
foo x y = go (bar x)
  where go (Meh o p) = baz y p o
        go (Gah t q) = quux x t q
This has the very minor disadvantage that you need to name your auxiliary function—but go is normally a perfectly fine name in this situation.

A case expression can be used anywhere an expression is expected, while equations can't. Example:
1 + (case even 9 of True -> 2; _ -> 3)
You can even nest case expressions, and I've seen code that does that. However, I tend to stay away from case expressions and try to solve the problem with equations, even if I have to introduce a local function using where/let.
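For illustration, here is one way the expression above could be rewritten in the equation style with a local helper (pick and answer are just names chosen for this sketch):

answer :: Int
answer = 1 + pick (even 9)
  where
    -- the helper's equations play the role of the case alternatives
    pick True  = 2
    pick False = 3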

Every definition using equations is equivalent to one using case. For instance
negate True = False
negate False = True
stands for
negate x = case x of
    True  -> False
    False -> True
That is to say, these are two ways of expressing the same thing, and the former is translated to the latter by GHC.
From the Haskell code that I've read, it seems canonical to use the first style wherever possible.
See section 4.4.3.1 of the Haskell '98 report.

The answer to your added question is yes, but it's pretty ugly.
case exp of
  pat1 -> e1
  pat2 -> e2
  etc.
can, I believe, be emulated by
let
  f pat1 = e1
  f pat2 = e2
  etc.
in f exp
as long as f is not free in exp, e1, e2, etc. But you shouldn't do that because it's horrible.
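For the record, applying that translation to the fromMaybe definition from the top of the question would give something like this (a sketch; fromMaybe3 and f are just illustrative names):

fromMaybe3 :: a -> Maybe a -> a
fromMaybe3 defval wrapped =
  let -- the local function's equations stand in for the case alternatives
      f Nothing      = defval
      f (Just value) = value
  in f wrapped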

Related

Monads, composition and the order of computation

All the monad articles often state that monads allow you to sequence effects in order.
But what about simple composition? Doesn't
f x = x + 1
g x = x * 2
result = f (g x)
require g x to be computed before f ...?
Do monads do the same but with handling of effects?
Disclaimer: Monads are a lot of things. They are notoriously difficult to explain, so I will not attempt to explain what monads are in general here, since the question does not ask for that. I will assume you have a basic grasp on what the Monad interface is as well as how it works for some useful datatypes, like Maybe, Either, and IO.
What is an effect?
Your question begins with a note:
All the monad articles often state that monads allow you to sequence effects in order.
Hmm. This is interesting. In fact, it is interesting for a couple reasons, one of which you have identified: it implies that monads let you create some sort of sequencing. That’s true, but it’s only part of the picture: it also states that sequencing happens on effects.
Here’s the thing, though… what is an “effect”? Is adding two numbers together an effect? Under most definitions, the answer would be no. What about printing something to stdout, is that an effect? In that case, I think most people would agree that the answer is yes. However, consider something more subtle: is short-circuiting a computation by producing Nothing an effect?
Error effects
Let’s take a look at an example. Consider the following code:
> do x <- Just 1
     y <- Nothing
     return (x + y)
Nothing
The second line of that example “short-circuits” due to the Monad instance for Maybe. Could that be considered an effect? In some sense, I think so, since it’s non-local, but in another sense, probably not. After all, if the x <- Just 1 or y <- Nothing lines are swapped, the result is still the same, so the ordering doesn’t matter.
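To see where that short-circuiting comes from, here is a standalone copy of Maybe's (>>=) logic (bindMaybe and example are just names for this sketch; the real instance lives in base but has the same shape):

-- A Nothing on the left means the function on the right is never consulted.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

-- The do block above, spelled out with bindMaybe:
example :: Maybe Int
example =
  Just 1  `bindMaybe` \x ->
  Nothing `bindMaybe` \y ->
  Just (x + y)
-- example evaluates to Nothing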
However, consider a slightly more complex example that uses Either instead of Maybe:
> do x <- Left "x failed"
     y <- Left "y failed"
     return (x + y)
Left "x failed"
Now this is more interesting. If you swap the first two lines now, you get a different result! Still, is this a representation of an “effect” like the ones you allude to in your question? After all, it’s just a bunch of function calls. As you know, do notation is just an alternative syntax for a bunch of uses of the >>= operator, so we can expand it out:
> Left "x failed" >>= \x ->
    Left "y failed" >>= \y ->
      return (x + y)
Left "x failed"
We can even replace the >>= operator with the Either-specific definition to get rid of monads entirely:
> case Left "x failed" of
    Right x -> case Left "y failed" of
      Right y -> Right (x + y)
      Left e -> Left e
    Left e -> Left e
Left "x failed"
Therefore, it’s clear that monads do impose some sort of sequencing, but this is not because they are monads and monads are magic, it’s just because they happen to enable a style of programming that looks more impure than Haskell typically allows.
Monads and state
But perhaps that is unsatisfying to you. Error handling is not compelling because it is simply short-circuiting, it doesn’t actually have any sequencing in the result! Well, if we reach for some slightly more complex types, we can do that. For example, consider the Writer type, which allows a sort of “logging” using the monadic interface:
> execWriter $ do
    tell "hello"
    tell " "
    tell "world"
"hello world"
This is even more interesting than before, since now the result of each computation in the do block is unused, but it still affects the output! This is clearly side-effectful, and order is clearly very important! If we reorder the tell expressions, we would get a very different result:
> execWriter $ do
    tell " "
    tell "world"
    tell "hello"
" worldhello"
But how is this possible? Well, again, we can rewrite it to avoid do notation:
execWriter (
  tell "hello" >>= \_ ->
    tell " " >>= \_ ->
      tell "world")
We could inline the definition of >>= again for Writer, but it’s too long to be terribly illustrative here. The point is, though, that Writer is just a completely ordinary Haskell datatype that doesn’t do any I/O or anything like that, and yet we have used the monadic interface to create something that looks like ordered effects.
We can go even further by creating an interface that looks like mutable state using the State type:
> flip execState 0 $ do
    modify (+ 3)
    modify (* 2)
6
Once again, if we reorder the expressions, we get a different result:
> flip execState 0 $ do
    modify (* 2)
    modify (+ 3)
3
Clearly, monads are a useful tool for creating interfaces that look stateful and have a well-defined ordering despite actually just being ordinary function calls.
Why can monads do this?
What gives monads this power? Well, they’re not magic—they’re just ordinary pure Haskell code. But consider the type signature for >>=:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Notice how the second argument depends on a, and the only way to get an a is from the first argument? This means that >>= needs to “run” the first argument to produce a value before it can apply the second argument. This doesn’t have to do with evaluation order so much as it has to do with actually writing code that will typecheck.
Now, it’s true that Haskell is a lazy language. But Haskell’s laziness doesn’t really matter for any of this because all of this code is actually pure, even the example using State! It’s simply a pattern that encodes computations that look sort of stateful in a pure way, but if you actually implemented State yourself, you’d find that it just passes around the “current state” in the definition of the >>= function. There’s not any actual mutation.
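If you want to see that for yourself, here is a minimal hand-rolled State (a sketch with the same shape as the real Control.Monad.State, not the library itself): the "current state" is just an ordinary function argument that gets threaded along.

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in  (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s       -- "run" the first computation, threading the state
    in  runState (f a) s'   -- hand its result and the new state to the next one

modify :: (s -> s) -> State s ()
modify f = State $ \s -> ((), f s)

execState :: State s a -> s -> s
execState m s0 = snd (runState m s0)

With these definitions, flip execState 0 (modify (+ 3) >> modify (* 2)) evaluates to 6, exactly as above, and nothing is ever mutated.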
And that’s it. Monads, by virtue of their interface, impose an ordering on how their arguments may be evaluated, and instances of Monad exploit that to make stateful-looking interfaces. You don’t need Monad to have evaluation ordering, though, as you found; obviously in (1 + 2) * 3 the addition will be evaluated before the multiplication.
But what about IO??
Okay, you got me. Here’s the problem: IO is magic.
Monads are not magic, but IO is. All of the above examples are purely functional, but obviously reading a file or writing to stdout is not pure. So how the heck does IO work?
Well, IO is implemented by the GHC runtime, and you could not write it yourself. However, in order to make it work nicely with the rest of Haskell, there needs to be a well-defined evaluation order! Otherwise things would get printed out in the wrong order and all sorts of other hell would break loose.
Well, it turns out the Monad’s interface is a great way to ensure that evaluation order is predictable, since it works for pure code already. So IO leverages the same interface to guarantee the evaluation order is the same, and the runtime actually defines what that evaluation means.
However, don’t be misled! You don’t need monads to do I/O in a pure language, and you don’t need IO to have monadic effects. Early versions of Haskell experimented with a non-monadic way to do I/O, and the other parts of this answer explain how you can have pure monadic effects. Remember that monads are not special or holy, they’re just a pattern that Haskell programmers have found useful because of its various properties.
Yes, the functions you proposed are strict for the standard numerical types. But not all functions are! In
f _ = 3
g x = x * 2
result = f (g x)
it is not the case that g x must be computed before f (g x).
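A tiny demonstration of that point (f and main here are just a sketch): because f never inspects its argument, the argument is never evaluated, even if evaluating it would crash.

f :: Int -> Int
f _ = 3            -- never looks at its argument

main :: IO ()
main = print (f (error "this is never evaluated"))  -- prints 3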
Yes, monads use function composition to sequence effects, and are not the only way to achieve sequenced effects.
Strict semantics and side effects
In most languages, there is sequencing by strict semantics applied first to the function side of an expression, then to each argument in turn, and finally the function is applied to the arguments. So in JS, the function application form,
<Code 1>(<Code 2>, <Code 3>)
runs four pieces of code in a specified order: 1, 2, 3, then it checks that the output of 1 was a function, then it calls the function with those two computed arguments. And it does this because any of those steps can have side-effects. You would write,
const logVal = (log, val) => {
  console.log(log);
  return val;
};
logVal(1, (a, b) => logVal(4, a + b))(
  logVal(2, 2),
  logVal(3, 3));
And that works for those languages. These are side-effects, which we can say in this context means that JS's type system doesn't give you any way to know that they are there.
Haskell does have a strict application primitive, but it wanted to be pure, which roughly means that it wanted the type system to track effects. So they introduced a form of metaprogramming where one of their types is a type-level adjective, “programs which compute a _____”. A program interacts with the real world; Haskell code in theory doesn't. You have to define “main is a program which computes a unit type” and then the compiler actually just builds that program for you as an executable binary file. By the time that file is run Haskell is not really in the picture any more!
This is therefore more specific than normal function application, because the abstract problem I wrote in JavaScript is,
I have a program which computes {a function from (X, Y) pairs to programs which compute Zs}.
I also have a program which computes an X, and a program which computes a Y.
I want to put these all together into a program which computes a Z.
That's not just function composition itself. But a function can do that.
Peeking inside monads
A monad is a pattern. The pattern is, sometimes you have an adjective which does not add much when you repeat it. For example there is not much added when you say "a delayed delayed x" or "zero or more (zero or more xs)" or "either a null or else either a null or else an x." Similarly for the IO monad, not much is added by "a program to compute a program to compute an x" that is not available in "a program to compute an x."
The pattern is that there is some canonical merging algorithm which merges:
join: given an <adjective> <adjective> x, I will make you an <adjective> x.
We also add two other properties, the adjective should be outputtish,
map: given an x -> y and an <adjective> x, I will make you an <adjective> y
and universally embeddable,
pure: given an x, I will make you an <adjective> x.
Given these three things and a couple axioms you happen to have a common "monad" idea which you can develop One True Syntax for.
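In Haskell those three pieces go by the names join, fmap, and pure (all in the standard libraries), and the bind operator (>>=) can be recovered from the first two, which is exactly what the JS sketch below does:

import Control.Monad (join)

-- The three pieces, as they exist in Haskell's libraries:
-- join :: Monad m       => m (m a) -> m a
-- fmap :: Functor f     => (a -> b) -> f a -> f b
-- pure :: Applicative f => a -> f a

-- bind, recovered from join and fmap:
bind :: Monad m => m a -> (a -> m b) -> m b
bind prog fn = join (fmap fn prog)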
Now this metaprogramming idea obviously contains a monad. In JS we would write,
interface IO<x> {
  run: () => Promise<x>
}
function join<x>(pprog: IO<IO<x>>): IO<x> {
  return { run: () => pprog.run().then(prog => prog.run()) };
}
function map<x, y>(prog: IO<x>, fn: (input: x) => y): IO<y> {
  return { run: () => prog.run().then(x => fn(x)) };
}
function pure<x>(input: x): IO<x> {
  return { run: () => Promise.resolve(input) };
}
// with those you can also define,
function bind<x, y>(prog: IO<x>, fn: (input: x) => IO<y>): IO<y> {
  return join(map(prog, fn));
}
But the fact that a pattern exists does not mean it is useful! I am claiming that these functions turn out to be all you need to resolve the problem above. And it is not hard to see why: you can use bind to create a function scope inside of which the adjective doesn't exist, and manipulate your values there:
function ourGoal<x, y, z>(
  fnProg: IO<(inX: x, inY: y) => IO<z>>,
  xProg: IO<x>,
  yProg: IO<y>): IO<z> {
  return bind(fnProg, fn =>
    bind(xProg, x =>
      bind(yProg, y => fn(x, y))));
}
How this answers your question
Notice that in the above we choose an order of operations by how we write the three binds. We could have written those in some other order. But we needed all the parameters to run the final program.
This choice of how we sequence our operations is indeed realized in the function calls: you are 100% right. But the way that you are doing it, with only function composition, is flawed because it demotes the effects down to side-effects in order to get the types through.

Can any recursive definition be rewritten using foldr?

Say I have a general recursive definition in Haskell like this:
foo a0 a1 ... = base_case
foo b0 b1 ...
  | cond1 = recursive_case_1
  | cond2 = recursive_case_2
  ...
Can it always be rewritten using foldr? Can this be proved?
If we interpret your question literally, we can write const value foldr to achieve any value, as #DanielWagner pointed out in a comment.
A more interesting question is whether we can instead forbid general recursion from Haskell, and "recurse" only through the eliminators/catamorphisms associated to each user-defined data type, which are the natural generalization of foldr to inductively defined data types. This is, essentially, (higher-order) primitive recursion.
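To make that concrete: foldr is the eliminator for lists, and each new data type comes with its own analogue. The Tree type and foldTree below are just an illustrative sketch of that idea.

-- foldr is the eliminator for lists: it replaces (:) and [] with
-- arguments of your choosing.
len :: [a] -> Int
len = foldr (\_ n -> 1 + n) 0

-- A user-defined type gets its own eliminator in the same way.
data Tree a = Leaf | Node (Tree a) a (Tree a)

foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
foldTree leaf _    Leaf         = leaf
foldTree leaf node (Node l x r) =
  node (foldTree leaf node l) x (foldTree leaf node r)

-- Recursion through the eliminator only:
size :: Tree a -> Int
size = foldTree 0 (\l _ r -> 1 + l + r)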
When this restriction is imposed, we can only compose terminating functions (the eliminators) together. This means that we can no longer define non-terminating functions.
As a first example, we lose the trivial recursion
f x = f x
-- or even
a = a
since, as said, the language becomes total.
More interestingly, the general fixed point operator is lost.
fix :: (a -> a) -> a
fix f = f (fix f)
A more intriguing question is: what about the total functions we can express in Haskell? We do lose all the non-total functions, but do we lose any of the total ones?
Computability theory states that, since the language becomes total (no more non-termination), we lose expressiveness even on the total fragment.
The proof is a standard diagonalization argument. Fix any enumeration of programs in the total fragment so that we can speak of "the i-th program".
Then, let eval i x be the result of running the i-th program on the natural x as input (for simplicity, assume this is well typed, and that the result is a natural). Note that, since the language is total, a result must exist. Moreover, eval can be implemented in the unrestricted Haskell language, since we can write an interpreter of Haskell in Haskell (left as an exercise :-P), and that works just as well for the fragment. Then, we simply take
f n = succ $ eval n n
The above is a total function (a composition of total functions) which can be expressed in Haskell, but not in the fragment. Indeed, otherwise there would be a program to compute it, say the i-th program. In such case we would have
eval i x = f x
for all x. But then,
eval i i = f i = succ $ eval i i
which is impossible -- contradiction. QED.
In type theory, it is indeed the case that you can elaborate all definitions by dependent pattern-matching into ones only using eliminators (a more strongly-typed version of folds, the generalisation of lists' foldr).
See e.g. Eliminating Dependent Pattern Matching (pdf)

What's the reason of 'let rec' for impure functional language OCaml?

In the book Real World OCaml, the authors put why OCaml uses let rec for defining recursive functions.
OCaml distinguishes between nonrecursive definitions (using let) and recursive definitions (using let rec) largely for technical reasons: the type-inference algorithm needs to know when a set of function definitions are mutually recursive, and for reasons that don't apply to a pure language like Haskell, these have to be marked explicitly by the programmer.
What are the technical reasons that enforce let rec in OCaml, while pure functional languages don't need it?
When you define the semantics of function definition, as a language designer you have a choice: either make the name of the function visible in the scope of its own body, or not. Both choices are perfectly legal; for example, C-family languages, while far from functional, still have the names of definitions visible in their scope (this also extends to all definitions in C, making int x = x + 1 legal). The OCaml language decided to give us the extra flexibility of making the choice ourselves, and that's really great. They decided to make the name invisible by default, a fairly decent solution, since most of the functions that we write are non-recursive.
As for the quote, it doesn't really concern function definitions – the most common use of the rec keyword. It is mostly about "why the scope of a function definition doesn't extend to the body of the module", which is a completely different question.
After some research I've found a very similar question that has an answer which might satisfy you; a quote from it:
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalization should occur.
Even without any reordering, arbitrary impure expressions can occur in a function definition (a side effect of the definition, not of evaluation), making it impossible to build the dependency graph. Consider demarshaling and executing a function from a file.
To summarize, we have two usages of the let rec construct. One is to create a self-recursive function, like
let rec seq acc = function
  | 0 -> acc
  | n -> seq (acc+1) (n-1)
Another is to define mutually recursive functions:
let rec odd n =
  if n = 0 then true
  else if n = 1 then false
  else even (n - 1)
and even n =
  if n = 0 then false
  else if n = 1 then true
  else odd (n - 1)
In the first case, there are no technical reasons to prefer one solution over the other. This is just a matter of taste.
The second case is harder. When inferring types you need to split all function definitions into clusters consisting of mutually dependent definitions, in order to narrow the typing environment. In OCaml this is harder to do, since you need to take side effects into account. (Or you can continue without splitting into strongly connected components, but this leads to another issue: your type system becomes more restrictive, i.e., it will disallow more valid programs.)
But, revisiting the original question and the quote from RWO, I'm still pretty sure that there are no technical reasons for adding the rec flag. Consider SML, which has the same problems but still has rec behaviour enabled by default. There is a technical reason for the let ... and ... syntax for defining a set of mutually recursive functions. In SML this syntax doesn't require the rec flag; in OCaml it does, thus giving us more flexibility, like the ability to swap two values with a let x = y and y = x expression.
What are the technical reasons that enforce let rec in OCaml, while pure functional languages don't need it?
Recursiveness is a strange beast. It has a relation to purity, but it's a little more oblique than this. To be clear, you could write "alterna-Haskell" which retains its purity, its laziness but does not have recursively bound lets by default and demands some kind of rec marker just as OCaml does. Some would even prefer this.
In essence, there are just many different kinds of "let"s possible. If we compare let and let rec in OCaml we'll see a small difference. In static formal semantics, we might write
Γ ⊢ E : A        Γ, x : A ⊢ F : B
----------------------------------
      Γ ⊢ let x = E in F : B
which says that if we can prove in a variable environment Γ that E has type A and if we can prove in the same variable environment Γ augmented with x : A that F : B then we can prove that in the variable environment Γ let x = E in F has type B.
The thing to watch is the Γ argument. This is just a list of ("variable name", "value") pairs like [(x, 3); (y, "hello")] and augmenting the list like Γ, x : A just means consing (x, A) on to it (sorry that the syntax is flipped).
In particular, let's write the same formalism for let rec
Γ, x : A ⊢ E : A        Γ, x : A ⊢ F : B
-----------------------------------------
       Γ ⊢ let rec x = E in F : B
The only difference is that neither of our premises works in the plain Γ environment; both are allowed to assume the existence of the x variable.
In this sense, let and let rec are simply different beasts.
So what does it mean to be pure? Under the strictest definition, which even Haskell doesn't satisfy, we must eliminate all effects, including non-termination. The only way to achieve this is to take away our ability to write unrestricted recursion and add it back only carefully.
There exist plenty of languages without recursion. Perhaps the most important one is the Simply Typed Lambda Calculus. In its basic form it is regular lambda calculus but augmented with a typing discipline where types are a bit like
type ty =
  | Base
  | Arr of ty * ty
It turns out that STLC cannot represent recursion: the Y combinator, and all of its fixed-point cousin combinators, cannot be typed. Thus, STLC is not Turing Complete.
It is, however, uncompromisingly pure. It achieves that purity with the bluntest of instruments, by completely outlawing recursion. What we'd really like is some kind of balanced, careful recursion which doesn't lead to non-termination: we'd still be Turing Incomplete, but not so crippled.
Some languages try this game. There are clever ways of adding typed recursion back along a division between data and codata which ensures that you cannot write non-terminating functions. If you're interested, I suggest learning a bit of Coq.
But OCaml's goal (and Haskell's as well) is not to be delicate here. Both languages are uncompromisingly Turing Complete (and therefore "practical"). So let's discuss some more blunt ways of augmenting the STLC with recursion.
The bluntest of the bunch is to add a single built-in function called fix
val fix : ('a -> 'a) -> 'a
or, in more genuine OCaml-y notation which requires eta-expansion
val fix : (('a -> 'b) -> ('a -> 'b)) -> ('a -> 'b)
Now, remember that we're only considering a primitive STLC with fix added. We can indeed write fix (the latter one at least) in OCaml, but that's cheating at the moment. What does fix buy the STLC as a primitive?
It turns out that the answer is: "everything". STLC + Fix (basically a language called PCF) is impure and Turing Complete. It's also simply tremendously difficult to use.
So this is the final hurdle to jump: how do we make fix easier to work with? By adding recursive bindings!
Already, STLC has a let construction. You can think of it as just syntax sugar:
let x = E in F ----> (fun x -> F) (E)
but once we've added fix we also have the power to introduce let rec bindings
let rec x a = E in F ----> (fun x -> F) (fix (fun x a -> E))
At this point it should again be clear: let and let rec are very different beasts. They embody different levels of linguistic power and let rec is a window to allow fundamental impurity through Turing Completeness and its partner-effect non-termination.
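To make that window concrete in Haskell terms (Data.Function in GHC's base library ships an equivalent fix; the factorial here is just a sketch):

-- fix itself is definable once you have recursive bindings.
fix :: (a -> a) -> a
fix f = f (fix f)

-- A recursive function with no self-reference of its own: the lambda only
-- ever calls the 'recur' argument that fix feeds back to it.
factorial :: Integer -> Integer
factorial = fix (\recur n -> if n <= 1 then 1 else n * recur (n - 1))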
So, at the end of the day, it's a little amusing that Haskell, the purer of the two languages, made the interesting choice of abolishing plain let bindings. That's really the only difference: there is no syntax for representing a non-recursive binding in Haskell.
At this point it's essentially just a style decision. The authors of Haskell determined that recursive bindings were so useful that one might as well assume that every binding is recursive (and mutually so, a can of worms ignored in this answer so far).
On the other hand, OCaml gives you the ability to be totally explicit about the kind of binding you choose, let or let rec!
I think this has nothing to do with being purely functional; it is just a design decision that in Haskell you are not allowed to do
let a = 0;;
let a = a + 1;;
whereas you can do it in Caml.
In Haskell this code won't work because let a = a + 1 is interpreted as a recursive definition and will not terminate.
In Haskell you don't have to specify that a definition is recursive simply because you can't create a non-recursive one (so the keyword rec is everywhere but is not written).
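A minimal illustration of that point (notSeven is just a name for this sketch; in GHC, forcing it typically ends with a <<loop>> exception rather than producing 1):

notSeven :: Int
notSeven =
  let a = 0 in
  let a = a + 1 in  -- this 'a' shadows the outer one and refers to itself
  a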
I am not an expert, but I'll make a guess until the truly knowledgeable guys show up. In OCaml there can be side effects that happen during the definition of a function:
let rec f =
  let () = Printf.printf "hello\n" in
  fun x -> if x <= 0 then 12 else 1 + f (x - 1)
This means that the order of function definitions must be preserved in some sense. Now imagine that two distinct sets of mutually recursive functions are interleaved. It doesn't seem at all easy for the compiler to preserve the order while processing them as two separate mutually recursive sets of definitions.
The use of let rec ... and means that distinct sets of mutually recursive function definitions can't be interleaved in OCaml as they can in Haskell. Haskell doesn't have side effects (in some sense), so definitions can be freely reordered.
It's not a question of purity; it's a question of specifying what environment the typechecker should check an expression in. It actually gives you more power than you would have otherwise. For example (I'm going to write Standard ML here because I know it better than OCaml, but I believe the typechecking process is pretty much the same for the two languages), it lets you distinguish between these cases:
val foo : int = 5
val foo = fn (x) => if x = foo then 0 else 1
Now as of the second redefinition, foo has the type int -> int. On the other hand,
val foo : int = 5
val rec foo = fn (x) => if x = foo then 0 else 1
does not typecheck, because the rec means that the typechecker has already decided that foo has been rebound to the type 'a -> int, and when it tries to figure out what that 'a needs to be, there is a unification failure because x = foo forces foo to have a numeric type, which it doesn't.
It can certainly "look" more imperative, because the case without rec allows you to do things like this:
val foo : int = 5
val foo = foo + 1
val foo = foo + 1
and now foo has the value 7. That's not because it's been mutated, however: the name foo has been bound three times, and it just so happens that each of those bindings shadowed a previous binding of a variable named foo. It's the same as this:
val foo : int = 5
val foo' = foo + 1
val foo'' = foo' + 1
except that foo and foo' are no longer available in the environment after the identifier foo has been rebound. The following are also legal:
val foo : int = 5
val foo : real = 5.0
which makes it clearer that what's happening is shadowing of the original definition, rather than a side effect.
Whether or not it's stylistically a good idea to rebind identifiers is questionable -- it can get confusing. It can be useful in some situations (e.g. rebinding a function name to a version of itself that prints debugging output).
I'd say that in OCaml they are trying to make REPL and source files work the same way. So, it's perfectly reasonable to redefine some function in REPL; therefore, they have to allow it in the source as well. Now, if you use the (redefined) function in itself, OCaml needs some way of knowing which of the definitions to use: the previous one or the new one.
In Haskell they've just given up and accepted that the REPL works differently from source files.

What would pattern matching look like in a strict Haskell?

As a research experiment, I've recently worked on implementing strict-by-default Haskell modules. Instead of being lazy-by-default and having ! as an escape hatch, we're strict-by-default and have ~ as an escape hatch. This behavior is enabled using a {-# LANGUAGE Strict #-} pragma.
While working on making patterns strict I came upon an interesting question: should patterns be strict at the "top level" only, or in all bound variables? For example, if we have
f x = case x of
  y -> ...
we will force y even though Haskell would not do so. The more tricky case is
f x = case x of
  Just y -> ...
Should we interpret that as
f x = case x of
  Just y -> ... -- already strict in 'x' but not in 'y'
or
f x = case x of
  Just !y -> ... -- now also strict in 'y'
(Note that we're using the normal, lazy Haskell Just here.)
One design constraint that might be of value is this: I want the pragma to be modular. For example, even with Strict turned on we don't evaluate arguments to functions defined in other modules. That would make it non-modular.
Is there any prior art here?
As far as I understand things, refutable patterns are always strict at least on the outer level, which is another way of saying that the scrutinized expression must be evaluated to WHNF; otherwise you couldn't see whether it is a Just or a Nothing.
Hence your
!(Just y) -> ...
notation appears useless.
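A quick illustration of that outer-level-only strictness in ordinary lazy Haskell (outerOnly and demo are just names for this sketch):

-- Matching on the constructor forces the scrutinee to WHNF, but no further:
-- the payload inside the Just is never touched here.
outerOnly :: Maybe Int -> String
outerOnly m = case m of
  Nothing -> "nothing"
  Just _  -> "just something"

demo :: IO ()
demo = do
  putStrLn (outerOnly (Just undefined))  -- fine: prints "just something"
  putStrLn (outerOnly undefined)         -- crashes: the Maybe itself must reach WHNF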
OTOH, since in a strict language, the argument to Just must already have been evaluated, the notation
Just !y ->
doesn't make sense either.

The meaning of ' in Haskell function name?

What is quote ' used for? I have read about curried functions and read two ways of defining the add function - curried and uncurried. The curried version...
myadd' :: Int -> Int -> Int
myadd' x y = x + y
...but it works equally well without the quote. So what is the point of the '?
The quote means nothing to Haskell. It is just part of the name of that function.
People tend to use this for "internal" functions. If you have a function that sums a list by using an accumulator argument, your sum function will take two args. This is ugly, so you make a sum' function of two args, and a sum function of one arg like sum list = sum' 0 list.
Edit, perhaps I should just show the code:
sum' s [] = s
sum' s (x:xs) = sum' (s + x) xs
sum xs = sum' 0 xs
You do this so that sum' is tail-recursive, and so that the "public API" is nice looking.
It is often pronounced "prime", so that would be "myadd prime". It is usually used to denote a next step in the computation, or an alternative.
So, you can say
add = blah
add' = different blah
Or
f x =
  let x' = subcomputation x
  in blah
It's just a habit, like using int i as the index in a for loop in Java, C, etc.
There's no particular point to the ' character in this instance; it's just part of the identifier. In other words, myadd and myadd' are distinct, unrelated functions.
Conventionally though, the ' is used to denote some logical evaluation relationship. So, hypothetical functions myadd and myadd' would be related such that myadd' could be derived from myadd. This is a convention derived from formal logic and proofs in academia (where Haskell has its roots). I should underscore that this is only a convention; Haskell does not enforce it.
quote ' is just another allowed character in Haskell names. It's often used to define variants of functions, in which case quote is pronounced 'prime'. Specifically, the Haskell libraries use quote-variants to show that the variant is strict. For example: foldl is lazy, foldl' is strict.
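Concretely (foldl' really does live in Data.List; lazySum and strictSum are just names for this sketch):

import Data.List (foldl')

-- Same result, different behaviour: foldl' forces the accumulator at each
-- step and runs in constant space on a long list, while foldl first builds
-- up a chain of unevaluated thunks.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0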
In this case, it looks like the quote is just used to separate the curried and uncurried variants.
As said by others, the ' does not hold any meaning for Haskell itself. It is just a character, like a letter or a number.
The ' is used to denote alternative versions of a function (in the case of foldl and foldl') or helper functions. Sometimes, you'll even see several ' on a function name. Adding a ' to the end of a function name is just much more concise than writing someFunctionHelper and someFunctionStrict.
The origin of this notation is in mathematics and physics, where, if you have a function f(x), its derivate is often denoted as f'(x).
