Could `if-then-else` (always) be replaced by a function call? - haskell

This question is out of curiosity about how PLs work, more than anything else. (It actually came to me while looking at SML, which differs from Haskell in that the former uses call-by-value - but my question is about Haskell.)
Haskell (as I understand) has "call-by-need" semantics.
Does this imply that if I define a function as follows:
cond True thenExp elseExp = thenExp
cond _ thenExp elseExp = elseExp
that this will always behave exactly like an if-then-else expression?
Or, in another sense, can if-then-else be regarded as syntactic sugar for something that could've been defined as a function?
Edit:
Just to contrast the behaviour of Haskell with Standard ML, define (in SML)
fun cond p t f = if p then t else f;
and then the factorial function
fun factc n = cond (n=0) 1 (n * factc (n-1));
evaluating factc 1 (say) never finishes, because the recursion in the last argument of cond never terminates.
However, defining
fun fact n = if n=0 then 1 else n * fact (n-1);
works as we expect, because each branch of the if is evaluated only when the condition selects it.
Maybe there are clever ways to defer argument evaluation in SML (I don't know, as I'm not that familiar with it yet), but the point is that in a call-by-value language, if-then-else often behaves differently.
My question was whether this (call-by-need vs call-by-value) was the principle reason behind this difference (and the consensus seems to be "yes").
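For contrast, here is a sketch of the same definitions in Haskell (reusing the cond from above). Under call-by-need the recursive argument is only forced when the condition selects it, so this terminates:

cond :: Bool -> a -> a -> a
cond True  thenExp _       = thenExp
cond False _       elseExp = elseExp

factc :: Integer -> Integer
factc n = cond (n == 0) 1 (n * factc (n - 1))
-- factc 5 == 120; the third argument is never forced once n == 0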

As the Haskell Wiki page on if-then-else says:
For processing conditions, the `if-then-else` **syntax was defined in Haskell98**. However it could be simply replaced by the function `if'`
with
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
So if we use the above if' function and we need to evaluate it (since Haskell is lazy, we do not necessarily need to evaluate an if-then-else expression at all), Haskell will first evaluate the first operand to decide whether it is True or False. In case it is True, it will return the first expression; if it is False, it will return the second expression. Note that this does not per se mean that we (fully) evaluate these expressions. Only when we need the result will we evaluate the expressions.
But in case the condition is True, there is no reason at all to ever evaluate the second expression, since we ignore it.
In case we share an expression over multiple parts of the expression tree, it is of course possible that another call will (partially) evaluate the other expression.
GHC even has an extension to override the if <expr> then <expr> else <expr> syntax: the -XRebindableSyntax flag. Among other things, it will also rebind:
Conditionals (e.g. if e1 then e2 else e3) means ifThenElse e1 e2 e3. However case expressions are unaffected.
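As a minimal sketch of what that looks like in practice (the module layout here is illustrative, but ifThenElse is the name the extension looks for):

{-# LANGUAGE RebindableSyntax #-}
module Main where

import Prelude

-- With RebindableSyntax, 'if c then t else e' desugars to 'ifThenElse c t e',
-- using whatever ifThenElse is in scope.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  t _ = t
ifThenElse False _ e = e

main :: IO ()
main = putStrLn (if 1 < 2 then "then branch" else "else branch")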

Yes, a lazy language will evaluate an expression only when it needs to. Therefore it is no problem to convert the then/else parts of the expression into function arguments.
This is unlike strict languages like Idris or OCaml, where arguments for a function call are evaluated before the called function is executed.

Yes, you can regard if/then/else as syntactic sugar. In fact, a programming language doesn't even need Boolean primitives, so you can also consider True and False as syntactic sugar.
The lambda calculus defines Boolean values as a set of functions that take two arguments:
true = λt.λf.t
false = λt.λf.f
Both are functions that take two values, t for true and f for false. The function true always returns the t value, whereas the function false always returns the f value.
In Haskell, you can define similar functions like this:
true = \t f -> t
false = \t f -> f
You could then write your cond function like:
cond = \b t f -> b t f
Examples:
Prelude> cond true "foo" "bar"
"foo"
Prelude> cond false "foo" "bar"
"bar"
Read more in Travis Whitaker's article Scrap Your Constructors: Church Encoding Algebraic Types.
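Other Boolean operations can be encoded in the same style; a small sketch building on the true/false definitions above (the names not', and' and or' are just illustrative):

not' b   = \t f -> b f t   -- swap the branches
and' p q = p q false       -- if p then q else false
or'  p q = p true  q       -- if p then true else q

-- e.g. cond (and' true false) "foo" "bar" evaluates to "bar"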

Related

Question on Pattern Matching: Why can I not use the same arguments for a conjunction?

Context from the Programming in Haskell textbook, page 32
This version also has the benefit that, under lazy evaluation as
discussed in chapter 12, if the first argument is False, then the
result False is returned without the need to evaluate the second
argument. In practice, the prelude defines ∧ using equations that have
this same property, but make the choice about which equation applies
using the value of the first argument only:
True ∧ b = b
False ∧ _ = False
That is, if the first argument is True, then the result is the value
of the second argument, and, if the first argument is False, then the
result is False. Note that for technical reasons the same name may not
be used for more than one argument in a single equation. For example,
the following definition for the operator ∧ is based upon the
observation that, if the two arguments are equal, then the result is
the same value, otherwise the result is False, but is invalid because
of the above naming requirement:
Question
I do not understand the explanation for not being able to use the expression
b ∧ b = b
_ ∧ _ = False
To accept a definition such as
myFunction x x = ...
myFunction _ _ = ...
we need to be able to test two values for equality. That is, given two arbitrary values x and y, we need to be able to compute whether x == y holds.
This can be done in many cases, but (perhaps surprisingly) not all. We surely can do that when x and y are Bools or Ints, but not when they are Integer -> Bool or IO (), for instance, since we can not really test functions on infinitely many points, or IO actions on infinitely many worlds.
Consequently, pattern matching is allowed only when variables are used linearly, i.e. when they appear at most once in a pattern. So, myFunction x x = ... is disallowed.
If needed, when == is defined (like on Bools) we can use guards to express the same idea:
myFunction x1 x2 | x1 == x2  = ...
                 | otherwise = ...
In principle, Haskell could automatically translate non-linear patterns into patterns with guards using ==, possibly causing an error if == is not available for the type at hand. However, the language was not defined this way, so the guard must be written explicitly when it is needed.
This automatic translation could be convenient, but it could also be a source of subtle bugs if one inadvertently reuses a variable in a pattern without realizing it.
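Applied to the textbook's ∧, the intended "equal arguments" definition can be written legally with a guard; a small sketch (andB is an illustrative name standing in for ∧):

andB :: Bool -> Bool -> Bool
andB b1 b2 | b1 == b2  = b1      -- if the two arguments are equal, return that value
           | otherwise = False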

Does a function in Haskell always evaluate its return value?

I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set bar to product [1..] and not evaluate that? If the latter, bar is not returning its body in WHNF, right, so did it really 'evaluate' x? How could bar be strict in x if it doesn't hang on computing product [1..]?
First of all, Haskell does not specify when evaluation happens so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike strict languages you have to distinguish between two "callers" of a function, the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all, instead it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)) since we have to print it. So the head function will actually be called. The head function is defined by pattern matching, so it needs the value of the argument. So fst is called. It too is defined by pattern matching and needs its argument, so bar is called, and bar will return a pair, and fst will evaluate and return its first component.

And now finally foo is "called"; and by called I mean that the thunk is evaluated (entered, as it's sometimes called in TIM terminology), because the value is needed. The only reason the actual code for foo is called is that we want a value. So foo had better return a value (i.e., a WHNF).

The foo function will evaluate its argument and end up in the second branch. Here it will tail call into the code for map. The map function is defined by pattern match and it will evaluate its argument, which is a cons. So map will return the following {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. So as you can see, map just returns a cons cell with the head being a closure and the tail being a closure.
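If you want to watch this happen, GHCi's :sprint command prints a binding without forcing it, showing unevaluated thunks as _. A small illustrative session:

Prelude> let xs = map (*2) [1..5] :: [Int]
Prelude> :sprint xs
xs = _
Prelude> head xs
2
Prelude> :sprint xs
xs = 2 : _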
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
I always found that the term "evaluate," which I had learned in other contexts (e.g., Scheme programming), always got me all confused when I tried to apply it to Haskell, and that I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which really just means that the expression is either a nullary data constructor or an application of a data constructor.
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x
map _ [] = []
map f (x:xs) = f x : map f xs
-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (map (*2) [1..]) -- using the definition of foo
==> head (map (*2) (1 : [2..])) -- using the forcing rule for [n..]
==> head (1*2 : map (*2) [2..]) -- using the definition of map
==> 1*2 -- using the definition of head
==> 2 -- using the definition of *
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.1
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.2
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So the call to foo has been evaluated, and yet foo's body was not: it just produced the thunk map (*2) [2..]. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
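A quick GHCi illustration of that difference, assuming the foo from above is in scope (length stands in for a consumer that needs the whole spine of the list):

Prelude> head (foo [1..])
4
Prelude> length (foo [1..])  -- demands the entire infinite spine; never finishes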
So, just to complete my example, we're evaluating map (*2) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (*2) 2 : map (*2) [3..], which is in WHNF. And so it's done, we can finally return to head.
head ((*2) 2 : map (*2) [3..]) doesn't need to inspect either side of the :, it just needs to know that there is one so it can return the left side. So it just returns the unevaluated thunk (*2) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(*2) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
1 Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or cannot loop/error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster so GHC optimizations do actually do this.
2 I'm skipping over a few details here, like that print does have some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. This also applies to const, where the second argument can be safely ignored and is therefore never computed. As for map, laziness means we don't care about the rest of the list; only what is actually needed gets computed - in your case, just the second element of the original list.
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
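Here is a tiny sketch of the same point using const (the boom binding is purely illustrative): the unused thunk is passed in but never forced, so the error never fires.

main :: IO ()
main = do
  let boom = error "this thunk is never forced" :: Int
  print (const 42 boom)   -- prints 42; boom is passed to const but never evaluated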
A function's result could be an ordinary return value:
head (x:_) = x
or exception/error:
head _ = error "Head: List is empty!"
or bottom (⊥):
a = a
b = last [1 ..]

Why should I use case expressions if I can use "equations"?

I'm learning Haskell, from the book "Real World Haskell". In pages 66 and 67, they show the case expressions with this example:
fromMaybe defval wrapped =
    case wrapped of
      Nothing    -> defval
      Just value -> value
I remember a similar thing in F#, but (as shown earlier in the book) Haskell can define functions as a series of equations, while AFAIK F# cannot. So I tried to define this in such a way:
fromMaybe2 defval Nothing = defval
fromMaybe2 defval (Just value) = value
I loaded it in GHCi and, after a couple of results, I convinced myself it was the same. However, this makes me wonder: why should there be case expressions when equations:
are more comprehensible (it's Mathematics; why use case something of, who says that?);
are less verbose (2 vs 4 lines);
require much less structuring and syntactic sugar (-> could be an operator, look what they've done!);
only use variables when needed (in basic cases, such as this wrapped just takes up space).
What's good about case expressions? Do they exist only because similar FP-based languages (such as F#) have them? Am I missing something?
Edit:
I see from #freyrs's answer that the compiler makes these exactly the same. So, equations can always be turned into case expressions (as expected). My next question is the converse; can one go the opposite route of the compiler and use equations with let/where expressions to express any case expression?
This comes from a culture of having small "kernel" expression-oriented languages. Haskell grows from Lisp's roots (i.e. lambda calculus and combinatory logic); it's basically Lisp plus syntax plus explicit data type definitions plus pattern matching minus mutation plus lazy evaluation (lazy evaluation was itself first described in Lisp, AFAIK, in the '70s).
Lisp-like languages are expression-oriented, i.e. everything is an expression, and a language's semantics is given as a set of reduction rules, turning more complex expressions into simpler ones, and ultimately into "values".
Equations are not expressions. Several equations could be somehow mashed into one expression; you'd have to introduce some syntax for that; case is that syntax.
Rich syntax of Haskell gets translated into smaller "core" language, that has case as one of its basic building blocks. case has to be a basic construct, because pattern-matching in Haskell is made to be such a basic, core feature of the language.
To your new question, yes you can, by introducing auxiliary functions as Luis Casillas shows in his answer, or with the use of pattern guards, so his example becomes:
foo x y | (Meh o p) <- z = baz y p o
        | (Gah t q) <- z = quux x t q
  where
    z = bar x
The two functions compile into exactly the same internal code in Haskell (called Core), which you can dump out by passing the flags -ddump-simpl -dsuppress-all to ghc.
It may look a bit intimidating with the variable names, but it's effectively just an explicitly typed version of the code you wrote above. The only difference is the variable names.
fromMaybe2
fromMaybe2 =
  \ @ t_aBC defval_aB6 ds_dCK ->
    case ds_dCK of _ {
      Nothing -> defval_aB6;
      Just value_aB8 -> value_aB8
    }
fromMaybe
fromMaybe =
  \ @ t_aBJ defval_aB3 wrapped_aB4 ->
    case wrapped_aB4 of _ {
      Nothing -> defval_aB3;
      Just value_aB5 -> value_aB5
    }
The paper "A History of Haskell: Being Lazy with Class" (PDF) provides some useful perspective on this question. Section 4.4 ("Declaration style vs. expression style," p.13) is about this topic. The money quote:
[W]e engaged in furious debate about which style was “better.” An underlying assumption was that if possible there should be “just one way to do something,” so that, for example, having both let and where would be redundant and confusing. [...] In the end, we abandoned the underlying assumption, and provided full syntactic support for both styles.
Basically they couldn't agree on one so they threw both in. (Note that quote is explicitly about let and where, but they treat both that choice and the case vs. equations choice as two manifestations of the same basic choice—what they call "declaration style" vs. "expression style.")
In modern practice, the declaration style (your "series of equations") has become the more common one. case is often seen in this situation, where you need to match on a value that is computed from one of the arguments:
foo x y = case bar x of
            Meh o p -> baz y p o
            Gah t q -> quux x t q
You can always rewrite this to use an auxiliary function:
foo x y = go (bar x)
  where go (Meh o p) = baz y p o
        go (Gah t q) = quux x t q
This has the very minor disadvantage that you need to name your auxiliary function—but go is normally a perfectly fine name in this situation.
A case expression can be used anywhere an expression is expected, while equations can't. Example:
1 + (case even 9 of True -> 2; _ -> 3)
You can even nest case expressions, and I've seen code that does that. However, I tend to stay away from case expressions and try to solve the problem with equations, even if I have to introduce a local function using where/let.
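To illustrate the nesting point above, here is a small sketch (the function name and branches are purely illustrative):

classify :: Int -> String
classify n = "n is " ++ (case compare n 0 of
                           LT -> "negative"
                           _  -> case n `mod` 2 of
                                   0 -> "non-negative and even"
                                   _ -> "non-negative and odd")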
Every definition using equations is equivalent to one using case. For instance
negate True = False
negate False = True
stands for
negate x = case x of
    True  -> False
    False -> True
That is to say, these are two ways of expressing the same thing, and the former is translated to the latter by GHC.
From the Haskell code that I've read, it seems canonical to use the first style wherever possible.
See section 4.4.3.1 of the Haskell '98 report.
The answer to your added question is yes, but it's pretty ugly.
case exp of
  pat1 -> e1
  pat2 -> e2
  etc.
can, I believe, be emulated by
let
  f pat1 = e1
  f pat2 = e2
  etc.
in f exp
as long as f is not free in exp, e1, e2, etc. But you shouldn't do that because it's horrible.
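As a concrete (and equally ugly) sketch, the fromMaybe example from the question could be rendered that way like this (fromMaybe3 is just an illustrative name):

fromMaybe3 defval wrapped =
  let f Nothing      = defval
      f (Just value) = value
  in  f wrapped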

What would pattern matching look like in a strict Haskell?

As a research experiment, I've recently worked on implementing strict-by-default Haskell modules. Instead of being lazy-by-default and having ! as an escape hatch, we're strict-by-default and have ~ as an escape hatch. This behavior is enabled using a {-# LANGUAGE Strict #-} pragma.
While working on making patterns strict, I came upon an interesting question: should patterns be strict at the "top level" only, or in all bound variables? For example, if we have
f x = case x of
        y -> ...
we will force y even though Haskell would not do so. The more tricky case is
f x = case x of
        Just y -> ...
Should we interpret that as
f x = case x of
        Just y -> ...  -- already strict in 'x' but not in 'y'
or
f x = case x of
        Just !y -> ...  -- now also strict in 'y'
(Note that we're using the normal, lazy Haskell Just here.)
One design constraint that might be of value is this: I want the pragma to be modular. For example, even with Strict turned on, we don't evaluate arguments to functions defined in other modules; doing so would make it non-modular.
Is there any prior art here?
As far as I understand things, refutable patterns are always strict at least on the outer level, which is another way of saying that the scrutinized expression must have been evaluated to WHNF, otherwise you couldn't see whether it is a Just or a Nothing.
Hence your
!(Just y) -> ...
notation appears useless.
OTOH, since in a strict language, the argument to Just must already have been evaluated, the notation
Just !y ->
doesn't make sense either.
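For comparison, in ordinary lazy Haskell you get the second interpretation only by opting in with BangPatterns; a sketch (forcePayload is an illustrative name):

{-# LANGUAGE BangPatterns #-}

-- The bang forces the payload of the Just to WHNF as soon as the pattern matches,
-- on top of the outer-level forcing that the refutable pattern already does.
forcePayload :: Maybe Int -> Int
forcePayload x = case x of
  Just !y -> y + 1
  Nothing -> 0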

Short circuiting (&&) in Haskell

A quick question that has been bugging me lately. Does Haskell perform all the equivalence tests in a function that returns a Boolean, even if an earlier one returns False?
For example
f a b = ((a+b) == 2) && ((a*b) == 2)
If the first test returns false, will it perform the second test after the &&? Or is Haskell lazy enough to not do it and move on?
It should be short-circuited, just like in other languages. It's defined like this in the Prelude:
(&&) :: Bool -> Bool -> Bool
True && x = x
False && _ = False
So if the first parameter is False, the second never needs to be evaluated.
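A quick GHCi check, using undefined as a stand-in for an expensive (or failing) second test:

Prelude> False && undefined
False
Prelude> True && undefined
*** Exception: Prelude.undefined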
Like Martin said, languages with lazy evaluation never evaluate anything whose value is not immediately needed. In a lazy language like Haskell, you get short-circuiting for free. In most languages, the || and && and similar operators must be built specially into the language in order for them to short-circuit evaluation. However, in Haskell, lazy evaluation makes this unnecessary. You could even define a function that short-circuits yourself:
scircuit fb sb = if fb then fb else sb
This function will behave just like the logical 'or' operator. Here is how || is defined in Haskell:
True || _ = True
False || x = x
So, to give you the specific answer to your question, no. If the left hand side of the || is true, the right hand side is never evaluated. You can put two and two together for the other operators that 'short circuit'.
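The same quick check for (||), again using undefined as a stand-in for the right-hand side:

Prelude> True || undefined
True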
A simple test to "prove" that Haskell DOES have short-circuiting, as Caleb said.
If you try to run a summation over an infinite list, you will get a stack overflow:
Prelude> foldr (+) 0 $ repeat 0
*** Exception: stack overflow
But if you run e.g. (||) (logical OR) on an infinite list, you will get a result quickly, because of short-circuiting:
Prelude> foldr (||) False $ repeat True
True
Lazy evaluation means that nothing is evaluated until it is really needed.

Resources