Short circuiting (&&) in Haskell - haskell

A quick question that has been bugging me lately. Does Haskell perform all the equivalence tests in a function that returns a Boolean, even if one of them returns False?
For example
f a b = ((a+b) == 2) && ((a*b) == 2)
If the first test returns false, will it perform the second test after the &&? Or is Haskell lazy enough to not do it and move on?

It should be short-circuited, just like in other languages. It's defined like this in the Prelude:
(&&) :: Bool -> Bool -> Bool
True && x = x
False && _ = False
So if the first argument is False, the second never needs to be evaluated.

Like Martin said, languages with lazy evaluation never evaluate anything whose value is not immediately needed. In a lazy language like Haskell, you get short circuiting for free. In most languages, the || and && and similar operators must be built specially into the language in order for them to short-circuit evaluation. In Haskell, however, lazy evaluation makes this unnecessary. You could even define a short-circuiting function yourself:
scircuit fb sb = if fb then fb else sb
This function will behave just like the logical 'or' operator. Here is how || is defined in Haskell:
True || _ = True
False || x = x
So, to give you the specific answer to your question: no. If the left-hand side of || is True, the right-hand side is never evaluated. You can put two and two together for the other operators that 'short circuit'.
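A quick GHCi check makes this visible (hypothetical session; undefined would raise an error if it were ever forced):
Prelude> False && undefined
False
Prelude> True || undefined
True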

A simple test to "prove" that Haskell DOES have short circuiting, as Caleb said.
If you try to run a summation over an infinite list, you will get a stack overflow:
Prelude> foldr (+) 0 $ repeat 0
*** Exception: stack overflow
But if you run e.g. (||) (logical OR) over an infinite list, you will get a result quickly, because of short circuiting:
Prelude> foldr (||) False $ repeat True
True
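The same holds for (&&), which is the operator the question asked about (hypothetical session):
Prelude> foldr (&&) True $ repeat False
False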

Lazy evaluation means that nothing is evaluated until it is really needed.

Related

How do I modify this Haskell function so I don't have to import Data.Bool and only use Prelude functions?

I want to build the function below using only built-in Prelude functions, without importing Data.Bool. I want to replace the bool function with something else so that I don't have to import Data.Bool and the function produces the same output as the one below. How can I do this so it returns the same output?
increment :: [Bool] -> [Bool]
increment x = case x of
  [] -> [True]
  (y : ys) -> not y : bool id increment y ys
bool from Data.Bool does exactly the same thing as an if expression, so that is one way to implement it:
bool x y b = if b then y else x
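For example, simply inlining that definition removes the need for the Data.Bool import (a sketch):
increment :: [Bool] -> [Bool]
increment x = case x of
  [] -> [True]
  (y : ys) -> not y : (if y then increment else id) ys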
#dfeuer suggested in a comment that you should throw away this code because it's disgusting, and instead try to write it yourself. This might be distressing to you if you're the one that wrote the code in the first place and can't see why it's disgusting, so allow me to elaborate.
In fact, "disgusting" is too strong a word. However, the code is unnecessarily complex and difficult to understand. A more straightforward implementation does all the processing using pattern matching on the function argument:
increment :: [Bool] -> [Bool]
increment [] = [True]
increment (False : rest) = True : rest
increment (True : rest) = False : increment rest
This code is easier to read for most people, because all of the decision logic is at the same "level" and implemented the same way -- by inspecting the three patterns on the left-hand side of the definitions, you can see exactly how the three, mutually exclusive cases are handled at a glance.
In contrast, the original code requires the reader to consider the pattern match against an empty versus non-empty list, the effect of the not computation on the first Boolean, the bool call based on that same Boolean, and the application of either the function id or the recursive increment to the rest of the Boolean list. For any given input, you need to consider all four conceptually distinct processing steps to understand what the function is doing, and at the end you'll probably still be uncertain about which steps were triggered by which aspects of the input.
Now, ideally, GHC with -O2 would compile both of these versions to exactly the same code internally. It almost does. But it turns out that, due to an apparent optimization bug, the original code ends up being slightly less efficient than this rewritten version because it unnecessarily checks y == True twice.
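As a quick sanity check of the rewritten version (GHCi sketch; the list behaves like a little-endian binary counter):
Prelude> increment [True, True, False]
[False,False,True]
Prelude> increment []
[True]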

Could `if-then-else` (always) be replaced by a function call?

This question is out of curiosity about how PLs work, more than anything else. (It actually came to me while looking at SML, which differs from Haskell in that the former uses call-by-value; but my question is about Haskell.)
Haskell (as I understand) has "call-by-need" semantics.
Does this imply that if I define a function as follows:
cond True thenExp elseExp = thenExp
cond _ thenExp elseExp = elseExp
that this will always behave exactly like an if-then-else expression?
Or, in another sense, can if-then-else be regarded as syntactic sugar for something that could've been defined as a function?
Edit:
Just to contrast the behaviour of Haskell with Standard ML, define (in SML)
fun cond p t f = if p then t else f;
and then the factorial function
fun factc n = cond (n=0) 1 (n * factc (n-1));
evaluating factc 1 (say) never finishes, because the recursion in the last argument of cond never terminates.
However, defining
fun fact n = if n=0 then 1 else n * fact (n-1);
works as we expect, because only the branch selected by the condition is evaluated.
Maybe there are clever ways to defer argument evaluation in SML (I don't know, as I'm not that familiar with it yet), but the point is that in a call-by-value language, if-then-else often behaves differently.
My question was whether this (call-by-need vs call-by-value) was the principal reason behind this difference (and the consensus seems to be "yes").
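For reference, the analogous Haskell definitions do terminate, because the argument that cond discards is never forced (a sketch, restating the cond from above):
cond :: Bool -> a -> a -> a
cond True  thenExp _ = thenExp
cond False _ elseExp = elseExp

factc :: Integer -> Integer
factc n = cond (n == 0) 1 (n * factc (n - 1))
-- ghci> factc 5  ==>  120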
As the Haskell Wiki page on if-then-else says:
For processing conditions, the `if-then-else` **syntax was defined in Haskell98**. However it could be simply replaced by the function `if'`
with
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
So if we use the above if' function, and we need to evaluate it (since Haskell is lazy, we do not necessarily need to evaluate an if-then-else expression at all), Haskell will first evaluate the first operand to decide whether it is True or False. If it is True, it will return the first expression; if it is False, it will return the second expression. Note that this does not per se mean that we (fully) evaluate these expressions: only when we need the result do we evaluate them.
But in case the condition is True, there is no reason at all to ever evaluate the second expression, since we ignore it.
In case we share an expression over multiple parts of the expression tree, it is of course possible that another call will (partially) evaluate the other expression.
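A small GHCi check (hypothetical session, with the if' above in scope) shows that the branch that is not taken is never forced:
Prelude> if' True "taken" undefined
"taken"
Prelude> if' False undefined "taken"
"taken"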
ghci even has an option to override the if <expr> then <expr> else <expr> syntax: the -XRebindableSyntax flag. Among other things, it means that:
Conditionals (e.g. if e1 then e2 else e3) means ifThenElse e1 e2 e3. However case expressions are unaffected.
Yes, a lazy language will evaluate an expression only when it needs to. Therefore it is no problem to convert the then/else parts of the expression into function arguments.
This is unlike strict languages like Idris or OCaml, where arguments for a function call are evaluated before the called function is executed.
Yes, you can regard if/then/else as syntactic sugar. In fact, a programming language doesn't even need Boolean primitives, so you can also consider True and False as syntactic sugar.
The lambda calculus defines Boolean values as a set of functions that take two arguments:
true = λt.λf.t
false = λt.λf.f
Both are functions that take two values, t for true and f for false. The function true always returns the t value, whereas the function false always returns the f value.
In Haskell, you can define similar functions like this:
true = \t f -> t
false = \t f -> f
You could then write your cond function like:
cond = \b t f -> b t f
Examples:
Prelude> cond true "foo" "bar"
"foo"
Prelude> cond false "foo" "bar"
"bar"
Read more in Travis Whitaker's article Scrap Your Constructors: Church Encoding Algebraic Types.

Swap logical operator arguments for faster evaluation?

Does any programming language implement swapping of arguments of logical operation (such as AND, OR) for faster evaluation?
Example (I think such a method could be implemented in a lazy evaluation language like Haskell):
Let's say we have defined two predicates, A and B.
During program execution, B was evaluated to "True" and A was not evaluated.
Later in the execution we reach the condition IF A OR B.
The arguments of "OR" are swapped, and the condition becomes IF B OR A.
The condition is evaluated to "True" without evaluating A.
Under lazy evaluation, AND and OR are not commutative.
foo :: Int -> Bool
foo n = False && foo (n+1) -- evaluates to False
bar :: Int -> Bool
bar n = bar (n+1) && False -- diverges
Under eager evaluation (strict semantics), and in the absence of side effects, they are commutative. I am not aware of any compiler that routinely performs such an optimization here, though. (Constant folding aside.)
If side effects are present, AND/OR are not commutative, of course. For instance, an OCaml compiler cannot swap the arguments unless it can prove that at least one of them is free of side effects.
It's not done automatically as part of the language (possibly because it would not be free to perform this reordering check, so you would often end up paying for an optimization that can't be made). However, there are library functions you can use to this end. See, for example, unamb. With that, you can write
(|||) :: Bool -> Bool -> Bool
a ||| b = (a || b) `unamb` (b || a)
And if one operation is cheaper to compute, it can be chosen.
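A sketch of the intended behaviour (hypothetical GHCi session; assumes the unamb package and the (|||) above, with last (repeat False) standing in for an expensive or non-terminating operand):
Prelude> True ||| last (repeat False)
True
Prelude> last (repeat False) ||| True
True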

A real life example when pattern matching is more preferable than a case expression in Haskell?

So I have been working through the Real World Haskell book and I did the lastButOne exercise. I came up with two solutions, one with pattern matching:
lastButOne :: [a] -> a
lastButOne ([]) = error "Empty List"
lastButOne (x:[]) = error "Only one element"
lastButOne (x:[x2]) = x
lastButOne (x:xs) = lastButOne xs
And one using a case expression
lastButOneCase :: [a] -> a
lastButOneCase x =
  case x of
    [] -> error "Empty List"
    (x:[]) -> error "Only One Element"
    (x:[x2]) -> x
    (x:xs) -> lastButOneCase xs
What I wanted to find out is when pattern matching would be preferred over case expressions and vice versa. This example was not good enough for me because, while both functions work as intended, it did not lead me to choose one implementation over the other. So the choice "seems" purely preferential at first glance?
So are there good examples in actual source code, either in Haskell's own source, on GitHub, or somewhere else, where one can see when either method is preferred or not?
First a short terminology diversion: I would call both of these "pattern matching". I'm not sure there is a good term for distinguishing pattern-matching-via-case and pattern-matching-via-multiple-definition.
The technical distinction between the two is quite light indeed. You can verify this yourself by asking GHC to dump the Core it generates for the two functions, using the -ddump-simpl flag. I tried this at a few different optimization levels, and in all cases the only differences in the Core were naming. (By the way, if anyone knows a good "semantic diff" program for Core -- one that knows about, at the very least, alpha equivalence -- I'm very interested in hearing about it!)
There are a few small gotchas to watch out for, though. You might wonder whether the following is also equivalent:
{-# LANGUAGE LambdaCase #-}
lastButOne = \case
  [] -> error "Empty List"
  (x:[]) -> error "Only One Element"
  (x:[x2]) -> x
  (x:xs) -> lastButOne xs
In this case, the answer is yes. But consider this similar-looking one:
-- ambiguous type error
sort = \case
  [] -> []
  x:xs -> insert x (sort xs)
All of a sudden this is a typeclass-polymorphic CAF, and so on old GHCs this will trigger the monomorphism restriction and cause an error, whereas the superficially identical version with an explicit argument does not:
-- this is fine!
sort [] = []
sort (x:xs) = insert x (sort xs)
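(As a side note, giving the \case version an explicit type signature also sidesteps the monomorphism restriction; a sketch, assuming LambdaCase and insert from Data.List:)
sort :: Ord a => [a] -> [a]
sort = \case
  [] -> []
  x:xs -> insert x (sort xs)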
The other minor difference (which I forgot about -- thank you to Thomas DuBuisson for reminding me) is in the handling of where clauses. Since where clauses are attached to binding sites, they cannot be shared across multiple equations but can be shared across multiple cases. For example:
-- error; the where clause attaches to the second equation, so
-- empty is not in scope in the first equation
null [] = empty
null (x:xs) = nonempty
  where empty = True
        nonempty = False
-- ok; the where clause attaches to the equation, so both empty
-- and nonempty are in scope for the entire case expression
null x = case x of
    [] -> empty
    x:xs -> nonempty
  where
    empty = True
    nonempty = False
You might think this means you can do something with equations that you can't do with case expressions, namely, have different meanings for the same name in the two equations, like this:
null [] = answer where answer = True
null (x:xs) = answer where answer = False
However, since the patterns of case expressions are binding sites, this can be emulated in case expressions as well:
null x = case x of
  [] -> answer where answer = True
  x:xs -> answer where answer = False
Whether the where clause is attached to the case's pattern or to the equation depends on indentation, of course.
If I recall correctly, both of these will "desugar" into the same Core code in GHC, so the choice is purely stylistic. Personally I would go for the first one. As someone said, it's shorter, and what you term "pattern matching" is intended to be used this way. (Actually the second version is also pattern matching, just using a different syntax for it.)
It's a stylistic preference. Some people sometimes argue that one choice or another makes certain code changes take less effort, but I generally find such arguments, even when accurate, don't actually amount to a big improvement. So do as you like.
A perspective that's well worth bringing into this is Hudak, Hughes, Peyton Jones and Wadler's paper "A History of Haskell: Being Lazy With Class". Section 4.4 is about this topic. The short story: Haskell supports both because the designers couldn't agree on one over the other. Yep, again, it's a stylistic preference.
When you're matching on more than one expression, case expressions start to look more attractive.
f pat11 pat21 = ...
f pat11 pat22 = ...
f pat11 pat23 = ...
f pat12 pat24 = ...
f pat12 pat25 = ...
can be more annoying to write than
f pat11 y =
  case y of
    pat21 -> ...
    pat22 -> ...
    pat23 -> ...
f pat12 y =
  case y of
    pat24 -> ...
    pat25 -> ...
More significantly, I've found that when using GADTs, the "declaration style" doesn't seem to propagate evidence from left to right the way I'd expect it to. There might be some trick I haven't worked out, but I end up having to nest case expressions to avoid spurious incomplete pattern warnings.

Does a function in Haskell always evaluate its return value?

I'm trying to better understand Haskell's laziness, such as when it evaluates an argument to a function.
From this source:
But when a call to const is evaluated (that’s the situation we are interested in, here, after all), its return value is evaluated too ... This is a good general principle: a function obviously is strict in its return value, because when a function application needs to be evaluated, it needs to evaluate, in the body of the function, what gets returned. Starting from there, you can know what must be evaluated by looking at what the return value depends on invariably. Your function will be strict in these arguments, and lazy in the others.
So a function in Haskell always evaluates its own return value? If I have:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
head (foo [1..]) -- = 4
According to the above paragraph, map (* 2) xs must be evaluated. Intuitively, I would think that means applying the map to the entire list, resulting in an infinite loop.
But, I can successfully take the head of the result. I know that : is lazy in Haskell, so does this mean that evaluating map (* 2) xs just means constructing something else that isn't fully evaluated yet?
What does it mean to evaluate a function applied to an infinite list? If the return value of a function is always evaluated when the function is evaluated, can a function ever actually return a thunk?
Edit:
bar x y = x
var = bar (product [1..]) 1
This code doesn't hang. When I create var, does it not evaluate its body? Or does it set var to product [1..] and not evaluate that? If the latter, bar is not returning its body in WHNF, right, so did it really 'evaluate' x? How could bar be strict in x if it doesn't hang on computing product [1..]?
First of all, Haskell does not specify when evaluation happens so the question can only be given a definite answer for specific implementations.
The following is true for all non-parallel implementations that I know of, like ghc, hbc, nhc, hugs, etc (all G-machine based, btw).
BTW, something to remember is that when you hear "evaluate" for Haskell it normally means "evaluate to WHNF".
Unlike strict languages you have to distinguish between two "callers" of a function, the first is where the call occurs lexically, and the second is where the value is demanded. For a strict language these two always coincide, but not for a lazy language.
Let's take your example and complicate it a little:
foo [] = []
foo (_:xs) = map (* 2) xs
bar x = (foo [1..], x)
main = print (head (fst (bar 42)))
The foo function occurs in bar. Evaluating bar will return a pair, and the first component of the pair is a thunk corresponding to foo [1..]. So bar is what would be the caller in a strict language, but in the case of a lazy language it doesn't call foo at all; instead it just builds the closure.
Now, in the main function we actually need the value of head (fst (bar 42)), since we have to print it. So the head function will actually be called. The head function is defined by pattern matching, so it needs the value of its argument. So fst is called. It too is defined by pattern matching and needs its argument, so bar is called; bar will return a pair, and fst will evaluate it and return its first component.

And now foo is finally "called"; and by "called" I mean that the thunk is evaluated (entered, as it's sometimes called in TIM terminology), because the value is needed. The only reason the actual code for foo runs is that we want a value, so foo had better return a value (i.e., a WHNF). The foo function will evaluate its argument and end up in the second branch. Here it will tail call into the code for map. The map function is defined by pattern matching and it will evaluate its argument, which is a cons. So map will return the following: {(*2) y} : {map (*2) ys}, where I have used {} to indicate a closure being built. So, as you can see, map just returns a cons cell with the head being a closure and the tail being a closure.
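You can watch this happen in GHCi with :sprint, which prints _ for the parts of a binding that are still unevaluated thunks (a sketch; assumes the definitions above are loaded, and the exact output may vary a little between GHC versions):
Prelude> let p = bar 42 :: ([Integer], Integer)
Prelude> :sprint p
p = _
Prelude> head (fst p)
4
Prelude> :sprint p
p = (4 : _,_)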
To understand the operational semantics of Haskell better I suggest you look at some paper describing how to translate Haskell to some abstract machine, like the G-machine.
The term "evaluate," which I had learned in other contexts (e.g., Scheme programming), always got me confused when I tried to apply it to Haskell, and I made a breakthrough when I started to think of Haskell in terms of forcing expressions instead of "evaluating" them. Some key differences:
"Evaluation," as I learned the term before, strongly connotes mapping expressions to values that are themselves not expressions. (One common technical term here is "denotations.")
In Haskell, the process of forcing is IMHO most easily understood as expression rewriting. You start with an expression, and you repeatedly rewrite it according to certain rules until you get an equivalent expression that satisfies a certain property.
In Haskell the "certain property" has the unfriendly name weak head normal form ("WHNF"), which really just means that the expression is either a nullary data constructor or an application of a data constructor.
Let's translate that to a very rough set of informal rules. To force an expression expr:
If expr is a nullary constructor or a constructor application, the result of forcing it is expr itself. (It's already in WHNF.)
If expr is a function application f arg, then the result of forcing it is obtained this way:
Find the definition of f.
Can you pattern match this definition against the expression arg? If not, then force arg and try again with the result of that.
Substitute the pattern match variables in the body of f with the parts of (the possibly rewritten) arg that correspond to them, and force the resulting expression.
One way of thinking of this is that when you force an expression, you're trying to rewrite it minimally to reduce it to an equivalent expression in WHNF.
Let's apply this to your example:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
-- We want to force this expression:
head (foo [1..])
We will need definitions for head and map:
head [] = undefined
head (x:_) = x
map _ [] = []
map f (x:xs) = f x : map f xs
-- Not real code, but a rule we'll be using for forcing infinite ranges.
[n..] ==> n : [(n+1)..]
So now:
head (foo [1..]) ==> head (foo (1 : [2..]))       -- using the forcing rule for [n..], so foo can pattern match
                 ==> head (map (*2) [2..])        -- using the definition of foo
                 ==> head (map (*2) (2 : [3..]))  -- using the forcing rule for [n..] again
                 ==> head (2*2 : map (*2) [3..])  -- using the definition of map
                 ==> 2*2                          -- using the definition of head
                 ==> 4                            -- using the definition of (*)
I believe the idea must be that in a lazy language if you're evaluating a function application, it must be because you need the result of the application for something. So whatever reason caused the function application to be reduced in the first place is going to continue to need to reduce the returned result. If we didn't need the function's result we wouldn't be evaluating the call in the first place, the whole application would be left as a thunk.
A key point is that the standard "lazy evaluation" order is demand-driven. You only evaluate what you need. Evaluating more risks violating the language spec's definition of "non-strict semantics" and looping or failing for some programs that should be able to terminate; lazy evaluation has the interesting property that if any evaluation order can cause a particular program to terminate, so can lazy evaluation.1
But if we only evaluate what we need, what does "need" mean? Generally it means either
a pattern match needs to know what constructor a particular value is (e.g. I can't know what branch to take in your definition of foo without knowing whether the argument is [] or _:xs)
a primitive operation needs to know the entire value (e.g. the arithmetic circuits in the CPU can't add or compare thunks; I need to fully evaluate two Int values to call such operations)
the outer driver that executes the main IO action needs to know what the next thing to execute is
So say we've got this program:
foo :: Num a => [a] -> [a]
foo [] = []
foo (_:xs) = map (* 2) xs
main :: IO ()
main = print (head (foo [1..]))
To execute main, the IO driver has to evaluate the thunk print (head (foo [1..])) to work out that it's print applied to the thunk head (foo [1..]). print needs to evaluate its argument in order to print it, so now we need to evaluate that thunk.
head starts by pattern matching its argument, so now we need to evaluate foo [1..], but only to WHNF - just enough to tell whether the outermost list constructor is [] or :.
foo starts by pattern matching on its argument. So we need to evaluate [1..], also only to WHNF. That's basically 1 : [2..], which is enough to see which branch to take in foo.2
The : case of foo (with xs bound to the thunk [2..]) evaluates to the thunk map (*2) [2..].
So foo is evaluated, and didn't evaluate its body. However, we only did that because head was pattern matching to see if we had [] or x : _. We still don't know that, so we must immediately continue to evaluate the result of foo.
This is what the article means when it says functions are strict in their result. Given that a call to foo is evaluated at all, its result will also be evaluated (and so, anything needed to evaluate the result will also be evaluated).
But how far it needs to be evaluated depends on the calling context. head is only pattern matching on the result of foo, so it only needs a result to WHNF. We can get an infinite list to WHNF (we already did so, with 1 : [2..]), so we don't necessarily get in an infinite loop when evaluating a call to foo. But if head were some sort of primitive operation implemented outside of Haskell that needed to be passed a completely evaluated list, then we'd be evaluating foo [1..] completely, and thus would never finish in order to come back to head.
So, just to complete my example, we're evaluating map (2 *) [2..].
map pattern matches its second argument, so we need to evaluate [2..] as far as 2 : [3..]. That's enough for map to return the thunk (2 *) 2 : map (2 *) [3..], which is in WHNF. And so it's done, we can finally return to head.
head ((2 *) 2 : map (2 *) [3..]) doesn't need to inspect either side of the :, it just needs to know that there is one so it can return the left side. So it just returns the unevaluated thunk (2 *) 2.
Again though, we only evaluated the call to head this far because print needed to know what its result is, so although head doesn't evaluate its result, its result is always evaluated whenever the call to head is.
(2 *) 2 evaluates to 4, print converts that into the string "4" (via show), and the line gets printed to the output. That was the entire main IO action, so the program is done.
1 Implementations of Haskell, such as GHC, do not always use "standard lazy evaluation", and the language spec does not require it. If the compiler can prove that something will always be needed, or cannot loop/error, then it's safe to evaluate it even when lazy evaluation wouldn't (yet) do so. This can often be faster so GHC optimizations do actually do this.
2 I'm skipping over a few details here, like that print does have some non-primitive implementation we could step inside and lazily evaluate, and that [1..] could be further expanded to the functions that actually implement that syntax.
Not necessarily. Haskell is lazy, meaning that it only evaluates when it needs to. This has some interesting effects. If we take the below code, for example:
-- File: lazinessTest.hs
(>?) :: a -> b -> b
a >? b = b
main = (putStrLn "Something") >? (putStrLn "Something else")
This is the output of the program:
$ ./lazinessTest
Something else
This indicates that putStrLn "Something" is never evaluated. But it's still being passed to the function, in the form of a 'thunk'. These 'thunks' are unevaluated values that, rather than being concrete values, are like a breadcrumb-trail of how to compute the value. This is how Haskell laziness works.
In our case, two 'thunks' are passed to >?, but only one is passed out, meaning that only one is evaluated in the end. This also applies to const, where the second argument can be safely ignored and therefore is never computed. As for map, GHC is smart enough to realise that we don't care about the end of the list, and only bothers to compute what it needs to; in your case, the second element of the original list.
However, it's best to leave the thinking about laziness to the compiler and keep coding, unless you're dealing with IO, in which case you really, really should think about laziness, because you can easily go wrong, as I've just demonstrated.
There are lots and lots of online articles on the Haskell wiki to look at, if you want more detail.
A function can evaluate to a return value:
head (x:_) = x
or to an exception/error:
head _ = error "Head: List is empty!"
or to bottom (⊥):
a = a
b = last [1 ..]

Resources