Are these premises about folds and recursion right? - haskell

When using foldr, the recursion occurs inside the given function, so when that function doesn't strictly evaluate both of its arguments and can return based on the first one alone, foldr is a good choice, because it works even on infinite lists
findInt :: Int -> [Int] -> Bool
findInt z [] = False
-- The recursion occurs inside the given function
findInt z (x:xs)
  | z == x    = True
  | otherwise = findInt z xs
equivalent to:
findInt' :: Int -> [Int] -> Bool
findInt' z = foldr (\x r -> if z == x then True else r) False
-- Where False is the "default value" (the result for the empty list, as in findInt z [] = False)
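As a quick sanity check (a hypothetical GHCi session): since the lambda returns True as soon as it finds a match, without ever touching r, findInt' terminates even on an infinite list:
*Main> findInt' 3 [1..]
True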
A situation when foldr is not appropriate:
addAll :: Int -> [Int] -> Int
addAll z [] = z
-- The recursion occours outside the given function (+)
addAll z (x:xs) = addAll (z + x) xs
In this case, because + is strict (it needs to evaluate both sides before returning),
it would be very useful to apply it in a way that produces a redex (reducible
expression) at each step, so we can avoid building thunks and, when each step is
forced before the next (eager rather than lazy evaluation), run in constant
space without pushing too much onto the stack
(similar to the advantages of a for loop in imperative algorithms)
addAll' :: Int -> [Int] -> Int
addAll' z [] = z
addAll' z (x:xs) = let z' = z + x
                   in seq z' $ addAll' z' xs
equivalent to:
addAll'' :: Int -> [Int] -> Int
addAll'' z = foldl' (+) z -- foldl' comes from Data.List
In this case, using foldr (inside recursion) doesn't make sense,
because it wouldn't create redexes.
It would be like this:
addAll''' :: Int -> [Int] -> Int
addAll''' z [] = z
addAll''' z (x:xs) = (+) x $ addAll''' z xs
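To see the difference concretely, here is a hypothetical GHCi session (the exact list size at which the lazy variants blow the stack depends on your settings):
-- foldl' forces each partial sum, so it runs in constant space:
*Main> foldl' (+) 0 [1..10000000]
50000005000000
-- foldr (+) 0 (like addAll''') must build the whole chain of additions
-- before reducing any of it, so a large enough list overflows the stack:
*Main> foldr (+) 0 [1..10000000]
*** Exception: stack overflow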
The main objective of this question is, first, to find out whether my premises are
right, or where they could be improved, and second, to help make the differences
between inside and outside recursion clearer for others who are also learning
Haskell, so that it's clear which approach is more appropriate for a given situation
Helpful links:
Haskell Wiki
Stackoverflow - Implications of foldr vs. foldl (or foldl')

Aside from the fact that foldr is the natural catamorphism of a list, while foldl and foldl' are not, a few guidelines for their use:
you are correct that foldr will always return, even on infinite lists, as long as the function is non-strict in its second argument, since the elements of the list are made available to the first argument of the function immediately (as opposed to foldl and foldl', where the elements of the list are not available to the first argument of the function until the list has been entirely consumed);
foldl' will be a better choice for finite lists if you want to ensure constant space, since it's tail recursive, but it will always traverse the entire list regardless of the strictness in the evaluation of the arguments to the function passed to it;
in general, foldr is equivalent to recursion, while foldl and foldl' are analogous to loops;
because foldr is the natural catamorphism, if your function needs to recreate the list (for example, if your function is just the list constructor ':'), foldr would be more adequate (see the sketch after this list);
with respect to foldl vs. foldl', foldl' is usually preferable because it will not build a huge thunk but, if the function passed to it is non-strict in its first argument and the list is not infinite, foldl may return a result while foldl' may give an error (there is a good example in the Haskell wiki).
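As an illustration of that "recreate the list" point (a small sketch of my own, not from the original guidelines): map can be written with foldr, since each step just rebuilds a cons cell, and thanks to foldr's laziness this version still works on infinite lists:
map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x acc -> f x : acc) []
-- e.g. take 3 (map' (*2) [1..]) == [2,4,6]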
As a side note, I believe that you are using the term "inside recursion" to define foldr and "outside recursion" for foldl and foldl', but I haven't seen these terms before in the literature. More commonly these functions are just referred to as folding from the right and folding from the left respectively, terms that, while perhaps not exactly correct, give a good notion of the order in which the elements of the list are passed to the function.

Related

Confused about parameters in Haskell lambdas

I've been learning Haskell recently and came across something I don't quite understand: the parameters of a lambda function.
In the Learn You a Haskell for Great Good book, chap. 5, there are the following two functions:
elem' :: (Eq a) => a -> [a] -> Bool
elem' y ys = foldr (\x acc -> if x == y then True else acc) False ys
reverse' :: [a] -> [a]
reverse' = foldl (\acc x -> x : acc) []
In the first function, the accumulator is listed as the lambda's second parameter, but its initial value (False) is the first argument to follow the lambda, which I took to mean it would be the first parameter, not the second, thus defying expectations.
Whereas, in the second function, it follows expectations, showing up as the lambda's first parameter, making the list that reverse' takes as a parameter the second for the lambda.
I tested both functions and they work as expected. I also noticed that one function involves a right fold and the other a left fold, but I'm not sure why that would alter the meaning of the parameters.
QUESTION: Can someone explain what I'm missing? Why do the parameters seem to swap places?
foldl and foldr expect the accumulating function to have different formats. The two functions have the following types:
foldl :: Foldable t => (b -> a -> b) -> b -> t a -> b
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
You're correct that in foldr, the accumulator is the second argument, and in foldl it's the first.
While this may seem unintuitive, it may help to think of foldl and foldr in terms of how they associate the values in a list; the "fold" page on the Haskell wiki has images illustrating this.
Treating the natural order of the list as left to right: In foldr, the accumulator starts at the right hand side of the list, so it's natural that it's the second argument, while in foldl, the opposite is true.
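Writing the associations out explicitly (a plain-text rendering of what those wiki images show):
foldr f z [a, b, c]  =  a `f` (b `f` (c `f` z))   -- accumulator comes in from the right
foldl f z [a, b, c]  =  ((z `f` a) `f` b) `f` c   -- accumulator comes in from the left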
It is just a convention that the accumulator in foldr is the second argument, and in foldl it is the first argument.
Why was this convention chosen?
The first reason was answered by #Joe: acc is the already-folded part of the list. In foldl it's the left part, but in foldr it's the right part. So it's natural to provide acc as the left operand (the first argument) of the folding operator in foldl, and as the right operand (the second argument) in foldr.
foldl always iterates over all the elements of the provided list, while foldr need not. You can give foldr a folding operator that skips the rest of the elements of the list; the first example does exactly that. The second argument acc in foldr is a value that has not been computed yet: it holds the fold of the rest of the elements, and if your folding operator skips it, it is never computed. In your example, if x == y you just "return" True (and skip the remaining elements); otherwise you "return" acc, which forces evaluation of the next element of the list. So foldr works lazily, but foldl works strictly.
There is another convention in Haskell: when an operator can work lazily, it usually has strict semantics in its first argument and non-strict semantics in its second. For example, && and || are operators of this sort.
False && undefined => False
True || undefined => True
The folding operator in your first example is lazy too:
(\x acc -> if x == y then True else acc) y undefined => True
And it can be rewritten in terms of || like this:
(\x acc -> x == y || acc)
Combining the above reasons together, we have what we have :-)

Parameters in anonymous function

I've been practicing with anonymous functions and got the following:
takeWhile' :: (a -> Bool) -> [a] -> [a]
takeWhile' f xs = foldl (\x y z -> if (f x) && z then x : y else y) xs [] True
which is basically a rewrite of the takeWhile function already in Haskell.
For those who don't know, the takeWhile function takes a list and a function and returns a new list with every element in the original list that satisfies the function until one of them gives false.
From my point of view everything seems to be correct: I have 3 arguments x, y and z ready to use in my anonymous function, x being the list of numbers, y the empty list where I'll be inserting every element, and z basically a debouncer so that if one of the elements doesn't meet the requirement, we don't insert any more.
And yet Haskell gives me the following error:
"Occurs check: cannot construct the infinite type: a ~ Bool -> [a]"
Any idea why?
The function passed to foldl takes two parameters: the accumulator x and the element y. So there is no z that is passed.
But even if that were somehow possible, there are still other issues. x is the accumulator here, so a list, which means that x : y makes no sense, since (:) :: a -> [a] -> [a] takes an element and a list, and constructs a new list.
You can however easily make use of foldr to implement a takeWhile function. Indeed:
takeWhile' p = foldr (\x -> if p x then (x :) else const []) []
We thus check if the predicate holds; if it does, we prepend x to the accumulator. If not, we return [], regardless of the value of the accumulator.
Due to the laziness of foldr, it will not look at elements after one has failed the predicate, since const [] ignores the value of the accumulator.
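A hypothetical GHCi session with that definition; thanks to the same laziness, it also works on infinite lists:
*Main> takeWhile' (< 3) [1, 2, 3, 4, 1]
[1,2]
*Main> takeWhile' (< 5) [1..]
[1,2,3,4]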

Why does foldr use a helper function?

In explaining foldr to Haskell newbies, the canonical definition is
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
But in GHC.Base, foldr is defined as
foldr k z = go
  where
    go []     = z
    go (y:ys) = y `k` go ys
It seems this definition is an optimization for speed, but I don't see why using the helper function go would make it faster. The source comments (see here) mention inlining, but I also don't see how this definition would improve inlining.
I can add some important details about GHC's optimization system.
The naive definition of foldr passes around a function. There's an inherent overhead in calling a function, especially when the function isn't known at compile time. It'd be really nice to be able to inline the definition of the function if it's known at compile time.
There are tricks available to perform that inlining in GHC, and this is an example of them. First, foldr needs to be inlined (I'll get to why later). foldr's naive implementation is recursive, so it cannot be inlined. So a worker/wrapper transformation is applied to the definition. The worker is recursive, but the wrapper is not. This allows foldr to be inlined, despite the recursion over the structure of the list.
When foldr is inlined, it creates a copy of all of its local bindings, too. It's more or less a direct textual inlining (modulo some renaming, and happening after the desugaring pass). This is where things get interesting. go is a local binding, and the optimizer gets to look inside it. It notices that it calls a function in the local scope, which it names k. GHC will often remove the k variable entirely, and will just replace it with the expression k reduces to. And then afterwards, if the function application is amenable to inlining, it can be inlined at this time - removing the overhead of calling a first-class function entirely.
Let's look at a simple, concrete example. This program will echo a line of input with all trailing 'x' characters removed:
dropR :: Char -> String -> String
dropR x r = if x == 'x' && null r then "" else x : r
main :: IO ()
main = do
  s <- getLine
  putStrLn $ foldr dropR "" s
First, the optimizer will inline foldr's definition and simplify, resulting in code that looks something like this:
main :: IO ()
main = do
  s <- getLine
  -- I'm changing the where clause to a let expression for the sake of readability
  putStrLn $ let { go [] = ""; go (x:xs) = dropR x (go xs) } in go s
And that's what the worker/wrapper transformation allows. I'm going to skip the remaining steps, but it should be obvious that GHC can now inline the definition of dropR, eliminating the function call overhead. This is where the big performance win comes from.
GHC cannot inline recursive functions, so
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
cannot be inlined. But
foldr k z = go
  where
    go []     = z
    go (y:ys) = y `k` go ys
is not a recursive function. It is a non-recursive function with a local recursive definition!
This means that, as #bheklilr writes, in map (foldr (+) 0) the foldr can be inlined and hence f and z replaced by (+) and 0 in the new go, and great things can happen, such as unboxing of the intermediate value.
As the comments say:
-- Inline only in the final stage, after the foldr/cons rule has had a chance
-- Also note that we inline it when it has *two* parameters, which are the
-- ones we are keen about specialising!
In particular, note the "we inline it when it has two parameters, which are the ones we are keen about specialising!"
What this is saying is that when foldr gets inlined, it's inlined only for the specific choice of f and z, not for the choice of the list being folded. I'm no expert, but it would seem this makes it possible to inline it in situations like
map (foldr (+) 0) some_list
so that the inlining happens in this line and not after map has been applied. This makes it optimizable in more situations and more easily. All the helper function does is mask the third argument so that {-# INLINE #-} can do its thing.
One tiny important detail not mentioned in other answers is that GHC, given a function definition like
f x y z w q = ...
cannot inline f until all of the arguments x, y, z, w, and q are applied. This means that it's often advantageous to use the worker/wrapper transformation to expose a minimal set of function arguments which must be applied before inlining can occur.
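To make that concrete, here is a small sketch of the same worker/wrapper pattern applied to a function of our own (illustrative names, not from GHC.Base):
-- Wrapper: non-recursive, and fully applied once 'f' and 'z' are given,
-- so GHC can inline it at each call site.
sumWith :: (a -> Int) -> Int -> [a] -> Int
sumWith f z = go
  where
    -- Worker: recursive, but 'f' and 'z' are free variables here, so after
    -- the wrapper is inlined they get specialised to the concrete arguments.
    go []     = z
    go (x:xs) = f x + go xs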

Are there ways to call two functions (one just after another) in purely functional language? (in non-io mode)

I'm trying to understand the order of execution in a purely functional language.
I know that in purely functional languages, there is no necessary execution order.
So my question is:
Suppose there are two functions.
I would like to know all the ways in which I can call one function after another (other than nesting a call to one function inside the other, and excluding IO).
I would like to see examples in Haskell or pseudo-code.
There is no way to do what you describe, if the functions are totally independent and you don't use the result of one when you call the other.
This is because there is no reason to do this. In a side effect free setting, calling a function and then ignoring its result is exactly the same as doing nothing for the amount of time it takes to call that function (setting aside memory usage).
It is possible that seq x y will evaluate x and then y, and then give you y as its result, but this evaluation order isn't guaranteed.
Now, if we do have side effects, such as if we are working inside a Monad or Applicative, this could be useful, but we aren't truly ignoring the result since there is context being passed implicitly. For instance, you can do
main :: IO ()
main = putStrLn "Hello, " >> putStrLn "world"
in the IO Monad. Another example would be the list Monad (which could be thought of as representing a nondeterministic computation):
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10
example :: String
example = filter biggerThanTen [1..15] >> return 'a' -- This evaluates to "aaaaa"
Note that even here we aren't really ignoring the result. We ignore the specific values, but we use the structure of the result (in the second example, the structure would be the fact that the resulting list from filter biggerThanTen [1..15] has 5 elements).
I should point out, though, that things that are sequenced in this way aren't necessarily evaluated in the order that they are written. You can sort of see this with the list Monad example. This becomes more apparent with bigger examples though:
example2 :: [Int]
example2 =
  [1,2,3] >>=
    (\x -> [10,100,1000] >>=
      (\y -> return (x * y))) -- ==> [10,100,1000,20,200,2000,30,300,3000]
The main takeaway here is that evaluation order (in the absence of side effects like IO and ignoring bottoms) doesn't affect the ultimate meaning of code in Haskell (other than possible differences in efficiency, but that is another topic). As a result, there is never a reason to call two functions "one after another" in the fashion described in the question (that is, where the calls are totally independent from each other).
Do notation
Do notation is actually exactly equivalent to using >>= and >> (there is actually one other thing involved that takes care of pattern match failures, but that is irrelevant to the discussion at hand). The compiler actually takes things written in do notation and converts them to >>= and >> through a process called "desugaring" (since it removes the syntactic sugar). Here are the three examples from above written with do notation:
IO Example
main :: IO ()
main = do
  putStrLn "Hello, "
  putStrLn "World"
First list example
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10
example :: String -- String is a synonym for [Char], by the way
example = do
  filter biggerThanTen [1..15]
  return 'a'
Second list example
example2 :: [Int]
example2 = do
  x <- [1,2,3]
  y <- [10,100,1000]
  return (x * y)
Here is a side-by-side comparison of the conversions:
do            --
  m           --   m >> n
  n           --

do            --
  x <- m      --   m >>= (\x ->
  ...         --     ...)
The best way to understand do notation is to first understand >>= and return since, as I said, that's what the compiler transforms do notation into.
As a side-note, >> is just the same as >>=, except that it ignores the "result" of its left argument (although it preserves the "context" or "structure"). So all definitions of >> must be equivalent to m >> n = m >>= (\_ -> n).
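A quick illustration in the list Monad (hypothetical GHCi session): the values on the left of >> are ignored, but their structure (a list of length 2) still shapes the result:
*Main> [1,2] >> "ab"   -- same as [1,2] >>= \_ -> "ab"
"abab"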
Expanding the >>= in the second list example
To help drive home the point that Monads are not usually impure, let's expand the >>= calls in the second list example, using the Monad definition for lists. The definition is:
instance Monad [] where
  return x = [x]
  xs >>= f = concatMap f xs
and we can convert example2 into:
Step 0 (what we already have)
example2 :: [Int]
example2 =
  [1,2,3] >>=
    (\x -> [10,100,1000] >>=
      (\y -> return (x * y)))
Step 1 (converting the first >>=)
example2 =
  concatMap
    (\x -> [10,100,1000] >>=
      (\y -> return (x * y)))
    [1,2,3]
Step 2
example2 =
  concatMap
    (\x -> concatMap
      (\y -> return (x * y))
      [10,100,1000])
    [1,2,3]
Step 3
example2 =
  concatMap
    (\x -> concatMap
      (\y -> [x * y])
      [10,100,1000])
    [1,2,3]
So, there is no magic going on here, just normal function calls.
You can write a function whose arguments depend on the evaluation of another function:
-- Adds the first two elements of a list together
myFunc :: [Int] -> Int
myFunc xs = (head xs) + (head $ tail xs)
If that's what you mean. In this case, you can't get the output of myFunc xs without evaluating head xs, head $ tail xs and (+). There is an order here. However, the compiler can choose which order to execute head xs and head $ tail xs in, since they are not dependent on each other, but it can't do the addition without having both of the other results. It could even choose to evaluate them in parallel, or on different machines. The point is that pure functions, because they have no side effects, don't have to be evaluated in a given order until their results are interdependent.
Another way to look at the above function is as a graph:
   myFunc
      |
     (+)
     / \
    /   \
 head   head
    \     |
     \   tail
      \   /
       xs
In order to evaluate a node, all nodes below it have to be evaluated first, but different branches can be evaluated in parallel. First xs must be evaluated, at least partially, but after that the two branches can be evaluated in parallel. There are some nuances due to lazy evaluation, but this is essentially how the compiler constructs evaluation trees.
If you really want to force one function call before the other, you can use the seq function. It takes two arguments, forces the first to be evaluated, then returns the second, e.g.
myFunc2 :: [Int] -> Int
myFunc2 xs = hxs + (hxs `seq` (head $ tail xs))
  where hxs = head xs
This will force head xs to evaluate before head $ tail xs, but this is more dealing with strictness than sequencing functions.
Here is an easy way:
case f x of
  result1 -> case g y of
    result2 -> ....
Still, unless g y uses something from result1 and the subsequent calculations something from result2, or the pattern is such that the result must be evaluated, there is no guarantee that either of f or g are actually called, nor in what order.
Still, you wanted a way to call one function after another, and this is such a way.

Lazy Evaluation and Strict Evaluation Haskell

I understand what lazy evaluation is, how it works, and the advantages it has, but could you explain to me what strict evaluation really is in Haskell? I can't seem to find much info about it, since lazy evaluation is the best known.
What are the benefits of each over the other? When is strict evaluation actually used?
Strictness happens in a few ways in Haskell.
First, a definition: a function f is strict if and only if, when its argument a doesn't terminate, neither does f a. Nonstrict (sometimes called lazy) is just the opposite of this.
You can be strict in an argument, for example by using pattern matching:
-- strict
foo True = 1
foo False = 1
-- vs
foo _ = 1
Since we don't need to evaluate the argument, we could pass something like foo (let x = x in x) and it'd still just return 1. With the first version, however, the function needs to see what value the input is so it can run the appropriate branch, so it is strict.
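A hypothetical GHCi session makes the contrast visible, using undefined as a stand-in for a non-terminating argument:
-- with foo _ = 1 (non-strict):
*Main> foo undefined
1
-- with the pattern-matching version (strict):
*Main> foo undefined
*** Exception: Prelude.undefined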
If we can't pattern match for whatever reason, then we can use a magic function called seq :: a -> b -> b. seq basically stipulates that whenever it is evaluated, it will evaluate a to what's called weak head normal form.
You may wonder why it's worth it. Let's consider a case study: foldl vs foldl'. foldl is lazy in its accumulator, so it's implemented something like
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f accum []     = accum
foldl f accum (x:xs) = foldl f (f accum x) xs
Notice that since we're never strict in accum, we'll build up a huge series of thunks: f (f (f (f accum x1) x2) x3) x4 ...
Not a happy prospect, since this will lead to memory issues. Indeed:
*> foldl (+) 0 [1..500000000]
*** error: stack overflow
Now what'd be better is if we forced evaluation at each step, using seq
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f accum []     = accum
foldl' f accum (x:xs) = let accum' = f accum x
                        in accum' `seq` foldl' f accum' xs
Now we force the evaluation of accum at each step, making it much faster. This makes foldl' run in constant space instead of overflowing the stack like foldl.
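For comparison, the earlier example again (a hypothetical GHCi session; foldl' lives in Data.List):
*> import Data.List (foldl')
*> foldl' (+) 0 [1..500000000]
125000000250000000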
Now, seq only evaluates its value to weak head normal form; sometimes we want values to be evaluated fully, to normal form. For that we can use a library/type class:
import Control.DeepSeq -- a library on hackage
deepseq :: NFData a => a -> b -> b
This forces a to be fully evaluated, so:
*> [1, 2, error "Explode"] `seq` 1
1
*> [1, 2, error "Explode"] `deepseq` 1
error: Explode
*> undefined `seq` 1
error: undefined
*> undefined `deepseq` 1
error: undefined
So this fully evaluates its argument. This is very useful for parallel programming, for example, where you want to fully evaluate something on one core before it's sent back to the main thread; otherwise you'd just create a thunk and all the actual computation would still be sequential.
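Here is a minimal sketch of that parallel use case, assuming the parallel package's Control.Parallel (par, pseq) together with force from Control.DeepSeq (my own illustration, not from the original answer):
import Control.DeepSeq (NFData, force)
import Control.Parallel (par, pseq)

-- Fully evaluate each element in a spark; without 'force', 'par' would only
-- evaluate each result to weak head normal form, and most of the real work
-- would still happen sequentially in whoever consumes the list.
parMap' :: NFData b => (a -> b) -> [a] -> [b]
parMap' _ []     = []
parMap' f (x:xs) = let y  = force (f x)
                       ys = parMap' f xs
                   in y `par` (ys `pseq` (y : ys))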
