foldr1 and infinite lists in Haskell

Reading about folds in this wonderful book, I have a question regarding foldr1 and the head' implementation proposed there. The code in question is:
head' = foldr1 (\x _ -> x)
This code works on infinite lists, whereas foldl1 doesn't. A good visual explanation of why is this answer.
I do not quite understand, though, why it works, considering that foldr1 uses the last element as the accumulator. For example:
foldr1 (\x _ -> x) [1..]
This works because of (I think) lazy evaluation: even though foldr starts from the last element of the list (which is infinite), I'm assuming that because the function does not make use of any intermediate result, it just returns the first element.
So, is the compiler smart enough to know that, because only x is used inside the lambda, it can just return the first element of the list, even though it should start from the end?
By contrast, doing
scanr1 (\x _ -> x) [1..]
will print all elements of the infinite list without ever ending, which I suppose is what the foldr is doing; it's just that the compiler is smart enough not to evaluate it and to return the head.
Thanks in advance.
Update
I found a really good answer that helped me understand how foldr works more deeply:
https://stackoverflow.com/a/63177677/1612432

foldr1 is using the last element as an initial accumulator value, but the combining function (\x _ -> x) is lazy in its second argument.
So provided the list is non-empty (let alone infinite), the "accumulator" value is never needed, thus never demanded.
foldr does not mean it should start from the right, just that the operations are grouped / associated / parenthesized on the right. If the combining function is strict in its second argument, that will indeed entail starting the calculations from the right; but if not, then not.
So no, this is not about the compiler being smart; this is about Haskell's lazy semantics, which demand this. foldr is defined so that
foldr g z [x1,x2,...,xn] = g x1 (foldr g z [x2,...,xn])
and
foldr1 g xs = foldr g (last xs) (init xs)
and that's that.
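To make this concrete, here is a sketch of the reduction (hedged: real library code is defined a bit differently, but with the same semantics):
foldr1 (\x _ -> x) [1..]
= foldr (\x _ -> x) (last [1..]) (1 : init [2..])
= (\x _ -> x) 1 (foldr (\x _ -> x) (last [1..]) (init [2..]))
= 1
Neither last [1..] nor the inner foldr is ever demanded, so the infinite list is never traversed past its first element.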

Related

Confused about parameters in Haskell lambdas

I've been learning Haskell recently and came across something I don't quite understand: the parameters of a lambda function.
In the Learn You a Haskell for Great Good book, chap. 5, there are the following two functions:
elem' :: (Eq a) => a -> [a] -> Bool
elem' y ys = foldr (\x acc -> if x == y then True else acc) False ys
reverse' :: [a] -> [a]
reverse' = foldl (\acc x -> x : acc) []
In the first function, the accumulator is listed as the lambda's second parameter, but its initial value (False) is the first argument to follow the lambda in the call to foldr, which I took to mean it would be the lambda's first parameter, not the second, thus defying expectations.
Whereas in the second function it follows expectations, with the accumulator showing up as the lambda's first parameter, making the elements of the list that reverse' takes as a parameter the lambda's second parameter.
I tested both functions and they work as expected. I also noticed that one function involves a right fold and the other a left fold, but I'm not sure why that would alter the meaning of the parameters.
QUESTION: Can someone explain what I'm missing? Why are the parameters seeming to swap places?
foldl and foldr expect the accumulating function to have different formats. The two functions have the following types:
foldl :: Foldable t => (b -> a -> b) -> b -> t a -> b
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
You're correct that in foldr the accumulator is the second argument, and in foldl it's the first.
While this may seem unintuitive, it may help to think of foldl and foldr in terms of how they associate the values in a list; the diagrams on the "fold" page of the Haskell wiki illustrate this.
Treating the natural order of the list as left to right: in foldr, the accumulator starts at the right-hand side of the list, so it's natural that it's the second argument, while in foldl the opposite is true.
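A small illustrative example (not from the original answer) makes the grouping visible with a non-commutative operator:
foldr (-) 0 [1,2,3] -- 1 - (2 - (3 - 0)) = 2
foldl (-) 0 [1,2,3] -- ((0 - 1) - 2) - 3 = -6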
It is just a convention that the accumulator in foldr is the second argument, and in foldl it is the first argument.
Why was this convention chosen?
The first reason was answered by @Joe: acc is the already-folded part of the list. In foldl that is the left part, but in foldr it is the right part. So it's natural to provide acc as the left operand (the first argument) of the folding operator in foldl, and as the right operand (the second argument) in foldr.
The second reason: foldl must iterate over all the elements of the provided list, while foldr need not. You can give foldr a folding operator that skips the rest of the elements of the list; the first example does exactly that. The second argument acc in foldr stands for something that has not been computed yet: the fold of the rest of the elements. If your folding operator skips it, it will never be computed. In your example, if x == y you just "return" True (skipping the remaining elements); otherwise you "return" acc, which forces evaluation of the next element of the list. So foldr can work lazily, while foldl works strictly.
There is another convention in Haskell: when an operator can work lazily, it usually takes its first argument with strict semantics and its second with non-strict semantics. For example, && and || are operators of this sort.
False && undefined => False
True || undefined => True
The folding operator in your first example is lazy in the same way:
(\x acc -> if x == y then True else acc) y undefined => True
And it can be rewritten in terms of || like this:
(\x acc -> x == y || acc)
Combining the above reasons, we have what we have :-)
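As a quick hedged check of that laziness, the same combining function stops a foldr early even on an infinite list:
Prelude> foldr (\x acc -> x == 3 || acc) False [1..]
True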

How does GHC know how to cache one function but not the others?

I'm reading Learn You a Haskell (loving it so far) and it teaches how to implement elem in terms of foldl, using a lambda. The lambda solution seemed a bit ugly to me so I tried to think of alternative implementations (all using foldl):
import qualified Data.Set as Set
import qualified Data.List as List
-- LYAH implementation
elem1 :: (Eq a) => a -> [a] -> Bool
y `elem1` ys =
  foldl (\acc x -> if x == y then True else acc) False ys
-- When I thought about stripping duplicates from a list
-- the first thing that came to my mind was the mathematical set
elem2 :: (Eq a) => a -> [a] -> Bool
y `elem2` ys =
  head $ Set.toList $ Set.fromList $ filter (==True) $ map (==y) ys
-- Then I discovered `nub` which seems to be highly optimized:
elem3 :: (Eq a) => a -> [a] -> Bool
y `elem3` ys =
  head $ List.nub $ filter (==True) $ map (==y) ys
I loaded these functions in GHCi and did :set +s and then evaluated a small benchmark:
3 `elem1` [1..1000000] -- => (0.24 secs, 160,075,192 bytes)
3 `elem2` [1..1000000] -- => (0.51 secs, 168,078,424 bytes)
3 `elem3` [1..1000000] -- => (0.01 secs, 77,272 bytes)
I then tried to do the same on a (much) bigger list:
3 `elem3` [1..10000000000000000000000000000000000000000000000000000000000000000000000000]
elem1 and elem2 took a very long time, while elem3 was instantaneous (almost identical to the first benchmark).
I think this is because GHC knows that 3 is a member of [1..1000000], and the big number I used in the second benchmark is bigger than 1000000, hence 3 is also a member of [1..bigNumber] and GHC doesn't have to compute the expression at all.
But how is it able to automatically cache (or memoize, a term that Land of Lisp taught me) elem3 but not the two other ones?
Short answer: this has nothing to do with caching; it's that in the first two implementations you force Haskell to iterate over all the elements.
No, this is because foldl works left to right, and it will keep iterating over the list until the list is exhausted.
Therefore you'd better use foldr: there, from the moment it finds a 3 in the list, it will cut off the search.
This is because foldr is defined as:
foldr f z [x1, x2, x3] = f x1 (f x2 (f x3 z))
whereas foldl is implemented as:
foldl f z [x1, x2, x3] = f (f (f z x1) x2) x3
Note that the outermost f is thus applied with x3, which means that even if, due to laziness, you do not evaluate the first operand, you still need to iterate to the end of the list.
If we implement the foldl and foldr version, we get:
y `elem1l` ys = foldl (\acc x -> if x == y then True else acc) False ys
y `elem1r` ys = foldr (\x acc -> if x == y then True else acc) False ys
We then get:
Prelude> 3 `elem1l` [1..1000000]
True
(0.25 secs, 112,067,000 bytes)
Prelude> 3 `elem1r` [1..1000000]
True
(0.03 secs, 68,128 bytes)
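The difference is even starker on an infinite list, where the foldr version still terminates but the foldl version has to be interrupted (a quick hedged check, reusing the definitions above):
Prelude> 3 `elem1r` [1..]
True
Prelude> 3 `elem1l` [1..]
^CInterrupted.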
Stripping the duplicates from the list will not improve the efficiency. What improves the efficiency here is that you use map, and map works left to right. Note furthermore that nub works lazily, so nub is here a no-op: since you are only interested in the head, Haskell does not need to perform membership checks on the already-seen elements.
The performance is almost identical:
Prelude List> 3 `elem3` [1..1000000]
True
(0.03 secs, 68,296 bytes)
If you work with a Set, however, you do not perform the uniqueness check lazily: you first fetch all the elements of the list into the set, so again you will iterate over all the elements and not cut off the search after the first hit.
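Two hedged GHCi probes of that contrast: nub is lazy enough that only the first cons cell is ever forced, while Set.fromList must consume the entire list before Set.member can answer:
Prelude> head (List.nub (1 : undefined))
1
Prelude> Set.member 3 (Set.fromList [1..])
^CInterrupted.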
Explanation
foldl starts with the innermost application, combining the initial value with the leftmost element of the list, then recursively combines the result with the next value, and so on.
foldl f z [x1, x2, ..., xn] == (...((z `f` x1) `f` x2) `f`...) `f` xn
So in order to produce the result, it has to traverse the whole list.
Conversely, in your function elem3, since everything is lazy, nothing gets computed at all until you call head.
But in order to compute that value, you need only the first value of the (filtered) list, so you only have to go as far as the first 3 in your big list, which is encountered very soon; the list is not traversed any further. If you asked for the 1000000th element, elem3 would probably perform as badly as the other ones.
Laziness
Laziness ensures that your language is composable: breaking a function into subfunctions does not change what is done.
What you are seeing is really about how control flow works in a lazy language (the same mechanism, misused, is what leads to space leaks). In both strict and lazy languages your code decides what gets evaluated, but with a subtle difference:
In a strict language, the builder of a function chooses, since it forces the evaluation of its arguments: whoever is called is in charge.
In a lazy language, the consumer of the function chooses: whoever calls is in charge. It may choose to evaluate only the first element (by calling head), or every element, provided its own caller chooses to evaluate its own computation as well. There is a whole chain of command deciding what to do.
In that reading, your foldr-based elem function uses that "inversion of control" in an essential way: elem is asked to produce a value; foldr goes deep inside the list; if the first element is y, it returns the trivial computation True; if not, it forwards the request to the computation acc. In other words, what you read as the values acc, x, or even True are really placeholders for computations, which you receive and yield back. And indeed, acc may be some unbelievably complex computation (or a divergent one, like undefined); as long as you transfer control to the computation True, your caller will never see the existence of acc.
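A hedged one-liner makes this tangible: acc may even be undefined, as long as control is transferred to True before acc is demanded:
Prelude> foldr (\x acc -> if x == 3 then True else acc) undefined [1,2,3]
True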
foldr vs foldl vs foldl' vs speed
As suggested in another answer, foldr might best match your intent of how to traverse the list, and will shield you from space leaks (foldl' prevents space leaks as well, if you really want to traverse the other way; plain foldl can lead to a buildup of complex suspended computations, which can nevertheless be very useful, for circular computations for instance).
But the speed issue is really an algorithmic one. There might be a better data structure for set membership, if and only if you know beforehand that you have a certain pattern of usage.
For instance, it might be useful to pay some upfront cost to build a Set and then have fast membership queries, but that is only useful if you know you will have such a pattern: a few sets and lots of queries against them. Other data structures are optimal for other patterns, and it's interesting to note that, from an API/specification/interface point of view, they usually look the same to the consumer. That's a general phenomenon in any language, and it is why many people love abstract data types/modules in programming.
Using foldr and expecting it to be faster really encodes the assumption that, given your static knowledge of your future access pattern, the values you are likely to test membership of will sit at the beginning of the list. Using foldl would be fine if you expected your values to be at the end.
Note that even when using foldl you might traverse the entire list without constructing the values themselves until you need them (to test for equality, for instance), as long as you have not found the element you are searching for.

Why does foldr use a helper function?

In explaining foldr to Haskell newbies, the canonical definition is
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
But in GHC.Base, foldr is defined as
foldr k z = go
  where
    go []     = z
    go (y:ys) = y `k` go ys
It seems this definition is an optimization for speed, but I don't see why using the helper function go would make it faster. The source comments (see here) mention inlining, but I also don't see how this definition would improve inlining.
I can add some important details about GHC's optimization system.
The naive definition of foldr passes around a function. There's an inherent overhead in calling a function, especially when the function isn't known at compile time. It'd be really nice to be able to inline the definition of the function if it's known at compile time.
There are tricks available to perform that inlining in GHC, and this is an example of them. First, foldr needs to be inlined (I'll get to why later). foldr's naive implementation is recursive, so it cannot be inlined. So a worker/wrapper transformation is applied to the definition. The worker is recursive, but the wrapper is not. This allows foldr to be inlined, despite the recursion over the structure of the list.
When foldr is inlined, it creates a copy of all of its local bindings, too. It's more or less a direct textual inlining (modulo some renaming, and happening after the desugaring pass). This is where things get interesting. go is a local binding, and the optimizer gets to look inside it. It notices that it calls a function in the local scope, which it names k. GHC will often remove the k variable entirely, and will just replace it with the expression k reduces to. And then afterwards, if the function application is amenable to inlining, it can be inlined at this time - removing the overhead of calling a first-class function entirely.
Let's look at a simple, concrete example. This program will echo a line of input with all trailing 'x' characters removed:
dropR :: Char -> String -> String
dropR x r = if x == 'x' && null r then "" else x : r
main :: IO ()
main = do
  s <- getLine
  putStrLn $ foldr dropR "" s
First, the optimizer will inline foldr's definition and simplify, resulting in code that looks something like this:
main :: IO ()
main = do
  s <- getLine
  -- I'm changing the where clause to a let expression for the sake of readability
  putStrLn $ let { go [] = ""; go (x:xs) = dropR x (go xs) } in go s
And that's the thing the worker-wrapper transformation allows. I'm going to skip the remaining steps, but it should be obvious that GHC can now inline the definition of dropR, eliminating the function-call overhead. This is where the big performance win comes from.
GHC cannot inline recursive functions, so
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z [] = z
foldr f z (x:xs) = f x (foldr f z xs)
cannot be inlined. But
foldr k z = go
  where
    go []     = z
    go (y:ys) = y `k` go ys
is not a recursive function. It is a non-recursive function with a local recursive definition!
This means that, as @bheklilr writes, in map (foldr (+) 0) the foldr can be inlined, and hence f and z replaced by (+) and 0 in the new go, and great things can happen, such as unboxing of the intermediate value.
As the comments say:
-- Inline only in the final stage, after the foldr/cons rule has had a chance
-- Also note that we inline it when it has *two* parameters, which are the
-- ones we are keen about specialising!
In particular, note the "we inline it when it has two parameters, which are the ones we are keen about specialising!"
What this is saying is that when foldr gets inlined, it's inlined only for the specific choice of f and z, not for the choice of the list being folded. I'm no expert, but it would seem this makes it possible to inline it in situations like
map (foldr (+) 0) some_list
so that the inlining happens at this point and not after map has been applied. This makes it optimizable in more situations and more easily. All the helper function does is mask the third argument so {-# INLINE #-} can do its thing.
One tiny important detail not mentioned in other answers is that GHC, given a function definition like
f x y z w q = ...
cannot inline f until all of the arguments x, y, z, w, and q are applied. This means that it's often advantageous to use the worker/wrapper transformation to expose a minimal set of function arguments which must be applied before inlining can occur.
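Tying this together, here is a hedged sketch of the same worker/wrapper pattern applied to a hypothetical myFoldr, with an INLINE pragma like the one in GHC.Base:
{-# INLINE myFoldr #-}
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr k z = go
  where
    go []     = z
    go (y:ys) = y `k` go ys
At a call site such as myFoldr (+) 0, the non-recursive wrapper can be inlined, producing a local go in which k and z are already (+) and 0 and ready for further specialisation.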

Functionally solving questions: how to use Haskell?

I am trying to solve one of the problem in H99:
Split a list into two parts; the length of the first part is given.
Do not use any predefined predicates.
Example:
> (split '(a b c d e f g h i k) 3)
( (A B C) (D E F G H I K))
And I can quickly come up with a solution:
split' :: [a] -> Int -> Int -> [a] -> [[a]]
split' [] _ _ _ = []
split' (x:xs) y z w = if y == z then [w, x:xs] else split' xs y (z+1) (w ++ [x])
split :: [a] -> Int -> [[a]]
split x y = split' x y 0 []
My question is that what I am doing is kind of just rewriting the loop version in recursive form. Is this the right way to do things in Haskell? Isn't it just the same as imperative programming?
EDIT: Also, how do you generally avoid the extra function here?
It's convenient that you can often convert an imperative solution to Haskell, but you're right, you do usually want to find a more natural recursive statement. For this one in particular, reasoning in terms of base case and inductive case can be very helpful. So what's your base case? Why, when the split location is 0:
split x 0 = ([], x)
The inductive case can be built on that by prepending the first element of the list onto the result of splitting with n-1:
split (x:xs) n = (x:left, right)
  where (left, right) = split xs (n-1)
This may not perform wonderfully (it's probably not as bad as you'd think) but it illustrates my thought process when I first encounter a problem and want to approach it functionally.
Edit: Another solution relying more heavily on the Prelude might be:
split l n = (take n l, drop n l)
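For reference, this is essentially what the Prelude's splitAt does, with its arguments flipped:
Prelude> splitAt 3 [1..10]
([1,2,3],[4,5,6,7,8,9,10])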
It's not really the same as imperative programming: each function call avoids any side effects; they're just simple expressions. But I have a suggestion for your code:
split :: Int -> [a] -> ([a], [a])
split p xs = go p ([], xs)
  where go 0 (xs, ys) = (reverse xs, ys)
        go n (xs, y:ys) = go (n-1) (y : xs, ys)
So here we've declared that we're returning exactly two things, ([a], [a]), instead of a list of lists (which is a bit misleading), and we've constrained our tail-recursive call to local scope.
I'm also using pattern matching, which is a more idiomatic way to write recursive functions in Haskell: when go is called with a zero, the first case is run. It's generally more pleasant to write recursive functions that count down rather than up, since you can use pattern matching rather than if statements.
Finally, this is more efficient, since ++ is linear in the length of its first argument, which makes the complexity of your original function quadratic rather than linear. This method is also tail recursive, unlike Daniel's solution, which is important for handling large lists.
TLDR: Both versions are in functional style, avoiding mutation and using recursion instead of loops. But the version I've presented is a little more Haskell-ish and slightly faster.
A word on tail recursion
This solution uses tail recursion, which isn't always essential in Haskell; here it is helpful because we use the resulting lists in full, but at other times it is actually a bad thing. For example, map isn't tail recursive, but if it were, you couldn't use it over infinite lists!
In this case we can use tail recursion, since an integer is always finite. But if we only use the first element of the list, Daniel's solution is much faster, since it produces the list lazily (see the sketch below). On the other hand, if we use the whole list, my solution is much faster.
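Here is a hedged sketch of the laziness half of that trade-off, using the hypothetical names splitLazy (the shape of Daniel's solution) and splitTail (the accumulator shape):
splitLazy :: [a] -> Int -> ([a], [a])
splitLazy xs 0 = ([], xs)
splitLazy (x:xs) n = (x : left, right)
  where (left, right) = splitLazy xs (n - 1)
splitTail :: Int -> [a] -> ([a], [a])
splitTail p xs = go p ([], xs)
  where go 0 (acc, ys)   = (reverse acc, ys)
        go n (acc, y:ys) = go (n - 1) (y : acc, ys)
Probing them with a partially defined list shows the difference:
Prelude> head (fst (splitLazy (1 : undefined) 100))
1
Prelude> head (fst (splitTail 100 (1 : undefined)))
*** Exception: Prelude.undefined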
split' :: [a] -> Int -> ([a], [a])
split' [] _ = ([], [])
split' xs 0 = ([], xs)
split' (x:xs) n = (x : fst splitResult, snd splitResult)
  where splitResult = split' xs (n-1)
It seems you have already shown an example of a better solution.
I would recommend you read SICP. Then you will come to the conclusion that the extra function is normal. There is also the widely used approach of hiding helper functions in a local scope. The book may seem boring to you, but in the early chapters it will get you used to the functional approach to solving problems.
There are tasks in which the recursive approach is more natural. But, for example, if you use tail recursion (which is so often praised without cause), you will notice that it is just the usual iteration, often with an "extra function" that hides the iteration variable (well, "variable" is not quite the right word; "argument" is more apt).

Haskell: Minimum sum of list

So, I'm new here, and I would like to ask 2 questions about some code:
Duplicate each element of a list as many times as its value. For example, duplicate [1,2,3] should give [1,2,2,3,3,3].
duplicate1 xs = x*x ++ duplicate1 xs
What is wrong in here?
Take the positive numbers from a list and find the minimum positive difference. For example, [-2,-1,0,1,3] should give 1, because (1-0) is the lowest difference above 0.
For your first part, there are a few issues: you forgot the pattern in the first argument, you are trying to square the first element rather than replicate it, and there is no second case to end your recursion (it will crash). To help, here is a type signature:
replicate :: Int -> a -> [a]
For your second part, if it has been covered in your course, you could try a list comprehension to get all differences of the numbers, and then you can apply the minimum function. If you don't know list comprehensions, you can do something similar with concatMap.
Don't forget that you can check functions on http://www.haskell.org/hoogle/ (Hoogle) or similar search engines.
Tell me if you need a more thorough answer.
To your first question:
Use pattern matching. You can write something like duplicate (x:xs). This will deconstruct the first cell of the parameter list. If the list is empty, the next pattern is tried:
duplicate (x:xs) = ... -- list is not empty
duplicate [] = ... -- list is empty
The function replicate n x creates a list that contains n copies of x. For instance, replicate 3 'a' yields ['a','a','a'].
Use recursion. To understand, how recursion works, it is important to understand the concept of recursion first ;)
1)
dupe :: [Int] -> [Int]
dupe l = concat [replicate i i | i<-l]
There are a few problems with yours, one being that you are squaring each term, not creating a new list. In addition, your pattern matching is off and you would create an infinite recursion: note how you recurse on the exact same list as was input. I think you mean something along the lines of duplicate1 (x:xs) = replicate x x ++ duplicate1 xs, and that would be fine, so long as you write a proper base case as well.
2)
This is pretty straightforward from your problem description, but probably not too efficient. First it filters out the negatives, then checks all subtractions with non-negative results. The answer is the minimum of these:
p2 l = let l2 = filter (\x -> x >= 0) l
       in minimum [i - j | i <- l2, j <- l2, i >= j]
The problem here is that it will allow a number to be checked against itself, which will lead to answers of always zero. Any ideas? I'd like to leave it to you; a commenter has a point about spoon-feeding.
1) You can use the fact that list is a monad:
dup = (=<<) (\x -> replicate x x)
Or in do-notation:
dup xs = do x <- xs; replicate x x; return x
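A quick GHCi check (both definitions give the same result):
Prelude> dup [1,2,3]
[1,2,2,3,3,3]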
2) For getting only the positive numbers from a list, you can use filter:
filter (>= 0) [1,-1,0,-5,3]
-- [1,0,3]
To get all possible "pairings" you can use either monads or applicative functors:
import Control.Applicative
(,) <$> [1,2,3] <*> [1,2,3]
-- [(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)]
Of course, instead of creating pairs, you can generate the differences directly by replacing (,) with (-). Then you need to filter again, discarding all zero or negative differences. After that you only need to find the minimum of the list, but I think you can guess the name of that function.
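Putting those pieces together, here is a hedged sketch of the whole second task (minPosDiff is an assumed name; <$> and <*> come from the Control.Applicative import above, or from the Prelude in recent GHC versions):
minPosDiff :: [Int] -> Int
minPosDiff xs = minimum (filter (> 0) ((-) <$> pos <*> pos))
  where pos = filter (>= 0) xs
Prelude> minPosDiff [-2,-1,0,1,3]
1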
Here, this should do the trick:
dup [] = []
dup (x:xs) = (replicate x x) ++ (dup xs)
We define dup recursively: for the empty list it is just an empty list; for a non-empty list, it is a list in which the first x elements are equal to x (the head of the initial list), and the rest is the list generated by recursively applying dup to the tail. It is easy to prove the correctness of this solution by induction (do it as an exercise).
Now, lets analyze your initial solution:
duplicate1 xs = x*x ++ duplicate1 xs
The first mistake: you did not define the list pattern properly. According to your definition, the function has just one argument, xs. To achieve the desired effect, you should use the correct pattern for matching the list's head and tail (x:xs; see my previous example). Read up on pattern matching.
But that's not all. Second mistake: x*x is actually x squared, not a list of two values. Which brings us to the third mistake: ++ expects both of its operands to be lists of values of the same type, while in your code you're trying to apply ++ to two values of types Int and [Int].
As for the second task, the solution has already been given.
HTH
