I am reading about Haskell in the wikibook:
https://en.wikibooks.org/wiki/Haskell/Graph_reduction
and in the article, there is an example that puzzles me:
Tricky space leak example:
(\xs -> head xs + last xs) [1..n]
(\xs -> last xs + head xs) [1..n]
The first version runs in O(1) space, the second in O(n).
How come?
I assume last is implemented like the following:
last [] = error ""
last [r] = r
last (_:t) = last t
so tail recursion should take constant space. Why does the second one take linear space?
The simple answer has to be that when evaluating a + b, the expression a gets evaluated before b in practice.
Normally last xs only takes O(1) space - the elements of xs can be discarded as last traverses down the list. However, if xs is needed for another expression, the expansion of xs must be kept in memory for the other expression, and this is what is happening in the second case since it appears that xs is needed for head.
In the first case head xs is evaluated "first" which only evaluates the first element of xs. Then last xs is evaluated, traversing down the list. But since there isn't any further need for the elements of the list they may be discarded during the traversal.
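If you want the O(1) behaviour regardless of which argument (+) happens to evaluate first, one option is to force the head yourself before demanding the last element (a sketch; headPlusLast is a made-up name):
-- Forcing h first means the only remaining reference to the list is the
-- last xs thunk, so the cells can be collected as last traverses them.
headPlusLast :: [Int] -> Int
headPlusLast xs = let h = head xs
                  in h `seq` (h + last xs)
-- headPlusLast [1..1000000] should then run in constant space.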
Related
The code for cycle is as follows
cycle :: [a] -> [a]
cycle [] = errorEmptyList "cycle"
cycle xs = xs' where xs' = xs ++ xs'
I would appreciate an explanation of how the last line works. I feel that it would go off into an infinite recursion without returning, or shall I say, without printing anything on the screen. I guess my intuition is wrong.
A list, like basically everything else in Haskell, is lazily evaluated.
Roughly, to make an OOP analogy, you can think of a list as a sort of "iterator object". When queried, it reports whether there is a next element, and if so, what that element is and what the tail of the list is (the tail being another "iterator object").
A list defined as
xs = 1 : xs
does not cause non-termination. It corresponds to an "iterator object" o that, when queried, answers: "the next element is 1, and the rest of the list can be queried using o". Basically, it returns itself.
This is no different than a list having as a tail a "pointer" to the list itself: a circular list. This takes a constant amount of space.
Appending with ++ works the same:
xs = [1] ++ xs
is identical to the previous list.
In your code, the part
where xs' = xs ++ xs'
crafts a list that starts with xs and then continues with the list itself xs'. Operationally, it is an "iterator object" o that returns, one by one, the elements of xs, and when the last element of xs is returned, it is paired with "you can query the rest of the list at o". Again, a back-pointer, which builds a sort of circular list.
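A quick GHCi check that the circular structure behaves as described (take only demands as many cells as it needs, so this terminates):
ghci > take 7 (cycle [1,2,3])
[1,2,3,1,2,3,1]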
Let's take out the last line separately:
cycle xs = xs' where xs' = xs ++ xs'
Now, let's try to reduce it:
cycle xs = xs ++ (xs ++ (xs ++ (xs ++ ...)))
You can see that it expands infinitely. But note that this is not how expressions get reduced in Haskell. Expressions are reduced to WHNF only when they are demanded. So, let's demand some values from the cycle function:
ghci > take 1 $ cycle [1..]
[1]
This is how the take function is implemented:
take n _ | n <= 0 = []
take _ [] = []
take n (x:xs) = x : take (n-1) xs
Now, to check the guard, the value n is evaluated first. Since it is already in normal form, no further reduction needs to be done, and it is checked whether it is less than or equal to zero. Since the condition fails, we move on to the second equation. Here, the second argument is checked to see whether it is []. As usual, Haskell evaluates it only to WHNF, which is 1:_, where _ represents a thunk. So the third equation matches and the whole expression reduces to 1 : take 0 _. Since this value has to be printed in GHCi, 1 : take 0 _ is reduced further. Following similar steps as above, take 0 _ matches the first equation and yields [], so we get 1:[], which is [1].
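Spelled out, the reduction looks roughly like this (a sketch; <thunk> stands for the not-yet-demanded rest of the list):
take 1 (cycle [1..])
take 1 (1 : <thunk>)      -- cycle's result, reduced to WHNF
1 : take 0 <thunk>        -- third equation of take
1 : []                    -- first equation of take, since 0 <= 0
[1]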
Hence cycle [1,2,3] only gets reduced to WHNF, of the form (1:_), and take 1 of it is ultimately reduced to [1]. But if the cycle function itself were strict in its implementation, it would just go into an infinite loop:
import Control.DeepSeq (NFData, deepseq)  -- deepseq forces its first argument to normal form

cycle :: NFData a => [a] -> [a]
cycle [] = []
cycle xs = let xs' = xs ++ xs'
           in deepseq xs xs'
You can test that in ghci:
ghci > take 1 $ cycle [1..]
^CInterrupted.
I read the old Russian translation of the Learn You a Haskell for Great Good! book. I see that the current English version (online) is newer, so I also look at it from time to time.
The quote:
When you put together two lists (even if you append a singleton list
to a list, for instance: [1,2,3] ++ [4]), internally, Haskell has to
walk through the whole list on the left side of ++. That's not a
problem when dealing with lists that aren't too big. But putting
something at the end of a list that's fifty million entries long is
going to take a while. However, putting something at the beginning of
a list using the : operator (also called the cons operator) is
instantaneous.
I assumed that Haskell has to walk through the whole list to get the last item for the foldr, foldr1, scanr and scanr1 functions. I also assumed that Haskell would do the same to get the previous element (and so on for each item).
But I see I was mistaken:
UPDATE:
I tried this code and I see similar processing times for both cases:
data' = [1 .. 10000000]
sum'r = foldr1 (\x acc -> x + acc ) data'
sum'l = foldl1 (\acc x -> x + acc ) data'
Is every Haskell list bidirectional? I assume that to get the last item of a list, Haskell first has to iterate over every item, remembering the necessary one (the last item, for example) so that it can (later) get the previous item of this bidirectional list (for lazy evaluation). Am I right?
It's tricky since Haskell is lazy.
Evaluating head ([1..1000000]++[1..1000000]) will return immediately, with 1. The lists will never be fully created in memory: only the first element of the first list will be.
If you instead demand the full list [1..1000000]++[1..1000000] then ++ will indeed have to create a two-million long list.
foldr may or may not evaluate the full list. It depends on whether the function we use is lazy. For example, here's map f xs written using foldr:
foldr (\y ys -> f y : ys) [] xs
This is as efficient as map f xs: list cells are produced on demand, in a streaming fashion. If we need only the first ten elements of the resulting list, then we indeed create only the first ten cells -- foldr will not be applied to the rest of the list. If we need the full resulting list, then foldr will be run over the full list.
Also note that xs++ys can be defined similarly in terms of foldr:
foldr (:) ys xs
and has similar performance properties.
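To see the streaming behaviour concretely, here is a small sketch (mapF and appendF are made-up names for the foldr versions above, chosen to avoid clashing with the Prelude):
mapF :: (a -> b) -> [a] -> [b]
mapF f xs = foldr (\y ys -> f y : ys) [] xs

appendF :: [a] -> [a] -> [a]
appendF xs ys = foldr (:) ys xs

-- Only the demanded prefix of the result is ever built:
-- ghci > take 3 (mapF (*2) [1..])
-- [2,4,6]
-- ghci > take 3 (appendF [1..] [0])
-- [1,2,3]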
By comparison, foldl instead always runs over the whole list.
In the example you mention we have longList ++ [something], appending to the end of the list. This only costs constant time if all we demand is the first element of the resulting list. But if we really need the last element we added, then appending will need to run over the whole list. This is why appending at the end is considered O(n) instead of O(1).
In the last update, the question asks about computing the sum with foldr vs foldl, using the (+) operator. In such a case, since (+) is strict (it needs both arguments to compute its result), both folds will need to scan the whole list. The performance in such cases can be comparable. Indeed, they would compute, respectively
1 + (2 + (3 + (4 + ..... -- foldr
(...(((1 + 2) + 3) +4) + .... -- foldl
By comparison, foldl' would be more memory efficient, since it starts reducing the sum before building the giant expression above. That is, it would compute 1+2 first (3), then 3+3 (6), then 6+4 (10), ... keeping in memory only the latest result (a single integer) while the list is being scanned.
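For reference, a minimal sketch of the strict version (sumStrict is a made-up name; foldl' comes from Data.List):
import Data.List (foldl')

-- The accumulator is forced at every step, so the running total stays a
-- single number instead of a chain of (+) thunks.
sumStrict :: [Integer] -> Integer
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 10000000])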
To the OP: the topic of laziness is not easy to grasp the first time. It is quite vast -- you have just met a ton of different examples which have subtle but significant performance differences. It's hard to explain everything succinctly -- it's just too broad. I'd recommend focusing on small examples and digesting those first.
I need to take the last n elements from a list using O(n) memory, so I wrote this code:
take' :: Int -> [Int] -> [Int]
take' n xs = (helper $! (length $! xs) - n + 1) xs
  where
    helper skip [] = []
    helper skip (x : xs) = if skip == 0 then xs else (helper $! skip - 1) xs

main = print (take' 10 [1 .. 100000])
This code takes O(|L|) memory, where |L| is the length of the given list.
But when I write this code
take' :: Int -> [Int] -> [Int]
take' n xs = helper (100000 - n + 1) xs
  where
    helper skip [] = []
    helper skip (x : xs) = if skip == 0 then xs else (helper $! skip - 1) xs

main = print (take' 10 [1 .. 100000])
This code now takes only O(n) memory (the only change is (helper $! (length $! xs) - n + 1) -> helper (100000 - n + 1)).
So, as I understand it, Haskell for some reason doesn't evaluate length xs before the first call of helper, so it leaves a thunk in skip, and Haskell has to keep this value in every stack frame instead of doing plain tail recursion. But in the second piece of code it evaluates (100000 - n + 1) and passes the plain value to helper.
So the problem is: how do I evaluate the length of the list before the first call of helper while using only O(n) memory?
The other answer referred to what it means to be a good consumer. You have posted two versions of your function, one which works for arbitrary-length lists but is not a good consumer, and one which is a good consumer but assumes a particular list length. For completeness, here is a function that is a good consumer and works for arbitrary list lengths:
takeLast n xs = go (drop n xs) xs
  where
    go (_:xs) (_:ys) = go xs ys
    go _ ys = ys
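A quick check of how it behaves: the two pointers walk the list in lock-step, n elements apart, so only a window of roughly n cells is live at any time.
ghci > takeLast 3 [1..10]
[8,9,10]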
The second version does not really take only O(n) memory. Regardless of what take' does: you start off with a list of length L, and that has to be stored somewhere.
The reason it effectively takes O(n) memory is that the list is only used by one “good consumer” here, namely helper. Such a consumer deconstructs the list from head to last; because no reference to the head is needed anywhere else, the garbage collector can immediately start cleaning up those first elements – before the list comprehension has even built up the rest of the list!
That changes however if before using helper you compute the length of that list. This already forces the entire list to be NF'd†, and as I said this inevitably takes O(L) memory. Because you're still holding a reference to be used with helper, in this case the garbage collector can not take any action before the whole list is in memory.
So, it really has nothing to do with strict evaluation. In fact the only way you could achieve your goal is by making it less strict (require only a sublist of length n to be evaluated at any given time).
†More precisely: it forces the list's spine to normal form. The elements aren't evaluated, but it's still O(L).
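For example, length walks the whole spine but never looks at the elements:
ghci > length [undefined, undefined]
2
ghci > length undefined
*** Exception: Prelude.undefined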
Consider the following two cumulative sum (cumsum) functions:
cumsum :: Num a => [a] -> [a]
cumsum [] = []
cumsum [x] = [x]
cumsum (x:y:ys) = x : (cumsum $ (x+y) : ys)
and
cumsum' :: Num a => [a] -> [a]
cumsum' x = [sum $ take k x | k <- [1..length x]]
Of course, I prefer the definition of cumsum to that of cumsum' and I understand that the former has linear complexity.
But just why does cumsum' also have linear complexity? take itself has linear complexity in the length of its argument and k runs from 1 to length x. Therefore I'd have expected quadratic complexity for cumsum'.
Moreover, the constant of cumsum' is lower than that of cumsum. Is that due to the recursive list appending of the latter?
NOTE: welcoming any smart definition of a cumulative sum.
EDIT: I'm measuring execution times using (after enabling :set +s in GHCi):
last $ cumsum [1..n]
This is a measurement error caused by laziness.
Every value in Haskell is lazy: it isn't evaluated until necessary. This includes sub-structure of values - so for example when we see a pattern (x:xs) this only forces evaluation of the list far enough to identify that the list is non-empty, but it doesn't force the head x or the tail xs.
The definition of last is something like:
last [x] = x
last (x:xs) = last xs
So when last is applied to the result of cumsum', it inspects the list comprehension recursively, but only enough to track down the last entry. It doesn't force any of the entries, but it does return the last one.
When this last entry is printed in ghci or whatever, then it is forced which takes linear time as expected. But the other entries are never calculated so we don't see the "expected" quadratic behaviour.
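A small illustration: last walks the spine but does not force the entries it passes over, and printing then forces only the entry it returned:
ghci > last [error "never evaluated", 42]
42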
Using maximum instead of last does demonstrate that cumsum' is quadratic whereas cumsum is linear.
[Note: this explanation is somewhat hand-wavy: really evaluation is entirely driven by what's needed for the final result, so even last is only evaluated at all because its result is needed. Search for things like "Haskell evaluation order" and "Weak Head Normal Form" to get a more precise explanation.]
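Regarding the note welcoming other definitions: the usual idiomatic cumulative sum is written with scanl1 from the Prelude, which also runs in linear time (cumsum'' is a made-up name):
cumsum'' :: Num a => [a] -> [a]
cumsum'' = scanl1 (+)

-- ghci > cumsum'' [1,2,3,4]
-- [1,3,6,10]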
I have two Haskell functions, both of which seem very similar to me. But the first one FAILS against infinite lists, and the second one SUCCEEDS against infinite lists. I have been trying for hours to nail down exactly why that is, but to no avail.
Both snippets are a re-implementation of the "words" function in Prelude. Both work fine against finite lists.
Here's the version that does NOT handle infinite lists:
myWords_FailsOnInfiniteList :: String -> [String]
myWords_FailsOnInfiniteList string = foldr step [] (dropWhile charIsSpace string)
  where
    step space ([]:xs) | charIsSpace space = []:xs
    step space (x:xs)  | charIsSpace space = []:x:xs
    step space []      | charIsSpace space = []
    step char (x:xs) = (char : x) : xs
    step char [] = [[char]]
Here's the version that DOES handle infinite lists:
myWords_anotherReader :: String -> [String]
myWords_anotherReader xs = foldr step [""] xs
  where
    step x result | not . charIsSpace $ x = [x:(head result)] ++ tail result
                  | otherwise             = []:result
Note: "charIsSpace" is merely a renaming of Char.isSpace.
The following interpreter session illustrates that the first one fails against an infinite list while the second one succeeds.
*Main> take 5 (myWords_FailsOnInfiniteList (cycle "why "))
*** Exception: stack overflow
*Main> take 5 (myWords_anotherReader (cycle "why "))
["why","why","why","why","why"]
EDIT: Thanks to the responses below, I believe I understand now. Here are my conclusions and the revised code:
Conclusions:
The biggest culprit in my first attempt was the two equations that started with "step space []" and "step char []". Matching the second parameter of the step function against [] is a no-no, because it forces the second arg to be evaluated (but with a caveat to be explained below).
At one point, I had thought (++) might evaluate its right-hand argument later than cons would, somehow. So, I thought I might fix the problem by changing " = (char:x):xs" to "= [char : x] ++ xs". But that was incorrect.
At one point, I thought that pattern matching the second arg against (x:xs) would cause the function to fail against infinite lists. I was almost right about this, but not quite! Evaluating the second arg against (x:xs), as I do in a pattern match above, WILL cause some recursion. It will "turn the crank" until it hits a ":" (aka, "cons"). If that never happened, then my function would not succeed against an infinite list. However, in this particular case, everything is OK because my function will eventually encounter a space, at which point a "cons" will occur. And the evaluation triggered by matching against (x:xs) will stop right there, avoiding the infinite recursion. At that point, the "x" will be matched, but the xs will remain a thunk, so there's no problem. (Thanks to Ganesh for really helping me grasp that).
In general, you can mention the second arg all you want, as long as you don't force evaluation of it. If you've matched against x:xs, then you can mention xs all you want, as long as you don't force evaluation of it.
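For example, matching against (x:xs) only evaluates the list far enough to expose the first cons cell:
*Main> case 1 : undefined of (x:_) -> x
1
*Main> head (1 : undefined)
1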
So, here's the revised code. I usually try to avoid head and tail, merely because they are partial functions, and also because I need practice writing the pattern matching equivalent.
myWords :: String -> [String]
myWords string = foldr step [""] (dropWhile charIsSpace string)
  where
    step space acc | charIsSpace space = "":acc
    step char (x:xs) = (char:x):xs
    step _ [] = error "this should be impossible"
This correctly works against infinite lists. Note there's no head, tail or (++) operator in sight.
Now, for an important caveat:
When I first wrote the corrected code, I did not have the 3rd equation, which matches against "step _ []". As a result, I received the warning about non-exhaustive pattern matches. Obviously, it is a good idea to avoid that warning.
But I thought I was going to have a problem. I already mentioned above that it is not OK to pattern match the second arg against []. But I would have to do so in order to get rid of the warning.
However, when I added the "step _ []" equation, everything was fine! There was still no problem with infinite lists! Why?
Because the 3rd equation in the corrected code IS NEVER REACHED!
In fact, consider the following BROKEN version. It is EXACTLY the SAME as the correct code, except that I have moved the pattern for empty list up above the other patterns:
myWords_brokenAgain :: String -> [String]
myWords_brokenAgain string = foldr step [""] (dropWhile charIsSpace string)
  where
    step _ [] = error "this should be impossible"
    step space acc | charIsSpace space = "":acc
    step char (x:xs) = (char:x):xs
We're back to stack overflow, because the first thing that happens when step is called is that the interpreter checks to see if equation number one is a match. To do so, it must see if the second arg is []. To do that, it must evaluate the second arg.
Moving the equation down BELOW the other equations ensures that the 3rd equation is never attempted, because either the first or the second pattern always matches. The 3rd equation is merely there to dispense with the non-exhaustive pattern warning.
This has been a great learning experience. Thanks to everyone for your help.
Others have pointed out the problem, which is that step always evaluates its second argument before producing any output at all, yet its second argument will ultimately depend on the result of another invocation of step when the foldr is applied to an infinite list.
It doesn't have to be written this way, but your second version is kind of ugly because it relies on the initial argument to step having a particular format and it's quite hard to see that the head/tail will never go wrong. (I'm not even 100% certain that they won't!)
What you should do is restructure the first version so it produces output without depending on the input list in at least some situations. In particular we can see that when the character is not a space, there's always at least one element in the output list. So delay the pattern-matching on the second argument until after producing that first element. The case where the character is a space will still be dependent on the list, but that's fine because the only way that case can infinitely recurse is if you pass in an infinite list of spaces, in which case not producing any output and going into a loop is the expected behaviour for words (what else could it do?)
Try expanding the expression by hand:
take 5 (myWords_FailsOnInfiniteList (cycle "why "))
take 5 (foldr step [] (dropWhile charIsSpace (cycle "why ")))
take 5 (foldr step [] (dropWhile charIsSpace ("why " ++ cycle "why ")))
take 5 (foldr step [] ("why " ++ cycle "why "))
take 5 (step 'w' (foldr step [] ("hy " ++ cycle "why ")))
take 5 (step 'w' (step 'h' (foldr step [] ("y " ++ cycle "why "))))
What's the next expansion? You should see that in order to pattern match for step, you need to know whether it's the empty list or not. In order to find that out, you have to evaluate it, at least a little bit. But that second term happens to be a foldr reduction by the very function you're pattern matching for. In other words, the step function cannot look at its arguments without calling itself, and so you have an infinite recursion.
Contrast that with an expansion of your second function:
myWords_anotherReader (cycle "why ")
foldr step [""] (cycle "why ")
foldr step [""] ("why " ++ cycle "why ")
step 'w' (foldr step [""] ("hy " ++ cycle "why "))
let result = foldr step [""] ("hy " ++ cycle "why ") in
['w':(head result)] ++ tail result
let result = step 'h' (foldr step [""] ("y " ++ cycle "why ")) in
['w':(head result)] ++ tail result
You can probably see that this expansion will continue until a space is reached. Once a space is reached, "head result" will obtain a value, and you will have produced the first element of the answer.
I suspect that this second function will overflow for infinite strings that don't contain any spaces. Can you see why?
The second version does not actually evaluate result until after it has started producing part of its own answer. The first version evaluates result immediately by pattern matching on it.
The key with these infinite lists is that you have to produce something before you start demanding list elements so that the output can always "stay ahead" of the input.
(I feel like this explanation is not very clear, but it's the best I can do.)
The library function foldr has this implementation (or similar):
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f k (x:xs) = f x (foldr f k xs)
foldr _ k _ = k
The result of myWords_FailsOnInfiniteList depends on the result of foldr, which depends on the result of step, which depends on the result of the inner foldr, which depends on ... and so on. On an infinite list, myWords_FailsOnInfiniteList will use up an infinite amount of space and time before producing its first word.
The step function in myWords_anotherReader does not require the result of the inner foldr until after it has produced the first letter of the first word. Unfortunately, as Apocalisp says, it uses O(length of first word) space before it produces the next word, because as the first word is being produced, the tail thunk keeps growing: tail ([...] ++ tail ([...] ++ tail (...))).
In contrast, compare to
myWords :: String -> [String]
myWords = myWords' . dropWhile isSpace
  where
    myWords' [] = []
    myWords' string =
      let (part1, part2) = break isSpace string
      in part1 : myWords part2
using library functions which may be defined as
break :: (a -> Bool) -> [a] -> ([a], [a])
break p = span $ not . p
span :: (a -> Bool) -> [a] -> ([a], [a])
span p xs = (takeWhile p xs, dropWhile p xs)
takeWhile :: (a -> Bool) -> [a] -> [a]
takeWhile p (x:xs) | p x = x : takeWhile p xs
takeWhile _ _ = []
dropWhile :: (a -> Bool) -> [a] -> [a]
dropWhile p (x:xs) | p x = dropWhile p xs
dropWhile _ xs = xs
Notice that producing the intermediate results is never held up by future computation, and only O(1) space is needed as each element of the result is made available for consumption.
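For instance, this version streams even on infinite input:
*Main> take 3 (myWords (cycle "why "))
["why","why","why"]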
Addendum
So, here's the revised code. I usually try to avoid head and tail, merely because they are partial functions, and also because I need practice writing the pattern matching equivalent.
myWords :: String -> [String]
myWords string = foldr step [""] (dropWhile charIsSpace string)
  where
    step space acc | charIsSpace space = "":acc
    step char (x:xs) = (char:x):xs
    step _ [] = error "this should be impossible"
(Aside: you may not care, but the library's words "" == [], whereas your myWords "" = [""]. There is a similar issue with trailing spaces.)
Looks much improved over myWords_anotherReader, and is pretty good for a foldr-based solution. Still, consider:
\n -> tail $ myWords $ replicate n 'a' ++ " b"
It's not possible to do better than O(n) time, but both myWords_anotherReader and myWords take O(n) space here. This may be inevitable given the use of foldr.
Worse,
\n -> head $ head $ myWords $ replicate n 'a' ++ " b"
myWords_anotherReader was O(1) but the new myWords is O(n), because pattern matching (x:xs) requires the further result.
You can work around this with
myWords :: String -> [String]
myWords = foldr step [""] . dropWhile isSpace
  where
    step space acc | isSpace space = "":acc
    step char ~(x:xs) = (char:x):xs
The ~ introduces an "irrefutable pattern". Irrefutable patterns never fail and do not force immediate evaluation.
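A tiny demonstration of the difference (f and g are made-up names):
f  (x:_) = 1 :: Int   -- ordinary pattern: forces the argument to WHNF
g ~(x:_) = 1 :: Int   -- irrefutable pattern: never inspects the argument

*Main> g undefined
1
*Main> f undefined
*** Exception: Prelude.undefined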