Haskell inefficient fibonacci implementation

I am new to Haskell and just learning the fun of functional programming, but I have run into trouble right away with an implementation of the Fibonacci function. Please find the code below.
--fibonacci :: Num -> [Num]
fibonacci 1 = [1]
fibonacci 2 = [1,1]
--fibonacci 3 = [2]
--fibonacci n = fibonacci n-1
fibonacci n = fibonacci (n-1) ++ [last(fibonacci (n-1)) + last(fibonacci (n-2))]
Rather awkward, I know. I haven't found time to look up and write a better one, but I do wonder what makes this one so inefficient. I know I should look it up myself; I'm just hoping someone will feel pedagogic and spare me the effort.

orangegoat's answer and Sec Oe's answer contain a link to probably the best place to learn how to properly write the Fibonacci sequence in Haskell, but here are some reasons why your code is inefficient (note, your code is not that different from the classic naive definition. Elegant? Sure. Efficient? Goodness, no):
Let's consider what happens when you call
fibonacci 5
That expands into
(fibonacci 4) ++ [(last (fibonacci 4)) + (last (fibonacci 3))]
In addition to concatenating two lists together with ++, we can already see one place where we're being inefficient: we calculate fibonacci 4 twice (in the two places we called fibonacci (n-1)). But it gets worse.
Everywhere it says fibonacci 4, that expands into
(fibonacci 3) ++ [(last (fibonacci 3)) + (last (fibonacci 2))]
And everywhere it says fibonacci 3, that expands into
(fibonacci 2) ++ [(last (fibonacci 2)) + (last (fibonacci 1))]
Clearly, this naive definition has a lot of repeated computations, and it only gets worse as n gets bigger and bigger (say, 1000). fibonacci is not a list, it just returns lists, so it isn't going to magically memoize the results of previous computations.
Additionally, by using last, you have to navigate through the list to get its last element, which adds on top of the problems with this recursive definition (remember, lists in Haskell don't support constant time random access--they aren't dynamic arrays, they are linked lists).
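If you want to watch the duplicated work happen, a quick instrumentation with Debug.Trace makes it visible (this traced variant and the name fibonacciT are my own, not part of the question's code):
import Debug.Trace (trace)

-- traced variant of the question's definition (illustrative only):
-- every call prints its argument, so the repeats show up directly
fibonacciT :: Int -> [Int]
fibonacciT n = trace ("fibonacci " ++ show n) (go n)
  where
    go 1 = [1]
    go 2 = [1, 1]
    go m = fibonacciT (m - 1)
           ++ [last (fibonacciT (m - 1)) + last (fibonacciT (m - 2))]
Running fibonacciT 5 prints fibonacci 4 and fibonacci 3 several times each, which is exactly the repeated computation described above.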
One example of a recursive definition (from the links mentioned) that does keep down on the computations is this:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Here, fibs is actually a list, and we can take advantage of Haskell's lazy evaluation to generate fibs and tail fibs as needed, while the previous computations are still stored inside of fibs. And to get the first five numbers, it's as simple as:
take 5 fibs -- [0,1,1,2,3]
(Optionally, you can replace the first 0 with a 1 if you want the sequence to start at 1).
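For instance, the 1-based variant looks like this (a trivial tweak; fibs1 is my own name for it):
fibs1 = 1 : 1 : zipWith (+) fibs1 (tail fibs1)
take 5 fibs1 -- [1,1,2,3,5]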

For all the ways to implement the Fibonacci sequence in Haskell, just follow this link:
http://www.haskell.org/haskellwiki/The_Fibonacci_sequence

This implementation is inefficient because it makes three recursive calls. If we were to write a recurrence relation for computing fibonacci n to a normal form (note, pedantic readers: not whnf), it would look like:
T(1) = c
T(2) = c'
T(n) = T(n-1) + T(n-1) + T(n-2) + c''
(Here c, c', and c'' are some constants that we don't know.) Here's a recurrence which is smaller:
S(1) = min(c, c')
S(n) = 2 * S(n-1)
...but this recurrence has a nice easy closed form, namely S(n) = min(c, c') * 2^(n-1): it's exponential! Bad news.
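To see how fast T(n) blows up, you can evaluate the recurrence itself with the unknown constants all set to 1 (ts is my own name, defined with the same zipWith trick shown elsewhere in this thread):
ts :: [Integer]
ts = 1 : 1 : zipWith (\a b -> 2 * b + a + 1) ts (tail ts)
-- ts !! (n - 1) is T(n); ts !! 29 is already in the tens of billions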
I like the general idea of your implementation (that is, track the second-to-last and last terms of the sequence together), but you fell down by recursively calling fibonacci multiple times, when that's totally unnecessary. Here's a version that fixes that mistake:
fibonacci 1 = [1]
fibonacci 2 = [1,1]
fibonacci n = case fibonacci (n-1) of
  all@(last:secondLast:_) -> (last + secondLast) : all
This version should be significantly faster. As an optimization, it produces the list in reverse order, but the most important optimization here is making only one recursive call, not the order the list is built in.
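A quick check in GHCi (an illustrative session; the descending order is by design):
ghci> fibonacci 10
[55,34,21,13,8,5,3,2,1,1]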

So even if you wouldn't know about the more efficient ways, how could you improve your solution?
First, looking at the signature, it seems you don't want an infinite list, but a list of a given length. That's fine; the infinite stuff might be too crazy for you right now.
The second observation is that you need to access the end of the list quite often in your version, which is bad. So here is a trick which is often useful when working with lists: write a version that works backwards:
fibRev 0 = []
fibRev 1 = [1]
fibRev 2 = [1,1]
fibRev n = let zs@(x:y:_) = fibRev (n-1) in (x+y) : zs
Here is how the last case works: we get the list which is one element shorter and call it zs. At the same time, we match it against the pattern (x:y:_) (this use of @ is called an as-pattern). This gives us the first two elements of that list. To calculate the next value of the sequence, we just have to add these elements. We put the sum (x+y) in front of the list zs we already have.
Now we have the fibonacci list, but it is backwards. No problem, just use reverse:
fibonacci :: Int -> [Int]
fibonacci n = reverse (fibRev n)
The reverse function isn't that expensive, and we call it only once here.
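For example, in GHCi (an illustrative session):
ghci> fibonacci 8
[1,1,2,3,5,8,13,21]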

Related

Haskell dynamic program efficiency

I am learning more about dynamic programming and was trying to implement it in Haskell. I was running tests with different ways of writing the algorithms and found that one is faster than the other. Here they are for the Fibonacci problem:
fib1 :: [Integer]
fib1 = 0:1:zipWith (+) fib1 (tail fib1)
fib2 :: [Integer]
fib2 = 0:1:[(fib2 !! (n-1)) + (fib2 !! (n-2)) | n <- [2..]]
fib1 is much faster than fib2, but I can't tell why. fib2 seems intuitive: the nth number is the (n-1)st plus the (n-2)nd.
And I get fib1, but it looks like it is zipping over the whole list every time, so wouldn't that take longer than just calculating the next index?
Lists in Haskell are lazy. They're calculated as they're used, and no further.
The function fib1 indeed calculates the whole list, but only does it once, and only up to the index you're asking for.
The function fib2 does a lot of extra work: it potentially calculates elements many, many times over.
Just try to do it with pen and paper. For example, in the case of fib2 !! 5, the list needs to be expanded up to index 5. Calculating fib2 !! 0 and fib2 !! 1 takes little time, as they are constants. The next element, fib2 !! 2 is calculated by adding fib2 !! 0 and fib2 !! 1, and then fib2 !! 3 = fib2 !! 1 + fib2 !! 2, and so on.
BUT.
The most important thing to note here is that the compiler and/or runtime does not memoize the fib2 function, meaning: it does not remember previous calculations. So every time the code hits a fib2 !! n, it starts calculating it all over again, no matter how many times this has been done before, even if it happened in the very same (recursive) function call.
Regarding computational efficiency, your fib2 implementation is equivalent to this:
fib3' :: Integer -> Integer
fib3' 0 = 0
fib3' 1 = 1
fib3' n = fib3' (n - 2) + fib3' (n - 1)
fib3 :: [Integer]
fib3 = [fib3' n | n <- [0..]]
which suffers from the same inefficiency; I merely factored out the list part.
On the other hand, fib1 takes advantage of the previous calculations by using them to avoid re-calculating. And that's the core idea behind dynamic programming: use a data structure that can store and retrieve the results of previous calculations, to exchange a potentially expensive recursive function call for a - hopefully - much cheaper lookup.
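For concreteness, here is a sketch of that idea with an explicit table (my own illustration using Data.Array; fibTable is a hypothetical name): the array cells are lazy, so each entry is computed at most once and then reused.
import Data.Array

fibTable :: Int -> Integer
fibTable n = table ! n
  where
    -- lazily filled table; entry i refers back to entries i-1 and i-2
    table = listArray (0, n) (map f [0 .. n])
    f 0 = 0
    f 1 = 1
    f i = table ! (i - 1) + table ! (i - 2)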
@netom sorry, but I don't think that's what's happening. I ran some timing tests: calculating the 10000th number took 0.7 seconds. In the same run, it was instant to calculate the 10000th + 9999th (the 10001st number), showing it remembered.
I then timed calculating the 10001st freshly, and it took the same time as calculating the 10000th with everything remembered. To calculate the 10001st, it does not recompute the 10000th and 9999th in separate recursions; it behaves as you'd expect if it just indexed the remembered list.
The plain recursive function, however, takes almost twice as long! So both lists are memoized correctly. But as I've found, fib2 pays O(n) at each step to index the list with !!, whereas fib1 produces each step in O(1) via the zip.
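One way to reproduce that comparison is GHCi's timing flag (an illustrative session; exact timings will vary):
ghci> :set +s
ghci> fib1 !! 10000  -- fast: each new element is produced in O(1)
ghci> fib2 !! 10000  -- slower: each new element does two O(n) (!!) walks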

Why don't the `foldr`, `foldr1`, `scanr` and `scanr1` functions have a performance problem when they are applied to big lists?

I read the old Russian translation of the Learn You a Haskell for Great Good! book. I see that the current English version (online) is newer, so I consult it from time to time as well.
The quote:
When you put together two lists (even if you append a singleton list to a list, for instance: [1,2,3] ++ [4]), internally, Haskell has to walk through the whole list on the left side of ++. That's not a problem when dealing with lists that aren't too big. But putting something at the end of a list that's fifty million entries long is going to take a while. However, putting something at the beginning of a list using the : operator (also called the cons operator) is instantaneous.
I assumed that Haskell has to walk through the whole list to get its last item for the foldr, foldr1, scanr and scanr1 functions, and that it would do the same for getting the previous element (and so on for each item).
But I see I was mistaken:
UPD
I tried this code and I see similar processing times for both cases:
data' = [1 .. 10000000]
sum'r = foldr1 (\x acc -> x + acc ) data'
sum'l = foldl1 (\acc x -> x + acc ) data'
Are Haskell lists bidirectional? I assume that to get the last item of a list, Haskell first has to iterate over every item, remembering the needed item (the last one, for example), so that it can later get the previous item of the bidirectional list (for lazy computation). Am I right?
It's tricky since Haskell is lazy.
Evaluating head ([1..1000000]++[1..1000000]) will return immediately, with 1. The lists will never be fully created in memory: only the first element of the first list will be.
If you instead demand the full list [1..1000000]++[1..1000000], then ++ will indeed have to create a two-million-element list.
foldr may or may not evaluate the full list. It depends on whether the function we use is lazy. For example, here's map f xs written using foldr:
foldr (\y ys -> f y : ys) [] xs
This is as efficient as map f xs is: list cells are produced on demand, in a streaming fashion. If we need only the first ten elements of the resulting list, then we indeed create only the first ten cells -- foldr will not be applied to the rest of the list. If we need the full resulting list, then foldr will be run over the full list.
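A tiny demonstration of that streaming (my own example, with f chosen as doubling): even over an infinite list, only the demanded cells are ever built.
take 10 (foldr (\y ys -> y * 2 : ys) [] [1 ..])
-- [2,4,6,8,10,12,14,16,18,20]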
Also note that xs++ys can be defined similarly in terms of foldr:
foldr (:) ys xs
and has similar performance properties.
By comparison, foldl instead always runs over the whole list.
In the example you mention we have longList ++ [something], appending to the end of the list. This only costs constant time if all we demand is the first element of the resulting list. But if we really need the last element we added, then appending will need to run over the whole list. This is why appending at the end is considered O(n) instead of O(1).
In the last update, the question asks about computing the sum with foldr vs foldl, using the (+) operator. In that case, since (+) is strict (it needs both arguments to compute its result), both folds will need to scan the whole list. The performance in such cases can be comparable. Indeed, they would compute, respectively
1 + (2 + (3 + (4 + ..... -- foldr
(...(((1 + 2) + 3) +4) + .... -- foldl
By comparison, foldl' would be more memory efficient, since it starts reducing the above sum before building the above giant expression. That is, it would compute 1+2 first (3), then 3+3 (6), then 6+4 (10), ... keeping in memory only the last result (a single integer) while the list is being scanned.
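Applied to the question's data, that looks like this (a minimal sketch; sum'l2 is a hypothetical name):
import Data.List (foldl')

-- strict left fold: the accumulator is forced at every step, so memory
-- use stays constant instead of building a ten-million-node thunk chain
sum'l2 :: Integer
sum'l2 = foldl' (\acc x -> acc + x) 0 [1 .. 10000000]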
To the OP: the topic of laziness is not easy to grasp the first time, and it is quite vast -- you just met a ton of different examples which have subtle but significant performance differences. It's hard to explain everything succinctly -- it's just too broad. I'd recommend focusing on small examples and digesting those first.

How does (co)recursive definition work in Haskell?

I'm playing around with the language to start learning and I am puzzled beyond my wits about how a recursive definition works.
For example, let's take the sequence of the Triangular numbers (TN n = sum [1..n])
The solution provided was:
triangularNumbers = scanl1 (+) [1..]
So far, so good.
But the solution I did come up with was:
triangularNumbers = zipWith (+) [1..] $ 0 : triangularNumbers
Which is also correct.
Now my question is: how does this translate to a lower level implementation? What exactly happens behind the scenes when such a recursive definition is encountered?
Here is a simple recursive function that gives you the nth Triangular number:
triag 0 = 0
triag n = n + triag (n-1)
Your solution triag' = zipWith (+) [1..] $ 0 : triag' is something fancier: it's corecursive. Instead of calculating the nth number by reducing it to the value of smaller inputs, you define the whole infinite sequence of triangular numbers by recursively specifying the next value, given an initial segment.
How does Haskell handle such corecursion? When it encounters your definition, no calculation is actually performed; it is deferred until results are needed for further computations. When you access a particular element of your list triag', Haskell starts computing the elements of the list based on the definition, up to the element that gets accessed. For more detail, I found this article on lazy evaluation helpful. In summary, lazy evaluation is great to have, unless you need to predict memory usage.
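Roughly, the forcing order for your definition looks like this (a hand expansion, not anything the runtime prints):
triag' :: [Integer]
triag' = zipWith (+) [1 ..] (0 : triag')

-- triag' !! 0 = 1 + 0             = 1
-- triag' !! 1 = 2 + (triag' !! 0) = 3
-- triag' !! 2 = 3 + (triag' !! 1) = 6
-- each element reuses the previous one, so a prefix costs linear time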
Here is a similar SO question, with a step-by-step explanation of the evaluation of fibs = 0 : 1 : zipWith (+) fibs (tail fibs), a corecursive definition of the Fibonacci sequence.

Using non-deterministic list monad to find long Collatz sequences

I wrote the following code to solve Project Euler's No. 14:
The following iterative (Collatz) sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Q: Which starting number, under one million, produces the longest chain?
And my code:
collatz :: Integer -> [Integer]
collatz 1 = [1]
collatz n = filter (< 1000000) prev >>= poss
  where
    prev = collatz (n - 1)
    poss :: Integer -> [Integer]
    poss prev
      | even prev && prev `mod` 3 == 1 && (prev - 1) `div` 3 > 1 =
          [2 * prev, (prev - 1) `div` 3]
      | otherwise = [2 * prev]
Where collatz n returns a list of numbers that will generate a Collatz chain of length n. The problem is, I can only either not restrict the result at all, or restrict the whole chain to be under 1,000,000, instead of only the seed number. Is it possible to use this model to solve the problem at all?
I think that this approach - while interesting - is fundamentally doomed. Suppose I discover that all the seeds which result in a chain of length 500 are above 2,000,000. How can I know that I won't find that in three more steps there's a seed under 1,000,000 that gets me there? I see no way to know when you're done.
The only viable approach I see to this problem is to compute the collatz length for every number from 1 to 999,999 and then do something like:
import Data.Function (on)
import Data.List (maximumBy)

main :: IO ()
main = do
  let collatzMax = maximumBy (compare `on` collatzLength) [1..999999]
  print collatzMax
On the other hand, this provides a great opportunity to learn about CAFs (constant applicative forms), since the function collatzLength could be naively defined as:
collatzLength 1 = 1
collatzLength n | n `mod` 2 == 0 = 1 + collatzLength (n `div` 2)
collatzLength n = 1 + collatzLength (3 * n + 1)
And that kind of recursion screams out for a CAF.
Sure, there are memoization modules that will go and build the CAF for you, but building one yourself is a useful exercise. It's a whole little mini-course in lazy infinitely-recursive data structures.
If that defeats you, you can glance at this spoiler of how to use a CAF and then rewrite it using a different data structure. (what about a 10-way tree instead of a binary tree? What about traversing the tree in a different order? Can you remove the call to showIntAtBase?)
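If it helps to see the general shape, here is one possible CAF-style memoization (a sketch of my own, assuming a fixed one-million bound; the linked spoiler's tree version is more general):
import Data.Array

bound :: Integer
bound = 999999

-- a top-level lazy array (a CAF): each entry is a thunk that is forced
-- at most once and then shared by every later lookup
lengths :: Array Integer Integer
lengths = listArray (1, bound) (map go [1 .. bound])
  where
    go 1 = 1
    go n = 1 + step (if even n then n `div` 2 else 3 * n + 1)
    -- chain values above the bound are computed directly, without the table
    step m
      | m <= bound = lengths ! m
      | otherwise  = go m

collatzLength :: Integer -> Integer
collatzLength n = lengths ! n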
Your idea is interesting, although not the most efficient one. It could be worth trying, although it'll probably be memory-intensive. Some thoughts:
Since some chains can go over 1,000,000, you can't just filter out everything below that limit in collatz. You need to keep all the numbers in each pass.
Calling collatz this way is inefficient, as it computes the sets all over again. Making it an infinite list that shares values would be more efficient:
collatz :: [[Integer]]
collatz = [1] : map (>>= poss) collatz
You need to figure out when you're done. For this, you'd need to go through the number lists generated by collatz and count how many of them are below 1,000,000. Once you have seen all the numbers below the limit, the last list will contain the numbers with the longest chain.
That said, I'm afraid this approach isn't computationally feasible. In particular, you'll generate exponentially many numbers, and exponentially large ones. For example, if the longest chain were 500, the result of collatz at that step would contain numbers up to 2^500. And as mentioned, there is no way to tell which of these huge numbers might be the one leading to the solution, so you can't just discard them.

Haskell script running out of space

I'm using Project Euler to teach myself Haskell, and I'm having some trouble reasoning about how my code is being executed by Haskell. The second problem has me computing the sum of all even Fibonacci numbers up to 4 million. My script looks like this:
fibs :: [Integer]
fibs = 1 : 2 : [ a+b | (a,b) <- zip fibs (tail fibs)]
evens :: Integer -> Integer -> Integer
evens x sum | even x    = x + sum
            | otherwise = sum
main = do
  print (foldr evens 0 (take 4000000 fibs))
Hugs gives the error "Garbage collection fails to reclaim sufficient space", which I assume means that the list entries are not released as they are consumed by foldr.
What do I need to do to fix this? I tried writing a tail-recursive (I think) version that used accumulators, but couldn't get that to work either.
Firstly, you shouldn't use Hugs. It is a toy for teaching purposes only.
GHC, however, is a fast, multicore-ready optimizing compiler for Haskell. Get it here. In particular, it does strictness analysis, and compiles to native code.
The main thing that stands out about your code is the use of foldr on a very large list. Probably you want a tail recursive loop. Like so:
import Data.List
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
evens x sum | even x    = x + sum
            | otherwise = sum
-- sum of even fibs in first 4M fibs
main = print (foldl' evens 0 (take 4000000 fibs))
Besides all this, the first 4M fibs will use a fair amount of space, so it'll take a while.
Here's the sum of the first 400k even fibs, to save you some time (21s). :-)
A number of observations / hints:
the x + sums from evens aren't getting evaluated until the very end
You're taking the first 4,000,000 fibs, not the fibs up to 4,000,000
There is an easier way to do this
Edit in response to comment
I'm not going to tell you what the easier way is, since that's the fun of Project Euler problems. But I will ask you a bunch of questions:
How many even fibs can you have in a row?
How long can you go without an even fib?
If you sum up all the even fibs and all the odd fibs (do this by hand), what do you notice about the sums?
You understood the problem wrong. The actual problem wants you to sum all the even Fibonacci numbers such that the Fibonacci number itself doesn't exceed 4 million (which happens to be only the first 33 Fibonacci numbers).
You are evaluating four million elements of fibs. Those numbers grow exponentially. I don't know how many bytes are required to represent the millionth Fibonacci number; just the one-thousandth Fibonacci number has about 209 decimal digits, so that's going to take some 22 32-bit words just to hold the digits, never mind whatever overhead gmp imposes. And these grow exponentially.
Exercise: calculate the amount of memory needed to hold four million Fibonacci numbers.
Have a look at the Prelude functions takeWhile, filter, even, and sum:
takeWhile (<40) [0..]
filter even $ takeWhile (<40) [0..]
put 'em together:
ans = sum $ filter even $ takeWhile (< 4 * 10^6) fibs
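Assembled into a complete program, assuming the fibs from the question (starting 1, 2):
fibs :: [Integer]
fibs = 1 : 2 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (sum (filter even (takeWhile (< 4 * 10^6) fibs)))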
