Haskell recursion performance calculating central binomial coefficients

I am new to Haskell and learning how to properly use recursion.
The following function (which uses a formula to calculate central binomial coefficients) is extremely slow; for instance, grid (20,20) crashes my laptop. Can you help me understand why?
grid :: (Integer, Integer) -> Integer
grid (1, x) = 1 + x
grid (x, 1) = 1 + x
grid (x, y) = grid (x - 1, y) + grid (x, y - 1)

Notably, there's no caching or memoization in your algorithm. GHC does not do magic and will not optimize away problems like that. For a 5x5 grid you're calling grid 139 times; for a 6x6, 503 times; for a 7x7, 1847; and for a 10x10, 97239. By the time you get to 20x20 you're making so many recursive calls that it's just not feasible. It's the same concept as doing
fib 0 = 1
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)
You're going to have an exponential number of calls as you increase n, slowing you down. Instead, you can approach this problem similarly to how it's solved in the case of the Fibonacci sequence, using lists and memoization:
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
Except here you want it to calculate the binomial coefficients. As for the implementation of such an algorithm, you'll have to figure that out yourself ;) I can point you at a previous answer of mine that solved the problem for generating Pascal's triangle.
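For reference, the shape of that idea looks roughly like this (a sketch, not that answer's exact code): each row of Pascal's triangle is built from the previous one with zipWith (+).
-- Rows of Pascal's triangle; each new row is the previous row added
-- to itself shifted by one position.
pascal :: [[Integer]]
pascal = iterate (\row -> zipWith (+) (0 : row) (row ++ [0])) [1]
For example:
> take 4 pascal
[[1],[1,1],[1,2,1],[1,3,3,1]]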

The reason why the execution of this function slows to a crawl is that it uses multiple recursion, i.e. the function calls itself twice on each recursive call. That means repeated computations take place during the execution of this recursive function, and the time complexity of the computation increases exponentially as the size of the inputs increases.
The effects of this are more noticeable with larger input values like 20.
Let's look at a call to grid(5, 5).
This expands as follows.
grid(5, 5)
grid(4, 5) + grid(5, 4)
(grid(3, 5) + grid(4, 4)) + (grid(4, 4) + grid(5, 3))
((grid(2, 5) + grid(3, 4)) + (grid(3, 4) + grid(4, 3))) +
((grid(3, 4) + grid(4, 3)) + (grid(4, 3) + grid(5, 2)))
...and so on
As you can see, things get out of hand quickly even with small values of x and y: grid(3, 4) and grid(4, 3) are each calculated multiple times. As stated previously, a solution that utilizes zipWith will be much more efficient; a sketch follows.
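For instance, one possible shape of such a solution (a sketch; gridRows and gridFast are made-up names, and the indexing matches the original grid, where grid (1, x) = 1 + x):
-- Each row is derived from the previous one; the self-referential
-- 'row' shares the running sums instead of recomputing them.
nextRow :: [Integer] -> [Integer]
nextRow prev = row where row = zipWith (+) prev (1 : row)

-- Row x lists grid (x, 1), grid (x, 2), ...; row 1 is [2, 3, 4, ...].
gridRows :: [[Integer]]
gridRows = iterate nextRow [2 ..]

gridFast :: Int -> Int -> Integer
gridFast x y = gridRows !! (x - 1) !! (y - 1)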

As explained in the other answers, the problem with your implementation is that the number of recursive calls is exponential, even though the number of distinct values of grid (x,y) that need to be computed is just quadratic.
The solution to the problem is called memoization, which basically means caching values that have been computed before. I definitely recommend writing your own implementation based on lists, as recommended by @bheklilr.
There is however a quick solution offered by existing libraries such as MemoTrie:
import Data.MemoTrie
grid :: (Integer, Integer) -> Integer
grid = memo grid'
grid' :: (Integer, Integer) -> Integer
grid' (1, x) = 1 + x
grid' (x, 1) = 1 + x
grid' (x, y) = grid (x - 1, y) + grid (x, y - 1)
Notice that now grid is defined as a value - it's monomorphic and takes no arguments (although its value is a function). The call to memo creates a single instance of a trie that caches all values and uses grid' to compute values not present in the cache.
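With the cache in place, the case from the question returns immediately; the value is the central binomial coefficient C(40, 20):
> grid (20, 20)
137846528820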

An alternative to memoization is generating the rows iteratively, which reduces the number of computations.
central :: [Integer] -> [Integer]
central x = zipWith (+) x (0:central x)
For example, to generate the next row from the previous one:
> central [1,2,3]
[1,3,6]
or for your grid function
grid x y = (iterate central [1..]) !! x !! y
and with zero-based indexing:
> grid 2 4
35
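As an aside, central just computes running (prefix) sums, so it can equivalently be written with the standard scanl1, or with an explicitly shared list; both variants below are sketches with made-up names:
central' :: [Integer] -> [Integer]
central' = scanl1 (+)

-- Tying the knot, so the partially built row is explicitly shared:
central'' :: [Integer] -> [Integer]
central'' x = ys where ys = zipWith (+) x (0 : ys)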

Related

Haskell dynamic program efficiency

I am learning more about dynamic programming and was trying to implement it in Haskell. I was running tests with different ways to write the algorithms and found that one is faster than the other. Here it is for the Fibonacci problem:
fib1 :: [Integer]
fib1 = 0:1:zipWith (+) fib1 (tail fib1)
fib2 :: [Integer]
fib2 = 0:1:[(fib2 !! (n-1)) + (fib2 !! (n-2)) | n <- [2..]]
fib1 is much faster than fib2, but I can't tell why. fib2 seems intuitive: the nth number is the (n-1)st plus the (n-2)nd.
And I get fib1, but it looks like it is zipping over the whole list every time, so wouldn't that take longer than just calculating the next index?
Lists in Haskell are lazy: their elements are calculated as they are used, and no further.
The function fib1 indeed calculates the whole list, but only does so once, and only up to the index you're asking for.
The function fib2 does a lot of extra work: it potentially calculates elements many, many times over.
Just try to do it with pen and paper. For example, in the case of fib2 !! 5, the list needs to be expanded up to index 5. Calculating fib2 !! 0 and fib2 !! 1 takes little time, as they are constants. The next element, fib2 !! 2 is calculated by adding fib2 !! 0 and fib2 !! 1, and then fib2 !! 3 = fib2 !! 1 + fib2 !! 2, and so on.
BUT.
The most important thing to note here is that the compiler and/or runtime does not memoize the fib2 function, meaning it does not remember previous calculations. So every time the code hits a fib2 !! n, it starts calculating it all over again, no matter how many times this has been done before, even within the very same (recursive) function call.
Regarding computational efficiency, your fib2 implementation is equivalent to this:
fib3' :: Integer -> Integer
fib3' 0 = 0
fib3' 1 = 1
fib3' n = fib3' (n - 2) + fib3' (n - 1)
fib3 :: [Integer]
fib3 = [fib3' n | n <- [0..]]
which suffers from the same inefficiency; I merely factored out the list part.
On the other hand, fib1 takes advantage of the previous calculations, using them to avoid re-calculating. And that's the core idea behind dynamic programming: use a data structure that can store and retrieve the results of previous calculations, exchanging a potentially expensive recursive call for a - hopefully - much cheaper lookup.
@netom sorry, but I don't think that's what's happening. I ran some timing tests: calculating the 10000th number took 0.7 seconds, while in the same run it was instant to calculate the 10000th + 9999th (the 10001st number), showing it remembered.
I then timed a fresh calculation of the 10001st, and it took the same time as calculating the 10000th and remembering the rest. To calculate the 10001st it does not recompute the 10000th and 9999th in separate recursions; it behaves as you'd expect if it just indexed the remembered list.
The recursive function however takes almost twice as long! So both are using dynamic programming correctly. But as I've found, fib2 takes O(n) per step to index into the list, while fib1 zips along in O(1) per step.

Implementing an infinite list with seq to improve time complexity in Haskell

I don't quite understand how exactly seq works, or how to implement a function using it.
So as an exercise, suppose I want to implement a function that generates an infinite list starting from a number n.
For example, I'm using the following function
countup n = n : countup (n+1)
and trying to get [1, 2, 3, 4, ...] (starting from 1, just incrementing by 1 and adding to the list) as an infinite lazy list.
How should I do this?
UPDATED:
I'm trying to make (take k (countup 0)) run in O(1) space.
Here, the take function is as follows:
-- note: this take returns the element at index k (like !!), not a prefix
take k (x:xl) =
  if k == 0
    then x
    else take (k - 1) xl
countup n = n : countup (n+1)
Each element of the list is created as a thunk previousElement + 1. So if you take the 1000000th or whatever element, that will be a very large thunk (...(n + 1)... + 1) containing ~1000000 suspensions. Even though the (:) cells can be GC'd as soon as they are consumed (so traversing the list spine itself takes O(1) space), the elements themselves duplicate the structure of the list, and so take k (countup n) still takes O(k) space.
We would like it if evaluating the : cells would also evaluate the elements. We can do that with
countup n = seq n $ n : countup (n + 1)
Now, when evaluating seq n $ n : countup (n + 1), seq will cause both n and n : countup (n + 1) to be evaluated. Evaluating the latter does nothing (it is already evaluated), and evaluating the former performs any thunked addition so that the + 1s never build up. With this definition, take k (countup n) takes O(1) space (or, really O(log(n + k))).
We can also write the improved function as
countup !n = n : countup (n + 1)
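Note that the !n form needs the BangPatterns extension enabled; a self-contained sketch:
{-# LANGUAGE BangPatterns #-}

-- The bang forces n before the cons cell is built, so chains of
-- (+ 1) thunks never accumulate in the elements.
countup :: Integer -> [Integer]
countup !n = n : countup (n + 1)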

Using non-deterministic list monad to find long Collatz sequences

I wrote the following code to solve Project Euler's No. 14:
The following iterative (Collatz) sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Q: Which starting number, under one million, produces the longest chain?
And my code:
collatz :: Integer -> [Integer]
collatz 1 = [1]
collatz n = filter (< 1000000) prev >>= poss
  where
    prev = collatz (n - 1)
    poss :: Integer -> [Integer]
    poss prev
      | even prev && prev `mod` 3 == 1 && (prev - 1) `div` 3 > 1 = [2 * prev, (prev - 1) `div` 3]
      | otherwise = [2 * prev]
Where collatz n returns a list of numbers that will generate a Collatz chain of length n. The problem is, I can only either not restrict the result at all, or restrict the whole chain, instead of only the seed number, to be under 1,000,000. Is it possible to use this model to solve the problem at all?
I think that this approach - while interesting - is fundamentally doomed. Suppose I discover that all the seeds which result in a chain of length 500 are above 2,000,000. How can I know that I won't find that in three more steps there's a seed under 1,000,000 that gets me there? I see no way to know when you're done.
The only viable approach I see to this problem is to compute the collatz length for every number from 1 to 999,999 and then do something like:
import Data.Function (on)
import Data.List (maximumBy)

main :: IO ()
main = do
  let collatzMax = maximumBy (compare `on` collatzLength) [1 .. 999999]
  print collatzMax
On the other hand, this provides a great opportunity to learn about CAFs since the function collatzLength could be naively defined as:
collatzLength 1 = 1
collatzLength n | n `mod` 2 == 0 = 1 + collatzLength (n `div` 2)
collatzLength n = 1 + collatzLength (3 * n + 1)
And that kind of recursion screams out for a CAF (a constant applicative form: a top-level value that is computed once and then cached for the rest of the run).
Sure, there are memoization modules that will go and build the CAF for you, but building one yourself is a useful exercise. It's a whole little mini-course in lazy infinitely-recursive data structures.
If that defeats you, you can glance at this spoiler of how to use a CAF and then rewrite it using a different data structure. (what about a 10-way tree instead of a binary tree? What about traversing the tree in a different order? Can you remove the call to showIntAtBase?)
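If you'd like a taste of the idea without reading the spoiler, here is a minimal sketch of my own (the names are made up, and it uses a lazy boxed array rather than a tree as the CAF):
import Data.Array (Array, listArray, (!))

limit :: Int
limit = 999999

-- The array is a CAF: built lazily on first use, then shared by every
-- subsequent lookup. Int is fine on 64-bit systems; the chain values
-- stay far below maxBound.
lengthsArr :: Array Int Int
lengthsArr = listArray (1, limit) (map go [1 .. limit])
  where
    go 1 = 1
    go n = 1 + collatzLength (next n)

next :: Int -> Int
next n = if even n then n `div` 2 else 3 * n + 1

-- Chains can climb above the limit before falling back down, so fall
-- back to direct recursion for out-of-range arguments.
collatzLength :: Int -> Int
collatzLength n
  | n <= limit = lengthsArr ! n
  | otherwise  = 1 + collatzLength (next n)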
Your idea is interesting, although not the most efficient one. It could be worth trying, although it'll probably be memory intensive. Some thoughts:
Some chains can go over 1,000,000 along the way, so you can't just throw away the numbers above the limit in collatz. You need to keep all the numbers in each pass.
Calling collatz this way is inefficient, as it computes the sets all over again. Making it an infinite list that shares values would be more efficient:
collatz :: [[Integer]]
collatz = [1] : map (>>= poss) collatz
You need to figure out when you're done. For this, you'd need to go through the number lists generated by collatz and count how many of them are below 1,000,000. Once you have seen every number below the limit, the last list will contain the numbers with the longest chain.
That said, I'm afraid this approach isn't computationally feasible. In particular, you'll generate exponentially many numbers, and exponentially large ones. For example, if the longest chain were 500, the result of collatz at that step would contain numbers up to 2^500. And as mentioned, there is no way to tell which of these huge numbers might lead to the solution, so you can't just discard them.

puzzle function with fold in haskell?

I am having a hard time trying to figure out how this function works, and I need an explanation from an expert. I appreciate any help!
puzzle n x = scanr (\y acc -> (acc + y/acc)/2) 1 (replicate n x)
I tried running these:
--puzzle 10 2
--puzzle 10 5
--puzzle 10 36
and it gives me this output, respectively:
[1.414213562373095,1.414213562373095,1.414213562373095,1.414213562373095,1.414213562373095,1.414213562373095,1.4142135623746899,1.4142156862745097,1.4166666666666665,1.5,1.0]
[2.23606797749979,2.23606797749979,2.23606797749979,2.23606797749979,2.23606797749979,2.236067977499978,2.2360688956433634,2.238095238095238,2.3333333333333335,3.0,1.0]
[6.0,6.0,6.0,6.0,6.000000005333189,6.0002529841194185,6.055351744849479,6.872226737643129,10.222972972972974,18.5,1.0]
This function calculates a square root using Newton's method and stores all the iteration results in a list.
See Newton's method on Wikipedia.
The storing of successive results is based on the definition of the scanr function:
scanr is similar to foldr, but returns a list
of successive reduced values from the right
It makes a list of n x-es, like [x, x, x, x, x, ...] (n times), and then, folding from the right starting with the seed value 1, computes:
x1 = ( 1 + x/1)/2
x2 = (x1 + x/x1)/2
x3 = (x2 + x/x2)/2
x4 = (x3 + x/x3)/2
and the result is [xn, x(n-1), ..., x2, x1, 1] - the seed 1 ends up as the last element, as you can see in the outputs above.
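The same iteration can also be written with iterate, producing the approximations in order; a sketch (sqrtNewton is a made-up name):
-- Successive Newton approximations to sqrt x, starting from the guess 1.
-- head (puzzle n x) corresponds to sqrtNewton x !! n.
sqrtNewton :: Double -> [Double]
sqrtNewton x = iterate (\acc -> (acc + x / acc) / 2) 1
For example:
> take 5 (sqrtNewton 2)
[1.0,1.5,1.4166666666666665,1.4142156862745097,1.4142135623746899]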

Haskell inefficient fibonacci implementation

I am new to Haskell and just learning the fun of functional programming, but I have run into trouble right away with an implementation of the Fibonacci function. Please find the code below.
--fibonacci :: Num -> [Num]
fibonacci 1 = [1]
fibonacci 2 = [1,1]
--fibonacci 3 = [2]
--fibonacci n = fibonacci n-1
fibonacci n = fibonacci (n-1) ++ [last(fibonacci (n-1)) + last(fibonacci (n-2))]
Rather awkward, I know. I can't find time to look up and write a better one. Though I wonder what makes this so inefficient. I know I should look it up, just hoping someone would feel the need to be pedagogic and spare me the effort.
orangegoat's answer and Sec Oe's answer contain a link to probably the best place to learn how to properly write the fibonacci sequence in Haskell, but here's some reasons why your code is inefficient (note, your code is not that different from the classic naive definition. Elegant? Sure. Efficient? Goodness, no):
Let's consider what happens when you call
fibonacci 5
That expands into
(fibonacci 4) ++ [(last (fibonacci 4)) + (last (fibonacci 3))]
In addition to concatenating two lists together with ++, we can already see that one place we're being inefficient is that we calculate fibonacci 4 twice (in the two places we called fibonacci (n-1)). But it gets worse.
Everywhere it says fibonacci 4, that expands into
(fibonacci 3) ++ [(last (fibonacci 3)) + (last (fibonacci 2))]
And everywhere it says fibonacci 3, that expands into
(fibonacci 2) ++ [(last (fibonacci 2)) + (last (fibonacci 1))]
Clearly, this naive definition has a lot of repeated computations, and it only gets worse when n gets bigger and bigger (say, 1000). fibonacci is not a list; it just returns lists, so it isn't going to magically memoize the results of the previous computations.
Additionally, by using last, you have to navigate through the list to get its last element, which adds on top of the problems with this recursive definition (remember, lists in Haskell don't support constant time random access--they aren't dynamic arrays, they are linked lists).
One example of a recursive definition (from the links mentioned) that does keep down on the computations is this:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Here, fibs is actually a list, and we can take advantage of Haskell's lazy evaluation to generate fibs and tail fibs as needed, while the previous computations are still stored inside of fibs. And to get the first five numbers, it's as simple as:
take 5 fibs -- [0,1,1,2,3]
(Optionally, you can replace the first 0 with a 1 if you want the sequence to start at 1).
For all the ways to implement the Fibonacci sequence in Haskell, just follow the link:
http://www.haskell.org/haskellwiki/The_Fibonacci_sequence
This implementation is inefficient because it makes three recursive calls. If we were to write a recurrence relation for computing fibonacci n to a normal form (note, pedantic readers: not whnf), it would look like:
T(1) = c
T(2) = c'
T(n) = T(n-1) + T(n-1) + T(n-2) + c''
(Here c, c', and c'' are some constants that we don't know.) Here's a recurrence which is smaller - it bounds T from below, up to constants:
S(1) = min(c, c')
S(n) = 2 * S(n-1)
...but this recurrence has a nice easy closed form, namely S(n) = min(c, c') * 2^(n-1): it's exponential! Bad news.
I like the general idea of your implementation (that is, track the second-to-last and last terms of the sequence together), but you fell down by recursively calling fibonacci multiple times, when that's totally unnecessary. Here's a version that fixes that mistake:
fibonacci 1 = [1]
fibonacci 2 = [1,1]
fibonacci n = case fibonacci (n - 1) of
  all@(last:secondLast:_) -> (last + secondLast) : all
This version should be significantly faster. As an optimization it produces the list in reverse order, but the most important optimization here was making only one recursive call - not the way the list is built.
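In GHCi (note the reversed order):
> fibonacci 5
[5,3,2,1,1]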
So even if you wouldn't know about the more efficient ways, how could you improve your solution?
First, looking at the signature it seems you don't want an infinite list, but a list of a given length. That's fine, the infinite stuff might be too crazy for you right now.
The second observation is that you need to access the end of the list quite often in your version, which is bad. So here is a trick which is often useful when working with lists: Write a version that work backwards:
fibRev 0 = []
fibRev 1 = [1]
fibRev 2 = [1,1]
fibRev n = let zs@(x:y:_) = fibRev (n-1) in (x+y) : zs
Here is how the last case works: We get the list which is one element shorter and call it zs. At the same time we match against the pattern (x:y:_) (this use of @ is called an as-pattern). This gives us the first two elements of that list. To calculate the next value of the sequence, we just have to add these elements. We put the sum (x+y) in front of the list zs we already got.
Now we have the fibonacci list, but it is backwards. No problem, just use reverse:
fibonacci :: Int -> [Int]
fibonacci n = reverse (fibRev n)
The reverse function isn't that expensive, and we call it here only once.
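For example:
> fibonacci 5
[1,1,2,3,5]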
