What does "!!" mean in haskell? - haskell

There are two functions written in the Haskell wiki website:
Function 1
fib = (map fib' [0 ..] !!)
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
Function 2
fib x = map fib' [0 ..] !! x
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
What does the "!!" mean?

This is actually more difficult to read than it might seem at first, because operators in Haskell are more generic than in other languages.
The first thing we are all tempted to tell you is to go look this up yourself. If you do not already know about Hoogle, this is the time to become familiar with it. You can ask it what a function does by name, or (and this is even cooler) you can give it the type of a function and it will suggest functions that implement that type.
Here is what Hoogle tells you about this function (operator):
(!!) :: [a] -> Int -> a
List index (subscript) operator, starting from 0. It is an
instance of the more general genericIndex, which takes an index
of any integral type.
Let us assume that you need help reading this. The first line tells us that (!!) is a function that takes a list of things ([a]) and an Int, and gives you back one of the things in the list (a). The description tells you what it does: it gives you the element of the list indexed by the Int. So xs !! i works like xs[i] would in Java, C or Ruby.
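For instance (a couple of throwaway examples of my own, not from the Hoogle entry):
atOne :: Int
atOne = [10, 20, 30] !! 1       -- 20: indexing starts at 0

atTwo :: Int
atTwo = (!!) [10, 20, 30] 2     -- 30: the same operator applied as a prefix function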
Now we need to talk about how operators work in Haskell. I'm not going to give you the whole story here, but I will at least let you know that there is more going on than what you would encounter in other programming languages. Operators "always" take two arguments and return something (a -> b -> c). You can use them just like a normal function:
x + y
(+) x y -- same as above: the operator applied like a normal prefix function
But, by default, you can also use them between expressions (the word for this is 'infix'). You can also make a normal function work like an operator with backticks:
add x y
x `add` y -- same as above: a normal function used infix, thanks to the backticks
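(add is not a standard Prelude function; a minimal definition, assumed here only so the example typechecks, might be:)
-- hypothetical helper, not part of the original answer
add :: Int -> Int -> Int
add x y = x + y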
What makes the first code example you gave confusing (especially for new Haskell coders) is that the !! operator is used as a function rather than in the typical operator (infix) position. Let me add some bindings so it is clearer:
-- return the ith Fibonacci number
fib :: Int -> Int -- (actually more general than this, but don't worry about it)
fib i = fibs !! i
  where
    fibs :: [Int]
    fibs = map fib' [0 ..]

    fib' :: Int -> Int
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
You can now work your way back to example 1. Make sure you understand what map fib' [0 ..] means.
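If it helps, here is a rough, self-contained sketch of what map fib' [0 ..] denotes: the infinite list of Fibonacci numbers, of which you can force a prefix with take. (fib' is written self-recursively below just to show the values; in the answer above it calls back into fib.)
fib' :: Int -> Int
fib' 0 = 0
fib' 1 = 1
fib' n = fib' (n - 1) + fib' (n - 2)

firstEight :: [Int]
firstEight = take 8 (map fib' [0 ..])   -- [0,1,1,2,3,5,8,13]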
I'm sorry your question got down-voted: if you understood what was going on, the answer would have been easy to look up, but if you don't know about operators as they exist in Haskell, it is very hard to mentally parse the code above.

(!!) :: [a] -> Int -> a
List index (subscript) operator, starting from 0. It is an instance of the more general genericIndex, which takes an index of any integral type.
See here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-List.html#g:16

Related

Fibonacci numbers without using zipWith

I have been trying to implement a list of the Fibonacci number sequence from 0 to n without using the lazy zipWith method. What I have so far is code that returns a list from n to 1. Is there any way I can change this code so that it returns the list from 0 to n?
Example:
fib_seq 4 = [3,2,1,1]
-- output wanted: [1,1,2,3]
If there is not a way to do what I want the code to do, is there a way to just return the list of Fibonacci numbers, taking in a number, say again 4, so that it would return [0, 1, 1, 2]?
fib_seq :: Int -> [Int]
fib_seq 0 = [0]
fib_seq 1 = [1]
fib_seq n = sum (take 2 (fib_seq (n-1))) : fib_seq (n-1)
Another way you could choose to implement the Fibonacci numbers is to use a helper function together with a function on its own that produces the infinite list of Fibonacci numbers; you can then use take 10 fibs to get the first 10 Fibonacci numbers. My function is definitely not the fastest way to work out the Fibonacci numbers infinitely (that would be with the zipWith function), but since you are not using that here, here is my way to implement it without zipWith.
For example, take 10 fibs would return: [0,1,1,2,3,5,8,13,21,34]
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
fibs :: [Int]
fibs = map fib [0..]
It is often the case that you can solve a problem by considering a slightly more general version of it.
Say we want the infinite Fibonacci list starting with two prescribed initial values a and b. There is an obvious recursive solution:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
...
λ>
λ> aux_fib a b = a : (aux_fib b (a+b))
λ>
λ> take 4 (aux_fib 1 1)
[1,1,2,3]
λ>
And so:
λ>
λ> fib_seq n = take n (aux_fib 1 1)
λ>
λ> fib_seq 4
[1,1,2,3]
λ>
Note: camel case is regarded as more idiomatic in Haskell, so it would be more like auxFib and fibSeq.
If you wanted to have the list start from 0, you could use a helper function and then use this helper function within your fib_seq (which I recommend you rename to camel case, so fibSeq, standard Haskell notation).
OK, so with the functions below, fibSeq 7 would return [0,1,1,2,3,5,8]:
fibHelp :: Int -> Int -> [Int]
fibHelp x y = x : (fibHelp y (x+y))
fibSeq :: Int -> [Int]
fibSeq n = take n (fibHelp 0 1)
It feels a bit like cheating, but you could use the closed formula for the Fibonacci sequence like this:
fib n = (phi^n - psi^n) / sqrt 5
  where
    phi = (1 + sqrt 5) / 2
    psi = (1 - sqrt 5) / 2

fibSeq n = fib <$> [1 .. n]
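One caveat worth adding (my note, not part of the answer above): with Double arithmetic the formula returns floating-point numbers and loses precision as n grows. A rounded variant might look like this, and it can only be trusted while the floating-point error stays well below 0.5 (roughly up to n ≈ 70):
-- closed-form Fibonacci, rounded back to an Integer; only accurate for small n
fibClosed :: Int -> Integer
fibClosed n = round ((phi ^ n - psi ^ n) / sqrt 5)
  where
    phi, psi :: Double
    phi = (1 + sqrt 5) / 2
    psi = (1 - sqrt 5) / 2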
Otherwise the Haskell Wiki has many more implementation variants to choose from. For example, very succinctly:
fibs = 0 : 1 : next fibs
  where
    next (a : t@(b:_)) = (a+b) : next t
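As a quick check (assuming the fibs just defined is in scope), forcing a prefix gives the expected sequence:
firstTen :: [Integer]
firstTen = take 10 fibs   -- [0,1,1,2,3,5,8,13,21,34]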

How does Haskell manage memory of recursive function calls

I have been working on a problem that benefits a lot from caching the results of my functions, and in my research I came across this article. I am stunned at how simple the core of the "Memoization with recursion" section is, namely:
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n-2) + memoized_fib (n-1)
I feel like I understand how it works, but do correct me if I'm wrong: this function saves a list which is populated using the same function.
What bothers me is that I don't understand why this works. Originally I was under the impression that once Haskell evaluates a function it releases the memory that was used to store variables inside that function, but here it seems that if part of the list was evaluated by one call of this function, those values are still available to another call of the same function.
Just typing this up makes my head hurt, because I don't understand why a value used in the calculation of fib 2 should be available in the calculation of fib 3, or better yet fib 100.
My gut feeling tells me that this behavior has two problems (I'm probably wrong, but again I'm not sure why):
purity: we are evaluating one call using a value that did not arrive through the parameters of this function
memory leaks: I'm no longer sure when Haskell will release the memory taken by this list
I think it's easier to understand if you compare your definition to this:
not_memoized_fib :: Int -> Integer
not_memoized_fib m = map fib [0 ..] !! m
  where fib 0 = 0
        fib 1 = 1
        fib n = not_memoized_fib (n-2) + not_memoized_fib (n-1)
The definition above is essentially the same as yours, except that it takes an explicit argument m. It is a so-called eta-expansion of the previous function, and is semantically equivalent to it. Yet, operationally, this has drastically worse performance, since memoization here does not take place.
Why? Well, your function defines the list map fib [0..] before taking the (implicit) input parameter m, so there is only one list around, for all m we may pass later on as arguments. Instead, in not_memoized_fib we first take m as input, and then define the list, making the function create a list for every call to not_memoized_fib, destroying performance.
It is even easier to see if we use let and lambdas instead of where. Compare
memoized :: Int -> Integer
memoized = let
     list = map fib [0..]
     fib 0 = 0
     fib 1 = 1
     fib n = memoized (n-1) + memoized (n-2)
  in \m -> list !! m
  --  ^^ here we take m, after everything is defined,
with its let over lambda (*) code structure, to
not_memoized :: Int -> Integer
not_memoized = \m -> let
  --           ^^ here we take m, before everything is defined, so
  --              we define local bindings {list,fib} at every call
     list = map fib [0..]
     fib 0 = 0
     fib 1 = 1
     fib n = not_memoized (n-1) + not_memoized (n-2)
  in list !! m
with the let inside the lambda.
In the former case, there is only one list around, while in the latter there is one list for each call.
(*) a searchable term.
The list defined by map fib [0..] is defined as part of the definition of the function, rather than being created each time the function is called. Due to laziness, though, the list is only "realized" as necessary for any given call.
Say your first call is memoized_fib 10. This will cause the first 10 Fibonacci numbers to actually be computed and stored in memory, and they will stay in memory for the duration of the program. Subsequent calls with a smaller argument don't need to compute anything; subsequent calls with larger arguments need only compute those elements that occur later in the list than the largest existing element.
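If you want to watch the sharing happen, here is a small sketch of my own (not from the article) that wraps the recursive case in Debug.Trace.trace. Each list element prints a message the first time it is forced; a second call with the same argument prints nothing new, because the elements are already evaluated:
import Debug.Trace (trace)

memoizedFib :: Int -> Integer
memoizedFib = (map fib [0 ..] !!)
  where
    fib 0 = 0
    fib 1 = 1
    fib n = trace ("computing element " ++ show n)
                  (memoizedFib (n - 2) + memoizedFib (n - 1))

main :: IO ()
main = do
  print (memoizedFib 10)  -- prints one "computing element n" line per new index, then 55
  print (memoizedFib 10)  -- prints only 55: the list elements are reused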

Non-pointfree style is substantially slower

I have the following, oft-quoted code for calculating the nth Fibonacci number in Haskell:
fibonacci :: Int -> Integer
fibonacci = (map fib [0..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = fibonacci (n-2) + fibonacci (n-1)
Using this, I can do calls such as:
ghci> fibonacci 1000
and receive an almost instantaneous answer.
However, if I modify the above code so that it's not in pointfree style, i.e.
fibonacci :: Int -> Integer
fibonacci x = (map fib [0..] !!) x
  where fib 0 = 0
        fib 1 = 1
        fib n = fibonacci (n-2) + fibonacci (n-1)
it is substantially slower. To the extent that a call such as
ghci> fibonacci 1000
hangs.
My understanding was that the above two pieces of code were equivalent, but GHCi begs to differ. Does anyone have an explanation for this behaviour?
To observe the difference, you should probably look at Core. My guess is that this boils down to comparing (roughly)
let f = map fib [0..] in \x -> f !! x
to
\x -> let f = map fib [0..] in f !! x
The latter will recompute f from scratch on every invocation. The former does not, effectively caching the same f for each invocation.
It happens that in this specific case, GHC was able to optimize the second into the first, once optimization is enabled.
Note however that GHC does not always perform this transformation, since this is not always an optimization. The cache used by the first is kept in memory forever. This might lead to a waste of memory, depending on the function at hand.
I tried to find it but struck out. I think I have it on my PC at home.
What I read was that functions using fixed point were inherently faster.
There are other reasons for using fixed point. I encountered one while writing this iterative Fibonacci function. I wanted to see how an iterative version would perform, then realized I had no ready way to measure. I am a Haskell neophyte, but here is an iterative version for someone to test.
I could not get this to define unless I used the dot (composition) after the first last function.
I could not reduce it further. The [0,1] list is fixed and is not supplied as a parameter value.
Prelude> fib = last . flip take (iterate (\ls -> ls ++ [last ls + last (init ls)]) [0,1])
Prelude> fib 25
[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,4181,6765,10946,17711,28657,46368,75025]
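For comparison, a slightly different iterative sketch (my own, not the poster's) keeps a running pair instead of a growing list, so it returns the nth number directly rather than the whole list:
-- iterate over (current, next) pairs; the nth pair's first component is the
-- nth Fibonacci number, counting from 0
fibIter :: Int -> Integer
fibIter n = fst (iterate (\(a, b) -> (b, a + b)) (0, 1) !! n)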

Why does memoization not work?

After reading a memoization introduction I reimplemented the Fibonacci example by using a more general memoize function (only for learning purposes):
memoizer :: (Int -> Integer) -> Int -> Integer
memoizer f = (map f [0 ..] !!)

memoized_fib :: Int -> Integer
memoized_fib = memoizer fib
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n-2) + memoized_fib (n-1)
This works, but when I just change the last line to the following code, memoization suddenly does not work as I expected (the program becomes slow again):
fib n = memoizer fib (n-2) + memoizer fib (n-1)
Where is the crucial difference w.r.t. to memoization?
It is about explicit vs. implicit sharing. When you explicitly name a thing, it naturally can be shared, i.e. exist as a separate entity in memory, and be reused. (Of course sharing is not part of the language per se; we can only nudge the compiler ever so slightly towards sharing certain things.)
But when you write the same expression twice or thrice, you rely on the compiler to replace the common sub-expressions with one explicitly shared entity. That might or might not happen.
Your first variant is equivalent to
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!) where
  fib 0 = 0
  fib 1 = 1
  fib n = memoized_fib (n-2) + memoized_fib (n-1)
Here you specifically name an entity, and refer to it by that name. But that is a function. To make the reuse even more certain, we can name the actual list of values that gets shared here, explicitly:
memoized_fib :: Int -> Integer
memoized_fib = (fibs !!) where
  fibs = map fib [0 ..]
  fib 0 = 0
  fib 1 = 1
  fib n = memoized_fib (n-2) + memoized_fib (n-1)
The last line can be made yet more visually apparent, with explicit reference to the actual entity which is shared here - the list fibs which we just named in the step above:
fib n = fibs !! (n-2) + fibs !! (n-1)
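Putting those pieces together (this is just my assembly of the lines above, with the same behaviour), the fully explicit variant reads:
memoized_fib :: Int -> Integer
memoized_fib = (fibs !!)
  where
    fibs  = map fib [0 ..]
    fib 0 = 0
    fib 1 = 1
    fib n = fibs !! (n - 2) + fibs !! (n - 1)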
Your second variant is equivalent to this:
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0 ..] !!) where
  fib 0 = 0
  fib 1 = 1
  fib n = (map fib [0 ..] !!) (n-2) + (map fib [0 ..] !!) (n-1)
Here we have three seemingly independent map expressions, which might or might not get shared by a compiler. Compiling it with ghc -O2 seems to reintroduce sharing, and with it the speed.
memoized_fib = ... is a simple top-level definition. It might be read as a constant lazy value (without any additional arguments that need to be bound before it can be expanded). That is kind of the "source" of your memoized values.
When you use (memoizer fib) (n-2), each such use creates a new source of values which has no relation to memoized_fib, and thus it isn't reused. You also put a lot of load on the GC here, since you produce a lot of (map fib [0 ..]) sequences in the second variant.
Consider also a simpler example:
f = \n -> sq !! n where sq = [x*x | x <- [0 ..]]
g n = sq !! n where sq = [x*x | x <- [0 ..]]
The first will generate a single f with its associated sq, because there is no n at the head of the declaration. The second will produce a separate list for each call g n and walk over it (without it ever being bound down to actual, reusable values) to get the result.

Memoization with recursion

I am trying to understand the Haskell realization of memoization, but I don't get how it works:
memoized_fib :: Int -> Integer
memoized_fib = (map fib [0..] !!)
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n - 2) + memoized_fib (n - 1)
First of all, I don't even understand why the map function seems to get three parameters (the function fib, the list [0..], and !!) rather than the two it is supposed to take.
Updated:
I have tried to rewrite the code, but I get a different result:
f' :: (Int -> Int) -> Int -> Int
f' mf 0 = 0
f' mf 1 = 1
f' mf n = mf(n - 2) + mf(n - 1)
f'_list :: [Int]
f'_list = map (f' faster_f') [0..]
faster_f' :: Int -> Int
faster_f' n = f'_list !! n
Why? Is there any error in my reasoning?
First: Haskell supports operator sections. So (+ 2) is equal to \ x -> x + 2. This means the expression with map is equal to \ x -> map fib [0..] !! x.
Secondly, how this works: this is taking advantage of Haskell's call-by-need evaluation strategy (its laziness).
Initially, the list which results from the map is not evaluated. Then, when you need to get to some particular index, all the elements up to that point get evaluated. However, once an element is evaluated, it does not get evaluated again (as long as you're referring to the same element). This is what gives you memoization.
Basically, Haskell's lazy evaluation strategy involves memoizing forced values. This memoized fib function just relies on that behavior.
Here "forcing" a value means evaluating a deferred expression called a thunk. So the list is basically initially stored as a "promise" of a list, and forcing it turns that "promise" into an actual value, and a "promise" for more values. The "promises" are just thunks, but I hope calling it a promise makes more sense.
I'm simplifying a bit, but this should clarify how the actual memoization works.
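As a tiny illustration of what forcing a thunk means (an example of mine, not part of the answer): a let-bound value is stored as a single thunk, so it is evaluated at most once no matter how many times it is used afterwards.
import Debug.Trace (trace)

main :: IO ()
main = do
  let shared = trace "forcing the thunk" (sum [1 .. 1000000] :: Integer)
  print shared  -- first use: prints "forcing the thunk", then 500000500000
  print shared  -- second use: prints only the number; the thunk was updated in place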
map does not take three parameters here.
(map fib [0..] !!)
partially applies (sections) the operator (!!), with map fib [0..], a list, as its first (left-hand) argument.
Maybe it's clearer written as:
memoized_fib n = (map fib [0..]) !! n
so it's just taking the nth element from the list, and the list is evaluated lazily.
This operator section stuff is exactly the same as normal partial application, but for infix operators. In fact, if we write the same form with a regular function instead of the !! infix operator, see how it looks:
import Data.List (genericIndex)
memoized_fib :: Int -> Integer
memoized_fib = genericIndex (map fib [0..])
  where fib 0 = 0
        fib 1 = 1
        fib n = memoized_fib (n - 2) + memoized_fib (n - 1)

Resources