Mutual recursion -- can someone help explain how this code works? - haskell

I'm reading through "A Gentle Introduction to Haskell," and early on it uses this example, which works fine in GHC and horribly in my brain:
initial = 0
next resp = resp
process req = req+1
reqs = client initial resps
resps = server reqs
server (req:reqs) = process req : server reqs
client initial ~(resp:resps) = initial : client (next resp) resps
And the calling code:
take 10 reqs
The way I'm seeing it, reqs is evaluated, yielding a call to client with arguments 0 and resps. Wouldn't resps then need to be evaluated... which in turn evaluates reqs again? It all seems so infinite... if someone could detail how it's actually working, I'd be most appreciative!

I find that it's usually worthwhile to work out the behavior of small Haskell programs by hand. The evaluation rules are quite simple. The key thing to remember is that Haskell is non-strict (aka lazy): expressions are evaluated only when needed. Laziness is the reason seemingly infinite definitions can yield useful results. In this case, using take means we will only need the first 10 elements of the infinite list reqs: they are all we "need".
In practical terms, "need" is generally driven by pattern matches. E.g., a list expression will generally be evaluated up to the point where we can distinguish between [] and (x:xs) before a function is applied to it. (Note that a '~' preceding a pattern, as in the definition of client, makes it lazy (or irrefutable): a lazy pattern won't force its argument until one of the variables it binds is actually needed.)
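To see the difference concretely, here is a minimal sketch you can try in GHCi (the names f and g are made up for illustration):
f ~(x:xs) = "lazy"   -- irrefutable: never inspects its argument
g  (x:xs) = "strict" -- must see a cons cell before returning

Prelude> f undefined
"lazy"
Prelude> g undefined
*** Exception: Prelude.undefined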
Remembering that take is:
take 0 _ = []
take n (x:xs) = x : take (n-1) xs
The evaluation of take 10 reqs looks like:
take 10 reqs
-- definition of reqs
= take 10 (client initial resps)
-- definition of client [Note: the pattern match is lazy]
= take 10 (initial : (\(resp:resps') -> client (next resp) resps') resps)
-- definition of take
= initial : take 9 ((\(resp:resps') -> client (next resp) resps') resps)
-- definition of initial
= 0 : take 9 ((\(resp:resps') -> client (next resp) resps') resps)
-- definition of resps
= 0 : take 9 ((\(resp:resps') -> client (next resp) resps') (server reqs))
-- definition of reqs
= 0 : take 9 ((\(resp:resps') -> client (next resp) resps') (server (client initial resps)))
-- definition of client
= 0 : take 9 ((\(resp:resps') -> client (next resp) resps') (server (initial : {- elided... -})))
-- definition of server
= 0 : take 9 ((\(resp:resps') -> client (next resp) resps') (process initial : server {-...-}))
-- beta reduction
= 0 : take 9 (client (next (process initial)) (server {-...-}))
-- definition of client
= 0 : take 9 (next (process initial) : {-...-})
-- definition of take
= 0 : next (process initial) : take 8 {-...-}
-- definition of next
= 0 : process initial : take 8 {-...-}
-- definition of process
= 0 : initial+1 : take 8 {-...-}
-- definition of initial
= 0 : 1 : take 8 {-...-}
-- and so on...
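If you load the original definitions into GHCi, you can confirm where this trace is heading:
Prelude> take 10 reqs
[0,1,2,3,4,5,6,7,8,9]
Each request echoes the previous response, and each response is the previous request plus one.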

Understanding this code requires two skills:
distinguishing between a 'definition', which may be infinite (like the list of natural numbers, naturals = 1 : map (\n -> n+1) naturals, or the list of processed requests), and 'reduction', which is the process of mapping actual data onto those definitions
seeing the structure of this client-server application: it's just a pair of processes talking to each other. 'client-server' is a bad name, really: it might as well have been called 'wallace-gromit' or 'foo-bar', or talking philosophers or whatever, because it's symmetrical: the two parties are peers.
As Jon already stated, reduction works in a lazy way (aka 'call by need'): take 2 naturals would not first evaluate the complete list of naturals, but just take the first one and prepend it to take 1 (map (\n -> n+1) naturals), which reduces to [1, 1+1] = [1,2].
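For reference, here is that naturals example as a self-contained sketch (the type signature is my own addition):
naturals :: [Integer]
naturals = 1 : map (\n -> n + 1) naturals

Prelude> take 2 naturals
[1,2]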
Now the structure of the client server app is this (to my eye):
server is a way to create a list of responses out of a list of requests by using the process function
client is a way to create a request based on a response, and append the response of that request to the list of responses.
If you look closely, you see that both are 'a way to create x:xs out of y:ys'. So we could just as well call them wallace and gromit.
Now it would be easy to understand if wallace were called with just a known list of responses:
someresponses = wallace 0 [1,8,9] -- would reduce to [0,1,8,9]
tworesponses = take 2 someresponses -- [0,1]
If the responses are not literally known, but produced by gromit, we can say
gromitsfirstgrunt = 0
otherresponses = wallace gromitsfirstgrunt (gromit otherresponses)
twootherresponses = take 2 otherresponses
-- reduces to 0 : take 1 (wallace (next resp1) resps1), where resp1:resps1 is gromit's output
-- gromit's first response is 0+1 = 1, so this
-- reduces to 0 : take 1 (wallace 1 (gromit ...))
-- reduces to 0 : take 1 (1 : wallace ...)
-- reduces to 0 : 1 : take 0 ...
-- reduces to [0,1]
One of both peers needs to 'start' the discussion, hence the initial value provided to wallace.
Also note the ~ before the list pattern of wallace (client in the original code): this tells Haskell not to inspect the contents of that list argument right away; the match is deferred until a response is actually needed. There's a nice topic on that in a wikibook on Haskell (look for "Lazy Pattern Matching").
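Putting the renaming together, here is a runnable sketch of the symmetrical pair (the type signatures and the inlining of next and process are my own additions):
wallace :: Int -> [Int] -> [Int]
wallace first ~(resp:resps) = first : wallace resp resps  -- next is the identity, so it is inlined

gromit :: [Int] -> [Int]
gromit (req:reqs) = (req + 1) : gromit reqs               -- process req = req + 1

otherresponses :: [Int]
otherresponses = wallace 0 (gromit otherresponses)

Prelude> take 2 otherresponses
[0,1]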

It's been a while since I've played with Haskell, but I'm pretty sure that it's lazily evaluated, meaning it only calculates what it actually needs. So while reqs is infinitely recursive, since take 10 reqs only needs the first 10 elements of the list returned, that is all that is actually calculated.

It looks like nice obfuscation. If you read it carefully, you'll find it's simple:
next? It's the identity function.
server? It's simply map process, which is map (\n -> n+1).
client? It's an obscure way to write reqs = 0 : map (\n -> n+1) reqs, i.e. 0 : map (\n -> n+1) (0 : map (\n -> n+1) (0 : ...)).
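A sketch of that last claim (reqs' is a made-up name; take any prefix of either list and you get the same result):
reqs' :: [Int]
reqs' = 0 : map (\n -> n + 1) reqs'

Prelude> take 10 reqs'
[0,1,2,3,4,5,6,7,8,9]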

Related

Haskell recursion list of self growing lists

I found the following piece of code at line 147 of Contract.hs, from Pricing Financial Contracts with Haskell:
konstSlices :: a -> [[a]]
konstSlices x = nextSlice [x]
  where nextSlice sl = sl : nextSlice (x:sl)
This produces an infinite list of lists:
konstSlices 100 = [[100],[100,100],[100,100,100],...]
I am not sure what is happening inside the where clause. If we just take 3 iterations, what should be inside nextSlice at that point?
[100] : [100,100] : nextSlice (100:[100,100]) ?
How does the terminating [] appear, to pack the lists inside a list: [100] : [100,100] : [100,100,100] : [] = [[100],[100,100],[100,100,100]]?
The recursive construction is really hard to follow. By the way, I am curious whether there are tools that allow you to follow such iterations and see how such values are built. In cases like this I use pen and paper to get a grip on what is happening. Recursive lists are not even the worst case (what brought me to this question was the analysis of the function at t (line 130), with the liftA2'ing stuff inside applicative functions which are built from other smaller functions or data constructors with function types; you rapidly see a big chunk of inter-related computations growing, and you are totally lost - brainwashed..).
Here is a much simpler case for you
Prelude> let ones = 1 : ones
Prelude> take 3 ones
[1,1,1]
ones is defined to be an infinite list of 1s. There is no end, so there is no final empty list constructor. take n initiates the generation of the first n elements, here with n=3.
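Spelling out where the final [] comes from in this small case (it is produced by take, not by ones):
take 3 ones
= 1 : take 2 ones
= 1 : 1 : take 1 ones
= 1 : 1 : 1 : take 0 ones
= 1 : 1 : 1 : []
= [1,1,1]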
karakfa has a great illustration of what’s going on here, but I’ll expand a bit.
There isn’t any closing ]. A list is a data structure whose head is an item of data, and whose tail is a list. Furthermore, objects in Haskell are lazily evaluated.
Let’s take another look at this example:
konstSlices :: a -> [[a]]
konstSlices x = nextSlice [x]
  where nextSlice sl = sl : nextSlice (x:sl)
Lazy evaluation means that, if you try to use konstSlices 100, the program will only calculate as many items of the list as it needs to. So, if you take 1 (konstSlices 100), the program will compute
konstSlices 100 = [100] :
                  nextSlice (100:[100])
The tail of the list, everything after the [100]:, is stored as a thunk. Its value hasn’t been computed yet.
What if you ask for take 2 (konstSlices 100)? Then, the program needs to compute the thunk until it finds the second element. That’s all it needs, so it will stop when it gets to,
konstSlices 100 = [100] :
                  [100,100] :
                  nextSlice (100:[100,100])
And so on, for however many entries you need to compute.
There’s never anything corresponding to a closing bracket. There doesn’t need to be. The recursive definition of konstSlices never generates anything like one, just more thunks. And that’s allowed.
On the other hand, if you try to compute length (konstSlices 100), the program will attempt to generate an infinite number of nodes, run out of memory, and crash. If you tried to compute the entirety of a circular list, like xs = 1:xs, it wouldn't need to allocate any new nodes, because the tail links back to the same one, and it wouldn't need to generate new stack frames, because it's tail-recursive modulo cons; it would simply go into an infinite loop.
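For instance (the length call never finishes; interrupt it before it exhausts memory):
Prelude> head (konstSlices 100)
[100]
Prelude> length (konstSlices 100)
^CInterrupted.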
The logic is very simple:
konstSlices :: a -> [[a]]
konstSlices x = nextSlice [x]
  where nextSlice sl = sl : nextSlice (x:sl)
If x = 100, nextSlice first yields [100]; each recursive call then prepends x to the previous list, nextSlice (x:sl), so sl grows by one element with every call. Indexed by position in the result:
0 -> [100]             from nextSlice [100]
1 -> [100,100]         from nextSlice (100:[100])
2 -> [100,100,100]     from nextSlice (100:[100,100])
3 -> [100,100,100,100] from nextSlice (100:[100,100,100])
and the process continues. In x:sl, the first element of the new list is x, and every nextSlice call contributes one list to the result: sl : sl' : sl'' : ...
It is the same as this one, starting with acc = 1:
num = 100 : num
slice acc = take acc num :slice (acc+1)
Your code is meant to be instructional.
The list with one more value prepended is produced at every iteration, so that you get
[100]
[100,100]
[100,100,100]
however many times you want. Every list in the output is a brand new list: Haskell builds each one by prepending one value to the previous list. This use of a previous value to build the next one is exactly what recursive functions do; see the first example below.
In an actual program, you might not be interested in the building of the list but only in the final result.
Haskell has functions that help you see your list as it is being built. These functions generalize primitive recursion and so cover most of the recursive functions you will ever need.
They are foldl/foldr and scanl/scanr. When you want to see the list being built, use a scan; when you want just the final result, use a fold.
You may be interested only in the construction, as in the following, which builds a list of Fibonacci pairs up to the 12th entry:
scanl (\(a,b) x -> (a+b,a)) (1,0) [2..12]
[(1,0),(1,1),(2,1),(3,2),(5,3),(8,5),(13,8),(21,13),(34,21),(55,34),(89,55),(144,89)]
in which the previous two values are added to make the next first value and the previous first value becomes the next second value.
In your code, with 3 iterations, you can easily see what happens at each step:
take 3 . konstSlices $ 100
[ [100], [100,100], [100,100,100] ]
scanl (\b a -> a : b) [] $ take 3 $ repeat 100
[ [], [100], [100,100], [100,100,100] ]
But this shows more: it also includes the initial empty list, to which 100 is prepended to make the next value.
If you want only the final result,
foldl (\b a -> a : b) [] $ take 3 $ repeat 100
[100,100,100]
It is exactly
100 : [] = [100]
100 : [100] = [100,100]
100 : [100,100] = [100,100,100]

length vs head vs last in Haskell

I know 2 of these statements are true, but I don't know which.
Let e be an expression of type [Int]:
there exists e such that: Evaluation of head e won't finish but last e will
there exists e such that: Evaluation of last e won't finish but head e will
there exists e such that: Evaluation of length e won't finish but last e will
It seems clear to me that 2 is true, but I can't see how 1 or 3 could be true.
My thinking is that in order to calculate the length of a list you need to reach its last element, making 1 and 3 impossible.
Since this is a test question, I won't answer it directly; instead, here are some hints. It'd be better if you work this out yourself.
Since we're talking about computations that don't terminate, it might be useful to define one such computation. (If this confuses you, you can safely ignore it and refer only to the examples that don't use it.)
-- `never` never terminates when evaluated, and can be any type.
never :: a
never = never
Question 1
Consider the list [never, 1], or alternatively the list [last [1..], 1] as suggested by @chi.
Question 2
Consider the list [1..], or alternatively the list [1, never].
Question 3
Consider the definition of length:
length [] = 0
length (_:xs) = 1 + length xs
Under what conditions does length not terminate? How does this relate to last?
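For comparison, here is a simplified definition of last (ignoring the empty-list error case):
last [x] = x
last (_:xs) = last xs
Think about how far along the spine of the list each of length and last has to walk before it can return.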

GHCI Haskell not remembering bindings in command line

I am trying to learn Haskell, but it is a little hard as none of my bindings are remembered from the command line; output from my terminal is below.
> let b = []
> b
[]
> 1:b
[1]
> b
[]
I have no idea why it behaves like this; can anyone please help?
What did you expect your example to do? From what you've presented, I don't see anything surprising.
Of course, that answer is probably surprising to you, or you wouldn't have asked. And I'll be honest: I can guess what you were expecting. If I'm right, you thought the output would be:
> let b = []
> b
[]
> 1:b
[1]
> b
[1]
Am I right? Supposing I am, then the question is: why isn't it?
Well, the short version is "that's not what (:) does". Instead, (:) creates a new list out of its arguments; x:xs is a new list whose first element is x and the rest of which is identical to xs. But it creates a new list. It's just like how + creates a new number that's the sum of its arguments: is the behavior
> let b = 0
> b
0
> 1+b
1
> b
0
surprising, too? (Hopefully not!)
Of course, this opens up the next question of "well, how do I update b, then?". And this is where Haskell shows its true colors: you don't. In Haskell, once a variable is bound to a value, that value will never change; it's as though all variables and all data types are const (in C-like languages or the latest Javascript standard) or val (in Scala).
This feature of Haskell – it's called being purely functional – is possibly the single biggest difference between Haskell and every single mainstream language out there. You have to think about writing programs in a very different way when you aren't working with mutable state everywhere.
For example, to go a bit further afield, it's quite possible the next thing you'll try will be something like this:
> let b = []
> b
[]
> let b = 1 : b
In that case, what do you think is going to be printed out when you type b?
Well, remember, variables don't change! So the answer is:
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,…
forever – or until you hit control-C and abort.
This is because let b = 1 : b defines a new variable named b; you might as well have written let c = 1 : c. Thus, you're saying "b is a list which is 1 followed by b"; since we know what b is, we can substitute and get "b is a list which is 1 followed by 1 followed by b", and so on forever. Or: b = 1 : b, so substituting in for b we get b = 1 : 1 : b, and substituting in we get b = 1 : 1 : 1 : 1 : ….
(The fact that Haskell produces an infinite list, rather than going into an infinite loop, is because Haskell is non-strict, more popularly referred to as lazy – this is also possibly the single biggest difference between Haskell and every single mainstream language out there. For further information, search for "lazy evaluation" on Google or Stack Overflow.)
So, in the end, I hope you can see why I wasn't surprised: Haskell can't possibly update variable bindings. So since your definition was let b = [], then of course the final result was still [] :-)
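(If what you actually wanted was a list with 1 in front of b, bind the result to a fresh name:
> let b = []
> let c = 1 : b
> c
[1]
> b
[]
b itself never changes.)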

How does this haskell code work?

I'm a new student and I'm studying in Computer Sciences. We're tackling Haskell, and while I understand the idea of Haskell, I just can't seem to figure out how exactly the piece of code we're supposed to look at works:
module U1 where
double x = x + x
doubles (d:ds) = (double d):(doubles ds)
ds = doubles [1..]
I admit, it seems rather simple for someone who knows what's happening, but I can't wrap my head around it. If I write "take 5 ds", it obviously gives back [2,4,6,8,10]. What I don't get is why.
Here's my train of thought: I call ds, which then looks up doubles. Because I also pass the value [1..], doubles (d:ds) should mean that d = 1 and ds = [2..], correct? I then double d, which returns 2 and puts it at the start of a list (array?). Then it calls itself, turning ds = [2..] into d = 2 and ds = [3..], which then doubles d again, again calls itself, and so on and so forth until it can return 5 values, [2,4,6,8,10].
So first of all, is my understanding right? Do I have any grave mistakes in my train of thought?
Second of all, since it seems to save every doubled d into a list to call on later, what's the name of that list? Where exactly did I define it?
Thanks in advance, hope you can help out a student to understand this x)
I think you are right about the recursion/loop part, i.e. how doubles goes through each element of the infinite list.
Now regarding
it seems to save every doubled d into a list to call on later, what's
the name of that list? Where exactly did I define it?
This relates to a feature called Lazy Evaluation in Haskell. The list isn't precomputed and stored anywhere. Instead, you can imagine that a list is like a function object in C++ that can generate elements when needed. (The usual way of saying it is that expressions are evaluated on demand.) So when you do
take 5 [1..]
[1..] can be viewed as a function object that generates numbers when used with head, take etc. So,
take 5 [1..] == (1 : take 4 [2..])
Here [2..] is also a "function object" that gives you numbers. Similarly, you can have
take 5 [1..] == (1 : 2 : take 3 [3..]) == ... (1 : 2 : 3 : 4 : 5 : take 0 [6..])
Now, we don't need to care about [6..], because take 0 xs for any xs is []. Therefore, we can have
take 5 [1..] == (1 : 2 : 3 : 4 : 5 : [])
without needing to store any of the "infinite" lists like [2..]. They may be viewed as function objects/generators if you want to get an idea of how Lazy computation can actually happen.
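To make that concrete, here is a rough sketch of the reduction for your example (some steps elided):
take 5 ds
= take 5 (doubles [1..])
= take 5 (double 1 : doubles [2..])
= double 1 : take 4 (doubles [2..])
= 2 : take 4 (double 2 : doubles [3..])
= 2 : 4 : take 3 (doubles [3..])
= ...
= [2,4,6,8,10]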
Your train of thought looks correct. The only minor inaccuracy lies in describing the computation with expressions such as "it doubles 2 and then calls itself ...". In pure functional programming languages, such as Haskell, there actually is no fixed evaluation order. Specifically, in
double 1 : doubles [2..]
it is left unspecified whether doubling 1 happens before of after doubling the rest of the list. Theoretical results guarantee that order is indeed immaterial, in that -- roughly -- even if you evaluate your expression in a different order you will get the same result. I would recommend that you see this property at work using the Lambda Bubble Pop website: there you can pop bubbles in a different order to simulate any evaluation order. No matter what you do, you will get the same result.
Note that, because evaluation order does not matter, the Haskell compiler is free to choose any evaluation order it deems to be the most appropriate for your code. For instance, let ds be defined as in the final line in your code, and consider
take 5 (drop 5 ds)
this results in [12,14,16,18,20]. Note that the compiler has no need to double the first 5 numbers, since you are dropping them, so they can be dropped before they are completely computed (!!).
If you want to experiment, define yourself a function which is very expensive to compute (say, fibonacci, following the recursive definition):
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci n = fibonacci (n-1) + fibonacci (n-2)
Then, define
const5 n = 5
and compute
fibonacci 100
and observe how long that actually takes (with this naive definition it is exponentially slow, so be ready to interrupt it). Then, evaluate
const5 (fibonacci 100)
and see that the result is immediately reached -- the argument was not even computed (!) since there was no need for it.

haskell: factors of a natural number

I'm trying to write a function in Haskell that calculates all factors of a given number except itself.
The result should look something like this:
factorlist 15 => [1,3,5]
I'm new to Haskell and the whole recursion subject, which I'm pretty sure I'm supposed to apply in this example, but I don't know where or how.
My idea was to compare the given number with the first element of a list from 1 to n `div` 2 using the mod function, but somehow recursively, and if the result is 0 then add the number to a new list. (I hope this makes sense.)
I would appreciate any help on this matter
Here is my code until now: (it doesn't work.. but somehow to illustrate my idea)
factorList :: Int -> [Int]
factorList n |n `mod` head [1..n`div`2] == 0 = x:[]
There are several ways to handle this. But first of all, let's write a small helper:
isFactorOf :: Integral a => a -> a -> Bool
isFactorOf x n = n `mod` x == 0
That way we can write 12 `isFactorOf` 24 and get either True or False. For the recursive part, let's assume that we use a function with two arguments: one being the number we want to factorize, the second the factor we're currently testing. We're only testing factors less than or equal to n `div` 2, and this leads to:
createList n f | f <= n `div` 2 = if f `isFactorOf` n
                                    then f : next
                                    else next
               | otherwise      = []
  where next = createList n (f + 1)
So if the second parameter is a factor of n, we add it to the list and proceed; otherwise we just proceed. We do this only as long as f <= n `div` 2. Now, in order to create factorList, we can simply call createList with a suitable starting value for the second parameter:
factorList n = createList n 1
The recursion is hidden in createList. As such, createList is a worker, and you could hide it in a where inside of factorList.
Note that one could easily define factorList with filter or list comprehensions:
factorList' n = filter (`isFactorOf` n) [1 .. n `div` 2]
factorList'' n = [ x | x <- [1 .. n `div` 2], x `isFactorOf` n ]
But in this case you wouldn't have written the recursion yourself.
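All three versions agree with the expected output:
Prelude> factorList 15
[1,3,5]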
Further exercises:
Try to implement the filter function yourself.
Create another function, which returns only prime factors. You can either use your previous result and write a prime filter, or write a recursive function which generates them directly (the latter is faster; one possible sketch follows below).
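For the second exercise, one possible direct recursive sketch (primeFactors is a name I made up; it peels off the smallest factor repeatedly, which is why only primes ever end up in the result):
primeFactors :: Int -> [Int]
primeFactors n = go n 2
  where
    go 1 _ = []                                      -- nothing left to factor
    go m f | m `mod` f == 0 = f : go (m `div` f) f   -- f divides m: record it, try f again
           | otherwise      = go m (f + 1)           -- otherwise try the next candidate

Prelude> primeFactors 60
[2,2,3,5]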
@Zeta's answer is interesting. But if you're new to Haskell like I am, you may want a "simple" answer to start with. (Just to get the basic recursion pattern... and to understand the indenting, and things like that.)
I'm not going to divide anything by 2, and I will include the number itself, so factorList 15 => [1,3,5,15] in my example:
factorList :: Int -> [Int]
factorList value = factorsGreaterOrEqual 1
  where
    factorsGreaterOrEqual test
      | (test == value) = [value]
      | (value `mod` test == 0) = test : restOfFactors
      | otherwise = restOfFactors
      where restOfFactors = factorsGreaterOrEqual (test + 1)
The first line is the type signature, which you already knew about. The type signature doesn't have to live right next to the list of pattern definitions for a function, (though the patterns themselves need to be all together on sequential lines).
Then factorList is defined in terms of a helper function. This helper function is defined in a where clause...that means it is local and has access to the value parameter. Were we to define factorsGreaterOrEqual globally, then it would need two parameters as value would not be in scope, e.g.
factorsGreaterOrEqual 4 15 => [5,15]
You might argue that factorsGreaterOrEqual is a useful function in its own right. Maybe it is, maybe it isn't. But in this case we're going to say it isn't of general use besides to help us define factorList...so using the where clause and picking up value implicitly is cleaner.
The indentation rules of Haskell are (to my tastes) weird, but here they are summarized. I'm indenting with two spaces here because it grows too far right if you use 4.
The boolean tests with the pipe character in front of them are called "guards" in Haskell. I simply establish the terminal condition as being when test hits value; so factorsGreaterOrEqual N = [N] if we were doing a call to factorList N. Then we decide whether to prepend the test number onto the list by checking whether dividing the value by it leaves no remainder. (otherwise is a Haskell keyword, kind of like default in C-like switch statements, for the fall-through case.)
Showing another level of nesting and another implicit parameter demonstration, I added a where clause to locally define restOfFactors. There is no need to pass test as a parameter to restOfFactors because it lives "in the scope" of factorsGreaterOrEqual... and as that lives in the scope of factorList, value is available as well.
