Trying to get my head around recursion in Haskell?

I have used many recursive functions by now, but I still have trouble getting my head around how such a function exactly works. I'm familiar with the second line (i.e. | n==0 = 1) but not so familiar with the final line (i.e. | n>0 = fac (n-1) * n).
fac :: Int -> Int
fac n
| n==0 = 1
| n>0 = fac (n-1) * n

Recursive algorithms are very closely linked to mathematical induction. Perhaps studying one will help you better understand the other.
You need to keep two key principles in mind when using recursion:
Base Case
Inductive Step
The Inductive Step is often the most difficult piece, because it assumes that everything it relies upon has already been computed correctly. Making this leap of faith can be difficult (at least it took me a while to get the hang of it), but it is only because we've got preconditions on our functions; those preconditions (in this case, that n is a non-negative integer) must be specified so that the inductive step and base case are always true.
The Base Case is also sometimes difficult: say, you know that the factorial N! is N * (N-1)!, but how exactly do you handle the first step on the ladder? (In this case, it is easy, define 0! := 1. This explicit definition provides you with a way to terminate the recursive application of your Inductive Step.)
You can see your type specification and guard patterns in this function are providing the preconditions that guarantee the Inductive Step can be used over and over again until it reaches the Base Case, n == 0. If the preconditions can't be met, recursive application of the Inductive Step would fail to reach the Base Case, and your computation would never terminate. (Well, it would when it runs out of memory. :)
One complicating factor, especially with functional programming languages, is the very strong desire to rewrite all 'simple' recursive functions, like the one you have here, as variants that use Tail Calls or Tail Recursion.
Because this function calls itself and then performs another operation on the result, you can build a call chain like this (the calls expand downward until the base case, then the results propagate back up):
fac 3 = 3 * fac 2
fac 2 = 2 * fac 1
fac 1 = 1 * fac 0
fac 0 = 1
fac 1 = 1
fac 2 = 2
fac 3 = 6
That deep call stack takes up memory; but a compiler that notices that a function doesn't change any state after making a recursive call can optimize away the recursive calls. These kinds of functions typically pass along an accumulator argument. A fellow stacker has a very nice example: Tail Recursion in Haskell
factorial :: Int -> Int -> Int
factorial 0 c = c                     -- base case: the accumulator c holds the result
factorial k c = factorial (k-1) (c*k) -- tail call: the multiplication happens before recursing
This very complicated change :) means that the previous call chain is turned into this:
factorial 3 1 → factorial 2 3
factorial 2 3 → factorial 1 6
factorial 1 6 → factorial 0 6
factorial 0 6 → 6
(Each line simply replaces the previous one; the runtime system doesn't need to store details of the earlier calls on the stack.)
This runs in constant memory, regardless of the value of n, and thus this optimization can convert 'impossible' algorithms into 'possible' algorithms. You'll see this kind of technique used extensively in functional programming, much as you'd see char * frequently in C programming or yield frequently in Ruby programming.
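If you want to call the accumulator version the way you call the original, you can wrap it so the accumulator starts at 1 (a small sketch of my own, not from the linked answer):

fac :: Int -> Int
fac n = factorial n 1  -- fac 5 evaluates to 120

The multiplications now happen on the way down rather than on the way back up, which is exactly what lets the compiler drop the call stack.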

When you write | condition = expression it introduces a guard. The guards are tried in order from top to bottom until a true condition is found, and the corresponding expression is the result of your function.
This means that if n is zero the result is 1; otherwise, if n > 0, the result is fac (n-1) * n. If n is negative, no guard matches and you get a runtime error (a non-exhaustive patterns exception).
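If you want the function to cover negative inputs too, one common fix (my sketch, not from the original question) is an otherwise guard that fails loudly:

fac :: Int -> Int
fac n
  | n == 0    = 1
  | n > 0     = fac (n-1) * n
  | otherwise = error "fac: negative argument"  -- explicit failure instead of a pattern-match crash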
Once you've determined which expression to use, it's just a matter of substituting in the recursive calls to see what's going on.
fac 4
(fac 3) * 4
((fac 2) * 3) * 4
(((fac 1) * 2) * 3) * 4
((((fac 0) * 1) * 2) * 3) * 4
(((1 * 1) * 2) * 3) * 4
((1 * 2) * 3) * 4
(2 * 3) * 4
6 * 4
24

Especially for more complicated cases of recursion, the trick to save your mental health is not to follow the recursive calls, but just to assume that they "do the right thing". E.g. in your fac example, you want to compute fac n. Imagine you already have the result fac (n-1). Then it's trivial to calculate fac n: just multiply it by n. But the magic of induction is that this reasoning actually works (as long as you provide a proper base case in order to terminate the recursion). So e.g. for Fibonacci numbers, just look at what the base case is, and assume that you are able to calculate the function for all numbers smaller than n:
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
See? You want to calculate fib n. It's easy if you know fib (n-1) and fib (n-2). But you can simply assume you are able to calculate them, and that the "deeper levels" of recursion do the "right thing". So just use them; it will work.
Note that there are much better ways to write this function, as currently many values are recalculated very often.
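For example, one standard linear-time formulation (a sketch of my own, not from the answer above) threads the last two values through an accumulating helper, so nothing is recomputed:

fib :: Int -> Integer
fib n = go n 0 1
  where
    -- go counts down from n while (a, b) walk along the sequence
    go 0 a _ = a
    go k a b = go (k-1) b (a+b)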
BTW: The "best" way to write fac would be fac n = product [1..n].

What's throwing you? Maybe the guards (the |) are confusing things.
You can think of the guards loosely as a chain of ifs, or as a switch statement (the differences being that only one branch can run, and it evaluates directly to a result: it does NOT perform a series of tasks, and there are certainly no side effects; it just evaluates to a value).
To put it in imperative-like pseudo-code:
fac n:
  if n == 0: return 1
  if n > 0: return n * (result of calling fac with n decreased by one)
The call tree from the other poster looks like it could be helpful. Do yourself a favor and really walk through it.

Related

erase common constant of equation using leanprover

I want to prove the goal below.
n_n: ℕ
n_ih: n_n * (n_n + 1) / 2 = arith_sum n_n
⊢ (n_n + 1) * (n_n + 1 + 1) / 2 = n_n + 1 + n_n * (n_n + 1) / 2
ring, simp, and linarith are not working.
I also tried calc, but it gets too long.
Is there any automatic command to erase the common constant in the equation?
I would say that you were asking the wrong question. Your hypothesis and goal contain /, but this is not mathematical division; it is a pathological function which computer scientists use, which takes as input two natural numbers and is forced to return a natural number, so often it can't return the right answer. For example, 5 / 2 = 2 with the division you're using. Computer scientists call it "division with remainder" and I call it "broken and should never be used".
When I'm doing this sort of exercise with my class I always coerce everything to the rationals before I do the division, so the division is mathematical division rather than this pathological function, which does not satisfy things like (a / b) * b = a. The fact that this division doesn't obey the rules of normal division is why you can't get the tactics to work. If you coerce everything to the rationals before doing the division then you won't get into this mess, and ring will work fine.
If you do want to persevere down the natural-division road, then one approach would be to start by proving that n(n+1) is always even, so that you can deduce (n(n+1)/2)*2 = n(n+1). Alternatively you could avoid this by observing that to show A/2 = B/2 it suffices to prove that A = B. But either way you'll have to do a few lines of manual fiddling, because you're not using mathematical functions, you're using computer-science approximations, so mathematical tactics don't work with them.
Here's what my approach looks like:
import algebra.big_operators
open_locale big_operators
open finset

def arith_sum (n : ℕ) := ∑ i in range n, (i : ℚ) -- answer is rational

example (n : ℕ) : arith_sum n = n*(n-1)/2 :=
begin
  unfold arith_sum,
  induction n with d hd,
  { simp },
  { rw [finset.sum_range_succ, hd, nat.succ_eq_add_one],
    push_cast,
    ring, -- works now
  }
end

Two recursive calls don't finish (Towers of Hanoi)

I programmed this Haskell function for the Towers of Hanoi problem. The function gives the number of steps to move the discs from the source peg to the destination peg with only one spare peg.
numStepsHanoi :: Integer -> Integer
numStepsHanoi 0 = 0
numStepsHanoi n = numStepsHanoi (n-1) + numStepsHanoi (n-1) + 1
This function works fine ... until n, the number of discs, gets too high. GHCi does not finish. I know the complexity of this problem and I know it can't run faster.
For example, if I call it with n = 64, I can wait 20 minutes and get no output (it doesn't complete). Even if n = 20, it takes approximately 2 seconds.
With another implementation (below), the time is quite reduced.
numStepsHanoi :: Integer -> Integer
numStepsHanoi 0 = 0
numStepsHanoi n = 2 * numStepsHanoi (n-1) + 1
Now, with n = 64, I get the result instantly. Obviously this has only one recursive call, but does that have such a large effect?
Could this be a problem of GHCi optimization?
I suspect this actually is the function complexity. Your first version makes 2 recursive calls for every invocation, for a complexity of O(2^n). For n = 64, you're making 2^65 - 1 total calls. That's roughly 37 * 10^18 calls, so you're not going to see results in this lifetime with current computing power: at one call per microsecond, that's still over a million years.
The second routine makes only one call per iteration; it's O(n).
To see the effect, try timing your first function at n = 19, 20, 21, 22. That should be enough to show the 2x time difference for each added disc.
It would appear that the standard advice is to do common subexpression optimization yourself if you want to guarantee it is applied. See https://wiki.haskell.org/GHC/FAQ#Does_GHC_do_common_subexpression_elimination.3F.
numStepsHanoi :: Integer -> Integer
numStepsHanoi 0 = 0
numStepsHanoi n = let steps = numStepsHanoi (n-1)
                  in steps + steps + 1
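For what it's worth (my observation, not from the thread), this particular recurrence also has the closed form 2^n - 1, so the count can be computed with no recursion at all:

numStepsHanoi :: Integer -> Integer
numStepsHanoi n = 2^n - 1  -- solves H(0) = 0, H(n) = 2*H(n-1) + 1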

Recursive arithmetic sequence in Haskell

It's been nearly 30 years since I took an algebra class, and I am struggling with some of the concepts in Haskell as I work through Learn You a Haskell. The concept that I am working on now is recursion. I have watched several YouTube videos on the subject and found a site with the arithmetic sequence problem aₙ = 8 + 3(aₙ₋₁), which I understand to be aₙ = aₙ₋₁ + 3. This is what I have in Haskell:
addThree :: (Integral a) => a -> a
addThree 1 = 8
addThree n = (n-1) + 3
Running the script yields:
addThree 1
8
addThree 2
4
addThree 3
5
I am able to solve this and similar recursions on paper, (after polishing much rust), but do not understand the syntax in Haskell.
My Question How do I define the base and the function in Haskell as per my example?
If this is not the place for such questions, kindly direct me to where I should post. I see there are Stack Exchanges for Super User, Programmers, and Mathematics, but not sure which of the Stack family best fits my question.
First a word on algebra and your problem: I think you are slightly wrong - if we write 3x it usually means 3*x (mathematicians are even lazier than programmers), so your series should indeed read aₙ = 8 + 3*aₙ₋₁ IMO.
Then aₙ is the n-th element in a series of a's: a₀, a₁, a₂, a₃, ... That's why there is a big difference between (n-1) and addThree (n-1): the latter designates aₙ₋₁, while the former is just a number, not really connected to your series.
Ok, let's have a look at your series aₙ = 8 + 3*aₙ₋₁ (this is how I would understand it, because otherwise you would have x = 8 + 3*x and therefore just x = -4):
you can choose a₀ - let's say it's 0 (as you did?)
then a₁ = 8 + 3*0 = 8
a₂ = 8 + 3*8 = 32
a₃ = 8 + 3*32 = 104
...
Ok, let's say you want to use recursion; then the problem translates directly into Haskell:
a :: Integer -> Integer
a 0 = 0
a n = 8 + 3 * a (n-1)
series :: [Integer]
series = map a [0..]
giving you (for the first 5 elements):
λ> take 5 series
[0,8,32,104,320]
Please note that this is a very badly performing way to do it, as the recursive call in a really does the same work over and over again.
A technical way to solve this is to observe that you only need the previous element to get the next one and use Data.List.unfoldr:
series :: [Integer]
series = unfoldr (\ prev -> Just (prev, 8 + 3 * prev)) 0
now of course you can get a lot fancier with Haskell - for example you can define the series as it is, using Haskell's laziness:
series :: [Integer]
series = 0 : map (\ prev -> 8 + 3 * prev) series
and I am sure there are many more ways out there to do it, but I hope this will help you along a bit.
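For instance (my addition, not part of the original answer), iterate expresses the same idea even more directly:

series :: [Integer]
series = iterate (\ prev -> 8 + 3 * prev) 0  -- [0, 8, 32, 104, 320, ...]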

Generating triangular numbers using iteration in Haskell

I am trying to write a function in Haskell to generate triangular numbers. I am not allowed to use recursion; I am supposed to use iteration.
here is my code ...
triSeries 0 = [0]
triSeries n = take n $ iterate (\x -> (0+x)) 1
I know that my function after iterate is wrong.
But it has been hours of looking for a function. Any hint, please?
Start by writing out some triangular numbers
T(1) = 1
T(2) = 1 + 2
T(3) = 1 + 2 + 3
An iterative process to generate T(n) is to start from [1..n], take the first element of the list, and add it to a running total. In a language with mutable state, you might write:
def tri(n):
    sum = 0
    for x in [1..n]:
        sum += x
    return sum
In Haskell, you can iteratively consume a list of numbers and accumulate state via a fold function (foldl, foldr, or some variant). Hopefully that's enough to get started with.
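For completeness, here is one way that idea can look (my sketch, not the answerer's code):

-- n-th triangular number: fold (+) over [1..n] with a running total
tri :: Int -> Int
tri n = foldl (+) 0 [1..n]  -- e.g. tri 4 == 10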
Maybe Wikipedia could be a hint, where you can find the closed formula
triangular :: Int -> Int
triangular x = x * (x + 1) `div` 2
triSeries could then be something like
triSeries :: Int -> [Int]
triSeries x = map triangular [1..x]
and it works like this:
> triSeries 10
[1,3,6,10,15,21,28,36,45,55]
Talking about iterate: maybe there is some way to use it here, but as John said, foldl would be sufficient. Take a look at this page; what you are looking for is at the very beginning.
It is not clear what is meant by "recursion is not allowed, use iteration". All functions that appear to be "iterative" are recursive inside.
iterate in all your uses can only modify the input with a constant, and iterate (+1) 1 is the same as [1..]. Consider using a Data.List function that can combine a number from the infinite range [1..] and the previously computed sum to produce an infinite list of such sums:
T(i) = i + T(i-1)
This is definitely cheaper than x*(x+1) `div` 2.
Consider using a Data.List function that can produce an infinite list of finite lists of sums from an infinite list of sums. This is going to be cheaper than computing a list of 10, then a list of 11 that repeats the same computation done for the list of 10, and so on.
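My reading of the two hints (an assumption on my part, not stated in the answer) is scanl1 for the running sums and inits for the list of prefixes:

import Data.List (inits)

-- running sums of [1..]: the triangular numbers 1, 3, 6, 10, ...
triNums :: [Integer]
triNums = scanl1 (+) [1..]

-- every finite prefix of triNums: [], [1], [1,3], [1,3,6], ...
triPrefixes :: [[Integer]]
triPrefixes = inits triNums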

Induction proof of correctness of fibonacci function

Haskell implementation of the familiar Fibonacci function:
fibSlow :: Integer -> Integer
fibSlow n
  | n == 0 = 1 --fib.1
  | n == 1 = 1 --fib.2
  | otherwise = fibSlow (n-1) + fibSlow (n-2) --fib.3
What is the induction proof of correctness for fibSlow?
To prove correctness of a function on the natural numbers by induction, you would show that it's correct for certain base cases, and then that it's correct for higher values of the parameter given the assumption that it's correct for lower ones. So you'd verify first that fibSlow 0 = 1, and then that fibSlow 1 = 1, and then that for n > 1, fibSlow n is equal to the (n-1)th fibonacci number plus the (n-2)th fibonacci number. Here you get to assume that those numbers are fibSlow (n-1) and fibSlow (n-2), since fibSlow is correct for all inputs less than n by the inductive hypothesis.
This might seem all rather trivial... because it is! The whole point of such an example in Haskell is that you can write code that's obviously correct. When you go to prove it correct, the proof just writes itself and amounts to looking at the code and noting that it clearly says exactly what you're trying to prove. This is one of the nice properties of a declarative language like Haskell.
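Spelled out (my paraphrase of the argument above), the proof is short. Let F(0) = F(1) = 1 and F(n) = F(n-1) + F(n-2); we claim fibSlow n = F(n) for all n >= 0.
Base cases: fibSlow 0 = 1 = F(0) by fib.1, and fibSlow 1 = 1 = F(1) by fib.2.
Inductive step: for n > 1, assume fibSlow k = F(k) for all k < n. Then fibSlow n = fibSlow (n-1) + fibSlow (n-2) by fib.3, which equals F(n-1) + F(n-2) by the inductive hypothesis, which is F(n) by definition.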
Apologies I haven't formally seen this kind of material for a while, so you're probably best looking at other sources if this is homework.
I think you want to show the existence of a monotone function which describes the "progress" of the recursion. This case should be pretty simple: the argument itself is monotonically decreasing. For a nonnegative n, the recursive call will be made with a lesser n', and that n' will never be less than zero.
You can also use power induction to argue the function is defined on all n. You have declared it defined on 0 and 1, and it suffices to say that if it's defined on n and n+1, then it's defined on n+2. This is obvious by the definition of the recursive call.
I think you might be able to read up on some formalities in Jech's Set Theory book, in the Ordinals chapter.
