I'm trying to learn Haskell, and so I was trying out question 26 of Project Euler in Haskell:
http://projecteuler.net/problem=26
My solution to the problem is this:
answer26 = answer26' 1000

answer26' n = snd $ maximum $ map (\x -> cycleLength x [1]) [2..n - 1]
  where
    cycleLength n (r:rs)
      | i /= Nothing = (1 + fromJust i, n)
      | r < n        = cycleLength n $ (10*r):r:rs
      | otherwise    = cycleLength n $ (r `mod` n):r:rs
      where i = elemIndex r rs
I realize that this isn't the most efficient algorithm, but seeing as it's naively O(n^3) (where n = 1000), that is not such an issue. What I am concerned about, though, is that from my understanding of monads, one of their main properties is that they in some sense "mark" anything that has used the monad. The function fromJust seems to fly directly in the face of that. Why does it exist? Also, assuming its existence is justified, is my usage of it in the above code good practice?
Usage of partial functions (functions that are not defined for every possible input, and so may fail at runtime) is generally discouraged. Functions like head and fromJust exist because they're occasionally convenient; you can sometimes write shorter code, which is more understandable to learners. Lots of functional algorithms are expressed in terms of head and tail, and fromJust is conceptually the same as head.
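For example, head [] and fromJust Nothing both throw a runtime exception instead of returning a value.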
It's usually preferable to use pattern matching, and to avoid partial functions, because it allows the compiler to catch errors for you. In your code snippet you have carefully checked that the value is never Nothing, but in large real-life codebases, code can be many years old, thousands of lines long, and maintained by many developers. It's very easy for a developer to re-order some code and drop a check like that. With pattern-matching, it's right there in the code structure, not just in some arbitrary Bool expression.
It's not too difficult to replace your usage of fromJust with pattern-matching:
answer26 = answer26' 1000

answer26' n = snd $ maximum $ map (\x -> cycleLength x [1]) [2..n - 1]
  where
    cycleLength n (r:rs) = case elemIndex r rs of
      Just i  -> (1 + i, n)
      Nothing -> if r < n
                   then cycleLength n $ (10*r):r:rs
                   else cycleLength n $ (r `mod` n):r:rs
And (I think) the result is a bit clearer too.
Edit: There's an apparently "theoretically ok" place to use fromJust mentioned in Typeclassopedia, though you will need someone other than me to explain wtf that is all about.. ;)
The monad interface doesn't include any specific function for "extracting" values from a monad, only for putting them in (return).
However, it doesn't forbid these kinds of functions either. When they exist, they will be specific to each monad (hence the multitude of run* functions: runIdentity, runReader, runWriter, runState... each with different arguments.)
By design, IO doesn't have any such "get out" function, and so it serves to "trap" impure values inside the monad. But "not being able to get out" is not a requirement for monads in general. What counts is that they respect the monad laws.
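For instance, here is a small illustration using Identity from base (runIdentity being that particular monad's "get out" function):

import Data.Functor.Identity (Identity(..))

five :: Int
five = runIdentity (return 5)  -- unwraps the Identity to give back a plain 5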
With comonads, the situation is reversed. There is a common function to extract values from them (extract) that every comonad must implement. But the functions to "put the values in", when they exist, vary for each particular comonad (env, store...)
As for fromJust, it is good practice to avoid it whenever possible because it is a partial function which may fail to match at runtime.
This pattern (a default result for the Nothing case, plus a function to apply to the value in the Just case) is so common that there is even a function for it: maybe :: b -> (a -> b) -> Maybe a -> b
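For example, maybe 0 (1+) (Just 41) evaluates to 42, while maybe 0 (1+) Nothing evaluates to 0.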
In your case, if you change the mapped function to \x -> (cycleLength x [1], x), that is, construct the pair outside cycleLength, the helper can be written as:
cycleLength n (r:rs) = maybe (cycleLength n rs') (1+) $ elemIndex r rs
  where
    rs' | r < n     = (10*r):r:rs
        | otherwise = (r `mod` n):r:rs
Also, because you are looking just for a maximum, not the actual value, it will work even with id instead of (1+).
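Putting the pieces together, the whole solution would then look something like this (just a sketch of the rewrite, preserving your original algorithm):

import Data.List (elemIndex)

answer26 = answer26' 1000

answer26' n = snd $ maximum $ map (\x -> (cycleLength x [1], x)) [2..n - 1]
  where
    cycleLength n (r:rs) = maybe (cycleLength n rs') (1+) $ elemIndex r rs
      where
        rs' | r < n     = (10*r):r:rs
            | otherwise = (r `mod` n):r:rs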
I have a few questions regarding the Just syntax in Haskell.
The question arose when I was experimenting with different ways to write a function to calculate binomial coefficients.
Consider the function:
binom :: Integer -> Integer -> Maybe Integer
binom n k | n < k = Nothing
binom n k | k == 0 = Just 1
binom n k | n == k = Just 1
binom n k | otherwise = let
    Just x = (binom (n-1) (k-1))
    Just y = (binom (n-1) k)
  in
    Just (x + y)
When I try to write the otherwise case without the let..in block, like so:
binom n k | otherwise = (binom (n-1) (k-1)) + (binom (n-1) k)
I am faced with a compilation error No instance for (Num (Maybe Integer)) arising from a use of ‘+’. And so my first thought was that I was forgetting the Just syntax so I rewrote it as
binom n k | otherwise = Just ((binom (n-1) (k-1)) + (binom (n-1) k))
I am faced with an even more confusing error:
Couldn't match type ‘Maybe Integer’ with ‘Integer’
Expected: Maybe Integer
Actual: Maybe (Maybe Integer)
If I add Just before the binom calls, the error just compounds:
Couldn't match type ‘Maybe (Maybe Integer)’ with ‘Integer’
Expected: Maybe Integer
Actual: Maybe (Maybe (Maybe Integer))
Furthermore, if I write:
Just x = binom 3 2
y = binom 3 2
x will have the value 3 and y will have the value Just 3.
So my questions are:
Why does the syntax require the let..in block to compile properly?
In the function, why does Just add the Maybe type when I don't use let..in?
Conversely, why does using Just outside of the function remove the Just from the value, if its type is Just :: a -> Maybe a?
Bonus question, but unrelated:
When I declare the function without the type the compiler infers the type binom :: (Ord a1, Num a2, Num a1) => a1 -> a1 -> Maybe a2. Now I mostly understand what is happening here, but I don't see why a1 has two types.
Your question demonstrates a few ways you may have got confused about what is going on.
Firstly, Just is not any kind of syntax - it's just a data constructor (and therefore also a function) provided by the standard library. The reasons your failing attempts didn't compile are therefore not due to any syntax mishaps (the compiler would report a "parse error" in this case), but - as it actually reports - type errors. In other words the compiler is able to parse the code to make sense of it, but then when checking the types, realises something is up.
So to expand on your failing attempts, #1 was this:
binom n k | otherwise = (binom (n-1) (k-1)) + (binom (n-1) k)
for which the reported error was
No instance for (Num (Maybe Integer)) arising from a use of ‘+’
This is because you were trying to add the results of 2 calls to binom - which according to your type declaration, are values of type Maybe Integer. And Haskell doesn't by default know how to add two Maybe Integer values (what would Just 2 + Nothing be?), so this doesn't work. You would need to - as you eventually do with your successful attempt - unwrap the underlying Integer values (assuming they exist! I'll come back to this later), add those up, and then wrap the resulting sum in a Just.
I won't dwell on the other failing attempts, but hopefully you can see that, in various ways, the types also fail to match up here too, in the ways described by the compiler. In Haskell you really have to understand the types, and just flinging various bits of syntax and function calls about in the wild hope that the thing will finally compile is a recipe for frustration and lack of success!
So to your explicit questions:
Why does the syntax require the let..in block to compile properly?
It doesn't. It just needs the types to match everywhere. The version you ended up with:
let
  Just x = (binom (n-1) (k-1))
  Just y = (binom (n-1) k)
in
  Just (x + y)
is fine (from the type-checking point of view, anyway!) because you're doing as I previously described - extracting the underlying values from the Just wrapper (these are x and y), adding them up and rewrapping them.
But this approach is flawed. For one thing, it's boilerplate - a lot of code to write and try to understand if you're seeing it for the first time, when the underlying pattern is really simple: "unwrap the values, add them together, then rewrap". So there should be a simpler, more understandable, way to do this. And there is, using the methods of the Applicative typeclass - of which the Maybe type is a member.
Experienced Haskellers would write the above in one of two ways. Either:
binom n k | otherwise = liftA2 (+) (binom (n-1) (k-1)) (binom (n-1) k)
or
binom n k | otherwise = (+) <$> binom (n-1) (k-1) <*> binom (n-1) k
(the latter being in what is called the "applicative style" - if you're unfamiliar with Applicative functors there's a great introduction in Learn You a Haskell.)
And there's another advantage of doing this compared to your way, besides the avoidance of boilerplate code. Your pattern matches in the let... in expression assume that the results of binom (n-1) (k-1) and so on are of the form Just x. But the type says they could also be Nothing - and if that ever happened, your program would crash at runtime! (With the guards in the order you wrote them, the recursion happens to preserve n >= k, so the crash doesn't actually occur here - but the compiler cannot check that invariant for you, and a later edit could silently break it. #chepner's answer shows a restructuring that makes the invariant explicit.)
Using liftA2 or <*> will, due to how the Applicative instance is implemented for Maybe, avoid any such crash by simply giving you Nothing as soon as one of the things you're trying to add is Nothing - the whole computation degrades gracefully instead of erroring out.
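To see the propagation in action (a ghci illustration; on older GHCs you need import Control.Applicative (liftA2) first):

λ> liftA2 (+) (Just 2) (Just 3)
Just 5
λ> liftA2 (+) (Just 2) Nothing
Nothing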
I'm not sure I really understand your questions #2 and #3, so I won't address those directly - but I hope this has given you some increased understanding of how to work with Maybe in Haskell.
Finally, for your last question, although it's quite unrelated: "I don't see why a1 has two types" - it doesn't. a1 denotes a single type, because it's a single type variable. You're presumably referring to the fact that it has two constraints - here Ord a1 and Num a1. Ord and Num here are typeclasses - like the Applicative I mentioned earlier (albeit Ord and Num are simpler typeclasses). If you don't know what a typeclass is, I recommend reading an introductory source, like Learn You a Haskell, before continuing much further with the language - but in short it's a bit like an interface, saying that the type must implement certain functions. Concretely, Ord says the type must implement order comparisons - you need that here because you've used the < operator - while Num says you can do numeric things with it, like addition. So that type signature just makes explicit what is implicit in your function definition: the values you use this function on must be of a type that implements both order comparison and numeric operations.
binom n k | otherwise = (binom (n-1) (k-1)) + (binom (n-1) k)
You can't add two Maybe values directly, but you can make use of the Applicative instance to lift (+) over the values already wrapped in Just.
binom n k | otherwise = liftA2 (+) (binom (n-1) (k-1)) (binom (n-1) k)
Note, though, that liftA2 (+) x y == Nothing if either x or y is Nothing, so a single Nothing anywhere in the recursion would collapse the whole result. A more robust design is to treat the two different occurrences of n < k differently:
An "initial" use can return Nothing
A "recursive" use can simply return 0, since x + 0 == x.
binom will be implemented in terms of a helper that is guaranteed to receive arguments such that n >= k.
binom :: Integer -> Integer -> Maybe Integer
binom n k | n < k     = Nothing
          | otherwise = Just (binom' n k)
  where
    binom' n 0 = 1
    binom' n k | n == k    = 1
               | otherwise = binom' (n-1) (k-1) + binom' (n-1) k
This question has received excellent answers. However, I think it is worth mentioning that you can also use a monadic do construct, like the one normally used for the “main program” of a Haskell application.
The main program generally uses a do construct within the IO monad. Here, you would use a do construct within the Maybe monad.
Your binom function can be modified like this:
binom :: Integer -> Integer -> Maybe Integer
binom n k | n < 0 = Nothing -- added for completeness
binom n k | k < 0 = Nothing -- added for completeness
binom n k | n < k = Nothing
binom n k | k == 0 = Just 1
binom n k | n == k = Just 1
binom n k | otherwise = do -- monadic do construct, within the Maybe monad
  x <- (binom (n-1) (k-1))
  y <- (binom (n-1) k)
  return (x+y)

main :: IO ()
main = do -- classic monadic do construct, within the IO monad
  putStrLn "Hello impure world !"
  putStrLn $ show (binom 6 3)
If a single <- extractor fails, the whole result is Nothing.
Please recall that in that context, return is just an ordinary function, with type signature:
return :: Monad m => a -> m a
Unlike in most imperative languages, return is not a keyword, and is not part of control flow.
A key point is that if you have many quantities that can become Nothing, the do construct scales better; that is, it can remain more readable than pattern matching or lifting functions. More details about using the Maybe monad can be found in the online Real World Haskell book.
Note that the Haskell library provides not only liftA2, as mentioned in Robin Zigmond's answer, but also liftA3, as well as the monadic variants liftM2 up to liftM5 in Control.Monad.
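For instance, a small illustrative sketch (sum3 is a made-up name):

import Control.Monad (liftM3)

sum3 :: Maybe Integer
sum3 = liftM3 (\a b c -> a + b + c) (Just 1) (Just 2) (Just 3)  -- Just 6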
Interactive testing:
You can test the thing under the ghci interpreter, like this:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> do { n1 <- (Just 3) ; n2 <- (Just 42); return (n1+n2) ; }
Just 45
λ>
λ> do { n1 <- (Just 3) ; n2 <- (Just 42); n3 <- Nothing ; return (n1+n2+n3) ; }
Nothing
λ>
The exact semantics depend on the sort of monad involved. If you use the list monad, you get a Cartesian product of the lists you're extracting from:
λ>
λ> do { n1 <- [1,2,3] ; n2 <- [7,8,9]; return (n1,n2) ; }
[(1,7),(1,8),(1,9),(2,7),(2,8),(2,9),(3,7),(3,8),(3,9)]
λ>
I have read (and also reasoned) that calculating the length of a list is not good for performance in Haskell. However, I have long lists in my program and my requirement is to find whether their length is greater than or less than some number X.
Is there already something built into Haskell for this kind of predicate, or do I have to resort to manual looping?
On vanilla lists, you can check this using drop, without ever computing the full length:

cmpLen :: Int -> [a] -> Ordering   -- how does the length of xs compare to n?
cmpLen n xs
  | n < 0     = GT                 -- every list has length >= 0 > n
  | n == 0    = if null xs then EQ else GT
  | otherwise = case drop (n-1) xs of
      []  -> LT                    -- fewer than n elements
      [_] -> EQ                    -- exactly n elements
      _   -> GT                    -- more than n elements
However, this still takes as long as the value (not size, as is typical of asymptotic analysis!) of n. If you intend to do this often, you can take a cue from Okasaki and build a new structure that caches the operation you want to be efficient. I have wanted this a few times before, and found the following sort of interface convenient in those cases:
import Prelude hiding (length)
import Data.Monoid (Sum(..))

type LenList a = (Sum Word, [a])

singleton x = (1, [x])
cons x = (singleton x<>)
length = getSum . fst
elems = snd
Note that, since LenList a is already a Monoid, you get some of the usual operations for free, e.g. there is an empty LenList a named mempty, and concatenation is given by (<>). Some operations (notably the ones that produce infinite lists) will not be implementable for this type. However, you pay an O(1) price on each construction operation to make asking for the length of one of these O(1), which can be a nice tradeoff in many situations.
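A quick usage sketch (illustrative, building on the definitions above):

xs :: LenList Int
xs = cons 1 (cons 2 (cons 3 mempty))
-- length xs == 3, answered in O(1); elems xs == [1,2,3]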
I'm trying to understand order of execution in a purely functional language.
I know that in purely functional languages, there is no necessary execution order.
So my question is:
Suppose there are two functions.
I would like to know all the ways in which I can call one function after another (other than nesting one call inside the other, and other than in the IO monad).
I would like to see examples in Haskell or pseudo-code.
There is no way to do what you describe, if the functions are totally independent and you don't use the result of one when you call the other.
This is because there is no reason to do this. In a side effect free setting, calling a function and then ignoring its result is exactly the same as doing nothing for the amount of time it takes to call that function (setting aside memory usage).
It is possible that seq x y will evaluate x and then y, and then give you y as its result, but this evaluation order isn't guaranteed.
Now, if we do have side effects, such as if we are working inside a Monad or Applicative, this could be useful, but we aren't truly ignoring the result since there is context being passed implicitly. For instance, you can do
main :: IO ()
main = putStrLn "Hello, " >> putStrLn "world"
in the IO Monad. Another example would be the list Monad (which could be thought of as representing a nondeterministic computation):
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10
example :: String
example = filter biggerThanTen [1..15] >> return 'a' -- This evaluates to "aaaaa"
Note that even here we aren't really ignoring the result. We ignore the specific values, but we use the structure of the result (in the second example, the structure would be the fact that the resulting list from filter biggerThanTen [1..15] has 5 elements).
I should point out, though, that things that are sequenced in this way aren't necessarily evaluated in the order that they are written. You can sort of see this with the list Monad example. This becomes more apparent with bigger examples:
example2 :: [Int]
example2 =
[1,2,3] >>=
(\x -> [10,100,1000] >>=
(\y -> return (x * y))) -- ==> [10,100,1000,20,200,2000,30,300,3000]
The main takeaway here is that evaluation order (in the absence of side effects like IO and ignoring bottoms) doesn't affect the ultimate meaning of code in Haskell (other than possible differences in efficiency, but that is another topic). As a result, there is never a reason to call two functions "one after another" in the fashion described in the question (that is, where the calls are totally independent from each other).
Do notation
Do notation is actually exactly equivalent to using >>= and >> (there is actually one other thing involved that takes care of pattern match failures, but that is irrelevant to the discussion at hand). The compiler actually takes things written in do notation and converts them to >>= and >> through a process called "desugaring" (since it removes the syntactic sugar). Here are the three examples from above written with do notation:
IO Example
main :: IO ()
main = do
  putStrLn "Hello, "
  putStrLn "World"
First list example
biggerThanTen :: Int -> Bool
biggerThanTen n = n > 10
example :: String -- String is a synonym for [Char], by the way
example = do
  filter biggerThanTen [1..15]
  return 'a'
Second list example
example2 :: [Int]
example2 = do
  x <- [1,2,3]
  y <- [10,100,1000]
  return (x * y)
Here is a side-by-side comparison of the conversions:
do            --
  m           --  m >> n
  n           --

do            --
  x <- m      --  m >>= (\x ->
  ...         --    ...)
The best way to understand do notation is to first understand >>= and return since, as I said, that's what the compiler transforms do notation into.
As a side-note, >> is just the same as >>=, except that it ignores the "result" of its left argument (although it preserves the "context" or "structure"). So all definitions of >> must be equivalent to m >> n = m >>= (\_ -> n).
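For example, in the list monad, [1,2,3] >> "ab" yields "ababab": the particular values 1, 2 and 3 are ignored, but the three-element structure of the left argument still triples the right-hand list.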
Expanding the >>= in the second list example
To help drive home the point that Monads are not usually impure, let's expand the >>= calls in the second list example, using the Monad definition for lists. The definition is:
instance Monad [] where
  return x = [x]
  xs >>= f = concatMap f xs
and we can convert example2 into:
Step 0 (what we already have)
example2 :: [Int]
example2 =
  [1,2,3] >>=
    (\x -> [10,100,1000] >>=
      (\y -> return (x * y)))
Step 1 (converting the first >>=)
example2 =
  concatMap
    (\x -> [10,100,1000] >>=
      (\y -> return (x * y)))
    [1,2,3]
Step 2
example2 =
  concatMap
    (\x -> concatMap
      (\y -> return (x * y))
      [10,100,1000])
    [1,2,3]
Step 3
example2 =
  concatMap
    (\x -> concatMap
      (\y -> [x * y])
      [10,100,1000])
    [1,2,3]
So, there is no magic going on here, just normal function calls.
You can write a function whose arguments depend on the evaluation of another function:
-- Adds the first two elements of a list together
myFunc :: [Int] -> Int
myFunc xs = (head xs) + (head $ tail xs)
If that's what you mean. In this case, you can't get the output of myFunc xs without evaluating head xs, head $ tail xs and (+). There is an order here. However, the compiler can choose which order to execute head xs and head $ tail xs in since they are not dependent on each other, but it can't do the addition without having both of the other results. It could even choose to evaluate them in parallel, or on different machines. The point is that pure functions, because they have no side effects, don't have to be evaluated in a given order until their results are interdependent.
Another way to look at the above function is as a graph:
   myFunc
     |
    (+)
    / \
   /   \
 head  head
   \    |
    \  tail
     \ /
     xs
In order to evaluate a node, all nodes below it have to be evaluated first, but different branches can be evaluated in parallel. First xs must be evaluated, at least partially, but after that the two branches can be evaluated in parallel. There are some nuances due to lazy evaluation, but this is essentially how the compiler constructs evaluation trees.
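If you did want to request that the two branches be evaluated in parallel, you could sketch it with par and pseq from the parallel package (myFuncPar is a made-up name for this illustration):

import Control.Parallel (par, pseq)  -- from the "parallel" package

myFuncPar :: [Int] -> Int
myFuncPar xs = a `par` (b `pseq` (a + b))  -- spark a in parallel, evaluate b, then add
  where
    a = head xs
    b = head (tail xs)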
If you really want to force one function call before the other, you can use the seq function. It takes two arguments, forces the first to be evaluated, then returns the second, e.g.
myFunc2 :: [Int] -> Int
myFunc2 xs = hxs + (hxs `seq` (head $ tail xs))
  where hxs = head xs
This will force head xs to evaluate before head $ tail xs, but this is more dealing with strictness than sequencing functions.
Here is an easy way:
case f x of
  result1 -> case g y of
    result2 -> ....
Still, unless g y uses something from result1 and the subsequent calculations something from result2, or the pattern is such that the result must be evaluated, there is no guarantee that either of f or g are actually called, nor in what order.
Still, you wanted a way to call one function after another, and this is such a way.
I was trying to implement a Haskell function that takes as input an array of integers A
and produces another array B = [A[0], A[0]+A[1], A[0]+A[1]+A[2], ...]. I know that scanl from Data.List can be used for this with the function (+). I wrote the second implementation
(which performs faster) after seeing the source code of scanl. I want to know why the first implementation is slower than the second one, despite being tail-recursive.
-- This function works slow.
ps s x [] = x
ps s x y = ps s' x' y'
  where
    s' = s + head y
    x' = x ++ [s']
    y' = tail y
-- This function works fast.
ps' s [] = []
ps' s y = [s'] ++ (ps' s' y')
  where
    s' = s + head y
    y' = tail y
Some details about the above code:

Implementation 1: it should be called as ps 0 [] a, where a is your array.

Implementation 2: it should be called as ps' 0 a, where a is your array.
You are changing the way that ++ associates. In your first function you are computing ((([a0] ++ [a1]) ++ [a2]) ++ ...), whereas in the second function you are computing [a0] ++ ([a1] ++ ([a2] ++ ...)). Prepending a few elements to the start of a list is O(1), whereas appending a few elements to the end of a list is O(n) in the length of the list. This leads to a linear versus quadratic algorithm overall.
You can fix the first example by building the list up in reverse order, and then reversing again at the end, or by using something like dlist. However the second will still be better for most purposes. While tail calls do exist and can be important in Haskell, if you are familiar with a strict functional language like Scheme or ML your intuition about how and when to use them is completely wrong.
The second example is better, in large part, because it's incremental; it immediately starts returning data that the consumer might be interested in. If you just fixed the first example using the double-reverse or dlist tricks, your function will traverse the entire list before it returns anything at all.
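For illustration, here is a sketch of the reverse-accumulator fix for the first version (psRev is a made-up name):

psRev :: Int -> [Int] -> [Int]
psRev s ys = go s [] ys
  where
    go _ acc []     = reverse acc             -- one final O(n) reverse
    go s acc (y:ys) = let s' = s + y
                      in go s' (s' : acc) ys  -- O(1) cons instead of O(n) append

It is tail-recursive and linear overall, but it cannot produce any output until it has consumed the entire input list.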
I would like to mention that your function can be more easily expressed as
drop 1 . scanl (+) 0
Usually, it is a good idea to use predefined combinators like scanl in favour of writing your own recursion schemes; it improves readability and makes it less likely that you needlessly squander performance.
However, in this case, both my scanl version and your original ps and ps' can sometimes lead to stack overflows due to lazy evaluation: Haskell does not necessarily immediately evaluate the additions (depends on strictness analysis).
One case where you can see this is if you do last (ps' 0 [1..100000000]). That leads to a stack overflow. You can solve that problem by forcing Haskell to evaluate the additions immediately, for instance by defining your own, strict scanl:
myscanl :: (b -> a -> b) -> b -> [a] -> [b]
myscanl f q []     = []
myscanl f q (x:xs) = q `seq` let q' = f q x in q' : myscanl f q' xs

ps' = myscanl (+) 0
Then, calling last (ps' [1..100000000]) works.
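For what it's worth, recent versions of base also ship a strict scanl' in Data.List that forces the accumulator in the same way (though, like scanl, it includes the initial value in its output).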
I am trying to solve one of the problem in H99:
Split a list into two parts; the length of the first part is given.
Do not use any predefined predicates.
Example:
> (split '(a b c d e f g h i k) 3)
((A B C) (D E F G H I K))
And I quickly came up with a solution:
split' :: [a] -> Int -> Int -> [a] -> [[a]]
split' [] _ _ w = [w, []]
split' (x:xs) y z w = if y == z then [w, x:xs] else split' xs y (z+1) (w++[x])

split :: [a] -> Int -> [[a]]
split x y = split' x y 0 []
My question is: what I am doing here is really just rewriting the loop version in recursive form. Is this the right way to do things in Haskell? Isn't it just the same as imperative programming?
EDIT: Also, how do you generally avoid the extra helper function here?
It's convenient that you can often convert an imperative solution to Haskell, but you're right, you do usually want to find a more natural recursive statement. For this one in particular, reasoning in terms of base case and inductive case can be very helpful. So what's your base case? Why, when the split location is 0:
split x 0 = ([], x)
The inductive case can be built on that by prepending the first element of the list onto the result of splitting with n-1:
split (x:xs) n = (x:left, right)
  where (left, right) = split xs (n-1)
This may not perform wonderfully (it's probably not as bad as you'd think) but it illustrates my thought process when I first encounter a problem and want to approach it functionally.
Edit: Another solution relying more heavily on the Prelude might be:
split l n = (take n l, drop n l)
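(This, by the way, is exactly what the Prelude's splitAt function does.)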
It's not the same as imperative programming, really: each function call avoids any side effects, and they're just simple expressions. But I have a suggestion for your code:
split :: Int -> [a] -> ([a], [a])
split p xs = go p ([], xs)
  where
    go 0 (xs, ys)   = (reverse xs, ys)
    go n (xs, y:ys) = go (n-1) (y : xs, ys)
So here we've declared that we're only returning two things, ([a], [a]), instead of a list of things (which is a bit misleading), and we've confined our tail-recursive helper to a local scope.
I'm also using pattern matching, which is a more idiomatic way to write recursive functions in Haskell: when go is called with a zero, the first case is run. It's generally more pleasant to write recursive functions that count down rather than up, since you can use pattern matching rather than if statements.
Finally, this is more efficient, since ++ is linear in the length of its first argument, which means the complexity of your original function is quadratic rather than linear. This method is also tail recursive, unlike Daniel's solution, which is important for handling any large lists.
TLDR: Both versions are functional style, avoiding mutation, using recursion instead of loops. But the version I've presented is a little more Haskell-ish and slightly faster.
A word on tail recursion
This solution uses tail recursion, which isn't always essential in Haskell; in this case it is helpful when you consume the resulting lists in full, but at other times it is actually a bad thing. For example, map isn't tail recursive, but if it were, you couldn't use it over infinite lists!
In this case, we can use tail recursion, since an integer is always finite. But, if we only use the first element of the list, Daniel's solution is much faster, since it produces the list lazily. On the other hand, if we use the whole list, my solution is much faster.
split' :: [a] -> Int -> ([a], [a])
split' [] _ = ([], [])
split' xs 0 = ([], xs)
split' (x:xs) n = (x : fst splitResult, snd splitResult)
  where splitResult = split' xs (n-1)
It seems you have already shown an example of a better solution.
I would recommend you read SICP; then you will come to the conclusion that the extra function is normal. Hiding helper functions in a local scope is also a widely used approach. The book may seem boring to you, but in its early chapters you will get used to the functional approach to solving problems.
There are tasks for which the recursive approach is more natural. But if, for example, you use tail recursion (which is so often praised without cause), you will notice that it is just ordinary iteration, often with an "extra function" whose arguments (rather than mutable variables) play the role of the iteration state.