My solution for Euler Project #3 is too slow - haskell

I'm new to Haskell and tinkering around with the Euler Project problems. My solution for problem #3 is far too slow. At first I tried this:
-- Problem 3
-- The prime factors of 13195 are 5, 7, 13 and 29.
-- What is the largest prime factor of the number 600851475143 ?
problem3 = maximum [ x | x <- [1..n], (mod n x) == 0, n /= x]
  where n = 600851475143
Then I changed it to return all x and not just the largest one.
problem3 = [ x | x <- [1..n], (mod n x) == 0, n /= x]
  where n = 600851475143
After 30 minutes, the list is still being processed and the output looks like this
[1,71,839,1471,6857,59569,104441,486847,1234169,5753023,10086647,87625999,408464633,716151937
Why is it so slow? Am I doing something terribly wrong or is it normal for this sort of task?

With your solution, there are about 600 billion possible numbers. As noted by delnan, making every check of a number quicker is not going to make much difference; we must limit the number of candidates.
Your solution does not seem to be correct either: 59569 = 71 * 839, isn't it? The question only asks for prime factors. Notice that 71 and 839 are in your list, so you are doing something right. In fact, you are finding all factors, not just the prime ones.
I think you get the most dramatic effect simply by dividing away each factor as soon as it is found, before continuing.
euler3 = go 2 600851475143
  where
    go cand num
      | cand == num           = [num]
      | cand `isFactorOf` num = cand : go cand (num `div` cand)
      | otherwise             = go (cand + 1) num
    isFactorOf a b = b `mod` a == 0
This may seem like an obvious optimization, but it relies on the fact that if both a and b divide c and a is coprime to b, then a divides c/b.
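For reference (assuming the definition above is loaded into GHCi), the result should be the full prime factorisation - the same factors that appear in the question's partial output:
ghci> euler3
[71,839,1471,6857]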
If you want to do more, the common "Only check until the square root" trick has been
mentioned here. The same trick can be applied to this problem, but the performance gain does not show, unfortunately, on this instance:
euler3 = go 2 600851475143
  where
    go cand num
      | cand*cand > num       = [num]
      | cand `isFactorOf` num = cand : go cand (num `div` cand)
      | otherwise             = go (cand + 1) num
    isFactorOf a b = b `mod` a == 0
Here, when a candidate is larger than the square root of the remaining number (num), we know that num must be a prime and therefore a prime factor of the original
number (600851475143).
It is possible to remove even more candidates by only considering prime numbers, but this is slightly more advanced because you need a reasonably performant way of generating primes. See this page for ways of doing that.
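A rough sketch of that idea (my code, not part of the original answer), reusing the simple trial-division sieve shown later in this thread and dividing each factor out as it is found:
-- Sketch only: trial division by primes instead of every candidate.
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve (filter (\x -> x `mod` p > 0) xs)

euler3p :: [Integer]
euler3p = go primes 600851475143
  where
    go (p:ps) num
      | p * p > num      = [num]                        -- remainder must be prime
      | num `mod` p == 0 = p : go (p:ps) (num `div` p)  -- divide the factor out
      | otherwise        = go ps num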

It's doing a lot of work! (It's also going to give you the wrong answer, but that's a separate issue!)
There are a few very quick ways you could speed it up by thinking about the problem a little first:
You are applying your function over all numbers 1..n, and checking each one of them to ensure it isn't n. Instead, you could just go over all numbers 1..n-1 and skip out n different checks (small though they are).
The answer is odd, so you can very quickly filter out any even numbers by going from 1..(n-1)/2 and checking 2x+1 instead of x.
If you think about it, all factors occur in pairs, so you can in fact just search from 1..sqrt(n) (or 1..sqrt(n)/2 if you ignore even numbers) and output pairs of numbers in each step (a sketch of this appears just below).
Not related to the performance of this function, but it's worth noting that what you've implemented here will find all of the factors of a number, whereas what you want is only the largest prime factor. So either you have to test each of your divisors for primality (which is going to be slow, again) or you can implement the two in one step. You probably want to look at 'sieves', the most simple being the Sieve of Eratosthenes, and how you can implement them.
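A small sketch of the factor-pairs idea from the list above (my code, not the asker's): every divisor d with d*d <= n pairs with n `div` d, so scanning up to the square root already yields every factor.
factorPairs :: Integer -> [(Integer, Integer)]
factorPairs n = [ (d, n `div` d)
                | d <- takeWhile (\d -> d * d <= n) [1..]
                , n `mod` d == 0 ]

-- e.g. factorPairs 36 == [(1,36),(2,18),(3,12),(4,9),(6,6)]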

A complete factorization of a number can take a long time for big numbers. For Project Euler problems, a brute force solution (which this is) is usually not enough to find the answer in your lifetime.
Hint: you do not need to find all prime factors, just the biggest one.

TL;DR: The two things you were doing non-optimally are: not stopping at the square root, and not dividing out each smallest factor as it is found.
Here's a little derivation of the (2nd) factorization code shown in the answer by HaskellElephant. We start with your code:
f1 n = [ x | x <- [2..n], rem n x == 0]
n3 = 600851475143
Prelude> f1 n3
[71,839,1471,6857,59569,104441,486847Interrupted.
So it doesn't finish in any reasonable amount of time, and some of the numbers it produces are not prime... But instead of adding a primality check to the list comprehension, let's notice that 71 is prime. The first number produced by f1 n is the smallest divisor of n, and thus it is prime. If it weren't, we'd find its smallest divisor first - a contradiction.
So, we can divide it out, and continue searching for the prime factors of newly reduced number:
f2 n = tail $ iterate (\(_,m)-> (\f->(f, quot m f)) . head $ f1 m) (1,n)
Prelude> f2 n3
[(71,8462696833),(839,10086647),(1471,6857),(6857,1),(*** Exception: Prelude.head: empty list
(the error, because f1 1 == []). We're done! (6857 is the answer, here...). Let's wrap it up:
takeUntil p xs = foldr (\x r -> if p x then [x] else x:r) [] xs
pfactors1 n = map fst . takeUntil ((==1).snd) . f2 $ n -- prime factors of n
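As a quick sanity check of takeUntil - unlike takeWhile, it keeps the first element that satisfies the predicate:
Prelude> takeUntil (== 3) [1..10]
[1,2,3]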
Trying out our newly minted solution,
Prelude> map pfactors1 [n3..]
[[71,839,1471,6857],[2,2,2,3,3,1259Interrupted.
suddenly we hit a new inefficiency wall, on numbers without small divisors. But if n = a*b and 1 < a <= b, then a*a <= a*b == n and so it is enough to test only until the square root of a number, to find its smallest divisor.
f12 n = [ x | x <- takeWhile ((<= n).(^2)) [2..n], rem n x == 0] ++ [n]
f22 n = tail $ iterate (\(_,m)-> (\f->(f, quot m f)) . head $ f12 m) (1,n)
pfactors2 n = map fst . takeUntil ((==1).snd) . f22 $ n
What couldn't finish in half an hour now finishes in under one second (on a typical performant box):
Prelude> f12 n3
[71,839,1471,6857,59569,104441,486847,600851475143]
All the divisors above sqrt n3 were not needed at all. We unconditionally add n itself as the last divisor in f12 so it is able to handle prime numbers:
Prelude> f12 (n3+6)
[600851475149]
Since n3 / sqrt n3 = sqrt n3 ~= 775146, your original attempt at f1 n3 should have taken about a week to finish. That's how important this optimization is, of stopping at the square root.
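To make that estimate concrete (my arithmetic, not part of the original answer): the square-root-bounded scan tries roughly sqrt 600851475143 ≈ 775146 candidates in under a second, while the unbounded scan tries about 775146 times as many, so it would need on the order of 775146 seconds - around nine days.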
Prelude> f22 n3
[(71,8462696833),(839,10086647),(1471,6857),(6857,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1),(1,1)Interrupted
We've apparently traded the "Prelude.head: empty list" error for a non-terminating - but productive - behavior.
Lastly, we break f22 up into two parts and fuse each of them into the other functions, for somewhat simplified code. Also, we no longer restart the search for the smallest divisor from 2 every time, as f12 does:
-- smallest factor of n, starting from d. directly jump from sqrt n to n.
smf (d,n) = head $ [ (x, quot n x) | x <- takeWhile ((<=n).(^2)) [d..]
                                   , rem n x == 0] ++ [(n,1)]
pfactors n = map fst . takeUntil ((==1).snd) . tail . iterate smf $ (2,n)
This expresses guarded (co)recursion through a higher-order function iterate, and is functionally equivalent to that code mentioned above. The following now runs smoothly, and we're even able to find a pair of twin primes as a bonus there:
Prelude Saga> map pfactors [n3..]
[[71,839,1471,6857],[2,2,2,3,3,1259,6628403],[5,120170295029],[2,13,37,227,2751479],[3,7,7,11,163,2279657],[2,2,41,3663728507],[600851475149],[2,3,5,5,19,31,6800809],[600851475151],[2,2,2,2,37553217197],[3,3,3,211,105468049],[2,7,11161,3845351],[5,67,881,2035853],[2,2,3Interrupted.

Here is my solution for Euler Project #3. It takes only 1.22 sec on my Macbook Air.
First we should find all factors of the given number. But we know that even numbers can't be prime numbers (except the number 2). So, to solve Euler Project #3 we don't need all factors, only the odd ones:
getOddFactors num = [ x | x <- [3,5..num], num `rem` x == 0 ]
But we can optimize this function. If num has a factor greater than sqrt num, then it must also have another factor less than sqrt num - and those possible factors we will have found already. Hence, we can limit our list of possible factors by sqrt num:
getOddFactors num = [ x | x <- [3, 5..(floor.sqrt.fromIntegral) num],
                          num `rem` x == 0 ]
Next we want to know which of our odd factors of num are prime numbers:
isPrime number = [ x | x <- [3..(floor.sqrt.fromIntegral) number],
                       number `rem` x == 0] == []
Next we can filter the odd factors of num with the function isPrime to find all prime factors of num. But to exploit Haskell's laziness and optimize our solution, we apply filter isPrime to the reversed list of odd factors of num. As soon as the function finds the first value that is a prime number, Haskell stops the computation and returns the solution:
largestPrimeFactor = head . filter isPrime . reverse . getOddFactors
Hence, the solution is:
ghci> largestPrimeFactor 600851475143
6857
(1.22 secs, 110646064 bytes)

Related

How to break out of recursion at a base case in Haskell?

I'm trying to write a function that computes all the weird numbers up to a given integer. To determine if a number n is pseudoperfect, I was trying to come up with a better solution than naively computing the sum of every combination of divisors until I either exhaust all the combinations or find a solution. I ended up with the idea that you can recursively reduce the problem by definitely proving the inclusion of certain divisors in the sum. Take 12 for example: you know that 6 must be included in the final sum because the total of all the other divisors is less than 12. Then the problem is reduced to summing to the rest of 12 (i.e. 12 - 6 = 6) from the remaining divisors, and so forth. If you reach 0, then the number is pseudoperfect. If you exhaust all the divisors, then it's not pseudoperfect.
Here's my code so far:
(The functions powerset and divisors do the expected: they return a list of all sublists, and a list of divisors, respectively. Obviously RETURN is just a placeholder for what I want to do.)
pseudoperfect _ 0 = RETURN True
pseudoperfect d n =
  let g = zipWith (\x y -> x) d (filter (< 0) (map (subtract n) (scanl1 (+) d))) in
  map (\x -> pseudoperfect g (n - (sum x))) (filter (not . null) (powerset (d \\ g)))
-- Determine if n is weird, ie, abundant and not pseudoperfect.
weird n = ((sum . divisors) (n) > n) && not (pseudoperfect (divisors n) n)
-- Find weird numbers up to given n.
weirds n = filter (weird) [1..n]
I tried working through a couple examples on paper and I got the correct result. My problem is that I want to just break out of the recursion and return True when I reach that base case of n = 0. Is there a way to do this in Haskell, or do I need to restructure the function somehow so that I'm passing returned value at the base case value back up through the stack?
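A minimal sketch (hypothetical code, not from the thread) of how the base case can simply be True and propagate back up through (||), using a plain subset-sum over proper divisors rather than the pruning idea described above:
-- Proper divisors of n (an assumed helper; the question's own `divisors` would do).
divisors :: Int -> [Int]
divisors n = [d | d <- [1 .. n `div` 2], n `mod` d == 0]

-- Subset-sum over the divisors: reaching 0 means a pseudoperfect sum exists.
pseudoperfect :: [Int] -> Int -> Bool
pseudoperfect _ 0 = True
pseudoperfect [] _ = False
pseudoperfect (d:ds) n
  | d > n     = pseudoperfect ds n
  | otherwise = pseudoperfect ds (n - d) || pseudoperfect ds n

-- A number is weird if it is abundant but not pseudoperfect.
weird :: Int -> Bool
weird n = sum ds > n && not (pseudoperfect ds n)
  where ds = divisors n
With this sketch, filter weird [1..100] evaluates to [70], the smallest weird number.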

Primes in Haskell

I'm learning Haskell, and I've tried to generate an infinite list of primes, but I can't understand what my function is doing wrong.
The function:
prime = 2:3:filter (\x -> all (\y -> (mod x y) > 0) (init prime)) [5..]
I think it's the init prime, but the strange thing is that even if I set an upper bound to the range (5..10 for example), the function loops forever and never gets any result for prime !! 2
Can you please tell me what I'm doing wrong?
Well, for one let's look at what init does for a finite list:
init [1] == []
init [1,2] == [1]
init [1,2,3] == [1,2]
ok, so it gives us all but the last element of the list.
So what's init prime? Well, prime without the last element. Hopefully if we implemented prime correctly it shouldn't have a last element (because there are infinitely many primes!), but more importantly we don't quite need to care yet because we don't have the full list for now anyway - we only care about the first couple of elements after all, so for us it's pretty much the same as just prime itself.
Now, looking at all: What does this do? Well, it takes a list and a predicate and tells us if all the elements of the list satisfy the predicate:
all (<5) [1..4] == True
all even [1..4] == False
it even works with infinite lists!
all (<5) [1..] == False
so what's going on here? Well, here's the thing: It does work with infinite lists... but only if we can actually evaluate the list up to the first element of the list that violates the predicate! Let's see if this holds true here:
all (\y -> (mod 5 y) > 0) (init prime)
so to find out if 5 is a prime number, we'd have to check if there's a number in prime minus the last element of prime that divides it. Let's see if we can do that.
Now let's look at the definition of prime, we get
all (\y -> (mod 5 y) > 0) (2:3:filter (\x -> all (\y -> (mod x y) > 0) (init prime)) [5..])
So to determine whether 5 is a prime number, we only have to check if it's:
divisible by 2 - it's not, let's continue
divisible by 3 - still no
divisible by ...? Well, we're in the process of checking what the 3rd prime is so we don't know yet...
and there's the crux of the problem. With this logic, to determine the third prime number you need to know the third prime number! Of course logically, we actually don't want to check this at all, rather we only need to check if any of the smaller prime numbers are divisors of the current candidate.
So how do we go about doing that? Well, we'll have to change our logic unfortunately. One thing we can do is try to remember how many primes we already have, and only take as many as we need for our comparison:
prime = 2 : 3 : morePrimes 2 [5..]
morePrimes n (x:xs)
  | all (\y -> mod x y > 0) (take n prime) = x : morePrimes (n+1) xs
  | otherwise = morePrimes n xs
so how does this work? Well, it basically does what we were just talking about: we remember how many primes we already have (starting at 2 because we know we have at least [2,3] in prime). We then check, using take n, whether our next candidate is divisible by any of the n primes we already know; if it isn't, we know it's our next prime and we need to increment n - otherwise we just carry on.
There's also the more well known form inspired by (although not quite the same as) the Sieve of Eratosthenes:
prime = sieve [2..] where
  sieve (p:xs) = p : sieve (filter (\x -> mod x p > 0) xs)
so how does this work? Well, again with a similar idea: We know that the next prime number needs to be non-divisible by any previous prime number. So what do we do? Well, starting at 2 we know that the first element in the list is a prime number. We then throw away every number divisible by that prime number using filter. And afterwards, the next item in the list is going to be a prime number again (because we didn't throw it away), so we can repeat the process.
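Either definition can be sanity-checked in GHCi:
ghci> take 10 prime
[2,3,5,7,11,13,17,19,23,29]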
Neither of these are one liners like the one you were hoping for though.
If the code in the other answer is restructured under the identity
[take n primes | n <- [0..]] == inits primes
eventually we get
import Data.List
-- [ ([], 2), ([2], 3), ([2,3], 5), ... ]
primes = 2 : [ c | (ps, p) <- zip (inits primes) primes,
                   c <- take 1 [c | c <- [p+1..],
                                    and [mod c p > 0 | p <- ps]]]
Further improving it algorithmically, it becomes
primes = 2 : [ c | (ps, r:q:_) <- zip (inits primes)                  -- [] [3,4,...]
                                      (tails $ 3 : map (^2) primes),  -- [2] [4,9,...]
                   c <- [r..q-1], and [mod c p > 0 | p <- ps]]        -- [2,3] [9,25,...]

How to tell if a number is a square number with recursion?

I solved the following exercise, but I'm not a fan of the solution:
Write the function isPerfectSquare using recursion, to tell if an
Int is a perfectSquare
isPerfectSquare 1 -> Should return True
isPerfectSquare 3 -> Should return False
The num+1 part is there for the cases isPerfectSquare 0 and isPerfectSquare 1, and it is one of the parts I don't like one bit. This is my solution:
perfectSquare 0 1 = [0] ++ perfectSquare 1 3
perfectSquare current diff = [current] ++ perfectSquare (current + diff) (diff + 2)
isPerfectSquare num = any (==num) (take (num+1) (perfectSquare 0 1))
What is a more elegant solution to this problem? Of course, we can't use sqrt or floating-point operations.
@luqui you mean like this?
pow n = n*n
perfectSquare pRoot pSquare | pow(pRoot) == pSquare = True
| pow(pRoot)>pSquare = perfectSquare (pRoot-1) pSquare
| otherwise = False
--
isPerfectSquare number = perfectSquare number number
I can't believe I didn't see it xD thanks a lot! I must be really tired
You can perform some sort of "binary search" on some implicit list of squares. There is however a problem of course, and that is that we first need an upper bound. We can use as upper bound the number itself, since for all integral squares, the square is larger than the value we square.
So it could look like:
isPerfectSquare n = search 0 n
  where
    search i k
      | i > k     = False
      | j2 > n    = search i (j-1)
      | j2 < n    = search (j+1) k
      | otherwise = True
      where
        j  = div (i+k) 2
        j2 = j * j
To verify that a number n is a perfect square, we thus have an algorithm that runs in O(log n) in case the integer operations are done in constant time (for example if the number of bits is fixed).
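A quick sanity check in GHCi (assuming the definition above):
ghci> filter isPerfectSquare [1..50]
[1,4,9,16,25,36,49]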
Wikipedia suggests using Newton's method. Here's how that would look. We'll start with some boilerplate. ensure is a little combinator I've used fairly frequently. It's written to be very general, but I've included a short comment that should be pretty explanatory for how we'll plan to use it.
import Control.Applicative
import Control.Monad
ensure :: Alternative f => (a -> Bool) -> a -> f a
ensure p x = x <$ guard (p x)
-- ensure p x | p x = Just x
-- | otherwise = Nothing
Here's the implementation of the formula given by Wikipedia for taking one step in Newton's method. x is our current guess about the square root, and n is the number we're taking the square root of.
stepApprox :: Integer -> Integer -> Integer
stepApprox x n = (x + n `div` x) `div` 2
Now we can recursively call this stepping function until we get the floor of the square root. Since we're using integer division, the right termination condition is to watch for the next step of the approximation to be equal to, or one greater than, the current step. This is the only recursive function.
iterateStepApprox :: Integer -> Integer -> Integer
iterateStepApprox x n = case x' - x of
    0 -> x
    1 -> x
    _ -> iterateStepApprox x' n
  where x' = stepApprox x n
To wrap the whole development up in a nice API, to check if a number is a square we can just check that the floor of its square root squares to it. We also need to pick a starting approximation, but we don't have to be super smart -- Newton's method converges very quickly for square roots. We'll pick half the number (rounded up) as our approximation. To avoid division by zero and other nonsense, we'll make zero and negative numbers special cases.
isqrt :: Integer -> Maybe Integer
isqrt n | n < 0 = Nothing
isqrt 0 = Just 0
isqrt n = ensure (\x -> x*x == n) (iterateStepApprox ((n+1)`div`2) n)
Now we're done! It's pretty fast even for large numbers:
> :set +s
> isqrt (10^10000) == Just (10^5000)
True
(0.58 secs, 182,610,408 bytes)
Yours would spend rather a longer time than the universe has got left computing that. It is also marginally faster than the binary search algorithm in my tests. (Of course, not hand-rolling it yourself is several orders of magnitude faster still, probably in part because it uses a better, but more complicated, algorithm based on Karatsuba multiplication.)
If the function is recursive then it is primitive recursive, as are 90% of all recursive functions. For these, folds are fast and effective. Considering the programmer's time, while keeping things simple and correct, is important.
Now, that said, it might be fruitful to consider the textual pattern of functions like sqrt. sqrt returns a floating-point number. If a number is a perfect square, its printed form ends in the two characters ".0". The same pattern might occur elsewhere in the mantissa, however, so if the string is processed in reverse, "0." must be at the very start of the list.
This function takes a Number and returns a Bool
fps n = (take 2 . reverse . show $ (n / (sqrt n))) == "0."
fps 10000.00001
False
fps 10000
True

Generating Cartesian products in Haskell

I am trying to generate all possible combinations of n numbers. For example if n = 3 I would want the following combinations:
(0,0,0), (0,0,1), (0,0,2)... (0,0,9), (0,1,0)... (9,9,9).
This post describes how to do so for n = 3:
[(a,b,c) | m <- [0..9], a <- [0..m], b <- [0..m], c <- [0..m] ]
Or to avoid duplicates (i.e. multiple copies of the same n-uple):
let l = 9; in [(a,b,c) | m <- [0..3*l],
                         a <- [0..l], b <- [0..l], c <- [0..l],
                         a + b + c == m ]
However following the same pattern would become very silly very quickly for n > 3. Say I wanted to find all of the combinations: (a, b, c, d, e, f, g, h, i, j), etc.
Can anyone point me in the right direction here? Ideally I'd rather not use a built-in function, as I am trying to learn Haskell and I would rather take the time to understand a piece of code than just use a package written by someone else. A tuple is not required; a list would also work.
My other answer gave an arithmetic algorithm to enumerate all the combinations of digits. Here's an alternative solution which arises by generalising your example. It works for non-numbers, too, because it only uses the structure of lists.
First off, let's remind ourselves of how you might use a list comprehension for three-digit combinations.
threeDigitCombinations = [[x, y, z] | x <- [0..9], y <- [0..9], z <- [0..9]]
What's going on here? The list comprehension corresponds to nested loops. z counts from 0 to 9, then y goes up to 1 and z starts counting from 0 again. x ticks the slowest. As you note, the shape of the list comprehension changes (albeit in a uniform way) when you want a different number of digits. We're going to exploit that uniformity.
twoDigitCombinations = [[x, y] | x <- [0..9], y <- [0..9]]
We want to abstract over the number of variables in the list comprehension (equivalently, the nested-ness of the loop). Let's start playing around with it. First, I'm going to rewrite these list comprehensions as their equivalent monad comprehensions.
threeDigitCombinations = do
    x <- [0..9]
    y <- [0..9]
    z <- [0..9]
    return [x, y, z]
twoDigitCombinations = do
    x <- [0..9]
    y <- [0..9]
    return [x, y]
Interesting. It looks like threeDigitCombinations is roughly the same monadic action as twoDigitCombinations, but with an extra statement. Rewriting again...
zeroDigitCombinations = [[]] -- equivalently, `return []`
oneDigitCombinations = do
    z <- [0..9]
    empty <- zeroDigitCombinations
    return (z : empty)
twoDigitCombinations = do
    y <- [0..9]
    z <- oneDigitCombinations
    return (y : z)
threeDigitCombinations = do
    x <- [0..9]
    yz <- twoDigitCombinations
    return (x : yz)
It should be clear now what we need to parameterise:
combinationsOfDigits 0 = return []
combinationsOfDigits n = do
    x <- [0..9]
    xs <- combinationsOfDigits (n - 1)
    return (x : xs)
ghci> combinationsOfDigits 2
[[0,0],[0,1],[0,2],[0,3],[0,4],[0,5],[0,6],[0,7],[0,8],[0,9],[1,0],[1,1] ... [9,8],[9,9]]
It works, but we're not done yet. I want to show you that this is an instance of a more general monadic pattern. First I'm going to change the implementation of combinationsOfDigits so that it folds up a list of constants.
combinationsOfDigits n = foldUpList $ replicate n [0..9]
  where foldUpList [] = return []
        foldUpList (xs : xss) = do
            x <- xs
            ys <- foldUpList xss
            return (x : ys)
Looking at the definiton of foldUpList :: [[a]] -> [[a]], we can see that it doesn't actually require the use of lists per se: it only uses the monad-y parts of lists. It could work on any monad, and indeed it does! It's in the standard library, and it's called sequence :: Monad m => [m a] -> m [a]. If you're confused by that, replace m with [] and you should see that those types mean the same thing.
combinationsOfDigits n = sequence $ replicate n [0..9]
Finally, noting that sequence . replicate n is the definition of replicateM, we get it down to a very snappy one-liner.
combinationsOfDigits n = replicateM n [0..9]
To summarise, replicateM n gives the n-ary combinations of an input list. This works for any list, not just a list of numbers. Indeed, it works for any monad - though the "combinations" interpretation only makes sense when your monad represents choice.
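For instance, the same one-liner also enumerates combinations of characters:
ghci> import Control.Monad
ghci> replicateM 2 "ab"
["aa","ab","ba","bb"]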
This code is very terse indeed! So much so that I think it's not entirely obvious how it works, unlike the arithmetic version I showed you in my other answer. The list monad has always been one of the monads I find less intuitive, at least when you're using higher-order monad combinators and not do-notation.
On the other hand, it runs quite a lot faster than the number-crunching version. On my (high-spec) MacBook Pro, compiled with -O2, this version calculates the 5-digit combinations about 4 times faster than the version which crunches numbers. (If anyone can explain the reason for this I'm listening!)
What are all the combinations of three digits? Let's write a few out manually.
000, 001, 002 ... 009, 010, 011 ... 099, 100, 101 ... 998, 999
We ended up simply counting! We enumerated all the numbers between 0 and 999. For an arbitrary number of digits this generalises straightforwardly: the upper limit is 10^n (exclusive), where n is the number of digits.
Numbers are designed this way on purpose. It would be jolly strange if there was a possible combination of three digits which wasn't a valid number, or if there was a number below 1000 which couldn't be expressed by combining three digits!
This suggests a simple plan to me, which just involves arithmetic and doesn't require a deep understanding of Haskell*:
Generate a list of numbers between 0 and 10^n
Turn each number into a list of digits.
Step 2 is the fun part. To extract the digits (in base 10) of a three-digit number, you do this:
Take the quotient and remainder of your number with respect to 100. The quotient is the first digit of the number.
Take the remainder from step 1 and take its quotient and remainder with respect to 10. The quotient is the second digit.
The remainder from step 2 was the third digit. This is the same as taking the quotient with respect to 1.
For an n-digit number, we take the quotient n times, starting with 10^(n-1) and ending with 1. Each time, we use the remainder from the last step as the input to the next step. This suggests that our function to turn a number into a list of digits should be implemented as a fold: we'll thread the remainder through the operation and build a list as we go. (I'll leave it to you to figure out how this algorithm changes if you're not in base 10!)
Now let's implement that idea. We want to calculate a specified number of digits, zero-padding when necessary, of a given number. What should the type of digits be?
digits :: Int -> Int -> [Int]
Hmm, it takes in a number of digits and an integer, and produces a list of integers representing the digits of the input integer. The list will contain single-digit integers, each one of which will be one digit of the input number.
digits numberOfDigits theNumber = reverse $ fst $ foldr step ([], theNumber) powersOfTen
  where step exponent (digits, remainder) =
            let (digit, newRemainder) = remainder `divMod` exponent
            in (digit : digits, newRemainder)
        powersOfTen = [10^n | n <- [0..(numberOfDigits-1)]]
What's striking to me is that this code looks quite similar to my English description of the arithmetic we wanted to perform. We generate a powers-of-ten table by exponentiating numbers from 0 upwards. Then we fold that table back up; at each step we put the quotient on the list of digits and send the remainder to the next step. We have to reverse the output list at the end because of the right-to-left way it got built.
By the way, the pattern of generating a list, transforming it, and then folding it back up is an idiomatic thing to do in Haskell. It's even got its own high-falutin' mathsy name, hylomorphism. GHC knows about this pattern too and can compile it into a tight loop, optimising away the very existence of the list you're working with.
Let's test it!
ghci> digits 3 123
[1, 2, 3]
ghci> digits 5 10101
[1, 0, 1, 0, 1]
ghci> digits 6 99
[0, 0, 0, 0, 9, 9]
It works like a charm! (Well, it misbehaves when numberOfDigits is too small for theNumber, but never mind about that.) Now we just have to generate a counting list of numbers on which to use digits.
combinationsOfDigits :: Int -> [[Int]]
combinationsOfDigits numberOfDigits = map (digits numberOfDigits) [0..(10^numberOfDigits)-1]
... and we've finished!
ghci> combinationsOfDigits 2
[[0,0],[0,1],[0,2],[0,3],[0,4],[0,5],[0,6],[0,7],[0,8],[0,9],[1,0],[1,1] ... [9,7],[9,8],[9,9]]
* For a version which does require a deep understanding of Haskell, see my other answer.
combos 1 list = map (\x -> [x]) list
combos n list = foldl (++) [] $ map (\x -> map (\y -> x:y) nxt) list
  where nxt = combos (n-1) list
In your case
combos 3 [0..9]
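For reference, the output starts like this (note that it is a list of lists, not tuples):
ghci> take 4 (combos 3 [0..9])
[[0,0,0],[0,0,1],[0,0,2],[0,0,3]]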

Keep getting stack overflow

I am repeatedly getting a stack overflow on my solution to Project Euler #7 and i have no idea why.
Here is my code:
import System.Environment
checkPrime :: Int -> Bool
checkPrime n = not $ testList n [2..n `div` 2]
--testList :: Int -> [Int] -> Bool
testList _ [] = False
testList n xs
  | (n `rem` (head xs) == 0) = True
  | otherwise                = testList n (tail xs)
primesTill n = sum [1 | x <- [2..n], checkPrime x]
nthPrime n = nthPrime' n 2
nthPrime' n x
  | (primesTill x == n) = x
  | otherwise           = nthPrime' n x+1
main = print (nthPrime 10001)
resolving the stack overflow
As @bheklilr mentioned in his comment, the stack overflow is caused by a wrong evaluation order in the otherwise branch of the nthPrime' function:
nthPrime' n x+1
Will be interpreted as
(nthPrime' n x)+1
Because this expression is called recursively, your call of nthPrime' n 2 will expand into
(nthPrime' n 2)+1+1+1+1+1+1+1+1 ...
but the second parameter never gets incremented and your program collects a mass of unevaluated thunks. The evaluation could only happen if the first parameter were reduced to an Int, but your function is in an endless recursion, so this never takes place. All the plus ones are stored on the stack; when there is no more space left, you'll get a stack overflow error.
To solve this problem you need to put parentheses around the x+1 so your recursive call will look like this
nthPrime' n (x+1)
Now the parameter gets incremented before it is passed to the recursive call.
This should solve your stack overflow problem. You can try it out with a smaller number, e.g. 101, and you'll get the desired result.
runtime optimization
If you test your program with the original value 10001 you may realize that it still won't finish in a reasonable amount of time.
I won't go into the details of fancy algorithms to solve this problems, if you're interested in them you can easily find them online.
Instead I'll show you where the problem in your code is and present a simple solution.
The bottleneck is your nthPrime function:
primesTill n = sum [1 | x <- [2..n], checkPrime x]
nthPrime n = nthPrime' n 2
nthPrime' n x
  | (primesTill x == n) = x
  | otherwise           = nthPrime' n (x+1)
This function checks whether the number of primes between 2 and x is equal to n. The idea is correct, but it leads to a runtime far worse than necessary. The problem is that you recalculate primesTill x in every iteration. To count the primes smaller than x you test them all and then sum them up. In the next step, for x+1, you forget everything you know about the numbers between 2 and x and test them all again for primality; only as a last step do you test whether x+1 is prime. Then you repeat this - forget everything and test all numbers again - until you are finished.
Wouldn't it be great if the computer could remember the primes it has already found?
There are many possibilities to do this; I'll use a simple infinite list. If you are interested in other approaches, you can search for the terms memoization or dynamic programming.
We start with the list comprehension you used in primesTill:
[1 | x <- [2..n], checkPrime x]
This calculates all primes between 2 and n, but immediately forgets the prime number and replaces it with 1, so the first step will be to keep the actual numbers.
[x | x <- [2..n], checkPrime x]
This gives us a list of all prime numbers between 2 and n. If we had a sufficiently large list of prime numbers we could use the index function !! to get the 10001st prime number. So we need to set n to a really really big number, to be sure that the filtered list is long enough?
Lazy evaluation to the rescue!
Lazy evaluation in Haskell allows us to build an infinite list that is only evaluated as much as needed. If we don't supply an upper bound to a list generator, it will build such an infinite list for us.
[x | x <- [2..], checkPrime x]
Now we have an infinite list of all prime numbers.
We can bind it to a name, e.g. primes, and use it to define nthPrime:
primes = [x | x <- [2..], checkPrime x]
nthPrime n = primes !! (n - 1)
Now you can compile it with ghc -O2, run it and the result will be promptly delivered to you.
