Number type boundaries in Common Lisp and stack overflow in GHCi (Haskell)

First question ever here, and a newbie in both Common Lisp and Haskell, so please be kind.
I have a function in Common Lisp - code below - which is intended to tell whether the area of a triangle is an integral number (an integer?).
(defun area-int-p (a b c)
  (let* ((s (/ (+ a b c) 2))
         (area (sqrt (* s (- s a) (- s b) (- s c)))))
    (if (equal (ceiling area) (floor area))
        t
        nil)))
This is supposed to use Heron's formula to calculate the area of the triangle, given the lengths of the three sides, and to decide whether it is an integer by comparing the ceiling and the floor. We are told that the area of an equilateral triangle is never an integer. Therefore, to test whether the function is working, I ran it with the arguments 333 333 333. Here is what I got in return:
CL-USER> (area-int-p 333 333 333)
NIL
Perfect! It works. To test it even more, I ran it with the arguments 3333 3333 3333. This is what I got in return:
CL-USER> (area-int-p 3333 3333 3333)
T
Something is wrong, this is not supposed to happen!
So, I try the following, hopefully equivalent Haskell function to see what happens:
areaIntP :: (Integral a) => a -> a -> a -> Bool
areaIntP a b c =
  let aa = fromIntegral a
      bb = fromIntegral b
      cc = fromIntegral c
      perimeter = aa + bb + cc
      s = perimeter / 2
      area = sqrt (s * (s - aa) * (s - bb) * (s - cc))
  in if ceiling area == floor area
       then True
       else False
This is what I get:
*Main> areaIntP 3333 3333 3333
False
*Main> areaIntP 333 333 333
False
Looks perfect. Encouraged by this, I use the functions below in Haskell to sum the perimeters of isosceles triangles whose third side differs by just one unit from the other two sides, with an integral area and a perimeter below 1,000,000,000.
toplamArtilar :: Integral a => a -> a -> a -> a
toplamArtilar altSinir ustSinir toplam =
  if ustSinir == altSinir
    then toplam
    else if areaIntP ustSinir ustSinir (ustSinir + 1) == True
      then toplamArtilar altSinir (ustSinir - 1) (toplam + (3 * ustSinir + 1))
      else toplamArtilar altSinir (ustSinir - 1) toplam

toplamEksiler :: Integral a => a -> a -> a -> a
toplamEksiler altSinir ustSinir toplam =
  if ustSinir == altSinir
    then toplam
    else if areaIntP ustSinir ustSinir (ustSinir - 1) == True
      then toplamEksiler altSinir (ustSinir - 1) (toplam + (3 * ustSinir - 1))
      else toplamEksiler altSinir (ustSinir - 1) toplam

sonuc altSinir ustSinir =
  toplamEksiler altSinir ustSinir (toplamArtilar altSinir ustSinir 0)
(ustSinir means upper limit, altSinir lower limit by the way.)
Running sonuc with the arguments 2 and 333333333, however, my stack overflows. Running the equivalent functions in Common Lisp, the stack is OK, but the area-int-p function is not reliable, probably because of the boundaries of the number type the interpreter deduces.
After all this, my question is two-fold:
1) How do I get round the problem in the Common LISP function area-int-p?
2) How do I prevent the stack overflow with the Haskell functions above, either within Emacs or with GHCi run from the terminal?
Note for those who figure out what I am trying to achieve here: please don't tell me to use Java BigDecimal and BigInteger.
Edit after very good replies: I asked two questions in one, and received perfectly satisfying, newbie friendly answers and a note on style from very helpful people. Thank you.

Let's define an intermediate Common Lisp function:
(defun area (a b c)
  (let ((s (/ (+ a b c) 2)))
    (sqrt (* s (- s a) (- s b) (- s c)))))
Your tests give:
CL-USER> (area 333 333 333)
48016.344
CL-USER> (area 3333 3333 3333)
4810290.0
In the second case, it should be clear that both the ceiling and floor are equal. This is not the case in Haskell where the second test, with 3333, returns:
4810290.040910754
Floating point
In Common Lisp, the value from which we take a square root is:
370222244442963/16
This is because computations are made with rational numbers. Up to this point, the precision is maximal. However, SQRT is free to return either a rational, when possible, or an approximate result. As a special case, the result can be an integer on some implementations, as Rainer Joswig pointed out in a comment. It makes sense because both integer and ratio are disjoint subtypes of the rational type. But as your problem shows, some square roots are irrational (e.g. √2), and in that case CL can return a float approximating the value (or a complex float).
The relevant section regarding floats and mathematical functions is 12.1.3.3 Rule of Float Substitutability. Long story short, the result is converted to a single-float when you compute the square root, which happens to lose some precision. In order to get a double, you have to be more explicit:
(defun area (a b c)
  (let ((s (/ (+ a b c) 2)))
    (sqrt (float (* s (- s a) (- s b) (- s c)) 0d0))))
I could also have used (coerce ... 'double-float), but here I chose to call the FLOAT conversion function. The optional second argument is a float prototype, i.e. a value of the target type. Above, it is 0d0, a double float. You could also use 0l0 for long floats or 0s0 for short ones. This parameter is useful if you want the same precision as an input float, but it can be used with literals too, as in the example. The exact meaning of the short, single, double and long float types is implementation-defined, but they must respect some rules. Current implementations generally give more precision than the minimum required.
CL-USER> (area 3333 3333 3333)
4810290.040910754d0
Now, if I wanted to test if the result is integral, I would truncate the float and look if the second returned value, the remainder, is zero.
CL-USER> (zerop (nth-value 1 (truncate 4810290.040910754d0)))
NIL
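Incidentally, the same single-vs-double gap can be reproduced in Haskell (my own illustration, not part of the answer): converting the exact rational radicand to Float collapses the fractional part of the square root, while Double keeps it.

```haskell
-- Reproducing the precision gap in Haskell (illustration only).
-- 370222244442963/16 is the exact radicand from the Lisp computation.
radicand :: Rational
radicand = 370222244442963 / 16

atFloat :: Float
atFloat = sqrt (fromRational radicand)   -- 4810290.0, fraction lost

atDouble :: Double
atDouble = sqrt (fromRational radicand)  -- 4810290.040910754
```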
Arbitrary-precision
Note that regardless of the implementation language (Haskell, CL or another one) the approach is going to give incorrect results for some inputs, given how floats are represented. Indeed, the same problem you had with CL could arise for some inputs with more precise floats, where the result would be very close to an integer. You might need another mathematical approach or something like MPFR for arbitrary precision floating point computations. SBCL ships with sb-mpfr:
CL-USER> (require :sb-mpfr)
("SB-MPFR" "SB-GMP")
CL-USER> (in-package :sb-mpfr)
#<PACKAGE "SB-MPFR">
And then:
SB-MPFR> (with-precision 256
           (sqrt (coerce 370222244442963/16 'mpfr-float)))
.4810290040910754427104204965311207243133723228299086361205561385039201180068712e+7
-1
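Another mathematical approach, sketched here in Haskell (the `isqrt` helper and the names are mine, not from the answer): by Heron's formula, (4·area)² = (a+b+c)(−a+b+c)(a−b+c)(a+b−c), so the area is integral exactly when that product is a perfect square whose root is divisible by 4. This stays in exact integer arithmetic, so rounding cannot fool it.

```haskell
-- Exact integrality test for a triangle's area: no floating point involved.
-- isqrt is an integer Newton iteration (assumes n >= 0).
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt n = go n
  where
    go x =
      let y = (x + n `div` x) `div` 2
      in if y >= x then x else go y

-- p is (4 * area)^2 by Heron's formula; the area is an integer
-- iff p is a perfect square whose root is divisible by 4.
areaIsIntegral :: Integer -> Integer -> Integer -> Bool
areaIsIntegral a b c
  | p < 0     = False  -- not a valid triangle
  | otherwise = r * r == p && r `mod` 4 == 0
  where
    p = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    r = isqrt p
```

For the 5-12-13 right triangle (area 30) this answers True; for the equilateral cases 333 and 3333 it correctly answers False.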

I will answer your second question, I'm not sure about the first. In Haskell, because it's a lazy language, when you use tail recursion with an accumulator parameter, an "accumulation of thunks" can take place. A thunk is an expression that is suspended and not yet evaluated. To take a much simpler example, summing all the numbers from 0 to n:
tri :: Int -> Int -> Int
tri 0 accum = accum
tri n accum = tri (n-1) (accum + n)
If we trace the evaluation, we can see what's going on:
tri 3 0
= tri (3-1) (0+3)
= tri 2 (0+3)
= tri (2-1) ((0+3)+2)
= tri 1 ((0+3)+2)
= tri (1-1) (((0+3)+2)+1)
= tri 0 (((0+3)+2)+1)
= ((0+3)+2)+1 -- here is where ghc uses the C stack
= (0+3)+2 (+1) on stack
= 0+3 (+2) (+1) on stack
= 0 (+3) (+2) (+1) on stack
= 3 (+2) (+1) on stack
= 5 (+1) on stack
= 6
This is a simplification of course, but it's an intuition that can help you understand both stack overflows and space leaks caused by thunk buildup. GHC only evaluates a thunk when it's needed. We ask whether the value of n is 0 each time through tri, so there is no thunk buildup in that parameter, but nobody needs to know the value of accum until the very end, by which time it might be a really huge thunk, as you can see from the example. Evaluating that huge thunk is what can overflow the stack.
The solution is to make tri evaluate accum sooner. This is usually done using a BangPattern (but can be done with seq if you don't like extensions).
{-# LANGUAGE BangPatterns #-}
tri :: Int -> Int -> Int
tri 0 !accum = accum
tri n !accum = tri (n-1) (accum + n)
The ! before accum means "evaluate this parameter at the moment of pattern matching" (even though the pattern doesn't technically need to know its value). Then we get this evaluation trace:
tri 3 0
= tri (3-1) (0+3)
= tri 2 3 -- we evaluate 0+3 because of the bang pattern
= tri (2-1) (3+2)
= tri 1 5
= tri (1-1) (5+1)
= tri 0 6
= 6
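For reference, the same strictness can be had without the extension, using `seq` as mentioned above (a minimal sketch):

```haskell
-- Force the accumulator before recursing, so no thunk chain builds up.
tri :: Int -> Int -> Int
tri 0 accum = accum
tri n accum = accum `seq` tri (n - 1) (accum + n)

main :: IO ()
main = print (tri 1000000 0)  -- 500000500000
```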
I hope this helps.

About style:
(if (predicate? ...) t nil)
is just
(predicate? ...)
With your IF you check whether the predicate returned T, and if so return T. But the predicate's result is already T or NIL, so you can just return it directly.

Related

Perplexing behaviour when approximating the derivative in haskell

I have defined a typeclass Differentiable to be implemented by any type which can operate on infinitesimals.
Here is an example:
class Fractional a => Differentiable a where
  dif :: (a -> a) -> (a -> a)
  difs :: (a -> a) -> [a -> a]
  difs = iterate dif

instance Differentiable Double where
  dif f x = (f (x + dx) - f(x)) / dx
    where dx = 0.000001
func :: Double -> Double
func = exp
I have also defined a simple Double -> Double function to differentiate.
But when I test this in GHCi, this happens:
... $ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
Prelude> :l testing
[1 of 1] Compiling Main ( testing.hs, interpreted )
Ok, one module loaded.
*Main> :t func
func :: Double -> Double
*Main> derivatives = difs func
*Main> :t derivatives
derivatives :: [Double -> Double]
*Main> terms = map (\f -> f 0) derivatives
*Main> :t terms
terms :: [Double]
*Main> take 5 terms
[1.0,1.0000004999621837,1.000088900582341,-222.0446049250313,4.440892098500626e8]
*Main>
The approximations to the nth derivative of e^x|x=0 are:
[1.0,1.0000004999621837,1.000088900582341,-222.0446049250313,4.440892098500626e8]
The first and 2nd derivatives are perfectly reasonable approximations given the setup, but suddenly, the third derivative of func at 0 is... -222.0446049250313! HOW!!?
The method you're using here is a finite difference method of 1st-order accuracy.
Layman's translation: it works, but is pretty rubbish numerically speaking. Specifically, because it's only 1st-order accurate, you need those really small steps to get good accuracy even with exact-real-arithmetic. You did choose a small step size so that's fine, but small step size brings in another problem: rounding errors. You need to take the difference f (x+δx) - f x with small δx, meaning the difference is small whereas the individual values may be large. That always brings up the floating-point inaccuracy – consider for example
Prelude> (1 + pi*1e-13) - 1
3.141931159689193e-13
That might not actually hurt that much, but since you then need to divide by δx you boost up the error.
This issue just gets worse/compounded as you go to the higher derivatives, because now each of the f' x and f' (x+δx) has already an (non-identical!) boosted error on it, so taking the difference and boosting again is a clear recipe for disaster.
The simplest way to remediate the problem is to switch to a 2nd-order accurate method, the obvious being central difference. Then you can make the step a lot bigger, and thus largely avoid rounding issues:
Prelude> let dif f x = (f (x + δx) - f(x - δx)) / (2*δx) where δx = 1e-3
Prelude> take 8 $ ($0) <$> iterate dif exp
[1.0,1.0000001666666813,1.0000003333454632,1.0000004990740052,0.9999917560676863,0.9957312752106873,8.673617379884035,7806.255641895632]
You see the first couple of derivatives are good now, but then eventually it also becomes unstable – and this will happen with any FD method as you iterate it. But that's anyway not really a good approach: note that every evaluation of the n-th derivative requires 2 evaluations of the n−1-th. So, the complexity is exponential in the derivative degree.
A better approach to approximate the n-th derivative of an opaque function is to fit an n-th order polynomial to it and differentiate this symbolically/automatically. Or, if the function is not opaque, differentiate itself symbolically/automatically.
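As a taste of the automatic route, here is a minimal forward-mode sketch using dual numbers (entirely my own illustration, not the answer's code; supporting `exp` would additionally need `Fractional` and `Floating` instances): each value carries its derivative along, so there is no step size and no cancellation error.

```haskell
-- Dual numbers: a value paired with its derivative. Arithmetic follows
-- the sum and product rules, so derivatives are exact up to rounding.
data Dual = Dual Double Double  -- (value, derivative)
  deriving Show

instance Num Dual where
  Dual x x' + Dual y y' = Dual (x + y) (x' + y')
  Dual x x' - Dual y y' = Dual (x - y) (x' - y')
  Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')
  negate (Dual x x')    = Dual (negate x) (negate x')
  abs    (Dual x x')    = Dual (abs x) (signum x * x')
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Derivative of f at x: seed the derivative slot with 1.
deriv :: (Dual -> Dual) -> Double -> Double
deriv f x = let Dual _ d = f (Dual x 1) in d
```

For instance, `deriv (\x -> x * x + 3 * x) 2` gives exactly 7.0, the true value of 2x + 3 at x = 2, with no finite-difference noise at all.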
tl;dr: the dx denominator gets small exponentially quickly, which means that even small errors in the numerator get blown out of proportion.
Let's do some equational reasoning on the first "bad" approximation, the third derivative.
dif (dif (dif exp))
  = { definition of dif }
dif (dif (\x -> (exp (x+dx) - exp x)/dx))
  = { definition of dif }
dif (\y -> ((\x -> (exp (x+dx) - exp x)/dx) (y+dx)
          - (\x -> (exp (x+dx) - exp x)/dx) y
           )/dx)
  = { questionable algebra }
dif (\y -> (exp (y + 2*dx) - 2*exp (y + dx) + exp y)/dx^2)
  = { alpha }
dif (\x -> (exp (x + 2*dx) - 2*exp (x + dx) + exp x)/dx^2)
  = { definition of dif and questionable algebra }
\x -> (exp (x + 3*dx) - 3*exp (x + 2*dx) + 3*exp (x + dx) - exp x)/dx^3
Hopefully by now you can see the pattern we're getting into: as we take more and more derivatives, the error in the numerator gets worse (because we are computing exp farther and farther away from the original point, x + 3*dx is three times as far away e.g.) while the sensitivity to error in the denominator gets higher (because we are computing dx^n for the nth derivative). By the third derivative, these two factors become untenable:
> exp (3*dx) - 3*exp (2*dx) + 3*exp (dx) - exp 0
-4.440892098500626e-16
> dx^3
9.999999999999999e-19
So you can see that, although the error in the numerator is only about 5e-16, the sensitivity to error in the denominator is so high that you start to see nonsensical answers.

How is mod calculated?

I have following operation:
Prelude> mod (3 - 12) 7
As a result I got 5.
Why is the result 5?
And when I try something like this:
Prelude> mod -9 7
Then I get this error:
<interactive>:6:1: error:
• Non type-variable argument
in the constraint: Num (t -> a -> a -> a)
(Use FlexibleContexts to permit this)
• When checking the inferred type
it :: forall a t.
(Num (t -> a -> a -> a), Num (a -> a -> a), Num t, Integral a) =>
a -> a -> a
Why?
I forgot to mention that I have just started learning Haskell.
mod is specified as
integer modulus, satisfying
(x `div` y)*y + (x `mod` y) == x
and div as
integer division truncated toward negative infinity
In your case x is -9 and y is 7.
-9 / 7 is -1.2857..., which (rounded down) is -2. Thus (-9) `div` 7 is -2.
Looking at the equation above, we have ((-9) `div` 7)*7 + ((-9) `mod` 7) == (-9), which becomes (-2)*7 + ((-9) `mod` 7) == (-9), which in turn simplifies to (-14) + ((-9) `mod` 7) == (-9), (-9) `mod` 7 == (-9) - (-14), and finally (-9) `mod` 7 == 5 (because -9 + 14 is 5).
As for your second question: Haskell parses mod -9 7 as mod - (9 7), i.e. take the mod function and subtract from it the result of applying 9 to 7. This makes no sense because 9 is not a function (so you can't apply it) and mod is not a number (so you can't subtract from it).[1]
The fix is to use mod (-9) 7 to force - to be parsed as a unary operator (negating 9) instead of a binary infix operator.
[1] As the error message hints, there actually is a way to make ghc swallow this code. It involves defining interesting instances of Num, but I won't go into that here.
First of all, as already noted in the comments, if you write -9 without brackets, the - is interpreted as the binary subtraction function, not as a negative sign on the number.
Now for the mod part: there is a difference between modulo (mod :: Integral i => i -> i -> i) and remainder (rem :: Integral i => i -> i -> i):
mod :: Integral i => i -> i -> i
integer modulus, satisfying
(x `div` y)*y + (x `mod` y) == x
rem :: Integral i => i -> i -> i
integer remainder, satisfying:
(x `quot` y)*y + (x `rem` y) == x
So if both the numerator and the denominator are positive, there is no difference, because quot (division truncated towards zero) and div (floored division) are equivalent there.
However, when the numerator is negative, mod will still be non-negative, because div is floored, so (div x y)*y is lower than or equal to x, whereas rem will be negative or zero.
If on the other hand the denominator is negative, mod will be non-positive, whereas for rem the sign again depends on the sign of the numerator.
In short, mod always takes the sign of the denominator, whereas rem takes the sign of the numerator.
I guess you were expecting -2. The mathematical definition of modulo yields a natural number (>= 0), so it has to return 5 (that is, -2 + 7) instead of -2.
If you want a function that returns -2 instead (as most languages do), you can use the rem (remainder) function.
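The rule of thumb above is easy to check in GHCi; here it is as a small table (my own illustration):

```haskell
-- Each pair is (mod result, rem result): mod follows the divisor's sign,
-- rem follows the dividend's sign.
pairs :: [(Int, Int)]
pairs =
  [ ((-9) `mod` 7,    (-9) `rem` 7)    -- (5, -2)
  , (9    `mod` (-7), 9    `rem` (-7)) -- (-5, 2)
  , ((-9) `mod` (-7), (-9) `rem` (-7)) -- (-2, -2)
  , (9    `mod` 7,    9    `rem` 7)    -- (2, 2)
  ]
```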

No instance for (Fractional Int) arising from a use of `area'

I'm new to Haskell and I'm writing a program that calculates the limit of a function. So given two lists a and b, a delta dx = 0.001, and the limits of integration l and r, I want to recursively compute the area under the curve with equation:
a1*x^b1 + a2*x^b2 + ... + an*x^bn, where x is all the values between l and r with an increment of dx between each value. The technical part isn't that important I guess, but it helps to read the code:
import Text.Printf (printf)

-- This function should return a list [area].
solve :: Int -> Int -> [Int] -> [Int] -> [Double]
solve l r x y = [area l r x y]

area l r a b = if (l < r)
                 then (calc l a b) * 0.001 + (area (l + 1) r a b)
                 else (calc r a b) * 0.001

calc n (a:arest) (b:brest) = (fromIntegral(n) ^^ b) * fromIntegral(a) + (calc n arest brest)
calc n [] [] = 0

-- Input/Output.
main :: IO ()
main = getContents >>= mapM_ (printf "%.1f\n") . (\[a, b, [l, r]] -> solve l r a b) . map (map read . words) . lines
I get no error with the above code but as soon as I change area (l + 1) r a b to area (l + 0.001) r a b I get the following error message:
No instance for (Fractional Int) arising from a use of `area'
I tried making a new class and having a be an abstract type but that didn't work, any other ideas?
So the problem is that Int is not a Fractional type. In other words, it does not have a value called 0.001 [note 1], but you have requested Haskell to give you such a value in your code.
You are making this request because 0.001 is fed to the (+) function with another argument (in this case l) which is of type Int. This is a problem because the function has type (+) :: (Num a) => a -> a -> a: in other words, there are a lot of different functions (+) all having the type a -> a -> a; one of these functions exists for every type a in the Num type class.
Since we know that one argument to the function is an Int, it follows that we're using the specific function (+) :: Int -> Int -> Int. That is why l + 0.001 gets weird.
As for solving the problem: You probably wanted l and r to be of type Double (they're left and right bounds on where a number can be?) but if you're sure that they must be Ints then you probably meant to write fromIntegral l + 0.001.
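In isolation, the suggested fix looks like this (a sketch; the name `next` is mine, not from the question):

```haskell
-- Convert the Int to Double first; after that the (+) in use is the
-- Double instance, so the fractional literal 0.001 is fine.
next :: Int -> Double
next l = fromIntegral l + 0.001
```

Here `next 3` gives approximately 3.001, whereas `(3 :: Int) + 0.001` reproduces the `Fractional Int` error from the question.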
Side note on style: parentheses in Haskell are always just grouping/precedence, functions are higher precedence than operators which are higher precedence than special forms (let, case, if, do), and function application is always left-associative or "greedy nom": a function eats whatever is immediately in front of it. What you have written:
(fromIntegral(n) ^^ b) * fromIntegral(a) + (calc n arest brest)
is probably better written as:
fromIntegral a * fromIntegral n ^^ b + calc n arest brest
The parentheses around calc are not necessary (because operators like + have lower precedence than function applications), nor are the parentheses around n and a (because those sub-expressions are indivisible chunks; fromIntegral(n) is identical to fromIntegral (n) is identical to fromIntegral n).
As @dfeuer mentions below: secretly, when you write 0.001 it does not have a definite type; rather it is translated to fromRational 0.001 internally, where the latter 0.001 is a definite value of the definite type Rational, just as when you write 4 it is translated to fromInteger 4, where the latter 4 is a definite value of the definite type Integer. The problem is really that there is no fromRational function for Int, because Int is not part of the Fractional typeclass which defines fromRational. And it's not part of that typeclass because the language designers preferred an error to a silent rounding/dropping of a fraction.

Why does divMod round division down instead of ensuring a positive remainder?

The Euclidean division theorem, with which most math students and Haskellers are familiar, states that
Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that a = bq + r and 0 ≤ r < |b|.
This gives the conventional definitions of quotient and remainder. This 1992 paper argues that they are the best ones to implement in a programming language. Why, then, does divMod always round the quotient toward negative infinity?
Exact difference between div and quot shows that divMod already does a fair bit of extra work over quotRem; it seems unlikely to be much harder to get it right.
Code
I wrote the following implementation of a Euclidean-style divMod based on the implementation in GHC.Base. I'm pretty sure it's right.
divModInt2 :: Int -> Int -> (Int, Int)
divModInt2 (I# x) (I# y) = case (x `divModInt2#` y) of
                             (# q, r #) -> (I# q, I# r)

divModInt2# :: Int# -> Int# -> (# Int#, Int# #)
x# `divModInt2#` y#
  | (x# <# 0#) = case (x# +# 1#) `quotRemInt#` y# of
                   (# q, r #) -> if y# <# 0#
                                 then (# q +# 1#, r -# y# -# 1# #)
                                 else (# q -# 1#, r +# y# -# 1# #)
  | otherwise = x# `quotRemInt#` y#
Not only does this produce pleasantly Euclidean results, but it's actually simpler than the GHC code. It clearly performs at most two comparisons (as opposed to four for the GHC code).
In fact, this could probably be made entirely branchless without too much work by someone who knows more about primitives than I.
The gist of a branchless version (presumably someone who knows more could make it more efficient).
x `divMod` y = (q + yNeg, r - yNeg * y - xNeg)
  where
    (q, r) = (x + xNeg) `quotRem` y
    xNeg = fromEnum (x < 0)
    yNeg = xNeg * (2 * fromEnum (y < 0) - 1)
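For readers who would rather avoid GHC primitives altogether, the same Euclidean adjustment can be sketched on boxed Ints (my own transliteration of the idea, not the answer's code):

```haskell
-- quotRem truncates toward zero; when the remainder comes out negative,
-- shift it into [0, |y|) and fix the quotient to match.
divModE :: Int -> Int -> (Int, Int)
divModE x y
  | r < 0 && y < 0 = (q + 1, r - y)
  | r < 0          = (q - 1, r + y)
  | otherwise      = (q, r)
  where
    (q, r) = x `quotRem` y
```

For example, `divModE (-7) 2` is `(-4, 1)` and `divModE (-7) (-2)` is `(4, 1)`: the remainder is always non-negative, as the division theorem demands.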
At this point, I'd say backwards compatibility. (See #augustss comment.) Maybe it could be changed in the next major release of the report, but you'd have to convince the haskell-prime committee and possibly the GHC developers.

Unable to find bug in my Haskell program (Puzzle #2 from Project Euler)

SPOILER ALERT: Don't look at this if you are trying to solve Project Euler's problem #2 on your own w/o looking at the answer.
I've already completed problem #2 of Project Euler (computing the sum of all even Fibonacci(n) numbers less than or equal to 4 million) - I'm using these problems to practice my C/Ada skills, to revisit my Common Lisp and to learn Haskell.
When I'm trying to re-implement my solution in Haskell, I'm running into a problem. In classical fashion, it is calculating the wrong answer. However, I think my Haskell implementation resembles my Common Lisp one (which does produce the correct result.)
The algorithm only computes the Fibonacci number for n where n % 3 == 0. This is because
We need to sum only the even-valued Fibonacci numbers F(n) <= 4M, and
(n % 3 == 0) <--> (F(n) % 2 == 0)
Here is the Haskell implementation:
uber_bound = 40000000 -- Upper bound (exclusive) for fibonacci values
expected = 4613732    -- the correct answer

-- The implementation amenable for tail-recursion optimization
fibonacci :: Int -> Int
fibonacci n = __fibs (abs n) 0 1
  where
    -- The auxiliary, tail-recursive fibs function
    __fibs :: Int -> Int -> Int -> Int
    __fibs 0 f1 f2 = f1 -- the stopping case
    __fibs n f1 f2 = __fibs (n - 1) f2 (f1 + f2)

-- NOT working. It computes 19544084 when it should compute 4613732
find_solution :: Int
find_solution = sum_fibs 0
  where
    sum_fibs :: Int -> Int
    sum_fibs n =
      if fibs > uber_bound
        then
          0 -- stopping condition
        else
          -- remember, (n % 3 == 0) <--> (fib(n) % 2 == 0)
          -- so, seek the next even fibs by looking at
          -- the next n = n#pre + 3
          fibs + sum_fibs (n + 3)
      where
        fibs = fibonacci n

actual = find_solution
problem_2 = (expected == actual)
The thing is printing 19544084 when the correct answer is 4613732. My Common Lisp solution (which does work) is below.
I thought my Haskell implementation would resemble it, but I'm missing something.
(set 'expected 4613732) ;; the correct answer

;; tail-recursive fibonacci
(defun fibonacci (n)
  (labels
      ( ;; define an auxiliary fibs for tail recursion optimization
       (__fibs (n f1 f2)
         (if (<= n 0)
             f1 ;; the stopping condition
             (__fibs
              (- n 1) ;; decrement to ensure a stopping condition
              f2
              (+ f1 f2))))
       ) ;; end tail_rec_fibs auxiliary
    (__fibs n 0 1)
    ) ;; end labels
  ) ;; end fibonacci

(defun sum_fibs (seed)
  (let* ((f (fibonacci seed)))
    (if (> f 4000000)
        0
        ;; else
        (+ f (sum_fibs (+ 3 seed)))
        ) ;; end if
    ) ;; end of let
  ) ;; end of sum-fibs

(defun solution () (sum_fibs 0))

(defun problem_2 ()
  (let ((actual (solution)))
    (format t "expected:~d actual:~d" expected actual)
    (= expected actual)
    )
  ) ;; end of problem_2 defun
What on Earth am I doing wrong? Granted that I'm using a Neanderthal approach to learning Haskell (I'm currently just re-implementing my Lisp on Haskell as opposed to learning idiomatic Haskell, the coding/problem solving approach that goes with the language.)
I'm not looking for somebody to just give me the solution (this is not a can i haz the codez?). I'm looking more, but much more for an explanation of what I'm missing in my Haskell program. Where is the bug, or am I missing a very specific Haskell idiosyncratic evaluation/pattern matching thing? Thanks.
You have a typo
uber_bound = 40000000
when it should be
uber_bound = 4000000
Just for reference, a more idiomatic solution would be to generate a list of all the Fibonacci numbers (lazy evaluation is really useful for this), and then use takeWhile, filter and sum.
This will be more efficient too, since tail recursion is rarely helpful in Haskell (lazy evaluation gets in the way), and since the element of the list are shared (if the list is define appropriately) each Fibonacci number is computed exactly once.
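That idiomatic shape might look like this (a sketch with my own names; it uses a plain recursive generator rather than the well-known zipWith one-liner):

```haskell
-- An infinite, lazily evaluated Fibonacci list; each element is
-- computed once and shared between consumers.
fibs :: [Integer]
fibs = go 0 1
  where
    go a b = a : go b (a + b)

-- Sum the even Fibonacci numbers up to the 4,000,000 bound.
answer :: Integer
answer = sum (filter even (takeWhile (<= 4000000) fibs))
```

Here `answer` evaluates to 4613732, matching the `expected` constant in the question.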
deleted, wasn't supposed to give a spoiler. dbaupp's suggestions are good. There's a well known expression using zipWith but I think it's too clever--there are more straightforward ways.
