Haskell matrix power without using if-else statement

I have the following function to obtain the power of a matrix, where
X^0 = identity matrix,
X^1 = X,
X^2 = X'X,
X^3 = X X'X,
X^4 = X'X X'X, ...
I tried the following function:
import Numeric.Container
import Numeric.LinearAlgebra

mpow :: Field t => Matrix t -> Integer -> Matrix t
mpow x 0 = ident $ cols x
mpow x 1 = x
mpow x n =
    if (mod n 2) == 0
        then multiply (trans x) (mpow x $ n - 1)
        else multiply x (mpow x $ n - 1)
Is it possible to rewrite this function without using the if-else statement?

Yes, you can use guards, but quite often they compile to the same internal representation in Haskell.
import Numeric.Container
import Numeric.LinearAlgebra

mpow :: Field t => Matrix t -> Integer -> Matrix t
mpow x 0 = ident $ cols x
mpow x 1 = x
mpow x n | (mod n 2) == 0 = multiply (trans x) (mpow x $ n - 1)
         | otherwise      = multiply x (mpow x $ n - 1)
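Either version can be exercised the same way; for example (a hypothetical check, assuming hmatrix's (><) matrix builder is in scope via the imports above):
-- Hypothetical usage of mpow; (2><2) builds a 2x2 matrix from a list.
main :: IO ()
main = do
    let m = (2><2) [1, 2, 3, 4] :: Matrix Double
    print (mpow m 3)  -- by the definition above, this is X X'X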

As freyrs mentioned, guards and if statements are essentially equivalent, as both are converted to case expressions when your code is compiled. But you can still get rid of them:
mpow' :: Field t => Matrix t -> Integer -> Matrix t
mpow' x 0 = ident $ cols x
mpow' x 1 = x
mpow' x n = multiply (head (drop n' fs) $ x) (mpow' x $ n - 1)
  where fs = [trans, id]
        n' = fromInteger (mod n 2)
However, this isn't more concise, nor does it better communicate what your function is doing to the reader. So don't do this, unless you really hate conditionals.
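If the goal really is to avoid any conditional construct, one more option is to pattern match on the parity in a local helper. This is only a sketch (the name mpowPar and the helper step are made up here), relying on the same hmatrix functions as above:
-- Sketch: dispatch on n `mod` 2 by pattern matching instead of if/guards.
mpowPar :: Field t => Matrix t -> Integer -> Matrix t
mpowPar x 0 = ident (cols x)
mpowPar x 1 = x
mpowPar x n = step (n `mod` 2)
  where
    step 0 = multiply (trans x) (mpowPar x (n - 1))  -- even step: multiply by X'
    step _ = multiply x (mpowPar x (n - 1))          -- odd step: multiply by X
Like the list-indexing trick, this is mostly a stylistic shuffle; the compiled code still has to branch on the parity somewhere.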

Related

Haskell: How to find the number of integer solutions to equation for use in Sieve of Atkin?

I am currently trying to implement the Sieve of Atkin in Haskell.
In step 3 of the Wikipedia article on the Sieve of Atkin I need to find the number of Integer solutions to several equations.
However, my solution to the first of these equations (4x² + y² = n, with x > 0, y > 0, and n an entry in a list of positive Integers) produces an infinite loop on a query with any n.
This is my code for this part of the problem so far:
eq1 :: Integer -> Integer
eq1 n = eq1_ n []

eq1_ :: Integer -> [(Integer, Integer)] -> Integer
eq1_ n list | (x > 0) && (y > 0) && (n == 4*(x^2) + (y^2)) && (notElem ((x,y)) list) = eq1_ n ([(x, y)] ++ list)
            | otherwise = toInteger (length list)
  where
    x = floor (sqrt (fromIntegral ((n - y^2) `div` 4)))
    y = floor (sqrt (fromIntegral (n - 4*(x^2))))
It is loaded just fine by WinGHCi, but when I query e.g. eq1 0 it just stays in an infinite loop and has to be interrupted before producing an answer. I suspect it goes in a loop between the two assignments of x and y.
How can I prevent this? Is this even possible?
Edit: Realised where the infinite loop must be.
I'm going to start by reformatting your code a tad to make it more readable. Line breaks are helpful! Also, the order of operations can reduce the weight of parentheses. Side note:
f x | e1 && e2 && e3 = e4
can also be written
f x | e1
    , e2
    , e3
    = e4
which may be easier on the eyes.
eq1 :: Integer -> Integer
eq1 n = eq1_ n []

eq1_ :: Integer -> [(Integer, Integer)] -> Integer
eq1_ n list
    | x > 0 &&
      y > 0 &&
      n == 4*x^2 + y^2 &&
      notElem (x,y) list
    = eq1_ n ([(x, y)] ++ list)
    | otherwise
    = toInteger (length list)
  where
    isqrt = floor . sqrt . fromIntegral
    x = isqrt $ (n - y^2) `div` 4
    y = isqrt $ n - 4*(x^2)
Now I can immediately see that the logic is wonky. Given n, you calculate x and y. Then you either stop or call the function recursively. On the recursive call, however, you're guaranteed to stop! So even if you were otherwise right, you'd definitely have a semantic problem, always returning 0 or 1.
But as you've seen, that's not the only problem. You're also defining x in terms of y and y in terms of x. Now there are important situations where such mutual recursion is useful. But when the mutually recursive values are "atomic" things like integers, you're sure to get an infinite loop. Haskell won't solve the equations for you; that's your job!
Here's my suggestion:
Start with a brute force list comprehension solution:
sols n
  = [ (x,y)
    | x <- takeWhile (\p -> 4 * p^2 < n) [1..]
    , y <- takeWhile (\q -> f x q <= n) [1..]
    , f x y == n ]
  where
    f x y = 4*x^2 + y^2
Next, you can use an approximate integer square root to narrow the search space for y:
sols n
  = [ (x,y)
    | x <- takeWhile (\p -> 4 * p^2 < n) [1..]
    , y <- takeWhile
             (\q -> f x q <= n)
             [floor (sqrt (fromIntegral (n - 4*x^2))) ..]
    , f x y == n ]
  where
    f x y = 4*x^2 + y^2
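Since the question ultimately asks for the number of solutions, once sols works the count is just the length of that list (a small sketch reusing sols from above):
-- Sketch: count the positive integer solutions found by the list comprehension.
eq1 :: Integer -> Integer
eq1 n = toInteger (length (sols n))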

Memoization with Monad.Memo for mutual recursion in Haskell

I'm doing some dynamic programming in Haskell with a mutually recursive implementation.
I decided to speed things up using memoization.
Monad.Memo offers the MemoT transformer for exactly that case, but it uses a Map as the internal representation for stored values. While this gave me an order-of-magnitude speed boost, it is still not enough.
The library also supports Array-based and Vector-based implementations as internal storage, but they only work for simple recursion, and I did not find any transformers like MemoT that would let me use them for mutual recursion.
What is the best way to memoize mutual recursion with an efficient vector-based internal representation (if any)?
My next question is about the effect of memoization. I expected my function to take more time during the first run and much less during consecutive runs, but running it in GHCi, the time it takes is the same every time; there is no difference between the first and second run. I measured the time as follows:
timeit $ print $ dynamic (5,5)
With dynamic being my function.
The full implementation is as follows:
import Control.Monad.Memo
import Control.Monad.Identity

type Pos = (Int, Int)

type MemoQ  = MemoT (Int, Int, Int) [Int]
type MemoV  = MemoT (Int, Int, Int) Int
type MemoQV = MemoQ (MemoV Identity)

-- we are moving to (0,0) as we can always shift the world by substituting variables
-- due to symmetry of the cost function it is enough to solve for only positive x and y
dynamic :: Pos -> [Int]
dynamic (x, y) = lastUnique $ map (evalQ x y) [1 ..]
  where lastUnique (x0:x1:xs) | x0 == x1  = x0
                              | otherwise = lastUnique (x1:xs)

evalQ :: Int -> Int -> Int -> [Int]
evalQ x y n = startEvalMemo . startEvalMemoT $ fqmon x y n

fqmon :: Int -> Int -> Int -> MemoQV [Int]
fqmon _ _ 0 = return [0, 0, 0, 0]
fqmon x y n = do
    let pts = neighbours (x, y)
    let v   = for3 memol1 fvmon n
    let c   = cost (x, y)
    let q   = fmap (c +) . uncurry v
    traverse q pts

fvmon :: Int -> Int -> Int -> MemoQV Int
fvmon _ 0 0 = return 0
fvmon 0 x y = return $ cost (x, y)
fvmon n x y | limit     = return 1000000
            | otherwise = liftM minimum $ for3 memol0 fqmon x' y' (n - 1)
  where x'    = abs x
        y'    = abs y
        limit = x' > 25 || y' > 25

cost :: Pos -> Int
cost (x, y) = abs x + abs y

neighbours :: Pos -> [Pos]
neighbours (x, y) = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
Added:
Following @liqui's comment, I tried Memocombinators.
First, here is the non-memoized initial implementation:
type Pos = (Int, Int)

dynamic :: Int -> Int -> [Int]
dynamic x y = lastUnique $ map (fq x y) [1 ..]
  where lastUnique (x0:x1:xs) | x0 == x1  = x0
                              | otherwise = lastUnique (x1:xs)

fq :: Int -> Int -> Int -> [Int]
fq _ _ 0 = [0, 0, 0, 0]  -- Q at step 0 is 0 in all directions
fq x y n = (cost (x, y) +) . (uncurry $ fv n) <$> neighbours (x, y)

fv :: Int -> Int -> Int -> Int
fv _ 0 0 = 0             -- V at (0, 0) is 0 at any step
fv 0 x y = cost (x, y)   -- V at step 0 is the cost
fv n x y = minimum $ fq x y (n - 1)

cost :: Pos -> Int
cost (x, y) = abs x + abs y

neighbours :: Pos -> [Pos]
neighbours (x, y) = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
Then my attempt at memoization (only the changed part):
dynamic :: Int -> Int -> [Int]
dynamic x y = lastUnique $ map (fqmem x y) [1 ..]
  where lastUnique (x0:x1:xs) | x0 == x1  = x0
                              | otherwise = lastUnique (x1:xs)

-- memoizing version of fq
fqmem :: Int -> Int -> Int -> [Int]
fqmem x y n = fqmem' x y n
  where fqmem' = memo3 integral integral integral fq

-- memoizing version of fv
fvmem :: Int -> Int -> Int -> Int
fvmem n x y = fvmem' n x y
  where fvmem' = memo3 integral integral integral fv

fq :: Int -> Int -> Int -> [Int]
fq _ _ 0 = [0, 0, 0, 0]  -- Q at step 0 is 0 in all directions
fq x y n = (cost (x, y) +) . (uncurry $ fvmem n) <$> neighbours (x, y)

fv :: Int -> Int -> Int -> Int
fv _ 0 0 = 0             -- V at (0, 0) is 0 at any step
fv 0 x y = cost (x, y)   -- V at step 0 is the cost
fv n x y = minimum $ fqmem x y (n - 1)
The result is a bit of a paradox: it is 3 times slower than the non-memoized recursive implementation. Memoizing only one function (namely fq) and not touching fv makes it 2 times slower. The more I memoize with Memocombinators, the slower the computation, and again there is no difference between the first and second invocation.
One last question: what is the rationale for choosing between Monad.Memo, Memocombinators, or MemoTrie? The comments suggest using the last two. In what situations is Monad.Memo a better choice?
Finally, MemoTrie did the job.
On the first invocation it works as fast as (possibly much faster than) Monad.Memo, and on consecutive invocations it takes virtually no time!
And the change in code is trivial compared to the monadic approach:
import Data.MemoTrie

type Pos = (Int, Int)

-- we are moving to (0,0) as we can always shift the world by substituting variables
-- due to symmetry it is enough to solve for only positive x and y
dynamic :: Int -> Int -> [Int]
dynamic x y = lastUnique $ map (fqmem x y) [1 ..]
  where lastUnique (x0:x1:xs) | x0 == x1  = x0
                              | otherwise = lastUnique (x1:xs)

fqmem = memo3 fq
fvmem = memo3 fv

fq :: Int -> Int -> Int -> [Int]
fq _ _ 0 = [0, 0, 0, 0]  -- Q at step 0 is 0 in all directions
fq x y n = (cost (x, y) +) . (uncurry $ fvmem n) <$> neighbours (x, y)

fv :: Int -> Int -> Int -> Int
fv _ 0 0 = 0             -- V at (0, 0) is 0 at any step
fv 0 x y = cost (x, y)   -- V at step 0 is the cost
fv n x y = minimum $ fqmem x y (n - 1)

cost :: Pos -> Int
cost (x, y) = abs x + abs y

neighbours :: Pos -> [Pos]
neighbours (x, y) = [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]
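For what it's worth, the observation about consecutive invocations can be checked with something along these lines (a sketch only; it assumes the timeit package's System.TimeIt.timeIt and the MemoTrie definitions above in the same module):
import System.TimeIt (timeIt)

-- Sketch: time two identical calls; with the top-level memo3 tries above,
-- the second call should be answered almost entirely from the cache.
main :: IO ()
main = do
    timeIt $ print (dynamic 5 5)
    timeIt $ print (dynamic 5 5)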
Still, I would like to know: what are the benefits of using Monad.Memo, and what are its use cases? Or does it become obsolete with MemoTrie?
Why did Memocombinators not work for me?
What is the rule of thumb for choosing between Monad.Memo, Memocombinators, and MemoTrie?

Least common multiple without using gcd

With gcd it's fairly easy, but I do not understand how to tie all the functions together to make it happen without it.
kgv :: Int -> Int -> Int
kgv x y = abs ((x `quot` (gcd x y)) * y)
I have this function to find the prime factors, which works (prime_factors), and I am working on a function that takes the maximum number from one list and checks whether it is in the other list (comp):
prime_factors :: Int -> [Int]
prime_factors 1 = []
prime_factors n
    | factors == [] = [n]
    | otherwise     = factors ++ prime_factors (n `div` (head factors))
  where factors = take 1 $ filter (\x -> (n `mod` x) == 0) [2 .. n-1]

comp :: [Int] -> Int
comp (ys) (x:xs)
    | maximum prime_factors xs elem prime_factors ys == x
    | otherwise tail x

kgv :: Int -> Int -> Int
kgv x y = abs ((x `quot` (comp x y)) * y)
Here's an absurdly simple and obscenely inefficient solution:
lcm m n = head [x | x <- [1..], x `rem` m == 0, x `rem` n == 0]
Of course, this relies on two different notions of "least" coinciding under the circumstances, which they do. A fully naive solution doesn't seem possible.
here is the (very) naive algorithm I was talking about:
kgv :: (Ord a, Num a) => a -> a -> a
kgv x y = find x y
  where find i j | i == j = i
                 | i < j  = find (i+x) j
                 | i > j  = find i (j+y)
it's basically what a school-child would do ;)
Caution: I ignored negative numbers and 0; you'll probably have to handle those.
Perhaps another easy way is:
import Data.List (intersect)

lcm m n = head $ intersect (series m n) (series n m)
  where series a b = take a $ map (*b) [1..]
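As a quick sanity check of that idea (a hypothetical snippet; the function is renamed lcm' here to avoid clashing with Prelude's lcm):
import Data.List (intersect)

-- Same definition as above, under a non-clashing name.
lcm' :: Int -> Int -> Int
lcm' m n = head $ intersect (series m n) (series n m)
  where series a b = take a $ map (*b) [1..]

main :: IO ()
main = print (map (uncurry lcm') [(4, 6), (21, 6), (1, 5)])  -- expect [12,42,5]
This always finds an answer because series m n stops at m*n and series n m stops at n*m, so their intersection contains at least the product of the two numbers.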
I figured it out myself mostly. Thanks for the ideas and pointers.
ggt n m | n > m     = maximum [t | t <- [1 .. m], gt n m t]
        | otherwise = maximum [t | t <- [1 .. n], gt n m t]

gt n m c = t n c && t m c

t n c | n >= c    = (mod n c == 0)
      | otherwise = False

kgv :: Int -> Int -> Int
kgv x y | x == 0    = 0
        | y == 0    = 0
        | otherwise = abs ((x `quot` (ggt x y)) * y)

Two parameter memoization in Haskell

I'm trying to memoize the following function:
gridwalk x y
    | x == 0    = 1
    | y == 0    = 1
    | otherwise = (gridwalk (x - 1) y) + (gridwalk x (y - 1))
Looking at this I came up with the following solution:
gw :: (Int -> Int -> Int) -> Int -> Int -> Int
gw f x y
    | x == 0    = 1
    | y == 0    = 1
    | otherwise = (f (x - 1) y) + (f x (y - 1))

gwlist :: [Int]
gwlist = map (\i -> gw fastgw (i `mod` 20) (i `div` 20)) [0..]

fastgw :: Int -> Int -> Int
fastgw x y = gwlist !! (x + y * 20)
Which I then can call like this:
gw fastgw 20 20
Is there an easier, more concise and general way (notice how I had to hardcode the max grid dimensions in the gwlist function in order to convert from 2D to 1D space so I can access the memoizing list) to memoize functions with multiple parameters in Haskell?
You can use a list of lists to memoize the function result for both parameters:
memo :: (Int -> Int -> a) -> [[a]]
memo f = map (\x -> map (f x) [0..]) [0..]
gw :: Int -> Int -> Int
gw 0 _ = 1
gw _ 0 = 1
gw x y = (fastgw (x - 1) y) + (fastgw x (y - 1))
gwstore :: [[Int]]
gwstore = memo gw
fastgw :: Int -> Int -> Int
fastgw x y = gwstore !! x !! y
Use the data-memocombinators package from Hackage. It provides easy-to-use memoization techniques and a brief way to apply them:
import Data.MemoCombinators (memo2, integral)

gridwalk = memo2 integral integral gridwalk' where
    gridwalk' x y
        | x == 0    = 1
        | y == 0    = 1
        | otherwise = (gridwalk (x - 1) y) + (gridwalk x (y - 1))
Here is a version using Data.MemoTrie from the MemoTrie package to memoize the function:
import Data.MemoTrie (memo2)

gridwalk :: Int -> Int -> Int
gridwalk = memo2 gw
  where
    gw 0 _ = 1
    gw _ 0 = 1
    gw x y = gridwalk (x - 1) y + gridwalk x (y - 1)
If you want maximum generality, you can memoize a memoizing function.
memo :: (Num a, Enum a) => (a -> b) -> [b]
memo f = map f (enumFrom 0)
gwvals = fmap memo (memo gw)
fastgw :: Int -> Int -> Int
fastgw x y = gwvals !! x !! y
This technique will work with functions that have any number of arguments.
Edit: thanks to Philip K. for pointing out a bug in the original code. Originally memo had a "Bounded" constraint instead of "Num" and began the enumeration at minBound, which would only be valid for natural numbers.
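To illustrate the claim that this layering extends to more arguments, here is a sketch for a three-argument function (f3 is a made-up stand-in, not part of the question; memo is the helper defined above):
-- Sketch: one extra fmap/memo layer per extra argument.
f3 :: Int -> Int -> Int -> Int
f3 x y z = x + y * z  -- hypothetical example function

f3vals :: [[[Int]]]
f3vals = fmap (fmap memo) (fmap memo (memo f3))

fastf3 :: Int -> Int -> Int -> Int
fastf3 x y z = f3vals !! x !! y !! z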
Lists aren't a good data structure for memoizing, though, because they have linear lookup complexity. You might be better off with a Map or IntMap. Or look on Hackage.
Note that this particular code does rely on laziness, so if you wanted to switch to using a Map you would need to take a bounded number of elements from the list, as in:
import qualified Data.Map as M
import Data.Maybe (fromMaybe)

gwByMap :: Int -> Int -> Int -> Int -> Int
gwByMap maxX maxY x y = fromMaybe (gw x y) $ M.lookup (x,y) memomap
  where
    memomap = M.fromList $ concat [ [ ((x',y'), z) | (y', z) <- zip [0..maxY] ys ]
                                  | (x', ys) <- zip [0..maxX] gwvals ]

fastgw2 :: Int -> Int -> Int
fastgw2 = gwByMap 20 20
I think ghc may be stupid about sharing in this case, you may need to lift out the x and y parameters, like this:
gwByMap maxX maxY = \x y -> fromMaybe (gw x y) $ M.lookup (x,y) memomap

Use QuickCheck by generating primes

Background
For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind RSA cryptography.
Choose two distinct primes, p and q.
Let N = p*q
e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding)
d is the modular inverse of e modulo (p-1)(q-1)
For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N
In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it.
(The property is also trivially true for x = 1, a case which is its own encryption)
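As a concrete illustration of the property with tiny hand-picked numbers (these are not from the code below, just an example):
-- With p = 3, q = 11: N = 33 and (p-1)*(q-1) = 20.  Taking e = 3 (coprime
-- to 20), its modular inverse is d = 7, since 3*7 = 21 = 1 (mod 20).
-- Encoding and then decoding x = 2 gets the message back:
rsaToyCheck :: Bool
rsaToyCheck = decoded == 2
  where
    n       = 33 :: Integer
    encoded = (2 ^ 3) `mod` n        -- 8
    decoded = (encoded ^ 7) `mod` n  -- 2097152 `mod` 33 = 2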
Code
Here are the methods I have coded up so far:
import Test.QuickCheck

-- modular exponentiation
modExp :: Integral a => a -> a -> a -> a
modExp y z n = modExp' (y `mod` n) z `mod` n
  where modExp' y z | z == 0 = 1
                    | even z = modExp (y*y) (z `div` 2) n
                    | odd z  = (modExp (y*y) (z `div` 2) n) * y

-- relatively prime
rPrime :: Integral a => a -> a -> Bool
rPrime a b = gcd a b == 1

-- multiplicative inverse (modular)
mInverse :: Integral a => a -> a -> a
mInverse 1 _ = 1
mInverse x y = (n * y + 1) `div` x
  where n = x - mInverse (y `mod` x) x

-- just a quick way to test for primality
n `divides` x = x `mod` n == 0
primes = 2 : filter isPrime [3..]
isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

-- the property
prop_rsa (p,q,x) = isPrime p &&
                   isPrime q &&
                   p /= q    &&
                   x > 1     &&
                   x < n     &&
                   rPrime e t ==>
                   x == (x `powModN` e) `powModN` d
  where e = 3
        n = p*q
        t = (p-1)*(q-1)
        d = mInverse e t
        a `powModN` b = modExp a b n
(Thanks, google and random blog, for the implementation of modular multiplicative inverse)
Question
The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang.
So I've poked around the QuickCheck manual a bit, and it says:
Properties may take the form
forAll <generator> $ \<pattern> -> <property>
How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions?
Any other general advice (especially regarding QuickCheck) is welcome.
Here's one way to make a QuickCheck-compatible prime-number generator (stealing a Sieve of Eratosthenes implementation from http://en.literateprograms.org/Sieve_of_Eratosthenes_(Haskell)):
import Test.QuickCheck

newtype Prime = Prime Int deriving Show

primes = sieve [2..]
  where
    sieve (p:xs) = Prime p : sieve [x | x <- xs, x `mod` p > 0]

instance Arbitrary Prime where
    arbitrary = do i <- arbitrary
                   return $ primes !! (abs i)
It can be used in QuickCheck like so:
prop_primes_dont_divide (Prime x) (Prime y) = x == y || x `mod` y > 0
For your use, you'd replace p and q with (Prime p) and (Prime q) in your property.
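For instance, the question's property might be adapted to the newtype along these lines (a sketch only; it reuses modExp, mInverse, and rPrime from the question and still relies on ==> to discard bad cases):
-- Sketch: let QuickCheck generate the primes via the Arbitrary instance above.
prop_rsa :: Prime -> Prime -> Int -> Property
prop_rsa (Prime p) (Prime q) x =
    p /= q && x > 1 && x < n && rPrime e t ==>
        x == (x `powModN` e) `powModN` d
  where
    e = 3
    n = p * q
    t = (p - 1) * (q - 1)
    d = mInverse e t
    a `powModN` b = modExp a b n
The next answer goes further and also constrains how x is generated with a dedicated generator, which avoids discarding most test cases.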
OK so here's what I did.
Top of file
{-# LANGUAGE NoMonomorphismRestriction #-}
import Test.QuickCheck
import Control.Applicative
All code as given in the question, except for prop_rsa. That was (obviously) heavily modified:
prop_rsa = forAll primePair $ \(p, q) ->
    let n = p*q
    in forAll (genUnder n) $ \x ->
        let e = 3
            t = (p-1)*(q-1)
            d = mInverse e t
            a `powModN` b = modExp a b n
        in p /= q &&
           rPrime e t ==>
           x == (x `powModN` e) `powModN` d
The type for primePair is Gen (Int, Int), and the type for genUnder is Int -> Gen Int. I'm not exactly sure what the magic is behind forAll but I'm pretty sure this is correct. I've done some ad-hoc adjustments to 1) make sure it fails if I mess up the conditions and 2) make sure the nested forAll is varying the value of x across test cases.
So here's how to write those generators. Once I realized that <generator> in the documentation just meant something of type Gen a, it was cake.
genNonzero = (\x -> if x == 0 then 1 else x) `fmap` arbitrary
genUnder :: Int -> Gen Int
genUnder n = ((`mod` n) . abs) `fmap` genNonzero
genSmallPrime = ((\x -> (primes !! (x `mod` 2500))) . abs) `fmap` arbitrary
primePair :: Gen (Int, Int)
primePair = (,) <$> genSmallPrime <*> genSmallPrime
primePair took some trial and error for me to get right; I knew that some combinators like that should work, but I'm still not as familiar with fmap, <$> and <*> as I'd like to be. I restricted the computation to only select from among the first 2500 primes; otherwise it apparently wanted to pick some really big ones that took forever to generate.
Random thing to note
Thanks to laziness, d = mInverse e t isn't computed unless the conditions are met. Which is good, because it's undefined when the condition rPrime e t is false. In English, an integer a only has a multiplicative inverse (mod b) when a and b are relatively prime.
