I've been trying to learn about static analysis of applicative functors. Many sources say that an advantage of using them over monads is that they are more amenable to static analysis.
However, the only example I can find of actually performing static analysis is too complicated for me to understand. Are there any simpler examples of this?
Specifically, I want to know whether I can perform static analysis on recursive applications. For example, something like:
y = f <$> x <*> y <*> z
When analyzing the above code, is it possible to detect that it is recursive on y? Or does referential transparency still prevent this from being possible?
Applicative functors allow static analysis at runtime. This is better explained by a simpler example.
Imagine you want to calculate a value, but also want to track what dependencies that value has. E.g. you may use IO a to calculate the value, and keep a list of Strings for the dependencies:
data Input a = Input { dependencies :: [String], runInput :: IO a }
Now we can easily make this an instance of Functor and Applicative. The Functor instance is trivial: since it doesn't introduce any new dependencies, you just need to map over the runInput value:
instance Functor Input where
    fmap f (Input deps runInput) = Input deps (fmap f runInput)
The Applicative instance is more complicated. The pure function just returns a value with no dependencies. The <*> combinator concatenates the two lists of dependencies (removing duplicates) and combines the two actions:
instance Applicative Input where
    pure = Input [] . return
    (Input deps1 getF) <*> (Input deps2 runInput) =
        Input (nub $ deps1 ++ deps2) (getF <*> runInput)  -- nub comes from Data.List
With that, we can also make Input a an instance of Num whenever a is:
instance (Num a) => Num (Input a) where
    (+) = liftA2 (+)
    (*) = liftA2 (*)
    abs = liftA abs
    signum = liftA signum
    negate = liftA negate  -- added so the Num instance is complete
    fromInteger = pure . fromInteger
Next, let's make a couple of Inputs:
getTime :: Input UTCTime
getTime = Input { dependencies = ["Time"], runInput = getCurrentTime }

-- | Ideally this would fetch it from somewhere
stockPriceOf :: String -> Input Double
stockPriceOf stock = Input { dependencies = ["Stock ( " ++ stock ++ " )"], runInput = action }
  where
    action = case stock of
        "Apple"  -> return 500
        "Toyota" -> return 20
        -- (other stocks are deliberately not handled in this toy example)
Finally, let's make a value that uses some inputs:
portfolioValue :: Input Double
portfolioValue = stockPriceOf "Apple" * 10 + stockPriceOf "Toyota" * 20
This is a pretty cool value. Firstly, we can find the dependencies of portfolioValue as a pure value:
> :t dependencies portfolioValue
dependencies portfolioValue :: [String]
> dependencies portfolioValue
["Stock ( Apple )","Stock ( Toyota )"]
That is the static analysis that Applicative allows - we know the dependencies without having to execute the action.
We can still get the value of the action though:
> runInput portfolioValue >>= print
5400.0
Now, why can't we do the same with Monad? The reason is that Monad can express choice: one action can determine what the next action will be.
Imagine there was a Monad interface for Input, and you had the following code:
mostPopularStock :: Input String
mostPopularStock = Input { dependencies = ["Popular Stock"], runInput = readFromWebMostPopularStock }

newPortfolio = do
    stock <- mostPopularStock
    stockPriceOf "Apple" * 40 + stockPriceOf stock * 10
Now, how can we calculate the dependencies of newPortfolio? It turns out we can't do it without using IO! It will depend on the most popular stock, and the only way to know that is to run the IO action. Therefore it isn't possible to statically track dependencies when the type uses Monad, but it is completely possible with just Applicative. This is a good example of why less power often means more usefulness: as Applicative doesn't allow choice, dependencies can be calculated statically.
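To make the failure concrete, here is a sketch of what a bind for Input would have to look like (bindInput is a hypothetical helper, not part of the code above); the second stage's dependencies are simply not available until the IO action has run:
bindInput :: Input a -> (a -> Input b) -> Input b
bindInput (Input deps action) f =
    Input deps $ do    -- the second stage's dependencies are missing here!
        x <- action    -- the only way to obtain x is to run the IO action
        runInput (f x) -- so the dependencies of (f x) are trapped inside IO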
Edit: Regarding the question of checking whether y is recursive on itself: such a check is possible with applicative functors, provided you are willing to annotate your function names.
data TrackedComp a = TrackedComp { deps :: [String], recursive :: Bool, run :: a }

instance (Show a) => Show (TrackedComp a) where
    show comp = "TrackedComp " ++ show (run comp)

instance Functor TrackedComp where
    fmap f (TrackedComp deps rec1 run) = TrackedComp deps rec1 (f run)

instance Applicative TrackedComp where
    pure = TrackedComp [] False
    (TrackedComp deps1 rec1 getF) <*> (TrackedComp deps2 rec2 value) =
        TrackedComp (combine deps1 deps2) (rec1 || rec2) (getF value)
-- | Interleave two lists: combine [1,1,1] [2,2,2] = [1,2,1,2,1,2].
-- Interleaving (rather than (++)) matters here: the dependency list of a
-- recursive computation is infinite, and (++) would never reach the
-- second list's elements.
combine :: [a] -> [a] -> [a]
combine x [] = x
combine [] y = y
combine (x:xs) (y:ys) = x : y : combine xs ys
instance (Num a) => Num (TrackedComp a) where
    (+) = liftA2 (+)
    (*) = liftA2 (*)
    abs = liftA abs
    signum = liftA signum
    negate = liftA negate  -- added so the Num instance is complete
    fromInteger = pure . fromInteger
newComp :: String -> TrackedComp a -> TrackedComp a
newComp name tracked = TrackedComp (name : deps tracked) isRecursive (run tracked)
  where
    isRecursive = (name `elem` deps tracked) || recursive tracked

y :: TrackedComp [Int]
y = newComp "y" $ liftA2 (:) x z

x :: TrackedComp Int
x = newComp "x" 38

z :: TrackedComp [Int]
z = newComp "z" $ liftA2 (:) 3 y
> recursive x
False
> recursive y
True
> take 10 $ run y
[38,3,38,3,38,3,38,3,38,3]
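As an aside, the dependency list of y is itself infinite (it cycles through the mutual recursion with z), but laziness still lets us inspect a prefix of it:
> take 6 $ deps y
["y","x","z","y","x","z"]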
Yes, applicative functors allow more analysis than monads. But no, you can't observe the recursion. I've written a paper about parsing which explains the problem in detail:
https://lirias.kuleuven.be/bitstream/123456789/352570/1/gc-jfp.pdf
The paper then discusses an alternative encoding of recursion which does allow analysis and has some other advantages and some downsides. Other related work is:
https://lirias.kuleuven.be/bitstream/123456789/376843/1/p97-devriese.pdf
And more related work can be found in the related work sections of those papers...
Related question:
I have a random number generator
rand :: Int -> Int -> IO Int
rand low high = getStdRandom (randomR (low,high))
and a helper function to remove an element from a list:
removeItem :: Eq a => a -> [a] -> [a]
removeItem _ [] = []
removeItem x (y:ys)
    | x == y    = removeItem x ys
    | otherwise = y : removeItem x ys
I want to shuffle a given list by randomly picking an item from the list, removing it and adding it to the front of the list. I tried
shuffleList :: [a] -> IO [a]
shuffleList [] = []
shuffleList l = do
    y <- rand 0 (length l)
    return ( y : (shuffleList (removeItem y l)) )
But I can't get it to work. I get:
hw05.hs:25:33: error:
* Couldn't match expected type `[Int]' with actual type `IO [Int]'
* In the second argument of `(:)', namely
....
Any ideas?
Thanks!
Since shuffleList :: [a] -> IO [a], we have shuffleList (xs :: [a]) :: IO [a].
Obviously, we can't cons (:) :: a -> [a] -> [a] an a element onto an IO [a] value; instead we want to cons it onto the list [a] whose computation that IO [a] value describes:
do
    y <- rand 0 (length l)
    -- return ( y : (shuffleList (removeItem y l) ) )
    shuffled <- shuffleList (removeItem y l)
    return (y : shuffled)
In do notation, values to the right of <- have types M a, M b, etc., for some monad M (here, IO), and values to the left of <- have the corresponding types a, b, etc.
The x :: a in x <- mx gets bound to the pure value of type a produced by the M-type computation that mx :: M a denotes. That binding happens when the computation is actually performed, as part of the combined computation represented by the whole do block.
And if, e.g., the next line in that do block is y <- foo x, it means that a pure function foo :: a -> M b is applied to x, yielding a value of type M b; that value denotes an M-type computation which then runs and produces a pure value of type b, to which the name y is bound.
The essence of Monad is thus this slicing of the pure in between the (potentially) impure: two timelines going on, one of pure calculations and one of potentially impure computations, with the pure world safely separated and isolated from the impurities of the real world. Seen from the other side, the pure code is being run by the real, impure code interacting with the real world (in case M is IO). Which is what computer programs must do, after all.
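A small illustration of that slicing (a hypothetical example, just for this discussion): the names line and n hold pure values that live in between the impure steps that produce and consume them:
example :: IO Int
example = do
    line <- getLine        -- impure step; line :: String is its pure result
    let n = length line    -- purely functional calculation, no IO involved
    print n                -- impure step consuming the pure value
    return (n * 2)         -- inject the pure result back into IO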
Your removeItem is wrong. You should pick and remove items positionally, i.e. by index, not by value; and in any case not remove more than one item after having picked one item from the list.
The y in y <- rand 0 (length l) is indeed an index. Treat it as such. Rename it to i, too, as a simple mnemonic.
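Putting those two fixes together, a sketch might look like this (removeAt is a hypothetical helper, and note the adjusted random range):
-- remove the element at index i, returning it alongside the remainder
removeAt :: Int -> [a] -> (a, [a])
removeAt i xs = (x, before ++ after)
  where (before, x : after) = splitAt i xs

shuffleList :: [a] -> IO [a]
shuffleList [] = return []
shuffleList l = do
    i <- rand 0 (length l - 1)  -- an index into l, not a value of l
    let (picked, rest) = removeAt i l
    fmap (picked :) (shuffleList rest)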
Generally, with Haskell it works better to maximize the amount of functional code at the expense of non-functional (IO or randomness-related) code.
In your situation, your "maximum" functional component is not removeItem but rather a version of shuffleList that takes the input list and (as mentioned by Will Ness) a deterministic integer position. The list function splitAt :: Int -> [a] -> ([a], [a]) can come in handy here. Like this:
funcShuffleList :: Int -> [a] -> [a]
funcShuffleList _ [] = []
funcShuffleList pos ls =
    if (pos <= 0) || (length (take (pos+1) ls) < (pos+1))
        then ls  -- pos is zero or out of bounds, so leave list unchanged
        else let (left, right) = splitAt pos ls
             in head right : (left ++ tail right)
Testing:
λ>
λ> funcShuffleList 4 [0,1,2,3,4,5,6,7,8,9]
[4,0,1,2,3,5,6,7,8,9]
λ>
λ> funcShuffleList 5 "#ABCDEFGH"
"E#ABCDFGH"
λ>
Once you've got this, you can introduce randomness concerns in simpler fashion. And you do not need to involve IO explicitly, as any randomness-friendly monad will do:
shuffleList :: MonadRandom mr => [a] -> mr [a]
shuffleList [] = return []
shuffleList ls = do
    let maxPos = length ls - 1
    pos <- getRandomR (0, maxPos)
    return (funcShuffleList pos ls)
... IO being just one instance of MonadRandom.
You can run the code using the default IO-hosted random number generator:
main = do
    let inpList = [0,1,2,3,4,5,6,7,8] :: [Integer]
    putStrLn $ "inpList = " ++ show inpList
    -- mr automatically instantiated to IO:
    outList1 <- shuffleList inpList
    putStrLn $ "outList1 = " ++ show outList1
    outList2 <- shuffleList outList1
    putStrLn $ "outList2 = " ++ show outList2
Program output:
$ pickShuffle
inpList = [0,1,2,3,4,5,6,7,8]
outList1 = [6,0,1,2,3,4,5,7,8]
outList2 = [8,6,0,1,2,3,4,5,7]
$
$ pickShuffle
inpList = [0,1,2,3,4,5,6,7,8]
outList1 = [4,0,1,2,3,5,6,7,8]
outList2 = [2,4,0,1,3,5,6,7,8]
$
The output is not reproducible here, because the default generator is seeded by its launch time in nanoseconds.
If what you need is a full random permutation, you could have a look here and there - Knuth a.k.a. Fisher-Yates algorithm.
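For completeness, here is a minimal sketch of that idea in the same MonadRandom style as above; fyShuffle is a hypothetical name, and this is the pick-and-remove formulation rather than the in-place array version:
fyShuffle :: MonadRandom mr => [a] -> mr [a]
fyShuffle [] = return []
fyShuffle ls = do
    i <- getRandomR (0, length ls - 1)
    let (left, x : right) = splitAt i ls  -- x is the randomly picked element
    rest <- fyShuffle (left ++ right)
    return (x : rest)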
The do notation allows us to express monadic code without overwhelming nestings, so that
main = getLine >>= \ a ->
       getLine >>= \ b ->
       putStrLn (a ++ b)
can be expressed as
main = do
    a <- getLine
    b <- getLine
    putStrLn (a ++ b)
Suppose, though, the syntax allows ... #expression ... to stand for do { x <- expression; return (... x ...) }. For example, foo = f a #(b 1) c would be desugared as: foo = do { x <- b 1; return (f a x c) }. The code above could, then, be expressed as:
main = let a = #getLine in
       let b = #getLine in
       putStrLn (a ++ b)
Which would be desugared as:
main = do
    x <- getLine
    let a = x in
        return (do
            x' <- getLine
            let b = x' in
                return (putStrLn (a ++ b)))
That is equivalent. This syntax is appealing to me because it seems to offer the same functionality as the do-notation, while also allowing some shorter expressions such as:
main = putStrLn (#(getLine) ++ #(getLine))
So, I wonder if there is anything defective with this proposed syntax, or if it is indeed complete and equivalent to the do-notation.
putStrLn is already String -> IO (), so your desugaring ... return (... return (putStrLn (a ++ b))) ends up having type IO (IO (IO ())), which is likely not what you wanted: running this program won't print anything!
Speaking more generally, your notation can't express any do-block which doesn't end in return. [See Derek Elkins' comment.]
I don't believe your notation can express join, which can be expressed with do without any additional functions:
join :: Monad m => m (m a) -> m a
join mx = do { x <- mx; x }
However, you can express fmap constrained to Monad:
fmap' :: Monad m => (a -> b) -> m a -> m b
fmap' f mx = f #mx
and >>= (and thus everything else) can be expressed using fmap' and join. So adding join would make your notation complete, but still not convenient in many cases, because you end up needing a lot of joins.
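For the record, this is the standard construction, using the fmap' defined above:
bind :: Monad m => m a -> (a -> m b) -> m b
bind mx f = join (fmap' f mx)  -- fmap' f mx :: m (m b), collapsed by join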
However, if you drop return from the translation, you get something quite similar to Idris' bang notation:
In many cases, using do-notation can make programs unnecessarily verbose, particularly in cases such as m_add above where the value bound is used once, immediately. In these cases, we can use a shorthand version, as follows:
m_add : Maybe Int -> Maybe Int -> Maybe Int
m_add x y = pure (!x + !y)
The notation !expr means that the expression expr should be evaluated and then implicitly bound. Conceptually, we can think of ! as being a prefix function with the following type:
(!) : m a -> a
Note, however, that it is not really a function, merely syntax! In practice, a subexpression !expr will lift expr as high as possible within its current scope, bind it to a fresh name x, and replace !expr with x. Expressions are lifted depth first, left to right. In practice, !-notation allows us to program in a more direct style, while still giving a notational clue as to which expressions are monadic.
For example, the expression:
let y = 42 in f !(g !(print y) !x)
is lifted to:
let y = 42 in do y' <- print y
                 x' <- x
                 g' <- g y' x'
                 f g'
Adding it to GHC was discussed, but rejected (so far). Unfortunately, I can't find the threads discussing it.
How about this (a block where a value bound early on is needed by two later actions):
do a <- something
   b <- somethingElse a
   somethingFinal a b
For reference, my code:
import Control.Monad.Par
makeGridx:: (Enum a,Num a)=>a->a->a->[a]
makeGridx start end h = [start,(start+h)..end]
makeGridt:: (Enum a, Num a)=>a->a->a->[a]
makeGridt start end h = [start,(start+h)..end]
generateBaseLine:: (Eq a,Num a)=>(a->a)-> [a] -> [(a,a,a)]
generateBaseLine f (x:xs) =
    if null xs
        then [(x,0,0)]
        else if x == 0
            then (x,0,0) : generateBaseLine f xs
            else (x,0,f x) : generateBaseLine f xs
--fdm :: (Enum a,Num a) =>a->a->a->a->a->a->a->(a->a)->[(a,a,a)]
--fdm alpha startt endt startx endx dx dt bbFunction = start alpha (makeGridx startx endx dx) (makeGridt startt endt dt) (generateBaseLine bbFunction (makeGridx startx endx dx)) dx dt
--start:: Num a=>a->[a]->[a]->[(a,a,a)]->a->a->[(a,a,a)]
--start alpha (x:xs) (t:ts) (phi:phis) dx dt = (startPar alpha (x:xs) (ts) (phi:phis) dx dt [] [])
startPar:: Num a =>a->[a]->[a]->[(a,a,a)]->a->a->[(a,a,a)]
startPar alpha (x:xs) (t:ts) (phi1:(ph2:(ph3:phis))) dx dt =
    (phi1:(ph2:(ph3:phis))) ++ (buildPhiListIds alpha (x:xs) (t:ts) (phi1:(ph2:(ph3:phis))) dx dt [] [])
buildPhiListIds:: Num a=> a->[a]->[a]->[(a,a,a)]->a->a->[Par (IVar (a, a, a))]->[a]->[(a,a,a)]
buildPhiListIds alpha (x:xs) (t:ts) (phi1:(ph2:(ph3:phis))) dx dt phiIds newX = do
    one <- third phi1
    two <- third ph2
    three <- third ph3
    newSolId <- spawn (return (newPhi (x:xs) t (one,two,three,dx,dt,alpha)))
    buildPhiListIds alpha xs (t:ts) (ph2:(ph3:phis)) dx dt (phiIds ++ [newSolId]) (newX ++ [x])
buildPhiListIds alpha (0:xs) (t:ts) (phi1:(ph2:(ph3:phis))) dx dt phiIds newX = do
    newSolId <- spawn (return (newPhi (0:xs) t (1,2,3,4,5,6)))
    buildPhiListIds alpha xs (t:ts) (phi1:(ph2:(ph3:phis))) dx dt (phiIds ++ [newSolId]) (newX ++ [0])
buildPhiListIds alpha [] (t:ts) (phi1:(ph2:(ph3:phis))) dx dt phiIds newX = do
    (getSolutions (getTuples (getSolutions phiIds))) ++ (buildPhiListIds alpha newX ts (getSolutions (getTuples (getSolutions phiIds))) dx dt [] [])
buildPhiListIds _ _ [] _ _ _ _ _ = []
getTuples::[IVar a]->[Par a]
getTuples (x:xs) = (get x) : (getSolutions xs)
getTuples [] = []
getSolutions:: [Par a]->[a]
getSolutions (x:xs) = (runPar x):(getTuples xs)
getSolutions [] = []
third (_,_,x)=x
ex f g x = runPar $ do
    fx <- spawn (return (f x))
    gx <- spawn (return (g x))
    a <- get fx
    b <- get gx
    return (a,b)
newPhi:: (Eq a,Fractional a)=> [a]->a->(a,a,a,a,a,a)->(a,a,a)
newPhi (0:xs) t (phiL,phiC,phiR,dx,dt,alpha)= (0,t,0)
newPhi (x:[]) t (phiL,phiC,phiR,dx,dt,alpha)= (x,t,0)
newPhi (x:xs) t (phiL,phiC,phiR,dx,dt,alpha)= (x,t,(phiC + (alpha * (dt/(dx^2)))*(phiR -(2*phiC) + phiL)))
I get a bunch of errors, but one very much perplexes me.
heateqpar.hs:28:156:
Couldn't match type `Par' with `[]'
Expected type: [IVar (a1, a1, a1)]
Actual type: Par (IVar (a1, a1, a1))
In a stmt of a 'do' block:
newSolId <- spawn
(return (newPhi (x : xs) t (one, two, three, dx, dt, alpha))) ::
Par (IVar (a, a, a))
In the expression:
do { one <- third phi1;
two <- third ph2;
three <- third ph3;
newSolId <- spawn
(return (newPhi (x : xs) t (one, two, three, dx, dt, alpha))) ::
Par (IVar (a, a, a));
.... }
In an equation for `buildPhiListIds':
buildPhiListIds
alpha
(x : xs)
(t : ts)
(phi1 : (ph2 : (ph3 : phis)))
dx
dt
phiIds
newX
= do { one <- third phi1;
two <- third ph2;
three <- third ph3;
.... }
The actual type here is what I want it to be, but for some reason it is trying to enforce a type that isn't the return type of spawn. It seems like my type declaration is what's enforcing it, but the type I declared is as follows:
buildPhiListIds:: Num a=> a->[a]->[a]->[(a,a,a)]->a->a->[Par (IVar (a, a, a))]->[a]->[(a,a,a)]
I see no type of [IVar (a1, a1, a1)] anywhere in it, which is really confusing me. If someone could set me on the right road, it would be very much appreciated.
In a do expression, every monadic action must belong to the same monad. The return type of buildPhiListIds is [something], so the result of do has type [something]. Therefore, all your actions should be in the list monad, not in the Par monad. Now look at spawn again:
spawn :: NFData a => Par a -> Par (IVar a)
Compare what I mentioned above with your error: "Couldn't match type `Par' with `[]'". Aha! It expects a list, but you're using something of wrong type (Par)!
Now, extrapolating from your previous questions I suppose that you're new to Haskell and the concept of monads. There are many tutorials about them, including chapters in RWH or in LYAH, so I won't provide one in this answer (they're actually rather easy, don't be intimidated by the number of tutorials). Either way, your current usage is completely off.
That being said, you should refactor buildPhiListIds to have the following type:
buildPhiListIds:: Num a => ... -> Par [(a,a,a)]
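To illustrate the shape with a minimal sketch (buildRow is a simplified, hypothetical stand-in; the real function would carry all the grid parameters): stay inside Par, spawn the tasks, and collect all the results before leaving the monad:
buildRow :: (NFData a, Num a) => [a] -> Par [a]
buildRow xs = do
    ivars <- mapM (spawn . return . (+ 1)) xs  -- one spawned task per element
    mapM get ivars                             -- collect results, still in Par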
Also, your definitions of getTuples and getSolutions don't make much sense. The following are much simpler and probably achieve what you actually want:
getTuples :: [IVar a] -> [Par a]
getTuples = map get
getSolutions :: [Par a] -> [a]
getSolutions = runPar . sequence
Also, you should try to keep the calls to runPar to a minimum:
The runPar function itself is relatively expensive [...]. So when using the Par monad, you should usually try to thread the Par monad around to all the places that need parallelism to avoid needing multiple runPar calls. [...] In particular, nested calls to runPar (where a runPar is evaluated during the course of executing another Par computation) usually give poor results.
I suggest you write some simpler programs that actually compile, until you understand both monads in general and Par in particular.
I have found myself in a dire need of your insights.
Here's my object of interest:
class Mergable m where
    merge :: m -> m -> Maybe m

    mergeList :: [m] -> [m]
    mergeList [] = []
    mergeList [x] = [x]
    mergeList (x:y:t) = r1 ++ mergeList (r2 ++ t)
      where
        (r1,r2) = case x `merge` y of
            Just m  -> ([ ], [m])
            Nothing -> ([x], [y])
But I'll come back to it later. For now I prepared some examples:
data AffineTransform = Identity
                     | Translation Float Float
                     | Rotation Float
                     | Scaling Float Float
                     | Affine Matrix3x3

instance Monoid AffineTransform where
    mempty = Identity
    Identity `mappend` x = x
    x `mappend` Identity = x
    (Translation dx1 dy1) `mappend` (Translation dx2 dy2) = Translation (dx1+dx2) (dy1+dy2)
    (Rotation theta1) `mappend` (Rotation theta2) = Rotation (theta1+theta2)
    (Scaling sx1 sy1) `mappend` (Scaling sx2 sy2) = Scaling (sx1*sx2) (sy1*sy2)
    -- last resort: compose transforms from different subgroups
    -- using an "expensive" matrix multiplication
    x `mappend` y = Affine (toMatrix x `mult3x3` toMatrix y)
So now I can do:
toMatrix $ Rotation theta1 `mappend` Translation dx1 dy1 `mappend` Translation dx2 dy2 `mappend` Rotation theta2
or more briefly:
(toMatrix . mconcat) [Rotation theta1, Translation dx1 dy1, Translation dx2 dy2, Rotation theta2]
or more generally:
(toMatrix . (fold[r|r'|l|l'] mappend)) [Rotatio...], etc
In the above examples the first rotation and translation will be combined (expensively) into a matrix; then that matrix is combined with the next translation (again by multiplication), and once more a multiplication is used to produce the final result, even though (by associativity) the two translations in the middle could have been combined cheaply, for a total of two multiplications instead of three.
Anyhow, along comes my Mergable class to the rescue:
instance Mergable AffineTransform where
    x `merge` Identity = Just x
    Identity `merge` x = Just x
    x@(Translation _ _) `merge` y@(Translation _ _) = Just $ x `mappend` y
    x@(Rotation _) `merge` y@(Rotation _) = Just $ x `mappend` y
    x@(Scaling _ _) `merge` y@(Scaling _ _) = Just $ x `mappend` y
    _ `merge` _ = Nothing
so now (toMatrix . mconcat . mergeList) ~ (toMatrix . mconcat), as it should:
mergeList [Rotation theta1, Translation dx1 dy1, Translation dx2 dy2, Rotation theta2] == [Rotation theta1, Translation (dx1+dx2) (dy1+dy2), Rotation theta2]
Other examples I have in mind are more involved (code-wise) so I will just state the ideas.
Let's say I have some
data Message = ...
and a
dispatch :: [Message] -> IO a
where dispatch takes a message from the list, depending on its type opens an appropriate channel (file, stream, etc), writes the message, closes the channel and continues with the next message. So if opening and closing channels is an "expensive" operation, simply composing (dispatch . mergeList) can help improve performance with minimal effort.
Other times I have used it to handle events in GUI applications, e.g. merging mouse moves, key presses, commands in an undo-redo system, etc.
The general pattern is that I take two items from the list, check if they are "mergeable" in some way, and if so try to merge the result with the next item in the list; otherwise I leave the first item as it was and continue with the next pair. (Now that I think of it, it's a bit like generalized run-length encoding.)
My problem is that I can't shake the feeling that I'm reinventing the wheel and there has to be a similar structure in Haskell that I could use. If that's not the case, then:
1) How do I generalize it to containers other than lists?
2) Can you spot any other structures Mergable is an instance of? (particularly Arrows, if applicable; I have trouble wrapping my head around them)
3) Any insights on how strict/lazy mergeList should be, and how to present it to the user?
4) Optimization tips? Stack overflow risks? Anything else?
Thanks!
I don't think there is anything like this already in a library. Hoogle and Hayoo don't turn up anything suitable.
Mergeable (I think it's spelt that way) looks like a generalisation of Monoid. Not an Arrow, sorry.
Sometimes you need to merge preserving order. Sometimes you don't need to preserve order when you merge.
I might do something like
newtype MergedInOrder a = MergedInOrder [a] -- without exporting the constructor

mergeInOrder :: Mergeable a => [a] -> MergedInOrder a
mergeInOrder = MergedInOrder . foldr f []
  where f x [] = [x]
        f x xs@(y : ys) = case merge x y of
            Just z  -> z : ys
            Nothing -> x : xs
and similar newtypes for unordered lists: one that takes advantage of an Ord instance, and one that does not require it.
These newtypes have obvious Monoid instances.
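For instance, sketched under the assumption that re-running the merging fold over a concatenation is acceptable:
instance Mergeable a => Monoid (MergedInOrder a) where
    mempty = MergedInOrder []
    MergedInOrder xs `mappend` MergedInOrder ys = mergeInOrder (xs ++ ys)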
I don't think we can write code to merge arbitrary containers of Mergeables, I think it would have to be done explicitly for each container.
Here was my first thought. Notice "deriving Ord". Otherwise this first section is almost exactly the same as some of the code you presented:
import Data.Monoid
import Data.List
data AffineTransform = Identity
                     | Translation Float Float
                     | Rotation Float
                     | Scaling Float Float
                     | Affine Matrix3x3
                     deriving (Eq, Show, Ord)

-- some dummy definitions to satisfy the typechecker
data Matrix3x3 = Matrix3x3
               deriving (Eq, Show, Ord)

toMatrix :: AffineTransform -> Matrix3x3
toMatrix _ = Matrix3x3

mult3x3 :: Matrix3x3 -> Matrix3x3 -> Matrix3x3
mult3x3 _ _ = Matrix3x3
instance Monoid AffineTransform where
    mempty = Identity
    Identity `mappend` x = x
    x `mappend` Identity = x
    (Translation dx1 dy1) `mappend` (Translation dx2 dy2) =
        Translation (dx1+dx2) (dy1+dy2)
    (Rotation theta1) `mappend` (Rotation theta2) = Rotation (theta1+theta2)
    (Scaling sx1 sy1) `mappend` (Scaling sx2 sy2) = Scaling (sx1*sx2) (sy1*sy2)
    -- last resort: compose transforms from different subgroups
    -- using an "expensive" matrix multiplication
    x `mappend` y = Affine (toMatrix x `mult3x3` toMatrix y)
And now, the kicker:
mergeList :: [AffineTransform] -> [AffineTransform]
mergeList = map mconcat . groupBy sameConstructor . sort
  where sameConstructor Identity Identity = True
        sameConstructor (Translation _ _) (Translation _ _) = True
        sameConstructor (Rotation _) (Rotation _) = True
        sameConstructor (Scaling _ _) (Scaling _ _) = True
        sameConstructor (Affine _) (Affine _) = True
        sameConstructor _ _ = False
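For instance, with the definitions above, one would expect:
λ> mergeList [Rotation 1, Translation 1 2, Translation 3 4, Rotation 2]
[Translation 4.0 6.0,Rotation 3.0]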
Assuming that translations, rotations, and scalings are orthogonal, why not reorder the list and group all of the same operations together? (Is that a bad assumption?) That is the Haskell pattern that I saw: the good ol' group . sort trick. If you really want, you could pull sameConstructor out of mergeList:
mergeList :: (Monoid a, Ord a) => (a -> a -> Bool) -> [a] -> [a]
mergeList f = map mconcat . groupBy f . sort
P.S. if that was a bad assumption, then you could still do something like
mergeList = map mconcat . groupBy canMerge
But it seems to me that there is unusual overlap between merge and mappend the way you defined them.
Solving a problem from Google Code Jam (2009.1A.A: "Multi-base happiness") I came up with an awkward (code-wise) solution, and I'm interested in how it could be improved.
The problem description, shortly, is: Find the smallest number bigger than 1 for which iteratively calculating the sum of squares of digits reaches 1, for all bases from a given list.
Or description in pseudo-Haskell (code that would solve it if elem could always work for infinite lists):
solution =
    head . (`filter` [2..]) .
    all ((1 `elem`) . (`iterate` i) . sumSquareOfDigitsInBase)
And my awkward solution:
By awkward I mean it has this kind of code: happy <- lift . lift . lift $ isHappy Set.empty base cur
I memoize results of the isHappy function, using the State monad to carry the memoization Map.
Trying to find the first solution, I did not use head and filter (like the pseudo-Haskell above does), because the computation isn't pure (it changes state). So I iterated by using StateT with a counter, and a MaybeT to terminate the computation when the condition holds.
Already inside a MaybeT (StateT a (State b)), if the condition doesn't hold for one base, there is no need to check the other ones, so I have another MaybeT in the stack for that.
Code:
import Control.Monad.Maybe
import Control.Monad.State
import Data.Maybe
import qualified Data.Map as Map
import qualified Data.Set as Set
type IsHappyMemo = State (Map.Map (Integer, Integer) Bool)
isHappy :: Set.Set Integer -> Integer -> Integer -> IsHappyMemo Bool
isHappy _ _ 1 = return True
isHappy path base num = do
    memo <- get
    case Map.lookup (base, num) memo of
        Just r -> return r
        Nothing -> do
            r <- calc
            when (num < 1000) . modify $ Map.insert (base, num) r
            return r
  where
    calc
        | num `Set.member` path = return False
        | otherwise = isHappy (Set.insert num path) base nxt
    nxt =
        sum . map ((^ (2::Int)) . (`mod` base)) .
        takeWhile (not . (== 0)) . iterate (`div` base) $ num
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases =
    fmap snd .
    (`runStateT` 2) .
    runMaybeT .
    forever $ do
        (`when` mzero) . isJust =<<
            runMaybeT (mapM_ f bases)
        lift $ modify (+ 1)
  where
    f base = do
        cur <- lift . lift $ get
        happy <- lift . lift . lift $ isHappy Set.empty base cur
        unless happy mzero
solve :: [String] -> String
solve =
    concat .
    (`evalState` Map.empty) .
    mapM f .
    zip [1 :: Integer ..]
  where
    f (idx, prob) = do
        s <- solve1 . map read . words $ prob
        return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"
main :: IO ()
main =
    getContents >>=
    putStr . solve . tail . lines
Other contestants using Haskell did have nicer solutions, but solved the problem differently. My question is about small iterative improvements to my code.
Your solution is certainly awkward in its use (and abuse) of monads:
- It is usual to build monads piecemeal by stacking several transformers
- It is less usual, but still happens sometimes, to stack several states
- It is very unusual to stack several Maybe transformers
- It is even more unusual to use MaybeT to interrupt a loop
Your code is also a bit too point-free:
(`when` mzero) . isJust =<<
    runMaybeT (mapM_ f bases)
instead of the easier to read
happy <- isJust <$> runMaybeT (mapM_ f bases)
when happy mzero
Focusing now on function solve1, let us simplify it.
An easy way to do so is to remove the inner MaybeT monad. Instead of a forever loop that breaks when a happy number is found, you can go the other way around and recurse only if the number is not happy.
Moreover, you don't really need the State monad either, do you? One can always replace the state with an explicit argument.
Applying these ideas, solve1 now looks much better:
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = go 2
  where
    go i = do
        happyBases <- mapM (\b -> isHappy Set.empty b i) bases
        if and happyBases
            then return i
            else go (i + 1)
I would be more than happy with that code.
The rest of your solution is fine.
One thing that bothers me is that you throw away the memo cache for every subproblem. Is there a reason for that?
solve :: [String] -> String
solve =
    concat .
    (`evalState` Map.empty) .
    mapM f .
    zip [1 :: Integer ..]
  where
    f (idx, prob) = do
        s <- solve1 . map read . words $ prob
        return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"
Wouldn't your solution be more efficient if you reused it instead?
solve :: [String] -> String
solve cases = (`evalState` Map.empty) $ do
    solutions <- mapM f (zip [1 :: Integer ..] cases)
    return (unlines solutions)
  where
    f (idx, prob) = do
        s <- solve1 . map read . words $ prob
        return $ "Case #" ++ show idx ++ ": " ++ show s
The Monad* classes exist to remove the need for repeated lifting. If you change your signatures like this:
type IsHappyMemo = Map.Map (Integer, Integer) Bool
isHappy :: MonadState IsHappyMemo m => Set.Set Integer -> Integer -> Integer -> m Bool
This way you can remove most of the lifts. However, the longest sequence of lifts cannot be removed, since it is a State monad inside a StateT: using the MonadState type class will give you the outer StateT, where you need to get to the inner State. You could wrap your State monad in a newtype and make a MonadHappy class, similar to the existing monad classes.
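A sketch of that last suggestion (HappyT and MonadHappy are hypothetical names; it needs GeneralizedNewtypeDeriving, plus similar pass-through instances for the other transformers in the stack):
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype HappyT m a = HappyT { runHappyT :: StateT IsHappyMemo m a }
    deriving (Functor, Applicative, Monad)

class Monad m => MonadHappy m where
    getMemo :: m IsHappyMemo
    putMemo :: IsHappyMemo -> m ()

instance Monad m => MonadHappy (HappyT m) where
    getMemo = HappyT get
    putMemo = HappyT . put

-- pass-through instance, so explicit lifts are no longer needed:
instance MonadHappy m => MonadHappy (StateT s m) where
    getMemo = lift getMemo
    putMemo = lift . putMemo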
ListT (from the List package) does a much nicer job than MaybeT in stopping the calculation when necessary.
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = do
    Cons result _ <- runList . filterL cond $ fromList [2..]
    return result
  where
    cond num = andL . mapL (\base -> isHappy Set.empty base num) $ fromList bases
Some elaboration on how this works:
Had we used a regular list, the code would have looked like this:
solve1 bases = do
    result:_ <- filterM cond [2..]
    return result
  where
    cond num = fmap and $ mapM (\base -> isHappy Set.empty base num) bases
This calculation happens in a State monad, but if we'd like to get the resulting state, we'd have a problem, because filterM runs the monadic predicate it gets for every element of [2..], an infinite list.
With the monadic list, filterL cond (fromList [2..]) represents a list that we can access one item at a time as a monadic action, so our monadic predicate cond isn't actually executed (and affecting the state) unless we consume the corresponding list items.
Similarly, implementing cond using andL makes us not calculate and update the state if we already got a False result from one of the isHappy Set.empty num calculations.