I need to use a list monad transformer. I've read that there are potential problems with ListT IO from Control.Monad.List, since IO isn't commutative, so I'm looking at "ListT done right". But I'm getting some unexpected behavior.
Consider this simple test:
test = runListT $ do
  x <- liftList [1..3]
  liftIO $ print x
  y <- liftList [6..8]
  liftIO $ print (x,y)
Using Control.Monad.List:
Main> test
1
(1,6)
(1,7)
(1,8)
2
(2,6)
(2,7)
(2,8)
3
(3,6)
(3,7)
(3,8)
[(),(),(),(),(),(),(),(),()]
Using "ListT done right":
Main> test
1
(1,6)
Is this a problem with "ListT done right", or am I just using it wrong? Is there a preferred alternative?
Thanks!
This might be intentional on the part of the author, since they say
it lets each element of the list have its own side effects, which only get 'executed' if this element of the list is really inspected.
I'm not sure, though. Anyway, you can use this function to sequence the whole
list:
runAll_ :: (Monad m) => ListT m a -> m ()
runAll_ (ListT m) = runAll_' m where
  runAll_' m = do
    mm <- m
    case mm of
      MNil -> return ()
      _ `MCons` mxs -> runAll_' mxs
And an analogous runAll that returns a list should be easy to construct.
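For instance, here is a minimal sketch of such a runAll, assuming the same ListT/MNil/MCons constructors as above (untested against the package):

-- Like runAll_, but collects the elements while running each
-- element's effects in list order.
runAll :: (Monad m) => ListT m a -> m [a]
runAll (ListT m) = go m where
  go m = do
    mm <- m
    case mm of
      MNil -> return []
      x `MCons` mxs -> fmap (x :) (go mxs)

Using runAll_ with your test: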
main = runAll_ $ do
  x <- liftList [1..3]
  liftIO $ print x
  y <- liftList [6..8]
  liftIO $ print (x,y)
This prints:
1
(1,6)
(1,7)
(1,8)
2
(2,6)
(2,7)
(2,8)
3
(3,6)
(3,7)
(3,8)
So I wrote a program to query a forex (foreign exchange) API, and it works like a charm for single queries, but when I try to query every available currency pair, it appears to evaluate all the API calls (the program takes a long time to run) yet prints nothing.
import Data.Functor ((<&>))
import Data.Maybe (fromJust)
supportedPairs :: IO (Maybe [(String, String)])
forex :: String -> String -> IO (Maybe (Scientific, UnixTime))
main :: IO ()
main = do
  x <- supportedPairs
  mapM_ (flip (<&>) print . uncurry forex) (fromJust x)
  -- this prints nothing at all
The single calls work just fine like this:
main = do
  x <- supportedPairs
  u <- (uncurry forex . (flip (!!) 10 . fromJust)) x
  print u
  -- this prints "Just (438.685041,UnixTime {utSeconds = 1649588583, utMicroSeconds = 0})"
Why doesn't the mapM_ print the results even though they are evaluated? If I've understood Haskell's laziness correctly, results that are never printed shouldn't be evaluated in the first place.
Check the types:
print is ... -> IO ().
Therefore, ... <&> print is IO (IO ()). Note the double IO here.
Hence, mapping over that will run the "outermost IO" but not the "innermost IO". More concretely, compare this:
main = do
  x <- print True >> return 5           -- x is 5
  y <- return (print True >> return 5)  -- y is an IO action
  ...
Only the first print True here is executed: the second IO action is used to define y but until we run y it won't be executed.
The final point: here you do not need <&>, since that creates the nested IOs. Use flip (>>=) print (or (=<<) print, or (>>= print)) instead of flip (<&>) print.
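For example, a sketch of a fixed main, keeping the supportedPairs and forex signatures from the question:

main :: IO ()
main = do
  x <- supportedPairs
  -- (>>= print) runs each inner IO action and prints its result
  mapM_ ((>>= print) . uncurry forex) (fromJust x)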
I have the following function:
main = do xs <- getContents
          edLines <- ed $ lines xs
          putStr $ unlines edLines
Firstly I used the working version main = interact (unlines . ed . lines), but I have since changed the signature of ed: it now returns IO [String] instead of just [String], so I can't use this convenient definition any more.
The problem is that now my function ed is still partly evaluated, but nothing is displayed until I close stdin via CTRL + D.
Definition of ed:
ed :: [EdCmdLine] -> IO EdLines
ed xs = concatM $ map toLinesExt $ scanl (flip $ edLine defHs) (return [Leaf ""]) xs where
  toLinesExt :: IO [EdState] -> IO EdLines
  toLinesExt rsIO = do
    rs@(r:_) <- rsIO -- todo add fallback pattern with (error)
    return $ fromEd r ++ [" "]
The scanl is definitely evaluated lazily, because edLine is getting evaluated for sure (observable by its side effects).
I think it could have to do with concatM:
concatM :: (Foldable t, Monad m) => t (m [a]) -> m [a]
concatM xsIO = foldr (\accIO xIO -> do {x <- xIO; acc <- accIO; return $ acc ++ x}) (return []) xsIO
All I/O in Haskell is explicitly ordered. The last two lines of your main function desugar into something like
ed (lines xs) >>= (\edLines -> putStr $ unlines edLines)
>>= sequences all of the I/O effects on the left before all of those on the right. You're constructing an I/O action of the form generate line 1 >> ... >> generate line n >> output line 1 >> ... >> output line n.
This isn't really an evaluation order issue, it's a correctness issue. An implementation is free to evaluate in any order it wants, but it can't change the ordering of I/O actions that you specified, any more than it can reorder the elements of a list.
Here's a toy example showing what you need to do:
lineProducingActions :: [IO String]
lineProducingActions = replicate 10 getLine
wrongOrder, correctOrder :: IO ()
wrongOrder = do
  xs <- sequence lineProducingActions
  mapM_ putStrLn xs

correctOrder = do
  let xs = [x >>= putStrLn | x <- lineProducingActions]
  sequence_ xs
Note that you can decouple the producer and consumer while getting the ordering you want. You just need to avoid combining the I/O actions in the producer. I/O actions are pure values that can be manipulated just like any other values. They aren't side-effectful expressions that happen immediately as they're written. They happen, rather, in whatever order you glue them together in.
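A tiny illustration of that last point (the actions list is made up for the example):

-- Building this list runs nothing; the actions are just values.
actions :: [IO ()]
actions = [putStrLn "a", putStrLn "b"]

main :: IO ()
main = sequence_ (reverse actions) -- prints "b", then "a"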
You would need to use unsafeInterleaveIO to schedule some of your IO actions for later. Beware that the IO actions may then be executed in a different order than you might first expect!
However, I strongly recommend not doing that. Change your IO [String] action to print each line as it's produced instead.
Alternately, if you really want to maintain the computation-as-pipeline view, check out one of the many streaming libraries available on Hackage (streamly, pipes, iteratees, conduit, machines, and probably half a dozen others).
Thanks to benrg's answer I was able to solve the issue with the following code:
ed :: [EdCmdLine] -> [IO EdLines]
ed cmds = map (>>= return . toLines . head) $ edHistIO where
  toLines :: EdState -> EdLines
  toLines r = fromEd r ++ [" "]
  edHistIO = edRec defHs cmds (return [initState])

  edRec :: [HandleHandler] -> [EdCmdLine] -> IO EdHistory -> [IO EdHistory]
  edRec _ [] hist = [hist] -- if CTRL + D
  edRec defHs (cmd:cmds) hist = let next = edLine defHs cmd hist in next : edRec defHs cmds next

main = getContents >>= mapM_ (>>= (putStr . unlines)) . ed . lines
I'm using a state transformer to randomly sample a dataset at every point of a 2D recursive walk, which outputs a list of 2D grids of samples that together satisfy a condition. I'd like to pull results lazily, but my approach instead exhausts the whole dataset at every point before I can pull the first result.
To be concrete, consider this program:
import Control.Monad ( sequence, liftM2 )
import Data.Functor.Identity
import Control.Monad.State.Lazy ( StateT(..), State, runState )
import System.Random ( getStdGen )

walk :: Int -> Int -> [State Int [Int]]
walk _ 0 = [return [0]]
walk 0 _ = [return [0]]
walk x y =
  let st :: [State Int Int]
      st = [StateT (\s -> Identity (s, s + 1)), undefined]
      unst :: [State Int Int] -- degenerate state transformer
      unst = [return 1, undefined]
  in map (\m_z -> do
       z <- m_z
       fmap concat $ sequence [
           liftM2 (zipWith (\x y -> x + y + z)) a b -- for 1D: map (+z) <$> a
         | a <- walk x (y - 1) -- depth
         , b <- walk (x - 1) y -- breadth -- comment out for 1D
         ]
     ) st -- vs. unst

main :: IO ()
main = do
  std <- getStdGen
  putStrLn $ show $ head $ fst $ (`runState` 0) $ head $ walk 2 2
The program walks the rectangular grid from (x, y) to (0, 0) and sums all the results, including the value of one of the lists of State monads: either the non-trivial transformers st that read and advance their state, or the trivial transformers unst. Of interest is whether the algorithm explores past the heads of st and unst.
In the code as presented, it throws undefined. I chalked this up to a misdesign of my order of chaining the transformations, and in particular, a problem with the state handling, as using unst instead (i.e. decoupling the result from state transitions) does produce a result. However, I then found that a 1D recursion also preserves laziness even with the state transformer (remove the breadth step b <- walk... and swap the liftM2 block for fmap).
If we trace (show (x, y)), we also see that it walks the whole grid before hitting the undefined:
$ cabal run
Build profile: -w ghc-8.6.5 -O1
...
(2,2)
(2,1)
(1,2)
(1,1)
(1,1)
sandbox: Prelude.undefined
I suspect that my use of sequence is at fault here, but as the choice of monad and the dimensionality of the walk affect its success, I can't say broadly that sequencing the transformations is the source of strictness by itself.
What's causing the difference in strictness between 1D and 2D recursion here, and how can I achieve the laziness I want?
Consider the following simplified example:
import Control.Monad.State.Lazy

st :: [State Int Int]
st = [state (\s -> (s, s + 1)), undefined]

action1d = do
  a <- sequence st
  return $ map (2*) a

action2d = do
  a <- sequence st
  b <- sequence st
  return $ zipWith (+) a b

main :: IO ()
main = do
  print $ head $ evalState action1d 0
  print $ head $ evalState action2d 0
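Running this, the 1D action prints its head just fine, while the 2D action hits the undefined element:

> main
0
*** Exception: Prelude.undefined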
Here, in both the 1D and 2D calculations, the head of the result depends explicitly only on the heads of the inputs (just head a for the 1D action and both head a and head b for the 2D action). However, in the 2D calculation, there's an implicit dependency of b (even just its head) on the current state, and that state depends on the evaluation of the entirety of a, not just its head.
You have a similar dependency in your example, though it's obscured by the use of lists of state actions.
Let's say we wanted to run the action walk22_head = head $ walk 2 2 manually and inspect the first integer in the resulting list:
main = print $ head $ evalState walk22_head 0
Writing the elements of the state action list st explicitly:
st1, st2 :: State Int Int
st1 = state (\s -> (s, s+1))
st2 = undefined
we can write walk22_head as:
walk22_head = do
  z <- st1
  a <- walk21_head
  b <- walk12_head
  return $ zipWith (\x y -> x + y + z) a b
Note that this depends only on the defined state action st1 and the heads of walk 2 1 and walk 1 2. Those heads, in turn, can be written:
walk21_head = do
  z <- st1
  a <- return [0] -- walk20_head
  b <- walk11_head
  return $ zipWith (\x y -> x + y + z) a b

walk12_head = do
  z <- st1
  a <- walk11_head
  b <- return [0] -- walk02_head
  return $ zipWith (\x y -> x + y + z) a b
Again, these depend only on the defined state action st1 and the head of walk 1 1.
Now, let's try to write down a definition of walk11_head:
walk11_head = do
  z <- st1
  a <- return [0]
  b <- return [0]
  return $ zipWith (\x y -> x + y + z) a b
This depends only on the defined state action st1, so with these definitions in place, if we run main, we get a defined answer:
> main
10
But these definitions aren't accurate! In each of walk 1 2 and walk 2 1, the head action is a sequence of actions, starting with the action that invokes walk11_head, but continuing with actions based on walk11_tail. So, more accurate definitions would be:
walk21_head = do
  z <- st1
  a <- return [0] -- walk20_head
  b <- walk11_head
  _ <- walk11_tail -- side effect of the sequence
  return $ zipWith (\x y -> x + y + z) a b

walk12_head = do
  z <- st1
  a <- walk11_head
  b <- return [0] -- walk02_head
  _ <- walk11_tail -- side effect of the sequence
  return $ zipWith (\x y -> x + y + z) a b
with:
walk11_tail = do
  z <- undefined
  a <- return [0]
  b <- return [0]
  return [zipWith (\x y -> x + y + z) a b]
With these definitions in place, there's no problem running walk12_head and walk21_head in isolation:
> head $ evalState walk12_head 0
1
> head $ evalState walk21_head 0
1
The state side effects here are not needed to calculate the answer and so are never invoked. But it's not possible to run them both in sequence:
> head $ evalState (walk12_head >> walk21_head) 0
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
undefined, called at Lazy2D_2.hs:41:8 in main:Main
Therefore, trying to run main fails for the same reason:
> main
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
undefined, called at Lazy2D_2.hs:41:8 in main:Main
because, in calculating walk22_head, even the very beginning of walk21_head's calculation depends on the state side effect walk11_tail initiated by walk12_head.
Your original walk definition behaves the same way as these mockups:
> head $ evalState (head $ walk 1 2) 0
1
> head $ evalState (head $ walk 2 1) 0
1
> head $ evalState (head (walk 1 2) >> head (walk 2 1)) 0
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
undefined, called at Lazy2D_0.hs:15:49 in main:Main
> head $ evalState (head (walk 2 2)) 0
*** Exception: Prelude.undefined
CallStack (from HasCallStack):
error, called at libraries/base/GHC/Err.hs:78:14 in base:GHC.Err
undefined, called at Lazy2D_0.hs:15:49 in main:Main
It's hard to say how to fix this. Your toy example was excellent for the purposes of illustrating the problem, but it's not clear how the state is used in your "real" problem and if head $ walk 2 1 really has a state dependency on the sequence of walk 1 1 actions induced by head $ walk 1 2.
The accepted answer by K.A. Buhr is right: while getting the head of one step in each direction is fine (try walk with either x < 2 or y < 2), the combination of the implicit >>= in liftM2, the sequence in the value of a, and the state dependency in the value of b makes b depend on all the side effects of a. As he also pointed out, a working solution depends on what dependencies are actually wanted.
I'll share a solution for my particular case: each walk call depends on at least the state of the caller, and perhaps on some other states, based on a pre-order traversal of the grid and the alternatives in st. In addition, as the question suggests, I want to try to build a full result before testing any unneeded alternatives in st. This is a little difficult to explain visually, but here's the best I could do: the left of my diagram shows the variable number of st alternatives at each coordinate (which is what I have in my actual use case), and the right shows a [rather messy] map of the desired dependency order of the state: it traverses x-y first in a 3D DFS, with x as depth (fastest axis), y as breadth (middle axis), and finally the alternatives as the slowest axis (shown in dashed lines with open circles).
The central issue in the original implementation came from sequencing lists of state transitions to accommodate the non-recursive return type. Let's replace the list type altogether with a type that's recursive in the monad parameter, so the caller can better control the dependency order:
data ML m a = MCons a (MML m a) | MNil -- recursive monadic list
newtype MML m a = MML (m (ML m a)) -- base case wrapper
An example of [1, 2]:
MCons 1 (MML (return (MCons 2 (MML (return MNil)))))
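(As a hypothetical helper, not part of the original code, a plain list can be wrapped like so:)

-- Wrap a pure list, attaching no effects to any element.
fromListML :: Monad m => [a] -> MML m a
fromListML = foldr (\a rest -> MML (return (MCons a rest))) (MML (return MNil))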
Functor and Monoid behaviors are used often, so here are the relevant implementations:
instance Functor m => Functor (ML m) where
  fmap f (MCons a m) = MCons (f a) (MML $ (fmap f) <$> coerce m)
  fmap _ MNil = MNil

instance Monad m => Semigroup (MML m a) where
  (MML l) <> (MML r) = MML $ l >>= mapper where
    mapper (MCons la lm) = return $ MCons la (lm <> (MML r))
    mapper MNil = r

instance Monad m => Monoid (MML m a) where
  mempty = MML (pure MNil)
There are two critical operations: combining steps in two different axes, and combining lists from different alternatives at the same coordinate. Respectively:
Based on the diagram, we want to get a single full result from the x step first, then a full result from the y step. Each step returns a list of results from all combinations of viable alternatives from inner coordinates, so we take a Cartesian product over both lists, also biased in one direction (in this case y fastest). First we define a "concatenation" that applies a base case wrapper MML at the end of a bare list ML:
nest :: Functor m => MML m a -> ML m a -> ML m a
nest ma (MCons a mb) = MCons a (MML $ nest ma <$> coerce mb)
then a Cartesian product:
prodML :: Monad m => (a -> a -> a) -> ML m a -> ML m a -> ML m a
prodML f x (MCons ya ym) = (MML $ prodML f x <$> coerce ym) `nest` ((f ya) <$> x)
prodML _ MNil _ = MNil
We want to smash the lists from different alternatives into one list and we don't care that this introduces dependencies between alternatives. This is where we use mconcat from the Monoid instance.
All in all, it looks like this:
walk :: Int -> Int -> MML (State Int) Int
-- base cases
walk _ 0 = MML $ return $ MCons 1 (MML $ return MNil)
walk 0 _ = walk 0 0
walk x y =
  let st :: [State Int Int]
      st = [StateT (\s -> Identity (s, s + 1)), undefined]
      xstep = coerce $ walk (x-1) y
      ystep = coerce $ walk x (y-1)
  -- point 2: smash lists with mconcat
  in mconcat $ map (\mz -> MML $ do
       z <- mz
       -- point 1: product over results
       liftM2 ((fmap (z+) .) . prodML (+)) xstep ystep
     ) st
headML (MCons a _) = a
headML _ = undefined
main :: IO ()
main = putStrLn $ show $ headML $ fst $ (`runState` 0) $ (\(MML m) -> m) $ walk 2 2
Note that the results have changed along with the semantics. That doesn't matter to me, since my goal only needed to pull random numbers from state, and whatever dependency order is needed can be controlled with the right shepherding of list elements into the final result.
(I'll also warn that without memoization or attention to strictness, this implementation is very inefficient for large x and y.)
HLint suggests that I use forM_ rather than forM. Why? I see they have different type signatures but haven't found a good reason to use one over the other.
forM :: (Traversable t, Monad m) => t a -> (a -> m b) -> m (t b)
forM_ :: (Foldable t, Monad m) => t a -> (a -> m b) -> m ()
The forM_ function is more efficient because it does not save the results of the operations. That is all. (This only makes sense when working with monads, because a pure function of type a -> () is not particularly useful.)
Ok,
forM is mapM with its arguments flipped.
forM_ is mapM_ with its arguments flipped.
Let's look at mapM and mapM_:
mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM mf xs takes a monadic function mf (having type Monad m => (a -> m b)) and applies it to each element in list xs; the result is a list inside a monad.
The difference between mapM and mapM_ is that mapM returns a list of the results, while mapM_ discards the results and returns (). The result of each action in mapM_ is not stored.
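A quick GHCi comparison (GHCi prints the final [(),()] result of mapM, but not the () of mapM_):

> mapM print [1,2]
1
2
[(),()]
> mapM_ print [1,2]
1
2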
To understand the difference between (A): forM xs f and (B): forM_ xs f, it might help to compare the difference between the following:
-- Equivalent to (A)
do
  r1 <- f x1
  r2 <- f x2
  ...
  rn <- f xn
  return [r1, r2, ..., rn]

-- Equivalent to (B)
do
  _ <- f x1
  _ <- f x2
  ...
  _ <- f xn
  return ()
The crucial difference is that forM_ ignores the results r1, ..., rn and just returns an empty result via return (). Think of the underscore as meaning "don't care": forM_ doesn't care about the results. forM, however, does care about the results and returns them as a list via return [r1, r2, ..., rn].
Example 1
The code below asks for your name three times and prints the results of the forM.
import Control.Monad (forM, forM_)

main = do
  let askName i = do
        putStrLn $ "What's your name (" ++ (show i) ++ ")"
        name <- getLine
        return name
  results <- forM [1,2,3] askName
  putStrLn $ "Results = " ++ show results
An example execution with forM:
What's your name? (1)
> James
What's your name? (2)
> Susan
What's your name? (3)
> Alex
Results = ["James", "Susan", "Alex"]
But if we change the forM to a forM_, then we would have instead:
What's your name? (1)
> James
What's your name? (2)
> Susan
What's your name? (3)
> Alex
Results = ()
In your case, the linter is telling you that you're not using the return values of your forM (you don't have foo <- forM xs f; you probably have forM xs f by itself on a line), and so you should use forM_ instead. This happens, for example, when you are using a monadic action like putStrLn.
Example 2
The code below asks for your name and then says "Hello" – repeating three times.
import Control.Monad (forM, forM_)

main = do
  let askThenGreet i = do
        putStrLn $ "What's your name (" ++ (show i) ++ ")"
        name <- getLine
        putStrLn $ "Hello! " ++ name
  forM [1,2,3] askThenGreet
An example execution with forM:
What's your name? (1)
> Sarah
Hello! Sarah
What's your name? (2)
> Dick
Hello! Dick
What's your name? (3)
> Peter
Hello! Peter
[(), (), ()]
The overall result of main comes from the result of the forM: [(), (), ()]. It's pretty useless and, annoyingly, it appears in the console when run in GHCi. But if we change the forM to a forM_, then we would have instead:
What's your name? (1)
> Sarah
Hello! Sarah
What's your name? (2)
> Dick
Hello! Dick
What's your name? (3)
> Peter
Hello! Peter
With that change, the overall result now comes from the forM_ and is (). This doesn't show up in the console, because GHCi doesn't print () results. Great!
Also, by using forM_ here, it's clearer to other readers of your code – you're indirectly explaining / self-documenting that you don't care about the results [r1, ..., rn] = [(), (), ()] – and rightly so, as they're useless here.
Solving a problem from Google Code Jam (2009.1A.A: "Multi-base happiness") I came up with an awkward (code-wise) solution, and I'm interested in how it could be improved.
The problem description, shortly, is: Find the smallest number bigger than 1 for which iteratively calculating the sum of squares of digits reaches 1, for all bases from a given list.
Or description in pseudo-Haskell (code that would solve it if elem could always work for infinite lists):
solution =
head . (`filter` [2..]) .
all ((1 `elem`) . (`iterate` i) . sumSquareOfDigitsInBase)
And my awkward solution:
By awkward I mean it has this kind of code: happy <- lift . lift . lift $ isHappy Set.empty base cur
I memoize results of the isHappy function, using the State monad for the memoized-results Map.
Trying to find the first solution, I did not use head and filter (like the pseudo-Haskell above does), because the computation isn't pure (it changes state). So I iterated using StateT with a counter, and a MaybeT to terminate the computation when the condition holds.
Already inside a MaybeT (StateT a (State b)), if the condition doesn't hold for one base, there is no need to check the other ones, so I have another MaybeT in the stack for that.
Code:
import Control.Monad.Maybe
import Control.Monad.State
import Data.Maybe
import qualified Data.Map as Map
import qualified Data.Set as Set

type IsHappyMemo = State (Map.Map (Integer, Integer) Bool)

isHappy :: Set.Set Integer -> Integer -> Integer -> IsHappyMemo Bool
isHappy _ _ 1 = return True
isHappy path base num = do
  memo <- get
  case Map.lookup (base, num) memo of
    Just r -> return r
    Nothing -> do
      r <- calc
      when (num < 1000) . modify $ Map.insert (base, num) r
      return r
  where
    calc
      | num `Set.member` path = return False
      | otherwise = isHappy (Set.insert num path) base nxt
    nxt =
      sum . map ((^ (2::Int)) . (`mod` base)) .
      takeWhile (not . (== 0)) . iterate (`div` base) $ num

solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases =
  fmap snd .
  (`runStateT` 2) .
  runMaybeT .
  forever $ do
    (`when` mzero) . isJust =<<
      runMaybeT (mapM_ f bases)
    lift $ modify (+ 1)
  where
    f base = do
      cur <- lift . lift $ get
      happy <- lift . lift . lift $ isHappy Set.empty base cur
      unless happy mzero

solve :: [String] -> String
solve =
  concat .
  (`evalState` Map.empty) .
  mapM f .
  zip [1 :: Integer ..]
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"

main :: IO ()
main =
  getContents >>=
  putStr . solve . tail . lines
Other contestants using Haskell did have nicer solutions, but solved the problem differently. My question is about small iterative improvements to my code.
Your solution is certainly awkward in its use (and abuse) of monads:
- It is usual to build monads piecemeal by stacking several transformers
- It is less usual, but still happens sometimes, to stack several states
- It is very unusual to stack several Maybe transformers
- It is even more unusual to use MaybeT to interrupt a loop
Your code is a bit too pointless:

(`when` mzero) . isJust =<<
  runMaybeT (mapM_ f bases)
instead of the easier to read
happy <- isJust <$> runMaybeT (mapM_ f bases)
when happy mzero
Focusing now on function solve1, let us simplify it.
An easy way to do so is to remove the inner MaybeT monad. Instead of a forever loop which breaks when a happy number is found, you can go the other way around and recurse only if the number is not happy.
Moreover, you don't really need the State monad either, do you? One can always replace the state with an explicit argument.
Applying these ideas, solve1 now looks much better:

solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = go 2 where
  go i = do
    happyBases <- mapM (\b -> isHappy Set.empty b i) bases
    if and happyBases
      then return i
      else go (i + 1)
I would be more than happy with that code.
The rest of your solution is fine.
One thing that bothers me is that you throw away the memo cache for every subproblem. Is there a reason for that?
solve :: [String] -> String
solve =
  concat .
  (`evalState` Map.empty) .
  mapM f .
  zip [1 :: Integer ..]
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"
Wouldn't your solution be more efficient if you reused it instead?
solve :: [String] -> String
solve cases = (`evalState` Map.empty) $ do
    solutions <- mapM f (zip [1 :: Integer ..] cases)
    return (unlines solutions)
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s
The Monad* classes exist to remove the need for repeated lifting. If you change your signatures like this:
type IsHappyMemo = Map.Map (Integer, Integer) Bool
isHappy :: MonadState IsHappyMemo m => Set.Set Integer -> Integer -> Integer -> m Bool
This way you can remove most of the lifts. However, the longest sequence of lifts cannot be removed, since it involves a State monad inside a StateT: using the MonadState type class will give you the outer StateT, where you need to get to the inner State. You could wrap your State monad in a newtype and make a MonadHappy class, similar to the existing monad classes.
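For instance, a rough sketch of that newtype-plus-class idea (all names here are illustrative, not from the original code):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State
import qualified Data.Map as Map

type Memo = Map.Map (Integer, Integer) Bool

-- A dedicated class for the memo state, mirroring MonadState.
class Monad m => MonadHappy m where
  lookupMemo :: (Integer, Integer) -> m (Maybe Bool)
  insertMemo :: (Integer, Integer) -> Bool -> m ()

newtype HappyT m a = HappyT (StateT Memo m a)
  deriving (Functor, Applicative, Monad)

instance Monad m => MonadHappy (HappyT m) where
  lookupMemo k = HappyT (gets (Map.lookup k))
  insertMemo k v = HappyT (modify (Map.insert k v))

-- Pass-through instances for the other transformers in the stack
-- (MaybeT, StateT, ...) would follow the usual mtl pattern.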
ListT (from the List package) does a much nicer job than MaybeT in stopping the calculation when necessary.
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = do
  Cons result _ <- runList . filterL cond $ fromList [2..]
  return result
  where
    cond num = andL . mapL (\base -> isHappy Set.empty base num) $ fromList bases
Some elaboration on how this works:
Had we used a regular list, the code would have looked like this:
solve1 bases = do
  result:_ <- filterM cond [2..]
  return result
  where
    cond num = fmap and . mapM (\base -> isHappy Set.empty base num) $ bases
This calculation happens in a State monad, but if we'd like to get the resulting state, we'd have a problem, because filterM runs the monadic predicate it gets for every element of [2..], an infinite list.
With the monadic list, filterL cond (fromList [2..]) represents a list that we can access one item at a time as a monadic action, so our monadic predicate cond isn't actually executed (and affecting the state) unless we consume the corresponding list items.
Similarly, implementing cond using andL means we don't calculate and update the state once we've already got a False result from one of the isHappy calculations.
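For reference, here is a sketch of how such a short-circuiting andL could be written in terms of runList, using the same Cons/Nil items as in solve1 above (the List package's actual definition may differ):

andL :: Monad m => ListT m Bool -> m Bool
andL list = do
  item <- runList list
  case item of
    Nil -> return True
    Cons False _ -> return False -- stop here; the rest is never run
    Cons True rest -> andL rest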