In order to understand how to use monad transformers, I wrote the following code without one. It reads standard input line by line and displays each line reversed until an empty line is encountered. It also counts the lines using State and in the end displays the total number.
import Control.Monad.State

main = print =<< fmap (`evalState` 0) go where
  go :: IO (State Int Int)
  go = do
    l <- getLine
    if null l
      then return get
      else do
        putStrLn (reverse l)
        -- another possibility: fmap (modify (+1) >>) go
        rest <- go
        return $ do
          modify (+1)
          rest
I wanted to add the current line number before each line. I was able to do it with StateT:
import Control.Monad.State

main = print =<< evalStateT go 0 where
  go :: StateT Int IO Int
  go = do
    l <- lift getLine
    if null l
      then get
      else do
        n <- get
        lift (putStrLn (show n ++ ' ' : reverse l))
        modify (+1)
        go
My question is: how to do the same in the version without monad transformers?
The problem you're having is that the hand-unrolling of StateT s IO a is s -> IO (a, s) (a function from the incoming state to an IO action producing a result and the new state), not IO (s -> (a, s))! Once you have this insight, it's pretty easy to see how to do it:
go :: Int -> IO (Int, Int)
go s = do
  l <- getLine
  if null l
    then return (s, s)
    else do
      putStrLn (show s ++ ' ' : reverse l)
      go (s+1)
Alternatively, if you want to keep returning an accumulated State computation as in your first program, you'd just need to run that accumulated state computation on every line to learn the current line number. This is O(n²) time, but since your first program is already using O(n) space, that's not too terrible. Of course, the StateT approach is superior in pretty much every way! If you really want to do it "by hand" and not pay an efficiency price, just manage the state by hand instead of building a state transformer at all. You're really not getting any benefit by using State instead of Int in the first program.
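For the record, here is a hedged sketch of that last suggestion: the first program with the counter threaded by hand as a plain Int, with no State at all (the names are illustrative, not code from the question):

main :: IO ()
main = print =<< go 0 where
  go :: Int -> IO Int
  go n = do
    l <- getLine
    if null l
      then return n                -- the accumulated count is the final answer
      else do
        putStrLn (reverse l)
        go (n + 1)                 -- thread the counter explicitly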
Maybe this is what you are looking for?
main = print =<< fmap (`evalState` 0) (go get) where
  go :: State Int Int -> IO (State Int Int)
  go st = do
    l <- getLine
    if null l
      then return (st >>= \_ -> get)
      else do
        let ln = evalState st 0
        putStrLn (show ln ++ ' ' : reverse l)
        go (st >>= \_ -> modify (+1) >>= \_ -> get)
The idea here is to make go tail recursive, building up your state computation, which you can then evaluate at each step.
EDIT
This version bounds the state computation to a constant size. That said, under lazy evaluation the previous version's state computation, once forced, should be reusable without re-evaluating it, so I'm guessing that these are essentially the same...
main = print =<< fmap (`evalState` 0) (go get) where
  go :: State Int Int -> IO (State Int Int)
  go st = do
    l <- getLine
    if null l
      then return st
      else do
        let ln = evalState st 0
        putStrLn (show ln ++ ' ' : reverse l)
        go (modify (\s -> s+ln+1) >>= \_ -> get)
I have the following function:
main = do xs <- getContents
          edLines <- ed $ lines xs
          putStr $ unlines edLines
Originally I used the working version main = interact (unlines . ed . lines), but I have since changed the signature of ed: it now returns IO [String] instead of just [String], so I can't use this convenient definition any more.
The problem is that now my function ed is still partly evaluated, but nothing is displayed until I close stdin via CTRL + D.
Definition of ed:
ed :: [EdCmdLine] -> IO EdLines
ed xs = concatM $ map toLinesExt $ scanl (flip $ edLine defHs) (return [Leaf ""]) xs where
  toLinesExt :: IO [EdState] -> IO EdLines
  toLinesExt rsIO = do
    rs@(r:_) <- rsIO -- todo add fallback pattern with (error)
    return $ fromEd r ++ [" "]
The scanl is definitely evaluated lazily, because edLine is getting evaluated for sure (observable by its side effects).
I think it could have to do with concatM:
concatM :: (Foldable t, Monad m) => t (m [a]) -> m [a]
concatM xsIO = foldr (\accIO xIO -> do {x <- xIO; acc <- accIO; return $ acc ++ x}) (return []) xsIO
All I/O in Haskell is explicitly ordered. The last two lines of your main function desugar into something like
ed (lines xs) >>= (\edLines -> putStr $ unlines edLines)
>>= sequences all of the I/O effects on the left before all of those on the right. You're constructing an I/O action of the form generate line 1 >> ... >> generate line n >> output line 1 >> ... >> output line n.
This isn't really an evaluation order issue, it's a correctness issue. An implementation is free to evaluate in any order it wants, but it can't change the ordering of I/O actions that you specified, any more than it can reorder the elements of a list.
Here's a toy example showing what you need to do:
lineProducingActions :: [IO String]
lineProducingActions = replicate 10 getLine

wrongOrder, correctOrder :: IO ()

wrongOrder = do
  xs <- sequence lineProducingActions
  mapM_ putStrLn xs

correctOrder = do
  let xs = [x >>= putStrLn | x <- lineProducingActions]
  sequence_ xs
Note that you can decouple the producer and consumer while getting the ordering you want. You just need to avoid combining the I/O actions in the producer. I/O actions are pure values that can be manipulated just like any other values. They aren't side-effectful expressions that happen immediately as they're written. They happen, rather, in whatever order you glue them together in.
You would need to use unsafeInterleaveIO to schedule some of your IO actions for later. Beware that the IO actions may then be executed in a different order than you might first expect!
However, I strongly recommend not doing that. Change your IO [String] action to print each line as it's produced instead.
Alternately, if you really want to maintain the computation-as-pipeline view, check out one of the many streaming libraries available on Hackage (streamly, pipes, iteratees, conduit, machines, and probably half a dozen others).
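For illustration, here is a minimal sketch of that pipeline view using pipes (this assumes the pipes-4 Pipes.Prelude API; processLine is a hypothetical stand-in for the per-line work, not the ed from the question):

import Pipes
import qualified Pipes.Prelude as P

-- hypothetical per-line processing standing in for the real work
processLine :: String -> String
processLine = reverse

-- read stdin line by line, transform each line, and print it as soon as it is produced
main :: IO ()
main = runEffect $ P.stdinLn >-> P.map processLine >-> P.stdoutLn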
Thanks to @benrg's answer I was able to solve the issue with the following code:
ed :: [EdCmdLine] -> [IO EdLines]
ed cmds = map (>>= return . toLines . head) $ edHistIO where
  toLines :: EdState -> EdLines
  toLines r = fromEd r ++ [" "]
  edHistIO = edRec defHs cmds (return [initState])

edRec :: [HandleHandler] -> [EdCmdLine] -> IO EdHistory -> [IO EdHistory]
edRec _ [] hist = [hist] -- if CTRL + D
edRec defHs (cmd:cmds) hist = let next = edLine defHs cmd hist in next : edRec defHs cmds next

main = getContents >>= mapM_ (>>= (putStr . unlines)) . ed . lines
Basically I would like to find a way so that a user can enter the number of test cases and then input their test cases. The program can then run those test cases and print out the results in the order that the test cases appear.
So basically I have main which reads in the number of test cases and inputs it into a function that will read from IO that many times. It looks like this:
main = getLine >>= \tst -> w ((read :: String -> Int) tst) [[]]
This is the type signature of w: w :: Int -> [[Int]] -> IO ()
So my plan is to read in the number of test cases and have w run a function which takes in each test case and stores the result into the [[]] variable. So each list in the list will be an output. w will just run recursively until it reaches 0 and then print out each list on a separate line. I'd like to know if there is a better way of doing this, since I have to pass an empty list into w, which seems extraneous.
As @bheklilr mentioned, you can't update a value like [[]]. The standard functional approach is to pass an accumulator through a set of recursive calls. In the following example the acc parameter to the loop function is this accumulator: it consists of all of the output collected so far. At the end of the loop we return it.
myTest :: Int -> [String]
myTest n = [ "output line " ++ show k ++ " for n = " ++ show n | k <- [1..n] ]

main = do
  putStr "Enter number of test cases: "
  ntests <- fmap read getLine :: IO Int

  let loop k acc | k > ntests = return $ reverse acc
      loop k acc = do
        -- we're on the kth-iteration
        putStr $ "Enter parameter for test case " ++ show k ++ ": "
        a <- fmap read getLine :: IO Int
        let output = myTest a -- run the test
        loop (k+1) (output:acc)

  allOutput <- loop 1 []
  print allOutput
As you get more comfortable with this kind of pattern you'll recognize it as a fold (indeed a monadic fold since we're doing IO) and you can implement it with foldM.
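For reference, here is a hedged sketch of the same loop written with foldM from Control.Monad; mainFold and step are illustrative names, and myTest is the function defined above:

import Control.Monad (foldM)

mainFold :: IO ()
mainFold = do
    putStr "Enter number of test cases: "
    ntests <- fmap read getLine :: IO Int
    -- fold the test numbers 1..ntests through IO, accumulating the outputs
    allOutput <- foldM step [] [1 .. ntests]
    print (reverse allOutput)
  where
    step acc k = do
      putStr $ "Enter parameter for test case " ++ show k ++ ": "
      a <- fmap read getLine :: IO Int
      return (myTest a : acc)      -- myTest is defined in the answer above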
Update: To help explain how fmap works, here are equivalent expressions written with and without fmap:

With fmap:

n <- fmap read getLine :: IO Int

vals <- fmap (map read . words) getLine :: IO [Int]

Without fmap:

line <- getLine
let n = read line :: Int

line <- getLine
let vals = (map read . words) line :: [Int]
Using fmap allows us to eliminate the intermediate variable line which we never reference again anyway. We still need to provide a type signature so read knows what to do.
The idiomatic way is to use replicateM:
import Control.Monad (replicateM)

runAllTests :: [[Int]] -> IO ()
runAllTests = {- ... -}

main = do
  numTests <- readLn
  tests <- replicateM numTests readLn
  runAllTests tests

-- or:
-- main = readLn >>= flip replicateM readLn >>= runAllTests
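Note that readLn parses each line with its Read instance, so a test case line would have to look like [1,2,3]. If each test case instead arrives as space-separated numbers, a hedged variant (reusing the map read . words trick from the fmap examples above) is:

import Control.Monad (replicateM)

main :: IO ()
main = do
  numTests <- readLn
  tests <- replicateM numTests (fmap (map read . words) getLine) :: IO [[Int]]
  runAllTests tests   -- runAllTests as in the answer above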
I am having trouble understanding the answers to a previous question. I'm hoping that an explanation of the following will clarify things. The following example comes from fpcomplete
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont

main = flip runContT return $ do
  lift $ putStrLn "alpha"
  (k, num) <- callCC $ \k -> let f x = k (f, x)
                             in return (f, 0)
  lift $ putStrLn "beta"
  lift $ putStrLn "gamma"
  if num < 5
    then k (num + 1) >> return ()
    else lift $ print num
The output is
alpha
beta
gamma
beta
gamma
beta
gamma
beta
gamma
beta
gamma
beta
gamma
5
I think I understand how this example works, but why is it necessary to have a let expression in the callCC to "return" the continuation so that it can be used later on? So I tried to directly return the continuation, by taking the following simpler example and modifying it.
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont

main = flip runContT return $ do
  lift $ putStrLn "alpha"
  callCC $ \k -> do
    k ()
    lift $ putStrLn "uh oh..."
  lift $ putStrLn "beta"
  lift $ putStrLn "gamma"
This prints
alpha
beta
gamma
And I modified it to the following
import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont
main = flip runContT return $ do
lift $ putStrLn "alpha"
f <- callCC $ \k -> do
lift $ putStrLn "uh oh..."
return k
lift $ putStrLn "beta"
lift $ putStrLn "gamma"
The idea is that the continuation would get returned as f and go unused in this test example, which I would expect to print
uh oh...
beta
gamma
But this example doesn't compile. Why can't this be done?
Edit: Consider the analogous example in Scheme. As far as I know, Scheme wouldn't have a problem with it. Is that correct, and if so, why?
As the others have written, the last example does not typecheck due to an infinite type.
@augustss proposed another way of solving this problem:
You can also make a newtype to wrap the infinite (equi-)recursive type into a (iso-)recursive newtype. – augustss Dec 12 '13 at 12:50
Here's my take at it:
import Control.Monad.Trans.Cont
import Control.Monad.Trans.Class

data Mu t = In { out :: t (Mu t) }
newtype C' b a = C' { unC' :: a -> b }
type C b = Mu (C' b)

unfold = unC' . out
fold = In . C'

setjmp = callCC $ (\c -> return $ fold c)
jump l = unfold l l

test :: ContT () IO ()
test = do
  lift $ putStrLn "Start"
  l <- setjmp
  lift $ putStrLn "x"
  jump l

main = runContT test return
I think this is what @augustss had in mind.
Looking at your examples in reverse order.
The last example does not typecheck due to an infinite type. Looking at the type of callCC, it is ((a -> ContT r m b) -> ContT r m a) -> ContT r m a. If we try to return the continuation, we return something of type ContT r m (a -> ContT r m b). This gives us the type equality constraint a ~ (a -> ContT r m b), which means a would have to be an infinite type. Haskell does not allow these (in general for good reason: as far as I can tell, the infinite type here would amount to having to supply an infinitely higher-order function as an argument).
You don't mention whether there's anything you're confused about in the second example, but here is the key point. The reason that it does not print "uh oh..." is that the ContT action produced by k (), unlike most ContT actions, does not use the computation that follows it. This is the difference between continuations and ordinary functions that return ContT actions (disclaimer: any function could return a ContT action that behaves like this, but in general they don't). So when you follow the k () with a print, or anything else, it is irrelevant, because k () just discards the following actions.
So, the first example. The let binding here is actually only used to mess around with the parameters to k. But by doing so we avoid an infinite type. Effectively, we do some recursion in the let binding which is related to the infinite type we got before. f is a little bit like a version of the continuation with the recursion already done.
The type of this lambda we pass to callCC is Num n => ((n -> ContT r m b, n) -> ContT r m b) -> ContT r m (n -> ContT r m b, n). This does not have the same infinite type problem that your last example has, because we messed around with the parameters. You can perform a similar trick without adding the extra parameter by using let bindings in other ways. For example:
recur :: Monad m => ContT r m (ContT r m ())
recur = callCC $ \k -> let r = k r in r >> return r
This probably wasn't a terribly well explained answer, but the basic idea is that returning the continuation directly will create an infinite type problem. By using a let binding to create some recursion inside the lambda you pass to callCC, you can avoid this.
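As a minimal (and deliberately non-terminating) sketch of how the result of recur can be used, in the spirit of the setjmp example above; the names here are illustrative:

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (ContT, runContT, callCC)

recur :: Monad m => ContT r m (ContT r m ())
recur = callCC $ \k -> let r = k r in r >> return r

-- 'l' is the rest of this block, captured as a label; running it jumps back here,
-- so this prints "start" once and then "again" forever
looping :: ContT () IO ()
looping = do
  lift $ putStrLn "start"
  l <- recur
  lift $ putStrLn "again"
  l

-- to run it: runContT looping return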
The example executes in the ContT () IO monad, a monad that allows continuations resulting in () along with lifted IO.
type ExM a = ContT () IO a
ContT can be an incredibly confusing monad to work in, but I've found that Haskell's equational reasoning is a powerful tool for disentangling it. The remainder of this answer examines the original example in several steps, each powered by syntactic transforms and pure renamings.
So, let's first examine the type of the callCC part—it's ultimately the heart of this entire piece of code. That chunk is responsible for generating a strange kind of tuple as its monadic value.
type ContAndPrev = (Int -> ExM (), Int)

getContAndPrev :: ExM ContAndPrev
getContAndPrev = callCC $ \k -> let f x = k (f, x)
                                in return (f, 0)
This can be made a little bit more familiar by sectioning it with (>>=), which is exactly how it would be used in a real context—any do-block desugaring will create the (>>=) for us eventually.
withContAndPrev :: (ContAndPrev -> ExM ()) -> ExM ()
withContAndPrev go = getContAndPrev >>= go
and finally we can examine what it actually looks like at the call site. To be more clear, I'll desugar the original example a little bit:
flip runContT return $ do
  lift (putStrLn "alpha")
  withContAndPrev $ \(k, num) -> do
    lift $ putStrLn "beta"
    lift $ putStrLn "gamma"
    if num < 5
      then k (num + 1) >> return ()
      else lift $ print num
Notice that this is a purely syntactic transformation. The code is identical to the original example, but it highlights the existence of this indented block under withContAndPrev. This is the secret to understanding Haskell's callCC: withContAndPrev is given access to the entire "rest of the do block", which it gets to choose how to use.
Let's ignore the actual implementation of withContAndPrev and just see if we can create the behavior we saw in running the example. It's fairly tricky, but what we want to do is pass into the block the ability to call itself. Haskell being as lazy and recursive as it is, we can write that directly.
withContAndPrev' :: (ContAndPrev -> ExM ()) -> ExM ()
withContAndPrev' = go 0 where
  go n next = next (\i -> go i next, n)
This is still something of a recursive headache, but it might be easier to see how it works. We're taking the remainder of the do block and creating a looping construct called go. We pass into the block a function that calls our looper, go, with a new integer argument and returns the prior one.
We can begin to unroll this code a bit by making a few more syntactic changes to the original code.
maybeCont :: (Int -> ExM ()) -> Int -> ExM ()
maybeCont k n | n < 5     = k (n + 1)
              | otherwise = lift (print n)

bg :: ExM ()
bg = lift $ putStrLn "beta" >> putStrLn "gamma"

flip runContT return $ do
  lift (putStrLn "alpha")
  withContAndPrev' $ \(k, num) -> bg >> maybeCont k num
And now we can examine what this looks like when bg >> maybeCont k num gets passed into withContAndPrev'.
let go n next = next (\i -> go i next, n)
    next      = \(k, num) -> bg >> maybeCont k num
in go 0 next

(\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 0)
bg >> maybeCont (\i -> go i next) 0
bg >> (\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 1)
bg >> bg >> maybeCont (\i -> go i next) 1
bg >> bg >> (\(k, num) -> bg >> maybeCont k num) (\i -> go i next, 2)
bg >> bg >> bg >> maybeCont (\i -> go i next) 2
bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 3
bg >> bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 4
bg >> bg >> bg >> bg >> bg >> bg >> maybeCont (\i -> go i next) 5
bg >> bg >> bg >> bg >> bg >> bg >> lift (print 5)
So clearly our fake implementation recreates the behavior of the original loop. It might be slightly more clear how our fake behavior achieves that by tying a recursive knot using the "rest of the do block" which it receives as an argument.
Armed with this knowledge, we can take a closer look at callCC. We'll again profit by initially examining it in its pre-bound form. It's extremely simple, if weird, in this form.
withCC gen block = callCC gen >>= block
withCC gen block = block (gen block)
In other words, we use the argument to callCC, gen, to generate the return value of callCC, but we pass into gen the very continuation block that we end up applying the value to. It's recursively trippy, but denotationally clear—callCC is truly "call this block with the current continuation".
withCC (\k -> let f x = k (f, x)
              in return (f, 0)) next

next (let f x = next (f, x) in return (f, 0))
The actual implementation details of callCC are a bit more challenging since they require that we find a way to define callCC from the semantics of (callCC >>=) but that's mostly ignorable. At the end of the day, we profit from the fact that do blocks are written so that each line gets the remainder of the block bound to it with (>>=) which provides a natural notion of continuation immediately.
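For concreteness, here is the head of the original example with the do-notation desugared by hand (mainDesugared is an illustrative name; the desugaring itself is standard), which makes the "remainder of the block" explicit as the function on the right of each (>>=):

import Control.Monad.Trans.Class
import Control.Monad.Trans.Cont

mainDesugared :: IO ()
mainDesugared = flip runContT return $
  lift (putStrLn "alpha") >>= \_ ->
  callCC (\k -> let f x = k (f, x) in return (f, 0)) >>= \(k, num) ->
  lift (putStrLn "beta") >>= \_ ->
  lift (putStrLn "gamma") >>= \_ ->
  if num < 5
    then k (num + 1) >> return ()
    else lift (print num)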
why is it necessary to have a let expression in the callCC to "return"
the continuation so that it can be used later on
That's the exact use of a continuation, i.e. capture the current execution context and then later use this captured continuation to jump back to that execution context.
It seems that you are confused by the function name callCC, which may suggest to you that it is calling a continuation, but actually it is creating (capturing) the current continuation.
I have a function in my main block
map anyHeavyFunction [list]
I'd like to show a progress bar during the computation process or add additional actions (pause, stop process etc.), but because map is a pure function I can't do it directly. I can guess I have to use monads, but what monad is appropriate? IO, State?
I know there is at least one library on Hackage that has some pre-made monad transformers for this task, but I normally turn to the pipes package to roll my own when I need one. I am using pipes-4.0.0; it is going to be on Hackage this weekend, but you can grab it from the github repo before that.
I also used the terminal-progress-bar package so that it makes a nice terminal animation as well.
{-# LANGUAGE BangPatterns #-}
import Pipes
import qualified Pipes.Prelude as P
import Control.Monad.IO.Class
import System.ProgressBar
import System.IO ( hSetBuffering, BufferMode(NoBuffering), stdout )

-- | Takes the total size of the stream to be processed as l
progress l = loop 0
  where
    loop n = do
      liftIO $ progressBar (msg "Working") percentage 40 n l
      !x <- await -- bang pattern to make strict
      yield x
      loop (n+1)

main = do
  -- Force progress bar to print immediately
  hSetBuffering stdout NoBuffering
  let n = 10^6
  let heavy x = last . replicate n $ x -- time wasting function
  r <- P.toListM $ each [1..100] >-> P.map heavy >-> progress 100
  putStrLn ""
  return r
This animates:
> Working [=>.......................] 7%
> Working [=====>...................] 20%
Every update erases the last bar so it only take up one line on the terminal. Then it finishes like so:
> main
Working [=========================] 100%
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100]
Here's a (kind of) simple answer that I'm not satisfied with. It is based on the fact that @shellenberg wanted to apply a heavy function to each element of a (supposedly long) list. If it suffices to move the "progress bar" once for every element of the list, then the following can be turned into a general solution.
First of all, you need to pick the monad in which you'll work. This depends on what exactly your "progress bar" is. For this discussion, let's say that the IO monad is enough and that we want to alternately display the characters -, /, | and \. You'll also (most probably) need some kind of state S (here it is only the number of elements processed so far, therefore S is Int), so the real monad used will be StateT S IO.
Suppose your original program is:
m = 100000   -- how many elements the list has

-- Your (pure) function
anyHeavyFunction :: Int -> Bool
anyHeavyFunction n =
  length [1..n] + length [n+1..4217] == 4217

-- Your list
list :: [Int]
list = take m $ repeat 4217

-- The main program
main :: IO ()
main = do
  let l = map anyHeavyFunction list
  if and l
    then putStrLn "OK"
    else putStrLn "WRONG"
(Notice that, very conveniently, the heavy function takes the same time for each element of the list.)
This is how you could convert it to display the crude "progress bar":
import Control.Monad.State
import System.IO (hFlush, stdout)

m = 100000   -- how many elements the list has
k = 5000     -- how often you want to "tick"

tick :: a -> StateT Int IO a
tick x = do
  s <- get
  put $ s+1
  when (s `mod` k == 0) $ liftIO $ do
    let r = (s `div` k) `mod` 4
    putChar $ "-/|\\" !! r
    putChar '\b'
    hFlush stdout
  x `seq` return x

-- Your (pure) function
anyHeavyFunction :: Int -> Bool
anyHeavyFunction n =
  length [1..n] + length [n+1..4217] == 4217

-- Your list
list :: [Int]
list = take m $ repeat 4217

-- The main program
main :: IO ()
main = do
  l <- flip evalStateT 0 $ mapM (tick . anyHeavyFunction) list
  if and l
    then putStrLn "OK"
    else putStrLn "WRONG"
An interesting point: the seq in tick forces evaluation of the result for each element of the list. This is enough if the result has a basic type (Bool here). Otherwise it's not clear what you would want to do; remember, Haskell is lazy!
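One hedged option for results that are not of a basic type is to force them completely with the deepseq package (this is a suggestion of mine, not part of the solution above; it requires an NFData instance for the result type):

import Control.DeepSeq (NFData, force)
import Control.Monad.State (StateT)

-- like tick above, but fully evaluates the result (not just to WHNF)
-- before counting the tick; tick is the function defined in this answer
tickDeep :: NFData a => a -> StateT Int IO a
tickDeep = tick . force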
If one wants a finer progress bar, or if one is not satisfied with the assumption that one "tick" will be counted for each element of the list, then I believe it's necessary to incorporate the ticking in the logic of the heavy function. This makes it ugly... I'd like to see what kind of general solutions can be suggested for that. I'm all for Haskell, but I think it just sucks for such things as progress bars... There's no free lunch; you can't be pure and lazy and have your progress bars made easy!
EDIT: A version which uses the System.ProgressBar module suggested by @Davorak. It certainly looks nicer than my rotating bar.
import Control.Monad.State
import System.ProgressBar
import System.IO (hSetBuffering, BufferMode(NoBuffering), stdout)

m = 100000   -- how many elements the list has
k = 5000     -- how often you want to "tick"

tick :: a -> StateT Int IO a
tick x = do
  s <- get
  put $ s+1
  when (s `mod` k == 0) $ liftIO $ do
    progressBar (msg "Working") percentage 40 (toInteger s) (toInteger m)
  x `seq` return x

-- Your (pure) function
anyHeavyFunction :: Int -> Bool
anyHeavyFunction n =
  length [1..n] + length [n+1..4217] == 4217

-- Your list
list :: [Int]
list = take m $ repeat 4217

-- The main program
main :: IO ()
main = do
  hSetBuffering stdout NoBuffering
  l <- flip evalStateT 0 $ mapM (tick . anyHeavyFunction) list
  if and l
    then putStrLn "OK"
    else putStrLn "WRONG"
The idea is the same, the drawbacks too.
You could use parMap to apply the expensive function in parallel (if the dependencies permit), together with a list of TVars, one corresponding to each list element (or chunk of elements), and set them once the respective function application has completed. A separate thread could check on the values and update the display (obviously some IO action would happen here).
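A hedged sketch of that idea follows, using forkIO for the parallelism instead of parMap (a pure parMap cannot write TVars itself); all names here are illustrative:

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM (newTVarIO, atomically, writeTVar, readTVarIO)
import Control.Exception (evaluate)
import Control.Monad (forM, forM_, unless)
import Data.Maybe (catMaybes, isJust)

-- run f over xs in separate threads; one TVar per element records completion,
-- and the main thread polls them to report progress
progressMap :: (a -> b) -> [a] -> IO [b]
progressMap f xs = do
  slots <- forM xs (const (newTVarIO Nothing))
  forM_ (zip slots xs) $ \(slot, x) -> forkIO $ do
    y <- evaluate (f x)                 -- force the result (to WHNF) in the worker
    atomically (writeTVar slot (Just y))
  let poll = do
        done <- length . filter isJust <$> mapM readTVarIO slots
        putStrLn ("done: " ++ show done ++ "/" ++ show (length xs))
        unless (done == length xs) (threadDelay 100000 >> poll)
  poll
  catMaybes <$> mapM readTVarIO slots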
Solving a problem from Google Code Jam (2009.1A.A: "Multi-base happiness") I came up with an awkward (code-wise) solution, and I'm interested in how it could be improved.
The problem description, shortly, is: find the smallest number bigger than 1 for which iteratively calculating the sum of squares of digits reaches 1, for all bases from a given list. (For a single base, e.g. base 10, the number 7 qualifies: 7 → 49 → 97 → 130 → 10 → 1.)
Or description in pseudo-Haskell (code that would solve it if elem could always work for infinite lists):
solution =
head . (`filter` [2..]) .
all ((1 `elem`) . (`iterate` i) . sumSquareOfDigitsInBase)
And my awkward solution:
By awkward I mean it has this kind of code: happy <- lift . lift . lift $ isHappy Set.empty base cur
I memoize results of the isHappy function, using the State monad for the Map of memoized results.
Trying to find the first solution, I did not use head and filter (like the pseudo-haskell above does), because the computation isn't pure (changes state). So I iterated by using StateT with a counter, and a MaybeT to terminate the computation when condition holds.
Already inside a MaybeT (StateT a (State b)), if the condition doesn't hold for one base, there is no need to check the other ones, so I have another MaybeT in the stack for that.
Code:
import Control.Monad.Maybe
import Control.Monad.State
import Data.Maybe
import qualified Data.Map as Map
import qualified Data.Set as Set

type IsHappyMemo = State (Map.Map (Integer, Integer) Bool)

isHappy :: Set.Set Integer -> Integer -> Integer -> IsHappyMemo Bool
isHappy _ _ 1 = return True
isHappy path base num = do
    memo <- get
    case Map.lookup (base, num) memo of
      Just r -> return r
      Nothing -> do
        r <- calc
        when (num < 1000) . modify $ Map.insert (base, num) r
        return r
  where
    calc
      | num `Set.member` path = return False
      | otherwise = isHappy (Set.insert num path) base nxt
    nxt =
      sum . map ((^ (2::Int)) . (`mod` base)) .
      takeWhile (not . (== 0)) . iterate (`div` base) $ num

solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases =
    fmap snd .
    (`runStateT` 2) .
    runMaybeT .
    forever $ do
      (`when` mzero) . isJust =<<
        runMaybeT (mapM_ f bases)
      lift $ modify (+ 1)
  where
    f base = do
      cur <- lift . lift $ get
      happy <- lift . lift . lift $ isHappy Set.empty base cur
      unless happy mzero

solve :: [String] -> String
solve =
    concat .
    (`evalState` Map.empty) .
    mapM f .
    zip [1 :: Integer ..]
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"

main :: IO ()
main =
  getContents >>=
  putStr . solve . tail . lines
Other contestants using Haskell did have nicer solutions, but solved the problem differently. My question is about small iterative improvements to my code.
Your solution is certainly awkward in its use (and abuse) of monads:
It is usual to build monads piecemeal by stacking several transformers
It is less usual, but still happens sometimes, to stack several states
It is very unusual to stack several Maybe transformers
It is even more unusual to use MaybeT to interrupt a loop
Your code is a bit too pointless (point-free):
(`when` mzero) . isJust =<<
runMaybeT (mapM_ f bases)
instead of the easier to read
happy <- fmap isJust (runMaybeT (mapM_ f bases))
when happy mzero
Focusing now on function solve1, let us simplify it.
An easy way to do so is to remove the inner MaybeT monad. Instead of a forever loop which breaks when a happy number is found, you can go the other way around and recurse only if the number is not happy.
Moreover, you don't really need the StateT counter either, do you? One can always replace that state with an explicit argument.
Applying these ideas, solve1 now looks much better:
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = go 2 where
  go i = do
    happyBases <- mapM (\b -> isHappy Set.empty b i) bases
    if and happyBases
      then return i
      else go (i+1)
I would be more than happy with that code.
The rest of your solution is fine.
One thing that bothers me is that you throw away the memo cache for every subproblem. Is there a reason for that?
solve :: [String] -> String
solve =
    concat .
    (`evalState` Map.empty) .
    mapM f .
    zip [1 :: Integer ..]
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s ++ "\n"
Wouldn't your solution be more efficient if you reused it instead ?
solve :: [String] -> String
solve cases = (`evalState` Map.empty) $ do
    solutions <- mapM f (zip [1 :: Integer ..] cases)
    return (unlines solutions)
  where
    f (idx, prob) = do
      s <- solve1 . map read . words $ prob
      return $ "Case #" ++ show idx ++ ": " ++ show s
The Monad* classes exist to remove the need for repeated lifting. If you change your signatures like this:
type IsHappyMemo = Map.Map (Integer, Integer) Bool
isHappy :: MonadState IsHappyMemo m => Set.Set Integer -> Integer -> Integer -> m Bool
This way you can remove most of the lifts. However, the longest sequence of lifts cannot be removed, since it is a State monad inside a StateT: using the MonadState type class will give you the outer StateT, whereas you need to get to the inner State. You could wrap your State monad in a newtype and make a MonadHappy class, similar to the existing monad classes.
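A hedged sketch of that newtype-plus-class idea (the class methods and names are my own guesses at what it might look like, not code from the original solution):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State (StateT, gets, modify)
import qualified Data.Map as Map

type IsHappyMemo = Map.Map (Integer, Integer) Bool  -- as redefined above

newtype HappyT m a = HappyT { runHappyT :: StateT IsHappyMemo m a }
  deriving (Functor, Applicative, Monad)

class Monad m => MonadHappy m where
  lookupMemo :: (Integer, Integer) -> m (Maybe Bool)
  insertMemo :: (Integer, Integer) -> Bool -> m ()

instance Monad m => MonadHappy (HappyT m) where
  lookupMemo k   = HappyT (gets (Map.lookup k))
  insertMemo k v = HappyT (modify (Map.insert k v))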
ListT (from the List package) does a much nicer job than MaybeT in stopping the calculation when necessary.
solve1 :: [Integer] -> IsHappyMemo Integer
solve1 bases = do
    Cons result _ <- runList . filterL cond $ fromList [2..]
    return result
  where
    cond num = andL . mapL (\base -> isHappy Set.empty base num) $ fromList bases
Some elaboration on how this works:
Had we used a regular list, the code would have looked like this:
solve1 bases = do
    result:_ <- filterM cond [2..]
    return result
  where
    cond num = fmap and . mapM (\base -> isHappy Set.empty base num) $ bases
This calculation happens in a State monad, but if we'd like to get the resulting state, we'd have a problem, because filterM runs the monadic predicate it gets for every element of [2..], an infinite list.
With the monadic list, filterL cond (fromList [2..]) represents a list that we can access one item at a time as a monadic action, so our monadic predicate cond isn't actually executed (and doesn't affect the state) unless we consume the corresponding list items.
Similarly, implementing cond with andL means we don't calculate (and update the state) any further once one of the isHappy calculations returns False.