One irritation with lazy IO came to my attention recently:
import System.IO
import Control.Applicative
main = withFile "test.txt" ReadMode getLines >>= mapM_ putStrLn
  where getLines h = lines <$> hGetContents h
Due to lazy IO, the above program prints nothing. So I imagined this could be solved with a strict version of fmap. And indeed, I did come up with just such a combinator:
forceM :: Monad m => m a -> m a
forceM m = do v <- m; return $! v
(<$!>) :: Monad m => (a -> b) -> m a -> m b
f <$!> m = liftM f (forceM m)
Replacing <$> with <$!> does indeed alleviate the problem. However, I am not satisfied: <$!> has a Monad constraint, which feels too tight; its companion <$> requires only Functor.
Is there a way to write <$!> without the Monad constraint? If so, how? If not, why not? I've tried throwing strictness all over the place, to no avail (the following code does not work as desired):
forceF :: Functor f => f a -> f a
forceF m = fmap (\x -> seq x x) $! m
(<$!>) :: Functor f => (a -> b) -> f a -> f b
f <$!> m = fmap (f $!) $! (forceF $! m)
I don't think it's possible, and also the monadic forceM doesn't work for all monads:
module Force where
import Control.Monad.State.Lazy
forceM :: Monad m => m a -> m a
forceM m = do v <- m; return $! v
(<$!>) :: Monad m => (a -> b) -> m a -> m b
f <$!> m = liftM f (forceM m)
test :: Int
test = evalState (const 1 <$!> undefined) True
And the evaluation:
Prelude Force> test
1
forceM needs a strict enough (>>=) to actually force the result of its argument. Functor doesn't even have a (>>=). I don't see how one could write an effective forceF. (That doesn't prove it's impossible, of course.)
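To make the limitation concrete, here is a minimal, hedged sketch using a hypothetical Box type (not from the question): seq on a functorial value only forces the outer structure, and fmap applies its function lazily, so the element inside is never demanded.
-- Hypothetical Box type, for illustration only.
data Box a = Box a

instance Functor Box where
  fmap f (Box a) = Box (f a)   -- f is applied lazily; a itself is never demanded

-- Even with seq applied to the functorial value, only the Box constructor
-- is forced, never the element inside it:
demo :: String
demo = fmap (+ 1) (Box (undefined :: Int)) `seq` "constructor forced, element untouched"

-- ghci> demo
-- "constructor forced, element untouched"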
About the suggested "duplicate"
Someone pointed to Is this a case for foldM? as a possible duplicate. Now, I have a strong opinion that two questions which can be answered with identical answers are not necessarily duplicates! "What is 1 - 2?" and "What is i^2?" both yield "-1", but no, they are not duplicate questions. My question (which has already been answered, kind of) was about whether the function iterateM exists in the Haskell standard library, not about how to implement a chained monadic action.
The question
While working on some projects, I found myself writing this combinator:
repeatM :: Monad m => Int -> (a -> m a) -> a -> m a
repeatM 0 _ a = return a
repeatM n f a = (repeatM (n-1) f) =<< f a
It simply performs a monadic action n times, feeding the previous result into the next action. I searched Hoogle and Google but did not find anything that ships with "standard" Haskell. Is such a function already defined somewhere?
You can use foldM, e.g.:
import Control.Monad
f a = do print a; return (a+2)
repeatM n f a0 = foldM (\a _ -> f a) a0 [1..n]
test = repeatM 5 f 3
-- output: 3 5 7 9 11
Carsten mentioned replicate, and that's not a bad thought.
import Control.Monad
repeatM n f = foldr (>=>) pure (replicate n f)
The idea behind this is that for any monad m, the functions of type a -> m b form the Kleisli category of m, with identity arrows
pure :: a -> m a
(also called return)
and composition operator
(<=<) :: (b -> m c) -> (a -> m b) -> a -> m c
f <=< g = \a -> f =<< g a
Since we're actually dealing with a function of type a -> m a, we're really looking at a monoid within the Kleisli category (the endomorphisms on a), so we can think about folding lists of these arrows.
What the code above does is fold the composition operator, flipped, into a list of n copies of f, finishing off with an identity as usual. Flipping the composition operator actually puts us into the dual category; for many common monads, x >=> y >=> z >=> w is more efficient than w <=< z <=< y <=< x; since all the arrows are the same in this case, it seems we might as well. Note that for the lazy state monad and likely also the reader monad, it may be better to use the unflipped <=< operator; >=> will generally be better for IO, ST s, and the usual strict state.
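As a quick sanity check (hedged: it reuses a printing step function like the one from the foldM answer above; the last line is the final result printed by GHCi), this version behaves just like the original repeatM:
ghci> repeatM 3 (\a -> print a >> pure (a + 2)) 0
0
2
4
6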
Notice: I am no category theorist, so there may be errors in the explanation above.
I find myself wanting this function often, and I wish it had a standard name. That name, however, should not be repeatM: for consistency with other libraries, that name belongs to an infinite repeat, like forever (and some libraries do define repeatM that way).
Just as another perspective from the answers already given, I point out that (s -> m s) looks a bit like an action in a State monad with state type s.
In fact, it is isomorphic to StateT s m () - an action which returns no value, because all the work it does is encapsulated in the way it changes the state. In this monad, the function you wanted really is replicateM. You can write it this way in haskell although it probably looks uglier than just writing it directly.
First convert s -> m s to the equivalent form which StateT uses, adding the information-free (), using liftM to map a function over the return type.
> :t \f -> liftM (\x -> ((),x)) . f
\f -> liftM (\x -> ((),x)) . f :: Monad m => (a -> m t) -> a -> m ((), t)
(We could have used fmap, but the Monad constraint seems clearer here; TupleSections would also do the job. If you find do notation easier to read, it is simply \f s -> do x <- f s; return ((), x).)
Now this has the right type to wrap up with StateT:
> :t StateT . \f -> liftM (\x -> ((),x)) . f
StateT . \f -> liftM (\x -> ((),x)) . f :: Monad m => (s -> m s) -> StateT s m ()
and then you can replicate it n times, using the replicateM_ version because the returned list [()] from replicateM would not be interesting:
> :t \n -> replicateM_ n . StateT . \f -> liftM (\x -> ((),x)) . f
\n -> replicateM_ n . StateT . \f -> liftM (\x -> ((),x)) . f :: Monad m => Int -> (s -> m s) -> StateT s m ()
and finally you can use execStateT to go back to the Monad you were originally working in:
runNTimes :: Monad m => Int -> (s -> m s) -> s -> m s
runNTimes n act =
    execStateT . replicateM_ n . StateT . (\f -> liftM (\x -> ((),x)) . f) $ act
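A quick, hedged check that runNTimes agrees with the original repeatM, here with a step function that prints the current state and appends to it (the last line is the final result printed by GHCi):
ghci> runNTimes 4 (\s -> print s >> return (s ++ "!")) "x"
"x"
"x!"
"x!!"
"x!!!"
"x!!!!"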
I have a function that applies another function to a file if the file exists:
import System.Directory
import Data.Maybe
applyToFile :: (FilePath -> IO a) -> FilePath -> IO (Maybe a)
applyToFile f p = doesFileExist p >>= apply
  where
    apply True  = f p >>= (pure . Just)
    apply False = pure Nothing
Usage example:
applyToFile readFile "/tmp/foo"
applyToFile (\p -> writeFile p "bar") "/tmp/foo"
A level of abstraction can be added with:
import System.Directory
import Data.Maybe
applyToFileIf :: (FilePath -> IO Bool) -> (FilePath -> IO a) -> FilePath -> IO (Maybe a)
applyToFileIf f g p = f p >>= apply
  where
    apply True  = g p >>= (pure . Just)
    apply False = pure Nothing
applyToFile :: (FilePath -> IO a) -> FilePath -> IO (Maybe a)
applyToFile f p = applyToFileIf doesFileExist f p
That allows usage like:
applyToFileIf (\p -> doesFileExist p >>= (pure . not)) (\p -> writeFile p "baz") "/tmp/baz"
I have the feeling that I just scratched the surface and there is a more generic pattern hiding.
Are there better abstractions or more idiomatic ways to do this?
applyToFileIf can be given a more generic type and a more generic name
applyToIf :: Monad m => (a -> m Bool) -> (a -> m b) -> a -> m (Maybe b)
applyToIf f g p = f p >>= apply
  where
    apply True  = g p >>= (return . Just)
    apply False = return Nothing
In the type of applyToIf we see the composition of two Monads
                                           Maybe is a monad ---v
applyToIf :: Monad m => (a -> m Bool) -> (a -> m b) -> a -> m (Maybe b)
                              ^-------- m is a monad -------^
When we see the composition of two monads, we can expect that it could be replaced with a monad transformer stack and some class describing what that monad transformer adds. The MaybeT transformer replaces m (Maybe a)
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }
And adds MonadPlus to what an m can do.
instance (Monad m) => MonadPlus (MaybeT m) where ...
We'll change the type of applyToIf to not have a composition of two monads and instead have a MonadPlus constraint on a single monad
import Control.Monad
applyToIf :: MonadPlus m => (a -> m Bool) -> (a -> m b) -> a -> m b
applyToIf f g p = f p >>= apply
  where
    apply True  = g p
    apply False = mzero
This could be rewritten in terms of guard from Control.Monad and given a more generic name.
guardBy :: MonadPlus m => (a -> m Bool) -> (a -> m b) -> a -> m b
guardBy f g p = f p >>= apply
  where
    apply b = guard b >> g p
The second g argument adds nothing to what guardBy can do. guardBy f g p can be replaced by guardBy f return p >>= g. We will drop the second argument.
guardBy :: MonadPlus m => (a -> m Bool) -> a -> m a
guardBy f p = f p >>= \b -> guard b >> return p
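A small usage sketch in the list monad, which has a MonadPlus instance; here guardBy acts as a monadic filter:
ghci> guardBy (\x -> [even x]) =<< [1..6]
[2,4,6]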
The MaybeT transformer adds possible failure to any computation. We can use it to recreate applyToIf or use it more generally to handle failure through complete programs.
import Control.Monad.Trans.Class
import Control.Monad.Trans.Maybe
applyToIf :: Monad m => (a -> m Bool) -> (a -> m b) -> a -> m (Maybe b)
applyToIf f g = runMaybeT . (>>= lift . g) . guardBy (lift . f)
If you instead rework the program to use monad style classes, it might include a snippet like
import Control.Monad.IO.Class
(MonadPlus m, MonadIO m) =>
...
guardBy (liftIO . doesFileExist) filename >>= liftIO . readFile
How can we prove that the continuation monad has no valid instance of MonadFix?
Well, actually, it's not that there can't be a MonadFix instance, just that the library's type is a bit too constrained. If you define ContT over all possible rs, then not only does MonadFix become possible, but all instances up to Monad require nothing of the underlying type constructor:
{-# LANGUAGE RankNTypes, TupleSections #-}
import Control.Monad.Fix (MonadFix (..))
import Data.Functor ((<&>))

newtype ContT m a = ContT { runContT :: forall r. (a -> m r) -> m r }

instance Functor (ContT m) where
  fmap f (ContT k) = ContT (\kb -> k (kb . f))
instance Applicative (ContT m) where
  pure a = ContT ($ a)
  ContT kf <*> ContT ka = ContT (\kb -> kf (\f -> ka (kb . f)))
instance Monad (ContT m) where
  ContT ka >>= f = ContT (\kb -> ka (\a -> runContT (f a) kb))
instance MonadFix m => MonadFix (ContT m) where
  mfix f = ContT (\ka -> mfixing (\a -> runContT (f a) ka <&> (,a)))
    where mfixing g = fst <$> mfix (\ ~(_, a) -> g a)
Consider the type signature of mfix for the continuation monad.
(a -> ContT r m a) -> ContT r m a
-- expand the newtype
(a -> (a -> m r) -> m r) -> (a -> m r) -> m r
Here's the proof that there's no pure inhabitant of this type.
---------------------------------------------
(a -> (a -> m r) -> m r) -> (a -> m r) -> m r

introduce f, k

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
            m r

apply k

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
             a

dead end, backtrack

f :: a -> (a -> m r) -> m r
k :: a -> m r
---------------------------
            m r

apply f

f :: a -> (a -> m r) -> m r     f :: a -> (a -> m r) -> m r
k :: a -> m r                   k :: a -> m r
---------------------------     ---------------------------
             a                          a -> m r

  dead end                        reflexivity k
As you can see the problem is that both f and k expect a value of type a as an input. However, there's no way to conjure a value of type a. Hence, there's no pure inhabitant of mfix for the continuation monad.
Note that you can't define mfix recursively either because mfix f k = mfix ? ? would lead to an infinite regress since there's no base case. And, we can't define mfix f k = f ? ? or mfix f k = k ? because even with recursion there's no way to conjure a value of type a.
But, could we have an impure implementation of mfix for the continuation monad? Consider the following.
import Control.Concurrent.MVar
import Control.Monad.Cont
import Control.Monad.Fix
import System.IO.Unsafe
instance MonadFix (ContT r m) where
    mfix f = ContT $ \k -> unsafePerformIO $ do
        m <- newEmptyMVar
        x <- unsafeInterleaveIO (readMVar m)
        return . runContT (f x) $ \x' -> unsafePerformIO $ do
            putMVar m x'
            return (k x')
The question that arises is how to apply f to x'. Normally, we'd do this using a recursive let expression, i.e. let x' = f x'. However, x' is not the return value of f. Instead, the continuation given to f is applied to x'. To solve this conundrum, we create an empty mutable variable m, lazily read its value x, and apply f to x. It's safe to do so because f must not be strict in its argument. When f eventually calls the continuation given to it, we store the result x' in m and apply the continuation k to x'. Thus, when we finally evaluate x we get the result x'.
The above implementation of mfix for the continuation monad looks a lot like the implementation of mfix for the IO monad.
import Control.Concurrent.MVar
import Control.Monad.Fix
import System.IO.Unsafe (unsafeInterleaveIO)

instance MonadFix IO where
    mfix f = do
        m <- newEmptyMVar
        x <- unsafeInterleaveIO (takeMVar m)
        x' <- f x
        putMVar m x'
        return x'
Note that in the implementation of mfix for the continuation monad we used readMVar, whereas in the implementation of mfix for the IO monad we used takeMVar. This is because the continuation given to f can be called multiple times, but we only want to store the result given to the first callback. Using readMVar instead of takeMVar ensures that the mutable variable remains full. Hence, if the continuation is called more than once, the second callback will block indefinitely on the putMVar operation.
However, only storing the result of the first callback seems kind of arbitrary. So, here's an implementation of mfix for the continuation monad that allows the provided continuation to be called multiple times. I wrote it in JavaScript because I couldn't get it to play nicely with laziness in Haskell.
// mfix :: (Thunk a -> ContT r m a) -> ContT r m a
const mfix = f => k => {
    const ys = [];
    return (function iteration(n) {
        let i = 0, x;
        return f(() => {
            if (i > n) return x;
            throw new ReferenceError("x is not defined");
        })(y => {
            const j = i++;
            if (j === n) {
                ys[j] = k(x = y);
                iteration(i);
            }
            return ys[j];
        });
    }(0));
};
const example = triple => k => [
{ a: () => 1, b: () => 2, c: () => triple().a() + triple().b() },
{ a: () => 2, b: () => triple().c() - triple().a(), c: () => 5 },
{ a: () => triple().c() - triple().b(), b: () => 5, c: () => 8 },
].flatMap(k);
const result = mfix(example)(({ a, b, c }) => [{ a: a(), b: b(), c: c() }]);
console.log(result);
Here's the equivalent Haskell code, sans the implementation of mfix.
import Control.Monad.Cont
import Control.Monad.Fix
data Triple = Triple { a :: Int, b :: Int, c :: Int } deriving Show
example :: Triple -> ContT r [] Triple
example triple = ContT $ \k ->
    [ Triple 1 2 (a triple + b triple)
    , Triple 2 (c triple - a triple) 5
    , Triple (c triple - b triple) 5 8
    ] >>= k
result :: [Triple]
result = runContT (mfix example) pure
main :: IO ()
main = print result
Notice that this looks a lot like the list monad.
import Control.Monad.Fix
data Triple = Triple { a :: Int, b :: Int, c :: Int } deriving Show
example :: Triple -> [Triple]
example triple =
    [ Triple 1 2 (a triple + b triple)
    , Triple 2 (c triple - a triple) 5
    , Triple (c triple - b triple) 5 8
    ]
result :: [Triple]
result = mfix example
main :: IO ()
main = print result
This makes sense because, after all, the continuation monad is the mother of all monads. I'll leave verifying that my JavaScript implementation of mfix satisfies the MonadFix laws as an exercise for the reader.
I am looking for a function that basically is like mapM on a list -- it performs a series of monadic actions taking every value in the list as a parameter -- and each monadic function returns m (Maybe b). However, I want it to stop after the first parameter that causes the function to return a Just value, not execute any more after that, and return that value.
Well, it'll probably be easier to just show the type signature:
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
where b is the first Just value. The Maybe in the result is from the finding (in case of an empty list, etc.), and has nothing to do with the Maybe returned by the Monadic function.
I can't seem to implement this with a straightforward application of library functions. I could use
findM f xs = fmap (fmap fromJust . find isJust) $ mapM f xs
which will work, but I tested this and it seems that all of the monadic actions are executed before calling find, so I can't rely on laziness here.
ghci> findM (\x -> print x >> return (Just x)) [1,2,3]
1
2
3
-- returning IO (Just 1)
What is the best way to implement this function that won't execute the monadic actions after the first "just" return? Something that would do:
ghci> findM (\x -> print x >> return (Just x)) [1,2,3]
1
-- returning IO (Just 1)
or even, ideally,
ghci> findM (\x -> print x >> return (Just x)) [1..]
1
-- returning IO (Just 1)
Hopefully there is an answer that doesn't use explicit recursion and is a composition of library functions if possible? Or maybe even a point-free one?
One simple point-free solution is using the MaybeT transformer. Whenever we see m (Maybe a) we can wrap it into MaybeT and we get all the MonadPlus functions immediately. Since mplus for MaybeT does exactly what we need (it runs the second action only if the first one resulted in Nothing), msum does exactly what we want:
import Control.Monad
import Control.Monad.Trans.Maybe
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM f = runMaybeT . msum . map (MaybeT . f)
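A hedged check that this short-circuits as requested, even on an infinite list: msum runs the actions left to right and stops at the first Just, so nothing after it is executed.
ghci> findM (\x -> print x >> return (if x == 2 then Just x else Nothing)) [1..]
1
2
Just 2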
Update: In this case we were lucky that there exists a monad transformer (MaybeT) whose mplus has just the semantics we need. But in general it might not be possible to construct such a transformer; MonadPlus has laws that must hold with respect to the other monadic operations. However, all is not lost: we don't actually need a MonadPlus, all we need is a suitable monoid to fold with.
So let's pretend we don't (or can't) have MaybeT. Computing the first value of some sequence of operations is described by the First monoid. We just need a monadic variant that won't execute the right part if the left part already produced a value:
newtype FirstM m a = FirstM { getFirstM :: m (Maybe a) }
instance (Monad m) => Semigroup (FirstM m a) where
    FirstM x <> FirstM y = FirstM $ x >>= maybe y (return . Just)

instance (Monad m) => Monoid (FirstM m a) where
    mempty = FirstM $ return Nothing
This monoid exactly describes the process without any reference to lists or other structures. Now we just fold over the list using this monoid:
findM' :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM' f = getFirstM . mconcat . map (FirstM . f)
Moreover, it allows us to create a more generic (and even shorter) function using Data.Foldable:
findM'' :: (Monad m, Foldable f)
        => (a -> m (Maybe b)) -> f a -> m (Maybe b)
findM'' f = getFirstM . foldMap (FirstM . f)
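Since findM'' only needs Foldable, it also works on structures other than lists; a hedged sketch folding over a Data.Map's elements in key order:
ghci> import qualified Data.Map as Map
ghci> findM'' (\x -> print x >> return (if x > 1 then Just x else Nothing)) (Map.fromList [(1, 1), (2, 2), (3, 3)])
1
2
Just 2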
I like Cirdec's answer if you don't mind recursion, but I think the equivalent fold-based answer is quite pretty.
findM f = foldr test (return Nothing)
  where test x m = do
          curr <- f x
          case curr of
            Just _  -> return curr
            Nothing -> m
A nice little test of how well you understand folds.
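To spell out why the fold stops early, here is an informal unfolding for a two-element list (just a sketch of the reduction, not extra code):
-- findM f [x1, x2]
--   = test x1 (test x2 (return Nothing))
-- test only runs its second argument (the rest of the fold) in the Nothing
-- case, so f x2 is never executed once f x1 produces a Just.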
This should do it:
findM _ [] = return Nothing
findM filter (x:xs) =
do
match <- filter x
case match of
Nothing -> findM filter xs
_ -> return match
If you really want to do it point-free, the following finds something in a list using an Alternative functor, with a fold as in jozefg's answer:
findA :: (Alternative f) => (a -> f b) -> [a] -> f b
findA = flip foldr empty . ((<|>) .)
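For instance, with Maybe as the Alternative functor (a hedged example):
ghci> findA (\x -> if even x then Just x else Nothing) [1, 3, 4, 5]
Just 4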
I don't think we can make the composition (Monad m) => m . Maybe an instance of Alternative, but we could pretend there's an existing function:
-- Left biased choice
(<||>) :: (Monad m) => m (Maybe a) -> m (Maybe a) -> m (Maybe a)
(<||>) left right = left >>= fromMaybe right . fmap (return . Just)
-- Or its hideous points-free version
(<||>) = flip ((.) . (>>=)) (flip ((.) . ($) . fromMaybe) (fmap (return . Just)))
Then we can define findM in the same vein as findA
findM :: (Monad m) => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM = flip foldr (return Nothing) . ((<||>) .)
This can be expressed pretty nicely with the MaybeT monad transformer and Data.Foldable.
import Data.Foldable (msum)
import Control.Monad.Trans.Maybe (MaybeT(..))
findM :: Monad m => (a -> m (Maybe b)) -> [a] -> m (Maybe b)
findM f = runMaybeT . msum . map (MaybeT . f)
And if you change your search function to produce a MaybeT stack, it becomes even nicer:
findM' :: Monad m => (a -> MaybeT m b) -> [a] -> MaybeT m b
findM' f = msum . map f
Or in point-free:
findM' = (.) msum . map
The original version can be made fully point-free as well, but it becomes pretty unreadable:
findM = (.) runMaybeT . (.) msum . map . (.) MaybeT
IORefs, MVars, and TVars can be used to wrap a shared variable in a concurrent context. I've studied concurrent Haskell for a while and now I've encountered some questions. After searching on Stack Overflow and reading through some related questions, my questions are not fully resolved.
According to the IORef documentation, "extending the atomicity to multiple IORefs is problematic". Can someone help explain why a single IORef is safe but more than one IORef is problematic?
modifyMVar is "exception-safe, but only atomic if there are no other producers for this MVar"; see the MVar documentation. The source code shows that modifyMVar simply composes a takeMVar and a putMVar sequentially, indicating that it's not thread-safe if there is another producer. But if there is no producer and all threads behave in the "takeMVar then putMVar" way, is it thread-safe to simply use modifyMVar?
To give a concrete situation, here is the actual problem. I have some shared variables which are never empty, and I want them to be mutable state so that several threads can modify them simultaneously.
OK, it seems that TVar solves everything cleanly. But I'm not satisfied with that, and I'm eager for answers to the questions above. Any help is appreciated.
-------------- re: #GabrielGonzalez BFS interface code ------------------
The code below is my BFS interface using the state monad.
{-# LANGUAGE TypeFamilies, FlexibleContexts #-}
module Data.Graph.Par.Class where
import Data.Ix
import Data.Monoid
import Control.Concurrent
import Control.Concurrent.MVar
import Control.Monad
import Control.Monad.Trans.State
class (Ix (Vertex g), Ord (Edge g), Ord (Path g)) => ParGraph g where
  type Vertex g :: *
  type Edge g :: *
  -- type Path g :: * -- useless
  type VertexProperty g :: *
  type EdgeProperty g :: *
  edges :: g a -> IO [Edge g]
  vertexes :: g a -> IO [Vertex g]
  adjacencies :: g a -> Vertex g -> IO [Vertex g]
  vertexProperty :: Vertex g -> g a -> IO (VertexProperty g)
  edgeProperty :: Edge g -> g a -> IO (EdgeProperty g)
  atomicModifyVertexProperty :: (VertexProperty g -> IO (VertexProperty g)) ->
                                Vertex g -> g a -> IO (g a) -- fixed
spanForest :: ParGraph g => [Vertex g] -> StateT (g a) IO ()
spanForest roots = parallelise (map spanTree roots) -- parallel version
spanForestSeq :: ParGraph g => [Vertex g] -> StateT (g a) IO ()
spanForestSeq roots = forM_ roots spanTree -- sequencial version
spanTree :: ParGraph g => Vertex g -> StateT (g a) IO ()
spanTree root = spanTreeOneStep root >>= \res -> case res of
  []   -> return ()
  adjs -> spanForestSeq adjs
spanTreeOneStep :: ParGraph g => Vertex g -> StateT (g a) IO [Vertex g]
spanTreeOneStep v = StateT $ \g -> adjacencies g v >>= \adjs -> return (adjs, g)
parallelise :: (ParGraph g, Monoid b) => [StateT (g a) IO b] -> StateT (g a) IO b
parallelise [] = return mempty
parallelise ss = syncGraphOp $ map forkGraphOp ss
forkGraphOp :: (ParGraph g, Monoid b) => StateT (g a) IO b -> StateT (g a) IO (MVar b)
forkGraphOp t = do
  s <- get
  mv <- mapStateT (forkHelper s) t
  return mv
  where
    forkHelper s x = do
      mv <- newEmptyMVar
      forkIO $ x >>= \(b, s) -> putMVar mv b
      return (mv, s)
syncGraphOp :: (ParGraph g, Monoid b) => [StateT (g a) IO (MVar b)] -> StateT (g a) IO b
syncGraphOp [] = return mempty
syncGraphOp ss = collectMVars ss >>= waitResults
  where
    collectMVars [] = return []
    collectMVars (x:xs) = do
      mvx <- x
      mvxs <- collectMVars xs
      return (mvx:mvxs)
    waitResults mvs = StateT $ \g -> forM mvs takeMVar >>= \res -> return ((mconcat res), g)
Modern processors offer a compare-and-swap instruction that atomically modifies a single pointer. I expect that if you dig deep enough, you will find this instruction is the one used to implement atomicModifyIORef. It is therefore easy to provide atomic access to a single pointer. However, because there is no such hardware support for more than one pointer, whatever you need will have to be done in software. This typically involves inventing and manually enforcing a protocol in all your threads, which is complicated and error-prone.
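To make the multi-IORef problem concrete, here is a hedged sketch (a hypothetical transfer example, not from the question): each individual update is atomic on its own IORef, but the pair of updates is not, so a thread interleaved between them can observe an inconsistent total.
import Data.IORef

-- Move n units from one counter to another. Each atomicModifyIORef' is
-- atomic on its own IORef, but a reader running between the two calls
-- sees n units that are in neither counter.
transfer :: IORef Int -> IORef Int -> Int -> IO ()
transfer from to n = do
  atomicModifyIORef' from (\x -> (x - n, ()))
  atomicModifyIORef' to   (\x -> (x + n, ()))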
Yes, if all threads agree to only use the "single takeMVar followed by a single putMVar" behavior, then modifyMVar is safe.
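As a hedged illustration of that discipline, here is a shared counter where every thread only ever calls modifyMVar_ (one takeMVar followed by one putMVar), so the final count is deterministic:
import Control.Concurrent
import Control.Concurrent.MVar
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  done    <- newEmptyMVar
  forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
    replicateM_ 1000 (modifyMVar_ counter (pure . (+ 1)))
    putMVar done ()
  replicateM_ 10 (takeMVar done)   -- wait for all workers
  readMVar counter >>= print       -- always prints 10000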