At first glance I thought these two functions would work the same:
firstM _ [] = return Nothing
firstM p (x:xs) = p x >>= \r -> if r then return (Just x) else firstM p xs
firstM' p xs = fmap listToMaybe (mapM p xs)
But they don't. In particular, firstM stops as soon as the first p x is true. But firstM', because of mapM, needs to evaluate the whole list.
Is there a "lazy mapM" that enables the second definition, or at least one that doesn't require explicit recursion?
There isn't (can't be) a safe, Monad-polymorphic lazy mapM. But the monad-loops package contains many lazy monadic variants of various pure functions, and includes firstM.
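For instance, a minimal sketch using that package (assuming monad-loops is installed; Control.Monad.Loops.firstM has the same type as the hand-written firstM above):

import Control.Monad.Loops (firstM)
import System.Directory (doesFileExist)

-- Stops at the first path that exists; later paths are never checked.
findFirstExisting :: [FilePath] -> IO (Maybe FilePath)
findFirstExisting = firstM doesFileExist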
One solution is to use ListT, the list monad transformer. This type interleaves side effects and results, so you can peek at the initial element without running the whole computation first.
Here's an example using ListT:
import Control.Monad
import qualified ListT
firstM :: Monad m => (a -> Bool) -> [a] -> m (Maybe a)
firstM p = ListT.head . mfilter p . ListT.fromFoldable
(Note that the ListT defined in transformers and mtl is buggy and should not be used. The version I linked above should be okay, though.)
If there is, I doubt it's called mapM.
As I recall, mapM is defined in terms of sequence:
mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM f = sequence . map f
and the whole point of sequence is to guarantee that all the side-effects are done before giving you anything.
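For reference, the classic list-only definition (roughly as in the Haskell 2010 Prelude) makes this explicit: the whole list is threaded through a chain of >>=s before the result is returned.

sequence :: Monad m => [m a] -> m [a]
sequence = foldr mcons (return [])
  where mcons m ms = m >>= \x -> ms >>= \xs -> return (x : xs)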
Rather than using some alternate mapM, you could get away with just map and sequence, changing the container from [a] to Maybe a:
firstM p xs = sequence $ listToMaybe (map p xs)
or even:
firstM p xs = mapM p $ listToMaybe xs
This works now that mapM and sequence operate on generic Traversables, not just lists.
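A quick GHCi check shows that only the first action ever runs:

Prelude Data.Maybe> sequence $ listToMaybe (map print [1, 2, 3])
1
Just ()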
Related
The following function is not in the standard Haskell libraries, so I wrote it:
paginate _ [] = []
paginate n xs = let (h, t) = splitAt n xs in h : paginate n t
Then I wanted to rewrite it without the explicit recursion yet still be easily understandable.
I've tried the fix version, which does not yield very satisfactory results:
paginate = fix go
where
go _ _ [] = []
go v n xs = let (h, t) = splitAt n xs in h : v n t
Are there simpler solutions which use the Prelude functions?
When I see this sort of problem, I like to think about the "shape" of the function you want. In this case, you're starting with a list and producing a list of lists. This is a special case of starting with a single value and producing a list, which corresponds to an unfold. So, if you don't mind importing Data.List, you can neatly write your function using unfoldr.
Let's take a look at how to derive this step by step. First, as above, you just have to know about unfoldr and see that it can apply. This is where a bit of experience (like reading answers like this one :)) comes in.
Next, we take a look at the type of unfoldr:
Prelude Data.List> :t unfoldr
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
The idea is that we start with a single "kernel" value (b), and apply a step function (b -> Maybe (a, b)) repeatedly until we hit Nothing. Our starting value is the list we want to separate into chunks, so we don't need to do any more processing there. This means the last step we have is implementing the step function.
Since we're starting with a list of values, we can replace b with [n]; moreover, since we want to produce a list of lists at the very end, we can replace [a] with [[n]] and, therefore, a with [n]. This gives us the final type we have to implement for the step function:
[n] -> Maybe ([n], [n])
Hey, that looks very similar to the type of splitAt!
splitAt :: Int -> [a] -> ([a], [a])
In fact, we can just wrap the result of splitAt in Just and satisfy our types. But this leaves out a very important part: the final Nothing that tells unfoldr to stop! So our helper function needs an extra base case for [] to return the right result.
Putting it all together gives us the final function:
import Data.List (unfoldr)
paginate n = unfoldr go
where go [] = Nothing -- base case
go ls = Just (splitAt n ls)
I think the end result is pretty good, but I'm not sure it's any more readable than the recursive version.
If you don't mind enabling an extension (LambdaCase), you can rewrite this in a slightly neater way by avoiding naming go:
paginate n = unfoldr $ \case
[] -> Nothing
ls -> Just (splitAt n ls)
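A quick sanity check in GHCi, with either definition loaded:

*Main> paginate 3 [1..10]
[[1,2,3],[4,5,6],[7,8,9],[10]]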
FYI, following from Tikhon Jelvis' answer, I eventually ended up with:
boolMaybe p f x = if p x then Just (f x) else Nothing
groupWith = unfoldr . boolMaybe (not . null)
paginate = groupWith . splitAt
which is a bit too obfuscated for me. Back to the recursive version.
Here is a slightly changed version of Tikhon Jelvis's code:
import Data.Maybe
import Data.List
import Control.Applicative
paginate n = unfoldr $ \xs -> listToMaybe xs *> Just (splitAt n xs)
Or in pointfree:
paginate n = unfoldr $ (*>) <$> listToMaybe <*> Just . splitAt n
there's always been this:
import Data.List
chunks n = takeWhile (not . null) . unfoldr (Just . splitAt n)
but the new champion, for me, is
import Data.Maybe -- additional imports
import Control.Applicative
chunks n = unfoldr $ (<$) . splitAt n <*> listToMaybe
-- \xs -> splitAt n xs <$ listToMaybe xs
(tweaking just a little bit the code from this comment by #user3237465).
The bits used here are:
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
(f <*> g) x = f x (g x) -- S combinator
x <$ c = fmap (const x) c -- "mapped into"
listToMaybe xs = head (map Just xs ++ [Nothing])
Another, similar way to write it is
import Control.Monad -- an additional import
chunks n = unfoldr $ listToMaybe . join ((<$) . splitAt n)
-- \xs -> listToMaybe (splitAt n xs <$ xs)
using the monadic join for functions,
join f x = f x x -- W combinator
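All of these definitions behave like the explicit paginate above, e.g.:

*Main> chunks 4 [1..10]
[[1,2,3,4],[5,6,7,8],[9,10]]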
The following line
filterM (\x -> Just (x > 0)) [2, 1, 0, -1]
outputs
Just [2,1]
and the line
filterM (\x -> Just (x > 0)) []
shows
Just []
But I would like it to output Nothing. Is there some change that would work for all monadic functions operating on lists? Should I use something else instead of filterM, or can I somehow derive the behaviour I want from the Maybe monad?
You could use mfilter to convert your Just [] to nothing based on a predicate:
mfilter (not . null) . filterM (Just . (>0)) $ []
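Checking both cases in GHCi:

Prelude Control.Monad> mfilter (not . null) . filterM (Just . (>0)) $ []
Nothing
Prelude Control.Monad> mfilter (not . null) . filterM (Just . (>0)) $ [2, 1, 0, -1]
Just [2,1]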
Define:
flatten :: Maybe [a] -> Maybe [a]
flatten Nothing = Nothing
flatten (Just []) = Nothing
flatten x = x
Then
flatten $ filterM (\x -> Just (x > 0)) []
I'm not sure what you mean in terms of working for all monad functions working with lists. There is no way to modify the list monad so that this will happen automatically with any monad function, as this would require effectively changing the bind of some other unknown monad. Likewise, changing all of the monad functions would be a bit time consuming.
My suggestion is to use composability. What we want is to fail in the outer monad m if the inner monad, list, is failing. This should be fairly easy, although I'm not sure what to call it, so I will use failing.
failing :: MonadPlus m => m [a] -> m [a]
failing = (=<<) $ \l -> case l of
[] -> mzero
_ -> return l
Then failing $ filterM (\x -> Just (x > 0)) [] should do what you want.
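With failing in scope, a quick check of both cases:

*Main> failing $ filterM (\x -> Just (x > 0)) []
Nothing
*Main> failing $ filterM (\x -> Just (x > 0)) [2, 1, 0, -1]
Just [2,1]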
I'm not sure why you're involving Maybe at all in this case though (as your filter conditions both only use Just), but I'm assuming you have some other reason for doing this, or some other condition that sometimes fails.
I would like to generate an infinite stream of numbers with the Rand monad from System.Random.MWC.Monad. If only there were a MonadFix instance for this monad, or an instance like this:
instance (PrimMonad m) => MonadFix m where
...
then one could write:
runWithSystemRandom (mfix (\ xs -> uniform >>= \x -> return (x:xs)))
There isn't one though.
I was going through MonadFix docs but I don't see an obvious way of implementing this instance.
You can write a MonadFix instance. However, the code will not generate an infinite stream of distinct random numbers. The argument to mfix is a function that calls uniform exactly once. When the code is run, it will call uniform exactly once, and create an infinite list containing the result.
You can try the equivalent IO code to see what happens:
import System.Random
import Control.Monad.Fix
main = print . take 10 =<< mfix (\xs -> randomIO >>= (\x -> return (x : xs :: [Int])))
It seems that you want to use a stateful random number generator, and you want to run the generator and collect its results lazily. That isn't possible without careful use of unsafePerformIO. Unless you need to produce many random numbers quickly, you can use a pure RNG function such as randomRs instead.
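For instance, a pure-generator sketch along those lines using System.Random (only the initial seed lives in IO; the stream itself is lazy and infinite):

import System.Random (newStdGen, randomRs)

main :: IO ()
main = do
  g <- newStdGen
  let xs = randomRs (0, 100) g :: [Int] -- lazy, infinite stream
  print (take 10 xs)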
A question: how do you wish to generate your initial seed?
The problem is that MWC is built on the "primitive" package, which abstracts only IO and strict ST (Control.Monad.ST.ST s). It does not also abstract lazy ST (Control.Monad.ST.Lazy.ST s).
Perhaps one could add instances to "primitive" to cover lazy ST, and then MWC could be lazy.
UPDATE: I can make this work using Control.Monad.ST.Lazy by using strictToLazyST:
module Main where
import Control.Monad(replicateM)
import qualified Control.Monad.ST as S
import qualified Control.Monad.ST.Lazy as L
import qualified System.Random.MWC as A
foo :: Int -> L.ST s [Int]
foo i = do rest <- foo $! succ i
return (i:rest)
splam :: A.Gen s -> S.ST s Int
splam = A.uniformR (0,100)
getS :: Int -> S.ST s [Int]
getS n = do gen <- A.create
replicateM n (splam gen)
getL :: Int -> L.ST s [Int]
getL n = do gen <- createLazy
replicateM n (L.strictToLazyST (splam gen))
createLazy :: L.ST s (A.Gen s)
createLazy = L.strictToLazyST A.create
makeLots :: A.Gen s -> L.ST s [Int]
makeLots gen = do x <- L.strictToLazyST (A.uniformR (0,100) gen)
rest <- makeLots gen
return (x:rest)
main = do
print (S.runST (getS 8))
print (L.runST (getL 8))
let inf = L.runST (foo 0) :: [Int]
print (take 10 inf)
let inf3 = L.runST (createLazy >>= makeLots) :: [Int]
print (take 10 inf3)
(This would be better suited as a comment to Heatsink's answer, but it's a bit too long.)
MonadFix instances must adhere to several laws. One of them is left shrinking/tightening:
mfix (\x -> a >>= \y -> f x y) = a >>= \y -> mfix (\x -> f x y)
This law allows us to rewrite your expression as
mfix (\xs -> uniform >>= \x -> return (x:xs))
= uniform >>= \x -> mfix (\xs -> return (x:xs))
= uniform >>= \x -> mfix (return . (x :))
Using another law, purity mfix (return . h) = return (fix h), we can further simplify to
= uniform >>= \x -> return (fix (x :))
and using the standard monad laws and rewriting fix (x :) as repeat x
= liftM (\x -> fix (x :)) uniform
= liftM repeat uniform
Therefore, the result is indeed one invocation of uniform and then just repeating the single value indefinitely.
UPDATE: Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
Does mapM not lazily deal with infinite lists?
The code below hangs. However, if I replace line A by line B, it doesn't hang anymore. Alternatively, if I precede line A with a "splitRandom $", it also doesn't hang.
Q1 is: Is mapM not lazy? Otherwise, why does replacing line A with line B "fix" this code?
Q2 is: Why does preceding line A with splitRandom "solve" the problem?
import Control.Monad.Random
import Control.Applicative
f :: (RandomGen g) => Rand g (Double, [Double])
f = do
b <- splitRandom $ sequence $ repeat $ getRandom
c <- mapM return b -- A
-- let c = map id b -- B
a <- getRandom
return (a, c)
splitRandom :: (RandomGen g) => Rand g a -> Rand g a
splitRandom code = evalRand code <$> getSplit
t0 = do
(a, b) <- evalRand f <$> newStdGen
print a
print (take 3 b)
The code generates an infinite list of random numbers lazily. Then it generates a single random number. By using splitRandom, I can evaluate this latter random number first before the infinite list. This can be demonstrated if I return b instead of c in the function.
However, if I apply the mapM to the list, the program now hangs. To prevent this hanging, I have to apply splitRandom again before the mapM. I was under the impression that mapM could handle the list lazily, just like map does.
Well, there's lazy, and then there's lazy. mapM is indeed lazy in that it doesn't do more work than it has to. However, look at the type signature:
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
Think about what this means: You give it a function a -> m b and a bunch of as. A regular map can turn those into a bunch of m bs, but not an m [b]. The only way to combine the bs into a single [b] without the monad getting in the way is to use >>= to sequence the m bs together to construct the list.
In fact, mapM is precisely equivalent to sequence . map.
In general, for any monadic expression, if the value is used at all, the entire chain of >>=s leading to the expression must be forced, so applying sequence to an infinite list can't ever finish.
If you want to work with an unbounded monadic sequence, you'll either need explicit flow control--e.g., a loop termination condition baked into the chain of binds somehow, which simple recursive functions like mapM and sequence don't provide--or a step-by-step sequence, something like this:
data Stream m a = Nil | Stream a (m (Stream m a))
...so that you only force as many monad layers as necessary.
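Here is a minimal sketch of that idea (the helper names repeatM and takeS are made up for illustration): effects are only run for the layers you actually demand.

data Stream m a = Nil | Stream a (m (Stream m a))

-- Build an unbounded stream from a monadic action; nothing runs yet.
repeatM :: Monad m => m a -> m (Stream m a)
repeatM act = do
  x <- act
  return (Stream x (repeatM act))

-- Force only the first n layers of effects.
takeS :: Monad m => Int -> m (Stream m a) -> m [a]
takeS 0 _  = return []
takeS n ms = do
  s <- ms
  case s of
    Nil           -> return []
    Stream x rest -> fmap (x :) (takeS (n - 1) rest)

-- e.g. takeS 5 (repeatM (randomIO :: IO Int)) runs exactly five randomIO actions.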
Edit: Regarding splitRandom, what's going on there is that you're passing it a Rand computation, evaluating that with the seed splitRandom gets, then returning the result. Without the splitRandom, the seed used by the single getRandom has to come from the final result of sequencing the infinite list, hence it hangs. With the extra splitRandom, the seed used only needs to thread through the two splitRandom calls, so it works. The final list of random numbers works because you've left the Rand monad at that point and nothing depends on its final state.
Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
It's not necessarily true. It depends on the monad you're in.
For example, with the identity monad, you can use the result lazily and it terminates fine:
newtype Identity a = Identity a

-- Functor/Applicative instances are needed for this to compile with modern (post-AMP) GHC:
instance Functor Identity where
  fmap f (Identity x) = Identity (f x)
instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity x = Identity (f x)
instance Monad Identity where
  Identity x >>= k = k x
-- "foo" is the infinite list of all the positive integers
foo :: [Integer]
Identity foo = do
q <- mapM return [1..]
return q
main :: IO ()
main = print $ take 20 foo -- [1 .. 20]
Here's an attempt at a proof that mapM return [1..] doesn't terminate. Let's assume for the moment that we're in the Identity monad (the argument will apply to any other monad just as well):
mapM return [1..] -- initial expression
sequence (map return [1 ..]) -- unfold mapM
let k m m' = m >>= \x ->
m' >>= \xs ->
return (x : xs)
in foldr k (return []) (map return [1..]) -- unfold sequence
So far so good...
-- unfold foldr
let k m m' = m >>= \x ->
m' >>= \xs ->
return (x : xs)
go [] = return []
go (y:ys) = k y (go ys)
in go (map return [1..])
-- unfold map so we have enough of a list to pattern-match go:
go (return 1 : map return [2..])
-- unfold go:
k (return 1) (go (map return [2..]))
-- unfold k:
(return 1) >>= \x -> go (map return [2..]) >>= \xs -> return (x:xs)
Recall that return a = Identity a in the Identity monad, and (Identity a) >>= f = f a in the Identity monad. Continuing:
-- unfold >>= :
(\x -> go (map return [2..]) >>= \xs -> return (x:xs)) 1
-- apply 1 to \x -> ... :
go (map return [2..]) >>= \xs -> return (1:xs)
-- unfold >>= :
(\xs -> return (1:xs)) (go (map return [2..]))
Note that at this point we'd love to apply to \xs, but we can't yet! We have to instead continue unfolding until we have a value to apply:
-- unfold map for go:
(\xs -> return (1:xs)) (go (return 2 : map return [3..]))
-- unfold go:
(\xs -> return (1:xs)) (k (return 2) (go (map return [3..])))
-- unfold k:
(\xs -> return (1:xs)) ((return 2) >>= \x2 ->
(go (map return [3..])) >>= \xs2 ->
return (x2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\x2 -> (go (map return [3..])) >>= \xs2 ->
return (x2:xs2)) 2)
At this point, we still can't apply to \xs, but we can apply to \x2. Continuing:
-- apply 2 to \x2 :
(\xs -> return (1:xs)) ((go (map return [3..])) >>= \xs2 ->
return (2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\xs2 -> return (2:xs2)) (go (map return [3..])))
Now we've gotten to a point where neither \xs nor \xs2 can be reduced yet! Our only choice is:
-- unfold map for go, and so on...
(\xs -> return (1:xs))
  ((\xs2 -> return (2:xs2))
    (go ((return 3) : (map return [4..]))))
So you can see that, because of foldr, we're building up a series of functions to apply, starting from the end of the list and working our way back up. Because at each step the input list is infinite, this unfolding will never terminate and we will never get an answer.
This makes sense if you look at this example (borrowed from another StackOverflow thread, I can't find which one at the moment). In the following list of monads:
mebs = [Just 3, Just 4, Nothing]
we would expect sequence to catch the Nothing and return a failure for the whole thing:
sequence mebs = Nothing
However, for this list:
mebs2 = [Just 3, Just 4]
we would expect sequence to give us:
sequence mebs2 = Just [3, 4]
In other words, sequence has to see the whole list of monadic computations, string them together, and run them all in order to come up with the right answer. There's no way sequence can give an answer without seeing the whole list.
Note: The previous version of this answer asserted that foldr computes starting from the back of the list, and wouldn't work at all on infinite lists, but that's incorrect! If the operator you pass to foldr is lazy on its second argument and produces output with a lazy data constructor like a list, foldr will happily work with an infinite list. See foldr (\x xs -> (replicate x x) ++ xs) [] [1..] for an example. But that's not the case with our operator k.
This question illustrates very well the difference between the IO monad and other monads. Behind the scenes, mapM builds an expression with a bind operation (>>=) between all the list elements, to turn the list of monadic expressions into a monadic expression of a list. Now, what is different about the IO monad is that Haskell's execution model runs effects during the bind in the IO monad. This is exactly what finally forces (in a purely lazy world) anything to be executed at all.
So the IO monad is special in a way: it uses the sequencing of bind to actually enforce execution of each step, and this is what makes our program execute anything at all in the end. Other monads are different. They give the bind operator other meanings, depending on the monad. IO is the one monad that executes things in the bind, and this is the reason why IO values are called "actions".
The following example shows that other monads, the Maybe monad for example, do not enforce execution. In the end this leads to the result that mapM in the IO monad returns an expression which, when executed, executes each single element before returning the final value.
There are nice papers about this, start here or search for denotational semantics and Monads:
Tackling the awkward squad: http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
Example with Maybe Monad:
module Main where
fstMaybe :: [Int] -> Maybe [Int]
fstMaybe = mapM (\x -> if x == 3 then Nothing else Just x)
main = do
let r = fstMaybe [1..]
return r
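For what it's worth, forcing the result here does terminate, because Maybe's bind short-circuits at the first Nothing (produced at the element 3), even though the input list is infinite:

*Main> fstMaybe [1..]
Nothing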
Let's talk about this in a more generic context.
As the other answers said, mapM is just a combination of sequence and map. So the problem is why sequence is strict in certain monads. However, this is not restricted to monads; it applies to Applicatives as well, since we have sequenceA, which shares the same implementation as sequence in most cases.
Now look at the (specialized for lists) type signature of sequenceA :
sequenceA :: Applicative f => [f a] -> f [a]
How would you do this? You were given a list, so you would like to use foldr on this list.
sequenceA = foldr f b where ...
--f :: f a -> f [a] -> f [a]
--b :: f [a]
Since f is an Applicative, you know what b could be: pure []. But what is f?
Obviously it is a lifted version of (:):
(:) :: a -> [a] -> [a]
So now we know how sequenceA works:
sequenceA = foldr f b where
f a b = (:) <$> a <*> b
b = pure []
or
sequenceA = foldr ((<*>) . fmap (:)) (pure [])
Assume you were given a lazy list (x:_|_). The above definition of sequenceA gives
sequenceA (x:_|_) === (:) <$> x <*> foldr ((<*>) . fmap (:)) (pure []) _|_
=== (:) <$> x <*> _|_
So now we see the problem reduces to considering whether f <*> _|_ is _|_ or not. Obviously if f is strict this is _|_, but if f is not strict, then to allow evaluation to stop we require <*> itself to be non-strict.
So the criterion for an applicative functor to have a sequenceA that can stop early (e.g. on an infinite list) is for
its <*> operator to be non-strict in its second argument. A simple test would be
const a <$> _|_ === _|_ ====> strict sequenceA
-- remember f <$> a === pure f <*> a
If we are talking about Monads, the criterion is
_|_ >> return a === _|_ ===> strict sequence
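As a concrete illustration of the fmap test (approximating _|_ with undefined in GHCi): Maybe fails it, while the function (reader) applicative passes it, since its fmap is just (.) and never forces its argument; accordingly, its sequenceA can stop early even on an infinite list.

Prelude> const () <$> (undefined :: Maybe Int)
*** Exception: Prelude.undefined
Prelude> (const () <$> undefined) 42
()
Prelude> take 5 $ sequenceA (map const [1..]) ()
[1,2,3,4,5]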
I've been studying Haskell in my spare time and have recently crossed into the area of monadic functions. I've distilled the code from an exercise I've been working on into this very contrived example to isolate the exact problem I'm having:
import System.Random
rndPermu :: [a] -> IO (a, [a])
rndPermu xs = (front, back)
where (front, back) = hurf xs
hurf :: [a] -> IO (a, [a])
hurf xs = randomRIO (0, (length xs) - 1) >>= \r -> return $ removeAt r xs
removeAt :: Int -> [a] -> (a, [a])
removeAt n xs = (e, rest)
where e = xs !! n
rest = take n xs ++ (tail $ drop n xs)
rndPermu produces a type error when loaded into GHCi stating a type (t, t1) was expected in the 'where' clause but IO (a, [a]) was received. I can use things like (liftM fst) to pull individual items from the tuple and just assign a single value but that's obviously a sloppy and roundabout way of going about things. I feel I'm probably stumbling over some minor nuance of syntax that's staring me in the face. How does one resolve this type error? It should be possible to directly match against a tuple wrapped in a monad, shouldn't it?
I don't know why you don't have
rndPermu xs = hurf xs
but to answer the question you asked, try this
rndPermu xs = do (front, back) <- hurf xs
return (front, back)
My understanding is that you cannot directly match something within IO. You have to extract it first using the <- syntax.
If I understand correctly what you are trying to do, rndPermu tries to take the value in IO returned by hurf and remove the IO from it, like rndPermu :: IO a -> a. This is not possible. A return value in the IO monad signals that the hurf function uses IO and all functions that use the results of a call to hurf will therefore indirectly also use IO: their return values should also be in the IO monad. This is enforced by the type system.
If you just want to use pattern matching in a monad the most direct way is to use the do-notation:
rndPermu :: [a] -> IO (a, [a])
rndPermu xs =
do (front, back) <- hurf xs
return (front, back)
A common pattern is to use different, pure functions to do further processing of values.
These functions are just called from IO or a different monad, but they don't need to know about that:
-- pure function to do something with the result of |hurf|
modify :: (a, [a]) -> (a, [a])
modify (a, as) = (a, reverse as)
rndPermu :: [a] -> IO (a, [a])
rndPermu xs =
do r <- hurf xs
return (modify r)
-- or, with >>= operator:
-- hurf xs >>= return . modify
As an alternative to a do block, you can pattern match in the function to which you bind the monadic value:
rndPermu xs = hurf xs >>= \(front, back) -> return (front, back)
rndPermu xs = hurf xs >>= \res -> case res of (front, back) -> return (front, back)
To answer the question from your comment, GHCi does infer that rndPermu should have an IO type. That's not the problem. The problem is this line:
where (front, back) = hurf xs
Type inference just means (loosely) that you don't have to specify the types of the expressions you're dealing with. Type inference does not mean that Haskell will simply convert values of one type to another silently; quite the opposite in fact. As others mentioned, you don't have to write a do block if you don't want to, but you do have to deal with the fact that you have an IO value.