I've been studying Haskell in my spare time and have recently crossed into the area of monadic functions. I've distilled the code from an exercise I've been working on into this very contrived example to isolate the exact problem I'm having:
import System.Random

rndPermu :: [a] -> IO (a, [a])
rndPermu xs = (front, back)
    where (front, back) = hurf xs

hurf :: [a] -> IO (a, [a])
hurf xs = randomRIO (0, (length xs) - 1) >>= \r -> return $ removeAt r xs

removeAt :: Int -> [a] -> (a, [a])
removeAt n xs = (e, rest)
    where e = xs !! n
          rest = take n xs ++ (tail $ drop n xs)
rndPermu produces a type error when loaded into GHCi stating a type (t, t1) was expected in the 'where' clause but IO (a, [a]) was received. I can use things like (liftM fst) to pull individual items from the tuple and just assign a single value but that's obviously a sloppy and roundabout way of going about things. I feel I'm probably stumbling over some minor nuance of syntax that's staring me in the face. How does one resolve this type error? It should be possible to directly match against a tuple wrapped in a monad, shouldn't it?
I don't know why you don't have
rndPermu xs = hurf xs
but to answer the question you asked, try this
rndPermu xs = do (front, back) <- hurf xs
                 return (front, back)
My understanding is that you cannot directly match something within IO. You have to extract it first using the <- syntax.
If I understand correctly what you are trying to do, rndPermu tries to take the value in IO returned by hurf and remove the IO from it, like rndPermu :: IO a -> a. This is not possible. A return value in the IO monad signals that the hurf function uses IO, and all functions that use the result of a call to hurf will therefore indirectly also use IO: their return values must also be in the IO monad. This is enforced by the type system.
If you just want to use pattern matching in a monad the most direct way is to use the do-notation:
rndPermu :: [a] -> IO (a, [a])
rndPermu xs =
    do (front, back) <- hurf xs
       return (front, back)
A common pattern is to use different, pure functions to do further processing of values.
These functions are just called from IO or a different monad, but they don't need to know about that:
-- pure function to do something with the result of |hurf|
modify :: (a, [a]) -> (a, [a])
modify (a, as) = (a, reverse as)

rndPermu :: [a] -> IO (a, [a])
rndPermu xs =
    do r <- hurf xs
       return (modify r)

-- or, with the >>= operator:
-- rndPermu xs = hurf xs >>= return . modify
As an alternative to a do block, you can pattern match in the function to which you bind the monadic value:
rndPermu xs = hurf xs >>= \(front, back) -> return (front, back)
rndPermu xs = hurf xs >>= \res -> case res of (front, back) -> return (front, back)
To answer the question from your comment, GHCi does infer that rndPermu should have an IO type. That's not the problem. The problem is this line:
where (front, back) = hurf xs
Type inference just means (loosely) that you don't have to specify the types of the expressions you're dealing with. Type inference does not mean that Haskell will simply convert values of one type to another silently; quite the opposite in fact. As others mentioned, you don't have to write a do block if you don't want to, but you do have to deal with the fact that you have an IO value.
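For the record, here is a minimal sketch (mine, not taken from any of the answers) of how the question's code could look once the IO value is handled with fmap instead of a do block; hurf and removeAt are copied unchanged from the question:

import System.Random (randomRIO)

removeAt :: Int -> [a] -> (a, [a])
removeAt n xs = (e, rest)
    where e = xs !! n
          rest = take n xs ++ (tail $ drop n xs)

hurf :: [a] -> IO (a, [a])
hurf xs = randomRIO (0, length xs - 1) >>= \r -> return $ removeAt r xs

rndPermu :: [a] -> IO (a, [a])
rndPermu xs = fmap (\(front, back) -> (front, back)) (hurf xs)
-- ...which of course collapses to just: rndPermu = hurf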
At first glance I thought these two functions would work the same:
firstM _ [] = return Nothing
firstM p (x:xs) = p x >>= \r -> if r then return (Just x) else firstM p xs
firstM' p xs = fmap listToMaybe (mapM p xs)
But they don't. In particular, firstM stops as soon as the first p x is true. But firstM', because of mapM, needs to evaluate the whole list.
Is there a "lazy mapM" that enables the second definition, or at least one that doesn't require explicit recursion?
There isn't (can't be) a safe, Monad-polymorphic lazy mapM. But the monad-loops package contains many lazy monadic variants of various pure functions, and includes firstM.
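A small usage sketch, assuming the monad-loops package is installed and its firstM has the same shape as the hand-written one above (a monadic predicate and a list):

import Control.Monad.Loops (firstM)

main :: IO ()
main = do
    r <- firstM (\x -> do putStrLn ("testing " ++ show x); return (x > 2)) [1 .. 5 :: Int]
    print r  -- expected: prints "testing 1" through "testing 3", then Just 3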
One solution is to use ListT, the list monad transformer. This type interleaves side effects and results, so you can peek at the initial element without running the whole computation first.
Here's an example using ListT:
import Control.Monad
import qualified ListT
firstM :: Monad m => (a -> Bool) -> [a] -> m (Maybe a)
firstM p = ListT.head . mfilter p . ListT.fromFoldable
(Note that the ListT defined in transformers and mtl is buggy and should not be used. The version I linked above should be okay, though.)
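A hedged usage sketch, assuming the list-t package behaves as described above:

ghci> firstM even ([1, 3, 4, 5] :: [Int]) :: IO (Maybe Int)
Just 4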
If there is, I doubt it's called mapM.
As I recall, mapM is defined in terms of sequence:
mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM f = sequence . map f
and the whole point of sequence is to guarantee that all the side-effects are done before giving you anything.
As opposed to using some alternate mapM, you could get away with just using map and sequence, so you could change the container from [a] to Maybe a:
firstM p xs = sequence $ listToMaybe (map p xs)
or even:
firstM p xs = mapM p $ listToMaybe xs
Note that mapM and sequence can now operate on generic Traversables, not just lists.
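A quick GHCi illustration of that point (my example, not from the answer): mapM works directly on a Maybe container.

ghci> mapM print (Just 3)
3
Just ()
ghci> mapM print (Nothing :: Maybe Int)
Nothing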
I'm learning Haskell with Wikibooks.
The signature is given. The solution to the problem is:
sequenceIO :: [IO a] -> IO [a]
sequenceIO []     = return []
sequenceIO (a:as) = do { v <- a; vs <- sequenceIO as; return (v : vs) }
My solution was (which does not work):
sequenceIO xs = return (foldl get [] xs)
    where get ys act = do y <- act
                          ys ++ [y]
The "get" singled out as a function:
gett :: [a] -> IO a -> [a]
gett ys act = do y <- act
                 ys ++ y
will not work: a list is inferred as the second parameter. Why? In the solution, the v <- a works just fine.
The reason your solution doesn't work is that you cannot simply escape the IO Monad. Once you are in the IO Monad, you will always be. This is actually one of the really great things about Haskell, because you can't hide side effects and can tell from a function's type signature what kind of effects it is allowed to perform.
The type signature of your gett function should actually be
gett :: [a] -> IO a -> IO [a]
However, your function will still not typecheck:
1) A small oversight: ys ++ y should actually be ys ++ [y] (as it was for get in the where clause) because append (++) works on lists.
2) ys ++ [y] would still have the wrong type. It must be of type IO [a] but is of type [a]. You can get it into IO [a] by doing return $ ys ++ [y]. Here return has type return :: a -> IO a, it lifts any value into the monad.
gett is then:
gett :: [a] -> IO a -> IO [a]
gett ys act = do
    y <- act
    return $ ys ++ [y]
We're now getting close, but we're not quite there yet. Now the problem is with foldl get [] xs: foldl works on plain lists, not on lists inside monads. However, we can use foldM from Control.Monad and write sequenceIO as follows:
sequenceIO :: [IO a] -> IO [a]
sequenceIO xs = foldM gett [] xs
which should typecheck and work.
Full program:
import Control.Monad (foldM) -- this only imports foldM from Control.Monad
gett :: [a] -> IO a -> IO [a]
gett ys act = do
    y <- act
    return $ ys ++ [y]
sequenceIO :: [IO a] -> IO [a]
sequenceIO xs = foldM gett [] xs
Now a good exercise is to figure out how foldM works ;)
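If you want a head start on that exercise, here is a rough sketch of the idea behind foldM (my own paraphrase, not the actual library source): a left fold where every step runs in the monad.

foldM' :: Monad m => (b -> a -> m b) -> b -> [a] -> m b
foldM' _ acc []     = return acc
foldM' f acc (x:xs) = do
    acc' <- f acc x      -- run the monadic step
    foldM' f acc' xs     -- continue folding with the new accumulator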
I have the following function:
function :: [String] -> [[Int]] -> ([[Int]],[[Int]])
I would like to know if it is possible to do something like this:
function :: [String] -> [[Int]] -> ([[Int]],[[Int]])
function a (b:bs) = if condition then ([[]], [b]) ++ function a bs
                                 else ([b], [[]]) ++ function a bs
Of course I could write two functions which each return one [[Int]], but I would like to do it in a way more appropriate for Haskell.
There is a monoid instance for a tuple:
(Monoid a, Monoid b) => Monoid (a, b)
So the result of mappend will be:
([1], [2]) `mappend` ([3], [4])
([1, 3], [2, 4])
So basically you just replace ++ with mappend in your example
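A hedged sketch of what that could look like; condition is never defined in the question, so the predicate below is only a placeholder, and I use [] rather than [[]] for the empty pieces so no empty rows get inserted into the results:

function :: [String] -> [[Int]] -> ([[Int]], [[Int]])
function _ []     = ([], [])
function a (b:bs) =
    (if condition then ([], [b]) else ([b], [])) `mappend` function a bs
    where condition = not (null b)  -- placeholder predicate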
How about using map?
import Data.Monoid

f :: t -> [a] -> ([a], [a])
f a = mconcat . map part
    where part b = if True then ([], [b]) else ([b], [])
So we let part choose which list each element b goes into, then use map and mconcat to flatten the results.
It's idiomatic Haskell to avoid explicit recursion, so while you can fix your code by substituting mappend for ++, since you asked for the more appropriate Haskell way I'll suggest this one.
Oh also, you could just use break
f = break $ \b -> condition
The most Haskellic(?) way would probably be to use unzip:
function a bs = unzip $ function' a bs
    where function' a (b:bs) = (if condition then ([], b) else (b, [])) : function' a bs
          function' _ []     = [] -- You forgot the base case.
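For what it's worth, here is a standalone sketch of the same idea with a made-up condition (whether the row sums to an even number), since the question never defines condition:

splitRows :: [[Int]] -> ([[Int]], [[Int]])
splitRows = unzip . map part
    where part b | even (sum b) = ([], b)
                 | otherwise    = (b, [])

-- ghci> splitRows [[1, 2], [3, 3], [4]]
-- ([[1,2],[],[]],[[],[3,3],[4]])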
I admit this is my homework. But I really couldn't find a good solution after working hard on it.
There might be some stupid ways to accomplish this, like:
myHead (x:[]) = x
myHead (x:y:xs) = fst (x, y)
But I don't think that's what the teacher wants.
BTW, error-handling is not required.
Thanks in advance!
There's a very natural function that's not in the prelude called "uncons" which is the inverse of uncurried cons.
cons :: a -> [a] -> [a]
uncurry cons :: (a, [a]) -> [a]
uncons :: [a] -> (a, [a])
uncons (x:xs) = (x, xs)
You can use it to implement head as
head = fst . uncons
Why is uncons natural?
You can think of a list as the datatype that's defined through the use of two constructor functions
nil :: [a]
nil = []
cons :: (a, [a]) -> [a]
cons (a,as) = a:as
You can also think of it as the data type which is deconstructed by a function
destruct :: [a] -> Maybe (a, [a])
destruct [] = Nothing
destruct (a:as) = Just (a, as)
It's well beyond this answer to explain why those are so definitively tied to the list type, but one way to look at it is to try to define
nil :: f a
cons :: (a, f a) -> f a
or
destruct :: f a -> Maybe (a, f a)
for any other container type f. You'll find that they all have very close relationships with lists.
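To make that exercise concrete, here is a hedged illustration with f = Maybe (the names are mine): cons can only keep the head, so Maybe behaves like a list cut off after at most one element.

nilMaybe :: Maybe a
nilMaybe = Nothing

consMaybe :: (a, Maybe a) -> Maybe a
consMaybe (a, _) = Just a            -- the tail has to be thrown away

destructMaybe :: Maybe a -> Maybe (a, Maybe a)
destructMaybe Nothing  = Nothing
destructMaybe (Just a) = Just (a, Nothing)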
You can almost already see uncons in the second case of the definition of destruct, but there's a Just in the way. This is why uncons pairs better with head and tail, which are not defined on empty lists
head [] = error "Prelude.head"
so we can adjust the previous answer to work for infinite streams. Here we can think of infinite streams as being constructed by one function
data Stream a = Next a (Stream a)
cons :: (a, Stream a) -> Stream a
cons (a, as) = Next a as
and destructed by one function
uncons :: Stream a -> (a, Stream a)
uncons (Next a as) = (a, as)
-- a. k. a.
uncons stream = (head stream, tail stream)
the two being inverses of one another.
Now we can get head for Streams by getting the first element of the return tuple from uncons
head = fst . uncons
And that's what head models in the Prelude, so we can pretend like lists are infinite streams and define head in that way
uncons :: [a] -> (a, [a])
uncons (a:as) = (a, as)
-- a. k. a.
uncons list = (head list, tail list)
head = fst . uncons
Perhaps you're expected to write your own cons List type; then it might make more sense. Although type synonyms can't be recursive, so you end up using a non-tuple data constructor, making the tuple superfluous... it would look like:
data List a = Nil | List (a, List a)
    deriving (Show)

head :: List a -> a
head (List c) = fst c
Like already said in the comments, this is just a silly task and you won't get something you could call a good implementation of head.
Your solution, for those requirements, is just fine – the only change I would make is to replace (x:y:xs) with (x:y:_), since xs isn't used at all (which would actually cause a compiler warning in some settings). In fact, you could do that with y as well:
myHead (x:_:_) = fst (x, undefined)
There are alternatives that perhaps don't look like quite such a useless use of fst, i.e. that don't just build a tuple by hand and immediately deconstruct it again:
myHead' [x] = x
myHead' xs = myHead' . fst $ splitAt 1 xs
myHead'' = foldr1 $ curry fst
myHead''' = snd . fromJust . find ((==0) . fst) . zip [0..]
but you could rightfully say that these are just ridiculous.
UPDATE: Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
Does mapM not lazily deal with infinite lists?
The code below hangs. However, if I replace line A with line B, it doesn't hang anymore. Alternatively, if I precede line A with a "splitRandom $", it also doesn't hang.
Q1 is: Is mapM not lazy? Otherwise, why does replacing line A with line B "fix" this code?
Q2 is: Why does preceding line A with splitRandom "solve" the problem?
import Control.Monad.Random
import Control.Applicative

f :: (RandomGen g) => Rand g (Double, [Double])
f = do
    b <- splitRandom $ sequence $ repeat $ getRandom
    c <- mapM return b -- A
    -- let c = map id b -- B
    a <- getRandom
    return (a, c)

splitRandom :: (RandomGen g) => Rand g a -> Rand g a
splitRandom code = evalRand code <$> getSplit

t0 = do
    (a, b) <- evalRand f <$> newStdGen
    print a
    print (take 3 b)
The code generates an infinite list of random numbers lazily. Then it generates a single random number. By using splitRandom, I can evaluate this latter random number first before the infinite list. This can be demonstrated if I return b instead of c in the function.
However, if I apply the mapM to the list, the program now hangs. To prevent this hanging, I have to apply splitRandom again before the mapM. I was under the impression that mapM could process the list lazily.
Well, there's lazy, and then there's lazy. mapM is indeed lazy in that it doesn't do more work than it has to. However, look at the type signature:
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
Think about what this means: You give it a function a -> m b and a bunch of as. A regular map can turn those into a bunch of m bs, but not an m [b]. The only way to combine the bs into a single [b] without the monad getting in the way is to use >>= to sequence the m bs together to construct the list.
In fact, mapM is precisely equivalent to sequence . map.
In general, for any monadic expression, if the value is used at all, the entire chain of >>=s leading to the expression must be forced, so applying sequence to an infinite list can't ever finish.
If you want to work with an unbounded monadic sequence, you'll either need explicit flow control--e.g., a loop termination condition baked into the chain of binds somehow, which simple recursive functions like mapM and sequence don't provide--or a step-by-step sequence, something like this:
data Stream m a = Nil | Stream a (m (Stream m a))
...so that you only force as many monad layers as necessary.
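For example, here is a minimal sketch of consuming such a stream (takeStream is my name, not from the answer); only roughly one monadic layer per element taken is ever forced:

takeStream :: Monad m => Int -> Stream m a -> m [a]
takeStream n _ | n <= 0      = return []
takeStream _ Nil             = return []
takeStream n (Stream x rest) = do
    next <- rest                   -- force exactly one more monadic layer
    xs   <- takeStream (n - 1) next
    return (x : xs)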
Edit: Regarding splitRandom, what's going on there is that you're passing it a Rand computation, evaluating that with the seed splitRandom gets, then returning the result. Without the splitRandom, the seed used by the single getRandom has to come from the final result of sequencing the infinite list, hence it hangs. With the extra splitRandom, the seed only needs to thread through the two splitRandom calls, so it works. The final list of random numbers works because you've left the Rand monad at that point and nothing depends on its final state.
Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
It's not necessarily true. It depends on the monad you're in.
For example, with the identity monad, you can use the result lazily and it terminates fine:
newtype Identity a = Identity a

-- on modern GHC, a Monad also needs Functor and Applicative instances:
instance Functor Identity where
    fmap f (Identity x) = Identity (f x)
instance Applicative Identity where
    pure = Identity
    Identity f <*> Identity x = Identity (f x)
instance Monad Identity where
    Identity x >>= k = k x
    return = Identity

-- "foo" is the infinite list of all the positive integers
foo :: [Integer]
Identity foo = do
    q <- mapM return [1..]
    return q
main :: IO ()
main = print $ take 20 foo -- [1 .. 20]
Here's an attempt at a proof that mapM return [1..] doesn't terminate. Let's assume for the moment that we're in the Identity monad (the argument will apply to any other monad just as well):
mapM return [1..]            -- initial expression
sequence (map return [1..])  -- unfold mapM

let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
in foldr k (return []) (map return [1..]) -- unfold sequence
So far so good...
-- unfold foldr
let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
    go []     = return []
    go (y:ys) = k y (go ys)
in go (map return [1..])
-- unfold map so we have enough of a list to pattern-match go:
go (return 1 : map return [2..])
-- unfold go:
k (return 1) (go (map return [2..]))
-- unfold k:
(return 1) >>= \x -> go (map return [2..]) >>= \xs -> return (x:xs)
Recall that return a = Identity a in the Identity monad, and (Identity a) >>= f = f a in the Identity monad. Continuing:
-- unfold >>= :
(\x -> go (map return [2..]) >>= \xs -> return (x:xs)) 1
-- apply 1 to \x -> ... :
go (map return [2..]) >>= \xs -> return (1:xs)
-- unfold >>= :
(\xs -> return (1:xs)) (go (map return [2..]))
Note that at this point we'd love to apply to \xs, but we can't yet! We have to instead continue unfolding until we have a value to apply:
-- unfold map for go:
(\xs -> return (1:xs)) (go (return 2 : map return [3..]))
-- unfold go:
(\xs -> return (1:xs)) (k (return 2) (go (map return [3..])))
-- unfold k:
(\xs -> return (1:xs)) ((return 2) >>= \x2 ->
                        (go (map return [3..])) >>= \xs2 ->
                        return (x2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\x2 -> (go (map return [3..])) >>= \xs2 ->
                                return (x2:xs2)) 2)
At this point, we still can't apply to \xs, but we can apply to \x2. Continuing:
-- apply 2 to \x2 :
(\xs -> return (1:xs)) ((go (map return [3..])) >>= \xs2 ->
                        return (2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\xs2 -> return (2:xs2)) (go (map return [3..])))
Now we've gotten to a point where neither \xs nor \xs2 can be reduced yet! Our only choice is:
-- unfold map for go, and so on...
(\xs -> return (1:xs))
    ((\xs2 -> return (2:xs2))
         (go ((return 3) : (map return [4..]))))
So you can see that, because of foldr, we're building up a series of functions to apply, starting from the end of the list and working our way back up. Because at each step the input list is infinite, this unfolding will never terminate and we will never get an answer.
This makes sense if you look at this example (borrowed from another StackOverflow thread, I can't find which one at the moment). In the following list of monadic values:
mebs = [Just 3, Just 4, Nothing]
we would expect sequence to catch the Nothing and return a failure for the whole thing:
sequence mebs = Nothing
However, for this list:
mebs2 = [Just 3, Just 4]
we would expect sequence to give us:
sequence mebs2 = Just [3, 4]
In other words, sequence has to see the whole list of monadic computations, string them together, and run them all in order to come up with the right answer. There's no way sequence can give an answer without seeing the whole list.
Note: The previous version of this answer asserted that foldr computes starting from the back of the list, and wouldn't work at all on infinite lists, but that's incorrect! If the operator you pass to foldr is lazy on its second argument and produces output with a lazy data constructor like a list, foldr will happily work with an infinite list. See foldr (\x xs -> (replicate x x) ++ xs) [] [1..] for an example. But that's not the case with our operator k.
This question shows very well the difference between the IO Monad and other Monads. In the background, mapM builds an expression with a bind operation (>>=) between all the list elements to turn the list of monadic expressions into a monadic expression of a list. Now, what is different in the IO Monad is that Haskell's execution model executes expressions during the bind in the IO Monad. This is exactly what finally forces (in a purely lazy world) anything to be executed at all.
So the IO Monad is special in a way: it uses the sequencing paradigm of bind to actually enforce execution of each step, and this is what makes our program execute anything at all in the end. Other Monads are different. They have other meanings for the bind operator, depending on the Monad. IO is actually the one Monad which executes things in the bind, and this is the reason why IO types are "actions".
The following example shows that other Monads do not enforce execution, the Maybe monad for example. Finally, this leads to the result that a mapM in the IO Monad returns an expression which - when executed - executes each single element before returning the final value.
There are nice papers about this; start with the one below or search for denotational semantics and Monads:
Tackling the awkward squad: http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
Example with Maybe Monad:
module Main where

fstMaybe :: [Int] -> Maybe [Int]
fstMaybe = mapM (\x -> if x == 3 then Nothing else Just x)

main = do
    let r = fstMaybe [1..]
    return r
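A hedged aside (not part of the original answer): forcing the Maybe result actually terminates even on the infinite list, because the Nothing produced at x == 3 short-circuits the remaining binds:

ghci> fstMaybe [1..]
Nothing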
Let's talk about this in a more generic context.
As the other answers said, mapM is just a combination of sequence and map. So the problem is why sequence is strict in certain Monads. However, this is not restricted to Monads but also applies to Applicatives, since we have sequenceA, which shares the same implementation as sequence in most cases.
Now look at the (specialized for lists) type signature of sequenceA :
sequenceA :: Applicative f => [f a] -> f [a]
How would you do this? You were given a list, so you would like to use foldr on this list.
sequenceA = foldr f b where ...
-- f :: f a -> f [a] -> f [a]
-- b :: f [a]
Since f is an Applicative, you know what b could be - pure []. But what is f?
Obviously it is a lifted version of (:):
(:) :: a -> [a] -> [a]
So now we know how sequenceA works:
sequenceA = foldr f b where
    f a b = (:) <$> a <*> b
    b = pure []
or
sequenceA = foldr ((<*>) . fmap (:)) (pure [])
Assume you were given a lazy list (x:_|_). The above definition of sequenceA gives
sequenceA (x:_|_) === (:) <$> x <*> foldr ((<*>) . fmap (:)) (pure []) _|_
                  === (:) <$> x <*> _|_
So now we see the problem reduces to considering whether f <*> _|_ is _|_ or not. Obviously if f is strict this is _|_, but if f is not strict, to allow evaluation to stop we require <*> itself to be non-strict.
So the criterion for an applicative functor to have a sequenceA that stops on _|_ is that its <*> operator is non-strict in its second argument. A simple test would be
const a <$> _|_ === _|_ ====> strict sequenceA
-- remember f <$> a === pure f <*> a
If we are talking about Monads, the criterion is
_|_ >>= const a === _|_ ===> strict sequence
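A hedged sanity check of these criteria (my own example, with undefined standing in for _|_): Maybe's <$> pattern-matches its argument, so it is strict, while Identity is a newtype whose pattern match forces nothing.

import Data.Functor.Identity (Identity (..))

strictCase :: Maybe Int
strictCase = const 1 <$> (undefined :: Maybe ())                 -- evaluating this throws

lazyCase :: Int
lazyCase = runIdentity (const 1 <$> (undefined :: Identity ()))  -- returns 1

main :: IO ()
main = print lazyCase  -- prints 1; printing strictCase would hit the undefined instead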