According to the Control.Arrow documentation, for many monads (those for which the >>= operation is strict) the instance MonadFix m => ArrowLoop (Kleisli m) does not satisfy the right-tightening law (loop (f >>> first h) = loop f >>> h) required by the ArrowLoop class. Why is that so?
This is a multi-faceted question with several different angles, and it goes back to value recursion (mfix/mdo) in Haskell. See here for background information. I'll try to address the right-tightening issue here in some detail.
Right-tightening for mfix
Here's the right-tightening property for mfix:
mfix (λ(x, y). f x >>= λz. g z >>= λw. return (z, w))
= mfix f >>= λz. g z >>= λw. return (z, w)
Here it is in picture form:
The dotted lines show where the "knot-tying" is happening. This is essentially the same law as mentioned in the question, except it is expressed in terms of mfix and value-recursion. As shown in Section 3.1 of this work, for a monad with a strict bind operator, you can always write an expression that distinguishes the left-hand side of this equation from the right-hand side, thus failing this property. (See below for an actual example in Haskell.)
When an arrow is created via the Kleisli construction from a monad with mfix, the corresponding loop operator fails the corresponding property in the same way.
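For reference, the instance in Control.Arrow is (essentially) defined as follows; loop is just mfix on the underlying monad, so any failure of right-tightening for mfix carries over directly to loop:

instance MonadFix m => ArrowLoop (Kleisli m) where
    loop (Kleisli f) = Kleisli (liftM fst . mfix . f')
        where f' x y = f (x, snd y)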
Domain-theory and approximations
In domain-theoretic terms, the mismatch will always be an approximation. That is, the left-hand side will always be less defined than the right. (More precisely, the lhs will be lower than the rhs in the domain of PCPOs, the typical domain we use for Haskell semantics.) In practice, this means that the right-hand side will terminate more often, and is to be preferred when that is an issue. Again, see Section 3.1 of this for details.
In practice
This may sound all abstract, and in a certain sense it is. More intuitively, on the left-hand side g gets a chance to act on the recursive value as it is being produced, since g is inside the "loop," and it is thus able to interfere with the fixed-point computation. Here's an actual Haskell program to illustrate:
import Control.Monad.Fix
f :: [Int] -> IO [Int]
f xs = return (1:xs)
g :: [Int] -> IO Int
g [x] = return x
g _ = return 1
lhs = mfix (\(x, y) -> f x >>= \z -> g z >>= \w -> return (z, w))
rhs = mfix f >>= \z -> g z >>= \w -> return (z, w)
If you evaluate the lhs it will never terminate, while rhs will give you the infinite list of 1's as expected:
*Main> :t lhs
lhs :: IO ([Int], Int)
*Main> lhs >>= \(xs, y) -> return (take 5 xs, y)
^CInterrupted.
*Main> rhs >>= \(xs, y) -> return (take 5 xs, y)
([1,1,1,1,1],1)
I interrupted the computation in the first case as it is non-terminating. While this is a contrived example, it is the simplest to illustrate the point. (See below for a rendering of this example using the mdo-notation, which might be easier to read.)
Example monads
Typical examples of monads that do not satisfy this law include Maybe, List, IO, or any other monad that's based on an algebraic type with multiple constructors. Typical examples of monads that do satisfy this law are State and Environment monads. See Section 4.10 for a table summarizing these results.
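For instance, here is a minimal sketch of the same counterexample transplanted to Maybe (the names fM and gM are mine, not from the references above); the left-hand side diverges, while the right-hand side is a Just value:

import Control.Monad.Fix

fM :: [Int] -> Maybe [Int]
fM xs = Just (1:xs)

gM :: [Int] -> Maybe Int
gM [x] = Just x
gM _   = Just 1

lhsM, rhsM :: Maybe ([Int], Int)
lhsM = mfix (\(x, _) -> fM x >>= \z -> gM z >>= \w -> return (z, w))  -- diverges
rhsM = mfix fM >>= \z -> gM z >>= \w -> return (z, w)                 -- Just (repeat 1, 1)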
Pure-right shrinking
Note that a "weaker" form of right-tightening, where the function g in the above equation is pure, follows from value-recursion laws:
mfix (λ(x, y). f x >>= λz. return (z, h z))
= mfix f >>= λz. return (z, h z)
This is the same law as before, with g = return . h. That is, g cannot perform any effects. In this case, there is no way to distinguish the left-hand side from the right, as you might expect; and the result indeed follows from the value-recursion axioms. (See Section 2.6.3 for a proof.) The picture in this case looks like this:
This property follows from the sliding property, which is a version of dinaturality for value-recursion, and is known to be satisfied by many monads of interest: Section 2.4.
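To see this concretely with the f from the Haskell example above (h below is a hypothetical pure post-processing function, and the lazy pattern ~(x, _) is used so that GHC's fixIO does not force the recursive pair prematurely), both sides now agree:

h :: [Int] -> Int
h (x:_) = x    -- pure: no effects, just picks the head

lhsPure, rhsPure :: IO ([Int], Int)
lhsPure = mfix (\ ~(x, _) -> f x >>= \z -> return (z, h z))
rhsPure = mfix f >>= \z -> return (z, h z)

-- both lhsPure and rhsPure, sampled with take 5 as before, give ([1,1,1,1,1],1)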
Impact on the mdo-notation
The failure of this law has an impact on how the mdo notation was designed in GHC. The translation includes the so-called "segmentation" step precisely to avoid the failure of the right-shrinking law. Some people consider that a bit controversial, as GHC automatically picks the segments, essentially applying the right-tightening law. If explicit control is needed, GHC provides the rec keyword to leave the decision to the users.
Using the mdo-notation and explicit do rec, the above example renders as follows:
{-# LANGUAGE RecursiveDo #-}
f :: [Int] -> IO [Int]
f xs = return (1:xs)
g :: [Int] -> IO Int
g [x] = return x
g _ = return 1
lhs :: IO ([Int], Int)
lhs = do rec x <- f x
             w <- g x
         return (x, w)
rhs :: IO ([Int], Int)
rhs = mdo x <- f x
          w <- g x
          return (x, w)
One might naively expect that lhs and rhs above should be the same, but due to the failure of the right-shrinking law, they are not. Just like before, lhs gets stuck, while rhs successfully produces the value:
*Main> lhs >>= \(x, y) -> return (take 5 x, y)
^CInterrupted.
*Main> rhs >>= \(x, y) -> return (take 5 x, y)
([1,1,1,1,1],1)
Visually inspecting the code, we see that the recursion is simply for the function f, which justifies the segmentation that is automatically performed by the mdo-notation.
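In other words, the mdo version above desugars to something like the following, with mfix wrapped around only the minimal recursive segment (rhsSegmented is an illustrative name, not part of the original code):

rhsSegmented :: IO ([Int], Int)
rhsSegmented = do x <- mfix f
                  w <- g x
                  return (x, w)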
If the rec notation is to be preferred, the programmer will need to put it in minimal blocks to ensure termination. For instance, the above expression for lhs should be written as follows:
lhs :: IO ([Int], Int)
lhs = do rec x <- f x
         w <- g x
         return (x, w)
The mdo-notation takes care of this and places the recursion over minimal blocks without user intervention.
Failure for Kleisli Arrows
After this lengthy detour, let us now go back to the original question about the corresponding law for arrows. Similar to the mfix case, we can construct a failing example for Kleisli arrows as well. In fact, the above example translates more or less directly:
{-# LANGUAGE Arrows #-}
import Control.Arrow
f :: Kleisli IO ([Int], [Int]) ([Int], [Int])
f = proc (_, ys) -> returnA -< (ys, 1:ys)
g :: Kleisli IO [Int] Int
g = proc xs -> case xs of
                 [x] -> returnA -< x
                 _   -> returnA -< 1
lhs, rhs :: Kleisli IO [Int] Int
lhs = loop (f >>> first g)
rhs = loop f >>> g
Just like in the case of mfix, we have:
*Main> runKleisli rhs []
1
*Main> runKleisli lhs []
^CInterrupted.
The failure of right-tightening for mfix of the IO-monad also prevents the Kleisli IO arrow from satisfying the right-tightening law in the ArrowLoop instance.
Related
I am currently reading Learn You a Haskell for Great Good! and am stumbling on the explanation for the evaluation of a certain code block. I've read the explanations several times and am starting to doubt if even the author understands what this piece of code is doing.
ghci> (+) <$> (+3) <*> (*100) $ 5
508
An applicative functor applies a function in some context to a value in some context to get some result in some context. I have spent a few hours studying this code block and have come up with a few explanations for how this expression is evaluated, and none of them are satisfactory. I understand that (5+3)+(5*100) is 508, but the problem is getting to this expression. Does anyone have a clear explanation for this piece of code?
The other two answers have given the detail of how this is calculated - but I thought I might chime in with a more "intuitive" answer to explain how, without going through a detailed calculation, one can "see" that the result must be 508.
As you implied, every Applicative (in fact, even every Functor) can be viewed as a particular kind of "context" which holds values of a given type. As simple examples:
Maybe a is a context in which a value of type a might exist, but might not (usually the result of a computation which may fail for some reason)
[a] is a context which can hold zero or more values of type a, with no upper limit on the number - representing all possible outcomes of a particular computation
IO a is a context in which a value of type a is available as a result of interacting with "the outside world" in some way. (OK that one isn't so simple...)
And, relevant to this example:
r -> a is a context in which a value of type a is available, but its particular value is not yet known, because it depends on some (as yet unknown) value of type r.
The Applicative methods can be very well understood on the basis of values in such contexts. pure embeds an "ordinary value" in a "default context", in which it behaves as closely as possible to a "context-free" one. I won't go through this for each of the 4 examples above (most of them are very obvious), but I will note that for functions, pure = const - that is, a "pure value" a is represented by the function which always produces a no matter what the source value.
Rather than dwell on how <*> can best be described using the "context" metaphor though, I want to dwell on the particular expression:
f <$> a <*> b
where f is a function between 2 "pure values" and a and b are "values in a context". This expression in fact has a synonym as a function: liftA2. Although using the liftA2 function is generally considered less idiomatic than the "applicative style" using <$> and <*>, the name emphasises that the idea is to "lift" a function on "ordinary values" to one on "values in a context". And when thought of like this, I think it is usually very intuitive what this does, given a particular "context" (i.e. a particular Applicative instance).
So the expression:
(+) <$> a <*> b
for values a and b of type say f Int for an Applicative f, behaves as follows for different instances f:
if f = Maybe, then the result, if a and b are both Just values, is to add up the underlying values and wrap them in a Just. If either a or b is Nothing, then the whole expression is Nothing.
if f = [] (the list instance) then the above expression is a list containing all sums of the form a' + b' where a' is in a and b' is in b.
if f = IO, then the above expression is an IO action that performs all the I/O effects of a followed by those of b, and results in the sum of the Ints produced by those two actions.
So what, finally, does it do if f is the function instance? Since a and b are both functions describing how to get a given Int given an arbitrary (Int) input, it is natural that lifting the (+) function over them should be the function that, given an input, gets the result of both the a and b functions, and then adds the results.
And that is, of course, what it does - and the explicit route by which it does that has been very ably mapped out by the other answers. But the reason why it works out like that - indeed, the very reason we have the instance that f <*> g = \x -> f x (g x), which might otherwise seem rather arbitrary (although in actual fact it's one of the very few things, if not the only thing, that will type-check), is so that the instance matches the semantics of "values which depend on some as-yet-unknown other value, according to the given function". And in general, I would say it's often better to think "at a high level" like this than to be forced to go down to the low-level details of exactly how computations are performed. (Although I certainly don't want to downplay the importance of also being able to do the latter.)
[Actually, from a philosophical point of view, it might be more accurate to say that the definition is as it is just because it's the "natural" definition that type-checks, and that it's just happy coincidence that the instance then takes on such a nice "meaning". Mathematics is of course full of just such happy "coincidences" which turn out to have very deep reasons behind them.]
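As a quick GHCi sanity check of the instances described above (Maybe, lists, and finally functions):

ghci> import Control.Applicative (liftA2)
ghci> liftA2 (+) (Just 3) (Just 4)
Just 7
ghci> liftA2 (+) [1,2] [10,20]
[11,21,12,22]
ghci> liftA2 (+) (+3) (*100) 5
508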
It is using the applicative instance for functions. Your code
(+) <$> (+3) <*> (*100) $ 5
is evaluated as
( (\a->\b->a+b) <$> (\c->c+3) <*> (\d->d*100) ) 5 -- f <$> g
( (\x -> (\a->\b->a+b) ((\c->c+3) x)) <*> (\d->d*100) ) 5 -- \x -> f (g x)
( (\x -> (\a->\b->a+b) (x+3)) <*> (\d->d*100) ) 5
( (\x -> \b -> (x+3)+b) <*> (\d->d*100) ) 5
( (\x->\b->(x+3)+b) <*> (\d->d*100) ) 5 -- f <*> g
(\y -> ((\x->\b->(x+3)+b) y) ((\d->d*100) y)) 5 -- \y -> (f y) (g y)
(\y -> (\b->(y+3)+b) (y*100)) 5
(\y -> (y+3)+(y*100)) 5
(5+3)+(5*100)
where <$> is fmap or just function composition ., and <*> is ap if you know how it behaves on monads.
Let us first take a look at how fmap and (<*>) are defined for functions:
instance Functor ((->) r) where
    fmap = (.)

instance Applicative ((->) a) where
    pure = const
    (<*>) f g x = f x (g x)
    liftA2 q f g x = q (f x) (g x)
The expression we aim to evaluate is:
(+) <$> (+3) <*> (*100) $ 5
or more verbose:
((+) <$> (+3)) <*> (*100) $ 5
If we evaluate (<$>), which is an infix synonym for fmap, we see that this is equal to:
(+) . (+3)
so that means our expression is equivalent to:
((+) . (+3)) <*> (*100) $ 5
Next we can apply the sequential application. Here f is thus equal to (+) . (+3) and g is (*100). This thus means that we construct a function that looks like:
\x -> ((+) . (+3)) x ((*100) x)
We can now simplify this and rewrite this into:
\x -> ((+) (x+3)) ((*100) x)
and then rewrite it to:
\x -> (+) (x+3) ((*100) x)
We thus have constructed a function that looks like:
\x -> (x+3) + 100 * x
or simpler:
\x -> 101 * x + 3
If we then calculate:
(\x -> 101*x + 3) 5
then we of course obtain:
101 * 5 + 3
and thus:
505 + 3
which is the expected:
508
For any applicative,
a <$> b <*> c = liftA2 a b c
For functions,
liftA2 a b c x
= a (b x) (c x) -- by definition;
= (a . b) x (c x)
= ((a <$> b) <*> c) x
Thus
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(5+3) + (5*100)
(the long version of this answer follows.)
Pure math has no time. Pure Haskell has no time. Speaking in verbs ("applicative functor applies" etc.) can be confusing ("applies... when?...").
Instead, (<*>) is a combinator which combines a "computation" (denoted by an applicative functor) carrying a function (in the context of that type of computations) and a "computation" of the same type, carrying a value (in like context), into one combined "computation" that carries out the application of that function to that value (in such context).
"Computation" is used to contrast it with a pure Haskell "calculations" (after Philip Wadler's "Calculating is better than Scheming" paper, itself referring to David Turner's Kent Recursive Calculator language, one of predecessors of Miranda, the (main) predecessor of Haskell).
"Computations" might or might not be pure themselves, that's an orthogonal issue. But mainly what it means, is that "computations" embody a generalized function call protocol. They might "do" something in addition to / as part of / carrying out the application of a function to its argument. Or in types,
( $ ) :: (a -> b) -> a -> b
(<$>) :: (a -> b) -> f a -> f b
(<*>) :: f (a -> b) -> f a -> f b
(=<<) :: (a -> f b) -> f a -> f b
With functions, the context is application (another one), and to recover the value -- be it a function or an argument -- the application to a common argument is to be performed.
(bear with me, we're almost there).
The pattern a <$> b <*> c is also expressible as liftA2 a b c. And so, the "functions" applicative functor "computation" type is defined by
liftA2 h x y s = let x' = x s              -- embellished application of h to x and y
                     y' = y s in           -- in context of functions, or Reader
                 h x' y'

-- liftA2 h x y = let x' = x               -- non-embellished application, or Identity
--                    y' = y in
--                h x' y'

-- liftA2 h x y s = let (x',s')  = x s     -- embellished application of h to x and y
--                      (y',s'') = y s' in -- in context of
--                  (h x' y', s'')         -- state-passing computations, or State

-- liftA2 h x y = let (x',w)  = x          -- embellished application of h to x and y
--                    (y',w') = y in       -- in context of
--                (h x' y', w++w')         -- logging computations, or Writer

-- liftA2 h x y = [h x' y' |               -- embellished application of h to x and y
--                 x' <- x,                -- in context of
--                 y' <- y ]               -- nondeterministic computations, or List

-- ( and for Monads we define `liftBind h x k =` and replace `y` with `k x'`
--   in the bodies of the above combinators; then liftA2 becomes liftBind: )

-- liftA2   :: (a -> b -> c) -> f a -> f b        -> f c
-- liftBind :: (a -> b -> c) -> f a -> (a -> f b) -> f c

-- (>>=) = liftBind (\a b -> b) :: f a -> (a -> f b) -> f b
And in fact all the above snippets can be just written with ApplicativeDo as liftA2 h x y = do { x' <- x ; y' <- y ; pure (h x' y') } or even more intuitively as
liftA2 h x y = [h x' y' | x' <- x, y' <- y], with Monad Comprehensions, since all the above computation types are monads as well as applicative functors. This shows by the way that (<*>) = liftA2 ($), which one might find illuminating as well.
Indeed,
> :t let liftA2 h x y r = h (x r) (y r) in liftA2
:: (a -> b -> c) -> (t -> a) -> (t -> b) -> (t -> c)
> :t liftA2 -- the built-in one
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
i.e. the types match when we take f a ~ (t -> a) ~ (->) t a, i.e. f ~ (->) t.
And so, we're already there:
(+) <$> (+3) <*> (*100) $ 5
=
liftA2 (+) (+3) (*100) 5
=
(+) ((+3) 5) ((*100) 5)
=
(+) (5+3) (5*100)
=
(5+3) + (5*100)
It's just how liftA2 is defined for this type, Applicative ((->) t) => ...:
instance Applicative ((->) t) where
    pure x t = x
    liftA2 h x y t = h (x t) (y t)
There's no need to define (<*>). The source code says:
Minimal complete definition
pure, ((<*>) | liftA2)
So now you've been wanting to ask for a long time, why is it that a <$> b <*> c is equivalent to liftA2 a b c?
The short answer is, it just is. One can be defined in terms of the other -- i.e. (<*>) can be defined via liftA2,
g <*> x = liftA2 id g x   -- i.e. (<*>) = liftA2 id = liftA2 ($)
-- (g <*> x) t = liftA2 id g x t
--             = id (g t) (x t)
--             = (id . g) t (x t)   -- = (id <$> g <*> x) t
--             = g t (x t)
(which is exactly as it is defined in the source),
and it is a law that every Applicative Functor must follow, that h <$> g = pure h <*> g.
Lastly,
liftA2 h g x == pure h <*> g <*> x
-- h g x == (h g) x
because <*> associates to the left: it is infixl 4 <*>.
Consider the following functions, taken from the answers to this problem set:
func6 :: Monad f => f Integer -> f (Integer,Integer)
func6 xs = do
    x <- xs
    return $ if x > 0 then (x, 0)
                      else (0, x)
func6' :: Functor f => f Integer -> f (Integer,Integer)
-- slightly unorthodox idiom, with a partially applied fmap
func6' = fmap $ \x -> if x > 0 then (x,0) else (0,x)
-- func7 cannot be implemented without Monad if we care about the precise
-- evaluation and laziness behaviour:
-- > isJust (func7 (Just undefined))
-- *** Exception: Prelude.undefined
--
-- If we care not, then it is equivalent to func6, and there we can. Note that
-- > isJust (func6 (Just undefined))
-- True
func7 :: Monad f => f Integer -> f (Integer,Integer)
func7 xs = do
    x <- xs
    if x > 0 then return (x, 0)
             else return (0, x)
-- func9 cannot be implemented without Monad: The structure of the computation
-- depends on the result of the first argument.
func9 :: Monad f => f Integer -> f Integer -> f Integer -> f Integer
func9 xs ys zs = xs >>= \x -> if even x then ys else zs
Although I understand the counterexample for func7, I don't understand the given reasoning for why func7 and func9 can only be implemented using Monad. How do the monad/applicative/functor laws fit with the above reasoning?
I don't think typeclass laws are what you need to be worrying about here; in fact, I think the typeclasses unnecessarily complicate the exercise, if your purpose is to understand nonstrictness.
Here's a simpler example where everything is monomorphic, and rather than give examples using bottom, we're going to use :sprint in GHCi to watch the extent of the evaluation.
func6
My x6 example here corresponds to func6 in the question.
λ> x6 = Just . bool 'a' 'b' =<< Just True
Initially, nothing has been evaluated.
λ> :sprint x6
x6 = _
Now we evaluate 'isJust x6'.
λ> isJust x6
True
And now we can see that x6 has been partially evaluated. Only to its head, though.
λ> :sprint x6
x6 = Just _
Why? Because there was no need to know the result of the bool 'a' 'b' part just to determine whether the Maybe was going to be a Just. So it remains an unevaluated thunk.
func7
My x7 example here corresponds to func7 in the question.
λ> x7 = bool (Just 'a') (Just 'b') =<< Just True
x7 :: Maybe Char
Again, initially nothing is evaluated.
λ> :sprint x7
x7 = _
And again we'll apply isJust.
λ> isJust x7
True
In this case, the content of the Just did get evaluated (so we say this definition was "more strict" or "not as lazy").
λ> :sprint x7
x7 = Just 'b'
Why? Because we had to evaluate the bool application before we could tell whether it was going to produce a Just result.
Chris Martin's answer covers func6 versus func7 very well. (In short, the difference is that, thanks to laziness, func6 @Maybe can decide whether the constructor used for the result should be Just or Nothing without actually having to look at any value within its argument.)
As for func9, what makes Monad necessary is that the function involves using values found in xs to decide on the functorial context of the result. (Synonyms for "functorial context" in this setting include "effects" and, as the solution you quote puts it, "structure of the computation".) For the sake of illustration, consider:
func9 (fmap read getLine) (putStrLn "Even!") (putStrLn "Odd!")
It is useful to compare the types of fmap, (<*>) and (>>=):
(<$>) :: Functor f => (a -> b) -> (f a -> f b) -- (<$>) = fmap
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
(=<<) :: Monad f => (a -> f b) -> (f a -> f b) -- (=<<) = flip (>>=)
The a -> b function passed to fmap has no information about f, the involved Functor, and so fmap cannot change the effects at all. (<*>) can change the effects, but only by combining the effects of its two arguments -- the a -> b functions that might be found in the f (a -> b) argument have no bearing on that whatsoever. With (>>=), though, the a -> f b function is used precisely to generate effects from values found in the f a argument.
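To make that concrete, here is a quick check with Maybe (assuming the func9 from the question is in scope); whether the result is a Just or a Nothing depends on the value inside the first argument, which is something fmap and (<*>) alone can never arrange:

ghci> func9 (Just 2) (Just 10) Nothing
Just 10
ghci> func9 (Just 3) (Just 10) Nothing
Nothing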
I suggest Difference between Monad and Applicative in Haskell as further reading on what you gain (and lose) when moving between Functor, Applicative and Monad.
I would like to generate infinite stream of numbers with Rand monad from System.Random.MWC.Monad. If only there would be a MonadFix instance for this monad, or instance like this:
instance (PrimMonad m) => MonadFix m where
...
then one could write:
runWithSystemRandom (mfix (\ xs -> uniform >>= \x -> return (x:xs)))
There isn't one though.
I was going through MonadFix docs but I don't see an obvious way of implementing this instance.
You can write a MonadFix instance. However, the code will not generate an infinite stream of distinct random numbers. The argument to mfix is a function that calls uniform exactly once. When the code is run, it will call uniform exactly once, and create an infinite list containing the result.
You can try the equivalent IO code to see what happens:
import System.Random
import Control.Monad.Fix
main = print . take 10 =<< mfix (\xs -> randomIO >>= (\x -> return (x : xs :: [Int])))
It seems that you want to use a stateful random number generator, and you want to run the generator and collect its results lazily. That isn't possible without careful use of unsafePerformIO. Unless you need to produce many random numbers quickly, you can use a pure RNG function such as randomRs instead.
A question: how do you wish to generate your initial seed?
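For reference, a small sketch of the pure alternative mentioned above, using randomRs to build an infinite, lazily consumed list (the surrounding main is only for illustration, and newStdGen supplies the initial seed):

import System.Random (newStdGen, randomRs)

main :: IO ()
main = do g <- newStdGen
          let xs = randomRs (0, 100 :: Int) g   -- infinite lazy list of randoms
          print (take 10 xs)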
The problem is that MWC is built on the "primitive" package, which abstracts only IO and strict (Control.Monad.ST.ST s). It does not also abstract lazy (Control.Monad.ST.Lazy.ST s).
Perhaps one could make instances for "primitive" to cover lazy ST and then MWC could be lazy.
UPDATE: I can make this work using Control.Monad.ST.Lazy by using strictToLazyST:
module Main where
import Control.Monad(replicateM)
import qualified Control.Monad.ST as S
import qualified Control.Monad.ST.Lazy as L
import qualified System.Random.MWC as A
foo :: Int -> L.ST s [Int]
foo i = do rest <- foo $! succ i
           return (i:rest)

splam :: A.Gen s -> S.ST s Int
splam = A.uniformR (0,100)

getS :: Int -> S.ST s [Int]
getS n = do gen <- A.create
            replicateM n (splam gen)

getL :: Int -> L.ST s [Int]
getL n = do gen <- createLazy
            replicateM n (L.strictToLazyST (splam gen))

createLazy :: L.ST s (A.Gen s)
createLazy = L.strictToLazyST A.create

makeLots :: A.Gen s -> L.ST s [Int]
makeLots gen = do x <- L.strictToLazyST (A.uniformR (0,100) gen)
                  rest <- makeLots gen
                  return (x:rest)

main = do
    print (S.runST (getS 8))
    print (L.runST (getL 8))
    let inf = L.runST (foo 0) :: [Int]
    print (take 10 inf)
    let inf3 = L.runST (createLazy >>= makeLots) :: [Int]
    print (take 10 inf3)
(This would be better suited as a comment to Heatsink's answer, but it's a bit too long.)
MonadFix instances must adhere to several laws. One of them is left shrinking/tightening:
mfix (\x -> a >>= \y -> f x y) = a >>= \y -> mfix (\x -> f x y)
This law allows us to rewrite your expression as
mfix (\xs -> uniform >>= \x -> return (x:xs))
= uniform >>= \x -> mfix (\xs -> return (x:xs))
= uniform >>= \x -> mfix (return . (x :))
Using another law, purity mfix (return . h) = return (fix h), we can further simplify to
= uniform >>= \x -> return (fix (x :))
and using the standard monad laws and rewriting fix (x :) as repeat x
= liftM (\x -> fix (x :)) uniform
= liftM repeat uniform
Therefore, the result is indeed one invocation of uniform and then just repeating the single value indefinitely.
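As a sanity check of that conclusion, here is a small sketch using the IO analogue from the answer above (randomIO standing in for uniform); both lines print five copies of a single draw:

import Control.Monad (liftM)
import Control.Monad.Fix (mfix)
import System.Random (randomIO)

main :: IO ()
main = do rs <- mfix (\xs -> randomIO >>= \x -> return (x : xs :: [Int]))
          print (take 5 rs)                         -- one draw, repeated
          rs' <- liftM repeat (randomIO :: IO Int)  -- the simplified form derived above
          print (take 5 rs')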
From time to time I stumble over the problem that I want to express "please use the last argument twice", e.g. in order to write pointfree style or to avoid a lambda. E.g.
sqr x = x * x
could be written as
sqr = doubleArgs (*) where
    doubleArgs f x = f x x
Or consider this slightly more complicated function (taken from this question):
ins x xs = zipWith (\ a b -> a ++ (x:b)) (inits xs) (tails xs)
I could write this code pointfree if there were a function like this:
ins x = dup (zipWith (\ a b -> a ++ (x:b))) inits tails where
    dup f f1 f2 x = f (f1 x) (f2 x)
But I can't find something like doubleArgs or dup on Hoogle, so I guess that I might be missing a trick or idiom here.
From Control.Monad:
join :: (Monad m) => m (m a) -> m a
join m = m >>= id
instance Monad ((->) r) where
    return = const
    m >>= f = \x -> f (m x) x
Expanding:
join :: (a -> a -> b) -> (a -> b)
join f = f >>= id
       = \x -> id (f x) x
       = \x -> f x x
So, yeah, Control.Monad.join.
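So the first example from the question becomes, for instance:

import Control.Monad (join)

sqr :: Num a => a -> a
sqr = join (*)   -- join (*) x = (*) x x = x * x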
Oh, and for your pointfree example, have you tried using applicative notation (from Control.Applicative):
ins x = zipWith (\a b -> a ++ (x:b)) <$> inits <*> tails
(I also don't know why people are so fond of a ++ (x:b) instead of a ++ [x] ++ b... it's not faster -- the inliner will take care of it -- and the latter is so much more symmetrical! Oh well)
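For completeness, a quick GHCi check of the applicative version (a sketch, with inits and tails imported from Data.List):

ghci> import Data.List (inits, tails)
ghci> let ins x = zipWith (\a b -> a ++ (x:b)) <$> inits <*> tails
ghci> ins 0 [1,2,3]
[[0,1,2,3],[1,0,2,3],[1,2,0,3],[1,2,3,0]]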
What you call 'doubleArgs' is more often called dup - it is the W combinator (called warbler in To Mock a Mockingbird) - "the elementary duplicator".
What you call 'dup' is actually the 'starling-prime' combinator.
Haskell has a fairly small "combinator basis" (see Data.Function), plus some Applicative and Monadic operations add more "standard" combinators by virtue of the function instances for Applicative and Monad (<*> from Applicative is the S - starling combinator for the functional instance, liftA2 & liftM2 are starling-prime). There doesn't seem to be much enthusiasm in the community for expanding Data.Function, so whilst combinators are good fun, pragmatically I've come to prefer long-hand in situations where a combinator is not directly available.
Here is another solution for the second part of my question: Arrows!
import Control.Arrow
ins x = inits &&& tails >>> second (map (x:)) >>> uncurry (zipWith (++))
The &&& ("fanout") distributes an argument to two functions and returns the pair of the results. >>> ("and then") reverses the function application order, which allows to have a chain of operations from left to right. second works only on the second part of a pair. Of course you need an uncurry at the end to feed the pair in a function expecting two arguments.
UPDATE: Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
Does mapM not lazily deal with infinite lists?
The code below hangs. However, if I replace line A by line B, it doesn't hang anymore. Alternatively, if I precede line A with "splitRandom $", it also doesn't hang.
Q1 is: Is mapM not lazy? Otherwise, why does replacing line A with line B "fix" this code?
Q2 is: Why does preceding line A with splitRandom "solve" the problem?
import Control.Monad.Random
import Control.Applicative
f :: (RandomGen g) => Rand g (Double, [Double])
f = do
    b <- splitRandom $ sequence $ repeat $ getRandom
    c <- mapM return b -- A
    -- let c = map id b -- B
    a <- getRandom
    return (a, c)

splitRandom :: (RandomGen g) => Rand g a -> Rand g a
splitRandom code = evalRand code <$> getSplit

t0 = do
    (a, b) <- evalRand f <$> newStdGen
    print a
    print (take 3 b)
The code generates an infinite list of random numbers lazily. Then it generates a single random number. By using splitRandom, I can evaluate this latter random number first before the infinite list. This can be demonstrated if I return b instead of c in the function.
However, if I apply the mapM to the list, the program now hangs. To prevent this hanging, I have to apply splitRandom again before the mapM. I was under the impression that mapM could process the list lazily.
Well, there's lazy, and then there's lazy. mapM is indeed lazy in that it doesn't do more work than it has to. However, look at the type signature:
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
Think about what this means: You give it a function a -> m b and a bunch of as. A regular map can turn those into a bunch of m bs, but not an m [b]. The only way to combine the bs into a single [b] without the monad getting in the way is to use >>= to sequence the m bs together to construct the list.
In fact, mapM f is precisely equivalent to sequence . map f.
In general, for any monadic expression, if the value is used at all, the entire chain of >>=s leading to the expression must be forced, so applying sequence to an infinite list can't ever finish.
If you want to work with an unbounded monadic sequence, you'll either need explicit flow control--e.g., a loop termination condition baked into the chain of binds somehow, which simple recursive functions like mapM and sequence don't provide--or a step-by-step sequence, something like this:
data Stream m a = Nil | Stream a (m (Stream m a))
...so that you only force as many monad layers as necessary.
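For illustration, here is one way such a stream could be produced and consumed; takeStream and randomStream are hypothetical helpers (not from any library), and only as many monadic layers are run as elements are demanded:

import System.Random (randomIO)

data Stream m a = Nil | Stream a (m (Stream m a))   -- the type from above, repeated so this sketch is self-contained

takeStream :: Monad m => Int -> Stream m a -> m [a]
takeStream n _ | n <= 0    = return []
takeStream _ Nil           = return []
takeStream 1 (Stream a _)  = return [a]
takeStream n (Stream a ma) = do s  <- ma
                                as <- takeStream (n - 1) s
                                return (a : as)

randomStream :: IO (Stream IO Int)
randomStream = do x <- randomIO
                  return (Stream x randomStream)

main :: IO ()
main = print =<< (takeStream 5 =<< randomStream)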
Edit: Regarding splitRandom, what's going on there is that you're passing it a Rand computation, evaluating that with the seed splitRandom gets, then returning the result. Without the splitRandom, the seed used by the single getRandom has to come from the final result of sequencing the infinite list, hence it hangs. With the extra splitRandom, the seed used only needs to thread through the two splitRandom calls, so it works. The final list of random numbers works because you've left the Rand monad at that point and nothing depends on its final state.
Okay this question becomes potentially very straightforward.
q <- mapM return [1..]
Why does this never return?
It's not necessarily true. It depends on the monad you're in.
For example, with the identity monad, you can use the result lazily and it terminates fine:
newtype Identity a = Identity a

-- (Functor and Applicative instances are required by modern GHC for a Monad instance.)
instance Functor Identity where
    fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
    pure = Identity
    Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
    Identity x >>= k = k x

-- "foo" is the infinite list of all the positive integers
foo :: [Integer]
Identity foo = do
    q <- mapM return [1..]
    return q

main :: IO ()
main = print $ take 20 foo -- [1 .. 20]
Here's an attempt at a proof that mapM return [1..] doesn't terminate. Let's assume for the moment that we're in the Identity monad (the argument will apply to any other monad just as well):
mapM return [1..]                          -- initial expression
sequence (map return [1..])                -- unfold mapM
let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
in foldr k (return []) (map return [1..])  -- unfold sequence
So far so good...
-- unfold foldr
let k m m' = m >>= \x ->
             m' >>= \xs ->
             return (x : xs)
    go []     = return []
    go (y:ys) = k y (go ys)
in go (map return [1..])
-- unfold map so we have enough of a list to pattern-match go:
go (return 1 : map return [2..])
-- unfold go:
k (return 1) (go (map return [2..]))
-- unfold k:
(return 1) >>= \x -> go (map return [2..]) >>= \xs -> return (x:xs)
Recall that in the Identity monad, return a = Identity a and (Identity a) >>= f = f a. Continuing:
-- unfold >>= :
(\x -> go (map return [2..]) >>= \xs -> return (x:xs)) 1
-- apply 1 to \x -> ... :
go (map return [2..]) >>= \xs -> return (1:xs)
-- unfold >>= :
(\xs -> return (1:xs)) (go (map return [2..]))
Note that at this point we'd love to apply to \xs, but we can't yet! We have to instead continue unfolding until we have a value to apply:
-- unfold map for go:
(\xs -> return (1:xs)) (go (return 2 : map return [3..]))
-- unfold go:
(\xs -> return (1:xs)) (k (return 2) (go (map return [3..])))
-- unfold k:
(\xs -> return (1:xs)) ((return 2) >>= \x2 ->
                        (go (map return [3..])) >>= \xs2 ->
                        return (x2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\x2 -> (go (map return [3..])) >>= \xs2 ->
                                return (x2:xs2)) 2)
At this point, we still can't apply to \xs, but we can apply to \x2. Continuing:
-- apply 2 to \x2 :
(\xs -> return (1:xs)) ((go (map return [3..])) >>= \xs2 ->
                        return (2:xs2))
-- unfold >>= :
(\xs -> return (1:xs)) ((\xs2 -> return (2:xs2)) (go (map return [3..])))
Now we've gotten to a point where neither \xs nor \xs2 can be reduced yet! Our only choice is:
-- unfold map for go, and so on...
(\xs -> return (1:xs))
  ((\xs2 -> return (2:xs2))
    (go ((return 3) : (map return [4..]))))
So you can see that, because of foldr, we're building up a series of functions to apply, starting from the end of the list and working our way back up. Because at each step the input list is infinite, this unfolding will never terminate and we will never get an answer.
This makes sense if you look at this example (borrowed from another StackOverflow thread, I can't find which one at the moment). In the following list of monads:
mebs = [Just 3, Just 4, Nothing]
we would expect sequence to catch the Nothing and return a failure for the whole thing:
sequence mebs = Nothing
However, for this list:
mebs2 = [Just 3, Just 4]
we would expect sequence to give us:
sequence mebs2 = Just [3, 4]
In other words, sequence has to see the whole list of monadic computations, string them together, and run them all in order to come up with the right answer. There's no way sequence can give an answer without seeing the whole list.
Note: The previous version of this answer asserted that foldr computes starting from the back of the list, and wouldn't work at all on infinite lists, but that's incorrect! If the operator you pass to foldr is lazy on its second argument and produces output with a lazy data constructor like a list, foldr will happily work with an infinite list. See foldr (\x xs -> (replicate x x) ++ xs) [] [1..] for an example. But that's not the case with our operator k.
This question shows very well the difference between the IO Monad and other Monads. In the background, mapM builds an expression with a bind operation (>>=) between all the list elements, to turn the list of monadic expressions into a monadic expression of a list. Now, what is different in the IO monad is that Haskell's execution model actually runs the actions during the bind in the IO Monad. This is exactly what finally forces (in a purely lazy world) something to be executed at all.
So the IO Monad is special in a way: it uses the sequencing paradigm of bind to actually enforce execution of each step, and this is what makes our program execute anything at all in the end. Other Monads are different. They have other meanings for the bind operator, depending on the Monad. IO is actually the one Monad which executes things in the bind, and this is the reason why IO types are "actions".
The following example shows that other Monads do not enforce execution, the Maybe monad for example. Finally this leads to the result that mapM in the IO Monad returns an expression which - when executed - executes each single element before returning the final value.
There are nice papers about this, start here or search for denotational semantics and Monads:
Tackling the awkward squad: http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/mark.pdf
Example with Maybe Monad:
module Main where
fstMaybe :: [Int] -> Maybe [Int]
fstMaybe = mapM (\x -> if x == 3 then Nothing else Just x)
main = do
    let r = fstMaybe [1..]
    return r
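If we do force the result (a quick GHCi check, with the module above loaded), the Maybe computation short-circuits at the first Nothing, so even the infinite list gives an answer:

ghci> fstMaybe [1, 2]
Just [1,2]
ghci> fstMaybe [1..]
Nothing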
Let's talk about this in a more generic context.
As the other answers said, mapM is just a combination of sequence and map. So the problem is why sequence is strict in certain Monads. However, this is not restricted to Monads but also applies to Applicatives, since we have sequenceA, which shares the same implementation as sequence in most cases.
Now look at the (specialized for lists) type signature of sequenceA:
sequenceA :: Applicative f => [f a] -> f [a]
How would you do this? You were given a list, so you would like to use foldr on this list.
sequenceA = foldr f b where ...
--f :: f a -> f [a] -> f [a]
--b :: f [a]
Since f is an Applicative, you know what b could be - pure []. But what is f?
Obviously it is a lifted version of (:):
(:) :: a -> [a] -> [a]
So now we know how sequenceA works:
sequenceA = foldr f b where
    f a b = (:) <$> a <*> b
    b = pure []
or
sequenceA = foldr ((<*>) . fmap (:)) (pure [])
Assume you were given a lazy list (x:_|_). The above definition of sequenceA gives
sequenceA (x:_|_) === (:) <$> x <*> foldr ((<*>) . fmap (:)) (pure []) _|_
                  === (:) <$> x <*> _|_
So now we see the problem reduces to considering whether f <*> _|_ is _|_ or not. Obviously if f is strict this is _|_, but if f is not strict, to allow a stop of evaluation we require <*> itself to be non-strict.
So the criterion for an applicative functor to have a sequenceA that can stop early on a partially defined (or infinite) list is that its <*> operator be non-strict. A simple test would be
const a <$> _|_ === _|_ ====> strict sequenceA
-- remember f <$> a === pure f <*> a
If we are talking about Monads, the criterion is
_|_ >>= const a === _|_ ===> strict sequence
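To see the criterion in action, here is a quick sketch contrasting a non-strict (<*>) (Identity, whose newtype-based instance never forces its arguments) with a strict one (Maybe, whose (<*>) must inspect its arguments) on an infinite list:

ghci> import Data.Functor.Identity
ghci> take 5 . runIdentity . sequenceA $ map Identity [1..]
[1,2,3,4,5]
ghci> sequenceA (map Just [1..])
^CInterrupted.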