Applicatives compose, monads don't - haskell

Applicatives compose, monads don't.
What does the above statement mean? And when is one preferable to the other?

If we compare the types
(<*>) :: Applicative a => a (s -> t) -> a s -> a t
(>>=) :: Monad m => m s -> (s -> m t) -> m t
we get a clue to what separates the two concepts. That (s -> m t) in the type of (>>=) shows that a value in s can determine the behaviour of a computation in m t. Monads allow interference between the value and computation layers. The (<*>) operator allows no such interference: the function and argument computations don't depend on values. This really bites. Compare
miffy :: Monad m => m Bool -> m x -> m x -> m x
miffy mb mt mf = do
  b <- mb
  if b then mt else mf
which uses the result of some effect to decide between two computations (e.g. launching missiles and signing an armistice), whereas
iffy :: Applicative a => a Bool -> a x -> a x -> a x
iffy ab at af = pure cond <*> ab <*> at <*> af
  where cond b t f = if b then t else f
which uses the value of ab to choose between the values of two computations at and af, having carried out both, perhaps to tragic effect.
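To see the difference concretely, try both in Maybe (a small illustration of my own, not from the original answer):
miffy (Just True) (Just 1) Nothing   -- Just 1: the untaken branch never runs
iffy  (Just True) (Just 1) Nothing   -- Nothing: both branches are carried out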
The monadic version relies essentially on the extra power of (>>=) to choose a computation from a value, and that can be important. However, supporting that power makes monads hard to compose. If we try to build ‘double-bind’
(>>>>==) :: (Monad m, Monad n) => m (n s) -> (s -> m (n t)) -> m (n t)
mns >>>>== f = mns >>= \ ns ->                   -- bind in m
                 let nmnt = ns >>= (return . f)  -- bind in n
                 in ???                          -- nmnt :: n (m (n t))
we get this far, but now our layers are all jumbled up. We have an n (m (n t)), so we need to get rid of the outer n. As Alexandre C says, we can do that if we have a suitable
swap :: n (m t) -> m (n t)
to permute the n inwards and join it to the other n.
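For instance, when n happens to be Traversable, sequenceA provides exactly such a swap; here is a minimal sketch of my own under that assumption:
import Control.Monad (join)

bindCompose :: (Monad m, Monad n, Traversable n)
            => m (n s) -> (s -> m (n t)) -> m (n t)
bindCompose mns f =
  mns >>= \ ns ->                   -- bind in m; ns :: n s
    let nmnt = fmap f ns            -- nmnt :: n (m (n t))
    in fmap join (sequenceA nmnt)   -- sequenceA plays the role of swap:
                                    -- n (m (n t)) -> m (n (n t));
                                    -- the final join (in n) flattens n (n t)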
The weaker ‘double-apply’ is much easier to define
(<<**>>) :: (Applicative a, Applicative b) => a (b (s -> t)) -> a (b s) -> a (b t)
abf <<**>> abs = pure (<*>) <*> abf <*> abs
because there is no interference between the layers.
Correspondingly, it's good to recognize when you really need the extra power of Monads, and when you can get away with the rigid computation structure that Applicative supports.
Note, by the way, that although composing monads is difficult, it might be more than you need. The type m (n v) indicates computing with m-effects, then computing with n-effects to a v-value, where the m-effects finish before the n-effects start (hence the need for swap). If you just want to interleave m-effects with n-effects, then composition is perhaps too much to ask!

Applicatives compose, monads don't.
Monads do compose, but the result might not be a monad.
In contrast, the composition of two applicatives is necessarily an applicative.
I suspect the intention of the original statement was that "Applicativeness composes, while monadness doesn't." Rephrased, "Applicative is closed under composition, and Monad is not."

If you have applicatives A1 and A2, then the type data A3 a = A3 (A1 (A2 a)) is also applicative (you can write such an instance in a generic way).
On the other hand, if you have monads M1 and M2 then the type data M3 a = M3 (M1 (M2 a)) is not necessarily a monad (there is no sensible generic implementation for >>= or join for the composition).
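For reference, here is a sketch of that generic instance (essentially what base ships as Data.Functor.Compose):
import Control.Applicative (liftA2)

newtype A3 f g a = A3 (f (g a))

instance (Functor f, Functor g) => Functor (A3 f g) where
  fmap h (A3 x) = A3 (fmap (fmap h) x)

instance (Applicative f, Applicative g) => Applicative (A3 f g) where
  pure = A3 . pure . pure
  A3 h <*> A3 x = A3 (liftA2 (<*>) h x)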
One example can be the type [Int -> a] (here we compose a type constructor [] with (->) Int, both of which are monads). You can easily write
app :: [Int -> (a -> b)] -> [Int -> a] -> [Int -> b]
app f x = (<*>) <$> f <*> x
And that generalizes to any applicative:
app :: (Applicative f, Applicative f1) => f (f1 (a -> b)) -> f (f1 a) -> f (f1 b)
But there is no sensible definition of
join :: [Int -> [Int -> a]] -> [Int -> a]
If you're unconvinced of this, consider this expression:
join [\x -> replicate x (const ())]
The length of the returned list must be set in stone before an integer is ever provided, but the correct length of it depends on the integer that's provided. Thus, no correct join function can exist for this type.

Unfortunately, our real goal, composition of monads, is rather more difficult. ... In fact, we can actually prove that, in a certain sense, there is no way to construct a join function with the type above using only the operations of the two monads (see the appendix for an outline of the proof). It follows that the only way that we might hope to form a composition is if there are some additional constructions linking the two components.
Composing monads, http://web.cecs.pdx.edu/~mpj/pubs/RR-1004.pdf

The distributive law solution
A distributive law
l : MN -> NM
is enough to guarantee monadicity of NM. To see this you need a unit and a mult; I'll focus on the mult (the unit is unit_N unit_M):
NMNM --(N l M)--> NNMM --(mult_N mult_M)--> NM
This does not guarantee that MN is a monad.
The crucial observation, however, comes into play when you have distributive law solutions
l1 : ML -> LM
l2 : NL -> LN
l3 : NM -> MN
Thus, LM, LN and MN are monads. The question arises as to whether LMN is a monad, either by
(MN)L -> L(MN)
or by
N(LM) -> (LM)N.
We have enough structure to make these maps. However, as Eugenia Cheng observes, we need a hexagonal condition (which amounts to a presentation of the Yang-Baxter equation) to guarantee monadicity of either construction. In fact, with the hexagonal condition, the two different monads coincide.

Any two applicative functors can be composed and yield another applicative functor. But this does not work with monads. A composition of two monads is not always a monad. For example, a composition of State and List monads (in any order) is not a monad.
Moreover, one cannot combine two monads in general, whether by composition or by any other method. There is no known algorithm or procedure that combines any two monads M, N into a larger, lawful monad T so that you can inject M ~> T and N ~> T by monad morphisms and satisfy reasonable non-degeneracy laws (e.g., to guarantee that T is not just a unit type that discards all effects from M and N).
It is possible to define a suitable T for specific M and N, for example for M = Maybe and N = State s, and so on. But it is unknown how to define T that would work parametrically in the monads M and N. Neither functor composition nor more complicated constructions work adequately.
One way of combining monads M and N is to first define the coproduct C a = Either (M a) (N a). This C will be a functor but, in general, not a monad. Then one constructs a free monad (Free C) on the functor C. The result is a monad that is able to represent the effects of M and N combined. However, it is a much larger monad that can also represent other effects; it is much larger than just a combination of the effects of M and N. Also, the free monad will need to be "run" or "interpreted" in order to extract any results (and the monad laws are guaranteed only after "running"). There will be a run-time penalty as well as a memory-size penalty, because the free monad will potentially build very large structures in memory before it is "run". If these drawbacks are not significant, the free monad is the way to go.
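A sketch of that construction, with my own names (the free package provides Free in Control.Monad.Free):
import Control.Monad (ap)

newtype Coproduct m n a = Coproduct (Either (m a) (n a))

instance (Functor m, Functor n) => Functor (Coproduct m n) where
  fmap f (Coproduct (Left  ma)) = Coproduct (Left  (fmap f ma))
  fmap f (Coproduct (Right na)) = Coproduct (Right (fmap f na))

data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Roll x) = Roll (fmap (fmap f) x)

instance Functor f => Applicative (Free f) where
  pure  = Pure
  (<*>) = ap

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Roll x >>= k = Roll (fmap (>>= k) x)

-- Inject an effect from the first monad (the second is symmetric):
injectL :: Functor m => m a -> Free (Coproduct m n) a
injectL ma = Roll (Coproduct (Left (fmap Pure ma)))
An interpreter then folds Free (Coproduct m n) into some target monad, which is where the monad laws take effect.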
Another way of combining monads is to take one monad's transformer and apply it to the other monad. But there is no algorithmic way of taking a definition of a monad (e.g., type and code in Haskell) and producing the type and code of the corresponding transformer.
There are at least 4 different classes of monads whose transformers are constructed in completely different but regular ways (composed-inside, composed-outside, adjunction-based monad, product monad). A few other monads do not belong to any of these "regular" classes and have transformers defined "ad hoc" in some way.
Distributive laws exist only for composed monads. It is misleading to think that any two monads M, N for which one can define some function M (N a) -> N (M a) will compose. In addition to defining a function with this type signature, one needs to prove that certain laws hold. In many cases, these laws do not hold.
There are even some monads that have two inequivalent transformers; one defined in a "regular" way and one "ad hoc". A simple example is the identity monad Id a = a; it has the regular transformer IdT m = m ("composed") and the irregular "ad hoc" one: IdT2 m a = forall r. (a -> m r) -> m r (the codensity monad on m).
A more complicated example is the "selector monad": Sel q a = (a -> q) -> a. Here q is a fixed type and a is the main type parameter of the monad Sel q. This monad has two transformers: SelT1 m a = (m a -> q) -> m a (composed-inside) and SelT2 m a = (a -> m q) -> m a (ad hoc).
Full details are worked out in Chapter 14 of the book "The Science of Functional Programming". https://github.com/winitzki/sofp or https://leanpub.com/sofp/

Here is some code making monad composition via a distributive law work. Note that there are distributive laws from any monad to the monads Maybe, Either, Writer and []. On the other hand, you won't find such (general) distributive laws into Reader and State. For these, you will need monad transformers.
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
module ComposeMonads where

import Control.Monad
import Control.Monad.Writer.Lazy

newtype Compose m1 m2 a = Compose { run :: m1 (m2 a) }

instance (Functor f1, Functor f2) => Functor (Compose f1 f2) where
  fmap f = Compose . fmap (fmap f) . run

class (Monad m1, Monad m2) => DistributiveLaw m1 m2 where
  dist :: m2 (m1 a) -> m1 (m2 a)

instance (Monad m1, Monad m2, DistributiveLaw m1 m2)
      => Applicative (Compose m1 m2) where
  pure = Compose . return . return
  (<*>) = ap

instance (Monad m1, Monad m2, DistributiveLaw m1 m2)
      => Monad (Compose m1 m2) where
  Compose m1m2a >>= g =
    Compose $ do m2a <- m1m2a                    -- in monad m1
                 m2m2b <- dist $ do a <- m2a     -- in monad m2
                                    let Compose m1m2b = g a
                                    return m1m2b
                 -- do ...   :: m2 (m1 (m2 b))
                 -- dist ... :: m1 (m2 (m2 b))
                 return $ join m2m2b             -- in monad m2

instance Monad m => DistributiveLaw m Maybe where
  dist Nothing  = return Nothing
  dist (Just m) = fmap Just m

instance Monad m => DistributiveLaw m (Either s) where
  dist (Left s)  = return $ Left s
  dist (Right m) = fmap Right m

instance Monad m => DistributiveLaw m [] where
  dist = sequence

instance (Monad m, Monoid w) => DistributiveLaw m (Writer w) where
  dist m = let (m1, w) = runWriter m
           in do a <- m1
                 return $ writer (a, w)

liftOuter :: (Monad m1, Monad m2, DistributiveLaw m1 m2) =>
             m1 a -> Compose m1 m2 a
liftOuter = Compose . fmap return

liftInner :: (Monad m1, Monad m2, DistributiveLaw m1 m2) =>
             m2 a -> Compose m1 m2 a
liftInner = Compose . return
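For example (a usage sketch of my own, building on the module above): DistributiveLaw m Maybe holds for every monad m, so Compose IO Maybe is a monad and do-notation just works:
example :: Compose IO Maybe Int
example = do x <- liftOuter (return 1 :: IO Int)  -- an effect in the outer monad
             y <- liftInner (Just 2)              -- an effect in the inner monad
             return (x + y)
-- run example :: IO (Maybe Int), producing Just 3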

Related

Is the monad transformer of a monad unique in Haskell?

There have been a couple of questions (e.g. this and this) asking whether every monad in Haskell (other than IO) has a corresponding monad transformer. Now I would like to ask a complementary question. Does every monad have exactly one transformer (or none as in the case of IO) or can it have more than one transformer?
A counterexample would be two monad transformers that produce monads behaving identically when applied to the identity monad, but produce differently behaving monads when applied to some other monad. If the answer is that a monad can have more than one transformer, I would like to have a Haskell example which is as simple as possible. These don't have to be actually useful transformers (though that would be interesting).
Some of the answers in the linked question seemed to suggest that a monad could have more than one transformer. However, I don't know much category theory beyond the basic definition of a category so I wasn't sure whether they are an answer to this question.
Here's one idea for a counterexample to uniqueness. We know that in general, monads don't compose... but we also know that if there's an appropriate swap operation, you can compose them[1]! Let's make a class for monads that can swap with themselves.
import Control.Monad (join)
import Control.Monad.Trans.Class (MonadTrans (..))
import Data.Functor.Identity (Identity (..))

-- | laws (from [1]):
-- swap . fmap (fmap f) = fmap (fmap f) . swap
-- swap . pure = fmap pure
-- swap . fmap pure = pure
-- fmap join . swap . fmap (join . fmap swap) = join . fmap swap . fmap join . swap
class Monad m => Swap m where
  swap :: m (m a) -> m (m a)

instance Swap Identity where swap = id

instance Swap Maybe where
  swap Nothing         = Just Nothing
  swap (Just Nothing)  = Nothing
  swap (Just (Just x)) = Just (Just x)
Then we can build a monad transformer that composes a monad with itself, like so:
newtype Twice m a = Twice { runTwice :: m (m a) }
Hopefully it should be obvious what pure and (<$>) do. Rather than defining (>>=) directly, I'll define join, as I think it's a bit more obvious what's going on; (>>=) is then derived from it. Since join is not a Monad class method, it goes in a standalone function:
joinTwice :: Swap m => Twice m (Twice m a) -> Twice m a
joinTwice = Twice                          -- rewrap newtype
          . fmap join . join . fmap swap   -- from [1]
          . runTwice . fmap runTwice       -- unwrap newtype

instance Swap m => Monad (Twice m) where
  m >>= k = joinTwice (fmap k m)
instance MonadTrans Twice where lift = Twice . pure
I haven't checked that lift obeys the MonadTrans laws for all Swap instances, but I did check them for Identity and Maybe.
Now, we have
IdentityT Identity ~= Identity ~= Twice Identity
IdentityT Maybe ~= Maybe !~= Twice Maybe
which shows that IdentityT is not a unique monad transformer for producing Identity.
[1] Composing monads by Mark P. Jones and Luc Duponcheel
The identity monad has at least two monad transformers: the identity monad transformer and the codensity monad transformer.
newtype IdentityT m a = IdentityT (m a)
newtype Codensity m a = Codensity (forall r. (a -> m r) -> m r)
Indeed, considering Codensity Identity, forall r. (a -> r) -> r is isomorphic to a.
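A quick witness of that isomorphism (my own sketch, using the Codensity newtype above and Identity from Data.Functor.Identity):
toCodensity :: a -> Codensity Identity a
toCodensity a = Codensity (\k -> k a)

fromCodensity :: Codensity Identity a -> a
fromCodensity (Codensity f) = runIdentity (f Identity)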
These monad transformers are quite different. One example is that "bracket" can be defined as a monadic action in Codensity:
bracket :: Monad m => m () -> m () -> Codensity m ()
bracket before after = Codensity (\k -> before *> k () *> after)
whereas transposing that signature to IdentityT doesn't make much sense
bracket :: Monad m => m () -> m () -> IdentityT m () -- cannot implement the same functionality
Other examples can be found similarly from variants of the continuation/codensity monad, though I don't see a general scheme yet.
The state monad corresponds to the state monad transformer and to the composition of Codensity and ReaderT:
newtype StateT s m a = StateT (s -> m (s, a))
newtype CStateT s m a = CStateT (Codensity (ReaderT s m) a)
The list monad corresponds to at least three monad transformers, not including the wrong one:
newtype ListT m a = ListT (m (Maybe (a, ListT m a))) -- list-t
newtype LogicT m a = LogicT (forall r. (a -> m r -> m r) -> m r -> m r) -- logict
newtype MContT m a = MContT (forall r. Monoid r => (a -> m r) -> m r)
The first two can be found respectively in the packages list-t (also in an equivalent form in pipes) and logict.
There is another example of a monad that has two inequivalent transformers: the "selection" monad.
type Sel r a = (a -> r) -> a
The monad is not well known but there are papers that mention it. Here is a package that refers to some papers:
https://hackage.haskell.org/package/transformers-0.6.0.4/docs/Control-Monad-Trans-Select.html
That package implements one transformer:
type SelT r m a = (a -> m r) -> m a
But there exists a second transformer:
type Sel2T r m a = (m a -> r) -> m a
Proving laws for this transformer is more difficult but I have done it.
An advantage of the second transformer is that it is covariant in m, so the hoist function can be defined:
hoist :: (m a -> n a) -> Sel2T r m a -> Sel2T r n a
The second transformer is "fully featured", has all lifts and "hoist". The first transformer is not fully featured; for example, you cannot define blift :: Sel r a -> SelT r m a. In other words, you cannot embed monadic computations from Sel into SelT, just like you can't do that with the Continuation monad and the Codensity monad.
But with the Sel2T transformer, all lifts exist and you can embed Sel computations into Sel2T.
This example shows a monad with two transformers without using the Codensity construction in any way.
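For reference, here is a sketch of the Monad instance for the selection monad itself (my own rendering, with a newtype wrapper; the transformers package has the official SelectT):
import Control.Monad (ap)

newtype Sel r a = Sel { runSel :: (a -> r) -> a }

instance Functor (Sel r) where
  fmap f (Sel g) = Sel (\p -> f (g (p . f)))

instance Applicative (Sel r) where
  pure x = Sel (const x)
  (<*>)  = ap

instance Monad (Sel r) where
  Sel g >>= k = Sel $ \p ->
    let h a = runSel (k a) p   -- how each candidate 'a' plays out as a 'b'
    in h (g (p . h))           -- select the 'a' whose outcome scores best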

What is the 'minimum' needed to make an Applicative a Monad?

The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join since m >>= k = join (fmap k m) so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
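Conversely, the usual bind is recoverable from t, joinT, and (<*>) (a quick sketch using the hypothetical names above):
m >>= k = joinT (fmap k m) <*> t
-- fmap k m         :: m (m b)
-- joinT (fmap k m) :: m (T -> b)
-- ... <*> t        :: m b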
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r

Use of a monad superclass in a monad instance declaration?

I'm implementing a very simple poor man's concurrency structure with the following data type:
data C m a = Atomic (m (C m a)) | Done a
I'm creating an instance of monad for this:
instance Monad m => Monad (C m) where
  (>>=) (Atomic m) f = Atomic $ liftM (>>= f) m
  (>>=) (Done v)   f = f v
  return = Done
Q1. Am I right in saying Atomic $ (liftM (>>= f) m) is creating a new Atomic monad which contains the result of f (* -> *) applied to the value inside of m?
Q2. Am I right in saying the superclass Monad m is used here to enable the use of liftM? If so, as this is an instance of the Monad class, why can't this access liftM directly?
Q1. It is creating an Atomic value. A monad is a mapping at the type level. Since f is the second argument of (>>=) of C m a, we know its type
f :: Monad m => a -> C m b
hence
(>>= f) :: Monad m => C m a -> C m b
is f extended to unwrap its argument, and
liftM (>>= f) :: (Monad m1, Monad m) => m1 (C m a) -> m1 (C m b)
simply lifts the transformation into m1, which in your setting is unified with m. When you extract the value m from Atomic and pass it to liftM, you use monad m's bind (via liftM) to extract the inner C m a to be passed to f. The second part of liftM's definition re-wraps the result as an m (C m b), which you wrap in Atomic. So yes.
Q2. Yes. But it is the liftM of the underlying monad m. The liftM for C m a is defined in terms of the instance (its >>= and return). By using C m a's liftM (if you managed to define >>= in terms of liftM) you would get a cyclic definition. You need the constraint Monad m to create the plumbing for m (C m a) to travel through the >>= and f. In fact, as pointed out by Benjamin Hodgson, Monad m is an unnecessarily strong constraint; Functor m would suffice. The fact that C is in fact Free, together with the relevant implementations, gives the most insight into the matter.
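For comparison, here is a sketch of that correspondence (Free as in Control.Monad.Free, spelled out with my own names):
data Free f a = Wrap (f (Free f a)) | Return a

bindFree :: Functor f => Free f a -> (a -> Free f b) -> Free f b
bindFree (Wrap fx)  k = Wrap (fmap (`bindFree` k) fx)   -- cf. the Atomic case
bindFree (Return v) k = k v                             -- cf. the Done case
Note that the Wrap case uses only fmap of f, which is exactly why Functor m would suffice for C.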

Understanding differences between functors, applicative functors and monads? [duplicate]

While explaining to someone what a type class X is, I struggle to find good examples of data structures which are exactly X.
So, I request examples for:
A type constructor which is not a Functor.
A type constructor which is a Functor, but not Applicative.
A type constructor which is an Applicative, but is not a Monad.
A type constructor which is a Monad.
I think there are plenty of examples of Monad everywhere, but a good example of Monad with some relation to the previous examples could complete the picture.
I look for examples which would be similar to each other, differing only in aspects important for belonging to the particular type class.
If one could manage to sneak in an example of Arrow somewhere in this hierarchy (is it between Applicative and Monad?), that would be great too!
A type constructor which is not a Functor:
newtype T a = T (a -> Int)
You can make a contravariant functor out of it, but not a (covariant) functor. Try writing fmap and you'll fail. Note that the contravariant functor version is reversed:
fmap :: Functor f => (a -> b) -> f a -> f b
contramap :: Contravariant f => (a -> b) -> f b -> f a
A type constructor which is a functor, but not Applicative:
I don't have a good example. There is Const, but ideally I'd like a concrete non-Monoid and I can't think of any. All types are basically numeric, enumerations, products, sums, or functions when you get down to it. You can see below pigworker and I disagreeing about whether Data.Void is a Monoid:
instance Monoid Void where
  mempty = undefined
  mappend _ _ = undefined
  mconcat _ = undefined
Since _|_ is a legal value in Haskell, and in fact the only legal value of Data.Void, this meets the Monoid rules. I am unsure what unsafeCoerce has to do with it, because your program is no longer guaranteed not to violate Haskell semantics as soon as you use any unsafe function.
See the Haskell Wiki for an article on bottom (link) or unsafe functions (link).
I wonder if it is possible to create such a type constructor using a richer type system, such as Agda or Haskell with various extensions.
A type constructor which is an Applicative, but not a Monad:
newtype T a = T {multidimensional array of a}
You can make an Applicative out of it, with something like:
mkarray [(+10), (+100), id] <*> mkarray [1, 2]
== mkarray [[11, 101, 1], [12, 102, 2]]
But if you make it a monad, you could get a dimension mismatch. I suspect that examples like this are rare in practice.
A type constructor which is a Monad:
[]
About Arrows:
Asking where an Arrow lies on this hierarchy is like asking what kind of shape "red" is. Note the kind mismatch:
Functor :: * -> *
Applicative :: * -> *
Monad :: * -> *
but,
Arrow :: * -> * -> *
My style may be cramped by my phone, but here goes.
newtype Not x = Kill {kill :: x -> Void}
cannot be a Functor. If it were, we'd have
kill (fmap (const ()) (Kill id)) () :: Void
and the Moon would be made of green cheese.
Meanwhile
newtype Dead x = Oops {oops :: Void}
is a functor
instance Functor Dead where
  fmap f (Oops corpse) = Oops corpse
but cannot be applicative, or we'd have
oops (pure ()) :: Void
and Green would be made of Moon cheese (which can actually happen, but only later in the evening).
(Extra note: Void, as in Data.Void is an empty datatype. If you try to use undefined to prove it's a Monoid, I'll use unsafeCoerce to prove that it isn't.)
Joyously,
newtype Boo x = Boo {boo :: Bool}
is applicative in many ways, e.g., as Dijkstra would have it,
instance Applicative Boo where
  pure _ = Boo True
  Boo b1 <*> Boo b2 = Boo (b1 == b2)
but it cannot be a Monad. To see why not, observe that return must be constantly Boo True or Boo False, and hence that
join . return == id
cannot possibly hold.
Oh yeah, I nearly forgot
newtype Thud x = The {only :: ()}
is a Monad. Roll your own.
Plane to catch...
I believe the other answers missed some simple and common examples:
A type constructor which is a Functor but not an Applicative. A simple example is a pair:
instance Functor ((,) r) where
  fmap f (x, y) = (x, f y)
But there is no way to define its Applicative instance without imposing additional restrictions on r. In particular, there is no way to define pure :: a -> (r, a) for an arbitrary r.
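For contrast, with a Monoid constraint on r the instance becomes possible; this is in fact the instance that base ships:
instance Monoid r => Applicative ((,) r) where
  pure x = (mempty, x)
  (u, f) <*> (v, x) = (u <> v, f x)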
A type constructor which is an Applicative, but is not a Monad. A well-known example is ZipList. (It's a newtype that wraps lists and provides different Applicative instance for them.)
fmap is defined in the usual way. But pure and <*> are defined as
pure x = ZipList (repeat x)
ZipList fs <*> ZipList xs = ZipList (zipWith id fs xs)
so pure creates an infinite list by repeating the given value, and <*> zips a list of functions with a list of values, applying the i-th function to the i-th element. (The standard <*> on [] produces all possible combinations of applying the i-th function to the j-th element.) But there is no sensible way to define a monad (see this post).
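For example, a small illustration of the zipping behaviour:
getZipList (ZipList [(+1), (*10)] <*> ZipList [5, 6, 7])
-- [6, 60]: the i-th function meets the i-th value; the leftover 7 is dropped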
How arrows fit into the functor/applicative/monad hierarchy?
See Idioms are oblivious, arrows are meticulous, monads are promiscuous by Sam Lindley, Philip Wadler, Jeremy Yallop. MSFP 2008. (They call applicative functors idioms.) The abstract:
We revisit the connection between three notions of computation: Moggi's monads, Hughes's arrows and McBride and Paterson's idioms (also called applicative functors). We show that idioms are equivalent to arrows that satisfy the type isomorphism A ~> B = 1 ~> (A -> B) and that monads are equivalent to arrows that satisfy the type isomorphism A ~> B = A -> (1 ~> B). Further, idioms embed into arrows and arrows embed into monads.
A good example for a type constructor which is not a functor is Set: You can't implement fmap :: (a -> b) -> f a -> f b, because without an additional constraint Ord b you can't construct f b.
I'd like to propose a more systematic approach to answering this question, and also to show examples that do not use any special tricks like the "bottom" values or infinite data types or anything like that.
When do type constructors fail to have type class instances?
In general, there are two reasons why a type constructor could fail to have an instance of a certain type class:
Cannot implement the type signatures of the required methods from the type class.
Can implement the type signatures but cannot satisfy the required laws.
Examples of the first kind are easier than those of the second kind because for the first kind, we just need to check whether one can implement a function with a given type signature, while for the second kind, we are required to prove that no implementation could possibly satisfy the laws.
Specific examples
A type constructor that cannot have a functor instance because the type cannot be implemented:
data F z a = F (a -> z)
This is a contrafunctor, not a functor, with respect to the type parameter a, because a is in a contravariant position. It is impossible to implement a function with the type signature (a -> b) -> F z a -> F z b.
A type constructor that is not a lawful functor even though the type signature of fmap can be implemented:
data Q a = Q(a -> Int, a)
fmap :: (a -> b) -> Q a -> Q b
fmap f (Q(g, x)) = Q(\_ -> g x, f x) -- this fails the functor laws!
The curious aspect of this example is that we can implement fmap with the correct type even though Q cannot possibly be a lawful functor, because it uses a in a contravariant position. So the implementation of fmap shown above is misleading: even though it has the correct type signature (I believe it is the only possible implementation of that type signature), the functor laws are not satisfied. For example, fmap id ≠ id, because let (Q(f,_)) = fmap id (Q(read,"123")) in f "456" is 123, but let (Q(f,_)) = id (Q(read,"123")) in f "456" is 456.
In fact, Q is only a profunctor: it is neither a functor nor a contrafunctor.
A lawful functor that is not applicative because the type signature of pure cannot be implemented: take the Writer monad (a, w) and remove the constraint that w should be a monoid. It is then impossible to construct a value of type (a, w) out of a.
A functor that is not applicative because the type signature of <*> cannot be implemented: data F a = F (Either (Int -> a) (String -> a)).
A functor that is not lawful applicative even though the type class methods can be implemented:
data P a = P ((a -> Int) -> Maybe a)
The type constructor P is a functor because it uses a only in covariant positions.
instance Functor P where
  -- fmap :: (a -> b) -> P a -> P b
  fmap fab (P pa) = P (\q -> fmap fab $ pa (q . fab))
The only possible implementation of the type signature of <*> is a function that always returns Nothing:
(<*>) :: P (a -> b) -> P a -> P b
P pfab <*> P pa = P (\_ -> Nothing)   -- fails the laws!
But this implementation does not satisfy the identity law for applicative functors.
A functor that is Applicative but not a Monad because the type signature of bind cannot be implemented.
I do not know any such examples!
A functor that is Applicative but not a Monad because laws cannot be satisfied even though the type signature of bind can be implemented.
This example has generated quite a bit of discussion, so it is safe to say that proving this example correct is not easy. But several people have verified this independently by different methods. See Is `data PoE a = Empty | Pair a a` a monad? for additional discussion.
newtype B a = B (Maybe (a, a))   -- the functor Maybe (a, a), wrapped for the instances

instance Functor B where
  fmap f (B m) = B (fmap (\(x, y) -> (f x, f y)) m)

instance Applicative B where
  pure x = B (Just (x, x))
  B mf <*> B mx = B $ case (mf, mx) of
    (Just (f, g), Just (x, y)) -> Just (f x, g y)
    _                          -> Nothing
It is somewhat cumbersome to prove that there is no lawful Monad instance. The reason for the non-monadic behavior is that there is no natural way of implementing bind when a function f :: a -> B b could return Nothing or Just for different values of a.
It is perhaps clearer to consider Maybe (a, a, a), which is also not a monad, and to try implementing join for that. One will find that there is no intuitively reasonable way of implementing join.
join :: Maybe (Maybe (a, a, a), Maybe (a, a, a), Maybe (a, a, a)) -> Maybe (a, a, a)
join Nothing = Nothing
join (Just (Nothing, Just (x1,x2,x3), Just (y1,y2,y3))) = ???
join (Just (Just (x1,x2,x3), Nothing, Just (y1,y2,y3))) = ???
-- etc.
In the cases indicated by ???, it seems clear that we cannot produce Just (z1, z2, z3) in any reasonable and symmetric manner out of six different values of type a. We could certainly choose some arbitrary subset of these six values - for instance, always take the first nonempty Maybe - but this would not satisfy the laws of the monad. Returning Nothing will also not satisfy the laws.
A tree-like data structure that is not a monad even though it has associativity for bind - but fails the identity laws.
The usual tree-like monad (or "a tree with functor-shaped branches") is defined as
data Tr f a = Leaf a | Branch (f (Tr f a))
This is a free monad over the functor f. The shape of the data is a tree where each branch point is a "functor-ful" of subtrees. The standard binary tree would be obtained with type f a = (a, a).
If we modify this data structure by making also the leaves in the shape of the functor f, we obtain what I call a "semimonad" - it has bind that satisfies the naturality and the associativity laws, but its pure method fails one of the identity laws. "Semimonads are semigroups in the category of endofunctors, what's the problem?" This is the type class Bind.
For simplicity, I define the join method instead of bind:
data Trs f a = Leaf (f a) | Branch (f (Trs f a))

join :: Functor f => Trs f (Trs f a) -> Trs f a
join (Leaf ftrs)      = Branch ftrs
join (Branch ftrstrs) = Branch (fmap join ftrstrs)   -- fmap in the functor f
The branch grafting is standard, but the leaf grafting is non-standard and produces a Branch. This is not a problem for the associativity law but breaks one of the identity laws.
When do polynomial types have monad instances?
Neither of the functors Maybe (a, a) and Maybe (a, a, a) can be given a lawful Monad instance, although they are obviously Applicative.
These functors have no tricks - no Void or bottom anywhere, no tricky laziness/strictness, no infinite structures, and no type class constraints. The Applicative instance is completely standard. The functions return and bind can be implemented for these functors but will not satisfy the laws of the monad. In other words, these functors are not monads because a specific structure is missing (but it is not easy to understand what exactly is missing). As an example, a small change in the functor can make it into a monad: data Maybe a = Nothing | Just a is a monad. Another similar functor, type P12 a = Either a (a, a), is also a monad.
Constructions for polynomial monads
In general, here are some constructions that produce lawful Monads out of polynomial types. In all these constructions, M is a monad:
type M a = Either c (w, a) where w is any monoid
type M a = m (Either c (w, a)) where m is any monad and w is any monoid
type M a = (m1 a, m2 a) where m1 and m2 are any monads
type M a = Either a (m a) where m is any monad
The first construction is WriterT w (Either c), the second construction is WriterT w (EitherT c m). The third construction is a component-wise product of monads: pure #M is defined as the component-wise product of pure #m1 and pure #m2, and join #M is defined by omitting cross-product data (e.g. m1 (m1 a, m2 a) is mapped to m1 (m1 a) by omitting the second part of the tuple):
join :: (m1 (m1 a, m2 a), m2 (m1 a, m2 a)) -> (m1 a, m2 a)
join (m1x, m2x) = (join #m1 (fmap fst m1x), join #m2 (fmap snd m2x))
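Here is a compilable sketch of the third construction (the names Prod and runProd are mine):
import Control.Monad (ap)

newtype Prod m1 m2 a = Prod { runProd :: (m1 a, m2 a) }

instance (Functor m1, Functor m2) => Functor (Prod m1 m2) where
  fmap f (Prod (x, y)) = Prod (fmap f x, fmap f y)

instance (Monad m1, Monad m2) => Applicative (Prod m1 m2) where
  pure a = Prod (pure a, pure a)
  (<*>)  = ap

instance (Monad m1, Monad m2) => Monad (Prod m1 m2) where
  Prod (x, y) >>= k = Prod ( x >>= fst . runProd . k    -- omit the m2 component
                           , y >>= snd . runProd . k )  -- omit the m1 component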
The fourth construction is defined as
newtype M m a = M (Either a (m a))

pureM :: a -> M m a
pureM x = M (Left x)

joinM :: Monad m => M m (M m a) -> M m a
joinM (M (Left mma)) = mma
joinM (M (Right me)) = M (Right (me >>= squash))   -- join #m . fmap #m squash
  where squash (M (Left x))   = return x           -- pure #m
        squash (M (Right ma)) = ma
I have checked that all four constructions produce lawful monads.
I conjecture that there are no other constructions for polynomial monads. For example, the functor Maybe (Either (a, a) (a, a, a, a)) is not obtained through any of these constructions and so is not monadic. However, Either (a, a) (a, a, a) is monadic because it is isomorphic to the product of three monads a, a, and Maybe a. Also, Either (a,a) (a,a,a,a) is monadic because it is isomorphic to the product of a and Either a (a, a, a).
The four constructions shown above will allow us to obtain any sum of any number of products of any number of a's, for example Either (Either (a, a) (a, a, a, a)) (a, a, a, a, a) and so on. All such type constructors will have (at least one) Monad instance.
It remains to be seen, of course, what use cases might exist for such monads. Another issue is that the Monad instances derived via constructions 1-4 are in general not unique. For example, the type constructor type F a = Either a (a, a) can be given a Monad instance in two ways: by construction 4 using the monad (a, a), and by construction 3 using the type isomorphism Either a (a, a) = (a, Maybe a). Again, finding use cases for these implementations is not immediately obvious.
A question remains - given an arbitrary polynomial data type, how to recognize whether it has a Monad instance. I do not know how to prove that there are no other constructions for polynomial monads. I don't think any theory exists so far to answer this question.
