In the following code snippet I initially believed I had run into the restricted-monad problem (I had forgotten the Monad m => constraint in the instance Monad (Transform m a) definition). Having read quite a bit about restricted monads, I wonder why this case happens to be OK:
{-# LANGUAGE GADTs #-}

data Next a where
  Todo :: a -> Next a
  Done :: Next a

instance Functor Next where
  fmap f Done     = Done
  fmap f (Todo a) = Todo (f a)

data Transform m a b = Monad m => Transform (m (Next a) -> m (Next b))

instance Functor (Transform m a) where
  fmap f (Transform ta) = Transform tb where
    tb ma = ta ma >>= return . fmap f

instance Applicative (Transform m a) where
  pure = return
  mf <*> ma = do
    f <- mf
    a <- ma
    return (f a)

instance Monad m => Monad (Transform m a) where
  return b = Transform (t b) where
    t b _ = return $ Todo b
  (Transform t) >>= f = Transform (\ma -> do
    a <- ma
    case a of
      Done -> return Done
      --Todo a' -> ...
    )
The example is rather contrived; I stripped away all irrelevant bits. (The actual problem at hand is related to this.) The crucial part is the Monad m restriction in Transform.
I don't quite see how this is different from the often-cited canonical Set-as-a-monad example, which does exhibit the restricted monad limitation.
Transform is not a restricted monad.
Look at Set. Set is monadic in its one argument, except said argument needs to be Ord. That is, Set is a monad on the subcategory of Hask where all the objects are in Ord.
But Transform is not a monad in the first place. Transform :: (* -> *) -> * -> * -> *, but Monad applies to things of kind * -> * (if you're going to go full category theorist, monads in general are endofunctors and should roughly have kind k -> k for some k, but Transform doesn't really fit that wider template either). The thing that is a monad is Transform m a when m is a monad. Transform m a is a monad on all of Hask, as long as m is also a monad. Do you see the difference? Transform m a, given Monad m, operates on every type there is. But there is nothing I can put in the blank to make "Set, given ___, operates on every type there is" true, because the restriction on Set falls on the very parameter that Set is monadic in, while Transform m a has no restriction on the type it is monadic in, only on one of the types that makes it up.
I don't quite see how this is different from the often-cited canonical Set-as-a-monad example, which does exhibit the restricted monad limitation.
It's different because the constraint isn't on the last type parameter, which is the one which varies in Monad. In the case of Set it is.
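To make the contrast concrete, here is a minimal sketch (assuming Data.Set from the containers package) of why Set fails: the bind-like operation needs Ord on the result type, which is exactly the parameter Monad quantifies over.

import qualified Data.Set as Set

-- A bind for Set can be written, but only with an Ord constraint on b,
-- the type parameter that Monad's (>>=) must serve unconstrained:
bindSet :: Ord b => Set.Set a -> (a -> Set.Set b) -> Set.Set b
bindSet s f = Set.unions (map f (Set.toList s))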
Related
The Monad typeclass can be defined in terms of return and (>>=). However, if we already have a Functor instance for some type constructor f, then this definition is sort of 'more than we need' in that (>>=) and return could be used to implement fmap so we're not making use of the Functor instance we assumed.
In contrast, defining return and join seems like a more 'minimal'/less redundant way to make f a Monad. This way, the Functor constraint is essential because fmap cannot be written in terms of these operations. (Note join is not necessarily the only minimal way to go from Functor to Monad: I think (>=>) works as well.)
Similarly, Applicative can be defined in terms of pure and (<*>), but this definition again does not take advantage of the Functor constraint since these operations are enough to define fmap.
However, Applicative f can also be defined using unit :: f () and (>*<) :: f a -> f b -> f (a, b). These operations are not enough to define fmap so I would say in some sense this is a more minimal way to go from Functor to Applicative.
Is there a characterization of Monad as fmap, unit, (>*<), and some other operator which is minimal in that none of these functions can be derived from the others?
(>>=) does not work, since it can implement a >*< b = a >>= (\ x -> b >>= \ y -> pure (x, y)) where pure x = fmap (const x) unit.
Nor does join, since m >>= k = join (fmap k m) recovers (>>=), so (>*<) can be implemented as above.
I suspect (>=>) fails similarly.
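For reference, here is a sketch of the presentation being discussed, with the class name Monoidal chosen by me; it shows how pure and (<*>) are recovered once fmap is assumed, as the question's derivations use.

-- A hypothetical class for the unit/(>*<) presentation from the question.
class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f a -> f b -> f (a, b)

-- pure and (<*>) fall out, using fmap in an essential way:
pureM :: Monoidal f => a -> f a
pureM x = fmap (const x) unit

apM :: Monoidal f => f (a -> b) -> f a -> f b
apM ff fa = fmap (\(g, a) -> g a) (ff >*< fa)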
I have something, I think. It's far from elegant, but maybe it's enough to get you unstuck, at least. I started with join :: m (m a) -> ??? and asked "what could it produce that would require (<*>) to get back to m a?", which I found a fruitful line of thought that probably has more spoils.
If you introduce a new type T which can only be constructed inside the monad:
t :: m T
Then you could define a join-like operation which requires such a T:
joinT :: m (m a) -> m (T -> a)
The only way we can produce the T we need to get to the sweet, sweet a inside is by using t, and then we have to combine that with the result of joinT somehow. There are two basic operations that can combine two ms into one: (<*>) and joinT -- fmap is no help. joinT is not going to work, because we'll just need yet another T to use its result, so (<*>) is the only option, meaning that (<*>) can't be defined in terms of joinT.
You could roll that all up into an existential, if you prefer.
joinT :: (forall t. m t -> (m (m a) -> m (t -> a)) -> r) -> r
I got the impression that (>>=) (used by Haskell) and join (preferred by mathematicians) are "equal" since one can write one in terms of the other:
import Control.Monad (join)
join x = x >>= id
x >>= f = join (fmap f x)
Additionally every monad is a functor since bind can be used to replace fmap:
fmap f x = x >>= (return . f)
I have the following questions:
Is there a (non-recursive) definition of fmap in terms of join? (fmap f x = join $ fmap (return . f) x follows from the equations above but is recursive.)
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
Is bind more "powerful" than join? And what would "more powerful" mean?
A monad can be either defined in terms of:
return :: a -> m a
bind :: m a -> (a -> m b) -> m b
or alternatively in terms of:
return :: a -> m a
fmap :: (a -> b) -> m a -> m b
join :: m (m a) -> m a
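Spelled out as (hypothetical, sketch-only) Haskell classes, the two presentations look like this:

-- First presentation: return and bind.
class MonadBind m where
  returnB :: a -> m a
  bindB   :: m a -> (a -> m b) -> m b

-- Second presentation: return, fmap, and join.
class Functor m => MonadJoin m where
  returnJ :: a -> m a
  joinJ   :: m (m a) -> m a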
To your questions:
No, we cannot define fmap in terms of join, since otherwise we could remove fmap from the second list above.
No, "every monad is a functor" is a statement about monads in general, regardless whether you define your specific monad in terms of bind or in terms of join and fmap. It is easier to understand the statement if you see the second definition, but that's it.
Yes, bind is more "powerful" than join. It is exactly as "powerful" as join and fmap combined, if you mean with "powerful" that it has the capacity to define a monad (always in combination with return).
For an intuition, see e.g. this answer – bind allows you to combine or chain strategies/plans/computations (that are in a context) together. As an example, let's use the Maybe context (or Maybe monad):
λ: let plusOne x = Just (x + 1)
λ: Just 3 >>= plusOne
Just 4
fmap also lets you chain computations in a context together, but at the cost of increasing the nesting with every step.[1]
λ: fmap plusOne (Just 3)
Just (Just 4)
That's why you need join: to squash two levels of nesting into one. Remember:
join :: m (m a) -> m a
Having only the squashing step doesn't get you very far. You need also fmap to have a monad – and return, which is Just in the example above.
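As a concrete illustration (my example), here is what join does for Maybe:

-- join for Maybe squashes one level of nesting:
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just inner) = inner
joinMaybe Nothing      = Nothing

-- joinMaybe (fmap plusOne (Just 3)) == Just 4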
[1]: fmap and (>>=) don't take their two arguments in the same order, but don't let that confuse you.
Is there a [definition] of fmap in terms of join?
No, there isn't. That can be demonstrated by attempting to do it. Suppose we are given an arbitrary type constructor T, and functions:
returnT :: a -> T a
joinT :: T (T a) -> T a
From this data alone, we want to define:
fmapT :: (a -> b) -> T a -> T b
So let's sketch it:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = tb
  where
    tb = undefined -- tb :: T b
We need to get a value of type T b somehow. ta :: T a on its own won't do, so we need functions that produce T b values. The only two candidates are joinT and returnT. joinT doesn't help:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = joinT ttb
  where
    ttb = undefined -- ttb :: T (T b)
It just kicks the can down the road, as needing a T (T b) value under these circumstances is no improvement.
We might try returnT instead:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT b
  where
    b = undefined -- b :: b
Now we need a b value. The only thing that can give us one is f:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f a)
  where
    a = undefined -- a :: a
And now we are stuck: nothing can give us an a. We have exhausted all available possibilities, so fmapT cannot be defined in such terms.
A digression: it wouldn't suffice to cheat by using a function like this:
extractT :: T a -> a
With an extractT, we might try a = extractT ta, leading to:
fmapT :: (a -> b) -> T a -> T b
fmapT f ta = returnT (f (extractT ta))
It is not enough, however, for fmapT to have the right type: it must also follow the functor laws. In particular, fmapT id = id should hold. With this definition, fmapT id is returnT . extractT, which, in general, is not id (most functors which are instances of both Monad and Comonad serve as examples).
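To make that concrete, here is one such example (mine, not from the original argument), using NonEmpty, which is both a Monad and a Comonad:

import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- returnT . extractT for NonEmpty keeps only the head, so it is not id:
cheatFmapId :: NonEmpty a -> NonEmpty a
cheatFmapId = pure . NE.head

-- cheatFmapId (1 :| [2, 3]) == 1 :| []   -- but id would give 1 :| [2, 3]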
Is "every monad is a functor" a conclusion when using bind (in the definition of a monad), but an assumption when using join?
"Every monad is a functor" is an assumption, or, more precisely, part of the definition of monad. To pick an arbitrary illustration, here is Emily Riehl, Category Theory In Context, p. 154:
Definition 5.1.1. A monad on a category C consists of
an endofunctor T : C → C,
a unit natural transformation η : 1_C ⇒ T, and
a multiplication natural transformation μ : T² ⇒ T,
so that the following diagrams commute in C^C: [diagrams of the monad laws]
A monad, therefore, involves an endofunctor by definition. For a Haskell type constructor T that instantiates Monad, the object mapping of that endofunctor is T itself, and the morphism mapping is its fmap. That T will be a Functor instance, and therefore will have an fmap, is, in contemporary Haskell, guaranteed by Applicative (and, by extension, Functor) being a superclass of Monad.
Is that the whole story, though? As far as Haskell is concerned, we know that liftM exists, and also that in a not-so-distant past Functor was not a superclass of Monad. Are those two facts mere Haskellisms? Not quite. In the classic paper Notions of computation and monads, Eugenio Moggi unearths the following definition (p. 3):
Definition 1.2 ([Man76]) A Kleisli triple over a category C is a triple (T, η, _*), where T : Obj(C) → Obj(C), η_A : A → T A for A ∈ Obj(C), f* : T A → T B for f : A → T B, and the following equations hold:
η_A* = id_{T A}
η_A ; f* = f for f : A → T B
f* ; g* = (f ; g*)* for f : A → T B and g : B → T C
The important detail here is that T is presented as merely an object mapping in the category C, and not as an endofunctor in C. Working in the Hask category, that amounts to taking a type constructor T without presupposing it is a Functor instance. In code, we might write that as:
class KleisliTriple t where
  return :: a -> t a
  (=<<)  :: (a -> t b) -> t a -> t b

-- (return =<<) = id
-- (f =<<) . return = f
-- (g =<<) . (f =<<) = ((g =<<) . f =<<)
Flipped bind aside, that is the pre-AMP definition of Monad in Haskell. Unsurprisingly, Moggi's paper doesn't take long to show that "there is a one-to-one correspondence between Kleisli triples and monads" (p. 5), establishing along the way that T can be extended to an endofunctor (in Haskell, that step amounts to defining the morphism mapping liftM f m = return . f =<< m, and then showing it follows the functor laws).
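In code, against the KleisliTriple class above (liftT is my name for it; in a real module the class's return and (=<<) would clash with the Prelude's and need hiding):

-- the derived morphism mapping of the extended endofunctor
liftT :: KleisliTriple t => (a -> b) -> t a -> t b
liftT f m = (return . f) =<< m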
All in all, if you write lawful definitions of return and (>>=) without presupposing fmap, you indeed get a lawful implementation of Functor as a consequence. "There is a one-to-one correspondence between Kleisli triples and monads" is a consequence of the definition of Kleisli triple, while "a monad involves an endofunctor" is part of the definition of monad. It is tempting to consider whether it would be more accurate to describe what Haskellers did when writing Monad instances as "setting up a Kleisli triple" rather than "setting up a monad", but I will refrain out of fear of getting mired in terminological pedantry -- and in any case, now that Functor is a superclass of Monad there is no practical reason to worry about that.
Is bind more "powerful" than join? And what would "more powerful" mean?
Trick question!
Taken at face value, the answer would be yes, to the extent that, together with return, (>>=) makes it possible to implement fmap (via liftM, as noted above), while join doesn't. However, I don't feel it is worthwhile to insist on this distinction. Why so? Because of the monad laws. Just like it doesn't make sense to talk about a lawful (>>=) without presupposing return, it doesn't make sense to talk about a lawful join without presupposing return and fmap.
One might get the impression that I am giving too much weight to the laws by using them to tie Monad and Functor in this way. It is true that there are cases of laws that involve two classes, and that only apply to types which instantiate them both. Foldable provides a good example of that: we can find the following law in the Traversable documentation:
The superclass instances should satisfy the following: [...]
In the Foldable instance, foldMap should be equivalent to traversal with a constant applicative functor (foldMapDefault).
That this specific law doesn't always apply is not a problem, because we don't need it to characterise what Foldable is (alternatives include "a Foldable is a container from which we can extract some sequence of elements", and "a Foldable is a container that can be converted to the free monoid on its element type"). With the monad laws, though, it isn't like that: the meaning of the class is inextricably bound to all three of the monad laws.
So far, every monad (that can be represented as a data type) that I have encountered had a corresponding monad transformer, or could have one. Is there such a monad that can't have one? Or do all monads have a corresponding transformer?
By a transformer t corresponding to monad m I mean that t Identity is isomorphic to m. And of course that it satisfies the monad transformer laws and that t n is a monad for any monad n.
I'd like to see either a proof (ideally a constructive one) that every monad has one, or an example of a particular monad that doesn't have one (with a proof). I'm interested in both more Haskell-oriented answers, as well as (category) theoretical ones.
As a follow-up question, is there a monad m that has two distinct transformers t1 and t2? That is, t1 Identity is isomorphic to t2 Identity and to m, but there is a monad n such that t1 n is not isomorphic to t2 n.
(IO and ST have a special semantics so I don't take them into account here and let's disregard them completely. Let's focus only on "pure" monads that can be constructed using data types.)
I'm with @Rhymoid on this one; I believe all Monads have two (!!) transformers. My construction is a bit different, and far less complete. I'd like to be able to turn this sketch into a proof, but I think I'm either missing the skills/intuition and/or it may be quite involved.
Due to Kleisli, every monad (m) can be decomposed into two functors F_k and G_k such that F_k is left adjoint to G_k and that m is isomorphic to G_k * F_k (here * is functor composition). Also, because of the adjunction, F_k * G_k forms a comonad.
I claim that t_mk defined such that t_mk n = G_k * n * F_k is a monad transformer. Clearly, t_mk Id = G_k * Id * F_k = G_k * F_k = m. Defining return for this functor is not difficult since F_k is a "pointed" functor, and defining join should be possible since extract from the comonad F_k * G_k can be used to reduce values of type (t_mk n * t_mk n) a = (G_k * n * F_k * G_k * n * F_k) a to values of type (G_k * n * n * F_k) a, which then further reduces via the join of n.
We do have to be a bit careful since F_k and G_k are not endofunctors on Hask. So, they are not instances of the standard Functor typeclass, and also are not directly composable with n as shown above. Instead we have to "project" n into the Kleisli category before composition, but I believe return from m provides that "projection".
I believe you can also do this with the Eilenberg-Moore monad decomposition, giving m = G_em * F_em, tm_em n = G_em * n * F_em, and similar constructions for lift, return, and join with a similar dependency on extract from the comonad F_em * G_em.
Here's a hand-wavy I'm-not-quite-sure answer.
Monads can be thought of as the interface of imperative languages. return is how you inject a pure value into the language, and >>= is how you splice pieces of the language together. The Monad laws ensure that "refactoring" pieces of the language works the way you would expect. Any additional actions provided by a monad can be thought of as its "operations."
Monad Transformers are one way to approach the "extensible effects" problem. If we have a Monad Transformer t which transforms a Monad m, then we could say that the language m is being extended with additional operations available via t. The Identity monad is the language with no effects/operations, so applying t to Identity will just get you a language with only the operations provided by t.
So if we think of Monads in terms of the "inject, splice, and other operations" model, then we can just reformulate them using the Free Monad Transformer. Even the IO monad could be turned into a transformer this way. The only catch is that you probably want some way to peel that layer off the transformer stack at some point, and the only sensible way to do it is if you have IO at the bottom of the stack so that you can just perform the operations there.
Previously, I thought I found examples of explicitly defined monads without a transformer, but those examples were incorrect.
The transformer for Either a (z -> a) is m (Either a (z -> m a)), where m is an arbitrary foreign monad. The transformer for (a -> n p) -> n a is (a -> t m p) -> t m a, where t m is the transformer for the monad n.
The free pointed monad.
The monad type constructor L for this example is defined by
type L z a = Either a (z -> a)
The intent of this monad is to embellish the ordinary reader monad z -> a with an explicit pure value (Left x). The ordinary reader monad's pure value is a constant function, pure x = \_ -> x. However, if we are given a value of type z -> a, we cannot determine whether it is a constant function. With L z a, the pure value is represented explicitly as Left x. Users can now pattern-match on L z a and determine whether a given monadic value is pure or has an effect. Other than that, the monad L z does exactly the same thing as the reader monad.
The monad instance:
instance Monad (L z) where
  return x = Left x
  (Left x)  >>= f = f x
  (Right q) >>= f = Right (join merged) where
    -- join :: (z -> z -> r) -> z -> r -- the standard `join` for the Reader monad
    join g x = g x x
    -- merged :: z -> z -> r -- `f . q` is the `fmap` of the Reader monad
    merged = merge . f . q
    -- merge :: Either a (z -> a) -> z -> a
    merge (Left x)  _ = x
    merge (Right p) z = p z
This monad L z is a specific case of a more general construction, (Monad m) => Monad (L m) where L m a = Either a (m a). This construction embellishes a given monad m by adding an explicit pure value (Left x), so that users can now pattern-match on L m to decide whether the value is pure. In all other ways, L m represents the same computational effect as the monad m.
The monad instance for L m is almost the same as for the example above, except that the join and fmap of the monad m need to be used, and the helper function merge is defined by

merge :: (Monad m) => Either a (m a) -> m a
merge (Left x)  = return x  -- the `return` of the monad m
merge (Right p) = p
I checked that the laws of the monad hold for L m with an arbitrary monad m.
This construction gives the free pointed functor on the given monad m. This construction guarantees that the free pointed functor on a monad is also a monad.
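Here is a sketch of that generalized construction in code (the newtype Lift is my name, wrapped so the instances can actually be written; modern GHC also requires the Functor and Applicative instances):

newtype Lift m a = Lift { unLift :: Either a (m a) }

mergeL :: Monad m => Either a (m a) -> m a
mergeL (Left x)  = return x  -- the `return` of the underlying monad m
mergeL (Right p) = p

instance Monad m => Functor (Lift m) where
  fmap f (Lift (Left x))  = Lift (Left (f x))
  fmap f (Lift (Right q)) = Lift (Right (fmap f q))

instance Monad m => Applicative (Lift m) where
  pure = Lift . Left
  ff <*> fa = ff >>= \f -> fmap f fa

instance Monad m => Monad (Lift m) where
  Lift (Left x)  >>= f = f x  -- pure value: just apply f
  Lift (Right q) >>= f = Lift (Right (q >>= mergeL . unLift . f))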
The transformer for the free pointed monad is defined like this:
type LT m n a = n (Either a (mT n a))
where mT is the monad transformer of the monad m (which needs to be known).
Another example:
type S a = (a -> Bool) -> Maybe a
This monad appeared in the context of "search monads" here. The paper by Jules Hedges also mentions the search monad, and more generally, "selection" monads of the form
type Sq n q a = (a -> n q) -> n a
for a given monad n and a fixed type q. The search monad above is a particular case of the selection monad with n a = Maybe a and q = (). The paper by Hedges claims (without proof, but he proved it later using Coq) that Sq is a monad transformer for the monad (a -> q) -> a.
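For reference, a sketch of the base selection monad's instance, following the bind from Escardó and Oliva's work on selection functions (the newtype wrapper and names are mine):

newtype Sel q a = Sel { runSel :: (a -> q) -> a }

instance Functor (Sel q) where
  fmap f (Sel e) = Sel (\k -> f (e (k . f)))

instance Applicative (Sel q) where
  pure x = Sel (\_ -> x)
  ff <*> fa = ff >>= \f -> fmap f fa

instance Monad (Sel q) where
  Sel e >>= f = Sel (\k ->
    let choose a = runSel (f a) k       -- the b selected from a given a
    in  choose (e (\a -> k (choose a))) -- pick the a whose b scores best
    )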
However, the monad (a -> q) -> a has another monad transformer (m a -> q) -> m a of the "composed outside" type. This is related to the property of "rigidity" explored in the question Is this property of a functor stronger than a monad? Namely, (a -> q) -> a is a rigid monad, and all rigid monads have monad transformers of the "composed-outside" type.
Generally, transformed monads don't themselves automatically possess a monad transformer. That is, once we take some foreign monad m and apply some monad transformer t to it, we obtain a new monad t m, and this monad doesn't have a transformer: given a new foreign monad n, we don't know how to transform n with the monad t m. If we know the transformer mT for the monad m, we can first transform n with mT and then transform the result with t. But if we don't have a transformer for the monad m, we are stuck: there is no construction that creates a transformer for the monad t m out of the knowledge of t alone and works for arbitrary foreign monads m.
However, in practice all explicitly defined monads have explicitly defined transformers, so this problem does not arise.
@JamesCandy's answer suggests that for any monad (including IO?!), one can write a (general but complicated) type expression that represents the corresponding monad transformer. Namely, you first need to Church-encode your monad type, which makes the type look like a continuation monad, and then define its monad transformer as if for the continuation monad. But I think this is incorrect - it does not give a recipe for producing a monad transformer in general.
Taking the Church encoding of a type a means writing down the type

type CA a = forall r. (a -> r) -> r

This type CA a is completely isomorphic to a by the Yoneda lemma. So far we have achieved nothing other than making the type a lot more complicated by introducing a quantified type parameter forall r.
Now let's Church-encode a base monad L:
type CL a = forall r. (L a -> r) -> r
Again, we have achieved nothing so far, since CL a is fully equivalent to L a.
Now pretend for a second that CL a is a continuation monad (which it isn't!), and write the monad transformer as if it were a continuation monad transformer, by replacing the result type r with m r:
type TCL m a = forall r. (L a -> m r) -> m r
This is claimed to be the "Church-encoded monad transformer" for L. But this seems to be incorrect. We need to check the properties:
TCL m is a lawful monad for any foreign monad m and for any base monad L
m a -> TCL m a is a lawful monadic morphism
The second property holds, but I believe the first property fails - in other words, TCL m is not a monad for an arbitrary monad m. Perhaps some monads m admit this but others do not. I was not able to find a general monad instance for TCL m corresponding to an arbitrary base monad L.
Another way to argue that TCL m is not in general a monad is to note that forall r. (a -> m r) -> m r is indeed a monad for any type constructor m. Denote this monad by CM. Now, TCL m a = CM (L a). If TCL m were a monad, it would imply that CM can be composed with any monad L and yields a lawful monad CM (L a). However, it is highly unlikely that a nontrivial monad CM (in particular, one that is not equivalent to Reader) will compose with all monads L. Monads usually do not compose without stringent further constraints.
A specific example where this does not work is for reader monads. Consider L a = r -> a and m a = s -> a where r and s are some fixed types. Now, we would like to consider the "Church-encoded monad transformer" forall t. (L a -> m t) -> m t. We can simplify this type expression using the Yoneda lemma,
forall t. (x -> t) -> Q t = Q x
(for any functor Q) and obtain
forall t. (L a -> s -> t) -> s -> t
= forall t. ((L a, s) -> t) -> s -> t
= s -> (L a, s)
= s -> (r -> a, s)
So this is the type expression for TCL m a in this case. If TCL were a monad transformer then P a = s -> (r -> a, s) would be a monad. But one can check explicitly that this P is actually not a monad (one cannot implement return and bind that satisfy the laws).
Even if this worked (i.e. assuming that I made a mistake in claiming that TCL m is in general not a monad), this construction has certain disadvantages:
It is not functorial (i.e. not covariant) with respect to the foreign monad m, so we cannot do things like interpret a transformed free monad into another monad, or merge two monad transformers, as explained here: Is there a principled way to compose two monad transformers if they are of different type, but their underlying monad is of the same type?
The presence of a forall r makes the type quite complicated to reason about and may lead to performance degradation (see the "Church encoding considered harmful" paper) and stack overflows (since Church encoding is usually not stack-safe)
The Church-encoded monad transformer for an identity base monad (L = Id) does not yield the unmodified foreign monad: T m a = forall r. (a -> m r) -> m r and this is not the same as m a. In fact it's quite difficult to figure out what that monad is, given a monad m.
As an example showing why forall r makes reasoning complicated, consider the foreign monad m a = Maybe a and try to understand what the type forall r. (a -> Maybe r) -> Maybe r actually means. I was not able to simplify this type or to find a good explanation about what this type does, i.e. what kind of "effect" it represents (since it's a monad, it must represent some kind of "effect") and how one would use such a type.
The Church-encoded monad transformer is not equivalent to the standard well-known monad transformers such as ReaderT, WriterT, EitherT, StateT and so on.
It is not clear how many other monad transformers exist and in what cases one would use one or another transformer.
One of the questions in the post is to find an explicit example of a monad m that has two transformers t1 and t2 such that for some foreign monad n, the monads t1 n and t2 n are not equivalent.
I believe that the Search monad provides such an example.
type Search a = (a -> p) -> a
where p is a fixed type.
The transformers are
type SearchT1 n a = (a -> n p) -> n a
type SearchT2 n a = (n a -> p) -> n a
I checked that both SearchT1 n and SearchT2 n are lawful monads for any monad n. We have liftings n a -> SearchT1 n a and n a -> SearchT2 n a that work by returning constant functions (just return n a as given, ignoring the argument). We have SearchT1 Identity and SearchT2 Identity obviously equivalent to Search.
The big difference between SearchT1 and SearchT2 is that SearchT1 is not functorial in n, while SearchT2 is. This may have implications for "running" ("interpreting") the transformed monad, since normally we would like to be able to lift an interpreter n a -> n' a into a "runner" SearchT n a -> SearchT n' a. This is possible only with SearchT2.
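A sketch of that functoriality claim (newtype versions of the synonyms above, with the fixed type p made an explicit parameter; names are mine): a natural transformation n ~> n' lifts through SearchT2, while the same attempt fails for SearchT1 because n sits in the argument position a -> n p.

{-# LANGUAGE RankNTypes #-}

newtype SearchT1 p n a = SearchT1 { runSearchT1 :: (a -> n p) -> n a }
newtype SearchT2 p n a = SearchT2 { runSearchT2 :: (n a -> p) -> n a }

-- Both liftings are constant functions, as described above:
lift1 :: n a -> SearchT1 p n a
lift1 na = SearchT1 (\_ -> na)

lift2 :: n a -> SearchT2 p n a
lift2 na = SearchT2 (\_ -> na)

-- SearchT2 is functorial in n:
hoist2 :: (forall x. n x -> n' x) -> SearchT2 p n a -> SearchT2 p n' a
hoist2 phi (SearchT2 g) = SearchT2 (\k -> phi (g (k . phi)))

-- For SearchT1 we would need to turn a -> n' p back into a -> n p,
-- i.e. run phi backwards, so no such hoist exists in general.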
A similar deficiency is present in the standard monad transformers for the continuation monad and the codensity monad: they are not functorial in the foreign monad.
My solution exploits the logical structure of Haskell terms etc.
I looked at right Kan extensions as possible representations of the monad transformer. As everyone knows, right Kan extensions are limits, so it makes sense that they should serve as a universal encoding of any object of interest. For monadic functors F and M, I looked at the right Kan extension of MF along F.

First I proved a lemma, the "rolling lemma": a functor procomposed with a right Kan extension can be rolled inside it, giving the map F (Ran G H) -> Ran G (F H) for any functors F, G, and H.
Using this lemma, I computed a monadic join for the right Kan extension Ran F (MF), requiring the distributive law FM -> MF. It is as follows:
Ran F(MF) . Ran F(MF) [rolling lemma] =>
Ran F(Ran F(MF)MF) [insert eta] =>
Ran F(Ran F(MF)FMF) [gran] =>
Ran F(MFMF) [apply distributive law] =>
Ran F(MMFF) [join Ms and Fs] =>
Ran F(MF).
What seems to be interesting about this construction is that it admits of lifts of both functors F and M as follows:
(1) F [lift into codensity monad] =>
Ran F F [procompose with eta] =>
Ran F(MF).
(2) M [Yoneda lemma specialized upon F-] =>
Ran F(MF).
I also investigated the right Kan extension Ran F(FM). It seems to be a little better behaved, achieving monadicity without appeal to the distributive law, but much pickier about which functors it lifts. I determined that it will lift monadic functors under the following conditions:
1) F is monadic.
2) F |- U, in which case it admits the lift F ~> Ran U(UM). This can be used in the context of a state monad to "set" the state.
3) M under certain conditions, for instance when M admits a distributive law.
Imagine that I have a value that is generic over the monad:
m :: (Monad m) => m A -- 'A' is some concrete type
Now let's say that I specialize this value to a concrete monad transformer stack in two separate ways:
m1 :: T M A
m1 = m
m2 :: T M A
m2 = lift m
... where M and T M are monads, and T is a monad transformer:
instance Monad M where ...
instance (Monad m) => Monad (T m) where ...
instance MonadTrans T where ...
... and those instances obey the monad laws and monad transformer laws.
Can we deduce that:
m1 = m2
... knowing nothing about m other than its type?
This is just a long-winded way of asking if lift m is a valid substitution for m, assuming that both type-check. It is a little bit difficult to phrase the question because it requires m type-checking as two separate monads before and after the substitution. As far as I can tell, the only way such a substitution would type-check is if m is generic over the monad.
My vague intuition is that the substitution should always be correct, but I'm not sure that my intuition is correct, or how to prove it if it is correct.
If m :: Monad m => m A, then m must be equivalent to return x for some x :: A, because the only ways you have to get anything :: m x are return and (>>=). But in order to use (>>=) you must be able to produce some m y, which you can either do with return or with another application of (>>=). Either way you have to use return eventually, and the monad laws guarantee that the whole expression will be equivalent to return x.
(If m is completely polymorphic over the monad, then you must be able to use it at m ~ Identity, so it can't use any fancy monad tricks, unless you pass it an argument. This sort of trick is used e.g. here and here.)
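That Identity trick can be written down directly (a small sketch of mine, assuming RankNTypes): instantiating a fully polymorphic monadic value at Identity recovers the x with m == return x.

{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

theX :: (forall m. Monad m => m a) -> a
theX m = runIdentity m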
Given that m = return x, we know by the monad transformer laws (lift . return = return) that lift m = m.
Of course, this is only true for this particular type. If you have, say, m :: MonadState S m => m A, then m could easily be different from lift m -- for example, with a type like StateT A (State A) A, get and lift get will be different.
(And of course all of this is ignoring ⊥. Then again, if you don't, most monads don't obey the laws anyway.)
I believe this is a sloppy inductive proof that your m is equivalent to lift m.
I think we have to try to prove something about m (or rather, about all possible values of type (Monad m) => m A). If we consider Monad as consisting of bind and return only, and ignore bottom and fail, then your m must at the top level be one of:
mA = return x
mB = (mX >>= f)
For mA the two forms of m are equivalent by the monad transformer law:
lift (return x) = return x
That's the base case. Then we're left with the second transformer law to reason about mB:
lift (mX >>= f) = lift mX >>= (lift . f)
and we'd like to prove that our mB is equal to that expansion:
mX >>= f = lift mX >>= (lift . f)
we assume that the left side of bind is equivalent (mX = lift mX) since that's our inductive hypothesis (right?).
So then we're left to prove f = lift . f by figuring out what f has to look like:
f :: a -> m b
f = \a -> (one of our forms mA or mB)
and lift . f looks like:

lift . f = \a -> lift (one of our forms mA or mB)
Which leaves us back with our hypothesis:
m = lift m
I have become rather interested in how computation is modeled in Haskell. Several resources have described monads as "composable computation" and arrows as "abstract views of computation". I've never seen monoids, functors or applicative functors described in this way. It seems that they lack the necessary structure.
I find that idea interesting and wonder if there are any other constructs that do something similar. If so, what are some resources that I can use to acquaint myself with them? Are there any packages on Hackage that might come in handy?
Note: This question is similar to
Monads vs. Arrows and https://stackoverflow.com/questions/2395715/resources-for-learning-monads-functors-monoids-arrows-etc, but I am looking for constructs beyond functors, applicative functors, monads, and arrows.
Edit: I concede that applicative functors should be considered "computational constructs", but I'm really looking for something I haven't come across yet; that rules out applicative functors, monads, and arrows.
Arrows are generalized by Categories, and so by the Category typeclass.
class Category cat where
  id  :: cat a a
  (.) :: cat b c -> cat a b -> cat a c
The Arrow typeclass definition has Category as a superclass. Categories (in the Haskell sense) generalize functions (you can compose them but not apply them) and so are definitely a "model of computation". Arrow provides a Category with additional structure for working with tuples. So, while Category mirrors something about Haskell's function space, Arrow extends that to something about product types.
Every Monad gives rise to something called a "Kleisli Category" and this construction gives you instances of ArrowApply. You can build a Monad out of any ArrowApply such that going full circle doesn't change your behavior, so in some deep sense Monad and ArrowApply are the same thing.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first  (Kleisli f) = Kleisli (\ ~(b, d) -> f b >>= \c -> return (c, d))
  second (Kleisli f) = Kleisli (\ ~(d, b) -> f b >>= \c -> return (d, c))
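A small usage sketch (my example): composing two Maybe-Kleisli arrows with the Category (.) defined above, rather than the Prelude's.

safeRecip :: Kleisli Maybe Double Double
safeRecip = Kleisli (\x -> if x == 0 then Nothing else Just (1 / x))

-- runKleisli (safeRecip . safeRecip) 4 == Just 4.0
-- runKleisli (safeRecip . safeRecip) 0 == Nothing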
Actually every Arrow gives rise to an Applicative (universally quantified to get the kinds right) in addition to the Category superclass, and I believe the combination of the appropriate Category and Applicative is enough to reconstruct your Arrow.
So, these structures are deeply connected.
Warning: wishy-washy commentary ahead. One central difference between the Functor/Applicative/Monad way of thinking and the Category/Arrow way of thinking is that while Functor and its ilk are generalizations at the level of objects (types in Haskell), Category/Arrow are generalizations of the notion of morphism (functions in Haskell). My belief is that thinking at the level of generalized morphisms involves a higher level of abstraction than thinking at the level of generalized objects. Sometimes that is a good thing, other times it is not. On the other hand, despite the fact that Arrows have a categorical basis, and no one in math thinks Applicative is interesting, it is my understanding that Applicative is generally better understood than Arrow.
Basically you can think of "Category < Arrow < ArrowApply" and "Functor < Applicative < Monad" such that "Category ~ Functor", "Arrow ~ Applicative" and "ArrowApply ~ Monad".
More Concrete Below:
As for other structures to model computation: one can often reverse the direction of the "arrows" (just meaning morphisms here) in categorical constructions to get the "dual" or "co-construction". So, if a monad is defined as
class Functor m => Monad m where
  return :: a -> m a
  join   :: m (m a) -> m a
(okay, I know that isn't how Haskell defines things, but ma >>= f = join $ fmap f ma and join x = x >>= id so it just as well could be)
then the comonad is
class Functor m => Comonad m where
  extract   :: m a -> a       -- this is co-return
  duplicate :: m a -> m (m a) -- this is co-join
This thing turns out to be pretty common also. It turns out that Comonad is the basic underlying structure of cellular automata. For completeness, I should point out that Edward Kmett's Control.Comonad puts duplicate in a class between Functor and Comonad for "Extendable Functors", because you can also define
extend :: (m a -> b) -> m a -> m b -- looks familiar? this is just the dual of (>>=)
extend f = fmap f . duplicate

-- and this is enough:
duplicate = extend id
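As a concrete instance of the Comonad class sketched above (my example, echoing the cellular-automata remark):

-- An infinite stream; duplicate exposes all suffixes, which is the
-- structure 1-D cellular automaton rules consume.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Comonad Stream where
  extract (Cons x _)      = x
  duplicate s@(Cons _ xs) = Cons s (duplicate xs)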
It turns out that all Monads are also "Extendable"
monadDuplicate :: Monad m => m a -> m (m a)
monadDuplicate = return
while all Comonads are "Joinable"
comonadJoin :: Comonad m => m (m a) -> m a
comonadJoin = extract
so these structures are very close together.
All Monads are Arrows (Monad is isomorphic to ArrowApply). In a different way, all Monads are instances of Applicative, where <*> is Control.Monad.ap and *> is >>. Applicative is weaker because it does not provide the >>= operation. Thus Applicative captures computations that do not examine previous results and branch on values. In retrospect, much monadic code is actually applicative, and a clean rewrite would make that explicit.
Extending monads: with the recent Constraint kinds in GHC 7.4.1, there can now be nicer designs for restricted monads. And there are also people looking at parameterized monads, and of course I include a link to something by Oleg.
In libraries these structures give rise to different types of computations.

For example, Applicatives can be used to implement static effects, by which I mean effects that are determined beforehand, for example a state machine that rejects or accepts an input state. Such computations can't manipulate their internal structure based on their input.
The type says it all:
(<*>) :: f (a -> b) -> f a -> f b
It is easy to see from the type that the structure of f cannot depend on the input a: a has no way to reach f at the type level.
Monads can be used for dynamic effects. This can also be seen from the type signature:
(>>=) :: m a -> (a -> m b) -> m b
How can you see this? Because a is on the same "level" as m. Mathematically, bind is a two-stage process: the composition of two functions, fmap and join. First we use fmap together with the monadic action to create a new structure embedded in the old one:
fmap :: (a -> b) -> m a -> m b
f :: (a -> m b)
m :: m a
fmap f :: m a -> m (m b)
fmap f m :: m (m b)
fmap can create new structure based on the input value. Then we collapse the structure with join, and so we are able to manipulate the structure from within the monadic computation in a way that depends on the input:
join :: m (m a) -> m a
join (fmap f m) :: m b
Many monads are easier to implement with join:

m >>= f = join (fmap f m)
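For example (my illustration), the list monad's join is concat, and bind falls out of it:

joinList :: [[a]] -> [a]
joinList = concat

bindList :: [a] -> (a -> [b]) -> [b]
bindList m f = joinList (map f m)

-- bindList [1, 2] (\x -> [x, x * 10]) == [1, 10, 2, 20]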
Manipulating the structure based on the input is possible with monads:

addCounter :: Int -> m Int ()

but not with applicatives. Applicatives (and any monad) can still do things like:

addOne :: m Int ()
Arrows give more control over the input and the output types, but for me they really feel similar to applicatives. Maybe I am wrong about that.