By the title, I mean types like Monad m => m (m a).
When the structure of a monad is simple, I can easily think of a use for such a type:
[[a]], which is a multidimensional list
Maybe (Maybe a), which is a type adjoined by two error states
Either e (Either e a), which is like above, but with messages
Monoid m => (m,(m,a)), which is a writer monad with two things to write over
r -> r -> a, which is a reader monad with two things to read from
Identity (Identity a), which is still the identity monad
Complex (Complex a), which is a 2-by-2 matrix
But it goes haywire in my mind if I think about the following types:
ReadP (ReadP a)? Why would it be useful when ReadP isn't an instance of Read?
ReadPrec (ReadPrec a)? Like above?
Monad m => Kleisli m a (Kleisli m a b)?
IO (IO a)!? This must be useful. It's just too hard for me to think about.
forall s. ST s (ST s a)!? This should be like the above.
Is there a practical use for such types? Especially for the IO one?
On second thought, I might need to randomly pick an IO action. That's an example of IO (IO a) which focuses on inputs. What about one focusing on outputs?
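To make the random-pick idea concrete, here is a rough sketch (assuming System.Random from the random package; the names are made up):

import Control.Monad (join)
import System.Random (randomRIO)

-- the outer IO chooses which action to run; the inner IO is the chosen action
pickGreeting :: IO (IO ())
pickGreeting = do
    i <- randomRIO (0, 1 :: Int)
    pure (if i == 0 then putStrLn "hello" else putStrLn "hi")

main :: IO ()
main = join pickGreeting  -- run the choice, then run the chosen action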
In some sense, a monad can be thought of as a functor in which layers can be collapsed.
If the Monad class were defined more like the category-theory definition, it would look like
class Applicative m => Monad m where
    return :: a -> m a
    join :: m (m a) -> m a
Using fmap with a function of type a -> m b results in a function of type m a -> m (m b). join is used to eliminate one layer of monad from the result. Since that's a common thing to do, one might define a function that does it.
foo :: Monad m => (a -> m b) -> m a -> m b
foo f ma = join (fmap f ma)
If you look carefully, you'll recognize foo as >>= with its arguments flipped.
foo = flip (>>=)
Since >>= is used more than join would be, the typeclass definition is
class Applicative m => Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b
and join is defined as a separate function
join :: Monad m => m (m a) -> m a
join mma = mma >>= id
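For example, in GHCi:

ghci> import Control.Monad (join)
ghci> join (Just (Just 3))
Just 3
ghci> join (Just Nothing)
Nothing
ghci> join [[1,2],[3]]
[1,2,3]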
Does not matter.
Monads are monads precisely because for every Monad (Monad a) we can always get Monad a. Such an operation is called "join", and it is an alternative to "bind" that could equally well form the definition of Monad. Haskell uses "bind" because it's much more useful for composing monadic code :)
(join can be implemented with bind, and bind with join - they are equivalent)
"Does not matter" is actually a small lie, since the ability to form Monad (Monad a) is also de facto part of what makes monads monads, with Monad (Monad a) acting as a transitional representation in some operations.
The full answer is: yes, because that is what enables monads in the first place. Though Monad (Monad a) can have an extra "domain" meaning, as you list for some of the monads ;)
There have been a couple of questions (e.g. this and this) asking whether every monad in Haskell (other than IO) has a corresponding monad transformer. Now I would like to ask a complementary question. Does every monad have exactly one transformer (or none as in the case of IO) or can it have more than one transformer?
A counterexample would be two monad transformers that produce monads behaving identically when applied to the identity monad, but produce differently behaving monads when applied to some other monad. If the answer is that a monad can have more than one transformer, I would like to have a Haskell example which is as simple as possible. These don't have to be actually useful transformers (though that would be interesting).
Some of the answers in the linked question seemed to suggest that a monad could have more than one transformer. However, I don't know much category theory beyond the basic definition of a category so I wasn't sure whether they are an answer to this question.
Here's one idea for a counterexample to uniqueness. We know that in general, monads don't compose... but we also know that if there's an appropriate swap operation, you can compose them[1]! Let's make a class for monads that can swap with themselves.
-- | laws (from [1]):
-- swap . fmap (fmap f) = fmap (fmap f) . swap
-- swap . pure = fmap pure
-- swap . fmap pure = pure
-- fmap join . swap . fmap (join . fmap swap) = join . fmap swap . fmap join . swap
class Monad m => Swap m where
    swap :: m (m a) -> m (m a)
instance Swap Identity where swap = id
instance Swap Maybe where
    swap Nothing = Just Nothing
    swap (Just Nothing) = Nothing
    swap (Just (Just x)) = Just (Just x)
Then we can build a monad transformer that composes a monad with itself, like so:
newtype Twice m a = Twice { runTwice :: m (m a) }
Hopefully it should be obvious what pure and (<$>) do. Rather than defining (>>=), I'll define join, as I think it's a bit more obvious what's going on; (>>=) can be derived from it.
instance Swap m => Monad (Twice m) where
    join = id
         . Twice                          -- rewrap newtype
         . fmap join . join . fmap swap   -- from [1]
         . runTwice . fmap runTwice       -- unwrap newtype
instance MonadTrans Twice where lift = Twice . pure
I haven't checked that lift obeys the MonadTrans laws for all Swap instances, but I did check them for Identity and Maybe.
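For reference, here is a sketch of the pieces left implicit above (my own filling-in, assuming the Swap laws hold):

instance Functor m => Functor (Twice m) where
    fmap f (Twice mma) = Twice (fmap (fmap f) mma)

-- pure wraps twice, and (>>=) is derived from the join given above:
--   pure x   = Twice (pure (pure x))
--   ta >>= f = join (fmap f ta)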
Now, we have
IdentityT Identity ~= Identity ~= Twice Identity
IdentityT Maybe ~= Maybe !~= Twice Maybe
which shows that IdentityT is not a unique monad transformer for producing Identity.
[1] Composing monads by Mark P. Jones and Luc Duponcheel
The identity monad has at least two monad transformers: the identity monad transformer and the codensity monad transformer.
newtype IdentityT m a = IdentityT (m a)
newtype Codensity m a = Codensity (forall r. (a -> m r) -> m r)
Indeed, considering Codensity Identity, forall r. (a -> r) -> r is isomorphic to a.
These monad transformers are quite different. One example is that "bracket" can be defined as a monadic action in Codensity:
bracket :: Monad m => m () -> m () -> Codensity m ()
bracket before after = Codensity (\k -> before *> k () *> after)
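To see this in action, here is a hypothetical way to run the bracketed computation by supplying a final continuation (using the Codensity newtype above):

runBracketed :: IO ()
runBracketed =
    case bracket (putStrLn "acquire") (putStrLn "release") of
        Codensity k -> k (\() -> putStrLn "work")
    -- prints "acquire", then "work", then "release"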
whereas transposing that signature to IdentityT doesn't make much sense
bracket :: Monad m => m () -> m () -> IdentityT m () -- cannot implement the same functionality
Other examples can be found similarly from variants of the continuation/codensity monad, though I don't see a general scheme yet.
The state monad corresponds to the state monad transformer and to the composition of Codensity and ReaderT:
newtype StateT s m a = StateT (s -> m (s, a))
newtype CStateT s m a = CStateT (Codensity (ReaderT s m) a)
The list monad corresponds to at least three monad transformers, not including the wrong one:
newtype ListT m a = ListT (m (Maybe (a, ListT m a))) -- list-t
newtype LogicT m a = LogicT (forall r. (a -> m r -> m r) -> m r -> m r) -- logict
newtype MContT m a = MContT (forall r. Monoid r => (a -> m r) -> m r)
The first two can be found respectively in the packages list-t (also in an equivalent form in pipes) and logict.
There is another example of a monad that has two inequivalent transformers: the "selection" monad.
type Sel r a = (a -> r) -> a
The monad is not well known but there are papers that mention it. Here is a package that refers to some papers:
https://hackage.haskell.org/package/transformers-0.6.0.4/docs/Control-Monad-Trans-Select.html
That package implements one transformer:
type SelT r m a = (a -> m r) -> m a
But there exists a second transformer:
type Sel2T r m a = (m a -> r) -> m a
Proving laws for this transformer is more difficult but I have done it.
An advantage of the second transformer is that it is covariant in m, so the hoist function can be defined:
hoist :: (m a -> n a) -> Sel2T r m a -> Sel2T r n a
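A possible implementation of that signature, as a sketch using the type synonym above (I have not checked any hoist laws here):

hoist :: (m a -> n a) -> Sel2T r m a -> Sel2T r n a
hoist phi g = \k -> phi (g (k . phi))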
The second transformer is "fully featured", has all lifts and "hoist". The first transformer is not fully featured; for example, you cannot define blift :: Sel r a -> SelT r m a. In other words, you cannot embed monadic computations from Sel into SelT, just like you can't do that with the Continuation monad and the Codensity monad.
But with the Sel2T transformer, all lifts exist and you can embed Sel computations into Sel2T.
This example shows a monad with two transformers without using the Codensity construction in any way.
Is there a way to implement bind for nested monads? What I want is the following signature:
(>>>=) :: (Monad m, Monad n) => m (n a) -> (a -> m (n b)) -> m (n b)
It looks like it should be a trivial task, but I somehow just can't wrap my head around it. In my program, I use this pattern for several different combinations of monads and for each combination, I can implement it. But for the general case, I just don't understand it.
Edit: It seems that it is not possible in the general case. But it is certainly possible in some special cases. E.g. if the inner Monad is a Maybe. Since it IS possible for all the Monads I want to use, having additional constraints seems fine for me. So I change the question a bit:
What additional constraints do I need on n such that the following is possible?
(>>>=) :: (Monad m, Monad n, ?? n) => m (n a) -> (a -> m (n b)) -> m (n b)
Expanding on the comments: As the linked questions show, it is necessary to have some function n (m a) -> m (n a) to even have a chance to make the composition a monad.
If your inner monad is a Traversable, then sequence provides such a function, and the following will have the right type:
(>>>=) :: (Monad m, Monad n, Traversable n) => m (n a) -> (a -> m (n b)) -> m (n b)
m >>>= k = do
    a <- m
    b <- sequence (fmap k a)
    return (join b)
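For example, a hypothetical use with m = IO and n = Maybe (the lookup functions are made up):

lookupUserId :: String -> IO (Maybe Int)
lookupUserId name = pure (if name == "alice" then Just 1 else Nothing)

lookupAge :: Int -> IO (Maybe Int)
lookupAge 1 = pure (Just 42)
lookupAge _ = pure Nothing

userAge :: String -> IO (Maybe Int)
userAge name = lookupUserId name >>>= lookupAge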
Several well-known transformers are in fact simple newtype wrappers over something equivalent to this (although mostly defining things with pattern matching instead of literally using the inner monads' Monad and Traversable instances):
MaybeT based on Maybe
ExceptT based on Either
WriterT based on (,). ((,) a has a Monad instance only when a is a Monoid, and WriterT uses the other tuple order, so it can't literally reuse that instance - but in spirit it could.)
ListT based on []. Oh, whoops...
The last one is in fact notorious for not being a monad unless the lifted monad is "commutative" - otherwise, expressions that should be equal by the monad laws can give different order of effects. My hunch is that this comes essentially from lists being able to contain more than one value, unlike the other, reliably working examples.
So, although the above definition will be correctly typed, it can still break the monad laws.
Also as an afterthought, one other transformer is such a nested monad, but in a completely different way: ReaderT, based on using (->) as the outer monad.
So far, every monad (that can be represented as a data type) that I have encountered had a corresponding monad transformer, or could have one. Is there such a monad that can't have one? Or do all monads have a corresponding transformer?
By a transformer t corresponding to monad m I mean that t Identity is isomorphic to m. And of course that it satisfies the monad transformer laws and that t n is a monad for any monad n.
I'd like to see either a proof (ideally a constructive one) that every monad has one, or an example of a particular monad that doesn't have one (with a proof). I'm interested in both more Haskell-oriented answers, as well as (category) theoretical ones.
As a follow-up question, is there a monad m that has two distinct transformers t1 and t2? That is, t1 Identity is isomorphic to t2 Identity and to m, but there is a monad n such that t1 n is not isomorphic to t2 n.
(IO and ST have a special semantics so I don't take them into account here and let's disregard them completely. Let's focus only on "pure" monads that can be constructed using data types.)
I'm with @Rhymoid on this one; I believe all Monads have two (!!) transformers. My construction is a bit different, and far less complete. I'd like to be able to turn this sketch into a proof, but I think I'm either missing the skills/intuition and/or it may be quite involved.
Due to Kleisli, every monad (m) can be decomposed into two functors F_k and G_k such that F_k is left adjoint to G_k and that m is isomorphic to G_k * F_k (here * is functor composition). Also, because of the adjunction, F_k * G_k forms a comonad.
I claim that t_mk defined such that t_mk n = G_k * n * F_k is a monad transformer. Clearly, t_mk Id = G_k * Id * F_k = G_k * F_k = m. Defining return for this functor is not difficult since F_k is a "pointed" functor, and defining join should be possible since extract from the comonad F_k * G_k can be used to reduce values of type (t_mk n * t_mk n) a = (G_k * n * F_k * G_k * n * F_k) a to values of type G_k * n * n * F_k, which then reduces further via join from n.
We do have to be a bit careful since F_k and G_k are not endofunctors on Hask. So, they are not instances of the standard Functor typeclass, and also are not directly composable with n as shown above. Instead we have to "project" n into the Kleisli category before composition, but I believe return from m provides that "projection".
I believe you can also do this with the Eilenberg-Moore monad decomposition, giving m = G_em * F_em, tm_em n = G_em * n * F_em, and similar constructions for lift, return, and join with a similar dependency on extract from the comonad F_em * G_em.
Here's a hand-wavy I'm-not-quite-sure answer.
Monads can be thought of as the interface of imperative languages. return is how you inject a pure value into the language, and >>= is how you splice pieces of the language together. The Monad laws ensure that "refactoring" pieces of the language works the way you would expect. Any additional actions provided by a monad can be thought of as its "operations."
Monad Transformers are one way to approach the "extensible effects" problem. If we have a Monad Transformer t which transforms a Monad m, then we could say that the language m is being extended with additional operations available via t. The Identity monad is the language with no effects/operations, so applying t to Identity will just get you a language with only the operations provided by t.
So if we think of Monads in terms of the "inject, splice, and other operations" model, then we can just reformulate them using the Free Monad Transformer. Even the IO monad could be turned into a transformer this way. The only catch is that you probably want some way to peel that layer off the transformer stack at some point, and the only sensible way to do it is if you have IO at the bottom of the stack so that you can just perform the operations there.
Previously, I thought I found examples of explicitly defined monads without a transformer, but those examples were incorrect.
The transformer for Either a (z -> a) is m (Either a (z -> m a)), where m is an arbitrary foreign monad. The transformer for (a -> n p) -> n a is (a -> t m p) -> t m a, where t m is the transformer for the monad n.
The free pointed monad.
The monad type constructor L for this example is defined by
type L z a = Either a (z -> a)
The intent of this monad is to embellish the ordinary reader monad z -> a with an explicit pure value (Left x). The ordinary reader monad's pure value is a constant function, pure x = \_ -> x. However, given a value of type z -> a, we cannot determine whether this value is a constant function. With L z a, the pure value is represented explicitly as Left x. Users can now pattern-match on L z a and determine whether a given monadic value is pure or has an effect. Other than that, the monad L z does exactly the same thing as the reader monad.
The monad instance:
instance Monad (L z) where
    return x = Left x
    (Left x) >>= f = f x
    (Right q) >>= f = Right (join merged) where
        join :: (z -> z -> r) -> z -> r
        join f x = f x x -- the standard `join` for the Reader monad
        merged :: z -> z -> r
        merged = merge . f . q -- `f . q` is the `fmap` of the Reader monad
        merge :: Either a (z -> a) -> z -> a
        merge (Left x) _ = x
        merge (Right p) z = p z
This monad L z is a specific case of a more general construction, (Monad m) => Monad (L m) where L m a = Either a (m a). This construction embellishes a given monad m by adding an explicit pure value (Left x), so that users can now pattern-match on L m to decide whether the value is pure. In all other ways, L m represents the same computational effect as the monad m.
The monad instance for L m is almost the same as for the example above, except the join and fmap of the monad m need to be used, and the helper function merge is defined by
merge :: Either a (m a) -> m a
merge (Left x) = return x   -- the `return` of the monad m
merge (Right p) = p
I checked that the laws of the monad hold for L m with an arbitrary monad m.
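For concreteness, a minimal sketch of this general construction as a standalone bind (the newtype LP and the function bindLP are hypothetical names; a full Monad instance would also need Functor and Applicative):

newtype LP m a = LP (Either a (m a))

bindLP :: Monad m => LP m a -> (a -> LP m b) -> LP m b
bindLP (LP (Left x))  f = f x
bindLP (LP (Right p)) f = LP (Right (p >>= merge . unLP . f))
  where
    unLP (LP e) = e
    merge (Left x)  = return x   -- the `return` of the inner monad m
    merge (Right q) = q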
This construction gives the free pointed functor on the given monad m. This construction guarantees that the free pointed functor on a monad is also a monad.
The transformer for the free pointed monad is defined like this:
type LT m n a = n (Either a (mT n a))
where mT is the monad transformer of the monad m (which needs to be known).
Another example:
type S a = (a -> Bool) -> Maybe a
This monad appeared in the context of "search monads" here. The paper by Jules Hedges also mentions the search monad, and more generally, "selection" monads of the form
type Sq n q a = (a -> n q) -> n a
for a given monad n and a fixed type q. The search monad above is a particular case of the selection monad with n a = Maybe a and q = (). The paper by Hedges claims (without proof, but he proved it later using Coq) that Sq is a monad transformer for the monad (a -> q) -> a.
However, the monad (a -> q) -> a has another monad transformer (m a -> q) -> m a of the "composed outside" type. This is related to the property of "rigidity" explored in the question Is this property of a functor stronger than a monad? Namely, (a -> q) -> a is a rigid monad, and all rigid monads have monad transformers of the "composed-outside" type.
Generally, transformed monads don't themselves automatically possess a monad transformer. That is, once we take some foreign monad m and apply some monad transformer t to it, we obtain a new monad t m, and this monad doesn't have a transformer: given a new foreign monad n, we don't know how to transform n with the monad t m. If we know the transformer mT for the monad m, we can first transform n with mT and then transform the result with t. But if we don't have a transformer for the monad m, we are stuck: there is no construction that creates a transformer for the monad t m out of the knowledge of t alone and works for arbitrary foreign monads m.
However, in practice all explicitly defined monads have explicitly defined transformers, so this problem does not arise.
@JamesCandy's answer suggests that for any monad (including IO?!), one can write a (general but complicated) type expression that represents the corresponding monad transformer. Namely, you first Church-encode your monad type, which makes the type look like a continuation monad, and then define its monad transformer as if it were the continuation monad. But I think this is incorrect - it does not give a recipe for producing a monad transformer in general.
Taking the Church encoding of a type a means writing down the type
type C a = forall r. (a -> r) -> r
This type C a is completely isomorphic to a by Yoneda's lemma. So far we have achieved nothing other than making the type a lot more complicated by introducing a quantified type parameter forall r.
Now let's Church-encode a base monad L:
type CL a = forall r. (L a -> r) -> r
Again, we have achieved nothing so far, since CL a is fully equivalent to L a.
Now pretend for a second that CL a is a continuation monad (which it isn't!), and write the monad transformer as if it were the continuation monad transformer, by replacing the result type r with m r:
type TCL m a = forall r. (L a -> m r) -> m r
This is claimed to be the "Church-encoded monad transformer" for L. But this seems to be incorrect. We need to check the properties:
TCL m is a lawful monad for any foreign monad m and for any base monad L
m a -> TCL m a is a lawful monadic morphism
The second property holds, but I believe the first property fails - in other words, TCL m is not a monad for an arbitrary monad m. Perhaps some monads m admit this but others do not. I was not able to find a general monad instance for TCL m corresponding to an arbitrary base monad L.
Another way to argue that TCL m is not in general a monad is to note that forall r. (a -> m r) -> m r is indeed a monad for any type constructor m. Denote this monad by CM. Now, TCL m a = CM (L a). If TCL m were a monad, it would imply that CM can be composed with any monad L and yields a lawful monad CM (L a). However, it is highly unlikely that a nontrivial monad CM (in particular, one that is not equivalent to Reader) will compose with all monads L. Monads usually do not compose without stringent further constraints.
A specific example where this does not work is for reader monads. Consider L a = r -> a and m a = s -> a where r and s are some fixed types. Now, we would like to consider the "Church-encoded monad transformer" forall t. (L a -> m t) -> m t. We can simplify this type expression using the Yoneda lemma,
forall t. (x -> t) -> Q t = Q x
(for any functor Q) and obtain
forall t. (L a -> s -> t) -> s -> t
= forall t. ((L a, s) -> t) -> s -> t
= s -> (L a, s)
= s -> (r -> a, s)
So this is the type expression for TCL m a in this case. If TCL were a monad transformer then P a = s -> (r -> a, s) would be a monad. But one can check explicitly that this P is actually not a monad (one cannot implement return and bind that satisfy the laws).
Even if this worked (i.e. assuming that I made a mistake in claiming that TCL m is in general not a monad), this construction has certain disadvantages:
It is not functorial (i.e. not covariant) with respect to the foreign monad m, so we cannot do things like interpret a transformed free monad into another monad, or merge two monad transformers as explained in "Is there a principled way to compose two monad transformers if they are of different type, but their underlying monad is of the same type?"
The presence of a forall r makes the type quite complicated to reason about and may lead to performance degradation (see the "Church encoding considered harmful" paper) and stack overflows (since Church encoding is usually not stack-safe)
The Church-encoded monad transformer for an identity base monad (L = Id) does not yield the unmodified foreign monad: T m a = forall r. (a -> m r) -> m r and this is not the same as m a. In fact it's quite difficult to figure out what that monad is, given a monad m.
As an example showing why forall r makes reasoning complicated, consider the foreign monad m a = Maybe a and try to understand what the type forall r. (a -> Maybe r) -> Maybe r actually means. I was not able to simplify this type or to find a good explanation about what this type does, i.e. what kind of "effect" it represents (since it's a monad, it must represent some kind of "effect") and how one would use such a type.
The Church-encoded monad transformer is not equivalent to the standard well-known monad transformers such as ReaderT, WriterT, EitherT, StateT and so on.
It is not clear how many other monad transformers exist and in what cases one would use one or another transformer.
One of the questions in the post is to find an explicit example of a monad m that has two transformers t1 and t2 such that for some foreign monad n, the monads t1 n and t2 n are not equivalent.
I believe that the Search monad provides such an example.
type Search a = (a -> p) -> a
where p is a fixed type.
The transformers are
type SearchT1 n a = (a -> n p) -> n a
type SearchT2 n a = (n a -> p) -> n a
I checked that both SearchT1 n and SearchT2 n are lawful monads for any monad n. We have liftings n a -> SearchT1 n a and n a -> SearchT2 n a that work by returning constant functions (just return n a as given, ignoring the argument). We have SearchT1 Identity and SearchT2 Identity obviously equivalent to Search.
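In the notation above (treating the fixed type p as given), those liftings are just constant functions (a sketch):

lift1 :: n a -> SearchT1 n a
lift1 na = \_ -> na

lift2 :: n a -> SearchT2 n a
lift2 na = \_ -> na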
The big difference between SearchT1 and SearchT2 is that SearchT1 is not functorial in n, while SearchT2 is. This may have implications for "running" ("interpreting") the transformed monad, since normally we would like to be able to lift an interpreter n a -> n' a into a "runner" SearchT n a -> SearchT n' a. This is possible only with SearchT2.
A similar deficiency is present in the standard monad transformers for the continuation monad and the codensity monad: they are not functorial in the foreign monad.
My solution exploits the logical structure of Haskell terms etc.
I looked at right Kan extensions as possible representations of the monad transformer. As everyone knows, right Kan extensions are limits, so it makes sense that they should serve as universal encoding of any object of interest. For monadic functors F and M, I looked at the right Kan extension of MF along F.
First I proved a lemma, the "rolling lemma": a functor composed with a right Kan extension can be rolled inside it, giving the map F (Ran G H) -> Ran G (F H) for any functors F, G, and H.
Using this lemma, I computed a monadic join for the right Kan extension Ran F (MF), requiring the distributive law FM -> MF. It is as follows:
Ran F (MF) . Ran F (MF)        [rolling lemma]
  => Ran F (Ran F (MF) MF)     [insert eta]
  => Ran F (Ran F (MF) F MF)   [gran]
  => Ran F (MFMF)              [apply distributive law]
  => Ran F (MMFF)              [join Ms and Fs]
  => Ran F (MF)
What seems to be interesting about this construction is that it admits lifts of both functors F and M, as follows:

(1) F              [lift into the codensity monad]
      => Ran F F   [procompose with eta]
      => Ran F (MF)

(2) M              [Yoneda lemma specialized upon F-]
      => Ran F (MF)
I also investigated the right Kan extension Ran F (FM). It seems to be a little better behaved, achieving monadicity without appeal to the distributive law, but it is much pickier about which functors it lifts. I determined that it will lift monadic functors under the following conditions:
1) F is monadic.
2) F |- U, in which case it admits the lift F ~> Ran U(UM). This can be used in the context of a state monad to "set" the state.
3) M under certain conditions, for instance when M admits a distributive law.
I have become rather interested in how computation is modeled in Haskell. Several resources have described monads as "composable computation" and arrows as "abstract views of computation". I've never seen monoids, functors or applicative functors described in this way. It seems that they lack the necessary structure.
I find that idea interesting and wonder if there are any other constructs that do something similar. If so, what are some resources that I can use to acquaint myself with them? Are there any packages on Hackage that might come in handy?
Note: This question is similar to
Monads vs. Arrows and https://stackoverflow.com/questions/2395715/resources-for-learning-monads-functors-monoids-arrows-etc, but I am looking for constructs beyond functors, applicative functors, monads, and arrows.
Edit: I concede that applicative functors should be considered "computational constructs", but I'm really looking for something I haven't come across yet, beyond applicative functors, monads and arrows.
Arrows are generalized by Categories, and so by the Category typeclass.
class Category f where
    (.) :: f b c -> f a b -> f a c
    id :: f a a
The Arrow typeclass definition has Category as a superclass. Categories (in the Haskell sense) generalize functions (you can compose them but not apply them) and so are definitely a "model of computation". Arrow provides a Category with additional structure for working with tuples. So, while Category mirrors something about Haskell's function space, Arrow extends that to something about product types.
Every Monad gives rise to something called a "Kleisli Category" and this construction gives you instances of ArrowApply. You can build a Monad out of any ArrowApply such that going full circle doesn't change your behavior, so in some deep sense Monad and ArrowApply are the same thing.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Category (Kleisli m) where
    id = Kleisli return
    (Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)

instance Monad m => Arrow (Kleisli m) where
    arr f = Kleisli (return . f)
    first (Kleisli f) = Kleisli (\ ~(b,d) -> f b >>= \c -> return (c,d))
    second (Kleisli f) = Kleisli (\ ~(d,b) -> f b >>= \c -> return (d,c))
Actually every Arrow gives rise to an Applicative (universally quantified to get the kinds right) in addition to the Category superclass, and I believe the combination of the appropriate Category and Applicative is enough to reconstruct your Arrow.
So, these structures are deeply connected.
Warning: wishy-washy commentary ahead. One central difference between the Functor/Applicative/Monad way of thinking and the Category/Arrow way of thinking is that while Functor and its ilk are generalizations at the level of objects (types in Haskell), Category/Arrow are generalizations of the notion of morphism (functions in Haskell). My belief is that thinking at the level of generalized morphisms involves a higher level of abstraction than thinking at the level of generalized objects. Sometimes that is a good thing, other times it is not. On the other hand, despite the fact that Arrows have a categorical basis and no one in math thinks Applicative is particularly interesting, it is my understanding that Applicative is generally better understood than Arrow.
Basically you can think of "Category < Arrow < ArrowApply" and "Functor < Applicative < Monad" such that "Category ~ Functor", "Arrow ~ Applicative" and "ArrowApply ~ Monad".
More Concrete Below:
As for other structures to model computation: one can often reverse the direction of the "arrows" (just meaning morphisms here) in categorical constructions to get the "dual" or "co-construction". So, if a monad is defined as
class Functor m => Monad m where
    return :: a -> m a
    join :: m (m a) -> m a
(okay, I know that isn't how Haskell defines things, but ma >>= f = join $ fmap f ma and join x = x >>= id so it just as well could be)
then the comonad is
class Functor m => Comonad m where
    extract :: m a -> a -- this is co-return
    duplicate :: m a -> m (m a) -- this is co-join
This thing turns out to be pretty common also. It turns out that Comonad is the basic underlying structure of cellular automata. For completeness, I should point out that Edward Kmett's Control.Comonad puts duplicate in a class between Functor and Comonad for "extendable functors", because you can also define
extend :: (m a -> b) -> m a -> m b -- looks familiar? this is just the dual of >>=
extend f = fmap f . duplicate

-- and this is enough:
duplicate = extend id
It turns out that all Monads are also "Extendable"
monadDuplicate :: Monad m => m a -> m (m a)
monadDuplicate = return
while all Comonads are "Joinable"
comonadJoin :: Comonad m => m (m a) -> m a
comonadJoin = extract
so these structures are very close together.
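As a tiny concrete instance of the Comonad class sketched above (the Env type here is just an illustration, often called the environment or coreader comonad):

data Env e a = Env e a

instance Functor (Env e) where
    fmap f (Env e a) = Env e (f a)

instance Comonad (Env e) where
    extract (Env _ a) = a
    duplicate (Env e a) = Env e (Env e a)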
All Monads are Arrows (Monad is isomorphic to ArrowApply). In a different way, all Monads are instances of Applicative, where <*> is Control.Monad.ap and *> is >>. Applicative is weaker because it does not guarantee the >>= operation. Thus Applicative captures computations that do not examine previous results and branch on values. In retrospect, much monadic code is actually applicative, and with a clean rewrite it could be expressed that way.
Extending monads: with the recent Constraint kinds in GHC 7.4.1, there can now be nicer designs for restricted monads. There are also people looking at parameterized monads, and of course I include a link to something by Oleg.
In libraries these structures give rise to different type of computations.
For example, Applicatives can be used to implement static effects. By that I mean effects that are fixed beforehand - for example, rejecting or accepting an input state when implementing a state machine. The internal structure of the effect can't be manipulated in terms of the input.
The type says it all:
(<*>) :: f (a -> b) -> f a -> f b
It is easy to see that the structure of f cannot depend on the input a, because a cannot reach f at the type level.
Monads can be used for dynamic effects. This can also be seen from the type signature:
(>>=) :: m a -> (a -> m b) -> m b
How can you see this? Because a is on the same "level" as m. Mathematically, it is a two-stage process. Bind is a composition of two functions: fmap and join. First we use fmap together with the monadic action to create a new structure embedded in the old one:
fmap :: (a -> b) -> m a -> m b
f :: (a -> m b)
m :: m a
fmap f :: m a -> m (m b)
fmap f m :: m (m b)
Fmap can create a new structure, based on the input value. Then we collapse the structure with join, thus we are able to manipulate the structure from within the monadic computation in a way that depends on the input:
join :: m (m a) -> m a
join (fmap f m) :: m b
Many monads are easier to implement with join:
m >>= f = join (fmap f m)
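A concrete illustration with Maybe (safeRecip is a made-up helper):

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

-- fmap builds the nested structure, join collapses it:
--   fmap safeRecip (Just 4)        :: Maybe (Maybe Double)  -- Just (Just 0.25)
--   join (fmap safeRecip (Just 4))                          -- Just 0.25
--   join (fmap safeRecip (Just 0))                          -- Nothing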
This is possible with monads:
addCounter :: Int -> m Int ()
But not with applicatives; applicatives (and any monad) can still do things like:
addOne :: m Int ()
Arrows give more control over the input and the output types, but for me they really feel similar to applicatives. Maybe I am wrong about that.
Let's say I have a function
f :: State [Int] Int
and a function:
g :: StateT [Int] IO Int
I want to use f in g and pass the state between them. Is there a library function for
StateT (return . runState f)? Or in general, given a monad transformer with a corresponding monad, is there a library function for it?
Even more generally, what you're trying to do is apply a transformation to an inner layer of a transformer stack. For two arbitrary monads, the type signature might look something like this:
fmapMT :: (MonadTrans t, Monad m1, Monad m2) => (m1 a -> m2 a) -> t m1 a -> t m2 a
Basically a higher-level fmap. In fact, it would probably make even more sense to combine it with a map over the final parameter as well:
fmapMT :: (MonadTrans t, Monad m1, Monad m2) => (m1 a -> m2 b) -> t m1 a -> t m2 b
Clearly this isn't going to be possible in all cases (though when the "source" monad is Identity it's likely to be easier), but I can imagine defining another type class for the places where it does work. I don't think there's anything like this in the typical monad transformer libraries; however, some browsing on Hackage turns up something very similar in the Monatron package:
class MonadT t => FMonadT t where
    tmap' :: FunctorD m -> FunctorD n -> (a -> b)
          -> (forall x. m x -> n x) -> t m a -> t n b

tmap :: (FMonadT t, Functor m, Functor n) => (forall b. m b -> n b)
     -> t m a -> t n a
tmap = tmap' functor functor id
In the signature for tmap', the FunctorD types are basically ad-hoc implementations of fmap instead of using Functor instances directly.
Also, for two Functor-like type constructors F and G, a function with a type like (forall a. F a -> G a) describes a natural transformation from F to G. There's quite possibly another implementation of the transformer map that you want somewhere in the category-extras package, but I'm not sure what the category-theoretic version of a monad transformer would be, so I don't know what it might be called.
Since tmap requires only a Functor instance (which any Monad must have) and a natural transformation, and any Monad has a natural transformation from the Identity monad provided by return, the function you want can be written generically for any instance of FMonadT as tmap (return . runIdentity) - assuming the "basic" monad is defined as a synonym for the transformer applied to Identity, which is generally the case with transformer libraries.
Getting back to your specific example, note that Monatron does indeed have an instance of FMonadT for StateT.
What you are asking for is a mapping (known as a monad morphism) from a monad StateT m to StateT n. I'll be using the mmorph library, which provides a very nice set of tools for working with monad morphisms.
To perform the State -> StateT m transform you are looking for, we'll start by defining a morphism to generalize the Identity monad embedded in State,
generalize :: Monad m => Identity a -> m a
generalize = return . runIdentity
Next we'll want to lift this morphism to act on the inner monad of your StateT. That is, we want a function which, given a mapping from one monad to another (e.g. our generalize morphism), gives us a function acting on the base monad of a monad transformer, e.g. t Identity a -> t m a. You'll find this resembles the hoist function of mmorph's MFunctor class,
hoist :: Monad m => (forall a. m a -> n a) -> t m b -> t n b
Putting the pieces together,
myAction :: State s Int
myAction = return 2
myAction' :: Monad m => StateT s m Int
myAction' = hoist generalize myAction
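Tying this back to the question's f and g, a sketch (the bodies here are made up; it assumes the generalize defined above, mmorph's hoist, and the usual mtl imports):

-- needs Control.Monad.State (State, StateT, gets) and Control.Monad.IO.Class (liftIO)
f :: State [Int] Int
f = gets length

g :: StateT [Int] IO Int
g = do
    n <- hoist generalize f   -- run the pure State action inside the IO-based stack
    liftIO (print n)
    return n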
Such a function is not definable for all monad transformers. The Cont r monad, for example, can't be lifted into ContT r IO because that would require turning a continuation in the IO monad (a -> IO r) into a pure continuation (a -> r).