Monad instance of a number-parameterised vector? - haskell

Statically sized vectors in Haskell are shown in Oleg Kiselyov's Number-parameterized types, and can also be found as the Data.Param.FSVec type in the parameterized-data package on Hackage:
data Nat s => FSVec s a
FSVec is not an instance of the Monad type class.
The monad instance for lists can be used to remove or duplicate elements:
Prelude> [1,2,3] >>= \i -> case i of 1 -> [1,1]; 2 -> []; _ -> [i]
[1,1,3]
Whether or not it behaves like the list version, is it possible to construct a monad from a fixed-length vector?

Yes it is possible, if not natural.
The monad has to 'diagonalize' the result in order to satisfy the monad laws.
That is to say, you can look at a vector as a tabulated function from [0..n-1] -> a and then adapt the monad instance for functions.
The resulting join operation takes a square matrix in the form of a vector of vectors and returns its diagonal.
Given
tabulate :: Pos n => (forall m. (Nat m, m :<: n) => m -> a) -> FSVec n a
then
instance Pos n => Monad (FSVec n) where
  return = copy (toNum undefined)
  v >>= f = tabulate (\i -> f (v ! i) ! i)
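To make the diagonalisation concrete, here is a minimal sketch using plain lists as a stand-in for FSVec n (the name diagBind is ours; we simply pretend every list involved has the same fixed length):

```haskell
-- Plain lists stand in for FSVec n: we pretend every list has the same
-- fixed length n. diagBind is the tabulate-based bind written out directly.
diagBind :: [a] -> (a -> [b]) -> [b]
diagBind v f = [ f (v !! i) !! i | i <- [0 .. length v - 1] ]
```

With f = id on a "square matrix" of lists, the underlying join really is the diagonal: diagBind [[1,2],[3,4]] id yields [1,4], and diagBind [10,20,30] (\x -> [x+1,x+2,x+3]) yields [11,22,33].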
Sadly uses of this monad are somewhat limited.
I have a half-dozen variations on the theme in my streams package and Jeremy Gibbons wrote a blog post on this monad.
Equivalently, you can view a FSVec n as a representable functor with its representation being natural numbers bounded by n, then use the definitions of bindRep and pureRep in my representable-functors package to get the definition automatically.

That seems impossible, given that any monad has a join function: if the vector size is not exactly zero or one, joining a vector of vectors would change the vector size. You can make it a Functor and an Applicative, though.

Sure you can do that. Just write
instance Monad (FSVec s) where
  -- your definition of return
  -- your definition of >>=

Related

Does Haskell have a "double bind" operation for two layers of monad? [duplicate]

Is there a way to implement bind for nested monads? What I want is the following signature:
(>>>=) :: (Monad m, Monad n) => m (n a) -> (a -> m (n b)) -> m (n b)
It looks like it should be a trivial task, but I somehow just can't wrap my head around it. In my program, I use this pattern for several different combinations of monads and for each combination, I can implement it. But for the general case, I just don't understand it.
Edit: It seems that it is not possible in the general case. But it is certainly possible in some special cases. E.g. if the inner Monad is a Maybe. Since it IS possible for all the Monads I want to use, having additional constraints seems fine for me. So I change the question a bit:
What additional constraints do I need on n such that the following is possible?
(>>>=) :: (Monad m, Monad n, ?? n) => m (n a) -> (a -> m (n b)) -> m (n b)
Expanding on the comments: As the linked questions show, it is necessary to have some function n (m a) -> m (n a) to even have a chance to make the composition a monad.
If your inner monad is a Traversable, then sequence provides such a function, and the following will have the right type:
(>>>=) :: (Monad m, Monad n, Traversable n) => m (n a) -> (a -> m (n b)) -> m (n b)
m >>>= k = do
  a <- m
  b <- sequence (fmap k a)
  return (join b)
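As a quick sanity check of this definition, here it is compiled standalone, instantiated with the (arbitrarily chosen) concrete pair m = [] and n = Maybe:

```haskell
import Control.Monad (join)

-- The definition from above; sequenceA is the Traversable method that
-- provides the required n (m a) -> m (n a) distribution.
(>>>=) :: (Monad m, Monad n, Traversable n) => m (n a) -> (a -> m (n b)) -> m (n b)
m >>>= k = do
  a <- m
  b <- sequenceA (fmap k a)
  return (join b)
```

For instance, [Just 1, Nothing] >>>= \x -> [Just (x * 10)] evaluates to [Just 10, Nothing]: the Nothing short-circuits the inner layer while the outer list layer carries on.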
Several well-known transformers are in fact simple newtype wrappers over something equivalent to this (although mostly defining things with pattern matching instead of literally using the inner monads' Monad and Traversable instances):
MaybeT based on Maybe
ExceptT based on Either
WriterT based on (,) ((,) doesn't normally have its Monad instance defined, and WriterT is using the wrong tuple order to make use of it if it had - but in spirit it could have).
ListT based on []. Oh, whoops...
The last one is in fact notorious for not being a monad unless the lifted monad is "commutative" - otherwise, expressions that should be equal by the monad laws can give different order of effects. My hunch is that this comes essentially from lists being able to contain more than one value, unlike the other, reliably working examples.
So, although the above definition will be correctly typed, it can still break the monad laws.
Also as an afterthought, one other transformer is such a nested monad, but in a completely different way: ReaderT, based on using (->) as the outer monad.

Compose nested Monads in Haskell


Are there non-trivial Foldable or Traversable instances that don't look like containers?

There are lots of functors that look like containers (lists, sequences, maps, etc.), and many others that don't (state transformers, IO, parsers, etc.). I've not yet seen any non-trivial Foldable or Traversable instances that don't look like containers (at least if you squint a bit). Do any exist? If not, I'd love to get a better understanding of why they can't.
Every valid Traversable f is isomorphic to Normal s for some s :: Nat -> * where
data Normal (s :: Nat -> *) (x :: *) where -- Normal is Girard's terminology
  (:-) :: s n -> Vec n x -> Normal s x

data Nat = Zero | Suc Nat

data Vec (n :: Nat) (x :: *) where
  Nil :: Vec Zero x
  (:::) :: x -> Vec n x -> Vec (Suc n) x
but it's not at all trivial to implement the iso in Haskell (but it's worth a go with full dependent types). Morally, the s you pick is
data {- not really -} ShapeSize (f :: * -> *) (n :: Nat) where
Sized :: pi (xs :: f ()) -> ShapeSize f (length xs)
and the two directions of the iso separate and recombine shape and contents. The shape of a thing is given just by fmap (const ()), and the key point is that the length of the shape of an f x is the length of the f x itself.
Vectors are traversable in the visit-each-once-left-to-right sense. Normals are traversable precisely by preserving the shape (hence the size) and traversing the vector of elements. To be traversable is to have finitely many element positions arranged in a linear order: isomorphism to a normal functor exactly exposes the elements in their linear order. Correspondingly, every Traversable structure is a (finitary) container: they have a set of shapes-with-size and a corresponding notion of position given by the initial segment of the natural numbers strictly less than the size.
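The shape/contents decomposition can be sketched in ordinary Haskell (split and refill are our names; refill is partial if the element list is shorter than the shape):

```haskell
import Data.Traversable (mapAccumL)

-- The shape of a structure is fmap (const ()); the contents are its
-- elements in traversal order. refill threads a list of elements back
-- through a shape, left to right.
split :: Traversable t => t a -> (t (), [a])
split t = (fmap (const ()) t, foldr (:) [] t)

refill :: Traversable t => t () -> [a] -> t a
refill shape xs = snd (mapAccumL (\(y : ys) _ -> (ys, y)) xs shape)
```

split and refill are mutually inverse when the lengths match, which is the isomorphism-with-a-normal-functor claim in miniature.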
The Foldable things are also finitary and they keep things in an order (there is a sensible toList), but they are not guaranteed to be Functors, so they don't have such a crisp notion of shape. In that sense (the sense of "container" defined by my colleagues Abbott, Altenkirch and Ghani), they do not necessarily admit a shapes-and-positions characterization and are thus not containers. If you're lucky, some of them may be containers up to some quotient. Indeed Foldable exists to allow processing of structures like Set whose internal structure is intended to be a secret, and certainly depends on ordering information about the elements which is not necessarily respected by traversing operations. Exactly what constitutes a well-behaved Foldable is rather a moot point, however: I won't quibble with the pragmatic benefits of that library design choice, but I could wish for a clearer specification.
Well, with the help of universe, one could potentially write Foldable and Traversable instances for state transformers over finite state spaces. The idea would be roughly similar to the Foldable and Traversable instances for functions: run the function everywhere for Foldable and make a lookup table for Traversable. Thus:
import Control.Monad.State
import Data.Map
import Data.Universe
-- e.g. `m ~ Identity` satisfies these constraints
instance (Finite s, Foldable m, Monad m) => Foldable (StateT s m) where
  foldMap f m = mconcat [foldMap f (evalStateT m s) | s <- universeF]

fromTable :: (Finite s, Ord s) => [m (a, s)] -> StateT s m a
fromTable vs = StateT (fromList (zip universeF vs) !)

float :: (Traversable m, Applicative f) => m (f a, s) -> f (m (a, s))
float = traverse (\(fa, s) -> fmap (\a -> (a, s)) fa)

instance (Finite s, Ord s, Traversable m, Monad m) => Traversable (StateT s m) where
  sequenceA m = fromTable <$> traverse (float . runStateT m) universeF
I'm not sure whether this makes sense. If it does, I think I would be happy to add it to the package; what do you think?
I don't think it's actually Foldable or Traversable, but MonadRandom is an example of something that could be: it functions like an infinite list, but doesn't look any more like a container than anything else that's foldable. Conceptually, it's a random variable.

Is there a monad that doesn't have a corresponding monad transformer (except IO)?

So far, every monad (that can be represented as a data type) that I have encountered had a corresponding monad transformer, or could have one. Is there such a monad that can't have one? Or do all monads have a corresponding transformer?
By a transformer t corresponding to monad m I mean that t Identity is isomorphic to m. And of course that it satisfies the monad transformer laws and that t n is a monad for any monad n.
I'd like to see either a proof (ideally a constructive one) that every monad has one, or an example of a particular monad that doesn't have one (with a proof). I'm interested in both more Haskell-oriented answers, as well as (category) theoretical ones.
As a follow-up question, is there a monad m that has two distinct transformers t1 and t2? That is, t1 Identity is isomorphic to t2 Identity and to m, but there is a monad n such that t1 n is not isomorphic to t2 n.
(IO and ST have a special semantics so I don't take them into account here and let's disregard them completely. Let's focus only on "pure" monads that can be constructed using data types.)
I'm with @Rhymoid on this one; I believe all Monads have two (!!) transformers. My construction is a bit different and far less complete. I'd like to be able to turn this sketch into a proof, but I think I'm either missing the skills/intuition and/or it may be quite involved.
Due to Kleisli, every monad (m) can be decomposed into two functors F_k and G_k such that F_k is left adjoint to G_k and that m is isomorphic to G_k * F_k (here * is functor composition). Also, because of the adjunction, F_k * G_k forms a comonad.
I claim that t_mk defined such that t_mk n = G_k * n * F_k is a monad transformer. Clearly, t_mk Id = G_k * Id * F_k = G_k * F_k = m. Defining return for this functor is not difficult since F_k is a "pointed" functor, and defining join should be possible since extract from the comonad F_k * G_k can be used to reduce values of type (t_mk n * t_mk n) a = (G_k * n * F_k * G_k * n * F_k) a to values of type (G_k * n * n * F_k) a, which then reduce further via join from n.
We do have to be a bit careful since F_k and G_k are not endofunctors on Hask. So, they are not instances of the standard Functor typeclass, and also are not directly composable with n as shown above. Instead we have to "project" n into the Kleisli category before composition, but I believe return from m provides that "projection".
I believe you can also do this with the Eilenberg-Moore monad decomposition, giving m = G_em * F_em, tm_em n = G_em * n * F_em, and similar constructions for lift, return, and join with a similar dependency on extract from the comonad F_em * G_em.
Here's a hand-wavy I'm-not-quite-sure answer.
Monads can be thought of as the interface of imperative languages. return is how you inject a pure value into the language, and >>= is how you splice pieces of the language together. The Monad laws ensure that "refactoring" pieces of the language works the way you would expect. Any additional actions provided by a monad can be thought of as its "operations."
Monad Transformers are one way to approach the "extensible effects" problem. If we have a Monad Transformer t which transforms a Monad m, then we could say that the language m is being extended with additional operations available via t. The Identity monad is the language with no effects/operations, so applying t to Identity will just get you a language with only the operations provided by t.
So if we think of Monads in terms of the "inject, splice, and other operations" model, then we can just reformulate them using the Free Monad Transformer. Even the IO monad could be turned into a transformer this way. The only catch is that you probably want some way to peel that layer off the transformer stack at some point, and the only sensible way to do it is if you have IO at the bottom of the stack so that you can just perform the operations there.
Previously, I thought I found examples of explicitly defined monads without a transformer, but those examples were incorrect.
The transformer for Either a (z -> a) is m (Either a (z -> m a)), where m is an arbitrary foreign monad. The transformer for (a -> n p) -> n a is (a -> t m p) -> t m a, where t m is the transformer for the monad n.
The free pointed monad.
The monad type constructor L for this example is defined by
type L z a = Either a (z -> a)
The intent of this monad is to embellish the ordinary reader monad z -> a with an explicit pure value (Left x). The ordinary reader monad's pure value is a constant function, pure x = \_ -> x. However, if we are given a value of type z -> a, we will not be able to determine whether this value is a constant function. With L z a, the pure value is represented explicitly as Left x. Users can now pattern-match on L z a and determine whether a given monadic value is pure or has an effect. Other than that, the monad L z does exactly the same thing as the reader monad.
The monad instance:
instance Monad (L z) where
  return x = Left x
  (Left x) >>= f = f x
  (Right q) >>= f = Right (join merged) where
    join :: (z -> z -> r) -> z -> r
    join f x = f x x -- the standard `join` for the Reader monad
    merged :: z -> z -> r
    merged = merge . f . q -- `f . q` is the `fmap` of the Reader monad
    merge :: Either a (z -> a) -> z -> a
    merge (Left x) _ = x
    merge (Right p) z = p z
This monad L z is a specific case of a more general construction, (Monad m) => Monad (L m) where L m a = Either a (m a). This construction embellishes a given monad m by adding an explicit pure value (Left x), so that users can now pattern-match on L m to decide whether the value is pure. In all other ways, L m represents the same computational effect as the monad m.
The monad instance for L m is almost the same as in the example above, except that the join and fmap of the monad m need to be used, and the helper function merge is defined by
merge :: Either a (m a) -> m a
merge (Left x) = return x -- the `return` of the monad m
merge (Right p) = p
I checked that the laws of the monad hold for L m with an arbitrary monad m.
This construction gives the free pointed functor on the given monad m. This construction guarantees that the free pointed functor on a monad is also a monad.
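Here is an executable sketch of the general construction (the wrapper FP and field unFP are our names; the newtype is only there so we can write instances in Haskell):

```haskell
import Control.Monad (ap)

-- L m a = Either a (m a), wrapped so we can give it instances.
newtype FP m a = FP { unFP :: Either a (m a) }

-- merge as defined in the text: Left is an explicit pure value.
merge :: Monad m => Either a (m a) -> m a
merge (Left x)  = return x  -- the monad m's return
merge (Right p) = p

instance Functor m => Functor (FP m) where
  fmap f (FP (Left x))  = FP (Left (f x))
  fmap f (FP (Right p)) = FP (Right (fmap f p))

instance Monad m => Applicative (FP m) where
  pure  = FP . Left
  (<*>) = ap

instance Monad m => Monad (FP m) where
  FP (Left x)  >>= f = f x
  FP (Right p) >>= f = FP (Right (p >>= merge . unFP . f))
```

With m = Maybe, FP (Right (Just 1)) >>= \x -> pure (x + 1) gives FP (Right (Just 2)): the effectful branch stays effectful, while binding from an explicit pure value costs nothing.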
The transformer for the free pointed monad is defined like this:
type LT m n a = n (Either a (mT n a))
where mT is the monad transformer of the monad m (which needs to be known).
Another example:
type S a = (a -> Bool) -> Maybe a
This monad appeared in the context of "search monads" here. The paper by Jules Hedges also mentions the search monad, and more generally, "selection" monads of the form
type Sq n q a = (a -> n q) -> n a
for a given monad n and a fixed type q. The search monad above is a particular case of the selection monad with n a = Maybe a and q = (). The paper by Hedges claims (without proof, but he proved it later using Coq) that Sq is a monad transformer for the monad (a -> q) -> a.
However, the monad (a -> q) -> a has another monad transformer (m a -> q) -> m a of the "composed outside" type. This is related to the property of "rigidity" explored in the question Is this property of a functor stronger than a monad? Namely, (a -> q) -> a is a rigid monad, and all rigid monads have monad transformers of the "composed-outside" type.
Generally, transformed monads don't themselves automatically possess a monad transformer. That is, once we take some foreign monad m and apply some monad transformer t to it, we obtain a new monad t m, and this monad doesn't have a transformer: given a new foreign monad n, we don't know how to transform n with the monad t m. If we know the transformer mT for the monad m, we can first transform n with mT and then transform the result with t. But if we don't have a transformer for the monad m, we are stuck: there is no construction that creates a transformer for the monad t m out of the knowledge of t alone and works for arbitrary foreign monads m.
However, in practice all explicitly defined monads have explicitly defined transformers, so this problem does not arise.
@JamesCandy's answer suggests that for any monad (including IO?!), one can write a (general but complicated) type expression that represents the corresponding monad transformer. Namely, you first need to Church-encode your monad type, which makes the type look like a continuation monad, and then define its monad transformer as if for the continuation monad. But I think this is incorrect - it does not give a recipe for producing a monad transformer in general.
Taking the Church encoding of a type a means writing down the type
type ca = forall r. (a -> r) -> r
This type ca is completely isomorphic to a by Yoneda's lemma. So far we have achieved nothing other than made the type a lot more complicated by introducing a quantified type parameter forall r.
Now let's Church-encode a base monad L:
type CL a = forall r. (L a -> r) -> r
Again, we have achieved nothing so far, since CL a is fully equivalent to L a.
Now pretend for a second that CL a is a continuation monad (which it isn't!), and write the monad transformer as if it were the continuation monad transformer, by replacing the result type r with m r:
type TCL m a = forall r. (L a -> m r) -> m r
This is claimed to be the "Church-encoded monad transformer" for L. But this seems to be incorrect. We need to check the properties:
TCL m is a lawful monad for any foreign monad m and for any base monad L
m a -> TCL m a is a lawful monadic morphism
The second property holds, but I believe the first property fails; in other words, TCL m is not a monad for an arbitrary monad m. Perhaps some monads m admit this while others do not. I was not able to find a general monad instance for TCL m corresponding to an arbitrary base monad L.
Another way to argue that TCL m is not in general a monad is to note that forall r. (a -> m r) -> m r is indeed a monad for any type constructor m. Denote this monad by CM. Now, TCL m a = CM (L a). If TCL m were a monad, it would imply that CM can be composed with any monad L and yields a lawful monad CM (L a). However, it is highly unlikely that a nontrivial monad CM (in particular, one that is not equivalent to Reader) will compose with all monads L. Monads usually do not compose without stringent further constraints.
A specific example where this does not work is for reader monads. Consider L a = r -> a and m a = s -> a where r and s are some fixed types. Now, we would like to consider the "Church-encoded monad transformer" forall t. (L a -> m t) -> m t. We can simplify this type expression using the Yoneda lemma,
forall t. (x -> t) -> Q t = Q x
(for any functor Q) and obtain
forall t. (L a -> s -> t) -> s -> t
= forall t. ((L a, s) -> t) -> s -> t
= s -> (L a, s)
= s -> (r -> a, s)
So this is the type expression for TCL m a in this case. If TCL were a monad transformer then P a = s -> (r -> a, s) would be a monad. But one can check explicitly that this P is actually not a monad (one cannot implement return and bind that satisfy the laws).
Even if this worked (i.e. assuming that I made a mistake in claiming that TCL m is in general not a monad), this construction has certain disadvantages:
It is not functorial (i.e. not covariant) with respect to the foreign monad m, so we cannot do things like interpret a transformed free monad into another monad, or merge two monad transformers as explained here Is there a principled way to compose two monad transformers if they are of different type, but their underlying monad is of the same type?
The presence of a forall r makes the type quite complicated to reason about and may lead to performance degradation (see the "Church encoding considered harmful" paper) and stack overflows (since Church encoding is usually not stack-safe)
The Church-encoded monad transformer for an identity base monad (L = Id) does not yield the unmodified foreign monad: T m a = forall r. (a -> m r) -> m r and this is not the same as m a. In fact it's quite difficult to figure out what that monad is, given a monad m.
As an example showing why forall r makes reasoning complicated, consider the foreign monad m a = Maybe a and try to understand what the type forall r. (a -> Maybe r) -> Maybe r actually means. I was not able to simplify this type or to find a good explanation about what this type does, i.e. what kind of "effect" it represents (since it's a monad, it must represent some kind of "effect") and how one would use such a type.
The Church-encoded monad transformer is not equivalent to the standard well-known monad transformers such as ReaderT, WriterT, EitherT, StateT and so on.
It is not clear how many other monad transformers exist and in what cases one would use one or another transformer.
One of the questions in the post is to find an explicit example of a monad m that has two transformers t1 and t2 such that for some foreign monad n, the monads t1 n and t2 n are not equivalent.
I believe that the Search monad provides such an example.
type Search a = (a -> p) -> a
where p is a fixed type.
The transformers are
type SearchT1 n a = (a -> n p) -> n a
type SearchT2 n a = (n a -> p) -> n a
I checked that both SearchT1 n and SearchT2 n are lawful monads for any monad n. We have liftings n a -> SearchT1 n a and n a -> SearchT2 n a that work by returning constant functions (just return n a as given, ignoring the argument). We have SearchT1 Identity and SearchT2 Identity obviously equivalent to Search.
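To make at least one of these concrete, here is an executable sketch of SearchT1 (the newtype and names are ours, and the bind follows the usual selection-monad pattern; we have not machine-checked the laws here):

```haskell
import Data.Functor.Identity (Identity (..))

-- SearchT1 n a = (a -> n p) -> n a, wrapped as a newtype.
newtype SearchT1 p n a = S1 { runS1 :: (a -> n p) -> n a }

instance Monad n => Functor (SearchT1 p n) where
  fmap f (S1 m) = S1 (\k -> fmap f (m (k . f)))

instance Monad n => Applicative (SearchT1 p n) where
  pure x = S1 (\_ -> pure x)
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad n => Monad (SearchT1 p n) where
  S1 m >>= f = S1 (\k ->
    m (\a -> runS1 (f a) k >>= k) >>= \a -> runS1 (f a) k)

-- The lifting mentioned in the text: return a constant function that
-- ignores the "predicate" argument.
lift1 :: Monad n => n a -> SearchT1 p n a
lift1 na = S1 (\_ -> na)
```

With n = Identity this collapses to the plain selection monad (a -> p) -> a.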
The big difference between SearchT1 and SearchT2 is that SearchT1 is not functorial in n, while SearchT2 is. This may have implications for "running" ("interpreting") the transformed monad, since normally we would like to be able to lift an interpreter n a -> n' a into a "runner" SearchT n a -> SearchT n' a. This is possible only with SearchT2.
A similar deficiency is present in the standard monad transformers for the continuation monad and the codensity monad: they are not functorial in the foreign monad.
My solution exploits the logical structure of Haskell terms etc.
I looked at right Kan extensions as possible representations of the monad transformer. As everyone knows, right Kan extensions are limits, so it makes sense that they should serve as universal encoding of any object of interest. For monadic functors F and M, I looked at the right Kan extension of MF along F.
First I proved a "rolling lemma": a functor precomposed with a right Kan extension can be rolled inside it, giving the map F (Ran G H) -> Ran G (F H) for any functors F, G, and H.
Using this lemma, I computed a monadic join for the right Kan extension Ran F (MF), requiring the distributive law FM -> MF. It is as follows:
Ran F(MF) . Ran F(MF) [rolling lemma] =>
Ran F(Ran F(MF)MF) [insert eta] =>
Ran F(Ran F(MF)FMF) [gran] =>
Ran F(MFMF) [apply distributive law] =>
Ran F(MMFF) [join Ms and Fs] =>
Ran F(MF).
What seems to be interesting about this construction is that it admits of lifts of both functors F and M as follows:
(1) F [lift into codensity monad] =>
Ran F F [procompose with eta] =>
Ran F(MF).
(2) M [Yoneda lemma specialized upon F-] =>
Ran F(MF).
I also investigated the right Kan extension Ran F(FM). It seems to be a little better behaved achieving monadicity without appeal to the distributive law, but much pickier in what functors it lifts. I determined that it will lift monadic functors under the following conditions:
1) F is monadic.
2) F |- U, in which case it admits the lift F ~> Ran U(UM). This can be used in the context of a state monad to "set" the state.
3) M under certain conditions, for instance when M admits a distributive law.

Computation Constructs (Monads, Arrows, etc.)

I have become rather interested in how computation is modeled in Haskell. Several resources have described monads as "composable computation" and arrows as "abstract views of computation". I've never seen monoids, functors or applicative functors described in this way. It seems that they lack the necessary structure.
I find that idea interesting and wonder if there are any other constructs that do something similar. If so, what are some resources that I can use to acquaint myself with them? Are there any packages on Hackage that might come in handy?
Note: This question is similar to
Monads vs. Arrows and https://stackoverflow.com/questions/2395715/resources-for-learning-monads-functors-monoids-arrows-etc, but I am looking for constructs beyond functors, applicative functors, monads, and arrows.
Edit: I concede that applicative functors should be considered "computational constructs", but I'm really looking for something I haven't come across yet. This includes applicative functors, monads and arrows.
Arrows are generalized by Categories, and so by the Category typeclass.
class Category f where
  id :: f a a
  (.) :: f b c -> f a b -> f a c
The Arrow typeclass definition has Category as a superclass. Categories (in the Haskell sense) generalize functions (you can compose them but not apply them) and so are definitely a "model of computation". Arrow provides a Category with additional structure for working with tuples. So, while Category mirrors something about Haskell's function space, Arrow extends that to something about product types.
Every Monad gives rise to something called a "Kleisli Category" and this construction gives you instances of ArrowApply. You can build a Monad out of any ArrowApply such that going full circle doesn't change your behavior, so in some deep sense Monad and ArrowApply are the same thing.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
instance Monad m => Category (Kleisli m) where
  id = Kleisli return
  (Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\ ~(b,d) -> f b >>= \c -> return (c,d))
  second (Kleisli f) = Kleisli (\ ~(d,b) -> f b >>= \c -> return (d,c))
Actually every Arrow gives rise to an Applicative (universally quantified to get the kinds right) in addition to the Category superclass, and I believe the combination of the appropriate Category and Applicative is enough to reconstruct your Arrow.
So, these structures are deeply connected.
Warning: wishy-washy commentary ahead. One central difference between the Functor/Applicative/Monad way of thinking and the Category/Arrow way of thinking is that while Functor and its ilk are generalizations at the level of objects (types in Haskell), Category/Arrow are generalizations of the notion of morphism (functions in Haskell). My belief is that thinking at the level of generalized morphisms involves a higher level of abstraction than thinking at the level of generalized objects. Sometimes that is a good thing, other times it is not. On the other hand, despite the fact that Arrows have a categorical basis, and no one in math thinks Applicative is interesting, it is my understanding that Applicative is generally better understood than Arrow.
Basically you can think of "Category < Arrow < ArrowApply" and "Functor < Applicative < Monad" such that "Category ~ Functor", "Arrow ~ Applicative" and "ArrowApply ~ Monad".
More Concrete Below:
As for other structures to model computation: one can often reverse the direction of the "arrows" (just meaning morphisms here) in categorical constructions to get the "dual" or "co-construction". So, if a monad is defined as
class Functor m => Monad m where
  return :: a -> m a
  join :: m (m a) -> m a
(okay, I know that isn't how Haskell defines things, but ma >>= f = join $ fmap f ma and join x = x >>= id so it just as well could be)
then the comonad is
class Functor m => Comonad m where
  extract :: m a -> a -- this is co-return
  duplicate :: m a -> m (m a) -- this is co-join
This thing turns out to be pretty common also. It turns out that Comonad is the basic underlying structure of cellular automata. For completeness, I should point out that Edward Kmett's Control.Comonad puts duplicate in a class between Functor and Comonad for "Extendable Functors", because you can also define
extend :: (m a -> b) -> m a -> m b -- Looks familiar? this is just the dual of >>=
extend f = fmap f . duplicate
--this is enough
duplicate = extend id
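A concrete Comonad exhibiting this structure is the product ("env") comonad, spelled out here standalone (rather than via the comonad package) so the duality with return/join is visible:

```haskell
-- The environment comonad: a value paired with a context e.
data Env e a = Env e a deriving (Eq, Show)

instance Functor (Env e) where
  fmap f (Env e a) = Env e (f a)

extract :: Env e a -> a
extract (Env _ a) = a

duplicate :: Env e a -> Env e (Env e a)
duplicate w@(Env e _) = Env e w

-- extend, the dual of >>=: equivalent to fmap f . duplicate.
extend :: (Env e a -> b) -> Env e a -> Env e b
extend f w@(Env e _) = Env e (f w)
```

extend (\(Env e a) -> e + a) (Env 10 5) gives Env 10 15: each "local computation" passed to extend sees the whole context.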
It turns out that all Monads are also "Extendable"
monadDuplicate :: Monad m => m a -> m (m a)
monadDuplicate = return
while all Comonads are "Joinable"
comonadJoin :: Comonad m => m (m a) -> m a
comonadJoin = extract
so these structures are very close together.
All Monads are Arrows (Monad is isomorphic to ArrowApply). In a different way, all Monads are instances of Applicative, where <*> is Control.Monad.ap and *> is >>. Applicative is weaker because it does not guarantee the >>= operation. Thus Applicative captures computations that do not examine previous results and branch on values. In retrospect much monadic code is actually applicative, and with a clean rewrite this would happen.
Extending monads, with recent Constraint kinds in GHC 7.4.1 there can now be nicer designs for restricted monads. And there are also people looking at parameterized monads, and of course I include a link to something by Oleg.
In libraries these structures give rise to different type of computations.
For example, Applicatives can be used to implement static effects. By that I mean effects which are fixed beforehand, for example rejecting or accepting an input state when implementing a state machine. Applicative computations can't manipulate their internal structure based on their input values.
The type says it all:
<*> :: f (a -> b) -> f a -> f b
It is easy to see that the structure of f cannot depend on the input a, because a cannot reach f at the type level.
Monads can be used for dynamic effects. This also can be reasoned from the type signature:
>>= :: m a -> (a -> m b) -> m b
How can you see this? Because a is on the same "level" as m. Mathematically, bind is a two-stage process: it is the composition of two functions, fmap and join. First we use fmap together with the monadic action to create a new structure embedded in the old one:
fmap :: (a -> b) -> m a -> m b
f :: (a -> m b)
m :: m a
fmap f :: m a -> m (m b)
fmap f m :: m (m b)
Fmap can create a new structure, based on the input value. Then we collapse the structure with join, thus we are able to manipulate the structure from within the monadic computation in a way that depends on the input:
join :: m (m a) -> m a
join (fmap f m) :: m b
Many monads are easier to implement with join:
m >>= f = join (fmap f m)
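For instance, the list monad's join is just concat, and bind falls out mechanically (joinList and bindList are our names for the standalone versions):

```haskell
-- The list monad implemented via join: join is concat, and bind is
-- join applied after fmap.
joinList :: [[a]] -> [a]
joinList = concat

bindList :: [a] -> (a -> [b]) -> [b]
bindList m f = joinList (fmap f m)
```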
This is possible with monads:
addCounter :: Int -> m Int ()
But not with applicatives; applicatives (and any monad) can still do things like:
addOne :: m Int ()
Arrows give more control over the input and the output types, but for me they really feel similar to applicatives. Maybe I am wrong about that.
