What does the A stand for in sequenceA?

What does sequenceA from Traversable stand for? Why is there capital A at the end? I've been learning Haskell for a few months now and this is one of those things that's been bugging me for a while.

The "A" stands for Applicative, as in the constraint in sequenceA's type:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
That the "A" is there is fruit of a historical accident. Once upon a time, neither Applicative nor Traversable existed in Haskell. Nonetheless, a function exactly like sequenceA already existed -- except that it had a much more specific type:
sequence :: Monad m => [m a] -> m [a]
When Applicative and Traversable were introduced, the function was generalised from lists to any Traversable [1]:
sequence :: (Traversable t, Monad m) => t (m a) -> m (t a)
sequence's Monad constraint is unnecessarily restrictive. Back then, however, generalising it further to Applicative was not an option. The problem was that until early last year Applicative was not a superclass of Monad as it is supposed to be, and therefore generalising the signature to Applicative would break any use of sequence with monads that lacked an Applicative instance. That being so, an extra "A" was added to the name of the general version.
[1]: Note, however, that the Prelude continued to carry the list-specific version until quite recently.
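To get a concrete feel for the generalised signature, here is a small GHCi-style illustration of my own (not part of the original answer): sequenceA flips a Traversable of Applicative actions "inside out", and thanks to the Traversable generalisation the outer layer need not be a list:
ghci> sequenceA [Just 1, Just 2, Just 3]
Just [1,2,3]
ghci> sequenceA [Just 1, Nothing, Just 3]
Nothing
ghci> sequenceA (Just [1,2,3])
[Just 1,Just 2,Just 3]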

Related

Why is ap available inside Applicative?

I am trying to implement MonadUnliftIO for Snap and have been analyzing Snap's classes.
I discovered that ap is used to implement Applicative, while ap requires Monad and Monad requires Applicative. It looks like a loop.
Until now I thought it was not possible to write such things.
What are the limits of this kind of trick?
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  return :: a -> m a

instance Applicative Snap where
  pure x = ...
  (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
This only works because Snap has a Monad instance (and it's actually in scope at that point).
Effectively, the compiler handles declarations in two separate passes: first it resolves all the instance heads
instance Applicative Snap
instance Monad Snap
...without even looking at the actual method implementations. This works out fine: Monad is happy as long as it sees the Applicative instance.
So then it already knows that Snap is a monad. Then it proceeds to typecheck the (<*>) implementation, notices that it requires the Monad instance, and... yeah, it's there, so that too is fine.
The actual reason we have ap :: Monad m => ... is mostly historical: the Haskell98 Monad class did not have Applicative or even Functor as a superclass, so it was possible to write code with a Monad m => ... constraint that could then not use fmap or <*>. Therefore the liftM and ap functions were introduced as replacements.
Then, when the better current class hierarchy was established, many instances were simply defined by referring back to the already existing Monad instance, which is after all sufficient for everything.
IMO it is usually a good idea to directly implement <*> and definitely fmap before writing the Monad instance, rather than the other way around.
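As a rough sketch of that recommended direction, assuming a hypothetical Maybe-like type Opt standing in for your real type: fmap and (<*>) are written directly, and the Monad instance is layered on top afterwards:
data Opt a = None | Some a

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some a) = Some (f a)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x      -- (<*>) defined directly, not via ap

instance Monad Opt where
  None   >>= _ = None
  Some a >>= f = f a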
I think you are imagining a cycle like this:
(<*>) is implemented with ap
(>>=) is implemented with (<*>)
ap is implemented using (>>=)
And yes, if you try this, it will indeed give you an infinite loop!
However, this is not what your code block does. Its implementations look more like this:
(>>=) is implemented from first principles, without using any Applicative functions
ap is implemented using (>>=)
(<*>) is implemented in terms of ap
Which is obviously fine: there are no cycles of any sort in this set of function definitions.
One thing which might still be a bit confusing is: how can you implement an Applicative function in terms of a Monad function, when a type can only be a Monad if it is already Applicative? To answer this, let’s add explicit type signatures to your code sample (note that this requires the InstanceSigs language extension to compile):
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  return :: a -> m a

instance Applicative Snap where
  pure :: a -> Snap a
  pure x = ...

  (<*>) :: Snap (a -> b) -> Snap a -> Snap b
  (<*>) = ap

ap :: Monad m => m (a -> b) -> m a -> m b
The answer is now clear: we are not in fact defining (<*>) for just any arbitrary Applicative type! Rather, we are defining it for Snap only, which means we can use any function defined to work on Snaps — including those from the Monad typeclass. The fact that this function happens to be within an instance Applicative Snap block doesn’t matter: in all other respects, it’s just an ordinary function definition, and there’s no reason why the full range of Snap functions shouldn’t be able to appear in it.
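For contrast, here is a minimal sketch of the pattern the question is about, using a trivial made-up wrapper Box rather than Snap itself: (>>=) is written from first principles, and (<*>) simply delegates to it via ap:
import Control.Monad (ap)

newtype Box a = Box a

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure = Box
  (<*>) = ap           -- legal: the Monad instance below is in scope here

instance Monad Box where
  Box a >>= f = f a    -- defined without (<*>) or ap, so there is no cycle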
There should be some instance Monad Snap somewhere else. The ap use in the Applicative instance will make use of >>= from that instance.
In general, an instance for Applicative cannot make use of ap in this way, but when the applicative is also a monad, I think it is quite common to do so, since it's convenient.
Note that, if one chooses this route, one should avoid using <*> or ap inside the definition of >>=, since that could lead to infinite recursion.
The fact that the two instances are mutually recursive, in some sense, is not an issue. Haskell allows mutual recursion, and this also reflects on instances. The programmer however must ensure that the recursion actually terminates, or be prepared to have a non-terminating program.

Why does sequenceA need Traversable?

From the Typeclassopedia:
sequence :: Monad m => [m a] -> m [a]
Takes a list of computations and combines them into one computation which collects a list of their results. It is again something of a historical accident that sequence has a Monad constraint, since it can actually be implemented only in terms of Applicative.
And indeed, there is sequenceA which operates on Applicative types.
sequenceA :: (Applicative f, Traversable t) => t (f a) -> f (t a)
But wait, why does sequenceA need the Traversable constraint when it can be implemented without it?
seqA :: Applicative f => [f a] -> f [a]
seqA = foldr fx (pure [])
  where
    fx f fs = pure (:) <*> f <*> fs
This is a subject that can be initially confusing because there is a lot of history and change around it, and older explanations are out of date. The history is roughly this:
The idea of the sequence operation dates back to the 1990s, when the Monad class was incorporated into Haskell and people first formulated generic operations for it. The signature that you quote from the Typeclassopedia reflects this: sequence :: Monad m => [m a] -> m [a]. It can work with any monad, but it is hardcoded to work on lists.
The Applicative class was developed during the mid-to-late 2000s; the seminal paper that everybody cites is McBride and Paterson (2008), "Applicative Programming with Effects". McBride and Paterson also note that:
The old monadic sequence operation can in fact be generalized to dist :: Applicative f => [f a] -> f [a]. The Monad constraint is too narrow!
Likewise, the old mapM function (a close relative of sequence) generalizes to traverse :: Applicative f => (a -> f b) -> [a] -> f [b].
This can be generalized to non-list data structures by putting these operations into a Traversable class that they propose in the paper (a sketch of such an instance appears at the end of this answer).
The Applicative and Traversable classes were added into the GHC base libraries, with some small changes. The dist function was named sequenceA instead, and the Foldable class joined Traversable for types that support only a subset of Traversable's requirements.
These classes proved extremely popular. But that shows that the original sequence :: Monad m => [m a] -> m [a] signature was wrong in hindsight.
So finally, fast-forward to 2015, when GHC implemented the Applicative/Monad Proposal (make Applicative a superclass of Monad) and the Foldable/Traversable Proposal, where the base libraries were revised in order to get them close to the ideal, hindsight-informed design.
The Foldable/Traversable Proposal, however, did not change the signature of sequence. I can't tell you precisely why, but the answer is going to be some obtuse detail that boils down to historical reasons, backward compatibility or something like that. But I can tell you that if we could start all over again:
sequence would have sequenceA's signature;
sequenceA would not exist independently of sequence;
mapM is just traverse with the wrong signature, and thus would not exist independently either.
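To make the generalisation to non-list data structures concrete, here is a sketch of a hand-rolled Traversable instance for a made-up binary tree type (none of this code is from the quoted sources):
data Tree a = Leaf | Node (Tree a) a (Tree a)

instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

instance Foldable Tree where
  foldMap _ Leaf         = mempty
  foldMap f (Node l x r) = foldMap f l <> f x <> foldMap f r

instance Traversable Tree where
  traverse _ Leaf         = pure Leaf
  traverse f (Node l x r) = Node <$> traverse f l <*> f x <*> traverse f r

-- sequenceA (= traverse id) now works on trees of effects just as on lists:
--   sequenceA (Node Leaf (Just 1) Leaf)  ==  Just (Node Leaf 1 Leaf)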

Why isn't Kleisli an instance of Monoid?

If you wish to combine two functions of type (a -> m b) into a single function of the same type that appends both results, you could use Kleisli to do so:
instance (Monad m, Monoid b) => Monoid (Kleisli m a b) where
  mempty = Kleisli (\_ -> return mempty)
  mappend k1 k2 =
    Kleisli g
    where
      g x = do
        r1 <- runKleisli k1 x
        r2 <- runKleisli k2 x
        return (r1 <> r2)
However, currently there is no such instance defined in Control.Arrow.
As is often the case in Haskell, I suspect there is a good reason, but I cannot find it.
Note
This question is rather similar to this one. However, with Monoid I don't see a way to define an instance such as:
instance (Monad m, Monoid b) => Monoid (a -> m b) where
[...]
since there is already an instance:
instance Monoid b => Monoid (a -> b) where
[...]
In the business of library design, we face a choice point here, and we have chosen to be less than entirely consistent in our collective policy (or lack of it).
Monoid instances for Monad (or Applicative) type constructors can come about in a variety of ways. Pointwise lifting is always available, but we don't define
instance (Applicative f, Monoid x) => Monoid (f x) {- not really -} where
  mempty = pure mempty
  mappend fa fb = mappend <$> fa <*> fb
Note that the instance Monoid (a -> b) is just such a pointwise lifting, so the pointwise lifting for (a -> m b) does happen whenever the monoid instance for m b does pointwise lifting for the monoid on b.
We don't do pointwise lifting in general, not only because it would prevent other Monoid instances whose carriers happen to be applied types, but also because the structure of the f is often considered more significant than that of the x. A key case in point is the free monoid, better known as [x], which is a Monoid by [] and (++), rather than by pointwise lifting. The monoidal structure comes from the list wrapping, not from the elements wrapped.
My preferred rule of thumb is indeed to prioritise monoidal structure inherent in the type constructor over either pointwise lifting, or monoidal structure of specific instantiations of a type, like the composition monoid for a -> a. These can and do get newtype wrappings.
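For illustration, such a newtype wrapping for the pointwise lifting might look like the sketch below (Lifted is a name I made up for this example; base nowadays ships a similar wrapper called Ap in Data.Monoid):
newtype Lifted f x = Lifted (f x)

instance (Applicative f, Semigroup x) => Semigroup (Lifted f x) where
  Lifted fa <> Lifted fb = Lifted ((<>) <$> fa <*> fb)

instance (Applicative f, Monoid x) => Monoid (Lifted f x) where
  mempty = Lifted (pure mempty)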
Arguments break out over whether Monoid (m x) should coincide with MonadPlus m whenever both exist (and similarly with Alternative). My sense is that the only good MonadPlus instance is a copy of a Monoid instance, but others differ. Still, the library is not consistent in this matter, especially not in the matter of (many readers will have seen this old bugbear of mine coming)...
...the monoid instance for Maybe, which ignores the fact that we routinely use Maybe to model possible failure, and instead observes that the same idea of chucking an extra element into a data type can be used to give a semigroup a neutral element if it didn't already have one. The two constructions give rise to isomorphic types, but they are not conceptually cognate. (Edit: To make matters worse, the idea is implemented awkwardly, giving the instance a Monoid constraint when only a Semigroup is needed. I'd like to see the Semigroup-extends-to-Monoid idea implemented, but not for Maybe.)
Getting back to Kleisli in particular, we have three obvious candidate instances:
1. Monoid (Kleisli m a a) with return and Kleisli composition
2. MonadPlus m => Monoid (Kleisli m a b) lifting mzero and mplus pointwise over ->
3. Monoid b => Monoid (Kleisli m a b) lifting the monoid structure of b over m then ->
I expect no choice has been made, just because it's not clear which choice to make. I hesitate to say so, but my vote would be for 2, prioritising the structure coming from Kleisli m a over the structure coming from b.
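For what it's worth, a sketch of candidate 2, wrapped in a newtype of my own (PlusKleisli is a made-up name) so it doesn't clash with anything the libraries might define:
import Control.Arrow (Kleisli (..))
import Control.Monad (MonadPlus (..))

newtype PlusKleisli m a b = PlusKleisli (Kleisli m a b)

instance MonadPlus m => Semigroup (PlusKleisli m a b) where
  PlusKleisli (Kleisli f) <> PlusKleisli (Kleisli g) =
    PlusKleisli (Kleisli (\x -> f x `mplus` g x))

instance MonadPlus m => Monoid (PlusKleisli m a b) where
  mempty = PlusKleisli (Kleisli (const mzero))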

Can the Traversable laws be derived from the fact that every Traversable is also a Functor?

I've been wondering why the Traversable type class requires both Functor and Foldable, and not just Foldable, since it doesn't seem to use any part of Functor?
class (Functor t, Foldable t) => Traversable t where
  traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
  sequenceA :: Applicative f => t (f a) -> f (t a)
It seems that the laws for Traversable were not present in the documentation for base 4.6, which makes me think that they might be derivable from the fact that every Traversable is a Functor.
In the Essence of the Iterator Pattern paper (section 5.1) it states that there are some free theorems for traverse which come directly from its type, but the paper doesn't go into depth describing why this is the case.
Where do the Traversable laws as described in the base 4.7 documentation come from?
Basically, any type constructor * -> * that's covariant in its argument is canonically a functor. Since Applicative f is obviously covariant, t must be covariant as well for the signature sequenceA :: t (f a) -> f (t a) to make sense, hence the Functor requirement is essentially redundant. But much like with the long-missing-because-unneeded Applicative => Monad superclass, it's not really a good idea to omit such "obvious" requirements: it just leads to code duplication and confusing synonymous functions.
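To back up the claim that the Functor requirement is essentially redundant: fmap really can be recovered from traverse alone (Data.Traversable exports fmapDefault and foldMapDefault for exactly this). A minimal version of the trick, using the effect-free Identity applicative:
import Data.Functor.Identity (Identity (..))

fmapFromTraverse :: Traversable t => (a -> b) -> t a -> t b
fmapFromTraverse f = runIdentity . traverse (Identity . f)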

Relationship between Functor, Applicative Functor, and Monad

When reading up on type classes I have seen that the relationship between Functors, Applicative Functors, and Monads is that of strictly increasing power. Functors are types that can be mapped over. Applicative Functors can do the same things with certain effects. Monads can do the same with possibly unrestricted effects. Moreover:
Every Monad is an Applicative Functor
Every Applicative Functor is a Functor
The definition of the Applicative Functor shows this clearly with:
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
But the definition of Monad is:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  (>>) :: m a -> m b -> m b
  m >> n = m >>= \_ -> n
  fail :: String -> m a
According to Brent Yorgey's great Typeclassopedia, an alternative definition of Monad could be:
class Applicative m => Monad' m where
  (>>=) :: m a -> (a -> m b) -> m b
which is obviously simpler and would cement that Functor < Applicative Functor < Monad. So why isn't this the definition? I know applicative functors are new, but according to the 2010 Haskell Report page 80, this hasn't changed. Why is this?
Everyone wants to see Applicative become a superclass of Monad, but it would break so much code (if return is eliminated, every current Monad instance becomes invalid) that everyone wants to hold off until we can extend the language in such a way that avoids breaking the code (see here for one prominent proposal).
Haskell 2010 was a conservative, incremental improvement in general, standardising only a few uncontroversial extensions and breaking compatibility in one area to bring the standard in line with every existing implementation. Indeed, Haskell 2010's libraries don't even include Applicative; far less of what people have come to expect from the standard library is actually standardised than you might think.
Hopefully we'll see the situation improve soon, but thankfully it's usually only a mild inconvenience (having to write liftM instead of fmap in generic code, etc.).
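The reason liftM can stand in for fmap (and ap for (<*>)) is that both are expressible with only return and (>>=); a minimal sketch of their definitions, mirroring the ones in Control.Monad:
liftM' :: Monad m => (a -> b) -> m a -> m b
liftM' f ma = ma >>= \a -> return (f a)

ap' :: Monad m => m (a -> b) -> m a -> m b
ap' mf ma = mf >>= \f -> ma >>= \a -> return (f a)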
Changing the definition of Monad at this point would break too much existing code (any piece of code that defines a Monad instance) to be worthwhile.
Breaking backwards-compatibility like that is only worthwhile if there is a large practical benefit to the change. In this case the benefit is not that big (and mostly theoretical anyway) and wouldn't justify that amount of breakage.

Resources