Is it better to define Functor in terms of Applicative in terms of Monad, or vice versa? - haskell

This is a general question, not tied to any one piece of code.
Say you have a type T a that can be given an instance of Monad. Since every monad is an Applicative by assigning pure = return and (<*>) = ap, and then every applicative is a Functor via fmap f x = pure f <*> x, is it better to define your instance of Monad first, and then trivially give T instances of Applicative and Functor?
It feels a bit backward to me. If I were doing math instead of programming, I would think that I would first show that my object is a functor, and then continue adding restrictions until I have also shown it to be a monad. I know Haskell is merely inspired by Category Theory and obviously the techniques one would use when constructing a proof aren't the techniques one would use when writing a useful program, but I'd like to get an opinion from the Haskell community. Is it better to go from Monad down to Functor? or from Functor up to Monad?

I tend to write, and to see written, the Functor instance first. Doubly so because if you use the {-# LANGUAGE DeriveFunctor #-} pragma then data Foo a = Foo a deriving (Functor) works most of the time.
The tricky bits are around agreement of instances when your Applicative can be more general than your Monad. For instance, here's an Err data type
data Err e a = Err [e] | Ok a deriving ( Functor )
instance Applicative (Err e) where
  pure = Ok
  Err es <*> Err es' = Err (es ++ es')
  Err es <*> _       = Err es
  _      <*> Err es  = Err es
  Ok f   <*> Ok x    = Ok (f x)

instance Monad (Err e) where
  return = pure
  Err es >>= _ = Err es
  Ok a   >>= f = f a
Above I defined the instances in Functor-to-Monad order and, taken in isolation, each instance is correct. Unfortunately, the Applicative and Monad instances do not align: ap and (<*>) are observably different as are (>>) and (*>).
Err "hi" <*> Err "bye" == Err "hibye"
Err "hi" `ap` Err "bye" == Err "hi"
For sensibility purposes, especially once the Applicative/Monad Proposal is in everyone's hands, these should align. If you defined instance Applicative (Err e) where { pure = return; (<*>) = ap } then they will align.
But then, finally, you may be able to carefully tease apart the differences between Applicative and Monad so that they behave differently in benign ways---such as having a lazier or more efficient Applicative instance. This actually occurs fairly frequently, and I feel the jury is still a little bit out on what "benign" means and under what kinds of "observation" your instances should align. Perhaps the most egregious use of this is in the Haxl project at Facebook, where the Applicative instance is more parallelized than the Monad instance and thus is far more efficient, at the cost of some fairly severe "unobserved" side effects.
In any case, if they differ, document it.

I often take the reverse approach compared to the one in Abrahamson's answer. I manually define only the Monad instance and define the Applicative and Functor in terms of it, with the help of functions already defined in Control.Monad, which renders those instances the same for absolutely any monad, i.e.:
instance Applicative SomeMonad where
  pure = return
  (<*>) = ap

instance Functor SomeMonad where
  fmap = liftM
While this way the definitions of Functor and Applicative are always "brain-free" and very easy to reason about, I must note that this is not the ultimate solution, since there are cases when the instances can be implemented more efficiently or even provide new features. E.g., the Applicative instance of Concurrently executes things ... concurrently, while the Monad instance can only execute them sequentially due to the nature of monads.
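As a concrete, self-contained illustration of this pattern (the Counter type and its field name are hypothetical, made up for this sketch; note that on GHC 7.10+ pure needs a real definition, because return now defaults to pure):

import Control.Monad (ap, liftM)

-- A hypothetical state-counter monad, used only to illustrate the pattern.
newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Functor Counter where
  fmap = liftM                      -- "brain-free": reuse the Monad

instance Applicative Counter where
  pure x = Counter (\n -> (x, n))   -- pre-AMP one could literally write pure = return
  (<*>)  = ap                       -- reuse the Monad again

instance Monad Counter where
  Counter m >>= f = Counter $ \n ->
    let (a, n') = m n in runCounter (f a) n'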

Functor instances are typically very simple to define; I'd normally do those by hand.
For Applicative and Monad, it depends. pure and return are usually similarly easy, and it really doesn't matter in which class you put the expanded definition. For bind, it is sometimes beneficial to go the "category way", i.e. define a specialised join' :: M (M x) -> M x first and then a >>= b = join' $ fmap b a (which of course wouldn't work if you had defined fmap in terms of >>=). Then it's probably useful to just reuse (>>=) for the Applicative instance.
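For instance, here is a sketch of that "category way" for a rose-tree type (essentially Data.Tree from containers; the type and the name join' are illustrative, not from the answer above): join' grafts the inner trees onto the outer one, (>>=) falls out of it, and fmap is written by hand precisely so that it does not depend on (>>=).

-- Rose trees: each node carries a value and a list of subtrees.
data Rose a = Rose a [Rose a]

instance Functor Rose where
  fmap f (Rose a ts) = Rose (f a) (map (fmap f) ts)

-- The specialised join: graft a tree of trees into a single tree.
join' :: Rose (Rose a) -> Rose a
join' (Rose (Rose a ts) tts) = Rose a (ts ++ map join' tts)

instance Applicative Rose where
  pure a = Rose a []
  tf <*> tx = tf >>= \f -> fmap f tx   -- reuse (>>=), as suggested above

instance Monad Rose where
  a >>= b = join' (fmap b a)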
Other times, the Applicative instance can be written quite easily or is more efficient than the generic Monad-derived implementation. In that case, you should definitely define <*> separately.

The magic here is that Haskell uses the Kleisli-triple presentation of a monad, which is the more convenient formulation if you want to use monads as imperative-programming-like tools. I asked the same question, and the answer came after a while: if you look at the definitions of Functor, Applicative, and Monad in Haskell, you are missing one link, namely the original definition of a monad, which contains only the join operation; it can be found on the HaskellWiki. With that point of view you will see how Haskell monads are built up from functors, applicative functors, monads, and Kleisli triples.
A rough explanation can be found here: https://github.com/andorp/pearls/blob/master/Monad.hs
And other with the same ideas here: http://people.inf.elte.hu/pgj/haskell2/jegyzet/08/Monad.hs

I think you misunderstand how sub-classes work in Haskell. They aren't like OO sub-classes! Instead, a sub-class constraint, like
class Applicative m => Monad m
says "any type with a canonical Monad structure must also have a canonical Applicative structure". There are two basic reasons why you would place a constraint like that:
The sub-class structure induces a super-class structure.
The super-class structure is a natural subset of the sub-class structure.
For example, consider:
class Vector v where
  (.^) :: Double -> v -> v
  (+^) :: v -> v -> v
  negateV :: v -> v

class Metric a where
  distance :: a -> a -> Double

class (Vector v, Metric v) => Norm v where
  norm :: v -> Double
The first super-class constraint on Norm arises because the concept of a normed space is really weak unless you also assume a vector space structure; the second arises because (given a vector space) a Norm induces a Metric, which you can prove by observing that
instance Metric V where
  distance v0 v1 = norm (v0 +^ negateV v1)
is a valid Metric instance for any V with a valid Vector instance and a valid norm function. We say that the norm induces a metric. See http://en.wikipedia.org/wiki/Normed_vector_space#Topological_structure .
The Functor and Applicative super-classes on Monad are like Metric, not like Vector: the return and >>= functions from Monad induce Functor and Applicative structures:
fmap: can be defined as fmap f a = a >>= return . f, which was liftM in the Haskell 98 standard library.
pure: is the same operation as return; the two names are a legacy from when Applicative wasn't a super-class of Monad.
<*>: can be defined as af <*> ax = af >>= \ f -> ax >>= \ x -> return (f x), which was liftM2 ($) in the Haskell 98 standard library.
join: can be defined as join aa = aa >>= id.
So it's perfectly sensible, mathematically, to define the Functor and Applicative operations in terms of Monad.
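Spelled out as plain functions (the names fmapFromMonad, apFromMonad, joinFromMonad are made up for this sketch), the induced operations look like this:

-- Functor structure induced by the Monad (this is liftM):
fmapFromMonad :: Monad m => (a -> b) -> m a -> m b
fmapFromMonad f a = a >>= return . f

-- Applicative structure induced by the Monad (this is liftM2 ($), i.e. ap):
apFromMonad :: Monad m => m (a -> b) -> m a -> m b
apFromMonad af ax = af >>= \f -> ax >>= \x -> return (f x)

-- join, recovered from (>>=):
joinFromMonad :: Monad m => m (m a) -> m a
joinFromMonad aa = aa >>= id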

Related

Regardless of composition, can I say monadic if there is only a way to change m (m a) to m a

Monad has the following components:
1. a and m a, where m a is obtained from a by a Functor
2. an arrow a -> m b and a way to change m m to m
3. composition of arrows
If there is a structure that has 1 and 2, regardless of composition, can I say it's Monadic?
Sure, if a structure has 1 and 2 it can be composed, but does it have to be concerned with composition or not?
Edited:
I thought that associativity and identity are about composition. If a structure has all of them exactly, it's just a Monad. My question is about "Monadic": the most important component of a Monad is mu (join), so if a structure has only mu, could it roughly be called a Monadic thing?
or
Monadic is just an adjective for Monad: a "Monadic" thing is a component of a Monad.
Monadic structure of a Monad
Monadic type m a of a Monad
Monadic arrow a -> m b of a Monad
Monadic composition >=> of a Monad
...
This looks like two questions: a question about terminology, and a question about whether the monad laws relate to "composition" only.
For the terminology question...
As pointed out by @cstml in a comment, there's a formal definition of a "monadic functor" in category theory, but I don't think Haskell programmers typically use "monadic" in this formal sense, and it doesn't look like you're asking about that.
When I use the term "monadic", I typically use it in the sense of "things pertaining to a monad". So a monadic action is an expression of the form m a for a Monad m and when I engage in monadic programming, I'm using monads.
I don't typically use it in the sense of "monad-like", but I could see maybe saying something like "arrows have some monadic behavior, but they aren't necessarily monads". On the other hand, if you say "type X is monadic" without qualification, that sounds like just an unusual way of saying "type X is a monad".
For the monad laws question...
Even though the monad laws for Haskell are usually expressed in terms of return and >>=, an alternative set of laws that only involve return, fmap, and join can be formulated. That is, the join operator all by itself satisfies identity and associativity laws that are more or less equivalent to the laws satisfied by >>=.
Specifically, any monad obeys the obvious laws:
fmap f . return = return . f
join . fmap return = id
join . return = id
If you have an object that supports operations:
return :: a -> m a -- convert from `a` by functor
join :: m (m a) -> m a -- change `m m` to `m`
but doesn't satisfy any of these laws, then it doesn't seem very "monadic".
If it does satisfy the above laws, then the composition defined by:
x >>= f = join (fmap f x)
will automatically satisfy the usual Haskell left and right identity laws:
return a >>= k = k a
m >>= return = m
so these laws aren't "just about composition".
The only usual Haskell law not implied by the above is associativity. Like the others, it can be expressed in terms of join and fmap instead of >>= and it takes the form:
join . fmap h . join . fmap k = join . fmap (join . fmap h . k)
Basically, the question of associativity can either be expressed as the associativity of the composition of monad arrows or as the associativity of the join operation acting on the type m (m (m a)) -- in other words, does it matter if you join the outer layer first or the inner layer first?
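A quick, runnable sanity check of that join/fmap presentation (the helper name bindViaJoin is made up for this sketch):

import Control.Monad (join)

-- (>>=) recovered from join and fmap, as described above.
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin x f = join (fmap f x)

main :: IO ()
main = do
  print (bindViaJoin (Just 3) (\x -> Just (x + 1)))          -- Just 4
  print (bindViaJoin Nothing  (\x -> Just (x + 1 :: Int)))   -- Nothing
  print (bindViaJoin [1, 2]   (\x -> [x, x * 10]))           -- [1,10,2,20]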

How exactly does the `(<*>) = ap` Applicative/Monad law relate the two classes?

ap doesn't have a documented spec; its definition just carries a comment pointing out that it could be <*>, but isn't for practical reasons:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }
-- Since many Applicative instances define (<*>) = ap, we
-- cannot define ap = (<*>)
So I assume the ap in the (<*>) = ap law is shorthand for "right-hand side of ap" and the law actually expresses a relationship between >>= , return and <*> right? Otherwise the law is meaningless.
The context is me thinking about Validation and how unsatisfying it is that it can't seem to have a lawful Monad instance. I'm also thinking about ApplicativeDo and how that transformation sort of lets us recover from the practical effects of a Monad instance for Validation; what I most often want to do is accumulate errors as far as possible, but still be able to use bind when necessary. We actually export a bindV function which we need to use just about everywhere; it's all kind of absurd. The only practical consequence of the lawlessness I can think of is that we accumulate different or fewer errors depending on what sort of composition we use (or how our program might theoretically be transformed by rewrite rules, though I'm not sure why applicative composition would ever get converted to monadic).
EDIT: The documentation for the same laws in Monad is more extensive:
Furthermore, the Monad and Applicative operations should relate as follows:
pure = return
(<*>) = ap
The above laws imply:
fmap f xs = xs >>= return . f
(>>) = (*>)
"The above laws imply"... so is the idea here that these are the real laws we care about?
But now I'm left trying to understand these in the context of Validation. The first law would hold. The second could obviously be made to hold if we just define (>>) = (*>).
But the documentation for Monad surprisingly says nothing at all (unless I'm just missing it) about how >> should relate. Presumably we want that
a >> b = a >>= \_ -> b
...and (>>) is included in the class so that it can be overridden for efficiency, and this just never quite made it into the docs.
So if that's the case, then I guess the way Monad and Applicative relate is actually something like:
return = pure
xs >>= return . f = fmap f xs
a >>= \_ -> b = fmap (const id) a <*> b
Every Monad gives rise to an Applicative, and for that induced Applicative, <*> = ap will hold definitionally. But given two structures - Monad m and Applicative m - there is no guarantee that these structures agree without the two laws <*> = ap and pure = return. For example, take the 'regular' Monad instance for lists, and the zip-list Applicative instance. While there is nothing fundamentally 'wrong' about a Monad and Applicative instance disagreeing, it would probably be confusing to most users, and so it's prohibited by the Monad laws.
tl;dr The laws in question serve to ensure that Monad and Applicative agree in an intuitively obvious way.
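A small demonstration of that agreement for lists, and of how the zip-list Applicative differs from what the regular list Monad would induce:

import Control.Monad (ap)
import Control.Applicative (ZipList(..))

main :: IO ()
main = do
  -- the regular list Applicative and Monad agree: (<*>) = ap
  print ([(+ 1), (* 2)] <*> [10, 20])               -- [11,21,20,40]
  print ([(+ 1), (* 2)] `ap` [10, 20])              -- [11,21,20,40]
  -- the zip-list Applicative is a different (still lawful) structure
  print (getZipList (ZipList [(+ 1), (* 2)] <*> ZipList [10, 20]))  -- [11,40]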
So I assume the ap in the (<*>) = ap law is shorthand for "right-hand side of ap" and the law actually expresses a relationship between >>=, return and <*> right?
It seems to me (<*>) = ap doesn't strictly imply anything (at least post-AMP). Presumably it's trying to express some relationship between <*> and the right-hand side of ap. Maybe I'm being pedantic.
Speaking pedantically, I'd say the opposite: because ap is definitionally equal to its right-hand side, saying (<*>) = ap is exactly the same as saying m1 <*> m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }. It's just the normal first step of dealing with equalities like that: expanding the definitions.
Reply to the comment:
Right, but the definition is free to change.
Then the law would change or be removed too. Just as when/if join is added to Monad the current definition will become a law instead.
it wouldn't have been possible to define it literally as ap = <*>
Do you mean it would be impossible to define ap or the law in this way?
If ap, then you are correct: it would have the wrong type. But stating the law like this would be fine.

Are Functor instances unique?

I was wondering to what extent Functor instances in Haskell are determined (uniquely) by the functor laws.
Since ghc can derive Functor instances for at least "run-of-the-mill" data types, it seems that they must be unique at least in a wide variety of cases.
For convenience, the Functor definition and functor laws are:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

fmap id = id
fmap (g . h) = (fmap g) . (fmap h)
Questions:
Can one derive the definition of map starting from the assumption that it is a Functor instance for data List a = Nil | Cons a (List a)? If so, what assumptions have to be made in order to do this?
Are there any Haskell data types which have more than one Functor instance satisfying the functor laws?
When can ghc derive a functor instance and when can't it?
Does all of this depend how we define equality? The Functor laws are expressed in terms of an equality of values, yet we don't require Functors to have Eq instances. So is there some choice here?
Regarding equality, there is certainly a notion of what I call "constructor equality" which allows us to reason that [a,a,a] is "equal" to [a,a,a] for any value a of any type, even if a does not have (==) defined for it. All other (useful) notions of equality are probably coarser than this equivalence relationship. But I suspect that the equality in the Functor laws is more of a "reasoning equality" relationship and can be application-specific. Any thoughts on this?
See Brent Yorgey's Typeclassopedia:
Unlike some other type classes we will encounter, a given type has at most one valid instance of Functor. This can be proven via the free theorem for the type of fmap. In fact, GHC can automatically derive Functor instances for many data types.
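For the List type from the question, that unique instance (the same one DeriveFunctor would generate) can be written out by hand:

data List a = Nil | Cons a (List a)

instance Functor List where
  fmap _ Nil         = Nil
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)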
"When can GHC derive a functor instance and when can't it?"
When we have intentional circular data structures. The type system does not allow us to express our intent of a forced circularity.
So, ghc can derive an instance, similar to that which we want, but not the same.
Circular data structures
are probably the only case where the Functor should be implemented a different way.
But then again, it would have the same semantic.
import Data.Function (fix)

data HalfEdge a = HalfEdge { label :: a, companion :: HalfEdge a }

instance Functor HalfEdge where
  fmap f (HalfEdge a (HalfEdge b _)) = fix $ HalfEdge (f a) . HalfEdge (f b)
EDIT:
HalfEdges are structures (known in computer graphics, 3d meshes...) that represent undirected Edges in a graph, where you can have a reference to either end. Usually they store more references to neighbour HalfEdges, Nodes and Faces.
newEdge :: a -> a -> HalfEdge a
newEdge a b = fix $ HalfEdge a . HalfEdge b
Semantically, there is no fix $ HalfEdge 0 . HalfEdge 1 . HalfEdge 2, because edges are always composed out of exactly two half edges.
EDIT 2:
In the Haskell community, the phrase "Tying the Knot" is used for this kind of data structure. It refers to data structures that are semantically infinite because they cycle, yet consume only finite memory. Example: given ones = 1:ones, we have these semantically equivalent implementations of twos:
twos = fmap (+1) ones
twos = fix ((+1)(head ones) :)
If we traverse (the first n elements of) twos while still holding a reference to the beginning of that list, these implementations differ in speed (evaluating 1+1 each time vs. only once) and memory consumption (O(n) vs. O(1)).

Monads as adjunctions

I've been reading about monads in category theory. One definition of monads uses a pair of adjoint functors. A monad is defined by a round-trip using those functors. Apparently adjunctions are very important in category theory, but I haven't seen any explanation of Haskell monads in terms of adjoint functors. Has anyone given it a thought?
Edit: Just for fun, I'm going to do this right. Original answer preserved below
The current adjunction code for category-extras now is in the adjunctions package: http://hackage.haskell.org/package/adjunctions
I'm just going to work through the state monad explicitly and simply. This code uses Data.Functor.Compose from the transformers package, but is otherwise self-contained.
An adjunction between f (D -> C) and g (C -> D), written f -| g, can be characterized in a number of ways. We'll use the counit/unit (epsilon/eta) description, which gives two natural transformations (morphisms between functors).
class (Functor f, Functor g) => Adjoint f g where
  counit :: f (g a) -> a
  unit :: a -> g (f a)
Note that the "a" in counit is really the identity functor in C, and the "a" in unit is really the identity functor in D.
We can also recover the hom-set adjunction definition from the counit/unit definition.
phiLeft :: Adjoint f g => (f a -> b) -> (a -> g b)
phiLeft f = fmap f . unit
phiRight :: Adjoint f g => (a -> g b) -> (f a -> b)
phiRight f = counit . fmap f
In any case, we can now define a Monad from our unit/counit adjunction like so:
instance Adjoint f g => Monad (Compose g f) where
  return x = Compose $ unit x
  x >>= f = Compose . fmap counit . getCompose $ fmap (getCompose . f) x
Now we can implement the classic adjunction between (a,) and (a ->):
instance Adjoint ((,) a) ((->) a) where
  -- counit :: (a, a -> b) -> b
  counit (x, f) = f x
  -- unit :: b -> (a -> (a, b))
  unit x = \y -> (y, x)
And now a type synonym
type State s = Compose ((->) s) ((,) s)
And if we load this up in ghci, we can confirm that State is precisely our classic state monad. Note that we can take the opposite composition and get the Costate Comonad (aka the store comonad).
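To make that concrete, here is a small self-contained sketch (the names runState', get', put' and bindState are mine, and the State synonym is repeated so the snippet stands alone) showing what the adjunction-derived definitions unfold to:

import Data.Functor.Compose (Compose(..))

type State s = Compose ((->) s) ((,) s)   -- i.e. a function s -> (s, a)

runState' :: State s a -> s -> (s, a)
runState' = getCompose

get' :: State s s
get' = Compose (\s -> (s, s))

put' :: s -> State s ()
put' s = Compose (\_ -> (s, ()))

-- Unfolding the generic (>>=) from the Adjoint-based Monad instance above
-- for this particular f and g gives exactly the familiar state-monad bind:
bindState :: State s a -> (a -> State s b) -> State s b
bindState m k = Compose $ \s ->
  let (s', a) = getCompose m s
  in getCompose (k a) s'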
There are a bunch of other adjunctions we can make into monads in this fashion (such as (Bool,) Pair), but they're sort of strange monads. Unfortunately we can't do the adjunctions that induce Reader and Writer directly in Haskell in a pleasant way. We can do Cont, but as copumpkin describes, that requires an adjunction from an opposite category, so it actually uses a different "form" of the "Adjoint" typeclass that reverses some arrows. That form is also implemented in a different module in the adjunctions package.
This material is covered in a different way by Derek Elkins' article in The Monad Reader 13, "Calculating Monads with Category Theory": http://www.haskell.org/wikiupload/8/85/TMR-Issue13.pdf
Also, Hinze's recent Kan Extensions for Program Optimization paper walks through the construction of the list monad from the adjunction between Mon and Set: http://www.cs.ox.ac.uk/ralf.hinze/Kan.pdf
Old answer:
Two references.
1) Category-extras delivers, as always, with a representation of adjunctions and how monads arise from them. As usual, it's good to think with, but pretty light on documentation: http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Functor-Adjunction.html
2) Haskell-Cafe also delivers with a promising but brief discussion on the role of adjunctions. Some of it may help in interpreting category-extras: http://www.haskell.org/pipermail/haskell-cafe/2007-December/036328.html
Derek Elkins was showing me recently over dinner how the Cont Monad arises from composing the (_ -> k) contravariant functor with itself, since it happens to be self-adjoint. That's how you get (a -> k) -> k out of it. Its counit, however, leads to double negation elimination, which can't be written in Haskell.
For some Agda code that illustrates and proves this, please see http://hpaste.org/68257.
This is an old thread, but I found the question interesting,
so I did some calculations myself. Hopefully Bartosz is still there
and might read this..
In fact, the Eilenberg-Moore construction does give a very clear picture in this case.
(I will use CWM notation with Haskell-like syntax.)
Let T be the list monad < T,eta,mu > (eta = return and mu = concat)
and consider a T-algebra h : T a -> a.
(Note that T a = [a] is a free monoid <[a],[],(++)>, that is, identity [] and multiplication (++).)
By definition, h must satisfy h . T h == h . mu a and h . eta a == id.
Now, some easy diagram chasing proves that h actually induces a monoid structure on a (defined by x * y = h [x,y]),
and that h becomes a monoid homomorphism for this structure.
Conversely, any monoid structure < a,a0,* > defined in Haskell is naturally defined as a T-algebra.
In this way (h = foldr (*) a0, a function that 'replaces' (:) with (*) and maps [] to a0, the identity).
So, in this case, the category of T-algebras is just the category of monoid structures definable in Haskell, HaskMon.
(Please check that the morphisms in T-algebras are actually monoid homomorphisms.)
It also characterizes lists as universal objects in HaskMon, just like free products in Grp, polynomial rings in CRng, etc.
The adjunction corresponding to the above construction is < F,G,eta,epsilon >
where
F : Hask -> HaskMon, which takes a type a to the 'free monoid generated by a', that is, [a],
G : HaskMon -> Hask, the forgetful functor (forget the multiplication),
eta : 1 -> GF, the natural transformation defined by \x::a -> [x],
epsilon : FG -> 1, the natural transformation defined by the folding function above
(the 'canonical surjection' from a free monoid to its quotient monoid).
Next, there is another 'Kleisli category' and the corresponding adjunction.
You can check that it is just the category of Haskell types with morphisms a -> T b,
where its compositions are given by the so-called 'Kleisli composition' (>=>).
A typical Haskell programmer will find this category more familiar.
Finally, as is illustrated in CWM, the category of T-algebras
(resp. the Kleisli category) becomes the terminal (resp. initial) object in the category
of adjunctions that define the list monad T in a suitable sense.
I suggest doing a similar calculation for the binary tree functor T a = L a | B (T a) (T a) to check your understanding.
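As a starting point for that exercise, here is a sketch of the tree type with its usual leaf-substitution monad (substituting a tree for each leaf plays the role that (++)/concat plays for lists); working out the corresponding T-algebras is the suggested exercise:

-- Leaf-labelled binary trees: T a = L a | B (T a) (T a)
data T a = L a | B (T a) (T a)

instance Functor T where
  fmap f (L a)   = L (f a)
  fmap f (B l r) = B (fmap f l) (fmap f r)

instance Applicative T where
  pure = L
  tf <*> tx = tf >>= \f -> fmap f tx

instance Monad T where
  L a   >>= f = f a                    -- substitute a subtree for each leaf
  B l r >>= f = B (l >>= f) (r >>= f)  -- i.e. mu grafts a tree of trees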
I've found a standard construction of adjoint functors for any monad, the Eilenberg-Moore construction, but I'm not sure if it adds any insight to the problem. The second category in the construction is a category of T-algebras. A T-algebra adds a "product" to the initial category.
So how would it work for a list monad? The functor in the list monad consists of a type constructor, e.g., Int -> [Int], and a mapping of functions (e.g., the standard application of map to lists). An algebra adds a mapping from lists to elements. One example would be adding (or multiplying) all the elements of a list of integers. The functor F takes any type, e.g., Int, and maps it into the algebra defined on lists of Int, where the product is defined by monadic join (or vice versa, join is defined as the product). The forgetful functor G takes an algebra and forgets the product. The pair F, G of adjoint functors is then used to construct the monad in the usual way.
I must say I'm none the wiser.
If you are interested, here are some thoughts of a non-expert on the role of monads and adjunctions in programming languages:
First of all, there exists for a given monad T a unique adjunction to the Kleisli category of T. In Haskell, the use of monads is primarily confined to operations in this category (which is essentially a category of free algebras, no quotients). In fact, all one can do with a Haskell Monad is to compose some Kleisli morphisms of type a -> T b through the use of do expressions, (>>=), etc., to create a new morphism. In this context, the role of monads is restricted to just the economy of notation: one exploits the associativity of morphisms to be able to write (say) [0,1,2] instead of (Cons 0 (Cons 1 (Cons 2 Nil))), that is, you can write a sequence as a sequence, not as a tree.
Even the use of IO monads is non-essential, for the current Haskell type system is powerful enough to realize data encapsulation (existential types).
This is my answer to your original question, but I'm curious what Haskell experts have to say about this.
On the other hand, as we have noted, there's also a 1-1 correspondence between monads and adjunctions to (T-)algebras. Adjoints, in MacLane's terms, are 'a way to express equivalences of categories.'
In a typical setting of adjunctions <F,G>: X -> A, where F is some sort of 'free algebra generator' and G a 'forgetful functor', the corresponding monad will (through the use of T-algebras) describe how (and when) the algebraic structure of A is constructed on the objects of X.
In the case of Hask and the list monad T, the structure which T introduces is that of a monoid, and this can help us establish properties (including the correctness) of code through the algebraic methods that the theory of monoids provides. For example, the function foldr (*) e :: [a] -> a can readily be seen as an associative operation as long as <a,(*),e> is a monoid, a fact which could be exploited by the compiler to optimize the computation (e.g. by parallelism).
Another application is to identify and classify 'recursion patterns' in functional programming using categorical methods, in the hope of (partially) disposing of 'the goto of functional programming', Y (the arbitrary recursion combinator).
Apparently, this kind of application is one of the primary motivations of the creators of Category Theory (MacLane, Eilenberg, etc.), namely, to establish natural equivalences of categories and transfer a well-known method from one category to another (e.g. homological methods to topological spaces, algebraic methods to programming, etc.). Here, adjoints and monads are indispensable tools to exploit this connection of categories.
(Incidentally, the notion of monads (and its dual, comonads) is so general that one can even go so far as to define 'cohomologies' of Haskell types. But I have not given it much thought yet.)
As for the non-deterministic functions you mentioned, I have much less to say...
But note that if an adjunction <F,G>: Hask -> A for some category A defines the list monad T, there must be a unique 'comparison functor' K: A -> HaskMon (the category of monoids definable in Haskell); see CWM.
This means, in effect, that your category of interest must be a category of monoids in some restricted form (e.g. it may lack some quotients but not free algebras) in order to define the list monad.
Finally, some remarks:
The binary tree functor I mentioned in my last posting easily generalizes to an arbitrary data type
T a1 .. an = T1 T11 .. T1m | ....
Namely, any data type in Haskell naturally defines a monad (together with the corresponding category of algebras and the Kleisli category), which is just a consequence of any data constructor in Haskell being total.
This is another reason why I consider Haskell's Monad class to be not much more than syntactic sugar
(which is pretty important in practice, of course).

What is <*> called and what does it do? [closed]

How do these functions in the Applicative typeclass work?
(<*>) :: f (a -> b) -> f a -> f b
(*>) :: f a -> f b -> f b
(<*) :: f a -> f b -> f a
(That is, if they weren't operators, what might they be called?)
As a side note, if you could rename pure to something more friendly to non-mathematicians, what would you call it?
Sorry, I don't really know my math, so I'm curious how to pronounce the functions in the Applicative typeclass
Knowing your math, or not, is largely irrelevant here, I think. As you're probably aware, Haskell borrows a few bits of terminology from various fields of abstract math, most notably Category Theory, from whence we get functors and monads. The use of these terms in Haskell diverges somewhat from the formal mathematical definitions, but they're usually close enough to be good descriptive terms anyway.
The Applicative type class sits somewhere between Functor and Monad, so one would expect it to have a similar mathematical basis. The documentation for the Control.Applicative module begins with:
This module describes a structure intermediate between a functor and a monad: it provides pure expressions and sequencing, but no binding. (Technically, a strong lax monoidal functor.)
Hmm.
class (Functor f) => StrongLaxMonoidalFunctor f where
. . .
Not quite as catchy as Monad, I think.
What all this basically boils down to is that Applicative doesn't correspond to any concept that's particularly interesting mathematically, so there are no ready-made terms lying around that capture the way it's used in Haskell. So, set the math aside for now.
If we want to know what to call (<*>) it might help to know what it basically means.
So what's up with Applicative, anyway, and why do we call it that?
What Applicative amounts to in practice is a way to lift arbitrary functions into a Functor. Consider the combination of Maybe (arguably the simplest non-trivial Functor) and Bool (likewise the simplest non-trivial data type).
maybeNot :: Maybe Bool -> Maybe Bool
maybeNot = fmap not
The function fmap lets us lift not from working on Bool to working on Maybe Bool. But what if we want to lift (&&)?
maybeAnd' :: Maybe Bool -> Maybe (Bool -> Bool)
maybeAnd' = fmap (&&)
Well, that's not what we want at all! In fact, it's pretty much useless. We can try to be clever and sneak another Bool into Maybe through the back...
maybeAnd'' :: Maybe Bool -> Bool -> Maybe Bool
maybeAnd'' x y = fmap ($ y) (fmap (&&) x)
...but that's no good. For one thing, it's wrong. For another thing, it's ugly. We could keep trying, but it turns out that there's no way to lift a function of multiple arguments to work on an arbitrary Functor. Annoying!
On the other hand, we could do it easily if we used Maybe's Monad instance:
maybeAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd x y = do x' <- x
                  y' <- y
                  return (x' && y')
Now, that's a lot of hassle just to translate a simple function--which is why Control.Monad provides a function to do it automatically, liftM2. The 2 in its name refers to the fact that it works on functions of exactly two arguments; similar functions exist for 3, 4, and 5 argument functions. These functions are better, but not perfect, and specifying the number of arguments is ugly and clumsy.
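For example, the maybeAnd above collapses to a one-liner with it:

import Control.Monad (liftM2)

maybeAnd :: Maybe Bool -> Maybe Bool -> Maybe Bool
maybeAnd = liftM2 (&&)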
Which brings us to the paper that introduced the Applicative type class. In it, the authors make essentially two observations:
Lifting multi-argument functions into a Functor is a very natural thing to do
Doing so doesn't require the full capabilities of a Monad
Normal function application is written by simple juxtaposition of terms, so to make "lifted application" as simple and natural as possible, the paper introduces infix operators to stand in for application, lifted into the Functor, and a type class to provide what's needed for that.
All of which brings us to the following point: (<*>) simply represents function application--so why pronounce it any differently than you do the whitespace "juxtaposition operator"?
But if that's not very satisfying, we can observe that the Control.Monad module also provides a function that does the same thing for monads:
ap :: (Monad m) => m (a -> b) -> m a -> m b
Where ap is, of course, short for "apply". Since any Monad can be Applicative, and ap needs only the subset of features present in the latter, we can perhaps say that if (<*>) weren't an operator, it should be called ap.
We can also approach things from the other direction. The Functor lifting operation is called fmap because it's a generalization of the map operation on lists. What sort of function on lists would work like (<*>)? There's what ap does on lists, of course, but that's not particularly useful on its own.
In fact, there's a perhaps more natural interpretation for lists. What comes to mind when you look at the following type signature?
listApply :: [a -> b] -> [a] -> [b]
There's something just so tempting about the idea of lining the lists up in parallel, applying each function in the first to the corresponding element of the second. Unfortunately for our old friend Monad, this simple operation violates the monad laws if the lists are of different lengths. But it makes a fine Applicative, in which case (<*>) becomes a way of stringing together a generalized version of zipWith, so perhaps we can imagine calling it fzipWith?
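That zipping reading is what the ZipList wrapper in Control.Applicative provides; a hypothetical listApply is then just zipWith ($):

import Control.Applicative (ZipList(..))

listApply :: [a -> b] -> [a] -> [b]
listApply = zipWith ($)

-- ghci> listApply [(+ 1), (* 2)] [10, 20]
-- [11,40]
-- ghci> getZipList (ZipList [(+ 1), (* 2)] <*> ZipList [10, 20])
-- [11,40]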
This zipping idea actually brings us full circle. Recall that math stuff earlier, about monoidal functors? As the name suggests, these are a way of combining the structure of monoids and functors, both of which are familiar Haskell type classes:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Monoid a where
  mempty :: a
  mappend :: a -> a -> a
What would these look like if you put them in a box together and shook it up a bit? From Functor we'll keep the idea of a structure independent of its type parameter, and from Monoid we'll keep the overall form of the functions:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ?
  mfAppend :: f ? -> f ? -> f ?
We don't want to assume that there's a way to create a truly "empty" Functor, and we can't conjure up a value of an arbitrary type, so we'll fix the type of mfEmpty as f ().
We also don't want to force mfAppend to need a consistent type parameter, so now we have this:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ()
  mfAppend :: f a -> f b -> f ?
What's the result type for mfAppend? We have two arbitrary types we know nothing about, so we don't have many options. The most sensible thing is to just keep both:
class (Functor f) => MonoidalFunctor f where
  mfEmpty :: f ()
  mfAppend :: f a -> f b -> f (a, b)
At which point mfAppend is now clearly a generalized version of zip on lists, and we can reconstruct Applicative easily:
mfPure x = fmap (\() -> x) mfEmpty
mfApply f x = fmap (\(f, x) -> f x) (mfAppend f x)
This also shows us that pure is related to the identity element of a Monoid, so other good names for it might be anything suggesting a unit value, a null operation, or such.
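Putting those pieces together as a runnable sketch (the Maybe instance here is an illustration of mine; MonoidalFunctor is the informal class from above, not a standard library class):

class Functor f => MonoidalFunctor f where
  mfEmpty  :: f ()
  mfAppend :: f a -> f b -> f (a, b)

instance MonoidalFunctor Maybe where
  mfEmpty = Just ()
  mfAppend (Just a) (Just b) = Just (a, b)
  mfAppend _        _        = Nothing

-- Recovering pure and (<*>) from the monoidal presentation:
mfPure :: MonoidalFunctor f => a -> f a
mfPure x = fmap (\() -> x) mfEmpty

mfApply :: MonoidalFunctor f => f (a -> b) -> f a -> f b
mfApply fs xs = fmap (\(f, x) -> f x) (mfAppend fs xs)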
That was lengthy, so to summarize:
(<*>) is just a modified function application, so you can either read it as "ap" or "apply", or elide it entirely the way you would normal function application.
(<*>) also roughly generalizes zipWith on lists, so you can read it as "zip functors with", similarly to reading fmap as "map a functor with".
The first is closer to the intent of the Applicative type class--as the name suggests--so that's what I recommend.
In fact, I encourage liberal use, and non-pronunciation, of all lifted application operators:
(<$>), which lifts a single-argument function into a Functor
(<*>), which chains a multi-argument function through an Applicative
(=<<), which binds a function that enters a Monad onto an existing computation
All three are, at heart, just regular function application, spiced up a little bit.
Since I have no ambitions of improving on C. A. McCann's technical answer, I'll tackle the more fluffy one:
If you could rename pure to something more friendly to podunks like me, what would you call it?
As an alternative, especially since there is no end to the constant angst-and-betrayal-filled cries against the Monad version, called "return", I propose another name, which suggests its function in a way that can satisfy the most imperative of imperative programmers, and the most functional of... well, hopefully, everyone can complain the same about it: inject.
Take a value. "Inject" it into the Functor, Applicative, Monad, or what-have-you. I vote for "inject", and I approved this message.
In brief:
<*> you can call it apply. So Maybe f <*> Maybe a can be pronounced as apply Maybe f over Maybe a.
You could rename pure to of, like many JavaScript libraries do. In JS you can create a Maybe with Maybe.of(a).
Also, Haskell's wiki has a page on pronunciation of language operators here
(<*>) -- Tie Fighter
(*>) -- Right Tie
(<*) -- Left Tie
pure -- also called "return"
Source: Haskell Programming from First Principles, by Chris Allen and Julie Moronuki
