Must mplus always be associative? Haskell wiki vs. Oleg Kiselyov - haskell

The Haskell wikibook asserts that
Instances of MonadPlus are required to fulfill several rules, just as instances of Monad are required to fulfill the three monad laws. ... The most essential are that mzero and mplus form a monoid.
A consequence of this is that mplus must be associative. The Haskell wiki agrees.
However, Oleg, in one of his many backtracking search implementations, writes that
-- Generally speaking, mplus is not associative. It better not be,
-- since associative and non-commutative mplus makes the search
-- strategy incomplete.
Is it kosher to define a non-associative mplus? The first two links pretty clearly suggest you don't have a real MonadPlus instance if mplus isn't associative. But if Oleg does it ... (On the other hand, in that file he's just defining a function called mplus, and doesn't claim that that mplus is the mplus of MonadPlus. He chose a pretty confusing name, if that's the right interpretation.)

Below is the opinion of Oleg Himself, with my comment and his clarification.
O.K. First I would like to register my disagreement with Gabriel
Gonzalez. Not everyone agrees that MonadPlus should be a monoid with
respect to mplus and mzero. The Report says nothing about it. There
are many compelling cases when this is not so (see below). Generally,
the algebraic structure should fit the task. That's why we have
groups, and also weaker semigroups or groupoids (magmas). It seems
MonadPlus is often regarded as a search/non-determinism monad. If so,
then the properties of MonadPlus should be those that facilitate
search and reasoning about search — rather than some ideal ad hoc
properties someone likes for whatever reason. Let me give an example:
it is tempting to posit the law
m >> mzero === mzero
However, monads that support search and can do other effects (think of
NonDeT m) cannot satisfy that law. For example,
print "OK" >> mzero =/== mzero
because the left-hand side prints something but the right-hand
doesn't. By the same token, mplus cannot be symmetric: mplus m1 m2
generally differs from mplus m2 m1, in the same model.
Let us come to mplus. There are two main reasons NOT to require mplus
to be associative. The first is the completeness of the search. Consider
ones = return 1 `mplus` ones
foo = ones `mplus` return 2
=== {- inlining ones -}
(return 1 `mplus` ones) `mplus` return 2
=== {- associativity -}
return 1 `mplus` (ones `mplus` return 2)
===
return 1 `mplus` foo
It would seem, therefore, that coinductively ones and foo are the same. That
means we will never get the answer 2 from foo.
That result holds for ANY search that can be represented by MonadPlus, so
long as mplus is associative and non-commutative. Therefore, if MonadPlus is a
monad for search, then associativity of mplus is an unreasonable requirement.
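A minimal runnable version of this argument (my own illustration, not part of Oleg's message), using the list monad, whose mplus is (++) and therefore associative and non-commutative:
import Control.Monad (mplus)

ones :: [Int]
ones = return 1 `mplus` ones

foo :: [Int]
foo = ones `mplus` return 2

-- take 5 foo == [1,1,1,1,1]; no finite prefix of foo ever contains 2,
-- exactly as the coinductive argument predicts.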
Here is the second reason: sometimes we wish for a probabilistic
search — or, in general, weighted search, when some alternatives are
weighted. It is obvious that the probabilistic choice operator is not
associative. For that reason, our JFP paper specifically avoids
imposing monoid (mplus, mzero) structure on MonadPlus.
http://okmij.org/ftp/Computation/monads.html#lazy-sharing-nondet
(see the discussion around Figure 1 of the paper).
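A small sketch of the second point (my own illustration, not Oleg's code): a weighted choice that splits the remaining probability evenly between its two arms is visibly non-associative.
newtype Weighted a = Weighted [(a, Double)] deriving Show

choice :: Weighted a -> Weighted a -> Weighted a
choice (Weighted xs) (Weighted ys) =
  Weighted ([(x, w / 2) | (x, w) <- xs] ++ [(y, w / 2) | (y, w) <- ys])

a, b, c :: Weighted Char
a = Weighted [('a', 1)]
b = Weighted [('b', 1)]
c = Weighted [('c', 1)]

-- (a `choice` b) `choice` c  weighs 'a', 'b', 'c' as 0.25, 0.25, 0.5
-- a `choice` (b `choice` c)  weighs them as          0.5, 0.25, 0.25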
R.C.
I think Gabriel and you agree on the fact that search monads do not
exhibit the monoid structure. The argument boils down to whether
MonadPlus should be used for search monads, or whether there should be another
class, let's call it MonadPlus', which is just like MonadPlus but with
more lax laws. As you say, the report doesn't say anything on this
topic, and there's no authority to decide.
For the purpose of reasoning, I don't see any problem with that — one
just has to state clearly her assumptions about the MonadPlus instances.
As for the rewrite rule that re-associates mplus'es, the mere existence
and widespread use of MonadPlus instances that are not associative,
regardless of whether they are "broken", means that one should probably
abstain from defining it.
O.K.
I guess I disagree with Gabriel's statement
The monoid laws are the minimum requirement because
without them the other laws are meaningless. For example, when you say
mzero >>= f = mzero, you first need some sensible definition of what
mzero is, but without the identity laws you don't have that. The
monoid laws are what keep the other proposed laws "honest". If you don't
have the monoid laws then you have no sensible laws and what's the point
of a theoretical type class that has no laws?
For example, the LogicT paper and especially the JFP paper have lots of
examples of equational reasoning about non-determinism, without
associativity of mplus. The JFP paper omits all monoid laws for mplus
and mzero (but uses mzero >>= f === mzero). It seems one can have
"honest" and "sensible laws" for non-determinism and search without
the monoid laws for mplus and mzero.
I'm also not sure I agree with the claim
The two laws that everybody agrees that MonadPlus should obey are
the identity and associativity laws (a.k.a. the monoid laws):
I'm not sure a poll has been taken on this. The Report states no laws
for mplus (perhaps the authors were still debating them). So, I
would say the issue is open — and this is the main message to get
across.

The two laws that everybody agrees that MonadPlus should obey are the identity and associativity laws (a.k.a. the monoid laws):
mplus mzero a = a
mplus a mzero = a
mplus (mplus a b) c = mplus a (mplus b c)
I always assume they hold in all MonadPlus instances that I use and consider instances that violate those laws to be "broken", whether or not they were written by Oleg.
Oleg is right that associativity does not play nicely with breadth-first search, but that just means that MonadPlus is not the abstraction he is looking for.
To answer the point you made in a comment, I would always consider that rewrite rule of yours sound.
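A minimal QuickCheck sketch (not part of the original answer) spot-checking those laws for the list instance, where mzero = [] and mplus = (++):
import Control.Monad (mplus, mzero)
import Test.QuickCheck

prop_leftIdentity, prop_rightIdentity :: [Int] -> Bool
prop_leftIdentity  a = (mzero `mplus` a) == a
prop_rightIdentity a = (a `mplus` mzero) == a

prop_associativity :: [Int] -> [Int] -> [Int] -> Bool
prop_associativity a b c =
  ((a `mplus` b) `mplus` c) == (a `mplus` (b `mplus` c))

main :: IO ()
main = do
  quickCheck prop_leftIdentity
  quickCheck prop_rightIdentity
  quickCheck prop_associativity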

It's rare that MonadPlus instances violate associativity, but clearly not impossible. Typeclasses can only be counted on to satisfy the "obvious" laws to a certain extent. For instance, four further sets of possible laws for MonadPlus are discussed here without any conclusion, with libraries following various conventions and not specifying which.
Clearly, Oleg has a reason to dismiss associativity. Is it "truly a MonadPlus instance"? Who knows, it's not well enough defined to say.

Related

What is the relationship between Applicative, Foldable and Traversable?

I'm trying to understand what exactly is needed from the Applicative interface in order to perform any traverse. I'm stuck, since they are not used in the default implementations, as if the constraint were too strict. Is Haskell's type system too weak to describe the actual requirements?
-- | Map each element of a structure to an action, evaluate these actions
-- from left to right, and collect the results. For a version that ignores
-- the results see 'Data.Foldable.traverse_'.
traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
-- | Evaluate each action in the structure from left to right, and
-- collect the results. For a version that ignores the results
-- see 'Data.Foldable.sequenceA_'.
sequenceA :: Applicative f => t (f a) -> f (t a)
sequenceA = traverse id
A possibly related side question: why is sequenceA_ defined in Foldable?
traverse and sequenceA both need to deal with what happens when the Traversable is empty. In that case you won't have any elements in an Applicative context to glom other results onto, so you'll need pure.
The definitions you've presented are a bit misleading since, as you pointed out, they're mutually dependent. When you go to actually implement one of them you'll run into the empty collection problem. And you'll run into the need for <*> as Functor provides no facility to aggregate different values of f a for some functor f.
Therefore the Applicative constraint is there because for most types, in order to implement either traverse or sequenceA you'll need the tools that Applicative provides.
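To make that concrete, here is a hand-written instance for a bare-bones list type (a sketch of my own; the real instance for [] in base is equivalent), showing exactly where pure and <*> come in:
data List a = Nil | Cons a (List a)

instance Functor List where
  fmap _ Nil         = Nil
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Foldable List where
  foldr _ z Nil         = z
  foldr f z (Cons x xs) = f x (foldr f z xs)

instance Traversable List where
  -- the empty case is where 'pure' is needed
  traverse _ Nil         = pure Nil
  -- combining the head's effect with the tail's is where '<*>' is needed
  traverse f (Cons x xs) = Cons <$> f x <*> traverse f xs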
That being said, there are certain types where you don't need pure or don't need <*>. If your collection can never be empty you don't need pure, e.g. NonEmpty. If your collection never has more than one element you don't need <*>, e.g. Maybe. Sometimes you don't need either and you can get away with just fmap, e.g. a tuple section such as (a,).
Haskell could have a more fine-grained typeclass hierarchy that breaks Applicative down into more fine-grained parts with separate classes for pure and <*> which would then allow you to make different versions of Traversable with weaker constraints. Edward Kmett's library semigroupoids goes in this direction, although it isn't perfect since it can't add actual superclasses to the base classes. It has Apply which is Applicative but without pure, and Traversable1 which is a variant of Traversable that uses Apply instead of Applicative and thus requires that its types can never be empty.
Note that other ecosystems have chosen to have a more fine-grained typeclass hierarchy (see Scala's cats or scalaz libraries). I personally find such a distinction occasionally useful but not overwhelmingly so.
As for your second question if all you know how to do is tear down something, you can still perform effects along the way but you can't necessarily recover the original structure. Hence why sequenceA_ is in Foldable. It is strictly less powerful than sequenceA.

Why does Haskell contain so many equivalent functions

It seems like there are a lot of functions that do the same thing, particularly relating to Monads, Functors, and Applicatives.
Examples (from most to least generic):
fmap == liftA == liftM
(<*>) == ap
liftA[2345] == liftM[2345]
pure == return
(*>) == (>>)
An example not directly based on the FAM class tree:
fmap == map
(I thought there were quite a few more with List, Foldable, Traversable, but it looks like most were made more generic some time ago, as I only see the old, less generic type signatures in old stack overflow / message board questions)
I personally find this annoying, as it means that if I need to do x, and some function such as liftM allows me to do x, then I will have made my function less generic than it could have been. I am only going to notice that kind of thing by thoroughly reasoning about the differences between the types involved (such as FAM, or perhaps List, Foldable, and Traversable combinations as well), which is not beginner friendly at all: simply using those types isn't all that hard, but reasoning about their properties and laws requires a lot more mental effort.
I am guessing a lot of these equivalencies come from the Applicative Monad Proposal. If that is the reason for them (and not some other reason I am missing for having less generic functions available for confusion), are they going to be deprecated / deleted ever? I can understand waiting a long time to delete them, due to breaking existing code, but surely deprecation is a good idea?
The short answers are "history" and "regularity".
Originally "map" was defined for lists. Then type-classes were introduced, with the Functor type class, so the generalised version of "map" for any functor had to be called something different, otherwise existing code would be broken. Hence "fmap".
Then monads came along. Instances of monads did not need to be functors, so "liftM" was created, along with "liftM2", "liftM3" etc. Of course if a type is an instance of both Monad and Functor then fmap = liftM.
Monads also have "ap", used in expressions like f `ap` arg1 `ap` arg2. This was very handy, but then Applicative Functors were added. (<*>) did the same job for applicative functors as 'ap', but because many applicative functors are not monads it had to be called something different. Likewise liftAx versus liftMx and "pure" versus "return".
They aren't equivalent, though. Equivalent things in Haskell can be interchanged with no difference at all in functionality. Consider, for example, pure and return.
EDIT: I wrote some examples down, but they were really bad since they involved Maybe a, a type that is both an applicative and a monad, so the functions could be used pretty interchangeably.
There are types that are applicatives but not monads though (see this question for examples), and by studying the type of the following expression, we can see that this could lead to some roadbumps:
pure 1 >>= pure :: (Monad m, Num b) => m b
I personally find this annoying, as it means that if I need to do x, and some function such as liftM allows me to do x, then I will have made my function less generic than it could have been
This logic is backwards.
Normally you know in advance the type of the thing you want to write, be it IO String or (Foldable f, Monoid t, Monad m) => f (m t) -> m t or whatever. Let's take the first case, getLineCapitalized :: IO String. You could write it as
getLineCapitalized = liftM (map toUpper) getLine
or
getLineCapitalized = fmap (fmap toUpper) getLine
Is the former "less generic" because it uses the specialized functions liftM and map? Of course not. This is intrinsically an IO action that produces a list. It cannot become "more generic" by changing it to the second version since those fmaps will have their types fixed to IO and [] anyways. So, there is no advantage to the second version.
By writing the first version, you provide contextual information to the reader for free. In liftM (map foo) bar, the reader knows that bar is going to be an action in some monad that returns a list. In fmap (fmap foo) bar, it could be any sort of doubly-nested structure whatsoever. If bar is something complicated rather than just getLine, then this kind of information is helpful for understanding more easily what is going on in bar.
In general, you should write a function in two steps.
Decide what the type of the function should be. Make it as general or as specific as you want. The more general the type of the function, the stronger guarantees you get on its behavior from parametricity.
Once you have decided on the type of your function, implement it using the most specific available functions. By doing so, you are providing the most information to the reader of your function. You never lose any generality or parametricity guarantees by doing so, since those only depend on the type, which you already determined in step 1.
Edit in response to comments: I was reminded of the biggest reason to use the most specific function available, which is catching bugs. The type length :: [a] -> Int is essentially the entire reason that I still use GHC 7.8. It's never happened that I wanted to take the length of an unknown Foldable structure. On the other hand, I definitely do not want to ever accidentally take the length of a pair, or take the length of foo bar baz which I think has type [a], but actually has type Maybe [a].
In the use cases for Foldable that are not already covered by the rest of the Haskell standard, lens is a vastly more powerful alternative. If I want the "length" of a Maybe t, lengthOf _Just :: Maybe t -> Int expresses my intent clearly, and the compiler can check that the program actually matches my intent; and I can go on to write lengthOf _Nothing, lengthOf _Left, etc. Explicit is better than implicit.
There are some "redundant" functions like liftM, ap, and liftA that have a very real use and taking them out would cause loss of functionality --- you can use liftM, ap, and liftA to implement your Functor or Applicative instances if all you've written is a Monad instance. It lets you be lazy and do, say:
instance Monad Foo where
  return = ...
  (>>=)  = ...
Now you've done all of the rewarding work of defining a Monad instance, but this won't compile. Why? Because you also need a Functor and Applicative instance.
So, because you're quickly prototyping, or lazy, or can't think of a better way, you can just get a free Functor and Applicative instance:
instance Functor Foo where
  fmap = liftM

instance Applicative Foo where
  pure  = return
  (<*>) = ap
In fact, you can just copy-and-paste that chunk of code everywhere you need to quickly define a Functor or Applicative instance when you already have a Monad instance defined.
The same goes for fmapDefault from Data.Traversable. If you've implemented Traversable, you can also implement Foldable and Functor:
instance Functor Bar where
  fmap = fmapDefault
no extra work required!
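Data.Traversable also exports foldMapDefault, so (assuming the same hypothetical Bar with a Traversable instance) the Foldable instance comes for free in the same way:
import Data.Traversable (foldMapDefault)

instance Foldable Bar where
  foldMap = foldMapDefault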
There are some redundant functions, however, that really have no actual use other than being historical accidents from a time when Functor was not a superclass of Monad. These have essentially no point in existing any more, and include things like liftM2, liftM3, etc., and (>>) and friends.

Proofs of Applicative laws for haskell instances

Have all the Haskell instances of Applicative typeclass that we get with the Haskell platform been proved to satisfy all the Applicative laws?
If yes, where do we find those proofs?
The source code of Control.Applicative does not seem to contain any proof that the applicative laws for various instances do hold.
It just mentions that
-- | A functor with application.
--
--Instances should satisfy the following laws:
It then just states the laws in the comments.
I found a similar case for the instances of other typeclasses (Alternative and Monad) too.
Are the users of these libraries supposed to verify these laws by themselves?
But I was wondering whether the rigorous proofs of these laws have been given elsewhere by the developers?
Again, I am aware that a rigorous proof of the Applicative (or Monad) laws for the IO monad, which involves talking to the outside world, may in general be very complex.
Thanks.
Yes, the burden of proof is entirely on the library writers. There are some examples of implementations that violate these laws. The canonical example of a law violation is ListT, which doesn't obey the monad laws for most base monads (see examples). This gives very buggy behavior and nobody really uses ListT as a result.
I'm pretty sure most of these kinds of proofs haven't been etched in stone in a standard place. Most of the proofs have simply been repeated and checked to death by various curious members of the community so after a while we know which implementations do and do not satisfy their laws.
To give a concrete example, when I write my pipes library, I have to prove that my pipes satisfy the Category laws, but I just keep these proofs in a text file or an hpaste for future record if somebody requests them. Including them in the source is not really feasible because they can get very long, especially for associativity laws.
However, I think a good practice might be to include, when possible, machine-checked proofs in the original repositories so that users can refer to them as necessary.
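As a taste of what such a pencil-and-paper proof looks like (my own sketch, not taken from any of the sources above), here is the Applicative identity law, pure id <*> v = v, for Maybe, by case analysis:
-- pure id <*> Nothing  =  Just id <*> Nothing  =  Nothing
-- pure id <*> Just x   =  Just id <*> Just x   =  Just (id x)  =  Just x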
There is an excellent library, checkers, which provides QuickCheck properties to check laws.
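A hedged sketch of typical usage (quickBatch and the law batches live in Test.QuickCheck.Checkers and Test.QuickCheck.Classes; the undefined argument is a phantom that only fixes the types being tested):
import Test.QuickCheck.Checkers (quickBatch)
import Test.QuickCheck.Classes (applicative, monad)

main :: IO ()
main = do
  -- check the Applicative and Monad laws for [] at these element types
  quickBatch (applicative (undefined :: [(Int, Char, Bool)]))
  quickBatch (monad       (undefined :: [(Int, Char, Bool)]))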
The experimental library ghc-proofs allows you to use the compiler to verify such laws for you:
app_law_2 a b (c :: Succs a) = pure (.) <*> a <*> b <*> c
                           === a <*> (b <*> c)
It only works in a few cases, such as the one described in my blog post, and is best seen as an experiment, not a ready tool.

What is a monad in FP, in categorical terms?

Every time someone promises to "explain monads", my interest is piqued, only to be replaced by frustration when the alleged "explanation" is a long list of examples terminated by some off-hand remark that the "mathematical theory" behind the "esoteric ideas" is "too complicated to explain at this point".
Now I'm asking for the opposite. I have a solid grasp on category theory and I'm not afraid of diagram chasing, Yoneda's lemma or derived functors (and indeed on monads and adjunctions in the categorical sense).
Could someone give me a clear and concise definition of what a monad is in functional programming? The fewer examples the better: sometimes one clear concept says more than a hundred timid examples. Haskell would do nicely as a language for demonstration though I'm not picky.
This question has some good answers: Monads as adjunctions
More to the point, Derek Elkins' "Calculating Monads with Category Theory" article in TMR #13 should have the sort of constructions you're looking for: http://www.haskell.org/wikiupload/8/85/TMR-Issue13.pdf
Finally, and perhaps this is really the closest to what you're looking for, you can go straight to the source and look at Moggi's seminal papers on the topic from 1988-91: http://www.disi.unige.it/person/MoggiE/publications.html
See in particular "Notions of computation and monads".
My own I'm sure too condensed/imprecise take:
Begin with a category Hask whose objects are Haskell types, and whose morphisms are functions. Functions are also objects in Hask, as are products. So Hask is Cartesian closed. Now introduce an arrow mapping every object in Hask to MHask, which is a subset of the objects in Hask. Unit!
Next introduce an arrow mapping every arrow on Hask to an arrow on MHask. This gives us map, and makes MHask a covariant endofunctor. Now introduce an arrow mapping every object in MHask which is generated from an object in MHask (via unit) to the object in MHask which generates it. Join! And from that, MHask is a monad (and a monoidal endofunctor, to be more precise).
I'm sure there is a reason why the above is deficient, which is why I'd really direct you, if you're looking for formalism, to the Moggi papers in particular.
As a complement to Carl's answer, a Monad in Haskell is (theoretically) this:
class Monad m where
  join   :: m (m a) -> m a
  return :: a -> m a
  fmap   :: (a -> b) -> m a -> m b
Note that "bind" (>>=) can be defined as
x >>= f = join (fmap f x)
According to the Haskell Wiki
A monad in a category C is a triple (F : C → C, η : Id → F, μ : F ∘ F → F)
...with some axioms. For Haskell, fmap, return, and join line up with F, η, and μ, respectively. (fmap in Haskell defines a Functor). If I'm not mistaken, Scala calls these map, pure, and join respectively. (Scala calls bind "flatMap")
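Conversely (a standard fact, not specific to this answer), join can be recovered from bind:
join :: Monad m => m (m a) -> m a
join x = x >>= id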
Ok, using Haskell terminology and examples...
A monad, in functional programming, is a composition pattern for data types with the kind * -> *.
class Monad (m :: * -> *) where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
(There's more to the class than that in Haskell, but those are the important parts.)
A data type is a monad if it can implement that interface while satisfying three conditions in the implementation. These are the "monad laws", and I'll leave it to those long-winded explanations for the full explanation. I summarize the laws as "(>>= return) is an identity function, and (>>=) is associative." It's really not more than that, even if it can be expressed more precisely.
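One way to make that summary literal (a standard reformulation, not part of the original answer) is to state the laws with Kleisli composition (>=>) from Control.Monad, where (f >=> g) x = f x >>= g:
-- return >=> f     =  f
-- f >=> return     =  f
-- (f >=> g) >=> h  =  f >=> (g >=> h)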
And that's all a monad is. If you can implement that interface while preserving those behavioral properties, you have a monad.
That explanation is probably shorter than you expected. That's because the monad interface really is very abstract. The incredible level of abstraction is part of why so many different things can be modeled as monads.
What's less obvious is that as abstract as the interface is, it allows generically modeling any control-flow pattern, regardless of the actual monad implementation. This is why the Control.Monad package in GHC's base library has combinators like when, forever, etc. And this is why the ability to explicitly abstract over any monad implementation is powerful, especially with support from a type system.
You should read the paper by Eugenio Moggi, "Notions of computations and monads", which explains the role then proposed for monads in structuring the denotational semantics of effectful languages.
Also there is a related question:
References for learning the theory behind pure functional languages such as Haskell?
As you don't want hand-waving, you have to read scientific papers, not forum answers or tutorials.
A monad is a monoid in the category of endofunctors, what's the problem?
Humor aside, I personally believe that monads, as they are used in Haskell and functional programming, are better understood from the monads-as-an-interface point of view (as in Carl's and Dan's answers) instead of from the monads-as-the-term-from-category-theory point of view. I have to confess that I only really internalized the whole monad thing when I had to use a monadic library from another language in a real project.
You mention that you didn't like all the "lots of examples" tutorials. Has anyone ever pointed you to the Awkward squad paper? It focuses mainly on the IO monad, but the introduction gives a good technical and historical explanation of why the monad concept was embraced by Haskell in the first place.
I don't really know what I'm talking about, but here's my take:
Monads are used to represent computations. You can think of a normal procedural program, which is basically a list of statements, as a bunch of composed computations. Monads are a generalization of this concept, allowing you to define how the statements get composed. Each computation has a value (it could just be ()); the monad just determines how the value strung through a series of computations behaves.
Do notation is really what makes this clear: it's basically a special sort of statement-based language that lets you define what happens between statements. It's as if you could define how ";" worked in C-like languages.
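A tiny sketch of that desugaring (my own example), with Maybe supplying the meaning of the implicit ";":
example :: Maybe Int
example = do
  x <- Just 1
  y <- Nothing        -- Maybe's (>>=) short-circuits the rest here
  return (x + y)

-- which desugars to:
example' :: Maybe Int
example' = Just 1 >>= \x -> Nothing >>= \y -> return (x + y)

-- both are Nothing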
In this light all of the monads I've used so far make sense: State doesn't affect the value but updates a second value which is passed along from computation to computation in the background; Maybe short-circuits the value if it ever encounters a Nothing; List lets you have a variable number of values passed through; IO lets you have impure values passed through in a safe way. The more specialized monads I've used, like Gen and Parsec parsers, are also similar.
Hopefully this is a clear explanation which isn't completely off-base.
Since you understand monads in the category-theoretic sense I am interpreting your question as being about the presentation of monads in functional programming.
Thus my answer avoids any explanation of what a monad is, or any intuition about its meaning or use.
Answer: In Haskell a monad is presented, in an internal language for some category, as the (internalised) maps of a Kleisli triple.
Explanation:
It is hard to be precise about the properties of the "Hask category", and these properties are largely irrelevant for understanding Haskell's presentation of monads.
Instead, for this discussion, it is more useful to understand Haskell as an internal language for some category C. Haskell functions define morphisms in C and Haskell types are objects in C, but the particular category in which these definitions are made is unimportant.
Parametric data types, e.g. data F a = ..., are object mappings, e.g. F : |C| -> |C|.
The usual description of a monad in Haskell is in Kleisli triple (or Kleisli extension) form:
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
where:
m is the object mapping m : |C| -> |C|
return is the unit operation on objects
>>= (pronounced "bind" by Haskellers) is the extension operation on morphisms but with its first two parameters swapped (cf. usual signature of extension (-)* : (a -> m b) -> m a -> m b)
(These maps are themselves internalised as families of morphisms in C, which is possible since m : |C| -> |C|).
Haskell's do-notation (if you have come across this) is therefore an internal language for Kleisli categories.
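For the record (not part of the answer above), the extension operation with its parameters in the usual order already exists in the Prelude as (=<<):
-- (=<<) :: Monad m => (a -> m b) -> m a -> m b
-- f =<< x  =  x >>= f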
The Haskell wikibook page has a good basic explanation.

Why should I use applicative functors in functional programming?

I'm new to Haskell, and I'm reading about functors and applicative functors. Ok, I understand functors and how I can use them, but I don't understand why applicative functors are useful and how I can use them in Haskell. Can you explain to me with a simple example why I need applicative functors?
Applicative functors are a construction that provides a midpoint between functors and monads; they are therefore more widespread than monads, while more useful than functors. Normally you can just map a function over a functor. Applicative functors allow you to take a "normal" function (taking non-functorial arguments) and use it to operate on several values that are in functor contexts. As a corollary, this gives you effectful programming without monads.
A nice, self-contained explanation, replete with examples, can be found here. You can also read a practical parsing example developed by Bryan O'Sullivan, which requires no prior knowledge.
Applicative functors are useful when you need sequencing of actions, but don't need to name any intermediate results. They are thus weaker than monads, but stronger than functors (they do not have an explicit bind operator, but they do allow running arbitrary functions inside the functor).
When are they useful? A common example is parsing, where you need to run a number of actions that read parts of a data structure in order, then glue all the results together. This is like a general form of function composition:
f a b c d
where you can think of a, b and so on as the arbitrary actions to run, and f as the functor to apply to the result.
f <$> a <*> b <*> c <*> d
I like to think of them as overloaded 'whitespace'. Or, that regular Haskell functions are in the identity applicative functor.
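A concrete instance of the pattern (my own example), with Maybe as the applicative functor:
example1, example2 :: Maybe (Int, Char, Bool)
example1 = (,,) <$> Just 1 <*> Just 'x' <*> Just True   -- Just (1,'x',True)
example2 = (,,) <$> Just 1 <*> Nothing  <*> Just True   -- Nothing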
See "Applicative Programming with Effects"
Conor McBride and Ross Paterson's Functional Pearl on the style has several good examples. It's also responsible for popularizing the style in the first place. They use the term "idiom" for "applicative functor", but other than that it's pretty understandable.
It is hard to come up with examples where you need applicative functors. I can understand why an intermediate Haskell programmer would ask themselves that question, since most introductory texts present instances derived from Monads, using Applicative Functors only as a convenient interface.
The key insight, as mentioned both here and in most introductions to the subject, is that Applicative Functors are between Functors and Monads (even between Functors and Arrows). All Monads are Applicative Functors, but not all Applicative Functors are Monads.
So necessarily, sometimes we can use applicative combinators for something that we can't use monadic combinators for. One such thing is ZipList (see also this SO question for some details), which is just a wrapper around lists in order to have a different Applicative instance than the one derived from the Monad instance of list. The Applicative documentation uses the following line to give an intuitive notion of what ZipList is for:
f <$> ZipList xs1 <*> ... <*> ZipList xsn = ZipList (zipWithn f xs1 ... xsn)
As pointed out here, it is possible to make quirky Monad instances that almost work for ZipList.
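For concreteness (a small example of my own), compare the default list Applicative with ZipList:
import Control.Applicative (ZipList(..))

allPairs :: [(Int, Char)]
allPairs = (,) <$> [1, 2] <*> "ab"
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

zipped :: [(Int, Char)]
zipped = getZipList ((,) <$> ZipList [1, 2] <*> ZipList "ab")
-- [(1,'a'),(2,'b')]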
There are other Applicative Functors that are not Monads (see this SO question) and they are easy to come up with. Having an alternative interface for Monads is nice and all, but sometimes making a Monad is inefficient, complicated, or even impossible, and that is when you need Applicative Functors.
disclaimer: Making Applicative Functors might also be inefficient, complicated, and impossible, when in doubt, consult your local category theorist for correct usage of Applicative Functors.
In my experience, Applicative functors are great for the following reasons:
Certain kinds of data structures admit powerful types of compositions, but cannot really be made monads. In fact, most of the abstractions in functional reactive programming fall into this category. While we might technically be able to make e.g. Behavior (aka Signal) a monad, it typically cannot be done efficiently. Applicative functors allow us to still have powerful compositions without sacrificing efficiency (admittedly, it is a bit trickier to use an applicative than a monad sometimes, just because you don't have quite as much structure to work with).
The lack of data-dependence in an applicative functor allows you to e.g. traverse an action looking for all the effects it might produce without having the data available. So you could imagine a "web form" applicative, used like so:
userData = User <$> field "Name" <*> field "Address"
and you could write an engine which would traverse to find all the fields used and display them in a form, then when you get the data back run it again to get the constructed User. This cannot be done with a plain functor (because it combines two forms into one), nor a monad, because with a monad you could express:
userData = do
  name    <- field "Name"
  address <- field $ name ++ "'s address"
  return (User name address)
which cannot be rendered, because the name of the second field cannot be known without already having the response from the first. I'm pretty sure there's a library that implements this forms idea -- I've rolled my own a few times for this and that project.
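Here is a minimal sketch of such a forms applicative (my own illustration; Form, field, and the record names are hypothetical, not taken from any existing library):
{-# LANGUAGE DeriveFunctor #-}

data Form a = Form
  { fieldNames :: [String]                        -- fields known statically
  , runForm    :: [(String, String)] -> Maybe a   -- parse submitted data
  } deriving Functor

instance Applicative Form where
  pure x = Form [] (\_ -> Just x)
  Form ns1 p1 <*> Form ns2 p2 =
    Form (ns1 ++ ns2) (\env -> p1 env <*> p2 env)

field :: String -> Form String
field name = Form [name] (lookup name)

data User = User String String deriving Show

userData :: Form User
userData = User <$> field "Name" <*> field "Address"

-- fieldNames userData == ["Name","Address"]   (known before any input arrives)
-- runForm userData [("Name","Ada"),("Address","Cambridge")]
--   == Just (User "Ada" "Cambridge")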
The other nice thing about applicative functors is that they compose. More precisely, the composition functor:
newtype Compose f g x = Compose (f (g x))
is applicative whenever f and g are. The same cannot be said for monads, which creates the whole monad transformer story, which is complicated in some unpleasant ways. Applicatives are super clean this way, and it means you can build up the structure of a type you need by focusing on small composable components.
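For concreteness, here are the instances in question (this matches the standard ones in base's Data.Functor.Compose):
instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap h (Compose x) = Compose (fmap (fmap h) x)

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure = Compose . pure . pure
  Compose h <*> Compose x = Compose ((<*>) <$> h <*> x)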
Recently the ApplicativeDo extension has appeared in GHC, which allows you to use do notation with applicatives, easing some of the notational complexity, as long as you don't do any monady things.
One good example: applicative parsing.
See [real world haskell] ch16 http://book.realworldhaskell.org/read/using-parsec.html#id652517
This is the parser code with do-notation:
-- file: ch16/FormApp.hs
p_hex :: CharParser () Char
p_hex = do
  char '%'
  a <- hexDigit
  b <- hexDigit
  let ((d, _):_) = readHex [a,b]
  return . toEnum $ d
Using the applicative style makes it much shorter:
-- file: ch16/FormApp.hs
a_hex :: CharParser () Char
a_hex = hexify <$> (char '%' *> hexDigit) <*> hexDigit
  where hexify a b = toEnum . fst . head . readHex $ [a,b]
'Lifting' can hide the underlying details of some repetitive code; then you can use fewer words to tell the exact, precise story.
I would also suggest taking a look at this.
At the end of the article there's an example:
import Control.Applicative

hasCommentA blogComments =
  BlogComment <$> lookup "title" blogComments
              <*> lookup "user" blogComments
              <*> lookup "comment" blogComments
This illustrates several features of the applicative programming style.
