What are those operators <$> and <*> in Haskell?
I have them in a line like this:
class Evaluable e where
eval :: (Num a, Ord a) => (Ident -> Maybe a) -> (e a) -> (Either String a)
typeCheck :: (Ident -> String) -> (e a) -> Bool
instance Evaluable NExpr where
eval lookup (Plus left right) = (+) <$> eval lookup left <*> eval lookup right
As I am the one who showed you these operators, I'll give a brief explanation as to why I used them.
To review, a functor is a type constructor that lets you use the fmap function to apply a function to a "wrapped" value. In the specific case of the Either type constructor (partially applied, in this case, to String), you can apply a function to a Right value, but ignore the function if applied to a Left value (your error). It provides a way of error propagation without having to check for the error.
fmap f (Right x) = Right (f x)
fmap f (Left y) = Left y
An applicative functor is similar, except the function itself can be wrapped just like the argument it is applied to. The <*> operator unwraps both its operands, unlike fmap which only unwraps its right operand.
Right f <*> Right x = Right (f x)
Left f <*> _ = Left f
_ <*> Left y = Left y
Typically, you don't wrap functions yourself: they result from using fmap to partially apply a function to a wrapped value:
fmap (+) (Right 3) == Right (+ 3)
fmap (+) (Left "error") == Left "error"
So, when we are working with Either values, the use of <$> (infix fmap) and <*> lets us pretend we are working with regular values, without worrying about whether they are wrapped with Left or Right. Right values provide the expected Right-wrapped answer, and Left values are preserved. In the case of a binary operator, only the first Left value is returned, but that is often sufficient.
(+) <$> Left "x undefined" <*> Left "y undefined" == Left "x undefined" <*> Left "y undefined"
== Left "x undefined"
(+) <$> Left "x undefined" <*> Right 9 == Left "x undefined" <*> Right 9
== Left "x undefined"
(+) <$> Right 3 <*> Left "y undefined" == Right (+ 3) <*> Left "y undefined"
== Left "y undefined"
(+) <$> Right 3 <*> Right 9 == Right (+3) <*> Right 9
== Right 12
In the end, using the Applicative instance of Either String lets us combine the results of evaluating two subexpressions without having to explicitly check if either recursive call of eval actually succeeded. Successful recursive calls result in success; an error in either call is used as the same error for the top-level call.
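To make that concrete, here is a minimal, self-contained sketch; the NExpr constructors and the error message for Var below are made up for illustration and are not the question's actual definitions. It shows the applicative one-liner next to the explicit case analysis it replaces:
type Ident = String

-- A cut-down expression type, standing in for the question's NExpr.
data NExpr a = Const a
             | Var Ident
             | Plus (NExpr a) (NExpr a)

eval :: (Num a, Ord a) => (Ident -> Maybe a) -> NExpr a -> Either String a
eval _      (Const x)         = Right x
eval lookup (Var v)           = maybe (Left (v ++ " undefined")) Right (lookup v)
eval lookup (Plus left right) = (+) <$> eval lookup left <*> eval lookup right

-- The Plus case written out with explicit pattern matching instead:
evalPlus :: (Num a, Ord a) => (Ident -> Maybe a) -> NExpr a -> NExpr a -> Either String a
evalPlus lookup left right =
  case eval lookup left of
    Left err -> Left err
    Right x  -> case eval lookup right of
      Left err -> Left err
      Right y  -> Right (x + y)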
The <$> operator is an infix form of fmap. It allows you to apply a pure function to a value wrapped in some parametric type that is an instance of the Functor class. The type of <$> is (a -> b) -> f a -> f b.
The <*> operator is quite similar to <$>. It allows you to apply a function wrapped in a parametric type to a value wrapped in the same parametric type. The type of <*> is f (a -> b) -> f a -> f b.
In this particular case, it's a way to combine the results of eval.
If one part of the expression fails, then the whole expression fails.
This way, it's possible to separate error handling from your application logic and to avoid complex nested case ... of expressions.
To fully understand this, I'd advise reading up on functors first, then applicative functors.
In parallel you can play with Maybe and Either, and write the equivalent code using case expressions.
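For example, a small sketch of that exercise for Maybe (Either works the same way, with Left playing the role of Nothing):
-- Applicative combination for Maybe ...
addA :: Num a => Maybe a -> Maybe a -> Maybe a
addA mx my = (+) <$> mx <*> my

-- ... and the equivalent written out with case expressions.
addC :: Num a => Maybe a -> Maybe a -> Maybe a
addC mx my = case mx of
  Nothing -> Nothing
  Just x  -> case my of
    Nothing -> Nothing
    Just y  -> Just (x + y)

-- addA (Just 3) (Just 4) == Just 7  == addC (Just 3) (Just 4)
-- addA Nothing  (Just 4) == Nothing == addC Nothing  (Just 4)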
Related
I've recently been trying to learn Haskell with the "Learn You a Haskell" and have been really struggling with understanding functions as Applicatives. I should point out that using other types of Applicatives like Lists and Maybe I seem to understand well enough to use them effectively.
As I tend to do when trying to understand something, I played with as many examples as I could; once the pattern emerges, things tend to make sense. As such I tried a few examples. Attached are my notes on several examples I tried, along with a diagram I drew to try to visualize what was happening.
The definition of funct doesn't seem to be relevant to the outcome, but in my tests I used a function with the following definition:
funct :: (Num a) => a -> a -> a -> a
At the bottom I tried to show the same thing as in the diagrams just using normal math notation.
So all of this is well and good: I can understand the pattern when I have some function of an arbitrary number of arguments (though it needs 2 or more) and apply it to a function that takes one argument. However, intuitively this pattern doesn't make that much sense to me.
So here are the specific questions I have:
What is the intuitive way to understand the pattern I'm seeing, particularly if I view an Applicative as a container (which is how I view Maybe and lists)?
What is the pattern when the function on the right of the <*> takes more than a single argument (I've mostly been using the function (+3) or (+5) on the right)?
Why is the function on the right-hand side of the <*> applied to the second argument of the function on the left side? For example, if the function on the right-hand side were f, then funct(a,b,c) turns into funct(x, f(x), c)?
Why does it work for funct <*> (+3) but not for funct <*> (+)? Moreover it DOES work for (\ a b -> 3) <*> (+)
Any explanation that gives me a better intuitive understanding for this concept would be greatly appreciated. I read other explanations, such as in the book I mentioned, that explain functions in terms of ((->) r) or similar patterns, but even though I know how to use the (->) operator when defining a function, I'm not sure I understand it in this context.
Extra Details:
I want to also include the actual code I used to help me form the diagrams above.
First I defined funct as I showed above with:
funct :: (Num a) => a -> a -> a -> a
Throughout the process I refined funct in various ways to understand what was going on.
Next I tried this code:
funct a b c = 6
functMod = funct <*> (+3)
functMod 2 3
Unsurprisingly, the result was 6.
So now I tried just returning each argument directly like this:
funct a b c = a
functMod = funct <*> (+3)
functMod 2 3 -- returns 2
funct a b c = b
functMod = funct <*> (+3)
functMod 2 3 -- returns 5
funct a b c = c
functMod = funct <*> (+3)
functMod 2 3 -- returns 3
From this I was able to confirm that the second diagram is what was taking place. I repeated this pattern to observe the third diagram as well (which is the same pattern extended on top a second time).
You can usually understand what a function is doing in Haskell if you
substitute its definition into some examples. You already have some
examples and the definition you need is <*> for (->) a which is
this:
(f <*> g) x = f x (g x)
I don't know if you'll find any better intuition than just using the
definition a few times.
On your first example we get this:
(funct <*> (+3)) x
= funct x ((+3) x)
= funct x (x+3)
(Since there was nothing I could do with funct <*> (+3) without a
further parameter I just applied it to x - do this any time you need
to.)
And the rest:
(funct <*> (+3) <*> (+5)) x
= (funct x (x+3) <*> (+5)) x
= funct x (x+3) x ((+5) x)
= funct x (x+3) x (x+5)
(funct <*> (+)) x
= funct x ((+) x)
= funct x (x+)
Notice you can't use the same funct with both of these - in
the first it can take four numbers, but in the second it needs to take
a number and a function.
((\a b -> 3) <*> (+)) x
= (\a b -> 3) x (x+)
= (\b -> 3) (x+)
= 3
((\a b -> a + b) <*> (+)) x
= (\a b -> a + b) x (x+)
= x + (x+)
= type error
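If you want to sanity-check these expansions, the (->) instance is already in scope in GHCi, so you can try them with a concrete stand-in for funct:
GHCi> let funct a b c = a + b + c    -- a concrete three-argument stand-in
GHCi> (funct <*> (+3)) 2 3           -- = funct 2 (2+3) 3 = 2 + 5 + 3
10
GHCi> ((\a b -> 3) <*> (+)) 2        -- = (\a b -> 3) 2 (2+)
3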
As pointed out by David Fletcher, (<*>) for functions is:
(g <*> f) x = g x (f x)
There are two intuitive pictures of (<*>) for functions which, though not quite able to stop it from being dizzying, might help with keeping your balance as you go through code that uses it. In the next few paragraphs, I will use (+) <*> negate as a running example, so you might want to try it out a few times in GHCi before continuing.
The first picture is (<*>) as applying the result of a function to the result of another function:
g <*> f = \x -> (g x) (f x)
For instance, (+) <*> negate passes an argument to both (+) and negate, giving out a function and a number respectively, and then applies one to the other...
(+) <*> negate = \x -> (x +) (negate x)
... which explains why its result is always 0.
The second picture is (<*>) as a variation on function composition in which the argument is also used to determine what the second function to be composed will be
g <*> f = \x -> (g x . f) x
From that point of view, (+) <*> negate negates the argument and then adds the argument to the result:
(+) <*> negate = \x -> ((x +) . negate) x
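A quick GHCi check of the running example:
GHCi> ((+) <*> negate) 7
0
GHCi> ((+) <*> negate) (-3)
0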
If you have a funct :: Num a => a -> a -> a -> a, funct <*> (+3) works because:
In terms of the first picture: (+ 3) x is a number, and so you can apply funct x to it, ending up with funct x ((+ 3) x), a function that takes two arguments.
In terms of the second picture: funct x is a function (of type Num a => a -> a -> a) that takes a number, and so you can compose it with (+ 3) :: Num a => a -> a.
On the other hand, with funct <*> (+), we have:
In terms of the first picture: (+) x is not a number, but a Num a => a -> a function, and so you can't apply funct x to it.
In terms of the second picture: the result type of (+), when seen as a function of one argument ((+) :: Num a => a -> (a -> a)), is Num a => a -> a (and not Num a => a), and so you can't compose it with funct x (which expects a Num a => a).
For an arbitrary example of something that does work with (+) as the second argument to (<*>), consider the function iterate:
iterate :: (a -> a) -> a -> [a]
Given a function and an initial value, iterate generates an infinite list by repeatedly applying the function. If we flip the arguments to iterate, we end up with:
flip iterate :: a -> (a -> a) -> [a]
Given the problem with funct <*> (+) was that funct x wouldn't take a Num a => a -> a function, this seems to have a suitable type. And sure enough:
GHCi> take 10 $ (flip iterate <*> (+)) 1
[1,2,3,4,5,6,7,8,9,10]
(On a tangential note, you can leave out the flip if you use (=<<) instead of (<*>). That, however, is a different story.)
As a final aside, neither of the two intuitive pictures lends itself particularly well to the common use case of applicative style expressions such as:
(+) <$> (^2) <*> (^3)
To use the intuitive pictures there, you'd have to account for how (<$>) for functions is (.), which muddies things quite a bit. It is easier to just see the entire thing as lifted application instead: in this example, we are adding up the results of (^2) and (^3). The equivalent spelling as...
liftA2 (+) (^2) (^3)
... somewhat emphasises that. Personally, though, I feel one possible disadvantage of writing liftA2 in this setting is that, if you apply the resulting function right in the same expression, you end up with something like...
liftA2 (+) (^2) (^3) 5
... and seeing liftA2 followed by three arguments tends to make my brain tilt.
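For reference, both spellings can be checked in GHCi:
GHCi> import Control.Applicative (liftA2)
GHCi> ((+) <$> (^2) <*> (^3)) 5      -- 5^2 + 5^3
150
GHCi> liftA2 (+) (^2) (^3) 5
150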
You can view the function monad as a container. Note that it's really a separate monad for every argument-type, so we can pick a simple example: Bool.
newtype M a = M (Bool -> a)
This is equivalent to
data M' a = M' { resultForFalse :: a
, resultForTrue :: a }
and the instances could be defined
instance Functor M where
  fmap f (M g) = M g'
    where g' False = f $ g False
          g' True  = f $ g True

instance Functor M' where
  fmap f (M' gFalse gTrue) = M' g'False g'True
    where g'False = f $ gFalse
          g'True  = f $ gTrue
and similar for Applicative and Monad.
Of course this exhaustive case-listing definition would become totally impractical for argument-types with more than a few possible values, but it's always the same principle.
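Here is a sketch of what the "similar for Applicative" part could look like for the record form M' (this block is illustrative, not from the original answer; it repeats the data declaration so it compiles on its own):
-- Illustrative sketch: the Applicative instance for the record form,
-- mirroring the (Bool ->) instance where (f <*> g) x = f x (g x).
data M' a = M' { resultForFalse :: a
               , resultForTrue  :: a }

instance Functor M' where
  fmap f (M' gFalse gTrue) = M' (f gFalse) (f gTrue)

instance Applicative M' where
  pure x = M' x x                          -- same result for either argument
  M' fFalse fTrue <*> M' xFalse xTrue =
    M' (fFalse xFalse) (fTrue xTrue)       -- apply pointwise, per argument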
But the important thing to take away is that the instances are always specific for one particular argument. So, Bool -> Int and Bool -> String belong to the same monad, but Int -> Int and Char -> Int do not. Int -> Double -> Int does belong to the same monad as Int -> Int, but only if you consider Double -> Int as an opaque result type which has nothing to do with the (Int ->) monad.
So, if you're considering something like a -> a -> a -> a then this is not really a question about applicatives/monads but about Haskell in general. And therefore, you shouldn't expect that the monad=container picture gets you anywhere. To understand a -> a -> a -> a as a member of a monad, you need to pick out which of the arrows you're talking about; in this case it's only the leftmost one, i.e. you have a value of type M (a -> a -> a) in the monad M = (a ->). The arrows between a->a->a do not participate in the monadic action in any way; if they do in your code, then it means you're actually mixing multiple monads together. Before you do that, you should understand how a single monad works, so stick to examples with only a single function arrow.
Applicative Programming with Effects, the paper from McBride and Paterson, presents the interchange law:
u <*> pure x = pure (\f -> f x) <*> u
In order to try to understand it, I attempted the following example - to represent the left-hand side.
ghci> Just (+10) <*> pure 5
Just 15
How could I write this example using the right-hand side?
Also, if u is an f (a -> b) where f is an Applicative, then what's the function on the right-hand side: pure (\f -> f x) ...?
It would be written as
pure (\f -> f 5) <*> Just (+10)
Or even
pure ($ 5) <*> Just (+10)
Both are equivalent in this case. Quite literally, you're wrapping a function with pure that takes another function as its argument and then applies x to it. You provide f as the contents of the Just, which in this case is (+10). When you see the lambda syntax of (\f -> f x), it's being very literal: this is the lambda used for this definition.
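Checking both sides in GHCi:
GHCi> Just (+10) <*> pure 5
Just 15
GHCi> pure (\f -> f 5) <*> Just (+10)
Just 15
GHCi> pure ($ 5) <*> Just (+10)
Just 15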
The point this law makes is about preservation of the exponential by the Applicative Functor: what is an exponential in the origin is also an exponential in the image.
Please observe that the actual action of the Applicative Functor is the transformation of the following kind: strength :: (f a, f b) -> f (a, b); then ap or <*> is just fmap eval over the result, or, written fully, ap = curry $ fmap (uncurry ($)) . strength.
This law then says that since in the origin g $ x == ($ x) $ g, lifting ($), x and ($ x) should preserve the equality. Notice that "normal" Functors will preserve the equality only if g is lifted, too, but Applicative Functors will preserve this equality for any object of type f (a->b) in place of g. This way the whole type f (a->b) behaves like f a -> f b, whereas for "normal" Functors it only needs to behave like f a -> f b for images of the arrows in the origin (to make the diagrams commute and fulfill the promises of the Functor).
As to representing the right-hand side of the law, you've already been advised to take it literally: pure ($ 5) <*> Just (+10).
I have to give a (simple) talk about Yesod. And yes... I've never, or only very rarely, used Haskell.
University lecturer... huh.
So I read a book about Yesod, and in some chapters the author uses operators like <$> and <*>.
Can someone explain in easy words what these operators do? It's pretty hard to google for those characters, and I've tried to read the documentation of Control.Applicative, but to be honest, it's hard to grasp for a Haskell beginner.
So I hope someone has a simple answer for me :)
an example of the book where these operators are used:
......
personForm :: Html -> MForm Handler (FormResult Person, Widget)
personForm = renderDivs $ Person
<$> areq textField "Name" Nothing
<*> areq (jqueryDayField def
{ jdsChangeYear = True -- give a year dropdown
, jdsYearRange = "1900:-5" -- 1900 till five years ago
}) "Birthday" Nothing
<*> aopt textField "Favorite color" Nothing
<*> areq emailField "Email address" Nothing
<*> aopt urlField "Website" Nothing
data Person = Person
{ personName :: Text
, personBirthday :: Day
, personFavoriteColor :: Maybe Text
, personEmail :: Text
, personWebsite :: Maybe Text
}
deriving Show
.....
.....................................
Hey,
Thanks a lot, and amazingly most of the answers are useful. Sadly I can only hit "solved" on one answer.
Thanks a lot, the tutorial (that I really didn't find on Google) is pretty good
I am always very careful when making answers that are made up mostly of links, but this is one amazing tutorial that explains Functors, Applicatives and gives a bit of understanding on Monads.
The simplest answer is the type, of course. These operators come from the typeclasses Functor and its subclass Applicative.
class Functor f where
fmap :: (a -> b) -> (f a -> f b)
(<$>) = fmap -- synonym
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
The simplest intuitive answer is that Functors and Applicatives let you annotate simple values with "metadata" and (<$>), (<*>), and friends let you transform your "regular" value-level functions to work on "annotated" values.
go x y -- works if x and y are regular values
go <$> pure x <*> pure y -- uses `pure` to add "default" metadata
-- but is otherwise identical to the last one
Like any simple answer, it's kind of a lie, though. "Metadata" is a very oversimplified term. Better ones are "computational context" or "effect context" or "container".
If you're familiar with Monads then you are already very familiar with this concept. All Monads are Applicatives and so you can think of (<$>) and (<*>) as providing an alternative syntax for some do notation
do x_val <- x
   y_val <- y
   return (go x_val y_val)

-- is the same as

go <$> x <*> y
It has fewer symbols and emphasizes the idea of "applying" go to two arguments instead of emphasizing the imperative notion of "get the value that x generates, then get the value that y generates, then apply those values to go, then re-wrap the result" like do syntax does.
One final intuition I can throw out there is to think of Applicative in a very different way. Applicative is equivalent to another class called Monoidal.
class Functor f => Monoidal f where
init :: f () -- similar to pure
prod :: f a -> f b -> f (a, b) -- similar to (<*>)
so that Monoidal Functors let you (a) instantiate them with a trivial value
init :: [()]
init = [()]
init :: Maybe ()
init = Just ()
and also smash two of them together to produce their product
prod :: [a] -> [b] -> [(a, b)]
prod as bs = [(a, b) | a <- as, b <- bs]
prod :: Maybe a -> Maybe b -> Maybe (a, b)
prod (Just a) (Just b) = (Just (a, b))
prod _ _ = Nothing
This means that with a Monoidal functor you can smash a whole lot of values together and then fmap a value-level function over the whole bunch
go <$> maybeInt `prod` (maybeChar `prod` maybeBool) where
go :: (Int, (Char, Bool)) -> Double -- it's pure!
go (i, (c, b)) = ...
which is essentially what you're doing with (<$>) and (<*>), just with fewer tuples
go <$> maybeInt <*> maybeChar <*> maybeBool where
go :: Int -> Char -> Bool -> Double
go i c b = ...
Finally, here's how you convert between the two notions
-- forward
init = pure ()
prod x y = (,) <$> x <*> y
-- back
pure a = const a <$> init
f <*> x = uncurry ($) <$> prod f x
which shows how you can think of (<*>) as taking a normal value-level application ($) and injecting it up into the product inside of the Functor.
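To make that correspondence concrete, here is a small self-contained sketch for Maybe (not library code: the Monoidal class below is hypothetical, and unit stands in for init to avoid clashing with Prelude.init):
-- Hypothetical Monoidal class, with the Maybe instance spelled out.
class Functor f => Monoidal f where
  unit :: f ()                      -- plays the role of `init` above
  prod :: f a -> f b -> f (a, b)

instance Monoidal Maybe where
  unit = Just ()
  prod (Just a) (Just b) = Just (a, b)
  prod _        _        = Nothing

-- pure and <*> recovered from the Monoidal operations:
pure' :: Monoidal f => a -> f a
pure' a = const a <$> unit

ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' f x = uncurry ($) <$> prod f x

-- ap' (Just (+1)) (Just 2) == Just 3;  ap' Nothing (Just 2) == Nothing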
I don't suppose it helps to say that <$> is just an infix synonym for fmap. However, maybe these examples help clarify:
GHCi> (*2) <$> (Just 3)
Just 6
GHCi> (*2) <$> (Nothing)
Nothing
GHCi> (*3) <$> (Right 7)
Right 21
GHCi> (*2) <$> (Left "error")
Left "error"
GHCi> (+ 1) <$> [2,4,6,8]
[3,5,7,9]
Now compare that to this:
GHCi> (*) <$> (Just 2) <*> (Just 5)
Just 10
GHCi> (*) <$> (Just 2) <*> (Nothing)
Nothing
GHCi> (*) <$> (Right 3) <*> (Right 7)
Right 21
GHCi> (*) <$> (Left "error") <*> (Right 7)
Left "error"
GHCi> (+) <$> [1,2,3] <*> [10,20,30]
[11,21,31,12,22,32,13,23,33]
GHCi> (+) <$> [1,2,3] <*> []
[]
And then to this:
GHCi> (Just (*2)) <*> (Just 5)
Just 10
GHCi> (Right (*3)) <*> (Right 7)
Right 21
GHCi> [(+1),(+2),(+3)] <*> [10,20,30]
[11,21,31,12,22,32,13,23,33]
Really, that should show you all you need to know for lecture purposes, assuming you have learned from this that (*) <$> (Just 2) <*> (Just 5) is equivalent to Just (2 * 5)
(In the last set of examples, by the way, the functions on the left-hand side are already wrapped in applicatives.)
Put simply, <$> takes the function on the left and lifts it into the context of the "things in boxes" on the right, so that it can be applied to the things in boxes, in a way that obeys the special rules of the boxes (e.g. Nothing causing the whole chain to fail).
<*> takes a partially-bound function in a box on the left and applies it to the value in a box on the right. A partially bound function being one which has been given some but not all of its arguments. So (*) <$> (Right 3) <*> (Right 7) <*> (Right 4) would fail - with a not-very-helpful error message - because once * has been applied to 3 and 7 it is no longer a partial function and nobody knows what to do with the 4.
Used together, <$> and <*> allow a function to be applied to its arguments, all inside a box. You get the result in a box.
This can all only be done if the box is itself a functor; that is the crucial constraint for all of this. A functor is a type of box for which somebody has defined an fmap function, which takes a function that applies to the contents and turns it into a function that applies to the whole box (while not changing the essential character of the function). If you like, Monads (boxes for things) know how to transform functions so that they can be applied to their things.
If you're not ready to learn about functors, applicatives, and monads yet, this may give you an intuition for how to use <$> and <*>. (I myself learned how to use them by looking at examples, before I really understood that other stuff.) Without the <$> and <*>, the first part of that code would look something like this:
......
personForm :: Html -> MForm Handler (FormResult Person, Widget)
personForm = do
name <- areq textField "Name" Nothing
bday <- areq (jqueryDayField def
{ jdsChangeYear = True -- give a year dropdown
, jdsYearRange = "1900:-5" -- 1900 till five years ago
}) "Birthday" Nothing
colour <- aopt textField "Favorite color" Nothing
email <- areq emailField "Email address" Nothing
url <- aopt urlField "Website" Nothing
renderDivs $ Person name bday colour email url
In other words, <$> and <*> can eliminate the need to create a lot of symbols that we only use once.
Earlier I asked about translating monadic code to use only the applicative functor instance of Parsec. Unfortunately I got several replies which answered the question I literally asked, but didn't really give me much insight. So let me try this again...
Summarising my knowledge so far, an applicative functor is something which is somewhat more restricted than a monad. In the tradition of "less is more", restricting what the code can do increases the possibilities for crazy code manipulation. Regardless, a lot of people seem to believe that using applicative instead of monad is a superior solution where it's possible.
The Applicative class is defined in Control.Applicative, whose Haddock listing helpfully separates the class methods and utility functions with a vast swathe of class instances between them, to make it hard to quickly see everything on screen at once. But the pertinent type signatures are
pure  :: x -> f x
(<*>) :: f (x -> y) -> f x -> f y
(*>)  :: f x -> f y -> f y
(<*)  :: f x -> f y -> f x
(<$>) :: (x -> y) -> f x -> f y
(<$)  :: x -> f y -> f x
Makes perfect sense, right?
Well, Functor already gives us fmap, which is basically <$>. I.e., given a function from x to y, we can map an f x to an f y. Applicative adds two essentially new elements. One is pure, which has roughly the same type as return (and several other operators in various category theory classes). The other is <*>, which gives us the ability to take a container of functions and a container of inputs and produce a container of outputs.
Using the operators above, we can very neatly do something such as
foo <$> abc <*> def <*> ghi
This allows us to take an N-ary function and source its arguments from N functors in a way which generalises easily to any N.
This much I already understand. There are two main things which I do not yet understand.
First, the functions *>, <* and <$. From their types, <* = const, *> = flip const, and <$ could be something similar. Presumably this does not describe what these functions actually do though. (??!)
Second, when writing a Parsec parser, each parsable entity usually ends up looking something like this:
entity = do
var1 <- parser1
var2 <- parser2
var3 <- parser3
...
return $ foo var1 var2 var3...
Since an applicative functor does not allow us to bind intermediate results to variables in this way, I'm puzzled as to how to gather them up for the final stage. I haven't been able to wrap my mind around the idea fully enough in order to comprehend how to do this.
The <* and *> functions are very simple: they work the same way as >>. The <* would work the same way as << except << does not exist. Basically, given a *> b, you first "do" a, then you "do" b and return the result of b. For a <* b, you still first "do" a then "do" b, but you return the result of a. (For appropriate meanings of "do", of course.)
The <$ function is just fmap const. So a <$ b is equal to fmap (const a) b. You just throw away the result of an "action" and return a constant value instead. The Control.Monad function void, which has the type Functor f => f a -> f (), could be written as (() <$).
These three functions are not fundamental to the definition of an applicative functor. (<$, in fact, works for any functor.) This, again, is just like >> for monads. I believe they're in the class to make it easier to optimize them for specific instances.
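A few Maybe examples, for concreteness:
GHCi> Just 3 <* Just "x"
Just 3
GHCi> Just 3 *> Just "x"
Just "x"
GHCi> Just 3 <* Nothing
Nothing
GHCi> 7 <$ Just "x"
Just 7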
When you use applicative functors, you do not "extract" the value from the functor. In a monad, this is what >>= does, and what foo <- ... desugars to. Instead, you pass the wrapped values into a function directly using <$> and <*>. So you could rewrite your example as:
foo <$> parser1 <*> parser2 <*> parser3 ...
If you want intermediate variables, you could just use a let statement:
let var1 = parser1
var2 = parser2
var3 = parser3 in
foo <$> var1 <*> var2 <*> var3
As you correctly surmised, pure is just another name for return. So, to make the shared structure more obvious, we can rewrite this as:
pure foo <*> parser1 <*> parser2 <*> parser3
I hope this clarifies things.
Now just a little note. People do recommend using applicative functor functions for parsing. However, you should only use them if they make more sense! For sufficiently complex things, the monad version (especially with do-notation) can actually be clearer. The reason people recommend this is that
foo <$> parser1 <*> parser2 <*> parser3
is both shorter and more readable than
do var1 <- parser1
var2 <- parser2
var3 <- parser3
return $ foo var1 var2 var3
Essentially, f <$> a <*> b <*> c is like lifted function application. You can imagine the <*> being a replacement for a space (i.e. function application) in the same way that fmap is a replacement for function application. This should also give you an intuitive notion of why we use <$>: it's like a lifted version of $.
I can make a few remarks here, hopefully helpful. This reflects my understanding which itself might be wrong.
pure is unusually named. Usually functions are named referring to what they produce, but in pure x it is x that is pure. pure x produces an applicative functor which "carries" the pure x. "Carries" of course is approximate. An example: pure 1 :: ZipList Int is a ZipList, carrying a pure Int value, 1.
<*>, *>, and <* are not functions, but methods (this answers your first concern). f in their types is not general (like it would be, for functions) but specific, as specified by a specific instance. That's why they are indeed not just $, flip const and const. The specialized type f specifies the semantics of combination. In the usual applicative style programming, combination means application. But with functors, an additional dimension is present, represented by the "carrier" type f. In f x, there is a "contents", x, but there is also a "context", f.
The "applicative functors" style sought to enable the "applicative style" programming, with effects. Effects being represented by functors, carriers, providers of context; "applicative" referring to the normal applicative style of functional application. Writing just f x to denote application was once a revolutionary idea. There was no need for additional syntax anymore, no (funcall f x), no CALL statements, none of this extra stuff - combination was application... Not so, with effects, seemingly - there was again that need for the special syntax, when programming with effects. The slain beast reappeared again.
So came the Applicative Programming with Effects to again make the combination mean just application - in the special (perhaps effectful) context, if they were indeed in such context. So for a :: f (t -> r) and b :: f t, the (almost plain) combination a <*> b is an application of carried contents (or types t -> r and t), in a given context (of type f).
The main distinction from monads is, monads are non-linear. In
do { x <- a
; y <- b x
; z <- c x y
; return
(x, y, z) }
the computation b x depends on x, and c x y depends on both x and y. The functions are nested:
a >>= (\x -> b x >>= (\y -> c x y >>= (\z -> .... )))
If b and c do not depend on the previous results (x, y), this can be made flat by making the computation stages return repackaged, compound data (this addresses your second concern):
a >>= (\x -> b >>= (\y-> return (x,y))) -- `b ` sic
>>= (\(x,y) -> c >>= (\z-> return (x,y,z))) -- `c `
>>= (\(x,y,z) -> ..... )
and this is essentially an applicative style (b, c are fully known in advance, independent of the value x produced by a, etc.). So when your combinations create data that encompass all the information they need for further combinations, and there's no need for "outer variables" (i.e. all computations are already fully known, independent of any values produced by any of the previous stages), you can use this style of combination.
But if your monadic chain has branches dependent on values of such "outer" variables (i.e. results of previous stages of monadic computation), then you can't make a linear chain out of it. It is essentially monadic then.
As an illustration, the first example from that paper shows how the "monadic" function
sequence :: [IO a] → IO [a]
sequence [ ] = return [ ]
sequence (c : cs) = do
{ x <- c
; xs <- sequence cs -- `sequence cs` fully known, independent of `x`
; return
(x : xs) }
can actually be coded in this "flat, linear" style as
sequence :: (Applicative f) => [f a] -> f [a]
sequence [] = pure []
sequence (c : cs) = pure (:) <*> c <*> sequence cs
-- (:) x xs
There's no use here for the monad's ability to branch on previous results.
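You can see the behaviour in GHCi; for Maybe, the library sequence (sequenceA) agrees with the applicative definition above:
GHCi> sequence [Just 1, Just 2, Just 3]
Just [1,2,3]
GHCi> sequence [Just 1, Nothing, Just 3]
Nothing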
A note on Petr Pudlák's excellent answer: in my "terminology" here, his pair is combination without application. It shows that the essence of what Applicative Functors add to plain Functors is the ability to combine. Application is then achieved by the good old fmap. This suggests "combinatory functors" as perhaps a better name (update: in fact, "Monoidal Functors" is the name).
You can view functors, applicatives and monads like this: They all carry a kind of "effect" and a "value". (Note that the terms "effect" and "value" are only approximations - there doesn't actually need to be any side effects or values - like in Identity or Const.)
With Functor you can modify possible values inside using fmap, but you cannot do anything with effects inside.
With Applicative, you can create a value without any effect with pure, and you can sequence effects and combine their values inside. But the effects and values are separate: When sequencing effects, an effect cannot depend on the value of a previous one. This is reflected in <*, <*> and *>: They sequence effects and combine their values, but you cannot examine the values inside in any way.
You could define Applicative using this alternative set of functions:
fmap :: (a -> b) -> (f a -> f b)
pureUnit :: f ()
pair :: f a -> f b -> f (a, b)
-- or even with a more suggestive type (f a, f b) -> f (a, b)
(where pureUnit doesn't carry any effect)
and define pure and <*> from them (and vice versa). Here pair sequences two effects and remembers the values of both of them. This definition expresses the fact that Applicative is a monoidal functor.
Now consider an arbitrary (finite) expression consisting of pair, fmap, pureUnit and some primitive applicative values. We have several rules we can use:
fmap f . fmap g ==> fmap (f . g)
pair (fmap f x) y ==> fmap (\(a,b) -> (f a, b)) (pair x y)
pair x (fmap f y) ==> -- similar
pair pureUnit y ==> fmap (\b -> ((), b)) y
pair x pureUnit ==> -- similar
pair (pair x y) z ==> pair x (pair y z)
Using these rules, we can reorder pairs, push fmaps outwards and eliminate pureUnits, so eventually such an expression can be converted into
fmap pureFunction (x1 `pair` x2 `pair` ... `pair` xn)
or
fmap pureFunction pureUnit
So indeed, we can first collect all effects together using pair and then modify the resulting value inside using a pure function.
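A tiny Maybe check of the second rule above (pairM is a hypothetical stand-in for pair):
-- pairM: the Maybe product, standing in for `pair` above.
pairM :: Maybe a -> Maybe b -> Maybe (a, b)
pairM (Just a) (Just b) = Just (a, b)
pairM _        _        = Nothing

-- The rule `pair (fmap f x) y ==> fmap (\(a,b) -> (f a, b)) (pair x y)`,
-- checked on a concrete example:
lhs, rhs :: Maybe (Int, Bool)
lhs = pairM (fmap (+1) (Just 2)) (Just True)
rhs = fmap (\(a, b) -> (a + 1, b)) (pairM (Just 2) (Just True))
-- lhs == rhs == Just (3, True)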
With Monad, an effect can depend on the value of a previous monadic value. This makes them so powerful.
The answers already given are excellent, but there's one small(ish) point I'd like to spell out explicitly, and it has to do with <*, <$ and *>.
One of the examples was
do var1 <- parser1
var2 <- parser2
var3 <- parser3
return $ foo var1 var2 var3
which can also be written as foo <$> parser1 <*> parser2 <*> parser3.
Suppose that the value of var2 is irrelevant for foo - e.g. it's just some separating whitespace. Then it also doesn't make sense to have foo accept this whitespace only to ignore it. In this case foo should have two parameters, not three. Using do-notation, you can write this as:
do var1 <- parser1
parser2
var3 <- parser3
return $ foo var1 var3
If you wanted to write this using only <$> and <*> it should be something like one of these equivalent expressions:
(\x _ z -> foo x z) <$> parser1 <*> parser2 <*> parser3
(\x _ -> foo x) <$> parser1 <*> parser2 <*> parser3
(\x -> const (foo x)) <$> parser1 <*> parser2 <*> parser3
(const . foo) <$> parser1 <*> parser2 <*> parser3
But that's kind of tricky to get right with more arguments!
However, you can also write foo <$> parser1 <* parser2 <*> parser3. You could call foo the semantic function which is fed the result of parser1 and parser3 while ignoring the result of parser2 in between. The absence of > is meant to be indicative of the ignoring.
If you wanted to ignore the result of parser1 but use the other two results, you can similarly write foo <$ parser1 <*> parser2 <*> parser3, using <$ instead of <$>.
I've never found much use for *>, I would normally write id <$ p1 <*> p2 for the parser that ignores the result of p1 and just parses with p2; you could write this as p1 *> p2 but that increases the cognitive load for readers of the code.
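The same shapes can be tried with Maybe standing in for the parsers:
GHCi> (,) <$> Just 1 <* Just "sep" <*> Just 2       -- middle result ignored
Just (1,2)
GHCi> (,) <$> Just 1 <* Nothing <*> Just 2          -- the ignored step must still succeed
Nothing
GHCi> (,) 0 <$ Just "skipped" <*> Just 2            -- first result ignored, via <$
Just (0,2)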
I learnt this way of thinking just for parsers, but it was later generalised to Applicatives; I think this notation comes from the uuparsing library; at least I used it at Utrecht 10+ years ago.
I'd like to add/reword a couple things to the very helpful existing answers:
Applicatives are "static". In pure f <*> a <*> b, b does not depend on a, and so can be analyzed statically. This is what I was trying to show in my answer to your previous question (but I guess I failed -- sorry) -- that since there was actually no sequential dependence of parsers, there was no need for monads.
The key difference that monads bring to the table is (>>=) :: Monad m => m a -> (a -> m b) -> m b, or, alternatively, join :: Monad m => m (m a) -> m a. Note that whenever you have x <- y inside do notation, you're using >>=. These say that monads allow you to use a value "inside" a monad to produce a new monadic computation, "dynamically". This cannot be done with an Applicative. Examples:
-- parse two in a row of the same character
char >>= \c1 ->
char >>= \c2 ->
guard (c1 == c2) >>
return c1
-- parse a digit followed by a number of chars equal to that digit
-- assuming: 1) `digit`'s value is an Int,
-- 2) there's a `manyN` combinator
-- examples: "3abcdef" -> Just {rest: "def", chars: "abc"}
-- "14abcdef" -> Nothing
digit >>= \d ->
manyN d char
-- note how the value from the first parser is pumped into
-- creating the second parser
-- creating 'half' of a cartesian product
[1 .. 10] >>= \x ->
[1 .. x] >>= \y ->
return (x, y)
Lastly, Applicatives enable lifted function application as mentioned by @WillNess.
To try to get an idea of what the "intermediate" results look like, you can look at the parallels between normal and lifted function application. Assuming add2 = (+) :: Int -> Int -> Int:
-- normal function application
add2 :: Int -> Int -> Int
add2 3 :: Int -> Int
(add2 3) 4 :: Int
-- lifted function application
pure add2 :: [] (Int -> Int -> Int)
pure add2 <*> pure 3 :: [] (Int -> Int)
pure add2 <*> pure 3 <*> pure 4 :: [] Int
-- more useful example
[(+), (*)]
[(+), (*)] <*> [1 .. 5]
[(+), (*)] <*> [1 .. 5] <*> [3 .. 8]
Unfortunately, you can't meaningfully print the result of pure add2 <*> pure 3 for the same reason that you can't for add2 ... frustrating. You may also want to look at the Identity and its typeclass instances to get a handle on Applicatives.
I have been working through the great good book, but I am struggling slightly with Applicative Functors.
In the following example max is applied to the contents of the two Maybe functors and returns Just 6.
max <$> Just 3 <*> Just 6
Why in the following example is Left "Hello" returned instead of the contents of the Either functors: Left "Hello World"?
(++) <$> Left "Hello" <*> Left " World"
It's because the type parameter in the Functor instance (and Applicative etc.) is the second type parameter. In
Either a b
the a type, and hence the Left values, are not affected by functorial or applicative operations, because they are considered failure cases or otherwise inaccessible.
instance Functor (Either a) where
fmap _ (Left x) = Left x
fmap f (Right y) = Right (f y)
Use Right,
(++) <$> Right "Hello" <*> Right " World"
to get concatenation.
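Checked in GHCi:
GHCi> (++) <$> Right "Hello" <*> Right " World" :: Either String String
Right "Hello World"
GHCi> (++) <$> Left "Hello" <*> Left " World" :: Either String String
Left "Hello"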
To add to Daniel's excellent answer, there are a couple points I'd like to make:
First, here's the Applicative instance:
instance Applicative (Either e) where
pure = Right
Left e <*> _ = Left e
Right f <*> r = fmap f r
You can see that this is 'short-circuiting' -- as soon as it hits a Left, it aborts and returns that Left. You can check this with poor man's strictness analysis:
ghci> (++) <$> Left "Hello" <*> undefined
Left "Hello" -- <<== it's not undefined :) !!
ghci> (++) <$> Right "Hello" <*> undefined
*** Exception: Prelude.undefined -- <<== undefined ... :(
ghci> Left "oops" <*> undefined <*> undefined
Left "oops" -- <<== :)
ghci> Right (++) <*> undefined <*> undefined
*** Exception: Prelude.undefined -- <<== :(
Second, your example is slightly tricky. In general, the type of the function and the e in Either e are not related. Here's the type of <*>:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
If we make the substitution f = Either e, we get:
(<*>) :: Either e (a -> b) -> Either e a -> Either e b
Although in your example, e and a match, in general they won't, which means you can't polymorphically implement an Applicative instance for Either e which applies the function to a left-hand argument.