When reading stuff on Haskell, I sometimes come across the adjective "applicative", but I have not been able to find a sufficiently clear definition of this adjective (as opposed to, say, Haskell's Applicative class). I would like to learn to recognize a piece of code/algorithm/data structure, etc. that is "applicative", just like I can recognize one that is "recursive". Some contrasting examples of "applicative" vs. whatever the term intends to draw a distinction from (which I hope is something more meaningful in its own right than "non-applicative") would be much appreciated.
Edit: for example, why was the word "applicative" chosen to name the class, and not some other name? What is it about this class that makes the name Applicative such a good fit for it (even at the price of its obscurity)?
Thanks!
It's not clear what "applicative" is being used to mean without knowing the context.
If it's truly not referring to applicative functors (i.e. Applicative), then it's probably referring to the form of application itself: f a b c is an applicative form, and this is where applicative functors get their name from: f <$> a <*> b <*> c is analogous. (Indeed, idiom brackets take this connection further, by letting you write it as (| f a b c |).)
Similarly, "applicative languages" can be contrasted with languages that are not primarily based on the application of function to argument (usually in prefix form); concatenative ("stack based") languages aren't applicative, for instance.
To answer the question of why applicative functors are called what they are in depth, I recommend reading
Applicative programming with effects; the basic idea is that a lot of situations call for something like "enhanced application": applying pure functions within some effectful context. Compare these definitions of map and mapM:
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
mapM _ [] = return []
mapM f (x:xs) = do
  x' <- f x
  xs' <- mapM f xs
  return (x' : xs')
with mapA (usually called traverse):
mapA :: (Applicative f) => (a -> f b) -> [a] -> f [b]
mapA _ [] = pure []
mapA f (x:xs) = (:) <$> f x <*> mapA f xs
As you can see, mapA is much more concise, and more obviously related to map (even more so if you use the prefix form of (:) in map too). Indeed, using the applicative functor notation even when you have a full Monad is common in Haskell, since it's often much more clear.
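For instance, here is mapA at work with Maybe as the effect (a small usage sketch; safeDiv is a hypothetical helper):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

-- mapA (safeDiv 12) [1,2,3] == Just [12,6,4]
-- mapA (safeDiv 12) [1,0,3] == Nothing   (one failure fails the whole traversal)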
Looking at the definition helps, too:
class (Functor f) => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
Compare the type of (<*>) to the type of application: ($) :: (a -> b) -> a -> b. What Applicative offers is a generalised "lifted" form of application, and code using it is written in an applicative style.
More formally, as mentioned in the paper and pointed out by ertes, Applicative is a generalisation of the SK combinators; pure is a generalisation of K :: a -> (r -> a) (aka const), and (<*>) is a generalisation of S :: (r -> a -> b) -> (r -> a) -> (r -> b). The r -> a part is simply generalised to f a; the original types are obtained with the Applicative instance for ((->) r).
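Written out as standalone functions, the correspondence is easy to see (a sketch; the names pureFn and apFn are mine, but the bodies mirror the ((->) r) instance in base):

pureFn :: a -> (r -> a)
pureFn x = \_ -> x            -- K, i.e. const

apFn :: (r -> a -> b) -> (r -> a) -> (r -> b)
apFn f g = \r -> f r (g r)    -- S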
As a practical matter, pure also allows you to write applicative expressions in a more uniform manner: pure f <*> effectful <*> pure x <*> effectful as opposed to (\a b -> f a x b) <$> effectful <*> effectful.
On a more fundamental level one could say that "applicative" means working in some form of the SK calculus. This is also what the Applicative class is about. It gives you the combinators pure (a generalization of K) and <*> (a generalization of S).
Your code is applicative when it is expressed in such a style. For example the code
liftA2 (+) sin cos
is an applicative expression of
\x -> sin x + cos x
Of course in Haskell the Applicative class is the main construct for programming in an applicative style, but even in a monadic or arrowic context you can write applicatively:
return (+) `ap` sin `ap` cos
arr (uncurry (+)) . (sin &&& cos)
Whether the last piece of code is applicative is controversial though, because one might argue that applicative style needs currying to make sense.
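For what it's worth, all three spellings denote the same function; a quick sketch (ap comes from Control.Monad, arr and (&&&) from Control.Arrow):

import Control.Applicative (liftA2)
import Control.Arrow (arr, (&&&))
import Control.Monad (ap)

f1, f2, f3 :: Double -> Double
f1 = liftA2 (+) sin cos
f2 = return (+) `ap` sin `ap` cos
f3 = arr (uncurry (+)) . (sin &&& cos)
-- f1 x == f2 x and f2 x == f3 x for every x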
Related
I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is, because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will indeed also in practice obey the functor laws, i.e. it will be the fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
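Concretely, the free theorem for this type says that any parametric t :: Functor f => (a -> b) -> f a -> f b factors through the canonical fmap (a sketch of the argument, not a proof):

-- For any such t, parametricity gives
--     t f  ==  fmap f . t id  ==  t id . fmap f
-- so if t also satisfies the first functor law, t id == id,
-- then t == fmap.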
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers no bigger than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
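For instance, two instantiations of the schema above (a usage sketch; the names are mine):

mapAgain :: (a -> b) -> [a] -> [b]
mapAgain = maybeIt'sFmapLol (\n -> [0 .. n-1])            -- recovers map

backwards :: (a -> b) -> [a] -> [b]
backwards = maybeIt'sFmapLol (\n -> reverse [0 .. n-1])   -- map, then reverse

-- mapAgain  (+1) [1,2,3] == [2,3,4]
-- backwards (+1) [1,2,3] == [4,3,2]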
In many articles I have read that the monadic >>= operator is a way to represent function composition. But to me it seems closer to some kind of advanced function application:
($) :: (a -> b) -> a -> b
(>>=) :: Monad m => m a -> (a -> m b) -> m b
For composition we have
(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
Please clarify.
Clearly, >>= is not a way to represent function composition. Function composition is simply done with the (.) operator. However, I don't think any of the articles you've read meant this, either.
What they meant was “upgrading” function composition to work directly with “monadic functions”, i.e. functions of the form a -> m b. The technical term for such functions is Kleisli arrows, and indeed they can be composed with <=< or >=>. (Alternatively, you can use the Category instance, then you can also compose them with . or >>>.)
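For reference, Kleisli composition can be written directly; this is essentially its definition in Control.Monad:

(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g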
However, talking about arrows / categories tends to be confusing, especially to beginners, just like point-free definitions of ordinary functions often are. Luckily, Haskell also allows us to express functions in a more familiar style that focuses on the results of functions, rather than on the functions themselves as abstract morphisms†. It's done with lambda abstraction: instead of
q = h . g . f
you may write
q = (\x -> (\y -> (\z -> h z) (g y)) (f x))
...of course the preferred style would be (this being only syntactic sugar for lambda abstraction!)‡
q x = let y = f x
          z = g y
      in  h z
Note how, in the lambda expression, composition was essentially replaced by application:
q = \x -> (\y -> (\z -> h z) $ g y) $ f x
Adapted to Kleisli arrows, this means instead of
q = h <=< g <=< f
you write
q = \x -> (\y -> (\z -> h z) =<< g y) =<< f x
which again looks of course much nicer with flipped operators or syntactic sugar:
q x = do y <- f x
         z <- g y
         h z
So, indeed, =<< is to <=< as $ is to (.). The reason it still makes sense to call it a composition operator is that, apart from "applying to values", the >>= operator also does the nontrivial bit of Kleisli arrow composition, which function composition doesn't need: joining the monadic layers.
†The reason this works is that Hask is a cartesian closed category, in particular a well-pointed category. In such a category, arrows can, broadly speaking, be defined by the collection of all their results when applied to simple argument values.
‡#adamse remarks that let is not really syntactic sugar for lambda abstraction. This is particularly relevant in case of recursive definitions, which you can't directly write with a lambda. But in simple cases like this here, let does behave like syntactic sugar for lambdas, just like do notation is syntactic sugar for lambdas and >>=. (BTW, there's an extension which allows recursion even in do notation... it circumvents the lambda-restriction by using fixed-point combinators.)
Just as an illustration, consider this:
($)   ::                      (a ->   b) ->   a ->   b
let g = g in (g $)   ::                       a ->   b      -- g ::   (a ->   b)

(<$>) :: Functor f =>         (a ->   b) -> f a -> f b
let g = g in (g <$>) ::                     f a -> f b      -- g ::   (a ->   b)

(<*>) :: Applicative f =>   f (a ->   b) -> f a -> f b
let h = h in (h <*>) ::                     f a -> f b      -- h :: f (a ->   b)

(=<<) :: Monad m =>           (a -> m b) -> m a -> m b
let k = k in (k =<<) ::                     m a -> m b      -- k ::   (a -> m b)
So yes, each one of those, (g <$>), (h <*>) or (k =<<), is some kind of a function application, promoted into either Functor, Applicative Functor, or a Monad "context". And (g $) is just a regular kind of application of a regular kind of function.
With Functors, functions have no influence on the f component of the overall thing. They work strictly on the inside and can't influence the "wrapping".
With Applicatives, the functions come wrapped in an f, which wrapping combines with that of an argument (as part of the application) to produce the wrapping of the result.
With Monads, functions themselves now produce the wrapped results, pulling their arguments somehow from the wrapped argument (as part of the application).
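A minimal illustration of the three levels with Maybe as the context (the functions are arbitrary examples):

ex1 :: Maybe Int
ex1 = (+1) <$> Just 2                             -- Just 3; the Maybe layer is untouched

ex2 :: Maybe Int
ex2 = Just (+1) <*> Just 2                        -- Just 3; the two wrappings combine

ex3 :: Maybe Int
ex3 = (Nothing :: Maybe (Int -> Int)) <*> Just 2  -- Nothing; the function's wrapping matters too

ex4 :: Maybe Int
ex4 = (\x -> if x > 0 then Just (x + 1) else Nothing) =<< Just 2  -- Just 3; the function produces the wrapping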
We can see the three operators as a kind of marking on a function, the way mathematicians write, say, f' or f^ or f* (and in the original work by Eugenio Moggi(1), f* is exactly what was used, denoting the promoted function (f =<<)).
And of course, with the promoted functions :: f a -> f b, we get to chain them, because now the types line up. The promotion is what allows the composition.
(1) "Notions of computation and monads", Eugenio Moggi, July 1991.
more about compositionality, with a picture: Monads with Join() instead of Bind()
So the functor is "magically working inside" "the pipes"; applicative is "prefabricated pipes built from components in advance"; and monads are "building pipe networks as we go".
I keep reusing lambda expressions such as
\x -> (f x, g x)
where I apply the same input to two functions and encapsulate the result in a pair. I can write a function capturing this
combine :: (a -> b) -> (a -> c) -> a -> (b,c)
combine f g x = (f x, g x)
Now the above lambda expression is just combine f g. I have two questions.
I'm interested to know if there is a standard library function that does this that I just can't find.
Out of curiosity, I'd like to rewrite this function in point-free style, but I'm having a lot of trouble with it.
Control.Arrow has the function (&&&) for this. It has a "more general" type, which unfortunately means that Hoogle doesn't find it (maybe this should be considered a bug in Hoogle?).
You can usually figure this sort of thing out automatically with pointfree, which lambdabot in #haskell has as a plugin.
For example:
<shachaf> #pl combine f g x = (f x, g x)
<lambdabot> combine = liftM2 (,)
Where liftM2 with the (r ->) instance of Monad has type (a -> b -> c) -> (r -> a) -> (r -> b) -> r -> c. Of course, there are many other ways of writing this point-free, depending on what primitives you allow.
I'm interested to know if there is a standard library function that does this that I just can't find.
It's easy to miss because of the type class, but look at Control.Arrow. Plain Arrows can't be curried or applied, so the Arrow combinators are pointfree by necessity. The one you want is this:
(&&&) :: (Arrow a) => a b c -> a b c' -> a b (c, c')
which, specialized to (->), reads:
(&&&) :: (b -> c) -> (b -> c') -> b -> (c, c')
There are other, similar functions, such as the equivalent operation for Either, which specialized to (->) looks like this:
(|||) :: (a -> c) -> (b -> c) -> Either a b -> c
Which is the same as either.
Out of curiosity, I'd like to rewrite this function in point-free style, but I'm having a lot of trouble with it.
Since you're duplicating an input, you need some way of doing that pointfree--the most common way is via the Applicative or Monad instance for (->), for example \f g -> (,) <$> f <*> g. This is essentially an implicit, inline Reader monad, and the argument being split up is the "environment" value. Using this approach, join f x becomes f x x, pure or return become const, fmap becomes (.), and (<*>) becomes the S combinator \f g x -> f x (g x).
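Putting that together, here is a sketch of those correspondences for the function instance (the helper names are mine):

import Control.Monad (join)

combine :: (a -> b) -> (a -> c) -> a -> (b, c)
combine f g = (,) <$> f <*> g    -- the implicit Reader: the argument is the "environment"

dup :: (a -> a -> b) -> a -> b
dup = join                       -- join f x == f x x

always :: b -> a -> b
always = pure                    -- pure == const for ((->) r)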
There are actually quite a few ways of doing this. The most common way is to use the (&&&) function from Control.Arrow:
f &&& g
However, often you have more functions or need to pass the result to another function, in which case it is much more convenient to use applicative style. Then
uncurry (+) . (f &&& g)
becomes
liftA2 (+) f g
As noted, this can be used with more than two functions:
liftA3 zip3 f g h
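For example (a small sketch):

import Control.Applicative (liftA3)

triples :: [Int] -> [(Int, Int, Int)]
triples = liftA3 zip3 id (map (*2)) (map (*3))
-- triples [1,2] == [(1,2,3),(2,4,6)]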
I'm reading Conor McBride and Ross Paterson's "Functional Pearl / Idioms: applicative programming with effects" (the new version, with "idioms" in the title). I'm having a little difficulty with Exercise 4, which is explained below. Any hints would be much appreciated (especially: should I start by writing fmap and join, or return and >>=?).
Problem Statement
You want to create an instance Monad [] where
return x = repeat x
and ap = zapp.
Standard library functions
As on p. 2 of the paper, ap applies a monadic function-value to a monadic value.
ap :: Monad m => m (s -> t) -> m s -> m t
ap mf ms = do
  f <- mf
  s <- ms
  return (f s)
I expanded this in canonical notation to,
ap mf ms = mf >>= (\f -> (ms >>= \s -> return (f s)))
The list-specific function zapp ("zippy application") applies a function from one list to a corresponding value in another, namely,
zapp (f:fs) (s:ss) = f s : zapp fs ss
zapp _ _ = []
My difficulties
Note that in the expanded form, mf :: m (a -> b) is a list of functions [(a -> b)] in our case. So, in the first application of >>=, we have
(f:fs) >>= mu
where mu = (\f -> (ms >>= \s -> return (f s))). Now, we can call fs >>= mu as a subroutine, but this doesn't know to remove the first element of ms (recall that we want the resulting list to be [f1 s1, f2 s2, ...]). I tried to hack something together, but, as predicted, it didn't work. Any help would be much appreciated.
Thanks in advance!
Edit 1
I think I got it to work; first I rewrote ap with fmap and join, as user "comonad" suggested.
My leap of faith was assuming that fmap = map. If anyone can explain how to get there, I'd appreciate it very much. After this, it's clear that join works on the list of lists user "comonad" suggested, and should be the diagonal, \x -> zipWith ((!!) . unL) x [0..]. My complete code is this:
newtype L a = L [a] deriving (Eq, Show, Ord)

unL (L lst) = lst

liftL :: ([a] -> [b]) -> L a -> L b
liftL f = L . f . unL

joinL :: L (L a) -> L a
joinL = liftL $ \x -> zipWith ((!!) . unL) x [0..]

instance Functor L where
  fmap f = liftL (map f)

instance Monad L where
  return x = L $ repeat x
  m >>= g  = joinL (fmap g m)
Hopefully that's right (it seems to be the "solution" on p. 18 of the paper). Thanks for the help, everyone!
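A quick sanity check that the instance applies functions pointwise, like zapp (a sketch, not run here):

-- unL (L [(+1), (*2), subtract 3] `ap` L [10,20,30]) == [11,40,27]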
Hm. I can't help but think this exercise is a little bit unfair as presented.
Exercise 4 (the colist Monad)
Although repeat and zapp are not the return and ap of the usual Monad [] instance,
they are none the less the return and ap of an alternative monad, more suited to
the coinductive interpretation of []. What is the join :: [[x]] → [x] of this monad?
Comment on the relative efficiency of this monad’s ap and our zapp.
First, I'm fairly certain that the monad instance in question is not valid for [] in general. When they say "the coinductive interpretation", I suspect this refers to infinite lists. The instance is actually valid for finite lists in certain cases, but not for arbitrary lists in general.
So that's your first, very general, hint--why would a monad instance only be valid for certain lists, particularly infinite ones?
Here's your second hint: fmap and return are trivial given other definitions earlier in the paper. You already have return; fmap is only slightly less obvious.
Furthermore, (>>=) has an easy implementation in terms of the other functions, as with any Monad, which leaves join as the crux of the matter. In most cases (>>=) is more natural for programming with, but join is more conceptually fundamental and in this case, I think, more straightforward to analyze. So I recommend working on that, and forgetting about (>>=) for now. Once you have an implementation, you can go back and reconstruct (>>=) and check the monad laws to make sure it all works properly.
Finally, suppose for a moment that you have fmap available, but nothing else. Given values with type [a -> b] and [a], you can combine them to get something of type [[b]]. The type of join here is [[a]] -> [a]. How might you write join such that you get the same result here that you would from using zapp on the original values? Note that the question about relative efficiency is, as well as a question, a clue about the implementation.
I just thought I should clarify that the version with exercises and "Idioms" in the title is a rather earlier draft of the paper which eventually appeared in JFP. At that time, I mistakenly thought that colists (by which I mean possibly infinite, possibly finite lists) were a monad in a way which corresponds to zapp: there is a plausible candidate for the join (alluded to in other answers) but Jeremy Gibbons was kind enough to point out to us that it does not satisfy the monad laws. The counterexamples involve "ragged" lists of lists with varying finite lengths. Correspondingly, in the JFP article, we stood corrected. (We were rather happy about it, because we love to find applicative functors whose (<*>) is not the ap of a Monad.)
The necessarily infinite lists (i.e. streams), by ruling out the ragged cases, do indeed form a monad whose ap behaves like zapp. For a clue, note that Stream x is isomorphic to Nat -> x.
My apologies for the confusion. It's sometimes dangerous leaving old, unfinished drafts (replete with errors) lying (ha ha) around on the web.
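For the curious, here is a sketch of the stream version, where the diagonal join does satisfy the monad laws (a hand-rolled Stream type; the names are mine):

data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

sreturn :: a -> Stream a
sreturn x = Cons x (sreturn x)            -- the constant stream

sjoin :: Stream (Stream a) -> Stream a    -- take the diagonal
sjoin (Cons (Cons x _) xss) = Cons x (sjoin (fmap stail xss))
  where stail (Cons _ xs) = xs

The resulting ap applies functions pointwise, exactly like zapp on streams.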
The minimal complete definition of a Monad is either fmap+return+join or return+(>>=). You can implement the one with the other:
(>>=) :: Monad m => m a -> (a->m b) -> m b
(>>=) ma amb = join $ fmap amb ma
fmap :: Monad m => (a->b) -> m a -> m b
fmap f ma = ma >>= (return . f)
join :: Monad m => m (m a) -> m a
join mma = mma >>= id
Now, the implementation of ap can be rewritten in terms of join and fmap:
ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do
  f <- mf
  a <- ma
  return (f a)

ap mf ma = do
  f <- mf
  fmap f ma

ap mf ma = join $ fmap (flip fmap ma) mf
In the exercise, the semantics of fmap and return and ap are given.
The rest will be obvious, as soon as you examine one example:
ap [f1,f2,f3...] [1,2,3...]
    = join $ fmap (flip fmap [1,2,3...]) [f1,f2,f3...]
    = join $ [ [(f1 1),  f1 2 ,  f1 3  ...]
             , [ f2 1 , (f2 2),  f2 3  ...]
             , [ f3 1 ,  f3 2 , (f3 3) ...]
             ...
             ]
    = [ (f1 1)
      , (f2 2)
      , (f3 3)
      ...
      ]
According to the Typeclassopedia (among other sources), Applicative logically belongs between Monad and Pointed (and thus Functor) in the type class hierarchy, so we would ideally have something like this if the Haskell prelude were written today:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Pointed f where
  pure :: a -> f a

class Pointed f => Applicative f where
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  -- either the traditional bind operation
  (>>=) :: m a -> (a -> m b) -> m b
  -- or the join operation, which together with fmap is enough
  join :: m (m a) -> m a
  -- or both, with mutual default definitions
  m >>= f = join (fmap f m)
  join x  = x >>= id
  -- with return replaced by the inherited pure
  -- ignoring fail for the purposes of discussion
(I re-typed those default definitions from the explanation at Wikipedia, so any errors are my own; the point is that such definitions are at least possible in principle.)
As the libraries are currently defined, we have:
liftA :: (Applicative f) => (a -> b) -> f a -> f b
liftM :: (Monad m) => (a -> b) -> m a -> m b
and:
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
ap :: (Monad m) => m (a -> b) -> m a -> m b
Note the similarity between these types within each pair.
My question is: are liftM (as distinct from liftA) and ap (as distinct from <*>) simply a result of the historical reality that Monad wasn't designed with Pointed and Applicative in mind? Or are they in some other behavioral way (potentially, for some legal Monad definitions) distinct from the versions that only require an Applicative context?
If they are distinct, could you provide a simple set of definitions (obeying the laws required of Monad, Applicative, Pointed, and Functor definitions described in the Typeclassopedia and elsewhere but not enforced by the type system) for which liftA and liftM behave differently?
Alternatively, if they are not distinct, could you prove their equivalence using those same laws as premises?
liftA, liftM, and fmap should all be the same function (as is (.), for the function functor), and they must be if they satisfy the functor law:
fmap id = id
However, this is not checked by Haskell.
Now for Applicative. It's possible for ap and <*> to be distinct for some functors simply because there could be more than one implementation that satisfies the types and the laws. For example, List has more than one possible Applicative instance. You could declare an applicative as follows:
instance Applicative [] where
  (f:fs) <*> (x:xs) = f x : fs <*> xs
  _      <*> _      = []
  pure              = repeat
The ap function would still be defined as liftM2 id, which is the Applicative instance that comes for free with every Monad. But here you have an example of a type constructor having more than one Applicative instance, both of which satisfy the laws. But if your monads and your applicative functors disagree, it's considered good form to have different types for them. For example, the Applicative instance above does not agree with the monad for [], so you should really say newtype ZipList a = ZipList [a] and then make the new instance for ZipList instead of [].
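That alternative instance is exactly what the standard library's ZipList newtype packages up; its definition is essentially:

newtype ZipList a = ZipList { getZipList :: [a] }

instance Functor ZipList where
  fmap f (ZipList xs) = ZipList (map f xs)

instance Applicative ZipList where
  pure = ZipList . repeat
  ZipList fs <*> ZipList xs = ZipList (zipWith ($) fs xs)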
They can differ, but they shouldn't.
They can differ because they can have different implementations: one is defined in an instance Applicative while the other is defined in an instance Monad. But if they indeed differ, then I'd say the programmer who wrote those instances wrote misleading code.
You are right: the functions exist as they do for historical reasons. People have strong ideas about how things should have been.