I know that the Applicative class is described in category theory as a "lax monoidal functor", but I've never heard the term "lax" before, and the nLab page on lax functors mentions a bunch of stuff I don't recognize at all, re: bicategories and things that I didn't know we cared about in Haskell. If it is actually about bicategories, can someone give me a plebeian view of what that means? Otherwise, what is "lax" doing in this name?
Let's switch to the monoidal view of Applicative:
unit :: () -> f ()
mult :: (f s, f t) -> f (s, t)
pure :: x -> f x
pure x = fmap (const x) (unit ())
(<*>) :: f (s -> t) -> f s -> f t
ff <*> fs = fmap (uncurry ($)) (mult (ff, fs))
For a strong monoidal functor, unit and mult must be isomorphisms. The impact of "lax" is to drop that requirement.
E.g., (up to the usual naivete) (->) a is strong monoidal, but [] is only lax monoidal.
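To make "lax" tangible, here is a small sketch with those two operations wrapped in a hypothetical class; the class name and these instances are only for illustration, not from any library:

class Functor f => LaxMonoidal f where
  unit :: () -> f ()
  mult :: (f s, f t) -> f (s, t)

-- For ((->) a) both operations are invertible (up to the usual naivete):
-- unit is undone by const (), and mult is undone by \h -> (fst . h, snd . h).
instance LaxMonoidal ((->) a) where
  unit () = \_ -> ()
  mult (f, g) = \a -> (f a, g a)

-- For [] there is no way back: mult forms the cartesian product, and e.g.
-- mult ([], "ab") and mult ([], "xyz") both collapse to [], so no inverse
-- can recover the original pair of lists. Hence [] is only lax monoidal.
instance LaxMonoidal [] where
  unit () = [()]
  mult (xs, ys) = [ (x, y) | x <- xs, y <- ys ]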
Applicatives are often presented as a way to lift multi-argument functions
into a functor and apply functor values to it. But I wonder if there is some
subtle additional power stemming from the fact that it can do so by lifting
functions that return a function and applying the function arguments one at
a time.
Imagine instead we define an interface based on lifting functions whose argument is a tuple of arguments:
-- from Functor
fmap :: (a -> b) -> F a -> F b

-- from Applicative
pure :: a -> F a

-- combine multiple functor values into a functor of a tuple
tuple1 :: F a -> F (a)
tuple2 :: F a -> F b -> F (a, b)
tuple3 :: F a -> F b -> F c -> F (a, b, c)
-- (etc ...)

-- lift multi-argument functions (that take a tuple as input)
ap_tuple1 :: ((a) -> b) -> F (a) -> F b
ap_tuple2 :: ((a, b) -> c) -> F (a, b) -> F c
ap_tuple3 :: ((a, b, c) -> d) -> F (a, b, c) -> F d
-- (etc ...)
Assume we had the corresponding tuple function defined for every sized tuple we may encounter.
Would this interface be as powerful as the Applicative interface, given that it allows lifting (and applying to) multi-argument functions BUT doesn't allow lifting (or applying to) functions that return a function? Obviously one can curry functions that take a tuple as an argument so they can be lifted in an applicative, and one can uncurry functions that return a function in order to lift them into the hypothetical implementation above. But to my mind there is a subtle difference in power. Is there any difference? (Assuming the question even makes sense.)
You've rediscovered the monoidal presentation of Applicative. It looks like this:
class Functor f => Monoidal f where
  (>*<) :: f a -> f b -> f (a, b)
  unit :: f ()
It's isomorphic to Applicative via:
(>*<) = liftA2 (,)
unit = pure ()
pure x = x <$ unit
f <*> x = fmap (uncurry ($)) (f >*< x)
By the way, your ap_tuple functions are all just fmap. The "hard" part with multiple values is combining them together. Splitting them back into pieces is "easy".
Yes, this is just as powerful. Notice that pure and tuple1 are the same. Further, everything higher than tuple2 is recovered from tuple2 and fmap:
tuple3 x y z = repair <$> tuple2 (tuple2 x y) z
  where repair ((a, b), c) = (a, b, c)
tuple4 w x y z = repair <$> tuple2 (tuple2 w x) (tuple2 y z)
  where repair ((a, b), (c, d)) = (a, b, c, d)
-- etc.
Also, all of the ap_tuples are just fmap:
ap_tuple1 = fmap
ap_tuple2 = fmap
ap_tuple3 = fmap
-- ...
Renaming prod = tuple2, your question boils down to
Is
class Functor f => Applicative f where
  pure :: a -> f a
  prod :: f a -> f b -> f (a, b)
equivalent to
class Functor f => Applicative f where
  pure :: a -> f a
  liftA2 :: (a -> b -> c) -> f a -> f b -> f c
?
And you might already see that the answer is yes. prod is just a specialization of liftA2
prod = liftA2 (,)
But (,) is "natural" in the sense that it doesn't "delete" anything, so you can recover liftA2 just by destructuring the data back out:
liftA2 f x y = f' <$> prod x y
  where f' (a, b) = f a b
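To make that concrete, here is a small sketch at Maybe (the functor is chosen purely for illustration); liftA2 recovered from prod agrees with the library liftA2:

import Control.Applicative (liftA2)

prodMaybe :: Maybe a -> Maybe b -> Maybe (a, b)
prodMaybe = liftA2 (,)

liftA2' :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
liftA2' f x y = f' <$> prodMaybe x y
  where f' (a, b) = f a b

-- liftA2' (+) (Just 1) (Just 2) == Just 3 == liftA2 (+) (Just 1) (Just 2)
-- liftA2' (+) (Just 1) Nothing  == Nothing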
In Haskell, class Monad is declared as:
class Applicative m => Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  return = pure
How can I show that Monad is actually Applicative, which is declared like this?
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
Specifically, how can I write pure and <*> in terms of return and >>=?
How can I show that Monad is actually Functor, which is declared like this?
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Specifically, how can I write fmap in terms of return and >>=?
These are all in the documentation.
Specifically, how can I write pure and <*> in terms of return and >>=?
See http://hackage.haskell.org/package/base-4.12.0.0/docs/Control-Monad.html#t:Monad, specifically this section:
Furthermore, the Monad and Applicative operations should relate as follows:
pure = return
(<*>) = ap
and note that ap was a standard Monad function long before Applicative was introduced as a standard part of the language; it is defined as ap m1 m2 = do { x1 <- m1; x2 <- m2; return (x1 x2) }
Specifically, how can I write fmap in terms of return and >>=?
The Control.Applicative documentation says:
As a consequence of these laws, the Functor instance for f will satisfy
fmap f x = pure f <*> x
Which, of course, together with what I quoted above, you can use to implement fmap in terms of return and >>=.
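Spelled out directly, here is a minimal sketch of those definitions in terms of return and (>>=); the primed names are just for illustration, and these are essentially the standard liftM and ap:

fmap' :: Monad m => (a -> b) -> m a -> m b
fmap' f m = m >>= \x -> return (f x)        -- i.e. liftM

pure' :: Monad m => a -> m a
pure' = return

ap' :: Monad m => m (a -> b) -> m a -> m b
ap' mf mx = mf >>= \f -> mx >>= \x -> return (f x)   -- i.e. ap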
And as @duplode points out, there are also liftM for Monads and liftA for Applicatives, which are (essentially, although they're not defined literally that way) synonyms of fmap, specialised to their particular type classes.
I can readily enough define general Functor and Monad classes in Haskell:
class (Category s, Category t) => Functor s t f where
  map :: s a b -> t (f a) (f b)

class Functor s s m => Monad s m where
  pure :: s a (m a)
  join :: s (m (m a)) (m a)
  join = bind id
  bind :: s a (m b) -> s (m a) (m b)
  bind f = join . map f
I'm reading this post which explains that an applicative functor is a lax (closed or monoidal) functor. It does so in terms of an (exponential or monoidal) bifunctor. I know that in the Haskell category every Monad is Applicative; how can we generalize? How should we choose the (exponential or monoidal) functor in terms of which to define Applicative? What confuses me is that our Monad class seems to have no notion whatsoever of the (closed or monoidal) structure.
Edit: A commenter says it is not generally possible, so now part of my question is where it is possible.
What confuses me is our Monad class seems to have no notion whatsoever of the (closed or monoidal) structure.
If I understood your question correctly, that would be provided via the tensorial strength of the monad. The Monad class doesn't have it because it is intrinsic to the Hask category. More concretely, it is assumed to be:
t :: Monad m => (a, m b) -> m (a,b)
t (x, my) = my >>= \y -> return (x,y)
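For example, at Maybe (picked here just for illustration) the strength simply tucks the plain value inside the computation:

strengthExample :: Maybe (Int, Char)
strengthExample = t (3, Just 'x')   -- Just (3, 'x'); with Nothing in place of Just 'x' we'd get Nothing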
Essentially, all the monoidal stuff involved in the methods of a monoidal functor happens on the target category. It can be formalised thus†:
class (Category s, Category t) => Functor s t f where
  map :: s a b -> t (f a) (f b)

class Functor s t f => Monoidal s t f where
  pureUnit :: t () (f ())
  fzip :: t (f a, f b) (f (a, b))
s-morphisms only come in if you consider the laws of a monoidal functor, which roughly say that the monoidal structure of s should be mapped into this monoidal structure of t by the functor.
Perhaps more insightful is to factor an fmap into the class methods, so it's clear what the “func-”-part of the functor does:
class Functor s t f => Monoidal s t f where
  ...
  puref :: s () y -> t () (f y)
  puref f = map f . pureUnit
  fzipWith :: s (a, b) c -> t (f a, f b) (f c)
  fzipWith f = map f . fzip
From Monoidal, we can get back our good old Hask-Applicative thus:
pure :: Monoidal (->) (->) f => a -> f a
pure a = puref (const a) ()
(<*>) :: Monoidal (->) (->) f => f (a->b) -> f a -> f b
fs <*> xs = fzipWith (uncurry ($)) (fs, xs)
or
liftA2 :: Monoidal (->) (->) f => (a->b->c) -> f a -> f b -> f c
liftA2 f xs ys = fzipWith (uncurry f) (xs,ys)
Perhaps more interesting in this context is the other direction, because that shows us the connection to monads in the generalised case:
instance Applicative f => Monoidal (->) (->) f where
  pureUnit = pure
  fzip = \(xs,ys) -> liftA2 (,) xs ys
    -- equivalently, when f is also a Monad:
    --   \(xs,ys) -> join $ fmap (\x -> fmap (x,) ys) xs
Those lambdas and tuple sections aren't available in a general category; however, they can be translated to cartesian closed categories.
†I'm using (,) as the product in both monoidal categories, with identity element (). More generally you might write data I_s and data I_t and type family (⊗) x y and type family (∙) x y for the products and their respective identity elements.
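For concreteness, here is one possible rendering of that generalisation as a sketch; all of the names (MonoidalCategory, Tensor, Unit) are invented here, not from any library, and the laws are again omitted:

{-# LANGUAGE TypeFamilies, MultiParamTypeClasses #-}
import Control.Category (Category)
import Prelude hiding (Functor, map)

-- Each category carries its own tensor product and unit object.
class Category k => MonoidalCategory k where
  type Tensor k a b
  type Unit k

class (Category s, Category t) => Functor s t f where
  map :: s a b -> t (f a) (f b)

-- A lax monoidal functor between two monoidal categories: the structure
-- maps live entirely in the target category t, as in the text above.
class (MonoidalCategory s, MonoidalCategory t, Functor s t f)
      => Monoidal s t f where
  pureUnit :: t (Unit t) (f (Unit s))
  fzip     :: t (Tensor t (f a) (f b)) (f (Tensor s a b))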
I am reading about Applicative in the haskellbook and trying to understand it.
In the book, the author mentioned:
So, with Applicative, we have a Monoid for our structure and function
application for our values!
How is monoid connected to applicative?
Remark: I don't own the book (yet), and IIRC, at least one of the authors is active on SO and should be able to answer this question. That being said, the idea behind a monoid (or rather a semigroup) is that you have a way to create another object from two objects in that monoid¹:
mappend :: Monoid m => m -> m -> m
So how is Applicative a monoid? Well, it's a monoid in terms of its structure, as your quote says. That is, we start with an f something, continue with f anotherthing, and we get, you've guessed it, an f resulthing:
amappend :: f (a -> b) -> f a -> f b
Before we continue, for a short, a very short time, let's forget that f has kind * -> *. What do we end up with?
amappend :: f -> f -> f
That's the "monoidal structure" part. And that's the difference between Applicative and Functor in Haskell, since with Functor we don't have that property:
fmap :: (a -> b) -> f a -> f b
-- ^
-- no f here
That's also the reason we get into trouble if we try to use (+) or other functions with fmap only: after a single fmap we're stuck, unless we can somehow apply our new function in that new structure. Which brings us to the second part of your question:
So, with Applicative, we have [...] function application for our values!
Function application is ($). And if we have a look at <*>, we can immediately see that they are similar:
($) :: (a -> b) -> a -> b
(<*>) :: f (a -> b) -> f a -> f b
If we forget the f in (<*>), we just end up with ($). So (<*>) is just function application in the context of our structure:
increase :: Int -> Int
increase x = x + 1
five :: Int
five = 5
increaseA :: Applicative f => f (Int -> Int)
increaseA = pure increase
fiveA :: Applicative f => f Int
fiveA = pure 5
normalIncrease = increase $ five
applicativeIncrease = increaseA <*> fiveA
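For instance, specialising f to Maybe (an arbitrary, purely illustrative choice), the two styles line up:

normalResult :: Int
normalResult = increase $ five           -- 6

applicativeResult :: Maybe Int
applicativeResult = increaseA <*> fiveA  -- Just 6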
And that's, I guess, what the author meant by "function application". We can suddenly take those functions that are hidden away in our structure and apply them to other values in our structure. And due to the monoidal nature, we stay in that structure.
That being said, I personally would never call that monoidal, since <*> does not operate on two arguments of the same type, and an applicative is missing the empty element.
¹ For a real semigroup/monoid that operation should be associative, but that's not important here.
Although this question got a great answer long ago, I would like to add a bit.
Take a look at the following class:
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a, b)
Before explaining why we need some Monoidal class for a question about Applicatives, let us first take a look at its laws, abiding by which gives us a monoid:
f a (x) is isomorphic to f ((), a) (unit ** x), which gives us the left identity: (unit **) :: f a -> f ((), a) and fmap snd :: f ((), a) -> f a.
f a (x) is also isomorphic to f (a, ()) (x ** unit), which gives us the right identity: (** unit) :: f a -> f (a, ()) and fmap fst :: f (a, ()) -> f a.
f ((a, b), c) ((x ** y) ** z) is isomorphic to f (a, (b, c)) (x ** (y ** z)), which gives us associativity: fmap assoc :: f ((a, b), c) -> f (a, (b, c)) and fmap assoc' :: f (a, (b, c)) -> f ((a, b), c).
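The reassociation maps used in the last law are ordinary functions, spelled out here for completeness (the names assoc and assoc' are not from any library):

assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a, b), c) = (a, (b, c))

assoc' :: (a, (b, c)) -> ((a, b), c)
assoc' (a, (b, c)) = ((a, b), c)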
As you might have guessed, one can write down Applicative's methods with Monoidal's and the other way around:
unit = pure ()
f ** g = (,) <$> f <*> g    -- i.e. liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry id <$> (f ** g)
liftA2 f x y = uncurry f <$> (x ** y)
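As a concrete sketch, here is what an instance of this Monoidal class could look like at Maybe (the choice of Maybe is purely illustrative; in a real module you would also hide Prelude's (**)):

instance Monoidal Maybe where
  unit = Just ()
  Just x ** Just y = Just (x, y)
  _      ** _      = Nothing

-- e.g. Just 1 ** Just 'a' == Just (1, 'a'), and Nothing ** Just 'a' == Nothing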
Moreover, one can prove that Monoidal and Applicative laws are telling us the same thing. I asked a question about this a while ago.
When reading stuff on Haskell, I sometimes come across the adjective "applicative", but I have not been able to find a sufficiently clear definition of this adjective (as opposed to, say, Haskell's Applicative class). I would like to learn to recognize a piece of code/algorithm/data structure, etc. that is "applicative", just like I can recognize one that is "recursive". Some contrasting examples of "applicative" vs. whatever the term intends to draw a distinction from (which I hope is something more meaningful in its own right than "non-applicative") would be much appreciated.
Edit: for example, why was the word "applicative" chosen to name the class, and not some other name? What is it about this class that makes the name Applicative such a good fit for it (even at the price of its obscurity)?
Thanks!
It's not clear what "applicative" is being used to mean without knowing the context.
If it's truly not referring to applicative functors (i.e. Applicative), then it's probably referring to the form of application itself: f a b c is an applicative form, and this is where applicative functors get their name from: f <$> a <*> b <*> c is analogous. (Indeed, idiom brackets take this connection further, by letting you write it as (| f a b c |).)
Similarly, "applicative languages" can be contrasted with languages that are not primarily based on the application of function to argument (usually in prefix form); concatenative ("stack based") languages aren't applicative, for instance.
To answer the question of why applicative functors are called what they are in depth, I recommend reading
Applicative programming with effects; the basic idea is that a lot of situations call for something like "enhanced application": applying pure functions within some effectful context. Compare these definitions of map and mapM:
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs
mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]
mapM _ [] = return []
mapM f (x:xs) = do
  x' <- f x
  xs' <- mapM f xs
  return (x' : xs')
with mapA (usually called traverse):
mapA :: (Applicative f) => (a -> f b) -> [a] -> f [b]
mapA _ [] = pure []
mapA f (x:xs) = (:) <$> f x <*> mapA f xs
As you can see, mapA is much more concise and more obviously related to map (even more so if you use the prefix form of (:) in map too). Indeed, using the applicative functor notation even when you have a full Monad is common in Haskell, since it's often much clearer.
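For a quick, made-up example of mapA at the Maybe functor (halve is a hypothetical helper):

halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- mapA halve [2, 4, 6] == Just [1, 2, 3]
-- mapA halve [2, 3, 4] == Nothing    (one failure makes the whole result Nothing)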
Looking at the definition helps, too:
class (Functor f) => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
Compare the type of (<*>) to the type of application: ($) :: (a -> b) -> a -> b. What Applicative offers is a generalised "lifted" form of application, and code using it is written in an applicative style.
More formally, as mentioned in the paper and pointed out by ertes, Applicative is a generalisation of the SK combinators; pure is a generalisation of K :: a -> (r -> a) (aka const), and (<*>) is a generalisation of S :: (r -> a -> b) -> (r -> a) -> (r -> b). The r -> a part is simply generalised to f a; the original types are obtained with the Applicative instance for ((->) r).
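A small sketch of that last point, specialised to the standard ((->) r) instance (k and s here are just local names chosen for illustration):

k :: a -> (r -> a)
k = pure            -- behaves exactly like const

s :: (r -> a -> b) -> (r -> a) -> (r -> b)
s = (<*>)           -- behaves like \f g r -> f r (g r)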
As a practical matter, pure also allows you to write applicative expressions in a more uniform manner: pure f <*> effectful <*> pure x <*> effectful as opposed to (\a b -> f a x b) <$> effectful <*> effectful.
On a more fundamental level one could say that "applicative" means working in some form of the SK calculus. This is also what the Applicative class is about. It gives you the combinators pure (a generalization of K) and <*> (a generalization of S).
Your code is applicative when it is expressed in such a style. For example the code
liftA2 (+) sin cos
is an applicative expression of
\x -> sin x + cos x
Of course in Haskell the Applicative class is the main construct for programming in an applicative style, but even in a monadic or arrowic context you can write applicatively:
return (+) `ap` sin `ap` cos
arr (uncurry (+)) . (sin &&& cos)
Whether the last piece of code is applicative is controversial though, because one might argue that applicative style needs currying to make sense.