Does liftA2 preserve associativity? - haskell

Given an operation (??) such that
(a ?? b) ?? c = a ?? (b ?? c)
(that is to say (??) is associative)
must it be the case that
liftA2 (??) (liftA2 (??) a b) c = liftA2 (??) a (liftA2 (??) b c)
(that is to say that liftA2 (??) is associative)
If we prefer, we can rewrite this as:
fmap (??) (fmap (??) a <*> b) <*> c = fmap (??) a <*> (fmap (??) b <*> c)
I spent a little while staring at the applicative laws but I couldn't come up with a proof that this would be the case. So I set out to disprove it. All the out-of-the-box applicatives (Maybe, [], Either, etc.) that I have tried follow the law, so I thought I would create my own.
My best idea was to make a vacuous applicative with an extra piece of information attached.
data Vacuous a = Vac Alg
where Alg would be some algebra I would define at my own convenience later, so as to make the property fail but the applicative laws succeed.
Now we define our instances as such:
instance Functor Vacuous where
  fmap f (Vac a) = Vac a -- a no-op; since a is phantom, plain fmap f = id would not typecheck

instance Applicative Vacuous where
  pure x = Vac i
  liftA2 f (Vac a) (Vac b) = Vac (comb a b)
  (Vac a) <*> (Vac b) = Vac (comb a b)
Where i is some element of Alg to be determined and comb is a binary combinator on Alg also to be determined. There is not really another way we can go about defining this.
If we want to fulfill the Identity law, this forces i to be an identity for comb. We then get Homomorphism and Interchange for free. But now Composition forces comb to be associative on Alg:
((pure (.) <*> Vac u) <*> Vac v) <*> Vac w = Vac u <*> (Vac v <*> Vac w)
((Vac i <*> Vac u) <*> Vac v) <*> Vac w = Vac u <*> (Vac v <*> Vac w)
(Vac u <*> Vac v) <*> Vac w = Vac u <*> (Vac v <*> Vac w)
(Vac (comb u v)) <*> Vac w = Vac u <*> (Vac (comb v w))
Vac (comb (comb u v) w) = Vac (comb u (comb v w))
comb (comb u v) w = comb u (comb v w)
Forcing us to satisfy the property.
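As an aside, this schematic applicative can be made concrete: it is essentially Data.Functor.Const, whose Applicative instance requires a Monoid for exactly the reason just derived. A compilable sketch (my rendering, not part of the original question):

{-# LANGUAGE DeriveFunctor #-}

-- Parametrise Vacuous by the algebra and let a Monoid constraint
-- supply i (mempty) and comb ((<>)).
data Vacuous alg a = Vac alg deriving (Show, Functor)

instance Monoid alg => Applicative (Vacuous alg) where
  pure _ = Vac mempty            -- i must be an identity for comb
  Vac a <*> Vac b = Vac (a <> b) -- and comb must be associative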
Is there a counterexample? If not, how can we prove this property?

We start by rewriting the left hand side, using the applicative laws. Recall that both <$> and <*> are left-associative, so that we have, e.g., x <*> y <*> z = (x <*> y) <*> z and x <$> y <*> z = (x <$> y) <*> z.
(??) <$> ((??) <$> a <*> b) <*> c
= fmap/pure law
pure (??) <*> (pure (??) <*> a <*> b) <*> c
= composition law
pure (.) <*> pure (??) <*> (pure (??) <*> a) <*> b <*> c
= homomorphism law
pure ((.) (??)) <*> (pure (??) <*> a) <*> b <*> c
= composition law
pure (.) <*> pure ((.) (??)) <*> pure (??) <*> a <*> b <*> c
= homomorphism law
pure ((.) ((.) (??)) (??)) <*> a <*> b <*> c
= definition (.)
pure (\x -> (.) (??) ((??) x)) <*> a <*> b <*> c
= definition (.), eta expansion
pure (\x y z -> (??) ((??) x y) z) <*> a <*> b <*> c
= associativity (??)
pure (\x y z -> x ?? y ?? z) <*> a <*> b <*> c
The last form reveals that, essentially, the original expression "runs" the actions a, b, and c in that order, sequencing their effects in that way, and then uses (??) to purely combine the three results.
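For a quick sanity check of that reading, here is the list Applicative with (??) = (++) in GHCi (my example, not from the original answer):

ghci> liftA2 (++) (liftA2 (++) ["a","b"] ["c"]) ["d","e"]
["acd","ace","bcd","bce"]
ghci> liftA2 (++) ["a","b"] (liftA2 (++) ["c"] ["d","e"])
["acd","ace","bcd","bce"]

Both sides run the three actions in the same order and combine the results with the associative (++), so they agree.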
We can then prove that the right hand side is equivalent to the above form.
(??) <$> a <*> ((??) <$> b <*> c)
= fmap/pure law
pure (??) <*> a <*> (pure (??) <*> b <*> c)
= composition law
pure (.) <*> (pure (??) <*> a) <*> (pure (??) <*> b) <*> c
= composition law
pure (.) <*> pure (.) <*> pure (??) <*> a <*> (pure (??) <*> b) <*> c
= homomorphism law
pure ((.) (.) (??)) <*> a <*> (pure (??) <*> b) <*> c
= composition law
pure (.) <*> (pure ((.) (.) (??)) <*> a) <*> pure (??) <*> b <*> c
= composition law
pure (.) <*> pure (.) <*> pure ((.) (.) (??)) <*> a <*> pure (??) <*> b <*> c
= homomorphism law
pure ((.) (.) ((.) (.) (??))) <*> a <*> pure (??) <*> b <*> c
= interchange law
pure ($ (??)) <*> (pure ((.) (.) ((.) (.) (??))) <*> a) <*> b <*> c
= composition law
pure (.) <*> pure ($ (??)) <*> pure ((.) (.) ((.) (.) (??))) <*> a <*> b <*> c
= homomorphism law
pure ((.) ($ (??)) ((.) (.) ((.) (.) (??)))) <*> a <*> b <*> c
Now, we only have to rewrite the point-free term ((.) ($ (??)) ((.) (.) ((.) (.) (??)))) in a more readable point-ful form, so that we can make it equal to the term we got in the first half of the proof. This is just a matter of applying (.) and ($) as needed.
((.) ($ (??)) ((.) (.) ((.) (.) (??))))
= \x -> (.) ($ (??)) ((.) (.) ((.) (.) (??))) x
= \x -> ($ (??)) ((.) (.) ((.) (.) (??)) x)
= \x -> (.) (.) ((.) (.) (??)) x (??)
= \x y -> (.) ((.) (.) (??) x) (??) y
= \x y -> (.) (.) (??) x ((??) y)
= \x y z -> (.) ((??) x) ((??) y) z
= \x y z -> (??) x ((??) y z)
= \x y z -> x ?? y ?? z
where in the last step we exploited the associativity of (??).
(Whew.)

Not only does it preserve associativity, I would say that's perhaps the main idea behind the applicative laws in the first place!
Recall the maths-style form of the class:
class Functor f => Monoidal f where
  funit :: () -> f ()
  fzip :: (f a, f b) -> f (a,b)
with laws
zAssc: fzip (fzip (x,y), z) ≅ fzip (x, fzip (y,z)) -- modulo tuple re-bracketing
fComm: fzip (fmap fx x, fmap fy y) ≡ fmap (fx***fy) (fzip (x,y))
fIdnt: fmap id ≡ id -- ─╮
fCmpo: fmap f . fmap g ≡ fmap (f . g) -- ─┴ functor laws
In this approach, liftA2 factors into fmapping a tuple-valued function over an already-zipped pair:
liftZ2 :: ((a,b)->c) -> (f a,f b) -> f c
liftZ2 f = fmap f . fzip
i.e.
liftZ2 f (a,b) = f <$> fzip (a,b)
Now say we are given
g :: (G,G) -> G
gAssc: g (g (α,β), γ) ≡ g (α, g (β,γ))
or point-free (again ignoring tuple-bracket interchange)
gAssc: g . (g***id) ≅ g . (id***g)
If we write everything in this style, it's easy to see that associativity-preservation is basically just zAssc, with everything about g happening in a separate fmap step:
liftZ2 g (liftZ2 g (a,b), c)
  {-liftZ2-}      ≡ g <$> fzip (g <$> fzip (a,b), c)
  {-fIdnt,fComm-} ≡ g . (g***id) <$> fzip (fzip (a,b), c)
  {-gAssc,zAssc-} ≡ g . (id***g) <$> fzip (a, fzip (b,c))
  {-fComm,fIdnt-} ≡ g <$> fzip (a, g <$> fzip (b,c))
  {-liftZ2-}      ≡ liftZ2 g (a, liftZ2 g (b,c))
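To make the sketch concrete, here is a sample instance (my addition, using the funit/fzip names from this answer; they are not a library class): Maybe as a Monoidal functor.

instance Monoidal Maybe where
  funit = Just
  fzip (Just x, Just y) = Just (x, y)
  fzip _ = Nothing

With it, e.g. liftZ2 (uncurry (+)) (Just 1, Just 2) evaluates to Just 3, and the chain above can be traced by hand.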

Related

Are free monads also zippily applicative?

I think I've come up with an interesting "zippy" Applicative instance for Free.
data FreeMonad f a = Free (f (FreeMonad f a))
                   | Return a

instance Functor f => Functor (FreeMonad f) where
  fmap f (Return x) = Return (f x)
  fmap f (Free xs) = Free (fmap (fmap f) xs)

instance Applicative f => Applicative (FreeMonad f) where
  pure = Return
  Return f <*> xs = fmap f xs
  fs <*> Return x = fmap ($x) fs
  Free fs <*> Free xs = Free $ liftA2 (<*>) fs xs
It's sort of a zip-longest strategy. For example, using data Pair r = Pair r r as the functor (so FreeMonad Pair is an externally labelled binary tree):
   +---+---+       +---+---+             +----+----+
   |       |       |       |             |         |
+--+--+    h  <*>  x    +--+--+  -->  +--+--+   +--+--+
|     |                 |     |       |     |   |     |
f     g                 y     z      f x   g x h y   h z
I haven't seen anyone mention this instance before. Does it break any Applicative laws? (It doesn't agree with the usual Monad instance of course, which is "substitutey" rather than "zippy".)
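(For concreteness, the Pair functor in the picture would be something like the following; the zippy Applicative for Pair is my guess at the intended instance:)

data Pair r = Pair r r

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Applicative Pair where
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)

-- With the FreeMonad instance above, the picture's trees reduce as
-- Free (Pair (Free (Pair (Return f) (Return g))) (Return h))
--   <*> Free (Pair (Return x) (Free (Pair (Return y) (Return z))))
--   = Free (Pair (Free (Pair (Return (f x)) (Return (g x))))
--               (Free (Pair (Return (h y)) (Return (h z)))))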
Yes, it looks like this is a lawful Applicative. Weird!
As @JosephSible points out, you can read off the identity, homomorphism and interchange laws immediately from the definitions. The only tricky one is the composition law.
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
There are eight cases to check, so strap in.
One case with three Returns: pure (.) <*> Return f <*> Return g <*> Return z
Follows trivially from associativity of (.).
Three cases with one Free:
pure (.) <*> Free u <*> Return g <*> Return z
Working backwards from Free u <*> (Return g <*> Return z) you get fmap (\f -> f (g z)) (Free u), so this follows from the functor law.
pure (.) <*> Return f <*> Free v <*> Return z
fmap ($z) $ fmap (f.) (Free v)
fmap (\g -> f (g z)) (Free v) -- functor law
fmap (f . ($z)) (Free v)
fmap f (fmap ($z) (Free v)) -- functor law
Return f <*> (Free v <*> Return z) -- RHS of `<*>` (first and second cases)
QED
pure (.) <*> Return f <*> Return g <*> Free w
Reduces immediately to fmap (f . g) (Free w), so follows from the functor law.
Three cases with one Return:
pure (.) <*> Return f <*> Free v <*> Free w
Free $ fmap (<*>) (fmap (fmap (f.)) v) <*> w
Free $ fmap (\y z -> fmap (f.) y <*> z) v <*> w -- functor law
Free $ fmap (\y z -> pure (.) <*> Return f <*> y <*> z) v <*> w -- definition of fmap, twice
Free $ fmap (\y z -> Return f <*> (y <*> z)) v <*> w -- composition
Free $ fmap (\y z -> fmap f (y <*> z)) v <*> w -- RHS of `<*>` (first case)
Free $ fmap (fmap f) $ fmap (<*>) v <*> w -- functor law, eta reduce
fmap f $ Free $ liftA2 (<*>) v w -- RHS of fmap
Return f <*> (Free v <*> Free w) -- RHS of <*>
QED.
pure (.) <*> Free u <*> Return g <*> Free w
Free ((fmap (fmap ($g))) (fmap (fmap (.)) u)) <*> Free w
Free (fmap (fmap (\f -> f . g)) u) <*> Free w -- functor law, twice
Free $ fmap (<*>) (fmap (fmap (\f -> f . g)) u) <*> w
Free $ fmap (\x z -> fmap (\f -> f . g) x <*> z) u <*> w -- functor law
Free $ fmap (\x z -> pure (.) <*> x <*> Return g <*> z) u <*> w
Free $ fmap (\x z -> x <*> (Return g <*> z)) u <*> w -- composition
Free $ fmap (<*>) u <*> fmap (Return g <*>) w -- https://gist.github.com/benjamin-hodgson/5b36259986055d32adea56d0a7fa688f
Free u <*> fmap g (Free w) -- RHS of <*> and fmap
Free u <*> (Return g <*> Free w)
QED.
pure (.) <*> Free u <*> Free v <*> Return z
Free (fmap (<*>) (fmap (fmap (.)) u) <*> v) <*> Return z
Free (fmap (\x y -> fmap (.) x <*> y) u <*> v) <*> Return z -- functor law
Free $ fmap (fmap ($z)) (fmap (\x y -> fmap (.) x <*> y) u <*> v)
Free $ liftA2 (\x y -> (fmap ($z)) (fmap (.) x <*> y)) u v -- see Lemma, with f = fmap ($z) and g x y = fmap (.) x <*> y
Free $ liftA2 (\x y -> fmap (.) x <*> y <*> Return z) u v -- interchange
Free $ liftA2 (\x y -> x <*> (y <*> Return z)) u v -- composition
Free $ liftA2 (\f g -> f <*> fmap ($z) g) u v -- interchange
Free $ fmap (<*>) u <*> (fmap (fmap ($z)) v) -- https://gist.github.com/benjamin-hodgson/5b36259986055d32adea56d0a7fa688f
Free u <*> Free (fmap (fmap ($z)) v)
Free u <*> (Free v <*> Return z)
QED.
Three Frees: pure (.) <*> Free u <*> Free v <*> Free w
This case only exercises the Free/Free case of <*>, whose right-hand side is identical to that of Compose's <*>. So this one follows from the correctness of Compose's instance.
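For reference, the instance being appealed to (as in Data.Functor.Compose):

import Control.Applicative (liftA2)

newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Functor f, Functor g) => Functor (Compose f g) where
  fmap f (Compose x) = Compose (fmap (fmap f) x)

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
  pure = Compose . pure . pure
  Compose f <*> Compose x = Compose (liftA2 (<*>) f x)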
For the pure (.) <*> Free u <*> Free v <*> Return z case I used a lemma:
Lemma: fmap f (fmap g u <*> v) = liftA2 (\x y -> f (g x y)) u v.
fmap f (fmap g u <*> v)
pure (.) <*> pure f <*> fmap g u <*> v -- composition
fmap (f .) (fmap g u) <*> v -- homomorphism
fmap ((f .) . g) u <*> v -- functor law
liftA2 (\x y -> f (g x y)) u v -- eta expand
QED.
Variously I'm using functor and applicative laws under the induction hypothesis.
This was pretty fun to prove! I'd love to see a formal proof in Coq or Agda (though I suspect the termination/positivity checker might mess it up).
For the sake of completeness, I will use this answer to expand on my comment above:
Though I didn't actually write down the proof, I believe the mixed-Free-and-Return cases of the composition law must hold due to parametricity. I also suspect that should be easier to show using the monoidal presentation.
The monoidal presentation of the Applicative instance here is:
unit = Return ()
Return x *&* v = (x,) <$> v
u *&* Return y = (,y) <$> u
-- I will also piggyback on the `Compose` applicative, as suggested above.
Free u *&* Free v = Free (getCompose (Compose u *&* Compose v))
Under the monoidal presentation, the composition/associativity law is:
(u *&* v) *&* w ~ u *&* (v *&* w)
Now let's consider one of its mixed cases; say, the Free-Return-Free one:
(Free fu *&* Return y) *&* Free fw ~ Free fu *&* (Return y *&* Free fw)
(Free fu *&* Return y) *&* Free fw -- LHS
((,y) <$> Free fu) *&* Free fw
Free fu *&* (Return y *&* Free fw) -- RHS
Free fu *&* ((y,) <$> Free fw)
Let's have a closer look at this left-hand side. (,y) <$> Free fu applies (,y) :: a -> (a, b) to the a values found in Free fu :: FreeMonad f a. Parametricity (or, more specifically, the free theorem for (*&*)) means that it doesn't matter if we do that before or after using (*&*). That means the left-hand side amounts to:
first (,y) <$> (Free fu *&* Free fw)
Analogously, the right-hand side becomes:
second (y,) <$> (Free fu *&* Free fw)
Since first (,y) :: (a, c) -> ((a, b), c) and second (y,) :: (a, c) -> (a, (b, c)) are the same up to reassociation of pairs, we have:
first (,y) <$> (Free fu *&* Free fw) ~ second (y,) <$> (Free fu *&* Free fw)
-- LHS ~ RHS
The other mixed cases can be dealt with analogously. For the rest of the proof, see Benjamin Hodgson's answer.
From the definition of Applicative:
If f is also a Monad, it should satisfy
pure = return
(<*>) = ap
(*>) = (>>)
So this implementation would break the applicative laws that say it must agree with the Monad instance.
That said, there's no reason you couldn't have a newtype wrapper for FreeMonad that didn't have a monad instance, but did have the above applicative instance
newtype Zip f a = Zip { runZip :: FreeMonad f a }
  deriving Functor

instance Applicative f => Applicative (Zip f) where -- ...
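One possible completion of that elided instance body (my sketch), simply delegating to the zippy FreeMonad instance above:

instance Applicative f => Applicative (Zip f) where
  pure = Zip . pure
  Zip fs <*> Zip xs = Zip (fs <*> xs)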

Applicative: Prove `pure f <*> x = pure (flip ($)) <*> x <*> pure f`

During my study of the Typeclassopedia I encountered this proof, but I'm not sure if my proof is correct. The question is:
One might imagine a variant of the interchange law that says something about applying a pure function to an effectful argument. Using the above laws, prove that:
pure f <*> x = pure (flip ($)) <*> x <*> pure f
Where "above laws" points to Applicative Laws, briefly:
pure id <*> v = v -- identity law
pure f <*> pure x = pure (f x) -- homomorphism
u <*> pure y = pure ($ y) <*> u -- interchange
u <*> (v <*> w) = pure (.) <*> u <*> v <*> w -- composition
My proof is as follows:
pure f <*> x = pure (($) f) <*> x -- identical
pure f <*> x = pure ($) <*> pure f <*> x -- homomorphism
pure f <*> x = pure (flip ($)) <*> x <*> pure f -- flip arguments
The first two steps of your proof look fine, but the last step doesn't. While the definition of flip allows you to use a law like:
f a b = flip f b a
that doesn't mean:
pure f <*> a <*> b = pure (flip f) <*> b <*> a
In fact, this is false in general. Compare the output of these two lines:
pure (+) <*> [1,2,3] <*> [4,5]
pure (flip (+)) <*> [4,5] <*> [1,2,3]
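For the record, the two lines produce different results in GHCi:

ghci> pure (+) <*> [1,2,3] <*> [4,5]
[5,6,6,7,7,8]
ghci> pure (flip (+)) <*> [4,5] <*> [1,2,3]
[5,6,7,6,7,8]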
If you want a hint, you are going to need to use the original interchange law at some point to prove this variant.
In fact, I found I had to use the homomorphism, interchange, and composition laws to prove this, and part of the proof was pretty tricky, especially getting the sections right -- like ($ f), which is different from (($) f). It was helpful to have GHCi open to double-check that each step of my proof type-checked and gave the right result. (Your proof above type checks fine; it's just that the last step wasn't justified.)
> let f = sqrt
> let x = [1,4,9]
> pure f <*> x
[1.0,2.0,3.0]
> pure (flip ($)) <*> x <*> pure f
[1.0,2.0,3.0]
>
I ended up proving it backwards:
pure (flip ($)) <*> x <*> pure f
= (pure (flip ($)) <*> x) <*> pure f -- <*> is left-associative
= pure ($ f) <*> (pure (flip ($)) <*> x) -- interchange
= pure (.) <*> pure ($ f) <*> pure (flip ($)) <*> x -- composition
= pure (($ f) . (flip ($))) <*> x -- homomorphism
= pure (flip ($) f . flip ($)) <*> x -- identical
= pure f <*> x
Explanation of the last transformation:
flip ($) has type a -> (a -> c) -> c; intuitively, it first takes an argument of type a, then a function that accepts that argument, and in the end it calls the function with the first argument. So flip ($) 5 takes as argument a function which gets called with 5 as its argument. If we pass (+2) to flip ($) 5, we get flip ($) 5 (+2), which is equivalent to the expression (+2) $ 5, evaluating to 7.
flip ($) f is equivalent to \x -> x $ f; that is, it takes a function as input and calls it with the function f as its argument.
The composition of these functions works like this: first flip ($) takes x as its argument and returns a function flip ($) x; this function is awaiting a function as its last argument, which will be called with x. Now this function flip ($) x is passed to flip ($) f, or, writing its equivalent, (\x -> x $ f) (flip ($) x); this results in the expression (flip ($) x) f, which is equivalent to f $ x.
You can check the type of flip ($) f . flip ($) is something like this (depending on your function f):
λ: let f = sqrt
λ: :t (flip ($) f) . (flip ($))
(flip ($) f) . (flip ($)) :: Floating c => c -> c
I'd remark that such theorems are, as a rule, a lot less involved when written in the mathematical style of a monoidal functor, rather than the applicative version, i.e. with the equivalent class
class Functor f => Monoidal f where
  pure :: a -> f a
  (⑂) :: f a -> f b -> f (a,b)
Then the laws are
id <$> v = v
f <$> (g <$> v) = f . g <$> v
f <$> pure x = pure (f x)
x ⑂ pure y = fmap (,y) x
a ⑂ (b ⑂ c) = assoc <$> (a ⑂ b) ⑂ c
where assoc ((x,y),z) = (x,(y,z)).
The theorem then reads
pure u ⑂ x = swap <$> x ⑂ pure u
Proof:
swap <$> x ⑂ pure u
= swap <$> fmap (,u) x -- by the x ⑂ pure y law
= swap . (,u) <$> x -- functor composition law
= (u,) <$> x -- swap . (,u) = (u,)
= pure u ⑂ x -- mirror image of the x ⑂ pure y law
□

Applicative Laws for the ((->) r) type

I'm trying to check that the Applicative laws hold for the function type ((->) r), and here's what I have so far:
-- Identity
pure (id) <*> v = v
-- Starting with the LHS
pure (id) <*> v
const id <*> v
(\x -> const id x (g x))
(\x -> id (g x))
(\x -> g x)
g x
v
-- Homomorphism
pure f <*> pure x = pure (f x)
-- Starting with the LHS
pure f <*> pure x
const f <*> const x
(\y -> const f y (const x y))
(\y -> f (x))
(\_ -> f x)
pure (f x)
Did I perform the steps for the first two laws correctly?
I'm struggling with the interchange & composition laws. For interchange, so far I have the following:
-- Interchange
u <*> pure y = pure ($y) <*> u
-- Starting with the LHS
u <*> pure y
u <*> const y
(\x -> g x (const y x))
(\x -> g x y)
-- I'm not sure how to proceed beyond this point.
I would appreciate any help for the steps to verify the Interchange & Composition applicative laws for the ((->) r) type. For reference, the Composition applicative law is as follows:
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
I think in your "Identity" proof, you should replace g with v everywhere (otherwise what is g and where did it come from?). Similarly, in your "Interchange" proof, things look okay so far, but the g that magically appears should just be u. To continue that proof, you could start reducing the RHS and verify that it also produces \x -> u x y.
Composition is more of the same: plug in the definitions of pure and (<*>) on both sides, then start calculating on both sides. You'll soon come to some bare lambdas that will be easy to prove equivalent.
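For instance, here is a sketch of both remaining calculations (my addition), assuming the standard instance for ((->) r), i.e. pure = const and u <*> v = \x -> u x (v x):

-- Interchange, RHS:
pure ($ y) <*> u
const ($ y) <*> u
(\x -> const ($ y) x (u x))
(\x -> ($ y) (u x))
(\x -> u x y) -- matches the LHS reduction above

-- Composition:
pure (.) <*> u <*> v <*> w
(\x -> (.) (u x)) <*> v <*> w
(\x -> u x . v x) <*> w
(\x -> (u x . v x) (w x))
(\x -> u x (v x (w x)))

u <*> (v <*> w)
(\x -> u x ((v <*> w) x))
(\x -> u x (v x (w x))) -- both sides agree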

How arbitrary is the "ap" implementation for monads?

I am currently studying the relationship between monads and applicative functors.
I see two implementations for ap:
ap m1 m2 = do { f <- m1 ; x <- m2 ; return (f x) }
and
ap m1 m2 = do { x <- m2 ; f <- m1 ; return (f x) }
The second one is different; yet, would it be a good implementation for <*>?
I got lost in the proof of pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
I am trying to build an intuition for "what part of the monad is the applicative functor"...
There are at least three relevant aspects to this question.
Given a Monad m instance, what is the specification of its necessary Applicative m superclass instance? Answer: pure is return, <*> is ap, so
mf <*> ms == do f <- mf; s <- ms; return (f s)
Note that this specification is not a law of the Applicative class. It's a requirement on Monads, to ensure consistent usage patterns.
Given that specification (by candidate implementation), is ap the only acceptable implementation? Answer: resoundingly, no. The value dependency permitted by the type of >>= can sometimes lead to inefficient execution: there are situations where <*> can be made more efficient than ap because you don't need to wait for the first computation to finish before you can tell what the second computation is. The "applicative do" notation exists exactly to exploit this possibility.
Do any other candidate instances for Applicative satisfy the Applicative laws, even though they disagree with the required ap instance? Answer: yes. The "backwards" instance proposed by the question is just such a thing. Indeed, as another answer observes, any applicative can be turned backwards, and the result is often a different beast.
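The "turned backwards" construction mentioned here is essentially Control.Applicative.Backwards from transformers; a sketch of the standard definition:

newtype Backwards f a = Backwards { forwards :: f a }

instance Functor f => Functor (Backwards f) where
  fmap f (Backwards x) = Backwards (fmap f x)

instance Applicative f => Applicative (Backwards f) where
  pure = Backwards . pure
  -- run the two effects in the opposite order, keep the same results
  Backwards fs <*> Backwards xs = Backwards ((\x f -> f x) <$> xs <*> fs)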
For a further example and exercise for the reader, note that nonempty lists are monadic in the way familiar from ordinary lists.
data Nellist x = x :& Maybe (Nellist x)

necat :: Nellist x -> Nellist x -> Nellist x
necat (x :& Nothing) ys = x :& Just ys
necat (x :& Just xs) ys = x :& Just (necat xs ys)

instance Monad Nellist where
  return x = x :& Nothing
  (x :& Nothing) >>= k = k x
  (x :& Just xs) >>= k = necat (k x) (xs >>= k)
Find at least four behaviourally distinct instances of Applicative Nellist which obey the applicative laws.
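To get started on that exercise, one candidate (my sketch, not from the answer) is a truncating "zippy" instance in the style of ZipList, where pure repeats its argument indefinitely:

instance Applicative Nellist where
  pure x = let xs = x :& Just xs in xs -- infinite repetition
  (f :& fs) <*> (x :& xs) = f x :& ((<*>) <$> fs <*> xs) -- zip, truncating

(The tails are zipped through the Maybe applicative, so the result is as long as the shorter argument.)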
Let's start with the obvious fact: such a definition for <*> violates the ap-law in the sense that <*> should be ap, where ap is the one defined in the Monad class, i.e. the first one you posted.
Trivialities aside, as far as I can see, the other applicative laws should hold.
More concretely, let's focus on the composition law you mentioned.
Your "reversed" ap
(<**>) m1 m2 = do { x <- m2 ; f <- m1 ; return (f x) }
can also be defined as
(<**>) m1 m2 = pure (flip ($)) <*> m2 <*> m1
where <*> is the "regular" ap.
This means that, for instance,
u <**> (v <**> w) =
{ def. <**> }
pure (flip ($)) <*> (v <**> w) <*> u =
{ def. <**> }
pure (flip ($)) <*> (pure (flip ($)) <*> w <*> v) <*> u =
{ composition law }
pure (.) <*> pure (flip ($)) <*> (pure (flip ($)) <*> w) <*> v <*> u =
{ homomorphism law }
pure ((.) (flip ($))) <*> (pure (flip ($)) <*> w) <*> v <*> u =
{ composition law }
pure (.) <*> pure ((.) (flip ($))) <*> pure (flip ($)) <*> w <*> v <*> u =
{ homomorphism law (x2)}
pure ((.) ((.) (flip ($))) (flip ($))) <*> w <*> v <*> u =
{ beta reduction (several) }
pure (\x f g -> g (f x)) <*> w <*> v <*> u
(I hope I got everything OK)
Try doing something similar to the left hand side.
pure (.) <**> u <**> v <**> w = ...
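A sketch of where that left-hand side goes (my continuation, not part of the original answer):

pure (.) <**> u =
{ def. <**> }
pure (flip ($)) <*> u <*> pure (.) =
{ interchange law }
pure ($ (.)) <*> (pure (flip ($)) <*> u) =
{ composition law, homomorphism law }
pure (($ (.)) . flip ($)) <*> u =
{ beta reduction: flip ($) x (.) = (.) x }
pure (.) <*> u

Pushing the remaining two <**>s through with the same composition and homomorphism steps eventually yields pure (\x f g -> g (f x)) <*> w <*> v <*> u, the same normal form as the right-hand side above, so the composition law does hold for the reversed instance.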

What are the applicative functor laws in terms of pure and liftA2?

I'm playing around with formulating Applicative in terms of pure and liftA2 (so that (<*>) = liftA2 id becomes a derived combinator).
I can think of a bunch of candidate laws, but I'm not sure what the minimal set would be.
f <$> pure x = pure (f x)
f <$> liftA2 g x y = liftA2 ((f .) . g) x y
liftA2 f (pure x) y = f x <$> y
liftA2 f x (pure y) = liftA2 (flip f) (pure y) x
liftA2 f (g <$> x) (h <$> y) = liftA2 (\x y -> f (g x) (h y)) x y
...
Based on McBride and Paterson's laws for Monoidal (section 7), I'd suggest the following laws for liftA2 and pure.
left and right identity
liftA2 (\_ y -> y) (pure x) fy = fy
liftA2 (\x _ -> x) fx (pure y) = fx
associativity
liftA2 id (liftA2 (\x y z -> f x y z) fx fy) fz =
    liftA2 (flip id) fx (liftA2 (\y z x -> f x y z) fy fz)
naturality
liftA2 (\x y -> o (f x) (g y)) fx fy = liftA2 o (fmap f fx) (fmap g fy)
It isn't immediately apparent that these are sufficient to cover the relationship between fmap and Applicative's pure and liftA2. Let's see if we can prove from the above laws that
fmap f fx = liftA2 id (pure f) fx
We'll start by working on fmap f fx. All of the following are equivalent.
fmap f fx
liftA2 (\x _ -> x) (fmap f fx) (pure y)               -- by right identity
liftA2 (\x _ -> x) (fmap f fx) (id (pure y))          -- id x = x by definition
liftA2 (\x _ -> x) (fmap f fx) (fmap id (pure y))     -- fmap id = id (functor law)
liftA2 (\x y -> (\x _ -> x) (f x) (id y)) fx (pure y) -- by naturality
liftA2 (\x _ -> f x) fx (pure y)                      -- apply constant function
At this point we've written fmap in terms of liftA2, pure and any y; fmap is entirely determined by the above laws. The remainder of the as-yet-unproven proof is left by the irresolute author as an exercise for the determined reader.
If you define (<.>) = liftA2 (.) then the laws become very nice:
pure id <.> f = f
f <.> pure id = f
f <.> (g <.> h) = (f <.> g) <.> h
Apparently pure f <.> pure g = pure (f . g) follows for free. I believe this formulation originates with Daniel Mlot.
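Spelled out as code (the name (<.>) is this answer's, not a library export):

import Control.Applicative (liftA2)

(<.>) :: Applicative f => f (b -> c) -> f (a -> b) -> f (a -> c)
(<.>) = liftA2 (.)

For example, (pure id <.> Just (+1)) <*> Just 2 gives Just 3, matching pure id <.> f = f up to application.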
Per the online book Learn You a Haskell (Functors, Applicative Functors and Monoids), the Applicative Functor laws are below, reorganized for formatting reasons; however, I am making this post community editable since it would be useful if someone could embed derivations:
identity] v = pure id <*> v
homomorphism] pure (f x) = pure f <*> pure x
interchange] u <*> pure y = pure ($ y) <*> u
composition] u <*> (v <*> w) = pure (.) <*> u <*> v <*> w
Note:
function composition] (.) :: (b->c) -> (a->b) -> (a->c)
application operator] ($) :: (a->b) -> a -> b
Found a treatment on Reddit
