I had a couple of hours of fun today trying to understand what the arrow operator applicative does in Haskell. I am now trying to verify whether my understanding is correct. In short, I found that for the arrow operator applicative
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
Before I proceed, I am aware of this discussion but found it to be very convoluted and much more complicated than what I hope I derived today.
In order to understand what the applicative does I started from the definition of the arrow applicative in base
instance Applicative ((->) a) where
  pure = const
  (<*>) f g x = f x (g x)
and then proceeded to explore what the expressions
(f <*> g <*> h) z
and
(f <*> g <*> h <*> v) z
yield when expanded.
From the definition we get that
f <*> g = \x -> f x (g x)
Because (<*>) is left associative, it follows that
f <*> g <*> h = (f <*> g) <*> h
= (\x -> f x (g x)) <*> h
= \y -> (\x -> f x (g x)) y (h y)
Therefore
(f <*> g <*> h) z = (\y -> (\x -> f x (g x)) y (h y)) z
= (\x -> f x (g x)) z (h z)
= (f z (g z)) (h z)
= f z (g z) (h z)
The last step is due to the fact that function application is left associative. Similarly
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
This, to me, provides a very clear intuitive idea of what the arrow applicative does. But is this correct?
To test the result I ran, for example, the following,
λ> ((\z g h v -> [z, g, h, v]) <*> (1+) <*> (2+) <*> (3+)) 4
[4,5,6,7]
which conforms to the result derived above.
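The identity can also be checked directly in GHCi by comparing both sides on the same inputs (my own check):
λ> let f z g h v = [z, g, h, v]
λ> (f <*> (1+) <*> (2+) <*> (3+)) 4 == f 4 ((1+) 4) ((2+) 4) ((3+) 4)
True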
Before doing the expansion above I found this applicative very difficult to understand, since currying allows surprisingly complicated behaviour to arise from its use. In particular, in
(f <*> g <*> h <*> v) z = f z (g z) (h z) (v z)
functions can return other functions. Here is an example:
λ> ((\z g -> g) <*> pure (++) <*> pure "foo" <*> pure "bar") undefined
"foobar"
In this case z = undefined is ignored by all of the functions, because pure x z = x and the first function ignores z by construction. Furthermore, the first function takes only two arguments, but its result, (++), is itself a function that consumes the remaining two arguments.
Yes, your calculations are correct.
Related
I think I've come up with an interesting "zippy" Applicative instance for Free.
data FreeMonad f a = Free (f (FreeMonad f a))
                   | Return a

instance Functor f => Functor (FreeMonad f) where
  fmap f (Return x) = Return (f x)
  fmap f (Free xs)  = Free (fmap (fmap f) xs)

instance Applicative f => Applicative (FreeMonad f) where
  pure = Return
  Return f <*> xs = fmap f xs
  fs <*> Return x = fmap ($x) fs
  Free fs <*> Free xs = Free $ liftA2 (<*>) fs xs
It's sort of a zip-longest strategy. For example, using data Pair r = Pair r r as the functor (so FreeMonad Pair is an externally labelled binary tree):
     +---+---+           +---+---+           +-----+-----+
     |       |    <*>    |       |    -->    |           |
  +--+--+    h           x    +--+--+     +--+--+     +--+--+
  |     |                     |     |     |     |     |     |
  f     g                     y     z    f x   g x   h y   h z
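For concreteness, here is a runnable sketch of that picture. The component-wise Applicative instance for Pair and the leaf/node/render helpers are additions for the demo, not part of the question:

{-# LANGUAGE DeriveFunctor #-}
import Control.Applicative (liftA2)

data FreeMonad f a = Free (f (FreeMonad f a))
                   | Return a

instance Functor f => Functor (FreeMonad f) where
  fmap f (Return x) = Return (f x)
  fmap f (Free xs)  = Free (fmap (fmap f) xs)

instance Applicative f => Applicative (FreeMonad f) where
  pure = Return
  Return f <*> xs       = fmap f xs
  fs       <*> Return x = fmap ($ x) fs
  Free fs  <*> Free xs  = Free (liftA2 (<*>) fs xs)

-- The Pair functor from the example, with the obvious component-wise Applicative.
data Pair r = Pair r r deriving Functor

instance Applicative Pair where
  pure x                = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)

-- Helpers for building and printing externally labelled binary trees.
leaf :: a -> FreeMonad Pair a
leaf = Return

node :: FreeMonad Pair a -> FreeMonad Pair a -> FreeMonad Pair a
node l r = Free (Pair l r)

render :: FreeMonad Pair String -> String
render (Return a)        = a
render (Free (Pair l r)) = "(" ++ render l ++ " " ++ render r ++ ")"

-- The trees from the picture: ((f g) h) zipped against (x (y z)).
main :: IO ()
main = putStrLn . render $
         node (node (leaf ("f " ++)) (leaf ("g " ++))) (leaf ("h " ++))
     <*> node (leaf "x") (node (leaf "y") (leaf "z"))
-- prints ((f x g x) (h y h z))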
I haven't seen anyone mention this instance before. Does it break any Applicative laws? (It doesn't agree with the usual Monad instance of course, which is "substitutey" rather than "zippy".)
Yes, it looks like this is a lawful Applicative. Weird!
As @JosephSible points out, you can read off the identity, homomorphism and interchange laws immediately from the definitions. The only tricky one is the composition law.
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
There are eight cases to check, so strap in.
One case with three Returns: pure (.) <*> Return f <*> Return g <*> Return z
Follows trivially from associativity of (.).
Three cases with one Free:
pure (.) <*> Free u <*> Return g <*> Return z
Working backwards from Free u <*> (Return g <*> Return z) you get fmap (\f -> f (g z)) (Free u), so this follows from the functor law.
pure (.) <*> Return f <*> Free v <*> Return z
fmap ($z) $ fmap (f .) (Free v)
fmap (\g -> f (g z)) (Free v) -- functor law
fmap (f . ($z)) (Free v)
fmap f (fmap ($z) (Free v)) -- functor law
Return f <*> (Free v <*> Return z) -- RHS of `<*>` (first and second cases)
QED
pure (.) <*> Return f <*> Return g <*> Free w
Reduces immediately to fmap (f . g) (Free w), so follows from the functor law.
Three cases with one Return:
pure (.) <*> Return f <*> Free v <*> Free w
Free $ fmap (<*>) (fmap (fmap (f.)) v) <*> w
Free $ fmap (\y z -> fmap (f.) y <*> z) v <*> w -- functor law
Free $ fmap (\y z -> pure (.) <*> Return f <*> y <*> z) v <*> w -- definitions of fmap and (<*>)
Free $ fmap (\y z -> Return f <*> (y <*> z)) v <*> w -- composition
Free $ fmap (\y z -> fmap f (y <*> z)) v <*> w -- RHS of fmap, definition of liftA2
Free $ fmap (fmap f) $ fmap (<*>) v <*> w -- functor law, eta reduce
fmap f $ Free $ liftA2 (<*>) v w -- RHS of fmap
Return f <*> Free v <*> Free w -- RHS of <*>
QED.
pure (.) <*> Free u <*> Return g <*> Free w
Free ((fmap (fmap ($g))) (fmap (fmap (.)) u)) <*> Free w
Free (fmap (fmap (\f -> f . g)) u) <*> Free w -- functor law, twice
Free $ fmap (<*>) (fmap (fmap (\f -> f . g)) u) <*> w
Free $ fmap (\x z -> fmap (\f -> f . g) x <*> z) u <*> w -- functor law
Free $ fmap (\x z -> pure (.) <*> x <*> Return g <*> z) u <*> w
Free $ fmap (\x z -> x <*> (Return g <*> z)) u <*> w -- composition
Free $ fmap (<*>) u <*> fmap (Return g <*>) w -- https://gist.github.com/benjamin-hodgson/5b36259986055d32adea56d0a7fa688f
Free u <*> fmap g (Free w) -- RHS of <*> and fmap
Free u <*> (Return g <*> Free w)
QED.
pure (.) <*> Free u <*> Free v <*> Return z
Free (fmap (<*>) (fmap (fmap (.)) u) <*> v) <*> Return z
Free (fmap (\x y -> fmap (.) x <*> y) u <*> v) <*> Return z -- functor law
Free $ fmap (fmap ($z)) (fmap (\x y -> fmap (.) x <*> y) u <*> v)
Free $ liftA2 (\x y -> (fmap ($z)) (fmap (.) x <*> y)) u v -- see Lemma, with f = fmap ($z) and g x y = fmap (.) x <*> y
Free $ liftA2 (\x y -> fmap (.) x <*> y <*> Return z) u v -- interchange
Free $ liftA2 (\x y -> x <*> (y <*> Return z)) u v -- composition
Free $ liftA2 (\f g -> f <*> fmap ($z) g) u v -- interchange
Free $ fmap (<*>) u <*> (fmap (fmap ($z)) v) -- https://gist.github.com/benjamin-hodgson/5b36259986055d32adea56d0a7fa688f
Free u <*> Free (fmap (fmap ($z)) v)
Free u <*> (Free v <*> Return z)
QED.
Three Frees: pure (.) <*> Free u <*> Free v <*> Free w
This case only exercises the Free/Free case of <*>, whose right-hand side is identical to that of Compose's <*>. So this one follows from the correctness of Compose's instance.
For the pure (.) <*> Free u <*> Free v <*> Return z case I used a lemma:
Lemma: fmap f (fmap g u <*> v) = liftA2 (\x y -> f (g x y)) u v.
fmap f (fmap g u <*> v)
pure (.) <*> pure f <*> fmap g u <*> v -- composition
fmap (f .) (fmap g u) <*> v -- homomorphism
fmap ((f .) . g) u <*> v -- functor law
liftA2 (\x y -> f (g x y)) u v -- eta expand
QED.
Throughout, I'm variously using the functor and applicative laws, together with the induction hypothesis.
This was pretty fun to prove! I'd love to see a formal proof in Coq or Agda (though I suspect the termination/positivity checker might mess it up).
For the sake of completeness, I will use this answer to expand on my comment above:
Though I didn't actually write down the proof, I believe the mixed-Free-and-Return cases of the composition law must hold due to parametricity. I also suspect that should be easier to show using the monoidal presentation.
The monoidal presentation of the Applicative instance here is:
unit = Return ()
Return x *&* v = (x,) <$> v
u *&* Return y = (,y) <$> u
-- I will also piggyback on the `Compose` applicative, as suggested above.
Free u *&* Free v = Free (getCompose (Compose u *&* Compose v))
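As a reminder of how this monoidal presentation relates to the usual one: for any Applicative the generic product is liftA2 (,), and (<*>) can be recovered from it (standard equivalences; the names pairA and apA are mine):

import Control.Applicative (liftA2)

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA = liftA2 (,)                        -- the generic version of (*&*)

apA :: Applicative f => f (a -> b) -> f a -> f b
apA u v = uncurry ($) <$> pairA u v       -- recovers (<*>)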
Under the monoidal presentation, the composition/associativity law is:
(u *&* v) *&* w ~ u *&* (v *&* w)
Now let's consider one of its mixed cases; say, the Free-Return-Free one:
(Free fu *&* Return y) *&* Free fw ~ Free fu *&* (Return y *&* Free fw)
(Free fu *&* Return y) *&* Free fw -- LHS
((,y) <$> Free fu) *&* Free fw
Free fu *&* (Return y *&* Free fw) -- RHS
Free fu *&* ((y,) <$> Free fw)
Let's have a closer look at this left-hand side. (,y) <$> Free fu applies (,y) :: a -> (a, b) to the a values found in Free fu :: FreeMonad f a. Parametricity (or, more specifically, the free theorem for (*&*)) means that it doesn't matter if we do that before or after using (*&*). That means the left-hand side amounts to:
first (,y) <$> (Free fu *&* Free fw)
Analogously, the right-hand side becomes:
second (y,) <$> (Free fu *&* Free fw)
Since first (,y) :: (a, c) -> ((a, b), c) and second (y,) :: (a, c) -> (a, (b, c)) are the same up to reassociation of pairs, we have:
first (,y) <$> (Free fu *&* Free fw) ~ second (y,) <$> (Free fu *&* Free fw)
-- LHS ~ RHS
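To spell out the "same up to reassociation" step, here is the reassociation made explicit (assoc and the two check functions are an illustration of my own):

{-# LANGUAGE TupleSections #-}
import Data.Bifunctor (first, second)

assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a, b), c) = (a, (b, c))

-- For any y, reassociating the LHS shape gives exactly the RHS shape:
--   assoc . first (,y) = second (y,)
lhsShape, rhsShape :: b -> (a, c) -> (a, (b, c))
lhsShape y = assoc . first (,y)
rhsShape y = second (y,)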
The other mixed cases can be dealt with analogously. For the rest of the proof, see Benjamin Hodgson's answer.
From the documentation for Applicative:
If f is also a Monad, it should satisfy
pure = return
(<*>) = ap
(*>) = (>>)
So this implementation would break the applicative laws that say it must agree with the Monad instance.
That said, there's no reason you couldn't have a newtype wrapper for FreeMonad that didn't have a monad instance, but did have the above applicative instance
newtype Zip f a = Zip { runZip :: FreeMonad f a }
  deriving Functor

instance Applicative f => Applicative (Zip f) where -- ...
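One way the elided instance body might be filled in (a sketch, assuming the FreeMonad type and its zippy Applicative instance above are in scope):

instance Applicative f => Applicative (Zip f) where
  pure              = Zip . Return
  Zip fs <*> Zip xs = Zip (fs <*> xs)   -- delegate to the zippy instance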
Say I have functions
g :: a -> b, h :: a -> c
and
f :: b -> c -> d.
Is it possible to write the function
f' :: a -> a -> d
given by
f' x y = f (g x) (h y)
in point-free style?
One can write the function
f' :: a -> d, f' x = f (g x) (h x)
in point free style by setting
f' = (f <$> g) <*> h
but I couldn't figure out how to do the more general case.
We have:
k x y = (f (g x)) (h y)
and we wish to write k in point-free style.
The first argument passed to k is x. What do we need to do with x? Well, first we need to call g on it, and then f, and then do something fancy to apply this to (h y).
k = fancy . f . g
What is this fancy? Well:
k x y = (fancy . f . g) x y
= fancy (f (g x)) y
= f (g x) (h y)
So we desire fancy z y = z (h y). Eta-reducing, we get fancy z = z . h, or fancy = (. h).
k = (. h) . f . g
A more natural way to think about it might be
            ┌───┐            ┌───┐
     x ─────│ g │─── g x ────│   │
    /       └───┘            │   │
(x, y)                       │ f │──── f (g x) (h y)
    \       ┌───┐            │   │
     y ─────│ h │─── h y ────│   │
            └───┘            └───┘
 └──────────────────────────────────────────────────┘
                          k
Enter Control.Arrow:
k = curry ((g *** h) >>> uncurry f)
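A quick check of both point-free forms with concrete functions (the example f, g, h below are mine; any functions with matching types would do):

import Control.Arrow ((***), (>>>))

g :: Int -> String
g = show

h :: Int -> Bool
h = even

f :: String -> Bool -> (String, Bool)
f = (,)

k1, k2 :: Int -> Int -> (String, Bool)
k1 = (. h) . f . g
k2 = curry ((g *** h) >>> uncurry f)

-- λ> k1 3 4
-- ("3",True)
-- λ> k2 3 4
-- ("3",True)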
Take a look at online converter
It converted
f' x y = f (g x) (h y)
into
f' = (. h) . f . g
with the flow of transformations
f' = id (fix (const (flip ((.) . f . g) h)))
f' = fix (const (flip ((.) . f . g) h))
f' = fix (const ((. h) . f . g))
f' = (. h) . f . g
This is slightly longer, but a little easier to follow, than (. h) . f . g.
First, rewrite f' slightly to take a tuple instead of two arguments. (In other words, we're uncurrying your original f'.)
f' (x, y) = f (g x) (h y)
You can pull a tuple apart with fst and snd instead of pattern matching on it:
f' t = f (g (fst t)) (h (snd t))
Using function composition, the above becomes
f' t = f ((g . fst) t) ((h . snd) t)
which, hey, looks a lot like the version you could make point-free using applicative style:
f' = let g' = g . fst
         h' = h . snd
     in (f <$> g') <*> h'
The only problem left is that f' :: (a, a) -> d. You can fix this by explicitly currying it:
f' :: a -> a -> d
f' = let g' = g . fst
         h' = h . snd
     in curry $ (f <$> g') <*> h'
(This is very similar, by the way, to the Control.Arrow solution added by Lynn.)
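A quick sanity check of the curried version with concrete functions (the example f, g, h are mine):

f :: Int -> Int -> Int
f = (+)

g, h :: Int -> Int
g = (* 2)
h = (+ 10)

f' :: Int -> Int -> Int
f' = let g' = g . fst
         h' = h . snd
     in curry $ (f <$> g') <*> h'

-- λ> f' 3 4        -- f (g 3) (h 4) = 6 + 14
-- 20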
Using the "three rules of operator sections" as applied to the (.) function composition operator,
(.) f g  =  (f . g)  =  (f .) g  =  (. g) f   -- the argument goes into the free slot
--             1            2           3
this is derivable by a few straightforward mechanical steps:
k x y = (f (g x)) (h y) -- a (b c) === (a . b) c
= (f (g x) . h) y
= (. h) (f (g x)) y
= (. h) ((f . g) x) y
= ((. h) . (f . g)) x y
Lastly, (.) is associative, so the inner parens may be dropped.
The general procedure is to strive to reach a situation where eta-reduction can be performed, i.e. where we can get rid of the arguments because they appear in the same order and outside any parentheses:
k x y = (......) y
=>
k x = (......)
Lather, rinse, repeat.
Another trick is to turn two arguments into one, or vice versa, with the equation
curry f x y = f (x,y)
so, your
f (g x) (h y) = (f.g) x (h y) -- by B-combinator rule
= (f.g.fst) (x,y) ((h.snd) (x,y))
= (f.g.fst <*> h.snd) (x,y) -- by S-combinator rule
= curry (f.g.fst <*> h.snd) x y
This is the same as the answer by @chepner, but presented more concisely.
So, you see, your (f.g <*> h) x¹ just becomes (f.g.fst <*> h.snd) (x,y). Same difference.
¹ (because, for functions, (<$>) = (.))
Control.Compose
(g ~> h ~> id) f
Data.Function.Meld
f $* g $$ h *$ id
Data.Function.Tacit
lurryA #N2 (f <$> (g <$> _1) <*> (h <$> _2))
lurryA #N5 (_1 <*> (_2 <*> _4) <*> (_3 <*> _5)) f g h
Related articles
Semantic Editor Combinators, Conal Elliott, 2008/11/24
Pointless fun, Matt Hellige, 2008/12/03
The Composition Applicative Law is as follows:
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
Here's my attempt at proving the Composition law for the ((->) r) type:
RHS:
u <*> (v <*> w)
u <*> ( \y -> v y (w y) )
\x -> u x ( (\y -> v y (w y)) x )
\x -> u x ( v x (w x)) -- (A)
LHS:
pure (.) <*> u <*> v <*> w
const (.) <*> u <*> v <*> w
(\f -> const (.) f (u f)) <*> v <*> w
(\f -> (.) (u f)) <*> v <*> w
(\g -> (\f -> (.) (u f)) g (v g)) <*> w
\x -> (\g -> (\f -> (.) (u f)) g (v g)) x (w x)
-- Expanding the lambda by applying it to x
\x -> ((\f -> (.) (u f)) x (v x)) (w x)
\x -> (( (.) (u x)) (v x)) (w x)
\x -> ((u x) . (v x)) (w x) -- (B)
I don't think (A) & (B) are equivalent, so where did I make a mistake? I would appreciate any help or suggestions.
You're almost there. You just need to use the definition of (.), which is
(f . g) x = f (g x)
After substituting that definition in the last line of your LHS calculation, you should have two obviously equal lambdas.
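Spelled out, that substitution gives:
\x -> ((u x) . (v x)) (w x)
\x -> (u x) ((v x) (w x))   -- definition of (.)
\x -> u x (v x (w x))       -- which is exactly (A)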
I'm trying to check that the Applicative laws hold for the function type ((->) r), and here's what I have so far:
-- Identity
pure (id) <*> v = v
-- Starting with the LHS
pure (id) <*> v
const id <*> v
(\x -> const id x (g x))
(\x -> id (g x))
(\x -> g x)
g x
v
-- Homomorphism
pure f <*> pure x = pure (f x)
-- Starting with the LHS
pure f <*> pure x
const f <*> const x
(\y -> const f y (const x y))
(\y -> f (x))
(\_ -> f x)
pure (f x)
Did I perform the steps for the first two laws correctly?
I'm struggling with the interchange & composition laws. For interchange, so far I have the following:
-- Interchange
u <*> pure y = pure ($y) <*> u
-- Starting with the LHS
u <*> pure y
u <*> const y
(\x -> g x (const y x))
(\x -> g x y)
-- I'm not sure how to proceed beyond this point.
I would appreciate any help for the steps to verify the Interchange & Composition applicative laws for the ((->) r) type. For reference, the Composition applicative law is as follows:
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
I think in your "Identity" proof, you should replace g with v everywhere (otherwise what is g and where did it come from?). Similarly, in your "Interchange" proof, things look okay so far, but the g that magically appears should just be u. To continue that proof, you could start reducing the RHS and verify that it also produces \x -> u x y.
Composition is more of the same: plug in the definitions of pure and (<*>) on both sides, then start calculating on both sides. You'll soon come to some bare lambdas that will be easy to prove equivalent.
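For instance, carrying out the RHS reduction suggested above for the interchange law:
pure ($y) <*> u
const ($y) <*> u
(\x -> const ($y) x (u x))
(\x -> ($y) (u x))
(\x -> u x y)   -- the same as the reduced LHS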
I'm playing around with formulating Applicative in terms of pure and liftA2 (so that (<*>) = liftA2 id becomes a derived combinator).
I can think of a bunch of candidate laws, but I'm not sure what the minimal set would be.
f <$> pure x = pure (f x)
f <$> liftA2 g x y = liftA2 ((f .) . g) x y
liftA2 f (pure x) y = f x <$> y
liftA2 f x (pure y) = liftA2 (flip f) (pure y) x
liftA2 f (g <$> x) (h <$> y) = liftA2 (\x y -> f (g x) (h y)) x y
...
Based on McBride and Paterson's laws for Monoidal (section 7) I'd suggest the following laws for liftA2 and pure.
left and right identity
liftA2 (\_ y -> y) (pure x) fy = fy
liftA2 (\x _ -> x) fx (pure y) = fx
associativity
liftA2 id (liftA2 (\x y z -> f x y z) fx fy) fz =
liftA2 (flip id) fx (liftA2 (\y z x -> f x y z) fy fz)
naturality
liftA2 (\x y -> o (f x) (g y)) fx fy = liftA2 o (fmap f fx) (fmap g fy)
It isn't immediately apparent that these are sufficient to cover the relationship between fmap and Applicative's pure and liftA2. Let's see if we can prove from the above laws that
fmap f fx = liftA2 id (pure f) fx
We'll start by working on fmap f fx. All of the following are equivalent.
fmap f fx
liftA2 (\x _ -> x) (fmap f fx) ( pure y ) -- by right identity
liftA2 (\x _ -> x) (fmap f fx) ( id (pure y)) -- id x = x by definition
liftA2 (\x _ -> x) (fmap f fx) (fmap id (pure y)) -- fmap id = id (Functor law)
liftA2 (\x y -> (\x _ -> x) (f x) (id y)) fx (pure y) -- by naturality
liftA2 (\x _ -> f x ) fx (pure y) -- apply constant function
At this point we've written fmap in terms of liftA2, pure and any y; fmap is entirely determined by the above laws. The remainder of the as-yet-unproven proof is left by the irresolute author as an exercise for the determined reader.
If you define (<.>) = liftA2 (.) then the laws become very nice:
pure id <.> f = f
f <.> pure id = f
f <.> (g <.> h) = (f <.> g) <.> h
Apparently pure f <.> pure g = pure (f . g) follows for free. I believe this formulation originates with Daniel Mlot.
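For reference, the derived combinators discussed here can be written out like this (standard definitions; the name ap' is mine, to avoid clashing with the real (<*>)):

import Control.Applicative (liftA2)

(<.>) :: Applicative f => f (b -> c) -> f (a -> b) -> f (a -> c)
(<.>) = liftA2 (.)

ap' :: Applicative f => f (a -> b) -> f a -> f b
ap' = liftA2 id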
Per the online book, Learn You A Haskell: Functors, Applicative Functors and Monoids, the Applicative Functor laws are below, but reorganized for formatting reasons; however, I am making this post community editable since it would be useful if someone could embed derivations:
identity] v = pure id <*> v
homomorphism] pure (f x) = pure f <*> pure x
interchange] u <*> pure y = pure ($ y) <*> u
composition] u <*> (v <*> w) = pure (.) <*> u <*> v <*> w
Note:
function composition] (.) :: (b -> c) -> (a -> b) -> (a -> c)
application operator] ($) :: (a -> b) -> a -> b
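As an example of the kind of derivation that could be embedded, here is the homomorphism law checked for Maybe (using pure = Just and Just f <*> Just x = Just (f x)):
pure f <*> pure x
Just f <*> Just x   -- pure = Just for Maybe
Just (f x)          -- definition of (<*>) for Maybe
pure (f x)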
Found a treatment on Reddit