Is `x >> pure y` equivalent to `liftM (const y) x`?

The two expressions
y >> pure x
liftM (const x) y
have the same type in Haskell.
I was curious whether they were equivalent, but I could neither produce a proof of the fact nor a counterexample against it.
If we rewrite the two expressions so that we can eliminate the x and y then the question becomes whether the two following functions are equivalent
flip (>>) . pure
liftM . const
Note that both these functions have type Monad m => a -> m b -> m a.
I used the laws that Haskell gives for monads, applicatives, and functors to transform both expressions into various equivalent forms, but I was not able to produce a sequence of equivalences between the two.
For instance I found that y >> pure x can be rewritten as follows
y >>= const (pure x)
y *> pure x
(id <$ y) <*> pure x
fmap (const id) y <*> pure x
and liftM (const x) y can be rewritten as follows
fmap (const x) y
pure (const x) <*> y
None of these forms jumps out at me as necessarily equivalent to the others, but I also cannot think of any case where they would not be equivalent.
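Not a proof, but as a sanity check, the two expressions agree on some standard monads. A quick GHCi session (liftM comes from Control.Monad; the list and Maybe values are arbitrary examples):
> import Control.Monad (liftM)
> [1,2,3] >> pure 'a'
"aaa"
> liftM (const 'a') [1,2,3]
"aaa"
> (Nothing :: Maybe ()) >> pure 'a'
Nothing
> liftM (const 'a') (Nothing :: Maybe ())
Nothing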

The other answer gets there eventually, but it takes a long-winded route. All that is actually needed are the definitions of liftM, const, and a single monad law: m1 >> m2 and m1 >>= \_ -> m2 must be semantically identical. (Indeed, this is the default implementation of (>>), and it is rare to override it.) Then:
liftM (const x) y
= { definition of liftM* }
y >>= \z -> pure (const x z)
= { definition of const }
y >>= \z -> pure x
= { monad law }
y >> pure x
* Okay, okay, so the actual definition of liftM uses return instead of pure. Whatever.

Yes, they are the same.
Let's start with flip (>>) . pure, which is the pointfree version of x >> pure y you provide:
flip (>>) . pure
It is the case that flip (>>) is just (=<<) . const, so we can rewrite this as:
((=<<) . const) . pure
Since function composition ((.)) is associative, we can write this as:
(=<<) . (const . pure)
Now we would like to rewrite const . pure. Notice that const is just pure for the reader functor (a ->), whose fmap is (.). By naturality of pure (the free theorem pure . f = fmap f . pure), instantiating the outer pure as const gives const . pure = (.) pure . const.
(=<<) . ((.) pure . const)
Now we associate again:
((=<<) . (.) pure) . const
((=<<) . (.) pure) is the definition of liftM¹, so we can substitute:
liftM . const
And that is the goal. The two are the same.
1: The definition of liftM is liftM f m1 = do { x1 <- m1; return (f x1) }. We can desugar the do into liftM f m1 = m1 >>= return . f, flip the (>>=) to get liftM f m1 = return . f =<< m1, and elide the m1 to get liftM f = (return . f =<<). A little pointfree magic later, we get liftM = (=<<) . (.) return, which is the same as (=<<) . (.) pure since return coincides with pure.
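To see that this chain really type-checks end to end, here is a small compilable sketch; the names s1 through s4 are mine, purely for illustration, and all four definitions denote the same function:
import Control.Monad (liftM)
s1, s2, s3, s4 :: Monad m => a -> m b -> m a
s1 = flip (>>) . pure
s2 = ((=<<) . const) . pure
s3 = (=<<) . ((.) pure . const)
s4 = liftM . const
-- e.g. s1 'x' [(),()], s2 'x' [(),()], s3 'x' [(),()] and s4 'x' [(),()]
-- all evaluate to "xx"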

One more possible route, exploiting the applicative laws:
For instance I found that y >> pure x can be rewritten as follows [...]
fmap (const id) y <*> pure x
That amounts to...
fmap (const id) y <*> pure x
pure ($ x) <*> fmap (const id) y -- interchange law of applicatives
fmap ($ x) (fmap (const id) y) -- fmap in terms of <*>
fmap (($ x) . const id) y -- composition law of functors
fmap (const x) y
... which, as you noted, is the same as liftM (const x) y.
That this route requires only applicative laws and not monad ones reflects how (*>) (the Applicative counterpart of (>>)) is an Applicative method.

Related

Do the monadic liftM and the functorial fmap have to be equivalent?

(Note: I'm phrasing the question using Haskell terminology; answers are welcome to use the same terminology and/or the mathematical language of category theory, including proper mathematical definitions and axioms where I speak of functor and monad laws.)
It is well known that every monad is also a functor, with the functor's fmap equivalent to the monad's liftM. This makes sense, and of course holds for all common/reasonable monad instances.
My question is whether this equivalence of fmap and liftM provably follows from the functor and monad laws. If so, it would be nice to see how; if not, it would be nice to see a counterexample.
To clarify, the functor and monad laws I know are the following:
fmap id ≡ id
fmap f . fmap g ≡ fmap (f . g)
return x >>= f ≡ f x
x >>= return ≡ x
(x >>= f) >>= g ≡ x >>= (\x -> f x >>= g)
I don't see anything in these laws which relates the functor functionality (fmap) to the monad functionality (return and >>=), and so I find it hard to see how the equivalence of fmap and liftM (defined as liftM f x = x >>= (return . f)) can be derived from them. Maybe there is an argument for it which is just not straightforward enough for me to spot? Or maybe I'm missing some laws?
What you have missed is the parametricity law, otherwise known as the free theorem. One of the consequences of parametricity is that all polymorphic functions are natural transformations. Naturality says that any polymorphic function of the form
t :: F a -> G a
where F and G are functors, commutes with fmap:
t . fmap f = fmap f . t
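To ground this, here is a concrete instance of that naturality square; a sketch taking t = maybeToList (from Data.Maybe), with F = Maybe and G = [] (the name checkNaturality is mine):
import Data.Maybe (maybeToList)
-- maybeToList :: Maybe a -> [a] is polymorphic, hence natural:
-- maybeToList . fmap f = fmap f . maybeToList
checkNaturality :: Bool
checkNaturality =
  (maybeToList . fmap (+1)) (Just 41) == (fmap (+1) . maybeToList) (Just 41)
  -- both sides evaluate to [42], so this is True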
If we can make something involving liftM that has the form of a natural transformation, then we will have an equation relating liftM and fmap. liftM itself doesn't produce a natural transformation:
liftM :: (a -> b) -> m a -> m b
-- ^______^
-- these need to be the same
But here's an idea, since (a ->) is a functor:
m :: m a
flip liftM m :: (a -> b) -> m b
--                 F b   ->  G b   (with F = ((->) a), G = m)
Let's try using parametricity on flip liftM m:
flip liftM m . fmap f = fmap f . flip liftM m
The former fmap is on the (a ->) functor, where fmap = (.), so
flip liftM m . (.) f = fmap f . flip liftM m
Eta expand
(flip liftM m . (.) f) g = (fmap f . flip liftM m) g
flip liftM m (f . g) = fmap f (flip liftM m g)
liftM (f . g) m = fmap f (liftM g m)
This is promising. Take g = id:
liftM (f . id) m = fmap f (liftM id m)
liftM f m = fmap f (liftM id m)
It would suffice to show liftM id = id. That probably follows from its definition:
liftM id m
= m >>= return . id
= m >>= return
= m
Yep! Qed.
For this exercise, I found it easier to work with join rather than >>=. A monad can be equivalently defined through return and join, satisfying
1) join . join = join . fmap join
2) join . return = join . fmap return = id
Indeed, join and >>= are inter-definable:
x >>= f = join (fmap f x)
join x = x >>= id
And the laws you mentioned correspond to those above (I won't prove this).
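To make those two laws concrete, consider the list monad, where return x = [x] and join = concat: law (1) says flattening a triply nested list from the outside in agrees with flattening it from the inside out, and law (2) says wrapping and then flattening is a no-op. A quick check (the names law1 and law2, and the example values, are mine):
-- law 1: join . join = join . fmap join
law1 = concat (concat [[[1],[2]],[[3]]]) == concat (map concat [[[1],[2]],[[3]]])
-- both sides are [1,2,3]
-- law 2: join . return = join . fmap return = id
law2 = concat [[1,2,3]] == [1,2,3] && concat (map (:[]) [1,2,3]) == [1,2,3]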
Then, we have:
liftM f x
= { def liftM }
x >>= return . f
= { def >>= }
join (fmap (return . f) x)
= { def . and $ }
join . fmap (return . f) $ x
= { fmap law }
join . fmap return . fmap f $ x
= { join law 2 }
id . fmap f $ x
= { def id, ., $ }
fmap f x

Applicative: Prove `pure f <*> x = pure (flip ($)) <*> x <*> pure f`

During my study of the Typeclassopedia I encountered this proof, but I'm not sure if my proof is correct. The question is:
One might imagine a variant of the interchange law that says something about applying a pure function to an effectful argument. Using the above laws, prove that:
pure f <*> x = pure (flip ($)) <*> x <*> pure f
Where "above laws" points to Applicative Laws, briefly:
pure id <*> v = v -- identity law
pure f <*> pure x = pure (f x) -- homomorphism
u <*> pure y = pure ($ y) <*> u -- interchange
u <*> (v <*> w) = pure (.) <*> u <*> v <*> w -- composition
My proof is as follows:
pure f <*> x = pure (($) f) <*> x -- identical
pure f <*> x = pure ($) <*> pure f <*> x -- homomorphism
pure f <*> x = pure (flip ($)) <*> x <*> pure f -- flip arguments
The first two steps of your proof look fine, but the last step doesn't. While the definition of flip allows you to use a law like:
f a b = flip f b a
that doesn't mean:
pure f <*> a <*> b = pure (flip f) <*> b <*> a
In fact, this is false in general. Compare the output of these two lines:
pure (+) <*> [1,2,3] <*> [4,5]
pure (flip (+)) <*> [4,5] <*> [1,2,3]
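For reference, here is what those two lines produce; the results contain the same elements but in different orders:
> pure (+) <*> [1,2,3] <*> [4,5]
[5,6,6,7,7,8]
> pure (flip (+)) <*> [4,5] <*> [1,2,3]
[5,6,7,6,7,8]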
If you want a hint, you are going to need to use the original interchange law at some point to prove this variant.
In fact, I found I had to use the homomorphism, interchange, and composition laws to prove this, and part of the proof was pretty tricky, especially getting the sections right: ($ f) is different from (($) f). It was helpful to have GHCi open to double-check that each step of my proof type-checked and gave the right result. (Your proof above type checks fine; it's just that the last step wasn't justified.)
> let f = sqrt
> let x = [1,4,9]
> pure f <*> x
[1.0,2.0,3.0]
> pure (flip ($)) <*> x <*> pure f
[1.0,2.0,3.0]
>
I ended up proving it backwards:
pure (flip ($)) <*> x <*> pure f
= (pure (flip ($)) <*> x) <*> pure f -- <*> is left-associative
= pure ($ f) <*> (pure (flip ($)) <*> x) -- interchange
= pure (.) <*> pure ($ f) <*> pure (flip ($)) <*> x -- composition
= pure (($ f) . (flip ($))) <*> x -- homomorphism (applied twice)
= pure (flip ($) f . flip ($)) <*> x -- identical
= pure f <*> x -- since flip ($) f . flip ($) ≡ f (explained below)
Explanation of the last transformation:
flip ($) has type a -> (a -> c) -> c: intuitively, it first takes an argument of type a, then a function that accepts that argument, and in the end it calls the function with the first argument. So flip ($) 5 takes as its argument a function, which gets called with 5 as its argument. If we pass (+2) to flip ($) 5, we get flip ($) 5 (+2), which is equivalent to the expression (+2) $ 5, evaluating to 7.
flip ($) f is equivalent to \x -> x $ f; that is, it takes a function as input and calls it with the function f as its argument.
The composition of these functions works like this: first, flip ($) takes x as its first argument and returns a function flip ($) x; this function awaits a function as its last argument, which will be called with x as its argument. Now this function flip ($) x is passed to flip ($) f, or, writing out its equivalent, (\x -> x $ f) (flip ($) x). This results in the expression (flip ($) x) f, which is equivalent to f $ x.
You can check the type of flip ($) f . flip ($) is something like this (depending on your function f):
λ: let f = sqrt
λ: :t (flip ($) f) . (flip ($))
(flip ($) f) . (flip ($)) :: Floating c => c -> c
I'd remark that such theorems are, as a rule, a lot less involved when written in the mathematical style of a monoidal functor rather than the applicative version, i.e. with the equivalent class
class Functor f => Monoidal f where
  pure :: a -> f a
  (⑂) :: f a -> f b -> f (a,b)
Then the laws are
id <$> v = v
f <$> (g <$> v) = f . g <$> v
f <$> pure x = pure (f x)
x ⑂ pure y = fmap (,y) x
a ⑂ (b ⑂ c) = assoc <$> (a ⑂ b) ⑂ c
where assoc ((x,y),z) = (x,(y,z)).
The theorem then reads
pure u ⑂ x = swap <$> x ⑂ pure u
Proof:
swap <$> x ⑂ pure u
= swap <$> fmap (,u) x -- fourth law
= swap . (,u) <$> x -- functor composition
= (u,) <$> x -- swap . (,u) ≡ (u,)
= pure u ⑂ x -- the fourth law, mirrored: pure y ⑂ x = fmap (y,) x
□
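For what it's worth, this setting can be rendered as runnable Haskell; a sketch, spelling ⑂ as <.> and renaming pure to unit to avoid clashing with the Prelude, with Maybe as an example instance (the names unit, <.>, and check are my own choices):
import Data.Tuple (swap)
class Functor f => Monoidal f where
  unit  :: a -> f a
  (<.>) :: f a -> f b -> f (a, b)
instance Monoidal Maybe where
  unit = Just
  Just x <.> Just y = Just (x, y)
  _      <.> _      = Nothing
-- the theorem, checked at one point:
check :: Bool
check = (unit 'a' <.> Just True) == (swap <$> (Just True <.> unit 'a'))
-- both sides are Just ('a',True), so this is True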

Relationship between fmap and bind

After looking up the Control.Monad documentation, I'm confused about
this passage:
The above laws imply:
fmap f xs = xs >>= return . f
How do they imply that?
Control.Applicative says
As a consequence of these laws, the Functor instance for f will satisfy
fmap f x = pure f <*> x
The relationship between Applicative and Monad says
pure = return
(<*>) = ap
ap says
return f `ap` x1 `ap` ... `ap` xn
is equivalent to
liftMn f x1 x2 ... xn
Therefore
fmap f x = pure f <*> x
= return f `ap` x
= liftM f x
= do { v <- x; return (f v) }
= x >>= return . f
Functor instances are unique, in the sense that if F is a Functor and you have a function foobar :: (a -> b) -> F a -> F b such that foobar id = id (that is, it follows the first functor law) then foobar = fmap. Now, consider this function:
liftM :: Monad f => (a -> b) -> f a -> f b
liftM f xs = xs >>= return . f
What is liftM id xs, then?
liftM id xs
xs >>= return . id
-- id does nothing, so...
xs >>= return
-- By the second monad law...
xs
liftM id xs = xs; that is, liftM id = id. Therefore, liftM = fmap; or, in other words...
fmap f xs = xs >>= return . f
epheriment's answer, which routes through the Applicative laws, is also a valid way of reaching this conclusion.
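As a quick concrete check of that conclusion in GHCi, using the list monad:
> fmap (+1) [1,2,3]
[2,3,4]
> [1,2,3] >>= return . (+1)
[2,3,4]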

Applicative Laws for the ((->) r) type

I'm trying to check that the Applicative laws hold for the function type ((->) r), and here's what I have so far:
-- Identity
pure (id) <*> v = v
-- Starting with the LHS
pure (id) <*> v
const id <*> v
(\x -> const id x (g x))
(\x -> id (g x))
(\x -> g x)
g x
v
-- Homomorphism
pure f <*> pure x = pure (f x)
-- Starting with the LHS
pure f <*> pure x
const f <*> const x
(\y -> const f y (const x y))
(\y -> f (x))
(\_ -> f x)
pure (f x)
Did I perform the steps for the first two laws correctly?
I'm struggling with the interchange & composition laws. For interchange, so far I have the following:
-- Interchange
u <*> pure y = pure ($y) <*> u
-- Starting with the LHS
u <*> pure y
u <*> const y
(\x -> g x (const y x))
(\x -> g x y)
-- I'm not sure how to proceed beyond this point.
I would appreciate any help for the steps to verify the Interchange & Composition applicative laws for the ((->) r) type. For reference, the Composition applicative law is as follows:
pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
I think in your "Identity" proof, you should replace g with v everywhere (otherwise what is g and where did it come from?). Similarly, in your "Interchange" proof, things look okay so far, but the g that magically appears should just be u. To continue that proof, you could start reducing the RHS and verify that it also produces \x -> u x y.
Composition is more of the same: plug in the definitions of pure and (<*>) on both sides, then start calculating on both sides. You'll soon come to some bare lambdas that will be easy to prove equivalent.
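For concreteness, here is a sketch of how that calculation goes. For ((->) r) we have pure = const and f <*> g = \x -> f x (g x), so the interchange RHS reduces to the same normal form as the LHS above:
-- Interchange, RHS:
pure ($ y) <*> u
= const ($ y) <*> u
= \x -> const ($ y) x (u x)
= \x -> ($ y) (u x)
= \x -> u x y -- matches the LHS reduction
and both sides of composition reduce to \x -> u x (v x (w x)):
-- Composition, LHS:
pure (.) <*> u <*> v <*> w
= ((const (.) <*> u) <*> v) <*> w
= ((\x -> (.) (u x)) <*> v) <*> w -- const (.) x = (.)
= (\x -> (.) (u x) (v x)) <*> w -- i.e. \x -> u x . v x
= \x -> (u x . v x) (w x)
= \x -> u x (v x (w x))
-- Composition, RHS:
u <*> (v <*> w)
= \x -> u x ((v <*> w) x)
= \x -> u x (v x (w x))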

How arbitrary is the "ap" implementation for monads?

I am currently studying the connections between monads and applicative functors.
I see two implementations for ap:
ap m1 m2 = do { f <- m1 ; x <- m2 ; return (f x) }
and
ap m1 m2 = do { x <- m2 ; f <- m1 ; return (f x) }
The second one is different; still, would it be a good implementation for (<*>)?
I got lost in the proof of pure (.) <*> u <*> v <*> w = u <*> (v <*> w)
I am trying to get an intuition for "what part of the monad is the applicative functor"...
There are at least three relevant aspects to this question.
Given a Monad m instance, what is the specification of its necessary Applicative m superclass instance? Answer: pure is return, <*> is ap, so
mf <*> ms == do f <- mf; s <- ms; return (f s)
Note that this specification is not a law of the Applicative class. It's a requirement on Monads, to ensure consistent usage patterns.
Given that specification (by candidate implementation), is ap the only acceptable implementation? Answer: resoundingly, no. The value dependency permitted by the type of >>= can sometimes lead to inefficient execution: there are situations where <*> can be made more efficient than ap because you don't need to wait for the first computation to finish before you can tell what the second computation is. The "applicative do" notation exists exactly to exploit this possibility.
Do any other candidate instances for Applicative satisfy the Applicative laws, even though they disagree with the required ap instance? Answer: yes. The "backwards" instance proposed by the question is just such a thing. Indeed, as another answer observes, any applicative can be turned backwards, and the result is often a different beast.
For a further example and exercise for the reader, note that nonempty lists are monadic in the way familiar from ordinary lists.
data Nellist x = x :& Maybe (Nellist x)
necat :: Nellist x -> Nellist x -> Nellist x
necat (x :& Nothing) ys = x :& Just ys
necat (x :& Just xs) ys = x :& Just (necat xs ys)
instance Monad Nellist where
  return x = x :& Nothing
  (x :& Nothing) >>= k = k x
  (x :& Just xs) >>= k = necat (k x) (xs >>= k)
Find at least four behaviourally distinct instances of Applicative Nellist which obey the applicative laws.
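An aside if you want to play with this code on a modern GHC (7.10 and later, after the Applicative-Monad Proposal): Monad requires Functor and Applicative superclass instances. A minimal way to satisfy them, using the monad-derived instance (which, note, is one of the four being asked for):
import Control.Monad (ap, liftM)
instance Functor Nellist where
  fmap = liftM
instance Applicative Nellist where
  pure x = x :& Nothing
  (<*>)  = ap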
Let's start with the obvious fact: such a definition for <*> violates the ap law, in the sense that <*> should be ap, where ap is the one defined via the Monad class, i.e. the first one you posted.
Trivialities aside, as far as I can see, the other applicative laws should hold.
More concretely, let's focus on the composition law you mentioned.
Your "reversed" ap
(<**>) m1 m2 = do { x <- m2 ; f <- m1 ; return (f x) }
can also be defined as
(<**>) m1 m2 = pure (flip ($)) <*> m2 <*> m1
where <*> is the "regular" ap.
This means that, for instance,
u <**> (v <**> w) =
{ def. <**> }
pure (flip ($)) <*> (v <**> w) <*> u =
{ def. <**> }
pure (flip ($)) <*> (pure (flip ($)) <*> w <*> v) <*> u =
{ composition law }
pure (.) <*> pure (flip ($)) <*> (pure (flip ($)) <*> w) <*> v <*> u =
{ homomorphism law }
pure ((.) (flip ($))) <*> (pure (flip ($)) <*> w) <*> v <*> u =
{ composition law }
pure (.) <*> pure ((.) (flip ($))) <*> pure (flip ($)) <*> w <*> v <*> u =
{ homomorphism law (x2)}
pure ((.) ((.) (flip ($))) (flip ($))) <*> w <*> v <*> u =
{ beta reduction (several) }
pure (\x f g -> g (f x)) <*> w <*> v <*> u
(I hope I got everything OK)
Try doing something similar with the left-hand side:
pure (.) <**> u <**> v <**> w = ...
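If the algebra gets heavy, it can help to first check the law concretely. A sketch with the list monad (the name checkComposition is mine; note that Control.Applicative exports a differently-typed (<**>), so don't import it unqualified alongside this):
-- the "backwards" ap from the question, as an operator
(<**>) :: Monad m => m (a -> b) -> m a -> m b
mf <**> mx = do { x <- mx; f <- mf; return (f x) }
infixl 4 <**>
checkComposition :: Bool
checkComposition =
  (pure (.) <**> u <**> v <**> w) == (u <**> (v <**> w))
  where
    u = [(+ 10), (* 10)]
    v = [(+ 1), (* 2)]
    w = [1, 2 :: Int]
-- True; both sides are [12,20,12,20,13,30,14,40]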
