Relationship between fmap and bind - haskell

After looking up the Control.Monad documentation, I'm confused about
this passage:
The above laws imply:
fmap f xs = xs >>= return . f
How do they imply that?

Control.Applicative says
As a consequence of these laws, the Functor instance for f will satisfy
fmap f x = pure f <*> x
The relationship between Applicative and Monad says
pure = return
(<*>) = ap
ap says
return f `ap` x1 `ap` ... `ap` xn
is equivalent to
liftMn f x1 x2 ... xn
Therefore
fmap f x = pure f <*> x
= return f `ap` x
= liftM f x
= do { v <- x; return (f v) }
= x >>= return . f
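As a quick sanity check of that identity (my own GHCi examples, not part of the quoted documentation), both sides agree for a couple of standard monads:
Prelude> fmap (+1) (Just 2)
Just 3
Prelude> Just 2 >>= return . (+1)
Just 3
Prelude> fmap (*2) [1,2,3]
[2,4,6]
Prelude> [1,2,3] >>= return . (*2)
[2,4,6]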

Functor instances are unique, in the sense that if F is a Functor and you have a function foobar :: (a -> b) -> F a -> F b such that foobar id = id (that is, it follows the first functor law) then foobar = fmap. Now, consider this function:
liftM :: Monad f => (a -> b) -> f a -> f b
liftM f xs = xs >>= return . f
What is liftM id xs, then?
liftM id xs
xs >>= return . id
-- id does nothing, so...
xs >>= return
-- By the second monad law...
xs
liftM id xs = xs; that is, liftM id = id. Therefore, liftM = fmap; or, in other words...
fmap f xs = xs >>= return . f
epheriment's answer, which routes through the Applicative laws, is also a valid way of reaching this conclusion.

Related

proving monad laws of a new monad instance (list of maybe)

I made a new List of Maybe Monad instance and tried to prove that the implementation satisfies the Monad laws. Am I doing it right, or is the implementation incorrect? Any pointers are appreciated. Thanks!
newtype Test a = Test { getTest :: [Maybe a] }
  deriving Functor

instance Applicative Test where
  pure = return
  (<*>) = liftM2 ($)

instance Monad Test where
  return :: a -> Test a
  return a = Test $ [Just a]

  (>>=) :: Test a -> (a -> Test b) -> Test b
  Test [Nothing] >>= f = Test [Nothing]
  Test [Just x] >>= f = f x
{-
1. return x >>= f = f x
return x >>= f = [Just x] >>= f = f x
2. m >>= return = m
[Nothing] >>= return = [Nothing]
[Just x] >>= return = return x = [Just x]
3. (m >>= f) >>= g == m >>= (\x -> (f x >>= g))
m = [Nothing]
L.H.S. = ([Nothing] >>= f ) >>= g = Nothing >>= g = Nothing
R.H.S. = [Nothing] >>= (\x -> (f x >>= g)) = Nothing
m = [Just x]
L.H.S. = ([Just x] >>= f) >>= g = f x >>= g
R.H.S. = [Just x] >>= (\v -> (f v >>= g)) = (\v -> (f v >>= g)) x
= f x >>= g
-}
The bits of the proof you have written are only incorrect in unimportant ways. Specifically, in these two lines:
([Nothing] >>= f ) >>= g = Nothing >>= g = Nothing
[Nothing] >>= (\x -> (f x >>= g)) = Nothing
The three bare Nothings should be [Nothing]s.
However, the proof is incomplete, because there are values of type Test a that are neither of the form [Just (x :: a)] nor [Nothing]. This makes the proof as a whole incorrect in an important way.
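To make the gap concrete (this example is mine, not from the question): a value such as Test [Just 1, Nothing] matches neither pattern of the (>>=) above, so for instance

-- Neither equation of the (>>=) above applies here, so this is a
-- runtime pattern-match failure rather than a lawful result:
Test [Just 1, Nothing] >>= \x -> Test [Just (x + 1)]

crashes with a non-exhaustive-patterns error; a complete proof has to cover lists of every length, including those mixing Just and Nothing.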

Do the monadic liftM and the functorial fmap have to be equivalent?

(Note: I'm phrasing the question using Haskell terminology; answers are welcome to use the same terminology and/or the mathematical language of category theory, including proper mathematical definitions and axioms where I speak of functor and monad laws.)
It is well known that every monad is also a functor, with the functor's fmap equivalent to the monad's liftM. This makes sense, and of course holds for all common/reasonable monad instances.
My question is whether this equivalence of fmap and liftM provably follows from the functor and monad laws. If so it will be nice to see how, and if not it will be nice to see a counterexample.
To clarify, the functor and monad laws I know are the following:
fmap id ≡ id
fmap f . fmap g ≡ fmap (f . g)
return x >>= f ≡ f x
x >>= return ≡ x
(x >>= f) >>= g ≡ x >>= (\x -> f x >>= g)
I don't see anything in these laws which relates the functor functionality (fmap) to the monad functionality (return and >>=), and so I find it hard to see how the equivalence of fmap and liftM (defined as liftM f x = x >>= (return . f)) can be derived from them. Maybe there is an argument for it which is just not straightforward enough for me to spot? Or maybe I'm missing some laws?
What you have missed is the parametricity law, otherwise known as the free theorem. One of the consequences of parametricity is that all polymorphic functions are natural transformations. Naturality says that any polymorphic function of the form
t :: F a -> G a
where F and G are functors, commutes with fmap:
t . fmap f = fmap f . t
If we can make something involving liftM that has the form of a natural transformation, then we will have an equation relating liftM and fmap. liftM itself doesn't produce a natural transformation:
liftM :: (a -> b) -> m a -> m b
-- ^______^
-- these need to be the same
But here's an idea, since (a ->) is a functor:
m :: m a
flip liftM m :: (a -> b) -> m b
-- F b -> G b
Let's try using parametricity on flip liftM m:
flip liftM m . fmap f = fmap f . flip liftM m
The former fmap is on the (a ->) functor, where fmap = (.), so
flip liftM m . (.) f = fmap f . flip liftM m
Eta expand
(flip liftM m . (.) f) g = (fmap f . flip liftM m) g
flip liftM m (f . g) = fmap f (flip liftM m g)
liftM (f . g) m = fmap f (liftM g m)
This is promising. Take g = id:
liftM (f . id) m = fmap f (liftM id m)
liftM f m = fmap f (liftM id m)
It would suffice to show liftM id = id. That probably follows from its definition:
liftM id m
= m >>= return . id
= m >>= return
= m
Yep! Qed.
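To see the key equation liftM (f . g) m = fmap f (liftM g m) in action (a concrete illustration of mine, with Maybe standing in for m):

import Control.Monad (liftM)

-- Instantiating the equation at m = Just 3, f = show, g = (+1):
lhs, rhs :: Maybe String
lhs = liftM (show . (+1)) (Just 3)     -- Just "4"
rhs = fmap show (liftM (+1) (Just 3))  -- Just "4"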
For this exercise, I found it easier to work with join rather than >>=. A monad can be equivalently defined through return and join, satisfying
1) join . join = join . fmap join
2) join . return = join . fmap return = id
Indeed, join and >>= are inter-definable:
x >>= f = join (fmap f x)
join x = x >>= id
And the laws you mentioned correspond to those above (I won't prove this).
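For one direction of that correspondence (spelled out here purely as an illustration), the second monad law x >>= return ≡ x follows from join law 2:

-- x >>= return
--   = join (fmap return x)      -- (>>=) via join
--   = (join . fmap return) x    -- definition of (.)
--   = id x                      -- join law 2
--   = x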
Then, we have:
liftM f x
= { def liftM }
x >>= return . f
= { def >>= }
join (fmap (return . f) x)
= { def . and $ }
join . fmap (return . f) $ x
= { fmap law }
join . fmap return . fmap f $ x
= { join law 2 }
id . fmap f $ x
= { def id, ., $ }
fmap f x
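A small executable sketch of the interdefinition (mine, not part of the answer), checking bind-via-join against the real (>>=) for Maybe:

import Control.Monad (join)

-- Bind recovered from join and fmap:
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin x f = join (fmap f x)

-- ghci> bindViaJoin (Just 3) (\x -> Just (x + 1)) == (Just 3 >>= \x -> Just (x + 1))
-- True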

Haskell function composition and fmap f

I have two simple examples:
1) xt function (what is this?)
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
Prelude> :{
Prelude| f::Int->Int
Prelude| f x = x
Prelude| :}
Prelude> xt = fmap f // ?
Prelude> :t xt
xt :: Functor f => f Int -> f Int
Prelude> xt (+2) 1
3
2) xq function (via composition)
Prelude> :{
Prelude| return x = [x]
Prelude| :}
Prelude> xq = return . f
Prelude> :t xq
xq :: Int -> [Int]
Prelude> :t return
return :: a -> [a]
I get the xq function through the composition return(f(x)). But what does fmap f mean, and what is the difference?
The Functor instance for (->) r defines fmap to be function composition:
fmap f g = f . g
Thus, xt (+2) == fmap f (+2) == f . (+2) == (+2) (since f is the identity function for Int). Applied to 1, you get the observed answer 3.
fmap is the function defined by the Functor type class:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
It takes a function as its argument and returns a new function "lifted" into the functor in question. The exact definition is supplied by the Functor instance. Above is the definition for the function functor; here for reference are some simpler ones for lists and Maybe:
instance Functor [] where
  fmap = map

instance Functor Maybe where
  fmap f Nothing = Nothing
  fmap f (Just x) = Just (f x)
> fmap (+1) [1,2,3]
[2,3,4]
> fmap (+1) Nothing
Nothing
> fmap (+1) (Just 3)
Just 4
Since you can think of functors as boxes containing zero or more values, the intuition for the function functor is that a function is a box containing the result of applying the function to its argument. That is, (+2) is a box that contains some value plus 2. (F)mapping a function f on that box yields a box that contains the result of applying f to the result of the original function, i.e., it produces a function that is the composition of f with the original function.
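For instance (my own GHCi lines), fmapping over the function (*2) really is just composing with it:

> fmap (+1) (*2) 10   -- ((+1) . (*2)) 10
21
> ((+1) . (*2)) 10
21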
Both xq = return . f and xt = fmap f can be eta-expanded:
xq x = (return . f) x = return (f x) = return x
Now it can be eta-contracted:
xq = return
The second is
xt y = fmap f y = fmap (\x -> x) y = fmap id y = id y = y
fmap has type :: Functor f => (a -> b) -> f a -> f b, so fmap f has type :: Functor f => f Int -> f Int, because f :: Int -> Int. From its type we see that fmap f is a function expecting an f Int and producing an f Int.
Since f x = x for Ints by definition, it means that f = id for Ints, where id is a predefined function defined just the same way as f is (but in general, for any type).
Then by the Functor laws (and that's all we need to know about "Functors" here), fmap id = id, and so xt y = y; in other words it is also an identity, but only on f Int values,
xt = id :: Functor f => f Int -> f Int
Naturally, xt (+2) = id (+2) = (+2).
Addendum: for something to be a "Functor" means that it can be substituted for f in
fmap id (x :: f a) = x
(fmap g . fmap h) = fmap (g . h)
so that the expressions involved make sense (i.e. are well formed, i.e. have a type), and the above equations hold (they are in fact the two "Functor laws").
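As a concrete illustration (mine, not part of the answer), here are those two laws instantiated at the Maybe functor; both comparisons evaluate to True:

-- Functor laws checked at a particular Maybe value:
functorLawsAtMaybe :: Bool
functorLawsAtMaybe =
     fmap id (Just 5) == Just (5 :: Int)
  && (fmap (+1) . fmap (*2)) (Just 5) == fmap ((+1) . (*2)) (Just 5)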

Avoiding do statement in foldM

g ll =
  foldlM (\some_list b -> do
    part <- f b
    return (some_list ++ part)) [] ll
In the above piece of code I use a do statement just because the f function returns a monadic type M a, where a is a list.
(I "unpack" that list with <-; this is why I need the do statement.) Can I avoid it and write this more concisely? (Yes, I know that I can write it using >>=, but I'm hoping for something nicer.)
foldlM is the wrong tool for the job. You can use it, as chepner's answer shows, but the way you're concatenating lists could get expensive. Luka Rahne's one-liner is much better:
g ll = fmap concat (mapM f ll)
Another option is to use foldr directly:
g = foldr (\x r -> (++) <$> f x <*> r) (pure [])
Another way to write the second version, by inlining the foldr:
g [] = pure []
g (x : xs) = (++) <$> f x <*> g xs
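For a self-contained comparison (using a made-up f :: Int -> Maybe [Int] purely for illustration), all three formulations agree:

import Data.Foldable (foldlM)

-- Hypothetical effectful step, just for demonstration:
f :: Int -> Maybe [Int]
f x = Just [x, x * 10]

gFoldlM, gMapM, gFoldr :: [Int] -> Maybe [Int]
gFoldlM ll = foldlM (\some_list b -> fmap (some_list ++) (f b)) [] ll
gMapM ll   = fmap concat (mapM f ll)
gFoldr     = foldr (\x r -> (++) <$> f x <*> r) (pure [])

-- ghci> (gFoldlM [1,2,3], gMapM [1,2,3], gFoldr [1,2,3])
-- (Just [1,10,2,20,3,30],Just [1,10,2,20,3,30],Just [1,10,2,20,3,30])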
Your do expression
do
  part <- f b
  return (some_list ++ part)
follows the extract-apply-return pattern that fmap captures (due to the identity fmap f k = k >>= return . f):
You extract part from the computation f b
You apply (some_list ++) to part
You return the result of that application.
This can be done in one step with fmap:
-- equivalently: foldlM (\some_list b -> f b >>= return . (some_list ++)) [] ll
foldlM (\some_list b -> fmap (some_list ++) (f b)) [] ll

How do you implement monoid interface for this tree in haskell?

Please excuse the terminology, my mind is still bending.
The tree:
data Ftree a = Empty | Leaf a | Branch ( Ftree a ) ( Ftree a )
  deriving ( Show )
I have a few questions:
If Ftree could not be Empty, would it no longer be a Monoid, since there would be no identity value?
How would you implement mappend with this tree? Can you just arbitrarily graft two trees together willy-nilly?
For binary search trees, would you have to introspect some of the elements in both trees to make sure the result of mappend is still a BST?
For the record, some other stuff Ftree could do here:
instance Functor Ftree where
  fmap g Empty = Empty
  fmap g ( Leaf a ) = Leaf ( g a )
  fmap g ( Branch tl tr ) = Branch ( fmap g tl ) ( fmap g tr )

instance Monad Ftree where
  return = Leaf
  Empty >>= g = Empty
  Leaf a >>= g = g a
  Branch lt rt >>= g = Branch ( lt >>= g ) ( rt >>= g )
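Note that since GHC 7.10 Applicative is a superclass of Monad, so on a modern compiler the Monad instance above also needs an Applicative Ftree instance; a minimal one in terms of the Monad operations could be:

import Control.Monad (ap)

instance Applicative Ftree where
  pure  = Leaf
  (<*>) = ap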
There are three answers to your question: one captious, one unhelpful, and one abstract.
The captious answer
instance Monoid (Ftree a) where
  mempty = Empty
  mappend = Branch
This is an instance of the Monoid type class, but does not satisfy any of the required properties.
The unhelpful answer
What Monoid do you want? Just asking for a monoid instance without further information is like asking for a solution without giving the problem. Sometimes there is a natural monoid instance (e.g. for lists) or there is only one (e.g. for (), disregarding questions of definedness). I don’t think either is the case here.
BTW: there would be an interesting monoid instance, one that combines two trees recursively, if your tree had data at its internal nodes...
The abstract answer
Since you gave a Monad Ftree instance, there is a generic way to get a Monoid instance:
instance (Monoid a, Monad f) => Monoid (f a) where
  mempty = return mempty
  mappend f g = f >>= (\x -> (mappend x) `fmap` g)
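As written this instance head needs FlexibleInstances and overlaps with every other Monoid instance, so in practice the same idea is packaged behind a newtype; newer versions of base ship essentially this wrapper as Ap in Data.Monoid (stated there in terms of Applicative rather than Monad). A sketch of that packaging, with Wrap as a made-up name:

-- A newtype keeps the generic instance from clashing with existing Monoid instances.
newtype Wrap f a = Wrap { unWrap :: f a }

instance (Monad f, Semigroup a) => Semigroup (Wrap f a) where
  Wrap f <> Wrap g = Wrap (f >>= \x -> fmap (x <>) g)

instance (Monad f, Monoid a) => Monoid (Wrap f a) where
  mempty = Wrap (return mempty)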
Let's check if this is a Monoid. I use <> as shorthand for mappend. We assume that the Monad laws hold (I did not check that for your definition). At this point, recall the Monad laws written in do-notation.
Our mappend, written in do-notation, is:
mappend f g = do
  x <- f
  y <- g
  return (x <> y)
So we can verify the monoid laws now:
Left identity
mappend mempty g
≡ -- Definition of mappend
do
  x <- mempty
  y <- g
  return (x <> y)
≡ -- Definition of mempty
do
  x <- return mempty
  y <- g
  return (x <> y)
≡ -- Monad law
do
  y <- g
  return (mempty <> y)
≡ -- Underlying monoid laws
do
  y <- g
  return y
≡ -- Monad law
g
Right identity
mappend f mempty
≡ -- Definition of mappend
do
  x <- f
  y <- mempty
  return (x <> y)
≡ -- Monad law
do
  x <- f
  return (x <> mempty)
≡ -- Underlying monoid laws
do
  x <- f
  return x
≡ -- Monad law
f
And finally the important associativity law
mappend f (mappend g h)
≡ -- Definition of mappend
do
  x <- f
  y <- do
    x' <- g
    y' <- h
    return (x' <> y')
  return (x <> y)
≡ -- Monad law
do
  x <- f
  x' <- g
  y' <- h
  y <- return (x' <> y')
  return (x <> y)
≡ -- Monad law
do
  x <- f
  x' <- g
  y' <- h
  return (x <> (x' <> y'))
≡ -- Underlying monoid law
do
  x <- f
  x' <- g
  y' <- h
  return ((x <> x') <> y')
≡ -- Monad law
do
  x <- f
  x' <- g
  z <- return (x <> x')
  y' <- h
  return (z <> y')
≡ -- Monad law
do
  z <- do
    x <- f
    x' <- g
    return (x <> x')
  y' <- h
  return (z <> y')
≡ -- Definition of mappend
mappend (mappend f g) h
So for every (proper) Monad (and even for every applicative functor, as Jake McArthur pointed out on #haskell), there is a Monoid instance. It may or may not be the one that you are looking for.
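As a worked example of that scheme (my own, using the Ftree Monad from the question and a Monoid element type such as String):

-- Leaf "a" `mappend` Branch (Leaf "b") Empty
--   = Leaf "a" >>= \x -> fmap (x <>) (Branch (Leaf "b") Empty)   -- generic mappend
--   = fmap ("a" <>) (Branch (Leaf "b") Empty)                    -- Leaf a >>= g = g a
--   = Branch (Leaf "ab") Empty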
