Reasoning about types in Haskell

Chapter 16 of "Haskell Programming from First Principles" on page 995 has an exercise to manually work out how (fmap . fmap) typechecks. It suggests substituting the type of each fmap for the function types in the type of the composition operator:
T1: (.) :: (b -> c) -> (a -> b) -> a -> c
T2: fmap :: Functor f => (m -> n) -> f m -> f n
T3: fmap :: Functor g => (x -> y) -> g x -> g y
By (attempting to) substitute T2 and T3 into T1, I arrived at the following:
T4: ((m -> n) -> f m -> f n) -> ((x -> y) -> g x -> g y) -> a -> c
Further, it suggests checking the type of (fmap . fmap) to see what the end type should look like.
T5: (fmap . fmap) :: (Functor f1, Functor f2) => (a -> b) -> f1 (f2 a) -> f1 (f2 b)
I'm having trouble understanding what I should be doing here. Could any knowledgeable haskellers help get me started, or maybe provide examples of similar exercises that show how to work out types by hand?

We proceed step by careful step:
-- fmap . fmap = (.) fmap fmap
-- Functor f, g, ... => .....

(.)      ::  (   b    ->      c     ) -> (a -> b) -> a -> c
fmap     ::  (d -> e) -> (f d -> f e)
             --------    ------------
(.) fmap ::                              (a -> (d -> e)) -> a -> (f d -> f e)

-- then,
(.) fmap      ::  (   a    ->   d ->  e ) ->    a     ->   f d   ->   f e
fmap          ::  ((b -> c) -> g b -> g c)
                   --------    ---    ---
(.) fmap fmap ::                              (b -> c) -> f (g b) -> f (g c)
It is important to consistently rename all the type variables on each separate use of a type, to avoid conflation.
We use the fact that the arrows associate on the right,
A -> B -> C ~ A -> (B -> C)
and the type inference rule is

f   :: A -> B
x   :: C
-------------------
f x :: B ,  A ~ C

(f :: A -> B) (x :: C) :: B under the equivalence / unification of types A ~ C and all that it entails.
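As a quick check of the result, GHCi agrees (it merely picks different variable names), and the doubly lifted function behaves as expected on a value with two functor layers; the example value below is just for illustration:

-- :t fmap . fmap
-- fmap . fmap :: (Functor f1, Functor f2) => (a -> b) -> f1 (f2 a) -> f1 (f2 b)

example :: [Maybe Int]
example = (fmap . fmap) (+ 1) [Just 1, Nothing, Just 3]
-- example == [Just 2, Nothing, Just 4]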

Related

haskell partial application in case-statement

I'm having trouble understanding what's going on in this function. My understanding is that fmap f x returns a function that takes the last argument y. But when is y "fed" to fmap f x inside the case statement?
func :: (Num a, Num b) => (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
func f x y = case fmap f x of
    Nothing -> Nothing
    Just z  -> fmap z y
For Maybe, the functor instance is defined as:
instance Functor Maybe where
    fmap _ Nothing  = Nothing
    fmap f (Just x) = Just (f x)
So for the functor f ~ Maybe, fmap is specialized to fmap :: (g -> h) -> Maybe g -> Maybe h.
In your specific case, f has signature f :: a -> b -> c, or more verbosely f :: a -> (b -> c). So that means that for our signature of fmap, we obtain:
fmap :: (g -> h      ) -> Maybe g -> Maybe h
f    ::  a -> (b -> c)
--------------------------------------------
g ~ a,  h ~ (b -> c)
So that means fmap f x will have type fmap f x :: Maybe (b -> c). It is thus a Maybe that wraps a function of type b -> c.
We can thus inspect whether fmap f x is a Just z, in which case z has type z :: b -> c, or Nothing.
If it is a Just z, we can perform another fmap with z over y, and thus obtain a Maybe c.
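To see when y is actually consumed, trace a small call (the numbers are arbitrary):

-- func (+) (Just 1) (Just 2)
--   fmap (+) (Just 1)       ==  Just ((+) 1)   -- a Maybe wrapping a function; this is z
--   fmap ((+) 1) (Just 2)   ==  Just 3         -- the Just z branch is where y is fed in
-- func (+) Nothing (Just 2) ==  Nothing        -- here y is never used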

Why does join . (flip fmap) have type ((A -> B) -> A) -> (A -> B) -> B?

Some playing around with functors and monads in ghci led me to a value whose type and behaviour I would like to understand better.
The type of \x -> join . x is (Monad m) => (a -> m (m b)) -> (a -> m b) and the type of \y -> y . (flip fmap) is (Functor f) => (((a -> b) -> f b) -> c) -> f a -> c.
Version 8.2.2 of ghci permits the definition h = join . (flip fmap).
Why does h have type ((A -> B) -> A) -> (A -> B) -> B?
In particular, why do the functor and monad constraints disappear? Is this really the correct and expected behaviour? As a follow up, I would also like to ask:
Why does evaluating h (\f -> f u) (\x -> x + v) for integers u and v give u + 2v in every case?
In short: through type deduction, Haskell knows that m and f are in fact a partially applied function arrow.
Deriving the type
Well, let us do the math. The function join . (flip fmap) is basically your given lambda expression \x -> join . x applied to the argument (flip fmap), so:
h = (\x -> join . x) (flip fmap)
Now the lambda expression has type:
(\x -> join . x) :: Monad m => (a -> m (m b)) -> (a -> m b)
Now the argument flip fmap has type:
flip fmap :: Functor f => f c -> ((c -> d) -> f d)
(we here use c and d instead of a and b to avoid confusion between two possibly different types).
So that means that the type of flip fmap is the same as the type of the argument of the lambda expression, hence we know that:
Monad m => a -> m (m b)
~ Functor f => f c -> ((c -> d) -> f d)
---------------------------------------
a ~ f c, m (m b) ~ ((c -> d) -> f d)
So we now know that a has the same type as f c (this is the meaning of the tilde ~).
But we have to do some extra computations:
Monad m => m (m b)
~ Functor f => ((c -> d) -> f d)
--------------------------------
m ~ (->) (c -> d), m b ~ f d
Hence we know that m is the same as (->) (c -> d): basically it is a function type whose input type we know, here (c -> d), while the output type is the type parameter of m.
So that means that m b ~ ((c -> d) -> b) ~ f d, which means that f ~ (->) (c -> d) and b ~ d. An extra consequence is that, since a ~ f c, we know that a ~ (c -> d) -> c.
So to list what we derived:
f ~ m
m ~ (->) (c -> d)
b ~ d
a ~ (c -> d) -> c
So we now can "specialize" the types of both our lambda expression, and our flip fmap function:
(\x -> join . x)
:: (((c -> d) -> c) -> (c -> d) -> (c -> d) -> d) -> ((c -> d) -> c) -> (c -> d) -> d
flip fmap
:: ((c -> d) -> c) -> (c -> d) -> (c -> d) -> d
and the type of flip fmap now perfectly matches the type of the argument of the lambda expression. So the type of (\x -> join . x) (flip fmap) is the result type of the lambda expression's type, and that is:
(\x -> join . x) (flip fmap)
:: ((c -> d) -> c) -> (c -> d) -> d
But of course we have not yet obtained the implementation of this function. We are, however, already a step further.
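Before moving on, you can let GHC confirm the specializations above by restating them as type annotations; if any of them were wrong, the following sketch would not compile (the names lhs, rhs and h' are just for this check):

import Control.Monad (join)

-- the lambda expression, at the type we derived for it
lhs :: (((c -> d) -> c) -> (c -> d) -> (c -> d) -> d)
    -> ((c -> d) -> c) -> (c -> d) -> d
lhs = \x -> join . x

-- flip fmap, at the type we derived for it
rhs :: ((c -> d) -> c) -> (c -> d) -> (c -> d) -> d
rhs = flip fmap

-- applying one to the other gives the result type
h' :: ((c -> d) -> c) -> (c -> d) -> d
h' = lhs rhs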
Deriving the implementation
Since we now know that m ~ (->) (c -> d), we know we should lookup the arrow instance of a monad:
instance Monad ((->) r) where
    f >>= k = \r -> k (f r) r
So for a given function f :: r -> a as left operand, and a function k :: a -> (r -> b) ~ a -> r -> b as right operand, we construct a new function that maps a variable x to k applied to f x and to x itself. It is thus a way to perform some sort of preprocessing on an input variable x, and then do the processing taking both the preprocessed and the original value into account (well, this is an interpretation a human reader can use).
Now join :: Monad m => m (m a) -> m a is implemented as:
join :: Monad m => m (m a) -> m a
join x = x >>= id
So for the (->) r monad, this means that we implement this as:
-- specialized for m ~ (->) r
join f = \r -> id (f r) r
Since id :: a -> a (the identity function) returns its argument, we can further simplify it to:
-- specialized for m ~ (->) r
join f = \r -> (f r) r
or cleaner:
-- specialized for m ~ (->) r
join f x = f x x
So it basically is given a function f, and will then apply an argument twice to that function.
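For example (assuming Control.Monad.join is in scope):

-- join (+) 3    ==  3 + 3       ==  6
-- join (,) 'x'  ==  ('x', 'x')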
Furthermore we know that the Functor instance for the arrow type is defined as:
instance Functor ((->) r) where
    fmap = (.)
So it is basically used as a "post processor" on the result of the function: we construct a new function that will do the post processing with the given function.
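For example:

-- fmap (+ 1) (* 2) 5  ==  ((+ 1) . (* 2)) 5  ==  11
-- fmap show (+ 1) 41  ==  show 42            ==  "42"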
So now that we specialized the function enough for the given Functor/Monad, we can derive the implementation as:
-- alternative implementation
h = (.) (\f x -> f x x) (flip (.))
or by using more lambda expressions:
h = \a -> (\f x -> f x x) ((flip (.)) a)
which we can now further specialize as:
h = \a -> (\f x -> f x x) ((\y z -> z . y) a)
-- apply a in the lambda expression
h = \a -> (\f x -> f x x) (\z -> z . a)
-- apply (\z -> z . a) in the first lambda expression
h = \a -> (\x -> (\z -> z . a) x x)
-- cleaning syntax
h a = (\x -> (\z -> z . a) x x)
-- cleaning syntax
h a x = (\z -> z . a) x x
-- apply lambda expression
h a x = (x . a) x
-- remove the (.) part
h a x = x (a x)
So h basically takes two arguments, a and x; it applies a to x, and then passes the result to x again.
Sample usage
As sample usage you use:
h (\f -> f u) (\x -> x + v)
or nicer:
h (\f -> f u) (+v)
so we can analyze this like:
h (\f -> f u) (+v)
-> (+v) ((\f -> f u) (+v))
-> (+v) ((+v) u)
-> (+v) (u+v)
-> ((u+v)+v)
So we end up with (u + v) + v, i.e. u + 2v.
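A quick check with concrete numbers, picking u = 3 and v = 2 (just a sketch, with Control.Monad.join in scope):

import Control.Monad (join)

h :: ((a -> b) -> a) -> (a -> b) -> b
h = join . flip fmap

-- h (\f -> f 3) (+ 2)  ==  (+ 2) (3 + 2)  ==  7   -- indeed u + 2v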
Types line up easier with >>>:
(a -> b)  >>>
(b -> c)  ::
(a -> c)
Here, we have

join . flip fmap  ==  flip fmap >>> join

flip fmap          ::  Functor f =>  f a -> ((a -> b) -> f b)
join               ::  Monad m   =>          m   ( m     b )  -> m b
---------------------------------------------------------------------
flip fmap >>> join ::  (Functor f, Monad m) => f a -> m b ,   ((a -> b) ->) ~ m ,  f ~ m
                   ::  (Functor f, Monad f) => f a -> f b ,   f ~ ((a -> b) ->)
                   ::  ((a -> b) -> a) -> ((a -> b) -> b)
Simple, mechanical, mundane.
To see what it does, combinatory style definitions are usually easiest to twiddle with,
(join . flip fmap) f g x
   = join (flip fmap f) g x      -- join f x  =  f x x
   = (`fmap` f) g g x            -- f `fmap` g  =  f . g
   = (g . f) g x
   = g (f g) x
So we don't need x after all (or do we?). The join and fmap definitions for functions are given in the comments on the right. We've arrived at
(join . flip fmap) f g = g (f g) -- f :: (a -> b) -> a, g :: a -> b
-- f g :: a , g (f g) :: b
Another way is starting from the types, going by the rule of modus ponens,
((a -> b) -> a)   (a -> b)        -- f  g
--------------------------
      (a -> b)    a               -- g  (f g)
--------------------------
             b
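So, as a final check, the pointful form we derived can be written down directly and agrees with the original (hh is just a name for the comparison):

hh :: ((a -> b) -> a) -> (a -> b) -> b
hh f g = g (f g)

-- hh (\f -> f 3) (+ 2)  ==  (+ 2) ((\f -> f 3) (+ 2))  ==  (+ 2) 5  ==  7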

What is the type of composition of two functions -- for example (flip . const)

I've started learning Haskell and I'm curious how to find out the type of composition of functions: for example:
:t flip
flip :: (a -> b -> c) -> b -> a -> c
:t const
const :: a -> b -> a
how do I work out :t (flip . const) manually?
of course GHCi can help you with that:
:t (flip.const)
(flip . const) :: (b -> c) -> b -> a -> c
but how to do it yourself?
Well, there are three functions in play here:
(.) :: (b -> c) -> (a -> b) -> a -> c;
flip :: (a -> b -> c) -> b -> a -> c; and
const :: a -> b -> a.
Note that if you write the (.) operator in prefix form, flip . const actually reads:
(.) flip const
or more verbose:
((.) flip) const
Now let us first write the signatures of the functions in a verbose way, with different variable names so that these do not clash:
(.) :: (b -> c) -> ((a -> b) -> (a -> c))
flip :: (d -> (e -> f)) -> (e -> (d -> f))
const :: g -> (h -> g)
So we apply (.) to flip, which means that we have to match the parameter of (.), which has type (b -> c), against the signature of flip, so we resolve this with:
      b        ->         c
(d -> (e -> f)) -> (e -> (d -> f))
Which is the only match possible (notice the brackets). So that means that:
b ~ (d -> (e -> f))
c ~ (e -> (d -> f))
(here a ~ b means that a and b are the same type)
As a result, the type of (.) flip is
(.) flip :: (a -> b) -> (a -> c)
Which is again a function with one parameter (all functions in Haskell have one parameter), and that parameter has type a -> b.
and we apply that function to const, so we again do a type matching:
a -> b
g -> (h -> g)
so that means that a ~ g and b ~ (d -> (e -> f)) ~ (h -> g), as a result we know that d ~ h and g ~ (e -> f).
We know that ((.) flip) const has type:
((.) flip) const :: a -> c
So now it is a matter of substituting: we know a ~ g and g ~ (e -> f), so a ~ (e -> f). Furthermore we know that c ~ (e -> (d -> f)), so that means that the type is:
((.) flip) const :: (e -> f) -> (e -> (d -> f))
or in less verbose form:
flip . const :: (e -> f) -> e -> d -> f
which is, except for variable renaming, the same as the type derived by GHCi.
We have also (>>>) = flip (.) which can be easier to deal with, type-wise:
f . g = g >>> f
g       :: a -> b
f       ::      b -> c
g >>> f :: a ->      c
thus
flip . const = const >>> flip

const          :: a1 -> (b1 -> a1)
flip           ::       (a2 -> (b2 -> c2)) -> (b2 -> a2 -> c2)

const >>> flip :: a1 ->                       (b2 -> a2 -> c2)    -- where
                                                                  -- b1 ~ a2 ,  a1 ~ b2 -> c2
               :: (b2 -> c2) -> b2 -> a2 -> c2
Or flip . const :: (b -> c) -> b -> a -> c. GHCi says the same.
From this type we immediately see that (flip . const) f x z = f x. Indeed (flip . const) f x z = flip (const f) x z = const f z x = f x.
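For example, a concrete check of that last observation (the argument values are arbitrary):

-- (flip . const) succ 5 "ignored"    ==  succ 5          ==  6
-- (flip . const) length [1,2,3] 'x'  ==  length [1,2,3]  ==  3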
Three lessons to take away from this:
arrows in types associate to the right, while function application associates to the left: f x y z = (((f x) y) z), f :: a -> (b -> (c -> d));
vertically aligning the stuff helps;
numbering the type vars in separate types helps keep them apart.

Applying compose to fmap

For weeks I've been trying to figure out how the Haskell compiler applies (.) to fmap.
What I mean is:
:t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
:t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
:t (.) fmap
(.) fmap :: Functor f => (a -> a1 -> b) -> a -> f a1 -> f b
How did the compiler arrive at the type for (.) fmap?
I was actually going to ask this question here but while I was explaining what I've tried it all came together. So now I'm just going to post the answer too.
To get this I took fmap
fmap :: Functor f => (a -> b) -> f a -> f b
fmap :: Functor f => (a -> b) -> (f a -> f b)
if
:t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
then
the (b -> c) at the start of the (.) function's type can be replaced with
((a -> b) -> (f a -> f b))
thus I have
((a1 -> b) -> (f a1 -> f b)) -> (a -> (a1 -> b)) -> a -> (f a1 -> f b)
Since (.) has been applied to fmap, we can eliminate ((a1 -> b) -> (f a1 -> f b)) and we are left with
(a -> (a1 -> b)) -> a -> (f a1 -> f b)
Then to be extra clean we can eliminate extra parentheses.
Glguy and Hamme from the IRC Beginner-haskell channel both reminded me (->) is right associative
e.g. (a -> b -> c -> d) = (a -> (b -> (c -> d)))
so we eliminate the redundant parentheses.
(a -> a1 -> b) -> a -> f a1 -> f b
:t (.) fmap
(.) fmap :: Functor f => (a -> a1 -> b) -> a -> f a1 -> f b
The type signature can be understood intuitively if you rename a to c, rename a1 to a, and add an extra pair of parentheses:
> :t (.) fmap
(.) fmap :: Functor f => (c -> (a -> b)) -> c -> f a -> f b
The first argument is a function that returns another function (a -> b) that gets fed into fmap. Applying the first argument produces the fully composed function waiting on that one argument c. Applying c produces fmap (a -> b) which is only waiting on the last argument f a.
((.) fmap)
((.) fmap) (c -> (a -> b))      -- Apply the 1st argument
((.) fmap) (c -> (a -> b)) c    -- Apply the 2nd argument
fmap (a -> b)
fmap (a -> b) (f a)             -- Apply the 3rd argument
f b                             -- The result
An example:
> ((.) fmap) (\n -> (+n)) 42 [1..5] -- Becomes: fmap (+42) [1..5]
[43,44,45,46,47]
> ((.) fmap) (\n -> (+n)) 13 [1..5]
[14,15,16,17,18]
One way to understand how the type is derived is to look at what (fmap .) means.
Consider fmap . g: what does this mean? Expanding the definition of ., we see that fmap . g = \x -> fmap (g x). Since the first argument to fmap needs to be a function with type a -> b, g must be a function with a type like c -> a -> b; it computes an appropriate function given an argument.
Now, whereas we can apply fmap f directly to a list (or other functor), we need to give fmap . g an argument first:
fmap f someFunctorialValue == someOtherFunctorialValue
((fmap . g) x) someFunctorialValue == someOtherFunctorialValue
Dropping some redundant parentheses, this becomes
(fmap .) g x someFunctorialValue == someOtherFunctorialValue
and now we can read off directly what the type of each expression should be:
-- someFunctorialValue      :: Functor f => f a
-- someOtherFunctorialValue :: Functor f => f b
-- x                        :: c
-- g                        :: (c -> a -> b)
-- (fmap .)                 :: (c -> a -> b) -> c -> f a -> f b
-- fmap                     ::       (a -> b) ->      f a -> f b
In other words: fmap takes a concrete function a -> b, while (fmap .) takes a "parameterized" function g and a "function selector" x.
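A small sketch of that "function selector" idea (signAdjust is a made-up example):

signAdjust :: Int -> (Int -> Int)
signAdjust n = if n < 0 then negate else (+ n)

-- (fmap . signAdjust) 10   (Just 5)   ==  fmap (+ 10) (Just 5)  ==  Just 15
-- (fmap . signAdjust) (-1) [1, 2, 3]  ==  fmap negate [1,2,3]   ==  [-1,-2,-3]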

How (fmap . fmap) typechecks

I have been going through an article (http://comonad.com/reader/2012/abstracting-with-applicatives/) and found the following snippet of code there:
newtype Compose f g a = Compose (f (g a)) deriving Show
instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose $ (fmap . fmap) f x
How does (fmap . fmap) actually typecheck?
Their types being:
(.) :: (a -> b) -> (r -> a) -> (r -> b)
fmap :: (a -> b) -> f a -> f b
fmap :: (a -> b) -> f a -> f b
Now, from here, I can see no way in which fmap . fmap would typecheck.
First let's change the type variables' names to be unique:
(.) :: (a -> b) -> (r -> a) -> (r -> b)
fmap :: Functor f => (c -> d) -> f c -> f d
fmap :: Functor g => (x -> y) -> g x -> g y
Now the first parameter to . has type a -> b and we supply an argument of type (c -> d) -> (f c -> f d), so a is c -> d and b is f c -> f d. So far, we have:
(.) :: Functor f =>
       -- Left operand
       ((c -> d) -> (f c -> f d)) ->
       -- Right operand
       (r -> (c -> d)) ->
       -- Result
       (r -> (f c -> f d))
The second parameter to . has type r -> a a.k.a. r -> (c -> d) and the argument we give has type (x -> y) -> (g x -> g y), so r becomes x -> y, c becomes g x and d becomes g y. So now we have:
(.) :: (Functor f, Functor g) =>
       -- Left operand
       ((g x -> g y) -> (f (g x) -> f (g y))) ->
       -- Right operand
       ((x -> y) -> (g x -> g y)) ->
       -- Result
       (x -> y) -> f (g x) -> f (g y)

fmap . fmap :: (Functor f, Functor g) => (x -> y) -> f (g x) -> f (g y)
The expression fmap . fmap has two instances of fmap which can, in principle, have different types. So let's say their types are
fmap :: (x -> y) -> (g x -> g y)
fmap :: (u -> v) -> (f u -> f v)
Our job is to unify types (which amounts to coming up with equality relations between these type variables) so that the right-hand side of the first fmap is the same as the left-hand side of the second fmap. Hopefully you can see that if you set u = g x and v = g y you will end up with
fmap :: ( x -> y) -> ( g x -> g y )
fmap :: (g x -> g y) -> (f (g x) -> f (g y))
Now the type of compose is
(.) :: (b -> c) -> (a -> b) -> (a -> c)
To make this work out, you can pick a = x -> y and b = g x -> g y and c = f (g x) -> f (g y) so that the type can be written
(.) :: ((g x -> g y) -> (f (g x) -> f (g y))) -> ((x -> y) -> (g x -> g y)) -> ((x -> y) -> (f (g x) -> f (g y)))
which is pretty unwieldy, but it's just a specialization of the original type signature for (.). Now you can check that everything matches up such that fmap . fmap typechecks.
An alternative is to approach it from the opposite direction. Let's say that you have some object that has two levels of functoriality, for example
>> let x = [Just "Alice", Nothing, Just "Bob"]
and you have some function that adds bangs to any string
bang :: String -> String
bang str = str ++ "!"
You'd like to add the bang to each of the strings in x. You can go from String -> String to Maybe String -> Maybe String with one level of fmap
fmap bang :: Maybe String -> Maybe String
and you can go to [Maybe String] -> [Maybe String] with another application of fmap
fmap (fmap bang) :: [Maybe String] -> [Maybe String]
Does that do what we want?
>> fmap (fmap bang) x
[Just "Alice!", Nothing, Just "Bob!"]
Let's write a utility function, fmap2, that takes any function f and applies fmap to it twice, so that we could just write fmap2 bang x instead. That would look like this
fmap2 f x = fmap (fmap f) x
You can certainly drop the x from both sides
fmap2 f = fmap (fmap f)
Now you realize that the pattern g (h x) is the same as (g . h) x so you can write
fmap2 f = (fmap . fmap) f
so you can now drop the f from both sides
fmap2 = fmap . fmap
which is the function you were interested in. So you see that fmap . fmap just takes a function, and applies fmap to it twice, so that it can be lifted through two levels of functoriality.
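Putting it together with the earlier example (a minimal check, reusing bang from above):

fmap2 :: (Functor f, Functor g) => (a -> b) -> g (f a) -> g (f b)
fmap2 = fmap . fmap

-- fmap2 bang [Just "Alice", Nothing, Just "Bob"]
--   ==  [Just "Alice!", Nothing, Just "Bob!"]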
Old question, but to me, conceptually, fmap represents "taking an a -> b and bringing it 'one level up', to f a -> f b".
So if I had an a -> b, I can fmap it to give me an f a -> f b.
If I had an f a -> f b, I can fmap it again to give me a g (f a) -> g (f b). Lift that f a -> f b function to new heights --- a new level.
So "fmapping" once lifts the function once. fmapping twice lifts that lifted function...so, a double lift.
Put in the language of haskell syntax:
f                    :: a -> b
fmap f               :: f a -> f b
fmap (fmap f)        :: g (f a) -> g (f b)
fmap (fmap (fmap f)) :: h (g (f a)) -> h (g (f b))
Notice how each successive fmap lifts the original a -> b to another new level. So,
fmap               :: (a -> b) -> (      f a   ->       f b  )
fmap . fmap        :: (a -> b) -> (   g (f a)  ->    g (f b) )
fmap . fmap . fmap :: (a -> b) -> (h (g (f a)) -> h (g (f b)))
Any "higher order function" that returns a function of the same arity as its input can do this. Take zipWith :: (a -> b -> c) -> ([a] -> [b] -> [c]), which takes a function taking two arguments and returns a new function taking two arguments. We can chain zipWiths the same way:
f                   :: a -> b -> c
zipWith f           :: [a] -> [b] -> [c]
zipWith (zipWith f) :: [[a]] -> [[b]] -> [[c]]
So
zipWith           :: (a -> b -> c) -> ( [a] ->  [b] ->  [c] )
zipWith . zipWith :: (a -> b -> c) -> ([[a]] -> [[b]] -> [[c]])
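For example (the lists are arbitrary):

-- (zipWith . zipWith) (+) [[1,2],[3,4]] [[10,20],[30,40]]
--   ==  [zipWith (+) [1,2] [10,20], zipWith (+) [3,4] [30,40]]
--   ==  [[11,22],[33,44]]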
liftA2 works pretty much the same way:
f                 :: a -> b -> c
liftA2 f          :: f a -> f b -> f c
liftA2 (liftA2 f) :: g (f a) -> g (f b) -> g (f c)
One rather surprising example that is put to great use in the modern implementation of the lens library is traverse:
f                                :: a -> IO b
traverse f                       :: f a -> IO (f b)
traverse (traverse f)            :: g (f a) -> IO (g (f b))
traverse (traverse (traverse f)) :: h (g (f a)) -> IO (h (g (f b)))
So you can have things like:
traverse :: (a -> m b) -> ( f a -> m ( f b ))
traverse . traverse :: (a -> m b) -> (g (f a) -> m (g (f b)))
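And a small concrete instance of that last signature, using print as the a -> m b step (a sketch only):

-- (traverse . traverse) print [Just 1, Nothing, Just 3]
--   prints 1 and 3, and returns [Just (), Nothing, Just ()]   -- in IO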
