In Haskell, is the map function a functor?

The type of map is: (a->b) -> [a] -> [b]
while the type of the functor fmap is:
Functor f => (a -> b) -> f a -> f b
I read on Wikipedia that map is a polymorphic morphism while fmap is a polytypic morphism, but that doesn't really clear things up for me.
So my question is: is the map function a functor?

In Haskell terms, fmap is a method in the typeclass Functor, not the functor itself. [], Maybe, ... are type constructors which instantiate the class Functor and, abusing the language, you can say that "Maybe is a functor".
In mathematical terms, a functor (or more specifically in this case, an endofunctor in the category Hask, the category of Haskell types) is composed of two mappings: the first one from a type to another type, and the second from an arrow (a -> b) to another (f a -> f b), in a way that preserves the structure. In that sense, Maybe is the first mapping, taking a type to another, say Int to Maybe Int, and the fmap for Maybe is the second.
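To make those two components concrete, here is roughly what the instance for Maybe looks like (it already lives in base; the sketch below is only to show the two pieces side by side):
-- Maybe itself is the mapping on types: Int becomes Maybe Int.
-- fmap is the mapping on arrows: it turns a -> b into Maybe a -> Maybe b.
instance Functor Maybe where
  fmap _ Nothing  = Nothing
  fmap f (Just x) = Just (f x)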

No, but any container that implements fmap is a functor. Lists implement fmap as well (give it a try!), so lists are functors. map is just the implementation of fmap for lists.

Like all (->) r types, map is also a functor.
(->) r is actually just a simple function type like r -> a, and all functions are functors. Like other functors, you may think of a function as a container, but you only get the contained value when you apply the function to an argument. However, a Functor instance cannot be given to a type with two free type variables; that's why we partially apply it, just as Either a b gets instance Functor (Either a). Since we are interested in the return value of a function, the Functor instance for function types is the partially applied r -> a, which is (->) r.
So let's see a function's Functor instance.
instance Functor ((->) r) where
  fmap f g = \x -> f (g x)
So coming back to the question: map is a function of type (a -> b) -> [a] -> [b], which we can rewrite as (a -> b) -> ([a] -> [b]). Here the r variable of the Functor instance stands for a -> b and the a variable stands for [a] -> [b]. So let's apply fmap to map, as in fmap ($) map, which essentially gives us a function like \x -> ($) (map x), where x has to be a function of type a -> b. Let's use <$>, the infix form of fmap, in the example below:
Prelude> (($) <$> map) (+2) [1,2,3]
[3,4,5]
So... yes, map is a functor.

Related

Any function with the same polymorphic type as fmap must be equal to fmap?

I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is, because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will indeed also in practice obey the functor laws, i.e. it will be the fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
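As a small illustration of why the laws matter here (Pair is a made-up type, purely for the example), a function can share fmap's shape at a concrete type and still not be a functor mapping:
data Pair a = Pair a a

-- the lawful mapping
fmapPair1 :: (a -> b) -> Pair a -> Pair b
fmapPair1 f (Pair x y) = Pair (f x) (f y)

-- same type, but swaps the components: fmapPair2 id (Pair 1 2) = Pair 2 1,
-- so it violates fmap id = id
fmapPair2 :: (a -> b) -> Pair a -> Pair b
fmapPair2 f (Pair x y) = Pair (f y) (f x)
Only fmapPair1 satisfies the functor laws, which is exactly the uniqueness the theorem is about.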
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers smaller than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
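For instance, you can check in GHCi that the reverse-based candidate above has fmap's type for lists but fails the identity law (the chunksOf example additionally assumes Data.List.Split is in scope):
Prelude> (\f -> map f . reverse) id [1,2,3]
[3,2,1]
Since the result is not [1,2,3], it is not a lawful fmap even though its type matches.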

Representation of tuple applicative

I am trying to understand the tuple applicative. When I look at information in prelude about tuple, it says:
instance (Monoid a, Monoid b) => Monoid (a, b)
instance Monoid a => Applicative ((,) a)
What does ((,) a) mean? It does not look like a tuple.
I have following example:
Prelude Data.Monoid> ((Sum 2), (+2)) <*> ((Sum 45), 8)
(Sum {getSum = 47},10)
The first component of the tuple is an instance of Monoid and the second is just a function application. But how does the signature ((,) a) match the example above?
I prefer to write the instance as a tuple section (which is not actually legal Haskell, but gets the point across):
instance Monoid a => Applicative (a,)
... and before discussing that, consider
instance Functor (a,)
What this does is just: for any left-hand element (of type a), map over whichever right-hand argument there is. I.e.
fmap :: (b -> c) -> (a,b) -> (a,c)
One might at this point wonder why we have (a,) and not (,a). Well, mathematically speaking, the following is just as valid:
instance Functor (,a)
fmap :: (a -> b) -> (a,c) -> (b,c)
...however that can't be defined because (a,b) is in fact syntactic sugar for (,) a b, i.e. the tuple-type-constructor (,) applied first to the a type and then to the b type. Now, in instance Functor (a,), we simply leave the b open to be fmapped over, hence
instance Functor ((,) a)
But it's not possible to only apply the b argument while leaving a open – that would require a sort of type-level lambda
“instance Functor (\b -> (a,b))”
which is not supported.
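The usual workaround, if you really do want the (,a)-style functor, is a newtype that fixes the argument order (Flip is just a name made up for this sketch):
newtype Flip b a = Flip (a, b)

instance Functor (Flip b) where
  fmap f (Flip (x, y)) = Flip (f x, y)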
Now as to Applicative – that just extends the (a,) functor to also support combinations of multiple tuples, provided there's already a natural way to combine the a elements – that's what the Monoid instance offers. This is better known as the Writer monad, because it's well suited for generating an “operations log” along with the actual computation results:
Prelude> ("4+", (4+)) <*> ("10",10)
("4+10",14)

instance of "Type constructed with type argument" need not be constructed with data of that type, in Haskell

In Haskell, a type constructor can take a type argument, of course.
A function a -> b, when looked at as a "type with a funny constructor name", has type (->) a b. That makes it a type constructor (->) with two arguments, a and b. This is frequently encountered in the "reader" pattern as in its Functor and Applicative instances:
instance Functor ((->) a) where
  fmap = (.)
instance Applicative ((->) a) where
  pure = const
  (<*>) f g x = f x (g x)
When I first tried to understand uses of this instance, as in
fmap (+1) (*2) 3 (=== (+1) . (*2) $ 3 === 3*2+1 === 7)
my reaction was "Ok, (+1) has type Int -> Int, which is (->) Int Int, so that matches Functor.... but where is the Int? I make a Maybe Int by calling Just 1, but I don't ever make a (->) Int Int by applying anything to an Int. In fact, I destroy a ((->) Int Int) by applying it to an Int! (Yeah, there's Nothing, but that seems... degenerate.)"
This all works (of course), as long as I remember that just because a type is built from a constructor+argument, that doesn't mean its values are built from a correspondingly typed constructor+argument. And some of the most interesting and powerful (and tricky to understand) type constructors are like this ((->), Lens, Arrow, etc)
(OK, really it's Num a => a, not Int, but let's ignore that, not relevant)
Is there a name for this concept? What is the appropriate mental model for thinking about type constructors, without leaning on the misleading and disempowering crutch interpretation "Foo a is a structure Foo containing value(s) of type a"?
This concept is known as a contravariant functor, or in Haskell-speak a Contravariant type.
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b
-- compare
class Functor f where
  fmap :: (a -> b) -> f a -> f b
More generally, we can think of type variables in a type as having contravariant or covariant nature (at its simplest). For instance, by default we have
newtype Reader t a = Reader (t -> a)
instance Functor (Reader t) where
  fmap ab (Reader ta) = Reader (ab . ta)
Which indicates that the second type parameter to Reader is covariant, while if we reverse the order
newtype RevReader a t = RevReader (t -> a)
instance Contravariant (RevReader a) where
  contramap st (RevReader ta) = RevReader (ta . st)
A useful intuition for Contravariant types is that they have the ability to consume zero, one, or many values of the contravariant parameter instead of containing zero, one, or many values of the covariant parameter like we often think of when considering Functors.
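A small example of that consuming intuition is a predicate type (essentially the Predicate newtype from Data.Functor.Contravariant, reproduced here as a sketch):
newtype Predicate a = Predicate (a -> Bool)

instance Contravariant Predicate where
  contramap f (Predicate p) = Predicate (p . f)

-- e.g. contramap length (Predicate even) is a predicate on lists
-- that tests whether the list's length is even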
Combining these two notions is the Profunctor
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
which, as we notice, demands that p is of kind * -> * -> * where the first type parameter is contravariant and the second covariant. This class well characterizes the (->) type constructor
instance Profunctor (->) where
  dimap f g h = g . h . f
Again, if we think of contravariant type parameters as being consumed and covariant ones as being produced, this fits the typical intuition around (->) types quite well.
A few more examples of types with contravariant parameters include Relation
newtype Relation t = Relation (t -> t -> Bool)
instance Contravariant Relation where
  contramap g (Relation pred) = Relation $ \a b -> pred (g a) (g b)
Or Fold which represents a left fold as a data type
newtype Fold a b = Fold b (a -> Fold a b)

instance Profunctor Fold where
  dimap f g (Fold b go) = Fold (g b) (dimap f g . go . f)

sumF :: Num a => Fold a a
sumF = go 0 where
  go n = Fold n (\i -> go (n + i))
With Fold a b we see that it consumes an arbitrary number of a types to produce one b type.
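To actually watch sumF consume values, here is a hypothetical driver (runFold is not part of the definition above; it is just one way to feed a list into such a Fold):
runFold :: Fold a b -> [a] -> b
runFold (Fold b _)  []     = b
runFold (Fold _ go) (x:xs) = runFold (go x) xs

-- runFold sumF [1,2,3] evaluates to 6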
Generally what we find is that while we often have covariant, "container"-like (strictly positive) types, where values of some type c a are produced from a constructor of type a -> c a and some filler values of type a, that picture doesn't hold in general. We do have covariant types like that, but also contravariant ones, which are often processes that somehow consume values of their parameterized type variables, and even more exotic ones like phantom types, which utterly ignore their type variables
newtype Proxy a = Proxy -- need no `a`, produce no `a`

-- we have both this instance
instance Functor Proxy where
  fmap _ Proxy = Proxy
-- and this one, though both instances ignore the passed function
instance Contravariant Proxy where
  contramap _ Proxy = Proxy
and... "nothing special" type variables which cannot have any sort of nature, usually because they're being used as both covariant and contravariant types.
data Endo a = Endo (a -> a)
-- no instance Functor Endo or Contravariant Endo, it needs to treat
-- the input `a` differently from the output `a` such as in
--
-- instance Profunctor (->) where
Finally, a type constructor which takes multiple arguments may have different natures for each argument. In Haskell, the final type parameter is usually treated specially, though.

How does fmap work for List

Learn You a Haskell gives a description of the Functor typeclass.
I can see that for list, it's implemented as follows:
instance Functor [] where
  fmap = map
But how does this work?
In the typeclass Functor, fmap doesn't even have an implementation. All it has is just a type declaration like this:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
Just by having the type declaration, how does Haskell figure out the right map operation for lists?
map is just a normal function with type (a -> b) -> [a] -> [b]. Unlike fmap, it is not part of the Functor typeclass. It works exactly how you think it does.
The idea behind typeclasses is that you use the types to figure out which implementation to use. When we say instance Functor [] where ..., we're telling the compiler what the implementation of fmap for [] (the list type) is.
In this case, the implementation for fmap is just map, which is a normal function.
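You can see the dispatch at work in GHCi: on lists, fmap and map give exactly the same results:
Prelude> fmap (*2) [1,2,3]
[2,4,6]
Prelude> map (*2) [1,2,3]
[2,4,6]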

Can liftM differ from liftA?

According to the Typeclassopedia (among other sources), Applicative logically belongs between Monad and Pointed (and thus Functor) in the type class hierarchy, so we would ideally have something like this if the Haskell prelude were written today:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Pointed f where
  pure :: a -> f a

class Pointed f => Applicative f where
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  -- either the traditional bind operation
  (>>=) :: m a -> (a -> m b) -> m b
  -- or the join operation, which together with fmap is enough
  join :: m (m a) -> m a
  -- or both with mutual default definitions
  x >>= f = join (fmap f x)
  join x  = x >>= id
  -- with return replaced by the inherited pure
  -- ignoring fail for the purposes of discussion
(I re-typed those default definitions from the explanation at Wikipedia; any errors are my own, but something along these lines is at least in principle possible.)
As the libraries are currently defined, we have:
liftA :: (Applicative f) => (a -> b) -> f a -> f b
liftM :: (Monad m) => (a -> b) -> m a -> m b
and:
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
ap :: (Monad m) => m (a -> b) -> m a -> m b
Note the similarity between these types within each pair.
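For context, their standard definitions are roughly the following (paraphrased from base; the actual source may differ in minor details):
liftA :: Applicative f => (a -> b) -> f a -> f b
liftA f a = pure f <*> a

liftM :: Monad m => (a -> b) -> m a -> m b
liftM f m = m >>= \x -> return (f x)

ap :: Monad m => m (a -> b) -> m a -> m b
ap mf mx = mf >>= \f -> mx >>= \x -> return (f x)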
My question is: are liftM (as distinct from liftA) and ap (as distinct from <*>) simply a result of the historical reality that Monad wasn't designed with Pointed and Applicative in mind? Or are they in some other behavioral way (potentially, for some legal Monad definitions) distinct from the versions that only require an Applicative context?
If they are distinct, could you provide a simple set of definitions (obeying the laws required of Monad, Applicative, Pointed, and Functor definitions described in the Typeclassopedia and elsewhere but not enforced by the type system) for which liftA and liftM behave differently?
Alternatively, if they are not distinct, could you prove their equivalence using those same laws as premises?
liftA, liftM, fmap, and . should all be the same function, and they must be if they satisfy the functor law:
fmap id = id
However, this is not checked by Haskell.
Now for Applicative. It's possible for ap and <*> to be distinct for some functors simply because there could be more than one implementation that satisfies the types and the laws. For example, List has more than one possible Applicative instance. You could declare an applicative as follows:
instance Applicative [] where
  (f:fs) <*> (x:xs) = f x : (fs <*> xs)
  _ <*> _ = []
  pure = repeat
The ap function would still be defined as liftM2 id, which is the Applicative instance that comes for free with every Monad. But here you have an example of a type constructor having more than one Applicative instance, both of which satisfy the laws. But if your monads and your applicative functors disagree, it's considered good form to have different types for them. For example, the Applicative instance above does not agree with the monad for [], so you should really say newtype ZipList a = ZipList [a] and then make the new instance for ZipList instead of [].
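You can compare the two behaviours in GHCi with the ZipList newtype from Control.Applicative, which packages exactly this zippy instance:
Prelude Control.Applicative> [(+1),(*2)] <*> [10,20]
[11,21,20,40]
Prelude Control.Applicative> getZipList (ZipList [(+1),(*2)] <*> ZipList [10,20])
[11,40]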
They can differ, but they shouldn't.
They can differ because they can have different implementations: one is defined in an instance Applicative while the other is defined in an instance Monad. But if they indeed differ, then I'd say the programmer who wrote those instances wrote misleading code.
You are right: the functions exist as they do for historical reasons. People have strong ideas about how things should have been.
