Differences between functors and endofunctors - haskell

Can someone explain in simple terms the difference between the two? I'm not fully understanding the part where monads are endofunctors versus being just functors.

A functor may go from one category to a different one; an endofunctor is a functor whose source and target categories are the same.
Same as with endomorphisms versus morphisms.
Now, why must monads be endofunctors?
There is the famous quote that "Monads are just monoids in the category of endofunctors". Fortunately, somebody else has already explained that rather well in this answer.
The key reason why a monad has to be an endofunctor is that join, as it is called in Haskell, or µ, as it is usually called in category theory, is part of the definition¹ of a monad. Now
Prelude Control.Monad> :t join
join :: Monad m => m (m a) -> m a
so the result of applying the functor m to an object (in Hask, the category of Haskell types as objects and functions as morphisms, a type) must be an object that m can again be applied to. That means it must belong to the category that is the domain of the functor m.
A functor can only be composed with itself if its domain and codomain are the same [strictly, if its codomain is a subcategory of its domain], in other words, if it is an endofunctor. Since composability with itself is part of the definition of a monad, monads are a fortiori endofunctors.
¹ One definition; one can alternatively define a monad using (>>=), i.e. bind, and obtain join as a derived operation.
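To make the footnote concrete, here is a small sketch of how the two presentations interconvert (the names joinViaBind and bindViaJoin are mine, chosen to avoid clashing with the library definitions):
import Control.Monad (join)

-- join in terms of (>>=): flattening is just binding with the identity.
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mma = mma >>= id

-- (>>=) in terms of join and fmap: map the Kleisli arrow, then flatten.
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin ma f = join (fmap f ma)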

Related

Why is there a distinction between co and contravariant functors in Haskell but not Category Theory?

This answer from a Category Theory perspective includes the following statement:
...the truth is that there's no real distinction between co and contravariant functor, because every functor is just a covariant functor.
...
More in details a contravariant functor F from a category C to a category D is nothing more than a (covariant) functor of type F : C^op → D, from the opposite category of C to the category D.
On the other hand, Haskell's Functor and Contravariant merely require fmap and contramap, respectively, to be defined for an instance. This suggests that, from the perspective of Haskell, there exist objects that are Contravariant but are not Functors (and vice versa).
So it seems that in Category Theory "there's no real distinction between co and contravariant functors" while in Haskell there is a distinction between Contravariant and Functor.
I suspect that this difference has something to with all implementation in Haskell happening in Hask, but I'm not sure.
I think I understand each of the Category Theory and Haskell perspectives on their own, but I'm struggling to find an intuition that connects the two.
It's for convenience.
One could get by with a more general Functor class, and define instances for endofunctors on Hask (corresponding to our existing Functor) and functors from Hask^op to Hask (corresponding to our existing Contravariant). But this comes at a figurative cognitive cost and a quite literal syntactical cost: one must then rely on type inference or type annotations to select an instance, and there are explicit conversions (named Op and getOp in the standard library) into and out of Hask^op.
Using the names fmap and contramap relaxes both costs: readers do not need to run Hindley-Milner in their head to decide which instance is being selected when it is unambiguous, and writers do not need to give explicit conversions or type annotations to select an instance in cases where it is ambiguous.
(I am actually rewriting history a little bit here. The real reason is that the language designers thought the specialized Functor would be useful and hadn't imagined or didn't see a need for a more general Functor. People came along later and noticed it would sometimes be useful. But experience with the generalized Functor class shows that it can be tedious, and that specialized classes for the most common cases turn out to be a surprisingly good fit after all, for the reasons described above.)
Imagine for a minute we had something like the following.
class MoreAccurateFunctor c d f where
  fmap :: c a b -> d (f a) (f b)
Since (->) is an instance of Category (this is Hask), we would have that Functor ~ MoreAccurateFunctor (->) (->).
Now, imagine we have Dual (->), the dual category of (->) (this would be Hask^op, and we would have Dual (->) a b ~ (b -> a)); then we would have that Contravariant ~ MoreAccurateFunctor (Dual (->)) (->).
I don't know if this helps, but the idea is to point out that Functor and Contravariant are two specialisations of MoreAccurateFunctor, while the latter class is closer to the definition of a functor in category theory.
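Filling the idea in as compilable code, a sketch under some assumptions of my own: the Prelude's fmap is hidden so the class can keep its method name, Op stands in for the hypothetical Dual (->), and Predicate is defined locally rather than imported:
{-# LANGUAGE MultiParamTypeClasses #-}
import Prelude hiding (fmap)
import qualified Prelude

-- c is the source category, d is the target category, f maps objects.
class MoreAccurateFunctor c d f where
  fmap :: c a b -> d (f a) (f b)

-- Stand-in for Dual (->): the arrows of Hask, reversed.
newtype Op a b = Op { getOp :: b -> a }

-- Functor ~ MoreAccurateFunctor (->) (->)
instance MoreAccurateFunctor (->) (->) Maybe where
  fmap = Prelude.fmap

-- Contravariant ~ MoreAccurateFunctor Op (->)
newtype Predicate a = Predicate { runPredicate :: a -> Bool }

instance MoreAccurateFunctor Op (->) Predicate where
  fmap (Op f) (Predicate p) = Predicate (p . f)   -- this is contramap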
Mathematically, considering contravariant functors as a distinct class of functors is just a notational convenience; the contravariant functor F : C -> D can always be defined as a covariant functor F' : C^{op} -> D, so getting rid of the idea of contravariant functors would just force you to talk about the opposite category explicitly.
In Haskell, the Functor class represents an endofunctor on the (assumed) category Hask. There is no convenient way to represent Hask^op directly (or at least, not in a form that helps us define functors from that category), nor is there a type class for "exofunctors"*, so instead we define the Contravariant class, whose contramap function can reverse the arrow from Hask "on demand", so to speak.
* Is "exofunctor" a real term? I just made it up to indicate a functor that is not an endofunctor.

What is the difference between Coyoneda and free-functors?

As I understand it, we can derive a Functor for free via Coyoneda.
But there is also the Haskell package http://hackage.haskell.org/package/free-functors
My question: what is the difference between Coyoneda and http://hackage.haskell.org/package/free-functors-0.8.1/docs/src/Data-Functor-Free.html#Free ?
The key here is to understand what it means for a type constructor not to be a functor. It means that it's defined on objects and not on morphisms. But we can describe it as a functor too, if we choose a different source category. For every category C, you can define a discrete category |C|, which has the same objects as C, but no morphisms other than the identity morphisms. A "non-functor" is just a functor from |C| to C. There is a trivial injection functor J from |C| to C that is identity on objects and morphisms (of which there are only identity morphisms). So let's see:
Coyoneda is defined as a left Kan extension of a functor f along the identity functor. It requires f to be a functor.
Free functor looks like Coyoneda, but it's really the left Kan extension of a functor f from |C| to C along J. Strictly speaking J is not identity, but it's close enough, hence the abuse of notation.
The free functor from Sjoerd Visscher's library extends this idea even further. Roughly speaking, a type class in Haskell defines a subcategory of Hask. His free functor is then the left Kan extension of a functor f from that subcategory (or the discrete version of it) to Hask, along the injection of that subcategory into Hask.
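To see the Coyoneda side of this concretely, here is a sketch of its definition (essentially the shape used in the kan-extensions package, though the exact form there may differ): it stores a value of f a together with a postponed mapping function, so fmap never needs anything from f itself.
{-# LANGUAGE GADTs #-}

data Coyoneda f b where
  Coyoneda :: (a -> b) -> f a -> Coyoneda f b

-- fmap just composes onto the stored function; f is never touched.
instance Functor (Coyoneda f) where
  fmap g (Coyoneda h fa) = Coyoneda (g . h) fa

-- Inject any type constructor, Functor or not.
liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id

-- Only when f really is a Functor can we get back out.
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda h fa) = fmap h fa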

What are the situations when you can/cannot have a Functor instance for a datatype?

Considering a type with kind * -> *, I'm trying to find rules and build intuition for when you can and when you cannot have a Functor instance for this type.
So far the rules that I see are the following:
No Functor instance for the container types that have restrictions on the contained values.
Example: You cannot have a Functor instance for Set because Ord is needed for the contained value
No Functor instance for contravariant data types.
Example:
newtype Contra a = Contra (a -> Int)
Besides this, are there other situations?
In addition to your rules:
Must be of kind * -> *
No Functor instance for the container types that have restrictions on the contained values.
No Functor instance for contravariant data types.
I would add a few:
A natural extension of "not for contravariant types": no Functor instance for invariant data types. e.g. data Iso a b = Iso (a -> b) (b -> a)
GADTs often cannot have a Functor instance. For example,
data Foo a where
  Foo :: Foo Int
Perhaps you would want to lump this into the "only covariant" rule (it's not clear to me what variance this even has), or into the "unrestricted container types" rule (GADTs introduce type equalities that are very constraint-like).
However, keep in mind that these rules apply to Functor only, not to functors in general. I expect any stupid type (of appropriate kind) you can cook up will be a functor on some suitable category closely related to Hask.
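To make the "invariant" case above concrete, here is a minimal sketch of the kind of class such types can support instead of Functor (a class along these lines is provided by the invariant package; the definition below is just my own illustration):
-- An invariant functor needs mapping functions in both directions,
-- because its type variable occurs in both positive and negative positions.
class Invariant f where
  invmap :: (a -> b) -> (b -> a) -> f a -> f b

data Iso a b = Iso (a -> b) (b -> a)

-- Iso a cannot be a Functor, but it is invariant in its last argument.
instance Invariant (Iso a) where
  invmap to from (Iso f g) = Iso (to . f) (g . from)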
From the point of view of category theory, Haskell's type constructors of the kind *->* define a mapping of objects in the category Hask (it's a category modulo termination issues, which I'm going to conveniently ignore). A functor is a mapping of objects and, more importantly, a mapping of morphisms. In fact, it's primarily a mapping of morphisms--the mapping of objects is, in a way, a side effect of that. Objects are just the endpoints of morphisms. This mapping of morphisms must preserve composition and identity.
In Haskell, morphisms are functions, and the mapping of functions is implemented as fmap.
The fact that in Haskell we start with the mapping of objects is a little backwards. This works because the syntax of the language drastically limits the possibilities for defining the mapping of objects. Such mappings are very regular and, quite often, come equipped with canonical mappings of functions. For instance, algebraic data types are constructed using products and coproducts, which are functorial in nature (hence the possibility of deriving Functor automatically). Also, function types (categorical exponentials) are functorial in the second argument (and contravariant in the first). So, as long as we use the tools of the bicartesian closed category (products, coproducts, and exponentials), it's easy to construct functorial mappings of objects.
A functor must be defined for every object in a given category, so data types that are constrained by type classes (e.g., Set with the Ord constraint) are not Functors in Hask, but they may be functors in a subcategory of Hask. (It's possible to define subcategories in Haskell together with their own functors.)
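As a sketch of what "functors on a subcategory of Hask" can look like in practice (the class and method names here are my own, not a standard library API):
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import Data.Kind (Constraint)
import qualified Data.Set as Set

-- Each instance names the constraint carving out its subcategory of Hask.
class ConstrainedFunctor f where
  type Suitable f a :: Constraint
  cfmap :: (Suitable f a, Suitable f b) => (a -> b) -> f a -> f b

-- Set is a functor on the subcategory of types with an Ord instance.
instance ConstrainedFunctor Set.Set where
  type Suitable Set.Set a = Ord a
  cfmap = Set.map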
There is a -XDeriveFunctor extension, described in the GHC user's guide; you can find the full description there. It covers all the cases in which Functor deriving can fail, including the check for the proper kind, etc. I believe this list of constraints on the algorithm is exhaustive.
For example, here is another case where a type has kind * -> * but cannot be an instance of Functor:
A data type’s last type variable is used in an -XExistentialQuantification constraint, or is refined in a GADT
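For contrast, a small example where deriving succeeds (the Tree type is just an illustration of mine):
{-# LANGUAGE DeriveFunctor #-}

-- A plain covariant container: GHC derives the Functor instance for us.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor)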

What is a monad in FP, in categorical terms?

Every time someone promises to "explain monads", my interest is piqued, only to be replaced by frustration when the alleged "explanation" is a long list of examples terminated by some off-hand remark that the "mathematical theory" behind the "esoteric ideas" is "too complicated to explain at this point".
Now I'm asking for the opposite. I have a solid grasp on category theory and I'm not afraid of diagram chasing, Yoneda's lemma or derived functors (and indeed of monads and adjunctions in the categorical sense).
Could someone give me a clear and concise definition of what a monad is in functional programming? The fewer examples the better: sometimes one clear concept says more than a hundred timid examples. Haskell would do nicely as a language for demonstration though I'm not picky.
This question has some good answers: Monads as adjunctions
More to the point, Derek Elkins' "Calculating Monads with Category Theory" article in TMR #13 should have the sort of constructions you're looking for: http://www.haskell.org/wikiupload/8/85/TMR-Issue13.pdf
Finally, and perhaps this is really the closest to what you're looking for, you can go straight to the source and look at Moggi's seminal papers on the topic from 1988-91: http://www.disi.unige.it/person/MoggiE/publications.html
See in particular "Notions of computation and monads".
My own I'm sure too condensed/imprecise take:
Begin with a category Hask whose objects are Haskell types, and whose morphisms are functions. Functions are also objects in Hask, as are products. So Hask is Cartesian closed. Now introduce an arrow mapping every object in Hask to an object in MHask, a subset of the objects in Hask. Unit!
Next introduce an arrow mapping every arrow on Hask to an arrow on MHask. This gives us map, and makes MHask a covariant endofunctor. Now introduce an arrow mapping every object in MHask that is generated from an object in MHask (via unit) to the object in MHask which generates it. Join! And from that, MHask is a monad (a monoid in the category of endofunctors, to be more precise).
I'm sure there is a reason why the above is deficient, which is why I'd really direct you, if you're looking for formalism, to the Moggi papers in particular.
As a complement to Carl's answer, a Monad in Haskell is (theoretically) this:
class Monad m where
  join :: m (m a) -> m a
  return :: a -> m a
  fmap :: (a -> b) -> m a -> m b
Note that "bind" (>>=) can be defined as
x >>= f = join (fmap f x)
According to the Haskell Wiki
A monad in a category C is a triple (F : C → C, η : Id → F, μ : F ∘ F → F)
...with some axioms. For Haskell, fmap, return, and join line up with F, η, and μ, respectively. (fmap in Haskell defines a Functor). If I'm not mistaken, Scala calls these map, pure, and join respectively. (Scala calls bind "flatMap")
Ok, using Haskell terminology and examples...
A monad, in functional programming, is a composition pattern for data types with the kind * -> *.
class Monad (m :: * -> *) where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
(There's more to the class than that in Haskell, but those are the important parts.)
A data type is a monad if it can implement that interface while satisfying three conditions in the implementation. These are the "monad laws", and I'll leave the full treatment to those long-winded explanations. I summarize the laws as "(>>= return) is an identity function, and (>>=) is associative." It's really not more than that, even if it can be expressed more precisely.
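For reference, the three laws are conventionally spelled out like this (≡ meaning the two sides are interchangeable):
-- Left identity:   return a >>= f    ≡  f a
-- Right identity:  m >>= return      ≡  m
-- Associativity:   (m >>= f) >>= g   ≡  m >>= (\x -> f x >>= g)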
And that's all a monad is. If you can implement that interface while preserving those behavioral properties, you have a monad.
That explanation is probably shorter than you expected. That's because the monad interface really is very abstract. The incredible level of abstraction is part of why so many different things can be modeled as monads.
What's less obvious is that, as abstract as the interface is, it allows generically modeling any control-flow pattern, regardless of the actual monad implementation. This is why the Control.Monad module in GHC's base library has combinators like when, forever, etc. And this is why the ability to explicitly abstract over any monad implementation is powerful, especially with support from a type system.
You should read the paper by Eugenio Moggi, "Notions of computation and monads", which explains the then-proposed role of monads in structuring the denotational semantics of effectful languages.
Also there is a related question:
References for learning the theory behind pure functional languages such as Haskell?
As you don't want hand-waving, you have to read scientific papers, not forum answers or tutorials.
A monad is a monoid in the category of endofunctors, what's the problem?
Humor aside, I personally believe that monads, as they are used in Haskell and functional programming, are better understood from the monads-as-an-interface point of view (as in Carl's and Dan's answers) instead of from the monads-as-the-term-from-category-theory point of view. I have to confess that I only really internalized the whole monad thing when I had to use a monadic library from another language in a real project.
You mention that you didn't like all the "lots of examples" tutorials. Has anyone ever pointed you to the Awkward squad paper? It focuses mainly on the IO monad, but the introduction gives a good technical and historical explanation of why the monad concept was embraced by Haskell in the first place.
I don't really know what I'm talking about, but here's my take:
Monads are used to represent computations. You can think of a normal procedural program, which is basically a list of statements, as a bunch of composed computations. Monads are a generalization of this concept, allowing you to define how the statements get composed. Each computation has a value (it could just be ()); the monad just determines how that value behaves as it is strung through a series of computations.
Do notation is really what makes this clear: it's basically a special sort of statement-based language that lets you define what happens between statements. It's as if you could define how ";" worked in C-like languages.
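As a small illustration of the point about ";" (a sketch of my own, using Maybe as the "short-circuiting semicolon"):
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The do-block...
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  y <- safeDiv x c
  return (x + y)

-- ...desugars (roughly) into explicit binds; the monad decides what
-- happens "between the statements", here: stop at the first Nothing.
calc' :: Int -> Int -> Int -> Maybe Int
calc' a b c =
  safeDiv a b >>= \x ->
  safeDiv x c >>= \y ->
  return (x + y)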
In this light all of the monads I've used so far make sense: State doesn't affect the value but updates a second value which is passed along from computation to computation in the background; Maybe short-circuits the value if it ever encounters a Nothing; List lets you have a variable number of values passed through; IO lets you have impure values passed through in a safe way. The more specialized monads I've used, like Gen and Parsec parsers, are also similar.
Hopefully this is a clear explanation which isn't completely off-base.
Since you understand monads in the category-theoretic sense I am interpreting your question as being about the presentation of monads in functional programming.
Thus my answer avoids any explanation of what a monad is, or any intuition about its meaning or use.
Answer: In Haskell a monad is presented, in an internal language for some category, as the (internalised) maps of a Kleisli triple.
Explanation:
It is hard to be precise about the properties of the "Hask category", and these properties are largely irrelevant for understanding Haskell's presentation of monads.
Instead, for this discussion, it is more useful to understand Haskell as an internal language for some category C. Haskell functions define morphisms in C and Haskell types are objects in C, but the particular category in which these definitions are made is unimportant.
Parametric data types, e.g. data F a = ..., are object mappings, e.g. F : |C| -> |C|.
The usual description of a monad in Haskell is in Kleisli triple (or Kleisli extension) form:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
where:
m is the object mapping m : |C| -> |C|
return is the unit operation on objects
>>= (pronounced "bind" by Haskellers) is the extension operation on morphisms but with its first two parameters swapped (cf. usual signature of extension (-)* : (a -> m b) -> m a -> m b)
(These maps are themselves internalised as families of morphisms in C, which is possible since m : |C| -> |C|).
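To connect this with code, a small sketch (ext is my own name for the extension operator; in the base library it is written (=<<), and Kleisli composition is (>=>) from Control.Monad):
import Control.Monad ((>=>))

-- The extension operation in its usual argument order: (-)* sends a
-- Kleisli arrow a -> m b to a morphism m a -> m b.
ext :: Monad m => (a -> m b) -> (m a -> m b)
ext f ma = ma >>= f

-- Composition in the Kleisli category, which is what do-notation
-- effectively sequences.
kleisliCompose :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisliCompose f g = f >=> g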
Haskell's do-notation (if you have come across this) is therefore an internal language for Kleisli categories.
The Haskell wikibook page has a good basic explanation.

Are all Haskell functors endofunctors?

I'm a bit confused, and need someone to set me straight. Lets outline my current understanding:
Where E is an endofunctor, and A is some category:
E : A -> A.
Since all types and morphisms in Haskell are in the Hask category, is not any functor in Haskell also an endofunctor? F : Hask -> Hask.
I have a good feeling that I'm wrong, and oversimplifying this somehow, and I'd like someone to tell me what an idiot I am. Thanks.
You may want to clarify whether you're asking about "functors in Haskell", or Functors. It's not always clear what category is being assumed when Category Theory terms are used in Haskell.
But yes, the default assumption is Hask, which is taken to be the category of Haskell types with functions as morphisms. In that case, an endofunctor F on Hask would map any type A to a type F(A) and any function f between two types A and B to a function F(f) between some types F(A) and F(B).
If we then limit ourselves to only those endofunctors which map any type a to a type (f a) where f is a type constructor with kind * -> *, then we can describe the associated map for functions as a higher-order function with type (a -> b) -> (f a -> f b), which is of course the type class called Functor.
However, one can easily imagine well-behaved endofunctors on Hask which can't be written (directly) as an instance of Functor, such as a functor mapping a type a to Either a t. And while there's obviously not much sense in a functor from Hask to some other category entirely, it's reasonable to consider a (contravariant) functor from Hask to Hask^op.
Beyond that, instances of Functor necessarily map from the entire category Hask onto some subset of it that, thus, also forms a category. But it's also reasonable to talk about functors between subsets of Hask. For instance, consider a functor that sends types Maybe a to [a].
You may wish to peruse the category-extras package, which provides some Category Theory-inspired structures embedded within Hask instead of assuming the entirety of it.
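To illustrate the "can't be written (directly)" remark above (a sketch of my own): Haskell has no type-level lambdas, so the object mapping a ↦ Either a t only becomes a Functor instance after flipping the arguments with a newtype.
-- The mapping  a |-> Either a t  is a fine endofunctor on Hask, but a
-- Functor instance needs a type constructor applied to its last argument.
newtype FlipEither t a = FlipEither (Either a t)

instance Functor (FlipEither t) where
  fmap f (FlipEither (Left a))  = FlipEither (Left (f a))
  fmap _ (FlipEither (Right t)) = FlipEither (Right t)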
Even if ultimately, you manipulate Hask, there are a lot of other categories that can be built on Hask, which can be meaningful for the problem at hand:
Hask^op, which is Hask with all arrows reversed
Hask × Hask, functors on it are bifunctors
Comma categories, i.e. objects are morphisms to a fixed object a, morphisms are commutative triangles
Functor categories, morphisms are natural transformations
Algebra categories
Monoidal categories
Kleisli categories
...
Grab a copy of Mac Lane's Categories for the Working Mathematician to have the definitions, and try to find by yourself the problems they solve in Haskell. Especially choke on adjoint functors (which are initial/terminal objects in the right category) and their relationship with monads.
You'll see that even if there is one big category (Hask, or perhaps "lifted objects from Hask with the right arrows/products/...", which encapsulates the language choices of Haskell such as non-strictness and laziness), proper derived categories are expressive.
A possibly relevant (or at least interesting) discussion specifically regarding monads is found in the paper "Monads need not be endofunctors":
http://www.cs.nott.ac.uk/~txa/publ/Relative_Monads.pdf

Resources