Why doesn't Haskell have a stronger alternative to Eq?

The reason why Set is not a functor is given here. It seems to boil down to the fact that a == b && f a /= f b is possible. So, why doesn't Haskell have as standard an alternative to Eq, something like
class Eq a => StrongEq a where
  (===) :: a -> a -> Bool
  (/==) :: a -> a -> Bool
  x /== y = not (x === y)
  x === y = not (x /== y)
for which instances are supposed to obey the laws
∀a,b,f. not (a === b) || (f a === f b)
∀a. a === a
∀a,b. (a === b) == (b === a)
and maybe some others? Then we could have:
instance StrongEq a => Functor (Set a) where
  -- ...
Or am I missing something?
Edit: my problem is not “Why are there types without an Eq instance?”, like some of you seem to have answered. It's the opposite: “Why are there instances of Eq that aren't extensionally equal? Why are there too many Eq instances?”, combined with “If a == b does imply extensional equality, why is Set not an instance of Functor?”.
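For a concrete example of what I mean by "too many Eq instances", consider this sketch (Frac is an illustrative type, not from any library):
data Frac = Frac { num :: Int, den :: Int }

instance Eq Frac where
  Frac a b == Frac c d = a * d == c * b   -- compares the represented rationals

-- Frac 1 2 == Frac 2 4 is True, yet num (Frac 1 2) /= num (Frac 2 4),
-- so with f = num we get a == b && f a /= f b.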
Also, my instance declaration is rubbish (thanks @n.m.). I should have said:
newtype StrongSet a = StrongSet (Set a)

instance Functor StrongSet where
  fmap :: (StrongEq a, StrongEq b) => (a -> b) -> StrongSet a -> StrongSet b
  fmap f (StrongSet s) = StrongSet (map f s)

instance StrongEq a => Functor (Set a) where
This makes sense neither in Haskell nor in the grand mathematical/categorical scheme of things, regardless of what StrongEq means.
In Haskell, Functor requires a type constructor of kind * -> *. The arrow reflects the fact that in category theory, a functor is a kind of mapping. [] and (the hypothetical) Set are such type constructors. [a] and Set a have kind * and cannot be functors.
In Haskell, it is hard to define Set such that it can be made into a Functor because equality cannot be sensibly defined for some types no matter what. You cannot compare two things of type Integer->Integer, for example.
Let's suppose there is a function
goedel :: Integer -> Integer -> Integer
goedel x y = undefined  -- placeholder: compute the result of the function with
                        -- Goedel number x, applied to y
Suppose you have a value s :: Set Integer. What should fmap goedel s look like? How do you eliminate duplicates?
In your typical set theory, equality is magically defined for everything, including functions, so Set (or Powerset, to be precise) is a functor, no problem with that.

Since I'm not a category theorist, I'll try to write a more concrete/practical explanation (i.e., one I can understand):
The key point is the one that @leftaroundabout made in a comment:
== is supposed to witness "equivalent by all observable means" (that doesn't necessarily require that a == b hold only for identical implementations; but anything you can "officially" do with a and b should again yield equivalent results, so unAlwaysEq should never be exposed in the first place). If you can't ensure this for some type, you shouldn't give it an Eq instance.
That is, there should be no need for your StrongEq because that's what Eq is supposed to be already.
Haskell values are often intended to represent some sort of mathematical or "real-life" value. Many times, this representation is one-to-one. For example, consider the type
data PlatonicSolid = Tetrahedron | Cube | Octahedron
                   | Dodecahedron | Icosahedron
This type contains exactly one representation of each Platonic solid. We can take advantage of this by adding deriving Eq to the declaration, and it will produce the correct instance.
In many cases, however, the same abstract value may be represented by more than one Haskell value. For example, the red-black trees Node B (Node R Leaf 1 Leaf) 2 Leaf and Node B Leaf 1 (Node R Leaf 2 Leaf) can both represent the set {1,2}. If we added deriving Eq to our declaration, we would get an instance of Eq that distinguishes things we want to be considered the same (outside of the implementation of the set operations).
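For concreteness, here is a minimal sketch of that situation (the Color and Tree names are illustrative, not from the original):
data Color = R | B deriving (Eq, Show)

data Tree a = Leaf | Node Color (Tree a) a (Tree a)
  deriving (Eq, Show)   -- structural equality: finer-grained than set equality

t1, t2 :: Tree Int
t1 = Node B (Node R Leaf 1 Leaf) 2 Leaf
t2 = Node B Leaf 1 (Node R Leaf 2 Leaf)
-- Both represent the set {1,2}, yet t1 == t2 evaluates to False.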
It's important to make sure that types are only made instances of Eq (and Ord) when appropriate! It's very tempting to make something an instance of Ord just so you can stick it in a data structure that requires ordering, but if the ordering is not truly a total ordering of the abstract values, all manner of breakage may ensue. For example, unless the documentation absolutely guarantees otherwise, a function called sort :: Ord a => [a] -> [a] may not only be an unstable sort; it may not even produce a list containing all the same Haskell values that went into it. sort [Bad 1 "Bob", Bad 1 "James"] can reasonably produce [Bad 1 "Bob", Bad 1 "James"], [Bad 1 "James", Bad 1 "Bob"], [Bad 1 "James", Bad 1 "James"], or [Bad 1 "Bob", Bad 1 "Bob"]. All of these are perfectly legitimate. A function that uses unsafePerformIO in the back room to implement a Las Vegas-style randomized algorithm, or to race threads against each other to get an answer from the fastest, may even give different results at different times, as long as the results are == to each other.
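The Bad type isn't defined in the original; here is an assumed definition consistent with the example, where equality and ordering inspect only the Int field:
data Bad = Bad Int String deriving Show

instance Eq Bad where
  Bad m _ == Bad n _ = m == n

instance Ord Bad where
  compare (Bad m _) (Bad n _) = compare m n

-- Bad 1 "Bob" == Bad 1 "James" is True, so a lawful-looking sort is free
-- to hand back either value in either position.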
tl;dr: Making something an instance of Eq is a way of making a very strong statement to the world; don't make that statement if you don't mean it.

Your second Functor instance also doesn't make any sense. The biggest reason why Set can't be a Functor in Haskell is that fmap can't have constraints. Inventing a different notion of equality such as StrongEq doesn't change the fact that you can't write those constraints on fmap in your Set instance.
fmap in general shouldn't have the constraints you need. It makes perfect sense to have functors of functions, for example (without that, the whole notion of using Applicative to apply functions inside a functor breaks down), and functions can't be members of Eq or your StrongEq in general.
fmap can't have extra constraints on only some instances, because of code like this:
fmapBoth :: (Functor f, Functor g) => (a -> b, c -> d) -> (f a, g c) -> (f b, g d)
fmapBoth (h, j) (x, y) = (fmap h x, fmap j y)
This code claims to work regardless of the functors f and g, and regardless of the functions h and j. It has no way of checking whether one of the functors is a special one that has extra constraints on fmap, nor any way of checking whether one of the functions it's applying would violate those constraints.
Saying that Set is a Functor in Haskell is saying that there is a (lawful) operation fmap :: (a -> b) -> Set a -> Set b, with that exact type. That is precisely what Functor means. fmap :: (Eq a, Eq b) => (a -> b) -> Set a -> Set b is not an example of such an operation.
It is possible, I understand, to use the ConstraintKinds GHC extension to write a different Functor class that permits constraints on the values which vary by Functor (and what you actually need is an Ord constraint, not just Eq). This blog post talks about doing so to make a new Monad class which can have an instance for Set. I've never played around with code like this, so I don't know much more than that the technique exists. It wouldn't help you hand off Sets to existing code that needs Functors, but you should be able to use it instead of Functor in your own code if you wish.
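Here is a minimal sketch of that technique, under the assumption that we define the class from scratch (CFunctor and Suitable are made-up names, not a standard API):
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}

import Data.Kind (Constraint)
import qualified Data.Set as Set

-- A functor-like class whose instances may demand a per-type constraint.
class CFunctor f where
  type Suitable f a :: Constraint
  cfmap :: (Suitable f a, Suitable f b) => (a -> b) -> f a -> f b

instance CFunctor Set.Set where
  type Suitable Set.Set a = Ord a   -- what Set.map actually needs
  cfmap = Set.map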

This notion of StrongEq is tough. In general, equality is a place where computer science becomes significantly more rigorous than typical mathematics in the kind of way which makes things challenging.
In particular, typical mathematics likes to talk about objects as though they exist in a set and can be uniquely identified. Computer programs, however, deal with types that do not always correspond to sets in any computable way (as a simple counterexample, tell me what set corresponds to the type data U = U (U -> U)). This means that it may be undecidable whether two values are identical.
This becomes an enormous topic in dependently typed languages since typechecking requires identifying like types and dependently typed languages may have arbitrary values in their types and thus need a way to project equality.
So, StrongEq could be defined over a restricted part of Haskell containing only the types which can be decidably compared for equality. We can consider this a category with the arrows as computable functions and then see Set as an endofunctor from types to the type of sets of values of that type. Unfortunately, these restrictions have taken us far from standard Haskell and make defining StrongEq or Functor (Set a) a little less than practical.

Related

What's the meaning of `fmap` over set of predicates in Haskell?

A student assignment I'm doing for a Haskell programming course includes a task I'm a little bit puzzled about: an instance of the Functor class is to be created for a set-based newtype, with the following declaration given:
newtype Set x = Set { contains :: (x -> Bool) }
I'm struggling to understand what it means for fmap to be applied to something like a set of predicates. In previous tasks, I defined fmap with functions like (+3) (to alter integers), toUpper (for strings), and so on. This is the first time I'm dealing with anything beyond ordinary types (various numerics, String, Char). Here is my humble attempt to start:
instance Functor Set where
  fmap f (Set x) = if (contains x) == True then Set (f x) else Set x
Surely, it's nonsense code, but I suppose some True/False value needs to be evaluated before the fmap application can go well. But, first of all, would you please explain what a set of predicates means, so I can work out a more sensible approach?
With this definition, it is actually impossible to define a Functor instance for Set. The reason for this is that your Set a type contains a in a negative position... that is, a is an argument to a function, not a value. When that happens, the type constructor can be a contravariant functor (type class Contravariant from the contravariant package), but it cannot be a covariant functor (the Functor type class).
Here are the definitions of those classes:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Contravariant f where
  contramap :: (a -> b) -> f b -> f a
See the difference? In a contravariant functor, the direction of the function you pass in is flipped when it's lifted to operate on the functor type.
In the end, this should make sense. Your notion of a "set" is a test that tells you whether something qualifies or not. This is a perfectly good mathematical definition of a set, but it gives you different computational powers than the standard one. For example, you cannot get at the elements of your predicate-based sets; only wait to be given a potential element. If you have a Set Integer, and a function f :: String -> Integer, then you can test Strings by first converting them to Integers and testing those, so you can get from Set Integer to Set String. But having a g :: Integer -> String doesn't let you test Strings!
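A sketch of what the Contravariant instance and the String-testing example above could look like (evenNumbers and evenLengthStrings are made-up names):
import Data.Functor.Contravariant (Contravariant (..))

newtype Set x = Set { contains :: x -> Bool }

instance Contravariant Set where
  contramap f (Set p) = Set (p . f)

evenNumbers :: Set Int
evenNumbers = Set even

-- Test Strings by first converting them to Ints with length:
evenLengthStrings :: Set String
evenLengthStrings = contramap length evenNumbers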
If this is an assignment, then either you've misunderstood the assignment, you've done some earlier step differently than expected (e.g., if you defined Set yourself in an earlier part, maybe you need to change the definition), or perhaps your instructor is hoping you'll struggle with this and understand why Functor cannot be defined.

How to think about the lack of laws

I am a mathematician who works a lot with category theory, and I've been using Haskell for a while to perform certain computations etc., but I am definitely not a programmer. I really love Haskell and want to become much more fluent in it, and the type system is something that I find especially great to have in place when writing programs.
However, I've recently been trying to implement category theoretic things, and am running into problems concerning the fact that you seemingly can't have class method laws in Haskell. In case my terminology here is wrong, what I mean is that I can write
class Monoid c where
  id :: c -> c
  m :: c -> c -> c
but I can't write some law along the lines of
m (m x y) z == m x $ m y z
From what I gather, this is due to the lack of dependent types in Haskell, but I'm not sure how exactly this is the case (having now read a bit about dependent types). It also seems that the convention is just to include laws like this in comments and hope that you don't accidentally cook up some instance that doesn't satisfy them.
How should I change my approach to Haskell to deal with this problem? Is there a nice mathematical/type-theoretic solution (for example, require the existence of an associator that is an isomorphism (though then the question is, how do we encode isomorphisms without a law?)); is there some 'hack' (using extensions such as DataKinds); should I be drastic and switch to using something like Idris instead; or is the best response to just change the way I think about using Haskell (i.e. accept that these laws can't be implemented in a Haskelly way)?
(bonus) How exactly does the lack of laws come from not supporting dependent types?
You want to require that:
m (m x y) z = m x (m y z) -- (1)
But to require this you need a way to check it. So you, or your compiler (or proof assistant), need to construct a proof of this. And the question is, what type is a proof of (1)?
One could imagine some Proof type but then maybe you could just construct a proof that 0 = 0 instead of a proof of (1) and both would have type Proof. So you’d need a more general type. I can’t decide how to break up the rest of the question so I’ll go for a super brief explanation of the Curry-Howard isomorphism followed by an explanation of how to prove two things are equal and then how dependent types are relevant.
The Curry-Howard isomorphism says that propositions are isomorphic to types and proofs are isomorphic to programs: a type corresponds to a proposition, and a proof of that proposition corresponds to a program constructing a value inhabiting that type.
Setting aside how many propositions can be expressed as types, an example would be that the type A * B (written (A, B) in Haskell) corresponds to the proposition “A and B,” while the type A + B (written Either A B in Haskell) corresponds to the proposition “A or B.” Finally, the type A -> B corresponds to “A implies B,” as a proof of this is a program which takes evidence of A and gives you evidence of B.
One should note that there isn’t a way to express not A, but one could imagine adding a type Not A with builtins of type Either a (Not a) for the law of the excluded middle, as well as Not (Not a) -> a and a * Not a -> Void (where Void is a type which cannot be inhabited and therefore corresponds to false); but then one can’t really run these programs to get constructivist proofs.
Now we will ignore some realities of Haskell and imagine that there are no ways around these rules (in particular, undefined :: a says everything is true, unsafeCoerce :: a -> b says that anything implies anything else, and any other non-terminating function "proves" its type without providing real evidence).
So we know how to combine propositions, but what might a proposition be? Well, one could be the proposition that two types are equal. In Haskell this corresponds to the GADT
data Eq a b where
  Refl :: Eq c c
where the constructor Refl corresponds to the reflexive property of equality.
[side note: if you’re still interested so far, you may be interested to look up Voevodsky’s univalent foundations, depending on how much the idea of “Homotopy type theory” interests you]
So can we prove something now? How about the transitive property of equality:
trans :: Eq a b -> Eq b c -> Eq a c
trans x y =
  case x of
    Refl ->       -- by this match being successful, the compiler now knows that a = b
      case y of
        Refl ->   -- and now b = c, so the compiler knows a = c
          Refl    -- the compiler knows this is of type Eq d d; as it knows a = c, this typechecks as Eq a c
This feels like one hasn’t really proven anything (especially as this mainly relies on the compiler knowing the transitive and symmetric properties), but one gets a similar feeling when proving simple things in logic as well.
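For comparison, symmetry is a one-liner in the same style (a sketch using the Eq GADT defined above):
-- Matching on Refl teaches the compiler that a = b, so Eq b a typechecks.
sym :: Eq a b -> Eq b a
sym Refl = Refl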
So now how might you prove the original proposition (1)? Well, let's imagine we want a type c to be a monoid; then we should also prove that $\forall x,y,z:c.\; m\,(m\,x\,y)\,z = m\,x\,(m\,y\,z).$ So we need a way to express m (m x y) z as a type. Strictly speaking this isn't dependent types (it can be done with DataKinds to promote values, and with type families instead of functions), but you do need dependent types to have types depend on values. Specifically, if you have a type Nat of natural numbers and a type family Vec :: Nat -> * (* is the kind, read: type, of all types) of fixed-length vectors, you could define a dependently typed function mkVec :: (n::Nat) -> Vec n. Observe how the type of the output depends on the value of the input.
So your law needs functions promoted to the type level (skipping the questions about how one defines type equality and value equality), as well as dependent types (made-up syntax):
class Monoid c where
  e :: c
  (*) :: c -> c -> c
  idl :: (x::c) -> Eq x (e * x)
  idr :: (x::c) -> Eq x (x * e)
  assoc :: (x::c) -> (y::c) -> (z::c) -> Eq ((x * y) * z) (x * (y * z))
Observe how types tend to become large with dependent types and proofs. In a language missing typeclasses one could put such values into a record.
Final note on the theory of dependent types and how these correspond to the Curry-Howard isomorphism.
Dependent types can be considered an answer to the question: what types correspond to the propositions $\forall x\in S\quad P(x)$ and $\exists y\in T\quad Q(y)?$
The answer is that you create new ways to make types: the dependent product and the dependent sum (coproduct). The dependent product expresses “for all values $x$ of type $S,$ there is a value of type $P(x).$” A normal product would be a dependent product with $S=2,$ a type inhabited by two values. A dependent product might be written (x:S) -> P x. A dependent sum says “there is some value $y$ of type $T,$ paired with a value of type $Q(y).$” This might be written (y:T) * Q y.
One can think of these as a generalisation of arbitrarily indexed (co)products from Set to general categories, where one might sensibly write e.g. $\prod_\Lambda X(\lambda),$ and sometimes such notation is used in type theory.

The fixed point functors of Free and Cofree

To make that clear, I'm not talking about how the free monad looks a lot like a fixpoint combinator applied to a functor, i.e. how Free f is basically a fixed point of f. (Not that this isn't interesting!)
What I'm talking about are fixpoints of Free, Cofree :: (*->*) -> (*->*), i.e. functors f such that Free f is isomorphic to f itself.
Background: today, to firm up my rather lacking grasp on free monads, I decided to just write a few of them out for different simple functors, both for Free and for Cofree, and see what better-known [co]monads they'd be isomorphic to. What intrigued me particularly was the discovery that Cofree Empty is isomorphic to Empty (meaning Const Void, the functor that maps any type to the uninhabited type). OK, perhaps this is just stupid – I've discovered that if you put empty garbage in you get empty garbage out, yeah! – but hey, this is category theory, where whole universes rise up from seeming trivialities... right?
The immediate question is, if Cofree has such a fixed point, what about Free? Well, it certainly can't be Empty as that's not a monad. The quick suspect would be something nearby like Const () or Identity, but no:
Free (Const ()) ~~ Either () ~~ Maybe
Free Identity ~~ (Nat,) ~~ Writer Nat
Indeed, the fact that Free always adds an extra constructor suggests that the structure of any functor that's a fixed point would have to be already infinite. But it seems odd that, if Cofree has such a simple fixed point, Free should only have a much more complex one (like the fix-by-construction FixFree a = C (Free FixFree a) that Reid Barton brings up in the comments).
Is the boring truth just that Free has no “accidental fixed point” and it's a mere coincidence that Cofree has one, or am I missing something?
Your observation that Empty is a fixed point of Cofree (which is not really true in Haskell, but I guess you want to work in some model that ignores ⊥, like Set) boils down to the fact that
there is a set E (the empty set) such that for every set X, the projection p₂ : X × E -> E is an isomorphism.
We could say in this situation that E is an absorbing object for the product. We can replace the word “set” by “object of C” for any category C with products, and we get a statement about C that may or may not be true. For Set, it happens to be true.
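In Haskell terms (again ignoring ⊥), the absorbing-object isomorphism can be sketched with E = Void (the function names are made up):
import Data.Void (Void, absurd)

-- p₂ : X × E -> E and its inverse; both sides are uninhabited,
-- so these are (vacuously) mutually inverse.
p2 :: (x, Void) -> Void
p2 = snd

p2Inverse :: Void -> (x, Void)
p2Inverse = absurd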
If we pick C = Setᵒᵖ, which also has products (because Set has coproducts), and then dualize the language to talk about sets again, we get the statement
there is a set F such that for every set Y, the inclusion i₂ : F -> Y + F is an isomorphism.
Obviously, this statement is not true for any set F (we can pick any non-empty set Y as a counterexample for any F). No surprise there; after all, Setᵒᵖ is a different category from Set.
So, we won't get a “trivial fixed point” of Free in the same way we got one for Cofree, because Setᵒᵖ is qualitatively different from Set. The initial object of Set is an absorbing object for the product, but the terminal object of Set is not an absorbing object for the coproduct.
If I may get on my soapbox for a moment:
There is much discussion among Haskell programmers about which constructions are the “duals” of which other constructions. Most of this is in a formal sense meaningless, because in category theory dualizing a construction works like this:
Suppose I have a construction which I can perform on any category C (or any category with certain extra structure and/or properties). Then the dual construction on a category C is the original construction on the opposite category Cᵒᵖ (which had better have the extra structure and properties we needed, if any).
For example: The notion of products makes sense in any category C (though products might not always exist), via the universal property defining products. To get a dual notion of coproducts in C we should ask what are the products in Cop, and we have just defined what products are in any category, so this notion makes sense.
The trouble with applying duality to the setting of Haskell is that the Haskell language prefers overwhelmingly to talk about just one category, Hask, in which we do our constructions. This causes two problems for talking about duality:
To obtain the dual of a construction as described above, I am supposed to be able to do the construction in any category, or at least any category of a particular form. So we must first generalize the construction that, typically, we have only done in the category Hask to a larger class of categories. (And having done so, there are plenty of other interesting categories we could potentially interpret the resulting notion in besides Haskᵒᵖ, such as Kleisli categories of monads.)
The category Hask enjoys many special properties which can be summarized by saying that (ignoring ⊥) Hask is a cartesian closed category. For example, this implies that the initial object is an absorbing object for the product. Haskᵒᵖ does not have these properties, which means that the generalized notion may not make sense in Haskᵒᵖ; it can also mean that two notions which happened to be equivalent in Hask are distinct in general, and have different duals.
For an example of the latter, take lenses. In Hask they can be constructed in a number of ways; two such ways are as getter/setter pairs and as coalgebras for the costate comonad. The former generalizes to categories with products and the latter to categories enriched in a particular way over Hask. If we apply the former construction to Haskᵒᵖ then we get prisms, but if we apply the latter construction to Haskᵒᵖ then we get algebras for the state monad, and these are not the same thing.
A more familiar example might be comonads: starting from the Haskell-centric presentation
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
some insight seems to be needed to determine which arrows to reverse to obtain
extract :: w a -> a
extend :: w a -> (w a -> b) -> w b
The point is that it would have been much easier to start from join :: m (m a) -> m a instead of (>>=); but finding this alternative presentation (equivalent due to special features of Hask) is a creative process, not a mechanical one.
In a question like yours, and many others like it, where it is pretty clear what sense of dual is intended, there's still absolutely no reason to expect a priori that the dual construction will actually exist or have the same properties as the original, because Haskᵒᵖ qualitatively behaves quite differently from Hask. A slogan might be
the theory of categories is self-dual, but the theory of any particular category is not!
Since you asked about the structure of the fixed points of Free, I'm going to sketch an informal argument that Free only has one fixed point which is a Functor, namely the type
newtype FixFree a = C (Free FixFree a)
that Reid Barton described. Indeed, I make a somewhat stronger claim. Let's start with a few pieces:
newtype Fix f a = Fix (f (Fix f) a)

instance Functor (f (Fix f)) => Functor (Fix f) where
  fmap f (Fix x) = Fix (fmap f x)

-- This is basically `MFunctor` from `Control.Monad.Morph`
class FFunctor (g :: (* -> *) -> * -> *) where
  hoistF :: Functor f => (forall a . f a -> f' a) -> g f b -> g f' b
Notably,
instance FFunctor Free where
  hoistF _f (Pure a) = Pure a
  hoistF f (Free fffa) = Free . f . fmap (hoistF f) $ fffa
Then
fToFixG :: (Functor f, FFunctor g) => (forall a . f a -> g f a) -> f a -> Fix g a
fToFixG fToG fa = Fix $ hoistF (fToFixG fToG) $ fToG fa

fixGToF :: forall f b (g :: (* -> *) -> * -> *) .
           (FFunctor g, Functor (g (Fix g)))
        => (forall a . g f a -> f a) -> Fix g b -> f b
fixGToF gToF (Fix ga) = gToF $ hoistF (fixGToF gToF) ga
If I'm not mistaken (which I could be), passing each side of an isomorphism between f and g f to each of these functions will yield each side of an isomorphism between f and Fix g. Substituting Free for g will demonstrate the claim. This argument is very hand-wavey, of course, because Haskell is inconsistent.

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly, using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)

type family Element a
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either-style type but put the type parameters the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of only using complete types (not bare type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become in the attempt to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple of possible functional dependencies in there!) The instance itself grew some crazy type-equality constraints to allow instance selection to be more reliable. And the worst part is that this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course; but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
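To make that concrete, here is the law check worked out against the broken instance above:
-- First law holds:
--   mfmap id (Just 3)                 =  Just 3    (id 3 == 3, so the value is kept)
-- Second law fails:
--   mfmap ((+1) . (*2)) (Just 3)      =  Just 14   (7 /= 3, so 7 * 2)
--   mfmap (+1) (mfmap (*2) (Just 3))  =  Just 26   (6 /= 3 gives 12; then 13 /= 12 gives 26)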
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing, the traditional Functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: *->*), with the extra property (->Constraint) that each morphism (a->b) (for all (a::*) (b::*)) is lifted to a morphism between the corresponding mapped objects (-> f a -> f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or in its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (f (Tough f (a, a)))

instance (Applicative f) => Semigroup (Tough f a) where
  -- pure lifts the combined pair into the outer layer of effects
  Doable x <> Doable y = Tough . pure . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ (<>) <$> xs <*> ys
  -- The following actually violates the semigroup associativity law.
  -- Hardly matters here I suppose...
  xs <> Doable y = xs <> Tough (pure . Doable $ fmap twice y)
  Doable x <> ys = Tough (pure . Doable $ fmap twice x) <> ys

twice :: a -> (a, a)
twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2ⁿ in a monadic state.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import qualified Data.Set as Set

instance MN2 Set.Set a where
  -- does not require Ord
  ret2 = Set.singleton

instance Prelude.Ord b => MN3 Set.Set a b where
  -- Set.union requires Ord
  bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the Monad class requires the monad to be usable at all types.
Also see: parameterized (indexed) monad.

Monad theory and Haskell

Most tutorials seem to give a lot of examples of monads (IO, state, list and so on) and then expect the reader to be able to abstract the overall principle and then they mention category theory. I don't tend to learn very well by trying generalise from examples and I would like to understand from a theoretical point of view why this pattern is so important.
Judging from this thread:
Can anyone explain Monads?
this is a common problem, and I've tried looking at most of the tutorials suggested (except the Brian Beck videos which won't play on my linux machine):
Does anyone know of a tutorial that starts from category theory and explains the IO, state, and list monads in those terms? The following is my unsuccessful attempt to do so:
As I understand it a monad consists of a triple: an endo-functor and two natural transformations.
The functor is usually shown with the type:
(a -> b) -> (m a -> m b)
I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
(a -> b) -> (a -> b)
I think the answer is that the domain and codomain both have a type of:
(a -> b) | (m a -> m b) | (m m a -> m m b) and so on ...
But I'm not really sure if that works or fits in with the definition of the functor given?
When we move on to the natural transformation it gets even worse. If I understand correctly a natural transformation is a second order functor (with certain rules) that is a functor from one functor to another one. So since we have defined the functor above the general type of the natural transformations would be:
((a -> b) -> (m a -> m b)) -> ((a -> b) -> (m a -> m b))
But the actual natural transformations we are using have type:
a -> m a
m a -> (a -> m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Martin
A quick disclaimer: I'm a little shaky on category theory in general, while I get the impression you have at least some familiarity with it. Hopefully I won't make too much of a hash of this...
Does anyone know of a tutorial that starts from category theory and explains IO, state, list monads in those terms?
First of all, ignore IO for now, it's full of dark magic. It works as a model of imperative computations for the same reasons that State works for modelling stateful computations, but unlike the latter IO is a black box with no way to deduce the monadic structure from the outside.
The functor is usually shown with the type: (a -> b) -> (m a -> m b). I included the second bracket just to emphasise the symmetry.
But, this is an endofunctor, so shouldn't the domain and codomain be the same like this?:
I suspect you are misinterpreting how type variables in Haskell relate to the category theory concepts.
First of all, yes, that specifies an endofunctor, on the category of Haskell types. A type variable such as a is not anything in this category, however; it's a variable that is (implicitly) universally quantified over all objects in the category. Thus, the type (a -> b) -> (a -> b) describes only endofunctors that map every object to itself.
Type constructors describe an injective function on objects, where the elements of the constructor's codomain cannot be described by any means except as an application of the type constructor. Even if two type constructors produce isomorphic results, the resulting types remain distinct. Note that type constructors are not, in the general case, functors.
The type variable m in the functor signature, then, represents a one-argument type constructor. Out of context this would normally be read as universal quantification, but that's incorrect in this case since no such function can exist. Rather, the type class definition binds m and allows the definition of such functions for specific type constructors.
The resulting function, then, says that for any type constructor m which has fmap defined, for any two objects a and b and a morphism between them, we can find a morphism between the types given by applying m to a and b.
Note that while the above does, of course, define an endofunctor on Hask, it is not even remotely general enough to describe all such endofunctors.
But the actual natural transformations we are using have type:
a -> m a
m a -> (a ->m b) -> m b
Are these subsets of the general form above? and why are they natural transformations?
Well, no, they aren't. A natural transformation is roughly a function (not a functor) between functors. The two natural transformations that specify a monad M look like I -> M where I is the identity functor, and M ∘ M -> M where ∘ is functor composition. In Haskell, we have no good way of working directly with either a true identity functor or with functor composition. Instead, we discard the identity functor to get just (Functor m) => a -> m a for the first, and write out nested type constructor application as (Functor m) => m (m a) -> m a for the second.
The first of these is obviously return; the second is a function called join, which is not part of the type class. However, join can be written in terms of (>>=), and the latter is more often useful in day-to-day programming.
As far as specific monads go, if you want a more mathematical description, here's a quick sketch of an example:
For some fixed type S, consider two functors F and G where F(x) = (S, x) and G(x) = S -> x (It should hopefully be obvious that these are indeed valid functors).
These functors are also adjoints; consider natural transformations unit :: x -> G (F x) and counit :: F (G x) -> x. Expanding the definitions gives us unit :: x -> (S -> (S, x)) and counit :: (S, S -> x) -> x. The types suggest uncurried function application and tuple construction; feel free to verify that those work as expected.
An adjunction gives rise to a monad by composition of the functors, so taking G ∘ F and expanding the definition, we get G (F x) = S -> (S, x), which is the definition of the State monad. The unit for the adjunction is of course return; and you should be able to use counit to define join.
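Here is a sketch of that adjunction in Haskell, with bare pair and function types standing in for F and G (the names unitSA, counitSA, and joinState are made up):
-- F x = (s, x),  G x = s -> x
unitSA :: x -> (s -> (s, x))
unitSA x = \s -> (s, x)

counitSA :: (s, s -> x) -> x
counitSA (s, f) = f s

-- G (F x) = s -> (s, x) is the State monad's carrier; return is the unit,
-- and join is G applied to the counit, i.e. postcomposition with counitSA:
joinState :: (s -> (s, s -> (s, x))) -> (s -> (s, x))
joinState m = counitSA . m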
This page does exactly that. I think your main confusion is that the class doesn't really make the type a functor, but it defines a functor from the category of Haskell types into the category of that type.
Following the notation of the link, assuming F is a Haskell Functor, it means that there is a functor from the category of Hask to the category of F.
Roughly speaking, Haskell does its category theory all in just one category, whose objects are Haskell types and whose arrows are functions between these types. It's definitely not a general-purpose language for modelling category theory.
A (mathematical) functor is an operation turning things in one category into things in another, possibly entirely different, category. An endofunctor is then a functor which happens to have the same source and target categories. In Haskell, a functor is an operation turning things in the category of Haskell types into other things also in the category of Haskell types, so it is always an endofunctor.
[If you're following the mathematical literature, technically, the operation '(a->b)->(m a -> m b)' is just the arrow part of the endofunctor m, and 'm' is the object part]
When Haskellers talk about working 'in a monad' they really mean working in the Kleisli category of the monad. The Kleisli category of a monad is a thoroughly confusing beast at first, and normally needs at least two colours of ink to give a good explanation, so take the following attempt for what it is and check out some references (unfortunately Wikipedia is useless here for all but the straight definitions).
Suppose you have a monad 'm' on the category C of Haskell types. Its Kleisli category Kl(m) has the same objects as C, namely Haskell types, but an arrow a ~(f)~> b in Kl(m) is an arrow a -(f)-> mb in C. (I've used a squiggly line in my Kleisli arrows to distinguish the two.) To reiterate: the objects and arrows of Kl(m) are also objects and arrows of C, but the arrows point to different objects in Kl(m) than in C. If this doesn't strike you as odd, read it again more carefully!
Concretely, consider the Maybe monad. Its Kleisli category is just the collection of Haskell types, and its arrows a ~(f)~> b are functions a -(f)-> Maybe b. Or consider the (State s) monad whose arrows a ~(f)~> b are functions a -(f)-> (State s b) == a -(f)-> (s->(s,b)). In any case, you're always writing a squiggly arrow as a shorthand for doing something to the type of the codomain of your functions.
[Note that State is not a monad, because the kind of State is * -> * -> *, so you need to supply one of the type parameters to turn it into a mathematical monad.]
So far so good, hopefully, but suppose you want to compose arrows a ~(f)~> b and b ~(g)~> c. These are really Haskell functions a -(f)-> mb and b -(g)-> mc which you cannot compose because the types don't match. The mathematical solution is to use the 'multiplication' natural transformation u:mm->m of the monad as follows: a ~(f)~> b ~(g)~> c == a -(f)-> mb -(mg)-> mmc -(u_c)-> mc to get an arrow a->mc which is a Kleisli arrow a ~(f;g)~> c as required.
Perhaps a concrete example helps here. In the Maybe monad, you cannot compose functions f : a -> Maybe b and g : b -> Maybe c directly, but by lifting g to
maybe_g :: Maybe b -> Maybe (Maybe c)   -- lowercased: function names can't begin with a capital
maybe_g Nothing = Nothing
maybe_g (Just b) = Just (g b)
and using the 'obvious'
u :: Maybe (Maybe c) -> Maybe c
u Nothing = Nothing
u (Just Nothing) = Nothing
u (Just (Just c)) = Just c
you can form the composition u . maybe_g . f, which is the function a -> Maybe c that you wanted.
In the (State s) monad, it's similar but messier: Given two monadic functions a ~(f)~> b and b ~(g)~> c which are really a -(f)-> (s->(s,b)) and b -(g)-> (s->(s,c)) under the hood, you compose them by lifting g into
state_s_g :: (s->(s,b)) -> (s->(s,(s->(s,c))))
state_s_g p s1 = let (s2, b) = p s1 in (s2, g b)
then you apply the 'multiplication' natural transformation u, which is
u :: (s->(s,(s->(s,c)))) -> (s->(s,c))
u p1 s1 = let (s2, p2) = p1 s1 in p2 s2
which (sort of) plugs the final state of f into the initial state of g.
In Haskell, this turns out to be a bit of an unnatural way to work so instead there's the (>>=) function which basically does the same thing as u but in a way that makes it easier to implement and use. This is important: (>>=) is not the natural transformation 'u'. You can define each in terms of the other, so they're equivalent, but they're not the same thing. The Haskell version of 'u' is written join.
The other thing missing from this definition of Kleisli categories is the identity on each object: a ~(1_a)~> a which is really a -(n_a)-> ma where n is the 'unit' natural transformation. This is written return in Haskell, and doesn't seem to cause as much confusion.
I learned category theory before I came to Haskell, and I too have had difficulty with the mismatch between what mathematicians call a monad and what they look like in Haskell. It's no easier from the other direction!
Not sure I understand what the question was, but yes, you are right: a monad in Haskell is defined as a triple:
m :: * -> * -- this is an endofunctor from Haskell types to Haskell types!
return :: a -> m a
(>>=) :: m a -> (a -> m b) -> m b
but the common definition from category theory is another triple:
m :: * -> *
return :: a -> m a
join :: m (m a) -> m a
It is slightly confusing, but it's not so hard to show that these two definitions are equivalent. To do that, we need to define join in terms of (>>=) (and vice versa).
First step:
join :: m (m a) -> m a
join x = ?
This gives us x :: m (m a). All we can do with something of type m _ is apply (>>=) to it:
(x >>=) :: (m a -> m b) -> m b
Now we need something to use as the second argument of (>>=), and from the type of join we have the constraint (x >>= y) :: m a. So y here must have type y :: m a -> m a, and id fits it very well:
join x = x >>= id
The other way around:
(>>=) :: m a -> (a -> m b) -> m b
(>>=) x y = ?
where x :: m a and y :: a -> m b. To get m b from x and y we need something of type a. Unfortunately, we can't extract an a from m a. But we can substitute inside it (remember, a monad is also a functor):
fmap :: (a -> b) -> m a -> m b
fmap y x :: m (m b)
And this fits perfectly as an argument for join: (>>=) x y = join (fmap y x).
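Putting both directions together as a compilable sketch (Triple, unit, and join' are made-up names, used to avoid clashing with the real Monad class):
-- The category-theory presentation: endofunctor + unit + join.
class Functor m => Triple m where
  unit  :: a -> m a
  join' :: m (m a) -> m a

-- bind recovered from join:
bindT :: Triple m => m a -> (a -> m b) -> m b
bindT x y = join' (fmap y x)

-- join recovered from bind, for any ordinary Monad:
joinM :: Monad m => m (m a) -> m a
joinM x = x >>= id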
The best way to look at monads and computational effects is to start with where Haskell got the notion of monads for computational effects from, and then look at Haskell after you understand that. See this paper in particular: Notions of Computation and Monads, by E. Moggi.
See also Moggi's earlier paper which shows how monads work for the lambda calculus alone: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.2787
The fact that monads capture substitution, among other things (http://blog.sigfpe.com/2009/12/where-do-monads-come-from.html), and substitution is key to the lambda calculus, should give a good clue as to why they have so much expressive power.
While monads originally came from category theory, this doesn't mean that category theory is the only abstract context in which you can view them. A different viewpoint is given by operational semantics. For an introduction, have a look at my Operational Monad Tutorial.
One way to look at IO is to consider it as a strange kind of state monad. Remember that the state monad looks like:
data State s a = State (s -> (s, a))
where the "s" argument is the data type you want to thread through the computation. Also, this version of "State" doesn't have "get" and "put" actions and we don't export the constructor.
Now imagine a type
data RealWorld = RealWorld ......
This has no real definition, but notionally a value of type RealWorld holds the state of the entire universe outside the computer. Of course we can never have a value of type RealWorld, but you can imagine something like:
getChar :: RealWorld -> (RealWorld, Char)
In other words the "getChar" function takes a state of the universe before the keyboard button has been pressed, and returns the key pressed plus the state of the universe after the key has been pressed. Of course the problem is that the previous state of the world is still available to be referenced, which can't happen in reality.
But now we write this:
type IO = State RealWorld
getChar :: IO Char
Notionally, all we have done is wrap the previous version of "getChar" as a state action. But by doing this we can no longer access the "RealWorld" values because they are wrapped up inside the State monad.
So when a Haskell program wants to change a lightbulb it takes hold of the bulb and applies a "rotate" function to the RealWorld value inside IO.
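Here is a runnable toy model of this idea; it is only a simulation (FakeWorld and pendingKeys are made up, and GHC's real IO is not implemented this way):
newtype State s a = State { runState :: s -> (s, a) }

-- Simulate "the universe" as a queue of pending keystrokes.
newtype FakeWorld = FakeWorld { pendingKeys :: String }

type FakeIO = State FakeWorld

getCharF :: FakeIO Char
getCharF = State $ \w -> case pendingKeys w of
  (c:cs) -> (FakeWorld cs, c)
  []     -> (w, '\0')   -- arbitrary choice for empty input in this toy

-- runState getCharF (FakeWorld "hi")  evaluates to  (FakeWorld "i", 'h')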
For me, so far, the explanation that comes closest to tying together monads in category theory and monads in Haskell is that monads form a monoid whose elements have the type a -> m b. I can see that these elements are very close to endofunctors, and so the composition of such functions is related to an imperative sequence of program statements. Also, functions which return IO actions are valid in pure functional code until the inner function is called from outside.
The identity element is a -> m a, which fits in very well, but the multiplication of this monoid is function composition, which should be:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
This is not quite function composition, but close enough (I think to get true function composition we would need a complementary function which turns m b back into b; then we would get function composition if we applied these in pairs?). I'm not quite sure how to get from that to this:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
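For reference, (>=>) and (>>=) are interdefinable; here is a sketch of the standard identities (the function names are made up):
import Control.Monad ((>=>))

-- Kleisli composition from bind:
kleisliFromBind :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisliFromBind f g = \a -> f a >>= g

-- Bind from Kleisli composition, by precomposing with a constant arrow:
bindFromKleisli :: Monad m => m a -> (a -> m b) -> m b
bindFromKleisli m f = (const m >=> f) ()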
I've got a feeling I may have seen an explanation of this in all the stuff that I read, without understanding its significance the first time through, so I will do some re-reading to try to (re)find an explanation of this.
The other thing I would like to do is link together all the different category-theory explanations: endofunctor plus two natural transformations, the Kleisli category, a monoid whose objects are monoids, and so on. For me, the thing that seems to link all these explanations is that they are two-level. That is, normally we treat category objects as black boxes, where we infer their properties from their outside interactions; but here there seems to be a need to go one level inside the objects to see what's going on. We can explain monads without this, but only if we accept apparently arbitrary constructions.
Martin
See this question: is chaining operations the only thing that the monad class solves?
In it, I explain my view that we must differentiate between the Monad class and the individual types that solve individual problems. The Monad class, by itself, only solves the important problem of "chaining operations with choice" and makes this solution available to types that are instances of it (by means of "inheritance").
On the other hand, if a given type that solves a given problem also faces the problem of "chaining operations with choice", then it should be made an instance of (inherit from) the Monad class.
The fact is that problems don't get solved merely by being a Monad. It would be like saying that "wheels" solve many problems, when actually "wheels" only solve one problem, and things with wheels solve many different problems.
