Illegal instance declaration for typeclass TF - haskell

I am having a problem declaring an instance of the following typeclass. I tried to follow the advice in the error message from the ghci compiler but still cannot get the code to compile. Any help would be appreciated.
class TF p where
  valid  :: p -> Bool
  lequiv :: p -> p -> Bool

instance TF Bool where
  valid      = id
  lequiv f g = f == g

instance TF p => TF (Bool -> p) where
  valid f    = valid (f True) && valid (f False)
  lequiv f g = (f True) `lequiv` (g True)
            && (f False) `lequiv` (g False)
The error I am getting is:
Illegal instance declaration for ‘TF (Bool -> p)’
(All instance types must be of the form (T a1 ... an)
where a1 ... an are *distinct type variables*,
and each type variable appears at most once in the instance head.
Use FlexibleInstances if you want to disable this.)
In the instance declaration for ‘TF (Bool -> p)’

The problem here is that you have a type constructor (->) applied to things that aren't type variables. There's a lot of ways you can deal with that:
FlexibleInstances. This relaxes the restriction quoted in the error message (a rule made in the early days of Haskell, when it wasn't yet clear how difficult implementing type classes would be). Relaxing it is not very controversial at all. On the other hand, it doesn't play that well with type inference: your instance will only be chosen when we know that we're supplying something of the shape Bool -> p -- and in particular something that's polymorphic in the first argument will not match that shape. So valid id will not typecheck without further annotations.
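For concreteness, a minimal sketch of this option -- the pragma is the only change to the instance from the question, and the annotated call at the end is my own illustration:

{-# LANGUAGE FlexibleInstances #-}

instance TF p => TF (Bool -> p) where
  valid f    = valid (f True) && valid (f False)
  lequiv f g = (f True) `lequiv` (g True)
            && (f False) `lequiv` (g False)

-- valid id                    -- rejected: id is not visibly of shape Bool -> p
-- valid (id :: Bool -> Bool)  -- accepted once annotated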
TypeFamilies. This gives us (among other things) access to a constraint which demands that two particular types be equal. So with this extension, you could write
instance (bool ~ Bool, TF p) => TF (bool -> p) where ...
Now this matches whenever the thing we're supplying has shape bool -> p -- that is, any function at all -- and only after we have selected this instance does it check (in fact, enforce) that the argument type is Bool. This means valid id will typecheck; on the other hand, it also means you cannot declare instances for any other argument types.
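A sketch of how that instance might be completed, reusing the method bodies from the question for the elided part (my guess at the "..."):

{-# LANGUAGE TypeFamilies #-}   -- GADTs would also provide the ~ constraint

instance (bool ~ Bool, TF p) => TF (bool -> p) where
  valid f    = valid (f True) && valid (f False)
  lequiv f g = (f True) `lequiv` (g True)
            && (f False) `lequiv` (g False)

-- valid id   -- now accepted: the instance matches any function type, and only
--            -- afterwards is the argument type forced to be Bool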
Add a typeclass. In fact, the only thing you really care about is that you can list all the inhabitants of Bool in not too much time. So you could instead declare a typeclass, say, Finite, which you will instantiate at such types, and use it as the constraint on the argument type. Thus:
instance (Finite arg, TF p) => TF (arg -> p) where
  valid f    = all (valid . f) universe
  lequiv f g = all (\x -> f x `lequiv` g x) universe
               -- could also spell that lambda "liftA2 lequiv f g"
Then you would want to provide a Finite instance for Bool (which, luckily, is already available for you in the universe package). This is nice because it combines the strengths of the previous two approaches: this instance will be chosen as soon as we know the argument is a function, and you can declare instances for many argument types by adding Finite instances for them.
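A self-contained sketch of this third option, with a home-grown Finite class standing in for the one the universe package provides (the package spells it slightly differently, but the idea is the same):

class Finite a where
  universe :: [a]

instance Finite Bool where
  universe = [False, True]

instance (Finite arg, TF p) => TF (arg -> p) where
  valid f    = all (valid . f) universe
  lequiv f g = all (\x -> f x `lequiv` g x) universe

-- valid (&&)          == False
-- (||) `lequiv` (||)  == True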

Related

Instance signatures: constraints on methods

With InstanceSignatures, I can give a signature for a method within an instance decl.
The type signature in the instance declaration must be more polymorphic than (or the same as) the one in the class declaration, instantiated with the instance type.
I can see from this answer that the type part of the sig can't be more specific; but why couldn't you add extra constraints? After all you can put constraints on the instance decl that will make the instance more specific than "the class declaration, instantiated with the instance type".
Addit: (In response to the first couple of comments.) To explain "you can put constraints on the instance decl ... instance more specific": the OVERLAPPABLE instance head here alleges it provides addNat for all types a, b, c. But it doesn't: it only provides if a is of the form SNat a', c is of the form SNat c', etc. I'm asking why I can't similarly restrict the types at the method level?
For example [adapted from Hughes 1999] (more realistically this could be a BST/Rose tree, etc):
data AscList a = AscList [a]      -- ascending list

instance Functor AscList where    -- constructor class
  fmap f (AscList xs) = AscList $ sort $ fmap f xs
  -- :: Ord b => (a -> b) -> AscList a -> AscList b   -- inferred
Without an instance signature GHC complains no instance for (Ord b). With the signature GHC complains the instantiated sig from the class is more polymorphic than the one given.
This (excellent) answer explains the machinery in the instance dictionary. I can see the entry in the dictionary for the method has a type set from the number of constraints for the method (if any) in the class decl. There's no room to put an extra dictionary/parameter for the method.
That seems to be merely that the implementation didn't foresee a need. Is there a deeper/more theory-based reason against?
(BTW I'm not ever-so convinced by Hughes' approach: that would want both Ord a => WFT(AscList a) and Ord b => WFT(AscList b). But there's no need to require the incoming (AscList a) is ascending. Other constructor classes for AscList perhaps don't need an Ord constraint anywhere.)
If you allow adding constraints to instance methods that aren't already in the class method, a polymorphic function using type classes might no longer typecheck when you specialize it.
For example, consider your AscList instance above, and the following usage of Functor:
data T f = T (f (IO ())) -- f applied to a non-Ord
example :: Functor f => T f -> T f
example (T t) = T (fmap (\x -> x >> print 33) t)
The type of example says you can instantiate f with any functor, such as f = AscList.
But that makes no sense because it would try to sort an AscList (IO ()).
The only way to tell that something goes wrong when we specialize example here is to read its definition, to find whether uses of fmap are still valid, and that goes against modularity.
The type class constraints in a type signature do not and should not record how their methods are used. Conversely, that means that instances do not get to add constraints to their method implementations.
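As an aside (my sketch, not part of the answer): under these rules the usual way out is to keep AscList out of Functor altogether and export its own, separately constrained map:

import Data.List (sort)

data AscList a = AscList [a]      -- ascending list, as in the question

mapAsc :: Ord b => (a -> b) -> AscList a -> AscList b
mapAsc f (AscList xs) = AscList (sort (map f xs))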

Confused about GADTs and propagating constraints

There's plenty of Q&A about GADTs being better than DatatypeContexts, because GADTs automagically make constraints available in the right places. For example here, here, here. But sometimes it seems I still need an explicit constraint. What's going on? Example adapted from this answer:
{-# LANGUAGE GADTs #-}

import Data.Maybe    -- fromJust

data GADTBag a where
  MkGADTBag :: Eq a => { unGADTBag :: [a] } -> GADTBag a

baz (MkGADTBag x) (Just y) = x == y

baz2 x y = unGADTBag x == fromJust y

-- unGADTBag :: GADTBag a -> [a]                         -- inferred, no Eq a
-- baz       :: GADTBag a -> Maybe [a] -> Bool           -- inferred, no Eq a
-- baz2      :: Eq a => GADTBag a -> Maybe [a] -> Bool   -- inferred, with Eq a
Why can't the type for unGADTBag tell us Eq a?
baz and baz2 are morally equivalent, yet have different types. Presumably because unGADTBag has no Eq a, then the constraint can't propagate into any code using unGADTBag.
But with baz2 there's an Eq a constraint hiding inside the GADTBag a. Presumably baz2's Eq a will want a duplicate of the dictionary already there(?)
Is it that potentially a GADT might have many data constructors, each with different (or no) constraints? That's not the case here, or with typical examples for constrained data structures like Bags, Sets, Ordered Lists.
The equivalent for a GADTBag datatype using DatatypeContexts infers baz's type the same as baz2's.
Bonus question: why can't I get an ordinary ... deriving (Eq) for GADTBag? I can get one with StandaloneDeriving, but it's blimmin obvious, why can't GHC just do it for me?
deriving instance (Eq a) => Eq (GADTBag a)
Is the problem again that there might be other data constructors?
(Code exercised at GHC 8.6.5, if that's relevant.)
Addit: in light of #chi's and #leftroundabout's answers -- neither of which I find convincing. All of these give *** Exception: Prelude.undefined:
*DTContexts> unGADTBag undefined
*DTContexts> unGADTBag $ MkGADTBag undefined
*DTContexts> unGADTBag $ MkGADTBag (undefined :: String)
*DTContexts> unGADTBag $ MkGADTBag (undefined :: [a])
*DTContexts> baz undefined (Just "hello")
*DTContexts> baz (MkGADTBag undefined) (Just "hello")
*DTContexts> baz (MkGADTBag (undefined :: String)) (Just "hello")
*DTContexts> baz2 undefined (Just "hello")
*DTContexts> baz2 (MkGADTBag undefined) (Just "hello")
*DTContexts> baz2 (MkGADTBag (undefined :: String)) (Just "hello")
Whereas these two give the same type error at compile time, No instance for (Eq (Int -> Int)) arising from a use of `MkGADTBag` / `baz2` respectively [Edit: my initial Addit gave the wrong expression and wrong error message]:
*DTContexts> baz (MkGADTBag (undefined :: [Int -> Int])) (Just [(+ 1)])
*DTContexts> baz2 (MkGADTBag (undefined :: [Int -> Int])) (Just [(+ 1)])
So baz, baz2 are morally equivalent not just in that they return the same result for the same well-defined arguments; but also in that they exhibit the same behaviour for the same ill-defined arguments. Or they differ only in where the absence of an Eq instance gets reported?
#leftroundabout Before you've actually deconstructed the x value, there's no way of knowing that the MkGADTBag constructor indeed applies.
Yes there is: field label unGADTBag is defined if and only if there's a pattern match on MkGADTBag. (It would maybe be different if there were other constructors for the type -- especially if those also had a label unGADTBag.) Again, being undefined/lazy evaluation doesn't postpone the type-inference.
To be clear, by "[not] convincing" I mean: I can see the behaviour and the inferred types I'm getting. I don't see that laziness or potential undefinedness gets in the way of type inference. How could I expose a difference between baz, baz2 that would explain why they have different types?
Function calls never bring type class constraints in scope, only (strict) pattern matching does.
The comparison
unGADTBag x == fromJust y
is essentially a function call of the form
foo (unGADTBag x) (fromJust y)
where foo requires Eq a. That would morally be provided by unGADTBag x, but that expression is not yet evaluated! Because of laziness, unGADTBag x will be evaluated only when (and if) foo demands its first argument.
So, in order to call foo in this example we need its argument to be evaluated in advance. While Haskell could work like this, it would be a rather surprising semantics, where arguments are evaluated or not depending on whether they provide a type class constraint which is needed. Imagine more general cases like
foo (if cond then unGADTBag x else unGADTBag z) (fromJust y)
What should be evaluated here? unGADTBag x? unGADTBag z? Both? cond as well? It's hard to tell.
Because of these issues, Haskell was designed so that we need to manually require the evaluation of a GADT value like x using pattern matching.
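To make that concrete (my example, not chi's): forcing the bag with a pattern match before the comparison is exactly what recovers the unconstrained type.

baz2m :: GADTBag a -> Maybe [a] -> Bool      -- no Eq a needed
baz2m x y = case x of MkGADTBag xs -> xs == fromJust y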
Why can't the type for unGADTBag tell us Eq a?
Before you've actually deconstructed the x value, there's no way of knowing that the MkGADTBag constructor indeed applies. Sure, if it doesn't then you have other problems (bottom), but those might conceivably not surface. Consider
ignore :: a -> b -> b
ignore _ = id
baz2' :: GADTBag a -> Maybe [a] -> Bool
baz2' x y = ignore (unGADTBag x) (y==y)
Note that I could now invoke the function with, say, undefined :: GADTBag (Int->Int). Shouldn't be a problem since the undefined is ignored, right★? Problem is, despite Int->Int not having an Eq instance, I was able to write y==y, which y :: Maybe [Int->Int] can't in fact support.
So, we can't have that only mentioning unGADTBag is enough to spew the Eq a constraint into its surrounding scope. Instead, we must clearly delimit the scope of that constraint to where we've confirmed that the MkGADTBag constructor does apply, and a pattern match accomplishes that.
★If you're annoyed that my argument relies on undefined, note that the same issue arises also when there are multiple constructors which would bring different constraints into scope.
An alternative to a pattern-match that does work is this:
{-# LANGUAGE RankNTypes #-}

withGADTBag :: GADTBag a -> (Eq a => [a] -> b) -> b
withGADTBag (MkGADTBag x) f = f x

baz3 :: GADTBag a -> Maybe [a] -> Bool
baz3 x y = withGADTBag x (== fromJust y)
Response to edits
All of these give *** Exception: Prelude.undefined:
Yes of course they do, because you actually evaluate x == y in your function. So the function can only possibly yield non-⟂ if the inputs have a NF. But that's by no means the case for all functions.
Whereas these two give the same type error at compile time
Of course they do, because you're trying to wrap a value of non-Eq type in the MkGADTBag constructor, which explicitly requires that constraint (and allows you to explicitly unwrap it again!), whereas the GADTBag type doesn't require that constraint. (Which is kind of the whole point about this sort of encapsulation!)
"Before you've actually deconstructed the x value, there's no way of knowing that the `MkGADTBag` constructor indeed applies." Yes there is: field label `unGADTBag` is defined if and only if there's a pattern match on `MkGADTBag`.
Arguably, that's the way field labels should work, but they don't, in Haskell. A field label is nothing but a function from the data type to the field type, and a nontotal function at that if there are multiple constructors. Yeah, Haskell records are one of the worst-designed features of the language. I personally tend to use field labels only for big, single-constructor, plain-old-data types (and even then I prefer using not the field labels directly but lenses derived from them).
Anyway though, I don't see how “field label is defined iff there's a pattern match” could even be implemented in a way that would allow your code to work the way you think it should. The compiler would have to insert the step of confirming that the constructor applies (and extracting its GADT-encapsulated constraint) somewhere. But where? In your example it's reasonably obvious, but in general x could inhabit a vast scope with lots of decision branches and you really don't want it to get evaluated in a branch where the constraint isn't actually needed.
Also keep in mind that when we argue with undefined/⟂ it's not just about actually diverging computations, more typically you're worried about computations that would simply take a long time (just, Haskell doesn't actually have a notion of “taking a long time”).
The way to think about this is OutsideIn(X) ... with local assumptions. It's not about undefinedness or lazy evaluation. A pattern match on a GADT constructor is outside, the RHS of the equation is inside. Constraints from the constructor are made available only locally -- that is only inside.
baz (MkGADTBag x) (Just y) = x == y
Has an explicit data constructor MkGADTBag outside, supplying an Eq a. The x == y raises a wanted Eq a locally/inside, which gets discharged from the pattern match. OTOH
baz2 x y = unGADTBag x == fromJust y
Has no explicit data constructor outside, so no context is supplied. unGADTBag has an Eq a, but that is deeper inside the l.h. argument to ==; type inference doesn't go looking deeper inside. It just doesn't. Then in the effective definition for unGADTBag
unGADTBag (MkGADTBag x) = x
there is an Eq a made available from the outside, but it cannot escape from the RHS into the type environment at a usage site for unGADTBag. It just doesn't. Sad!
The best I can see for an explanation is towards the end of the OutsideIn paper, Section 9.7 Is the emphasis on principal types well-justified? (A rhetorical question, but my answer would be: of course we must emphasise principal types; type inference could get better principaled under some circumstances.) That last section considers this example
data R a where
  RInt  :: Int  -> R Int
  RBool :: Bool -> R Bool
  RChar :: Char -> R Char

flop1 (RInt x) = x
there is a third type that is arguably more desirable [for flop1], and that type is R Int -> Int.
flop1's definition is of the same form as unGADTBag, with a constrained to be Int.
flop2 (RInt x) = x
flop2 (RBool x) = x
Unfortunately, ordinary polymorphic types are too weak to express this restriction [that a must be only Int or Bool] and we can only get ∀a. R a -> a for flop2, which does not rule out the application of flop2 to values of type R Char.
So at that point the paper seems to give up trying to refine better principal types:
In conclusion, giving up on some natural principal types in favor of more specialized types that eliminate more pattern match errors at runtime is appealing but does not quite work unless we consider a more expressive syntax of types. Furthermore it is far from obvious how to specify these typings in a high-level declarative specification.
"is appealing". It just doesn't.
I can see a general solution is difficult/impossible. But for use-cases of constrained Bags/Lists/Sets, the specification is:
All data constructors have the same constraint(s) on the datatype's parameters.
All constructors yield the same type (... -> T a or ... -> T [a] or ... -> T Int, etc).
Datatypes with a single constructor satisfy that trivially.
To satisfy the first bullet, for a Set type using a binary balanced tree, there'd be a non-obvious definition for the Nil constructor:
data OrdSet a where
  SNode :: Ord a => OrdSet a -> a -> OrdSet a -> OrdSet a
  SNil  :: Ord a => OrdSet a     -- seemingly redundant Ord constraint
Even so, repeating the constraint on every node and every terminal seems wasteful: it's the same constraint all the way down (which is unlike GADTs for EDSL abstract syntax trees); presumably each node carries a copy of exactly the same dictionary.
The best way to ensure same constraint(s) on every constructor could just be prefixing the constraint to the datatype:
data Ord a => OrdSet a where ...
And perhaps the constraint could go 'OutsideOut' to the environment that's accessing the tree.
Another possible approach is to use a PatternSynonym with an explicit signature giving a Required constraint.
pattern EqGADTBag :: Eq a => [a] -> GADTBag a -- that Eq a is the *Required*
pattern EqGADTBag{ unEqGADTBag } = MkGADTBag unEqGADTBag -- without sig infers Eq a only as *Provided*
That is, without that explicit sig:
*> :i EqGADTBag
pattern EqGADTBag :: () => Eq a => [a] -> GADTBag a
The () => Eq a => ... shows Eq a is Provided, arising from the GADT constructor.
Now we get both inferred baz, baz2 :: Eq a => GADTBag a -> Maybe [a] -> Bool:
baz (EqGADTBag x) (Just y) = x == y
baz2 x y = unEqGADTBag x == fromJust y
As a curiosity: it's possible to give those equations for baz, baz2 as well as those in the O.P. using the names from the GADT decl. GHC warns of overlapping patterns [correctly]; and does infer the constrained sig for baz.
I wonder if there's a design pattern here? Don't put constraints on the data constructor -- that is, don't make it a GADT. Instead declare a 'shadow' PatternSynonym with the Required/Provided constraints.
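A sketch of that suggested pattern (my code, not from the thread): the underlying datatype carries no constraint at all, and only the shadow synonym Requires Eq.

{-# LANGUAGE PatternSynonyms #-}

data Bag a = MkBag { unBag :: [a] }         -- plain datatype, no Eq anywhere

pattern EqBag :: Eq a => [a] -> Bag a       -- Eq a is Required
pattern EqBag{ unEqBag } = MkBag unEqBag

bazP (EqBag x) (Just y) = x == y
-- inferred: bazP :: Eq a => Bag a -> Maybe [a] -> Bool

Matching on EqBag Requires Eq a at the use site, so bazP picks up exactly the constraint the GADT version had to smuggle in through its constructor.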
You can capture the constraint in a fold function: the (Eq a => ..) argument type says you can assume Eq a, but only within the function next (which is only applied after a pattern match). If you instantiate next as (fromJust maybe ==), it uses this constraint to witness the equality test.
--                local constraint
--                       |
--              vvvvvvvvvvvvvvvvvv
foldGADTBag :: (Eq a => [a] -> res) -> GADTBag a -> res   -- needs RankNTypes
foldGADTBag next (MkGADTBag as) = next as

baz3 :: GADTBag a -> Maybe [a] -> Bool
baz3 gadtBag maybe = foldGADTBag (fromJust maybe ==) gadtBag
-- (needs StandaloneKindSignatures and RankNTypes, plus: import Data.Kind (Type))
type Ty :: Type -> Type
data Ty a where
  TyInt  :: Int -> Ty Int
  TyUnit :: Ty ()

--         locally assume Int       locally assume unit
--                  |                       |
--         vvvvvvvvvvvvvvvvvvv       vvvvvvvvvvvvv
foldTy :: (a ~ Int => a -> res) -> (a ~ () => res) -> (Ty a -> res)
foldTy int unit (TyInt i) = int i
foldTy int unit TyUnit    = unit

eval :: Ty a -> a
eval = foldTy id ()

How do you apply function constraints in instance methods in Haskell?

I'm learning how to use typeclasses in Haskell.
Consider the following implementation of a typeclass T with a type constrained class function f.
class T t where
  f :: (Eq u) => t -> u

data T_Impl = T_Impl_Bool Bool | T_Impl_Int Int | T_Impl_Float Float

instance T T_Impl where
  f (T_Impl_Bool x)  = x
  f (T_Impl_Int x)   = x
  f (T_Impl_Float x) = x
When I load this into GHCI 7.10.2, I get the following error:
Couldn't match expected type ‘u’ with actual type ‘Float’
  ‘u’ is a rigid type variable bound by
      the type signature for f :: Eq u => T_Impl -> u
      at generics.hs:6:5
Relevant bindings include
  f :: T_Impl -> u (bound at generics.hs:6:5)
In the expression: x
In an equation for ‘f’: f (T_Impl_Float x) = x
What am I doing/understanding wrong? It seems reasonable to me that one would want to specialize a typeclass in an instance by providing an accompanying data constructor and function implementation. The part
Couldn't match expected type 'u' with actual type 'Float'
is especially confusing. Why does u not match Float if u only has the constraint that it must qualify as an Eq type (Floats do that afaik)?
The signature
f :: (Eq u) => t -> u
means that the caller can pick t and u as wanted, with the only burden of ensuring that u is of class Eq (and t of class T -- in class methods there's an implicit T t constraint).
It does not mean that the implementation can choose any u.
So, the caller can use f in any of these ways: (with t in class T)
f :: t -> Bool
f :: t -> Char
f :: t -> Int
...
The compiler is complaining that your implementation is not general enough to cover all these cases.
Couldn't match expected type ‘u’ with actual type ‘Float’
means "You gave me a Float, but you must provide a value of the general type u (where u will be chosen by the caller)"
Chi has already pointed out why your code doesn't compile. But it's not even that typeclasses are the problem; indeed, your example has only one instance, so it might just as well be a normal function rather than a class.
Fundamentally, the problem is that you're trying to do something like
foobar :: Show x => Either Int Bool -> x
foobar (Left x) = x
foobar (Right x) = x
This won't work. It tries to make foobar return a different type depending on the value you feed it at run-time. But in Haskell, all types must be 100% determined at compile-time. So this cannot work.
There are several things you can do, however.
First of all, you can do this:
foo :: Either Int Bool -> String
foo (Left x) = show x
foo (Right x) = show x
In other words, rather than return something showable, actually show it. That means the result type is always String. It means that which version of show gets called will vary at run-time, but that's fine. Code paths can vary at run-time, it's types which cannot.
Another thing you can do is this:
toInt :: Either Int Bool -> Maybe Int
toInt (Left x) = Just x
toInt (Right x) = Nothing
toBool :: Either Int Bool -> Maybe Bool
toBool (Left x) = Nothing
toBool (Right x) = Just x
Again, that works perfectly fine.
There are other things you can do; without knowing why you want this, it's difficult to suggest others.
As a side note, you want to stop thinking about this like it's object oriented programming. It isn't. It requires a new way of thinking. In particular, don't reach for a typeclass unless you really need one. (I realise this particular example may just be a learning exercise to learn about typeclasses of course...)
It's possible to do this:
class Eq u => T t u | t -> u where
  f :: t -> u

You need FlexibleContexts + FunctionalDependencies, and MultiParamTypeClasses + FlexibleInstances at the call site. Or eliminate the class and use data types instead, as Gabriel shows here.
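A fuller sketch of that suggestion (the Box names are mine, not from the answer). Note that the functional dependency t -> u means each t determines a single result type, so the original three-constructor T_Impl has to be split up:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Eq u => T t u | t -> u where
  f :: t -> u

newtype BoolBox = BoolBox Bool
newtype IntBox  = IntBox  Int

instance T BoolBox Bool where
  f (BoolBox x) = x

instance T IntBox Int where
  f (IntBox x) = x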

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element c
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of only using complete types (rather than unapplied type constructors) in type class definitions, is there any reason to use the original style definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, not just a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing  = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing  = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing the traditional Functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: * -> *), with the extra property (-> Constraint) that each morphism (a -> b), for all (a :: *) (b :: *), is lifted to a morphism between the corresponding mapped objects (-> f a -> f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (Tough f (a, a))

instance (Applicative f) => Semigroup (Tough f a) where
  Doable x <> Doable y = Tough . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ xs <> ys
  -- The following actually violates the semigroup associativity law.
  -- Hardly matters here I suppose...
  xs <> Doable y = xs <> Tough (Doable $ fmap twice y)
  Doable x <> ys = Tough (Doable $ fmap twice x) <> ys

twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic state.
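For contrast, here is the ordinary (* -> *)-style Functor instance (my sketch, not part of the answer) -- it is unproblematic precisely because it quantifies over every contained type at once, whereas a MultiFunctor/MonoFunctor-style instance would have to name each of a, (a, a), ((a, a), (a, a)), ... separately:

instance Functor f => Functor (Tough f) where
  fmap g (Doable x) = Doable (fmap g x)
  fmap g (Tough  t) = Tough (fmap (\(x, y) -> (g x, g y)) t)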
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import qualified Data.Set as Set

instance MN2 Set.Set a where
  -- does not require Ord
  ret2 = Set.singleton

instance Prelude.Ord b => MN3 Set.Set a b where
  -- Set.union requires Ord
  bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the monad class requires the monad to be usable at all types.
Also see: parameterized (indexed) monad.

Does exporting type constructors make a difference?

Let's say I have an internal data type, T a, that is used in the signature of exported functions:
module A (f, g) where

import System.Random (randomIO)

newtype T a = T { unT :: (Int, a) }
  deriving (Functor, Show, Read)    -- for internal use; Functor needs DeriveFunctor

f :: a -> IO (T a)
f a = fmap (\i -> T (i, a)) randomIO

g :: T a -> a
g = snd . unT
What is the effect of not exporting the type constructor T? Does it prevent consumers from meddling with values of type T a? In other words, is there a difference between the export list (f, g) and (f, g, T()) here?
Prevented
The first thing a consumer will see is that the type doesn't appear in Haddock documentation. In the documentation for f and g, the type T will not be hyperlinked like an exported type. This may prevent a casual reader from discovering T's class instances.
More importantly, a consumer cannot do anything with T at the type level. Anything that requires writing a type will be impossible. For instance, a consumer cannot write new class instances involving T, or include T in a type family. (I don't think there's a way around this...)
At the value level, however, the main limitation is that a consumer cannot write a type annotation including T:
> :t (f . read) :: Read b => String -> IO (A.T b)
<interactive>:1:39: Not in scope: type constructor or class `A.T'
Not prevented
The restriction on type signatures is not as significant a limitation as it appears. The compiler can still infer such a type:
> :t f . read
f . read :: Read b => String -> IO (A.T b)
Any value expression within the inferrable subset of Haskell may therefore be expressed regardless of the availability of the type constructor T. If, like me, you're addicted to ScopedTypeVariables and extensive annotations, you may be a little surprised by the definition of unT' below.
Furthermore, because typeclass instances have global scope, a consumer can use any available class functions without additional limitation. Depending on the classes involved, this may allow significant manipulation of values of the unexposed type. With classes like Functor, a consumer can also freely manipulate type parameters, because there's an available function of type T a -> T b.
In the example of T, deriving Show of course exposes the "internal" Int, and gives a consumer enough information to hackishly implement unT:
-- :: (Show a, Read a) => T a -> (Int, a)
unT' = (read . strip . show') `asTypeOf` (mkPair . g)
  where
    strip = reverse . drop 1 . reverse . drop 9
    -- :: T a -> String
    show' = show `asTypeOf` (mkString . g)
    mkPair :: t -> (Int, t)
    mkPair = undefined
    mkString :: t -> String
    mkString = undefined
> :t unT'
unT' :: (Show b, Read b) => A.T b -> (Int, b)
> x <- f "x"
> unT' x
(-29353, "x")
Implementing mkT' with the Read instance is left as an exercise.
Deriving something like Generic will completely explode any idea of containment, but you'd probably expect that.
Prevented?
In the corners of Haskell where type signatures are necessary or where asTypeOf-style tricks don't work, I guess not exporting the type constructor could actually prevent a consumer from doing something they could with the export list (f, g, T()).
Recommendation
Export all type constructors that are used in the type of any value you export. Here, go ahead and include T() in your export list. Leaving it out doesn't accomplish anything other than muddying the documentation. If you want to expose a purely abstract immutable type, use a newtype with a hidden constructor and no class instances.
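A one-line sketch of that advice for the module above (only the header changes):

module A (f, g, T()) where   -- the type T is exported; its data constructor stays hidden

For a genuinely abstract type you would also drop the derived Show/Read/Functor instances, since, as shown above, they leak the representation.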
