How to apply wildcard in constraint in Haskell?

It sounds trivial but I can't find out what I'm supposed to do.
The following is my type definition:
data CDeq q a = Shallow (q a)
              | Deep { hq   :: q a
                     , susp :: CDeq q (q a)
                     , tq   :: q a }
and I want it to have an instance of Show.
Since GHC doesn't allow deriving here, I just tried to write one myself:
instance (Show a, Show ????) => Show (CDeq q a) where
....
but I got stuck.
I don't know how to express in Haskell that for all types v, (q v) can be shown.
I can't simply do the following:
instance (Show a, Show (q a)) => Show (CDeq q a) where
....
since to show CDeq q (q a), Show (q (q a)) is required; then Show (q (q (q a))) is required, and so on.
So I am wondering: is there a syntax with which I can express the meaning I stated above?
I once thought forall could be the solution to this, but it doesn't work:
instance (Show a, forall v. Show (q v)) => Show (CDeq q a)

There's a class Show1 in Prelude.Extras that represents "for all types v, (q v) can be shown".
class Show1 f where
  showsPrec1 :: Show a => Int -> f a -> ShowS
  default showsPrec1 :: (Show (f a), Show a) => Int -> f a -> ShowS
  showsPrec1 = showsPrec
  ...
You can use this to write a show instance for CDeq q a.
instance (Show a, Show1 q) => Show (CDeq q a) where
....
Where you would use show or showsPrec on a q x you'll instead use show1 or showsPrec1.
If you use these, you should also provide instances for CDeq q.
instance (Show1 q) => Show1 (CDeq q) where
  showsPrec1 = showsPrec
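For a concrete queue type, the default method means an ordinary Show instance is usually enough to get Show1 for free. A minimal sketch, assuming a hypothetical Queue type standing in for q (not something from the question):

import Prelude.Extras

newtype Queue a = Queue [a] deriving Show

-- The default showsPrec1 = showsPrec applies, because Show (Queue a)
-- exists whenever Show a does.
instance Show1 Queue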

@Cirdec's already-accepted answer is a great pragmatic answer for this use case. I wanted to write a bit about more general techniques (especially the Data.Constraint.Forall mentioned in a comment, since that almost works for this use case but doesn't quite, and there's another thing from constraints that does work), but I also wanted to try to explain a bit about why this isn't directly possible in the first place, and what Show1, Forall, and Lifting each do to work around that (they're each a different trade-off). So it got a little long, apologies.
By standard Haskell mechanisms you can't have such a "wildcard" constraint. The reason is that Show makes a constraint from a type, with potentially a different instance (with different method definitions) for each type you could apply it to.
When your code requires a Show (CDeq q a) instance and finds instance (Show a, Show ????) => Show (CDeq q a), that means it now also needs to find an instance for Show a and Show ????.
Show a is easy; either a has already been chosen as some concrete type like Int, and it can use the Show instance for that type (or error if there isn't one in scope). Or your code was in a function that was polymorphic in a; in that case there must have been a Show a constraint on the function you wrote, so the compiler will just rely on the caller having chosen some particular a and passed in the Show a instance definition.
But the wildcard constraint Show ???? is different. We're not talking about a concrete type here, so that path to resolution isn't going to work. And we're not even talking about a polymorphic constraint in the sense that there's a type variable the caller would choose (in that case we could punt the problem of choosing a single instance dictionary to the caller).
What you need is to be able to show q a, and q (q a), and q (q (q a)), etc. But each of those could have its own instance! The types are gone at runtime, so the compiler can't even attempt to round up all of those instances (or require the caller to pass in an unbounded number of instances) and emit code that switches between which show it calls. It needs to emit code that only calls one version of show: either a specific one from a specific instance it's been able to choose, or one that is passed in by the caller of the function.
One way to workaround this is with an alternative type class like Show1. Here the "unit of overloading" is on the * -> * type constructor. It's impossible to make a Show1 instance for types like Int (they're not the right shape), but a single Show1 q instance is applicable to all types of the form q a, so you don't need an unbounded number of instances to support q a, q (q a), q (q (q a)), etc anymore.
But there are also Show instances that are polymorphic in some type variables. For example, the list instance is Show a => Show [a]; knowing that this instance exists (and that there aren't any overlapping instances), we know we'll be using the same instance for [a], [[a]], [[[a]]], etc. What we'd like is to somehow write a constraint that requires a polymorphic instance like that.
There's no direct way to say we want a polymorphic instance - the constraint language only allows us to ask for instances of particular types, or of particular types that the caller chooses. But the Data.Constraint.Forall module in the constraints package (https://hackage.haskell.org/package/constraints), which @Cirdec suggested in a comment, uses clever tricks internally to provide a few ways of doing this.[1] Here's an example of how that would look:
{-# LANGUAGE FlexibleContexts
, ScopedTypeVariables
, TypeApplications
, TypeOperators
#-}
module Foo where
import Data.Constraint ( (:-), (\\) )
import Data.Constraint.Forall (ForallF, instF)
data Nested q a = Stop | Deeper a (Nested q (q a))

data None a = None

instance Show (None a) where
  show None = "None"

instance (Show a, ForallF Show q) => Show (Nested q a) where
  show Stop         = "Stop"
  show (Deeper a r) = show a ++ " " ++ show r
                        \\ (instF @Show @q @a)
The constraint ForallF Show q represents forall a. Show (q a).[2] But it's not that constraint directly, so we can't just derive Show using this; we need to write an instance manually so we can do some massaging.
A ForallF constraint gives us access to instF, which has type forall p f a. ForallF p f :- p (f a). The :- is a type from the constraints package; values of type c :- d represent a proof that when the constraint c holds, the constraint d also holds (or, in terms of instance dictionaries: it contains a dictionary for d that's parameterised on a dictionary for c). So ForallF p f :- p (f a) is a proof that when we have ForallF p f we can get p (f a). Type application syntax is a less verbose way of pinning down the types at which we're using instF; we want the left side of :- to tie back to the ForallF Show q that we know we have from the instance constraint. That means the right-hand side will give us the Show (q a) we needed! The \\ operator just takes an expression on the left and a c :- d on the right, and basically connects the c and d dictionaries for us; the expression will be evaluated with access to a dictionary for d, but the overall expression only needs c.
Here's an example of use:
λ Deeper 'a' (Deeper None (Deeper None Stop))
'a' None None Stop
it :: Nested None Char
Hurrah! But why did I use None? What happens when we try it with nested lists?
λ :t Deeper [] (Deeper [] Stop)
Deeper [] (Deeper [] Stop) :: Nested [] [t]
λ Deeper [] (Deeper [] Stop)
<interactive>:65:1: error:
• No instance for (Show
(Data.Constraint.Forall.Skolem
(Data.Constraint.Forall.ComposeC Show [])))
arising from a use of ‘print’
• In a stmt of an interactive GHCi command: print it
Drat. What went wrong? Well, our polymorphic Show instance for lists is actually Show a => Show [a]. The instance head is polymorphic, so it applies to all types of the form [a]. But it also needs the extra constraint that Show a holds, so it's not truly polymorphic. Basically what happens is that the internal unexported thing in Data.Constraint.Forall doesn't have an instance for Show (it can't have any instances, for the technique to work), so we get the error above. And that's actually a good thing; dictionaries for [a] contain a nested dictionary for a, so the trick of getting an instance we know is polymorphic and then unsafeCoercing it to the right type wouldn't be applicable here. ForallF only works to find instances that are completely polymorphic, with no restrictions at all.
But there is one more thing the constraints package has to offer here! Data.Constraint.Lifting gives us a class Lifting p f, that represents the idea "p (f a) holds whenever p a holds". The idea that the constraint p "lifts through" the type constructor f. This is actually exactly the notion you needed, since you can just apply it recursively to nest arbitrarily many depths of q.
{-# LANGUAGE FlexibleContexts
, ScopedTypeVariables
, TypeApplications
, TypeOperators
#-}
module Foo where
import Data.Constraint ( (:-), (\\) )
import Data.Constraint.Lifting ( Lifting (lifting) )
data Nested q a = Stop | Deeper a (Nested q (q a))
instance (Show a, Lifting Show q) => Show (Nested q a) where
  show Stop         = "Stop"
  show (Deeper a r) = show a ++ " " ++ show r
                        \\ (lifting @Show @q @a)
Here the lifting method of the class Lifting is doing basically what instF was doing before. lifting :: Lifting p f => p a :- p (f a), so when we have Lifting Show q and we have Show a, then we can use \\ and lifting (used at the right type) to get the Show (q a) dictionary we need to recursively invoke show.
Now we can show Nested applied to list types:
λ Deeper [] (Deeper [[True]] Stop)
[] [[True]] Stop
it :: Nested [] [Bool]
Data.Constraint.Lifting does have a lot of predefined instances for things in the prelude, but you'll likely have to write your own instances. Fortunately, this is generally just a matter of writing:
instance Lifting SomeClass MyType where
  lifting = Sub Dict
The instance resolver does all the actual work for you, provided your type really does allow that class to be "lifted through" it.
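As a hedged sketch of that (Two is a made-up container for illustration; Sub and Dict come from Data.Constraint):

{-# LANGUAGE MultiParamTypeClasses #-}
import Data.Constraint          -- Sub, Dict
import Data.Constraint.Lifting (Lifting (lifting))

data Two a = Two a a deriving Show

-- Given Show a, the instance resolver finds the derived Show (Two a)
-- dictionary, and Sub Dict packages it up as Show a :- Show (Two a).
instance Lifting Show Two where
  lifting = Sub Dict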
[1] My understanding of the code in that module is not 100% complete (and the full details are a bit involved to make it as safe as possible), but basically the technique is to apply a class to a hidden unexported thing and capture the dictionary. Since no third-party instance could have actually referenced our unexported thing, the only way an instance could be resolved is if it was actually polymorphic and would work for anything. So then the captured dictionary can just be unsafeCoerced to apply to any type you like.
[2] There are a few other variants of Forall* for representing different "shapes" of polymorphism in the constraint. I believe you can't make a one-size-fits-all version because you have to avoid mentioning the variable you're being polymorphic over, which means you can't actually write the constraint applied to it; you have to have something that takes the class as a parameter, as well as all of the non-polymorphic parameters, and applies them together in a particular fashion.

Related

Instance signatures: constraints on methods

With InstanceSignatures, I can give a signature for a method within an instance decl.
The type signature in the instance declaration must be more polymorphic than (or the same as) the one in the class declaration, instantiated with the instance type.
I can see from this answer that the type part of the sig can't be more specific; but why couldn't you add extra constraints? After all you can put constraints on the instance decl that will make the instance more specific than "the class declaration, instantiated with the instance type".
Addit: (In response to the first couple of comments.) To explain "you can put constraints on the instance decl ... instance more specific": the OVERLAPPABLE instance head here alleges it provides addNat for all types a, b, c. But it doesn't: it only provides if a is of the form SNat a', c is of the form SNat c', etc. I'm asking why I can't similarly restrict the types at the method level?
For example [adapted from Hughes 1999] (more realistically this could be a BST/Rose tree, etc):
data AscList a = AscList [a]    -- ascending list

instance Functor AscList where  -- constructor class
  fmap f (AscList xs) = AscList $ sort $ fmap f xs
  -- :: Ord b => (a -> b) -> AscList a -> AscList b   -- inferred
Without an instance signature GHC complains no instance for (Ord b). With the signature GHC complains the instantiated sig from the class is more polymorphic than the one given.
This (excellent) answer explains the machinery in the instance dictionary. I can see the entry in the dictionary for the method has a type set from the number of constraints for the method (if any) in the class decl. There's no room to put an extra dictionary/parameter for the method.
That seems merely that the implementation didn't foresee a need. Is there a deeper/more theory-based reason against?
(BTW I'm not ever-so convinced by Hughes' approach: that would want both Ord a => WFT(AscList a) and Ord b => WFT(AscList b). But there's no need to require the incoming (AscList a) is ascending. Other constructor classes for AscList perhaps don't need an Ord constraint anywhere.)
If you allow adding constraints to instance methods that aren't already in the class method, a polymorphic function using type classes might no longer typecheck when you specialize it.
For example, consider your AscList instance above, and the following usage of Functor:
data T f = T (f (IO ()))   -- f applied to a non-Ord type

example :: Functor f => T f -> T f
example (T t) = T (fmap (\x -> x >> print 33) t)
The type of example says you can instantiate f with any functor, such as f = AscList.
But that makes no sense because it would try to sort an AscList (IO ()).
The only way to tell that something goes wrong when we specialize example here is to read its definition, to find whether uses of fmap are still valid, and that goes against modularity.
The type class constraints in a type signature do not and should not record how their methods are used. Conversely, that means that instances do not get to add constraints to their method implementations.

Overlapping instance for Show

Suppose we have the following:
{-# LANGUAGE FlexibleInstances #-}
module Sample where
newtype A a =
  A a
  deriving (Show)

newtype L a =
  L [a]

class ListContainer l where
  getList :: l a -> [a]

instance ListContainer L where
  getList (L l) = l

instance (Show a, ListContainer l) => Show (l a) where
  show = const "example"
With this code, ghc complains:
warning: [-Wdeferred-type-errors]
• Overlapping instances for Show (A a)
    arising from a use of ‘GHC.Show.$dmshowList’
  Matching instances:
    instance (Show a, ListContainer l) => Show (l a)
      -- Defined at /.../src/Sample.hs:18:10
    instance Show a => Show (A a)
      -- Defined at /.../src/Sample.hs:7:13
• In the expression: GHC.Show.$dmshowList #(A a)
  In an equation for ‘showList’:
      showList = GHC.Show.$dmshowList #(A a)
  When typechecking the code for ‘showList’
    in a derived instance for ‘Show (A a)’:
    To see the code I am typechecking, use -ddump-deriv
  In the instance declaration for ‘Show (A a)’
warning: [-Wdeferred-type-errors]
• Overlapping instances for Show (A a)
    arising from a use of ‘GHC.Show.$dmshow’
  Matching instances:
    instance (Show a, ListContainer l) => Show (l a)
      -- Defined at /.../src/Sample.hs:18:10
    instance Show a => Show (A a)
      -- Defined at /.../src/Sample.hs:7:13
• In the expression: GHC.Show.$dmshow #(A a)
  In an equation for ‘show’: show = GHC.Show.$dmshow #(A a)
  When typechecking the code for ‘show’
    in a derived instance for ‘Show (A a)’:
    To see the code I am typechecking, use -ddump-deriv
  In the instance declaration for ‘Show (A a)’
I can understand that it thinks type a can either derive Show, or derive ListContainer, which may result in Show.
How do we avoid that?
I understand that there exists a function showList, but its signature is a bit foreign. I do already have a function that I intend to use to display certain lists, which returns String directly.
I can understand that it thinks type a can either derive Show, or derive ListContainer, which may result in Show.
That is not what it thinks.
When Haskell chooses a class instance, it doesn't look at the instance constraints at all. All it considers when choosing an instance is the instance head (the thing that comes right after the class name).
In your Show instance, the instance head is l a. This instance head matches A a (by assuming l = A). It also matches a lot of other things by the way - for example, it matches Maybe a (where l = Maybe), and Either b a (with l = Either b), and Identity a, and IO a - pretty much every type with a type parameter, come to think of it. It doesn't matter that neither A nor Maybe nor IO have an instance of ListContainer, because like I said above, Haskell doesn't look at constraints when choosing instances, only at instance heads.
It is only after finding a matching instance (by matching on its head) that Haskell will check if that instance's constraints are in fact satisfied. And will complain if they aren't. But it will never go back and try to pick another instance instead.
So, coming back to your example: since A a now has two matching Show instances - its own derived one and the Show (l a) one that you wrote - the compiler complains that they are overlapping.
In your example you can just remove instance (Show a, ListContainer l) => Show (l a) and add deriving (Show) to the L definition.
Alternatively, you can remove deriving (Show) from the A definition.
If you want your code to behave as it does now, remove deriving (Show) and implement Show explicitly:
instance {-# OVERLAPPING #-} Show a => Show (A a) where
  show (A a) = "A " ++ show a
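If you still want to reuse your ListContainer machinery for display, a non-overlapping alternative (a sketch; showContainer is just an illustrative name) is to make it an ordinary function and call it from per-type Show instances:

showContainer :: (Show a, ListContainer l) => l a -> String
showContainer = show . getList

instance Show a => Show (L a) where
  show = showContainer

Each Show instance head stays concrete, so nothing overlaps with the derived Show (A a).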

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element a
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either-style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of only using complete types (not type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than just a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing  = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing  = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
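Working that broken instance through by hand (a quick check of the claim, not from the original answer):

-- mfmap id (Just 3)                  =  Just 3    -- first law holds
-- mfmap ((+1) . (*2)) (Just 3)       =  Just 14   -- 7 /= 3, so 7 * 2
-- (mfmap (+1) . mfmap (*2)) (Just 3) =  Just 26   -- 6 /= 3 gives 12, then 13 /= 12 gives 26

So mfmap ((+1) . (*2)) and mfmap (+1) . mfmap (*2) disagree: the composition law fails even though the identity law holds.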
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing the traditional Functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: *->*), with the extra property (->Constraint) that each (forall (a::*) (b::*)) morphism (a->b) is lifted to a morphism on the corresponding mapped objects (-> f a->f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (f (Tough f (a, a)))

instance (Applicative f) => Semigroup (Tough f a) where
  Doable x <> Doable y = Tough . pure . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ (<>) <$> xs <*> ys
  -- The following actually violates the semigroup associativity law.
  -- Hardly matters here I suppose...
  xs <> Doable y = xs <> Tough (pure . Doable $ fmap twice y)
  Doable x <> ys = Tough (pure . Doable $ fmap twice x) <> ys

twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic state.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import Data.Set as Set

instance MN2 Set.Set a where
  -- does not require Ord
  ret2 = Set.singleton

instance Prelude.Ord b => MN3 Set.Set a b where
  -- Set.union requires Ord
  bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the Monad class requires the monad to be usable at all types.
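A small usage sketch of those classes (example is an illustrative name; the instances above also assume the MultiParamTypeClasses and FlexibleInstances extensions):

example :: Set.Set Int
example = Set.fromList [1, 2, 3] `bind2` \x -> ret2 (x * 2)
-- fromList [2,4,6]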
Also see: parameterized (indexed) monad.

Why constraints on data are a bad thing?

I know this question has been asked and answered lots of times but I still don't really understand why putting constraints on a data type is a bad thing.
For example, let's take Data.Map k a. All of the useful functions involving a Map need an Ord k constraint. So there is an implicit constraint on the definition of Data.Map. Why is it better to keep it implicit instead of letting the compiler and programmers know that Data.Map needs an orderable key?
Also, specifying a final type in a type declaration is something common, and one can see it as a way of "super" constraining a data type.
For example, I can write
data User = User { name :: String }
and that's acceptable. However is that not a constrained version of
data User' s = User' { name :: s }
After all, 99% of the functions I'll write for the User type won't need a String, and the few that do would probably only need s to be IsString and Show.
So, why is the last version of User considered bad:
data (IsString s, Show s, ...) => User'' s = User'' { name :: s }
while both User and User' are considered good?
I'm asking this, because lots of the time, I feel I'm unnecessarily narrowing my data (or even function) definitions, just to not have to propagate constraints.
Update
As far as I understand, data type constraints only apply to the constructor and don't propagate. So my question is then, why do data type constraints not work as expected (and propagate)? It's an extension anyway, so why not have a new extension doing data properly, if it was considered useful by the community?
TL;DR:
Use GADTs to provide implicit data contexts.
Don't use any kind of data constraint if you could do with Functor instances etc.
Map's too old to change to a GADT anyway.
Scroll to the bottom if you want to see the User implementation with GADTs
Let's use a case study of a Bag where all we care about is how many times something is in it (like an unordered sequence). We nearly always need an Eq constraint to do anything useful with it.
I'll use the inefficient list implementation so as not to muddy the waters over the Data.Map issue.
GADTs - the solution to the data constraint "problem"
The easy way to do what you're after is to use a GADT:
Notice below how the Eq constraint not only forces you to use types with an Eq instance when making GADTBags, it provides that instance implicitly wherever the GADTBag constructor appears. That's why count doesn't need an Eq context, whereas countV2 does - it doesn't use the constructor:
{-# LANGUAGE GADTs #-}

data GADTBag a where
  GADTBag :: Eq a => [a] -> GADTBag a

unGADTBag (GADTBag xs) = xs

instance Show a => Show (GADTBag a) where
  showsPrec i (GADTBag xs) = showParen (i > 9) (("GADTBag " ++ show xs) ++)

count :: a -> GADTBag a -> Int                      -- no Eq here
count a (GADTBag xs) = length . filter (== a) $ xs  -- but == here

countV2 a = length . filter (== a) . unGADTBag

size :: GADTBag a -> Int
size (GADTBag xs) = length xs
ghci> count 'l' (GADTBag "Hello")
2
ghci> :t countV2
countV2 :: Eq a => a -> GADTBag a -> Int
Now we didn't need the Eq constraint when we found the total size of the bag, but it didn't clutter up our definition anyway. (We could have used size = length . unGADTBag just as well.)
Now let's make a functor:
instance Functor GADTBag where
  fmap f (GADTBag xs) = GADTBag (map f xs)
oops!
DataConstraints_so.lhs:49:30:
Could not deduce (Eq b) arising from a use of `GADTBag'
from the context (Eq a)
That's unfixable (with the standard Functor class) because I can't restrict the type of fmap, but need to for the new list.
Data Constraint version
Can we do as you asked? Well, yes, except that you have to keep repeating the Eq constraint wherever you use the constructor:
{-# LANGUAGE DatatypeContexts #-}

data Eq a => EqBag a = EqBag { unEqBag :: [a] }
  deriving Show

count' a (EqBag xs) = length . filter (== a) $ xs
size' (EqBag xs) = length xs   -- Note: doesn't use (==) at all
Let's go to ghci to find out some less pretty things:
ghci> :so DataConstraints
DataConstraints_so.lhs:1:19: Warning:
-XDatatypeContexts is deprecated: It was widely considered a misfeature,
and has been removed from the Haskell language.
[1 of 1] Compiling Main ( DataConstraints_so.lhs, interpreted )
Ok, modules loaded: Main.
ghci> :t count
count :: a -> GADTBag a -> Int
ghci> :t count'
count' :: Eq a => a -> EqBag a -> Int
ghci> :t size
size :: GADTBag a -> Int
ghci> :t size'
size' :: Eq a => EqBag a -> Int
ghci>
So our EqBag count' function requires an Eq constraint, which I think is perfectly reasonable, but our size' function also requires one, which is less pretty. This is because the type of the EqBag constructor is EqBag :: Eq a => [a] -> EqBag a, and this constraint must be added every time.
We can't make a functor here either:
instance Functor EqBag where
  fmap f (EqBag xs) = EqBag (map f xs)
for exactly the same reason as with the GADTBag
Constraintless bags
data ListBag a = ListBag { unListBag :: [a] }
  deriving Show

count'' a = length . filter (== a) . unListBag
size''    = length . unListBag

instance Functor ListBag where
  fmap f (ListBag xs) = ListBag (map f xs)
Now the types of count'' and size'' are exactly as we expect, and we can use standard constructor classes like Functor:
ghci> :t count''
count'' :: Eq a => a -> ListBag a -> Int
ghci> :t size''
size'' :: ListBag a -> Int
ghci> fmap (Data.Char.ord) (ListBag "hello")
ListBag {unListBag = [104,101,108,108,111]}
ghci>
Comparison and conclusions
The GADTs version automagically propagates the Eq constraint everywhere the constructor is used. The type checker can rely on there being an Eq instance, because you can't use the constructor for a non-Eq type.
The DatatypeContexts version forces the programmer to manually propagate the Eq constraint, which is fine by me if you want it, but is deprecated because it doesn't give you anything more than the GADT one does and was seen by many as pointless and annoying.
The unconstrained version is good because it doesn't prevent you from making Functor, Monad etc instances. The constraints are written exactly when they're needed, no more or less. Data.Map uses the unconstrained version partly because unconstrained is generally seen as most flexible, but also partly because it predates GADTs by some margin, and there needs to be a compelling reason to potentially break existing code.
What about your excellent User example?
I think that's a great example of a one-purpose data type that benefits from a constraint on the type, and I'd advise you to use a GADT to implement it.
(That said, sometimes I have a one-purpose data type and end up making it unconstrainedly polymorphic just because I love to use Functor (and Applicative), and would rather use fmap than mapBag because I feel it's clearer.)
{-# LANGUAGE GADTs #-}
import Data.String

data User s where
  User :: (IsString s, Show s) => s -> User s

name :: User s -> s
name (User s) = s

instance Show (User s) where   -- cool, no Show context
  showsPrec i (User s) = showParen (i > 9) (("User " ++ show s) ++)

instance (IsString s, Show s) => IsString (User s) where
  fromString = User . fromString
Notice that since fromString constructs a value of type User s, we need the context explicitly. After all, we composed with the constructor User :: (IsString s, Show s) => s -> User s. The User constructor removes the need for an explicit context when we pattern match (destruct), because it already enforced the constraint when we used it as a constructor.
We didn't need the Show context in the Show instance because we used (User s) on the left hand side in a pattern match.
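A small usage sketch (greet is just an illustrative helper): the pattern match on User is what brings Show s into scope, so greet needs no context of its own.

greet :: User s -> String
greet (User s) = "hello, " ++ show s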
Constraints
The problem is that constraints are not a property of the data type, but of the algorithm/function that operates on them. Different functions might need different and unique constraints.
A Box example
As an example, let's assume we want to create a container called Box which contains only 2 values.
data Box a = Box a a deriving Eq   -- Eq is needed later as a superclass of Ord
We want it to:
be showable
allow the sorting of the two elements via sort
Does it make sense to apply the constraint of both Ord and Show on the data type? No, because the data type in itself could be only shown or only sorted, and therefore the constraints are related to its use, not its definition.
instance (Show a) => Show (Box a) where
  show (Box a b) = concat ["'", show a, ", ", show b, "'"]

instance (Ord a) => Ord (Box a) where
  compare (Box a b) (Box c d) =
    let ca = compare a c
        cb = compare b d
    in  if ca /= EQ then ca else cb
The Data.Map case
Data.Map's Ord constraint on the key type is really only needed when we have more than one element in the container. Otherwise the container is usable even without an Ord key. For example, this algorithm:
transf :: Map NonOrd Int -> Map NonOrd Int
transf x =
  if Map.null x
    then Map.singleton NonOrdA 1
    else x
works just fine without the Ord constraint and always produces a non-empty map.
Using DatatypeContexts reduces the number of programs you can write. If most of those illegal programs are nonsense, you might say it's worth the runtime cost associated with GHC passing in a type class dictionary that isn't used. For example, if we had
data Ord k => MapDTC k a
then @jefffrey's transf is rejected. But we should probably have transf _ = return (NonOrdA, 1) instead.
In some sense the context is documentation that says "every Map must have ordered keys". If you look at all of the functions in Data.Map you'll come to a similar conclusion: "every useful Map has ordered keys". You can create maps with unordered keys using
mapKeysMonotonic :: (k1 -> k2) -> Map k1 a -> Map k2 a
singleton :: k2 -> a -> Map k2 a
But the moment you try to do anything useful with them, you'll wind up with No instance for Ord k2 somewhat later.
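To make that concrete, here is a sketch (fine and broken are illustrative names; NonOrd is the hypothetical key type without an Ord instance from the transf example above):

import qualified Data.Map as Map

data NonOrd = NonOrdA | NonOrdB

fine :: Map.Map NonOrd Int
fine = Map.singleton NonOrdA 1         -- no Ord needed

-- broken = Map.insert NonOrdB 2 fine  -- rejected: No instance for (Ord NonOrd)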

How does one satisfy a class constraint in an instance of a class that requires a type constructor rather than a concrete type?

I'm currently in Chapter 8 of Learn You a Haskell, and I've reached the section on the Functor typeclass. In said section the author gives examples of how different types could be made instances of the class (e.g. Maybe, a custom Tree type, etc.). Seeing this, I decided to (for fun and practice) try implementing an instance for the Data.Set type; in all of this ignoring Data.Set.map, of course.
The actual instance itself is pretty straight-forward, and I wrote it as:
instance Functor Set.Set where
  fmap f empty = Set.empty
  fmap f s     = Set.fromList $ map f (Set.elems s)
But, since I happen to use the function fromList this brings in a class constraint calling for the types used in the Set to be Ord, as is explained by a compiler error:
Error occurred
ERROR line 4 - Cannot justify constraints in instance member binding
*** Expression : fmap
*** Type : Functor Set => (a -> b) -> Set a -> Set b
*** Given context : Functor Set
*** Constraints : Ord b
I tried putting a constraint on the instance, or adding a type signature to fmap, but neither succeeded (both were compiler errors as well.)
Given a situation like this, how can a constraint be fulfilled and satisfied? Is there any possible way?
Thanks in advance! :)
Unfortunately, there is no easy way to do this with the standard Functor class. This is why Set does not come with a Functor instance by default: you cannot write one.
This is something of a problem, and there have been some suggested solutions (e.g. defining the Functor class in a different way), but I do not know if there is a consensus on how to best handle this.
I believe one approach is to rewrite the Functor class using constraint kinds to reify the additional constraints instances of the new Functor class may have. This would let you specify that Set has to contain types from the Ord class.
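A rough sketch of that idea (CFunctor, Suitable and cfmap are names made up for illustration, not a standard library API):

{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import Data.Kind (Constraint)
import qualified Data.Set as Set

class CFunctor f where
  type Suitable f a :: Constraint
  cfmap :: (Suitable f a, Suitable f b) => (a -> b) -> f a -> f b

instance CFunctor Set.Set where
  type Suitable Set.Set a = Ord a   -- Set may only hold orderable elements
  cfmap = Set.map

instance CFunctor [] where
  type Suitable [] a = ()           -- lists place no requirement on elements
  cfmap = map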
Another approach uses only multi-parameter classes. I could only find the article about doing this for the Monad class, but making Set part of Monad faces the same problems as making it part of Functor. It's called Restricted Monads.
The basic gist of using multi-parameter classes here seems to be something like this:
class Functor' f a b where
  fmap' :: (a -> b) -> f a -> f b

instance (Ord a, Ord b) => Functor' Data.Set.Set a b where
  fmap' = Data.Set.map
Essentially, all you're doing here is making the types in the Set also part of the class. This then lets you constrain what these types can be when you write an instance of that class.
This version of Functor needs two extensions: MultiParamTypeClasses and FlexibleInstances. (You need the first extension to be able to define the class and the second extension to be able to define an instance for Set.)
Haskell : An example of a Foldable which is not a Functor (or not Traversable)? has a good discussion about this.
This is impossible. The purpose of the Functor class is that if you have Functor f => f a, you can replace the a with whatever you like. The class is not allowed to constrain you to only return this or that. Since Set requires that its elements satisfy certain constraints (and indeed this isn't an implementation detail but really an essential property of sets), it doesn't satisfy the requirements of Functor.
There are, as mentioned in another answer, ways of developing a class like Functor that does constrain you in that way, but it's really a different class, because it gives the user of the class fewer guarantees (you don't get to use this with whatever type parameter you want), in exchange for becoming applicable to a wider range of types. That is, after all, the classic tradeoff of defining a property of types: the more types you want to satisfy it, the less they must be forced to satisfy.
(Another interesting example of where this shows up is the MonadPlus class. In particular, for every instance MonadPlus TC you can make an instance Monoid (TC a), but you can't always go the other way around. Hence the Monoid (Maybe a) instance is different from the MonadPlus Maybe instance, because the former can restrict the a but the latter can't.)
You can do this using a CoYoneda Functor.
{-# LANGUAGE GADTs #-}
import qualified Data.Set as Set

data CYSet a where
  CYSet :: Ord a => Set.Set a -> (a -> b) -> CYSet b

liftCYSet :: Ord a => Set.Set a -> CYSet a
liftCYSet s = CYSet s id

lowerCYSet :: Ord a => CYSet a -> Set.Set a
lowerCYSet (CYSet s f) = Set.fromList $ map f $ Set.elems s

instance Functor CYSet where
  fmap f (CYSet s g) = CYSet s (f . g)
main = putStrLn . show
     $ lowerCYSet
     $ fmap (\x -> x `mod` 3)
     $ fmap abs
     $ fmap (\x -> x - 5)
     $ liftCYSet $ Set.fromList [1..10]
-- prints "fromList [0,1,2]"
