Haskell subclassing and instance overlap

Coming from the OOP world, I sometimes find myself trying to use the inheritance pattern in Haskell, with varying degrees of success. Here's a little puzzle I encountered with subclassing (using GHC 8.10.7).
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE UndecidableInstances #-}

import Data.List (sort)

class Collection c a where
  -- gets list of elements in the collection
  elements :: c a -> [a]

class OrderedCollection c a where
  -- gets sorted list of elements in the collection
  orderedElements :: c a -> [a]

instance (Ord a, OrderedCollection c a) => Collection c a where
  -- "default" implementation
  elements = orderedElements

newtype SortedList a = SortedList [a]
  deriving Show

instance (Ord a) => OrderedCollection SortedList a where
  -- need to sort the elements in the list
  orderedElements (SortedList xs) = sort xs

instance Collection SortedList a where
  -- "optimized" implementation: no need to sort
  elements (SortedList xs) = xs

test :: (Ord a, Show a, OrderedCollection c a) => c a -> IO ()
test coll = do
  putStrLn $ "ordered elements: " ++ show (orderedElements coll)
  putStrLn $ "elements: " ++ show (elements coll)

myList :: SortedList Int
myList = SortedList [3, 2, 1]

main :: IO ()
main = do
  test myList
After including the necessary language extensions, this still gave me an error: Overlapping instances for Collection c a arising from a use of ‘elements’. It suggests using IncoherentInstances. Since this extension is now deprecated in favor of per-instance pragmas, I added an INCOHERENT pragma to the subclass instance:
instance {-# INCOHERENT #-} (Ord a, OrderedCollection c a) => Collection c a where
...
This successfully compiled. However, the result was not what I expected, as the output was:
ordered elements: [1,2,3]
elements: [1,2,3]
What I wanted was for the specialized implementation of Collection for SortedList to override the default (in an OO language, SortedList would inherit from OrderedCollection and then override the elements method). But here the type checker does not know to use SortedList's custom Collection implementation, because the type signature of test only imposes the constraint OrderedCollection c a.
Next, I tried adding the Collection constraint:
test :: (Ord a, Show a, Collection c a, OrderedCollection c a) => c a -> IO ()
This gave me the output I wanted:
ordered elements: [1,2,3]
elements: [3,2,1]
However, GHC also issued a warning about "fragile inner bindings" and suggested I add the MonoLocalBinds extension, which silences that warning. In any case, I'm not thrilled with having to include the Collection c a constraint (given it's implied by OrderedCollection c a), or having to use incoherent instances.
Interestingly, if I changed the INCOHERENT pragma to OVERLAPPABLE, it still compiled, and it also allowed me to remove MonoLocalBinds.
My question is, are there any alternative approaches to achieving the desired "inheritance" behavior here, without needing the redundant constraint in test?

When you write this:
instance ... => Collection c a where
You're declaring a Collection instance for all types ever. And it doesn't matter at all what's on the left of the fat arrow =>. Constraints do not participate in instance resolution. When the compiler tries to look up an instance for a particular type, it only looks at what's on the right of the fat arrow =>, and only after finding a matching instance does it check whether its constraints are satisfied. And if they're not, the compiler won't go back to look for another instance. That's how instance resolution works, and there are good reasons for it.
So, to reiterate: Collection c a means that this is an instance for all types.
And therefore, any subsequent Collection instances you might declare would of course be overlapping.
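A tiny illustration of this point (the Describe class below is invented for illustration): GHC commits to the matching instance head first and only then checks the context, so a non-numeric argument produces a missing Num constraint rather than a search for another instance.

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}

class Describe a where
  describe :: a -> String

-- The head says "Describe holds for every type a"; the Num context is
-- checked only after this instance has already been selected.
instance Num a => Describe a where
  describe _ = "some number"

ok :: String
ok = describe (3 :: Int)

-- bad = describe "hello"
-- Rejected with "No instance for (Num [Char])": GHC has committed to the
-- instance above and will not go back to look for a different one.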
Thankfully, in this particular case, there is a better way: you can declare a default method implementation without creating a universal instance like that. To do that, put the default implementation right inside the class declaration. And yes, you can put constraints on it too, via a default signature with the DefaultSignatures extension (see docs):
class Collection c a where
  -- gets list of elements in the collection
  elements :: c a -> [a]
  default elements :: OrderedCollection c a => c a -> [a]
  elements = orderedElements
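Applied to the question's SortedList, a minimal compilable sketch might look like this (assuming the same class and type names as above):

{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Data.List (sort)

class Collection c a where
  elements :: c a -> [a]
  -- fall back to the ordered traversal when nothing better is given
  default elements :: OrderedCollection c a => c a -> [a]
  elements = orderedElements

class OrderedCollection c a where
  orderedElements :: c a -> [a]

newtype SortedList a = SortedList [a]
  deriving Show

instance Ord a => OrderedCollection SortedList a where
  orderedElements (SortedList xs) = sort xs

-- The explicit method definition overrides the default; no overlap involved.
instance Collection SortedList a where
  elements (SortedList xs) = xs

main :: IO ()
main = print (elements (SortedList [3, 2, 1 :: Int]))  -- prints [3,2,1]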
But more generally, while type classes plus existential quantification are technically equivalent to OOP-style class hierarchies, if you try to actually model your domain like that, it gets more and more awkward and painful the further you go. It's a bit like trying to model ADTs in something like Java: technically possible, but oh so messy!
There are some legitimate cases where a class hierarchy may make sense (one notable example is the GHC exception system), but most of the time there are much simpler ways.

Related

How to 'show' unshowable types?

I am using data-reify and graphviz to transform an eDSL into a nice graphical representation, for introspection purposes.
As simple, contrived example, consider:
{-# LANGUAGE GADTs #-}

data Expr a where
  Constant :: a -> Expr a
  Map :: (other -> a) -> Expr other -> Expr a
  Apply :: Expr (other -> a) -> Expr other -> Expr a

instance Functor Expr where
  fmap fun val = Map fun val

instance Applicative Expr where
  fun_expr <*> data_expr = Apply fun_expr data_expr
  pure val = Constant val

-- And then some functions to optimize an Expr AST, evaluate Exprs, etc.
To make introspection nicer, I would like to print the values which are stored inside certain AST nodes of the DSL datatype.
However, in general any a might be stored in Constant, even those that do not implement Show. This is not necessarily a problem since we can constrain the instance of Expr like so:
instance Show a => Show (Expr a) where
...
This is not what I want however: I would still like to be able to print Expr even if a is not Show-able, by printing some placeholder value (such as just its type and a message that it is unprintable) instead.
So we want to do one thing if we have an a implementing Show, and another if a particular a does not.
Furthermore, the DSL also has the constructors Map and Apply, which are even more problematic. These constructors are existential in other, and thus we cannot assume anything about other, a or (other -> a). Adding constraints on other to the Map resp. Apply constructors would break the Functor resp. Applicative implementations, which forward to them.
But here too I'd like to print, for the functions:
a unique reference. This is always possible (even though it is not pretty as it requires unsafePerformIO) using System.Mem.StableName.
Its type, if possible (one technique is to use show (typeOf fun), but it requires that fun is Typeable).
Again we reach the issue where we want to do one thing if we have an f implementing Typeable and another if f does not.
How to do this?
Extra disclaimer: The goal here is not to create 'correct' Show instances for types that do not support it. There is no aspiration to be able to Read them later, or that print a != print b implies a != b.
The goal is to print any datastructure in a 'nice for human introspection' way.
The part I am stuck at is that I want to use one implementation if extra constraints hold for a resp. (other -> a), but a 'default' one if they do not.
Maybe type classes with FlexibleInstances, or maybe type families are needed here? I have not been able to figure it out (and maybe I am on the wrong track all together).
Not all problems have solutions. Not all constraint systems have a satisfying assignment.
So... relax the constraints. Store the data you need to make a sensible introspective function in your data structure, and use functions with type signatures like show, fmap, pure, and (<*>), but not exactly equal to them. If you need IO, use IO in your type signature. In short: free yourself from the expectation that your exceptional needs fit into the standard library.
To deal with things where you may either have an instance or not, store data saying whether you have an instance or not:
data InstanceOrNot c where
  Instance :: c => InstanceOrNot c
  Not :: InstanceOrNot c
(Perhaps a Constraint-kinded Either-alike, rather than Maybe-alike, would be more appropriate. I suspect as you start coding this you will discover what's needed.) Demand that clients that call notFmap and friends supply these as appropriate.
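As a sketch of how such evidence might be consumed (showOrPlaceholder is a made-up name, not part of the answer's proposal), the caller passes Instance when a Show dictionary is available and Not otherwise:

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE GADTs #-}

data InstanceOrNot c where
  Instance :: c => InstanceOrNot c
  Not :: InstanceOrNot c

-- Matching on Instance brings the captured dictionary into scope.
showOrPlaceholder :: InstanceOrNot (Show a) -> a -> String
showOrPlaceholder Instance x = show x
showOrPlaceholder Not      _ = "<unshowable>"

example :: (String, String)
example =
  ( showOrPlaceholder Instance (42 :: Int)      -- "42"
  , showOrPlaceholder Not (id :: Int -> Int) )  -- "<unshowable>"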
In the comments, I propose parameterizing your type by the constraints you demand, and giving a Functor instance for the no-constraints version. Here's a short example showing how that might look:
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}
import Data.Kind
type family All cs a :: Constraint where
  All '[] a = ()
  All (c:cs) a = (c a, All cs a)

data Lol cs a where
  Leaf :: a -> Lol cs a
  Fmap :: All cs b => (a -> b) -> Lol cs a -> Lol cs b

instance Functor (Lol '[]) where
  fmap f (Leaf a) = Leaf (f a)
  fmap f (Fmap g garg) = Fmap (f . g) garg
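For instance (a small usage sketch building on the definitions above), Lol '[] gets the ordinary Functor instance, while a constrained version such as Lol '[Show] captures a Show dictionary at every Fmap node:

-- uses the Functor instance of the unconstrained version
plain :: Lol '[] Int
plain = fmap (+1) (Leaf 41)

-- All '[Show] Int must hold here, so a Show Int dictionary is captured
constrained :: Lol '[Show] Int
constrained = Fmap (+1) (Leaf 41)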
Great timing! Well-typed recently released a library which allows you to recover runtime information. They specifically have an example of showing arbitrary values. It's on github at https://github.com/well-typed/recover-rtti.
It turns out that this is a problem which has been recognized by multiple people in the past, known as the 'Constrained Monad Problem'. There is an elegant solution, explained in detail in the paper The Constrained-Monad Problem by Neil Sculthorpe, Jan Bracker, George Giorgidze, and Andy Gill.
A brief summary of the technique: Monads (and other typeclasses) have a 'normal form'. We can 'lift' primitives (which are constrained any way we wish) into this 'normal form' construction, itself an existential datatype, and then use any of the operations available for the typeclass we have lifted into. These operations themselves are not constrained, and thus we can use all of Haskell's normal typeclass functions.
Finally, to turn this back into the concrete type (which again has all the constraints we are interested in) we 'lower' it, which is an operation that takes for each of the typeclass' operations a function which it will apply at the appropriate time.
This way, constraints from the outside (which are part of the functions supplied to the lowering) and constraints from the inside (which are part of the primitives we lifted) are able to be matched, and finally we end up with one big happy constrained datatype for which we have been able to use any of the normal Functor/Monoid/Monad/etc. operations.
Interestingly, while the intermediate operations are not constrained, to my knowledge it is impossible to write something which 'breaks' them as this would break the categorical laws that the typeclass under consideration should adhere to.
This is available in the constrained-normal Hackage package to use in your own code.
The example I struggled with, could be implemented as follows:
{-# LANGUAGE GADTs #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE UndecidableInstances #-}
module Example where
import Data.Dynamic
import Data.Kind
import Data.Typeable
import Control.Monad.ConstrainedNormal
-- | Required to have a simple constraint which we can use as argument to `Expr` / `Expr'`.
-- | This is definitely the part of the example with the roughest edges: I have yet to figure out
-- | how to make Haskell happy with constraints
class (Show a, Typeable a) => Introspectable a where {}
instance (Show a, Typeable a) => Introspectable a where {}
data Expr' (c :: * -> Constraint) a where
  C :: a -> Expr' c a
  -- M :: (a -> b) -> Expr' a -> Expr' b --^ NOTE: This one is actually never used as ConstrainedNormal will use the 'free' implementation based on A + C.
  A :: c a => Expr' c (a -> b) -> Expr' c a -> Expr' c b

instance Introspectable a => Show (Expr' Introspectable a) where
  show e = case e of
    C x -> "(C " ++ show x ++ ")"
    -- M f x = "(M " ++ show val ++ ")"
    A fx x -> "(A " ++ show (typeOf fx) ++ " " ++ show x ++ ")"
-- | In user-facing code you'd not want to expose the guts of this construction
-- So let's introduce a 'wrapper type' which is what a user would normally interact with.
type Expr c a = NAF c (Expr' c) a
liftExpr :: c a => Expr' c a -> Expr c a
liftExpr expr = liftNAF expr
lowerExpr :: c a => Expr c a -> Expr' c a
lowerExpr lifted_expr = lowerNAF C A lifted_expr
constant :: Introspectable a => a -> Expr c a
constant val = pure val -- liftExpr (C val)
You could now for instance write
ghci> val = constant 10 :: Expr Introspectable Int
ghci> val
(C 10)
ghci> (+2) <$> val
(C 12)
ghci> (+) <$> constant 10 <*> constant 32 :: Expr Introspectable Int
And by using Data.Constraint.Trivial (part of the trivial-constraint library, although it is also possible to write your own 'empty constraint') one could instead write e.g.
ghci> val = constant 10 :: Expr Unconstrained Int
which will work just as before, but now val cannot be printed.
The one thing I have not yet figured out, is how to properly work with subsets of constraints (i.e. if I have a function that only requires Show, make it work with something that is Introspectable). Currently everything has to work with the 'big' set of constraints.
Another minor drawback is of course that you'll have to annotate the constraint type (e.g. if you do not want constraints, write Unconstrained manually), as GHC will otherwise complain that c0 is not known.
We've reached the goal of having a type which can optionally be constrained to be printable, where all the machinery that does not need printing also works on the unprintable members of the family of types, and the types can be used as Monoids, Functors, Applicatives, etc. just as you like.
I think it is a beautiful approach, and want to commend Neil Sculthorpe et al. for their work on the paper and the constrained-normal library that makes this possible. It's very cool!

Subset algebraic data type, or type-level set, in Haskell

Suppose you have a large number of types and a large number of functions that each return "subsets" of these types.
Let's use a small example to make the situation more explicit. Here's a simple algebraic data type:
data T = A | B | C
and there are two functions f, g that return a T
f :: T
g :: T
For the situation at hand, assume it is important that f can only return an A or a B and g can only return a B or a C.
I would like to encode this in the type system. Here are a few reasons/circumstances why this might be desirable:
Let the functions f and g have a more informative signature than just ::T
Enforce that implementations of f and g do not accidentally return a forbidden type that users of the implementation then accidentally use
Allow code reuse, e.g. when helper functions are involved that only operate on subsets of type T
Avoid boilerplate code (see below)
Make refactoring (much!) easier
One way to do this is to split up the algebraic datatype and wrap the individual types as needed:
data A = A
data B = B
data C = C
data Retf = RetfA A | RetfB B
data Retg = RetgB B | RetgC C
f :: Retf
g :: Retg
This works, and is easy to understand, but carries a lot of boilerplate for frequent unwrapping of the return types Retf and Retg.
I don't see polymorphism being of any help here.
So, probably, this is a case for dependent types. It's not really a type-level list, rather a type-level set, but I've never seen a type-level set.
The goal, in the end, is to encode the domain knowledge via the types, so that compile-time checks are available, without having excessive boilerplate. (The boilerplate gets really annoying when there are lots of types and lots of functions.)
Define an auxiliary sum type (to be used as a data kind) where each branch corresponds to a version of your main type:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE StandaloneKindSignatures #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
import Data.Kind
import Data.Void
import GHC.TypeLits
data Version = AllEnabled | SomeDisabled
Then define a type family that maps the version and the constructor name (given as a type-level Symbol) to the type () if that branch is allowed, and to the empty type Void if it's disallowed.
type Enabled :: Version -> Symbol -> Type
type family Enabled v ctor where
  Enabled SomeDisabled "C" = Void
  Enabled _ _ = ()
Then define your type as follows:
type T :: Version -> Type
data T v = A !(Enabled v "A")
         | B !(Enabled v "B")
         | C !(Enabled v "C")
(The strictness annotations are there to help the exhaustivity checker.)
Typeclass instances can be derived, but separately for each version:
deriving instance Show (T AllEnabled)
deriving instance Eq (T AllEnabled)
deriving instance Show (T SomeDisabled)
deriving instance Eq (T SomeDisabled)
Here's an example of use:
noC :: T SomeDisabled
noC = A ()
main :: IO ()
main = print $ case noC of
  A _ -> "A"
  B _ -> "B"
  -- this doesn't give a warning with -Wincomplete-patterns
This solution makes pattern-matching and construction more cumbersome, because those () are always there.
A variation is to have one type family per branch (as in Trees that Grow) instead of a two-parameter type family.
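A sketch of that variation (the family names EnabledA/EnabledB/EnabledC are invented here, in the style of Trees that Grow):

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE StandaloneKindSignatures #-}
{-# LANGUAGE TypeFamilies #-}

import Data.Kind
import Data.Void

data Version = AllEnabled | SomeDisabled

type EnabledA :: Version -> Type
type family EnabledA v where
  EnabledA _ = ()

type EnabledB :: Version -> Type
type family EnabledB v where
  EnabledB _ = ()

type EnabledC :: Version -> Type
type family EnabledC v where
  EnabledC SomeDisabled = Void
  EnabledC _ = ()

type T :: Version -> Type
data T v = A !(EnabledA v)
         | B !(EnabledB v)
         | C !(EnabledC v)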
I tried to achieve something like this in the past, but without much success -- I was not too satisfied with my solution.
Still, one can use GADTs to encode this constraint:
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data TagA = IsA | NotA
data TagC = IsC | NotC

data T (ta :: TagA) (tc :: TagC) where
  A :: T 'IsA 'NotC
  B :: T 'NotA 'NotC
  C :: T 'NotA 'IsC
-- existential wrappers
data TnotC where TnotC :: T ta 'NotC -> TnotC
data TnotA where TnotA :: T 'NotA tc -> TnotA
f :: TnotC
g :: TnotA
This however gets boring fast, because of the wrapping/unwrapping of the existentials. Consumer functions are more convenient since we can write
giveMeNotAnA :: T 'NotA tc -> Int
to require anything but an A. Producer functions instead need to use existentials.
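For instance, a sketch using the definitions above: a consumer can take the GADT directly, while a producer builds a concrete constructor and then hides the tag behind the existential wrapper.

-- consumer: accepts B or C; an A simply cannot be passed
giveMeNotAnA :: T 'NotA tc -> Int
giveMeNotAnA B = 0
giveMeNotAnA C = 1

-- producers: pick a concrete constructor, then wrap it
f :: TnotC
f = TnotC A

g :: TnotA
g = TnotA C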
In a type with many constructors, it also gets inconvenient since we have to use a GADT with many tags/parameters. Maybe this can be streamlined with some clever typeclass machinery.
Giving each individual value its own type scales extremely badly, and is quite unnecessarily fine-grained.
What you probably want is just restrict the types by some property on their values. In e.g. Coq, that would be a subset type:
Inductive T: Type :=
| A
| B
| C.
Definition Retf: Type := { x: T | x<>C }.
Definition Retg: Type := { x: T | x<>A }.
Well, Haskell has no way of expressing such value constraints, but that doesn't stop you from creating types that conceptually fulfill them. Just use newtypes:
newtype Retf = Retf { getRetf :: T }

mkRetf :: T -> Maybe Retf
mkRetf C = Nothing
mkRetf x = Just (Retf x)

newtype Retg = Retg { getRetg :: T }
mkRetg :: ...
Then in the implementation of f, you match on the result of mkRetf and raise an error if it's Nothing. That way, an implementation mistake that makes it give a C will unfortunately not give a compilation error, but at least a runtime error from within the function that's actually at fault, rather than somewhere further down the line.
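An implementation of f in that style might look like this sketch (computeResult is a hypothetical stand-in for whatever f actually computes):

f :: Retf
f = case mkRetf computeResult of
      Just r  -> r
      Nothing -> error "f: implementation bug, tried to return a C"
  where
    -- placeholder for the real computation
    computeResult :: T
    computeResult = B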
An alternative that might be ideal for you is Liquid Haskell, which does support subset types. I can't say too much about it, but it's supposedly pretty good (and will in new GHC versions have direct support).

Can I parameterise the empty constraint type?

I have a class for queues which allows the instance to define the constraints it places on the elements. For example, a priority queue requires its elements to be orderable:
{-# LANGUAGE MultiParamTypeClasses, ConstraintKinds, FunctionalDependencies #-}

class Queue q c | q -> c where
  empty :: q a
  qpop :: c a => q a -> Maybe (a, q a)
  qpush :: c a => a -> q a -> q a

data PriorityQueue a = ...

instance Queue PriorityQueue Ord where
  ...
This works a charm: inside the instance declaration for PriorityQueue I can operate on elements of the queue using members of Ord such as (>).
I've got stuck trying to define a queue which places no requirements on its elements:
newtype LIFO a = LIFO [a]

instance Queue LIFO () where
  empty = LIFO []
  qpop (LIFO []) = Nothing
  qpop (LIFO (x:xs)) = Just (x, LIFO xs)
  qpush x (LIFO xs) = LIFO $ x:xs
This fails, with the following error message from GHC:
The second argument of `Queue' should have kind `* -> Constraint',
but `()' has kind `*'
In the instance declaration for `Queue LIFO ()'
This error message makes sense to me. Eq accepts a type parameter (we typically write Eq a => ...) whereas () has no parameters - it's a plain old kind mismatch.
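For reference, the kind mismatch can be seen directly in ghci:

ghci> :kind Eq
Eq :: * -> Constraint
ghci> :kind ()
() :: *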
I had a crack at writing a type function which ignores its second argument, which would allow me to write instance Queue LIFO (Const ()):
{-# LANGUAGE TypeFamilies, KindSignatures, PolyKinds #-}
type family Const a b :: k -> k2 -> k
type instance Const a b = a
I find this interaction of type families and kind polymorphism quite beautiful, so I was rather disappointed when it didn't work (I really thought it would!):
Expecting two more arguments to `a'
The first argument of `Const' should have kind `*',
but `a' has kind `k0 -> k1 -> k0'
In the type `a'
In the type instance declaration for `Const'
I have a feeling this last example is something stupid like a syntax mistake (I'm new to type families). How can I write a Constraint which doesn't place any restrictions on its argument?
This should work:
class NoConstraint a where
instance NoConstraint a where
instance Queue LIFO NoConstraint where
...
The above defines a constraint which is satisfied by all types. As such, the obligations c a where c = NoConstraint can always be discharged.
Also, since there are no members in that class, it should have zero (or nearly zero) run-time cost.
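Putting it together with the question's LIFO queue, a compilable sketch might look like this (FlexibleInstances is needed because the head of instance NoConstraint a is a bare type variable):

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

-- a constraint satisfied by every type
class NoConstraint a
instance NoConstraint a

class Queue q c | q -> c where
  empty :: q a
  qpop  :: c a => q a -> Maybe (a, q a)
  qpush :: c a => a -> q a -> q a

newtype LIFO a = LIFO [a]

instance Queue LIFO NoConstraint where
  empty = LIFO []
  qpop (LIFO [])     = Nothing
  qpop (LIFO (x:xs)) = Just (x, LIFO xs)
  qpush x (LIFO xs)  = LIFO (x : xs)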
The "constraint" () you are trying to use is not seen as an empty constraint set by GHC, but as the unit type () :: *. This causes Const () :: k2 -> *, which triggers the kind error.
If you do not want to use a custom class, you might try e.g. Const (Eq ()) or Const (Num Int), which have the right kind k2 -> Constraint. I do not recommend this, though, since I find it less readable than using a custom class.
(This requires enabling some extensions, as Benjamin Hodgson points out in a comment below.)

Which dictionary does GHC choose when more than one is in scope?

Consider the following example:
import Data.Constraint
class Bar a where
  bar :: a -> a
foo :: (Bar a) => Dict (Bar a) -> a -> a
foo Dict = bar
GHC has two choices for the dictionary to use when selecting a Bar instance in foo: it could use the dictionary from the Bar a constraint on foo, or it could use the runtime Dict to get a dictionary. See this question for an example where the dictionaries correspond to different instances.
Which dictionary does GHC use, and why is it the "correct" choice?
It just picks one. This isn't the correct choice; it's a pretty well-known wart. You can cause crashes this way, so it's a pretty bad state of affairs. Here is a short example using nothing but GADTs that demonstrates that it is possible to have two different instances in scope at once:
-- file Class.hs
{-# LANGUAGE GADTs #-}
module Class where

data Dict a where
  Dict :: C a => Dict a

class C a where
  test :: a -> Bool

-- file A.hs
module A where

import Class

instance C Int where
  test _ = True

v :: Dict Int
v = Dict

-- file B.hs
module B where

import Class

instance C Int where
  test _ = False

f :: Dict Int -> Bool
f Dict = test (0 :: Int)

-- file Main.hs
import A
import B

main = print (f v)
You will find that Main.hs compiles just fine, and even runs. It prints True on my machine with GHC 7.10.1, but that's not a stable outcome. Turning this into a crash is left to the reader.
GHC just picks one, and this is the correct choice. Any two dictionaries for the same constraint are supposed to be equal.
OverlappingInstances and IncoherentInstances are basically equivalent in destructive power; they both give up instance coherence by design (coherence meaning that any two equal constraints in your program are satisfied by the same dictionary). OverlappingInstances gives you a little more ability to work out which instances will be used on a case-by-case basis, but this isn't that useful once you get to the point of passing around Dicts as first-class values and so on. I would only consider using OverlappingInstances when I consider the overlapping instances extensionally equivalent (e.g., a more efficient but otherwise equal implementation for a specific type like Int), but even then, if I care enough about performance to write that specialized implementation, isn't it a performance bug if it doesn't get used when it could be?
In short, if you use OverlappingInstances, you give up the right to ask the question of which dictionary will be selected here.
Now it's true that you can break instance coherence without OverlappingInstances. In fact you can do it without orphans and without any extensions other than FlexibleInstances (arguably the problem is that the definition of "orphan" is wrong when FlexibleInstances is enabled). This is a very long-standing GHC bug, which hasn't been fixed in part because (a) it actually can't cause crashes directly as far as anybody seems to know, and (b) there might be a lot of programs that actually rely on having multiple instances for the same constraint in separate parts of the program, and that might be hard to avoid.
Getting back to the main topic, in principle it's important that GHC can select any dictionary that it has available to satisfy a constraint, because even though they are supposed to be equal, GHC might have more static information about some of them than others. Your example is a little bit too simple to be illustrative, but imagine that you passed an argument to bar; in general GHC doesn't know anything about the dictionary passed in via Dict, so it has to treat this as a call to an unknown function. But if you called foo at a specific type T for which there was a Bar T instance in scope, then GHC would know that the bar from the Bar a constraint dictionary was T's bar, and could generate a call to a known function, potentially inlining T's bar and doing more optimizations as a result.
In practice, GHC is currently not this smart and it just uses the innermost dictionary available. It would probably be already better to always use the outermost dictionary. But cases like this where there are multiple dictionaries available are not very common, so we don't have good benchmarks to test on.
Here's a test:
{-# LANGUAGE FlexibleInstances, OverlappingInstances, IncoherentInstances #-}
import Data.Constraint
class C a where foo :: a -> String
instance C [a] where foo _ = "[a]"
instance C [()] where foo _ = "[()]"
aDict :: Dict (C [a])
aDict = Dict
bDict :: Dict (C [()])
bDict = Dict
bar1 :: String
bar1 = case (bDict, aDict :: Dict (C [()])) of
  (Dict, Dict) -> foo [()] -- output: "[a]"

bar2 :: String
bar2 = case (aDict :: Dict (C [()]), bDict) of
  (Dict, Dict) -> foo [()] -- output: "[()]"
GHC above happens to use the "last" dictionary which was brought into scope. I wouldn't rely on this, though.
If you limit yourself to overlapping instances only, then you wouldn't be able to bring into scope two different dictionaries for the same type (as far as I can see), and everything should be fine since the choice of the dictionary becomes immaterial.
However, incoherent instances are another beast, since they allow you to commit to a generic instance and then use it at a type which has a more specific instance. This makes it very hard to understand which instance will be used.
In short, incoherent instances are evil.
Update: I ran some further tests. Using only overlapping instances and an orphan instance in a separate module you can still obtain two different dictionaries for the same type. So, we need even more caveats. :-(

Associated Parameter Restriction using Functional Dependency

The function f below, for a given type 'a', takes a parameter of type 'c'. For different types 'a', 'c' is restricted in different ways. Concretely, when 'a' is any Integral type, 'c' should be allowed to be any 'Real' type. When 'a' is Float, 'c' can ONLY be Float.
One attempt is:
{-# LANGUAGE
  MultiParamTypeClasses,
  FlexibleInstances,
  FunctionalDependencies,
  UndecidableInstances #-}

class AllowedParamType a c | a -> c

class Foo a where
  f :: (AllowedParamType a c) => c -> a

fIntegral :: (Integral a, Real c) => c -> a
fIntegral = error "implementation elided"

instance (Integral i, AllowedParamType i d, Real d) => Foo i where
  f = fIntegral
For some reason, GHC 7.4.1 complains that it "could not deduce (Real c) arising from a use of fIntegral". It seems to me that the functional dependency should allow this deduction. In the instance, a is unified with i, so by the functional dependency, d should be unified with c, which in the instance is declared to be 'Real'. What am I missing here?
Functional dependencies aside, will this approach be expressive enough to enforce the restrictions above, or is there a better way? We are only working with a few different values for 'a', so there will be instances like:
instance (Integral i, Real c) => AllowedParamType i c
instance AllowedParamType Float Float
Thanks
A possibly better way, is to use constraint kinds and type families (GHC extensions, requires GHC 7.4, I think). This allows you to specify the constraint as part of the class instance.
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances, UndecidableInstances #-}
import GHC.Exts (Constraint)
class Foo a where
  type ParamConstraint a b :: Constraint
  f :: ParamConstraint a b => b -> a

instance Integral i => Foo i where
  type ParamConstraint i b = Real b
  f = fIntegral
EDIT: Upon further experimentation, there are some subtleties that mean that this doesn't work as expected, specifically, type ParamConstraint i b = Real b is too general. I don't know a solution (or if one exists) right now.
OK, this one's been nagging at me. Given the wide variety of instances, let's go the whole hog and get rid of any relationship between the source and target type other than the presence of an instance:
{-# LANGUAGE OverlappingInstances, FlexibleInstances,TypeSynonymInstances,MultiParamTypeClasses #-}
class Foo a b where f :: a -> b
Now we can match up pairs of types with an f between them however we like, for example:
instance Foo Int Int where f = (+1)
instance Foo Int Integer where f = toInteger.((7::Int) -)
instance Foo Integer Int where f = fromInteger.(^ (2::Integer))
instance Foo Integer Integer where f = (*100)
instance Foo Char Char where f = id
instance Foo Char String where f = (:[]) -- requires TypeSynonymInstances
instance (Foo a b,Functor f) => Foo (f a) (f b) where f = fmap f -- requires FlexibleInstances
instance Foo Float Int where f = round
instance Foo Integer Char where f n = head $ show n
This does mean a lot of explicit type annotation to avoid No instance for... and Ambiguous type error messages.
For example, you can't do main = print (f 6), but you can do main = print (f (6::Int)::Int)
You could list all of the instances with the standard types that you want, which could lead to an awful lot of repetition, or you could light the blue touchpaper and do:
instance Integral i => Foo Double i where f = round -- requires FlexibleInstances
instance Real r => Foo Integer r where f = fromInteger -- requires FlexibleInstances
Beware: this does not mean "Hey, if you've got an integral type i, you can have an instance Foo Double i for free using this handy round function", it means: "every time you have any type i, it's definitely an instance Foo Double i. By the way, I'm using round for this, so unless your type i is Integral, we're going to fall out." That's a big issue for the Foo Integer Char instance, for example.
This can easily break your other instances, so if you now type f (5::Integer) :: Integer you get
Overlapping instances for Foo Integer Integer
arising from a use of `f'
Matching instances:
instance Foo Integer Integer
instance Real r => Foo Integer r
You can change your pragmas to include OverlappingInstances:
{-# LANGUAGE OverlappingInstances, FlexibleInstances,TypeSynonymInstances,MultiParamTypeClasses #-}
So now f (5::Integer) :: Integer returns 500, so clearly it's using the more specific Foo Integer Integer instance.
I think this sort of approach might work for you, defining many instances by hand, carefully considering when to go completely wild making instances out of standard type classes. (Alternatively, there aren't all that many standard types, and as we all know, notMany choose 2 = notIntractablyMany, so you could just list them all.)
Here's a suggestion to solve a more general problem, not yours specifically (I need more detail first - I promise to check later). I'm writing it in case other people are searching for a solution to a similar problem; I certainly was in the past, before I discovered SO. SO is especially great when it helps you try a radically new approach.
I used to have the work habit:
Introduce a multi-parameter type class (Types hanging out all over the place, so...)
Introduce functional dependencies (Should tidy it up but then I end up needing...)
Add FlexibleInstances (Alarm bells start ringing. There's a reason the compiler has this off by default...)
Add UndecidableInstances (GHC is telling you you're on your own, because it's not convinced it's up to the challenge you're setting it.)
Everything blows up. Refactor somehow.
Then I discovered the joys of type families (functional programming for types (hooray) - multi-parameter type classes are (a bit like) logic programming for types). My workflow changed to:
Introduce a type class including an associated type, i.e. replace
class MyProblematicClass a b | a -> b where
  thing :: a -> b
  thang :: b -> a -> b

with

class MyJustWorksClass a where
  type Thing a :: * -- Thing a is a type (*), not a type constructor (* -> *)
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a
Nervously add FlexibleInstances. Nothing goes wrong at all.
Sometimes fix things by using constraints like (MyJustWorksClass j,j~a)=> instead of (MyJustWorksClass a)=> or (Show t,t ~ Thing a,...)=> instead of (Show (Thing a),...) => to help ghc out. (~ essentially means 'is the same type as')
Nervously add FlexibleContexts. Nothing goes wrong at all.
Everything works.
The reason "Nothing goes wrong at all" is that ghc calculates the type Thing a using my type function Thing, rather than trying to deduce it from merely a bunch of assertions that there's a function there and it ought to be able to work it out.
Give it a go! Read Fun with Type Functions before reading the manual!
