MultiParamTypeClasses - Why is this type variable ambiguous? - haskell

Suppose I define a multi-parameter type class:
{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes, FlexibleContexts, FlexibleInstances #-}
class Table a b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
Then I define a function that uses decrement, for simplicity:
d = decrement
When I try to load this in ghci (version 8.6.3):
• Could not deduce (Table a b0 c0)
arising from a use of ‘decrement’
from the context: Table a b c
bound by the type signature for:
d :: forall a b c. Table a b c => a -> a
at Thing.hs:13:1-28
The type variables ‘b0’, ‘c0’ are ambiguous
Relevant bindings include d :: a -> a (bound at Thing.hs:14:1)
These potential instance exist:
instance Table (DummyTable a b) a b
This is confusing to me because the type of d is exactly the type of decrement, which is denoted in the class declaration.
I thought of the following workaround:
data Table a b = Table (a -> b) ((Table a b) -> (Table a b))
But this seems notationally inconvenient, and I also just wanted to know why I was getting this error message in the first place.

The problem is that, since decrement only mentions the type a, there is no way to figure out which types b and c should be, not even at the point where the function is called (which is where the polymorphism would otherwise be resolved to specific types) - therefore, GHC would be unable to decide which instance to use.
For example: let's suppose you have two instances of Table: Table Int String Bool, and Table Int Bool Float; you call your function d in a context where it is supposed to map an Int to another Int - problem is, that matches both instances! (a is Int for both).
Notice how, if you make your function equal to evalutate:
d = evalutate
then the compiler accepts it. This is because, since evalutate mentions all three type parameters a, b, and c, the context at the call site allows for non-ambiguous instance resolution - the compiler can just check what the types of a, b, and c are at the place where the function is called.
This is, of course, not usually a problem for single-parameter type classes - only one type to resolve; it is when we deal with multiple parameters that things get complicated...
One common solution is to use functional dependencies - make b and c depend on a:
class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
This tells the compiler that, for every instance of Table for a given type a, there will be one, and only one, instance (b and c will be uniquely determined by a); so it will know that there won't be any ambiguities and accept your d = decrement happily.
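For completeness, here is a minimal, self-contained sketch of the fundep version together with one instance. The DummyTable definition below is an assumption made up for this example; only its shape is taken from the instance mentioned in the error message.
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
-- Assumed carrier type for the example: a lookup function plus a counter.
data DummyTable b c = DummyTable (b -> c) Int
instance Table (DummyTable b c) b c where
  decrement (DummyTable f n) = DummyTable f (n - 1)
  evalutate (DummyTable f _) = f
-- This now compiles: b and c are uniquely determined by the table type.
d :: Table a b c => a -> a
d = decrement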

Related

Haskell function returning existential type

Is it possible to write a Haskell function that yields a parameterised type where the exact type parameter is hidden? I.e. something like f :: T -> (exists a. U a)? The obvious attempt:
{-# LANGUAGE ExistentialQuantification #-}
data D a = D a
data Wrap = forall a. Wrap (D a)
unwrap :: Wrap -> D a
unwrap (Wrap d) = d
fails to compile with:
Couldn't match type `a1' with `a'
`a1' is a rigid type variable bound by
a pattern with constructor
Wrap :: forall a. D a -> Wrap,
in an equation for `unwrap'
at test.hs:8:9
`a' is a rigid type variable bound by
the type signature for unwrap :: Wrap -> D a at test.hs:7:11
Expected type: D a
Actual type: D a1
In the expression: d
In an equation for `unwrap': unwrap (Wrap d) = d
I know this is a contrived example, but I'm curious if there is a way to convince GHC that I do not care for the exact type with which D is parameterised, without introducing another existential wrapper type for the result of unwrap.
To clarify, I do want type safety, but also would like to be able to apply a function dToString :: D a -> String that does not care about a (e.g. because it just extracts a String field from D) to the result of unwrap. I realise there are other ways of achieving it (e.g. defining wrapToString (Wrap d) = dToString d) but I'm more interested in whether there is a fundamental reason why such hiding under existential is not permitted.
Yes, you can, but not in a straightforward way.
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}
data D a = D a
data Wrap = forall a. Wrap (D a)
unwrap :: Wrap -> forall r. (forall a. D a -> r) -> r
unwrap (Wrap x) k = k x
test :: D a -> IO ()
test (D a) = putStrLn "Got a D something"
main = unwrap (Wrap (D 5)) test
You cannot return a D something_unknown from your function, but you can extract it and immediately pass it to another function that accepts D a, as shown.
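If the continuation needs to actually do something with the value, the existential typically has to package a constraint as well. A minimal sketch of that variant (the Show-carrying WrapShow type is an addition for illustration, not part of the question's code):
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}
data D a = D a
-- Like Wrap, but also captures a Show dictionary for the hidden type.
data WrapShow = forall a. Show a => WrapShow (D a)
unwrapShow :: WrapShow -> forall r. (forall a. Show a => D a -> r) -> r
unwrapShow (WrapShow x) k = k x
main :: IO ()
main = putStrLn (unwrapShow (WrapShow (D (5 :: Int))) (\(D a) -> show a))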
Yes, you can convince GHC that you do not care for the exact type with which D is parameterised. Just, it's a horrible idea.
{-# LANGUAGE GADTs #-}
import Unsafe.Coerce
data D a = D a deriving (Show)
data Wrap where -- this GADT is equivalent to your `ExistentialQuantification` version
  Wrap :: D a -> Wrap
unwrap :: Wrap -> D a
unwrap (Wrap (D a)) = D (unsafeCoerce a)
main = print (unwrap (Wrap $ D "bla") :: D Integer)
This is what happens when I execute that simple program: it starts printing garbage, and so on, until memory consumption brings down the system.
Types are important! If you circumvent the type system, you circumvent any predictability of your program (i.e. anything can happen, including thermonuclear war or the famous demons flying out of your nose).
Now, evidently you thought that types somehow work differently. In dynamic languages such as Python, and also to a degree in OO languages like Java, a type is in a sense a property that a value can have. So, (reference-) values don't just carry around the information needed to distinguish different values of a single type, but also information to distinguish different (sub-)types. That's in many senses rather inefficient – it's a major reason why Python is so slow and Java needs such a huge VM.
In Haskell, types don't exist at runtime. A function never knows the types of the values it's working with. It doesn't need any such knowledge, precisely because the compiler knows all about those types and has already hard-coded everything that depends on them! (That is, unless you circumvent it with unsafeCoerce, which as I demonstrated is as unsafe as it sounds.)
If you do want to attach the type as a “property” to a value, you need to do it explicitly, and that's what those existential wrappers are there for. However, there are usually better ways to do it in a functional language. What's really the application you wanted this for?
Perhaps it's also helpful to recall what a signature with polymorphic result means. unwrap :: Wrap -> D a doesn't mean “the result is some D a... and the caller had better not care which a was used”. That would be the case in Java, but it would be rather useless in Haskell because there's nothing you can do with a value of unknown type.
Instead it means: for whatever type a the caller requests, this function is able to supply a suitable D a value. Of course this is tough to deliver – without extra information it's just as impossible as doing anything with a value of a given unknown type. But if there are already values of type a among the arguments, or a is somehow constrained to a type class (e.g. fromInteger :: Num a => Integer -> a), then it's quite possible and very useful.
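A tiny illustration of that last point (the names are just for this example):
three :: Num a => a
three = fromInteger 3
asDouble :: Double
asDouble = three   -- the caller requests a ~ Double, and the Num instance supplies it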
To obtain a String field – independent of the a parameter – you can just operate directly on the wrapped value:
data D a = D
  { dLabel :: String
  , dValue :: a
  }
data Wrap where Wrap :: D a -> Wrap
labelFromWrap :: Wrap -> String
labelFromWrap (Wrap (D l _)) = l
To write such functions on Wrap more generically (with any “label accessor that doesn't care about a”), use Rank2-polymorphism as shown in n.m.'s answer.
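For concreteness, a minimal sketch of that Rank-2 approach (the onWrap helper name is made up for this example):
{-# LANGUAGE GADTs, RankNTypes #-}
data D a = D { dLabel :: String, dValue :: a }
data Wrap where Wrap :: D a -> Wrap
-- Apply any accessor that works for every a to the wrapped value.
onWrap :: (forall a. D a -> r) -> Wrap -> r
onWrap k (Wrap d) = k d
labelFromWrap :: Wrap -> String
labelFromWrap = onWrap dLabel
main :: IO ()
main = putStrLn (labelFromWrap (Wrap (D "answer" (42 :: Int))))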

How to define a function inside haskell newtype?

I am trying to decipher the record syntax in haskell for newtype and my understanding breaks when there is a function inside newtype. Consider this simple example
newtype C a b = C { getC :: (a -> b) -> a }
As per my reasoning C is a type which accepts a function and a parameter in its constructor.
so,
let d1 = C $ (2 *) 3
:t d1 also gives
d1 :: Num ((a -> b) -> a) => C a b
Again to check this I do :t getC d1, which shows this
getC d1 :: Num ((a -> b) -> a) => (a -> b) -> a
Why the error if I try getC d1? getC should return the function and its parameter or at least apply the parameter.
I can't have newtype C a b = C { getC :: (a->b)->b } deriving (Show), because this won't make sense!
It's always good to emphasise that Haskell has two completely separate namespaces, the type language and the value language. In your case, there's
A type constructor C :: Type -> Type -> Type, which lives in the type language. It takes two types a, b (of kind Type) and maps them to a type C a b (also of kind Type)†.
A value constructor C :: ((a->b) -> a) -> C a b, which lives in the value language. It takes a function f (of type (a->b) -> a) and maps it to a value C f (of type C a b).
Perhaps it would be less confusing if you had
newtype CT a b = CV ((a->b) -> a)
but because for a newtype there is always exactly one value constructor (and exactly one type constructor) it makes sense to name them the same.
CV is a value constructor that accepts one function, full stop. That function will have signature (a->b) -> a, i.e. its argument is also a function, but as far as CT is concerned this doesn't really matter.
Really, it's kind of wrong that data and newtype declarations use a = symbol, because it doesn't mean the things on the left and right are “the same” – can't, because they don't even belong to the same language. There's an alternative syntax which expresses the relation better:
{-# LANGUAGE GADTs #-}
import Data.Kind
data CT :: Type -> Type -> Type where
  CV :: ((a->b) -> a) -> CT a b
As for that value you tried to construct
let d1 = CV $ (\x->(2*x)) 3
here you did not pass “a function and a parameter” to CV. What you actually did‡ was, you applied the function \x->2*x to the value 3 (might as well have written 6) and passed that number to CV. But as I said, CV expects a function. What then happens is, GHC tries to interpret 6 as a function, which gives the bogus constraint Num ((a->b) -> a). What that means is: “if (a->b)->a is a number type, then...”. Of course it isn't a number type, so the rest doesn't make sense either.
†It may seem redundant to talk of “types of kind Type”. Actually, when talking about “types” we often mean “entities in the type-level language”. These have kinds (“type-level types”) of which Type (the kind of (lifted) value-level values) is the most prominent, but not the only one – you can also have type-level numbers and type-level functions – C is indeed one. Note that Type was historically written *, but this notation is deprecated because it's inconsistent (confusion with multiplication operator).
‡This is because $ has the lowest precedence, i.e. the expression CV $ (\x->(2*x)) 3 is actually parsed as CV ((\x->(2*x)) 3), or equivalently let y = 2*3 in CV y.
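To make the contrast concrete, here is what a value of this type and its use could look like (the wrapped function, which ignores its argument and returns 3, is an arbitrary choice for this example):
newtype C a b = C { getC :: (a -> b) -> a }
d1 :: C Int b
d1 = C (\_f -> 3)      -- the wrapped value is itself a function taking an (a -> b)
used :: Int
used = getC d1 (* 2)   -- apply the wrapped function to (* 2); the result is 3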
As per my reasoning C is a type which accepts a function and a parameter
How so? The constructor has only one argument.
Newtypes always have a single constructor with exactly one argument.
The type C, otoh, has two type parameters. But that has nothing to do with the number of arguments you can apply to the constructor.

Haskell TypeCast type class

I've run into a type class called TypeCast in Haskell in a few different places.
It's rather cryptic, and I can't seem to fully parse it.
class TypeCast a b | a -> b, b -> a where typeCast :: a -> b
class TypeCast' t a b | t a -> b, t b -> a where typeCast' :: t -> a -> b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
instance TypeCast' () a b => TypeCast a b where typeCast x = typeCast' () x
instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x = x
http://okmij.org/ftp/Haskell/typecast.html gives a helpful but perfunctory comment on this code. For more information, that page points me to http://homepages.cwi.nl/~ralf/HList/paper.pdf which is a broken link.
I see that TypeCast is a class that allows you to cast from one type to another, but I don't see why we need TypeCast' and TypeCast''.
It looks like all this code does is allow you to cast a type to itself. In some of the sample code I've seen, I tried replacing it with this:
class TypeCast a b | a -> b, b -> a where typeCast :: a -> b
instance TypeCast a a where typeCast a = a
and the samples still worked. The samples I've been looking at are mostly from that first link.
I was wondering if someone could explain what the six lines are for.
What is TypeCast actually for?
It isn't used for retrieving type information about existential types (that would break the type system, so it is impossible). To understand TypeCast, we first have to understand some particular details about the haskell type system. Consider the following motivating example:
data TTrue
data TFalse
class TypeEq a b c | a b -> c
instance TypeEq x x TTrue
instance TypeEq x y TFalse
The goal here is to have a boolean flag - on the type level - which tells you if two types are equal. You can use ~ for type equivalence - but that only gives you a failure on type inequivalence (i.e. Int ~ Bool doesn't compile, whereas TypeEq Int Bool r will give r ~ TFalse as the inferred type). However, this doesn't compile - the functional dependencies conflict. The reason is simple - x x is just an instantiation of x y (i.e. x ~ y => x y == x x), so according to the rules of fundeps (see the docs for full details of the rules), the two instances must have the same value for c (or the two values must be instantiations of one another - which they aren't).
The TypeEq class exists in the HList library - let's take a look at how it is implemented:
class HBool b => TypeEq x y b | x y -> b
instance TypeEq x x HTrue
instance (HBool b, TypeCast HFalse b) => TypeEq x y b
-- instance TypeEq x y HFalse -- would violate functional dependency
Naturally these instances don't conflict - HTrue is an instantiation of b. But wait! Doesn't TypeCast HFalse b imply that b must be HFalse? Yes, it does, but the compiler does not check the class instance constraint when attempting to resolve fundep conflicts. This is the key 'feature' which allows this class to exist.
As a brief note - the two instances still overlap. But with -XUndecidableInstances -XOverlappingInstances, the compiler will choose the first instance preferentially, due to the fact that the first instance is more 'specific' (in this case, that means it has at most 2 unique types - x and HTrue, while the other instance has at most 3). You can find the full set of rules that UndecidableInstances uses in the docs.
Why is TypeCast written the way it is?
If you look in the source for HList, there are multiple implementations of TypeCast. One implementation is:
instance TypeCast x x
The straightforward instance one would assume will work. Nope! From the comments in the file containing the above definition:
A generic implementation of type cast. For this implementation to
work, we need to import it at a higher level in the module hierarchy
than all clients of the class. Otherwise, type simplification will
inline TypeCast x y, which implies compile-time unification of x and y.
That is, the type simplifier (whose job it is to remove uses of type synonyms and constant class constraints) will see that x ~ y in TypeCast x x since that is the only instance that matches, but only in certain situations. Since code that behaves differently in different cases is 'Very Bad', the authors of HList have a second implementation, the one in your original post. Let's take a look:
class TypeCast a b | a -> b, b -> a
class TypeCast' t a b | t a -> b, t b -> a
class TypeCast'' t a b | t a -> b, t b -> a
instance TypeCast' () a b => TypeCast a b
instance TypeCast'' t a b => TypeCast' t a b
instance TypeCast'' () a a
In this case, TypeCast x y can never be simplified without looking at the class constraint (which the simplifier will not do!); there is no instance head which can imply x ~ y.
However, we still need to assert that x ~ y at some point in time - so we do it with more classes!
The only way we can know a ~ b in TypeCast a b is if TypeCast' () a b implies a ~ b. This is only the case if TypeCast'' () a b implies a ~ b, which it does.
I can't give you the whole story unfortunately; I don't know why
instance TypeCast' () a b => TypeCast a b
instance TypeCast' () a a
doesn't suffice (it works - I don't know why it wouldn't be used). I suspect it has something to do with error messages. I'm sure you could track down Oleg and ask him!
The HList paper was published in the proceedings of the Haskell Workshop 2004, and so is available from the ACM DL and other archives. Alas, the explanation in the published version is abbreviated for the lack of space. For full explanation, please see the expanded version of the paper published as a Technical Report, which is available at
http://okmij.org/ftp/Haskell/HList-ext.pdf (The CWI link is indeed no longer valid since Ralf left CWI a long time ago.) Please see Appendix D in that TR for the explanation of TypeCast.
In the latest GHC, instead of TypeCast x y constraint you can write x ~ y. There is no corresponding typeCast method: it is no longer necessary. When you write the x ~ y constraint, GHC synthesizes something like typeCast (called coercion) automatically and behind the scenes.
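As a tiny illustration of that replacement (the cast function here is just for the example):
{-# LANGUAGE TypeFamilies #-}  -- or GADTs; either enables equality constraints
-- Where old code needed  TypeCast a b => a -> b  plus the typeCast method,
-- the equality constraint can now be written directly:
cast :: (a ~ b) => a -> b
cast x = x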
Asking me a question directly in an e-mail message usually results in a much faster reply.

the Coverage Condition fails

I have a very simple piece of code as follow:
{-# LANGUAGE
MultiParamTypeClasses,
FunctionalDependencies,
FlexibleInstances,
FlexibleContexts
#-}
class Graph g n e | g -> n e where
  nodes :: g -> [n]
  edge :: g -> (n,n) -> Maybe e
instance Graph g Int e where
  nodes g = []
  edge g (n1,n2) = Nothing
I got an error saying that the Coverage Condition fails for one of the functional dependencies. Do I need to add -XUndecidableInstances to permit this? Or how can I fix this problem? Thanks
The problem in your case is not Int but e. The Coverage Condition is documented in GHC's manual, Sect. 7.6.3.2 (Relaxed rules for instance contexts), and says:
The Coverage Condition. For each functional dependency, tvsleft -> tvsright, of the class, every type variable in S(tvsright) must appear in S(tvsleft), where S is the substitution mapping each type variable in the class declaration to the corresponding type in the instance declaration.
What does it mean in practice? In your case, your functional dependency says g -> n e, which means that for each instance the types denoted by n and e are uniquely determined by the type denoted by g. Now let's say you're defining an instance
instance Graph SomeTypeG SomeTypeN SomeTypeE where
...
The coverage condition says that any type variable appearing in SomeTypeE or SomeTypeN must appear in SomeTypeG. What happens if it's not satisfied? Let's suppose a type variable a appears in SomeTypeE but not in SomeTypeG. Then for fixed SomeTypeG we would have an infinite number of possible instances by substituting different types for a.
In your case
instance Graph g Int e where
...
e is such a type variable, so by the Coverage Condition it must appear in g, which is not true. Because it doesn't appear there, your definition would imply that Graph g Int Int is an instance, Graph g Int (Maybe Char) is another instance, etc., contradicting the functional dependency that requires that there be precisely one.
If you had defined something like
instance Graph g Int Char where
then it would be OK, as there are no type variables in Int and Char. Another valid instance could be
instance Graph (g2 e) Int e where
where g2 is now of kind * -> *. In this case, e appears in g2 e, which satisfies the Coverage Condition, and indeed, e is always uniquely determined from g2 e.
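To make that concrete, here is a minimal sketch with a made-up graph representation (the AdjGraph type and its field are assumptions for this example):
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
class Graph g n e | g -> n e where
  nodes :: g -> [n]
  edge :: g -> (n,n) -> Maybe e
-- Edge labels have type e; e appears in the instance head via AdjGraph e,
-- so the Coverage Condition is satisfied.
newtype AdjGraph e = AdjGraph [((Int, Int), e)]
instance Graph (AdjGraph e) Int e where
  nodes (AdjGraph es) = concatMap (\((a, b), _) -> [a, b]) es
  edge (AdjGraph es) p = lookup p es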
Your functional dependency says that your choice of type g determines your node and element types n and e, respectively. Does it make sense, then, to say that all graph types g (knowing nothing about g) determine the node type to be Int?

How can I combine two type constraints with a logical or in Haskell?

In Haskell we are given the ability to combine constraints on types with a logical and.
Consider the following
type And (a :: Constraint) b = (a, b)
or more complicatedly
class (a, b) => And a b
instance (a, b) => And a b
I want to know how to logically or two constraints together in Haskell.
My closest attempt is this, but it doesn't quite work. In this attempt I reify type constraints with tags and then dereify them with implicit parameters.
data ROr a b where
  L :: a => ROr a b
  R :: b => ROr a b
type Or a b = (?choose :: ROr a b)
y :: Or (a ~ Integer) (Bool ~ Integer) => a
y = case ?choose of
  L -> 4
x :: Integer
x = let ?choose = L in y
It almost works, but the user has to apply the final part, and the compiler should do that for me. As well, this case does not let one choose a third choice when both constraints are satisfied.
How can I logically or two constraints together?
I believe that there is no way to automatically pick an ROr a b; it would violate the open world assumption if, e.g. b was satisfied, but later a was satisfied as well; any conflict resolution rule would necessarily cause the addition of an instance to change the behaviour of existing code.
That is, picking R when b is satisfied but a is not breaks the open world assumption, because it involves deciding that an instance is not satisfied;1 even if you added a "both satisfied" constructor, you would be able to use it to decide whether an instance is not present (by seeing if you get an L or an R).
Therefore, I do not believe that such an or constraint is possible; if you can observe which instance you get, then you can create a program whose behaviour changes by adding an instance, and if you can't observe which instance you get, then it's pretty useless.
1 The difference between this and normal instance resolution, which can also fail, is that normally, the compiler cannot decide that a constraint is satisfied; here, you're asking the compiler to decide that the constraint cannot be satisfied. A subtle but important difference.
I came here to answer your question on the cafe. Not sure the q here is the same, but anyway ...
a type class with three parameters.
class Foo a b c | a b -> c where
  foo :: a -> b -> c
instance Foo A R A where ...
instance Foo R A A where ...
In addition to the functional dependency I'd like to express that at least one of the parameters a and b is c,
import Data.Type.Equality
import Data.Type.Bool
class ( ((a == c) || (b == c)) ~ True)
=> Foo a b c | a b -> c where ...
You'll need a bunch of extensions switched on. In particular UndecidableSuperClasses, because the type family calls in the class constraint are opaque as far as GHC can see.
Your q here
How can I logically or two constraints together?
Is far more tricky. For the type equality approach, == uses a Closed Type Family. So you could write a Closed Type Family returning kind Constraint, but I doubt there's a general solution. For your Foo class:
type family AorBeqC a b c :: Constraint where
  AorBeqC a b a = ()
  AorBeqC a b c = (b ~ c)
class AorBeqC a b c => Foo a b c | a b -> c where ...
It's likely to have poor and non-symmetrical type improvement behaviour: if GHC can see that a and c are apart, it'll go to the second equation and use (b ~ c) to improve either one; if it can see neither that they're apart nor that they're unifiable, it'll get stuck.
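Put together, a self-contained version of that sketch might look like this; the concrete types A and R and the foo bodies are assumptions filled in for the example, and the exact set of extensions may need adjusting:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts,
             TypeFamilies, ConstraintKinds, UndecidableSuperClasses,
             UndecidableInstances #-}
import Data.Kind (Constraint)
data A = A deriving Show
data R = R deriving Show
type family AorBeqC a b c :: Constraint where
  AorBeqC a b a = ()
  AorBeqC a b c = (b ~ c)
class AorBeqC a b c => Foo a b c | a b -> c where
  foo :: a -> b -> c
instance Foo A R A where foo x _ = x
instance Foo R A A where foo _ y = y
-- foo A R evaluates to A, and foo R A evaluates to A; an instance like
-- Foo A A R would be rejected, because AorBeqC A A R reduces to (A ~ R).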
In general, as #ehird points out, you can't test whether a constraint is not satisfiable. Type equality is special.
