How can I combine two type constraints with a logical or in Haskell?

In Haskell we are given the ability to combine constraints on types with a logical and.
Consider the following
type And (a :: Constraint) b = (a, b)
or more complicatedly
class (a, b) => And a b
instance (a, b) => And a b
I want to know how to logically or two constraints together in Haskell.
My closest attempt is this, but it doesn't quite work. In this attempt I reify type constraints with tags and then dereify them with implicit parameters.
data ROr a b where
  L :: a => ROr a b
  R :: b => ROr a b
type Or a b = (?choose :: ROr a b)
y :: Or (a ~ Integer) (Bool ~ Integer) => a
y = case ?choose of
  L -> 4
x :: Integer
x = let ?choose = L in y
It almost works, but the user has to apply the final part, and the compiler should do that for me. As well, this case does not let one choose a third choice when both constraints are satisfied.
How can I logically or two constraints together?

I believe that there is no way to automatically pick an ROr a b; it would violate the open world assumption if, e.g., b was satisfied but later a was satisfied as well; any conflict resolution rule would necessarily cause the addition of an instance to change the behaviour of existing code.
That is, picking R when b is satisfied but a is not breaks the open world assumption, because it involves deciding that an instance is not satisfied [1]; even if you added a "both satisfied" constructor, you would be able to use it to decide whether an instance is not present (by seeing if you get an L or an R).
Therefore, I do not believe that such an or constraint is possible; if you can observe which instance you get, then you can create a program whose behaviour changes by adding an instance, and if you can't observe which instance you get, then it's pretty useless.
[1] The difference between this and normal instance resolution, which can also fail, is that normally, the compiler cannot decide that a constraint is satisfied; here, you're asking the compiler to decide that the constraint cannot be satisfied. A subtle but important difference.

I came here to answer your question on the cafe. Not sure the q here is the same, but anyway ...
a type class with three parameters.
class Foo a b c | a b -> c where
  foo :: a -> b -> c
instance Foo A R A where ...
instance Foo R A A where ...
In addition to the functional dependency I'd like to express that at least one of the parameters a and b is c,
import Data.Type.Equality
import Data.Type.Bool
class ( ((a == c) || (b == c)) ~ True)
    => Foo a b c | a b -> c where ...
You'll need a bunch of extensions switched on. In particular UndecidableSuperClasses, because the type family calls in the class constraint are opaque as far as GHC can see.
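For concreteness, here is a minimal sketch of a full module built around that superclass constraint; the placeholder types A and R and the exact extension list are my assumptions and may need tweaking for your GHC version:
{-# LANGUAGE DataKinds, TypeOperators, TypeFamilies, UndecidableSuperClasses,
             MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts #-}
module FooOrEq where
import Data.Type.Equality
import Data.Type.Bool
-- placeholder parameter types, standing in for whatever A and R really are
data A = A
data R = R
class (((a == c) || (b == c)) ~ 'True)
      => Foo a b c | a b -> c where
  foo :: a -> b -> c
-- both instances satisfy the superclass: c equals a in one and b in the other
instance Foo A R A where
  foo x _ = x
instance Foo R A A where
  foo _ x = x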
Your q here
How can I logically or two constraints together?
is far more tricky. For the type equality approach, == uses a Closed Type Family, so you could write a Closed Type Family returning kind Constraint, but I doubt there's a general solution. For your Foo class:
type family AorBeqC a b c :: Constraint where
  AorBeqC a b a = ()
  AorBeqC a b c = (b ~ c)
class AorBeqC a b c => Foo a b c | a b -> c where ...
It's likely to have poor and non-symmetrical type improvement behaviour: if GHC can see that a and c are apart, it'll go to the second equation and use (b ~ c) to improve either of them; if it can see neither that they're apart nor that they're unifiable, it'll get stuck.
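Again for concreteness, a self-contained sketch of the closed-family version, with the same made-up placeholder types A and R:
{-# LANGUAGE TypeFamilies, ConstraintKinds, UndecidableSuperClasses,
             MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts #-}
module FooAorBeqC where
import Data.Kind (Constraint)
data A = A
data R = R
-- trivially satisfied when a and c coincide; otherwise demands b ~ c
type family AorBeqC a b c :: Constraint where
  AorBeqC a b a = ()
  AorBeqC a b c = (b ~ c)
class AorBeqC a b c => Foo a b c | a b -> c where
  foo :: a -> b -> c
-- both instances satisfy the superclass: c is a in the first and b in the second
instance Foo A R A where
  foo x _ = x
instance Foo R A A where
  foo _ x = x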
In general, as @ehird points out, you can't test whether a constraint is not satisfiable. Type equality is special.

Related

MultiParamTypeClasses - Why is this type variable ambiguous?

Suppose I define a multi-parameter type class:
{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes, FlexibleContexts, FlexibleInstances #-}
class Table a b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
Then I define a function that uses decrement, for simplicity:
d = decrement
When I try to load this in ghci (version 8.6.3):
• Could not deduce (Table a b0 c0)
    arising from a use of ‘decrement’
  from the context: Table a b c
    bound by the type signature for:
               d :: forall a b c. Table a b c => a -> a
    at Thing.hs:13:1-28
  The type variables ‘b0’, ‘c0’ are ambiguous
  Relevant bindings include d :: a -> a (bound at Thing.hs:14:1)
  These potential instance exist:
    instance Table (DummyTable a b) a b
This is confusing to me because the type of d is exactly the type of decrement, which is denoted in the class declaration.
I thought of the following workaround:
data Table a b = Table (a -> b) ((Table a b) -> (Table a b))
But this seems notationally inconvenient, and I also just wanted to know why I was getting this error message in the first place.
The problem is that, since decrement only involves the type a, there is no way to figure out which types b and c should be, even at the point where the function is called (which is where the polymorphism would normally be resolved to specific types) - therefore, GHC would be unable to decide which instance to use.
For example: let's suppose you have two instances of Table: Table Int String Bool, and Table Int Bool Float; you call your function d in a context where it is supposed to map an Int to another Int - problem is, that matches both instances! (a is Int for both).
Notice how, if you make your function equal to evalutate:
d = evalutate
then the compiler accepts it. This is because, since evalutate depends on the three type parameters a, b, and c, the context at the call site would allow for non-ambiguous instance resolution - just check which are the types for a, b, and c at the place where it is called.
This is, of course, not usually a problem for single-parameter type classes - only one type to resolve; it is when we deal with multiple parameters that things get complicated...
One common solution is to use functional dependencies - make b and c depend on a:
class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
This tells the compiler that, for every instance of Table for a given type a, there will be one, and only one, instance (b and c will be uniquely determined by a); so it will know that there won't be any ambiguities and accept your d = decrement happily.
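As a concrete sketch, here is a complete module where d = decrement now typechecks; the DummyTable instance is my guess based on the instance shown in the error message, not the asker's actual code.
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module FundepTable where
class Table a b c | a -> b c where
  decrement :: a -> a
  evalutate :: a -> b -> c
-- a hypothetical instance: a counter paired with a lookup function
data DummyTable b c = DummyTable Int (b -> c)
instance Table (DummyTable b c) b c where
  decrement (DummyTable n f) = DummyTable (n - 1) f
  evalutate (DummyTable _ f) = f
-- b and c are now determined by a, so the ambiguity check passes
d :: Table a b c => a -> a
d = decrement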

Confusion about Haskell type inference

I have just started learning Haskell. As Haskell is statically typed and has polymorphic type inference, the type of the identity function is
id :: a -> a
suggesting id can take any type as its parameter and return itself. It works fine when I try:
a = (id 1, id True)
I just suppose that at compile time, the first id is Num a => a -> a, and the second id is Bool -> Bool. When I try the following code, it gives an error:
foo f a b = (f a, f b)
result = foo id 1 True
It shows that the type of a must be the same as the type of b, since it works fine with
result = foo id 1 2
But isn't it true that id's parameter can be polymorphic, so that a and b can be different types?
All right, this is a weird spooky corner of Haskell's type system. The problem here is that there are two types that could be inferred for your function foo.
-- rank 1
foo :: forall a b. (a -> b) -> a -> a -> (b, b)
foo f a b = (f a, f b)
-- rank 2
foo' :: (forall a. a -> a) -> a -> b -> (a, b)
foo' f a b = (f a, f b)
The second type is the one you want, but the first type is the one you're getting. The second type, as amalloy pointed out, is a rank-2 type (we're going to ignore what the two means, but read the introduction in "Practical type inference for arbitrary-rank types" if you want a good explanation of ranks – don't be put off by the academic nature of the PDF file, as the beginning is accessible and clearly written).
We'll defer the definition of higher-ranked types for now and just say that the problem is that GHC is unable to infer the rank-2 type. Quoting the paper:
Complete type inference is known to be undecidable for higher-rank (impredicative) type systems, but in practice programmers are more than willing to add type annotations to guide the type inference engine, and to document their code....
Kfoury and Wells show that typeability is decidable for rank ≤ 2, and undecidable for all ranks ≥ 3 (Kfoury & Wells, 1994). For the rank-2 fragment, the same paper gives a type inference algorithm. This inference algorithm is somewhat subtle, does not interact well with user-supplied type annotations, and has not, to our knowledge, been implemented in a production compiler.
Undecidable means there can be no algorithm that always leads to a correct yes-or-no decision. So there you have it: impossible to infer a rank-3-or-higher type, and it's too gosh-darn-hard to infer the rank-2 type.
Now, back to rank 2. The (forall a. a -> a) is what makes it rank-2. There's already an excellent Stack Overflow question about what the forall keyword means so I'll refer you to that, but basically it means you're able to call f a and f b in the expression (f a, f b) while having a and b be different types, which is what you wanted in the first place, before all this hot mess.
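If you do want the rank-2 version, you have to write the signature yourself and switch on RankNTypes; here is a minimal sketch (the type-variable names are mine, chosen to avoid reusing a for two different things):
{-# LANGUAGE RankNTypes #-}
module Rank2Foo where
-- GHC will not infer this type, but it will happily check it
foo' :: (forall a. a -> a) -> b -> c -> (b, c)
foo' f x y = (f x, f y)
result :: (Integer, Bool)
result = foo' id 1 True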
One last thing: The reason you don't normally see foralls in GHCi is that any foralls on the very outer scope are left off. So forall a b. (a -> b) -> a -> a -> (b, b) is equivalent to (a -> b) -> a -> a -> (b, b).
Overall this is a pain point of the language that's poorly explained.
(Hat tip to @amalloy in the comments.)

Haskell type signature with composite/multi-param type constructors

I've discovered these kinds of type signatures:
x :: a b -> Int
x f = 3
y :: a b c -> Int
y f = 3
z :: a b c d -> Int
z f = 3
> x [1] -- 3
> y (1, 2) -- 3
> z (1, 2, 3) -- 3
Basically:
x only accepts a value inhabiting a type constructor with 1 parameter or more.
y only accepts a value inhabiting a type constructor with 2 parameters or more.
z only accepts a value inhabiting a type constructor with 3 parameters or more.
They are valid, but I'm not sure what they mean nor what they could be used for.
They seem related to polytypic notions or polymorphism over type constructors, but they enforce an invariant based on how many parameters the type constructor accepts.
Without further constraints, such types are useless – there's nothing you could really do with them, except pass them right on. But that's actually the same situation as with a signature a -> Int: if nothing is known about a, there's nothing you can do with it either!
However, like with e.g. toInteger :: Integral a => a -> Integer, adding constraints to the arguments allows you to do stuff. For instance,
import Data.Foldable
import Prelude hiding (foldr)
x' :: (Foldable a, Integral b) => a b -> Integer
x' = foldr ((+) . toInteger) 0
Rather more often than not, when you have a type of the form a b ... n o p q, then a b ... p is at least an instance of the Functor class, often also Applicative and Monad; sometimes Foldable, Traversable, or Comonad; sometimes a b ... o will be Arrow... These constraints allow you to do quite a lot with the composite types, without knowing what particular type constructors you're dealing with.
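As a small illustration of that last point (the names bump and label are mine, not from any library):
module CompositeConstraints where
import Data.Bifunctor (Bifunctor, first)
-- f has kind * -> *: Functor lets us map over the last type parameter
bump :: (Functor f, Num b) => f b -> f b
bump = fmap (+ 1)
-- p has kind * -> * -> *: Bifunctor also reaches the next-to-last parameter
label :: Bifunctor p => p Int b -> p String b
label = first show
-- e.g. bump [1, 2, 3] == [2, 3, 4] and label (7 :: Int, 'x') == ("7", 'x')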
After studying @leftaroundabout's answer and experimenting in GHCi, I've come to an understanding of composite types. Their unification with applied types is based on both the application order and the kind signatures of the type variables. The application order is quite important, as a b c ~ (((a) b) c) while a (b c) is (a ((b) c)). This makes a b c match composite types where a is matched with type constructors of kind * -> * -> *, a b with * -> *, and a b c with *.
I explained it fully with diagrams and GHCI code in this gist (https://gist.github.com/CMCDragonkai/2a1d3ecb67dcdabfc7e0) (it's too long for stack overflow)
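A tiny sketch of that unification in action (my own example, not taken from the gist):
-- (Int, Bool) is (,) Int Bool, so it matches a b c with
--   a ~ (,)  :: * -> * -> *
--   b ~ Int  :: *
--   c ~ Bool :: *
y :: a b c -> Int
y _ = 3
example :: Int
example = y (1 :: Int, True)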

Haskell TypeCast type class

I've run into a type class called TypeCast in Haskell in a few different places.
It's rather cryptic, and I can't seem to fully parse it.
class TypeCast a b | a -> b, b -> a where typeCast :: a -> b
class TypeCast' t a b | t a -> b, t b -> a where typeCast' :: t -> a -> b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
instance TypeCast' () a b => TypeCast a b where typeCast x = typeCast' () x
instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x = x
http://okmij.org/ftp/Haskell/typecast.html gives a helpful but perfunctory comment on this code. For more information, that page points me to http://homepages.cwi.nl/~ralf/HList/paper.pdf which is a broken link.
I see that TypeCast is a class that allows you to cast from one type to another, but I don't see why we need TypeCast' and TypeCast''.
It looks like all this code does is allow you to cast a type to itself. In some of the sample code I've seen, I tried replacing it with this:
class TypeCast a b | a -> b, b -> a where typeCast :: a -> b
instance TypeCast a a where typeCast a = a
and the samples still worked. The samples I've been looking at are mostly from that first link.
I was wondering if someone could explain what the six lines are for.
What is TypeCast actually for?
It isn't used for retrieving type information about existential types (that would break the type system, so it is impossible). To understand TypeCast, we first have to understand some particular details of the Haskell type system. Consider the following motivating example:
data TTrue
data TFalse
class TypeEq a b c | a b -> c
instance TypeEq x x TTrue
instance TypeEq x y TFalse
The goal here is to have a boolean flag - on the type level - which tells you whether two types are equal. You can use ~ for type equivalence - but that only gives you a failure on type inequivalence (i.e. Int ~ Bool doesn't compile, whereas TypeEq Int Bool r will give r ~ TFalse as the inferred type). However, this doesn't compile - the functional dependencies conflict. The reason is simple - x x is just an instantiation of x y (i.e. x ~ y => x y == x x), so according to the rules of fundeps (see the docs for full details of the rules), the two instances must have the same value for c (or the two values must be instantiations of one another - which they aren't).
The TypeEq class exists in the HList library - let's take a look at how it is implemented:
class HBool b => TypeEq x y b | x y -> b
instance TypeEq x x HTrue
instance (HBool b, TypeCast HFalse b) => TypeEq x y b
-- instance TypeEq x y HFalse -- would violate functional dependency
Naturally these instances don't conflict - HTrue is an instantiation of b. But wait! Doesn't TypeCast HFalse b imply that b must be HFalse? Yes, it does, but the compiler does not check the class instance constraint when attempting to resolve fundep conflicts. This is the key 'feature' which allows this class to exist.
As a brief note - the two instances still overlap. But with -XUndecidableInstances -XOverlappingInstances, the compiler will choose the first instance preferentially, due to the fact that the first instance is more 'specific' (in this case, that means it has at most 2 unique types - x and HTrue, while the other instance has at most 3). You can find the full set of rules that OverlappingInstances uses in the docs.
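Here is a self-contained sketch of that trick as it might look with a recent GHC, using a per-instance {-# OVERLAPPING #-} pragma in place of the old -XOverlappingInstances flag; HTrue, HFalse and the TypeCast classes are local stand-ins for the HList definitions, and overlap behaviour can be somewhat version-sensitive:
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, FlexibleContexts, UndecidableInstances #-}
module TypeEqSketch where
data HTrue  = HTrue  deriving Show
data HFalse = HFalse deriving Show
class TypeCast   a b   | a -> b, b -> a     where typeCast   :: a -> b
class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
instance TypeCast'  () a b => TypeCast  a b   where typeCast  = typeCast' ()
instance TypeCast'' t a b  => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x = x
class TypeEq x y b | x y -> b where typeEq :: x -> y -> b
instance {-# OVERLAPPING #-} TypeEq x x HTrue where typeEq _ _ = HTrue
instance TypeCast HFalse b => TypeEq x y b where typeEq _ _ = typeCast HFalse
-- ghci> typeEq (1 :: Int) (2 :: Int)   -- HTrue
-- ghci> typeEq (1 :: Int) True         -- HFalse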
Why is TypeCast written the way it is?
If you look in the source for HList, there are multiple implementations of TypeCast. One implementation is:
instance TypeCast x x
The straightforward instance one would assume will work. Nope! From the comments in the file containing the above definition:
A generic implementation of type cast. For this implementation to
work, we need to import it at a higher level in the module hierarchy
than all clients of the class. Otherwise, type simplification will
inline TypeCast x y, which implies compile-time unification of x and y.
That is, the type simplifier (whose job it is to remove uses of type synonyms and constant class constraints) will see a constraint TypeCast x y, notice that TypeCast x x is the only instance that matches, and conclude x ~ y - but only in certain situations. Since code that behaves differently in different cases is 'Very Bad', the authors of HList have a second implementation, the one in your original post. Let's take a look:
class TypeCast a b | a -> b, b -> a
class TypeCast' t a b | t a -> b, t b -> a
class TypeCast'' t a b | t a -> b, t b -> a
instance TypeCast' () a b => TypeCast a b
instance TypeCast'' t a b => TypeCast' t a b
instance TypeCast'' () a a
In this case, TypeCast x y can never be simplified without looking at the class constraint (which the simplifier will not do!); there is no instance head which can imply x ~ y.
However, we still need to assert that x ~ y at some point in time - so we do it with more classes!
The only way we can know a ~ b in TypeCast a b is if TypeCast' () a b implies a ~ b. This is only the case if TypeCast'' () a b implies a ~ b, which it does.
I can't give you the whole story unfortunately; I don't know why
instance TypeCast' () a b => TypeCast a b
instance TypeCast' () a a
doesn't suffice (it works - I don't know why it wouldn't be used). I suspect it has something to do with error messages. I'm sure you could track down Oleg and ask him!
The HList paper was published in the proceedings of the Haskell Workshop 2004, and so is available from the ACM DL and other archives. Alas, the explanation in the published version is abbreviated for lack of space. For the full explanation, please see the expanded version of the paper published as a Technical Report, which is available at
http://okmij.org/ftp/Haskell/HList-ext.pdf (The CWI link is indeed no longer valid since Ralf left CWI a long time ago.) Please see Appendix D in that TR for the explanation of TypeCast.
In the latest GHC, instead of TypeCast x y constraint you can write x ~ y. There is no corresponding typeCast method: it is no longer necessary. When you write the x ~ y constraint, GHC synthesizes something like typeCast (called coercion) automatically and behind the scenes.
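For illustration, a minimal sketch of what that looks like today; the typeCast function below is my reconstruction, not something defined in any library:
{-# LANGUAGE TypeFamilies #-}   -- GADTs would also enable the (~) syntax
module ModernCast where
-- the old TypeCast a b constraint becomes an equality constraint,
-- and the "cast" itself is just the identity
typeCast :: a ~ b => a -> b
typeCast x = x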
Asking me a question directly in an e-mail message usually results in a much faster reply.

Type class definition with functions depending on an additional type

Still new to Haskell, I have hit a wall with the following:
I am trying to define some type classes to generalize a bunch of functions that use gaussian elimination to solve linear systems of equations.
Given a linear system
M x = k
the type a of the elements m(i,j) ∈ M can be different from the type b of x and k. To be able to solve the system, a should be an instance of Num, and b should have an addition operator with b and multiplication/division operators with a, like in the following:
class MixedRing b where
  (.+.) :: b -> b -> b
  (.*.) :: (Num a) => b -> a -> b
  (./.) :: (Num a) => b -> a -> b
Now, even in the most trivial implementation of these operators, I'll get "Could not deduce a ~ Int. a is a rigid type variable" errors (Let's forget about ./., which requires Fractional):
data Wrap = W { get :: Int }
instance MixedRing Wrap where
  (.+.) w1 w2 = W $ (get w1) + (get w2)
  (.*.) w s = W $ ((get w) * s)
I have read several tutorials on type classes but I can find no pointer to what actually goes wrong.
Let us have a look at the type of the implementation that you would have to provide for (.*.) to make Wrap an instance of MixedRing. Substituting Wrap for b in the type of the method yields
(.*.) :: Num a => Wrap -> a -> Wrap
As Wrap is isomorphic to Int and to not have to think about wrapping and unwrapping with Wrap and get, let us reduce our goal to finding an implementation of
(.*.) :: Num a => Int -> a -> Int
(You see that this doesn't make the challenge any easier or harder, don't you?)
Now, observe that such an implementation will need to be able to operate on all types a that happen to be in the type class Num. (This is what a type variable in such a type denotes: universal quantification.) Note: this is not the same as (actually, it's the opposite of) saying that your implementation can itself choose which a to operate on; yet that is what you seem to suggest in your question: that your implementation should be allowed to pick Int as a choice for a.
Now, as you want to implement this particular (.*.) in terms of the (*) for values of type Int, we need something of the form
n .*. s = n * f s
with
f :: Num a => a -> Int
I cannot think of a function that converts from an arbitrary Num-type a to Int in a meaningful way. I'd therefore say that there is no meaningful way to make Int (and, hence, Wrap) an instance of MixedRing; that is, not such that the instance behaves as you would probably expect it to.
How about something like:
class (Num a) => MixedRing a b where
  (.+.) :: b -> b -> b
  (.*.) :: b -> a -> b
  (./.) :: b -> a -> b
You'll need the MultiParamTypeClasses extension.
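For instance, here is a sketch of the Wrap example from the question under this two-parameter class; using div for ./. is an assumption of mine, since Int has no Fractional division:
{-# LANGUAGE MultiParamTypeClasses #-}
module MixedRingMPTC where
class (Num a) => MixedRing a b where
  (.+.) :: b -> b -> b
  (.*.) :: b -> a -> b
  (./.) :: b -> a -> b
data Wrap = W { get :: Int }
-- the instance may now fix a to Int, because a is a class parameter
instance MixedRing Int Wrap where
  (.+.) w1 w2 = W (get w1 + get w2)
  (.*.) w s   = W (get w * s)
  (./.) w s   = W (get w `div` s)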
By the way, it seems to me that the mathematical structure you're trying to model is really a module, not a ring. With the type variables given above, one says that b is an a-module.
Your implementation is not polymorphic enough.
The rule is: if you write a in the class definition, you can't use a concrete type in the instance, because the instance must conform to the class, and the class promised to accept any a that is Num.
To put it differently: it is exactly the class variable that must be instantiated with a concrete type in an instance definition.
Have you tried:
data Wrap a = W { get :: a }
Note that once Wrap a is an instance, you can still use it with functions that accept only Wrap Int.
