It could refer to either `Data.Monoid.<>' - haskell

I have following declaration:
data Two a b = Two a b deriving (Eq, Show)
instance (Semigroup a, Semigroup b) => Semigroup (Two a b) where
  (Two a b) <> (Two c d) = Two (a <> c) (b <> d)
And tried it in the prelude:
*Main First Lib MonoidLaws Semi> (Two a b) <> (Two c d) = Two (a <> c) (b <> d)
<interactive>:10:3: error:
    * Occurs check: cannot construct the infinite type: t1 ~ Two t1 t1
      Expected type: t1 -> t -> b
        Actual type: Two t1 t1 -> Two t t -> Two b b
    * Relevant bindings include
        (<>) :: t1 -> t -> b (bound at <interactive>:10:3)
How can I use the mappend-style (<>) function from Semigroup for the Two datatype at the GHCi prompt?

Try it with something that's an instance of Semigroup, like List.
For example:
> (Two "1" "2") <> (Two "3" "4")
Two "13" "24"
In your attempt, a and b, c and d are not defined, so Haskell sees them as variables. Because you're using = between the two sides, GHCi assumes you are writing a new definition of <> by pattern matching, and so it tries to relate the types of the two sides to each other. Defining values in terms of themselves is perfectly valid Haskell, but here it would imply an infinite type (hence the occurs-check error), which is surely not what you want.
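For instance (a hypothetical GHCi session, assuming your Two instance is in scope), bind the variables first and then evaluate the left-hand side as an expression, without the =:
> let a = "1"; b = "2"; c = "3"; d = "4"
> Two a b <> Two c d
Two "13" "24"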
It might be worth starting with some simpler material. A good basic book which I helped author is http://happylearnhaskelltutorial.com, but it doesn't deal with instantiating your own typeclasses. Having said that, it'll give you a reasonably good understanding of pattern matching, variables, types and values, which you'll need before you move on to understanding typeclasses well enough to build your own instances.

Related

Hidden forall quantified types in ReifiedTraversal

This question is really more general: while I was asking it I found out how to fix it in this particular case (even though I don't like the fix), but I'll phrase it in my particular context.
Context:
I'm using the lens library and I found it particularly useful to provide functionality for "adding" traversals (conceptually, a traversal that traverses all the elements in both original traversals). I did not find a default implementation so I did it using Monoid. In order to be able to implement an instance, I had to use the ReifiedTraversal wrapper, which I assume is in the library precisely for this purpose:
-- Adding traversals
add_traversals :: Semigroup t => Traversal s t a b -> Traversal s t a b -> Traversal s t a b
add_traversals t1 t2 f s = liftA2 (<>) (t1 f s) (t2 f s)
instance Semigroup t => Semigroup (ReifiedTraversal s t a b) where
  a1 <> a2 = Traversal (add_traversals (runTraversal a1) (runTraversal a2))
instance Semigroup s => Monoid (ReifiedTraversal' s a) where
  mempty = Traversal (\_ -> pure . id)
The immediate application I want to extract from this is being able to provide a traversal for a specified set of indices in a list. Therefore, the underlying semigroup is [] and so is the underlying Traversable. First, I implemented a lens for an individual index in a list:
lens_idx :: Int -> Lens' [a] a
lens_idx _ f [] = error "No such index in the list"
lens_idx 0 f (x:xs) = fmap (\rx -> rx:xs) (f x)
lens_idx n f (x:xs) = fmap (\rxs -> x:rxs) (lens_idx (n-1) f xs)
All that remains to be done is to combine these two things, ideally to implement a function traversal_idxs :: [Int] -> Traversal' [a] a
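Conceptually, I am after something like this sketch (hypothetical code, and, as my own answer below explains, this Monoid turns out not to do what I want anyway):
traversal_idxs :: [Int] -> Traversal' [a] a
traversal_idxs idxs = runTraversal (mconcat (map (\i -> Traversal (lens_idx i)) idxs))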
Problem:
I get type-checking errors when I try to use this. I know it has to do with the fact that Traversal is a type that includes a constrained forall quantifier in its definition. In order to be able to use the Monoid instance, I need to first reify the lenses provided by lens_idx (which are, of course, also traversals). I tried to do this with:
r_lens_idx :: Int -> ReifiedTraversal' [a] a
r_lens_idx = Traversal . lens_idx
But this fails with two errors (two versions of the same error really):
Couldn't match type ‘f’ with ‘f0’...
Ambiguous type variable ‘f0’ arising from a use of ‘lens_idx’
prevents the constraint ‘(Functor f0)’ from being solved...
I understand this has to do with the hidden forall f. Functor f => in the Traversal definition. While writing this, I realized that the following does work:
r_lens_idx :: Int -> ReifiedTraversal' [a] a
r_lens_idx idx = Traversal (lens_idx idx)
So, by giving it the parameter, it can make the f explicit to itself and work with it. However, this feels extremely ad hoc, especially because I was originally trying to build this r_lens_idx inline in a where clause in the definition of the traversal_idxs function (in fact, inside a function that defines that function inline, because I'm not really going to use it that often).
So, sure, I guess I can always use lambda abstraction, but... is this really the right way to deal with this? It feels like a hack, or rather, that the original error is an oversight by the type-checker.
The "adding" of traversals that you want was added in the most recent lens release, you can find it under the name adjoin. Note that it is unsound to use if your traversals overlap at all.
I am answering my own question, if only to point out that what I was trying to do with traversals was not actually possible in that shape, and how I overcame it. There is still the underlying problem of the hidden forall-quantified variables, and of how lambda abstraction can make code that does not type check suddenly type check (or rather, why it did not type check to start with).
It turns out my implementation of Monoid for Traversal was deeply flawed; I realized this when I started debugging it. For instance, I was trying to combine a list of indices, together with a function returning a lens for each index into a list, into a traversal that targets exactly those indices. That is possible, but it relies on the fact that lists form a Monad, rather than using just the Applicative structure.
The add_traversals function I had originally written used only the Applicative structure; but instead of targeting those indices in the list, it would duplicate the list for each index and concatenate the copies, each version of the list having had its lens applied.
When trying to fix it, I realized I needed to use bind to implement what I really wanted, and then I stumbled upon this: https://www.reddit.com/r/haskell/comments/4tfao3/monadic_traversals/
So the answer was clear: I can do what I want, but it's not a Monoid over Traversal, but instead a Monoid over MTraversal. It still serves my purposes perfectly.
This is the resulting code for that:
-- Monadic traversals: traversals that only work with monads, but in exchange allow operations that rely on the monadic structure, like this sum of traversals.
type MTraversal s t a b = forall m. Monad m => (a -> m b) -> s -> m t
type MTraversal' s a = MTraversal s s a a
newtype ReifiedMTraversal s t a b = MTraversal {runMTraversal :: MTraversal s t a b}
type ReifiedMTraversal' s a = ReifiedMTraversal s s a a
-- Adding mtraversals
add_mtraversals :: Semigroup t => MTraversal r t a b -> MTraversal s r a b -> MTraversal s t a b
add_mtraversals t1 t2 f s = (t2 f s) >>= (t1 f)
instance Semigroup s => Semigroup (ReifiedMTraversal' s a) where
  a1 <> a2 = MTraversal (add_mtraversals (runMTraversal a1) (runMTraversal a2))
instance Semigroup s => Monoid (ReifiedMTraversal' s a) where
  mempty = MTraversal (\_ -> return . id)
Note that MTraversal is still a LensLike and an ASetter, so you can use many operators from the lens package, like .~.
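For instance, here is a sketch of the combinator I was after (mtraversal_idxs is a hypothetical name), built on the Monoid instance above; note the eta-expansion around lens_idx:
mtraversal_idxs :: [Int] -> MTraversal' [a] a
mtraversal_idxs idxs = runMTraversal (mconcat (map (\i -> MTraversal (lens_idx i)) idxs))
-- > [1, 2, 3, 4, 5] & mtraversal_idxs [0, 2] .~ 0
-- [0, 2, 0, 4, 5]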
As I mentioned, though, I still have to use lambda abstraction when using this for my purposes, due to the forall quantifier being in an uncomfortable place, and I'd love it if someone could clarify what the heck is up with the type checker in that regard.

Why is there no Alternative instance for Either, but a Semigroup that behaves similarly to Alternative?

I am a Haskell newbie and I wonder why there is no Alternative instance for Either, but there is a Semigroup instance which behaves as I would expect Alternative to:
instance Semigroup (Either a b) where
  Left _ <> b = b
  a <> _ = a
This instance discards or corrects "errors", and when both operands are tagged with Right, it takes the first. Isn't this exactly the "choice" that Alternative offers?
I would expect the semigroup instance to look roughly like:
instance (Semigroup b) => Semigroup (Either a b) where
  Left e <> _ = Left e
  _ <> Left e = Left e
  Right x <> Right y = Right (x <> y)
That means it propagates errors and appends regular results.
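With that proposed instance in effect (it is not the one in base; base's actual instance is the first one above), a GHCi session would look like:
> Right "a" <> Right "b" :: Either () String
Right "ab"
> Right "a" <> Left () :: Either () String
Left ()
> Left () <> Right "b" :: Either () String
Left ()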
I guess I have either a wrong notion of Either or of the involved type classes.
What would you expect an Alternative instance to give you? I think a good way to get a feel for how Alternative and Semigroup differ is to look at another type that has instances of both, for example Maybe String:
λ > Just "a" <> Just "b"
Just "ab"
λ > Just "a" <> Nothing
Just "a"
λ > Nothing <> Just "b"
Just "b"
λ > Nothing <> Nothing
Nothing
λ > Just "a" <|> Just "b"
Just "a"
λ > Just "a" <|> Nothing
Just "a"
λ > Nothing <|> Just "b"
Just "b"
λ > Nothing <|> Nothing
Nothing
Alright, so the main difference shows up for Just "a" and Just "b". This makes sense: the Semigroup combines them, while the Alternative takes the left-biased option.
Now, why couldn't you have an Alternative instance for Either? Look at the methods that make up the Alternative type class:
λ > :i Alternative
class Applicative f => Alternative (f :: * -> *) where
  empty :: f a
  (<|>) :: f a -> f a -> f a
  some :: f a -> f [a]
  many :: f a -> f [a]
  {-# MINIMAL empty, (<|>) #-}
It defines a notion of empty, the identity of the (<|>) operator. Identity here means that the alternative between the identity and something else is always that something else.
Now, how would you construct an identity for Either e a? If you look at the constraint on the Alternative class, you can see that it requires f to have an Applicative instance. That's fine: Either has an Applicative instance, declared for Either e. As you can see, Either is an applicative functor only in its second type variable (a in the case of Either e a). So an identity for Either e would need e to have an identity of its own. While it is possible to pick an e that has such an identity (say, a Monoid instance), you can't write an Alternative instance for Either that relies on it, because no such constraint appears in the class definition (it would have to be something along the lines of (Monoid e, Applicative (f e)) => Alternative (f e)).
TL;DR: sorry if I lost you with my rambling; the short of it is that f in the case of Either is of the wrong kind: Alternative requires f :: * -> *, while Either has kind Either :: * -> * -> *.
So Maybe can have an Alternative instance because it has kind Maybe :: * -> * and a notion of identity (Nothing), which empty requires. Have a look at all the instances of Alternative and pay attention to the kind of each instance's data type.
You can find the kind of a data type in ghci with :k:
λ > :k Maybe
Maybe :: * -> *
λ > :k Either
Either :: * -> * -> *
Per the ticket Dietrich Epp posted above, the issue with Alternative is empty. If you have:
instance Alternative (Either a) where ...
You'd need to be able to pull some value Either a b "out of thin air" that was your identity object. One possible instance might be:
instance (Monoid a) => Alternative (Either a) where
  empty = Left mempty
  ...
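Spelled out in full, as a sketch only (this instance is not in base, and defining it yourself would make it an orphan):
instance (Monoid a) => Alternative (Either a) where
  empty = Left mempty
  Left _ <|> r = r
  l <|> _ = l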
You also ask why the Semigroup instance is defined the way it is, and frankly I don't understand it either. It would seem that the instance you propose would also permit a compatible, lawful Monoid instance:
instance Monoid b => Monoid (Either a b) where
  mempty = Right mempty
And this would be consistent with the Maybe instance (the algebraic relationship between Maybe and Either being obvious).
So the situation isn't good. Part of the issue is that Alternative is sort of a second-class class, if you will: it is a monoidal, higher-kinded thingy, but its relation to Monoid and Semigroup, which obviously and explicitly (in the docs) form a hierarchy, is not defined.
I'm sure there's been a large amount of discussion on the libraries mailing list, and if there are some obvious "correct" solutions it's likely that moving to them could cause (in the worst case silent) breakage.

Haskell type signature with composite/multi-param type constructors

I've discovered these kinds of type signatures:
x :: a b -> Int
x f = 3
y :: a b c -> Int
y f = 3
z :: a b c d -> Int
z f = 3
> x [1] -- 3
> y (1, 2) -- 3
> z (1, 2, 3) -- 3
Basically:
x only accepts a value inhabiting a type constructor with 1 parameter or more.
y only accepts a value inhabiting a type constructor with 2 parameters or more.
z only accepts a value inhabiting a type constructor with 3 parameters or more.
They are valid, but I'm not sure what they mean nor what they could be used for.
They seem related to polytypic notions or polymorphism over type constructors, but they enforce an invariant based on how many parameters the type constructor accepts.
Without further constraints, such types are useless: there's nothing you can really do with them except pass them right on. But that's actually the same situation as with a signature a -> Int: if nothing is known about a, there's nothing you can do with it either!
However, like with e.g. toInteger :: Integral a => a -> Integer, adding constraints to the arguments allows you to do stuff. For instance,
import Data.Foldable
import Prelude hiding (foldr)
x' :: (Foldable a, Integral b) => a b -> Integer
x' = foldr ((+) . toInteger) 0
Rather more often than not, when you have a type of the form a b ... n o p q, then a b ... p is at least an instance of the Functor class, often also Applicative and Monad; sometimes Foldable, Traversable, or Comonad; sometimes a b ... o will be Arrow... These constraints allow you to do quite a lot with the composite types, without knowing what particular type constructors you're dealing with.
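As a small made-up illustration of that point (y' is hypothetical; the Functor (a b) constraint needs FlexibleContexts), a single Functor constraint already lets you transform the last parameter:
y' :: Functor (a b) => (c -> Int) -> a b c -> a b Int
y' = fmap
> y' length (Right "abc" :: Either () String) -- Right 3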
After studying @leftaroundabout's answer and experimenting in GHCi, I've come to an understanding of composite types. Their unification with applied types depends on both the order of application and each type variable's kind. The application order is quite important, as a b c parses as ((a b) c), while a (b c) is a applied to (b c). This makes a b c match applied types where a is matched with type constructors of kind * -> * -> *, a b has kind * -> *, and a b c has kind *.
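For example, checking how z (1, 2, 3) unifies in GHCi: the triple constructor has kind
λ > :k (,,)
(,,) :: * -> * -> * -> *
so in a b c d, a is matched with (,,) and b, c, d with the three element types.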
I explained it fully with diagrams and GHCI code in this gist (https://gist.github.com/CMCDragonkai/2a1d3ecb67dcdabfc7e0) (it's too long for stack overflow)

Algebraically interpreting polymorphism

So I understand the basic algebraic interpretation of types:
Either a b ~ a + b
(a, b) ~ a * b
a -> b ~ b^a
() ~ 1
Void ~ 0 -- from Data.Void
... and that these relations are true for concrete types, like Bool, as opposed to polymorphic types like a. I also know how to translate type signatures with polymorphic types into their concrete type representations by just translating the Church encoding according to the following isomorphism:
(forall r . (a -> r) -> r) ~ a
So if I have:
id :: forall a . a -> a
I know that it does not mean id ~ a^a, but it actually means:
id :: forall a . (() -> a) -> a
id ~ ()
~ 1
Similarly:
pair :: forall r . (a -> b -> r) -> r
pair ~ ((a, b) -> r) -> r
~ (a, b)
~ a * b
Which brings me to my question. What is the "algebraic" interpretation of this rule:
(forall r . (a -> r) -> r) ~ a
For every concrete type isomorphism I can point to an equivalent algebraic rule, such as:
(a, (b, c)) ~ ((a, b), c)
a * (b * c) = (a * b) * c
a -> (b -> c) ~ (a, b) -> c
(c^b)^a = c^(b * a)
But I don't understand the algebraic equality that is analogous to:
(forall r . (a -> r) -> r) ~ a
This is the famous Yoneda lemma for the identity functor.
Check this post for a readable introduction, and any category theory textbook for more.
Briefly, given f :: forall r. (a -> r) -> r you can apply f id to get an a, and conversely, given x :: a you can take ($x) to get forall r. (a -> r) -> r.
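Transcribed into Haskell (toA and fromA are hypothetical names; the first signature needs RankNTypes):
toA :: (forall r. (a -> r) -> r) -> a
toA f = f id
fromA :: a -> (a -> r) -> r
fromA x k = k x -- i.e. ($ x)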
These operations are mutually inverse. Proof:
Obviously ($x) id == x. I will show that
($(f id)) == f,
since functions are equal when they are equal on all arguments, let's take x :: a -> r and show that
($(f id)) x == f x i.e.
x (f id) == f x.
Since f is polymorphic, it works as a natural transformation; this is the naturality diagram for f:
              f_A
  Hom(A, A) -------> A
      |              |
    (x.)             x
      v              v
  Hom(A, R) -------> R
              f_R
So x . f == f . (x.).
Plugging identity, (x . f) id == f x. QED
(Rewritten for clarity)
There seem to be two parts to your question. One is implied and is asking what the algebraic interpretation of forall is, and the other is asking about the cont/Yoneda transformation, which sdcvvc's answer already covered pretty well.
I'll try to address the algebraic interpretation of forall for you. You mention that A -> B is B^A but I'd like to take that a step further and expand it out to B * B * B * ... * B (|A| times). Although we do have exponentiation as a notation for repeated multiplication like that, there's a more flexible notation, ∏ (uppercase Pi) representing arbitrary indexed products. There are two components to a Pi: the range of values we want to multiply over, and the expression that we're multiplying out. For example, at the value level, you might express the factorial function as fact i = ∏ [1..i] (λx -> x).
Going back to the world of types, we can view the exponentiation operator in the A -> B ~ B^A correspondence as a Pi: B^A ~ ∏ A (λ_ -> B). This says that we're defining an A-ary product of Bs, such that the Bs cannot depend on the particular A we've chosen. Sure, it's equivalent to plain exponentiation, but it lets us move up to cases in which there is a dependence.
In the most general case, we get dependent types, like what you see in Agda or Coq: in Agda syntax, replicate : Bool -> ((n : Nat) -> Vec Bool n) is one possible application of a Pi type, which could be expressed more explicitly as replicate : Bool -> ∏ Nat (Vec Bool), or further as replicate : ∏ Bool (λ_ -> ∏ Nat (Vec Bool)).
Note that as you might expect from the underlying algebra, you can fuse both of the ∏s in the definition of replicate above into a single ∏ ranging over the cartesian product of the domains: ∏ Bool (\_ -> ∏ Nat (Vec Bool)) is equivalent to ∏ (Bool, Nat) (λ(_, n) -> Vec Bool n) just like it would be at the "value level". This is simply uncurrying from the perspective of type theory.
I do realize your question was about polymorphism, so I'll stop going on about dependent types, but they are relevant: forall in Haskell is roughly equivalent to a ∏ with a domain over the type (kind) of types, *. Indeed, the function-like behavior of polymorphism can be observed directly in GHC core, which types them as capital lambdas (Λ). As such, a polymorphic type like forall a. a -> a is actually just ∏ * (Λ a -> (a -> a)) (using the Λ notation now that we distinguish between types and values), which can be expanded out to the infinite product (Bool -> Bool, Int -> Int, () -> (), (Int -> Bool) -> (Int -> Bool), ...) for every possible type. Instantiation of the type variable is simply projecting out the suitable element from the *-ary product (or applying the type function).
Now, for the big piece I missed in my original version of this answer: parametricity. Parametricity can be described in several different ways, but none of the ones I know of (viewing types as relations, or (di)naturality in category theory) really has a very algebraic interpretation. For our purposes, though, it boils down to something fairly simple: you can't pattern-match on *. I know that GHC lets you do that at the type level with type families, but you can only cover a finite chunk of * when doing that, so there are necessarily always points at which your type family is undefined.
What this means, from the point of view of polymorphism, is that any type function F we write in ∏ * F must either be constant (i.e., completely ignore the type it was polymorphic over) or pass the type through unchanged. Thus, ∏ * (Λ _ -> B) is valid because it ignores its argument, and corresponds to forall a. B. The other case is something like ∏ * (Λ x -> Maybe x), which corresponds to forall a. Maybe a, which doesn't ignore the type argument, but only "passes it through". As such, a ∏ A that has an irrelevant domain A (such as when A = *) can be seen as more of an A-ary indexed intersection (picking the common elements across all instantiations of the index), rather than a product.
Crucially, at the value level, the rules of parametricity prevent any funny behavior that might suggest the types are larger than they really are. Because we don't have typecase, we can't construct a value of type forall a. B that does something different based on what a was instantiated to. Thus, although the type is technically a function * -> B, it is always a constant function, and is thus equivalent to a single value of B. Using the ∏ interpretation, it is indeed equivalent to an infinite *-ary product of Bs, but those B values must always be identical, so the infinite product is effectively as big as a single B.
Similarly, although ∏ * (Λ x -> (x -> x)) (a.k.a., forall a. a -> a) is technically equivalent to an infinite product of functions, none of those functions can inspect the type, so all are constrained to only return their input value and not do any funny business like (+1) : Int -> Int when instantiated to Int. Because there is only one (assuming a total language) function that can't inspect the type of its argument but must return a value of that same type, the infinite product is thus just as large as a single value.
Now, about your direct question on (forall r . (a -> r) -> r) ~ a. First, let's express your ~ operator more formally. It's really isomorphism, so we need two functions going back and forth, and an argument that they're inverses.
data Iso a b = Iso
  { to :: a -> b
  , from :: b -> a
  -- proof1 :: forall x. to (from x) == x
  -- proof2 :: forall x. from (to x) == x
  }
and now we express your original question in more formal terms. Your question amounts to constructing a term of the following (impredicative, so GHC has trouble with it, but we'll survive) type:
forall a. Iso (forall r. (a -> r) -> r) a
Which, using my earlier terminology, amounts to ∏ * (Λ a -> Iso (∏ * (Λ r -> ((a -> r) -> r))) a). Once again we have an infinite product that can't inspect its type argument. By handwaving, we can argue that the only possible values considering the parametricity rules (the other two proofs are respected automatically) for to and from are ($ id) and flip id.
If this feels unsatisfying, it's probably because the algebraic interpretation of forall didn't really add anything to the proof. It's really just plain old type theory, but I hope I was able to provide something that feels a little less categorical than the Yoneda form of it. It's worth noting that we don't actually need to use parametricity to write proof1 and proof2 above, though. Parametricity only enters the picture when we want to state that ($ id) and flip id are our only options for to and from (which we can't prove in Agda or Coq, for that reason).
To (attempt to) answer the actual question (which is less interesting than the answers to the broader issues raised): the question is ill-formed because of a "type error":
Either ~ (+)
(,) ~ (*)
(->) ~ flip (^)
() ~ 1
Void ~ 0
These all map types to integers, and type constructors to functions on naturals. In a sense, you have a functor from the category of types to the category of naturals. In the other direction, you "forget" stuff, since the types preserve algebraic structure while the naturals throw it away. I.e. given Either () () you can get a unique natural, but given that natural, you can get many types.
But this is different:
(forall r . (a -> r) -> r) ~ a
It maps a type to another type! It is not part of the above functor. It's just an isomorphism within the category of types. So let's give that a different symbol, <=>
Now we have
(forall r . (a -> r) -> r) <=> a
Now you note that we can not only send types to nats and arrows to arrows, but also some isomorphisms to other isomorphisms:
(a, (b, c)) <=> ((a, b), c) ~ a * (b * c) = (a * b) * c
But something subtle is going on here. In a sense, the latter isomorphism on pairs is true because the algebraic identity is true. This is to say that the "isomorphism" in the latter simply means that the two types are equivalent under the image of our functor to the nats.
The former isomorphism we need to prove directly, which is where we start to get to the underlying question: given our functor to the nats, what does forall r. map to? The answer is that forall r. is neither a type nor a meaningful arrow between types.
By introducing forall, we have moved away from first order types. There's no reason to expect that forall should fit in our above Functor, and indeed, it doesn't.
So we can explore, as others have above, why the isomorphism holds (which is itself very interesting), but in doing so we've abandoned the algebraic core of the question. A question which can be answered, I think, is: given the category of higher-order types, with constructors as arrows between them, is there a meaningful Functor from it?
Edit:
So now I have another approach which shows why adding polymorphism makes things go nuts. We start by asking a simpler question: does a given polymorphic type have zero inhabitants, or more than zero? This is the type inhabitation problem, which winds up being, via Curry-Howard, a problem in modified realizability, since it's the same thing as asking whether a formula in some logic is realizable in an appropriate computational model. Now, as that page explains, this is decidable in the simply typed lambda calculus, but PSPACE-complete. Once we move to anything more complicated, for example by adding polymorphism and going to System F, it becomes undecidable!
So, if we can't decide if an arbitrary type is inhabited at all, then we clearly can't decide how many inhabitants it has!
It's an interesting question. I don't have a full answer, but this was too long for a comment.
The type signature (forall r. (a -> r) -> r) can be expressed as me saying
For any type r that you care to name, if you give me a function that takes a and produces an r, then I will give you back an r.
Now, this has to work for any type r, but it can be a specific type a. So the way for me to pull off this neat trick is to have an a sitting around somewhere, feed it to the function (which produces an r for me), and then hand that r back to you.
But if I have an a sitting around, I could give it to you:
If you give me a 1, I'll give you an a.
which corresponds to the type signature 1 -> a or simply a. By this informal argument we have
(forall r. (a -> r) -> r) ~ a
The next step would be to generate the corresponding algebraic expression, but I'm not clear on how the algebraic quantities interact with the universal quantification. We may need to wait for an expert!
A few links to the nLab:
Universal quantifier, corresponds to dependent product.
Existential quantifier, corresponds to dependent sum (dependent coproduct).
Thus, in settings of category theory:
Type               | Modeled¹ as               | In category
-------------------+---------------------------+-------------
Unit               | Terminal object           | CCC
Bottom             | Initial object            |
Record             | Product                   |
Union              | Sum (coproduct)           |
Function           | Exponential               |
-------------------+---------------------------+-------------
Dependent product² | Right adjoint to pullback | LCCC
Dependent sum      | Left adjoint to pullback  |
¹) in the appropriate category: CCC for the total, non-polymorphic subset of Haskell (link), CPO for the non-total traits of Haskell (link), LCCC for dependently typed languages.
²) forall quantification is a special case of dependent product:
∀(x :: *). y[x] ~ ∏(x : Set)y[x]
where Set is the universe of all small types.

How can I combine two type constraints with a logical or in Haskell?

In Haskell we are given the ability to combine constraints on types with a logical and.
Consider the following
type And (a :: Constraint) b = (a, b)
or more complicatedly
class (a, b) => And a b
instance (a, b) => And a b
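For example, a minimal usage sketch of the synonym form (showIfEq is a made-up function; ConstraintKinds is required, and Constraint lives in Data.Kind on modern GHC):
{-# LANGUAGE ConstraintKinds, KindSignatures #-}
import Data.Kind (Constraint)
type And (a :: Constraint) (b :: Constraint) = (a, b)
showIfEq :: And (Show a) (Eq a) => a -> a -> String
showIfEq x y = if x == y then "equal: " ++ show x else "different"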
I want to know how to logically or two constraints together in Haskell.
My closest attempt is this, but it doesn't quite work. In this attempt I reify the type constraints as tags and then dereify them with implicit parameters.
data ROr a b where
  L :: a => ROr a b
  R :: b => ROr a b
type Or a b = (?choose :: ROr a b)
y :: Or (a ~ Integer) (Bool ~ Integer) => a
y = case ?choose of
  L -> 4
x :: Integer
x = let ?choose = L in y
It almost works, but the user has to supply the final choice, which the compiler should do for me. Also, this approach does not allow a third option for when both constraints are satisfied.
How can I logically or two constraints together?
I believe that there is no way to automatically pick an ROr a b; it would violate the open world assumption if, e.g. b was satisfied, but later a was satisfied as well; any conflict resolution rule would necessarily cause the addition of an instance to change the behaviour of existing code.
That is, picking R when b is satisfied but a is not breaks the open world assumption, because it involves deciding that an instance is not satisfied;¹ even if you added a "both satisfied" constructor, you would be able to use it to decide whether an instance is not present (by seeing if you get an L or an R).
Therefore, I do not believe that such an or constraint is possible; if you can observe which instance you get, then you can create a program whose behaviour changes by adding an instance, and if you can't observe which instance you get, then it's pretty useless.
¹ The difference between this and normal instance resolution, which can also fail, is that normally, the compiler can only decide that a constraint is satisfied; here, you're asking the compiler to decide that the constraint cannot be satisfied. A subtle but important difference.
I came here to answer your question on the cafe. Not sure the q here is the same, but anyway ...
a type class with three parameters.
class Foo a b c | a b -> c where
  foo :: a -> b -> c
instance Foo A R A where ...
instance Foo R A A where ...
In addition to the functional dependency I'd like to express that at least one of the parameters a and b is c,
import Data.Type.Equality
import Data.Type.Bool
class ( ((a == c) || (b == c)) ~ True)
    => Foo a b c | a b -> c where ...
You'll need a bunch of extensions switched on. In particular UndecidableSuperClasses, because the type family calls in the class constraint are opaque as far as GHC can see.
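Something like the following set (an assumption on my part; the exact list depends on your GHC version):
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, TypeFamilies,
             TypeOperators, DataKinds, UndecidableSuperClasses #-}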
Your q here
How can I logically or two constraints together?
Is far more tricky. For the type equality approach, == uses a Closed Type Family. So you could write a Closed Type Family returning kind Constraint, but I doubt there's a general solution. For your Foo class:
type family AorBeqC a b c :: Constraint where
  AorBeqC a b a = ()
  AorBeqC a b c = (b ~ c)
class AorBeqC a b c => Foo a b c | a b -> c where ...
It's likely to have poor and non-symmetrical type-improvement behaviour: if GHC can see that a and c are apart, it'll go to the second equation and use (b ~ c) to improve either side; if it can see neither that they're apart nor that they're unifiable, it'll get stuck.
In general, as #ehird points out, you can't test whether a constraint is not satisfiable. Type equality is special.
