(I'm totally rewriting this question to give it a better focus; you can see the history of changes if you want to see the original.)
Let's say I have two modules:
One module defines the function inverseAndSqrt. What this function actually does is not important; what is important is that it returns none, one, or both of two things, in a way that lets the client distinguish which one is which;
module Module1 (inverseAndSqrt) where
type TwoOpts a = (Maybe a, Maybe a)
inverseAndSqrt :: Int -> TwoOpts Float
inverseAndSqrt x = ( if x /= 0 then Just (1.0 / fromIntegral x) else Nothing
                   , if x >= 0 then Just (sqrt $ fromIntegral x) else Nothing )
another module defines other functions depending on inverseAndSqrt and on its type
module Module2 where
import Module1
fun :: (Maybe Float, Maybe Float) -> Float
fun (Just x, Just y) = x + y
fun (Just x, Nothing) = x
fun (Nothing, Just y) = y
exportedFun :: Int -> Float
exportedFun = fun . inverseAndSqrt
What I want to understand from the perspective of design principle is: how should I interface Module1 with other modules (e.g. Module2) in a way that makes it well encapsulated, reusable, etc?
The problems I see are
I could one day decide that I don't want to use a pair to return the two results anymore; I could decide to use a 2-element list, or another type which is isomorphic (I think this is the right adjective, isn't it?) to a pair; if I do this, all client code will break
Exporting the TwoOpts type synonym doesn't solve anything, as Module1 could still change its implementation thus breaking client code.
Module1 is also forcing the type of the two optionals to be the same, but I'm not sure this is really relevant to this question...
How should I design Module1 (and thus edit Module2 as well) such that the two are not tightly coupled?
One thing I can think of is that maybe I should define a typeclass expressing what "a box with two optional things in it" is, and then Module1 and Module2 would use that as a common interface. But should that be in both modules? In either of them? Or in neither, in a third module? Or maybe such a class/concept is not needed?
I'm not a computer scientist so I'm sure that this question highlights some misunderstanding of mine due to lack of experience and theoretical background. Any help filling the gaps is welcome.
Possible modifications I'd like to support
Related to what chepner suggested in a comment to his answer, at some point I might want to extend the support from 2-tuple things to both 2- and 3-tuple things, having different accessor names for them, such as get1of2/get2of2 (let's say these are the names we use when we first design Module1) vs get1of3/get2of3/get3of3.
At some point I might also want to complement this 2-tuple-like type with something else, for instance an optional containing Just the sum¹ of the two main contents only if they are both Justs, or a Nothing if at least one of the two main contents is a Nothing. I guess in this case the internal representation of this class would be something like ((Maybe a, Maybe a), Maybe b). (¹ The sum is really a stupid example, so I've used b here instead of a to be more general than the sum would require.)
To me, Haskell design is all type-centric. The design rule for functions is just "use the most general and accurate types that do the job", and the whole problem of design in Haskell is about coming up with the best types for the job.
We would like there to be no "junk" in the types, so that they have exactly one representation for each value you want to denote. E.g. String is a bad representation for numbers, because "0", "0.0", "-0" all mean the same thing, and also because "The Prisoner" is not a number -- it is a valid representation that does not have a valid denotation. If, say for performance reasons, the same denotation can be represented multiple ways, the type's API should make that difference invisible to the user.
So in your case, (Maybe a, Maybe a) is perfect -- it means exactly what you need it to mean. Using something more complicated is unnecessary, and will just complicate matters for the user. At some point whatever you expose will have to be convertible to a Maybe a for the first thing and a Maybe a for the second thing, and there is no extra information beyond that, so the tuple is perfect. Whether you use a type synonym or not is a matter of style -- I prefer not to use synonyms at all and only give types names when I have a more formal abstraction in mind.
Connotation is important. For example, if I had a function for finding the roots of a quadratic polynomial, I probably wouldn't use TwoOpts, even though there are at most two of them. The fact that my return values are all "the same kind of thing" in an intuitive sense makes me prefer a list (or if I'm feeling particularly picky, a Set or Bag), even if the list has at most two elements. I just have it match my best understanding of the domain at the time, so I won't change it unless my understanding of the domain has changed in a significant way, in which case the opportunity to review all its uses is exactly what I want. If you are writing your functions to be as polymorphic as possible, then often you won't need to change anything but the specific moments the meaning is used, the exact moment domain knowledge is required (such as understanding the relationship between TwoOpts and Set). You don't need to "redo the plumbing" if it's made of a sufficiently flexible, polymorphic material.
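For concreteness, a roots function under that reading might look like this (a sketch; the names and the choice of Double are mine, and a is assumed non-zero):

roots :: Double -> Double -> Double -> [Double]
roots a b c
  | disc < 0  = []                 -- no real roots
  | disc == 0 = [-b / (2*a)]       -- one repeated root
  | otherwise = [(-b + sq) / (2*a), (-b - sq) / (2*a)]
  where
    disc = b*b - 4*a*c
    sq   = sqrt disc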
Supposing you didn't have a clean isomorphism to a standard type like (Maybe a, Maybe a), and you wanted to formalize TwoOpts. The way here is to build an API out of its constructors, combinators, and eliminators. For example:
data TwoOpts a -- abstract, not exposed
-- constructors
none :: TwoOpts a
justLeft :: a -> TwoOpts a
justRight :: a -> TwoOpts a
both :: a -> a -> TwoOpts a
-- combinators
-- Semigroup and Monoid at least
swap :: TwoOpts a -> TwoOpts a
-- eliminators
getLeft :: TwoOpts a -> Maybe a
getRight :: TwoOpts a -> Maybe a
In this case the eliminators give exactly your representation (Maybe a, Maybe a) as their final coalgebra:
-- same as the tuple in a newtype, just more conventional
data TwoOpts a = TwoOpts (Maybe a) (Maybe a)
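As a sketch of the "Semigroup and Monoid at least" item above, one reasonable choice (among several) is to combine the two sides with Maybe's left-biased alternation:

import Control.Applicative ((<|>))

instance Semigroup (TwoOpts a) where
  TwoOpts l1 r1 <> TwoOpts l2 r2 = TwoOpts (l1 <|> l2) (r1 <|> r2)

instance Monoid (TwoOpts a) where
  mempty = TwoOpts Nothing Nothing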
Or, if you wanted to focus on the constructors side, you could use an initial algebra:
data TwoOpts a
= None
| JustLeft a
| JustRight a
| Both a a
You are at liberty to change this representation as long as it still implements the combinatory API above. If you have reason to use different representations of the same API, make the API into a typeclass (typeclass design is a whole other story).
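A sketch of what that typeclass might look like (the name HasTwoOpts is my invention):

class HasTwoOpts t where
  none      :: t a
  justLeft  :: a -> t a
  justRight :: a -> t a
  both      :: a -> a -> t a
  getLeft   :: t a -> Maybe a
  getRight  :: t a -> Maybe a

Both representations above can be made instances, and client code written against HasTwoOpts works with either.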
In Einstein's famous words, "make it as simple as possible, but no simpler".
Don't define a simple type alias; this exposes the details of how you implement TwoOpts.
Instead, define a new type, and export functions for accessing the two components rather than the data constructor. Then you are free to change the implementation of the type all you like without changing the interface, because the user can't pattern-match on a value of type TwoOpts a.
module Module1 (TwoOpts, inverseAndSqrt, getFirstOpt, getSecondOpt) where
data TwoOpts a = TwoOpts (Maybe a) (Maybe a)
getFirstOpt, getSecondOpt :: TwoOpts a -> Maybe a
getFirstOpt (TwoOpts a _) = a
getSecondOpt (TwoOpts _ b) = b
inverseAndSqrt :: Int -> TwoOpts Float
inverseAndSqrt x = TwoOpts (safeInverse x) (safeSqrt x)
  where
    safeInverse 0 = Nothing
    safeInverse x = Just (1.0 / fromIntegral x)
    safeSqrt x
      | x >= 0    = Just $ sqrt $ fromIntegral x
      | otherwise = Nothing
and
module Module2 where
import Module1
fun :: TwoOpts Float -> Float
fun a = case (getFirstOpt a, getSecondOpt a) of
  (Just x, Just y)  -> x + y
  (Just x, Nothing) -> x
  (Nothing, Just y) -> y
exportedFun :: Int -> Float
exportedFun = fun . inverseAndSqrt
Later, when you realize that you've reimplemented the type product, you can change your definitions without affecting any user code.
newtype TwoOpts a = TwoOpts { getOpts :: (Maybe a, Maybe a) }
getFirstOpt, getSecondOpt :: TwoOpts a -> Maybe a
getFirstOpt = fst . getOpts
getSecondOpt = snd . getOpts
I am a mathematician who works a lot with category theory, and I've been using Haskell for a while to perform certain computations etc., but I am definitely not a programmer. I really love Haskell and want to become much more fluent in it, and the type system is something that I find especially great to have in place when writing programs.
However, I've recently been trying to implement category theoretic things, and am running into problems concerning the fact that you seemingly can't have class method laws in Haskell. In case my terminology here is wrong, what I mean is that I can write
class Monoid c where
  id :: c -> c
  m :: c -> c -> c
but I can't write some law along the lines of
m (m x y) z == m x (m y z)
From what I gather, this is due to the lack of dependent types in Haskell, but I'm not sure how exactly this is the case (having now read a bit about dependent types). It also seems that the convention is just to include laws like this in comments and hope that you don't accidentally cook up some instance that doesn't satisfy them.
How should I change my approach to Haskell to deal with this problem? Is there a nice mathematical/type-theoretic solution (for example, require the existence of an associator that is an isomorphism (though then the question is, how do we encode isomorphisms without a law?)); is there some 'hack' (using extensions such as DataKinds); should I be drastic and switch to using something like Idris instead; or is the best response to just change the way I think about using Haskell (i.e. accept that these laws can't be implemented in a Haskelly way)?
(bonus) How exactly does the lack of laws come from not supporting dependent types?
You want to require that:
m (m x y) z = m x (m y z) -- (1)
But to require this you need a way to check it. So you, or your compiler (or proof assistant), need to construct a proof of this. And the question is, what type is a proof of (1)?
One could imagine some Proof type but then maybe you could just construct a proof that 0 = 0 instead of a proof of (1) and both would have type Proof. So you’d need a more general type. I can’t decide how to break up the rest of the question so I’ll go for a super brief explanation of the Curry-Howard isomorphism followed by an explanation of how to prove two things are equal and then how dependent types are relevant.
The Curry-Howard isomorphism says that propositions are isomorphic to types and proofs are isomorphic to programs: a type corresponds to a proposition and a proof of that proposition corresponds to a program constructing a value inhabiting that type. Ignoring how many propositions might be expressed as types, an example would be that the type A * B (written (A, B) in Haskell) corresponds to the proposition “A and B,” while the type A + B (written Either A B in Haskell) corresponds to the proposition “A or B.” Finally the type A -> B corresponds to “A implies B,” as a proof of this is a program which takes evidence of A and gives you evidence of B. One should note that there isn’t a way to express not A but one could imagine adding a type Not A with builtins of type Either a (Not a) for the law of the excluded middle as well as Not (Not a) -> a, and a * Not a -> Void (where Void is a type which cannot be inhabited and therefore corresponds to false), but then one can’t really run these programs to get constructivist proofs.
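In Haskell notation, the correspondence just described can be written down directly (a sketch; the type synonyms are my names):

type And a b     = (a, b)       -- "A and B"
type Or a b      = Either a b   -- "A or B"
type Implies a b = a -> b       -- "A implies B"

-- a proof of "A and B implies A" is a program of the corresponding type:
proj1 :: And a b -> a
proj1 (a, _) = a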
Now we will ignore some realities of Haskell and imagine that there aren’t ways round these rules (in particular undefined :: a says everything is true, and unsafeCoerce :: a -> b says that anything implies anything else, or just other functions that don’t return where their existence does not imply the corresponding proof).
So we know how to combine propositions but what might a proposition be? Well one could be to say that two types are equal. In Haskell this corresponds to the GADT
data Eq a b where Refl :: Eq c c
where this constructor corresponds to the reflexive property of equality.
[side note: if you’re still interested so far, you may be interested to look up Voevodsky’s univalent foundations, depending on how much the idea of “Homotopy type theory” interests you]
So can we prove something now? How about the transitive property of equality:
trans :: Eq a b -> Eq b c -> Eq a c
trans x y =
  case x of
    Refl -> -- by this match being successful, the compiler now knows that a = b
      case y of
        Refl -> -- and now b = c, and so the compiler knows a = c
          Refl -- the compiler knows this is of type Eq d d; as it knows a = c, it typechecks as Eq a c
This feels like one hasn’t really proven anything (especially as this mainly relies on the compiler knowing the transitive and symmetric properties), but one gets a similar feeling when proving simple things in logic as well.
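In the same style, symmetry of equality is a one-line proof (using the Eq GADT above):

symm :: Eq a b -> Eq b a
symm Refl = Refl -- matching Refl tells the compiler a = b, so Eq b a is just Eq a a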
So now how might you prove the original proposition (1)? Well, let's imagine we want a type c to be a monoid; then we should also prove that $\forall x,y,z:c,\ m\,(m\,x\,y)\,z = m\,x\,(m\,y\,z).$ So we need a way to express m (m x y) z as a type. Strictly speaking this doesn't require dependent types (it can be done with DataKinds to promote values, and with type families instead of functions). But you do need dependent types to have types depend on values. Specifically, if you have a type Nat of natural numbers and a type family Vec :: Nat -> * (* is the kind -- read: type -- of all types) of fixed-length vectors, you could define a dependently typed function mkVec :: (n::Nat) -> Vec n. Observe how the type of the output depends on the value of the input.
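To make the Vec idea concrete, here is a minimal length-indexed vector using DataKinds and GADTs (a sketch; I also parameterise by the element type, and mkVec itself is not directly expressible without a singleton encoding):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- the length is visible in the type:
twoBools :: Vec ('S ('S 'Z)) Bool
twoBools = VCons True (VCons False VNil)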
So your law needs to have functions promoted to type level (skipping the questions about how one defines type equality and value equality), as well as dependent types (made up syntax):
class Monoid c where
  e :: c
  (*) :: c -> c -> c
  idl :: (x::c) -> Eq x (e * x)
  idr :: (x::c) -> Eq x (x * e)
  assoc :: (x::c) -> (y::c) -> (z::c) -> Eq ((x * y) * z) (x * (y * z))
Observe how types tend to become large with dependent types and proofs. In a language missing typeclasses one could put such values into a record.
A final note on the theory of dependent types and how they correspond to the Curry-Howard isomorphism.
Dependent types can be considered an answer to the question: what types correspond to the propositions $\forall x\in S\quad P(x)$ and $\exists y\in T\quad Q(y)?$
The answer is that you create new ways to make types: the dependent product and the dependent sum (coproduct). The dependent product expresses "for all values $x$ of type $S,$ there is a value of type $P(x).$" A normal product would be a dependent product with $S=2,$ a type inhabited by two values. A dependent product might be written (x:S) -> P x. A dependent sum expresses "some value $y$ of type $T,$ paired with a value of type $Q(y).$" This might be written (y:T) * Q y.
One can think of these as a generalisation of arbitrarily indexed (co)products from Set to general categories, where one might sensibly write e.g. $\prod_\Lambda X(\lambda),$ and sometimes such notation is used in type theory.
As I read through some sections in History of Haskell, I came across:
However, higher-kinded polymorphism has independent utility: it is entirely possible, and occasionally very useful, to declare data types parameterised over higher kinds, such as:
data ListFunctor f a = Nil | Cons a (f a)
Knowing "basic" ADTs I was a bit puzzled here; my "guess" was that the part in parens suggests a "parametric"/"dynamic" unary data constructor f? So any data constructor of kind * -> * that "can accept" type a? Is my thinking correct, or am I misinterpreting the syntax? I know I'm "just guessing", but I'm hopeful to gain a "lay-programmer" intuition on this capability here, some sample scenario needing (or benefiting immensely from) this ;) mostly I can imagine (just not in what exact manner) this allowing more flexibility in those "small embedded versatile recursable config language"-ADTs that Haskell makes such a pleasure to formulate and write evals for.. close?
In GHCi, :i ListFunctor on the above gives:
type role ListFunctor representational nominal
data ListFunctor (f :: * -> *) a = Nil | Cons a (f a)
So this seems to be what's "inferred" from the crisper data declaration.
Yes, f can be any unary type constructor.
For instance ListFunctor [] Int or ListFunctor Maybe Char are well-kinded.
f can also be any n-ary type constructor partially applied to (n-1) arguments.
For instance ListFunctor ((->) Bool) Int or ListFunctor (Either ()) Char are well-kinded.
The basic kinding system is quite simple. If F :: * -> * -> ... -> *, then F expects type arguments of kind *. If G :: (* -> *) -> *, then G expects anything of kind * -> *, including unary type constructors and partial applications such as the ones shown above. And so on.
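For instance, here are value-level witnesses that these instantiations make sense (a quick sketch):

xs :: ListFunctor [] Int
xs = Cons 1 [2, 3]

mc :: ListFunctor Maybe Char
mc = Cons 'a' (Just 'b')

fn :: ListFunctor ((->) Bool) Int
fn = Cons 0 (\b -> if b then 1 else 2)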
A problem which is nicely solved by higher kinds is configuration options. Assume we have a record
data Opt = Opt
  { opt1 :: Bool
  , opt2 :: String
  -- many other fields here
  }
Now, configuration settings can be found in a file and/or passed through the command line and/or in environment variables. During the parsing of all these settings sources, we need to cope with the fact that not all sources define all options. Hence, we need a more lax type to represent subsets of configuration settings:
data TempOpt = TempOpt
  { tempOpt1 :: Maybe Bool
  , tempOpt2 :: Maybe String
  -- many other fields here
  }
-- merge all options in one single configuration, or fail
finalize :: [TempOpt] -> Maybe Opt
...
This is horrible, since it duplicates all the options! We would be tempted to remove the Opt type, and only use the weaker TempOpt, to reduce clutter. However, by doing this we will need to use some partial accessor like fromJust every time we need to access the value of an option in our program, even after the initial configuration handling part.
We can instead resort to higher kinds:
data FOpt f = FOpt
  { opt1 :: f Bool
  , opt2 :: f String
  -- many other fields here
  }
type Opt = FOpt Identity
type TempOpt = FOpt Maybe
-- as before: merge all options in one single configuration, or fail
finalize :: [TempOpt] -> Maybe Opt
...
No more duplication. After we finalize the configuration settings, we get the static guarantee that settings are always present. We can now use the total accessor runIdentity to get them, instead of the dangerous fromJust.
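A minimal sketch of finalize under these definitions (the merge policy, first source to define an option wins, is my own choice):

import Control.Applicative ((<|>))
import Data.Functor.Identity (Identity (..))

finalize :: [TempOpt] -> Maybe Opt
finalize ts = do
  o1 <- firstJust opt1
  o2 <- firstJust opt2
  pure (FOpt (Identity o1) (Identity o2))
  where
    -- the first Just across all sources, Nothing if no source defines the option
    firstJust f = foldr ((<|>) . f) Nothing ts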
I'm covering polymorphism and I'm trying to see the practical uses of such a feature.
My basic understanding of Rank 2 is:
type MyType = ∀ a. a -> a
subFunction :: a -> a
subFunction el = el
mainFunction :: MyType -> Int
mainFunction func = func 3
I understand that this is allowing the user to use a polymorphic function (subFunction) inside mainFunction and strictly specify its output (Int). This seems very similar to GADTs:
data Example a where
ExampleInt :: Int -> Example Int
ExampleBool :: Bool -> Example Bool
1) Given the above, is my understanding of Rank 2 polymorphism correct?
2) What are the general situations where Rank 2 polymorphism can be used, as opposed to GADTs, for example?
If you pass a polymorphic function as an argument to a Rank2-polymorphic function, you're essentially passing not just one function but a whole family of functions – for all possible types that fulfill the constraints.
Typically, those forall quantifiers come with a class constraint. For example, I might wish to do number arithmetic with two different types simultaneously (for comparing precision or whatever).
data FloatCompare = FloatCompare
  { singlePrecision :: Float
  , doublePrecision :: Double
  }
Now I might want to modify those numbers through some maths operation. Something like
modifyFloat :: (Num -> Num) -> FloatCompare -> FloatCompare
But Num is not a type, only a type class. I could of course pass a function that would modify any particular number type, but I couldn't use that to modify both a Float and a Double value, at least not without some ugly (and possibly lossy) converting back and forth.
Solution: Rank-2 polymorphism!
modifyFloat :: (∀ n . Num n => n -> n) -> FloatCompare -> FloatCompare
modifyFloat f (FloatCompare single double)
    = FloatCompare (f single) (f double)
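Usage then instantiates the same polymorphic function once at Float and once at Double:

-- doubles both fields, each at its own precision
example :: FloatCompare
example = modifyFloat (*2) (FloatCompare 1.5 2.5)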
The best single example of how this is useful in practice is probably lenses. A lens is a “smart accessor function” for a field in some larger data structure. It allows you to access fields, update them, gather results... while at the same time composing in a very simple way. How it works: Rank2-polymorphism; every lens is polymorphic, with the different instantiations corresponding to the “getter” / “setter” aspects, respectively.
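For reference, the shape the lens library uses is the van Laarhoven encoding, where a simple lens is itself a rank-2 polymorphic function; instantiating f at Const a yields the getter and at Identity the setter:

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s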
The go-to example of an application of rank-2 types is runST as Benjamin Hodgson mentioned in the comments. This is a rather good example and there are a variety of examples using the same trick. For example, branding to maintain abstract data type invariants across multiple types, avoiding confusion of differentials in ad, a region-based version of ST.
But I'd actually like to talk about how Haskell programmers are implicitly using rank-2 types all the time. Every type class whose methods have universally quantified types desugars to a dictionary with a field with a rank-2 type. In practice, this is virtually always a higher-kinded type class* like Functor or Monad. I'll use a simplified version of Alternative as an example. The class declaration is:
class Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a
The dictionary representing this class would be:
data AlternativeDict f = AlternativeDict {
    empty :: forall a. f a,
    (<|>) :: forall a. f a -> f a -> f a }
Sometimes such an encoding is nice as it allows one to use different "instances" for the same type, perhaps only locally. For example, Maybe has two obvious instances of Alternative depending on whether Just a <|> Just b is Just a or Just b. Languages without type classes, such as Scala, do indeed use this encoding.
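For instance, the two Maybe "instances" just mentioned, written as explicit dictionary values (RankNTypes assumed for the record above):

firstWins, lastWins :: AlternativeDict Maybe
firstWins = AlternativeDict { empty = Nothing, (<|>) = \x y -> maybe y Just x }
lastWins  = AlternativeDict { empty = Nothing, (<|>) = \x y -> maybe x Just y }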
To connect to leftaroundabout's reference to lenses, you can view the hierarchy there as a hierarchy of type classes and the lens combinators as simply tools for explicitly building the relevant type class dictionaries. Of course, the reason it isn't actually a hierarchy of type classes is that we usually will have multiple "instances" for the same type. E.g. _head and _head . _tail are both "instances" of Traversal' s a.
* A higher-kinded type class doesn't necessarily lead to this, and it can happen for a type class of kind *. For example:
-- Higher-kinded but doesn't require universal quantification.
class Sum c where
  sum :: c Int -> Int

-- Not higher-kinded but does require universal quantification.
class Length l where
  length :: [a] -> l
If you are using modules in Haskell, you are already using Rank-2 types. Theoretically speaking, a module is essentially a record whose fields may themselves be polymorphic, which gives the record a rank-2 type.
For example, the Foo module below in Haskell ...
module Foo (id) where

id :: forall a. a -> a
id x = x

-- and in another module:
import qualified Foo

main = do
  putStrLn (Foo.id "hello")
  return ()
... can actually be thought of as a record as follows:
data FooType = FooType {
  id :: forall a. a -> a
}

foo :: FooType
foo = FooType {
  id = \x -> x
}
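Calling through the record then mirrors the qualified call (sketch; Prelude's id must be hidden to avoid the name clash, and RankNTypes is needed for the polymorphic field):

import Prelude hiding (id)

main :: IO ()
main = putStrLn (id foo "hello")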
P.S. (unrelated to this question): from a language design perspective, if you are going to support a module system, then you might as well support higher-rank types (i.e. allow arbitrary quantification of type variables at any level) to reduce duplication of effort (i.e. type checking a module should be almost the same as type checking a record with higher-rank types).
I understand that when having
instance (Foo a) => Bar a
instance (Xyy a) => Bar a
GHC doesn't consider the contexts, and the instances are reported as duplicate.
What I find counterintuitive is that (I guess) after selecting an instance, GHC still needs to check whether the context matches, and if not, discard the instance. So why not reverse the order: discard instances with non-matching contexts, and proceed with the remaining set?
Would this be intractable in some way? I see how it could cause more constraint resolution work upfront, but just as there is UndecidableInstances / IncoherentInstances, couldn't there be a ConsiderInstanceContexts when "I know what I am doing"?
This breaks the open-world assumption. Assume:
class B1 a
class B2 a
class T a
If we allow constraints to disambiguate instances, we may write
instance B1 a => T a
instance B2 a => T a
And we may write
instance B1 Int
Now, if I have
f :: T a => a
Then f :: Int works. But, the open world assumption says that, once something works, adding more instances cannot break it. Our new system doesn't obey:
instance B2 Int
will make f :: Int ambiguous. Which implementation of T should be used?
Another way to state this is that you've broken coherence. For typeclasses to be coherent means that there is only one way to satisfy a given constraint. In normal Haskell, a constraint c has only one implementation. Even with overlapping instances, coherence generally holds true. The idea is that instance T a and instance {-# OVERLAPPING #-} T Int do not break coherence, because GHC can't be tricked into using the former instance in a place where the latter would do. (You can trick it with orphans, but you shouldn't.) Coherence, at least to me, seems somewhat desirable. Typeclass usage is "hidden", in some sense, and it makes sense to enforce that it be unambiguous. You can also break coherence with IncoherentInstances and/or unsafeCoerce, but, y'know.
In a category theoretic way, the category Constraint is thin: there is at most one instance/arrow from one Constraint to another. We first construct two arrows a : () => B1 Int and b : () => B2 Int, and then we break thinness by adding new arrows x_Int : B1 Int => T Int, y_Int : B2 Int => T Int such that x_Int . a and y_Int . b are both arrows () => T Int that are not identical. Diamond problem, anyone?
This does not answer your question as to why this is the case. Note, however, that you can always define a newtype wrapper to disambiguate between the two instances:
newtype FooWrapper a = FooWrapper a
newtype XyyWrapper a = XyyWrapper a
instance (Foo a) => Bar (FooWrapper a)
instance (Xyy a) => Bar (XyyWrapper a)
This has the added advantage that by passing around either a FooWrapper or a XyyWrapper you explicitly control which of the two instances you'd like to use if your a happens to satisfy both.
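A self-contained sketch of the trick, with class methods invented purely for illustration:

class Foo a where fooLabel :: a -> String
class Xyy a where xyyLabel :: a -> String
class Bar a where barLabel :: a -> String

newtype FooWrapper a = FooWrapper a
newtype XyyWrapper a = XyyWrapper a

instance Foo a => Bar (FooWrapper a) where
  barLabel (FooWrapper a) = fooLabel a

instance Xyy a => Bar (XyyWrapper a) where
  barLabel (XyyWrapper a) = xyyLabel a

-- For an a with both Foo and Xyy instances, the wrapper picks the route:
-- barLabel (FooWrapper x) versus barLabel (XyyWrapper x).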
Classes are a bit weird. The original idea (which still pretty much works) is a sort of syntactic sugar around what would otherwise be data statements. For example you can imagine:
data Num a = Num {plus :: a -> a -> a, ... , fromInt :: Integer -> a}
numInteger :: Num Integer
numInteger = Num (+) ... id
then you can write functions which have e.g. type:
test :: Num x -> x -> x -> x -> x
test lib a b c = a + b * (abs (c + b))
  where (+) = plus lib
        (*) = times lib
        abs = absoluteValue lib
So the idea is "we're going to automatically derive all of this library code." The question is, how do we find the library that we want? It's easy if we have a library of type Num Int, but how do we extend it to "constrained instances" based on functions of type:
fooLib :: Foo x -> Bar x
xyyLib :: Xyy x -> Bar x
The present solution in Haskell is to do a type-pattern-match on the output-types of those functions and propagate the inputs to the resulting declaration. But when there are two outputs of the same type, we would need a combinator which merges these into:
eitherLib :: Either (Foo x) (Xyy x) -> Bar x
and basically the problem is that there is no good constraint-combinator of this kind right now. That's your objection.
Well, that's true, but there are ways to achieve something morally similar in practice. Suppose we define some functions with types:
data F
data X
foobar'lib :: Foo x -> Bar' x F
xyybar'lib :: Xyy x -> Bar' x X
bar'barlib :: Bar' x y -> Bar x
Clearly the y is a sort of "phantom type" threaded through all of this, but it remains powerful because given that we want a Bar x we will propagate the need for a Bar' x y, and given the need for the Bar' x y we will generate either a Bar' x F or a Bar' x X. So with phantom types and multi-parameter type classes, we get the result we want.
More info: https://www.haskell.org/haskellwiki/GHC/AdvancedOverlap
Adding backtracking would make instance resolution require exponential time, in the worst case.
Essentially, instances become logical statements of the form
(P(x) => R(f(x))) /\ (Q(x) => R(f(x)))
which is equivalent to
(P(x) \/ Q(x)) => R(f(x))
Computationally, the cost of this check is (in the worst case)
c_R(n) = c_P(n-1) + c_Q(n-1)
assuming P and Q have similar costs
c_R(n) = 2 * c_PQ(n-1)
which unrolls to c_R(n) = 2^n * c_PQ(0), i.e. exponential growth.
To avoid this issue, it is important to have fast ways to choose a branch, i.e. to have clauses of the form
((fastP(x) /\ P(x)) \/ (fastQ(x) /\ Q(x))) => R(f(x))
where fastP and fastQ are computable in constant time, and are incompatible so that at most one branch needs to be visited.
Haskell decided that this "fast check" is head compatibility (hence disregarding contexts). It could use other fast checks, of course -- it's a design decision.
I want to create several incompatible, but otherwise equal, datatypes. That is, I'd like to have a parameterized type Foo a, and functions such as
bar :: (Foo a) -> (Foo a) -> (Foo a)
without actually caring about what a is. To clarify further, I'd like the type system to stop me from doing
x :: Foo Int
y :: Foo Char
bar x y
while I at the same time don't really care about Int and Char (I only care that they're not the same).
In my actual code I have a type for polynomials over a given ring. I don't actually care what the indeterminates are, as long as the type system stops me from adding a polynomial in t with a polynomial in s. So far I've solved this by creating a typeclass Indeterminate, and parameterizing my polynomial type as
data (Ring a, Indeterminate b) => Polynomial a b
This approach feels perfectly natural for the Ring part because I do care about which particular ring a given polynomial is over. It feels very contrived for the Indeterminate part, as detailed below.
The above approach works fine, but feels contrived. Especially so this part:
class Indeterminate a where
  indeterminate :: a

data T = T
instance Indeterminate T where
  indeterminate = T

data S = S
instance Indeterminate S where
  indeterminate = S
(and so on for perhaps a few more indeterminates). It feels weird and wrong. Essentially I'm trying to demand that instances of Indeterminate be singletons (in this sense). The feeling of weirdness is one indicator that I might be attacking this wrongly. Another is the fact that I end up having to annotate a lot of my Polynomial a bs since the actual type b often cannot be inferred (that's not strange, but is annoying nevertheless).
Any suggestions? Should I just keep on doing it like this, or am I missing something?
PS: Don't feel offended if I don't upvote or accept answers immediately. I'll be unable to check back in for a few days.
First of all, I'm not sure this:
data (Ring a, Indeterminate b) => Polynomial a b
...is doing what you expect it to. Contexts on data definitions are not terribly useful--see the discussion here for some reasons why, most of which amount to them forcing you to add extra annotations without actually providing many additional type guarantees.
Second, do you actually care about the "indeterminate" parameter other than to ensure that the types are kept distinct? A pretty standard way of doing that sort of thing is what's called phantom types--essentially, parameters in the type constructor that aren't used in the data constructor. You'll never use or need a value of the phantom type, so functions can be as polymorphic as you want, e.g.:
data Foo a b = Foo b
foo :: Foo a b -> Foo a b
foo (Foo x) = Foo x
bar :: Foo a c -> Foo b c
bar (Foo x) = Foo x
baz :: Foo Int Int -> Foo Char Int -> Foo () Int
baz (Foo x) (Foo y) = Foo $ x + y
Obviously this does require annotations, but only in places where you're deliberately adding restrictions. Otherwise, inference will work normally for the phantom type parameter.
It seems to me that the above approach should be sufficient for what you're doing here--the business with singleton types is mostly about bridging the gap between more complicated type-level stuff and regular value-level computations by creating type proxies for values. This could be useful for, say, marking vectors with types that indicate their basis, or marking numeric values with physical units--both cases where the annotation has more meaning than just "an indeterminate called X".
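As a sketch of the physical-units idea just mentioned (all names mine):

data Meters
data Seconds

newtype Quantity unit = Quantity Double

addQ :: Quantity u -> Quantity u -> Quantity u
addQ (Quantity x) (Quantity y) = Quantity (x + y)

-- addQ (Quantity 3 :: Quantity Meters) (Quantity 4 :: Quantity Seconds)
-- is rejected: the phantom parameters don't unify, just like Foo Int vs Foo Char above.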