Imagine a language which doesn't allow multiple value constructors for a data type. Instead of writing
data Color = White | Black | Blue
we would have
data White = White
data Black = Black
data Blue = Blue
type Color = White :|: Black :|: Blue
where :|: (here it's not | to avoid confusion with sum types) is a built-in type union operator. Pattern matching would work in the same way
show :: Color -> String
show White = "white"
show Black = "black"
show Blue = "blue"
As you can see, in contrast to coproducts it results in a flat structure, so you don't have to deal with injections. And, unlike sum types, it allows you to freely combine types, resulting in greater flexibility and granularity:
type ColorsStartingWithB = Black :|: Blue
I believe it wouldn't be a problem to construct recursive data types as well
data Nil = Nil
data Cons a = Cons a (List a)
type List a = Cons a :|: Nil
I know union types are present in TypeScript and probably other languages, but why did the Haskell committee choose ADTs over them?
Haskell's sum type is very similar to your :|:.
The difference between the two is that the Haskell sum type | is a tagged union, while your "sum type" :|: is untagged.
Tagged means every instance is unique - you can distinguish Int | Int from Int (actually, this holds for any type a):
data EitherIntInt = Left Int | Right Int
In this case, EitherIntInt carries more information than Int because there can be a Left and a Right Int.
In your :|:, you cannot distinguish those two:
type EitherIntInt = Int :|: Int
How do you know if it was a left or right Int?
See the comments for an extended discussion of the section below.
Tagged unions have another advantage: The compiler can verify whether you as the programmer handled all cases, which is implementation-dependent for general untagged unions. Did you handle all cases in Int :|: Int? Either this is isomorphic to Int by definition or the compiler has to decide which Int (left or right) to choose, which is impossible if they are indistinguishable.
Consider another example:
type (Integral a, Num b) => IntegralOrNum a b = a :|: b -- untagged
type (Integral a, Num b) => IntegralOrNum a b = Either a b -- tagged
What is 5 :: IntegralOrNum Int Double in the untagged union? It is both an instance of Integral and Num, so we can't decide for sure and have to rely on implementation details. On the other hand, the tagged union knows exactly what 5 should be because it is branded with either Left or Right.
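To make the tagged side concrete, here is a minimal sketch (the constructor names AsIntegral and AsNum are hypothetical) of how the tag resolves the ambiguity:
data IntegralOrNum a b = AsIntegral a | AsNum b

five :: IntegralOrNum Int Double
five = AsIntegral 5  -- the tag records that we meant the Integral branch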
As for naming: The disjoint union in Haskell is a union type. ADTs are only a means of implementing these.
I will try to expand the categorical argument mentioned by @BenjaminHodgson.
Haskell can be seen as the category Hask, in which objects are types and morphisms are functions between types (disregarding bottom).
We can define a product in Hask as a tuple - categorically speaking, it meets the definition of the product:
A product of a and b is the type c equipped with projections p and q such that p :: c -> a and q :: c -> b and for any other candidate c' equipped with p' and q' there exists a morphism m :: c' -> c such that we can write p' as p . m and q' as q . m.
Read up on this in Bartosz' Category Theory for Programmers for further information.
Now for every category there exists the opposite category, which has the same objects and morphisms but with all the arrows reversed. The coproduct is thus:
The coproduct c of a and b is the type c equipped with injections i :: a -> c and j :: b -> c such that for all other candidates c' with i' and j' there exists a morphism m :: c -> c' such that i' = m . i and j' = m . j.
Let's see how the tagged and untagged union perform given this definition:
The untagged union of a and b is the type a :|: b such that:
i :: a -> a :|: b is defined as i a = a and
j :: b -> a :|: b is defined as j b = b
However, we know that a :|: a is isomorphic to a. Based on that observation we can define a second candidate for the coproduct, a :|: a :|: b, which is equipped with the exact same injections. Therefore, there is no single best candidate, since the morphism m between a :|: a :|: b and a :|: b is id. id is a bijection, which implies that m is invertible and we can "convert" the types either way. (The argument can also be drawn as a diagram: the coproduct diagram with i and j in place of p and q.)
Restricting ourselves to Either, we can verify that it satisfies the coproduct definition with:
i = Left and
j = Right
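For Either, the mediating morphism required by that definition is just the Prelude's either function; a quick sketch:
-- Given any other candidate c with i' :: a -> c and j' :: b -> c,
-- m = either i' j' satisfies i' = m . Left and j' = m . Right.
mediate :: (a -> c) -> (b -> c) -> Either a b -> c
mediate i' j' = either i' j'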
This shows that the categorical dual of the product type is the disjoint union, not the set-based union.
The set union can still express the disjoint union, because we can define it as follows:
data Left a = Left a
data Right b = Right b
type DisjUnion a b = Left a :|: Right b
Because we have shown above that the set union is not a valid candidate for the coproduct of two types, we would lose many "free" properties (which follow from parametricity, as leftaroundabout mentioned) by not choosing the disjoint union in the category Hask (because there would be no coproduct).
This is an idea I've thought a lot about myself: a language with “first-class type algebra”. Pretty sure we could do about everything this way that we do in Haskell. Certainly, if these disjunctions were tagged unions like Haskell's alternatives, then you could directly rewrite any ADT to use them. In fact GHC can do this for you: if you derive a Generic instance, a variant type will be represented by a :+: construct, which is in essence just Either.
I'm not so sure if untagged unions would also do. As long as you require the types participating in a sum to be discernibly different, the explicit tagging should in principle not be necessary. The language would then need a convenient way to match on types at runtime. Sounds a lot like what dynamic languages do – obviously comes with quite some overhead though.
The biggest problem would be that if the types on both sides of :|: must be unequal then you lose parametricity, which is one of Haskell's nicest traits.
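As a rough sketch of what would be lost, here is the usual trick with Typeable (used here only as a stand-in for matching on types at runtime): once a "polymorphic" function can inspect the type of its argument, the free theorem that every total a -> a is the identity no longer holds.
import Data.Typeable (Typeable, cast)

-- Behaves like id at every type except Int, where it adds one.
notParametric :: Typeable a => a -> a
notParametric x =
  case cast x :: Maybe Int of
    Just n  -> case cast (n + 1) of
                 Just y  -> y        -- here a ~ Int, so the cast back succeeds
                 Nothing -> x
    Nothing -> x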
Given that you mention TypeScript, it is instructive to have a look at what its docs have to say about its union types. The example there starts from a function...
function padLeft(value: string, padding: any) { //etc.
... that has a flaw:
The problem with padLeft is that its padding parameter is typed as any. That means that we can call it with an argument that’s neither a number nor a string
One plausible solution is then suggested, and rejected:
In traditional object-oriented code, we might abstract over the two types by creating a hierarchy of types. While this is much more explicit, it’s also a little bit overkill.
Rather, the handbook suggests...
Instead of any, we can use a union type for the padding parameter:
function padLeft(value: string, padding: string | number) { // etc.
Crucially, the concept of union type is then described in this way:
A union type describes a value that can be one of several types.
A string | number value in TypeScript can be either of string type or of number type, as string and number are subtypes of string | number (cf. Alexis King's comment to the question). An Either String Int value in Haskell, however, is neither of String type nor of Int type -- its only, monomorphic, type is Either String Int. Further implications of that difference show up in the remainder of the discussion:
If we have a value that has a union type, we can only access members that are common to all types in the union.
In a roughly analogous Haskell scenario, if we have, say, an Either Double Int, we cannot apply (2*) directly on it, even though both Double and Int have instances of Num. Rather, something like bimap is necessary.
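For instance, a minimal sketch of that Haskell restriction:
import Data.Bifunctor (bimap)

-- We cannot apply (2*) to an Either Double Int directly; we have to say
-- what to do with each tagged alternative, even though both are Nums.
doubleBoth :: Either Double Int -> Either Double Int
doubleBoth = bimap (2*) (2*)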
What happens when we need to know specifically whether we have a Fish? [...] we’ll need to use a type assertion:
let pet = getSmallPet();
if ((<Fish>pet).swim) {
    (<Fish>pet).swim();
}
else {
    (<Bird>pet).fly();
}
This sort of downcasting/runtime type checking is at odds with how the Haskell type system ordinarily works, even though it can be implemented using the very same type system (also cf. leftaroundabout's answer). In contrast, there is nothing to figure out at runtime about the type of an Either Fish Bird: the case analysis happens at value level, and there is no need to deal with anything failing and producing Nothing (or worse, null) due to runtime type mismatches.
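For comparison, a rough Haskell counterpart of the pet example (Fish and Bird are placeholder types made up for this illustration):
data Fish = Fish
data Bird = Bird

move :: Either Fish Bird -> String
move (Left Fish)  = "swim"
move (Right Bird) = "fly"
-- The case analysis is purely on the value-level tags Left and Right;
-- no runtime type test is involved and no branch can fail.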
Related
Compiling without Continuations describes a way to extend ANF System F with join points. GHC itself has join points in Core (an intermediate representation) rather than exposing join points directly in the surface language (Haskell). Out of curiosity, I started trying to write a language that simply extends System F with join points. That is, the join points are user facing. However, there's something about the typing rules in the paper that I don't understand. Here are the parts that I do understand:
There are two environments, one for ordinary values/functions and one that only has join points.
The rationale for Δ being ε in several of the rules. In the expression let x:σ = u in ..., u cannot reference any join points (VBIND) because join points cannot return to arbitrary locations.
The strange typing rule for JBIND. The paper does a good job explaining this.
Here's what I don't get. The paper introduces a notation that I will call the "overhead arrow", but the paper itself does not explicitly give it a name or explain it. Visually, it looks like an arrow pointing to the right, and it goes above an expression. Roughly, this seems to indicate a "tail context" (the paper does use this term). In the paper, these overhead arrows can be applied to terms, types, data constructors, and even environments. They can be nested as well. Here's the main difficulty I'm having. There are several rules with premises that include type environments under an overhead arrow. JUMP, CASE, RVBIND, and RJBIND all include premises with such a type environment (Figure 2 in the paper). However, none of the typing rules have a conclusion where the type environment is under an overhead arrow. So, I cannot see how JUMP, CASE, etc. can ever be used, since their premises cannot be derived by any of the other rules.
That's the question, but if anyone has any supplementary material that provides more context on the overhead arrow convention, or if anyone is aware of an implementation of the System-F-with-join-points type system (other than in GHC's IR), that would be helpful too.
In this paper, x⃗ means “A sequence of x, separated by appropriate delimiters”.
A few examples:
If x is a variable, λx⃗. e is an abbreviation for λx1. λx2. … λxn. e. In other words, many nested 1-argument lambdas, or a many-argument lambda.
If σ and τ are types, σ⃗ → τ is an abbreviation for σ1 → σ2 → … → σn → τ. In other words, a function type with many parameter types.
If a is a type variable and σ is a type, ∀a⃗. σ is an abbreviation for ∀a1. ∀a2. … ∀an. σ. In other words, many nested polymorphic functions, or a polymorphic function with many type parameters.
In Figure 1 of the paper, the syntax of a jump expression is defined as:
e, u, v ⩴ … | jump j ϕ⃗ e⃗ τ
If this declaration were translated into a Haskell data type, it might look like this:
data Term
  -- | A jump expression has a label that it jumps to, a list of type argument
  -- applications, a list of term argument applications, and the return type
  -- of the overall `jump`-expression.
  = Jump LabelVar [Type] [Term] Type
  | ... -- Other syntactic forms.
That is, a data constructor that takes a label variable j, a sequence of type arguments ϕ⃗, a sequence of term arguments e⃗, and a return type τ.
“Zipping” things together:
Sometimes, multiple uses of the overhead arrow place an implicit constraint that their sequences have the same length. One place that this occurs is with substitutions.
{ϕ⃗/a⃗} means “replace a1 with ϕ1, replace a2 with ϕ2, …, replace an with ϕn”, implicitly asserting that both a⃗ and ϕ⃗ have the same length, n.
Worked example: the JUMP rule:
The JUMP rule is interesting because it provides several uses of sequencing, and even a sequence of premises. Here’s the rule again:
(j : ∀a⃗. σ⃗ → ∀r. r) ∈ Δ
(Γ; ε ⊢ u : σ{ϕ⃗/a⃗})⃗
Γ; Δ ⊢ jump j ϕ⃗ u⃗ τ : τ
The first premise should be fairly straightforward, now: lookup j in the label context Δ, and check that the type of j starts with a bunch of ∀s, followed by a bunch of function types, ending with a ∀r. r.
The second “premise” is actually a sequence of premises. What is it looping over? So far, the sequences we have in scope are ϕ⃗, σ⃗, a⃗, and u⃗.
ϕ⃗ and a⃗ are used in a nested sequence, so probably not those two.
On the other hand, u⃗ and σ⃗ seem quite plausible if you consider what they mean.
σ⃗ is the list of argument types expected by the label j, and u⃗ is the list of argument terms provided to the label j, and it makes sense that you might want to iterate over argument types and argument terms together.
So this “premise” actually means something like this:
for each pair of σ and u:
  Γ; ε ⊢ u : σ{ϕ⃗/a⃗}
Pseudo-Haskell implementation
Finally, here’s a somewhat-complete code sample illustrating what this typing rule might look like in an actual implementation. x⃗ is implemented as a list of x values, and some monad M is used to signal failure when a premise is not satisfied.
data LabelVar

data Type
  = ...

data Term
  = Jump LabelVar [Type] [Term] Type
  | ...
typecheck :: TermContext -> LabelContext -> Term -> M Type
typecheck gamma delta (Jump j phis us tau) = do
  -- Look up `j` in the label context. If it's not there, throw an error.
  typeOfJ <- lookupLabel j delta
  -- Check that the type of `j` has the right shape: a bunch of `forall`s,
  -- followed by a bunch of function types, ending with `forall r.r`. If it
  -- has the correct shape, split it into a list of `a`s, a list of `\sigma`s
  -- and the return type, `forall r.r`.
  (as, sigmas, ret) <- splitLabelType typeOfJ
  -- Build the substitution { \sequence{\phi / a} }. exactZip is a helper
  -- that "zips" two sequences together: if they have the same length, it
  -- produces a list of pairs of corresponding elements; if not, it raises
  -- an error.
  subst <- exactZip as phis
  usSigmas <- exactZip us sigmas
  mapM_
    (\(u, sigma) -> do
        -- Type-check the argument `u` in a context without any tail calls,
        -- and assert that its type has the correct form.
        sigma' <- typecheck gamma emptyLabelContext u
        assert (applySubst subst sigma == sigma'))
    usSigmas
  -- After all the premises have been satisfied, the type of the `jump`
  -- expression is just its return type.
  return tau
-- Other syntactic forms
typecheck gamma delta u = ...
-- Auxiliary definitions
type M = ...
instance Monad M
lookupLabel :: LabelVar -> LabelContext -> M Type
splitLabelType :: Type -> M ([TypeVar], [Type], Type)
exactZip :: [a] -> [b] -> M [(a, b)]
applySubst :: [(TypeVar, Type)] -> Type -> Type
assert :: Bool -> M ()  -- assumed helper: fails in M when the condition is False
As far as I know SPJ’s style for notation, and this does align with what I see in the paper, it simply means “0 or more”. E.g. you can replace \overrightarrow{a} with a_1, …, a_n, n >= 0.
It may be “1 or more” in some cases, but it shouldn’t be hard to figure out which of the two it is.
There is a Tuple as a Product of any number of types and there is an Either as a Sum of two types. What is the name for a Sum of any number of types, something like this
data Thing a b c d ... = Thing1 a | Thing2 b | Thing3 c | Thing4 d | ...
Is there any standard implementation?
Before I make the suggestion against using such types, let me explain some background.
Either is a sum type, and a pair or 2-tuple is a product type. Sums and products can exist over arbitrarily many underlying types (sets). However, in Haskell, only tuples come in a variety of sizes out of the box. Either, on the other hand, can be (arbitrarily) nested to achieve that: Either Foo (Either Bar Baz).
Of course it's easy to instead define e.g. the types Either3 and Either4 etc, in the spirit of 3-tuples, 4-tuples and so on.
data Either3 a b c = Left a | Middle b | Right c
data Either4 a b c d = LeftMost a | Left b | Right c | RightMost d
...if you really want. Or you can find a library that does this, but I doubt you could call it "standard" by any standards...
However, if you do define your own generic sum and product types, they will be completely isomorphic to any type that is structurally equivalent, regardless of where it is defined. This means that you can, with relative ease, nicely adapt your code to interface with any other code that uses an alternative definition.
Furthermore, it is even very likely to be beneficial because that way you can give more meaningful, descriptive names to your sum and product types, instead of going with the generic tuple and Either. In fact, some people advise using custom types because it essentially adds static type safety. This also applies to non-sum/product types, e.g.:
employment :: Bool -- so which one is unemployed and which one is employed?
data Empl = Employed | Unemployed
employment' :: Empl -- no ambiguity
or
person :: (Name, Age) -- yeah but when you see ("Erik", 29), is it just some random pair of name and age, or does it represent a person?
data Person = Person { name :: Name, age :: Age }
person' :: Person -- no ambiguity
— above, Person really encodes a product type, but with more meaning attached to it. You can also do newtype Person = Person (Name, Age), and it's actually quite equivalent anyway. So I always just prefer a nice and intention-revealing custom type. The same goes about Either and custom sum types.
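As a minimal sketch of that equivalence (Name and Age are assumed to be simple type synonyms here):
type Name = String
type Age  = Int

data Person = Person { name :: Name, age :: Age }

toPair :: Person -> (Name, Age)
toPair (Person n a) = (n, a)

fromPair :: (Name, Age) -> Person
fromPair (n, a) = Person n a
-- toPair and fromPair are mutually inverse, so Person and (Name, Age)
-- carry exactly the same information; Person just names it better.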
So basically, Haskell gives you all the tools necessary to quickly build your own custom types with very clean and readable syntax, so it's best if we use them rather than resorting to primitive types like tuples and Either. However, it's nice to know about this isomorphism, for example in the context of generic programming. If you want to know more about that, you can google up "scrap your boilerplate" and "template your boilerplate" and just "(datatype) generic programming".
P.S. The reason they are called sum and product types respectively is that they correspond to the disjoint union (sum) and the Cartesian product of sets. Therefore, the number of values (or unique instances if you will) in the set that is described by the product type (a, b) is the product of the number of values in a and the number of values in b. For example (Bool, Bool) has exactly 2*2 values: (True, True), (False, False), (True, False), (False, True).
However Either Bool Bool has 2+2 values, Left True, Left False, Right True, Right False. So it happens to be the same number but that's obviously not the case in general.
But of course this can also be said about our custom Person product type, so again, there is little reason to use Either and tuples.
There are some predefined versions in HaXml package with OneOfN, TwoOfN, .. constructors.
In a generic context, this is usually done inductively, using Either or
data (:+:) f g a = L1 (f a) | R1 (g a)
The latter is defined in GHC.Generics to match the funny way it handles things.
In fact, the generic approach is to break every algebraic datatype down into (:+:) and
data (:*:) f g a = f a :*: g a
along with some extra stuff. That is, it turns everything into binary sums and binary products.
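For example (a sketch; the real representation also wraps every piece in M1 metadata nodes, so the shape in the comment is simplified):
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics

data Color = White | Black | Blue
  deriving (Generic)

-- Ignoring the M1 metadata wrappers, Rep Color is essentially a nested
-- binary sum of three unit-like alternatives, roughly U1 :+: (U1 :+: U1).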
In a more concrete context, you're almost always better off using a custom algebraic datatype for things bigger than pairs or with more options than Either, as others have discussed. Slightly larger tuples (triples and maybe 4-tuples) can be useful for local one-off constructs, but it's hard to see how you'd use larger general sum types as one-offs.
Such a type is usually called a sum, variant, union, or tagged union type. Because the capability is a built-in feature of data types in Haskell, there's no name for it widely used in Haskell code. The Report only calls them "algebraic datatypes" (usually abbreviated to ADT), so that's the name you'll see most often in comments, but this name includes types with only one data constructor, which are only sum types in the trivial sense.
Can someone please explain the following output?
Prelude> compare True False
GT
it :: Ordering
Prelude> compare False True
LT
it :: Ordering
Why are Bool type values ordered in Haskell - especially, since we can demonstrate that values of True and False are not exactly 1 and 0 (unlike many other languages)?
This is how the derived instance of Ord works:
data D = A | B | C deriving Ord
Given that datatype, we get C > B > A. Bool is defined as False | True, and it kind of makes sense when you look at other examples such as:
Maybe a = Nothing | Just a
Either a b = Left a | Right b
In each of these cases, having "some" ("truthy") value is greater than having no value at all (or having a "left", "bad", or "falsy" value).
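The derived instances for Maybe and Either follow the same declaration-order rule, which you can check in GHCi:
Prelude> Nothing < Just 0
True
Prelude> compare (Left 'a') (Right 'b')
LT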
While Bool is not Int, it can be converted to the 0,1 fragment of Int since it is an Enum type.
fromEnum False = 0
fromEnum True = 1
Now, the Enum could have been different, reversing 0 and 1, but that would probably be surprising to most programmers thinking about bits.
Since it has an Enum instance, everything else being equal, it's better to define an Ord instance which follows the same order, satisfying
compare x y = compare (fromEnum x) (fromEnum y)
In fact, every instance generated by deriving (Eq, Ord, Enum) satisfies this property.
On a more theoretical note, logicians tend to order propositions from the strongest to the weakest (forming a lattice). In this structure, False (as a proposition) is the bottom, i.e. the least element, while True is the top. While this is only a convention (theory would be just as nice if we picked the opposite ordering), it's a good thing to be consistent.
Minor downside: the implication boolean connective is actually p <= q expressing that p implies q, instead of the converse as the "arrow" seems to indicate.
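Spelled out, the derived (<=) on Bool is exactly "implies":
implies :: Bool -> Bool -> Bool
implies = (<=)
-- False <= False = True   (False implies False)
-- False <= True  = True   (False implies True)
-- True  <= True  = True   (True implies True)
-- True  <= False = False  (True does not imply False)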
Let me answer your question with a question: Why is there an Ord instance for ()?
Unlike Bool, () has only one possible value: (). So why the hell would you ever want to compare it? There is only one value possible!
Basically, it's useful if all or most of the standard basic types have instances for common classes. It makes it easier to derive instances for your own types. If Foo doesn't have an Ord instance, and your new type has a single Foo field, then you can't auto-derive an Ord instance.
You might, for example, have some kind of tree type where we can attach several items of information to the leaves. Something like Tree x y z. And you might want to have an Eq instance to compare trees. It would be annoying if Tree () Int String didn't have an Eq instance just because () doesn't. So that's why () has Eq (and Ord and a few others).
Similar remarks apply to Bool. It might not sound particularly useful to compare two bool values, but it would be irritating if your Ord instance vanishes as soon as you put a bool in there.
(One other complicating factor is that sometimes we want Ord because there's a logically meaningful ordering for things, and sometimes we just want some arbitrary order, typically so we can use something as a key for Data.Map or similar. Arguably there ought to be two separate classes for that… but there isn't.)
Basically, it comes from math. In set theory or category theory boolean functions are usually thought of as classifiers of subsets/subobjects. In plain terms, function f :: a -> Bool is identified with filter f :: [a] -> [a]. So, if we change one value from False to True, the resulting filtered list (subset, subobject, whatever) is going to have more elements. Therefore, True is considered "bigger" than False.
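A small illustration of that intuition (the two predicates are arbitrary examples):
p, q :: Int -> Bool
p x = x > 3   -- the "smaller" classifier: filter p [0..5] == [4,5]
q x = x >= 0  -- the "bigger" classifier:  filter q [0..5] == [0,1,2,3,4,5]
-- q is True wherever p is, and in more places besides, so it classifies a
-- larger subset; flipping a result from False to True can only add elements.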
I have been reading the existential section on Wikibooks and this is what is stated there:
Firstly, forall really does mean 'for all'. One way of thinking about types is as sets of values with that type, for example, Bool is the set {True, False, ⊥} (remember that bottom, ⊥, is a member of every type!), Integer is the set of integers (and bottom), String is the set of all possible strings (and bottom), and so on. forall serves as an intersection over those sets. For example, forall a. a is the intersection over all types, which must be {⊥}, that is, the type (i.e. set) whose only value (i.e. element) is bottom.
How does forall serve as an intersection over those sets?
forall in formal logic means that it can be any value from the universe of discourse. How does it get translated to an intersection in Haskell?
Haskell's forall-s can be viewed as restricted dependent function types, which I think is the conceptually most enlightening approach and also most amenable to set-theoretic or logical interpretations.
In a dependent language one can bind the values of arguments in function types, and mention those values in the return types.
-- Idris
id : (a : Type) -> (a -> a)
id _ x = x
-- Can also leave arguments implicit (to be inferred)
id : a -> a
id x = x
-- Generally, an Idris function type has the form "(x : A) -> F x"
-- where A is a type (or kind/sort, or any level really) and F is
-- a function of type "A -> Type"
-- Haskell
id :: forall (a :: *). (a -> a)
id x = x
The crucial difference is that Haskell can only bind types, lifted kinds, and type constructors, using forall, while dependent languages can bind anything.
In the literature dependent functions are called dependent products. Why call them that, when they are, well, functions? It turns out that we can implement Haskell's algebraic product types using only dependent functions.
Generally, any function a -> b can be viewed as a lookup function for some product, where the keys have type a and the elements have type b. Bool -> Int can be interpreted as a pair of Int-s. This interpretation is not very interesting for non-dependent functions, since all the product fields must be of the same type. With dependent functions, our pair can be properly polymorphic:
Pair : Type -> Type -> Type
Pair a b = (index : Bool) -> (if index then a else b)
fst : Pair a b -> a
fst pair = pair True
snd : Pair a b -> b
snd pair = pair False
setFst : c -> Pair a b -> Pair c b
setFst c pair = \index -> if index then c else pair False
setSnd : c -> Pair a b -> Pair a c
setSnd c pair = \index -> if index then pair True else c
We have recovered all the essential functionality of pairs here. Also, using Pair we can build up products of arbitrary arity.
So, how does this tie in with the interpretation of forall-s? Well, we can interpret ordinary products and build up some intuition for them, and then try to transfer that to forall-s.
So, let's look a bit first at the algebra of ordinary products. Algebraic types are called algebraic because we can determine the number of their values by algebra. If A has |A| values and B has |B| values, then Pair A B has |A| * |B| possible values. With sum types we sum the number of inhabitants. Since A -> B can be viewed as a product with |A| fields, all having type B, the number of inhabitants of A -> B is |B| multiplied by itself |A| times, which equals |B|^|A|. Hence the name "exponential type" that is sometimes used for functions. When the function is dependent, we fall back to the "product over some number of different types" interpretation, since the exponential formula no longer fits.
Armed with this understanding, we can interpret forall (a :: *). t as a product type with indices of type * and fields having type t, where a might be mentioned inside t, and thus the field types may vary depending on the choice of a. We can look up the fields by making Haskell infer some particular type for the forall, effectively applying the function to the type argument.
Note that this product has as many fields as there are values of the index type, which is practically infinite here, considering the number of Haskell types.
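A small sketch of such a "field lookup", using visible type application just to make the instantiation explicit:
{-# LANGUAGE TypeApplications #-}

-- Viewing id :: forall a. a -> a as a product indexed by types,
-- instantiating the index picks out one field.
fieldAtInt :: Int -> Int
fieldAtInt = id @Int

fieldAtBool :: Bool -> Bool
fieldAtBool = id @Bool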
You have to view types in either a negative or a positive context, i.e. either in the process of construction or the process of use (have/receive); this is probably best understood through game semantics, but I am not familiar with them.
If I "give you" a type forall a . a then you know I must have constructed it somehow. The only way for a particular constructed value to have the type forall a . a is that it could be a stand-in "for all" types in the universe of discourse—which is, of course, the intersection of their functionality. In sane languages no such value exists (Void), but in Haskell we have bottom.
bottom :: forall a . a
bottom = let a = a in a
On the other hand, if I somehow magically have a value of forall a . a and I attempt to use it then we get the opposite effect—I can treat it as anything in the union of all types in the universe of discourse (which is what you were looking for) and thus I have
absurd :: (forall a . a) -> b
absurd a = a
How does forall serve as an intersection over those sets ?
Here you may benefit from starting to read a bit about the Curry-Howard correspondence. To make a long story short, you can think of a type as a logical proposition, language expressions as proofs of their types, and values as normal form proofs (proofs that cannot be simplified any further). So for example, "Hello world!" :: String would be read as ""Hello world!" is a proof of the proposition String."
So now think of forall a. a as a proposition. Intuitively, think of this as a second-order quantified statement over a propositional variable: "For all statements a, a." It's basically asserting all propositions. This means that if x is a proof of forall a. a, then for any proposition P, x is also a proof of P. So, since the proofs of forall a. a are the proofs that prove any propositions, then it must follow that the proofs of forall a. a must be the same as what you'd get if you mapped each proposition to the set of its proofs and took their intersection. And the only normal-form proof (i.e. "value") that is common to all those sets is bottom.
Another informal way to look at it is that universal quantification is like an infinite conjunction (∀x.P(x) is like P(c0) ∧ P(c1) ∧ ...). Conjunction, seen from a model-theoretical view, is set intersection; the set of evaluation environments where A ∧ B is true is the intersection of the environments where A is true and the ones where B is true.
Is there a word that describes data types that
have exactly two constructors; and
are not recursive?
i.e. describes these types
data Bool = False | True
data Maybe a = Nothing | Just a
data Either l r = Left l | Right r
but excludes these types
data Ordering = LT | EQ | GT -- too many constructors
data () = () -- too few constructors
data [a] = [] | a : [a] -- recursive definition
I think the trait of having exactly two constructors is quite meaningless. Imagine the types:
data StrictOrdering = LT | GT
data Ordering' = EQ | NEQ !StrictOrdering
The type Ordering' is equivalent to the Ordering you mentioned, differing only in '2-constructorness'.
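To make that equivalence explicit, here is a minimal sketch (the constructor names are renamed so they don't clash with the Prelude's Ordering):
data StrictOrd = SLT | SGT
data Ordering2 = IsEQ | IsNEQ !StrictOrd

toOrdering2 :: Ordering -> Ordering2
toOrdering2 EQ = IsEQ
toOrdering2 LT = IsNEQ SLT
toOrdering2 GT = IsNEQ SGT

fromOrdering2 :: Ordering2 -> Ordering
fromOrdering2 IsEQ        = EQ
fromOrdering2 (IsNEQ SLT) = LT
fromOrdering2 (IsNEQ SGT) = GT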
On the other hand, Maybe Bool, Either Bool Bool and Bool are very different and don't seem to deserve the same name except for being called 'sum types'.
Now, one may find some similarities between exists a. Maybe a and Bool, but to point them out one needs more constraints than just '2-constructorness'.
"Having two constructors" is a property that carries little information about what can be represented by such a type. It means forcing to weak-head-normal-form (WHNF) allows a binary choice in a case statement. Perhaps you could call it a "Two Headed Type" to coin a phrase.
It is more useful to GHC as a way to create an optimized representation in RAM for the data, since GHC uses pointer tagging, which can encode the constructor directly in a pointer's tag bits for types with up to 3 constructors (or 7 on 64-bit machines; one tag value is reserved to mean "unevaluated/unknown").
How about nonrecursive two-constructor sum-type?
What about a binary sum/coproduct type (of two types)?