Understanding practice exam about Agda

I am going through my practice exam for Programming Language Foundations in Agda, and it has the following question:
You are given the following Agda declaration:
data Even : ℕ → Set where
  ezero : Even 0
  esuc  : {n : ℕ} → Even n → Even (2 + n)
Assume that the standard library of natural numbers has been imported. Answer the following questions:
a) What is the type of ezero?
b) Are there any terms of type Even 1?
c) How many terms are of type Even 2? List them.
d) Describe one potential problem that might occur if we change the return type of esuc to be Even (n + 2) instead of Even (2 + n).
We're not provided a solution manual. The question seems pretty basic, but I am not sure about any of these. I think the answers to the first three are:
a) Set
b) No terms of type Even 1
c) One term of type Even 2
d) don't know
Answers to these questions along with a brief explanation would be highly appreciated. Thanks

What is the type of ezero?
The type of the data constructor ezero can be read from the data declaration: ezero : Even 0 states that it has type Even 0.
Are there any terms of type Even 1?
No, there aren't any. This can be seen by a case distinction: if there were such a term, it would have to start with one of the two constructors. And because these have specific return indices, those indices would have to unify with 1:
ezero would force 1 = 0
esuc would mean that there is an n such that 1 = 2 + n
Both of these situations are impossible.
How many terms are of type Even 2? List them
There is exactly one: esuc ezero. With reasoning similar to that in the previous question, we can prove that ezero and esuc (esuc p) (for some p) won't do.
Describe one potential problem that might occur if we change the return type of esuc to be Even (n + 2) instead of Even (2 + n).
Consider the proof plus-Even : {m n : ℕ} → Even m → Even n → Even (m + n). Because _+_ is defined by induction on its first argument, you won't be able to immediately apply esuc in the step case. You are going to need to use rewriting to reorganise the type of the goal from Even ((m + 2) + n) (or Even (m + (n + 2)), depending on which argument you perform induction on) to Even ((m + n) + 2) beforehand.
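For illustration, the same phenomenon can be reproduced in Haskell with GADTs and type-level naturals (a sketch of mine, not part of the exam or the original answer): because the successors sit on the outside of the index, just as in Even (2 + n), the step case of the addition proof goes through by plain recursion.

{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

data N = Z | S N

-- "2 + n" with the successors on the outside, as in the original esuc
data Even (n :: N) where
  EZero :: Even 'Z
  ESuc  :: Even n -> Even ('S ('S n))

-- addition by induction on its first argument, as in the standard library
type family Plus (m :: N) (n :: N) :: N where
  Plus 'Z     n = n
  Plus ('S m) n = 'S (Plus m n)

-- the step case typechecks directly, because Plus ('S ('S m)) n
-- reduces to 'S ('S (Plus m n))
plusEven :: Even m -> Even n -> Even (Plus m n)
plusEven EZero     en = en
plusEven (ESuc em) en = ESuc (plusEven em en)

Had Even been indexed with the successors buried on the right (the analogue of n + 2), the last equation would get stuck and would need an explicit equality proof, mirroring the rewriting required in Agda.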


Pattern match over non-constructor functions

One of the most powerful ways pattern matching and lazy evaluation can come together is to bypass expensive computation. However, I am still shocked that Haskell only permits pattern matching on constructors, which is barely pattern matching at all!
Is there some way to implement the following functionality in Haskell:
exp :: Double -> Double
exp 0 = 1
exp (log a) = a
--...
log :: Double -> Double
log 1 = 0
log (exp a) = a
--...
The original problem I found this useful in was writing an associativity preference / rule in a Monoid class:
class Monoid m where
  iden :: m
  (+) :: m -> m -> m
  (+) iden a = a
  (+) a iden = a
  --Line with issue
  (+) ((+) a b) c = (+) a ((+) b c)
There's no reason to be shocked about this. How would it be even remotely feasible to pattern match on arbitrary functions? Most functions aren't invertible, and even for those that are it is typically nontrivial to actually compute the inverses.
Of course the compiler could in principle handle trivial examples like replacing a literal exp (log x) with x, but that would be almost completely useless in practice (in the unlikely event somebody were to literally write that, they could just as well reduce it right there in the source), and it would generally lead to very weird, unpredictable behaviour if inlining order changed whether or not the compiler could see that a match applies.
(There is however a thing called rewrite rules, which is similar to what you proposed but is seen as only an optimisation tool.)
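To give the flavour, such a rewrite rule might look like this (a sketch under assumptions: myExp and myLog are hypothetical user-defined functions, and GHC applies RULES purely as an optimisation, with no check that they are semantically valid):

myExp :: Double -> Double
myExp = exp

myLog :: Double -> Double
myLog = log

-- tell GHC it may replace any occurrence of the left-hand side:
{-# RULES "myLog/myExp" forall x. myLog (myExp x) = x #-}

-- NOINLINE keeps the functions recognisable long enough for the rule to fire
{-# NOINLINE myExp #-}
{-# NOINLINE myLog #-}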
Even the two lines from the Monoid class that don't error don't make sense, but for different reasons. First, when you write
(+) iden a = a
(+) a iden = a
this doesn't do what you seem to think. These are actually two redundant catch-all clauses, equivalent to
(+) x y = y
(+) x y = x
...which is an utterly nonsensical thing to write. What you want to state could in fact be written as
default (+) :: Eq m => m -> m -> m
x + y
  | x == iden = y
  | y == iden = x
  | otherwise = ...
...but this still doesn't accomplish anything useful, because this is never going to be a full definition of +. And as soon as a concrete instance even begins to define its own + operator, the complete default one is going to be ignored.
Moreover, if you were to have these kinds of clauses all over your Haskell project, in practice it would just mean you're performing a lot of unnecessary, redundant extra checks. A law-abiding Monoid instance needs to fulfill mempty <> a ≡ a anyway; there is no point explicitly special-casing it.
I think what you really want is tests. It would make sense to specify laws right in a class declaration in a way that they could automatically be checked, but standard Haskell has no syntax for this. Most projects just do it in a separate test suite, using QuickCheck to generate example inputs. I think there's also a tool that allows you to put the test cases right in your source file, but I forgot what it's called.
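For example, the monoid laws for a given instance can be checked like this (a minimal sketch, using the list instance; the property names are mine):

import Test.QuickCheck

prop_assoc :: [Int] -> [Int] -> [Int] -> Bool
prop_assoc x y z = (x <> y) <> z == x <> (y <> z)

prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity x = mempty <> x == x

main :: IO ()
main = do
  quickCheck prop_assoc        -- checks 100 random cases by default
  quickCheck prop_leftIdentity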

If Either can be either Left or Right but not both, then why does it correspond to OR instead of XOR in Curry-Howard correspondence?

When I asked this question, one of the answers, now deleted, suggested that the type Either corresponds to XOR rather than OR in the Curry-Howard correspondence, because it cannot be Left and Right at the same time.
Where is the truth?
If you have a value of type P and a value of type Q (that is, you have both a proof of P and a proof of Q), then you are still able to provide a value of type Either P Q.
Consider
x :: P
y :: Q
...
z :: Either P Q
z = Left x -- Another possible proof would be `Right y`
While Either does not have a specific case that explicitly represents this situation (unlike These), it does not do anything to exclude it (as in exclusive OR).
This third case, where both have proofs, is a bit different from the other two cases where only one has a proof, which reflects the fact that "not excluding" something is a bit different from "including" something in intuitionistic logic, since Either does not provide a particular witness for this fact. However, Either is not an XOR in the way that XOR would typically work since, as I said, it does not exclude the case where both parts have proofs. What Daniel Wagner proposes in this answer, on the other hand, is much closer to an XOR.
Either is kind of like an exclusive OR in terms of what its possible witnesses are. On the other hand, it is like an inclusive OR when you consider whether or not you can actually create a witness in four possible scenarios: having a proof of P and a refutation of Q, having a proof of Q and a refutation of P, having a proof of both or having a refutation of both.[1] While you can construct a value of type Either P Q when you have a proof of both P and Q (similar to an inclusive OR), you cannot distinguish this situation from the situation where only P has a proof or only Q has a proof using only a value of type Either P Q (kind of similar to an exclusive OR). Daniel Wagner's solution, on the other hand, is similar to exclusive OR on both construction and deconstruction.
It is also worth mentioning that These more explicitly represents the possibility of both having proofs. These is similar to inclusive OR on both construction and deconstruction. However, it's also worth noting that there is nothing preventing you from using an "incorrect" constructor when you have a proof of both P and Q. You could extend These to be even more representative of an inclusive OR in this regard with a bit of additional complexity:
data IOR a b
  = OnlyFirst  a (Not b)
  | OnlySecond (Not a) b
  | Both       a b

type Not a = a -> Void  -- Void is the empty type, from Data.Void
The potential "wrong constructor" issue of These (and the lack of a "both" witness in Either) doesn't really matter if you are only interested in a proof irrelevant logical system (meaning that there is no way to distinguish between any two proofs of the same proposition), but it might matter in cases where you want more computational relevance in the logic.[2]
In the practical situation of writing computer programs that are actually meant to be executed, computational relevance is often extremely important. Even though 0 and 23 are both proofs that the Int type is inhabited, we certainly like to distinguish between the two values in programs, in general!
Regarding "construction" and "destruction"
Essentially, I just mean "creating values of a type" by construction and "pattern matching" by destruction (sometimes people use the words "introduction" and "elimination" here, particularly in the context of logic).
In the case of Daniel Wagner's solutions:
Construction: When you construct a value of type Xor A B, you must provide a proof of exactly one of A or B and a refutation of the other one. This is similar to exclusive OR. It is not possible to construct a value of this type unless you have a refutation of one of A and B and a proof of the other one. A particularly significant fact is that you cannot construct a value of this type if you have a proof of both A and B and no refutation of either of them (unlike inclusive OR).
Destruction: When you pattern match on a value of type Xor A B, you always have a proof of one of the types and a refutation of the other. It will never give you a proof of both of them. This follows from its definition.
In the case of IOR:
Construction: When you create a value of type IOR A B, you must do exactly one of the following: (1) provide a proof of A and a refutation of B, (2) provide a proof of B and a refutation of A, (3) provide a proof of both A and B. This is like inclusive OR. These three possibilities correspond exactly to each of the three constructors of IOR, with no overlap. Note that, unlike the situation with These, you cannot use the "incorrect constructor" in the case where you have a proof of both A and B: the only way to make a value of type IOR A B in this case is to use Both (since you would otherwise need to provide a refutation of either A or B).
Destruction: Since the three possible situations where you have a proof of at least one of A and B are exactly represented by IOR, with a separate constructor for each (and no overlap between the constructors), you will always know exactly which of A and B are true and which is false (if applicable) by pattern matching on it.
Pattern matching on IOR
Pattern matching on IOR works exactly like pattern matching on any other algebraic datatype. Here is an example:
x :: IOR Char Int
x = Both 'c' 3
y :: IOR Char Void
y = OnlyFirst 'a' (\v -> v)
f :: Not p -> IOR p Int
f np = OnlySecond np 7
z :: IOR Void Int
z = f notVoid
g :: IOR p Int -> Int
g w = case w of
  OnlyFirst  p q -> -1
  OnlySecond p q -> q
  Both       p q -> q
-- We can show that the proposition represented by "Void" is indeed false:
notVoid :: Not Void
notVoid = \v -> v
Then a sample GHCi session, with the above code loaded:
ghci> g x
3
ghci> g z
7
[1] This gets a bit more complex when you consider that some statements are undecidable, and therefore you cannot construct a proof or a refutation for them.
[2] Homotopy type theory would be one example of a proof-relevant system, but this is reaching the limit of my knowledge as of now.
The confusion stems from the Boolean truth-table exposition of logic. In particular, when both arguments are True, OR is True, whereas XOR is False. Logically it means that to prove OR it's enough to provide the proof of one of the arguments; but it's okay if the other one is True as well--we just don't care.
In Curry-Howard interpretation, if somebody gives you an element of Either a b, and you were able to extract the value of a from it, you still know nothing about b. It could be inhabited or not.
On the other hand, to prove XOR, you not only need the proof of one argument, you must also provide the proof of the falsehood of the other argument.
So, with Curry-Howard interpretation, if somebody gives you an element of Xor a b and you were able to extract the value of a from it, you would conclude that b is uninhabited (that is, isomorphic to Void). Conversely, if you were able to extract the value of b, then you'd know that a was uninhabited.
The proof of falsehood of a is a function a -> Void. Such a function would be able to produce a value of Void, given a value of a, which is clearly impossible. So there can be no values of a. (There is only one function that returns Void, and that's the identity on Void.)
Perhaps try replacing “proof” in the Curry-Howard isomorphism with “evidence”.
Below I will use italics for propositions and proofs (which I will also call evidence), the mathematical side of the isomorphism, and I will use code for types and values.
The question is: suppose I know the type for [values corresponding to] evidence that P is true (I will call this type P), and I know the type for evidence that Q is true (I call this type Q), then what is the type for evidence of the proposition R = P OR Q?
Well there are two ways to prove R: we can prove P, or we can prove Q. We could prove both but that would be more work than necessary.
Now ask what the type should be? It is the type for things which are either evidence of P or evidence of Q. I.e. values which are either things of type P or things of type Q. The type Either P Q contains precisely those values.
What if you have evidence of P AND Q? Well this is just a value of type (P, Q), and we can write a simple function:
f :: (p,q) -> Either p q
f (a,b) = Left a
And this gives us a way to prove P OR Q if we can prove P AND Q. Therefore Either cannot correspond to xor.
What is the type for P XOR Q?
At this point I will say that negations are a bit annoying in this sort of constructive logic.
Let’s convert the question to things we understand, and a simpler thing we don’t:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P))
Ask now: what is the type for evidence of NOT P?
I don’t have an intuitive explanation for why this is the simplest type, but if NOT P were true then evidence of P being true would be a contradiction, which we say as proving FALSE, the unprovable thing (aka BOTTOM or BOT). That is, NOT P may be written in simpler terms as: P IMPLIES FALSE. The type for FALSE is called Void (in Haskell). It is a type which no values inhabit, because there are no proofs of it. Therefore if you could construct a value of that type you would have problems. IMPLIES corresponds to functions, and so the type corresponding to NOT P is P -> Void.
We put this with what we know and get the following equivalence in the language of propositions:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P)) = (P AND (Q IMPLIES FALSE)) OR ((P IMPLIES FALSE) AND Q)
The type is then:
type Xor p q = Either (p, q -> Void) (p -> Void, q)
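To make that concrete, here is a small sketch (example and names are mine): taking P = (), which is trivially provable, and Q = Void, which is refutable, only the Left injection of Xor can be built.

import Data.Void (Void)

-- repeating the definition above for completeness
type Xor p q = Either (p, q -> Void) (p -> Void, q)

-- a proof of (), together with a refutation of Void
-- (the refutation of Void is just the identity function on Void)
trueXorFalse :: Xor () Void
trueXorFalse = Left ((), \v -> v)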

How to think about the lack of laws

I am a mathematician who works a lot with category theory, and I've been using Haskell for a while to perform certain computations etc., but I am definitely not a programmer. I really love Haskell and want to become much more fluent in it, and the type system is something that I find especially great to have in place when writing programs.
However, I've recently been trying to implement category theoretic things, and am running into problems concerning the fact that you seemingly can't have class method laws in Haskell. In case my terminology here is wrong, what I mean is that I can write
class Monoid c where
  id :: c
  m  :: c -> c -> c
but I can't write some law along the lines of
m (m x y) z == m x (m y z)
From what I gather, this is due to the lack of dependent types in Haskell, but I'm not sure how exactly this is the case (having now read a bit about dependent types). It also seems that the convention is just to include laws like this in comments and hope that you don't accidentally cook up some instance that doesn't satisfy them.
How should I change my approach to Haskell to deal with this problem? Is there a nice mathematical/type-theoretic solution (for example, require the existence of an associator that is an isomorphism (though then the question is, how do we encode isomorphisms without a law?)); is there some 'hack' (using extensions such as DataKinds); should I be drastic and switch to using something like Idris instead; or is the best response to just change the way I think about using Haskell (i.e. accept that these laws can't be implemented in a Haskelly way)?
(bonus) How exactly does the lack of laws come from not supporting dependent types?
You want to require that:
m (m x y) z = m x (m y z) -- (1)
But to require this you need a way to check it. So you, or your compiler (or proof assistant), need to construct a proof of this. And the question is, what type is a proof of (1)?
One could imagine some Proof type but then maybe you could just construct a proof that 0 = 0 instead of a proof of (1) and both would have type Proof. So you’d need a more general type. I can’t decide how to break up the rest of the question so I’ll go for a super brief explanation of the Curry-Howard isomorphism followed by an explanation of how to prove two things are equal and then how dependent types are relevant.
The Curry-Howard isomorphism says that propositions are isomorphic to types and proofs are isomorphic to programs: a type corresponds to a proposition and a proof of that proposition corresponds to a program constructing a value inhabiting that type. Ignoring how many propositions might be expressed as types, an example would be that the type A * B (written (A, B) in Haskell) corresponds to the proposition “A and B,” while the type A + B (written Either A B in Haskell) corresponds to the proposition “A or B.” Finally the type A -> B corresponds to “A implies B,” as a proof of this is a program which takes evidence of A and gives you evidence of B. One should note that there isn’t a way to express not A but one could imagine adding a type Not A with builtins of type Either a (Not a) for the law of the excluded middle as well as Not (Not a) -> a, and a * Not a -> Void (where Void is a type which cannot be inhabited and therefore corresponds to false), but then one can’t really run these programs to get constructivist proofs.
Now we will ignore some realities of Haskell and imagine that there aren’t ways around these rules (in particular, undefined :: a says everything is true, and unsafeCoerce :: a -> b says that anything implies anything else; other non-terminating functions are similar, in that their existence does not imply the corresponding proof).
So we know how to combine propositions but what might a proposition be? Well one could be to say that two types are equal. In Haskell this corresponds to the GADT
data Eq a b where
  Refl :: Eq c c
Where this constructor corresponds to the reflexive property of equality.
[side note: if you’re still interested so far, you may be interested to look up Voevodsky’s univalent foundations, depending on how much the idea of “Homotopy type theory” interests you]
So can we prove something now? How about the transitive property of equality:
trans :: Eq a b -> Eq b c -> Eq a c
trans x y =
  case x of
    Refl ->
      -- by this match being successful, the compiler now knows that a = b
      case y of
        Refl ->
          -- and now b = c, and so the compiler knows a = c
          Refl -- this is of type Eq d d; as the compiler knows a = c, it typechecks as Eq a c
This feels like one hasn’t really proven anything (especially as this mainly relies on the compiler knowing the transitive and symmetric properties), but one gets a similar feeling when proving simple things in logic as well.
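Incidentally, this proof can be run with GHC today: the standard library's Data.Type.Equality provides the same GADT under the name :~: (a minimal sketch; trans' is my name for it):

{-# LANGUAGE GADTs, TypeOperators #-}

import Data.Type.Equality ((:~:)(Refl))

trans' :: a :~: b -> b :~: c -> a :~: c
trans' Refl Refl = Refl  -- each match refines what the compiler knows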
So now how might you prove the original proposition (1)? Well let’s imagine we want a type c to be a monoid then we should also prove that $\forall x,y,z:c, m (m x y) z = m x (m y z).$ So we need a way to express m (m x y) z as a type. Strictly speaking this isn’t dependent types (this can be done with DataKinds to promote values and type families instead of functions). But you do need dependent types to have types depend on values. Specifically if you have a type Nat of natural numbers and a type family Vec :: Nat -> * (* is the kind (read type) of all types) of fixed length vectors, you could define a dependently typed function mkVec :: (n::Nat) -> Vec n. Observe how the type of the output depends on the value of the input.
So your law needs to have functions promoted to type level (skipping the questions about how one defines type equality and value equality), as well as dependent types (made up syntax):
class Monoid c where
  e     :: c
  (*)   :: c -> c -> c
  idl   :: (x::c) -> Eq x (e * x)
  idr   :: (x::c) -> Eq x (x * e)
  assoc :: (x::c) -> (y::c) -> (z::c) -> Eq ((x * y) * z) (x * (y * z))
Observe how types tend to become large with dependent types and proofs. In a language missing typeclasses one could put such values into a record.
A final note on the theory of dependent types and how it corresponds to the Curry-Howard isomorphism.
Dependent types can be considered an answer to the question: what types correspond to the propositions $\forall x\in S\quad P(x)$ and $\exists y\in T\quad Q(y)?$
The answer is that you create new ways to make types: the dependent product and the dependent sum (coproduct). The dependent product expresses “for all values $x$ of type $S,$ there is a value of type $P(x).$” A normal product would be a dependent product with $S=2,$ a type inhabited by two values. A dependent product might be written (x:T) -> P x. A dependent sum says “some value $y$ of type $T$, paired with a value of type $Q(y).$” this might be written (y:T) * Q y.
One can think of these as a generalisation of arbitrarily indexed (co)products from Set to general categories, where one might sensibly write e.g. $\prod_\Lambda X(\lambda),$ and sometimes such notation is used in type theory.

How do we overcome the compile time and runtime gap when programming in a Dependently Typed Language?

I'm told that in a dependent type system, "types" and "values" are mixed, and we can treat both of them as "terms" instead.
But there is something I can't understand: in a strongly typed programming language without dependent types (like Haskell), types are decided (inferred or checked) at compile time, but values are decided (computed or input) at runtime.
I think there must be a gap between these two stages. Just think: if a value is interactively read from STDIN, how can we reference this value in a type, which must be decided ahead of time?
e.g. There is a natural number n and a list of natural numbers xs (which contains n elements) which I need to read from STDIN. How can I put them into a data structure Vect n Nat?
Suppose we input n :: Int at runtime from STDIN. We then read n strings, and store them into vn :: Vect n String (pretend for the moment this can be done).
Similarly, we can read m :: Int and vm :: Vect m String. Finally, we concatenate the two vectors: vn ++ vm (simplifying a bit here). This can be type checked, and will have type Vect (n+m) String.
Now it is true that the type checker runs at compile time, before the values n,m are known, and also before vn,vm are known. But this does not matter: we can still reason symbolically on the unknowns n,m and argue that vn ++ vm has that type, involving n+m, even if we do not yet know what n+m actually is.
It is not that different from doing math, where we manipulate symbolic expressions involving unknown variables according to some rules, even if we do not know the values of the variables. We don't need to know what number is n to see that n+n = 2*n.
Similarly, the type checker can type check
-- pseudocode
readNStrings :: (n :: Int) -> IO (Vect n String)
readNStrings O     = return Vect.empty
readNStrings (S p) = do
  s  <- getLine
  vp <- readNStrings p
  return (Vect.cons s vp)
(Well, actually some more help from the programmer could be needed to typecheck this, since it involves dependent matching and recursion. But I'll neglect this.)
Importantly, the type checker can check that without knowing what n is.
Note that the same issue actually already arises with polymorphic functions.
fst :: forall a b. (a, b) -> a
fst (x, y) = x
test1 = fst @Int @Float (2, 3.5)
test2 = fst @String @Bool ("hi!", True)
...
One might wonder "how can the typechecker check fst without knowing what types a and b will be at runtime?". Again, by reasoning symbolically.
With type arguments this is arguably more obvious, since we usually run programs after type erasure, unlike value parameters like our n :: Int above, which cannot be erased. Still, there is some similarity between universally quantifying over types and over Int.
It seems to me that there are two questions here:
Given that some values are unknown during compile-time (e.g., values read from STDIN), how can we make use of them in types? (Note that chi has already given an excellent answer to this.)
Some operations (e.g., getLine) seem to make absolutely no sense at compile-time; how could we possibly talk about them in types?
The answer to (1), as chi has said, is symbolic or abstract reasoning. You can read in a number n, and then have a procedure that builds a Vect n Nat by reading from the command line n times, making use of arithmetic properties such as the fact that 1+(n-1) = n for nonzero natural numbers.
The answer to (2) is a bit more subtle. Naively, you might want to say "this function returns a vector of length n, where n is read from the command line". There are two types you might try to give this (apologies if I'm getting Haskell notation wrong)
unsafePerformIO (do n <- getLine; return (IO (Vect (read n :: Int) Nat)))
or (in pseudo-Coq notation, since I'm not sure what Haskell's notation for existential types is)
IO (exists n, Vect n Nat)
These two types can actually both be made sense of, and say different things. The first type, to me, says "at compile time, read n from the command line, and return a function which, at runtime, gives a vector of length n by performing IO". The second type says "at runtime, perform IO to get a natural number n and a vector of length n".
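The second reading can be written in GHC Haskell today, with an existential wrapper standing in for "exists n" (a sketch under assumptions: Vect, SomeVect and readStrings are my names, and reading stops at the first empty line):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vect (n :: Nat) a where
  VNil  :: Vect 'Z a
  VCons :: a -> Vect n a -> Vect ('S n) a

-- some length n, together with a vector of exactly that length
data SomeVect a where
  SomeVect :: Vect n a -> SomeVect a

-- "at runtime, perform IO to get a natural number n and a vector of length n"
readStrings :: IO (SomeVect String)
readStrings = do
  line <- getLine
  if null line
    then return (SomeVect VNil)
    else do
      rest <- readStrings
      case rest of
        SomeVect v -> return (SomeVect (VCons line v))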
The way I like looking at this is that all side effects (other than, perhaps, non-termination) are monad transformers, and there is only one monad: the "real-world" monad. Monad transformers work just as well at the type level as at the term level; the one thing which is special is run :: M a -> a which executes the monad (or stack of monad transformers) in the "real world". There are two points in time at which you can invoke run: one is at compile time, where you invoke any instance of run which shows up at the type level. Another is at runtime, where you invoke any instance of run which shows up at the value level. Note that run only makes sense if you specify an evaluation order; if your language does not specify whether it is call-by-value or call-by-name (or call-by-push-value or call-by-need or call-by-something-else), you can get incoherencies when you try to compute a type.

Abusing the algebra of algebraic data types - why does this work?

The 'algebraic' expression for algebraic data types looks very suggestive to someone with a background in mathematics. Let me try to explain what I mean.
Having defined the basic types
Product •
Union +
Singleton X
Unit 1
and using the shorthand X² for X•X and 2X for X+X et cetera, we can then define algebraic expressions for e.g. linked lists
data List a = Nil | Cons a (List a) ↔ L = 1 + X • L
and binary trees:
data Tree a = Nil | Branch a (Tree a) (Tree a) ↔ T = 1 + X • T²
Now, my first instinct as a mathematician is to go nuts with these expressions, and try to solve for L and T. I could do this through repeated substitution, but it seems much easier to abuse the notation horrifically and pretend I can rearrange it at will. For example, for a linked list:
L = 1 + X • L
(1 - X) • L = 1
L = 1 / (1 - X) = 1 + X + X² + X³ + ...
where I've used the power series expansion of 1 / (1 - X) in a totally unjustified way to derive an interesting result, namely that an L type is either Nil, or it contains 1 element, or it contains 2 elements, or 3, etc.
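As a concrete aside (my sketch, not part of the original question): the defining equation L = 1 + X • L can be witnessed directly in Haskell, as an isomorphism between [a] and Either () (a, [a]).

-- a list is either the unit (the empty list) or a head paired with a tail
unroll :: [a] -> Either () (a, [a])
unroll []       = Left ()
unroll (x : xs) = Right (x, xs)

roll :: Either () (a, [a]) -> [a]
roll (Left ())       = []
roll (Right (x, xs)) = x : xs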
It gets more interesting if we do it for binary trees:
T = 1 + X • T²
X • T² - T + 1 = 0
T = (1 - √(1 - 4 • X)) / (2 • X)
T = 1 + X + 2 • X² + 5 • X³ + 14 • X⁴ + ...
again, using the power series expansion (done with Wolfram Alpha). This expresses the non-obvious (to me) fact that there is only one binary tree with 1 element, 2 binary trees with two elements (the second element can be on the left or the right branch), 5 binary trees with three elements etc.
So my question is: what am I doing here? These operations seem unjustified (what exactly is the square root of an algebraic data type, anyway?) but they lead to sensible results. Does the quotient of two algebraic data types have any meaning in computer science, or is it just notational trickery?
And, perhaps more interestingly, is it possible to extend these ideas? Is there a theory of the algebra of types that allows, for example, arbitrary functions on types, or do types require a power series representation? If you can define a class of functions, then does composition of functions have any meaning?
Disclaimer: A lot of this doesn't really work quite right when you account for ⊥, so I'm going to blatantly disregard that for the sake of simplicity.
A few initial points:
Note that "union" is probably not the best term for A+B here--that's specifically a disjoint union of the two types, because the two sides are distinguished even if their types are the same. For what it's worth, the more common term is simply "sum type".
Singleton types are, effectively, all unit types. They behave identically under algebraic manipulations and, more importantly, the amount of information present is still preserved.
You probably want a zero type as well. Haskell provides that as Void. There are no values whose type is zero, just as there is one value whose type is one.
There's still one major operation missing here but I'll get back to that in a moment.
As you've probably noticed, Haskell tends to borrow concepts from Category Theory, and all of the above has a very straightforward interpretation as such:
Given objects A and B in Hask, their product A×B is the unique (up to isomorphism) type that allows two projections fst : A×B → A and snd : A×B → B, where given any type C and functions f : C → A, g : C → B you can define the pairing f &&& g : C → A×B such that fst ∘ (f &&& g) = f and likewise for g. Parametricity guarantees the universal properties automatically and my less-than-subtle choice of names should give you the idea. The (&&&) operator is defined in Control.Arrow, by the way.
The dual of the above is the coproduct A+B with injections inl : A → A+B and inr : B → A+B, where given any type C and functions f : A → C, g : B → C, you can define the copairing f ||| g : A+B → C such that the obvious equivalences hold. Again, parametricity guarantees most of the tricky parts automatically. In this case, the standard injections are simply Left and Right and the copairing is the function either.
Many of the properties of product and sum types can be derived from the above. Note that any singleton type is a terminal object of Hask and any empty type is an initial object.
Returning to the aforementioned missing operation: in a cartesian closed category you have exponential objects that correspond to arrows of the category. Our arrows are functions, our objects are types with kind *, and the type A -> B indeed behaves as B^A in the context of algebraic manipulation of types. If it's not obvious why this should hold, consider the type Bool -> A. With only two possible inputs, a function of that type is isomorphic to two values of type A, i.e. (A, A). For Maybe Bool -> A we have three possible inputs, and so on. Also, observe that if we rephrase the copairing definition above to use algebraic notation, we get the identity C^A × C^B = C^(A+B).
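Spelled out (a small sketch of mine), the Bool -> A case is an explicit isomorphism:

-- a function out of Bool is exactly a pair: one result per possible input
toPair :: (Bool -> a) -> (a, a)
toPair f = (f False, f True)

fromPair :: (a, a) -> (Bool -> a)
fromPair (x, _) False = x
fromPair (_, y) True  = y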
As for why this all makes sense--and in particular why your use of the power series expansion is justified--note that much of the above refers to the "inhabitants" of a type (i.e., distinct values having that type) in order to demonstrate the algebraic behavior. To make that perspective explicit:
The product type (A, B) represents a value each from A and B, taken independently. So for any fixed value a :: A, there is one value of type (A, B) for each inhabitant of B. This is of course the cartesian product, and the number of inhabitants of the product type is the product of the number of inhabitants of the factors.
The sum type Either A B represents a value from either A or B, with the left and right branches distinguished. As mentioned earlier, this is a disjoint union, and the number of inhabitants of the sum type is the sum of the number of inhabitants of the summands.
The exponential type B -> A represents a mapping from values of type B to values of type A. For any fixed argument b :: B, any value of A can be assigned to it; a value of type B -> A picks one such mapping for each input, which is equivalent to a product of as many copies of A as B has inhabitants, hence the exponentiation.
While it's tempting at first to treat types as sets, that doesn't actually work very well in this context--we have disjoint union rather than the standard union of sets, there's no obvious interpretation of intersection or many other set operations, and we don't usually care about set membership (leaving that to the type checker).
On the other hand, the constructions above spend a lot of time talking about counting inhabitants, and enumerating the possible values of a type is a useful concept here. That quickly leads us to enumerative combinatorics, and if you consult the linked Wikipedia article you'll find that one of the first things it does is define "pairs" and "unions" in exactly the same sense as product and sum types by way of generating functions, then does the same for "sequences" that are identical to Haskell's lists using exactly the same technique you did.
Edit: Oh, and here's a quick bonus that I think demonstrates the point strikingly. You mentioned in a comment that for a tree type T = 1 + T^2 you can derive the identity T^6 = 1, which is clearly wrong. However, T^7 = T does hold, and a bijection between trees and seven-tuples of trees can be constructed directly, cf. Andreas Blass's "Seven Trees in One".
Edit×2: On the subject of the "derivative of a type" construction mentioned in other answers, you might also enjoy this paper from the same author which builds on the idea further, including notions of division and other interesting whatnot.
Binary trees are defined by the equation T=1+XT^2 in the semiring of types. By construction, T=(1-sqrt(1-4X))/(2X) is defined by the same equation in the semiring of complex numbers. So given that we're solving the same equation in the same class of algebraic structure it actually shouldn't be surprising that we see some similarities.
The catch is that when we reason about polynomials in the semiring of complex numbers we typically use the fact that the complex numbers form a ring or even a field so we find ourselves using operations such as subtraction that don't apply to semirings. But we can often eliminate subtractions from our arguments if we have a rule that allows us to cancel from both sides of an equation. This is the kind of thing proved by Fiore and Leinster showing that many arguments about rings can be transferred to semirings.
This means that lots of your mathematical knowledge about rings can be reliably transferred to types. As a result, some arguments involving complex numbers or power series (in the ring of formal power series) can carry over to types in a completely rigorous way.
However there's more to the story than this. It's one thing to prove two types are equal (say) by showing two power series are equal. But you can also deduce information about types by inspecting the terms in the power series. I'm not sure of what the formal statement here should be. (I recommend Brent Yorgey's paper on combinatorial species for some work that's closely related but species are not the same as types.)
What I find utterly mind blowing is that what you've discovered can be extended to calculus. Theorems about calculus can be transferred over to the semiring of types. In fact, even arguments about finite differences can be transferred over and you find that classical theorems from numerical analysis have interpretations in type theory.
Have fun!
Calculus and Maclaurin series with types
Here is another minor addition - a combinatorial insight into why the coefficients in a series expansion should 'work', in particular focusing on series which can be derived using the Taylor-Maclaurin approach from calculus. NB: the example series expansion you give of the manipulated list type is a Maclaurin series.
Since other answers and comments deal with the behaviour of algebraic type expressions (sums, products and exponents), this answer will elide that detail and focus on type 'calculus'.
You may notice inverted commas doing some heavy lifting in this answer. There are two reasons:
we are in the business of giving interpretations from one domain to entities from another, and it seems appropriate to delimit such foreign notions in this way.
some notions will be able to be formalised more rigorously, but the shape and ideas seem more important (and take less space to write) than the details.
Definition of Maclaurin series
The Maclaurin series of a function f : ℝ → ℝ is defined as
f(0) + f'(0)X + (1/2)f''(0)X² + ... + (1/n!)f⁽ⁿ⁾(0)Xⁿ + ...
where f⁽ⁿ⁾ means the nth derivative of f.
To be able to make sense of the Maclaurin series as interpreted with types, we need to understand how we can interpret three things in a type context:
a (possibly multiple) derivative
applying a function to 0
terms like (1/n!)
and it turns out that these concepts from analysis have suitable counterparts in the type world.
What do I mean by a 'suitable counterpart'? It should have the flavour of an isomorphism - if we can preserve truth in both directions, facts derivable in one context can be transferred to the other.
Calculus with types
So what does the derivative of a type expression mean? It turns out that for a large and well-behaved ('differentiable') class of type expressions and functors, there is a natural operation which behaves similarly enough to be a suitable interpretation!
To spoil the punchline, the operation analogous to differentiation is that of making 'one-hole contexts'. This is an excellent place to expand on this particular point further but the basic concept of a one-hole context (da/dx) is that it represents the result of extracting a single subitem of a particular type (x) from a term (of type a), preserving all other information, including that necessary to determine the original location of the subitem. For example, one way to represent a one-hole context for a list is with two lists: one for items which came before the extracted one, and one for items which came after.
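In Haskell, that two-list representation might look like this (a sketch; the representation choice and names are mine):

-- a one-hole context for a list: the items before the hole
-- (nearest the hole first) and the items after it
data ListHole a = ListHole [a] [a]

-- filling the hole reconstructs a list
plug :: ListHole a -> a -> [a]
plug (ListHole before after) x = reverse before ++ x : after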
The motivation for identifying this operation with differentiation comes from the following observations. We write da/dx to mean the type of one-hole contexts for type a with hole of type x.
d1/dx = 0
dx/dx = 1
d(a + b)/dx = da/dx + db/dx
d(a × b)/dx = a × db/dx + b × da/dx
d(g(f(x)))/dx = (d(g(y))/dy)[f(x)/y] × df(x)/dx
Here, 1 and 0 represent types with exactly one and exactly zero inhabitants, respectively, and + and × represent sum and product types as usual. f and g are used to represent type functions, or type expression formers, and [f(x)/y] means the operation of substituting f(x) for every y in the preceding expression.
This may be written in a point-free style, writing f' to mean the derivative function of type function f, thus:
(x ↦ 1)' = x ↦ 0
(x ↦ x)' = x ↦ 1
(f + g)' = f' + g'
(f × g)' = f × g' + g × f'
(g ∘ f)' = (g' ∘ f) × f'
which may be preferable.
NB the equalities can be made rigorous and exact if we define derivatives using isomorphism classes of types and functors.
Now, we notice in particular that the rules in calculus pertaining to the algebraic operations of addition, multiplication and composition (often called the Sum, Product and Chain rules) are reflected exactly by the operation of 'making a hole'. Further, the base cases of 'making a hole' in a constant expression or in the term x itself also behave as differentiation, so by induction we get differentiation-like behaviour for all algebraic type expressions.
Now that we can interpret differentiation, what does the nth 'derivative' of a type expression, dⁿe/dxⁿ, mean? It is a type representing n-place contexts: terms which, when 'filled' with n terms of type x, yield an e. There is another key observation related to '(1/n!)' coming later.
The invariant part of a type functor: applying a function to 0
We already have an interpretation for 0 in the type world: an empty type with no members. What does it mean, from a combinatorial point of view, to apply a type function to it? In more concrete terms, supposing f is a type function, what does f(0) look like? Well, we certainly don't have access to anything of type 0, so any constructions of f(x) which require an x are unavailable. What is left is those terms which are accessible in their absence, which we can call the 'invariant' or 'constant' part of the type.
For an explicit example, take the Maybe functor, which can be represented algebraically as x ↦ 1 + x. When we apply this to 0, we get 1 + 0 - it's just like 1: the only possible value is Nothing. For a list, similarly, we get just the term corresponding to the empty list.
When we bring it back and interpret the type f(0) as a number it can be thought of as the count of how many terms of type f(x) (for any x) can be obtained without access to an x: that is, the number of 'empty-like' terms.
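Concretely (a tiny sketch of mine), Haskell's empty type Data.Void.Void plays the role of 0:

import Data.Void (Void)

-- Maybe Void ≅ 1 + 0: the only inhabitant is Nothing
onlyValue :: Maybe Void
onlyValue = Nothing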
Putting it together: complete interpretation of a Maclaurin series
I'm afraid I can't think of an appropriate direct interpretation of (1/n!) as a type.
If we consider, though, the type f⁽ⁿ⁾(0) in light of the above, we see that it can be interpreted as the type of n-place contexts for a term of type f(x) which do not already contain an x - that is, when we 'integrate' them n times, the resulting term has exactly n xs, no more, no less. Then the interpretation of the type f⁽ⁿ⁾(0) as a number (as in the coefficients of the Maclaurin series of f) is simply a count of how many such empty n-place contexts there are. We are nearly there!
But where does (1/n!) end up? Examining the process of type 'differentiation' shows us that, when applied multiple times, it preserves the 'order' in which subterms are extracted. For example, consider the term (x₀, x₁) of type x × x and the operation of 'making a hole' in it twice. We get both sequences
(x₀, x₁) ↝ (_₀, x₁) ↝ (_₀, _₁)
(x₀, x₁) ↝ (x₀, _₀) ↝ (_₁, _₀)
(where _ represents a 'hole')
even though both come from the same term, because there are 2! = 2 ways to take two elements from two, preserving order. In general, there are n! ways to take n elements from n. So in order to get a count of the number of configurations of a functor type which have n elements, we have to count the type f⁽ⁿ⁾(0) and divide by n!, exactly as in the coefficients of the Maclaurin series.
So dividing by n! turns out to be interpretable simply as itself.
Final thoughts: 'recursive' definitions and analyticity
First, some observations:
if a function f : ℝ → ℝ has a derivative, this derivative is unique
similarly, if a function f : ℝ → ℝ is analytic, it has exactly one corresponding polynomial series
Since we have the chain rule, we can use implicit differentiation, if we formalise type derivatives as isomorphism classes. But implicit differentiation doesn't require any alien manoeuvres like subtraction or division! So we can use it to analyse recursive type definitions. To take your list example, we have
L(X) ≅ 1 + X × L(X)
L'(X) = X × L'(X) + L(X)
and then we can evaluate
L'(0) = L(0) = 1
to obtain the coefficient of X¹ in the Maclaurin series.
But since we are confident that these expressions are indeed strictly 'differentiable', if only implicitly, and since we have the correspondence with functions ℝ → ℝ, where derivatives are certainly unique, we can rest assured that even if we obtain the values using 'illegal' operations, the result is valid.
Now, similarly, to use the second observation, due to the correspondence (is it a homomorphism?) with functions ℝ → ℝ, we know that, provided we are satisfied that a function has a Maclaurin series, if we can find any series at all, the principles outlined above can be applied to make it rigorous.
As for your question about composition of functions, I suppose the chain rule provides a partial answer.
I'm not certain how many Haskell-style ADTs this applies to, but I suspect it is many if not all. I have discovered a truly marvellous proof of this fact, but this margin is too small to contain it...
Now, certainly this is only one way to work out what is going on here and there are probably many other ways.
Summary: TL;DR
type 'differentiation' corresponds to 'making a hole'.
applying a functor to 0 gets us the 'empty-like' terms for that functor.
Maclaurin power series therefore (somewhat) rigorously correspond to enumerating the number of members of a functor type with a certain number of elements.
implicit differentiation makes this more watertight.
uniqueness of derivatives and uniqueness of power series mean we can fudge the details and it works.
It seems that all you're doing is expanding the recurrence relation.
L = 1 + X • L
L = 1 + X • (1 + X • (1 + X • (1 + X • ...)))
= 1 + X + X^2 + X^3 + X^4 ...
T = 1 + X • T^2
T = 1 + X • (1 + X • (1 + X • (1 + X • ...^2)^2)^2)^2
= 1 + X + 2 • X^2 + 5 • X^3 + 14 • X^4 + ...
And since the rules for the operations on the types work like the rules for arithmetic operations, you can use algebraic means to help you figure out how to expand the recurrence relation (since it is not obvious).
I don't have a complete answer, but these manipulations tend to 'just work'. A relevant paper might be Objects of Categories as Complex Numbers by Fiore and Leinster - I came across that one while reading sigfpe's blog on a related subject; the rest of that blog is a goldmine for similar ideas and is worth checking out!
You can also differentiate datatypes, by the way - that will get you the appropriate Zipper for the datatype!
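For lists (a sketch of mine): differentiating L = 1/(1 - X) gives 1/(1 - X)², i.e. a pair of lists, and pairing that one-hole context with the focused element yields the familiar list zipper:

-- elements to the left of the focus, the focus, elements to the right
data ListZipper a = ListZipper [a] a [a]

moveRight :: ListZipper a -> Maybe (ListZipper a)
moveRight (ListZipper ls x (r:rs)) = Just (ListZipper (x:ls) r rs)
moveRight _                        = Nothing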
The Algebra of Communicating Processes (ACP) deals with similar kinds of expressions for processes.
It offers addition and multiplication as operators for choice and sequence, with associated neutral elements.
Based on these there are operators for other constructs, such as parallelism and disruption.
See http://en.wikipedia.org/wiki/Algebra_of_Communicating_Processes. There is also a paper online named "A Brief History of Process Algebra".
I am working on extending programming languages with ACP. Last April I presented a research paper at Scala Days 2012, available at http://code.google.com/p/subscript/
At the conference I demonstrated a debugger running a parallel recursive specification of a bag:
Bag = A; (Bag&a)
where A and a stand for input and output actions; the semicolon and ampersand stand for sequence and parallelism. See the video at SkillsMatter, reachable from the previous link.
A bag specification more comparable to
L = 1 + X•L
would be
B = 1 + X&B
ACP defines parallelism in terms of choice and sequence using axioms; see the Wikipedia article. I wonder what the bag analogy would be for
L = 1 / (1-X)
ACP style programming is handy for text parsers and GUI controllers. Specifications such as
searchCommand = clicked(searchButton) + key(Enter)
cancelCommand = clicked(cancelButton) + key(Escape)
may be written down more concisely by making the two refinements "clicked" and "key" implicit (like what Scala allows with functions). Hence we can write:
searchCommand = searchButton + Enter
cancelCommand = cancelButton + Escape
The right-hand sides now contain operands that are data, rather than processes. At this level it is not necessary to know what implicit refinements will turn these operands into processes; they would not necessarily refine into input actions; output actions would also apply, e.g. in the specification of a test robot.
In this way, processes get data as companions; thus I coin the term "item algebra".
Dependent type theory and 'arbitrary' type functions
My first answer to this question was high on concepts and low on details and reflected on the subquestion, 'what is going on?'; this answer will be the same but focused on the subquestion, 'can we get arbitrary type functions?'.
One extension to the algebraic operations of sum and product are the so called 'large operators', which represent the sum and product of a sequence (or more generally, the sum and product of a function over a domain) usually written Σ and Π respectively. See Sigma Notation.
So the sum
a₀ + a₁X + a₂X² + ...
might be written
Σ[i ∈ ℕ]aᵢXⁱ
where a is some sequence of real numbers, for example. The product would be represented similarly with Π instead of Σ.
When you look from a distance, this kind of expression looks a lot like an 'arbitrary' function in X; we are limited of course to expressible series, and their associated analytic functions. Is this a candidate for a representation in a type theory? Definitely!
The class of type theories which have immediate representations of these expressions is the class of 'dependent' type theories: theories with dependent types. Naturally we have terms dependent on terms, and in languages like Haskell with type functions and type quantification, terms and types depending on types. In a dependent setting, we additionally have types depending on terms. Haskell is not a dependently typed language, although many features of dependent types can be simulated by torturing the language a bit.
Curry-Howard and dependent types
The 'Curry-Howard isomorphism' started life as an observation that the terms and type-judging rules of simply-typed lambda calculus correspond exactly to natural deduction (as formulated by Gentzen) applied to intuitionistic propositional logic, with types taking the place of propositions, and terms taking the place of proofs, despite the two being independently invented/discovered. Since then, it has been a huge source of inspiration for type theorists. One of the most obvious things to consider is whether, and how, this correspondence for propositional logic can be extended to predicate or higher order logics. Dependent type theories initially arose from this avenue of exploration.
For an introduction to the Curry-Howard isomorphism for simply-typed lambda calculus, see here. As an example, if we want to prove A ∧ B we must prove A and prove B; a combined proof is simply a pair of proofs: one for each conjunct.
In natural deduction:
Γ ⊢ A    Γ ⊢ B
──────────────
  Γ ⊢ A ∧ B
and in simply-typed lambda calculus:
Γ ⊢ a : A    Γ ⊢ b : B
──────────────────────
  Γ ⊢ (a, b) : A × B
Similar correspondences exist for ∨ and sum types, → and function types, and the various elimination rules.
An unprovable (intuitionistically false) proposition corresponds to an uninhabited type.
With the analogy of types as logical propositions in mind, we can start to consider how to model predicates in the type-world. There are many ways in which this has been formalised (see this introduction to Martin-Löf's Intuitionistic Type Theory for a widely-used standard) but the abstract approach usually observes that a predicate is like a proposition with free term variables, or, alternatively, a function taking terms to propositions. If we allow type expressions to contain terms, then a treatment in lambda calculus style immediately presents itself as a possibility!
Considering only constructive proofs, what constitutes a proof of ∀x ∈ X.P(x)? We can think of it as a proof function, taking terms (x) to proofs of their corresponding propositions (P(x)). So members (proofs) of the type (proposition) ∀x : X.P(x) are 'dependent functions', which for each x in X give a term of type P(x).
What about ∃x ∈ X.P(x)? We need any member of X, x, together with a proof of P(x). So members (proofs) of the type (proposition) ∃x : X.P(x) are 'dependent pairs': a distinguished term x in X, together with a term of type P(x).
Notation:
I will use
∀x ∈ X...
for actual statements about members of the class X, and
∀x : X...
for type expressions corresponding to universal quantification over type X. Likewise for ∃.
Combinatorial considerations: products and sums
As well as the Curry-Howard correspondence of types with propositions, we have the combinatorial correspondence of algebraic types with numbers and functions, which is the main point of this question. Happily, this can be extended to the dependent types outlined above!
I will use the modulus notation
|A|
to represent the 'size' of a type A, to make explicit the correspondence outlined in the question, between types and numbers. Note that this is a concept outside of the theory; I do not claim that there need be any such operator within the language.
Let us count the possible (fully reduced, canonical) members of type
∀x : X.P(x)
which is the type of dependent functions taking terms x of type X to terms of type P(x). Each such function must have an output for every term of X, and this output must be of a particular type. For each x in X, then, this gives |P(x)| 'choices' of output.
The punchline is
|∀x : X.P(x)| = Π[x : X]|P(x)|
which of course doesn't make a great deal of sense if X is IO (), but is applicable to algebraic types.
Similarly, the type
∃x : X.P(x)
is the type of pairs (x, p) with p : P(x), so given any x in X we can construct an appropriate pair with any member of P(x), giving |P(x)| 'choices'.
Hence,
|∃x : X.P(x)| = Σ[x : X]|P(x)|
with the same caveats.
This justifies the common notation for dependent types in theories using the symbols Π and Σ, and indeed many theories blur the distinction between 'for all' and 'product' and between 'there is' and 'sum', due to the above-mentioned correspondences.
We are getting close!
Vectors: representing dependent tuples
Can we now encode numerical expressions like
Σ[n ∈ ℕ]Xⁿ
as type expressions?
Not quite. While we can informally consider the meaning of expressions like Xⁿ in Haskell, where X is a type and n a natural number, it's an abuse of notation; this is a type expression containing a number: distinctly not a valid expression.
On the other hand, with dependent types in the picture, types containing numbers is precisely the point; in fact, dependent tuples or 'vectors' are a very commonly-cited example of how dependent types can provide pragmatic type-level safety for operations like list access. A vector is just a list along with type-level information regarding its length: precisely what we are after for type expressions like Xⁿ.
For the duration of this answer, let
Vec X n
be the type of length-n vectors of X-type values.
Technically n here is, rather than an actual natural number, a representation in the system of a natural number. We can represent natural numbers (Nat) in Peano style as either zero (0) or the successor (S) of another natural number, and for n ∈ ℕ I write ˻n˼ to mean the term in Nat which represents n. For example, ˻3˼ is S (S (S 0)).
Then we have
|Vec X ˻n˼| = |X|ⁿ
for any n ∈ ℕ.
Nat types: promoting ℕ terms to types
Now we can encode expressions like
Σ[n ∈ ℕ]Xⁿ
as types. This particular expression would give rise to a type which is of course isomorphic to the type of lists of X, as identified in the question. (Not only that, but from a category-theoretic point of view, the type function - which is a functor - taking X to the above type is naturally isomorphic to the List functor.)
One final piece of the puzzle for 'arbitrary' functions is how to encode, for
f : ℕ → ℕ
expressions like
Σ[n ∈ ℕ]f(n)Xⁿ
so that we can apply arbitrary coefficients to a power series.
We already understand the correspondence of algebraic types with numbers, allowing us to map from types to numbers and type functions to numerical functions. We can also go the other way! - taking a natural number, there is obviously a definable algebraic type with that many term members, whether or not we have dependent types. We can easily prove this outside of the type theory by induction. What we need is a way to map from natural numbers to types, inside the system.
A pleasing realisation is that, once we have dependent types, proof by induction and construction by recursion become intimately similar - indeed they are the very same thing in many theories. Since we can prove by induction that types exist which fulfil our needs, should we not be able to construct them?
There are several ways to represent types at the term level. I will use here an imaginary Haskellish notation with * for the universe of types, itself usually considered a type in a dependent setting.¹
Likewise, there are also at least as many ways to notate 'ℕ-elimination' as there are dependent type theories. I will use a Haskellish pattern-matching notation.
We need a mapping, α from Nat to *, with the property
∀n ∈ ℕ.|α ˻n˼| = n.
The following pseudodefinition suffices.
data Zero                      -- empty type
data Successor a = Z | Suc a   -- Successor ≅ Maybe

α : Nat -> *
α 0     = Zero
α (S n) = Successor (α n)
So we see that the action of α mirrors the behaviour of the successor S, making it a kind of homomorphism. Successor is a type function which 'adds one' to the number of members of a type; that is, |Successor a| = 1 + |a| for any a with a defined size.
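In GHC Haskell, a close cousin of α can be written as a closed type family (a sketch of mine; Alpha is my name, with Void standing in for Zero and Maybe for Successor):

{-# LANGUAGE DataKinds, TypeFamilies #-}

import Data.Kind (Type)
import Data.Void (Void)

data Nat = Z | S Nat

-- Alpha n is a type with exactly n values
type family Alpha (n :: Nat) :: Type where
  Alpha 'Z     = Void
  Alpha ('S n) = Maybe (Alpha n)

-- e.g. Alpha ('S ('S 'Z)) = Maybe (Maybe Void), whose only values
-- are Nothing and Just Nothing: exactly two.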
For example α ˻4˼ (which is α (S (S (S (S 0))))), is
Successor (Successor (Successor (Successor Zero)))
and the terms of this type are
Z
Suc Z
Suc (Suc Z)
Suc (Suc (Suc Z))
giving us exactly four elements: |α ˻4˼| = 4.
Likewise, for any n ∈ ℕ, we have
|α ˻n˼| = n
as required.
¹ Many theories require that the members of * are mere representatives of types, and an operation is provided as an explicit mapping from terms of type * to their associated types. Other theories permit the literal types themselves to be term-level entities.
'Arbitrary' functions?
Now we have the apparatus to express a fully general power series as a type!
The series
Σ[n ∈ ℕ]f(n)Xⁿ
becomes the type
∃n : Nat.α (˻f˼ n) × (Vec X n)
where ˻f˼ : Nat → Nat is some suitable representation within the language of the function f. We can see this as follows.
|∃n : Nat.α (˻f˼ n) × (Vec X n)|
= Σ[n : Nat]|α (˻f˼ n) × (Vec X n)| (property of ∃ types)
= Σ[n ∈ ℕ]|α (˻f˼ ˻n˼) × (Vec X ˻n˼)| (switching Nat for ℕ)
= Σ[n ∈ ℕ]|α ˻f(n)˼ × (Vec X ˻n˼)| (applying ˻f˼ to ˻n˼)
= Σ[n ∈ ℕ]|α ˻f(n)˼||Vec X ˻n˼| (splitting product)
= Σ[n ∈ ℕ]f(n)|X|ⁿ (properties of α and Vec)
Just how 'arbitrary' is this? We are limited not only to integer coefficients by this method, but to natural numbers. Apart from that, f can be anything at all: given a Turing-complete language with dependent types, we can represent any analytic function with natural-number coefficients.
I haven't investigated the interaction of this with, for example, the case provided in the question of List X ≅ 1/(1 - X) or what possible sense such negative and non-integer 'types' might have in this context.
Hopefully this answer goes some way to exploring how far we can go with arbitrary type functions.
