What is the relationship between polymorphism's rank and (im)predicativity?
Can rank-1 polymorphism be either predicative or impredicative?
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
My confusion comes from the following:
Why does https://en.wikipedia.org/wiki/Parametric_polymorphism mention predicativity under rank-1 polymorphism? (It seems to me that rank-1 implies predicativity.)
Rank-1 (prenex) polymorphism
In a prenex polymorphic system, type variables may not be instantiated with polymorphic types.[4] This is very similar to what is called "ML-style" or
"Let-polymorphism" (technically ML's Let-polymorphism has a few other
syntactic restrictions). This restriction makes the distinction
between polymorphic and non-polymorphic types very important; thus in
predicative systems polymorphic types are sometimes referred to as
type schemas to distinguish them from ordinary (monomorphic) types,
which are sometimes called monotypes. A consequence is that all
types can be written in a form that places all quantifiers at the
outermost (prenex) position. For example, consider the append
function described above, which has type
forall a. [a] × [a] -> [a]
In order to apply this function to a pair of lists, a type must be
substituted for the variable a in the type of the function such that
the type of the arguments matches up with the resulting function type.
In an impredicative system, the type being substituted may be any type
whatsoever, including a type that is itself polymorphic; thus append
can be applied to pairs of lists with elements of any type—even to
lists of polymorphic functions such as append itself. Polymorphism in
the language ML is predicative.[citation needed] This is because
predicativity, together with other restrictions, makes the type system
simple enough that full type inference is always possible.
As a practical example, OCaml (a descendant or dialect of ML) performs
type inference and supports impredicative polymorphism, but in some
cases when impredicative polymorphism is used, the system's type
inference is incomplete unless some explicit type annotations are
provided by the programmer.
...
Predicative polymorphism
In a predicative parametric polymorphic system, a type τ containing
a type variable α may not be used in such a way that α is
instantiated to a polymorphic type. Predicative type theories include
Martin-Löf Type Theory and NuPRL.
https://wiki.haskell.org/Impredicative_types :
Impredicative types are an advanced form of polymorphism, to be
contrasted with rank-N types.
Standard Haskell allows polymorphic types via the use of type
variables, which are understood to be universally quantified: id :: a -> a means "for all types a, id can take an argument and return a result of that type". All universal quantifiers ("for all"s) must
appear at the beginning of a type.
Higher-rank polymorphism (e.g. rank-N types) allows universal
quantifiers to appear inside function types as well. It turns out that
appearing to the right of function arrows is not interesting: Int -> forall a. a -> [a] is actually the same as forall a. Int -> a -> [a].
However, higher-rank polymorphism allows quantifiers to the left of
function arrows, too, and (forall a. [a] -> Int) -> Int really is
different from forall a. ([a] -> Int) -> Int.
Impredicative types take this idea to its natural conclusion:
universal quantifiers are allowed anywhere in a type, even inside
normal datatypes like lists or Maybe.
Thanks.

Can rank-1 polymorphism be either predicative or impredicative?
No, rank-1 polymorphism is always predicative, because any forall quantifiers do not appear as arguments to type constructors, that is, quantifiers are “prenex”.
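For instance (a minimal sketch; pairUp is an illustrative name, and ExplicitForAll merely makes the implicit prenex quantifiers visible):
{-# LANGUAGE ExplicitForAll #-}
-- Rank-1 (prenex): every quantifier sits at the outermost position.
pairUp :: forall a b. a -> b -> (a, b)
pairUp x y = (x, y)
-- a and b are only ever instantiated with monotypes such as Int or [Char];
-- substituting a polytype like (forall c. c -> c) would be impredicative.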
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
Higher-rank polymorphism is always impredicative; the RankNTypes extension enables impredicative polymorphism only for the (->) constructor, that is, given a type a -> b, a or b may be instantiated with a type containing foralls. We typically refer to such types as higher-rank only when a contains foralls, because (except for TypeApplications) X -> forall t. Y is equivalent to forall t. X -> Y.
General impredicative polymorphism (with the broken ImpredicativeTypes extension) is not supported. For example, you can’t write Maybe (forall a. [a] -> [a]). This is essentially because it’s difficult to automatically determine when to generalise and when to instantiate that quantifier. Fortunately, you can make this explicit using a newtype wrapper to “hide” the impredicativity, or rather, make it clear to the compiler what you want to do about quantifiers, e.g.:
{-# LANGUAGE RankNTypes #-}
newtype ListTransform = ListTransform { unLT :: forall a. [a] -> [a] }
f :: Maybe ListTransform -> [Int] -> [Char] -> ([Int], [Char])
f Nothing is cs = (is, cs)
f (Just (ListTransform t)) is cs = (t is, t cs)
-- or: f (Just lt) is cs = (unLT lt is, unLT lt cs)
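A quick usage sketch, assuming the definitions above (example is an illustrative name): reverse works at every element type, so it can be wrapped as a ListTransform.
example :: ([Int], [Char])
example = f (Just (ListTransform reverse)) [1, 2, 3] "abc"
-- example == ([3, 2, 1], "cba")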

Related

Why is impredicative polymorphism allowed only for functions in Haskell?

In Haskell I can't write
f :: [forall a. a -> a]
f = [id]
because
• Illegal polymorphic type: forall a. a -> a
GHC doesn't yet support impredicative polymorphism
But I can happily do
f :: (forall a. a -> a) -> (a, b) -> (a, b)
f i (x, y) = (i x, i y)
So as far as I can see, GHC does support impredicative polymorphism, which contradicts the error message above. Why is the (->) type constructor treated specially in this case? What prevents GHC from having this feature generalized over all datatypes?
Higher-rank polymorphism is a special case of impredicative polymorphism, where the type constructor is (->) instead of any arbitrary constructor like [].
The basic problems with impredicativity are that it makes type checking hard and type inference impossible in the general case—and indeed we can’t infer types of a higher rank than 2: you have to provide a type annotation. This is the ostensible reason for the existence of the Rank2Types extension separate from RankNTypes, although in GHC they’re synonymous.
However, for the restricted case of (->), there are simplified algorithms for checking these types and doing the necessary amount of inference along the way for the programmer’s convenience, such as Complete and Easy Bidirectional Type Checking for Higher-rank Polymorphism—compare that to the complexity of Boxy Types: Inference for Higher-rank Types and Impredicativity.
The actual reasons in GHC are partly historical: there had been an ImpredicativeTypes extension, which was deprecated because it never worked properly or ergonomically. Part of the problem was that we didn’t yet have the TypeApplications extension, so there was no convenient way to explicitly supply a polymorphic type as a type argument, and the compiler attempted to do more inference than it ought to. In GHC 9.2, ImpredicativeTypes has come out of retirement, thanks to GHC proposal 274 and an algorithm, Quick Look, that infers a predictable subset of impredicative types.
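For instance, with GHC 9.2 or later, the canonical example from the Quick Look work typechecks (a sketch; both is an illustrative name):
{-# LANGUAGE ImpredicativeTypes #-}
ids :: [forall a. a -> a]
ids = [id, id] -- the list element type is instantiated at a polytype
both :: ([Int], String)
both = (map (head ids) [1, 2, 3], map (head ids) "abc")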
In the absence of ImpredicativeTypes, there have been alternatives for a while: with RankNTypes, you can “hide” other forms of impredicativity by wrapping the polymorphic type in a newtype and explicitly packing & unpacking it to tell the compiler exactly where you want to generalise and instantiate type variables.
newtype Id = Id { unId :: forall a. a -> a }
f :: [Id]
f = [Id id] -- generalise
g :: ((), Char)
g = (unId (head f) (), unId (head f) 'x') -- instantiate to () and Char

Why aren't Haskell variables polymorphic when bound by pattern matching?

Consider a variable introduced in a pattern, such as f in this Haskell example:
case (\x -> x) of f -> (f True, f 'c')
This code results in a type error ("Couldn't match expected type ‘Bool’ with actual type ‘Char’"), because of the two different uses of f. It shows that the inferred type of f is not polymorphic in Haskell.
But why shouldn't f be polymorphic?
I have two points of comparison: OCaml and "textbook" Hindley-Milner. Both suggest that f ought to be polymorphic.
In OCaml, the analogous code is not an error:
match (fun x -> x) with f -> (f true, f 'c')
This evaluates to (true, 'c') with type bool * char. So it looks like OCaml gets along fine with assigning f a polymorphic type.
We can gain clarity by stripping things down to the fundamentals of Hindley-Milner - lambda calculus with "let" - which both Haskell and OCaml are based on. When reduced to this core system, of course, there is no such thing as pattern matching. We can draw parallels though. Between "let" and "lambda", case expr1 of f -> expr2 is much closer to let f = expr1 in expr2 than to (lambda f. expr2) expr1. "Case", like "let", syntactically restricts f to be bound to expr1, while a function lambda f. expr2 doesn't know what f will be bound to since the function has no such restriction on where in the program it will be called. This was the reason why let-bound variables are generalized in Hindley-Milner and lambda-bound variables are not. It appears that the same reasoning that allows let-bound variables to be generalized shows that variables introduced by pattern matching could be generalized too.
The examples above are minimal for clarity, so they only show a trivial pattern f in the pattern matching, but all the same logic extends to arbitrarily complex patterns like Just (a:b:(x,y):_), which can introduce multiple variables that would all be generalized.
Is my analysis correct? In Haskell specifically - recognizing that it's not just plain Hindley-Milner and not OCaml - why don't we generalize the type of f in the first example?
Was this an explicit language design decision, and if so, what were the reasons? (I note that some in the community think that not even "let" should be generalized, but I would imagine the design decision pre-dates that paper.)
If variables introduced in a pattern were made polymorphic similar to "let", would that break compatibility with other aspects of Haskell in a significant way?
If we assign a polymorphic type (forall x. t) to a case scrutinee, then it matches no non-trivial pattern, so there's no point in having the case at all.
Could we generalize in some other useful way? Not really, because of GHC's lack of support for "impredicative" instantiation. In your example of Just (a:b:(x,y):_), not a single bound variable can have polymorphic type, since Maybe, (,), and [] cannot be instantiated with such types.
One thing works, as mentioned in the comments: data types with polymorphic fields, such as data Endo = Endo (forall a. a -> a). However, type checking for polymorphic fields doesn't technically involve a generalization step, nor does it behave like let-generalization.
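A sketch of that case (assuming RankNTypes): matching on the constructor binds the variable directly at the field's full polytype, with no generalization step.
{-# LANGUAGE RankNTypes #-}
data Endo = Endo (forall a. a -> a)
useEndo :: Endo -> (Bool, Char)
useEndo (Endo g) = (g True, g 'c') -- g is polymorphic inside the match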
In principle, generalization could be performed at many points, for example even at arbitrary function arguments (e.g. in f (\x -> x)). However, too much generalization clogs up type inference by introducing intractable higher-rank types; this can also be understood as eliminating useful type dependencies between different parts of the program by removing unsolved metavariables. Although there are systems which can handle higher-rank inference much better than GHC, most notably MLF, they're also much more complicated and haven't seen much practical use. I personally prefer to not have silent let-generalization at all.
A first issue is that with type classes, generalization is not always free. Consider show :: forall a. Show a => a -> String and this expression:
case show of
  f -> ...
If you generalize f to f :: forall a. Show a => a -> String, then GHC will pass a Show dictionary at every call of f, instead of once at the single occurrence of show. In case there are multiple calls all at the same type, this duplicates work compared to not generalizing.
It is also not actually a generalization of the current type inference algorithm when combined with type classes: it can cause existing programs to no longer typecheck. For example,
case show of
  f -> f () ++ f mempty
By not generalizing f, we can infer that mempty has type (). On the other hand, generalizing f :: forall a. Show a => a -> String will lose that connection, and the type of mempty in that expression will be ambiguous.
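For reference, the non-generalizing behaviour is exactly what makes this compile today (a minimal sketch):
example :: String
example = case show of
  f -> f () ++ f mempty -- f :: () -> String, so mempty is inferred at ()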
It is true though that these are minor issues, and maybe things would be mostly fine with some monomorphism restrictions, even if not entirely backwards compatible.
In addition to the other answers, there's a reason type variables are treated the way they are in pattern matches: the interaction with existential types. Let's take a look at a couple definitions from Data.Functor.Coyoneda:
{-# LANGUAGE GADTs #-}
data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda g x) = fmap g x
Coyoneda has an existential type variable used by both arguments to the constructor. If GHC doesn't pin that type down, there's no way for the fmap in lowerCoyoneda to type-check. GHC needs to know that g and x have the appropriate relation in their types, and that requires fixing the type variable in the pattern match.
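The dual direction, liftCoyoneda from the same module, shows where the existential b gets hidden in the first place: at construction, b is fixed to a and then sealed away.
liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id -- chooses b = a, then forgets it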

Types constructors and existential types

Only polymorphic functions can be applied to values of existential types.
Those properties can be expressed by the corresponding quantifiers for expressions, and characterized by natural transformations.
Similarly, when we define a type constructor
data List a = Nil | Cons a (List a)
This type constructor works for all a, whereas type families allow non-uniform type constructors:
type family TRes i
type instance TRes Bool = String
type instance TRes String = Bool
What natural transformation characterizes precisely this idea of "uniformity" at the type level?
Is there an equivalent of forcing naturality like we have at the value level with rank-N types?
ApplyNat :: (forall a. a -> F a) -> b -> F b
I think you've confused a couple of different ideas here.
This type constructor works for all a.
That's totality. List :: * -> * produces a valid type of kind * given any argument a of kind *. Haskell 98 datatypes are always total, but, as you point out, in modern Haskell you can write type families which don't cover all possible cases. TRes Int is not a "real" type, in the sense that it contains no values, it doesn't reduce to any other type, and it's not equal to any type other than TRes Int.
Haskell has no totality checker at the value level or the type level (apart from the rules about undecidable instances, which are a blunt instrument), so, just as there is no way to rule out undefined values, there is no way to rule out "stuck" type families like TRes Int. (For more on "stuck" type families see this blog post by Richard Eisenberg, the designer of TypeInType.)
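A sketch of that stuckness, using the question's TRes (with the arity fixed to match the instances):
{-# LANGUAGE TypeFamilies #-}
type family TRes i
type instance TRes Bool = String
type instance TRes String = Bool
ok :: TRes Bool -- TRes Bool reduces to String
ok = "hello"
-- bad :: TRes Int -- TRes Int never reduces: it is equal only to itself,
-- bad = ...       -- and no (non-bottom) value inhabits it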
Naturality is an altogether different idea. In value-level Haskell, a natural transformation between f and g is a polymorphic function mapping values of type f x to values of type g x, without knowing anything about x.
type f ~> g = forall x. f x -> g x
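For instance (a sketch assuming the RankNTypes and TypeOperators extensions for the synonym), safeHead is such a natural transformation: it can only rearrange the container, never inspect x.
{-# LANGUAGE RankNTypes, TypeOperators #-}
safeHead :: [] ~> Maybe
safeHead [] = Nothing
safeHead (x : _) = Just x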
With GHC 8 and TypeInType we can talk about kinds using the same language we use to talk about types, because kinds are types. The type expression forall x. f x -> g x has kind * (and the synonym itself has kind (~>) :: forall k. (k -> *) -> (k -> *) -> *), so it's a perfectly valid classifier for types as well. A type with that kind is a polymorphic type function mapping types of kind f x to types of kind g x.
What would you use a type-level natural transformation for, in the real world? I dunno. You wouldn't, probably.

Understanding Polytypes in Hindley-Milner Type Inference

I'm reading the Wikipedia article on Hindley–Milner Type Inference trying to make some sense out of it. So far this is what I've understood:
Types are classified as either monotypes or polytypes.
Monotypes are further classified as either type constants (like int or string) or type variables (like α and β).
Type constants can either be concrete types (like int and string) or type constructors (like Map and Set).
Type variables (like α and β) behave as placeholders for concrete types (like int and string).
Now I'm having a little difficulty understanding polytypes but after learning a bit of Haskell this is what I make of it:
Types themselves have types. Formally types of types are called kinds (i.e. there are different kinds of types).
Concrete types (like int and string) and type variables (like α and β) are of kind *.
Type constructors (like Map and Set) are lambda abstractions of types (e.g. Set is of kind * -> * and Map is of kind * -> * -> *).
What I don't understand is what quantifiers signify. For example, what does ∀α.σ represent? I can't seem to make heads or tails of it, and the more I read the following paragraph the more confused I get:
A function with polytype ∀α.α -> α by contrast can map any value of the same type to itself, and the identity function is a value for this type. As another example, ∀α.(Set α) -> int is the type of a function mapping all finite sets to integers. The count of members is a value for this type. Note that quantifiers can only appear top level, i.e. a type ∀α.α -> ∀α.α, for instance, is excluded by the syntax of types, and that monotypes are included in the polytypes; thus a type has the general form ∀α₁ . . . ∀αₙ.τ.
First, kinds and polymorphic types are different things. You can have a HM type system where all types are of the same kind (*), you could also have a system without polymorphism but with complex kinds.
If a term M is of type ∀a.t, it means that for whatever type s we can substitute s for a in t (often written as t[a:=s]), and we'll have that M is of type t[a:=s]. This is somewhat similar to logic, where we can substitute any term for a universally quantified variable, but here we're dealing with types.
This is precisely what happens in Haskell, just that in Haskell you don't see the quantifiers. All type variables that appear in a type signature are implicitly quantified, just as if you had forall in front of the type. For example, map would have type
map :: forall a . forall b . (a -> b) -> [a] -> [b]
etc. Without this implicit universal quantification, type variables a and b would have to have some fixed meaning and map wouldn't be polymorphic.
The HM algorithm distinguishes types (without quantifiers, monotypes) and type schemas (universally quantified types, polytypes). It's important that at some places it uses type schemas (like in let), but at other places only types are allowed. This makes the whole thing decidable.
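The classic Haskell illustration of that split (a minimal sketch): the let-bound identity is generalized to a type schema and usable at two types, while a lambda-bound one gets a single monotype.
ok :: (Bool, Char)
ok = let i = \x -> x in (i True, i 'c') -- i :: forall a. a -> a
-- Rejected without a rank-2 annotation, because i is lambda-bound:
-- bad = (\i -> (i True, i 'c')) (\x -> x)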
I also suggest reading the article about System F. It is a more complex system, which allows forall anywhere in types (therefore everything there is just called a type), but type inference/checking is undecidable. It can help you understand how forall works. System F is described in depth in Girard, Lafont and Taylor, Proofs and Types.
Consider l = \x -> t in Haskell. It is a lambda, which represents a term t with a variable x, which will be substituted later (e.g. l 1, whatever that would mean). Similarly, ∀α.σ represents a type with a type variable α; that is, f : ∀α.σ is a function parameterized by a type α. In some sense, σ depends on α, so f returns a value of type σ(α), where α will be substituted in σ(α) later, and we will get some concrete type.
In Haskell you are allowed to omit ∀ and define functions just like id :: a -> a. The reason you may omit the quantifier is basically that quantifiers are allowed only at the top level (without the RankNTypes extension). You can try this piece of code:
id2 :: a -> a -- I named it id2 since id is already defined in Prelude
id2 x = x
If you ask GHCi for the type of id2 (:t id2), it will return a -> a. To be more precise (more type-theoretically), id2 has the type ∀a. a -> a. Now, if you add to your code:
val = id2 3
then 3 has the type Int, so the type Int will be substituted for α and we will get the concrete type Int -> Int.

What are type quantifiers?

Many statically typed languages have parametric polymorphism. For example in C# one can define:
T Foo<T>(T x){ return x; }
In a call site you can do:
int y = Foo<int>(3);
These types are also sometimes written like this:
Foo :: forall T. T -> T
I have heard people say "forall is like lambda-abstraction at the type level". So Foo is a function that takes a type (for example int), and produces a value (for example a function of type int -> int). Many languages infer the type parameter, so that you can write Foo(3) instead of Foo<int>(3).
Suppose we have an object f of type forall T. T -> T. What we can do with this object is first pass it a type Q by writing f<Q>. Then we get back a value with type Q -> Q. However, certain f's are invalid. For example this f:
f<int> = (x => x+1)
f<T> = (x => x)
So if we "call" f<int> then we get back a value with type int -> int, and in general if we "call" f<Q> then we get back a value with type Q -> Q, so that's good. However, it is generally understood that this f is not a valid thing of type forall T. T -> T, because it does something different depending on which type you pass it. The idea of forall is that this is explicitly not allowed. Also, if forall is lambda for the type level, then what is exists? (i.e. existential quantification). For these reasons it seems that forall and exists are not really "lambda at the type level". But then what are they? I realize this question is rather vague, but can somebody clear this up for me?
A possible explanation is the following:
If we look at logic, quantifiers and lambda are two different things. An example of a quantified expression is:
forall n in Integers: P(n)
So there are two parts to forall: a set to quantify over (e.g. Integers), and a predicate (e.g. P). Forall can be viewed as a higher order function:
forall n in Integers: P(n) == forall(Integers,P)
With type:
forall :: Set<T> -> (T -> bool) -> bool
Exists has the same type. Forall is like an infinite conjunction, where S[n] is the n-th element of the set S:
forall(S,P) = P(S[0]) ∧ P(S[1]) ∧ P(S[2]) ...
Exists is like an infinite disjunction:
exists(S,P) = P(S[0]) ∨ P(S[1]) ∨ P(S[2]) ...
If we do an analogy with types, we could say that the type analogue of ∧ is computing the intersection type ∩, and the type analogue of ∨ is computing the union type ∪. We could then define forall and exists on types as follows:
forall(S,P) = P(S[0]) ∩ P(S[1]) ∩ P(S[2]) ...
exists(S,P) = P(S[0]) ∪ P(S[1]) ∪ P(S[2]) ...
So forall is an infinite intersection, and exists is an infinite union. Their types would be:
forall, exists :: Set<T> -> (T -> Type) -> Type
For example, here is the type of the polymorphic identity function, where Types is the set of all types, -> is the type constructor for functions, and => is lambda abstraction:
forall(Types, t => (t -> t))
Now a thing of type forall T:Type. T -> T is a value, not a function from types to values. It is a value whose type is the intersection of all types T -> T where T ranges over all types. When we use such a value, we do not have to apply it to a type. Instead, we use a subtype judgement:
id :: forall T:Type. T -> T
id = (x => x)
id2 = id :: int -> int
This downcasts id to have type int -> int. This is valid because int -> int also appears in the infinite intersection.
This works out nicely I think, and it clearly explains what forall is and how it is different from lambda, but this model is incompatible with what I have seen in languages like ML, F#, C#, etc. For example in F# you do id<int> to get the identity function on ints, which does not make sense in this model: id is a function on values, not a function on types that returns a function on values.
Can somebody with knowledge of type theory explain what exactly are forall and exists? And to what extent is it true that "forall is lambda at the type level"?
Let me address your questions separately.
Calling forall "a lambda at the type level" is inaccurate for two reasons. First, it is the type of a lambda, not the lambda itself. Second, that lambda lives on the term level, even though it abstracts over types (lambdas on the type level exist as well; they provide what are often called generic types).
Universal quantification does not necessarily imply "same behaviour" for all instantiations. That is a particular property called "parametricity" that may or may not be present. The plain polymorphic lambda calculus is parametric, because you simply cannot express any non-parametric behaviour. But if you add constructs like typecase (a.k.a. intensional type analysis) or checked casts as a weaker form of that, then you lose parametricity. Parametricity implies nice properties, e.g. it allows a language to be implemented without any runtime representation of types. And it induces very strong reasoning principles, see e.g. Wadler's paper "Theorems for free!". But it's a trade-off: sometimes you want dispatch on types.
Existential types essentially denote pairs of a type (the so-called witness) and a term, sometimes called packages. One common way to view these is as implementation of abstract data types. Here is a simple example:
pack (Int, (λx. x, λx. x)) : ∃ T. (Int → T) × (T → Int)
This is a simple ADT whose representation is Int and that only provides two operations (as a nested tuple), for converting ints in and out of the abstract type T. This is the basis of type theories for modules, for example.
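In Haskell terms, a hedged rendering of this package using ExistentialQuantification (IntAbs and the other names are illustrative):
{-# LANGUAGE ExistentialQuantification #-}
-- ∃T. (Int → T) × (T → Int): the witness type t stays hidden from clients.
data IntAbs = forall t. IntAbs (Int -> t) (t -> Int)
impl :: IntAbs
impl = IntAbs id id -- witness t = Int, invisible from outside
roundTrip :: IntAbs -> Int -> Int
roundTrip (IntAbs into outOf) = outOf . into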
In summary, universal quantification provides client-side data abstraction, while existential types dually provide implementor-side data abstraction.
As an additional remark, in the so-called lambda cube, forall and arrow are generalised to the unified notion of Π-type (where T1→T2 = Π(x:T1).T2 and ∀A.T = Π(A:*).T) and likewise exists and tupling can be generalised to Σ-types (where T1×T2 = Σ(x:T1).T2 and ∃A.T = Σ(A:*).T). Here, the type * is the "type of types".
A few remarks to complement the two already-excellent answers.
First, one cannot say that forall is lambda at the type level, because there already is a notion of lambda at the type level, and it is different from forall. It appears in System F_omega, an extension of System F with type-level computation, which is useful to explain ML module systems, for example (F-ing Modules, by Andreas Rossberg, Claudio Russo and Derek Dreyer, 2010).
In (a syntax for) System F_omega you can write for example:
type prod =
  lambda (a : *). lambda (b : *).
    forall (c : *). (a -> b -> c) -> c
This is a definition of the "type constructor" prod, such that prod a b is the type of the Church encoding of the product type (a, b). If there is computation at the type level, then you need to control it if you want to ensure termination of type checking (otherwise you could define the type (lambda t. t t) (lambda t. t t)). This is done by using a "type system at the type level", or a kind system. prod would be of kind * -> * -> *. Only types at kind * can be inhabited by values; types at higher kinds can only be applied at the type level. lambda (c : k). ... is a type-level abstraction that cannot be the type of a value, and may live at any kind of the form k -> ..., while forall (c : k). ... classifies values that are polymorphic in some type c : k and is necessarily of ground kind *.
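For comparison, a hedged Haskell transcription of prod as a type synonym (requires RankNTypes; Prod, pair, and first are illustrative names):
{-# LANGUAGE RankNTypes #-}
type Prod a b = forall c. (a -> b -> c) -> c
pair :: a -> b -> Prod a b
pair x y k = k x y
first :: Prod a b -> a
first p = p (\x _ -> x)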
Second, there is an important difference between the forall of System F and the Pi-types of Martin-Löf type theory. In System F, polymorphic values do the same thing on all types. As a first approximation, you could say that a value of type forall a . a -> a will (implicitly) take a type t as input and return a value of type t -> t. But that suggests that there may be some computation happening in the process, which is not the case. Morally, when you instantiate a value of type forall a. a -> a into a value of type t -> t, the value does not change. There are three (related) ways to think about it:
System F quantification has type erasure: you can forget about the types and you will still know what the dynamic semantics of the program is. When we use ML type inference to leave the polymorphism abstraction and instantiation implicit in our programs, we don't really let the inference engine "fill holes in our program", if you think of "program" as the dynamic object that will run and compute.
A forall a . foo is not something that "produces an instance of foo for each type a", but a single type foo that is "generic in an unknown type a".
You can explain universal quantification as an infinite conjunction, but there is a uniformity condition that all conjuncts have the same structure, and in particular that their proofs are all alike.
By contrast, Pi-types in Martin-Löf type theory are really more like function types that take something and return something. That's one of the reasons why they can easily be used not only to depend on types, but also to depend on terms (dependent types).
This has very important implications once you're concerned about the soundness of those formal theories. System F is impredicative (a forall-quantified type quantifies over all types, itself included), and the reason why it's still sound is this uniformity of universal quantification. While introducing non-parametric constructs is reasonable from a programmer's point of view (and we can still reason about parametricity in a generally non-parametric language), it very quickly destroys the logical consistency of the underlying static reasoning system. Martin-Löf's predicative theory is much simpler to prove correct and to extend in correct ways.
For a high-level description of this uniformity/genericity aspect of System F, see Fruchart and Longo's 1997 article Carnap's Remarks on Impredicative Definitions and the Genericity Theorem. For a more technical study of System F's failure in the presence of non-parametric constructs, see Parametricity and Variants of Girard's J Operator by Robert Harper and John Mitchell (1999). Finally, for a description, from a language-design point of view, of how to abandon global parametricity to introduce non-parametric constructs but still be able to locally discuss parametricity, see Non-Parametric Parametricity by Georg Neis, Derek Dreyer and Andreas Rossberg, 2011.
This discussion of the difference between "computational abstraction" and "uniform abstraction" has been revived by the large amount of work on representing variable binders. A binding construction feels like an abstraction (and can be modeled by a lambda-abstraction in HOAS style) but has a uniform structure that makes it more like a data skeleton than a family of results. This has been much discussed, for example in the LF community: "representational arrows" in Twelf, "positive arrows" in Licata and Harper's work, etc.
Recently there have been several people working on the related notion of "irrelevance" (lambda-abstractions where the result "does not depend" on the argument), but it's still not totally clear how closely this is related to parametric polymorphism. One example is the work of Nathan Mishra-Linger with Tim Sheard (e.g. Erasure and Polymorphism in Pure Type Systems).
if forall is lambda ..., then what is exists
Why, tuple of course!
In Martin-Löf type theory you have Π-types, corresponding to functions/universal quantification, and Σ-types, corresponding to tuples/existential quantification.
Their types are very similar to what you have proposed (I am using Agda notation here):
Π : (A : Set) -> (A -> Set) -> Set
Σ : (A : Set) -> (A -> Set) -> Set
Indeed, Π is an infinite product and Σ is an infinite sum. Note that they are not "intersection" and "union" though, as you proposed, because you can't define those without additionally specifying where the types intersect (i.e. which values of one type correspond to which values of the other type).
From these two type constructors you can have all of normal, polymorphic and dependent functions, normal and dependent tuples, as well as existentially and universally-quantified statements:
-- Normal function, corresponding to "Integer -> Integer" in Haskell
factorial : Π ℕ (λ _ → ℕ)
-- Polymorphic function corresponding to "forall a . a -> a"
id : Π Set (λ A → Π A (λ _ → A))
-- A universally-quantified logical statement: all natural numbers n are equal to themselves
refl : Π ℕ (λ n → n ≡ n)
-- (Integer, Integer)
twoNats : Σ ℕ (λ _ → ℕ)
-- exists a. Show a => a
someShowable : Σ Set (λ A → Σ A (λ _ → Showable A))
-- There are prime numbers
aPrime : Σ ℕ IsPrime
However, this does not address parametricity at all and AFAIK parametricity and Martin-Löf type theory are independent.
For parametricity, people usually refer to Philip Wadler's work.
