Type constructors and existential types - Haskell

Only polymorphic functions can be applied to values of existential types.
These properties can be expressed by the corresponding quantifiers for expressions, and characterized by natural transformations.
Similarly, when we define a type constructor
data List a = Nil | Cons a (List a)
This type constructor works for all a, whereas type families allow us to define non-uniform type constructors:
type family TRes i
type instance TRes Bool = String
type instance TRes String = Bool
What natural transformation characterizes precisely this idea of "uniformity" at the type level?
Is there an equivalent of forcing naturality like we have at the value level with rank-n types?
ApplyNat :: (forall a. a -> F a) -> b -> F b

I think you've confused a couple of different ideas here.
This type constructor works for all a.
That's totality. List :: * -> * produces a valid type of kind * given any argument a of kind *. Haskell 98 datatypes are always total, but, as you point out, in modern Haskell you can write type families which don't cover all possible cases. TRes Int is not a "real" type, in the sense that it contains no values, it doesn't reduce to any other type, and it's not equal to any type other than TRes Int.
Haskell has no totality checker at the value level or the type level (apart from the rules about undecidable instances, which are a blunt instrument), so, just as there is no way to rule out undefined values, there is no way to rule out "stuck" type families like TRes Int. (For more on "stuck" type families see this blog post by Richard Eisenberg, the designer of TypeInType.)
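One way to observe this stuckness directly is GHCi's :kind! command, which normalizes a type as far as it can; a sketch, assuming the TRes family above is in scope:
-- ghci> :kind! TRes Int
-- TRes Int :: *
-- = TRes Int    -- stuck: no instance matches, so it never reduces further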
Naturality is an altogether different idea. In value-level Haskell, a natural transformation between f and g is a polymorphic function mapping values of type f x to values of type g x, without knowing anything about x.
type f ~> g = forall x. f x -> g x
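For concreteness, here is a small natural transformation under that synonym (safeHead is my own illustrative example, not from the question):
{-# LANGUAGE RankNTypes, TypeOperators #-}

type f ~> g = forall x. f x -> g x

-- safeHead cannot inspect x, so it commutes with fmap:
-- fmap h . safeHead == safeHead . fmap h, for any h :: a -> b.
safeHead :: [] ~> Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x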
With GHC 8 and TypeInType we can talk about kinds using the same language we use to talk about types, because kinds are types. The type expression forall x. f x -> g x has kind *, and (~>) itself has the polymorphic kind forall k. (k -> *) -> (k -> *) -> *, so it's a perfectly valid classifier for types as well. A type with that kind is a polymorphic type function mapping types of kind f x to types of kind g x.
What would you use a type-level natural transformation for, in the real world? I dunno. You wouldn't, probably.


What is the relationship between polymorphism's rank and (im)predicativity?
Can rank-1 polymorphism be either predicative or impredicative?
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
My confusions come from:
Why does https://en.wikipedia.org/wiki/Parametric_polymorphism mention predicativity under rank-1 polymorphism? (Seems to me rank-1 implies predicativity)
Rank-1 (prenex) polymorphism
In a prenex polymorphic system, type variables may not be instantiated with polymorphic types.[4] This is very similar to what is called "ML-style" or "Let-polymorphism" (technically ML's Let-polymorphism has a few other syntactic restrictions). This restriction makes the distinction between polymorphic and non-polymorphic types very important; thus in predicative systems polymorphic types are sometimes referred to as type schemas to distinguish them from ordinary (monomorphic) types, which are sometimes called monotypes. A consequence is that all types can be written in a form that places all quantifiers at the outermost (prenex) position. For example, consider the append function described above, which has type
forall a. [a] × [a] -> [a]
In order to apply this function to a pair of lists, a type must be substituted for the variable a in the type of the function such that the type of the arguments matches up with the resulting function type. In an impredicative system, the type being substituted may be any type whatsoever, including a type that is itself polymorphic; thus append can be applied to pairs of lists with elements of any type—even to lists of polymorphic functions such as append itself. Polymorphism in the language ML is predicative.[citation needed] This is because predicativity, together with other restrictions, makes the type system simple enough that full type inference is always possible.
As a practical example, OCaml (a descendant or dialect of ML) performs type inference and supports impredicative polymorphism, but in some cases when impredicative polymorphism is used, the system's type inference is incomplete unless some explicit type annotations are provided by the programmer.
...
Predicative polymorphism
In a predicative parametric polymorphic system, a type τ containing a type variable α may not be used in such a way that α is instantiated to a polymorphic type. Predicative type theories include Martin-Löf Type Theory and NuPRL.
https://wiki.haskell.org/Impredicative_types :
Impredicative types are an advanced form of polymorphism, to be
contrasted with rank-N types.
Standard Haskell allows polymorphic types via the use of type variables, which are understood to be universally quantified: id :: a -> a means "for all types a, id can take an argument and return a result of that type". All universal quantifiers ("for all"s) must appear at the beginning of a type.
Higher-rank polymorphism (e.g. rank-N types) allows universal quantifiers to appear inside function types as well. It turns out that appearing to the right of function arrows is not interesting: Int -> forall a. a -> [a] is actually the same as forall a. Int -> a -> [a]. However, higher-rank polymorphism allows quantifiers to the left of function arrows, too, and (forall a. [a] -> Int) -> Int really is different from forall a. ([a] -> Int) -> Int.
Impredicative types take this idea to its natural conclusion: universal quantifiers are allowed anywhere in a type, even inside normal datatypes like lists or Maybe.
Thanks.
Can rank-1 polymorphism be either predicative or impredicative?
No, rank-1 polymorphism is always predicative, because any forall quantifiers do not appear as arguments to type constructors, that is, quantifiers are “prenex”.
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
Higher-rank polymorphism is always impredicative; the RankNTypes extension enables impredicative polymorphism only for the (->) constructor, that is, given a type a -> b, a or b may be instantiated with a type containing foralls. We typically refer to such types as higher-rank only when a contains foralls, because (except for TypeApplications) X -> forall t. Y is equivalent to forall t. X -> Y.
General impredicative polymorphism (with the broken ImpredicativeTypes extension) is not supported. For example, you can’t write Maybe (forall a. [a] -> [a]). This is essentially because it’s difficult to automatically determine when to generalise and when to instantiate that quantifier. Fortunately, you can make this explicit using a newtype wrapper to “hide” the impredicativity, or rather, make it clear to the compiler what you want to do about quantifiers, e.g.:
{-# LANGUAGE RankNTypes #-}
newtype ListTransform = ListTransform { unLT :: forall a. [a] -> [a] }
f :: Maybe ListTransform -> [Int] -> [Char] -> ([Int], [Char])
f Nothing is cs = (is, cs)
f (Just (ListTransform t)) is cs = (t is, t cs)
-- or: f (Just lt) is cs = (unLT lt is, unLT lt cs)
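A quick usage sketch (my own example; reverse is fully polymorphic, so it fits inside the wrapper):
demo :: ([Int], [Char])
demo = f (Just (ListTransform reverse)) [1, 2, 3] "abc"
-- ([3,2,1],"cba")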

What does Functor's fmap tell us about types?

What do f a and f b tell me about their types?
class Functor f where
fmap :: (a -> b) -> f a -> f b
I think I get the idea behind standard instances of a functor. However, I'm having a hard time understanding what f a and f actually represent.
I understand that f a and f b are just types and they must carry information what type constructor was used to create them and type arguments that were used.
Is f a type constructor of kind * -> *? Is (->) r a type constructor just like Maybe is?
I understand that f a and f b are just types and they must carry information what type constructor was used to create them and type arguments that were used.
Good explanation.
Is f a type constructor of kind * -> *?
In effect.
Is (->) r a type constructor just like Maybe is?
In effect, yes:
Yes in the sense that you can apply it to a type like String and get r -> String, just like you can apply Maybe to String to get Maybe String. You can use anything for f that gives you a type from any other type.
..but no...
No, in the sense that Daniel Wagner points out: to be precise, Maybe and [] are type constructors, but (->) r and Either a are sort of like partially applied type constructors. Nevertheless they make good functors, because you can freely apply functions "inside" them and change the type of "the contents".
(Stuff in inverted commas is very hand-wavy imprecise terminology.)
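For instance, fmap over the partially applied constructor Either String changes only "the contents" on the Right side (a small sketch of my own, using the Functor instance from base):
egEither :: Either String Int
egEither = fmap (+ 1) (Right 41)  -- Right 42; a Left value would pass through unchanged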
My (possibly mildly tortured) reading of chapter 4 of the Haskell 2010 Report is that Maybe and (->) r are both types, of kind * -> *. Alternatively, the Report also labels them as type expressions—but I can't discern a firm difference in how the Report uses the two terms, except perhaps for surface syntax details. (->) and Maybe are type constructors; type expressions are assembled from type constructors and type variables.
For example, section 4.1.1 ("Kinds") of the 2010 report says (my boldface):
To ensure that they are valid, type expressions are classified into different kinds, which take one of two possible forms:
The symbol ∗ represents the kind of all nullary type constructors.
If κ1 and κ2 are kinds, then κ1 → κ2 is the kind of types that take a type of kind κ1 and return a type of kind κ2.
Section 4.3.2, "Instance Declarations" (my boldface):
An instance declaration that makes the type T to be an instance of class C is called a C-T instance declaration and is subject to these static restrictions:
A type may not be declared as an instance of a particular class more than once in the program.
The class and type must have the same kind; this can be determined using kind inference as described in Section 4.6.
So going by that language, the following instance declaration makes the type (->) r to be an instance of the class Functor:
instance Functor ((->) r) where
fmap f g = f . g
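With that instance (it's the same one base ships), fmap on functions is just composition; a small usage sketch of my own:
egFn :: Int
egFn = fmap (+ 1) (* 2) 3  -- (+ 1) ((* 2) 3) = 7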
The funny thing about this terminology is that we call (->) r a "type" even though there are no expressions in Haskell that have that type—not even undefined:
foo :: (->) r
foo = undefined
{-
[1 of 1] Compiling Main ( ../src/scratch.hs, interpreted )
../src/scratch.hs:1:8:
Expecting one more argument to `(->) r'
In the type signature for `foo': foo :: (->) r
-}
But I think that's not a big deal. Basically, all declarations in Haskell must have types of kind *.
As a side note, from my limited understanding of dependently typed languages, many of these lack Haskell's firm distinction between terms and types, so that something like (->) Boolean is an expression whose value is a function that takes a type as its argument and produces a type as its result.

Are GHC's Type Families An Example of System F-omega?

I'm reading up about the Lambda-Cube, and I'm particularly interested in System F-omega, which allows for "type operators" i.e. types depending on types. This sounds a lot like GHC's type families. For example
type family Foo a
type instance Foo Int = Int
type instance Foo Float = ...
...
where the actual type depends on the type parameter a. Am I right in thinking that type families are an example of the type operators a la System F-omega? Or am I out in left field?
System F-omega allows universal quantification, abstraction and application at higher kinds, so not only over types (at kind *), but also at kinds k1 -> k2, where k1 and k2 are themselves kinds generated from * and ->. Hence, the type level itself becomes a simply typed lambda-calculus.
Haskell delivers slightly less than F-omega, in that the type system allows quantification and application at higher kinds, but not abstraction. Quantification at higher kinds is how we have types like
fmap :: forall f s t. Functor f => (s -> t) -> f s -> f t
with f :: * -> *. Correspondingly, variables like f can be instantiated with higher-kinded type expressions, such as Either String. The lack of abstraction makes it possible to solve unification problems in type expressions by the standard first-order techniques which underpin the Hindley-Milner type system.
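A concrete sketch of such an instantiation (my own example; TypeApplications just makes the choice of f explicit):
{-# LANGUAGE TypeApplications #-}

-- f is instantiated at the higher-kinded type expression Either String:
evens :: Either String Int -> Either String Bool
evens = fmap @(Either String) even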
However, type families are not really another means to introduce higher-kinded types, nor a replacement for the missing type-level lambda. Crucially, they must be fully applied. So your example,
type family Foo a
type instance Foo Int = Int
type instance Foo Float = ...
....
should not be considered as introducing some
Foo :: * -> * -- this is not what's happening
because Foo on its own is not a meaningful type expression. We have only the weaker rule that Foo t :: * whenever t :: *.
Type families do, however, act as a distinct type-level computation mechanism beyond F-omega, in that they introduce equations between type expressions. The extension of System F with equations is what gives us the "System Fc" which GHC uses today. Equations s ~ t between type expressions of kind * induce coercions transporting values from s to t. Computation is done by deducing equations from the rules you give when you define type families.
Moreover, you can give type families a higher-kinded return type, as in
type family Hoo a
type instance Hoo Int = Maybe
type instance Hoo Float = IO
...
so that Hoo t :: * -> * whenever t :: *, but still we cannot let Hoo stand alone.
The trick we sometimes use to get around this restriction is newtype wrapping:
newtype Noo i = TheNoo {theNoo :: Foo i}
which does indeed give us
Noo :: * -> *
but means that we have to apply the projection to make computation happen, so Noo Int and Int are provably distinct types, but
theNoo :: Noo Int -> Int
So it's a bit clunky, but we can kind of compensate for the fact that type families do not directly correspond to type operators in the F-omega sense.
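Putting the pieces together, a minimal compiling version of this trick might look like the following (a sketch of mine; I keep only the Int instance of Foo for brevity):
{-# LANGUAGE TypeFamilies #-}

type family Foo a
type instance Foo Int = Int

-- Wrapping the fully applied family gives back a standalone * -> * constructor.
newtype Noo i = TheNoo { theNoo :: Foo i }

unwrap :: Noo Int -> Int
unwrap = theNoo  -- computation happens via the equation Foo Int ~ Int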

Understanding Polytypes in Hindley-Milner Type Inference

I'm reading the Wikipedia article on Hindley–Milner Type Inference trying to make some sense out of it. So far this is what I've understood:
Types are classified as either monotypes or polytypes.
Monotypes are further classified as either type constants (like int or string) or type variables (like α and β).
Type constants can either be concrete types (like int and string) or type constructors (like Map and Set).
Type variables (like α and β) behave as placeholders for concrete types (like int and string).
Now I'm having a little difficulty understanding polytypes but after learning a bit of Haskell this is what I make of it:
Types themselves have types. Formally types of types are called kinds (i.e. there are different kinds of types).
Concrete types (like int and string) and type variables (like α and β) are of kind *.
Type constructors (like Map and Set) are lambda abstractions of types (e.g. Set is of kind * -> * and Map is of kind * -> * -> *).
What I don't understand is what quantifiers signify. For example what does ∀α.σ represent? I can't seem to make heads or tails of it and the more I read the following paragraph the more confused I get:
A function with polytype ∀α.α -> α by contrast can map any value of the same type to itself, and the identity function is a value for this type. As another example ∀α.(Set α) -> int is the type of a function mapping all finite sets to integers. The count of members is a value for this type. Note that quantifiers can only appear at top level, i.e. a type ∀α.α -> ∀α.α for instance, is excluded by the syntax of types, and that monotypes are included in the polytypes; thus a type has the general form ∀α₁ . . . ∀αₙ.τ.
First, kinds and polymorphic types are different things. You can have an HM type system where all types are of the same kind (*); you could also have a system without polymorphism but with complex kinds.
If a term M is of type ∀a.t, it means that for whatever type s, we can substitute s for a in t (often written t[a:=s]) and M will have the type t[a:=s]. This is somewhat similar to logic, where we can substitute any term for a universally quantified variable, but here we're dealing with types.
This is precisely what happens in Haskell; it's just that in Haskell you don't see the quantifiers. All type variables that appear in a type signature are implicitly quantified, just as if you had forall in front of the type. For example, map would have type
map :: forall a . forall b . (a -> b) -> [a] -> [b]
etc. Without this implicit universal quantification, type variables a and b would have to have some fixed meaning and map wouldn't be polymorphic.
The HM algorithm distinguishes types (without quantifiers, monotypes) and type schemas (universally quantified types, polytypes). It's important that in some places it uses type schemas (like in let), but in other places only types are allowed. This makes the whole thing decidable.
I also suggest you read the article about System F. It is a more complex system, which allows forall anywhere in types (therefore everything there is just called a type), but type inference/checking is undecidable. It can help you understand how forall works. System F is described in depth in Girard, Lafont and Taylor, Proofs and Types.
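A standard Haskell illustration of that split (my own example): let-bound names are generalised to type schemas, while lambda-bound names get only monotypes.
-- Accepted: f is let-bound, so it gets the schema forall a. a -> a
-- and can be used at two different types in the body.
ok :: (Bool, Char)
ok = let f = \x -> x in (f True, f 'a')

-- Rejected under Hindley-Milner: g is lambda-bound, so it has a monotype
-- and cannot be used at both Bool and Char:
-- bad = (\g -> (g True, g 'a')) (\x -> x)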
Consider l = \x -> t in Haskell. It is a lambda, which represents a term t with a variable x, which will be substituted later (e.g. l 1, whatever that would mean). Similarly, ∀α.σ represents a type with a type variable α; that is, f : ∀α.σ is a function parameterized by a type α. In some sense, σ depends on α, so f returns a value of type σ(α), where α will be substituted in σ(α) later, and we will get some concrete type.
In Haskell you are allowed to omit ∀ and define functions just like id :: a -> a. The reason omitting the quantifier is allowed is basically that quantifiers may appear only at top level (without the RankNTypes extension). You can try this piece of code:
id2 :: a -> a -- I named it id2 since id is already defined in Prelude
id2 x = x
If you ask GHCi for the type of id2 (:t id2), it will return a -> a. To be more precise (more type-theoretically), id2 has the type ∀a. a -> a. Now, if you add to your code:
val = id2 3
then 3 has the type Int, so Int will be substituted for α and we will get the concrete type Int -> Int.

What are type quantifiers?

Many statically typed languages have parametric polymorphism. For example in C# one can define:
T Foo<T>(T x){ return x; }
In a call site you can do:
int y = Foo<int>(3);
These types are also sometimes written like this:
Foo :: forall T. T -> T
I have heard people say "forall is like lambda-abstraction at the type level". So Foo is a function that takes a type (for example int), and produces a value (for example a function of type int -> int). Many languages infer the type parameter, so that you can write Foo(3) instead of Foo<int>(3).
Suppose we have an object f of type forall T. T -> T. What we can do with this object is first pass it a type Q by writing f<Q>. Then we get back a value with type Q -> Q. However, certain f's are invalid. For example this f:
f<int> = (x => x+1)
f<T> = (x => x)
So if we "call" f<int> then we get back a value with type int -> int, and in general if we "call" f<Q> then we get back a value with type Q -> Q, so that's good. However, it is generally understood that this f is not a valid thing of type forall T. T -> T, because it does something different depending on which type you pass it. The idea of forall is that this is explicitly not allowed. Also, if forall is lambda for the type level, then what is exists? (i.e. existential quantification). For these reasons it seems that forall and exists are not really "lambda at the type level". But then what are they? I realize this question is rather vague, but can somebody clear this up for me?
A possible explanation is the following:
If we look at logic, quantifiers and lambda are two different things. An example of a quantified expression is:
forall n in Integers: P(n)
So there are two parts to forall: a set to quantify over (e.g. Integers), and a predicate (e.g. P). Forall can be viewed as a higher order function:
forall n in Integers: P(n) == forall(Integers,P)
With type:
forall :: Set<T> -> (T -> bool) -> bool
Exists has the same type. Forall is like an infinite conjunction, where S[n] is the n-th element of the set S:
forall(S,P) = P(S[0]) ∧ P(S[1]) ∧ P(S[2]) ...
Exists is like an infinite disjunction:
exists(S,P) = P(S[0]) ∨ P(S[1]) ∨ P(S[2]) ...
If we do an analogy with types, we could say that the type analogue of ∧ is computing the intersection type ∩, and the type analogue of ∨ is computing the union type ∪. We could then define forall and exists on types as follows:
forall(S,P) = P(S[0]) ∩ P(S[1]) ∩ P(S[2]) ...
exists(S,P) = P(S[0]) ∪ P(S[1]) ∪ P(S[2]) ...
So forall is an infinite intersection, and exists is an infinite union. Their types would be:
forall, exists :: Set<T> -> (T -> Type) -> Type
For example, here is the type of the polymorphic identity function, where Types is the set of all types, -> is the type constructor for functions, and => is lambda abstraction:
forall(Types, t => (t -> t))
Now a thing of type forall T:Type. T -> T is a value, not a function from types to values. It is a value whose type is the intersection of all types T -> T where T ranges over all types. When we use such a value, we do not have to apply it to a type. Instead, we use a subtype judgement:
id :: forall T:Type. T -> T
id = (x => x)
id2 = id :: int -> int
This downcasts id to have type int -> int. This is valid because int -> int also appears in the infinite intersection.
This works out nicely I think, and it clearly explains what forall is and how it is different from lambda, but this model is incompatible with what I have seen in languages like ML, F#, C#, etc. For example in F# you do id<int> to get the identity function on ints, which does not make sense in this model: id is a function on values, not a function on types that returns a function on values.
Can somebody with knowledge of type theory explain what exactly are forall and exists? And to what extent is it true that "forall is lambda at the type level"?
Let me address your questions separately.
Calling forall "a lambda at the type level" is inaccurate for two reasons. First, it is the type of a lambda, not the lambda itself. Second, that lambda lives on the term level, even though it abstracts over types (lambdas on the type level exist as well, they provide what is often called generic types).
Universal quantification does not necessarily imply "same behaviour" for all instantiations. That is a particular property called "parametricity" that may or may not be present. The plain polymorphic lambda calculus is parametric, because you simply cannot express any non-parametric behaviour. But if you add constructs like typecase (a.k.a. intensional type analysis) or checked casts as a weaker form of that, then you lose parametricity. Parametricity implies nice properties, e.g. it allows a language to be implemented without any runtime representation of types. And it induces very strong reasoning principles, see e.g. Wadler's paper "Theorems for free!". But it's a trade-off, sometimes you want dispatch on types.
Existential types essentially denote pairs of a type (the so-called witness) and a term, sometimes called packages. One common way to view these is as implementation of abstract data types. Here is a simple example:
pack (Int, (λx. x, λx. x)) : ∃ T. (Int → T) × (T → Int)
This is a simple ADT whose representation is Int and that only provides two operations (as a nested tuple), for converting ints in and out of the abstract type T. This is the basis of type theories for modules, for example.
In summary, universal quantification provides client-side data abstraction, while existential types dually provide implementor-side data abstraction.
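In Haskell, the same package can be written with an existential datatype; a sketch of mine mirroring the pack example above (the names IntIso, inj, proj are made up):
{-# LANGUAGE ExistentialQuantification #-}

-- ∃ T. (Int → T) × (T → Int): the witness type t stays hidden from clients.
data IntIso = forall t. IntIso (Int -> t) (t -> Int)

impl :: IntIso
impl = IntIso id id  -- witness t = Int, as in the pack example

roundTrip :: IntIso -> Int -> Int
roundTrip (IntIso inj proj) = proj . inj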
As an additional remark, in the so-called lambda cube, forall and arrow are generalised to the unified notion of Π-type (where T1→T2 = Π(x:T1).T2 and ∀A.T = Π(A:*).T) and likewise exists and tupling can be generalised to Σ-types (where T1×T2 = Σ(x:T1).T2 and ∃A.T = Σ(A:*).T). Here, the type * is the "type of types".
A few remarks to complement the two already-excellent answers.
First, one cannot say that forall is lambda at the type level, because there already is a notion of lambda at the type level, and it is different from forall. It appears in System F_omega, an extension of System F with type-level computation, that is useful to explain ML module systems for example (F-ing modules, by Andreas Rossberg, Claudio Russo and Derek Dreyer, 2010).
In (a syntax for) System F_omega you can write for example:
type prod =
  lambda (a : *). lambda (b : *).
    forall (c : *). (a -> b -> c) -> c
This is a definition of the "type constructor" prod, such that prod a b is the type of the Church encoding of the product type (a, b). If there is computation at the type level, then you need to control it if you want to ensure termination of type-checking (otherwise you could define the type (lambda t. t t) (lambda t. t t)). This is done by using a "type system at the type level", or a kind system. prod would be of kind * -> * -> *. Only types at kind * can be inhabited by values; types at higher kinds can only be applied at the type level. lambda (c : k) . ... is a type-level abstraction that cannot be the type of a value, and may live at any kind of the form k -> ..., while forall (c : k) . ... classifies values that are polymorphic in some type c : k and necessarily has the ground kind *.
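Haskell lacks type-level lambda, but the same Church-encoded product can be sketched with a rank-2 type synonym (the names Prod, pair, fstP are my own):
{-# LANGUAGE RankNTypes #-}

type Prod a b = forall c. (a -> b -> c) -> c

pair :: a -> b -> Prod a b
pair x y = \k -> k x y

fstP :: Prod a b -> a
fstP p = p (\x _ -> x)  -- instantiate c := a and project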
Second, there is an important difference between the forall of System F and the Pi-types of Martin-Löf type theory. In System F, polymorphic values do the same thing on all types. As a first approximation, you could say that a value of type forall a . a -> a will (implicitly) take a type t as input and return a value of type t -> t. But that suggests that there may be some computation happening in the process, which is not the case. Morally, when you instantiate a value of type forall a. a -> a into a value of type t -> t, the value does not change. There are three (related) ways to think about it:
System F quantification has type erasure: you can forget about the types and you will still know what the dynamic semantics of the program is. When we use ML type inference to leave the polymorphism abstraction and instantiation implicit in our programs, we don't really let the inference engine "fill holes in our program", if you think of a "program" as the dynamic object that will be run and compute.
A forall a . foo is not something that "produces an instance of foo for each type a", but a single type foo that is "generic in an unknown type a".
You can explain universal quantification as an infinite conjunction, but there is a uniformity condition that all conjuncts have the same structure, and in particular that their proofs are all alike.
By contrast, Pi-types in Martin-Löf type theory are really more like function types that take something and return something. That's one of the reasons why they can easily be used not only to depend on types, but also to depend on terms (dependent types).
This has very important implications once you're concerned about the soundness of those formal theories. System F is impredicative (a forall-quantified type quantifies over all types, itself included), and the reason why it's still sound is this uniformity of universal quantification. While introducing non-parametric constructs is reasonable from a programmer's point of view (and we can still reason about parametricity in a generally non-parametric language), it very quickly destroys the logical consistency of the underlying static reasoning system. Martin-Löf's predicative theory is much simpler to prove correct and to extend in a correct way.
For a high-level description of this uniformity/genericity aspect of System F, see Fruchart and Longo's 1997 article Carnap's Remarks on Impredicative Definitions and the Genericity Theorem. For a more technical study of System F's failure in the presence of non-parametric constructs, see Parametricity and Variants of Girard's J Operator by Robert Harper and John Mitchell (1999). Finally, for a description, from a language design point of view, of how to abandon global parametricity to introduce non-parametric constructs but still be able to locally discuss parametricity, see Non-Parametric Parametricity by Georg Neis, Derek Dreyer and Andreas Rossberg, 2011.
This discussion of the difference between "computational abstraction" and "uniform abstraction" has been revived by the large amount of work on representing variable binders. A binding construction feels like an abstraction (and can be modeled by a lambda-abstraction in HOAS style) but has a uniform structure that makes it more like a data skeleton than a family of results. This has been much discussed, for example in the LF community: "representational arrows" in Twelf, "positive arrows" in Licata and Harper's work, etc.
Recently there have been several people working on the related notion of "irrelevance" (lambda-abstractions where the result "does not depend" on the argument), but it's still not totally clear how closely this is related to parametric polymorphism. One example is the work of Nathan Mishra-Linger with Tim Sheard (e.g. Erasure and Polymorphism in Pure Type Systems).
if forall is lambda ..., then what is exists
Why, tuple of course!
In Martin-Löf type theory you have Π types, corresponding to functions/universal quantification and Σ-types, corresponding to tuples/existential quantification.
Their types are very similar to what you have proposed (I am using Agda notation here):
Π : (A : Set) -> (A -> Set) -> Set
Σ : (A : Set) -> (A -> Set) -> Set
Indeed, Π is an infinite product and Σ is an infinite sum. Note that they are not "intersection" and "union" though, as you proposed, because you can't do that without additionally defining where the types intersect (which values of one type correspond to which values of the other type).
From these two type constructors you can have all of normal, polymorphic and dependent functions, normal and dependent tuples, as well as existentially and universally-quantified statements:
-- Normal function, corresponding to "Integer -> Integer" in Haskell
factorial : Π ℕ (λ _ → ℕ)
-- Polymorphic function corresponding to "forall a . a -> a"
id : Π Set (λ A → Π A (λ _ → A))
-- A universally-quantified logical statement: all natural numbers n are equal to themselves
refl : Π ℕ (λ n → n ≡ n)
-- (Integer, Integer)
twoNats : Σ ℕ (λ _ → ℕ)
-- exists a. Show a => a
someShowable : Σ Set (λ A → Σ A (λ _ → Showable A))
-- There are prime numbers
aPrime : Σ ℕ IsPrime
However, this does not address parametricity at all, and AFAIK parametricity and Martin-Löf type theory are independent.
For parametricity, people usually refer to Philip Wadler's work.
