let a = b in c can be thought of as syntactic sugar for (\a -> c) b, but in a typed setting this is not the case in general. For example, in the Milner calculus let a = \x -> x in (a True, a 1) is typable, but the seemingly equivalent (\a -> (a True, a 1)) (\x -> x) is not.
However, the latter is typable in System F with a rank 2 type for the first lambda.
My questions are:
Is let polymorphism a rank 2 feature that sneaked secretly into the otherwise rank 1 world of the Milner calculus?
The purpose of having a separate let construct seems to be to specify which types should be generalized by the type checker and which should not. Does it serve any other purposes? Are there any reasons to extend more powerful systems, e.g. System F, with a separate let which is not sugar? Are there any papers on the rationale behind the design of the Milner calculus, which no longer seems obvious to me?
Is there the most general type for \a -> (a True, a 1) in System F?
Are there type systems closed under beta expansion? I.e. if P is typable and M N = P then M is typable?
Some clarifications:
By equivalence I mean equivalence modulo type annotations. Is 'System F a la Church' the correct term for that?
I know that in general the principal typing property doesn't hold in F, but a principal type could exist for my particular term.
By let I mean the non-recursive flavour of let. Extension of system F with recursive let is obviously useful as it allows for non-termination.
With respect to the four questions asked:
A key insight in this matter is that rather than just typing a lambda-abstraction with a potentially polymorphic argument type, we are typing a (sugared) abstraction that is (1) applied exactly once and, moreover, that is (2) applied to a statically known argument. That is, we can first subject the "argument" (i.e. the definiens of the local definition) to type reconstruction to find its (polymorphic) type; then assign the found type to the "parameter" (the definiendum); and then, finally, type the body in the extended type context.
Note that this is considerably easier than general rank-2 type inference.
Note that, strictly speaking, let .. = .. in .. is only syntactic sugar in System F if you demand that the definiendum carries a type annotation: let .. : .. = .. in .. .
Here are two solutions for T in (\a :: T -> (a True, a 1)) in System F: forall b. (forall a. a -> b) -> (b, b) and forall c d. (forall a b. a -> b) -> (c, d). Note that neither one of them is more general than the other. In general, System F does not admit principal types.
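For concreteness, here is a sketch of the two typings in GHC Haskell with RankNTypes (the names g1 and g2 are mine, and the literal is annotated as Int just to keep the snippet self-contained):

{-# LANGUAGE RankNTypes #-}

g1 :: forall b. (forall a. a -> b) -> (b, b)
g1 a = (a True, a (1 :: Int))

g2 :: forall c d. (forall a b. a -> b) -> (c, d)
g2 a = (a True, a (1 :: Int))

GHC accepts both definitions, and neither signature subsumes the other.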
I suppose this holds for the simply typed lambda-calculus?
Types are not preserved under beta-expansion in any calculus that can express the concept of "dead code". You can probably figure out how to write something similar to this in any usable language:
if True then something typable else utter nonsense
For example, let M = (\x y -> x) (something typable) and N = (utter nonsense) and P = (something typable), so that M N = P, and P is typable, but M N isn't.
...rereading your question, I see that you only demand that M be typable, but that seems like a very strange meaning to give to "preserved under beta-expansion" to me. Anyway, I don't see why some argument like the above couldn't apply: simply let M have some untypable dead code in it.
You could type (\a -> (a True, a 1)) (\x -> x) if, instead of generalizing only let expressions, you generalized all lambda abstractions. Having done so, one also needs to instantiate type schemas at every use point, not simply at the point where the binder which refers to them is actually used. I don't think there's any problem with this actually, outside of the fact that it's vastly less efficient. I recall some discussion of this in TAPL, in fact, making similar points.
I recall many years ago seeing in a book about lambda calculus (possibly Barendregt) a type system preserved by beta expansion. It had no quantification, but it had conjunction (intersection) to express that a term needed to be of more than one type simultaneously, as well as a special type omega which every term inhabited. As I recall, the latter avoids Daniel Wagner's dead code objection. While every expression was well-typed, restricting the position of omega in the type allowed you to characterize which expressions had (weak?) head normal forms.
Also if I recall correctly, fully normal form expressions had principal types, which did not contain omega.
For example the principal type of \f x -> f (f x) (the Church numeral 2) would be something like ((A -> B) /\ (B -> C)) -> A -> C
Not able to answer all your very specialized questions, but no, it's not a rank 2 feature. As you write, it's just that let definitions are being quantified, which yields a fully polymorphic rank-1 type unless the definition depends on some monomorphic value in an outer scope.
Please also note that Haskell let is known as let rec in other languages and allows definition of mutually recursive functions and values. This is something you would not want to code manually with lambda-expressions and Y-combinators.
I am a mathematician who works a lot with category theory, and I've been using Haskell for a while to perform certain computations etc., but I am definitely not a programmer. I really love Haskell and want to become much more fluent in it, and the type system is something that I find especially great to have in place when writing programs.
However, I've recently been trying to implement category theoretic things, and am running into problems concerning the fact that you seemingly can't have class method laws in Haskell. In case my terminology here is wrong, what I mean is that I can write
class Monoid c where
  id :: c -> c
  m :: c -> c -> c
but I can't write some law along the lines of
m (m x y) z == m x $ m y z
From what I gather, this is due to the lack of dependent types in Haskell, but I'm not sure how exactly this is the case (having now read a bit about dependent types). It also seems that the convention is just to include laws like this in comments and hope that you don't accidentally cook up some instance that doesn't satisfy them.
How should I change my approach to Haskell to deal with this problem? Is there a nice mathematical/type-theoretic solution (for example, require the existence of an associator that is an isomorphism (though then the question is, how do we encode isomorphisms without a law?)); is there some 'hack' (using extensions such as DataKinds); should I be drastic and switch to using something like Idris instead; or is the best response to just change the way I think about using Haskell (i.e. accept that these laws can't be implemented in a Haskelly way)?
(bonus) How exactly does the lack of laws come from not supporting dependent types?
You want to require that:
m (m x y) z = m x (m y z) -- (1)
But to require this you need a way to check it. So you, or your compiler (or proof assistant), need to construct a proof of this. And the question is, what type is a proof of (1)?
One could imagine some Proof type, but then maybe you could just construct a proof that 0 = 0 instead of a proof of (1), and both would have type Proof. So you'd need a more expressive type. I can't decide how to break up the rest of the question so I'll go for a super brief explanation of the Curry-Howard isomorphism followed by an explanation of how to prove two things are equal and then how dependent types are relevant.
The Curry-Howard isomorphism says that propositions are isomorphic to types and proofs are isomorphic to programs: a type corresponds to a proposition and a proof of that proposition corresponds to a program constructing a value inhabiting that type. Setting aside the question of exactly which propositions can be expressed as types, an example would be that the type A * B (written (A, B) in Haskell) corresponds to the proposition "A and B," while the type A + B (written Either A B in Haskell) corresponds to the proposition "A or B." Finally, the type A -> B corresponds to "A implies B," as a proof of this is a program which takes evidence of A and gives you evidence of B. One should note that there isn't a way to express not A, but one could imagine adding a type Not A with builtins of type Either a (Not a) for the law of the excluded middle as well as Not (Not a) -> a, and a * Not a -> Void (where Void is a type which cannot be inhabited and therefore corresponds to false), but then one can't really run these programs to get constructivist proofs.
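To make these correspondences concrete, here is a minimal Haskell sketch (the names are mine; each type signature is a proposition and each definition is its proof):

andIntro :: a -> b -> (a, b)       -- from A and B, conclude "A and B"
andIntro x y = (x, y)

orLeft :: a -> Either a b          -- from A, conclude "A or B"
orLeft = Left

modusPonens :: (a -> b) -> a -> b  -- from "A implies B" and A, conclude B
modusPonens f x = f x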
Now we will ignore some realities of Haskell and imagine that there aren't ways around these rules (in particular, undefined :: a says everything is true, unsafeCoerce :: a -> b says that anything implies anything else, and other functions that don't return exist without their existence implying the corresponding proof).
So we know how to combine propositions but what might a proposition be? Well one could be to say that two types are equal. In Haskell this corresponds to the GADT
data Eq a b where
  Refl :: Eq c c
Where this constructor corresponds to the reflexive property of equality.
[side note: if you’re still interested so far, you may be interested to look up Voevodsky’s univalent foundations, depending on how much the idea of “Homotopy type theory” interests you]
So can we prove something now? How about the transitive property of equality:
trans :: Eq a b -> Eq b c -> Eq a c
trans x y =
  case x of
    Refl -> -- by this match being successful, the compiler now knows that a = b
      case y of
        Refl -> -- and now b = c and so the compiler knows a = c
          Refl -- the compiler knows that this is of type Eq d d, and as it knows a = c, this typechecks as Eq a c
This feels like one hasn’t really proven anything (especially as this mainly relies on the compiler knowing the transitive and symmetric properties), but one gets a similar feeling when proving simple things in logic as well.
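Another tiny proof in the same style is symmetry; again the compiler does the real work once the match on Refl reveals that a = b:

sym :: Eq a b -> Eq b a
sym Refl = Refl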
So now how might you prove the original proposition (1)? Well, let's imagine we want a type c to be a monoid; then we should also prove that $\forall x,y,z:c,\ m\ (m\ x\ y)\ z = m\ x\ (m\ y\ z).$ So we need a way to express m (m x y) z as a type. Strictly speaking this isn't dependent types (it can be done with DataKinds to promote values, and type families instead of functions). But you do need dependent types to have types depend on values. Specifically, if you have a type Nat of natural numbers and a type family Vec :: Nat -> * (* is the kind (read: type) of all types) of fixed-length vectors, you could define a dependently typed function mkVec :: (n::Nat) -> Vec n. Observe how the type of the output depends on the value of the input.
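As a rough sketch of how GHC approximates this today with DataKinds and GADTs (the names Nat, Vec, SNat and mkVec below are mine, chosen to mirror the text):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat   -- DataKinds promotes Z and S to the type level

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- GHC has no true dependent function (n :: Nat) -> Vec n, so the value is
-- passed as a singleton that mirrors it at the type level:
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

mkVec :: a -> SNat n -> Vec n a
mkVec _ SZ     = VNil
mkVec x (SS n) = VCons x (mkVec x n)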
So your law needs to have functions promoted to type level (skipping the questions about how one defines type equality and value equality), as well as dependent types (made up syntax):
class Monoid c where
  e :: c
  (*) :: c -> c -> c
  idl :: (x::c) -> Eq x (e * x)
  idr :: (x::c) -> Eq x (x * e)
  assoc :: (x::c) -> (y::c) -> (z::c) -> Eq ((x * y) * z) (x * (y * z))
Observe how types tend to become large with dependent types and proofs. In a language missing typeclasses one could put such values into a record.
A final note on the theory of dependent types and how these correspond to the Curry-Howard isomorphism.
Dependent types can be considered an answer to the question: what types correspond to the propositions $\forall x\in S\quad P(x)$ and $\exists y\in T\quad Q(y)?$
The answer is that you create new ways to make types: the dependent product and the dependent sum (coproduct). The dependent product expresses “for all values $x$ of type $S,$ there is a value of type $P(x).$” A normal product would be a dependent product with $S=2,$ a type inhabited by two values. A dependent product might be written (x:T) -> P x. A dependent sum says “some value $y$ of type $T$, paired with a value of type $Q(y).$” this might be written (y:T) * Q y.
One can think of these as a generalisation of arbitrarily indexed (co)products from Set to general categories, where one might sensibly write e.g. $\prod_\Lambda X(\lambda),$ and sometimes such notation is used in type theory.
In case of explicit type annotations Haskell checks whether the inferred type is at least as polymorphic as its signature, or in other words, whether the inferred type is a subtype of the explicit one. Hence, the following functions are ill-typed:
foo :: a -> b
foo x = x
bar :: (a -> b) -> a -> c
bar f x = f x
In my scenario, however, I only have a function signature and need to verify, whether it is "inhabited" by a potential implementation - hopefully, this explanation makes sense at all!
Due to the parametricity property I'd assume that neither foo nor bar has an implementation and that, consequently, both should be rejected. But I don't know how to conclude this programmatically.
The goal is to sort out all, or at least a subset, of the invalid type signatures like the ones above. I am grateful for every hint.
You might want to have a look at the Curry-Howard correspondence.
Basically, types in functional programs correspond to logical formulas.
Just replace -> with implication, (,) with conjunction (AND), and Either with disjunction (OR). Inhabited types are exactly those having a corresponding formula which is a tautology in intuitionistic logic.
There are algorithms which can decide provability in intuitionistic logic (e.g. exploiting cut-elimination in Gentzen's sequents), but the problem is PSPACE-complete, so in general we can't work with very large types. For medium-sized types, though, the cut-elimination algorithm works fine.
If you only want a subset of not-inhabited types, you can restrict to those having a corresponding formula which is NOT a tautology in classical logic. This is correct, since intuitionistic tautologies are also classical ones. Checking whether a formula P is not a classical tautology can be done by asking whether not P is a satisfiable formula. So, the problem is in NP. Not much, but better than PSPACE-complete.
For instance, both the above-mentioned types
a -> b
(a -> b) -> a -> c
are clearly NOT tautologies! Hence they are not inhabited.
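Here is a minimal sketch of that classical-logic filter (all names below are mine): translate a signature into a propositional formula, then reject the signature if some truth assignment falsifies the formula.

import Data.List (nub)

data Formula
  = Var String
  | Formula :->: Formula   -- function arrow / implication
  | Formula :/\: Formula   -- (,) / conjunction
  | Formula :\/: Formula   -- Either / disjunction

vars :: Formula -> [String]
vars (Var v)    = [v]
vars (a :->: b) = nub (vars a ++ vars b)
vars (a :/\: b) = nub (vars a ++ vars b)
vars (a :\/: b) = nub (vars a ++ vars b)

eval :: [(String, Bool)] -> Formula -> Bool
eval env (Var v)    = maybe False id (lookup v env)
eval env (a :->: b) = not (eval env a) || eval env b
eval env (a :/\: b) = eval env a && eval env b
eval env (a :\/: b) = eval env a || eval env b

classicalTautology :: Formula -> Bool
classicalTautology f = all (\env -> eval env f) assignments
  where assignments = mapM (\v -> [(v, False), (v, True)]) (vars f)

-- classicalTautology (Var "a" :->: Var "b") == False,
-- so the type a -> b cannot be inhabited.

This only rules types out: passing the check does not mean the formula is an intuitionistic tautology, so it says nothing about inhabitation in the positive direction.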
Finally, note that in Haskell undefined :: T and let x = x in x :: T for any type T, so technically every type is inhabited. Once one restricts to terminating programs which are free from runtime errors, we get a more meaningful notion of "inhabited", which is the one addressed by the Curry-Howard correspondence.
Here's a recent implementation of that as a GHC plugin that unfortunately requires GHC HEAD currently.
It consists of a type class with a single method
class JustDoIt a where
  justDoIt :: a
Such that justDoIt typechecks whenever the plugin can find an inhabitant of its inferred type.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt
For more information, read Joachim Breitner's blogpost, which also mentions a few other options: djinn (already in other comments here), exference, curryhoward for Scala, hezarfen for Idris.
I've been reading up on various type systems and lambda calculi, and I see that all of the typed lambda calculi in the lambda cube are strongly normalizing rather than Turing equivalent. This includes System F, the simply typed lambda calculus plus polymorphism.
This leads me to the following questions, for which I've been unable to find any comprehensible answer:
How does the formalism of (e.g.) Haskell differ from the calculus on which it is ostensibly based?
What language features in Haskell don't fall within System F formalism?
What's the minimum change necessary to allow Turing complete computation?
Thank you so much to whoever helps me understand this.
In a word, general recursion.
Haskell allows for arbitrary recursion while System F has no form of recursion. The lack of infinite types means fix isn't expressible as a closed term.
There is no primitive notion of names and recursion. In fact, pure System F has no notion of any such thing as definitions!
So in Haskell this single definition is what adds Turing completeness:
fix :: (a -> a) -> a
fix f = let x = f x in x
Really this function is indicative of a more general idea: by having fully recursive bindings, we get Turing completeness. Notice that this applies to types, not just values.
data Rec a = Rec { unrec :: Rec a -> a }

y :: (a -> a) -> a
y f = u (Rec u)
  where u x = f $ unrec x x
With infinite types we can write the Y combinator (modulo some unfolding) and through it general recursion!
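For instance, a small usage sketch of the y above (the factorial example is mine, not from the original answer):

factorial :: Integer -> Integer
factorial = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- factorial 5 == 120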
In pure System F, we often have some informal notion of definitions, but these are simply shorthands that are to be mentally inlined fully. This isn't possible in Haskell as this would create infinite terms.
The kernel of Haskell terms without any notion of let, where or = is strongly normalizing, since we don't have infinite types. Even this core term calculus isn't really System F. System F has "big lambdas" or type abstraction. The full term for id in System F is
id := /\ A -> \(x : A) -> x
This is because type inference for System F is undecidable! We explicitly notate wherever and whenever we expect polymorphism. In Haskell such a property would be annoying, so we limit the power of Haskell. In particular, we never infer a polymorphic type for a Haskell lambda argument without annotation (terms and conditions may apply). This is why in ML and Haskell
let x = exp in foo
isn't the same as
(\x -> foo) exp
even when exp isn't recursive! This is the crux of HM type inference and algorithm W, called "let generalization".
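To see let generalization concretely, a sketch in plain Haskell (the names are mine):

generalized :: (Bool, Char)
generalized = let i x = x in (i True, i 'a')   -- i is generalized to forall a. a -> a

-- Rejected by HM inference, because the lambda-bound i gets a monomorphic type:
-- notGeneralized = (\i -> (i True, i 'a')) (\x -> x)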
Many statically typed languages have parametric polymorphism. For example in C# one can define:
T Foo<T>(T x){ return x; }
In a call site you can do:
int y = Foo<int>(3);
These types are also sometimes written like this:
Foo :: forall T. T -> T
I have heard people say "forall is like lambda-abstraction at the type level". So Foo is a function that takes a type (for example int), and produces a value (for example a function of type int -> int). Many languages infer the type parameter, so that you can write Foo(3) instead of Foo<int>(3).
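As a point of comparison, a rough Haskell counterpart (the name foo is mine; TypeApplications lets you pass the type argument explicitly, mirroring Foo<int>(3)):

{-# LANGUAGE ExplicitForAll, TypeApplications #-}

foo :: forall t. t -> t
foo x = x

-- foo @Int 3 passes the type argument explicitly; foo 3 lets the compiler infer it.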
Suppose we have an object f of type forall T. T -> T. What we can do with this object is first pass it a type Q by writing f<Q>. Then we get back a value with type Q -> Q. However, certain f's are invalid. For example this f:
f<int> = (x => x+1)
f<T> = (x => x)
So if we "call" f<int> then we get back a value with type int -> int, and in general if we "call" f<Q> then we get back a value with type Q -> Q, so that's good. However, it is generally understood that this f is not a valid thing of type forall T. T -> T, because it does something different depending on which type you pass it. The idea of forall is that this is explicitly not allowed. Also, if forall is lambda for the type level, then what is exists? (i.e. existential quantification). For these reasons it seems that forall and exists are not really "lambda at the type level". But then what are they? I realize this question is rather vague, but can somebody clear this up for me?
A possible explanation is the following:
If we look at logic, quantifiers and lambda are two different things. An example of a quantified expression is:
forall n in Integers: P(n)
So there are two parts to forall: a set to quantify over (e.g. Integers), and a predicate (e.g. P). Forall can be viewed as a higher order function:
forall n in Integers: P(n) == forall(Integers,P)
With type:
forall :: Set<T> -> (T -> bool) -> bool
Exists has the same type. Forall is like an infinite conjunction, where S[n] is the n-th element of the set S:
forall(S,P) = P(S[0]) ∧ P(S[1]) ∧ P(S[2]) ...
Exists is like an infinite disjunction:
exists(S,P) = P(S[0]) ∨ P(S[1]) ∨ P(S[2]) ...
If we do an analogy with types, we could say that the type analogue of ∧ is computing the intersection type ∩, and the type analogue of ∨ computing the union type ∪. We could then define forall and exists on types as follows:
forall(S,P) = P(S[0]) ∩ P(S[1]) ∩ P(S[2]) ...
exists(S,P) = P(S[0]) ∪ P(S[1]) ∪ P(S[2]) ...
So forall is an infinite intersection, and exists is an infinite union. Their types would be:
forall, exists :: Set<T> -> (T -> Type) -> Type
For example, here is the type of the polymorphic identity function, where Types is the set of all types, -> is the type constructor for functions, and => is lambda abstraction:
forall(Types, t => (t -> t))
Now a thing of type forall T:Type. T -> T is a value, not a function from types to values. It is a value whose type is the intersection of all types T -> T where T ranges over all types. When we use such a value, we do not have to apply it to a type. Instead, we use a subtype judgement:
id :: forall T:Type. T -> T
id = (x => x)
id2 = id :: int -> int
This downcasts id to have type int -> int. This is valid because int -> int also appears in the infinite intersection.
This works out nicely I think, and it clearly explains what forall is and how it is different from lambda, but this model is incompatible with what I have seen in languages like ML, F#, C#, etc. For example in F# you do id<int> to get the identity function on ints, which does not make sense in this model: id is a function on values, not a function on types that returns a function on values.
Can somebody with knowledge of type theory explain what exactly are forall and exists? And to what extent is it true that "forall is lambda at the type level"?
Let me address your questions separately.
Calling forall "a lambda at the type level" is inaccurate for two reasons. First, it is the type of a lambda, not the lambda itself. Second, that lambda lives on the term level, even though it abstracts over types (lambdas on the type level exist as well, they provide what is often called generic types).
Universal quantification does not necessarily imply "same behaviour" for all instantiations. That is a particular property called "parametricity" that may or may not be present. The plain polymorphic lambda calculus is parametric, because you simply cannot express any non-parametric behaviour. But if you add constructs like typecase (a.k.a. intensional type analysis) or checked casts as a weaker form of that, then you lose parametricity. Parametricity implies nice properties, e.g. it allows a language to be implemented without any runtime representation of types. And it induces very strong reasoning principles, see e.g. Wadler's paper "Theorems for free!". But it's a trade-off, sometimes you want dispatch on types.
Existential types essentially denote pairs of a type (the so-called witness) and a term, sometimes called packages. One common way to view these is as implementation of abstract data types. Here is a simple example:
pack (Int, (λx. x, λx. x)) : ∃ T. (Int → T) × (T → Int)
This is a simple ADT whose representation is Int and that only provides two operations (as a nested tuple), for converting ints in and out of the abstract type T. This is the basis of type theories for modules, for example.
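A Haskell rendering of this package might use ExistentialQuantification (the names below are mine):

{-# LANGUAGE ExistentialQuantification #-}

-- The representation type t is hidden; clients only get the two operations.
data IntIso = forall t. IntIso (Int -> t) (t -> Int)

intIso :: IntIso
intIso = IntIso id id

roundTrip :: IntIso -> Int -> Int
roundTrip (IntIso into outOf) = outOf . into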
In summary, universal quantification provides client-side data abstraction, while existential types dually provide implementor-side data abstraction.
As an additional remark, in the so-called lambda cube, forall and arrow are generalised to the unified notion of Π-type (where T1→T2 = Π(x:T1).T2 and ∀A.T = Π(A:*).T) and likewise exists and tupling can be generalised to Σ-types (where T1×T2 = Σ(x:T1).T2 and ∃A.T = Σ(A:*).T). Here, the type * is the "type of types".
A few remarks to complement the two already-excellent answers.
First, one cannot say that forall is lambda at the type-level because there already is a notion of lambda at the type level, and it is different from forall. It appears in System F_omega, an extension of System F with type-level computation, that is useful to explain ML module systems for example (F-ing modules, by Andreas Rossberg, Claudio Russo and Derek Dreyer, 2010).
In (a syntax for) System F_omega you can write for example:
type prod =
  lambda (a : *). lambda (b : *).
    forall (c : *). (a -> b -> c) -> c
This is a definition of the "type constructor" prod, such that prod a b is the type of the Church encoding of the product type (a, b). If there is computation at the type level, then you need to control it if you want to ensure termination of type-checking (otherwise you could define the type (lambda t. t t) (lambda t. t t)). This is done by using a "type system at the type level", or a kind system. prod would be of kind * -> * -> *. Only types at kind * can be inhabited by values; types at higher kinds can only be applied at the type level. lambda (c : k) . .... is a type-level abstraction that cannot be the type of a value, and may live at any kind of the form k -> ..., while forall (c : k) . .... classifies values that are polymorphic in some type c : k and is necessarily of ground kind *.
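A rough GHC counterpart of prod (the names are mine; a newtype stands in for the type-level lambda, since Haskell has no such construct, and RankNTypes supplies the inner forall):

{-# LANGUAGE RankNTypes #-}

newtype Prod a b = Prod { runProd :: forall c. (a -> b -> c) -> c }

pair :: a -> b -> Prod a b
pair x y = Prod (\k -> k x y)

first :: Prod a b -> a
first p = runProd p (\x _ -> x)

Note that Prod has kind * -> * -> *, matching the kind given to prod above.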
Second, there is an important difference between the forall of System F and the Pi-types of Martin-Löf type theory. In System F, polymorphic values do the same thing on all types. As a first approximation, you could say that a value of type forall a . a -> a will (implicitly) take a type t as input and return a value of type t -> t. But that suggests that there may be some computation happening in the process, which is not the case. Morally, when you instantiate a value of type forall a. a -> a into a value of type t -> t, the value does not change. There are three (related) ways to think about it:
System F quantification has type erasure, you can forget about the types and you will still know what the dynamic semantic of the program is. When we use ML type inference to leave the polymorphism abstraction and instantiation implicit in our programs, we don't really let the inference engine "fill holes in our program", if you think of "program" as the dynamic object that will be run and compute.
A forall a . foo is not something that "produces an instance of foo for each type a", but a single type foo that is "generic in an unknown type a".
You can explain universal quantification as an infinite conjunction, but there is a uniformity condition that all conjuncts have the same structure, and in particular that their proofs are all alike.
By contrast, Pi-types in Martin-Löf type theory are really more like function types that take something and return something. That's one of the reason why they can easily be used not only to depend on types, but also to depend on terms (dependent types).
This has very important implications once you're concerned about the soundness of those formal theories. System F is impredicative (a forall-quantified type quantifies over all types, itself included), and the reason why it's still sound is this uniformity of universal quantification. While introducing non-parametric constructs is reasonable from a programmer's point of view (and we can still reason about parametricity in a generally non-parametric language), it very quickly destroys the logical consistency of the underlying static reasoning system. Martin-Löf's predicative theory is much simpler to prove correct and to extend in correct ways.
For a high-level description of this uniformity/genericity aspect of System F, see Fruchart and Longo's 97 article Carnap's remarks on Impredicative Definitions and the Genericity Theorem. For a more technical study of System F failure in presence of non-parametric constructs, see Parametricity and variants of Girard's J operator by Robert Harper and John Mitchell (1999). Finally, for a description, from a language design point of view, on how to abandon global parametricity to introduce non-parametric constructs but still be able to locally discuss parametricity, see Non-Parametric Parametricity by George Neis, Derek Dreyer and Andreas Rossberg, 2011.
This discussion of the difference between "computational abstraction" and "uniform abstraction" has been revived by the large amount of work on representing variable binders. A binding construction feels like an abstraction (and can be modeled by a lambda-abstraction in HOAS style) but has a uniform structure that makes it more like a data skeleton than a family of results. This has been much discussed, for example in the LF community, "representational arrows" in Twelf, "positive arrows" in Licata & Harper's work, etc.
Recently there have been several people working on the related notion of "irrelevance" (lambda-abstractions where the result "does not depend" on the argument), but it's still not totally clear how closely this is related to parametric polymorphism. One example is the work of Nathan Mishra-Linger with Tim Sheard (eg. Erasure and Polymorphism in Pure Type Systems).
if forall is lambda ..., then what is exists
Why, tuple of course!
In Martin-Löf type theory you have Π types, corresponding to functions/universal quantification and Σ-types, corresponding to tuples/existential quantification.
Their types are very similar to what you have proposed (I am using Agda notation here):
Π : (A : Set) -> (A -> Set) -> Set
Σ : (A : Set) -> (A -> Set) -> Set
Indeed, Π is an infinite product and Σ is an infinite sum. Note that they are not "intersection" and "union" though, as you proposed, because you can't do that without additionally defining where the types intersect (which values of one type correspond to which values of the other type).
From these two type constructors you can have all of normal, polymorphic and dependent functions, normal and dependent tuples, as well as existentially and universally-quantified statements:
-- Normal function, corresponding to "Integer -> Integer" in Haskell
factorial : Π ℕ (λ _ → ℕ)
-- Polymorphic function corresponding to "forall a . a -> a"
id : Π Set (λ A → Π A (λ _ → A))
-- A universally-quantified logical statement: all natural numbers n are equal to themselves
refl : Π ℕ (λ n → n ≡ n)
-- (Integer, Integer)
twoNats : Σ ℕ (λ _ → ℕ)
-- exists a. Show a => a
someShowable : Σ Set (λ A → Σ A (λ _ → Showable A))
-- There are prime numbers
aPrime : Σ ℕ IsPrime
However, this does not address parametricity at all and AFAIK parametricity and Martin-Löf type theory are independent.
For parametricity, people usually refer to Philip Wadler's work.
In statically typed functional programming languages, like Standard ML, F#, OCaml and Haskell, a function will usually be written with the parameters separated from each other and from the function name simply by whitespace:
let add a b =
    a + b
The type here is "int -> (int -> int)", i.e. a function that takes an int and returns a function which in its turn takes an int and finally returns an int. Thus currying becomes possible.
It's also possible to define a similar function that takes a tuple as an argument:
let add(a, b) =
    a + b
The type becomes "(int * int) -> int" in this case.
From the point of view of language design, is there any reason why one could not simply identify these two type patterns in the type algebra? In other words, so that "(a * b) -> c" reduces to "a -> (b -> c)", allowing both variants to be curried with equal ease.
I assume this question must have cropped up when languages like the four I mentioned were designed. So does anyone know any reason or research indicating why all four of these languages chose not to "unify" these two type patterns?
I think the consensus today is to handle multiple arguments by currying (the a -> b -> c form) and to reserve tuples for when you genuinely want values of tuple type (in lists and so on). This consensus is respected by every statically typed functional language since Standard ML, which (purely as a matter of convention) uses tuples for functions that take multiple arguments.
Why is this so? Standard ML is the oldest of these languages, and when people were first writing ML compilers, it was not known how to handle curried arguments efficiently. (At the root of the problem is the fact that any curried function could be partially applied by some other code you haven't seen yet, and you have to compile it with that possibility in mind.) Since Standard ML was designed, compilers have improved, and you can read about the state of the art in an excellent paper by Simon Marlow and Simon Peyton Jones.
LISP, which is the oldest functional language of them all, has a concrete syntax which is extremely unfriendly to currying and partial application. Hrmph.
At least one reason not to conflate curried and uncurried functional types is that it would be very confusing when tuples are used as return values (a convenient way in these typed languages to return multiple values). In the conflated type system, how can functions remain nicely composable? Would a -> (a * a) also transform to a -> a -> a? If so, are (a * a) -> a and a -> (a * a) the same type? If not, how would you compose a -> (a * a) with (a * a) -> a?
You propose an interesting solution to the composition issue. However, I don't find it satisfying because it doesn't mesh well with partial application, which is really a key convenience of curried functions. Let me attempt to illustrate the problem in Haskell:
apply f a b = f a b
vecSum (a1,a2) (b1,b2) = (a1+b1,a2+b2)
Now, perhaps your solution could understand map (vecSum (1,1)) [(0,1)], but what about the equivalent map (apply vecSum (1,1)) [(0,1)]? It becomes complicated! Does your fullest unpacking mean that the (1,1) is unpacked with apply's a & b arguments? That's not the intent... and in any case, reasoning becomes complicated.
In short, I think it would be very hard to conflate curried and uncurried functions while (1) preserving the semantics of code valid under the old system and (2) providing a reasonable intuition and algorithm for the conflated system. It's an interesting problem, though.
Possibly because it is useful to have a tuple as a separate type, and it is good to keep different types separate?
In Haskell at least, it is quite easy to go from one form to the other:
Prelude> let add1 a b = a+b
Prelude> let add2 (a,b) = a+b
Prelude> :t (uncurry add1)
(uncurry add1) :: (Num a) => (a, a) -> a
Prelude> :t (curry add2)
(curry add2) :: (Num a) => a -> a -> a
so uncurry add1 is the same as add2 and curry add2 is the same as add1. I guess similar things are possible in the other languages as well. What greater gains would there be from automatically currying every function defined on a tuple? (Since that's what you seem to be asking.)
Expanding on the comments under namin's good answer:
So assume a function which has type 'a -> ('a * 'a):
let gimme_tuple(a : int) =
    (a*2, a*3)
Then assume a function which has type ('a * 'a) -> 'b:
let add(a : int, b) =
    a + b
Then composition (assuming the conflation that I propose) wouldn't pose any problem as far as I can see:
let foo = add(gimme_tuple(5))
// foo gets value 5*2 + 5*3 = 25
But then you could conceive of a polymorphic function that takes the place of add in the last code snippet, for instance a little function that just takes a 2-tuple and makes a list of the two elements:
let gimme_list(a, b) =
    [a, b]
This would have the type ('a * 'a) -> ('a list). Composition now would be problematic. Consider:
let bar = gimme_list(gimme_tuple(5))
Would bar now have the value [10, 15] : int list or would bar be a function of type (int * int) -> ((int * int) list), which would eventually return a list whose head would be the tuple (10, 15)? For this to work, I posited in a comment to namin's answer that one would need an additional rule in the type system that the binding of actual to formal parameters be the "fullest possible", i.e. that the system should attempt a non-partial binding first and only try a partial binding if a full binding is unattainable. In our example, it would mean that we get the value [10, 15] because a full binding is possible in this case.
Is such a concept of "fullest possible" inherently meaningless? I don't know, but I can't immediately see a reason it would be.
One problem is, of course, that if you want the second interpretation of the last code snippet, then you'd need to jump through an extra hoop (typically an anonymous function) to get it.
Tuples are not present in these languages simply to be used as function parameters. They are a really convenient way to represent structured data, e.g., a 2D point (int * int), a list element ('a * 'a list), or a tree node ('a * 'a tree * 'a tree). Remember also that structures are just syntactic sugar for tuples. I.e.,
type info =
  { name : string;
    age : int;
    address : string; }
is a convenient way of handling a (string * int * string) tuple.
There's no fundamental reason you need tuples in a programming language (you can build up tuples in the lambda calculus, just as you can booleans and integers—indeed using currying*), but they are convenient and useful.
* E.g.,
tuple a b = λf.f a b
fst x = x (λa.λb.a)
snd x = x (λa.λb.b)
curry f = λa.λb.f (λg.g a b)
uncurry f = λx.x f