In Haskell I can't write
f :: [forall a. a -> a]
f = [id]
because
• Illegal polymorphic type: forall a. a -> a
GHC doesn't yet support impredicative polymorphism
But I can happily do
f :: (forall a. a -> a) -> (a, b) -> (a, b)
f i (x, y) = (i x, i y)
So, as far as I can see, GHC does support impredicative polymorphism, which contradicts the error message above. Why is the (->) type constructor treated specially in this case? What prevents GHC from generalizing this feature over all datatypes?
Higher-rank polymorphism is a special case of impredicative polymorphism, where the type constructor is (->) instead of any arbitrary constructor like [].
The basic problems with impredicativity are that it makes type checking hard and type inference impossible in the general case—and indeed we can’t infer types of a higher rank than 2: you have to provide a type annotation. This is the ostensible reason for the existence of the Rank2Types extension separate from RankNTypes, although in GHC they’re synonymous.
However, for the restricted case of (->), there are simplified algorithms for checking these types and doing the necessary amount of inference along the way for the programmer’s convenience, such as Complete and Easy Bidirectional Type Checking for Higher-rank Polymorphism—compare that to the complexity of Boxy Types: Inference for Higher-rank Types and Impredicativity.
The actual reasons in GHC are partly historical: there had been an ImpredicativeTypes extension, which was deprecated because it never worked properly or ergonomically. Part of the problem was that we didn’t yet have the TypeApplications extension, so there was no convenient way to explicitly supply a polymorphic type as a type argument, and the compiler attempted to do more inference than it ought to. In GHC 9.2, ImpredicativeTypes has come out of retirement, thanks to GHC proposal 274 and an algorithm, Quick Look, that infers a predictable subset of impredicative types.
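For instance, with ImpredicativeTypes enabled on GHC 9.2 or later, the example from the question is accepted (a minimal sketch, assuming a Quick Look-capable compiler):
{-# LANGUAGE ImpredicativeTypes #-}
f :: [forall a. a -> a]
f = [id, id]
g :: ((), Char)
g = (head f (), head f 'x')  -- the polymorphic element is instantiated at () and Char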
In the absence of ImpredicativeTypes, there have been alternatives for a while: with RankNTypes, you can “hide” other forms of impredicativity by wrapping the polymorphic type in a newtype and explicitly packing & unpacking it to tell the compiler exactly where you want to generalise and instantiate type variables.
newtype Id = Id { unId :: forall a. a -> a }
f :: [Id]
f = [Id id] -- generalise
(unId (head f) (), unId (head f) 'x') -- instantiate to () and Char
What is the relationship between polymorphism's rank and (im)predicativity?
Can rank-1 polymorphism be either predicative or impredicative?
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
My confusion comes from:
Why does https://en.wikipedia.org/wiki/Parametric_polymorphism mention predicativity under rank-1 polymorphism? (It seems to me that rank-1 implies predicativity.)
Rank-1 (prenex) polymorphism
In a prenex polymorphic system, type variables may not be instantiated with polymorphic types. This is very similar to what is called "ML-style" or
"Let-polymorphism" (technically ML's Let-polymorphism has a few other
syntactic restrictions). This restriction makes the distinction
between polymorphic and non-polymorphic types very important; thus in
predicative systems polymorphic types are sometimes referred to as
type schemas to distinguish them from ordinary (monomorphic) types,
which are sometimes called monotypes. A consequence is that all
types can be written in a form that places all quantifiers at the
outermost (prenex) position. For example, consider the append
function described above, which has type
forall a. [a] × [a] -> [a]
In order to apply this function to a pair of lists, a type must be
substituted for the variable a in the type of the function such that
the type of the arguments matches up with the resulting function type.
In an impredicative system, the type being substituted may be any type
whatsoever, including a type that is itself polymorphic; thus append
can be applied to pairs of lists with elements of any type—even to
lists of polymorphic functions such as append itself. Polymorphism in
the language ML is predicative. This is because
predicativity, together with other restrictions, makes the type system
simple enough that full type inference is always possible.
As a practical example, OCaml (a descendant or dialect of ML) performs
type inference and supports impredicative polymorphism, but in some
cases when impredicative polymorphism is used, the system's type
inference is incomplete unless some explicit type annotations are
provided by the programmer.
...
Predicative polymorphism
In a predicative parametric polymorphic system, a type τ containing
a type variable α may not be used in such a way that α is
instantiated to a polymorphic type. Predicative type theories include
Martin-Löf Type Theory and NuPRL.
https://wiki.haskell.org/Impredicative_types :
Impredicative types are an advanced form of polymorphism, to be
contrasted with rank-N types.
Standard Haskell allows polymorphic types via the use of type
variables, which are understood to be universally quantified: id :: a -> a means "for all types a, id can take an argument and return a result of that type". All universal quantifiers ("for all"s) must
appear at the beginning of a type.
Higher-rank polymorphism (e.g. rank-N types) allows universal
quantifiers to appear inside function types as well. It turns out that
appearing to the right of function arrows is not interesting: Int -> forall a. a -> [a] is actually the same as forall a. Int -> a -> [a].
However, higher-rank polymorphism allows quantifiers to the left of
function arrows, too, and (forall a. [a] -> Int) -> Int really is
different from forall a. ([a] -> Int) -> Int.
Impredicative types take this idea to its natural conclusion:
universal quantifiers are allowed anywhere in a type, even inside
normal datatypes like lists or Maybe.
Thanks.
Can rank-1 polymorphism be either predicative or impredicative?
No, rank-1 polymorphism is always predicative, because no forall quantifier appears as an argument to a type constructor; that is, the quantifiers are "prenex".
Can rank-k polymorphism with k > 1 be either predicative or impredicative?
Higher-rank polymorphism is always impredicative; the RankNTypes extension enables impredicative polymorphism only for the (->) constructor, that is, given a type a -> b, a or b may be instantiated with a type containing foralls. We typically refer to such types as higher-rank only when a contains foralls, because (except for TypeApplications) X -> forall t. Y is equivalent to forall t. X -> Y.
General impredicative polymorphism (with the broken ImpredicativeTypes extension) is not supported. For example, you can’t write Maybe (forall a. [a] -> [a]). This is essentially because it’s difficult to automatically determine when to generalise and when to instantiate that quantifier. Fortunately, you can make this explicit using a newtype wrapper to “hide” the impredicativity, or rather, make it clear to the compiler what you want to do about quantifiers, e.g.:
{-# LANGUAGE RankNTypes #-}
newtype ListTransform = ListTransform { unLT :: forall a. [a] -> [a] }
f :: Maybe ListTransform -> [Int] -> [Char] -> ([Int], [Char])
f Nothing is cs = (is, cs)
f (Just (ListTransform t)) is cs = (t is, t cs)
-- or: f (Just lt) is cs = (unLT lt is, unLT lt cs)
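A small usage sketch (the reverse transform here is my own illustration, not part of the original answer):
main :: IO ()
main = print (f (Just (ListTransform reverse)) [1, 2, 3] "abc")
-- prints ([3,2,1],"cba"): the one polymorphic transform is used at two element types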
Consider a variable introduced in a pattern, such as f in this Haskell example:
case (\x -> x) of f -> (f True, f 'c')
This code results in a type error ("Couldn't match expected type ‘Bool’ with actual type ‘Char’"), because of the two different uses of f. It shows that the inferred type of f is not polymorphic in Haskell.
But why shouldn't f be polymorphic?
I have two points of comparison: OCaml and "textbook" Hindley-Milner. Both suggest that f ought to be polymorphic.
In OCaml, the analogous code is not an error:
match (fun x -> x) with f -> (f true, f 'c')
This evaluates to (true, 'c') with type bool * char. So it looks like OCaml gets along fine with assigning f a polymorphic type.
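For comparison, here is a minimal sketch of the same binding in Haskell when it is let-bound rather than case-bound (no extensions assumed):
letVersion :: (Bool, Char)
letVersion = let f = \x -> x in (f True, f 'c')  -- accepted: f is let-generalised
-- caseVersion = case (\x -> x) of f -> (f True, f 'c')  -- rejected: f stays monomorphic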
We can gain clarity by stripping things down to the fundamentals of Hindley-Milner - lambda calculus with "let" - which both Haskell and OCaml are based on. When reduced to this core system, of course, there is no such thing as pattern matching. We can draw parallels though. Between "let" and "lambda", case expr1 of f -> expr2 is much closer to let f = expr1 in expr2 than to (lambda f. expr2) expr1. "Case", like "let", syntactically restricts f to be bound to expr1, while a function lambda f. expr2 doesn't know what f will be bound to, since the function has no such restriction on where in the program it will be called. This was the reason why let-bound variables are generalized in Hindley-Milner and lambda-bound variables are not. It appears that the same reasoning that allows let-bound variables to be generalized shows that variables introduced by pattern matching could be generalized too.
The examples above are minimal for clarity, so they only show a trivial pattern f in the pattern matching, but all the same logic extends to arbitrarily complex patterns like Just (a:b:(x,y):_), which can introduce multiple variables that would all be generalized.
Is my analysis correct? In Haskell specifically - recognizing that it's not just plain Hindley-Milner and not OCaml - why don't we generalize the type of f in the first example?
Was this an explicit language design decision, and if so, what were the reasons? (I note that some in the community think that not even "let" should be generalized, but I would imagine the design decision pre-dates that paper.)
If variables introduced in a pattern were made polymorphic similar to "let", would that break compatibility with other aspects of Haskell in a significant way?
If we assign a polymorphic type (forall x. t) to a case scrutinee, then it matches no non-trivial pattern, so there's no point in having the case expression at all.
Could we generalize in some other useful way? Not really, because of GHC's lack of support for "impredicative" instantiation. In your example of Just (a:b:(x,y):_), not a single bound variable can have polymorphic type, since Maybe, (,), and [] cannot be instantiated with such types.
One thing works, as mentioned in the comments: data types with polymorphic fields, such as data Endo = Endo (forall a. a -> a). However, type checking for polymorphic fields doesn't technically involve a generalization step, nor does it behave like let-generalization.
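For instance, here is a minimal sketch of matching on such a polymorphic field, using the Endo type just mentioned (the pattern-bound f keeps the field's declared polymorphic type, with no generalization step involved):
{-# LANGUAGE RankNTypes #-}
data Endo = Endo (forall a. a -> a)

both :: (Bool, Char)
both = case Endo id of
  Endo f -> (f True, f 'c')  -- f :: forall a. a -> a, taken directly from the field's type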
In principle, generalization could be performed at many points, for example even at arbitrary function arguments (e.g. in f (\x -> x)). However, too much generalization clogs up type inference by introducing intractable higher-rank types; this can also be understood as eliminating useful type dependencies between different parts of the program by removing unsolved metavariables. Although there are systems which can handle higher-rank inference much better than GHC, most notably MLF, they're also much more complicated and haven't seen much practical use. I personally prefer not to have silent let-generalization at all.
A first issue is that with type classes, generalization is not always free. Consider show :: forall a. Show a => a -> String and this expression:
case show of
f -> ...
If you generalize f to f :: forall a. Show a => a -> String, then GHC will pass a Show dictionary at every call of f, instead of once at the single occurrence of show. In case there are multiple calls all at the same type, this duplicates work compared to not generalizing.
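A rough sketch of that difference, using an explicit type application so the instantiation point is visible (the dictionary passing itself happens in GHC Core, not in source syntax):
{-# LANGUAGE TypeApplications #-}
once :: String
once = case show @Int of
  f -> f 1 ++ f 2  -- f :: Int -> String; the Show Int dictionary is fixed once, at the scrutinee
-- Had f been generalised to forall a. Show a => a -> String, each call
-- (f 1, f 2) would be elaborated to pass its own Show dictionary.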
Generalizing pattern bindings is also not a conservative extension of the current type inference algorithm when combined with type classes: it can cause existing programs to no longer typecheck. For example,
case show of
f -> f () ++ f mempty
By not generalizing f, we can infer that mempty has type (). On the other hand, generalizing f :: forall a. Show a => a -> String will lose that connection, and the type of mempty in that expression will be ambiguous.
It is true though that these are minor issues, and maybe things would be mostly fine with some monomorphism restrictions, even if not entirely backwards compatible.
In addition to the other answers, there's a reason for the way type variables are treated in pattern matches that comes from the interaction with existential types. Let's take a look at a couple of definitions from Data.Functor.Coyoneda:
{-# LANGUAGE GADTs #-}
data Coyoneda f a where
Coyoneda :: (b -> a) -> f b -> Coyoneda f a
lowerCoyoneda :: Functor f => Coyoneda f a -> f a
lowerCoyoneda (Coyoneda g x) = fmap g x
Coyoneda has an existential type variable used by both arguments to the constructor. If GHC doesn't pin that type down, there's no way for the fmap in lowerCoyoneda to type-check. GHC needs to know that g and x have the appropriate relation in their types, and that requires fixing the type variable in the pattern match.
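A small usage sketch (my own addition), where the existential b is fixed to Int by the arguments given to the constructor:
example :: [Int]
example = lowerCoyoneda (Coyoneda (+ 1) [1, 2, 3])
-- fmap (+ 1) [1, 2, 3] == [2, 3, 4]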
The following program type-checks:
{-# LANGUAGE RankNTypes #-}
import Numeric.AD (grad)
newtype Fun = Fun (forall a. Num a => [a] -> a)
test1 [u, v] = (v - (u * u * u))
test2 [u, v] = ((u * u) + (v * v) - 1)
main = print $ fmap (\(Fun f) -> grad f [1,1]) [Fun test1, Fun test2]
But this program fails:
main = print $ fmap (\f -> grad f [1,1]) [test1, test2]
With the type error:
Grad.hs:13:33: error:
• Couldn't match type ‘Integer’
with ‘Numeric.AD.Internal.Reverse.Reverse s Integer’
Expected type: [Numeric.AD.Internal.Reverse.Reverse s Integer]
-> Numeric.AD.Internal.Reverse.Reverse s Integer
Actual type: [Integer] -> Integer
• In the first argument of ‘grad’, namely ‘f’
In the expression: grad f [1, 1]
In the first argument of ‘fmap’, namely ‘(\ f -> grad f [1, 1])’
Intuitively, the latter program looks correct. After all, the
following, seemingly equivalent program does work:
main = print $ [grad test1 [1,1], grad test2 [1,1]]
It looks like a limitation in GHC's type system. I would like to know
what causes the failure, why this limitation exists, and any possible
workarounds besides wrapping the function (per Fun above).
(Note: this is not caused by the monomorphism restriction; compiling
with NoMonomorphismRestriction does not help.)
This is an issue with GHC's type system. It really is GHC's type system, by the way: the original type systems for Haskell/ML-like languages don't support higher-rank polymorphism, let alone impredicative polymorphism, which is what we're using here.
The issue is that in order to type check this we need to support foralls at any position in a type, not just bunched all the way at the front (the usual restriction, which is what makes type inference possible). Once you leave that territory, type inference becomes undecidable in general (for rank-3 types and beyond). In our case, the type of [test1, test2] would need to be [forall a. Num a => [a] -> a], which doesn't fit into the scheme discussed above. It would require impredicative polymorphism, so called because the quantified type variable ranges over types that themselves contain foralls, and so could even be instantiated with the very type in which it appears.
So there are going to be some cases that misbehave, simply because the problem is not fully solvable. GHC does have some support for rank-n polymorphism and a bit of support for impredicative polymorphism, but it's generally better to just use newtype wrappers to get reliable behavior. To the best of my knowledge, GHC also discourages using this feature precisely because it's so hard to figure out exactly what the type inference algorithm will handle.
In summary, the math says that there will be flaky cases, and newtype wrappers are the best, if somewhat dissatisfying, way to cope with them.
The type inference algorithm will not infer higher-rank types (those with a forall to the left of a ->); if I remember correctly, inference for them becomes undecidable. Anyway, consider this code:
foo f = (f True, f 'a')
what should its type be? We could have
foo :: (forall a. a -> a) -> (Bool, Char)
but we could also have
foo :: (forall a. a -> Int) -> (Int, Int)
or, for any type constructor F :: * -> *
foo :: (forall a. a -> F a) -> (F Bool, F Char)
Here, as far as I can see, we cannot find a principal type -- a type which is the most general type we can assign to foo.
If a principal type does not exist, the type inference machinery can only pick a suboptimal type for foo, which can cause type errors later on. This is bad. Instead, GHC relies on a Hindley-Milner style type inference engine, which has been greatly extended so as to cover more advanced Haskell types. This mechanism, unlike plain Hindley-Milner, will assign f a polymorphic type provided the user explicitly requests it, e.g. by giving foo a signature.
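For example, a minimal sketch of that last point: with an explicit higher-rank signature, GHC checks rather than infers the type, and the definition is accepted:
{-# LANGUAGE RankNTypes #-}
foo :: (forall a. a -> a) -> (Bool, Char)
foo f = (f True, f 'a')

main :: IO ()
main = print (foo id)  -- (True,'a')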
Using a wrapper newtype like Fun also instructs GHC in a similar way, providing the polymorphic type for f.
Given a number of typeclass constraints:
{-# LANGUAGE ConstraintKinds, MultiParamTypeClasses #-}
import Data.Array.Unboxed(Ix,IArray,UArray)
type IntLike a = (Ord a, Num a, Enum a, Show a, Ix a, IArray UArray a)
How can I find out which types satisfy IntLike, i.e. all the mentioned constraints jointly?
I can puzzle together the information needed from the output of ghci's :info command, and then double-check my work by calling (or having ghci typecheck)
isIntLike :: IntLike a => a -> Bool
isIntLike = const True
at various types, e.g. isIntLike (3::Int).
Is there a way to get ghci to do this for me?
I'm currently interested in concrete types, but wouldn't mind having a more general solution which also does clever stuff with unifying contexts!
Community Wiki answer based on the comments:
You can do this using Template Haskell (with the TemplateHaskell extension enabled and Language.Haskell.TH imported):
main = print $(reify ''Show >>= stringE . show)
This won't work for type synonyms - rather, reify returns the AST representing the type synonym itself, without expanding it. You can check for type synonyms which are constraints, extract the constraints of which that type synonym consists, and continue reifying those.
I'm wondering why this piece of code doesn't type-check:
{-# LANGUAGE ScopedTypeVariables, Rank2Types, RankNTypes #-}
{-# OPTIONS -fglasgow-exts #-}
module Main where
foo :: [forall a. a]
foo = [1]
ghc complains:
Could not deduce (Num a) from the context ()
arising from the literal `1' at exist5.hs:7:7
Given that:
Prelude> :t 1
1 :: (Num t) => t
Prelude>
it seems that the (Num t) context can't match the () context of the argument. The point I can't understand is that, since () is more general than (Num t), the latter should be an inclusion of the former. Does this have anything to do with Haskell's lack of support for subtyping?
Thank you for any comment on this.
You're not using existential quantification here. You're using rank N types.
Here [forall a. a] means that every element must have every possible type (not any, every). So [undefined, undefined] would be a valid list of that type and that's basically it.
To expand on that a bit: if a list has type [forall a. a] that means that all the elements have type forall a. a. That means that any function that takes any kind of argument, can take an element of that list as argument. This is no longer true if you put in an element which has a more specific type than forall a. a, so you can't.
To get a list which can contain any type, you need to define your own list type with existential quantification. Like so:
{-# LANGUAGE ExistentialQuantification #-}
data MyList = Nil | forall a. Cons a MyList

foo :: MyList
foo = Cons 1 Nil
Of course unless you restrain element types to at least instantiate Show, you can't do anything with a list of that type.
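For instance, a minimal sketch of such a constrained version (the ShowList name and functions are my own illustration):
{-# LANGUAGE ExistentialQuantification #-}
data ShowList = SNil | forall a. Show a => SCons a ShowList

showAll :: ShowList -> [String]
showAll SNil           = []
showAll (SCons x rest) = show x : showAll rest
-- showAll (SCons (1 :: Int) (SCons 'x' SNil)) == ["1","'x'"]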
First, your example doesn't even get that far with the current GHC for me, because you need to enable ImpredicativeTypes as well. Doing so results in a warning that ImpredicativeTypes will be simplified or removed in the next GHC. So we're not in good territory here. Nonetheless, adding the proper Num constraint (foo :: [forall a. Num a => a]) does allow your example to compile.
Let's leave aside impredicative types and look at a simpler example:
data Foo = Foo (forall a. a)
foo = Foo 1
This also doesn't compile with the error Could not deduce (Num a) from the context ().
Why? Well, the type promises that you're going to give the Foo constructor something with the quality that for any type a, it produces an a. The only thing that satisfies this is bottom. An integer literal, on the other hand, promises that for any type a that is of class Num it produces an a. So the types are clearly incompatible. We can, however, pull the forall out a bit further to get what you probably want:
data Foo = forall a. Foo a
foo = Foo 1
So that compiles. But what can we do with it? Well, let's try to define an extractor function:
unFoo (Foo x) = x
Oops! Quantified type variable 'a' escapes. So we can define the existential type, but we can't do much of interest with it. If we gave it a class context, then we could at least use some of the class functions on it.
There is a time and place for existentials, including ones without class context, but its fairly rare, especially when you're getting started. When you do end up using them, often it will be in the context of GADTs, which are a superset of existential types, but in which the way that existentials arise feels quite natural.
Because the declaration [forall a. a] is (in meaning) the equivalent of saying, "I have a list, and if you (i.e. the computer) pick a type, I guarantee that the elements of said list will be that type."
The compiler is "calling your bluff", so-to-speak, by complaining, "I 'know' that if you give me a 1, that its type is in the Num class, but you said that I could pick any type I wanted to for that list."
Basically, you're trying to use the value of a universal type as if it were the type of a universal value. Those aren't the same thing, though.