Why is it said that typeclasses are existential?

According to this link describing existential types:
A value of an existential type like ∃x. F(x) is a pair containing some type x and a value of the type F(x). Whereas a value of a polymorphic type like ∀x. F(x) is a function that takes some type x and produces a value of type F(x). In both cases, the type closes over some type constructor F.
But a function definition with type class constraints doesn't pair with the type class instance.
It's not forall f, exists Functor f, ... (because it's obviously not the case that every type f has a Functor f instance, so exists Functor f ... would not hold).
It's not exists f and Functor f, ... (because it applies to every f that satisfies the constraint, not only to one chosen f).
To me it reads as forall f, given an instance Functor f, ..., which is more like Scala's implicit arguments than like existential types.
And according to this link describing type classes:
[The class declaration for Eq] means, logically, there is a type a for which the type a -> a -> Bool is inhabited, or, from a it can be proved that a -> a -> Bool (the class promises two different proofs for this, having names == and /=). This proposition is of existential nature (not to be confused with existential type)
What's the difference between type classes and existential types, and why are they both considered "existential"?

The wiki you quote is wrong, or at least being imprecise. A class declaration is not an existential proposition; it is not a proposition of any kind, it is merely a definition of a shorthand. One could then move on to making a proposition using that definition if one wanted, but on its own it's nothing like that. For example,
class Eq a where (==) :: a -> a -> Bool
makes a new definition. One could then write a non-existential, non-universal proposition using it, say,
Eq ()
which we could "prove" by writing:
instance Eq () where () == () = True
Or one could write
prop_ExistsFoo :: exists a. Eq a *> a
as an existential proposition. (Haskell doesn't actually have the exists proposition former, nor (*>). Think of (*>) as dual to (=>) -- just like exists is dual to forall. So where (=>) is a function which takes evidence of a constraint, (*>) is a tuple that contains evidence of a constraint, just like forall is for a function that takes a type while exists is for a tuple that contains a type.) We could "prove" this proposition by, e.g.
prop_ExistsFoo = ()
Note here that the type contained in the exists tuple is (); the evidence contained in the (*>) tuple is the Eq () instance we wrote above. I have honored Haskell's tendency to make types and instances silent and implicit here, so they don't appear in the visible proof text.
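For comparison, here is a minimal sketch of how such an existential package can be written in actual GHC Haskell, using the ExistentialQuantification extension (ExistsEq and propExistsFoo are made-up names). The constructor pairs up a hidden type a, its Eq evidence, and a value of that type:
{-# LANGUAGE ExistentialQuantification #-}

data ExistsEq = forall a. Eq a => ExistsEq a

-- "Proved" the same way as above: pick the type () and its Eq instance.
propExistsFoo :: ExistsEq
propExistsFoo = ExistsEq ()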
Similarly, we could make a different, universal proposition out of Eq by writing something like
prop_ForallEq :: forall a. Eq a => a
which is not nontrivially provable, or
prop_ForallEq2 :: forall a. Eq a => a -> a -> Bool
which we could "prove", for example, by writing
prop_ForallEq2 x y = not (x == y)
or in many other ways.
But the class declaration in itself is definitely not an existential proposition, and it doesn't have "existential nature", whatever that is supposed to mean. Instead of getting hung up and confused on that, please congratulate yourself for correctly labeling this incorrect claim as confusing!

The second quote is imprecise. The existential claim comes with the instances, not with the class itself. Consider the following class:
class Chaos a where
  to :: a -> y
  from :: x -> a
While this is a perfectly valid declaration, there can't possibly be any instances of Chaos (if there were, to . from would exist, which would be quite amusing). The type of, say, to...
GHCi> :t to
to :: Chaos a => a -> y
... tells us that, given any type a, if a is an instance of Chaos, there is a function which can turn an a into a value of any type whatsoever. If Chaos has no instances, that statement is vacuously true, so we can't infer the existence of any such function from it.
Putting classes aside for a moment, this situation is rather similar to what we have with the absurd function:
absurd :: Void -> a
This type says that, given a Void value, we can produce a value of any type whatsoever. That sounds, well, absurd -- but then we remember that Void is the empty type, which means there are no Void values, and it's all good.
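As a small illustration of absurd in use (unLeft is a made-up name): a value of type Either Void a can only ever be a Right, and absurd lets us discharge the impossible Left case:
import Data.Void (Void, absurd)

unLeft :: Either Void a -> a
unLeft (Left v)  = absurd v  -- can never happen, since there are no Void values
unLeft (Right x) = x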
For the sake of contrast, we might note that instances become possible once we break Chaos apart into two classes:
class Primordial a where
  conjure :: a -> y

class Doom a where
  destroy :: x -> a

instance Primordial Void where
  conjure = absurd

instance Doom () where
  destroy = const ()
When we, for example, write instance Primordial Void, we are claiming that Void is an instance of Primordial. That implies there must exist a function conjure :: Void -> y, at which point we must back up the claim by supplying an implementation.


Clarification of Terms around Haskell Type system

The type system in Haskell seems to be very important, and I wanted to clarify some terms revolving around the Haskell type system.
Some type classes
Functor
Applicative
Monad
After using :info I found that Functor is a type class, Applicative is a type class with => Functor (deriving?), and Monad is a type class with => Applicative.
I've read that Maybe is a Monad; does that mean Maybe is also Applicative and Functor?
-> operator
When I define a type
data Maybe a = Just a | Nothing
and check :t Just, I get Just :: a -> Maybe a. How should I read this -> operator?
It confuses me, because for a function a -> b means it takes an a and evaluates to a b (so Just sort of returns a Maybe?) – I tend to read it left to right, but does the reading change when defining types?
The term "type" is used in ambiguous ways: Type, Type Class, Type Constructor, Concrete Type, etc. I would like to know exactly what each of them means.
Indeed the word “type” is used in somewhat ambiguous ways.
The perhaps most practical way to look at it is that a type is just a set of values. For example, Bool is the finite set containing the values True and False. Mathematically, there are subtle differences between the concepts of set and type, but they aren't really important for a programmer to worry about. Note, though, that you should in general consider the sets to be infinite; for example, Integer contains arbitrarily big numbers.
The most obvious way to define a type is with a data declaration, which in the simplest case just lists all the values:
data Colour = Red | Green | Blue
There we have a type which, as a set, contains three values.
Concrete type is basically what we say to make it clear that we mean the above: a particular type that corresponds to a set of values. Bool is a concrete type, that can easily be understood as a data definition, but also String, Maybe Integer and Double -> IO String are concrete types, though they don't correspond to any single data declaration.
What a concrete type can't have is type variables†, nor can it be an incompletely applied type constructor. For example, Maybe is not a concrete type.
So what is a type constructor? It's the type-level analogue to value constructors. What we mean mathematically by “constructor” in Haskell is an injective function, i.e. a function f where if you're given f(x) you can clearly identify what was x. Furthermore, any different constructors are assumed to have disjoint ranges, which means you can also identify f.‡
Just is an example of a value constructor, but it complicates the discussion that it also has a type parameter. Let's consider a simplified version:
data MaybeInt = JustI Int | NothingI
Now we have
JustI :: Int -> MaybeInt
That's how JustI is a function. Like any function of the same signature, it can be applied to argument values of the right type; for example, you can write JustI 5. What it means for this function to be injective is that I can define a variable, say,
quoxy :: MaybeInt
quoxy = JustI 9328
and then I can pattern match with the JustI constructor:
> case quoxy of { JustI n -> print n }
9328
This would not be possible with a general function of the same signature:
foo :: Int -> MaybeInt
foo i = JustI $ negate i
> case quoxy of { foo n -> print n }
<interactive>:5:17: error: Parse error in pattern: foo
Note that constructors can be nullary, in which case the injective property is meaningless because there is no contained data / arguments of the injective function. Nothing and True are examples of nullary constructors.
Type constructors are the same idea as value constructors: type-level functions that can be pattern-matched. Any type-name defined with data is a type constructor, for example Bool, Colour and Maybe are all type constructors. Bool and Colour are nullary, but Maybe is a unary type constructor: it takes a type argument and only the result is then a concrete type.
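You can ask GHCi about this with :kind; the exact printing varies by GHC version and flags (newer versions may show Type instead of *), but it looks roughly like this:
GHCi> :kind Bool
Bool :: *
GHCi> :kind Maybe
Maybe :: * -> *
GHCi> :kind Maybe Int
Maybe Int :: *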
So unlike value-level functions, type-level functions are kind of by default type constructors. There are also type-level functions that aren't constructors, but they require -XTypeFamilies.
A type class may be understood as a set of types, in the same vein as a type can be seen as a set of values. This is not quite accurate – it's closer to the truth to say a class is a set of type constructors – but again it's not that useful to ponder the mathematical details; better to look at examples.
There are two main differences between type-as-set-of-values and class-as-set-of-types:
How you define the “elements”: when writing a data declaration, you need to immediately describe what values are allowed. By contrast, a class is defined “empty”, and then the instances are defined later on, possibly in a different module.
How the elements are used. A data type basically enumerates all the values so they can be identified again. Classes meanwhile aren't generally concerned with identifying types, rather they specify properties that the element-types fulfill. These properties come in the form of methods of a class. For example, the instances of the Num class are types that have the property that you can add elements together.
You could say, Haskell is statically typed on the value level (fixed sets of values in each type), but duck-typed on the type level (classes just require that somebody somewhere implements the necessary methods).
A simplified version of the Num example:
class Num a where
  (+) :: a -> a -> a

instance Num Int where
  0 + x = x
  x + y = ...
If the + operator weren't already defined in the prelude, you would now be able to use it with Int numbers. Then later on, perhaps in a different module, you could also make it usable with new, custom number types:
data MyNumberType = BinDigits [Bool]

instance Num MyNumberType where
  BinDigits [] + BinDigits l = BinDigits l
  BinDigits (False:ds) + BinDigits (False:es)
      = BinDigits (False : ...)
Unlike Num, the Functor...Monad type classes are not classes of types, but of 1-ary type constructors. I.e. every functor is a type constructor taking one argument to make it a concrete type. For instance, recall that Maybe is a 1-ary type constructor.
class Functor f where
  fmap :: (a->b) -> f a -> f b

instance Functor Maybe where
  fmap f (Just a) = Just (f a)
  fmap _ Nothing = Nothing
As you have concluded yourself, Applicative is a subclass of Functor. D being a subclass of C means basically that D is a subset of the set of type constructors in C. Therefore, yes, if Maybe is an instance of Monad it also is an instance of Functor.
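For reference, that subset relationship shows up in the actual class declarations as superclass constraints; here is a simplified version of the Prelude's declarations (the real classes have a few more methods):
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Applicative m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b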
†That's not quite true: if you consider the universal quantifier explicitly as part of the type, then a concrete type can contain variables. This is a bit of an advanced subject though.
‡This is not guaranteed to be true if the -XPatternSynonyms extension is used.

Which is a polymorphic type: a type or a set of types?

Programming in Haskell by Hutton says:
A type that contains one or more type variables is called polymorphic.
Which is a polymorphic type: a type or a set of types?
Is a polymorphic type with a concrete type substituting its type variable a type?
Is a polymorphic type with different concrete types substituting its type variable considered the same or different types?
Is a polymorphic type with a concrete type substituting its type variable a type?
That's the point, yes. However, you need to be careful. Consider:
id :: a -> a
That's polymorphic. You can substitute a := Int and get Int -> Int, and a := Float -> Float and get (Float -> Float) -> Float -> Float. However, you cannot say a := Maybe and get id :: Maybe -> Maybe. That just doesn't make sense. Instead, we have to require that you can only substitute concrete types like Int and Maybe Float for a, not abstract ones like Maybe. This is handled with the kind system.
This is not too important for your question, so I'll just summarize. Int and Float and Maybe Float are all concrete types (that is, they have values), so we say that they have type Type (the type of a type is often called its kind). Maybe is a function that takes a concrete type as an argument and returns a new concrete type, so we say Maybe :: Type -> Type. In the type a -> a, we say the type variable a must have type Type, so now the substitutions a := Int, a := String, etc. are allowed, while stuff like a := Maybe isn't.
Is a polymorphic type with different concrete types substituting its type variable considered the same or different types?
No. Back to a -> a: a := Int gives Int -> Int, but a := Float gives Float -> Float. Not the same.
Which is a polymorphic type: a type or a set of types?
Now that's a loaded question. You can skip to the TL;DR at the end, but the question of "what is a polymorphic type" is actually really confusing in Haskell, so here's a wall of text.
There are two ways to see it. Haskell started with one, then moved to the other, and now we have a ton of old literature referring to the old way, so the syntax of the modern system tries to maintain compatibility. It's a bit of a hot mess. Consider
id x = x
What is the type of id? One point of view is that id :: Int -> Int, and also id :: Float -> Float, and also id :: (Int -> Int) -> Int -> Int, ad infinitum, all simultaneously. This infinite family of types can be summed up with one polymorphic type, id :: a -> a. This point of view gives you the Hindley-Milner type system. This is not how modern GHC Haskell works, but this system is what Haskell was based on at its creation.
In Hindley-Milner, there is a hard line between polymorphic types and monomorphic types, and the union of these two groups gives you "types" in general. It's not really fair to say that, in HM, polymorphic types (in HM jargon, "polytypes") are types. You can't take polytypes as arguments, or return them from functions, or place them in a list. Instead, polytypes are only templates for monotypes. If you squint, in HM, a polymorphic type can be seen as a set of those monotypes that fit the schema.
Modern Haskell is built on System F (plus extensions). In System F,
id = \x -> x -- rewriting the example
is not a complete definition. Therefore we can't even think about giving it a type. Every lambda-bound variable needs a type annotation, but x has no annotation. Worse, we can't even decide on one: \(x :: Int) -> x is just as good as \(x :: Float) -> x. In System F, what we do is we write
id = /\(a :: Type) -> \(x :: a) -> x
using /\ to represent Λ (upper-case lambda) much as we use \ to represent λ.
id is a function taking two arguments. The first argument is a Type, named a. The second argument is an a. The result is also an a. The type signature is:
id :: forall (a :: Type). a -> a
forall is a new kind of function arrow, basically. Note that it provides a binder for a. In HM, when we said id :: a -> a, we didn't really define what a was. It was a fresh, global variable. By convention, more than anything else, that variable is not used anywhere else (otherwise the Generalization rule doesn't apply and everything breaks down). If I had written e.g. inject :: a -> Maybe a, afterwards, the textual occurrences of a would be referring to a new global entity, different from the one in id. In System F, the a in forall a. a -> a actually has scope. It's a "local variable" available only for use underneath that forall. The a in inject :: forall a. a -> Maybe a may or may not be the "same" a; it doesn't matter, because we have actual scoping rules that keep everything from falling apart.
Because System F has hygienic scoping rules for type variables, polymorphic types are allowed to do everything other types can do. You can take them as arguments
runCont :: forall (a :: Type). (forall (r :: Type). (a -> r) -> r) -> a
runCont a f = f a (id a) -- omitting type signatures; you can fill them in
You put them in data constructors
newtype Yoneda f a = Yoneda (forall b. (a -> b) -> f b)
You can place them in polymorphic containers:
type Bool = forall a. a -> a -> a
true, false :: Bool
true a t f = t
false a t f = f
thueMorse :: [Bool]
thueMorse = false : true : true : false : _etc
There's an important difference from HM. In HM, if something has polymorphic type, it also has, simultaneously, an infinity of monomorphic types. In System F, a thing can only have one type. id = /\a -> \(x :: a) -> x has type forall a. a -> a, not Int -> Int or Float -> Float. In order to get an Int -> Int out of id, you have to actually give it an argument: id Int :: Int -> Int, and id Float :: Float -> Float.
Haskell is not System F, however. System F is closer to what GHC calls Core, which is an internal language that GHC compiles Haskell to—basically Haskell without any syntax sugar. Haskell is a Hindley-Milner flavored veneer on top of a System F core. In Haskell, nominally a polymorphic type is a type. They do not act like sets of types. However, polymorphic types are still second class.
Haskell doesn't let you actually write forall without -XExplicitForAll. It emulates Hindley-Milner's wonky implicit global variable creation by inserting foralls in certain places. The places where it does so are changed by -XScopedTypeVariables. You can't take polymorphic arguments or have polymorphic fields unless you enable -XRankNTypes. You cannot say things like [forall a. a -> a -> a], nor can you say id (forall a. a -> a -> a) :: (forall a. a -> a -> a) -> (forall a. a -> a -> a)—you must define e.g. newtype Bool = Bool { ifThenElse :: forall a. a -> a -> a } to wrap the polymorphism under something monomorphic. You cannot explicitly give type arguments unless you enable -XTypeApplications, and then you can write id @Int :: Int -> Int. You cannot write type lambdas (/\), period; instead, they are inserted implicitly whenever possible. If you define id :: forall a. a -> a, then you cannot even write a bare, unapplied id in Haskell; it will always be implicitly expanded to an application, id @_.
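As a small sketch of what that explicit instantiation looks like in source Haskell (idInt is a made-up name):
{-# LANGUAGE TypeApplications #-}

-- The surface-syntax analogue of the System F application "id Int":
idInt :: Int -> Int
idInt = id @Int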
TL;DR: In Haskell, a polymorphic type is a type. It's not treated as a set of types, or a rule/schema for types, or whatever. However, due to historical reasons, they are treated as second class citizens. By default, it looks like they are treated as mere sets of types, if you squint a bit. Most restrictions on them can be lifted with suitable language extensions, at which point they look more like "just types". The one remaining big restriction (no impredicative instantiations allowed) is rather fundamental and cannot be erased, but that's fine because there's a workaround.
There is some nuance in the word "type" here. Values have concrete types, which cannot be polymorphic. Expressions, on the other hand, have general types, which can be polymorphic. If you're thinking of types for values, then a polymorphic type can be thought of loosely as defining sets of possible concrete types. (At least first-order polymorphic types! Higher-order polymorphism breaks this intuition.) But that's not always a particularly useful way of thinking, and it's not a sufficient definition. It doesn't capture which sets of types can be described in this way (and related notions like parametricity.)
It's a good observation, though, that the same word, "type", is used in these two related, but different, ways.
EDIT: The answer below turns out not to answer the question. The difference is a subtle mistake in terminology: types like Maybe and [] are higher-kinded, whereas types like forall a. a -> a and forall a. Maybe a are polymorphic. The answer below relates to higher-kinded types, but the question was asked about polymorphic types. I’m still leaving this answer up in case it helps anyone else, but I realise now it’s not really an answer to the question.
I would argue that a polymorphic higher-kinded type is closer to a set of types. For instance, you could see Maybe as the set {Maybe Int, Maybe Bool, …}.
However, strictly speaking, this is a bit misleading. To address this in more detail, we need to learn about kinds. Similarly to how types describe values, we say that kinds describe types. The idea is:
A concrete type (that is, one which has values) has a kind of *. Examples include Bool, Char, Int and Maybe String, which all have kind *. This is denoted e.g. Bool :: *. Note that functions such as Int -> String also have kind *, as these are concrete types which can contain values such as show!
A type with a type parameter has a kind containing arrows. For instance, in the same way that id :: a -> a, we can say that Maybe :: * -> *, since Maybe takes a concrete type as an argument (such as Int), and produces a concrete type as a result (such as Maybe Int). Something like a -> a also has kind * -> *, since it has one type parameter (a) and produces a concrete result (a -> a). You can get more complex kinds as well: for instance, data Foo f x = FooConstr (f x x) has kind Foo :: (* -> * -> *) -> * -> *. (Can you see why?)
(If the above explanation doesn’t make sense, the Learn You a Haskell book has a great section on kinds as well.)
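If it helps, here is a sketch of the Foo example with its kind spelled out explicitly (Type is the modern spelling of *; the kind annotations and the example value are additions for illustration):
{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)

-- f must itself take two concrete types to a concrete type, hence the
-- (Type -> Type -> Type) part of Foo's kind.
data Foo (f :: Type -> Type -> Type) (x :: Type) = FooConstr (f x x)

-- Example instantiation: Either :: Type -> Type -> Type and Int :: Type,
-- so Foo Either Int is a concrete type wrapping an Either Int Int.
example :: Foo Either Int
example = FooConstr (Left 3)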
So now we can answer your questions properly:
Which is a polymorphic higher-kinded type: a type or a set of types?
Neither: a polymorphic higher-kinded type is a type-level function, as indicated by the arrows in its kind. For instance, Maybe :: * -> * is a type-level function which converts e.g. Int → Maybe Int, Bool → Maybe Bool etc.
Is a polymorphic higher-kinded type with a concrete type substituting its type variable a type?
Yes, when your polymorphic higher-kinded type has a kind * -> * (i.e. it has one type parameter, which accepts a concrete type). When you apply a concrete type Conc :: * to a type Poly :: * -> *, it’s just function application, as detailed above, with the result being Poly Conc :: * i.e. a concrete type.
Is a polymorphic higher-kinded type with different concrete types substituting its type variable considered the same or different types?
This question is a bit out of place, as it doesn’t have anything to do with kinds. The answer is definitely no: two types like Maybe Int and Maybe Bool are not the same. Nothing may be a member of both types, but only the former contains a value Just 4, and only the latter contains a value Just False.
On the other hand, it is possible to have two different substitutions where the resulting types are isomorphic. (An isomorphism is where two types are different, but equivalent in some way. For instance, (a, b) and (b, a) are isomorphic, despite not being the same type. The formal condition is that two types p, q are isomorphic when you can write two inverse functions p -> q and q -> p.)
One example of this is Const:
data Const a b = Const { getConst :: a }
This type just ignores its second type parameter; as a result, two types like Const Int Char and Const Int Bool are isomorphic. However, they are not the same type: if you make a value of type Const Int Char, but then use it as something of type Const Int Bool, this will result in a type error. This sort of functionality is incredibly useful, as it means you can ‘tag’ a type a using Const a tag, then use the tag as a marker of information on the type level.
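To make that concrete, here is a small sketch using the Const definition above (toBool and fromBool are made-up names); each is the other's inverse, yet the two types remain distinct:
toBool :: Const Int Char -> Const Int Bool
toBool (Const n) = Const n

fromBool :: Const Int Bool -> Const Int Char
fromBool (Const n) = Const n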

Practical applications of Rank 2 polymorphism?

I'm covering polymorphism and I'm trying to see the practical uses of such a feature.
My basic understanding of Rank 2 is:
type MyType = ∀ a. a -> a
subFunction :: a -> a
subFunction el = el
mainFunction :: MyType -> Int
mainFunction func = func 3
I understand that this is allowing the user to use a polymorphic function (subFunction) inside mainFunction and strictly specify its output (Int). This seems very similar to GADTs:
data Example a where
  ExampleInt :: Int -> Example Int
  ExampleBool :: Bool -> Example Bool
1) Given the above, is my understanding of Rank 2 polymorphism correct?
2) What are the general situations where Rank 2 polymorphism can be used, as opposed to GADTs, for example?
If you pass a polymorphic function as an argument to a Rank2-polymorphic function, you're essentially passing not just one function but a whole family of functions – for all possible types that fulfill the constraints.
Typically, those forall quantifiers come with a class constraint. For example, I might wish to do number arithmetic with two different types simultaneously (for comparing precision or whatever).
data FloatCompare = FloatCompare {
    singlePrecision :: Float
  , doublePrecision :: Double
  }
Now I might want to modify those numbers through some maths operation. Something like
modifyFloat :: (Num -> Num) -> FloatCompare -> FloatCompare
But Num is not a type, only a type class. I could of course pass a function that would modify any particular number type, but I couldn't use that to modify both a Float and a Double value, at least not without some ugly (and possibly lossy) converting back and forth.
Solution: Rank-2 polymorphism!
modifyFloat :: (∀ n . Num n => n -> n) -> FloatCompare -> FloatCompare
modifyFloat f (FloatCompare single double)
      = FloatCompare (f single) (f double)
The best single example of how this is useful in practice is probably lenses. A lens is a “smart accessor function” to a field in some larger data structure. It allows you to access fields, update them, gather results... while at the same time composing in a very simple way. How it works: Rank2-polymorphism; every lens is polymorphic, with the different instantiations corresponding to the “getter” / “setter” aspects, respectively.
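For a taste of what that looks like, here is a minimal sketch of the van Laarhoven encoding that the lens library builds on, simplified to the non-type-changing case; Lens, view and over echo the library's names, but this is a stripped-down illustration, not the real API:
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is rank-2: polymorphic in the functor f.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Instantiating f to Const a reads the field...
view :: Lens s a -> s -> a
view l = getConst . l Const

-- ...while instantiating f to Identity rewrites it.
over :: Lens s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)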
The go-to example of an application of rank-2 types is runST as Benjamin Hodgson mentioned in the comments. This is a rather good example and there are a variety of examples using the same trick. For example, branding to maintain abstract data type invariants across multiple types, avoiding confusion of differentials in ad, a region-based version of ST.
But I'd actually like to talk about how Haskell programmers are implicitly using rank-2 types all the time. Every type class whose methods have universally quantified types desugars to a dictionary with a field with a rank-2 type. In practice, this is virtually always a higher-kinded type class* like Functor or Monad. I'll use a simplified version of Alternative as an example. The class declaration is:
class Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a
The dictionary representing this class would be:
data AlternativeDict f = AlternativeDict {
  empty :: forall a. f a,
  (<|>) :: forall a. f a -> f a -> f a }
Sometimes such an encoding is nice as it allows one to use different "instances" for the same type, perhaps only locally. For example, Maybe has two obvious instances of Alternative depending on whether Just a <|> Just b is Just a or Just b. Languages without type classes, such as Scala, do indeed use this encoding.
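As a small sketch of that idea, relying on the AlternativeDict type above (with RankNTypes enabled; leftBiased and rightBiased are made-up names):
-- Two different "Alternative instances" for Maybe as plain values, differing
-- only in which side (<|>) prefers:
leftBiased :: AlternativeDict Maybe
leftBiased = AlternativeDict Nothing (\x y -> case x of { Just _ -> x; Nothing -> y })

rightBiased :: AlternativeDict Maybe
rightBiased = AlternativeDict Nothing (\x y -> case y of { Just _ -> y; Nothing -> x })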
To connect to leftaroundabout's reference to lenses, you can view the hierarchy there as a hierarchy of type classes and the lens combinators as simply tools for explicitly building the relevant type class dictionaries. Of course, the reason it isn't actually a hierarchy of type classes is that we usually will have multiple "instances" for the same type. E.g. _head and _head . _tail are both "instances" of Traversal' s a.
* A higher-kinded type class doesn't necessarily lead to this, and it can also happen for a type class whose parameter has kind *. For example:
-- Higher-kinded but doesn't require universal quantification.
class Sum c where
  sum :: c Int -> Int

-- Not higher-kinded but does require universal quantification.
class Length l where
  length :: [a] -> l
If you are using modules in Haskell, you are already using Rank-2 types. Theoretically speaking, modules are records with rank-2 type properties.
For example, the Foo module below in Haskell ...
module Foo(id) where

id :: forall a. a -> a
id x = x

import qualified Foo

main = do
  putStrLn (Foo.id "hello")
  return ()
... can actually be thought of as a record as follows:
data FooType = FooType {
  id :: forall a. a -> a
}

Foo :: FooType
Foo = FooType {
  id = \x -> x
}
P.S. (unrelated to this question): from a language design perspective, if you are going to support a module system, then you might as well support higher-rank types (i.e. allow arbitrary quantification of type variables on any level) to reduce duplication of effort (i.e. type checking a module should be almost the same as type checking a record with higher-rank types).

What does Functor's fmap tell about types?

What do f a and f b tell me about their types?
class Functor f where
  fmap :: (a -> b) -> f a -> f b
I think I get the idea behind standard instances of a functor. However, I'm having a hard time understanding what f a and f actually represent.
I understand that f a and f b are just types and they must carry information what type constructor was used to create them and type arguments that were used.
Is f a type constructor of kind * -> *? Is (->) r a type constructor just like Maybe is?
I understand that f a and f b are just types and they must carry information what type constructor was used to create them and type arguments that were used.
Good explanation.
Is f a type constructor of kind * -> *?
In effect.
Is (->) r a type constructor just like Maybe is?
In effect, yes:
Yes in the sense that you can apply it to a type like String and get r -> String, just like you can apply Maybe to String to get Maybe String. You can use for f anything that gives you a type from any other type.
...but no...
No, in the sense that Daniel Wagner points out; To be precise, Maybe and [] are type constructors, but (->) r and Either a are sort of like partially applied type constructors. Nevertheless they make good functors, because you can freely apply functions "inside" them and change the type of "the contents".
(Stuff in inverted commas is very hand-wavy imprecise terminology.)
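For concreteness, here is a simplified version of the instance that base already provides for Either a; the partially applied constructor Either a plays the role of f, and fmap only changes "the contents":
instance Functor (Either a) where
  fmap _ (Left x)  = Left x
  fmap f (Right y) = Right (f y)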
My (possibly mildly tortured) reading of chapter 4 of the Haskell 2010 Report is that Maybe and (->) r are both types, of kind * -> *. Alternatively, the Report also labels them as type expressions—but I can't discern a firm difference in how the Report uses the two terms, except perhaps for surface syntax details. (->) and Maybe are type constructors; type expressions are assembled from type constructors and type variables.
For example, section 4.1.1 ("Kinds") of the 2010 report says (my boldface):
To ensure that they are valid, type expressions are classified into different kinds, which take one of two possible forms:
The symbol ∗ represents the kind of all nullary type constructors.
If κ1 and κ2 are kinds, then κ1 → κ2 is the kind of types that take a type of kind κ1 and return a type of kind κ2.
Section 4.3.2, "Instance Declarations" (my boldface):
An instance declaration that makes the type T to be an instance of class C is called a C-T instance declaration and is subject to these static restrictions:
A type may not be declared as an instance of a particular class more than once in the program.
The class and type must have the same kind; this can be determined using kind inference as described in Section 4.6.
So going by that language, the following instance declaration makes the type (->) r to be an instance of the class Functor:
instance Functor ((->) r) where
  fmap f g = f . g
The funny thing about this terminology is that we call (->) r a "type" even though there are no expressions in Haskell that have that type—not even undefined:
foo :: (->) r
foo = undefined
{-
[1 of 1] Compiling Main ( ../src/scratch.hs, interpreted )
../src/scratch.hs:1:8:
Expecting one more argument to `(->) r'
In the type signature for `foo': foo :: (->) r
-}
But I think that's not a big deal. Basically, all declarations in Haskell must have types of kind *.
As a side note, from my limited understanding of dependently typed languages, many of these lack Haskell's firm distinction between terms and types, so that something like (->) Boolean is an expression whose value is a function that takes a type as its argument and produces a type as its result.

What is the purpose of Rank2Types?

I am not really proficient in Haskell, so this might be a very easy question.
What language limitation do Rank2Types solve? Don't functions in Haskell already support polymorphic arguments?
It's hard to understand higher-rank polymorphism unless you study System F directly, because Haskell is designed to hide the details of that from you in the interest of simplicity.
But basically, the rough idea is that polymorphic types don't really have the a -> b form that they do in Haskell; in reality, they look like this, always with explicit quantifiers:
id :: ∀a.a → a
id = Λt.λx:t.x
If you don't know the "∀" symbol, it's read as "for all"; ∀x.dog(x) means "for all x, x is a dog." "Λ" is capital lambda, used for abstracting over type parameters; what the second line says is that id is a function that takes a type t, and then returns a function that's parametrized by that type.
You see, in System F, you can't just apply a function like that id to a value right away; first you need to apply the Λ-function to a type in order to get a λ-function that you apply to a value. So for example:
(Λt.λx:t.x) Int 5 = (λx:Int.x) 5
= 5
Standard Haskell (i.e., Haskell 98 and 2010) simplifies this for you by not having any of these type quantifiers, capital lambdas and type applications, but behind the scenes GHC puts them in when it analyzes the program for compilation. (This is all compile-time stuff, I believe, with no runtime overhead.)
But Haskell's automatic handling of this means that it assumes that "∀" never appears on the left-hand branch of a function ("→") type. Rank2Types and RankNTypes turn off those restrictions and allow you to override Haskell's default rules for where to insert forall.
Why would you want to do this? Because the full, unrestricted System F is hella powerful, and it can do a lot of cool stuff. For example, type hiding and modularity can be implemented using higher-rank types. Take for example a plain old function of the following rank-1 type (to set the scene):
f :: ∀r.∀a.((a → r) → a → r) → r
To use f, the caller first must choose what types to use for r and a, then supply an argument of the resulting type. So you could pick r = Int and a = String:
f Int String :: ((String → Int) → String → Int) → Int
But now compare that to the following higher-rank type:
f' :: ∀r.(∀a.(a → r) → a → r) → r
How does a function of this type work? Well, to use it, first you specify which type to use for r. Say we pick Int:
f' Int :: (∀a.(a → Int) → a → Int) → Int
But now the ∀a is inside the function arrow, so you can't pick what type to use for a; you must apply f' Int to a Λ-function of the appropriate type. This means that the implementation of f' gets to pick what type to use for a, not the caller of f'. Without higher-rank types, on the contrary, the caller always picks the types.
What is this useful for? Well, for many things actually, but one idea is that you can use this to model things like object-oriented programming, where "objects" bundle some hidden data together with some methods that work on the hidden data. So for example, an object with two methods—one that returns an Int and another that returns a String, could be implemented with this type:
myObject :: ∀r.(∀a.(a → Int, a → String) → a → r) → r
How does this work? The object is implemented as a function that has some internal data of hidden type a. To actually use the object, its clients pass in a "callback" function that the object will call with the two methods. For example:
myObject String (Λa. λ(length, name):(a → Int, a → String). λobjData:a. name objData)
Here we are, basically, invoking the object's second method, the one whose type is a → String for an unknown a. Well, unknown to myObject's clients; but these clients do know, from the signature, that they will be able to apply either of the two functions to it, and get either an Int or a String.
For an actual Haskell example, below is the code that I wrote when I taught myself RankNTypes. This implements a type called ShowBox which bundles together a value of some hidden type together with its Show class instance. Note that in the example at the bottom, I make a list of ShowBox whose first element was made from a number, and the second from a string. Since the types are hidden by using the higher-rank types, this doesn't violate type checking.
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ImpredicativeTypes #-}
type ShowBox = forall b. (forall a. Show a => a -> b) -> b
mkShowBox :: Show a => a -> ShowBox
mkShowBox x = \k -> k x
-- | This is the key function for using a 'ShowBox'. You pass in
-- a function #k# that will be applied to the contents of the
-- ShowBox. But you don't pick the type of #k#'s argument--the
-- ShowBox does. However, it's restricted to picking a type that
-- implements #Show#, so you know that whatever type it picks, you
-- can use the 'show' function.
runShowBox :: forall b. (forall a. Show a => a -> b) -> ShowBox -> b
-- Expanded type:
--
-- runShowBox
-- :: forall b. (forall a. Show a => a -> b)
-- -> (forall b. (forall a. Show a => a -> b) -> b)
-- -> b
--
runShowBox k box = box k
example :: [ShowBox]
-- example :: [ShowBox] expands to this:
--
-- example :: [forall b. (forall a. Show a => a -> b) -> b]
--
-- Without the annotation the compiler infers the following, which
-- breaks in the definition of 'result' below:
--
-- example :: forall b. [(forall a. Show a => a -> b) -> b]
--
example = [mkShowBox 5, mkShowBox "foo"]
result :: [String]
result = map (runShowBox show) example
PS: for anybody reading this who has wondered why ExistentialQuantification in GHC uses forall, I believe the reason is that it's using this sort of technique behind the scenes.
Don't functions in Haskell already support polymorphic arguments?
They do, but only of rank 1. This means that while you can write a function that takes different types of arguments without this extension, you can't write a function that uses its argument at different types in the same invocation.
For example the following function can't be typed without this extension because g is used with different argument types in the definition of f:
f g = g 1 + g "lala"
Note that it's perfectly possible to pass a polymorphic function as an argument to another function. So something like map id ["a","b","c"] is perfectly legal. But the function may only use it as monomorphic. In the example map uses id as if it had type String -> String. And of course you can also pass a simple monomorphic function of the given type instead of id. Without rank2types there is no way for a function to require that its argument must be a polymorphic function and thus also no way to use it as a polymorphic function.
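For illustration, here is one possible rank-2 signature that makes the earlier f g = g 1 + g "lala" example typecheck; the Show constraint and the Int result are arbitrary choices for this sketch:
{-# LANGUAGE RankNTypes #-}

-- g must be polymorphic: it is applied to both an Int and a String.
f :: (forall a. Show a => a -> Int) -> Int
f g = g (1 :: Int) + g "lala"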
Luis Casillas's answer gives a lot of great info about what rank 2 types mean, but I'll just expand on one point he didn't cover. Requiring an argument to be polymorphic doesn't just allow it to be used with multiple types; it also restricts what that function can do with its argument(s) and how it can produce its result. That is, it gives the caller less flexibility. Why would you want to do that? I'll start with a simple example:
Suppose we have a data type
data Country = BigEnemy | MediumEnemy | PunyEnemy | TradePartner | Ally | BestAlly
and we want to write a function
f g = launchMissilesAt $ g [BigEnemy, MediumEnemy, PunyEnemy]
that takes a function that's supposed to choose one of the elements of the list it's given and return an IO action launching missiles at that target. We could give f a simple type:
f :: ([Country] -> Country) -> IO ()
The problem is that we could accidentally run
f (\_ -> BestAlly)
and then we'd be in big trouble! Giving f a rank 1 polymorphic type
f :: ([a] -> a) -> IO ()
doesn't help at all, because we choose the type a when we call f, and we just specialize it to Country and use our malicious \_ -> BestAlly again. The solution is to use a rank 2 type:
f :: (forall a . [a] -> a) -> IO ()
Now the function we pass in is required to be polymorphic, so \_ -> BestAlly won't type check! In fact, no function returning an element not in the list it is given will typecheck (although some functions that go into infinite loops or produce errors and therefore never return will do so).
The above is contrived, of course, but a variation on this technique is key to making the ST monad safe.
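To make the ST connection a bit more concrete: runST has the rank-2 type (forall s. ST s a) -> a, so the state-thread tag s is chosen inside runST rather than by the caller, and references tagged with s cannot escape. A small usage sketch (sumST is a made-up name):
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef, newSTRef, readSTRef)

-- Sums a list using a local mutable reference; the mutation is invisible
-- from the outside, which is exactly what the rank-2 forall guarantees.
sumST :: [Int] -> Int
sumST xs = runST (do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef ref (+ x)) xs
  readSTRef ref)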
Higher-rank types aren't as exotic as the other answers have made out. Believe it or not, many object-oriented languages (including Java and C#!) feature them. (Of course, no one in those communities knows them by the scary-sounding name "higher-rank types".)
The example I'm going to give is a textbook implementation of the Visitor pattern, which I use all the time in my daily work. This answer is not intended as an introduction to the visitor pattern; that knowledge is readily available elsewhere.
In this fatuous imaginary HR application, we wish to operate on employees who may be full-time permanent staff or temporary contractors. My preferred variant of the Visitor pattern (and indeed the one which is relevant to RankNTypes) parameterises the visitor's return type.
interface IEmployeeVisitor<T>
{
    T Visit(PermanentEmployee e);
    T Visit(Contractor c);
}

class XmlVisitor : IEmployeeVisitor<string> { /* ... */ }
class PaymentCalculator : IEmployeeVisitor<int> { /* ... */ }
The point is that a number of visitors with different return types can all operate on the same data. This means IEmployee must express no opinion as to what T ought to be.
interface IEmployee
{
    T Accept<T>(IEmployeeVisitor<T> v);
}

class PermanentEmployee : IEmployee
{
    // ...
    public T Accept<T>(IEmployeeVisitor<T> v)
    {
        return v.Visit(this);
    }
}

class Contractor : IEmployee
{
    // ...
    public T Accept<T>(IEmployeeVisitor<T> v)
    {
        return v.Visit(this);
    }
}
I wish to draw your attention to the types. Observe that IEmployeeVisitor universally quantifies its return type, whereas IEmployee quantifies it inside its Accept method - that is to say, at a higher rank. Translating clunkily from C# to Haskell:
data IEmployeeVisitor r = IEmployeeVisitor {
  visitPermanent :: PermanentEmployee -> r,
  visitContractor :: Contractor -> r
}

newtype IEmployee = IEmployee {
  accept :: forall r. IEmployeeVisitor r -> r
}
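And a small sketch of using that translation (PermanentEmployee and Contractor are stubbed out here, and nameVisitor/alice are made-up names):
data PermanentEmployee = PermanentEmployee { permanentName :: String }
data Contractor = Contractor { contractorName :: String }

-- A visitor with return type String: just extract the name.
nameVisitor :: IEmployeeVisitor String
nameVisitor = IEmployeeVisitor permanentName contractorName

-- An employee value; its accept must work for every visitor return type r.
alice :: IEmployee
alice = IEmployee (\v -> visitPermanent v (PermanentEmployee "Alice"))

-- accept alice nameVisitor evaluates to "Alice"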
So there you have it. Higher-rank types show up in C# when you write types containing generic methods.
For those familiar with object oriented languages, a higher-rank function is simply a generic function that expects as its argument another generic function.
E.g. in TypeScript you could write:
type WithId<T> = T & { id: number }
type Identifier = <T>(obj: T) => WithId<T>
type Identify = <TObj>(obj: TObj, f: Identifier) => WithId<TObj>
See how the generic function type Identify demands a generic function of the type Identifier? This makes Identify a higher-rank function.
