Haskell (type declaration): what is "a"?

This is perhaps a very basic question, but, nevertheless, it does not seem to have been covered on SO.
I recently took up Haskell and up until now type declarations consisted of mostly the following:
Int
Bool
Float
etc, etc
Now I am getting into lists and I am seeing type declarations that use a, such as in the following function that iterates through an associative list:
contains :: Int -> [(Int,a)] -> [a]
contains x list = [values | (key, values) <- list, x == key]
Can someone provide an explanation as to what this a is, and how it works? From observation it seems to represent every type. Does this mean I can input any list of any type as parameter?

Yes, you're right: it represents "any type", with the restriction that every occurrence of a in a given type signature must resolve to the same type. So you can pass in a list whose values have any type, but the result list then contains values of that same type, because both positions are written as a in the signature. The key you look up, meanwhile, is pinned to Int by the signature.
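For concreteness, here is a minimal, self-contained sketch of the same contains function used at two different value types (the example lists are made up for illustration):

contains :: Int -> [(Int, a)] -> [a]
contains x list = [values | (key, values) <- list, x == key]

namesById :: [(Int, String)]
namesById = [(1, "alice"), (2, "bob")]

flagsById :: [(Int, Bool)]
flagsById = [(1, True), (2, False)]

main :: IO ()
main = do
  print (contains 1 namesById) -- here a is String; prints ["alice"]
  print (contains 2 flagsById) -- here a is Bool; prints [False]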

In Haskell, uppercase types are concrete types (Int, Bool) or type constructors (Maybe, Either) while lowercase types are type variables. A function is implicitly generic in all the type variables it uses, so this:
contains :: Int -> [(Int, a)] -> [a]
Is shorthand for this*:
contains :: forall a. Int -> [(Int, a)] -> [a]
In C++, forall is spelled template:
template<typename a>
list<a> contains(int, list<pair<int, a>>);
In Java and C#, it’s spelled with angle brackets:
list<a> contains<a>(int, list<pair<int, a>>);
Of course, in these languages, generic type variables are often called T, U, V, while in Haskell they’re often called a, b, c. It’s just a difference of convention.
* This syntax is enabled by the -XExplicitForAll flag in GHC, as well as other extensions.
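As a small sketch (assuming GHC with the ExplicitForAll extension enabled), the same signature can be written with the quantifier spelled out:

{-# LANGUAGE ExplicitForAll #-}

contains :: forall a. Int -> [(Int, a)] -> [a]
contains x list = [values | (key, values) <- list, x == key]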

Related

What does a stand for in a data type declaration?

Normally when using type declarations we do:
function_name :: Type -> Type
However in an exercise I am trying to solve there is the following structure:
function_name :: Type a -> Type a
or explicitly as in the exercise
alphabet :: DFA a -> Alphabet a
alphabet = undefined
What does a stand for?
Short answer: it's a type variable.
At the computation level, the way we define functions is to use variables to refer to their arguments. Like this:
f x = x + 3
Here x is a variable, and its value will be chosen when the function is called. Haskell has a similar (but not identical...) mechanism in its type sublanguage. For example, you can write things like:
type F x = (x, Int, x)
type Endo a = a -> a -> a
Here again x is a variable in the first one (and a in the second), and its value will be chosen at use sites. One can also use this mechanism when defining new types. (The previous two examples just give new names to existing types, but the following does more.) One of the most basic nontrivial examples of this is the Maybe family of types:
data Maybe a = Nothing | Just a
The things on the right of the = are computation-level, so you can mostly ignore them for now, but on the left we are declaring a new family of types Maybe which accepts other types as an argument. For example, Maybe Int, Maybe (Bool, String), Maybe (Endo Char), and even passing in expressions that have variables like Maybe (x, Int, x) are all possible.
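For a quick sketch of how such applied types are used as ordinary types (the example values below are made up for illustration):

maybeAge :: Maybe Int
maybeAge = Just 30

maybePair :: Maybe (Bool, String)
maybePair = Nothing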
Syntactically, type constructors (things which are defined as part of the program text and that we expect the compiler to look up the definition for) start with an upper case letter and type variables (things which will be instantiated later and so don't currently have a concrete definition) start with lower case letters.
So, in the type signature you showed:
alphabet :: DFA a -> Alphabet a
I suspect there are actually two constructs new to you, not just one: first, the type variable a that you asked about, and second, the concept of type application, where we apply at the type level one "function-like" type to another. (Outside of this answer, people say "parameterized" instead of "function-like".)
...and, believe it or not, there is even a type system for types that makes sure you don't write things like these:
Int a           -- Int is not parameterized, so shouldn't be applied to arguments
Int Char        -- ditto
Maybe -> String -- Maybe is parameterized, so should be applied to
                -- arguments, but isn't
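You can watch this type system for types at work by asking GHCi for kinds (a sketch; newer GHC versions may print Type instead of *):

ghci> :kind Int
Int :: *
ghci> :kind Maybe
Maybe :: * -> *
ghci> :kind Maybe Char
Maybe Char :: *
ghci> :kind Int Char -- rejected with a kind error, since Int takes no arguments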

Which is a polymorphic type: a type or a set of types?

Programming in Haskell by Hutton says:
A type that contains one or more type variables is called polymorphic.
Which is a polymorphic type: a type or a set of types?
Is a polymorphic type with a concrete type substituting its type variable a type?
Is a polymorphic type with different concrete types substituting its type variable considered the same or different types?
Is a polymorphic type with a concrete type substituting its type variable a type?
That's the point, yes. However, you need to be careful. Consider:
id :: a -> a
That's polymorphic. You can substitute a := Int and get Int -> Int, and a := Float -> Float and get (Float -> Float) -> Float -> Float. However, you cannot say a := Maybe and get id :: Maybe -> Maybe. That just doesn't make sense. Instead, we have to require that you can only substitute concrete types like Int and Maybe Float for a, not abstract ones like Maybe. This is handled with the kind system. This is not too important for your question, so I'll just summarize. Int and Float and Maybe Float are all concrete types (that is, they have values), so we say that they have type Type (the type of a type is often called its kind). Maybe is a function that takes a concrete type as an argument and returns a new concrete type, so we say Maybe :: Type -> Type. In the type a -> a, we say the type variable a must have type Type, so now the substitutions a := Int, a := String, etc. are allowed, while stuff like a := Maybe isn't.
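A small GHCi sketch of those substitutions (this assumes the TypeApplications extension, which lets you pick the substitution explicitly; output may vary slightly between GHC versions):

ghci> :set -XTypeApplications
ghci> :t id @Int
id @Int :: Int -> Int
ghci> :t id @(Float -> Float)
id @(Float -> Float) :: (Float -> Float) -> Float -> Float
ghci> :t id @Maybe -- rejected with a kind error: Maybe has kind Type -> Type, not Type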
Is a polymorphic type with different concrete types substituting its type variable considered the same or different types?
No. Back to a -> a: a := Int gives Int -> Int, but a := Float gives Float -> Float. Not the same.
Which is a polymorphic type: a type or a set of types?
Now that's a loaded question. You can skip to the TL;DR at the end, but the question of "what is a polymorphic type" is actually really confusing in Haskell, so here's a wall of text.
There are two ways to see it. Haskell started with one, then moved to the other, and now we have a ton of old literature referring to the old way, so the syntax of the modern system tries to maintain compatibility. It's a bit of a hot mess. Consider
id x = x
What is the type of id? One point of view is that id :: Int -> Int, and also id :: Float -> Float, and also id :: (Int -> Int) -> Int -> Int, ad infinitum, all simultaneously. This infinite family of types can be summed up with one polymorphic type, id :: a -> a. This point of view gives you the Hindley-Milner type system. This is not how modern GHC Haskell works, but this system is what Haskell was based on at its creation.
In Hindley-Milner, there is a hard line between polymorphic types and monomorphic types, and the union of these two groups gives you "types" in general. It's not really fair to say that, in HM, polymorphic types (in HM jargon, "polytypes") are types. You can't take polytypes as arguments, or return them from functions, or place them in a list. Instead, polytypes are only templates for monotypes. If you squint, in HM, a polymorphic type can be seen as a set of those monotypes that fit the schema.
Modern Haskell is built on System F (plus extensions). In System F,
id = \x -> x -- rewriting the example
is not a complete definition. Therefore we can't even think about giving it a type. Every lambda-bound variable needs a type annotation, but x has no annotation. Worse, we can't even decide on one: \(x :: Int) -> x is just as good as \(x :: Float) -> x. In System F, what we do is we write
id = /\(a :: Type) -> \(x :: a) -> x
using /\ to represent Λ (upper-case lambda) much as we use \ to represent λ.
id is a function taking two arguments. The first argument is a Type, named a. The second argument is an a. The result is also an a. The type signature is:
id :: forall (a :: Type). a -> a
forall is a new kind of function arrow, basically. Note that it provides a binder for a. In HM, when we said id :: a -> a, we didn't really define what a was. It was a fresh, global variable. By convention, more than anything else, that variable is not used anywhere else (otherwise the Generalization rule doesn't apply and everything breaks down). If I had written e.g. inject :: a -> Maybe a, afterwards, the textual occurrences of a would be referring to a new global entity, different from the one in id. In System F, the a in forall a. a -> a actually has scope. It's a "local variable" available only for use underneath that forall. The a in inject :: forall a. a -> Maybe a may or may not be the "same" a; it doesn't matter, because we have actual scoping rules that keep everything from falling apart.
Because System F has hygienic scoping rules for type variables, polymorphic types are allowed to do everything other types can do. You can take them as arguments
runCont :: forall (a :: Type). (forall (r :: Type). (a -> r) -> r) -> a
runCont a f = f a (id a) -- omitting type signatures; you can fill them in
You can put them in data constructors:
newtype Yoneda f a = Yoneda (forall b. (a -> b) -> f b)
You can place them in polymorphic containers:
type Bool = forall a. a -> a -> a
true, false :: Bool
true a t f = t
false a t f = f
thueMorse :: [Bool]
thueMorse = false : true : true : false : _etc
There's an important difference from HM. In HM, if something has polymorphic type, it also has, simultaneously, an infinity of monomorphic types. In System F, a thing can only have one type. id = /\a -> \(x :: a) -> x has type forall a. a -> a, not Int -> Int or Float -> Float. In order to get an Int -> Int out of id, you have to actually give it an argument: id Int :: Int -> Int, and id Float :: Float -> Float.
Haskell is not System F, however. System F is closer to what GHC calls Core, an internal language that GHC compiles Haskell to (basically Haskell without any syntax sugar). Haskell is a Hindley-Milner flavored veneer on top of a System F core. In Haskell, nominally a polymorphic type is a type. Polymorphic types do not act like sets of types. However, they are still second class. Haskell doesn't let you actually write forall without -XExplicitForAll. It emulates Hindley-Milner's wonky implicit global variable creation by inserting foralls in certain places. The places where it does so are changed by -XScopedTypeVariables. You can't take polymorphic arguments or have polymorphic fields unless you enable -XRankNTypes. You cannot say things like [forall a. a -> a -> a], nor can you say id (forall a. a -> a -> a) :: (forall a. a -> a -> a) -> (forall a. a -> a -> a); you must define e.g. newtype Bool = Bool { ifThenElse :: forall a. a -> a -> a } to wrap the polymorphism under something monomorphic. You cannot explicitly give type arguments unless you enable -XTypeApplications, and then you can write id @Int :: Int -> Int. You cannot write type lambdas (/\), period; instead, they are inserted implicitly whenever possible. If you define id :: forall a. a -> a, then you cannot even write a bare id in Haskell; it will always be implicitly expanded to an application, id @_.
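As a brief sketch of the newtype-wrapping workaround mentioned above (RankNTypes assumed; the type is renamed CBool here to avoid clashing with the Prelude's Bool):

{-# LANGUAGE RankNTypes #-}

-- Wrap the polymorphic type under a monomorphic constructor so it can live
-- inside ordinary, monomorphic containers.
newtype CBool = CBool { ifThenElse :: forall a. a -> a -> a }

ctrue, cfalse :: CBool
ctrue  = CBool (\t _ -> t)
cfalse = CBool (\_ f -> f)

someBools :: [CBool] -- fine: the list element type is monomorphic on the outside
someBools = [ctrue, cfalse]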
TL;DR: In Haskell, a polymorphic type is a type. It's not treated as a set of types, or a rule/schema for types, or whatever. However, due to historical reasons, they are treated as second class citizens. By default, it looks like they are treated as mere sets of types, if you squint a bit. Most restrictions on them can be lifted with suitable language extensions, at which point they look more like "just types". The one remaining big restriction (no impredicative instantiations allowed) is rather fundamental and cannot be erased, but that's fine because there's a workaround.
There is some nuance in the word "type" here. Values have concrete types, which cannot be polymorphic. Expressions, on the other hand, have general types, which can be polymorphic. If you're thinking of types for values, then a polymorphic type can be thought of loosely as defining sets of possible concrete types. (At least first-order polymorphic types! Higher-order polymorphism breaks this intuition.) But that's not always a particularly useful way of thinking, and it's not a sufficient definition. It doesn't capture which sets of types can be described in this way (and related notions like parametricity.)
It's a good observation, though, that the same word, "type", is used in these two related, but different, ways.
EDIT: The answer below turns out not to answer the question. The difference is a subtle mistake in terminology: types like Maybe and [] are higher-kinded, whereas types like forall a. a -> a and forall a. Maybe a are polymorphic. The answer below relates to higher-kinded types, but the question was asked about polymorphic types. I’m still leaving this answer up in case it helps anyone else, but I realise now it’s not really an answer to the question.
I would argue that a polymorphic higher-kinded type is closer to a set of types. For instance, you could see Maybe as the set {Maybe Int, Maybe Bool, …}.
However, strictly speaking, this is a bit misleading. To address this in more detail, we need to learn about kinds. Similarly to how types describe values, we say that kinds describe types. The idea is:
A concrete type (that is, one which has values) has a kind of *. Examples include Bool, Char, Int and Maybe String, which all have kind *. This is denoted e.g. Bool :: *. Note that functions such as Int -> String also have kind *, as these are concrete types which can contain values such as show!
A type with a type parameter has a kind containing arrows. For instance, in the same way that id :: a -> a, we can say that Maybe :: * -> *, since Maybe takes a concrete type as an argument (such as Int), and produces a concrete type as a result (such as Maybe Int). Something like a -> a also has kind * -> *, since it has one type parameter (a) and produces a concrete result (a -> a). You can get more complex kinds as well: for instance, data Foo f x = FooConstr (f x x) has kind Foo :: (* -> * -> *) -> * -> *. (Can you see why?)
(If the above explanation doesn’t make sense, the Learn You a Haskell book has a great section on kinds as well.)
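For instance, you can check these kinds directly in GHCi (a sketch; newer GHC versions may display Type instead of *):

ghci> data Foo f x = FooConstr (f x x)
ghci> :kind Foo
Foo :: (* -> * -> *) -> * -> *
ghci> :kind Maybe
Maybe :: * -> *
ghci> :kind Maybe Int
Maybe Int :: *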
So now we can answer your questions properly:
Which is a polymorphic higher-kinded type: a type or a set of types?
Neither: a polymorphic higher-kinded type is a type-level function, as indicated by the arrows in its kind. For instance, Maybe :: * -> * is a type-level function which converts e.g. Int → Maybe Int, Bool → Maybe Bool etc.
Is a polymorphic higher-kinded type with a concrete type substituting its type variable a type?
Yes, when your polymorphic higher-kinded type has a kind * -> * (i.e. it has one type parameter, which accepts a concrete type). When you apply a concrete type Conc :: * to a type Poly :: * -> *, it’s just function application, as detailed above, with the result being Poly Conc :: * i.e. a concrete type.
Is a polymorphic higher-kinded type with different concrete types substituting its type variable considered the same or different types?
This question is a bit out of place, as it doesn’t have anything to do with kinds. The answer is definitely no: two types like Maybe Int and Maybe Bool are not the same. Nothing may be a member of both types, but only the former contains a value Just 4, and only the latter contains a value Just False.
On the other hand, it is possible to have two different substitutions where the resulting types are isomorphic. (An isomorphism is where two types are different, but equivalent in some way. For instance, (a, b) and (b, a) are isomorphic, despite not being the same type. The formal condition is that two types p, q are isomorphic when you can write two inverse functions p -> q and q -> p.)
One example of this is Const:
data Const a b = Const { getConst :: a }
This type just ignores its second type parameter; as a result, two types like Const Int Char and Const Int Bool are isomorphic. However, they are not the same type: if you make a value of type Const Int Char, but then use it as something of type Const Int Bool, this will result in a type error. This sort of functionality is incredibly useful, as it means you can ‘tag’ a type a using Const a tag, then use the tag as a marker of information on the type level.
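A minimal sketch of that isomorphism (the function names to and from are made up for illustration):

data Const a b = Const { getConst :: a }

to :: Const Int Char -> Const Int Bool
to (Const n) = Const n

from :: Const Int Bool -> Const Int Char
from (Const n) = Const n

-- to . from and from . to are both the identity, so the two types are
-- isomorphic, even though GHC still treats them as distinct types.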

Haskell type keyword used with signature

In the code from Scrap Your Zippers, what does the following line mean:
type Move a = Zipper a -> Maybe (Zipper a)
Type is a synonym for a type and uses the same data constructors, so this makes no sense to me. How is it used here?
type allows us to make synonyms, as you say. This means we can make shortened versions of long and complicated types. Here is the definition of the String base type. Yes, this is how it's defined:
type String = [Char]
This allows us to make types more readable when we write them; everyone prefers seeing String to [Char].
You can also have type arguments, as with the data keyword. Here are some examples:
type Predicate t = t -> Bool
type Transform t = t -> t
type RightFoldSignature a b = (a -> b -> b) -> b -> [a] -> b
type TwoTuple a b = (a,b)
type ThreeTuple a b c = (a,b,c)
... And so on. So, there's nothing particularly strange going on with the declaration you have there - the author is making a type synonym to make things easier to write and clearer to read, presumably to be used in the types of the functions the author wants to create.
Learn You a Haskell has its own little section on this.
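As a small sketch of how such synonyms read in practice, here are the Predicate and Transform synonyms from above used in ordinary signatures:

type Predicate t = t -> Bool
type Transform t = t -> t

keepIf :: Predicate a -> [a] -> [a] -- same as (a -> Bool) -> [a] -> [a]
keepIf p = filter p

applyTwice :: Transform a -> a -> a -- same as (a -> a) -> a -> a
applyTwice f = f . f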

How are variable names chosen in type signatures inferred by GHC?

When I play with checking types of functions in Haskell with :t, for example like those in my previous question, I tend to get results such as:
Eq a => a -> [a] -> Bool
(Ord a, Num a, Ord a1, Num a1) => a -> a1 -> a
(Num t2, Num t1, Num t, Enum t2, Enum t1, Enum t) => [(t, t1, t2)]
It seems that this is not such a trivial question: how does the Haskell interpreter pick the names it uses for type variables? When would it choose a rather than t? When would it choose a1 rather than b? Is it important from the programmer's point of view?
The names of the type variables aren't significant. The type:
Eq element => element -> [element] -> Bool
Is exactly the same as:
Eq a => a -> [a] -> Bool
Some names are simply easier to read/remember.
Now, how can an inferencer choose the best names for types?
Disclaimer: I'm absolutely not a GHC developer. However I'm working on a type-inferencer for Haskell in my bachelor thesis.
During inference, the names chosen for the variables probably aren't that readable. In fact they are almost surely something along the lines of _N with N a number, or aN with N a number.
This is due to the fact that you often have to "refresh" type variables in order to complete inferencing, so you need a fast way to create new names. And using numbered variables is pretty straightforward for this purpose.
The names displayed when inference is completed can be "pretty printed". The inferencer can rename the variables to use a, b, c and so on instead of _1, _2 etc.
The trick is that most operations have explicit type signatures. Some definitions require quantifying some type variables (class, data and instance declarations, for example).
All these names that the user explicitly provides can be used to display the type in a better way.
When inferencing you can somehow keep track of where the fresh type variables came from, in order to be able to rename them with something more sensible when displaying them to the user.
Another option is to refresh variables by adding a number to them. For example a fresh type of return could be Monad m0 => a0 -> m0 a0 (here we know to use m and a simply because the class definition for Monad uses those names). When inference is finished you can get rid of the numbers and obtain the pretty names.
In general the inferencer will try to use names that were explicitly provided through signatures. If such a name was already used it might decide to add a number instead of using a different name (e.g. use b1 instead of c if b was already bound).
There are probably some other ad hoc rules. For example, the fact that tuple elements get names like t, t1, t2, t3, etc. is probably done with a custom rule; t doesn't appear in the signature for (,,), for example.
How does GHCi pick names for type variables? explains how many of these variable names come about. As Ganesh Sittampalam pointed out in a comment, something strange seems to be happening with arithmetic sequences. Both the Haskell 98 report and the Haskell 2010 report indicate that
[e1..] = enumFrom e1
GHCi, however, gives the following:
Prelude> :t [undefined..]
[undefined..] :: Enum t => [t]
Prelude> :t enumFrom undefined
enumFrom undefined :: Enum a => [a]
This makes it clear that the weird behavior has nothing to do with the Enum class itself, but rather comes in from some stage in translating the syntactic sequence to the enumFrom form. I wondered if maybe GHC wasn't really using that translation, but it really is:
{-# LANGUAGE NoMonomorphismRestriction #-}
module X (aoeu,htns) where
aoeu = [undefined..]
htns = enumFrom undefined
compiled using ghc -ddump-simpl enumlit.hs gives
X.htns :: forall a_aiD. GHC.Enum.Enum a_aiD => [a_aiD]
[GblId, Arity=1]
X.htns =
  \ (@ a_aiG) ($dEnum_aiH :: GHC.Enum.Enum a_aiG) ->
    GHC.Enum.enumFrom @ a_aiG $dEnum_aiH (GHC.Err.undefined @ a_aiG)

X.aoeu :: forall t_aiS. GHC.Enum.Enum t_aiS => [t_aiS]
[GblId, Arity=1]
X.aoeu =
  \ (@ t_aiV) ($dEnum_aiW :: GHC.Enum.Enum t_aiV) ->
    GHC.Enum.enumFrom @ t_aiV $dEnum_aiW (GHC.Err.undefined @ t_aiV)
so the only difference between these two representations is the assigned type variable name. I don't know enough about how GHC works to know where that t comes from, but at least I've narrowed it down!
Ørjan Johansen has noted in a comment that something similar seems to happen with function definitions and lambda abstractions.
Prelude> :t \x -> x
\x -> x :: t -> t
but
Prelude> :t map (\x->x) $ undefined
map (\x->x) $ undefined :: [b]
In the latter case, the type b comes from an explicit type signature given to map.
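For reference, this is the standard Prelude signature of map, which is where that b comes from:

map :: (a -> b) -> [a] -> [b]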
Are you familiar with the concepts of alpha equivalence and alpha substitution? This captures the notion that, for example, both of the following are completely equivalent and interconvertible (in certain circumstances) even though they differ:
\x -> (x, x)
\y -> (y, y)
The same concept can be extended to the level of types and type variables (see "System F" for further reading). Haskell in fact has a notion of "lambdas at the type level" for binding type variables, but it's hard to see because they're implicit by default. However, you can make them explicit by using the ExplicitForAll extension, and play around with explicitly binding your type variables:
ghci> :set -XExplicitForAll
ghci> let f x = x; f :: forall a. a -> a
In the second line, I use the forall keyword to introduce a new type variable, which is then used in a type.
In other words, it doesn't matter whether you choose a or t in your example, as long as the type expressions satisfy alpha-equivalence. Choosing type variable names so as to maximize human convenience is an entirely different topic, and probably far more complicated!

Understanding Polytypes in Hindley-Milner Type Inference

I'm reading the Wikipedia article on Hindley–Milner Type Inference trying to make some sense out of it. So far this is what I've understood:
Types are classified as either monotypes or polytypes.
Monotypes are further classified as either type constants (like int or string) or type variables (like α and β).
Type constants can either be concrete types (like int and string) or type constructors (like Map and Set).
Type variables (like α and β) behave as placeholders for concrete types (like int and string).
Now I'm having a little difficulty understanding polytypes but after learning a bit of Haskell this is what I make of it:
Types themselves have types. Formally types of types are called kinds (i.e. there are different kinds of types).
Concrete types (like int and string) and type variables (like α and β) are of kind *.
Type constructors (like Map and Set) are lambda abstractions of types (e.g. Set is of kind * -> * and Map is of kind * -> * -> *).
What I don't understand is what the quantifiers signify. For example, what does ∀α.σ represent? I can't seem to make heads or tails of it, and the more I read the following paragraph the more confused I get:
A function with polytype ∀α.α -> α by contrast can map any value of the same type to itself, and the identity function is a value for this type. As another example ∀α.(Set α) -> int is the type of a function mapping all finite sets to integers. The count of members is a value for this type. Note that quantifiers can only appear top level, i.e. a type ∀α.α -> ∀α.α for instance, is excluded by syntax of types and that monotypes are included in the polytypes, thus a type has the general form ∀α₁ . . . ∀αₙ.τ.
First, kinds and polymorphic types are different things. You can have a HM type system where all types are of the same kind (*), you could also have a system without polymorphism but with complex kinds.
If a term M is of type ∀a.t, it means that for whatever type s, we can substitute s for a in t (often written as t[a:=s]) and we'll have that M is of type t[a:=s]. This is somewhat similar to logic, where we can substitute any term for a universally quantified variable, but here we're dealing with types.
This is precisely what happens in Haskell, just that in Haskell you don't see the quantifiers. All type variables that appear in a type signature are implicitly quantified, just as if you had forall in front of the type. For example, map would have type
map :: forall a . forall b . (a -> b) -> [a] -> [b]
etc. Without this implicit universal quantification, type variables a and b would have to have some fixed meaning and map wouldn't be polymorphic.
The HM algorithm distinguishes types (without quantifiers, monotypes) and type schemas (universally quantified types, polytypes). It's important that at some places it uses type schemas (like in let), but at other places only types are allowed. This makes the whole thing decidable.
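A tiny sketch of where that distinction shows up in practice: let-bound names get type schemas (polytypes), while lambda-bound names only get monotypes, so the first definition below type-checks while the commented-out second one is rejected by plain Haskell:

ok :: (Bool, Int)
ok = let f x = x -- f is let-bound, so it is generalized to forall a. a -> a
     in (f True, f (3 :: Int))

-- bad = (\f -> (f True, f (3 :: Int))) (\x -> x)
-- rejected: the lambda-bound f only gets a monotype, so it cannot be used
-- at both Bool and Int in the same body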
I also suggest you read the article about System F. It is a more complex system, which allows forall anywhere in types (therefore everything there is just called a type), but type inference/checking is undecidable. It can help you understand how forall works. System F is described in depth in Girard, Lafont and Taylor, Proofs and Types.
Consider l = \x -> t in Haskell. It is a lambda, which represents a term t with a variable x that will be substituted later (e.g. l 1, whatever that would mean). Similarly, ∀α.σ represents a type with a type variable α; that is, f : ∀α.σ is a function parameterized by a type α. In some sense, σ depends on α, so f returns a value of type σ(α), where α will be substituted in σ(α) later, and we will get some concrete type.
In Haskell you are allowed to omit ∀ and define functions just like id : a -> a. The reason omitting the quantifier is allowed is basically that quantifiers may appear only at the top level (without the RankNTypes extension). You can try this piece of code:
id2 :: a -> a -- I named it id2 since id is already defined in Prelude
id2 x = x
If you ask GHCi for the type of id2 (:t id2), it will return a -> a. To be more precise (more type-theoretic), id2 has the type ∀a. a -> a. Now, if you add to your code:
val = id2 3
then 3 has the type Int, so the type Int will be substituted for α in σ and we will get the concrete type Int -> Int.

Resources