Haskell: What are the types of these expressions (if they have any)?

I'm new to Haskell and I am doing some exercises to learn as much as I can about types, but some of the questions are really confusing to me. The exercise I'm struggling with reads:
What are the types of the following expressions? If the expression has no type, do state as much. Also
be sure to declare necessary class restrictions where needed.
5 + 8 :: ?
(+) 2 :: ?
(+2) :: ?
(2+) :: ?
I get that 5 + 8 will return an Int, but the others are not valid expressions by themselves. Does that mean that they have no type, or should I think of them as functions (f :: Int -> Int)(f x = x + 2)?

First, the answers:
5 + 8 has the type forall a. Num a => a which can be specialized to Int.
(+) 2 and (2 +) are the same and both have the type forall a. Num a => a -> a which can be specialized to Int -> Int.
(+ 2) is different but also has the type forall a. Num a => a -> a which can be specialized to Int -> Int.
(+) has the type forall a. Num a => a -> a -> a which can be specialized to Int -> Int -> Int.
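You can check all of these yourself in GHCi with the :t command (GHCi leaves the forall implicit):
Prelude> :t 5 + 8
5 + 8 :: Num a => a
Prelude> :t (+) 2
(+) 2 :: Num a => a -> a
Prelude> :t (+2)
(+2) :: Num a => a -> a
Prelude> :t (2+)
(2+) :: Num a => a -> a
Prelude> :t (+)
(+) :: Num a => a -> a -> a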
For further explanation, read on.
One Literal, Many Types
In Haskell, a numeric literal like 114514 does not have a concrete type like Int. This is good because we have many different types for numbers, including Int, Integer, Float, Double and so on, and we do not want to have a different notation for each type.
The literals 5, 114514, and 1919810 all have the type
forall a. Num a => a
You can read it like this: "for any type a, if a is an instance of the Num typeclass, then the value can have the type a." Int, Integer, Float and Double are all instances of Num, and because Haskell has (relatively) strong type inference, in different contexts it will be specialized to concrete types like Int.
So what is a typeclass?
Typeclasses
The way we express that some type "supports" some operations in Haskell is with typeclasses. A typeclass is a set of function signatures, without real implementations. They represent operations we want to perform on some types (e.g. Num represents operations we want to perform on numeric types), but the actual implementations for different types may differ (the actual calculations on integers and on floating-point numbers are quite different).
We can make a type an instance of a typeclass (note that this has nothing to do with the instances and classes in object-oriented programming) by actually defining these functions for this type. In this way, we declare that this type supports these operations.
Num is one of these typeclasses; it represents support for numeric operations. It is partially defined below (I did not include the full definition, to reduce verbosity):
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  (*) :: a -> a -> a
You can see that in the signatures we have a instead of real concrete types; these functions are said to be polymorphic, that is, they are generic and can be specialized to different types.
For example, if a and b are both Ints, then a + b has type Int too, because Haskell inferred that the + we used here should be the one defined for Int, given that both of its arguments are Ints.
So, if some type is an instance of Num, it means that the +, - and * operators are defined for this type. Being an instance of Num means supporting these operators.
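As a concrete illustration, here is a minimal sketch making a small invented type an instance of Num, so that +, - and * work on it componentwise (V2 is hypothetical, made up for this example):
data V2 = V2 Int Int deriving Show

instance Num V2 where
  V2 a b + V2 c d = V2 (a + c) (b + d)
  V2 a b - V2 c d = V2 (a - c) (b - d)
  V2 a b * V2 c d = V2 (a * c) (b * d)
  negate (V2 a b) = V2 (negate a) (negate b)
  abs    (V2 a b) = V2 (abs a) (abs b)
  signum (V2 a b) = V2 (signum a) (signum b)
  fromInteger n   = V2 (fromInteger n) (fromInteger n)
After loading this, V2 1 2 + V2 3 4 evaluates to V2 4 6.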
Sections
One good thing about Haskell is its (relatively) flexible infix operators. In Haskell, infix operators are nothing but normal functions with an infix notation (which is merely syntactic sugar). We can also use infix operators in a prefix manner. For example, 5 + 8 is equivalent to (+) 5 8.
It seems that you are confused by (+) 5, (+5) and (5+). Remember that if we put parentheses on both sides of an infix function, we make it prefix. And as you may already know, prefix functions can be partially applied, that is, we can give the function only some of its parameters, so it becomes a function whose remaining parameters can be given later. (+) 5 means that we partially applied (+) by giving only its first argument, so it becomes a function waiting for one more argument (originally its second argument). We can apply (+) 5 to 8 so it becomes ((+) 5) 8, which is equivalent to 5 + 8.
On the other hand, (5+) and (+5) are called sections, which are another syntactic sugar for infix operators. (5+) means you filled the left hand side of the operator, and it becomes a function waiting for its right hand side. (5+) 8 means 5 + 8 or (+) 5 8. (+5) is flipped, i.e. you filled the right hand side, so it is a function waiting for the left hand side. (+5) 8 means 8 + 5 or (+) 8 5.
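Sections are especially handy with higher-order functions; a few standard-Prelude examples in GHCi:
Prelude> map (+2) [1, 2, 3]
[3,4,5]
Prelude> map (2^) [1, 2, 3]
[2,4,8]
Prelude> map (^2) [1, 2, 3]
[1,4,9]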
Haskell is a language very different from other languages you may have learned; every expression that compiles has a type. Functions are first-class and every function has a type too. Thinking in types will really help in your learning progress.

In Haskell:
operators are just functions
functions can be given fewer arguments than their declarations specify
parentheses can be used to disambiguate the order and grouping of arguments
Taken together this means that:
(+) is just the + function, and will take 2 more arguments
(1 +) is a function with its first argument provided, and will take one more
(+ 2) is another function that has an argument provided, but this time the second one. It also awaits one more argument, as the example below shows.
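Division makes the asymmetry between the two section forms visible, since / is not commutative:
Prelude> ((/) 10) 2   -- first argument filled: 10 / 2
5.0
Prelude> (10 /) 2     -- left-hand side filled: 10 / 2
5.0
Prelude> (/ 10) 2     -- right-hand side filled: 2 / 10
0.2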

Related

The type-specification operator is like down-casting in Object-Oriented languages?

I was going through the book Haskell Programming from First Principles and came across following code-snippet.
Prelude> fifteen = 15
Prelude> :t fifteen
fifteen :: Num a => a
Prelude> fifteenInt = fifteen :: Int
Prelude> fifteenDouble = fifteen :: Double
Prelude> :t fifteenInt
fifteenInt :: Int
Prelude> :t fifteenDouble
fifteenDouble :: Double
Here, Num is the type-class that is like the base class in OO languages. What I mean is when I write a polymorphic function, I take a type variable that is constrained by Num type class. However, as seen above, casting fifteen as Int or Double works. Isn't it equivalent to down-casting in OO languages?
Wouldn't some more information (a bunch of Double type specific functions in this case) be required for me to be able to do that?
Thanks for helping me out.
No, it's not equivalent. Downcasting in OO is a runtime operation: you have a value whose concrete type you don't know, and you basically assert that it has some particular concrete type – which is an error if it actually turns out to be a different one.
In Haskell, :: isn't really an operator at all. It just adds extra information to the typechecker at compile-time. I.e. if it compiles at all, you can always be sure that it will actually work at runtime.
The reason it works at all is that fifteen has no concrete type. It's like a template / generic in OO languages. So when you add the :: Double constraint, the compiler can then pick what type is instantiated for a. And Double is ok because it is a member of the Num typeclass, but don't confuse a typeclass with an OO class: an OO class specifies one concrete type, which may however have subtypes. In Haskell, subtypes don't exist, and a class is more like an interface in OO languages. You can also think of a typeclass as a set of types, and fifteen has potentially all of the types in the Num class; which one of these is actually used can be chosen with a signature.
Downcasting is not a good analogy. Rather, compare to generic functions.
Very roughly, you can pretend that your fifteen is a generic function
// pseudo code in OOP
A fifteen<A>() where A : Num
When you use fifteen :: Double in Haskell, you tell the compiler that the result of the above function is Double, and that enables the compiler to "call" the above OOP function as fifteen<Double>(), inferring the generic argument.
With some extensions enabled, GHC Haskell has a more direct way to choose the generic parameter, namely the type application fifteen @Double.
There is a difference between the two ways in that ... :: Double specifies the return type, while @Double specifies the generic argument. In this fifteen case they are the same, but this is not always the case. For instance:
> list = [(15, True)]
> :t list
list :: Num a => [(a, Bool)]
Here, to choose a = Double, we need to write either list :: [(Double, Bool)] or list @Double.
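In GHCi it looks like this (a sketch; visible type application requires the TypeApplications extension, and only works on type variables that come from a written signature):
Prelude> :set -XTypeApplications
Prelude> fifteen :: Num a => a; fifteen = 15
Prelude> fifteen @Double
15.0
Prelude> :t fifteen @Double
fifteen @Double :: Double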
In the type forall a. Num a => a†, the forall a and Num a are parameters specified by the “caller”, that is, the place where the definition (fifteen) is used. The type parameter is implicitly filled in with a type argument by GHC during type inference; the Num constraint becomes an extra parameter, a “dictionary” comprising a record of functions ((+), (-), abs, &c.) for a particular Num instance, and which Num dictionary to pass in is determined from the type. The type argument exists only at compile time, and the dictionary is then typically inlined to specialise the function and enable further optimisations, so neither of these parameters typically has any runtime representation.
So in fifteen :: Double, the compiler deduces that a must be equal to Double, giving (a ~ Double, Num a) => a, which is simplified first to Num Double => Double, then to simply Double, because the constraint Num Double is satisfied by the existence of an instance Num Double definition. There is no subtyping or runtime downcasting going on, only the solution of equality constraints, statically.
The type argument can also be specified explicitly with the TypeApplications syntax of fifteen @Double, typically written like fifteen<Double> in OO languages.
The inferred type of fifteen includes a Num constraint because the literal 15 is implicitly a call to something like fromInteger (15 :: Integer)‡. fromInteger has the type Num a => Integer -> a and is a method of the Num typeclass, so you can think of a literal as “partially applying” the Integer argument while leaving the Num a argument unspecified, then the caller decides which concrete type to supply for a, and the compiler inserts a call to the fromInteger function in the Num dictionary passed in for that type.
† forall quantifiers are typically implicit, but can be written explicitly with various extensions, such as ExplicitForAll, ScopedTypeVariables, and RankNTypes.
‡ I say “something like” because this abuses the notation 15 :: Integer to denote a literal Integer, not circularly defined in terms of fromInteger again. (Else it would loop: fromInteger 15 = fromInteger (fromInteger 15) = fromInteger (fromInteger (fromInteger 15))…) This desugaring can be “magic” because it’s a part of the language itself, not something defined within the language.
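You can see the pieces of this desugaring directly in GHCi, using only standard Prelude functions:
Prelude> :t fromInteger
fromInteger :: Num a => Integer -> a
Prelude> fromInteger (15 :: Integer) :: Double
15.0
Prelude> fromInteger (15 :: Integer) :: Int
15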

Haskell Type Coercion

I'm trying to wrap my head around Haskell type coercion. Meaning, when can one pass a value into a function without casting, and how does that work? Here is a specific example, but I am looking for a more general explanation I can use going forward to try and understand what is going on:
Prelude> 3 * 20 / 4
15.0
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> 3 * c / 4
<interactive>:94:7:
    No instance for (Fractional Integer)
      arising from a use of `/'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the expression: 3 * c / 4
    In an equation for `it': it = 3 * c / 4
The type of (/) is Fractional a => a -> a -> a. So, I'm guessing that when I do "3 * 20" using literals, Haskell somehow assumes that the result of that expression is a Fractional. However, when a variable is used, its type is predefined to be Integer based on the assignment.
My first question is how to fix this. Do I need to cast the expression or convert it somehow?
My second question is that this seems really weird to me that you can't do basic math without having to worry so much about int/float types. I mean there's an obvious way to convert automatically between these, why am I forced to think about this and deal with it? Am I doing something wrong to begin with?
I am basically looking for a way to easily write simple arithmetic expressions without having to worry about the nitty-gritty details, keeping the code nice and clean. In most top-level languages the compiler works for me -- not the other way around.
If you just want the solution, look at the end.
You nearly answered your own question already. Literals in Haskell are overloaded:
Prelude> :t 3
3 :: Num a => a
Since (*) also has a Num constraint
Prelude> :t (*)
(*) :: Num a => a -> a -> a
this extends to the product:
Prelude> :t 3 * 20
3 * 20 :: Num a => a
So, depending on context, this can be specialized to be of type Int, Integer, Float, Double, Rational and more, as needed. In particular, as Fractional is a subclass of Num, it can be used without problems in a division, but then the constraint will become
stronger and be for class Fractional:
Prelude> :t 3 * 20 / 4
3 * 20 / 4 :: Fractional a => a
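You can force a particular specialization with a signature:
Prelude> 3 * 20 :: Int
60
Prelude> 3 * 20 / 4 :: Double
15.0
Prelude> 3 * 20 / 4 :: Rational
15 % 1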
The big difference is that the identifier c is an Integer. The reason why a simple let-binding at the GHCi prompt isn't assigned an overloaded type is the dreaded monomorphism restriction. In short: if you define a value that doesn't have any explicit arguments,
then it cannot have an overloaded type unless you provide an explicit type signature.
Numeric types are then defaulted to Integer.
Once c is an Integer, the result of the multiplication is Integer, too:
Prelude> :t 3 * c
3 * c :: Integer
And Integer is not in the Fractional class.
There are two solutions to this problem.
Make sure your identifiers have overloaded type, too. In this case, it would
be as simple as saying
Prelude> let c :: Num a => a; c = 20
Prelude> :t c
c :: Num a => a
Use fromIntegral to cast an integral value to an arbitrary numeric value:
Prelude> :t fromIntegral
fromIntegral :: (Integral a, Num b) => a -> b
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> :t fromIntegral c
fromIntegral c :: Num b => b
Prelude> 3 * fromIntegral c / 4
15.0
Haskell will never automatically convert one type into another when you pass it to a function. Either it's compatible with the expected type already, in which case no coercion is necessary, or the program fails to compile.
If you write a whole program and compile it, things generally "just work" without you having to think too much about int/float types; so long as you're consistent (i.e. you don't try to treat something as an Int in one place and a Float in another) the constraints just flow through the program and figure out the types for you.
For example, if I put this in a source file and compile it:
main = do
  let c = 20
  let it = 3 * c / 4
  print it
Then everything's fine, and running the program prints 15.0. You can see from the .0 that GHC successfully figured out that c must be some kind of fractional number, and made everything work, without me having to give any explicit type signatures.
c can't be an integer because the / operator is for mathematical division, which isn't defined on integers. The operation of integer division is represented by the div function (usable in operator fashion as x `div` y). I think this might be what is tripping you up in your whole program? This is unfortunately just one of those things you have to learn by getting tripped up by it, if you're used to the situation in many other languages where / is sometimes mathematical division and sometimes integer division.
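For example, in GHCi:
Prelude> 7 / 2
3.5
Prelude> 7 `div` 2
3
Prelude> 7 `mod` 2
1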
It's when you're playing around in the interpreter that things get messy, because there you tend to bind values with no context whatsoever. In the interpreter, GHCi has to execute let c = 20 on its own, because you haven't entered 3 * c / 4 yet. It has no way of knowing whether you intend that 20 to be an Int, Integer, Float, Double, Rational, etc.
Haskell will pick a default type for numeric values; otherwise if you never use any functions that only work on one particular type of number you'd always get an error about ambiguous type variables. This normally works fine, because these default rules are applied while reading the whole module and so take into account all the other constraints on the type (like whether you've ever used it with /). But here there are no other constraints it can see, so the type defaulting picks the first cab off the rank and makes c an Integer.
Then, when you ask GHCi to evaluate 3 * c / 4, it's too late. c is an Integer, so must 3 * c be, and Integers don't support /.
So in the interpreter, yes, sometimes if you don't give an explicit type to a let binding GHC will pick an incorrect type, especially with numeric types. After that, you're stuck with whatever operations are supported by the concrete type GHCi picked, but when you get this kind of error you can always rebind the variable; e.g. let c = 20.0.
However I suspect in your real program the problem is simply that the operation you wanted was actually div rather than /.
Haskell is a bit unusual in this way. Yes, you can't divide two integers with /, but it's rarely a problem.
The reason is that if you look at the Num typeclass, there's a function fromInteger; this allows you to convert literals into the appropriate type. This, together with type inference, alleviates 99% of the cases where it'd be a problem. Quick example:
newtype Foo = Foo Integer
  deriving (Show, Eq)

instance Num Foo where
  fromInteger _ = Foo 0
  negate = undefined
  abs = undefined
  (+) = undefined
  (-) = undefined
  (*) = undefined
  signum = undefined
Now if we load this into GHCi
*> 0 :: Foo
Foo 0
*> 1 :: Foo
Foo 0
So you see we are able to do some pretty cool things with how GHCi parses a raw integer. This has a lot of practical uses in DSLs that we won't talk about here.
Next question was how to get from a Double to an Integer or vice versa. There's a function for that.
In the case of going from an Integer to a Double, we'd use fromInteger as well. Why?
Well the type signature for it is
(Num a) => Integer -> a
and since we can use (+) with Doubles we know they're a Num instance. And from there it's easy.
*> 0 :: Double
0.0
Last piece of the puzzle is Double -> Integer. Well, a brief search on Hoogle shows
truncate
floor
round
-- etc ...
I'll leave that to you to search.
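Their behavior differs in how they handle the fractional part; note in particular that round rounds halves to the nearest even number:
Prelude> truncate 3.7 :: Integer
3
Prelude> floor 3.7 :: Integer
3
Prelude> ceiling 3.2 :: Integer
4
Prelude> round 3.5 :: Integer
4
Prelude> round 2.5 :: Integer
2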
Type coercion in Haskell isn't automatic (or rather, it doesn't actually exist). When you write the literal 20, it's inferred to be of type Num a => a (conceptually anyway; I don't think it works quite like that) and will, depending on the context in which it is used (i.e. what functions you pass it to), be instantiated with an appropriate type (I believe if no further constraints are applied, this will default to Integer when you need a concrete type at some point). If you need a different kind of Num, you need to convert the numbers, e.g. (3 * fromIntegral c / 4) in your example.
The type of (/) is Fractional a => a -> a -> a.
To divide Integers, use div instead of (/). Note that the type of div is
div :: Integral a => a -> a -> a
In most top-level languages the compiler works for me -- not the other way around.
I argue that the Haskell compiler works for you just as much, if not more so, than those of other languages you have used. Haskell is a very different language than the traditional imperative languages (such as C, C++, Java, etc.) you are probably used to. This means that the compiler works differently as well.
As others have stated, Haskell will never automatically coerce from one type to another. If you have an Integer which needs to be used as a Float, you need to do the conversion explicitly with fromInteger.

Functions don't just have types: They ARE Types. And Kinds. And Sorts. Help put a blown mind back together

I was doing my usual "Read a chapter of LYAH before bed" routine, feeling like my brain was expanding with every code sample. At this point I was convinced that I understood the core awesomeness of Haskell, and now just had to understand the standard libraries and type classes so that I could start writing real software.
So I was reading the chapter about applicative functors when all of a sudden the book claimed that functions don't merely have types, they are types, and can be treated as such (For example, by making them instances of type classes). (->) is a type constructor like any other.
My mind was blown yet again, and I immediately jumped out of bed, booted up the computer, went to GHCi and discovered the following:
Prelude> :k (->)
(->) :: ?? -> ? -> *
What on earth does it mean?
If (->) is a type constructor, what are the value constructors? I can take a guess, but would have no idea how to define it in the traditional data (->) ... = ... | ... | ... format. It's easy enough to do this with any other type constructor: data Either a b = Left a | Right b. I suspect my inability to express it in this form is related to the extremely weird type signature.
What have I just stumbled upon? Higher kinded types have kind signatures like * -> * -> *. Come to think of it... (->) appears in kind signatures too! Does this mean that not only is it a type constructor, but also a kind constructor? Is this related to the question marks in the type signature?
I have read somewhere (wish I could find it again, Google fails me) about being able to extend type systems arbitrarily by going from Values, to Types of Values, to Kinds of Types, to Sorts of Kinds, to something else of Sorts, to something else of something elses, and so on forever. Is this reflected in the kind signature for (->)? Because I've also run into the notion of the Lambda cube and the calculus of constructions without taking the time to really investigate them, and if I remember correctly it is possible to define functions that take types and return types, take values and return values, take types and return values, and take values which return types.
If I had to take a guess at the type signature for a function which takes a value and returns a type, I would probably express it like this:
a -> ?
or possibly
a -> *
Although I see no fundamental immutable reason why the second example couldn't easily be interpreted as a function from a value of type a to a value of type *, where * is just a type synonym for string or something.
The first example better expresses a function whose type transcends a type signature in my mind: "a function which takes a value of type a and returns something which cannot be expressed as a type."
You touch on so many interesting points in your question that I am
afraid this is going to be a long answer :)
Kind of (->)
The kind of (->) is * -> * -> *, if we disregard the boxity GHC
inserts. But there is no circularity going on, the ->s in the
kind of (->) are kind arrows, not function arrows. Indeed, to
distinguish them kind arrows could be written as (=>), and then
the kind of (->) is * => * => *.
We can regard (->) as a type constructor, or maybe rather a type
operator. Similarly, (=>) could be seen as a kind operator, and
as you suggest in your question we need to go one 'level' up. We
return to this later in the section Beyond Kinds, but first:
How the situation looks in a dependently typed language
You ask how the type signature would look for a function that takes a
value and returns a type. This is impossible to do in Haskell:
functions cannot return types! You can simulate this behaviour using
type classes and type families, but let us for illustration change
language to the dependently typed language
Agda. This is a
language with similar syntax as Haskell where juggling types together
with values is second nature.
To have something to work with, we define a data type of natural
numbers, for convenience in unary representation as in
Peano Arithmetic.
Data types are written in
GADT style:
data Nat : Set where
  Zero : Nat
  Succ : Nat -> Nat
Set is equivalent to * in Haskell, the "type" of all (small) types,
such as Natural numbers. This tells us that the type of Nat is
Set, whereas in Haskell, Nat would not have a type, it would have
a kind, namely *. In Agda there are no kinds, but everything has
a type.
We can now write a function that takes a value and returns a type.
Below is the function, which takes a natural number n and a type,
and iterates the List constructor n times, applied to this
type. (In Agda, [a] is usually written List a)
listOfLists : Nat -> Set -> Set
listOfLists Zero a = a
listOfLists (Succ n) a = List (listOfLists n a)
Some examples:
listOfLists Zero Bool = Bool
listOfLists (Succ Zero) Bool = List Bool
listOfLists (Succ (Succ Zero)) Bool = List (List Bool)
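For comparison, the type-family simulation in GHC Haskell mentioned above might look roughly like this (a sketch, assuming the DataKinds and TypeFamilies extensions):
{-# LANGUAGE DataKinds, TypeFamilies #-}

-- The constructors Zero and Succ are promoted to the type level.
data Nat = Zero | Succ Nat

-- A closed type family iterating the list constructor n times.
type family ListOfLists (n :: Nat) a where
  ListOfLists 'Zero     a = a
  ListOfLists ('Succ n) a = [ListOfLists n a]

-- ListOfLists ('Succ ('Succ 'Zero)) Bool is [[Bool]]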
We can now make a map function that operates on listOfLists.
We need to take a natural number that is the number of iterations
of the list constructor. The base cases are: when the number is
Zero, listOfLists is just the identity and we apply the function;
the other is the empty list, where the empty list is returned.
The step case is a bit more involved: we apply mapN to the head
of the list, but with one layer less of nesting, and mapN
to the rest of the list.
mapN : {a b : Set} -> (a -> b) -> (n : Nat) ->
listOfLists n a -> listOfLists n b
mapN f Zero x = f x
mapN f (Succ n) [] = []
mapN f (Succ n) (x :: xs) = mapN f n x :: mapN f (Succ n) xs
In the type of mapN, the Nat argument is named n, so the rest of
the type can depend on it. So this is an example of a type that
depends on a value.
As a side note, there are also two other named variables here,
namely the first arguments, a and b, of type Set. Type
variables are implicitly universally quantified in Haskell, but
here we need to spell them out, and specify their type, namely
Set. The brackets are there to make them invisible in the
definition, as they are always inferable from the other arguments.
Set is abstract
You ask what the constructors of (->) are. One thing to point out
is that Set (as well as * in Haskell) is abstract: you cannot
pattern match on it. So this is illegal Agda:
cheating : Set -> Bool
cheating Nat = True
cheating _ = False
Again, you can simulate pattern matching on type constructors in
Haskell using type families; one canonical example is given on
Brent Yorgey's blog.
Can we define -> in Agda? Since we can return types from
functions, we can define our own version of -> as follows:
_=>_ : Set -> Set -> Set
a => b = a -> b
(infix operators are written _=>_ rather than (=>)) This
definition has very little content, and is very similar to doing a
type synonym in Haskell:
type Fun a b = a -> b
Beyond kinds: Turtles all the way down
As promised above, everything in Agda has a type, but then
the type of _=>_ must have a type! This touches your point
about sorts, which is, so to speak, one layer above Set (the kinds).
In Agda this is called Set1:
FunType : Set1
FunType = Set -> Set -> Set
And in fact, there is a whole hierarchy of them! Set is the type of
"small" types: data types in haskell. But then we have Set1,
Set2, Set3, and so on. Set1 is the type of types which mention
Set. This hierarchy is to avoid inconsistencies such as Girard's
paradox.
As noticed in your question, -> is used for types and kinds in
Haskell, and the same notation is used for function space at all
levels in Agda. This must be regarded as a built-in type operator,
and the constructors are lambda abstraction (or function
definitions). This hierarchy of types is similar to the setting in
System F omega, and more
information can be found in the later chapters of
Pierce's Types and Programming Languages.
Pure type systems
In Agda, types can depend on values, and functions can return types,
as illustrated above, and we also had a hierarchy of
types. Different systems of the lambda
calculi are investigated systematically in Pure Type Systems. A good
reference is
Lambda Calculi with Types by Barendregt,
where PTS are introduced on page 96, and many examples on page 99 and onwards.
You can also read more about the lambda cube there.
Firstly, the ?? -> ? -> * kind is a GHC-specific extension. The ? and ?? are just there to deal with unboxed types, which behave differently from just * (which has to be boxed, as far as I know). So ?? can be any normal type or an unboxed type (e.g. Int#); ? can be either of those or an unboxed tuple. There is more information here: Haskell Weird Kinds: Kind of (->) is ?? -> ? -> *
I think a function can't return an unboxed type because functions are lazy. Since a lazy value is either a value or a thunk, it has to be boxed. Boxed just means it is a pointer rather than just a value: it's like Integer vs. int in Java.
Since you are probably not going to be using unboxed types in LYAH-level code, you can imagine that the kind of -> is just * -> * -> *.
Since the ? and ?? are basically just more general versions of *, they do not have anything to do with sorts or anything like that.
However, since -> is just a type constructor, you can actually partially apply it; for example, (->) e is an instance of Functor and Monad. Figuring out how to write these instances is a good mind-stretching exercise.
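To give away part of it: the Functor instance for functions is just composition. This mirrors the instance that ships in base (so you cannot re-define it yourself without a clash):
instance Functor ((->) e) where
  fmap = (.)   -- fmap g f applies g to the result of f
So fmap (+1) (*2) is the function \x -> x * 2 + 1.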
As far as value constructors go, they would have to just be lambdas (\ x ->) or function declarations. Since functions are so fundamental to the language, they get their own syntax.

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
    Couldn't match expected type `Integer' with actual type `Double'
    In the second argument of `(+)', namely `b'
    In the expression: a + b
    In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCI and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCI is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCI isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0, you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expresion), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint, therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
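For example (a sketch; note that recent GHCi versions disable the restriction at the prompt by default, so you may see the polymorphic type even without the flag):
Prelude> :set -XNoMonomorphismRestriction
Prelude> let a = 1
Prelude> :t a
a :: Num a => a
Prelude> let b = 2.0
Prelude> a + b
3.0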
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)

How does Haskell actually define the + function?

While reading Real world Haskell I came up with this note:
ghci> :info (+)
class (Eq a, Show a) => Num a where
(+) :: a -> a -> a
...
-- Defined in GHC.Num
infixl 6 +
But how can Haskell define + as a non-native function? At some level you have to say that 2 + 3 will become assembly, i.e. machine code.
The + function is overloaded, and for some types, like Int and Double, the definition of + is something like
instance Num Int where
  x + y = primAddInt x y
where primAddInt is a function the compiler knows about and will generate machine code for.
The details of how this looks and works depend on the Haskell implementation you're looking at.
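In GHC specifically, an Int is a box around a raw machine integer, and addition unwraps both operands, adds them with the primitive (+#), and re-boxes the result. A runnable sketch of that unbox/add/rebox pattern (simplified; the real instance lives in GHC's base library):
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), (+#))

-- What the Num Int instance boils down to: pattern match out the
-- raw Int# values, add them with a primitive, re-box the result.
primAdd :: Int -> Int -> Int
primAdd (I# x) (I# y) = I# (x +# y)

main :: IO ()
main = print (primAdd 2 3)  -- prints 5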
It is in fact possible to define numbers without ANY native primitives. There are many ways, but the simplest is:
data Peano = Z | S Peano
Then you can define instance Num for this type using pattern matching. The second common representation of numbers is the so-called Church encoding using only functions (all numbers are represented by some obscure functions, and + will 'add' two functions together to form a third one).
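A minimal sketch of that instance, with only addition and fromInteger filled in (the remaining methods are left undefined for brevity):
instance Num Peano where
  Z   + n = n
  S m + n = S (m + n)
  fromInteger 0 = Z
  fromInteger n = S (fromInteger (n - 1))  -- assumes n >= 0
  (*)    = undefined
  abs    = undefined
  signum = undefined
  negate = undefined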
Very interesting encodings are possible indeed. For example, you can represent arbitrary precision reals in [0,1) using sequences of bits:
data RealReal = RealReal Bool RealReal | RealEnd
In GHC of course it is defined in a machine-specific way by using either primitives or FFI.
