Haskell Types in some examples

I'm in the process of learning Haskell and I'm a beginner.
I wish I could search this question on StackOverflow.
But honestly I'm not quite sure what to search for.
I already tried to get the answers without much success so
please bear with me. It seems this is still really low level
stuff.
So my ghci interactive session never seems to output
"primitive types" like Int for example. I don't know how
else to put it. At the moment I'm trying to follow the
tutorial on http://book.realworldhaskell.org/read/getting-started.html.
Unfortunately, I can't seem to produce the same results.
For example:
Prelude> 5
5
Prelude> :type it
it :: Num a => a
I have to specifically say:
Prelude> let e = 5 :: Int
Prelude> e
5
Prelude> :type it
it :: Int
This is all very confusing to me so I hope somebody
can clear up this confusion a little bit.
EDIT:
On http://book.realworldhaskell.org/read/getting-started.html it says: "Haskell has several numeric types. For example, a literal number such as 1 could, depending on the context in which it appears, be an integer or a floating point value. When we force ghci to evaluate the expression 3 + 2, it has to choose a type so that it can print the value, and it defaults to Integer." I can't seem to force ghci to evaluate the type.
So for example:
Prelude> 3 + 2
5
Prelude> :t it
it :: Num a => a
Where I expected "Integer" to be the correct type.

There are a number of things going on here.
Numeric literals in Haskell are polymorphic; the type of the literal 5 really is Num a => a. It can belong to any type that adheres to the Num type class.
Addition is part of the Num type class, so an addition of two numeric literals is still Num a => a.
Interactive evaluation in ghci is very similar to evaluating actions in the IO monad. When you enter a bare expression, ghci acts as if you ran something like the following:
main = do
    let it = 5 + 5
    print it
It's not exactly like that, though, because in a program like that, inference would work over the entire do expression body to find a specific type. When you enter a single line, it has to infer a type and compile something with only the context available as of the end of the line you entered. So the print doesn't affect the type inferred for the let-binding, as it's not something you entered.
There's nothing in that program that would constrain it to a particular instance of Num or Show; this means that it is still a polymorphic value. Specifically, GHC compiles values with type class constraints to functions that accept a type class dictionary providing the instance implementations required to meet the constraint. So, although it looks like a plain value, it is actually represented by GHC as a function. This surprised enough people that the dreaded "Monomorphism Restriction" was invented to prevent it: the restriction forces simple pattern bindings without an explicit type signature (such as this one) to be given a monomorphic type.
The Monomorphism Restriction is still on by default for compiled modules, but it has been off by default in GHCi since version 7.8.
See the GHC manual for more info.
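To make the dictionary-passing point above concrete, here is a rough hand-rolled sketch of the idea; the NumDict record and its field names are made up for illustration and are not GHC's actual internal representation:

-- A hand-rolled "Num dictionary": a record of the operations that a
-- Num a constraint would otherwise supply implicitly.
data NumDict a = NumDict
    { dictPlus        :: a -> a -> a
    , dictFromInteger :: Integer -> a
    }

intDict :: NumDict Int
intDict = NumDict (+) fromInteger

-- A constrained value like "it :: Num a => a" behaves like a function
-- that is still waiting for its dictionary argument:
itPoly :: NumDict a -> a
itPoly d = dictPlus d (dictFromInteger d 5) (dictFromInteger d 5)

main :: IO ()
main = print (itPoly intDict)   -- prints 10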
Haskell provides a special bit of magic for polymorphic numbers; each module can make a default declaration that provides type defaulting rules for them. At your ghci prompt, the defaulting rules made ghci choose Integer when it was forced to provide instance dictionaries to show it in order to get to an IO action value.
Here's the relevant section in the Haskell 98 Report.
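As a rough sketch of what such a declaration looks like in a module (the standard default is (Integer, Double); this one replaces it for this module only, so the ambiguous literal below resolves to Int):

module Main where

default (Int, Double)

main :: IO ()
main = print (5 + 5)   -- the literal defaults to Int here, not Integer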
To sum it up: it was bound to the expression 5 + 5, which has type Num a => a because that's the more general inferred type based on the polymorphic numeric literals.
Polymorphic values are represented as functions waiting for a typeclass dictionary. So evaluating it at a particular instance doesn't force it to become monomorphic.
However, Haskell's type defaulting rules allow it to pick a particular type when you implicitly print it as part of the ghci interaction. It picks Integer, and so it chooses the Integer type class instance dictionaries for Show and Num when forced to by print it.
I hope that makes it somewhat less confusing!
By the way, here is an example of how you can get the same behavior outside of ghci by explicitly requesting the polymorphic let-binding. Without the type signature in this context, it will infer a monomorphic type for foo and give a type error.
main = do
    let foo :: Num a => a
        foo = 5 + 5
    let bar = 8 :: Double
    let baz = 9 :: Int
    print (foo + bar)
    print (foo + baz)
This will compile and run, printing the following:
18.0
19
UPDATE:
Looking at the Real World Haskell example and the comment thread, some people included different ghci logs along with their ghc versions. Using that information, I looked at ghc release notes and found that starting in version 7.8, the Monomorphism Restriction was disabled in ghci by default.
If you run the following command, you'll re-enable the Monomorphism Restriction and in order to be friendly, ghci will default the binding to Integer rather than giving you either an error or a polymorphic binding:
Prelude> :set -XMonomorphismRestriction
Prelude> 5 + 5
10
Prelude> :t it
it :: Integer
Prelude>

It appears that GHCi is performing some magic here. It is correctly defaulting the numbers to Integer so that they can be printed, but it binds it to the polymorphic type before the defaulting happens.
I guess you want to see the type after the defaulting takes place. For that, I would recommend using the Data.Typeable library as follows:
> import Data.Typeable
> let withType x = (x, typeOf x)
> withType 5
(5,Integer)
Above, GHCi has to default 5 to Integer in order to call typeOf, so typeOf x reports the representation of the type after the defaulting has happened. Hence we get the type we wanted.
The following also works, precisely because typeOf is called after the defaulting has happened:
> :type 5
5 :: Num a => a
> typeOf 5
Integer
Keep in mind, however, that typeOf only works for monomorphic types. In general, the polymorphic result of :type is more useful.
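As a quick illustration of that restriction (the exact rendering of the TypeRep may vary slightly between GHC versions):
> typeOf (5 :: Int)
Int
> typeOf "hello"
[Char]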

Numbers in Haskell are polymorphic: there are separate types for fixed- and arbitrary-precision integers, rationals, floating point numbers, and user-defined number types. All of them can be built from simple literals, because a literal goes through the fromInteger method of the Num typeclass. The value you've given, (True, 1, "hello world", 3), has two integral literals, and they can be used to create two numbers of possibly different types. The bit of type before the fat arrow, (Num t, Num t1), says that in the inferred type, t and t1 can be anything, so long as they have a Num instance defined for them, i.e. they can be obtained with a fromInteger.
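For instance, asking GHCi for the type of that tuple shows the two independent Num constraints (the exact type-variable names GHC prints may differ between versions):
Prelude> :type (True, 1, "hello world", 3)
(True, 1, "hello world", 3) :: (Num t, Num t1) => (Bool, t, [Char], t1)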

Related

The type-specification operator is like down-casting in Object-Oriented languages?

I was going through the book Haskell Programming from First Principles and came across following code-snippet.
Prelude> fifteen = 15
Prelude> :t fifteen
fifteen :: Num a => a
Prelude> fifteenInt = fifteen :: Int
Prelude> fifteenDouble = fifteen :: Double
Prelude> :t fifteenInt
fifteenInt :: Int
Prelude> :t fifteenDouble
fifteenDouble :: Double
Here, Num is the type-class that is like the base class in OO languages. What I mean is when I write a polymorphic function, I take a type variable that is constrained by Num type class. However, as seen above, casting fifteen as Int or Double works. Isn't it equivalent to down-casting in OO languages?
Wouldn't some more information (a bunch of Double type specific functions in this case) be required for me to be able to do that?
Thanks for helping me out.
No, it's not equivalent. Downcasting in OO is a runtime operation: you have a value whose concrete type you don't know, and you basically assert that it has some particular type – which is an error if it actually turns out to be a different concrete type.
In Haskell, :: isn't really an operator at all. It just adds extra information to the typechecker at compile-time. I.e. if it compiles at all, you can always be sure that it will actually work at runtime.
The reason it works at all is that fifteen has no concrete type. It's like a template / generic in OO languages. So when you add the :: Double constraint, the compiler can then pick what type is instantiated for a. And Double is ok because it is a member of the Num typeclass, but don't confuse a typeclass with an OO class: an OO class specifies one concrete type, which may however have subtypes. In Haskell, subtypes don't exist, and a class is more like an interface in OO languages. You can also think of a typeclass as a set of types, and fifteen has potentially all of the types in the Num class; which one of these is actually used can be chosen with a signature.
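As a small sketch of that last point, here is the asker's fifteen instantiated at two different members of Num purely via signatures; no runtime conversion is involved:

fifteen :: Num a => a
fifteen = 15

fifteenInt :: Int
fifteenInt = fifteen      -- the compiler picks a ~ Int here

fifteenDouble :: Double
fifteenDouble = fifteen   -- and a ~ Double here

main :: IO ()
main = print (fifteenInt, fifteenDouble)   -- (15,15.0)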
Downcasting is not a good analogy. Rather, compare to generic functions.
Very roughly, you can pretend that your fifteen is a generic function
// pseudo code in OOP
A fifteen<A>() where A : Num
When you use fifteen :: Double in Haskell, you tell the compiler that the result of the above function is Double, and that enables the compiler to "call" the above OOP function as fifteen<Double>(), inferring the generic argument.
With the TypeApplications extension turned on, GHC Haskell has a more direct way to choose the generic parameter, namely the type application fifteen @Double.
There is a difference between the two ways: ... :: Double specifies the return type, while @Double specifies the generic argument. In the fifteen case they coincide, but this is not always so. For instance:
> list = [(15, True)]
> :t list
list :: Num a => [(a, Bool)]
Here, to choose a = Double, we need to write either list :: [(Double, Bool)] or list @Double.
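Here is a small file-based sketch of the same thing; note that list needs an explicit type signature here, because in GHC visible type application only instantiates type variables that come from a written signature (inferred variables can't be targeted):

{-# LANGUAGE TypeApplications #-}

list :: Num a => [(a, Bool)]
list = [(15, True)]

-- Same as writing list :: [(Double, Bool)]:
asDoubles :: [(Double, Bool)]
asDoubles = list @Double

main :: IO ()
main = print asDoubles   -- [(15.0,True)]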
In the type forall a. Num a => a†, the forall a and Num a are parameters specified by the “caller”, that is, the place where the definition (fifteen) is used. The type parameter is implicitly filled in with a type argument by GHC during type inference; the Num constraint becomes an extra parameter, a “dictionary” comprising a record of functions ((+), (-), abs, &c.) for a particular Num instance, and which Num dictionary to pass in is determined from the type. The type argument exists only at compile time, and the dictionary is typically inlined to specialise the function and enable further optimisations, so neither of these parameters usually has any runtime representation.
So in fifteen :: Double, the compiler deduces that a must be equal to Double, giving (a ~ Double, Num a) => a, which is simplified first to Num Double => Double, then to simply Double, because the constraint Num Double is satisfied by the existence of an instance Num Double definition. There is no subtyping or runtime downcasting going on, only the solution of equality constraints, statically.
The type argument can also be specified explicitly with the TypeApplications syntax fifteen @Double, comparable to how fifteen<Double> is written in OO languages.
The inferred type of fifteen includes a Num constraint because the literal 15 is implicitly a call to something like fromInteger (15 :: Integer)‡. fromInteger has the type Num a => Integer -> a and is a method of the Num typeclass, so you can think of a literal as “partially applying” the Integer argument while leaving the Num a argument unspecified, then the caller decides which concrete type to supply for a, and the compiler inserts a call to the fromInteger function in the Num dictionary passed in for that type.
† forall quantifiers are typically implicit, but can be written explicitly with various extensions, such as ExplicitForAll, ScopedTypeVariables, and RankNTypes.
‡ I say “something like” because this abuses the notation 15 :: Integer to denote a literal Integer, not circularly defined in terms of fromInteger again. (Else it would loop: fromInteger 15 = fromInteger (fromInteger 15) = fromInteger (fromInteger (fromInteger 15))…) This desugaring can be “magic” because it’s a part of the language itself, not something defined within the language.
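A rough sketch of that desugaring in ordinary code, with the caveat from ‡ that the inner Integer literal is really primitive rather than another fromInteger call:

-- Conceptually, the literal 15 stands for this:
fifteen :: Num a => a
fifteen = fromInteger (15 :: Integer)

fifteenDouble :: Double
fifteenDouble = fifteen   -- the Num Double dictionary's fromInteger runs here

main :: IO ()
main = print fifteenDouble   -- 15.0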

Random unable to deduce correct type when comparing the value to a float

I'm not really sure how to phrase the title better. Anyways, say I do something like this
(a, _) = random (mkStdGen 0)
b = if a then 1 else 0
Haskell can deduce that the type of a is Bool.
But if I do
(a, _) = random (mkStdGen 0)
b = if a > 0.5 then 1 else 0
it doesn't figure out that I want a to be a Float.
I don't know much about type inference and I don't know much about the typesystem but couldn't it just look for a type that is Random, Ord, and Fractional?
Float would fit in there.
Or why not just have a more generic type in that place since I obviously only care that it has those three typeclasses? Then later if I use the value for something else, it can get more info and maybe get a more concrete type.
The reason why Haskell can't figure this out is because numeric literals are polymorphic, so if you see 0.5, the compiler interprets it more or less as 0.5 :: Fractional a => a. Since this is still polymorphic, even with Random and Ord constraints, the compiler does not "guess" what you meant there, it instead throws an error saying that it doesn't know what you meant. For example, it could also be Double, or a custom defined data type, or any one of a potentially infinite number of types. It's also not very difficult to specify a type signature here or there, if a > (0.5 :: Float) then 1 else 0 isn't much extra work, and it makes your intention explicit rather than implicit.
It also can't just use a "generic type in that place", even with those three typeclasses, because it doesn't know the behavior of those typeclasses without a concrete type. For example, I could write a newtype around Float as
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype MyFloat = MyFloat Float deriving (Eq, Show, Read, Num, Floating, Random)
instance Ord MyFloat where
  compare (MyFloat x) (MyFloat y) = compare (sin x) (sin y)
This is perfectly valid code, and could be something that satisfies your constraints, but would have drastically different behavior at runtime than simply Float.
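For example, assuming the definitions above are loaded, the ordering no longer agrees with the underlying Float (sin 1 ≈ 0.84, while sin 4 ≈ -0.76), which is exactly the sort of thing the compiler can't guess on your behalf:
> compare (MyFloat 1) (MyFloat 4)
GT
> compare (1 :: Float) (4 :: Float)
LT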
You can always just do this to solve your problem:
(a, _) = random (mkStdGen 0) :: (Float, StdGen)
This forces a to be of type Float and the compiler is able to infer the rest of the types.
The problem is that Haskell has more than one type in the classes Random, Ord and Fractional. It could be Float, but it could just as well be Double. Because Haskell does not know which one you mean, it throws an error.
I see the other answers haven't mentioned Haskell's defaulting rules. Under many circumstances, Haskell by default will choose either Integer or Double (not Float, though) if your type is ambiguous, needs to be made specific, and one of those fits.
Unfortunately, one of the requirements for defaulting is that all the type classes involved are "standard". And the class Random is not a standard class. (It used to be in the Haskell 98 Report, but was removed from Haskell 2010. I vaguely recall it did not work for defaulting even back then, though.)
However, as that link also shows, you can relax that restriction with GHC's ExtendedDefaultRules extension, after which your code will cause Double to be chosen. You can even get Float to be chosen if you make a different default declaration that prefers Float to Double (I would not generally recommend this, though, as Double is higher precision.) The following works the way you expected your original to do:
{-# LANGUAGE ExtendedDefaultRules #-}
import System.Random
default (Integer, Float)
(a, _) = random (mkStdGen 0)
b = if a > 0.5 then 1 else 0

Haskell function composition confusion

I'm trying to learn haskell and I've been going over chapter 6 and 7 of Learn you a Haskell. Why don't the following two function definitions give the same result? I thought (f . g) x = f (g (x))?
Def 1
let { t :: Eq x => [x] -> Int; t xs = length (nub xs) }
t [1]
1
Def 2
let t = length . nub
t [1]
<interactive>:78:4:
No instance for (Num ()) arising from the literal `1'
Possible fix: add an instance declaration for (Num ())
In the expression: 1
In the first argument of `t', namely `[1]'
In the expression: t [1]
The problem is with your type signatures and the dreaded monomorphism restriction. You have a type signature in your first version but not in your second; ironically, it would have worked the other way around!
Try this:
λ>let t :: Eq x => [x] -> Int; t = length . nub
λ>t [1]
1
The monomorphism restriction forces things that don't look like functions to have a monomorphic type unless they have an explicit type signature. The type you want for t is polymorphic: note the type variable x. However, with the monomorphism restriction, x gets "defaulted" to (). Check this out:
λ>let t = length . nub
λ>:t t
t :: [()] -> Int
This is very different from the version with the type signature above!
The compiler chooses () for the monomorphic type because of defaulting. Defaulting is just the process Haskell uses to choose a type from a typeclass. All this really means is that, in the repl, Haskell will try using the () type if it encounters an ambiguous type variable in the Show, Eq or Ord classes. Yes, this is basically arbitrary, but it's pretty handy for playing around without having to write type signatures everywhere! Also, the defaulting rules are more conservative in files, so this is basically just something that happens in GHCi.
In fact, defaulting to () seems to mostly be a hack to make printf work correctly in GHCi! It's an obscure Haskell curio, but I'd ignore it in practice.
Apart from including a type signature, you could also just turn the monomorphism restriction off in the repl:
λ>:set -XNoMonomorphismRestriction
This is fine in GHCi, but I would not use it in real modules--instead, make sure to always include a type signature for top-level definitions inside files.
EDIT: Ever since GHC 7.8.1, the monomorphism restriction is turned off by default in GHCi. This means that all this code would work fine with a recent version of GHCi and you do not need to set the flag explicitly. It can still be an issue for values defined in a file with no type signature, however.
This is another instance of the "Dreaded" Monomorphism Restriction which leads GHCi to infer a monomorphic type for the composed function. You can disable it in GHCi with
> :set -XNoMonomorphismRestriction

Haskell Type Coercion

I'm trying to wrap my head around Haskell type coercion. Meaning, when can one pass a value into a function without casting, and how does that work? Here is a specific example, but I am looking for a more general explanation I can use going forward to try and understand what is going on:
Prelude> 3 * 20 / 4
15.0
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> 3 * c / 4
<interactive>:94:7:
No instance for (Fractional Integer)
arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the expression: 3 * c / 4
In an equation for `it': it = 3 * c / 4
The type of (/) is Fractional a => a -> a -> a. So, I'm guessing that when I do "3 * 20" using literals, Haskell somehow assumes that the result of that expression is a Fractional. However, when a variable is used, its type is predefined to be Integer based on the assignment.
My first question is how to fix this. Do I need to cast the expression or convert it somehow?
My second question is that this seems really weird to me that you can't do basic math without having to worry so much about int/float types. I mean there's an obvious way to convert automatically between these, why am I forced to think about this and deal with it? Am I doing something wrong to begin with?
I am basically looking for a way to easily write simple arithmetic expressions without having to worry about the nitty-gritty details, and to keep the code nice and clean. In most high-level languages the compiler works for me -- not the other way around.
If you just want the solution, look at the end.
You nearly answered your own question already. Literals in Haskell are overloaded:
Prelude> :t 3
3 :: Num a => a
Since (*) also has a Num constraint
Prelude> :t (*)
(*) :: Num a => a -> a -> a
this extends to the product:
Prelude> :t 3 * 20
3 * 20 :: Num a => a
So, depending on context, this can be specialized to be of type Int, Integer, Float, Double, Rational and more, as needed. In particular, as Fractional is a subclass of Num, it can be used without problems in a division, but then the constraint will become
stronger and be for class Fractional:
Prelude> :t 3 * 20 / 4
3 * 20 / 4 :: Fractional a => a
The big difference is that the identifier c is an Integer. The reason a simple let-binding at the GHCi prompt isn't assigned an overloaded type is the dreaded monomorphism restriction. In short: if you define a value that doesn't have any explicit arguments, then it cannot have an overloaded type unless you provide an explicit type signature. Numeric types are then defaulted to Integer.
Once c is an Integer, the result of the multiplication is Integer, too:
Prelude> :t 3 * c
3 * c :: Integer
And Integer is not in the Fractional class.
There are two solutions to this problem.
Make sure your identifiers have overloaded type, too. In this case, it would
be as simple as saying
Prelude> let c :: Num a => a; c = 20
Prelude> :t c
c :: Num a => a
Use fromIntegral to cast an integral value to an arbitrary numeric value:
Prelude> :t fromIntegral
fromIntegral :: (Integral a, Num b) => a -> b
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> :t fromIntegral c
fromIntegral c :: Num b => b
Prelude> 3 * fromIntegral c / 4
15.0
Haskell will never automatically convert one type into another when you pass it to a function. Either it's compatible with the expected type already, in which case no coercion is necessary, or the program fails to compile.
If you write a whole program and compile it, things generally "just work" without you having to think too much about int/float types; so long as you're consistent (i.e. you don't try to treat something as an Int in one place and a Float in another) the constraints just flow through the program and figure out the types for you.
For example, if I put this in a source file and compile it:
main = do
    let c = 20
    let it = 3 * c / 4
    print it
Then everything's fine, and running the program prints 15.0. You can see from the .0 that GHC successfully figured out that c must be some kind of fractional number, and made everything work, without me having to give any explicit type signatures.
c can't be an integer because the / operator is for mathematical division, which isn't defined on integers. The operation of integer division is represented by the div function (usable in operator fashion as x `div` y). I think this might be what is tripping you up in your whole program? This is unfortunately just one of those things you have to learn by getting tripped up by it, if you're used to the situation in many other languages where / is sometimes mathematical division and sometimes integer division.
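A quick GHCi illustration of the two division operations (the concrete result types come from the standard defaulting rules):
Prelude> 7 `div` 2
3
Prelude> 7 / 2
3.5
Prelude> :t div
div :: Integral a => a -> a -> a
Prelude> :t (/)
(/) :: Fractional a => a -> a -> a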
It's when you're playing around in the interpreter that things get messy, because there you tend to bind values with no context whatsoever. In the interpreter, GHCi has to execute let c = 20 on its own, because you haven't entered 3 * c / 4 yet. It has no way of knowing whether you intend that 20 to be an Int, Integer, Float, Double, Rational, etc.
Haskell will pick a default type for numeric values; otherwise if you never use any functions that only work on one particular type of number you'd always get an error about ambiguous type variables. This normally works fine, because these default rules are applied while reading the whole module and so take into account all the other constraints on the type (like whether you've ever used it with /). But here there are no other constraints it can see, so the type defaulting picks the first cab off the rank and makes c an Integer.
Then, when you ask GHCi to evaluate 3 * c / 4, it's too late. c is an Integer, so must 3 * c be, and Integers don't support /.
So in the interpreter, yes, sometimes if you don't give an explicit type to a let binding GHC will pick an incorrect type, especially with numeric types. After that, you're stuck with whatever operations are supported by the concrete type GHCi picked, but when you get this kind of error you can always rebind the variable; e.g. let c = 20.0.
However I suspect in your real program the problem is simply that the operation you wanted was actually div rather than /.
Haskell is a bit unusual in this way. Yes, you can't divide two integers together, but it's rarely a problem.
The reason is that if you look at the Num typeclass, there's a method fromInteger; this allows literals to be converted into the appropriate type. This, combined with type inference, alleviates 99% of the cases where it'd be a problem. Quick example:
newtype Foo = Foo Integer
  deriving (Show, Eq)

instance Num Foo where
  fromInteger _ = Foo 0
  negate = undefined
  abs = undefined
  (+) = undefined
  (-) = undefined
  (*) = undefined
  signum = undefined
Now if we load this into GHCi
*> 0 :: Foo
Foo 0
*> 1 :: Foo
Foo 0
So you see we are able to do some pretty cool things with how GHCi parses a raw integer literal. This has a lot of practical uses in DSLs that we won't talk about here.
Next question was how to get from a Double to an Integer or vice versa. There's a function for that.
In the case of going from an Integer to a Double, we'd use fromInteger as well. Why?
Well the type signature for it is
(Num a) => Integer -> a
and since we can use (+) with Doubles we know they're a Num instance. And from there it's easy.
*> 0 :: Double
0.0
Last piece of the puzzle is Double -> Integer. Well a brief search on Hoogle shows
truncate
floor
round
-- etc ...
I'll leave that to you to search.
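For instance, here's how they differ on positive and negative inputs (in GHCi, where the results default to Integer):
*> truncate 3.7
3
*> floor 3.7
3
*> round 3.7
4
*> truncate (-3.7)
-3
*> floor (-3.7)
-4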
Type coercion in Haskell isn't automatic (or rather, it doesn't actually exist). When you write the literal 20 it's inferred to be of type Num a => a (conceptually anyway; I don't think it works quite like that) and will, depending on the context in which it is used (i.e. what functions you pass it to), be instantiated with an appropriate type (I believe if no further constraints are applied, this will default to Integer when you need a concrete type at some point). If you need a different kind of Num, you need to convert the numbers, e.g. (3 * fromIntegral c / 4) in your example.
The type of (/) is Fractional a => a -> a -> a.
To divide Integers, use div instead of (/). Note that the type of div is
div :: Integral a => a -> a -> a
In most high-level languages the compiler works for me -- not the other way around.
I argue that the Haskell compiler works for you just as much, if not more so, than those of other languages you have used. Haskell is a very different language than the traditional imperative languages (such as C, C++, Java, etc.) you are probably used to. This means that the compiler works differently as well.
As others have stated, Haskell will never automatically coerce from one type to another. If you have an Integer which needs to be used as a Float, you need to do the conversion explicitly with fromInteger.

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCI and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCI is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCI isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
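Here is a small sketch of what that loss of sharing looks like when a binding is kept polymorphic (the signature here deliberately sidesteps the restriction; the names are made up for illustration):

import Data.List (genericLength)

-- With this polymorphic signature, len is effectively a function awaiting
-- a Num dictionary, so the list is traversed again at every use site:
len :: Num a => a
len = genericLength [1 .. 100000 :: Int]

main :: IO ()
main = do
    print (len :: Int)      -- one traversal of the list
    print (len :: Double)   -- and another one here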
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)
