How does equality work for numeric types? - haskell

I see that Haskell allows different numeric types to be compared:
*Main> :t 3
3 :: Num t => t
*Main> :t 3.0
3.0 :: Fractional t => t
*Main> 3 == 3.0
True
Where is the source code for the Eq instance for numeric types? If I create a new type, such as ComplexNumber, can I extend == to work for it? (I might want complex numbers with no imaginary parts to be potentially equal to real numbers.)

“Haskell allows different numeric types to be compared”: no, it doesn't. What Haskell actually allows is for the same literal to be used at different types. In particular, you can do
Prelude> let a = 3.7 :: Double
Prelude> let b = 1 :: Double
Prelude> a + b
4.7
OTOH, if I declared these explicitly with conflicting types, the addition would fail:
Prelude> let a = 3.7 :: Double
Prelude> let b = 1 :: Int
Prelude> a + b
<interactive>:31:5:
Couldn't match expected type ‘Double’ with actual type ‘Int’
In the second argument of ‘(+)’, namely ‘b’
In the expression: a + b
Now, Double is not the most general possible type for either a or b. In fact all number literals are polymorphic, but before any operation (like equality comparison) happens, such a polymorphic type needs to be pinned down to a concrete monomorphic one. Like,
Prelude> (3.0 :: Double) == (3 :: Double)
True
Because ==, contrary to your premise, actually requires both sides to have the same type, you can omit the signature on either side without changing anything:
Prelude> 3.0 == (3 :: Double)
True
In fact, even without any type annotation, GHCi will still treat both sides as Double. This is because of type defaulting – in this particular case, Fractional is the strongest constraint on the shared number type, and for Fractional, the default type is Double. OTOH, if both sides had been integral literals, then GHCi would have chosen Integer. This can sometimes make a difference, for instance
Prelude> 10000000000000000 == 10000000000000001
False
but
Prelude> 10000000000000000 == (10000000000000001 :: Double)
True
because in the latter case, the final 1 is lost in the floating-point error.

There is nothing magical going on here. 3 can be either a real number or an integer, but its type gets unified with that of a real number when it is compared to 3.0.
Note the type class:
class Eq a where
  (==) :: a -> a -> Bool
So (==) really does only compare things of the same type. And your example 3 == 3.0 gets its types unified and effectively becomes 3.0 == 3.0 internally.
I'm unsure of any type tricks to make it unify with a user-defined complex number type. My gut tells me it cannot be done.
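That said, because literals are polymorphic, giving your type a Num instance does get you part of the way. A minimal sketch (the ComplexNumber type and its Num instance here are hypothetical, made up for illustration):
data ComplexNumber = ComplexNumber Double Double
  deriving (Show, Eq)

instance Num ComplexNumber where
  ComplexNumber a b + ComplexNumber c d = ComplexNumber (a + c) (b + d)
  ComplexNumber a b * ComplexNumber c d = ComplexNumber (a*c - b*d) (a*d + b*c)
  negate (ComplexNumber a b) = ComplexNumber (negate a) (negate b)
  abs    (ComplexNumber a b) = ComplexNumber (sqrt (a*a + b*b)) 0
  signum z                   = z                      -- simplified for the sketch
  fromInteger n              = ComplexNumber (fromInteger n) 0
With that instance, 3 == ComplexNumber 3 0 type checks and is True, because the polymorphic literal 3 is simply used at type ComplexNumber (via fromInteger). But a value already pinned to Double, such as (3 :: Double), still won't unify with a ComplexNumber; (==) always needs both sides at the same type.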

Related

Overloaded / Polymorphic Functions with different types

I'm learning Haskell and came across an unanswered question:
From the lesson I'm following:
(+) :: Num a => a -> a -> a
For any numeric type a, (+) takes 2 values of type a and returns a value of type a.
As per the examples:
1 + 1
-- 2   (type a is Int)
3.0 + 4.0
-- 7.0 (type a is Float)
'a' + 'b'
-- Type Error: Char is not a Numeric type
This makes perfect sense, but I end up not understanding what goes on in the background when:
1 + 3.0
Is the Type Inference system auto casting my Int to Float because it knows it will return Float?
You should investigate these sorts of questions in ghci. It's an invaluable learning resource:
$ ghci
GHCi, version 9.0.1: https://www.haskell.org/ghc/ :? for help
ghci> :t 1
1 :: Num p => p
ghci> :t 3.0
3.0 :: Fractional p => p
ghci> :t 1 + 3.0
1 + 3.0 :: Fractional a => a
First lesson: Numeric literals are polymorphic. 1 isn't an Int, it's polymorphic. It can be any instance of Num that is necessary for the code to compile. 3.0 isn't a Float, it's any instance of Fractional that is necessary for the code to compile. (The difference being the decimal in the literal - it restricts the types allowed.)
Second Lesson: when you combine things into an expression, types get unified. When you unify Num and Fractional constraints, you get a Fractional constraint. This is because Fractional is defined to require all instances of it also be instances of Num.
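You can see the superclass relationship that drives this in the class heads themselves (abridged from the Prelude; both classes have more methods than shown here):
class Num a where
  (+), (-), (*) :: a -> a -> a
  fromInteger   :: Integer -> a
  -- ... plus negate, abs, signum

class Num a => Fractional a where    -- every Fractional is also a Num
  (/)          :: a -> a -> a
  fromRational :: Rational -> a
  -- ... plus recip
So once the 3.0 side introduces a Fractional constraint, the 1 side's weaker Num constraint is subsumed by it.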
For a bit more, let's turn on warnings and see what additional information they provide.
ghci> :set -Wall
ghci> 1
<interactive>:5:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Show a0) arising from a use of ‘print’ at <interactive>:5:1
(Num a0) arising from a use of ‘it’ at <interactive>:5:1
• In a stmt of an interactive GHCi command: print it
1
ghci> 1 + 3.0
<interactive>:6:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Double’
(Show a0) arising from a use of ‘print’ at <interactive>:6:1-7
(Fractional a0) arising from a use of ‘it’ at <interactive>:6:1-7
• In a stmt of an interactive GHCi command: print it
4.0
When printing a value, ghci requires the type have a Show instance. Fortunately, that isn't too important of a detail here, but it's why the defaulting warnings refer to Show.
The lessons to observe here are that the default type for something with a Num instance if inference doesn't require something more specific is Integer, not Int. The default type for something with a Fractional instance if inference doesn't require something more specific is Double, not Float. (Float is basically never used. Forget it exists.)
So when inference runs, the expression 1 + 3.0 is inferred to have the type Fractional a => a. In the absence of further requirements on the type, defaulting kicks in and says "a is Double". Then that information flows back through the (+) to its arguments and requires each of them to also be Double. Fortunately, each argument is a polymorphic literal that can take the type Double. Type checking succeeds, instances are chosen, addition happens, the result is printed.
It's very important to this process that numeric literals are polymorphic. Haskell has no implicit conversions between any pair of types. Especially not numeric types. If you want to actually convert values from one type to another, you must call a function that does the conversion you desire. (fromIntegral, round, floor, ceiling, and realToFrac are the most common numeric conversion functions.) But when the values are polymorphic, it means inference can pick a matching type without needing a conversion.
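A few quick GHCi examples of those conversion functions (the type annotations are just for illustration):
ghci> fromIntegral (3 :: Int) :: Double
3.0
ghci> round (2.7 :: Double) :: Integer
3
ghci> floor (2.7 :: Double) :: Integer
2
ghci> realToFrac (2.5 :: Float) :: Double
2.5
Each conversion is spelled out explicitly; nothing happens behind your back.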

How can Haskell integer literals be comparable without being in the Eq class?

In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> let { myEqualP :: Num a => a -> a -> Bool ; myEqualP x y = x==y ; }
<interactive>:6:60: error:
• Could not deduce (Eq a) arising from a use of ‘==’
from the context: Num a
bound by the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
at <interactive>:6:7-41
Possible fix:
add (Eq a) to the context of
the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
• In the expression: x == y
In an equation for ‘myEqualP’: myEqualP x y = x == y
λ>
It seems this is because, for example, Num instances can be defined for some function types.
Furthermore, if we prevent ghci from overguessing the type of integer literals, they have just the Num type constraint:
λ>
λ> :set -XNoMonomorphismRestriction
λ>
λ> x=42
λ> :type x
x :: Num p => p
λ>
Hence, terms like x or 42 above have no reason to be comparable.
But still, they happen to be:
λ>
λ> y=43
λ> x == y
False
λ>
Can somebody explain this apparent paradox?
Integer literals can't be compared without using Eq. But that's not what is happening, either.
In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; not sure about GHC 8.8.4), x = 42 results in a variable x of type forall p. Num p => p.1
Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2
Then you enter x == y, and GHCi has to evaluate it in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (which has to be the same). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3
However each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses change the type of x or y; we're just using them each time at one of the (many) types they support.
Without the concept of defaulting, a bare x == y would just produce an Ambiguous type variable error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).
If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.
2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.
3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc). Basic types like Int, Integer, Double, Char, Bool, each have their own instance, and each of those instances has its own code for ==.
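For instance, the polymorphic instance for Maybe looks roughly like this (a sketch; the real Prelude instance is derived, but it behaves the same way):
instance Eq a => Eq (Maybe a) where
  Nothing == Nothing = True
  Just x  == Just y  = x == y     -- reuses the element type's (==)
  _       == _       = False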
4 In the underlying system, a type like forall p. Num p => p is in fact much like a function; one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that could be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.
5 The gist of the default rules is that if the constraints on an ambiguous type variable are:
all built-in classes
at least one of them is a numeric class (Num, Floating, etc)
then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work then it reports an error.
You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.
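For reference, a minimal sketch of what such a declaration looks like in a module (the choice of Int first is arbitrary, purely to show the syntax):
module Main where

-- Replaces the "default default" of (Integer, Double): ambiguous numeric
-- type variables are now tried as Int first, then Double.
default (Int, Double)

main :: IO ()
main = do
  print (2 + 2)   -- Num is satisfied by Int, so this prints 4 at type Int
  print (3 / 4)   -- Int isn't Fractional, so Double is tried next: 0.75
The declaration only affects the module it appears in.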
GHCi however comes with extended default rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with the ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation means that switching either's default rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job at achieving its goals in compiled code but is almost always wrong in interpreter sessions).
6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
λ :t x
x :: Num p => p
λ :t y
y :: Num p => p
λ (x `asTypeOf` _) == y
<interactive>:19:15: error:
• Found hole: _ :: Integer
• In the second argument of ‘asTypeOf’, namely ‘_’
In the first argument of ‘(==)’, namely ‘(x `asTypeOf` _)’
In the expression: (x `asTypeOf` _) == y
• Relevant bindings include
it :: Bool (bound at <interactive>:19:1)
Valid hole fits include
x :: forall p. Num p => p
with x
(defined at <interactive>:1:1)
it :: forall p. Num p => p
with it
(defined at <interactive>:10:1)
y :: forall p. Num p => p
with y
(defined at <interactive>:12:1)
You can see it tells us nice and simply Found hole: _ :: Integer, before proceeding with all the extra information it likes to give us about errors.
A typed hole (in its simplest form) just means writing _ in place of an expression. The compiler errors out on such an expression, but it tries to give you information about what you could use to "fill in the blank" in order to get it to compile; most helpfully, it tells you the type of something that would be valid in that position.
foo `asTypeOf` bar is an old pattern for adding a bit of type information. It returns foo but it restricts (this particular usage of) it to be the same type as bar (the actual value of bar is totally unused). So if you already have a variable d with type Double, x `asTypeOf` d will be the value of x as a Double.
Here I'm using asTypeOf "backwards"; instead of using the thing on the right to constrain the type of the thing on the left, I'm putting a hole on the right (which could have any type), but asTypeOf conveniently makes sure it's the same type as x without otherwise changing how x is used in the overall expression (so the same type inference still applies, including defaulting, which isn't always the case if you lift a small part of a larger expression out to ask GHCi for its type with :t; in particular :t x won't tell us Integer, but Num p => p).
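For comparison, the "forward" use described above looks like this (d here is just a throwaway binding for the example):
λ let d = 2.5 :: Double
λ :t x `asTypeOf` d
x `asTypeOf` d :: Double
λ x `asTypeOf` d
42.0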

How to determine the type of literal in Haskell?

When I test the type of a literal in GHCi, I find
Prelude> :t 1
1 :: Num p => p
Prelude> :t 'c'
'c' :: Char
Prelude> :t "string"
"string" :: [Char]
Prelude> :t 1.0
1.0 :: Fractional p => p
The problem is: how does Haskell determine the type of such a literal? Where can I find information about that?
Furthermore, is there any way to change how GHC interprets the type of a literal?
For example:
-- do something
:t 1
1 :: Int -- interprets 1 as Int rather than Num p => p
:t 1.0
1.0 :: Double -- interprets 1.0 as Double rather than Fractional p => p
Thanks in advance.
You can ask ghci to default the type variables:
$ ghci
λ> let x = 3
λ> :type x
x :: Num p => p
λ> :type +d x
x :: Integer
λ> :type +d 1
1 :: Integer
λ> :type +d 1.0
1.0 :: Double
The :type +d command will make GHCi choose the default types for the type variables. These are the general Haskell defaulting rules (which default type each class of constraint resolves to):
Num        → Integer
Real       → Integer
Enum       → Integer
Integral   → Integer
Fractional → Double
RealFrac   → Double
Floating   → Double
RealFloat  → Double
You can learn more about defaulting in the Haskell Report.
If you write 1, it has any possible number type. That's what Num p => p actually means.
If you use 1 in an expression, GHCi will attempt to figure out the correct type of number to use based on what functions you're calling on it, and then automatically give 1 the right type.
If GHCi cannot guess what the correct type is (because there's not enough context or because several types would fit), it defaults to Integer. (And for 1.0 it will default to Double. And for any other type constraint, it will try to default to () if possible.)
This is similar to how compiled code works. If you write a number in your source code, GHC (the compiler) will attempt to work out what the correct type should be. The difference is that the compiler only has the stricter standard defaulting rules; when those don't resolve the ambiguity, it won't guess, it'll just give you a compile-time error and demand that you specify what you mean. That's desirable for making compiled code work how you expected, but it's tedious for interactively trying stuff out, which is why GHCi has more liberal defaulting.
The type of a single character is always Char.
The type of a string is always String or [Char]. (One is an alias to the other.)
The type of True and False is always Bool. And so on.
So it's only really numbers that have the possibility of multiple types.
[Well, there's an option to make strings polymorphic too, but we won't worry about that now...]
If you want messy details, you can read the Haskell Language Report (which is the official specification document that defines the Haskell language) and the GHCi user manual (which describes what GHCi does).

Haskell Type Coercion

I'm trying to wrap my head around Haskell type coercion; that is, when can one pass a value into a function without casting, and how does that work? Here is a specific example, but I am looking for a more general explanation I can use going forward to try and understand what is going on:
Prelude> 3 * 20 / 4
15.0
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> 3 * c / 4
<interactive>:94:7:
No instance for (Fractional Integer)
arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the expression: 3 * c / 4
In an equation for `it': it = 3 * c / 4
The type of (/) is Fractional a => a -> a -> a. So, I'm guessing that when I do "3 * 20" using literals, Haskell somehow assumes that the result of that expression is a Fractional. However, when a variable is used, its type is predefined to be Integer based on the assignment.
My first question is how to fix this. Do I need to cast the expression or convert it somehow?
My second question is that this seems really weird to me that you can't do basic math without having to worry so much about int/float types. I mean there's an obvious way to convert automatically between these, why am I forced to think about this and deal with it? Am I doing something wrong to begin with?
I am basically looking for a way to easily write simple arithmetic expressions without having to worry about the nitty-gritty details, while keeping the code nice and clean. In most top-level languages the compiler works for me -- not the other way around.
If you just want the solution, look at the end.
You nearly answered your own question already. Literals in Haskell are overloaded:
Prelude> :t 3
3 :: Num a => a
Since (*) also has a Num constraint
Prelude> :t (*)
(*) :: Num a => a -> a -> a
this extends to the product:
Prelude> :t 3 * 20
3 * 20 :: Num a => a
So, depending on context, this can be specialized to be of type Int, Integer, Float, Double, Rational and more, as needed. In particular, as Fractional is a subclass of Num, it can be used without problems in a division, but then the constraint will become
stronger and be for class Fractional:
Prelude> :t 3 * 20 / 4
3 * 20 / 4 :: Fractional a => a
The big difference is the identifier c is an Integer. The reason why a simple let-binding in GHCi prompt isn't assigned an overloaded type is the dreaded monomorphism restriction. In short: if you define a value that doesn't have any explicit arguments,
then it cannot have overloaded type unless you provide an explicit type signature.
Numeric types are then defaulted to Integer.
Once c is an Integer, the result of the multiplication is Integer, too:
Prelude> :t 3 * c
3 * c :: Integer
And Integer is not in the Fractional class.
There are two solutions to this problem.
Make sure your identifiers have overloaded type, too. In this case, it would
be as simple as saying
Prelude> let c :: Num a => a; c = 20
Prelude> :t c
c :: Num a => a
Use fromIntegral to cast an integral value to an arbitrary numeric value:
Prelude> :t fromIntegral
fromIntegral :: (Integral a, Num b) => a -> b
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> :t fromIntegral c
fromIntegral c :: Num b => b
Prelude> 3 * fromIntegral c / 4
15.0
Haskell will never automatically convert one type into another when you pass it to a function. Either it's compatible with the expected type already, in which case no coercion is necessary, or the program fails to compile.
If you write a whole program and compile it, things generally "just work" without you having to think too much about int/float types; so long as you're consistent (i.e. you don't try to treat something as an Int in one place and a Float in another) the constraints just flow through the program and figure out the types for you.
For example, if I put this in a source file and compile it:
main = do
  let c = 20
  let it = 3 * c / 4
  print it
Then everything's fine, and running the program prints 15.0. You can see from the .0 that GHC successfully figured out that c must be some kind of fractional number, and made everything work, without me having to give any explicit type signatures.
c can't be an integer because the / operator is for mathematical division, which isn't defined on integers. The operation of integer division is represented by the div function (usable in operator fashion as x `div` y). I think this might be what is tripping you up in your whole program? This is unfortunately just one of those things you have to learn by getting tripped up by it, if you're used to the situation in many other languages where / is sometimes mathematical division and sometimes integer division.
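Here's the difference side by side in GHCi (the numbers are arbitrary):
Prelude> 7 / 2
3.5
Prelude> 7 `div` 2
3
Prelude> (7 :: Int) `div` 2
3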
It's when you're playing around in the interpreter that things get messy, because there you tend to bind values with no context whatsoever. In the interpreter, GHCi has to execute let c = 20 on its own, because you haven't entered 3 * c / 4 yet. It has no way of knowing whether you intend that 20 to be an Int, Integer, Float, Double, Rational, etc.
Haskell will pick a default type for numeric values; otherwise if you never use any functions that only work on one particular type of number you'd always get an error about ambiguous type variables. This normally works fine, because these default rules are applied while reading the whole module and so take into account all the other constraints on the type (like whether you've ever used it with /). But here there are no other constraints it can see, so the type defaulting picks the first cab off the rank and makes c an Integer.
Then, when you ask GHCi to evaluate 3 * c / 4, it's too late. c is an Integer, so must 3 * c be, and Integers don't support /.
So in the interpreter, yes, sometimes if you don't give an explicit type to a let binding GHC will pick an incorrect type, especially with numeric types. After that, you're stuck with whatever operations are supported by the concrete type GHCi picked, but when you get this kind of error you can always rebind the variable; e.g. let c = 20.0.
However I suspect in your real program the problem is simply that the operation you wanted was actually div rather than /.
Haskell is a bit unusual in this way. Yes, you can't divide two integers together with (/), but it's rarely a problem.
The reason is that if you look at the Num typeclass, there's a function fromInteger; this allows literals to be converted into the appropriate type. This, together with type inference, alleviates 99% of the cases where it'd be a problem. Quick example:
newtype Foo = Foo Integer
  deriving (Show, Eq)

instance Num Foo where
  fromInteger _ = Foo 0
  negate = undefined
  abs = undefined
  (+) = undefined
  (-) = undefined
  (*) = undefined
  signum = undefined
Now if we load this into GHCi
*> 0 :: Foo
Foo 0
*> 1 :: Foo
Foo 0
So you see we are able to do some pretty cool things with how GHCi parses a raw integer. This has a lot of practical uses in DSL's that we won't talk about here.
Next question was how to get from a Double to an Integer or vice versa. There's a function for that.
In the case of going from an Integer to a Double, we'd use fromInteger as well. Why?
Well the type signature for it is
(Num a) => Integer -> a
and since we can use (+) with Doubles we know they're a Num instance. And from there it's easy.
*> 0 :: Double
0.0
Last piece of the puzzle is Double -> Integer. Well a brief search on Hoogle shows
truncate
floor
round
-- etc ...
I'll leave that to you to search.
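For the curious, here is roughly how a few of them differ (GHCi session, values picked arbitrarily):
*> truncate (-3.7 :: Double) :: Integer
-3
*> floor (-3.7 :: Double) :: Integer
-4
*> round (2.5 :: Double) :: Integer
2
*> round (3.5 :: Double) :: Integer
4
truncate goes toward zero, floor goes toward negative infinity, and round uses banker's rounding (halves go to the nearest even integer), which occasionally surprises people.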
Type coercion in Haskell isn't automatic (or rather, it doesn't actually exist). When you write the literal 20 it's inferred to be of type Num a => a (conceptually anyway. I don't think it works quite like that) and will, depending on the context in which it is used (i.e. what functions you pass it to) be instantiated with an appropriate type (I believe if no further constraints are applied, this will default to Integer when you need a concrete type at some point). If you need a different kind of Num, you need to convert the numbers, e.g. (3 * fromIntegral c / 4) in your example.
The type of (/) is Fractional a => a -> a -> a.
To divide Integers, use div instead of (/). Note that the type of div is
div :: Integral a => a -> a -> a
In most top-level languages the compiler works for me -- not the other way around.
I argue that the Haskell compiler works for you just as much, if not more so, than those of other languages you have used. Haskell is a very different language than the traditional imperative languages (such as C, C++, Java, etc.) you are probably used to. This means that the compiler works differently as well.
As others have stated, Haskell will never automatically coerce from one type to another. If you have an Integer which needs to be used as a Float, you need to do the conversion explicitly with fromInteger.

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCi and the order in which it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCi is guessing the types of the numbers you're entering. The most precise type that fits is Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCi isn't really "guessing"; it's using very specific type-defaulting rules that are defined in the Haskell Report.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint, therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
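A rough sketch of that sharing concern (the binding name and the numbers are made up for illustration):
-- With an overloaded type, this "constant" is really a function of the
-- Num/Enum dictionaries behind the scenes, so the sum can be recomputed
-- at every use site:
expensive :: (Num a, Enum a) => a
expensive = sum [1 .. 1000000]

main :: IO ()
main = do
  print (expensive :: Integer)   -- computed here, at Integer
  print (expensive :: Double)    -- and computed again here, at Double
Under the monomorphism restriction, expensive without a signature would instead be pinned to a single (defaulted) type and computed once.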
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)
