Range bounds of type Integral and/or RealFrac causing problems in nested list comprehensions - haskell

I'm trying to understand why, for otherwise simple expressions, range bounds specified in Integral types cause ghc to reject a nested comprehension, when changing the range bounds to Num allows the same comprehension to execute successfully. For the cases below, ghc fails to compile a program invoking any of the fails functions, although the works functions all succeed. In ghci, the code is accepted, but invoking any of the fails functions generates a runtime error ("No instance for (Show (Double -> [(a20, Integer)])) arising from a use of ‘print’").
Note that these are stripped-down test cases to explore the boundary of what ghc will or will not accept in a comprehension; I'm not looking for alternate code to iterate over these specific and fairly trivial ranges. I also understand that in these particular cases, taking the floor of the upper bound doesn't really change the range being iterated over; however, it can be useful to force the iteration variables to have Integral type in subsequent filter expressions (not shown in these test cases).
fails0 x = [(y, z) | y <- [1..floor x], z <- [2..floor y]] -- second iteration expression depends on first, both are Integral
works0 x = [(y, z) | y <- [1..floor x], z <- [2..floor x]] -- second iteration expression independent of first
works1 x = [(y, z) | y <- [1..x], z <- [2..floor y]] -- remove first floor (1st list is Num, 2nd still Integral)
works2 x = [(y, z) | y <- [1..floor x], z <- [2..y]] -- remove second floor (both lists Integral)
fails1 x = [(y, z) | y <- [1..floor 5], z <- [2..floor y]] -- first iteration range is constant and Integral
works3 x = [(y, z) | y <- [1..5], z <- [2..floor y]] -- first iteration range is constant but Num
As you can see, defining the first range with Num bounds, or removing the call to floor in the second sequence's bounds, or removing the dependence of the second range's bounds on the iterated value of the first (yet leaving both of Integral type), is enough to make it work. Without any of those changes, ghc reports an error of the form
Ambiguous type variable ‘a0’ arising from a use of ‘fails0’ prevents the constraint ‘(RealFrac a0)’ from being solved. Probable fix: use a type annotation to specify what ‘a0’ should be. These potential instances exist:
instance RealFrac Double -- Defined in ‘GHC.Float’
instance RealFrac Float -- Defined in ‘GHC.Float’
...plus one instance involving out-of-scope types
(use -fprint-potential-instances to see them all)
Interestingly, the signatures of fails0 and works0 are nearly equivalent:
*Main> :t fails0
fails0
:: (RealFrac a1, RealFrac a2, Integral a2, Integral b) =>
a1 -> [(a2, b)]
*Main> :t works0
works0
:: (RealFrac a1, Integral a2, Integral b) => a1 -> [(a2, b)]
(Tested on ghc 8.4.4 and 8.10.4.)
Is this intended behavior per the language design, or an implementation limitation? Are there any best practices to ensure that more complex comprehensions, with dependencies between nested iteration ranges, will successfully compile?

You have three separate problems here: (1) ambiguous types, (2) a confusion between integer and fractional numbers, and (3) trying to print the unprintable.
Let's start with the ambiguity
Consider the following program:
f x = []
And then consider the following definition:
x = f 42
Question: what is the type of x? Well, it's a generic type [a], of course. But say we wanted to know the concrete type; for example, we might want to, hypothetically, print it out by calling show x. So which instance of Show would the compiler use for it? Show [Int]? Or Show [String]? Or perhaps Show [Bool]?
The answer is: none of them. There is simply no context that the compiler could use to determine the type of x and look up the appropriate instance of Show.
This is what is called "ambiguous type variable". The result of f could be of any type, and there is no way to determine which. Ambiguous.
And the only way to resolve this is to add a type annotation:
x :: [Int]
x = f 42
And so it is in your case: fails0 returns a list of generic type, and there is no context from which to determine it.
Hold on a second! But then, how does works0 work?
The defaulting rules of course! :-)
In Haskell numbers are generic. Perhaps a bit too generic. So that when you write 5, it could mean Integer, or Int, or Double, or Float, and so on. And the only way to determine that is to add a type annotation. But of course, if we had to add a type annotation to every single number, that would be ridiculous, so GHC has special magic for this: without going into too much detail, whenever GHC sees a numeric-ish type, it tries to substitute a "reasonable" default, such as Integer or Double.
And that's how works0 works. GHC determines its type by defaulting, so you don't have to add a type signature.
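Concretely, here is a hedged sketch of what that defaulting settles on for works0 in GHCi when nothing else constrains it: Double for the RealFrac argument, Integer for the Integral results (the name works0Defaulted is just illustrative):
works0Defaulted :: Double -> [(Integer, Integer)]
works0Defaulted x = [(y, z) | y <- [1..floor x], z <- [2..floor x]]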
But for fails0 GHC doesn't use type defaulting, because it can't. Which brings us to...
Integer/fractional confusion
In the fails0 equation, what type do you think y should be? There are two bits of evidence for that:
y comes from a list from 1 to floor x, so it has to have the same type as floor x, which is any type that has an instance of Integral (according to the type of floor)
y is used as an argument of floor, so it must be of a type that has an instance of RealFrac (also according to the type of floor)
So this means that:
y :: (RealFrac a, Integral a) => a
y should be of a type that has instances of both RealFrac and Integral
And this is what GHCi is telling you when you ask it for the type of fails0. In your example, it denotes the type of y as a2, and indeed, a2 has both RealFrac and Integral constraints.
GHCi is ok with it for now. It figured out the constraints, now it's waiting for you to call that function, and when you do, it dares you to choose a type a2 such that it has both constraints.
But you can't! If you look at the standard instances of RealFrac, and then standard instances of Integral, you'll see that there is no type that has both of them. And this makes perfect sense: the Integral class is supposed to describe integer-ish numbers, while the RealFrac class is supposed to describe fractions. Any given type can't be both!
But the compiler is still fine with it. It's content to accept such a function type, because by itself it doesn't really contradict anything. It's only when you try to call it that you discover that there is no possible type you could choose for y. Or, rather, GHC discovers that when it tries to use the defaulting rules.
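Since no standard type has both instances, the practical fix is to make sure y is only ever used at an Integral type. A minimal sketch along the lines of the question's works2, with the signature pinned explicitly (fixedNested is just an illustrative name):
fixedNested :: Double -> [(Integer, Integer)]
fixedNested x = [(y, z) | y <- [1..floor x], z <- [2..y]] -- y is already integral, so no second floor is needed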
And finally, printing in GHCi
The way GHCi works, it waits for you to type in some Haskell code, and once you do, it compiles that code, executes it, and prints the result. And in order to print the result, the result has to have a Show instance.
According to your description:
invoking any of the fails functions generates a runtime error ("No instance for (Show (Double -> [(a20, Integer)])) arising from a use of ‘print’").
See how GHCi is trying to find an instance Show (Double -> [(a20, Integer)])? It's trying to find a Show instance for a function (which doesn't exist, because functions are inherently non-printable).
And this means that the result of whatever you just typed is a function. I'm going to guess that you just typed fails0 and forgot to give it an argument. So GHCi is trying to print fails0 itself, not its return value.
If you do give it an argument, you will get a similar "ambiguous type" error.
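Roughly, the two situations look like this in GHCi (messages abridged from the ones quoted above; exact wording varies by GHC version):
*Main> fails0
<interactive>: No instance for (Show (Double -> [(a20, Integer)])) arising from a use of ‘print’
*Main> fails0 10.0
<interactive>: Ambiguous type variable ‘a0’ arising from a use of ‘fails0’ prevents the constraint ‘(RealFrac a0)’ from being solved.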

Related

How can Haskell integer literals be comparable without being in the Eq class?

In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> let { myEqualP :: Num a => a -> a -> Bool ; myEqualP x y = x==y ; }
<interactive>:6:60: error:
• Could not deduce (Eq a) arising from a use of ‘==’
from the context: Num a
bound by the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
at <interactive>:6:7-41
Possible fix:
add (Eq a) to the context of
the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
• In the expression: x == y
In an equation for ‘myEqualP’: myEqualP x y = x == y
λ>
It seems this is because, for example, Num instances can be defined for some function types.
Furthermore, if we prevent ghci from overguessing the type of integer literals, they have just the Num type constraint:
λ>
λ> :set -XNoMonomorphismRestriction
λ>
λ> x=42
λ> :type x
x :: Num p => p
λ>
Hence, terms like x or 42 above have no reason to be comparable.
But still, they happen to be:
λ>
λ> y=43
λ> x == y
False
λ>
Can somebody explain this apparent paradox?
Integer literals can't be compared without using Eq. But that's not what is happening, either.
In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; not sure about GHC 8.8.4) x = 42 results in a variable x of type forall p. Num p => p.1
Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2
Then you enter x == y, and GHCi has to evaluate in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (which has to be the same). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3
However each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses change the type of x or y; we're just using them each time at one of the (many) types they support.
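A minimal compilable sketch of the same idea (the primed names just mirror the x and y from the session above):
x', y' :: Num p => p
x' = 42
y' = 43

usedAtInt :: Bool
usedAtInt = (x' :: Int) == y'

usedAtDouble :: Bool
usedAtDouble = x' == (y' :: Double)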
Without the concept of defaulting, a bare x == y would just produce an Ambiguous type variable error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).
If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.
2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.
3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc). Basic types like Int, Integer, Double, Char, Bool, each have their own instance, and each of those instances has its own code for ==.
4 In the underlying system, a type like forall p. Num p => p is in fact much like a function; one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that could be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.
5 The gist of the default rules is that if the constraints on an ambiguous type variable are:
all built-in classes
at least one of them is a numeric class (Num, Floating, etc)
then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work then it reports an error.
You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.
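For example, here is a hedged sketch of a module with its own default declaration (module and binding names are illustrative); with this in scope GHC tries Int before Double when it defaults:
module DefaultExample where

default (Int, Double)

shown :: String
shown = show (2 ^ 10) -- both ambiguous type variables default to Int, giving "1024"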
GHCi however comes with extended default rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with the ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation means that switching either's default rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job at achieving its goals in compiled code but is almost always wrong in interpreter sessions).
6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
λ :t x
x :: Num p => p
λ :t y
y :: Num p => p
λ (x `asTypeOf` _) == y
<interactive>:19:15: error:
• Found hole: _ :: Integer
• In the second argument of ‘asTypeOf’, namely ‘_’
In the first argument of ‘(==)’, namely ‘(x `asTypeOf` _)’
In the expression: (x `asTypeOf` _) == y
• Relevant bindings include
it :: Bool (bound at <interactive>:19:1)
Valid hole fits include
x :: forall p. Num p => p
with x
(defined at <interactive>:1:1)
it :: forall p. Num p => p
with it
(defined at <interactive>:10:1)
y :: forall p. Num p => p
with y
(defined at <interactive>:12:1)
You can see it tells us nice and simply Found hole: _ :: Integer, before proceeding with all the extra information it likes to give us about errors.
A typed hole (in its simplest form) just means writing _ in place of an expression. The compiler errors out on such an expression, but it tries to give you information about what you could use to "fill in the blank" in order to get it to compile; most helpfully, it tells you the type of something that would be valid in that position.
foo `asTypeOf` bar is an old pattern for adding a bit of type information. It returns foo but it restricts (this particular usage of) it to be the same type as bar (the actual value of bar is totally unused). So if you already have a variable d with type Double, x `asTypeOf` d will be the value of x as a Double.
Here I'm using asTypeOf "backwards"; instead of using the thing on the right to constrain the type of the thing on the left, I'm putting a hole on the right (which could have any type), but asTypeOf conveniently makes sure it's the same type as x without otherwise changing how x is used in the overall expression (so the same type inference still applies, including defaulting, which isn't always the case if you lift a small part of a larger expression out to ask GHCi for its type with :t; in particular :t x won't tell us Integer, but Num p => p).
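For comparison, a tiny sketch of asTypeOf used the ordinary way round, with d the illustrative Double from the paragraph above:
d :: Double
d = 0

pinned :: Double
pinned = 42 `asTypeOf` d -- the literal is forced to Double, so this is 42.0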

Haskell Show type class in a function

I have a function
mySucc :: (Enum a, Bounded a, Eq a, Show a) => a -> Maybe a
mySucc int
  | int == maxBound = Nothing
  | otherwise = Just $ succ int
When I want to print the output of this function in ghci, Haskell seems to be confused as to which instance of Show to use. Why is that? Shouldn't Haskell automatically resolve a's type during runtime and use its Show?
My limited understanding of type class is that, if you mention a type (in my case a) and say that it belongs to a type class (Show), Haskell should automatically resolve the type. Isn't that how it resolves Bounded, Enum and Eq? Please correct me if my understanding is wrong.
Shouldn't Haskell automatically resolve a's type during runtime and use its Show?
Generally speaking, types don't exist at runtime. The compiler typechecks your code, resolves any polymorphism, and then erases the types. But, this is kind of orthogonal to your main question.
My limited understanding of type class is that, if you mention a type (in my case a) and say that it belongs to a type class (Show), Haskell should automatically resolve the type
No. The compiler will automatically resolve the instance. What that means is, you don't need to explicitly pass a showing-method into your function. For example, instead of the function
showTwice :: Show a => a -> String
showTwice x = show x ++ show x
you could have a function that doesn't use any typeclass
showTwice' :: (a -> String) -> a -> String
showTwice' show' x = show' x ++ show' x
which can be used much the same way as showTwice if you give it the standard show as the first argument. But that argument would need to be manually passed around at each call site. That's what you can avoid by using the type class instead, but this still requires the type to be known first.
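Hypothetical usage of both versions (demoBoth is an illustrative name); the argument's type is what lets the class-based version pick its instance, while the explicit version has show passed in by hand:
demoBoth :: (String, String)
demoBoth = (showTwice (42 :: Int), showTwice' show (42 :: Int)) -- both are "4242"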
(Your mySucc doesn't actually use show in any way at all, so you might as well omit the Show a constraint completely.)
When your call to mySucc appears in a larger expression, chances are the type will in fact also be inferred automatically. For example, mySucc (length "bla") will use a ~ Int, because the result of length is fixed to Int; or mySucc 'y' will use a ~ Char. However, if all the subexpressions are polymorphic (and in Haskell, even number literals are polymorphic), then the compiler won't have any indication what type you actually want. In that case you can always specify it explicitly, either in the argument
> mySucc (3 :: Int)
Just 4
or in the result
> mySucc 255 :: Maybe Word8
Nothing
Are you writing mySucc 1? In this case, you get an error because the literal 1 is a polymorphic value of type Num a => a.
Try calling mySucc 1 :: Maybe Int and it will work.
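A quick GHCi check of that suggestion (output as I would expect it; not tied to a particular GHC version):
> mySucc 1 :: Maybe Int
Just 2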

How to work around issue with ambiguity when monomorphic restriction turned *on*?

So, learning Haskell, I came across the dreaded monomorphic restriction, soon enough, with the following (in ghci):
Prelude> let f = print.show
Prelude> f 5
<interactive>:3:3:
No instance for (Num ()) arising from the literal `5'
Possible fix: add an instance declaration for (Num ())
In the first argument of `f', namely `5'
In the expression: f 5
In an equation for `it': it = f 5
So there's a bunch of material about this, e.g. here, and it is not so hard to work around.
I can either add an explicit type signature for f, or I can turn off the monomorphic restriction (with ":set -XNoMonomorphismRestriction" directly in ghci, or in a .ghci file).
There's some discussion about the monomorphic restriction, but it seems like the general advice is that it is ok to turn this off (and I was told that this is actually off by default in newer versions of ghci).
So I turned this off.
But then I came across another issue:
Prelude> :set -XNoMonomorphismRestriction
Prelude> let (a,g) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
<interactive>:4:5:
No instance for (System.Random.Random t0)
arising from the ambiguity check for `g'
The type variable `t0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance System.Random.Random Bool -- Defined in `System.Random'
instance System.Random.Random Foreign.C.Types.CChar
-- Defined in `System.Random'
instance System.Random.Random Foreign.C.Types.CDouble
-- Defined in `System.Random'
...plus 33 others
When checking that `g' has the inferred type `System.Random.StdGen'
Probable cause: the inferred type is ambiguous
In the expression:
let (a, g) = System.Random.random (System.Random.mkStdGen 4)
in a :: Int
In an equation for `it':
it
= let (a, g) = System.Random.random (System.Random.mkStdGen 4)
in a :: Int
This is actually simplified from example code in the 'Real World Haskell' book, which wasn't working for me, and which you can find on this page: http://book.realworldhaskell.org/read/monads.html (it's the Monads chapter, and the getRandom example function, search for 'getRandom' on that page).
If I leave the monomorphic restriction on (or turn it on) then the code works. It also works (with the monomorphic restriction on) if I change it to:
Prelude> let (a,_) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
-106546976
or if I specify the type of 'a' earlier:
Prelude> let (a::Int,g) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
-106546976
but, for this second workaround, I have to turn on the 'scoped type variables' extension (with ":set -XScopedTypeVariables").
The problem is that in this case (problems when monomorphic restriction on) neither of the workarounds seem generally applicable.
For example, maybe I want to write a function that does something like this and works with arbitrary (or multiple) types, and of course in this case I most probably do want to hold on to the new generator state (in 'g').
The question is then: How do I work around this kind of issue, in general, and without specifying the exact type directly?
And, it would also be great (as a Haskell novice) to get more of an idea about exactly what is going on here, and why these issues occur.
When you define
(a,g) = random (mkStdGen 4)
then even if g itself is always of type StdGen, the value of g depends on the type of a, because different types can differ in how much they use the random number generator.
Moreover, when you (hypothetically) use g later, as long as a was polymorphic originally, there is no way to decide which type of a you want to use for calculating g.
So, taken alone, as a polymorphic definition, the above has to be disallowed because g actually is extremely ambiguous and this ambiguity cannot be fixed at the use site.
This is a general kind of problem with let/where bindings that bind several variables in a pattern, and is probably the reason why the ordinary monomorphism restriction treats them even stricter than single variable equations: With a pattern, you cannot even disable the MR by giving a polymorphic type signature.
When you use _ instead, presumably GHC doesn't worry about this ambiguity as long as it doesn't affect the calculation of a. Possibly it could have detected that g is unused in the former version, and treated it similarly, but apparently it doesn't.
As for workarounds without giving unnecessary explicit types, you might instead try replacing let/where by one of the binding methods in Haskell which are always monomorphic. The following all work:
case random (mkStdGen 4) of
  (a,g) -> a :: Int
(\(a,g) -> a :: Int) (random (mkStdGen 4))
do (a,g) <- return $ random (mkStdGen 4)
   return (a :: Int) -- The result here gets wrapped in the Monad
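For completeness, a small self-contained sketch of the first workaround, assuming the random package is available (names are illustrative); the case pattern is bound monomorphically per use, so g never becomes ambiguous even though it is reused:
import System.Random (mkStdGen, random)

twoInts :: (Int, Int)
twoInts = case random (mkStdGen 4) of
  (a, g) -> case random g of
    (b, _) -> (a, b)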

How are variable names chosen in type signatures inferred by GHC?

When I play with checking types of functions in Haskell with :t, for example like those in my previous question, I tend to get results such as:
Eq a => a -> [a] -> Bool
(Ord a, Num a, Ord a1, Num a1) => a -> a1 -> a
(Num t2, Num t1, Num t, Enum t2, Enum t1, Enum t) => [(t, t1, t2)]
It seems that this is not such a trivial question - how does the Haskell interpreter pick literals to symbolize typeclasses? When would it choose a rather than t? When would it choose a1 rather than b? Is it important from the programmer's point of view?
The names of the type variables aren't significant. The type:
Eq element => element -> [element] -> Bool
Is exactly the same as:
Eq a => a -> [a] -> Bool
Some names are simply easier to read/remember.
Now, how can an inferencer choose the best names for types?
Disclaimer: I'm absolutely not a GHC developer. However I'm working on a type-inferencer for Haskell in my bachelor thesis.
During inferencing, the names chosen for the variables probably aren't that readable. In fact they are almost surely something along the lines of _N with N a number or aN with N a number.
This is due to the fact that you often have to "refresh" type variables in order to complete inferencing, so you need a fast way to create new names. And using numbered variables is pretty straightforward for this purpose.
The names displayed when inference is completed can be "pretty printed". The inferencer can rename the variables to use a, b, c and so on instead of _1, _2 etc.
The trick is that most operations have explicit type signatures. Some definitions require you to quantify some type variables (class, data and instance declarations, for example).
All these names that the user explicitly provides can be used to display the type in a better way.
When inferencing you can somehow keep track of where the fresh type variables came from, in order to be able to rename them with something more sensible when displaying them to the user.
Another option is to refresh variables by adding a number to them. For example a fresh type of return could be Monad m0 => a0 -> m0 a0 (here we know to use m and a simply because the class definition for Monad uses those names). When inferencing is finished you can get rid of the numbers and obtain the pretty names.
In general the inferencer will try to use names that were explicitly provided through signatures. If such a name was already used it might decide to add a number instead of using a different name (e.g. use b1 instead of c if b was already bound).
There are probably some other ad hoc rules. For example the fact that tuple elements get names like t, t1, t2, t3, etc. is probably something done with a custom rule. In fact t doesn't appear in the signature for (,,), for example.
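For instance, the m, a, f and b that GHCi prints here come straight from the signatures in the libraries (exact output may vary slightly between GHC versions):
Prelude> :t return
return :: Monad m => a -> m a
Prelude> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b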
How does GHCi pick names for type variables? explains how many of these variable names come about. As Ganesh Sittampalam pointed out in a comment, something strange seems to be happening with arithmetic sequences. Both the Haskell 98 report and the Haskell 2010 report indicate that
[e1..] = enumFrom e1
GHCi, however, gives the following:
Prelude> :t [undefined..]
[undefined..] :: Enum t => [t]
Prelude> :t enumFrom undefined
enumFrom undefined :: Enum a => [a]
This makes it clear that the weird behavior has nothing to do with the Enum class itself, but rather comes in from some stage in translating the syntactic sequence to the enumFrom form. I wondered if maybe GHC wasn't really using that translation, but it really is:
{-# LANGUAGE NoMonomorphismRestriction #-}
module X (aoeu,htns) where
aoeu = [undefined..]
htns = enumFrom undefined
compiled using ghc -ddump-simpl enumlit.hs gives
X.htns :: forall a_aiD. GHC.Enum.Enum a_aiD => [a_aiD]
[GblId, Arity=1]
X.htns =
  \ (@ a_aiG) ($dEnum_aiH :: GHC.Enum.Enum a_aiG) ->
    GHC.Enum.enumFrom @ a_aiG $dEnum_aiH (GHC.Err.undefined @ a_aiG)

X.aoeu :: forall t_aiS. GHC.Enum.Enum t_aiS => [t_aiS]
[GblId, Arity=1]
X.aoeu =
  \ (@ t_aiV) ($dEnum_aiW :: GHC.Enum.Enum t_aiV) ->
    GHC.Enum.enumFrom @ t_aiV $dEnum_aiW (GHC.Err.undefined @ t_aiV)
so the only difference between these two representations is the assigned type variable name. I don't know enough about how GHC works to know where that t comes from, but at least I've narrowed it down!
Ørjan Johansen has noted in a comment that something similar seems to happen with function definitions and lambda abstractions.
Prelude> :t \x -> x
\x -> x :: t -> t
but
Prelude> :t map (\x->x) $ undefined
map (\x->x) $ undefined :: [b]
In the latter case, the type b comes from an explicit type signature given to map.
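For reference, that signature is (GHCi output; variable names may differ slightly by version):
Prelude> :t map
map :: (a -> b) -> [a] -> [b]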
Are you familiar with the concepts of alpha equivalence and alpha substitution? This captures the notion that, for example, both of the following are completely equivalent and interconvertible (in certain circumstances) even though they differ:
\x -> (x, x)
\y -> (y, y)
The same concept can be extended to the level of types and type variables (see "System F" for further reading). Haskell in fact has a notion of "lambdas at the type level" for binding type variables, but it's hard to see because they're implicit by default. However, you can make them explicit by using the ExplicitForAll extension, and play around with explicitly binding your type variables:
ghci> :set -XExplicitForAll
ghci> let f x = x; f :: forall a. a -> a
In the second line, I use the forall keyword to introduce a new type variable, which is then used in a type.
In other words, it doesn't matter whether you choose a or t in your example, as long as the type expressions satisfy alpha-equivalence. Choosing type variable names so as to maximize human convenience is an entirely different topic, and probably far more complicated!
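A hedged module-level sketch of the same point: the two signatures below differ only in the bound name, so they are alpha-equivalent and GHC treats them as the same type (identifiers are illustrative).
{-# LANGUAGE ExplicitForAll #-}

idA :: forall a. a -> a
idA v = v

idT :: forall t. t -> t
idT v = v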

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCI and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCI is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCI isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
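A hedged sketch of the "put it in a file" version the answer refers to; with the whole do block as context, both literals end up as Double:
main :: IO ()
main = do
  let a = 1
  let b = 2.0
  print (a + b) -- prints 3.0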
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0, you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint, therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
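Alternatively, a hedged sketch of the explicit-signature route (names mirror the question); keeping a polymorphic via the signature means it can still be used at Double later:
a :: Num t => t
a = 1

b :: Double
b = 2.0

total :: Double
total = a + b -- a is instantiated at Double here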
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)
