Type mysteries. Why does this piece of code compile? [duplicate] - haskell

The code
default ()
h :: Bool
h = 1.0 == 1.0 --Error. Ambiguity.
does not compile. This is expected because there is ambiguity. It could be either Float or Double and Haskell doesn't know which one we want.
But the code
default ()
foo :: (Fractional a, Eq a) => a -> Bool
foo x = x == 1.0
compiles successfully. I don't fully understand why. Why isn't this also ambiguous?
I have the feeling that this is because whenever we call foo, we are guaranteed to have picked a concrete type in place of a, i.e., we are guaranteed to have fixed a to either Float or Double (or our custom type that has instances of both Fractional and Eq) at compile-time and therefore there is no ambiguity.
But this is just a feeling and I don't know if it's 100% accurate.

The definition for ambiguity in this context is given in Section 4.3.4 of Haskell Report:
We say that an expression e has an ambiguous type if, in its type forall us. cx => t, there is a type variable u in us that occurs in cx but not in t. Such types are invalid.
In foo :: (Fractional a, Eq a) => a -> Bool, there is no such variable (because a occurs in a -> Bool). In 1.0 == 1.0 the type is actually (Fractional a, Eq a) => Bool, so there is such a variable.
The reason this kind of ambiguity is an error is that there is no way for the compiler to choose between different instantiations of the variable a, and the result of the program can depend on which one it chooses. In this case 1.0 == 1.0 should always be True for any reasonable instantiation, but 1) the compiler doesn't know this; 2) not all instantiations are reasonable.
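For instance, a minimal sketch of removing the ambiguity by instantiating the type variable with an annotation (reusing the question's own example):
default ()
h :: Bool
h = (1.0 :: Double) == 1.0   -- compiles: the type variable is instantiated, so nothing ambiguous is left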

The second piece of code compiles because
foo :: (Fractional a, Eq a) => a -> Bool
foo x = x == 1.0
(==) :: Eq a => a -> a -> Bool
so the compiler knows that the floating-point literal 1.0 :: Fractional a => a has the same type as x, which has an Eq instance, i.e. the "meaning" of == in this function is determined at every call.
In the first piece of code, by contrast, the literal can be of any type (Fractional a, Eq a) => a, so the "meaning" of == is ambiguous: the value of h could in principle be either True or False, depending on the type of the literal, which is never provided.

Here'a another, conceptual way to put it:
Both cases have intrinsic ambiguity, but the first one is a dead end — there is no hope for the compiler to infer its way out of the ambiguity.
The second case is also ambiguous, but the ambiguity is open — whoever calls the ambiguous code, must resolve the ambiguity, or delegate/bubble it further up the call stack.
That's all there is to it: polymorphism is controlled ambiguity. The first case is uncontrolled ambiguity, so it's not useful (i.e. polymorphism-related) ambiguity, and it gets rejected until an annotation removes the ambiguity.
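A small, hedged sketch (reusing the question's foo) of how that delegation plays out:
foo :: (Fractional a, Eq a) => a -> Bool
foo x = x == 1.0

resolvedHere :: Bool
resolvedHere = foo (2.5 :: Double)   -- this caller fixes a to Double, so == has a single meaning

stillOpen :: (Fractional a, Eq a) => a -> Bool
stillOpen y = foo y                  -- or the ambiguity is bubbled further up the call chain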

Related

Assigning constrained literal to a polymorphic variable

While learning Haskell with Haskell Programming from First Principles, I found an exercise that puzzles me.
Here is the short version:
For the following definition:
a) i :: Num a => a
i = 1
b) Try replacing the type signature with the following:
i :: a
The replacement gives me an error:
error:
• No instance for (Num a) arising from the literal ‘1’
Possible fix:
add (Num a) to the context of
the type signature for:
i' :: forall a. a
• In the expression: 1
In an equation for ‘i'’: i' = 1
|
38 | i' = 1
| ^
It is more or less clear to me how the Num constraint arises.
What is not clear is why assigning 1 to the polymorphic variable i' gives this error.
Why this works:
id 1
while this one doesn't:
i' :: a
i' = 1
id i'
Shouldn't it be possible to assign a more specific value to a less specific one, losing some type information, if there are no issues?
This is a common misunderstanding. You probably have something in mind like, in a class-OO language,
class Object {};
class Num: Object { public: Num add(...){...} };
class Int: Num { int i; ... };
And then you would be able to use an Int value as the argument to a function that expects a Num argument, or a Num value as the argument to a function that expects an Object.
But that's not at all how Haskell's type classes work. Num is not a class of values (like, in the above example it would be the class of all values that belong to one of the subclasses). Instead, it's the class of all types that represent specific flavours of numbers.
How is that different? Well, a polymorphic literal like 1 :: Num a => a does not generate a specific Num value that can then be upcast to a more general class. Instead, it expects the caller to first pick a concrete type in which to render the number, then generates the number immediately in that type, and afterwards the type never changes.
In other words, a polymorphic value has an implicit type-level argument. Whoever wants to use i needs to do so in a context where both of the following hold (see the sketch after this list):
It is unambiguous which type should be used for a. (It doesn't necessarily need to be fixed right there: the caller could also itself be a polymorphic function.)
The compiler can prove that this type a has a Num instance.
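A minimal sketch of those two conditions in action, assuming the definition of i from the exercise (the use-site names atInt and atDouble are ours):
{-# LANGUAGE TypeApplications #-}

i :: Num a => a
i = 1

atInt :: Int
atInt = i              -- the context fixes a to Int, which has a Num instance

atDouble :: Double
atDouble = i @Double   -- or the choice can be made explicit with a type application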
In C++, the analogue of Haskell typeclasses / polymorphic literal is not [sub]classes and their objects, but instead templates that are constrained to a concept:
#include <concepts>
template<typename A>
concept Num = std::constructible_from<A, int>; // simplified
template<Num A>
A poly_1() {
return 1;
}
Now, poly_1 can be used in any setting that demands a type which fulfills the Num concept, i.e. in particular a type that is constructible_from an int, but not in a context which requires some other type.
(In older C++ such a template would just be duck-typed, i.e. it's not explicit that it requires a Num setting but the compiler would just try to use it as such and then give a type error upon noticing that 1 can't be converted to the specified type.)
tl;dr
A value i' declared as i' :: a must be usable¹ in place of any other value, with no exception. 1 is no such value, as it can't be used, say, where a String is expected.
Longer version
Let's start from a simpler scenario, where you clearly do need a type constraint:
plus :: a -> a -> a
plus x y = x + y
This does not compile, because the signature is equivalent to plus :: forall a. a -> a -> a, and it is plainly not true that the RHS, x + y, is meaningful for any common type a that x and y are inhabitants of. So you can fix the above by providing a constraint guaranteeing that + is possible between two as, and you can do so by putting Num a => right after :: (or even by giving up on polymorphic types and just changing a to Int).
But there are functions that don't require any constraints on their arguments. Here are three of them:
id :: a -> a
id x = x
const :: a -> b -> a
const x _ = x
Data.Tuple.swap :: (a, b) -> (b, a)
Data.Tuple.swap (a, b) = (b, a)
You can pass anything to these functions, and they'll always work, because their definitions make no assumption whatsoever on what can be done with those objects, as they just shuffle/ditch them.
Similarly,
i' :: a
i' = 1
cannot compile because it's not true that 1 can represent a value of any type a. It can't represent a String, for instance, whereas the signature i' :: a expresses the idea that you can put i' in any place: where an Int is expected, where a generic Num is expected, where a String is expected, and so on.
In other words, the above signature says that you can use i' in both of these statements:
j = i' + 1
k = i' ++ "str"
So the question is: just like we found some functions that have signatures not constraining their arguments in any way, do we have a value that inhabits every single type you can think of?
Yes, there are some values like that, and here are two of them:
i' :: a
i' = error ""
j' :: a
j' = undefined
They're all "bottoms", or ⊥.
(¹) By "usable" I mean that when you write it in some place where the code compiles, the code keeps compiling.

Why can a Num act like a Fractional?

As expected, this works fine:
foo :: Fractional a => a
foo = undefined -- datum
bar :: Num a => a -> a
bar a = undefined -- function
baz :: Fractional a => a
baz = bar foo -- application
This works as expected because every Fractional is also a Num.
So as expected, we can pass a Fractional argument into a Num parameter.
On the other hand, the following works too. I don't understand why.
foo :: Fractional a => a -> a
foo a = undefined -- function
bar :: Num a => a
bar = undefined -- datum
baz :: Fractional a => a
baz = foo bar -- application
Works unexpectedly! There are Nums that are not Fractionals.
So why can I pass a Num argument into a Fractional parameter? Can you explain?
chi's answer gives a great high-level explanation of what's happening. I thought it might also be fun to give a slightly more low-level (but also more mechanical) way to understand this, so that you might be able to approach other similar problems, turn a crank, and get the right answer. I'm going to talk about types as a sort of protocol between the user of the value of that type and the implementer.
For forall a. t, the caller gets to choose a type, then they continue with protocol t (where a has been replaced with the caller's choice everywhere in t).
For Foo a => t, the caller must provide proof to the implementer that a is an instance of Foo. Then they continue with protocol t.
For t1 -> t2, the caller gets to choose a value of type t1 (e.g. by running protocol t1 with the roles of implementer and caller switched). Then they continue with protocol t2.
For any type t (that is, at any time), the implementer can cut the protocol short by just producing a value of the appropriate type. If none of the rules above apply (e.g. if we have reached a base type like Int or a bare type variable like a), the implementer must do so.
Now let's give some distinct names to your terms so we can differentiate them:
valFrac :: forall a. Fractional a => a
valNum :: forall a. Num a => a
idFrac :: forall a. Fractional a => a -> a
idNum :: forall a. Num a => a -> a
We also have two definitions we want to explore:
applyIdNum :: forall a. Fractional a => a
applyIdNum = idNum valFrac
applyIdFrac :: forall a. Fractional a => a
applyIdFrac = idFrac valNum
Let's talk about applyIdNum first. The protocol says:
Caller chooses a type a.
Caller proves it is Fractional.
Implementer provides a value of type a.
The implementation says:
Implementer starts the idNum protocol as the caller. So, she must:
Choose a type a. She quietly makes the same choice as her caller did.
Prove that a is an instance of Num. This is no problem, because she actually knows that a is Fractional, and this implies Num.
Provide a value of type a. Here she chooses valFrac. To be complete, she must then show that valFrac has the type a.
So the implementer now runs the valFrac protocol. She:
Chooses a type a. Here she quietly chooses the type that idNum is expecting, which happens to coincidentally be the same as the type that her caller chose for a.
Proves that a is an instance of Fractional. She uses the same proof her caller did.
The implementer of valFrac then promises to provide a value of type a, as needed.
For completeness, here is the analogous discussion for applyIdFrac. The protocol says:
Caller chooses a type a.
Caller proves that a is Fractional.
Implementer must provide a value of type a.
The implementation says:
Implementer will execute the idFrac protocol. So, she must:
Choose a type. Here she quietly chooses whatever her caller chose.
Prove that a is Fractional. She passes on her caller's proof of this.
Choose a value of type a. She will execute the valNum protocol to do this; and we must check that this produces a value of type a.
During the execution of the valNum protocol, she:
Chooses a type. Here she chooses the type that idFrac expects, namely a; this also happens to be the type her caller chose.
Proves that Num a holds. This she can do, because her caller supplied a proof that Fractional a, and you can extract a proof of Num a from a proof of Fractional a.
The implementer of valNum then provides a value of type a, as needed.
With all the details on the field, we can now try to zoom out and see the big picture. Both applyIdNum and applyIdFrac have the same type, namely forall a. Fractional a => a. So the implementer in both cases gets to assume that a is an instance of Fractional. But since all Fractional instances are Num instances, this means the implementer gets to assume both Fractional and Num apply. This makes it easy to use functions or values that assume either constraint in the implementation.
P.S. I repeatedly used the adverb "quietly" for choices of types needed during the forall a. t protocol. This is because Haskell tries very hard to hide these choices from the user. But you can make them explicit if you like with the TypeApplications extension; choosing type t in protocol f uses the syntax f @t. Instance proofs are still silently managed on your behalf, though.
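For instance, a hedged sketch of making the "quiet" choices explicit (the names follow the answer above, with bodies invented for illustration):
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

valFrac :: forall a. Fractional a => a
valFrac = 0.5                         -- an arbitrary Fractional value, for illustration

idNum :: forall a. Num a => a -> a
idNum x = x

applyIdNum :: forall a. Fractional a => a
applyIdNum = idNum @a (valFrac @a)    -- both "quiet" choices written out: they are the caller's a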
The type a in baz :: Fractional a => a is chosen by whoever calls baz. It is their responsibility to guarantee that their choice of a type is in the Fractional class. Since Fractional is a subclass of Num, the type a must therefore be also a Num. Hence, baz can use both foo and bar.
In other words, because of the subclass relation, the signature
baz :: Fractional a => a
is essentially equivalent to
baz :: (Fractional a, Num a) => a
Your second example is actually of the same kind as the first one: it does not matter which of foo and bar is the function and which is the argument. You might also consider this:
foo :: Fractional a => a
foo = undefined
bar :: Num a => a
bar = undefined
baz :: Fractional a => a
baz = foo + bar -- Works
Works as expected because every Fractional is also a Num.
That is correct, but it's important to be precise about what this means. It means this: every type in the Fractional class is also in the Num class. It does not mean what someone with an OO or dynamic background might understand: “every value in a Num type is also in a Fractional type”. If this were the case, then your reasoning would make sense: then the Num value bar would be insufficiently general to be used in the foo function. ...or actually it wouldn't be, because in an OO language the number hierarchy would work in the other direction: such languages usually allow you to cast any numerical value to a fractional one, but the other direction would incur rounding, which reasonably strongly typed languages won't do automatically!
In Haskell, you need to worry about none of this, because there are never any implicit type conversions. bar and foo work on the exact same type; that this type happens to be a type variable a is secondary. Now, both bar and foo constrain this single type in different ways, but because it's the same type that's constrained, you simply get the combination (Num a, Fractional a) of both constraints, which, because Num is a superclass of Fractional, is equivalent to Fractional a alone.
TL;DR: a Num a => a is not a Num value; rather, it is a definition that can serve as a value of any Num type, whatever that type turns out to be, as determined by each specific place where it is used.
We define it first, and we use it, later.
And if we've defined it generally, so that it can be used at many different specific types, we can use it later at many different use sites, which will each demand a specific type of value to be provided for it by our definition. As long as that specific type conforms to the type constraints as per the definition and the use site.
That's what being a polymorphic definition is all about.
It is not a polymorphic value. That is a concept from the dynamic world, but ours is a static one. The types in Haskell are not decided at run time. They are known upfront.
Here's what happens:
> numfunc :: Num a => a -> a; numfunc = undefined
> fraval :: Fractional a => a; fraval = undefined
> :t numfunc fraval
numfunc fraval :: Fractional a => a
numfunc demands its argument to be in Num. fraval is a polymorphic definition, able to provide a datum of any type which is in Fractional as might be demanded of it by a particular use. Whatever it will be, since it's in Fractional, it's guaranteed to also be in Num, so it is acceptable by numfunc.
Since we now know that a is in Fractional (because of fraval), the type of the whole application is now known to be in Fractional as well (because of the type of numfunc).
Technically,
fraval :: Fractional a => a -- can provide any Fractional
numfunc :: Num a => a -> a -- is able to process a Num
-------------------------------------------------
numfunc fraval :: (Num a, Fractional a) => a -- can provide a Fractional
And (Num a, Fractional a) is simplified to just Fractional a, since every Fractional type is already a Num.
This of course means that if nothing else in the rest of the code further specifies the type, we'll get an ambiguous-type error (unless some type defaulting kicks in). But there might be. For now this is acceptable, and has a type: a polymorphic type, meaning something else will have to further specify it at each particular use site where it appears. So for now, as a general polymorphic definition, this is perfectly acceptable.
Next,
> frafunc :: Fractional a => a -> a; frafunc = undefined
> numval :: Num a => a; numval = undefined
> :t frafunc numval
frafunc numval :: Fractional a => a
frafunc demands its type to be in Fractional. numval is able to provide a datum of whatever type is demanded of it as long as that type is in Num. So it's perfectly happy to oblige any demand for a Fractional type of value. Of course something else in the code will have to further specialize the types, but whatever. For now it's all good.
Technically,
numval :: Num a => a -- can provide any Num
frafunc :: Fractional a => a -> a -- is able to process a Fractional
-------------------------------------------------
frafunc numval :: (Num a, Fractional a) => a -- can provide any Fractional
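A hedged sketch of "something else in the code" doing that further specification at a particular use site (the bodies here are invented for illustration):
frafunc :: Fractional a => a -> a
frafunc = (/ 2)                -- an arbitrary Fractional operation, for illustration

numval :: Num a => a
numval = 3

atUseSite :: Double
atUseSite = frafunc numval     -- this use site finally pins a down to Double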
(I post this answer because I think the simplest things can be a stumbling block for beginners, and these simplest things can be taken for granted without even noticing, by the experts. As the saying goes, we don't know who discovered water, but it sure wasn't a fish.)

Polymorphism: a constant versus a function

I'm new to Haskell and came across a slightly puzzling example in the Haskell Programming from First Principles book. At the end of Chapter 6 it suddenly occurred to me that the following doesn't work:
constant :: (Num a) => a
constant = 1.0
However, the following works fine:
f :: (Num a) => a -> a
f x = 3*x
I can input any numerical value for x into the function f and nothing will break. It's not constrained to taking integers. This makes sense to me intuitively. But the example with the constant is totally confusing to me.
Over on a reddit thread for the book it was explained (paraphrasing) that the reason why the constant example doesn't work is that the type declaration forces the value of constant to only be things which aren't more specific than Num. So trying to assign a value to it which is from a subclass of Num like Fractional isn't kosher.
If that explanation is correct, then am I wrong in thinking that these two examples seem like complete opposites of each other? In one case, the type declaration forces the value to be as general as possible. In the other case, the accepted values for the function can be anything that implements Num.
Can anyone set me straight on this?
It can sometimes help to read types as a game played between two actors, the implementor of the type and the user of the type. To do a good job of explaining this perspective, we have to introduce something that Haskell hides from you by default: we will add binders for all type variables. So your types would actually become:
constant :: forall a. Num a => a
f :: forall a. Num a => a -> a
Now, we will read type formation rules thusly:
forall a. t means: the caller chooses a type a, and the game continues as t
c => t means: the caller shows that constraint c holds, and the game continues as t
t -> t' means: the caller chooses a value of type t, and the game continues as t'
t (where t is a monomorphic type such as a bare variable or Integer or similar) means: the implementor produces a value of type t
We will need a few other details to truly understand things here, so I will quickly say them here:
When we write a number with no decimal points, the compiler implicitly converts this to a call to fromInteger applied to the Integer produced by parsing that number. We have fromInteger :: forall a. Num a => Integer -> a.
When we write a number with decimal points, the compiler implicitly converts this to a call to fromRational applied to the Rational produced by parsing that number. We have fromRational :: forall a. Fractional a => Rational -> a.
The Num class includes the method (*) :: forall a. Num a => a -> a -> a.
Now let's try to walk through your two examples slowly and carefully.
constant :: forall a. Num a => a
constant = 1.0 {- = fromRational (1 % 1) -}
The type of constant says: the caller chooses a type, shows that this type implements Num, and then the implementor must produce a value of that type. Now the implementor tries to play his own game by calling fromRational :: Fractional a => Rational -> a. He chooses the same type the caller did, and then makes an attempt to show that this type implements Fractional. Oops! He can't show that, because the only thing the caller proved to him was that a implements Num -- which doesn't guarantee that a also implements Fractional. Dang. So the implementor of constant isn't allowed to call fromRational at that type.
Now, let's look at f:
f :: forall a. Num a => a -> a
f x = 3*x {- = fromInteger 3 * x -}
The type of f says: the caller chooses a type, shows that the type implements Num, and chooses a value of that type. The implementor must then produce another value of that type. He is going to do this by playing his own game with (*) and fromInteger. In particular, he chooses the same type the caller did. But now fromInteger and (*) only demand that he prove that this type is an instance of Num -- so he passes off the proof the caller gave him of this and saves the day! Then he chooses the Integer 3 for the argument to fromInteger, and chooses the result of this and the value the caller handed him as the two arguments to (*). Everybody is satisfied, and the implementor gets to return a new value.
The point of this whole exposition is this: the Num constraint in both cases is enforcing exactly the same thing, namely, that whatever type we choose to instantiate a at must be a member of the Num class. It's just that in the definition constant = 1.0 being in Num isn't enough to do the operations we've written, whereas in f x = 3*x being in Num is enough to do the operations we've written. And since the operations we've chosen for the two things are so different, it should not be too surprising that one works and the other doesn't!
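To make the contrast concrete, here is a minimal sketch (with names of our own) of two variants of the constant that do compile:
constantNum :: Num a => a
constantNum = 1            -- an integer literal only needs fromInteger, which every Num provides

constantFrac :: Fractional a => a
constantFrac = 1.0         -- a decimal literal needs fromRational, so Fractional must be demanded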
When you have a polymorphic value, the caller chooses which concrete type to use. The Haskell report defines the type of numeric literals, namely:
integer and floating literals have the typings (Num a) => a and
(Fractional a) => a, respectively
3 is an integer literal so has type Num a => a and (*) has type Num a => a -> a -> a so f has type Num a => a -> a.
In contrast, 3.0 has type Fractional a => a. Since Fractional is a subclass of Num, your type signature for constant is invalid: the caller could choose a type for a which is a Num but not a Fractional, e.g. Int or Integer.
They don't mean the opposite; they mean exactly the same thing ("as general as possible"). A typeclass gives you all the guarantees you can rely upon: if typeclass T provides function f, you can use it for all instances of T; but even if some of those instances are members of G (which provides g) as well, requiring a type to be in T is not sufficient to call g.
In your case this means:
Members of Num are guaranteed to provide a conversion from integer literals (i.e. the default form of integral values, like 1 or 1000), via the fromInteger function.
However, they are not guaranteed to provide a conversion from rational literals (like 1.0). The Fractional typeclass does provide this, via its fromRational function, but that doesn't help here, as your signature only requires Num.

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
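If the body really is meant to work at any Integral type the caller picks, it has to construct its result polymorphically; a minimal sketch:
f :: Integral b => a -> b
f _ = fromInteger 3   -- the result is built with fromInteger, so it inhabits whatever Integral type the caller picks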
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCI and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCI is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCI isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0, you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
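A hedged sketch of the same computation in a file, where an explicit signature keeps the binding polymorphic (disabling the restriction has a similar effect):
example :: Double
example =
  let a :: Num t => t
      a = 1          -- the explicit signature keeps a polymorphic despite the restriction
      b = 2.0        -- monomorphic, later unified with Double from the outer signature
  in a + b           -- a is instantiated at Double here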
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
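A hedged sketch of that point: a toy complex-number type (our own, not the standard Data.Complex) can be a perfectly reasonable Num, yet Num gives you no way to turn one into an Int:
data Complex' = Complex' Double Double

instance Num Complex' where
  Complex' a b + Complex' c d = Complex' (a + c) (b + d)
  Complex' a b * Complex' c d = Complex' (a*c - b*d) (a*d + b*c)
  negate (Complex' a b)       = Complex' (negate a) (negate b)
  abs z                       = z                        -- simplified for this sketch
  signum z                    = z                        -- simplified for this sketch
  fromInteger n               = Complex' (fromInteger n) 0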
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
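As a hedged illustration of that inconvenience (using Integral constraints so a conversion is even possible; the names are ours):
addPoly :: (Integral a, Integral b, Num c) => a -> b -> c
addPoly x y = fromIntegral x + fromIntegral y

usage :: Double
usage = addPoly (1 :: Int) (2 :: Integer)   -- every use site has to pin down three types instead of one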
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)

Resources