Overloaded / Polymorphic Functions with different types - Haskell

I'm learning Haskell and came across an unanswered question:
From the lesson I'm following:
(+) :: Num a => a -> a -> a
For any numeric type a, (+) takes 2 values of type a and returns a value of type a.
As per the examples:
1 + 1
-- 2   (type a is Int)
3.0 + 4.0
-- 7.0 (type a is Float)
'a' + 'b'
-- Type error: Char is not a numeric type
This makes perfect sense, but I end up not understanding what goes on in the background when:
1 + 3.0
Is the type inference system auto-casting my Int to a Float because it knows it will return a Float?

You should investigate these sorts of questions in ghci. It's an invaluable learning resource:
$ ghci
GHCi, version 9.0.1: https://www.haskell.org/ghc/ :? for help
ghci> :t 1
1 :: Num p => p
ghci> :t 3.0
3.0 :: Fractional p => p
ghci> :t 1 + 3.0
1 + 3.0 :: Fractional a => a
First lesson: numeric literals are polymorphic. 1 isn't an Int; it's polymorphic, and can be any instance of Num that makes the code compile. 3.0 isn't a Float; it can be any instance of Fractional that makes the code compile. (The difference is the decimal point in the literal: it restricts the allowed types.)
Second lesson: when you combine things into an expression, their types get unified. When you unify Num and Fractional constraints, you get a Fractional constraint, because Fractional is defined to require that all of its instances also be instances of Num.
For a bit more, let's turn on warnings and see what additional information they provide.
ghci> :set -Wall
ghci> 1
<interactive>:5:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Show a0) arising from a use of ‘print’ at <interactive>:5:1
(Num a0) arising from a use of ‘it’ at <interactive>:5:1
• In a stmt of an interactive GHCi command: print it
1
ghci> 1 + 3.0
<interactive>:6:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Double’
(Show a0) arising from a use of ‘print’ at <interactive>:6:1-7
(Fractional a0) arising from a use of ‘it’ at <interactive>:6:1-7
• In a stmt of an interactive GHCi command: print it
4.0
When printing a value, ghci requires the type to have a Show instance. Fortunately, that isn't too important a detail here, but it's why the defaulting warnings refer to Show.
The lessons to observe here: when inference doesn't require something more specific, the default type for something with a Num constraint is Integer, not Int, and the default type for something with a Fractional constraint is Double, not Float. (Float is basically never used. Forget it exists.)
So when inference runs, the expression 1 + 3.0 is inferred to have the type Fractional a => a. In the absence of further requirements on the type, defaulting kicks in and says "a is Double". Then that information flows back through the (+) to its arguments and requires each of them to also be Double. Fortunately, each argument is a polymorphic literal that can take the type Double. Type checking succeeds, instances are chosen, addition happens, the result is printed.
It's very important to this process that numeric literals are polymorphic. Haskell has no implicit conversions between any pair of types. Especially not numeric types. If you want to actually convert values from one type to another, you must call a function that does the conversion you desire. (fromIntegral, round, floor, ceiling, and realToFrac are the most common numeric conversion functions.) But when the values are polymorphic, it means inference can pick a matching type without needing a conversion.
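To make the "no implicit conversions" point concrete, here is a minimal sketch (the names addMixed and halve are invented for illustration):
addMixed :: Int -> Double -> Double
addMixed n x = fromIntegral n + x   -- fromIntegral :: (Integral a, Num b) => a -> b

halve :: Double -> Int
halve x = round (x / 2)             -- round :: (RealFrac a, Integral b) => a -> b
Neither definition compiles without the explicit conversion; writing n + x alone is rejected because Int and Double never unify.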

Related

Is there a way to stop Haskell from complaining about the type being ambiguous? [duplicate]

I'm puzzled by how the Haskell compiler sometimes infers types that are less
polymorphic than what I'd expect, for example when using point-free definitions.
It seems like the issue is the "monomorphism restriction", which is on by default on
older versions of the compiler.
Consider the following Haskell program:
{-# LANGUAGE MonomorphismRestriction #-}
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
If I compile this with ghc I obtain no errors and the output of the executable is:
3.0
3.0
[1,2,3]
If I change the main body to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ sort [3, 1, 2]
I get no compile time errors and the output becomes:
3.0
3
[1,2,3]
as expected. However if I try to change it to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
I get a type error:
test.hs:13:16:
No instance for (Fractional Int) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the second argument of ‘($)’, namely ‘plus 1.0 2.0’
In a stmt of a 'do' block: print $ plus 1.0 2.0
The same happens when trying to call sort twice with different types:
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
  print $ sort "cba"
produces the following error:
test.hs:14:17:
No instance for (Num Char) arising from the literal ‘3’
In the expression: 3
In the first argument of ‘sort’, namely ‘[3, 1, 2]’
In the second argument of ‘($)’, namely ‘sort [3, 1, 2]’
Why does ghc suddenly think that plus isn't polymorphic and requires an Int argument?
The only reference to Int is in an application of plus; how can that matter
when the definition is clearly polymorphic?
Why does ghc suddenly think that sort requires a Num Char instance?
Moreover if I try to place the function definitions into their own module, as in:
{-# LANGUAGE MonomorphismRestriction #-}
module TestMono where
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
I get the following error when compiling:
TestMono.hs:10:15:
No instance for (Ord a0) arising from a use of ‘compare’
The type variable ‘a0’ is ambiguous
Relevant bindings include
sort :: [a0] -> [a0] (bound at TestMono.hs:10:1)
Note: there are several potential instances:
instance Integral a => Ord (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
instance Ord () -- Defined in ‘GHC.Classes’
instance (Ord a, Ord b) => Ord (a, b) -- Defined in ‘GHC.Classes’
...plus 23 others
In the first argument of ‘sortBy’, namely ‘compare’
In the expression: sortBy compare
In an equation for ‘sort’: sort = sortBy compare
Why isn't ghc able to use the polymorphic type Ord a => [a] -> [a] for sort?
And why does ghc treat plus and plus' differently? plus should have the
polymorphic type Num a => a -> a -> a and I don't really see how this is different
from the type of sort and yet only sort raises an error.
Last thing: if I comment out the definition of sort, the file compiles. However,
if I load it into ghci and check the types I get:
*TestMono> :t plus
plus :: Integer -> Integer -> Integer
*TestMono> :t plus'
plus' :: Num a => a -> a -> a
Why isn't the type for plus polymorphic?
This is the canonical question about monomorphism restriction in Haskell
as discussed in [the meta question](https://meta.stackoverflow.com/questions/294053/can-we-provide-a-canonical-questionanswer-for-haskells-monomorphism-restrictio).
What is the monomorphism restriction?
The monomorphism restriction as stated by the Haskell wiki is:
a counter-intuitive rule in Haskell type inference.
If you forget to provide a type signature, sometimes this rule will fill
the free type variables with specific types using "type defaulting" rules.
What this means is that, in some circumstances, if your type is ambiguous (i.e. polymorphic)
the compiler will choose to instantiate that type to something not ambiguous.
How do I fix it?
First of all, you can always explicitly provide a type signature, and this will
avoid triggering the restriction:
plus :: Num a => a -> a -> a
plus = (+) -- Okay!
-- Runs as:
Prelude> plus 1.0 1
2.0
Note that only normal type signatures on variables count for this purpose, not expression type signatures. For example, writing this would still result in the restriction being triggered:
plus = (+) :: Num a => a -> a -> a
Alternatively, if you are defining a function, you can avoid point-free style,
and for example write:
plus x y = x + y
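For instance, checking the point-full version in GHCi confirms it keeps its polymorphic type (a sketch of a session; exact output can vary by version):
Prelude> let plus x y = x + y
Prelude> :t plus
plus :: Num a => a -> a -> a
Prelude> plus 1.0 1
2.0
This works because function bindings (definitions with at least one argument on the left) are not subject to the restriction, as explained below.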
Turning it off
It is possible to simply turn off the restriction so that you don't have to do
anything to your code to fix it. The behaviour is controlled by two extensions:
MonomorphismRestriction will enable it (which is the default) while
NoMonomorphismRestriction will disable it.
You can put the following line at the very top of your file:
{-# LANGUAGE NoMonomorphismRestriction #-}
If you are using GHCi you can enable the extension using the :set command:
Prelude> :set -XNoMonomorphismRestriction
You can also tell ghc to enable the extension from the command line:
ghc ... -XNoMonomorphismRestriction
Note: you should really prefer the first option over enabling the extension via command-line options.
Refer to GHC's page for an explanation of this and other extensions.
A complete explanation
I'll try to summarize below everything you need to know to understand what the
monomorphism restriction is, why it was introduced and how it behaves.
An example
Take the following trivial definition:
plus = (+)
you might think you'd be able to replace every occurrence of + with plus. In particular, since (+) :: Num a => a -> a -> a, you'd expect plus :: Num a => a -> a -> a as well.
Unfortunately this is not the case. For example if we try the following in GHCi:
Prelude> let plus = (+)
Prelude> plus 1.0 1
We get the following output:
<interactive>:4:6:
No instance for (Fractional Integer) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the expression: plus 1.0 1
In an equation for ‘it’: it = plus 1.0 1
You may need to :set -XMonomorphismRestriction in newer GHCi versions.
And in fact we can see that the type of plus is not what we would expect:
Prelude> :t plus
plus :: Integer -> Integer -> Integer
What happened is that the compiler saw that plus had the polymorphic type Num a => a -> a -> a.
Moreover, the above definition falls under the rules that I'll explain later, and so
the compiler decided to make the type monomorphic by defaulting the type variable a.
The default is Integer, as we can see.
Note that if you try to compile the above code using ghc you won't get any errors.
This is due to how ghci handles (and must handle) the interactive definitions.
Basically every statement entered in ghci must be completely type checked before
the following is considered; in other words it's as if every statement was in a separate
module. Later I'll explain why this matters.
Another example
Consider the following definitions:
f1 x = show x
f2 = \x -> show x
f3 :: (Show a) => a -> String
f3 = \x -> show x
f4 = show
f5 :: (Show a) => a -> String
f5 = show
We'd expect all these functions to behave in the same way and have the same type,
i.e. the type of show: Show a => a -> String.
Yet when compiling the above definitions we obtain the following errors:
test.hs:3:12:
No instance for (Show a1) arising from a use of ‘show’
The type variable ‘a1’ is ambiguous
Relevant bindings include
x :: a1 (bound at blah.hs:3:7)
f2 :: a1 -> String (bound at blah.hs:3:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show x
In the expression: \ x -> show x
In an equation for ‘f2’: f2 = \ x -> show x
test.hs:8:6:
No instance for (Show a0) arising from a use of ‘show’
The type variable ‘a0’ is ambiguous
Relevant bindings include f4 :: a0 -> String (bound at blah.hs:8:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show
In an equation for ‘f4’: f4 = show
So f2 and f4 don't compile. Moreover when trying to define these function
in GHCi we get no errors, but the type for f2 and f4 is () -> String!
The monomorphism restriction is what makes f2 and f4 require a monomorphic
type, and the different behaviour between ghc and ghci is due to different
defaulting rules.
When does it happen?
In Haskell, as defined by the report, there are two distinct kinds of bindings:
function bindings and pattern bindings. A function binding is nothing more than
a definition of a function:
f x = x + 1
Its syntax is:
<identifier> arg1 arg2 ... argn = expr
where there must be at least one argument (modulo guards and where declarations,
but they don't really matter here).
A pattern binding is a declaration of the form:
<pattern> = expr
Again, modulo guards.
Note that variables are patterns, so the binding:
plus = (+)
is a pattern binding. It's binding the pattern plus (a variable) to the expression (+).
When a pattern binding consists of only a variable name it's called a
simple pattern binding.
The monomorphism restriction applies to simple pattern bindings!
Well, formally we should say that:
A declaration group is a minimal set of mutually dependent bindings.
Section 4.5.1 of the report.
And then (Section 4.5.5 of the report):
a given declaration group is unrestricted if and only if:
every variable in the group is bound by a function binding (e.g. f x = x)
or a simple pattern binding (e.g. plus = (+); Section 4.4.3.2 ), and
an explicit type signature is given for every variable in the group that
is bound by simple pattern binding. (e.g. plus :: Num a => a -> a -> a; plus = (+)).
Examples added by me.
So a restricted declaration group is a group where either there are
non-simple pattern bindings (e.g. (x:xs) = f something or (f, g) = ((+), (-))) or
there is some simple pattern binding without a type signature (as in plus = (+)).
The monomorphism restriction affects restricted declaration groups.
Most of the time you don't define mutually recursive functions, and hence a declaration
group is just a single binding.
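As a concrete sketch (the names are invented; the comments show the types a recent GHC infers when compiling these bindings with the restriction on and no other uses in the module):
inc x = x + 1   -- function binding: unrestricted, inc :: Num a => a -> a

plus :: Num a => a -> a -> a
plus = (+)      -- simple pattern binding with a signature: unrestricted

plus2 = (+)     -- simple pattern binding, no signature: restricted;
                -- rule 2 below defaults it to Integer -> Integer -> Integer

(f, g) = ((+), (-))  -- non-simple pattern binding: restricted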
What does it do?
The monomorphism restriction is described by two rules in Section
4.5.5 of the report.
First rule
The usual Hindley-Milner restriction on polymorphism is that only type
variables that do not occur free in the environment may be generalized.
In addition, the constrained type variables of a restricted declaration
group may not be generalized in the generalization step for that group.
(Recall that a type variable is constrained if it must belong to some
type class; see Section 4.5.2 .)
The second sentence of the quote is what the monomorphism restriction introduces.
It says that if the type is polymorphic (i.e. it contains some type variable)
and that type variable is constrained (i.e. it has a class constraint on it:
e.g. the type Num a => a -> a -> a is polymorphic because it contains a, and
also constrained because a has the constraint Num on it),
then it cannot be generalized.
In simple words, not generalizing means that uses of the function plus may change its type.
If you had the definitions:
plus = (+)
x :: Integer
x = plus 1 2
y :: Double
y = plus 1.0 2
then you'd get a type error. Because when the compiler sees that plus is
called over an Integer in the declaration of x it will unify the type
variable a with Integer and hence the type of plus becomes:
Integer -> Integer -> Integer
but then, when it will type check the definition of y, it will see that plus
is applied to a Double argument, and the types don't match.
Note that you can still use plus without getting an error:
plus = (+)
x = plus 1.0 2
In this case the type of plus is first inferred to be Num a => a -> a -> a
but then its use in the definition of x, where 1.0 requires a Fractional
constraint, will change it to Fractional a => a -> a -> a.
Rationale
The report says:
Rule 1 is required for two reasons, both of which are fairly subtle.
Rule 1 prevents computations from being unexpectedly repeated.
For example, genericLength is a standard function (in library Data.List)
whose type is given by
genericLength :: Num a => [b] -> a
Now consider the following expression:
let len = genericLength xs
in (len, len)
It looks as if len should be computed only once, but without Rule 1 it
might be computed twice, once at each of two different overloadings.
If the programmer does actually wish the computation to be repeated,
an explicit type signature may be added:
let len :: Num a => a
    len = genericLength xs
in (len, len)
For this point the example from the wiki is, I believe, clearer.
Consider the function:
f xs = (len, len)
  where
    len = genericLength xs
If len were polymorphic the type of f would be:
f :: (Num a, Num b) => [c] -> (a, b)
So the two elements of the tuple (len, len) could actually be
different values! But this means that the computation done by genericLength
must be repeated to obtain the two different values.
The rationale here is that the code contains one function call, but not introducing
this rule could produce two hidden function calls, which is counter-intuitive.
With the monomorphism restriction the type of f becomes:
f :: Num a => [b] -> (a, a)
In this way there is no need to perform the computation multiple times.
Rule 1 prevents ambiguity. For example, consider the declaration group
[(n,s)] = reads t
Recall that reads is a standard function whose type is given by the signature
reads :: (Read a) => String -> [(a,String)]
Without Rule 1, n would be assigned the type ∀ a. Read a ⇒ a and s
the type ∀ a. Read a ⇒ String.
The latter is an invalid type, because it is inherently ambiguous.
It is not possible to determine at what overloading to use s,
nor can this be solved by adding a type signature for s.
Hence, when non-simple pattern bindings are used (Section 4.4.3.2 ),
the types inferred are always monomorphic in their constrained type variables,
irrespective of whether a type signature is provided.
In this case, both n and s are monomorphic in a.
Well, I believe this example is self-explanatory. There are situations when not
applying the rule results in type ambiguity.
If you disable the restriction as suggested above, you will get a type error when
trying to compile the above declaration. However this isn't really a problem:
you already know that when using read you have to somehow tell the compiler
which type it should try to parse...
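For example, a sketch of resolving the ambiguity with an annotation (output from a recent GHCi; exact formatting may vary):
Prelude> let [(n, s)] = reads "42 rest" :: [(Int, String)]
Prelude> (n, s)
(42," rest")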
Second rule
Any monomorphic type variables that remain when type inference for an
entire module is complete, are considered ambiguous, and are resolved
to particular types using the defaulting rules (Section 4.3.4 ).
This means that if you have your usual definition:
plus = (+)
This will have a type Num a => a -> a -> a where a is a
monomorphic type variable due to rule 1 described above. Once the whole module
is inferred the compiler will simply choose a type that will replace that a
according to the defaulting rules.
The final result is: plus :: Integer -> Integer -> Integer.
Note that this is done after the whole module is inferred.
This means that if you have the following declarations:
plus = (+)
x = plus 1.0 2.0
inside a module, before type defaulting the type of plus will be:
Fractional a => a -> a -> a (see rule 1 for why this happens).
At this point, following the defaulting rules, a will be replaced by Double
and so we will have plus :: Double -> Double -> Double and x :: Double.
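You can check this by putting exactly those two bindings in a file (the file name Plus.hs is arbitrary) and loading it; the session below is a sketch of what a recent GHCi reports:
Prelude> :load Plus.hs
*Main> :t plus
plus :: Double -> Double -> Double
*Main> :t x
x :: Double
Entering plus = (+) alone at the GHCi prompt (with the restriction enabled) would instead default it to Integer immediately, because each statement is type checked on its own, as noted earlier.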
Defaulting
As stated before there exist some defaulting rules, described in Section 4.3.4 of the Report,
that the inferencer can adopt and that will replace a polymorphic type with a monomorphic one.
This happens whenever a type is ambiguous.
For example in the expression:
let x = read "<something>" in show x
here the expression is ambiguous because the types for show and read are:
show :: Show a => a -> String
read :: Read a => String -> a
So x has type Read a => a. But this constraint is satisfied by a lot of types:
Int, Double or (), for example. Which one should be chosen? There's nothing that can tell us.
In this case we can resolve the ambiguity by telling the compiler which type we want,
adding a type signature:
let x = read "<something>" :: Int in show x
Now the problem is: since Haskell uses the Num type class to handle numbers,
there are a lot of cases where numerical expressions contain ambiguities.
Consider:
show 1
What should the result be?
As before 1 has type Num a => a and there are many type of numbers that could be used.
Which one to choose?
Having a compiler error almost every time we use a number isn't a good thing,
and hence the defaulting rules were introduced. The rules can be controlled
using a default declaration. By specifying default (T1, T2, T3) we can change
how the inferencer defaults the different types.
An ambiguous type variable v is defaultable if:
v appears only in constraints of the kind C v, where C is a class
(i.e. if it appears as in Monad (m v), then it is not defaultable);
at least one of these classes is Num or a subclass of Num;
all of these classes are defined in the Prelude or a standard library.
A defaultable type variable is replaced by the first type in the default list
that is an instance of all the ambiguous variable’s classes.
The default default declaration is default (Integer, Double).
For example:
plus = (+)
minus = (-)
x = plus 1.0 1
y = minus 2 1
The types inferred would be:
plus :: Fractional a => a -> a -> a
minus :: Num a => a -> a -> a
which, by defaulting rules, become:
plus :: Double -> Double -> Double
minus :: Integer -> Integer -> Integer
Note that this explains why in the example in the question only the sort
definition raises an error. The type Ord a => [a] -> [a] cannot be defaulted
because Ord isn't a numeric class.
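As a sketch of controlling the rules yourself, here is a hedged example of a custom default declaration (the module name Main is incidental; default declarations only scope over the module containing them):
module Main where

default (Int, Rational)

x = 2 + 2     -- Num constraint: defaults to Int, the first candidate in the list
y = 1.5 + 1   -- Fractional constraint: Int doesn't fit, so Rational is chosen

main :: IO ()
main = print (x, y)
With the usual default (Integer, Double) these would have been Integer and Double respectively.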
Extended defaulting
Note that GHCi comes with extended defaulting rules,
which can be enabled in files as well using the ExtendedDefaultRules extension.
Under these rules, the defaultable type variables need not appear only in constraints
where all the classes are standard, and there must be at least one class that is among
Eq, Ord, Show or Num and its subclasses.
Moreover the default default declaration becomes default ((), Integer, Double).
This may produce odd results. Taking the example from the question:
Prelude> :set -XMonomorphismRestriction
Prelude> import Data.List(sortBy)
Prelude Data.List> let sort = sortBy compare
Prelude Data.List> :t sort
sort :: [()] -> [()]
in ghci we don't get a type error, but the Ord a constraint results in
a default of (), which is pretty much useless.
Useful links
There are a lot of resources and discussions about the monomorphism restriction.
Here are some links that I find useful and that may help you understand the topic or dig deeper into it:
Haskell's wiki page: Monomorphism Restriction
The report
An accessible and nice blog post
Sections 6.2 and 6.3 of A History Of Haskell: Being Lazy With Class deal with the monomorphism restriction and type defaulting

How does equality work for numeric types?

I see that Haskell allows different numeric types to be compared:
*Main> :t 3
3 :: Num t => t
*Main> :t 3.0
3.0 :: Fractional t => t
*Main> 3 == 3.0
True
Where is the source code for the Eq instance for numeric types? If I create a new type, such as ComplexNumber, can I extend == to work for it? (I might want complex numbers with no imaginary parts to be potentially equal to real numbers.)
“Haskell allows different numeric types to be compared”: no, it doesn't. What Haskell actually allows is the same literals to denote values of different types. In particular, you can do
Prelude> let a = 3.7 :: Double
Prelude> let b = 1 :: Double
Prelude> a + b
4.7
OTOH, if I declared these explicitly with conflicting types, the addition would fail:
Prelude> let a = 3.7 :: Double
Prelude> let b = 1 :: Int
Prelude> a + b
<interactive>:31:5:
Couldn't match expected type ‘Double’ with actual type ‘Int’
In the second argument of ‘(+)’, namely ‘b’
In the expression: a + b
Now, Double is not the most general possible type for either a or b. In fact all number literals are polymorphic, but before any operation (like equality comparison) happens, such a polymorphic type needs to be pinned down to a concrete monomorphic one. Like,
Prelude> (3.0 :: Double) == (3 :: Double)
True
Because ==, contrary to your premise, actually requires both sides to have the same type, you can omit the signature on either side without changing anything:
Prelude> 3.0 == (3 :: Double)
True
In fact, even without any type annotation, GHCi will still treat both sides as Double. This is because of type defaulting – in this particular case, Fractional is the strongest constraint on the shared number type, and for Fractional, the default type is Double. OTOH, if both sides had been integral literals, then GHCi would have chosen Integer. This can sometimes make a difference, for instance
Prelude> 10000000000000000 == 10000000000000001
False
but
Prelude> 10000000000000000 == (10000000000000001 :: Double)
True
because in the latter case, the final 1 is lost in the floating-point error.
There is nothing magical going on here. 3 can be either a real number or an integer, but its type gets unified with a real number type when compared to 3.0.
Note the type class:
class Eq a where
  (==) :: a -> a -> Bool
So (==) really does only compare things of the same type. And your example 3 == 3.0 gets its types unified and becomes 3.0 == 3.0 internally.
I'm unsure of any type tricks to make it unify with a user-defined complex number type. My gut tells me it cannot be done.
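That said, while (==) can never accept two different types, literal polymorphism gives you most of what the question asks for: if your complex type has a Num instance, a literal like 3 on either side of (==) is simply elaborated at the complex type. A hedged sketch, with all names invented:
data MyComplex = MyComplex Double Double deriving Show

instance Eq MyComplex where
  MyComplex a b == MyComplex c d = a == c && b == d

instance Num MyComplex where
  MyComplex a b + MyComplex c d = MyComplex (a + c) (b + d)
  MyComplex a b * MyComplex c d = MyComplex (a*c - b*d) (a*d + b*c)
  negate (MyComplex a b)        = MyComplex (negate a) (negate b)
  abs    (MyComplex a b)        = MyComplex (sqrt (a*a + b*b)) 0
  signum z                      = z   -- deliberately simplified for this sketch
  fromInteger n                 = MyComplex (fromInteger n) 0  -- makes literals work
With this, MyComplex 3 0 == 3 evaluates to True, because the right-hand 3 becomes fromInteger 3 :: MyComplex. A value that is already a Double still won't compare directly; that needs an explicit conversion, as usual.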

Strange Haskell expression with type Num ([Char] -> t) => t

While doing some exercises in GHCi I typed the following and got:
ghci> (1 "one")
<interactive>:187:1:
No instance for (Num ([Char] -> a0)) arising from a use of ‘it’
In a stmt of an interactive GHCi command: print it
which is an error; however, if I ask GHCi for the type of the expression, it does not give any error:
ghci> :type (1 "one")
(1 "one") :: Num ([Char] -> t) => t
What is the meaning of (1 "one")?
Why does this expression give an error, yet GHCi tells me it is well typed?
What is the meaning of Num ([Char] -> t) => t?
Thanks.
Haskell Report to the rescue! (Quoting section 6.4.1)
An integer literal represents the application of the function fromInteger to the appropriate value of type Integer.
fromInteger has type:
Prelude> :t fromInteger
fromInteger :: Num a => Integer -> a
So 1 is actually syntax sugar for fromInteger (1 :: Integer). Your expression, then, is:
fromInteger 1 "one"
Which could be written as:
(fromInteger 1) "one"
Now, fromInteger produces a number (that is, a value of a type which is an instance of Num, as its type tells us). In your expression, this number is applied to a [Char] (the string "one"). GHC correctly combines these two pieces of information to deduce that your expression has type:
Num ([Char] -> t) => t
That is, it would be the result (of unspecified type t) of applying a function which is also a Num to a [Char]. That is a valid type in principle. The only problem is that there is no instance of Num for [Char] -> t (that is, functions that take strings are not numbers, which is not surprising).
P.S.: As Sibi and Ørjan point out, in GHC 7.10 and later you will only see the error mentioned in the question if the FlexibleContexts GHC extension is enabled; otherwise the type checker will instead complain about having fixed types and type constructors in the class constraint (that is, Char, [] and (->)).
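As a curiosity, nothing stops you from supplying the missing instance yourself, at which point the strange type becomes inhabited and the expression actually evaluates. A sketch (it needs FlexibleInstances, and it is firmly in do-not-do-this territory):
{-# LANGUAGE FlexibleInstances #-}

instance Num b => Num ([Char] -> b) where
  fromInteger n = \_ -> fromInteger n   -- a "number" that ignores its string argument
  f + g         = \s -> f s + g s
  f * g         = \s -> f s * g s
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
With that in scope, (1 "one") :: Integer evaluates to 1: the literal becomes fromInteger 1 :: [Char] -> Integer, which ignores "one" and returns 1.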
Haskell is a very flexible language, but also a very logical one in a rather literal sense. So often, things that in most languages would just be syntax errors, Haskell will look at them and try its darnedest to make sense of them, with results that are really confusing but are really just the logical consequence of the rules of the language.
For example, if we type your example into Python, it basically tells us "what you just typed in makes zero sense":
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> (1 "one")
File "<stdin>", line 1
(1 "one")
^
SyntaxError: invalid syntax
Ruby does the same thing:
irb(main):001:0> (1 "one")
SyntaxError: (irb):1: syntax error, unexpected tSTRING_BEG, expecting ')'
(1 "one")
^
from /usr/bin/irb:12:in `<main>'
But Haskell doesn't give up that easily! It sees (1 "one"), and it reasons that:
Expressions of the form f x are function applications, where f has type like a -> b, x has type a and f x has type b.
So in the expression 1 "one", 1 must be a function that takes "one" (a [Char]) as its argument.
Then given Haskell's treatment of numeric literals, it translates the 1 into fromInteger 1 :: Num b => [Char] -> b. fromInteger is a method of the Num class, meaning that the user is allowed to supply their own implementations of it for any type—including [Char] -> b if you are so inclined.
So the error message means that Haskell, instead of telling you that what you typed is nonsense, tells you that you haven't taught it how to construct a number of type Num b => [Char] -> b, because that's the really strange thing that would need to be true for the expression to make sense.
TL;DR: It's a garbled nonsense type that isn't worth getting worried over.
Integer literals can represent values of any type that implements the Num typeclass. So 1 or any other integer literal can be used anywhere you need a number.
doubleVal :: Double
doubleVal = 1
intVal :: Int
intVal = 1
integerVal :: Integer
integerVal = 1
This enables us to flexibly use integral literals in any numeric context.
When you just use an integer literal without any type context, ghci doesn't know what type it is.
Prelude> :type 1
1 :: Num a => a
ghci is saying "that '1' is of some type I don't know, but I do know that whatever type it is, that type implements the Num typeclass".
Every occurrence of an integer literal in Haskell source is wrapped with an implicit fromInteger function. So (1 "one") is implicitly converted to ((fromInteger (1::Integer)) "one"), and the subexpression (fromInteger (1::Integer)) has an as-yet unknown type Num a => a, again meaning it's some unknown type, but we know it provides an instance of the Num typeclass.
We can also see that it is applied like a function to "one", so we know that its type must have the form [Char] -> a0, where a0 is yet another unknown type. So a and [Char] -> a0 must be the same. Substituting that back into the Num a => a type we figured out above, we know that 1 must have type Num ([Char] -> a0) => [Char] -> a0, and the expression (1 "one") has type Num ([Char] -> a0) => a0. Read that last type as "there is some type a0 which is the result of applying a function to a [Char] argument, and that function type is an instance of the Num class".
So the expression itself has a valid type Num ([Char] -> a0) => a0.
Haskell has something called the monomorphism restriction. One aspect of this is that all type variables in expressions have to have a specific, known type before you can evaluate them. GHC uses type defaulting rules in certain situations when it can, to accommodate the monomorphism restriction. However, GHC doesn't know of any type a0 it can plug into the type expression above that has a Num instance defined. So it has no way to deal with it, and gives you the "No instance for (Num ...)" message.

How to understand error messages for "1.2 % 3.4" for Haskell?

Prelude> :m +Data.Ratio
Prelude Data.Ratio> 4.3 % 1.2
<interactive>:11:1:
No instance for (Show a0) arising from a use of ‘print’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (Ratio a)
-- Defined in ‘GHC.Real’
...plus 23 others
In a stmt of an interactive GHCi command: print it
Prelude Data.Ratio>
This type error message is a little unfriendly, methinks.
What is actually happening here is that your expression has type
Prelude Data.Ratio> :t 4.3 % 1.2
4.3 % 1.2 :: (Integral a, Fractional a) => Ratio a
Integral a means your type a has to be integer-like, while Fractional a means that it has to be floating-point or rational-like. There are no types in standard Haskell which are both.
When evaluating this expression in order to print it, GHCi first infers its type, with the additional restriction that it needs to be printable:
(Integral a, Fractional a, Show a) => Ratio a
Now since this is still too ambiguous to evaluate, GHCi tries defaulting a to either Integer or Double. But neither of them fits here: each is a member of just one of the classes Integral and Fractional.
Then, after failing at defaulting, GHCi gives an error message that tells you one of the constraints it was failing to satisfy. And particularly confusingly, it happens to choose the one which has nothing to do with why it failed...
Anyway, to fix your problem: % is not the function to divide two rationals, it is the function to construct a rational from an integer-like numerator and denominator. To divide them instead, use
4.3 / 1.2 :: Rational
The :: Rational tells GHCi that you want a rational (rather than the default Double it would otherwise choose), and floating-point notation does work to make a Rational - it is one of the features the Fractional typeclass provides.
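A sketch of the two side by side (values shown as a recent GHCi prints them):
Prelude> import Data.Ratio
Prelude Data.Ratio> 3 % 4                  -- construct the rational 3/4
3 % 4
Prelude Data.Ratio> 4.3 / 1.2 :: Rational  -- divide two Rationals instead
43 % 12
The exact result 43 % 12 also shows why you might want Rational in the first place: unlike Double, nothing is lost to floating-point rounding.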

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCi and the order in which it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is that GHCi is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCi isn't really "guessing"; it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
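A sketch of the fix in a session (assuming a GHCi where the restriction was previously in effect):
Prelude> :set -XNoMonomorphismRestriction
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> a + b
3.0
Prelude> :t a
a :: Num a => a
Because a now stays polymorphic, the use site a + b is free to instantiate it at Double.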
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)
