What is causing this type ambiguity? - haskell

I have a relatively simple class, MSet, and an operator constrained to it:
{-# LANGUAGE MultiParamTypeClasses #-}

-- A generalization of G-set to Magmas
class MSet a b where
  (+>>) :: a -> b -> a

instance MSet Integer Integer where
  (+>>) = (+)

-- (+>>) constrained to a Magma
(<<+>>) ::
  ( MSet a a
  )
  => a -> a -> a
(<<+>>) = (+>>)
When I load this into GHCi and try to test it, I run into an issue:
*Main> 1 <<+>> 2
3
*Main> 1 +>> 2

<interactive>:31:1: error:
    • Could not deduce (MSet a b0)
      from the context: (MSet a b, Num a, Num b)
        bound by the inferred type for ‘it’:
                   forall a b. (MSet a b, Num a, Num b) => a
        at <interactive>:31:1-7
      The type variable ‘b0’ is ambiguous
    • In the ambiguity check for the inferred type for ‘it’
      To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
      When checking the inferred type
        it :: forall a b. (MSet a b, Num a, Num b) => a
So (+>>) works when it is constrained to a Magma, but is ambiguous when it is not.
Now I can do:
*Main> :set -XFlexibleContexts
*Main> 1 +>> (2 :: Integer)
3
But I don't understand what is going on here or why this annotation helps. I don't really get how the type checker disambiguates (<<+>>). If I add another instance, say Int, it continues to work even though it seems to me it should be ambiguous whether 1 is an Int or an Integer.
Why does one of these error and why do the other two not?

Essentially, you are asking GHC to solve "what are the types of 1 and 2 in the expression 1 +>> 2?", and your type classes mean that the answer is ambiguous.
In depth
<<+>>
What is the type of 1 <<+>> 2? Why, (Num a, MSet a a) => a of course, because GHC has to be able to turn the literals into values (Num) and then +>> them (MSet), and the type signature of <<+>> says that both literals will have the same type.
What happens when you ask GHCi to print the value of 1 <<+>> 2? It tries to default a to Integer, which succeeds because Num Integer and MSet Integer Integer. It then evaluates the type-defaulted expression.
This is the reason the Int instance doesn't introduce ambiguity - GHCi isn't trying to infer the specific instance to use, it instead infers the type and then defaults the variables, leaving only the instance check.
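You can watch the two stages separately in GHCi, since :t reports the inferred type before any defaulting happens (a sketch; the constraint order may differ in your session):

*Main> :t 1 <<+>> 2
1 <<+>> 2 :: (Num a, MSet a a) => a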
+>>
What is the type of 1 +>> 2? Well... (Num a, Num b, MSet a b) => a, it seems like. Clearly you need MSet, but there's no longer an assurance that a and b unify. More unfortunately, b does not appear in the type of the term - which is the source of the type ambiguity. The type system doesn't know which choice of b to use.
What happens when you ask GHCi to print the value of 1 +>> 2? It first infers the type of the term, and gets the above type - and now it hits a type inference error before it can try to default a to Integer.
Why do the fixes work?
Adding type information will prevent the error
> 1 +>> (2 :: Integer) -- fine
because these changes eliminate the ambiguity of b. GHC doesn't need to infer b, so it doesn't raise an inference error.
Weirdly, and I don't fully understand the reason for it, adding the annotation for a also seems to prevent the error
> (1 :: Integer) +>> 2 -- fine
> (1 +>> 2) :: Integer -- fine
though I suspect that's another GHCi-specific trick that's defaulting b to Integer in (1 +>>) :: (Num b, MSet Integer b) => b -> Integer. Don't quote me on that, though.
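You can probe part of that guess with :t, which shows the partially applied type with b still free (a sketch; exact output may vary):

*Main> :t ((1 :: Integer) +>>)
((1 :: Integer) +>>) :: MSet Integer b => b -> Integer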
With fundeps
It is possible to eliminate the type ambiguity using FunctionalDependencies
class MSet a b | a -> b where
  ...
though that doesn't seem like it fits your use case. This solves the inference problem because in (Num a, Num b, MSet a b) => a, knowing a will be enough to deduce b from the fundep in MSet. Later, when GHCi defaults a to Integer, it can just look up the type of b.
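Here is a minimal sketch of that variant, reusing the class body and instance from the question:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- The dependency a -> b says each a has exactly one associated b,
-- so once GHCi defaults a to Integer, the instance fixes b too.
class MSet a b | a -> b where
  (+>>) :: a -> b -> a

instance MSet Integer Integer where
  (+>>) = (+)

-- *Main> 1 +>> 2
-- 3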

Related

Why is `succ i` valid where `i :: Num a => a` (and not an `Enum a`)?

This seems to apply to both GHCi and GHC. I'll show an example with GHCi first.
Given that i's type has been inferred as follows:
Prelude> i = 1
Prelude> :t i
i :: Num p => p
Given that succ is a function defined on Enum:
Prelude> :i Enum
class Enum a where
  succ :: a -> a
  pred :: a -> a
  -- …OMITTED…
and that Num is not a 'subclass' (if I can use that term) of Enum:
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  -- …OMITTED…
why does succ i not give an error?
Prelude> succ i
2 -- works, no error
I would expect :type i to be inferred to something like:
Prelude> i = 1
Prelude> :type i
i :: (Enum p, Num p) => p
(I'm using 'GHC v. 8.6.3')
ADDITION:
After reading #RobinZigmond's comment and #AlexeyRomanov's answer I have noticed that 1 could be interpreted as one of many types and one of many classes.
Thanks to #AlexeyRomanov's answer I understand much more about the defaulting rules used to decide what type to use for ambiguous expressions.
However I don't feel that Alexey's answer addresses exactly my question. My question is about the type of i. It's not about the type of succ i.
It's about the mismatch between succ argument type (an Enum a) and the apparent type of i (a Num a).
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
GHC also seems to be inferring Enum a in addition to what was explicitly declared.
x :: Num a => a
x = 1
y = succ x -- works
However it does not add Enum a when the type variable appears in a function signature:
my_succ :: Num a => a -> a
my_succ z = succ z -- fails compilation
To me it seems that the type constraints attached to a function are stricter than the ones applied to a variable.
GHC says my_succ :: forall a. Num a => a -> a, and given that
forall a doesn't appear in the type signature of either i or x, I thought that meant GHC was not going to infer any more classes for my_succ's types.
But this seems again wrong: I've checked this idea with the following (first time I type RankNTypes) and apparently GHC still infers Enum a:
{-# LANGUAGE RankNTypes #-}
x :: forall a. Num a => a
x = 1
y = succ x
So it seems that inference rules for functions are stricter than the ones for variables?
Yes, succ i's type is inferred as you expect:
Prelude> :t succ i
succ i :: (Enum a, Num a) => a
This type is ambiguous, but it satisfies the conditions in the defaulting rules for GHCi:
Find all the unsolved constraints. Then:
Find those that are of form (C a) where a is a type variable, and partition those constraints into groups that share a common type variable a.
In this case, there's only one group: (Enum a, Num a).
Keep only the groups in which at least one of the classes is an interactive class (defined below).
This group is kept, because Num is an interactive class.
Now, for each remaining group G, try each type ty from the default-type list in turn; if setting a = ty would allow the constraints in G to be completely solved, default a to ty.
The unit type () and the list type [] are added to the start of the standard list of types which are tried when doing type defaulting.
The default default-type list (sic) is (with the additions from the last clause) default ((), [], Integer, Double).
So when you do Prelude> succ i to actually evaluate this expression (note :t doesn't evaluate the expression it gets), a is set to Integer (first of this list satisfying the constraints), and the result is printed as 2.
You can see it's the reason by changing the default:
Prelude> default (Double)
Prelude> succ 1
2.0
For the updated question:
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
i can be nothing else (i.e. nothing that doesn't fit this type), but it can be used with less general (more specific) types: Integer, Int. Even with many of them in an expression at once:
Prelude> (i :: Double) ^ (i :: Integer)
1.0
And these uses don't affect the type of i itself: it's already defined and its type fixed. OK so far?
Well, adding constraints also makes the type more specific, so (Num a, Enum a) => a is more specific than (Num a) => a:
Prelude> i :: (Num a, Enum a) => a
1
Because of course any type a that satisfies both constraints in (Num a, Enum a) satisfies just Num a.
However it is not adding Enum a when the type variable appears as a function:
That's because you specified a signature which doesn't allow it to. If you don't give a signature, there's no reason not to infer the extra constraint. But e.g.
Prelude> f x = succ x + 1
will infer the type with both constraints:
Prelude> :t f
f :: (Num a, Enum a) => a -> a
So it seems that inference rules for functions are stricter than the ones for variables?
It's actually the other way around due to the monomorphism restriction (not in GHCi, by default). You've actually been a bit lucky not to run into it here, but the answer is already long enough. Searching for the term should give you explanations.
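For completeness, here is a minimal sketch of what the restriction does in a compiled module (GHCi disables it by default):

-- Without a signature, the monomorphism restriction stops GHC from
-- generalizing i; defaulting then fixes it to Integer.
i = 1            -- inferred as i :: Integer, not i :: Num p => p

d :: Double
d = i            -- type error: couldn't match Double with Integer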
GHC is saying my_succ :: forall a. Num a => a -> a and given that forall a doesn't appear in the type signature of either i or x.
That's a red herring. I am not sure why it's shown in one case and not the other, but all of them have that forall a behind the scenes:
Haskell type signatures are implicitly quantified. When the language option ExplicitForAll is used, the keyword forall allows us to say exactly what this means. For example:
g :: b -> b
means this:
g :: forall b. (b -> b)
(Also, you just need ExplicitForAll and not RankNTypes to write down forall a. Num a => a.)

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
  infixl 7 *
  (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
  one :: m

class (Ring a) => Numeric a where
  fromIntegral :: (Integral i) => i -> a
  fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
  toInteger :: i -> Integer
  quot :: i -> i -> i
  quot a b = let (q,r) = (quotRem a b) in q
  rem :: i -> i -> i
  rem a b = let (q,r) = (quotRem a b) in r
  quotRem :: i -> i -> (i, i)
  quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
  | i == 0 = one
  | True   = let (d, m) = (divMod i 2)
                 rec = (x*x) ^ d
             in if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3

<interactive>:16:1: error:
    * Could not deduce (Integral i0) arising from a use of `^'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Integral Integer -- Defined at Numbers.hs:190:10
        instance Integral Int -- Defined at Numbers.hs:207:10
    * In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3

<interactive>:16:3: error:
    * Could not deduce (Numeric i0) arising from the literal `3'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Numeric Integer -- Defined at Numbers.hs:294:10
        instance Numeric Complex -- Defined at Numbers.hs:110:10
        instance Numeric Rational -- Defined at Numbers.hs:306:10
        ...plus four others
        (use -fprint-potential-instances to see them all)
    * In the second argument of `(^)', namely `3'
      In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that in the normal Prelude I can do this just fine?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the result will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication, so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
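In the question's vocabulary, that suggestion would look something like this (a sketch, reusing the Monoid class and one from the question, and assuming divMod and == on Int behave as in the standard Prelude):

-- Exponent fixed to Int: the literal 3 in 2^3 is forced to Int,
-- so no ambiguous type variable can arise.
infixr 8 ^
(^) :: (Monoid m) => m -> Int -> m
(^) x i
  | i == 0 = one
  | True   = let (d, m) = divMod i 2
                 rec = (x*x) ^ d
             in if m == 1 then x*rec else rec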
As a general rule, I try to avoid† any function arguments that are more polymorphic than the result. Haskell's polymorphism works best “backwards”, i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked to a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.
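The classic illustration of this result-directed inference is read (a sketch):

-- The caller's annotation on the *result* picks the instance.
n = read "123" :: Int     -- fine: Read Int is chosen from the result type
-- m = read "123"         -- ambiguous: nothing determines the type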

Haskell, multiple type classes for one argument

This is an example in Learn You A Haskell, chapter on higher order functions:
compareWithHundred :: (Num a, Ord a) => a -> Ordering
compareWithHundred x = compare 100 x
While the idea of the function is clear to me, I'm not sure why the type signature is (Num a, Ord a). We only pass the function the integer that is to be compared, of type Int. What does Ord stand for here, and why does it appear in the type signature as if it were an implicitly passed argument?
That's not the only possible signature for this function. It happens to be the most general one. compareWithHundred :: Int -> Ordering is actually a possible instantiation – the polymorphic a argument can be instantiated with any orderable number type, which sure enough includes Int, but also Integer, Rational, Double...
Prelude> let compareWithHundred :: (Num a, Ord a) => a -> Ordering; compareWithHundred x = compare 100 x
Prelude> compareWithHundred (99 :: Int)
GT
Prelude> compareWithHundred (100.3 :: Double)
LT
Not all number types permit you to order-compare them though – the classical example where this is not possible are complex numbers (which have “more than one direction” in which you could order them).
Prelude Data.Complex> compareWithHundred (100 :+ 30 :: Complex Double)

<interactive>:10:1:
    No instance for (Ord (Complex Double))
      arising from a use of ‘compareWithHundred’
    In the expression: compareWithHundred (100 :+ 30 :: Complex Double)
    In an equation for ‘it’:
      it = compareWithHundred (100 :+ 30 :: Complex Double)
Hence you need to require both that the argument is a number (so there exists a value 100 to compare it with) and that the argument is in the Ord class. This combined constraint is written (Num a, Ord a).
I have something to add, in case you couldn't gather something from leftaroundabout's thorough answer.
Everything to the left of => in a type signature is a constraint. Read the type like this:
compareWithHundred :: (Num a, Ord a) => a -> Ordering
                      ^^^^^^^^^^^^^^    ^    ^^^^^^^^
                        constraints     |    |
                            argument type    |
                                        result type
So you only pass one argument to the function because there is only one argument in the type signature, a. a is a type variable, and can be replaced with any type as long as that type satisfies the constraints.
The Num a says that whatever you replace a with has to be numeric (so it can be Int, Integer, Double, ...), and the Ord a says that it has to be comparable. leftaroundabout's answer goes into more detail about why you need both; I just wanted to make sure you knew how to read the signature.
So it's perfectly legal in one sense to say compareWithHundred "foobar", the type checker says that that expression's type is Ordering, but then it will fail later when it tries to check that there is a Num String instance.
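Roughly what that later failure looks like (a sketch; the exact wording varies by GHC version):

Prelude> compareWithHundred "foobar"

<interactive>: error:
    No instance for (Num [Char])
      arising from a use of ‘compareWithHundred’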
I hope this helps.

How can an instance of the Num type class be coerced to Fractional implicitly?

I tested the numeric coercion by using GHCI:
>> let c = 1 :: Integer
>> 1 / 2
0.5
>> c / 2
<interactive>:15:1: error:
    • No instance for (Fractional Integer) arising from a use of ‘/’
    • In the expression: c / 2
      In an equation for ‘it’: it = c / 2
>> :t (/)
(/) :: Fractional a => a -> a -> a -- (/) needs Fractional type
>> (fromInteger c) / 2
0.5
>> :t fromInteger
fromInteger :: Num a => Integer -> a -- just converts the Integer to Num, not to Fractional
I can use the fromInteger function to convert an Integer to a Num type (fromInteger has the type fromInteger :: Num a => Integer -> a), but I cannot understand how the type Num can be converted to Fractional implicitly.
I know that if an instance has type Fractional it must have type Num (class Num a => Fractional a where), but does it follow that if an instance has type Num it can be used as an instance with the Fractional type?
#mnoronha Thanks for your detailed reply. Only one question still confuses me. I know the reason type a cannot be used in the function (/) is that it is Integer, which is not an instance of the type class Fractional (the function (/) requires that the type of its arguments be an instance of Fractional). What I don't understand is this: even after calling fromInteger to convert the Integer to a type that is an instance of Num, that does not mean the type is an instance of Fractional (because the Fractional type class is more constrained than the Num type class, so a type may not implement some functions required by Fractional). If a type does not fully satisfy what the Fractional type class requires, how can it be used in the function (/), which asks that the argument type be an instance of Fractional? Sorry for not being a native speaker, and really thanks for your patience!
I tested that if a type only fits the parent type class, it cannot be used in a function which requires more constrained type class.
{-# LANGUAGE OverloadedStrings #-}
module Main where

class ParentAPI a where
  printPar :: int -> a -> String

class (ParentAPI a) => SubAPI a where
  printSub :: a -> String

data ParentDT = ParentDT Int

instance ParentAPI ParentDT where
  printPar i p = "par"

testF :: (SubAPI a) => a -> String
testF a = printSub a

main = do
  let m = testF $ ParentDT 10000
  return ()
====
test-typeclass.hs:19:11: error:
    • No instance for (SubAPI ParentDT) arising from a use of ‘testF’
    • In the expression: testF $ ParentDT 10000
      In an equation for ‘m’: m = testF $ ParentDT 10000
      In the expression:
        do { let m = testF $ ParentDT 10000;
             return () }
I have found a doc explaining the numeric overloading ambiguity very clearly and may help others with the same confusion.
https://www.haskell.org/tutorial/numbers.html
First, note that both Fractional and Num are not types, but type classes. You can read more about them in the documentation or elsewhere, but the basic idea is that they define behaviors for types. Num is the most inclusive numeric typeclass, defining functions like (+) and negate which are common to pretty much all "numeric types." Fractional is a more constrained type class that describes "fractional numbers, supporting real division."
If we look at the type class definition for Fractional, we see that it is actually defined as a subclass of Num. That is, for a type a to have an instance of Fractional, it must first be a member of the typeclass Num:
class Num a => Fractional a where
Let's consider some type that is constrained by Fractional. We know it implements the basic behaviors common to all members of Num. However, we can't expect it to implement behaviors from other type classes unless multiple constraints are specified (ex. (Num a, Ord a) => a). Take, for example, the function div :: Integral a => a -> a -> a (integral division). If we try to apply the function to an argument that is constrained by the typeclass Fractional (ex. 1.2 :: Fractional t => t), we encounter an error, as shown in the sketch below. Type classes restrict the sort of values a function deals with, allowing us to write more specific and useful functions for types that share behaviors.
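A quick check in GHCi (a sketch; the exact error wording varies by version):

ghci>> div 1.2 2
-- error: ambiguous type variable with constraints
-- (Fractional a, Integral a); no standard type satisfies both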
Now let's look at the more general typeclass, Num. If we have a type variable a that is only constrained by Num a => a, we know that it will implement the (few) basic behaviors included in the Num type class definition, but we'd need more context to know more. What does this mean practically? We know from our Fractional class declaration that functions defined in the Fractional type class are applied to Num types. However, these Num types are a subset of all possible Num types.
The importance of all this, ultimately, has to do with the ground types (where type class constraints are most commonly seen in functions). a represents a type, with the notation Num a => a telling us that a is a type that has an instance of the type class Num. a could be any of the types that have such an instance (ex. Int, Natural). Thus, if we give a value the general type Num a => a, we know it can be used at any type that has a Num instance. For example:
ghci>> let a = 3 :: (Num a => a)
ghci>> a / 2
1.5
Whereas if we'd defined a as a specific type or in terms of a more constrained type class, we would have not been able to expect the same results:
ghci>> let a = 3 :: Integral a => a
ghci>> a / 2
-- Error: ambiguous type variable
or
ghci>> let a = 3 :: Integer
ghci>> a / 2
-- Error: No instance for (Fractional Integer) arising from a use of ‘/’
(Edit responding to followup question)
This is definitely not the most concrete explanation, so readers feel free to suggest something more rigorous.
Suppose we have a function a that is just a type class constrained version of the id function:
a :: Num a => a -> a
a = id
Let's look at type signatures for some applications of the function:
ghci>> :t (a 3)
(a 3) :: Num a => a
ghci>> :t (a 3.2)
(a 3.2) :: Fractional a => a
While our function has a general type signature, the type of each application is more restricted, as a result of the argument it was applied to.
Now, let's look at the function fromIntegral :: (Num b, Integral a) => a -> b. Here, the return type is the general Num b, and this will be true regardless of input. I think the best way to think of this difference is in terms of precision. fromIntegral takes a more constrained type and makes it less constrained, so we know we'll always expect the result will be constrained by the type class from the signature. However, if we give an input constraint, the actual input could be more restricted than the constraint and the resulting type would reflect that.
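You can see this in GHCi: even with the most specific input, the result type stays at the general Num b (a quick check):

ghci>> :t fromIntegral (3 :: Int)
fromIntegral (3 :: Int) :: Num b => b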
The reason why this works comes down to the way universal quantification works. To help explain this I am going to add in explicit forall to the type signatures (which you can do yourself if you enable -XExplicitForAll or any other forall related extension), but if you just removed them (forall a. ... becomes just ...), everything will work fine.
The thing to remember is that when a function involves a type constrained by a typeclass, then what that means is that you can input/output ANY type within that typeclass, so it's actually better to have a less constrained typeclass.
So:
fromInteger :: forall a. Num a => Integer -> a
fromInteger 5 :: forall a. Num a => a
Means that you have a value that is of EVERY Num type. So not only can you use it in a function taking it in a Fractional, you could use it in a function that only takes in MyWeirdTypeclass a => ... as long as there is one single type that implements both Num and MyWeirdTypeclass. Hence why you can get the following just fine:
fromInteger 5 / 2 :: forall a. Fractional a => a
Now of course once you decide to divide by 2, it now wants the output type to be Fractional, and thus 5 and 2 will be interpreted as some Fractional type, so we won't run into issues where we try to divide Int values, as trying to make the above have type Int will fail to type check.
This is really powerful and awesome, but very much unfamiliar, as generally other languages either don't support this, or only support it for input arguments (e.g. print in most languages can take in any printable type).
Now you may be curious where the whole superclass/subclass stuff comes into play. When you are defining a function that takes in something of type Num a => a, a user can pass in ANY Num type, so you are correct that in this situation you cannot use functions defined on some subclass of Num, only things that work on ALL Num values, like *:
double :: forall a. Num a => a -> a
double n = n * 2 -- in here `n` really has type `exists a. Num a => a`
So the following does not type check, and it wouldn't type check in any language, because you don't know that the argument is a Fractional.
halve :: Num a => a -> a
halve n = n / 2 -- in here `n` really has type `exists a. Num a => a`
What we have up above with fromInteger 5 / 2 is closer to the following higher-rank function; note that the forall within the parentheses is required, and you need to use -XRankNTypes:
halve :: forall b. Fractional b => (forall a. Num a => a) -> b
halve n = n / 2 -- in here `n` has type `forall a. Num a => a`
Since this time you are taking in EVERY Num type (just like the fromInteger 5 you were dealing with before), not just ANY Num type. Now the downside of this function (and one reason why no one wants it) is that you really do have to pass in something of EVERY Num type:
halve (2 :: Int) -- does not work
halve (3 :: Integer) -- does not work
halve (1 :: Double) -- does not work
halve (4 :: Num a => a) -- works!
halve (fromInteger 5) -- also works!
I hope that clears things up a little. All you need for the fromInteger 5 / 2 to work is that there exists ONE single type that is both a Num and a Fractional, or in other words just a Fractional, since Fractional implies Num. Type defaulting doesn't help much with clearing up this confusion, as what you may not realize is that GHC is just arbitrarily picking Double, it could have picked any Fractional.
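One way to convince yourself of that last point (a sketch):

ghci> (fromInteger 5 / 2) :: Rational
5 % 2
ghci> fromInteger 5 / 2      -- no annotation: GHCi defaults to Double
2.5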

Strange Haskell expression with type Num ([Char] -> t) => t

While doing some exercises in GHCi I typed the following and got an error:
ghci> (1 "one")

<interactive>:187:1:
    No instance for (Num ([Char] -> a0)) arising from a use of ‘it’
    In a stmt of an interactive GHCi command: print it
which is an error; however, if I ask GHCi for the type of the expression, it does not give any error:
ghci> :type (1 "one")
(1 "one") :: Num ([Char] -> t) => t
What is the meaning of (1 "one")?
Why does this expression give an error, but GHCi tells me it is well typed?
What is the meaning of Num ([Char] -> t) => t?
Thanks.
Haskell Report to the rescue! (Quoting section 6.4.1)
An integer literal represents the application of the function fromInteger to the appropriate value of type Integer.
fromInteger has type:
Prelude> :t fromInteger
fromInteger :: Num a => Integer -> a
So 1 is actually syntax sugar for fromInteger (1 :: Integer). Your expression, then, is:
fromInteger 1 "one"
Which could be written as:
(fromInteger 1) "one"
Now, fromInteger produces a number (that is, a value of a type which is an instance of Num, as its type tells us). In your expression, this number is applied to a [Char] (the string "one"). GHC correctly combines these two pieces of information to deduce that your expression has type:
Num ([Char] -> t) => t
That is, it would be the result (of unspecified type t) of applying a function which is also a Num to a [Char]. That is a valid type in principle. The only problem is that there is no instance of Num for [Char] -> t (that is, functions that take strings are not numbers, which is not surprising).
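In fact, nothing stops you from defining such an instance yourself. A deliberately silly sketch (requiring FlexibleInstances) under which (1 "one") actually evaluates:

{-# LANGUAGE FlexibleInstances #-}

-- Functions from strings as "numbers": fromInteger ignores the
-- string argument, and the arithmetic is pointwise.
instance Num b => Num ([Char] -> b) where
  fromInteger n = \_ -> fromInteger n
  f + g    = \s -> f s + g s
  f * g    = \s -> f s * g s
  negate f = negate . f
  abs f    = abs . f
  signum f = signum . f

-- ghci> (1 "one") :: Integer
-- 1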
P.S.: As Sibi and Ørjan point out, in GHC 7.10 and later you will only see the error mentioned in the question if the FlexibleContexts GHC extension is enabled; otherwise the type checker will instead complain about having fixed types and type constructors in the class constraint (that is, Char, [] and (->)).
Haskell is a very flexible language, but also a very logical one in a rather literal sense. So often, things that in most languages would just be syntax errors, Haskell will look at them and try its darnedest to make sense of them, with results that are really confusing but are really just the logical consequence of the rules of the language.
For example, if we type your example into Python, it basically tells us "what you just typed in makes zero sense":
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> (1 "one")
File "<stdin>", line 1
(1 "one")
^
SyntaxError: invalid syntax
Ruby does the same thing:
irb(main):001:0> (1 "one")
SyntaxError: (irb):1: syntax error, unexpected tSTRING_BEG, expecting ')'
(1 "one")
^
from /usr/bin/irb:12:in `<main>'
But Haskell doesn't give up that easily! It sees (1 "one"), and it reasons that:
Expressions of the form f x are function applications, where f has type like a -> b, x has type a and f x has type b.
So in the expression 1 "one", 1 must be a function that takes "one" (a [Char]) as its argument.
Then given Haskell's treatment of numeric literals, it translates the 1 into fromInteger 1 :: Num ([Char] -> b) => [Char] -> b. fromInteger is a method of the Num class, meaning that the user is allowed to supply their own implementations of it for any type—including [Char] -> b if you are so inclined.
So the error message means that Haskell, instead of telling you that what you typed is nonsense, tells you that you haven't taught it how to construct a number of type Num b => [Char] -> b, because that's the really strange thing that would need to be true for the expression to make sense.
TL;DR: It's a garbled nonsense type that isn't worth getting worried over.
Integer literals can represent values of any type that implements the Num typeclass. So 1 or any other integer literal can be used anywhere you need a number.
doubleVal :: Double
doubleVal = 1
intVal :: Int
intVal = 1
integerVal :: Integer
integerVal = 1
This enables us to flexibly use integral literals in any numeric context.
When you just use an integer literal without any type context, ghci doesn't know what type it is.
Prelude> :type 1
1 :: Num a => a
ghci is saying "that '1' is of some type I don't know, but I do know that whatever type it is, that type implements the Num typeclass".
Every occurrence of an integer literal in Haskell source is wrapped with an implicit fromInteger function. So (1 "one") is implicitly converted to ((fromInteger (1::Integer)) "one"), and the subexpression (fromInteger (1::Integer)) has an as-yet unknown type Num a => a, again meaning it's some unknown type, but we know it provides an instance of the Num typeclass.
We can also see that it is applied like a function to "one", so we know that its type must have the form [Char] -> a0 where a0 is yet another unknown type. So a and [Char] -> a0 must be the same. Substituting that back into the Num a => a type we figured out above, we know that 1 must have type Num ([Char] -> a0) => [Char] -> a0, and the expression (1 "one") has type Num ([Char] -> a0) => a0. Read that last type as "There is some type a0 which is the result of applying a [Char] argument to a function, and that function's type is an instance of the Num class."
So the expression itself has a valid type Num ([Char] -> a0) => a0.
Before GHCi can evaluate an expression, all of the type variables in its type have to be resolved to specific, known types. GHC uses its type defaulting rules in certain situations to accomplish this when it can. However, GHC doesn't know of any type a0 it can plug into the type expression above such that [Char] -> a0 has a Num instance defined. So it has no way to deal with it, and gives you the "No instance for Num..." message.