Cannot understand the error message from an illegal list of tuples - Haskell

I know this list of tuples doesn't work because the elements of the tuples are not of the same type, but I couldn't understand the error message.
Prelude> [(1,2),("One",2)]
<interactive>:1:3: error:
    • Could not deduce (Num [Char]) arising from the literal ‘1’
      from the context: Num b
        bound by the inferred type of it :: Num b => [([Char], b)]
        at <interactive>:1:1-17
    • In the expression: 1
      In the expression: (1, 2)
      In the expression: [(1, 2), ("One", 2)]
Here I think (Num [Char]) refers to "One". But then what does "arising from the literal ‘1’" mean? Does it mean the type of the element in the same position has to be an integer? And what does "from the context: Num b" mean? This makes me very confused.

Note that [1, "One"] already triggers the error.
Let's dissect the error: the key point is
Could not deduce (Num [Char]) arising from the literal ‘1’
This actually means "I need to use 1 :: [Char], since that's the only way the list could type check. However, I don't know how to interpret the literal 1 as a [Char]". (As a reminder, [Char] and String are exactly the same type, and both are the type of string literals such as "One".)
Haskell is a bit peculiar in its handling of numeric literals like 1. They are roughly treated as if they were Integers, the arbitrary-precision integer type, and then immediately converted to the desired type using the method fromInteger of the typeclass Num.
class Num a where
  fromInteger :: Integer -> a
  ...
In the standard libraries, this class has instances for all numeric types. The user can add others, e.g. for their user-defined numeric types.
A silly programmer could even add an instance for strings!
instance Num [Char] where
  fromInteger n = "urk!" ++ show n
This instance is bogus since it can't reasonably define the other methods, but in principle it could be used. With this instance in scope, the original code type checks!
We can test it in GHCi:
> [(1,2),("One",2)]
[("urk!1",2),("One",2)]
Note how Haskell converted the literal as we told it to do.
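To spell out what GHC is doing, here is a self-contained sketch of the desugared expression (our own illustration; the bogus instance is repeated so the module compiles, and FlexibleInstances is needed to allow an instance head like [Char]):

{-# LANGUAGE FlexibleInstances #-}

instance Num [Char] where
  fromInteger n = "urk!" ++ show n
  (+)    = undefined   -- the remaining methods cannot be defined sensibly
  (*)    = undefined
  abs    = undefined
  signum = undefined
  negate = undefined

-- Roughly how GHC elaborates the original expression; the first component of
-- each pair must be a [Char] to match "One", hence the Num [Char] constraint.
example :: Num b => [([Char], b)]
example = [(fromInteger 1, fromInteger 2), ("One", fromInteger 2)]
-- example evaluates to [("urk!1",2),("One",2)], matching the GHCi session above.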
As a final warning: don't even think of adding such an instance to a serious program :) The class Num should be used only for numeric types, and strings are not numeric.

Related

Overloaded / Polymorphic Functions with different types

I'm learning Haskell and ran into an unanswered question.
From the lesson I'm following:
(+) :: Num a => a -> a -> a
For any numeric type a, (+) takes 2 values of type a and returns a value of type a.
As per the examples:
1 + 1
-- 2    (type a is Int)
3.0 + 4.0
-- 7.0  (type a is Float)
'a' + 'b'
-- Type Error: Char is not a Numeric type
This makes perfect sense, but I end up not understanding what goes on in the background when:
1 + 3.0
Is the Type Inference system auto casting my Int to Float because it knows it will return Float?
You should investigate these sorts of questions in ghci. It's an invaluable learning resource:
$ ghci
GHCi, version 9.0.1: https://www.haskell.org/ghc/ :? for help
ghci> :t 1
1 :: Num p => p
ghci> :t 3.0
3.0 :: Fractional p => p
ghci> :t 1 + 3.0
1 + 3.0 :: Fractional a => a
First lesson: Numeric literals are polymorphic. 1 isn't an Int, it's polymorphic. It can be any instance of Num that is necessary for the code to compile. 3.0 isn't a Float, it's any instance of Fractional that is necessary for the code to compile. (The difference being the decimal in the literal - it restricts the types allowed.)
Second Lesson: when you combine things into an expression, types get unified. When you unify Num and Fractional constraints, you get a Fractional constraint. This is because Fractional is defined to require all instances of it also be instances of Num.
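The relevant part of the class hierarchy looks like this (simplified from the Prelude; the Num a superclass constraint is what lets the two constraints collapse into one Fractional constraint):

class Num a => Fractional a where
  (/)          :: a -> a -> a
  recip        :: a -> a
  fromRational :: Rational -> a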
For a bit more, let's turn on warnings and see what additional information they provide.
ghci> :set -Wall
ghci> 1
<interactive>:5:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Show a0) arising from a use of ‘print’ at <interactive>:5:1
(Num a0) arising from a use of ‘it’ at <interactive>:5:1
• In a stmt of an interactive GHCi command: print it
1
ghci> 1 + 3.0
<interactive>:6:1: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Double’
(Show a0) arising from a use of ‘print’ at <interactive>:6:1-7
(Fractional a0) arising from a use of ‘it’ at <interactive>:6:1-7
• In a stmt of an interactive GHCi command: print it
4.0
When printing a value, ghci requires the type have a Show instance. Fortunately, that isn't too important of a detail here, but it's why the defaulting warnings refer to Show.
The lessons to observe here are that the default type for something with a Num instance if inference doesn't require something more specific is Integer, not Int. The default type for something with a Fractional instance if inference doesn't require something more specific is Double, not Float. (Float is basically never used. Forget it exists.)
So when inference runs, the expression 1 + 3.0 is inferred to have the type Fractional a => a. In the absence of further requirements on the type, defaulting kicks in and says "a is Double". Then that information flows back through the (+) to its arguments and requires each of them to also be Double. Fortunately, each argument is a polymorphic literal that can take the type Double. Type checking succeeds, instances are chosen, addition happens, the result is printed.
It's very important to this process that numeric literals are polymorphic. Haskell has no implicit conversions between any pair of types. Especially not numeric types. If you want to actually convert values from one type to another, you must call a function that does the conversion you desire. (fromIntegral, round, floor, ceiling, and realToFrac are the most common numeric conversion functions.) But when the values are polymorphic, it means inference can pick a matching type without needing a conversion.
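A minimal sketch of what that means in practice (the names x, y, total, and rounded are ours, not from the question): once values have concrete, different numeric types, you must convert explicitly.

x :: Int
x = 1

y :: Double
y = 3.0

-- x + y would be rejected: Haskell never converts between numeric types implicitly.
total :: Double
total = fromIntegral x + y   -- Int -> Double via fromIntegral

rounded :: Int
rounded = x + round y        -- Double -> Int via round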

How does fromIntegral work?

The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is found under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
  ...
  fromInteger i = I# (integerToInt i)
and
instance Num Integer where
  ...
  fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int, I don't see how either of these generates results that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be, can it? So what happens?
Thanks.
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead when each individual type is declared to be an instance of Num a way of doing that for that particular type must be provided (by implementing fromInteger).
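As a concrete sketch, here is what such a declaration might look like for a hypothetical user-defined type (Cents is our own example, not from the question):

newtype Cents = Cents Integer deriving Show

instance Num Cents where
  fromInteger n     = Cents n
  Cents a + Cents b = Cents (a + b)
  Cents a * Cents b = Cents (a * b)
  negate (Cents a)  = Cents (negate a)
  abs    (Cents a)  = Cents (abs a)
  signum (Cents a)  = Cents (signum a)

-- Now fromIntegral (3 :: Int) :: Cents goes through this instance's fromInteger.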
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be, can it? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what type each expression should have in order to fit into the bigger expression it's being used in. And it also goes bottom-up, figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.
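A small illustration of that result-type dispatch (the bindings n, m, and s are our own):

n :: Float
n = fromInteger 3     -- uses fromInteger from the Num Float instance

m :: Integer
m = fromInteger 3     -- uses fromInteger from the Num Integer instance

s :: Double
s = read "2.5"        -- read likewise dispatches on its result type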

Strange Haskell expression with type Num ([Char] -> t) => t

While doing some exercises in GHCi I typed the following and got an error:
ghci> (1 "one")
<interactive>:187:1:
    No instance for (Num ([Char] -> a0)) arising from a use of ‘it’
    In a stmt of an interactive GHCi command: print it
which is an error; however, if I ask GHCi for the type of the expression, it does not give any error:
ghci> :type (1 "one")
(1 "one") :: Num ([Char] -> t) => t
What is the meaning of (1 "one")?
Why does this expression give an error, yet GHCi says it is well typed?
What is the meaning of Num ([Char] -> t) => t?
Thanks.
Haskell Report to the rescue! (Quoting section 6.4.1)
An integer literal represents the application of the function fromInteger to the appropriate value of type Integer.
fromInteger has type:
Prelude> :t fromInteger
fromInteger :: Num a => Integer -> a
So 1 is actually syntax sugar for fromInteger (1 :: Integer). Your expression, then, is:
fromInteger 1 "one"
Which could be written as:
(fromInteger 1) "one"
Now, fromInteger produces a number (that is, a value of a type which is an instance of Num, as its type tells us). In your expression, this number is applied to a [Char] (the string "one"). GHC correctly combines these two pieces of information to deduce that your expression has type:
Num ([Char] -> t) => t
That is, it would be the result (of unspecified type t) of applying a function which is also a Num to a [Char]. That is a valid type in principle. The only problem is that there is no instance of Num for [Char] -> t (that is, functions that take strings are not numbers, which is not surprising).
P.S.: As Sibi and Ørjan point out, in GHC 7.10 and later you will only see the error mentioned in the question if the FlexibleContexts GHC extension is enabled; otherwise the type checker will instead complain about having fixed types and type constructors in the class constraint (that is, Char, [] and (->)).
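To see that the type could in principle be inhabited, here is a deliberately silly sketch of our own (it needs FlexibleInstances, and nobody should write this in real code): give functions from String to Integer a Num instance, so the constraint can actually be satisfied.

{-# LANGUAGE FlexibleInstances #-}

instance Num ([Char] -> Integer) where
  fromInteger n = \s -> n + fromIntegral (length s)
  f + g         = \s -> f s + g s
  f * g         = \s -> f s * g s
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f

-- With this instance in scope, (1 "one") :: Integer evaluates to 4.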
Haskell is a very flexible language, but also a very logical one in a rather literal sense. So often, things that in most languages would just be syntax errors, Haskell will look at and try its darnedest to make sense of, with results that can be confusing but are really just the logical consequence of the rules of the language.
For example, if we type your example into Python, it basically tells us "what you just typed in makes zero sense":
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> (1 "one")
File "<stdin>", line 1
(1 "one")
^
SyntaxError: invalid syntax
Ruby does the same thing:
irb(main):001:0> (1 "one")
SyntaxError: (irb):1: syntax error, unexpected tSTRING_BEG, expecting ')'
(1 "one")
^
from /usr/bin/irb:12:in `<main>'
But Haskell doesn't give up that easily! It sees (1 "one"), and it reasons that:
Expressions of the form f x are function applications, where f has type like a -> b, x has type a and f x has type b.
So in the expression 1 "one", 1 must be a function that takes "one" (a [Char]) as its argument.
Then, given Haskell's treatment of numeric literals, it translates the 1 into fromInteger 1, which here must have the type Num ([Char] -> b) => [Char] -> b. fromInteger is a method of the Num class, meaning that the user is allowed to supply their own implementation of it for any type, including [Char] -> b if you are so inclined.
So the error message means that Haskell, instead of telling you that what you typed is nonsense, tells you that you haven't taught it how to treat a value of type [Char] -> b as a number (i.e. there is no Num ([Char] -> b) instance), because that's the really strange thing that would need to be true for the expression to make sense.
TL;DR: It's a garbled nonsense type that isn't worth getting worried over.
Integer literals can represent values of any type that implements the Num typeclass. So 1 or any other integer literal can be used anywhere you need a number.
doubleVal :: Double
doubleVal = 1
intVal :: Int
intVal = 1
integerVal :: Integer
integerVal = 1
This enables us to flexibly use integral literals in any numeric context.
When you just use an integer literal without any type context, ghci doesn't know what type it is.
Prelude> :type 1
1 :: Num a => a
ghci is saying "that '1' is of some type I don't know, but I do know that whatever type it is, that type implements the Num typeclass".
Every occurrence of an integer literal in Haskell source is wrapped with an implicit fromInteger function. So (1 "one") is implicitly converted to ((fromInteger (1::Integer)) "one"), and the subexpression (fromInteger (1::Integer)) has an as-yet unknown type Num a => a, again meaning it's some unknown type, but we know it provides an instance of the Num typeclass.
We can also see that it is applied like a function to "one", so we know that its type must have the form [Char] -> a0, where a0 is yet another unknown type. So a and [Char] -> a0 must be the same. Substituting that back into the Num a => a type we figured out above, we know that 1 must have type Num ([Char] -> a0) => [Char] -> a0, and the expression (1 "one") has type Num ([Char] -> a0) => a0. Read that last type as "there is some type a0 which is the result of applying a [Char] argument to a function, and that function type is an instance of the Num class".
So the expression itself has a valid type Num ([Char] -> a0) => a0.
Haskell has something called the monomorphism restriction. One aspect of this is that all type variables in an expression have to be resolved to specific, known types before you can evaluate it. GHC uses type defaulting rules in certain situations, when it can, to accommodate the monomorphism restriction. However, GHC doesn't know of any type a0 it can plug into the type expression above that has a Num instance defined. So it has no way to deal with it, and gives you the "No instance for (Num ...)" message.

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCI and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCI is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCI isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
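Those defaulting rules can even be changed per module with a default declaration. A small sketch of our own (assuming a plain module compiled with GHC):

module Main where

-- Replace the standard default list (Integer, Double) for this module.
default (Double)

main :: IO ()
main = print 1   -- prints 1.0: the unconstrained literal now defaults to Double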
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
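For illustration, a GHCi session with the restriction switched off might look like this (a sketch; in sufficiently old GHCi versions the restriction applies at the prompt, while modern GHCi already disables it there by default):

ghci> :set -XNoMonomorphismRestriction
ghci> let a = 1
ghci> let b = 2.0
ghci> a + b
3.0
ghci> :t a
a :: Num a => a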
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)

Haskell Error: Couldn't match expected type `Integer' against inferred type `Int'

I have a Haskell function that calculates the size of the list of finite Ints. I need the output type to be an Integer because the value will actually be larger than the maximum bound of Int (the result will be -1, to be exact, if the output type is an Int).
size :: a -> Integer
size a = (maxBound::Int) - (minBound::Int)
I understand the difference between Ints (bounded) and Integers (unbounded) but I'd like to make an Integer from an Int. I was wondering if there was a function like fromInteger, that will allow me to convert an Int to an Integer type.
You'll need to convert the values to Integers, which can be done by the fromIntegral function (numeric casting for Haskell):
fromIntegral :: (Integral a, Num b) => a -> b
It converts any type in the Integral class to any type in the (larger) Num class. E.g.
fromIntegral (maxBound::Int) - fromIntegral (minBound::Int)
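Folded back into the question's function, that could look like this (a sketch; the unused argument is kept only to match the original signature):

size :: a -> Integer
size _ = fromIntegral (maxBound :: Int) - fromIntegral (minBound :: Int)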
However, I would not really trust the approach you're taking -- it seems very fragile. The behaviour in the presence of types that admit wraparound is pretty suspect.
What do you really mean by: "the size of the list of finite Ints". What is the size in this sense, if it isn't the length of the list?
I believe you are looking for:
fromIntegral :: (Integral a, Num b) => a -> b
which will convert an Int to an Integer (or, more generally, any Integral type to any Num type).
Perhaps you were assuming that Haskell, like many mainstream languages such as C and (to a certain extent) Java, has implicit numeric coercions. It doesn't: Int and Integer are totally unrelated types, and there is a special function for converting between them: fromIntegral, which is built on fromInteger from the Num typeclass. Look at the documentation: essentially, fromInteger does more than that: it is a generic "construct the representation of an arbitrary integral number", i.e. if you're implementing some kind of numbers and instantiating Num, you must provide a way to construct integral numbers of your type. For example, in the Num instance for complex numbers, fromInteger creates a complex number with a zero imaginary part and an integral real part.
The only sense in which Haskell has implicit numeric coercions is that integer literals are overloaded, and when you write 42, the compiler implicitly interprets it as fromInteger (42 :: Integer), so you can use integer literals in whatever contexts a Num type is required.
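A short sketch tying those two points together (the names small and big are ours):

small :: Int
small = 42                 -- elaborated as fromInteger (42 :: Integer) at type Int

big :: Integer
big = fromIntegral small   -- the explicit Int -> Integer conversion the question asks for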
