How can an instance of the Num type class be coerced to Fractional implicitly? - haskell

I tested the numeric coercion by using GHCI:
>> let c = 1 :: Integer
>> 1 / 2
0.5
>> c / 2
<interactive>:15:1: error:
    • No instance for (Fractional Integer) arising from a use of ‘/’
    • In the expression: c / 2
      In an equation for ‘it’: it = c / 2
>> :t (/)
(/) :: Fractional a => a -> a -> a -- (/) needs Fractional type
>> (fromInteger c) / 2
0.5
>> :t fromInteger
fromInteger :: Num a => Integer -> a -- just converts the Integer to a Num, not to Fractional
I can use the fromInteger function to convert an Integer to a Num type (fromInteger has the type fromInteger :: Num a => Integer -> a), but I cannot understand how the type Num can be converted to Fractional implicitly.
I know that if a type has a Fractional instance it must also have a Num instance (class Num a => Fractional a where), but is it necessarily the case that if a type has a Num instance it can be used as if it had a Fractional instance?
@mnoronha Thanks for your detailed reply. There is only one point that still confuses me. I know the reason that c cannot be used in the function (/) is that c has type Integer, which is not an instance of the type class Fractional (the function (/) requires that the type of its arguments be an instance of Fractional). What I don't understand is that even after calling fromInteger to convert the Integer to a type that is an instance of Num, that does not mean the type is an instance of Fractional (because the Fractional type class is more constrained than the Num type class, so a type may not implement some functions required by the Fractional type class). If a type does not fully satisfy what the Fractional type class requires, how can it be used in the function (/), which asks that the argument type be an instance of Fractional? Sorry, I'm not a native speaker, and thanks for your patience!
I tested that if a type only fits the parent type class, it cannot be used in a function which requires the more constrained subclass.
{-# LANGUAGE OverloadedStrings #-}
module Main where

class ParentAPI a where
  printPar :: int -> a -> String

class (ParentAPI a) => SubAPI a where
  printSub :: a -> String

data ParentDT = ParentDT Int

instance ParentAPI ParentDT where
  printPar i p = "par"

testF :: (SubAPI a) => a -> String
testF a = printSub a

main = do
  let m = testF $ ParentDT 10000
  return ()
====
test-typeclass.hs:19:11: error:
    • No instance for (SubAPI ParentDT) arising from a use of ‘testF’
    • In the expression: testF $ ParentDT 10000
      In an equation for ‘m’: m = testF $ ParentDT 10000
      In the expression:
        do { let m = testF $ ParentDT 10000;
             return () }
I have found a doc explaining the numeric overloading ambiguity very clearly and may help others with the same confusion.
https://www.haskell.org/tutorial/numbers.html

First, note that both Fractional and Num are not types, but type classes. You can read more about them in the documentation or elsewhere, but the basic idea is that they define behaviors for types. Num is the most inclusive numeric type class, defining functions like (+) and negate which are common to pretty much all "numeric types." Fractional is a more constrained type class that describes "fractional numbers, supporting real division."
If we look at the type class definition for Fractional, we see that it is actually defined as a subclass of Num. That is, for a type a to have a Fractional instance, it must first be a member of the type class Num:
class Num a => Fractional a where
Let's consider some type that is constrained by Fractional. We know it implements the basic behaviors common to all members of Num. However, we can't expect it to implement behaviors from other type classes unless multiple constraints are specified (ex. (Num a, Ord a) => a). Take, for example, the function div :: Integral a => a -> a -> a (integral division). If we try to apply the function to an argument that is constrained by the typeclass Fractional (ex. 1.2 :: Fractional t => t), we encounter an error. Type classes restrict the sort of values a function deals with, allowing us to write more specific and useful functions for types that share behaviors.
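For instance (a rough GHCi sketch; the exact error wording varies by GHC version):
ghci>> (1.2 :: Double) `div` 2
-- error: No instance for (Integral Double) arising from a use of ‘div’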
Now let's look at the more general typeclass, Num. If we have a type variable a that is only constrained by Num a => a, we know that it will implement the (few) basic behaviors included in the Num type class definition, but we'd need more context to know more. What does this mean practically? We know from our Fractional class declaration that functions defined in the Fractional type class are applied to Num types. However, these Num types are a subset of all possible Num types.
The importance of all this, ultimately, has to do with the ground types (where type class constraints are most commonly seen in functions). a represents a type, with the notation Num a => a telling us that a is a type that has an instance of the type class Num. a could be any of the types that have such an instance (ex. Int, Natural). Thus, if we give a value the general type Num a => a, we know it can be used at any type that has a Num instance. For example:
ghci>> let a = 3 :: (Num a => a)
ghci>> a / 2
1.5
Whereas if we'd defined a as a specific type or in terms of a more constrained type class, we would have not been able to expect the same results:
ghci>> let a = 3 :: Integral a => a
ghci>> a / 2
-- Error: ambiguous type variable
or
ghci>> let a = 3 :: Integer
ghci>> a / 2
-- Error: No instance for (Fractional Integer) arising from a use of ‘/’
(Edit responding to followup question)
This is definitely not the most concrete explanation, so readers feel free to suggest something more rigorous.
Suppose we have a function a that is just a type class constrained version of the id function:
a :: Num a => a -> a
a = id
Let's look at type signatures for some applications of the function:
ghci>> :t (a 3)
(a 3) :: Num a => a
ghci>> :t (a 3.2)
(a 3.2) :: Fractional a => a
While our function has the general type signature, the type of each application is more restricted as a result of the argument it was applied to.
Now, let's look at the function fromIntegral :: (Num b, Integral a) => a -> b. Here, the return type is the general Num b, and this will be true regardless of input. I think the best way to think of this difference is in terms of precision. fromIntegral takes a more constrained type and makes it less constrained, so we know we'll always expect the result will be constrained by the type class from the signature. However, if we give an input constraint, the actual input could be more restricted than the constraint and the resulting type would reflect that.
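For example (a small GHCi sketch of the distinction, using the standard fromIntegral):
ghci>> :t fromIntegral (3 :: Int)
fromIntegral (3 :: Int) :: Num b => b
ghci>> fromIntegral (3 :: Int) / 2
1.5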

The reason why this works comes down to the way universal quantification works. To help explain this I am going to add in explicit forall to the type signatures (which you can do yourself if you enable -XExplicitForAll or any other forall related extension), but if you just removed them (forall a. ... becomes just ...), everything will work fine.
The thing to remember is that when a function involves a type constrained by a typeclass, then what that means is that you can input/output ANY type within that typeclass, so it's actually better to have a less constrained typeclass.
So:
fromInteger :: forall a. Num a => Integer -> a
fromInteger 5 :: forall a. Num a => a
Means that you have a value that is of EVERY Num type. So not only can you use it in a function taking it in a Fractional, you could use it in a function that only takes in MyWeirdTypeclass a => ... as long as there is one single type that implements both Num and MyWeirdTypeclass. Hence why you can get the following just fine:
fromInteger 5 / 2 :: forall a. Fractional a => a
Of course, once you decide to divide by 2, the output type must be Fractional, and thus 5 and 2 will be interpreted as some Fractional type. So we won't run into issues where we try to divide Int values; trying to make the above have type Int will fail to type check.
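To see this concretely (a sketch; the error comment just paraphrases GHC's message):
fromInteger 5 / 2 :: Double    -- 2.5
fromInteger 5 / 2 :: Rational  -- 5 % 2
fromInteger 5 / 2 :: Int       -- type error: no instance for (Fractional Int)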
This is really powerful and awesome, but very much unfamiliar, as generally other languages either don't support this, or only support it for input arguments (e.g print in most languages can take in any printable type).
Now you may be curious when the whole superclass / subclass business comes into play. When you are defining a function that takes in something of type Num a => a, then because a user can pass in ANY Num type, you are correct that in this situation you cannot use functions defined on some subclass of Num, only things that work on ALL Num values, like *:
double :: forall a. Num a => a -> a
double n = n * 2 -- in here `n` really has type `exists a. Num a => a`
So the following does not type check, and it wouldn't type check in any language, because you don't know that the argument is a Fractional.
halve :: Num a => a -> a
halve n = n / 2 -- in here `n` really has type `exists a. Num a => a`
What we have up above with fromInteger 5 / 2 is closer to the following higher-rank function. Note that the forall within the parentheses is required, and you need to use -XRankNTypes:
halve :: forall b. Fractional b => (forall a. Num a => a) -> b
halve n = n / 2 -- in here `n` has type `forall a. Num a => a`
This time you are taking in EVERY Num type (just like the fromInteger 5 you were dealing with before), not just ANY Num type. The downside of this function (and one reason why no one wants it) is that you really do have to pass in something that is of EVERY Num type:
halve (2 :: Int) -- does not work
halve (3 :: Integer) -- does not work
halve (1 :: Double) -- does not work
halve (4 :: Num a => a) -- works!
halve (fromInteger 5) -- also works!
I hope that clears things up a little. All you need for fromInteger 5 / 2 to work is that there exists ONE single type that is both a Num and a Fractional, or in other words just a Fractional, since Fractional implies Num. Type defaulting doesn't help much with clearing up this confusion; what you may not realize is that GHC is just arbitrarily picking Double, when it could have picked any Fractional type.

Related

Haskell type substitution

I am going through the Haskell Book (http://haskellbook.com/) and got stuck at the following exercise:
f :: Float
f = 1.0
-- Question: Can you replace `f`'s signature by `f :: Num a => a`
At first, I thought the answer would be Yes. Float provides an instance for Num, so substituting a Num a => a by a Float value should be fine (I am thinking co-variance here).
This does not compile however:
• Could not deduce (Fractional a) arising from the literal ‘1.0’
  from the context: Num a
    bound by the type signature for:
               f :: forall a. Num a => a
    at ...
  Possible fix:
    add (Fractional a) to the context of
      the type signature for:
        f :: forall a. Num a => a
• In the expression: 1.0
  In an equation for ‘f’: f = 1.0
If I do this however, no problem:
f :: Fractional a => a
f = 1.0
Why can't I use a less specific constraint here like Num a => a?
UPD:
Actually, you can sum this up to:
1.0::(Num a => a)
vs
1.0::(Fractional a => a)
Why is the second working but not the first one? I thought Fractional was a subset of Num (meaning Fractional is compatible with Num)
UPD 2:
Thanks for your comments, but I am still confused. Why this works:
f :: Num a => a -> a
f a = a
f 1.0
while this not:
f :: Num a => a
f = 1.0
UPD 3:
I have just noticed something:
f :: Num a => a
f = (1::Int)
does not work either.
UPD 4
I have been reading all the answers/comments and as far as I understand:
f :: Num a => a
is the Scala equivalent of
def f[A: Num]: A
which would explain why many mentioned that a is defined by the caller. The only reason why we can write this:
f :: Num a => a
f = 1
is because 1 is typed as a Num a => a. Could someone please confirm this assumption? In any case, thank you all for your help.
If I have f :: Num a => a this means that I can use f wherever I need any numeric type. So, all of f :: Int, f :: Double must type check.
In your case, we can't have 1.0 :: Int for the same reason we can't have 543.24 :: Int, that is, Int does not represent fractional values. However, 1.0 does fit any fractional type (as 543.24 does).
Fractional indeed can be thought as a subset of Num. But if I have a value inside all the fractionals f :: forall a . Fractional a => a, I don't necessarily have a value in all the numeric types f :: forall a . Num a => a.
Note that, in a sense, the constraints are on the left side of =>, which makes them behave contra-variantly. I.e. cars are a subset of vehicles, but I can't conclude that a wheel that can be used in any car will be able to be used with any vehicle. Rather, the opposite: a wheel that can be used in any vehicle will be able to be used with any car.
So, you can roughly regard f :: forall a . Num a => a (values fitting in any numeric type, like 3 and 5) as a subtype of f :: forall a . Fractional a => a (values fitting in any fractional type, like 3, 5, and 32.43).
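A quick GHCi sketch of that direction (error text abridged):
Prelude> let n = 3 :: Num a => a
Prelude> n :: Double
3.0
Prelude> let y = 3.5 :: Fractional a => a
Prelude> y :: Double
3.5
Prelude> y :: Int
-- error: No instance for (Fractional Int) arising from a use of ‘y’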
Let's start with the monomorphic case:
f :: Float
f = 1.0
Here, you've said that f is a Float; not an Int, not a Double, not any other type. 1.0, on the other hand, is a polymorphic constant; it has type Fractional a => a, and so can be used to provide a value of any type that has a Fractional instance. You can restrict it to Float, or Double, etc. Since f has to be a Float, that's what 1.0 is restricted to.
If you try to change the signature
f :: Num a => a
f = 1.0
now you have a problem. You've now promised that f can be used to provide a value of any type that has a Num instance. That includes Int and Integer, but neither of those types has a Fractional instance. So, the compiler refuses to let you do this. 1.0 simply can't produce an Int should the need arise.
Likewise,
1.0::(Num a => a)
is a lie, because this isn't a restriction; it's an attempt at expanding the set of types that 1.0 can produce. Something with type Num a => a should be able to give me an Int, but 1.0 cannot do that.
1.0::(Fractional a => a)
works because you are just restating something that is already true. It's neither restricting 1.0 to a smaller set of types nor trying to expand it.
Now we get something a little more interesting, because you are specifying a function type, not just a value type.
f :: Num a => a -> a
f a = a
This just says that f can take as its argument any value that is no more polymorphic than Num a => a. a can be any concrete type that implements Num, or a polymorphic value whose possible types are a subset of the types that implement Num.
You chose
f 1.0
which means a gets unified with Fractional a => a. Type inference then decides that the return type is also Fractional a => a, and returning the same value you passed in is allowed.
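You can check this in GHCi (a small sketch):
Prelude> let f :: Num a => a -> a; f a = a
Prelude> :t f 1.0
f 1.0 :: Fractional a => a
Prelude> f 1.0
1.0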
We've already covered why this
f :: Num a => a
f = 1.0
isn't allowed above, and
f :: Num a => a
f = (1::Int)
fails for the same reason. Int is simply too specific; it is not the same as Num a => a.
For example, (+) :: Num a => a -> a -> a requires two arguments of the same type. So, I might try to write
(1 :: Integer) + f
or
(1 :: Float) + f
In both cases, I need f to be able to provide a value with the same type as the other argument, to satisfy the type of (+): if I want an Integer, I should be able to get an Integer, and if I want a Float, I should be able to get a Float. But if you could specify a value with something less specific than Num a => a, you wouldn't be able to keep that promise. 1.0 :: Fractional a => a can't provide an Integer, and 1 :: Int can't provide anything except an Int, not even an Integer.
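To make that concrete (a GHCi sketch, with f defined as the valid f = 1):
Prelude> let f = 1 :: Num a => a
Prelude> (1 :: Integer) + f
2
Prelude> (1.5 :: Float) + f
2.5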
Think of a polymorphic type as a function from a concrete type to a value; you can literally do this if you enable the TypeApplications extension.
Prelude> :set -XTypeApplications
Prelude> :{
Prelude| f :: Num a => a
Prelude| f = 1
Prelude| :}
Prelude> :t f
f :: Num a => a
Prelude> :t f @Int
f @Int :: Int
Prelude> f @Int
1
Prelude> :t f @Float
f @Float :: Float
Prelude> f @Float
1.0
Prelude> :t f @Rational
f @Rational :: Rational
Prelude> f @Rational
1 % 1
The reason these all work is because you promised that you could pass any type with a Num instance to f, and it could return a value of that type. But if you had been allowed to say f = 1.0, there is no way that f @Int could, in fact, return an Int, because 1.0 simply is not capable of producing an Int.
When values with polymorphic types like Num a => a are involved, there are two sides. The source of the value, and the use of the value. One side gets the flexibility to choose any specific type compatible with the polymorphic type (like using Float for Num a => a). The other side is restricted to using code that will work regardless of which specific type is involved - it can only make use of features that will work for every type compatible with the polymorphic type.
There is no free lunch; both sides cannot have the same freedom to pick any compatible type they like.
This is just as true for object-oriented subclass polymorphism, however OO polymorphism rules give the flexibility to the source side, and put all the restrictions on the use side (except for generics, which works like parametric polymorphism), while Haskell's parametric polymorphism gives the flexibility to the use side, and puts all the restrictions on the source side.
For example in an OO language with a general number class Num, and Int and Double as subclasses of this, then a method returning something of type Num would work the way you're expecting. You can return 1.0 :: Double, but the caller can't use any methods on the value that are provided specifically by Double (like say one that splits off the fractional part) because the caller must be programmed to work the same whether you return an Int or a Double (or even any brand new subclass of Num that is private to your code, that the caller cannot possibly know about).
Polymorphism in Haskell is based on type parameters rather than subtyping, which switches things around. The place in the code where f :: Num a => a is used has the freedom to demand any particular choice for a (subject to the Num constraint), and the code for f that is the source of the value must be programmed to work regardless of the use-site's choice. The use-site is even free to demand values of a type that is private to the code using f, that the implementer of f cannot possibly know about. (I can literally open up a new file, make any bizarre new type I like and give it an instance for Num, and any of the standard library functions written years ago that are polymorphic in Num will work with my type)
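A minimal sketch of that last parenthetical claim; Tally is a type made up here purely for illustration, with a deliberately boring Num instance:
newtype Tally = Tally Integer deriving Show

instance Num Tally where
  Tally a + Tally b = Tally (a + b)
  Tally a * Tally b = Tally (a * b)
  abs (Tally a)     = Tally (abs a)
  signum (Tally a)  = Tally (signum a)
  negate (Tally a)  = Tally (negate a)
  fromInteger       = Tally

-- sum :: Num a => [a] -> a was written long before Tally existed,
-- yet the use-site may demand it at this brand-new type:
main :: IO ()
main = print (sum [1, 2, 3] :: Tally)   -- prints: Tally 6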
So this works:
f :: Float
f = 1.0
There are no type variables, so both source and use-site simply have to treat this as a Float. But this does not:
f :: Num a => a
f = 1.0
Because the place where f is used can demand any valid choice for a, and this code must be able to work for that choice. (It would not work when Int is chosen, for example, so the compiler must reject this definition of f). But this does work:
f :: Fractional a => a
f = 1.0
Because now the use site is only free to demand any type that is in Fractional, which excludes the ones (like Int) that floating point literals like 1.0 cannot support.
Note that this is exactly how "generics" work in object oriented languages, so if you're familiar with any language supporting generics, just treat Haskell types the way you do generic ones in OO languages.
One further thing that may be confusing you is that in this:
f :: Float
f = 1.0
The literal 1.0 isn't actually definitively a Float. Haskell literals are much more flexible than those of most other languages. Whereas e.g. Java says that 1.0 is definitely a value of type double (with some automatic conversion rules if you use a double where certain other types are expected), in Haskell that 1.0 is actually itself a thing with a polymorphic type. 1.0 has type Fractional a => a.
So the reason the f :: Fractional a => a definition worked is obvious, it's actually the f :: Float definition that needs some explanation. It's making use of exactly the rules I described above in the first section of my post. Your code f = 1.0 is a use-site of the value represented by 1.0, so it can demand any type it likes (subject to Fractional). In particular, it can demand that the literal 1.0 supply a value of type Float.
This again reinforces why the f :: Num a => a definition can't work. Here the type for f is promising to f's callers that they can demand any type they like (subject to Num). But it's going to fulfill that demand by just passing the demand down the chain to the literal 1.0, which has the most general type of Fractional a => a. So if the use-site of f demands a type that is in Num but outside Fractional, f would then try to demand that 1.0 supply that same non-Fractional type, which it can't.
Names do not have types. Values have types, and the type is an intrinsic part of the value. 1 :: Int and 1 :: Integer are two different values. Haskell does not implicitly convert values between types, though it can be trivial to define functions that take values of one type and return values of another. (For example, f :: Int -> Integer with f x = toInteger x will "convert" its Int argument to an Integer.)
A declaration like
f :: Num a => a
does not say that f has type Num a => a, it says that you can assign values of type Num a => a to f.
You can think of a polymorphic value like 1 :: Num a => a being all 1-like values in every type with a Num instance, including 1 :: Int, 1 :: Integer, 1 :: Rational, etc.
An assignment like f = 1 succeeds because the literal 1 has the expected type Num a => a.
An assignment like f = 1.0 fails because the literal 1.0 has a different type, Fractional a => a, and that type is too specific. It does not include all 1-like values that Num a => a may be called on to produce.
Suppose you declared g :: Fractional a => a. You can say g = 1.0, because the types match. You cannot say g = (1.0 :: Float), because the types do not match; Float has a Fractional instance, but it is just one of a possibly infinite set of types that could have Fractional instances.
You can say g = 1, because Fractional a => a is more specific than Num a => a, and Fractional has Num as its superclass. The assignment "selects" the subset of 1 :: Num a => a that overlaps with (and for all intents and purposes is) 1 :: Fractional a => a and assigns that to g. Put another way, just as 1 :: Num a => a can produce a value for any single type that has a Num instance, it can produce a value for any subset of types implied by a subclass of Num.
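In GHCi this looks roughly like the following (a sketch; error text abridged):
Prelude> let g = 1 :: Fractional a => a
Prelude> g :: Double
1.0
Prelude> g :: Rational
1 % 1
Prelude> g :: Int
-- error: No instance for (Fractional Int) arising from a use of ‘g’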
I am puzzled by this as well.
What gets stuck in my mind is something like:
// in pseudo-typescript
type Num = Int | Float;
let f: Num;
f = (1.0 as Float); // why this doesn't work
Fact is, Num a => a is just not a simple sum of numeric types.
It represents something that can morph into various kind of numeric types.
Thanks to chepner's explanation, now I can persuade myself like this:
if I have a Num a => a then I can get an Int from it, I can also get a Float from it, as well as a Double....
If I were able to install 1.1 into a Num a => a, then there would be no way I could safely derive an Int from 1.1.
The expression 1 is able to bind to Num a => a because 1 itself is a polymorphic constant with type signature Num a => a.

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
  infixl 7 *
  (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
  one :: m

class (Ring a) => Numeric a where
  fromIntegral :: (Integral i) => i -> a
  fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
  toInteger :: i -> Integer
  quot :: i -> i -> i
  quot a b = let (q,r) = (quotRem a b) in q
  rem :: i -> i -> i
  rem a b = let (q,r) = (quotRem a b) in r
  quotRem :: i -> i -> (i, i)
  quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
  | i == 0 = one
  | True   = let (d, m) = (divMod i 2)
                 rec = (x*x) ^ d in
             if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3

<interactive>:16:1: error:
    * Could not deduce (Integral i0) arising from a use of `^'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Integral Integer -- Defined at Numbers.hs:190:10
        instance Integral Int -- Defined at Numbers.hs:207:10
    * In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3

<interactive>:16:3: error:
    * Could not deduce (Numeric i0) arising from the literal `3'
      from the context: Numeric m
        bound by the inferred type of it :: Numeric m => m
        at <interactive>:16:1-3
      The type variable `i0' is ambiguous
      These potential instances exist:
        instance Numeric Integer -- Defined at Numbers.hs:294:10
        instance Numeric Complex -- Defined at Numbers.hs:110:10
        instance Numeric Rational -- Defined at Numbers.hs:306:10
        ...plus four others
        (use -fprint-potential-instances to see them all)
    * In the second argument of `(^)', namely `3'
      In the expression: 2 ^ 3
      In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that in the normal Prelude I can do this just fine?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the results will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication; so this kind of expresses the intent pretty well. And it gives best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
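A sketch of that suggested signature against the standard Prelude (pow is a made-up name to avoid clashing with the real (^); negative exponents are not handled):
pow :: Num a => a -> Int -> a
pow _ 0 = 1
pow x n
  | even n    = half * half
  | otherwise = x * half * half      -- n odd: x^n = x * (x^(n `div` 2))^2
  where half = pow x (n `div` 2)
-- e.g. pow 2 10 == 1024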
As a general rule, I try to avoid† any function arguments that are more polymorphic than the results. Haskell's polymorphism works best “backwards”, i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.

Setting type of fractional to num

Could someone please explain why this compiles
Prelude> 1 :: Num a => a
and this doesn't
Prelude> 1.0 :: Num a => a
The second example would work with Fractional, but Num is a superclass of Fractional, just like it's a superclass of Integral.
If we have
x :: Num a => a
the user of x can pick a as wanted. E.g.
x :: Int
What would this evaluate to if x = 1.5 ?
For this reason a floating point literal can't be given the polytype Num a => a, since its value won't (in general) fit all Numeric types.
Integral literals instead fit every numeric type, so they are allowed.
The superclass relationship between type classes does not establish relationships between types. It tells us only that if type t is a member of type class C, and B is a super class of C, then t will also be member of B.
It doesn't say that each value v :: t can be used at any type that is also a member of B. But this is what you are stating with:
3.14159 :: Num a => a
It's important to distinguish between polymorphism in OO languages and polymorphism in Haskell. OO polymorphism is covariant, while Haskell's parametric polymorphism is contravariant.
What this means is: in an OO language, if you have
class A {...}
class B: A {...}
i.e. A is a superclass of B, then any value of type B is also a value of type A. (Note that any particular value is actually not polymorphic but has a concrete type!) Thus, if you had
class Num {...}
class Fractional: Num {...}
then a Fractional value could indeed be used as a Num value. That's roughly what covariant means: any subclass value is also a superclass value; the values hierarchy goes the same direction as the type hierarchy.
In Haskell, classes are different. There is no such thing as a “value of type Num”, only values of concrete types a. That type may be in the Num class.
Unlike in OO languages, a value like 1 :: Num a => a is polymorphic: it can take on whatever type the environment demands, provided the type is in the Num class. (Actually that syntax is just shorthand for 1 :: ∀ a . Num a => a, to be read as “for all types a, you can have a value 1 of type a”.) For example,
Prelude> let x = 1 :: Num a => a
Prelude> x :: Int
1
Prelude> x :: Double
1.0
You can also give x a more specific constraint of Fractional, since that's a subclass of Num. That just restricts what type the polymorphic value can be instantiated to:
Prelude> let x = 1 :: Fractional a => a
Prelude> x :: Int
<interactive>:6:1:
    No instance for (Fractional Int) arising from a use of ‘x’
    ...
Prelude> x :: Double
1.0
because Int is not a fractional type.
Thus, Haskell's polymorphism is contravariant: polymorphic values restricted to a superclass can also be restricted to a subclass instead, but not the other way around. In particular, you can obviously have
Prelude> let y = 1.0 :: Fractional a => a
(y is the same as the Fractional-constrained x above), but you can not generalise this to y' = 1.0 :: Num a => a. Which is a good thing, as Ingo remarked, since otherwise it would be possible to do
Prelude> 3.14159 :: Int
????

Haskell, multiple type classes for one argument

This is an example in Learn You A Haskell, chapter on higher order functions:
compareWithHundred :: (Num a, Ord a) => a -> Ordering
compareWithHundred x = compare 100 x
While the idea of the function is clear to me, I'm not sure why the type signature is (Num a, Ord a). We only pass the integer that is to be compared to the function, of type Int. What does Ord stand for here, and why is it implicitly included in the type signature?
That's not the only possible signature for this function. It happens to be the most general one. compareWithHundred :: Int -> Ordering is actually a possible instantiation – the polymorphic a argument can be instantiated with any orderable number type, which does sure enough include Int, but also Integer, Rational, Double...
Prelude> let compareWithHundred :: (Num a, Ord a) => a -> Ordering; compareWithHundred x = compare 100 x
Prelude> compareWithHundred (99 :: Int)
GT
Prelude> compareWithHundred (100.3 :: Double)
LT
Not all number types permit you to order-compare them though – the classical example where this is not possible are complex numbers (which have “more than one direction” in which you could order them).
Prelude Data.Complex> compareWithHundred (100 :+ 30 :: Complex Double)
<interactive>:10:1:
    No instance for (Ord (Complex Double))
      arising from a use of ‘compareWithHundred’
    In the expression: compareWithHundred (100 :+ 30 :: Complex Double)
    In an equation for ‘it’:
      it = compareWithHundred (100 :+ 30 :: Complex Double)
Hence you need to require both that the argument is a number (so there exists a value 100 to compare it with) and that the argument is in the Ord class. This combined constraint is written (Num a, Ord a).
I have something to add, in case you couldn't gather something from leftaroundabout's thorough answer.
Everything to the left of => in a type signature is a constraint. Read the type like this:
compareWithHundred :: (Num a, Ord a) => a -> Ordering
                      ^^^^^^^^^^^^^^    ^    ^^^^^^^^
                       constraints      |       |
                            argument type       |
                                      result type
So you only pass one argument to the function because there is only one argument in the type signature, a. a is a type variable, and can be replaced with any type as long as that type satisfies the constraints.
The Num a says that whatever you replace a with has to be numeric (so it can be Int, Integer, Double, ...), and the Ord a says that it has to be comparable. leftaroundabout's answer goes into more detail about why you need both; I just wanted to make sure you knew how to read the signature.
So in one sense it's perfectly legal to say compareWithHundred "foobar": the type checker works out that the expression's type would be Ordering, but it then fails when it tries to check that there is a Num String instance.
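Roughly, in GHCi (a sketch; error text abridged):
Prelude> compareWithHundred "foobar"
-- error: No instance for (Num [Char]) arising from a use of ‘compareWithHundred’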
I hope this helps.

Polymorphism: a constant versus a function

I'm new to Haskell and come across a slightly puzzling example for me in the Haskell Programming from First Principles book. At the end of Chapter 6 it suddenly occurred to me that the following doesn't work:
constant :: (Num a) => a
constant = 1.0
However, the following works fine:
f :: (Num a) => a -> a
f x = 3*x
I can input any numerical value for x into the function f and nothing will break. It's not constrained to taking integers. This makes sense to me intuitively. But the example with the constant is totally confusing to me.
Over on a reddit thread for the book it was explained (paraphrasing) that the reason why the constant example doesn't work is that the type declaration forces the value of constant to only be things which aren't more specific than Num. So trying to assign a value to it which is from a subclass of Num like Fractional isn't kosher.
If that explanation is correct, then am I wrong in thinking that these two examples seem like complete opposites of each other? In one case, the type declaration forces the value to be as general as possible. In the other case, the accepted values for the function can be anything that implements Num.
Can anyone set me straight on this?
It can sometimes help to read types as a game played between two actors, the implementor of the type and the user of the type. To do a good job of explaining this perspective, we have to introduce something that Haskell hides from you by default: we will add binders for all type variables. So your types would actually become:
constant :: forall a. Num a => a
f :: forall a. Num a => a -> a
Now, we will read type formation rules thusly:
forall a. t means: the caller chooses a type a, and the game continues as t
c => t means: the caller shows that constraint c holds, and the game continues as t
t -> t' means: the caller chooses a value of type t, and the game continues as t'
t (where t is a monomorphic type such as a bare variable or Integer or similar) means: the implementor produces a value of type t
We will need a few other details to truly understand things here, so I will quickly say them here:
When we write a number with no decimal points, the compiler implicitly converts this to a call to fromInteger applied to the Integer produced by parsing that number. We have fromInteger :: forall a. Num a => Integer -> a.
When we write a number with decimal points, the compiler implicitly converts this to a call to fromRational applied to the Rational produced by parsing that number. We have fromRational :: forall a. Fractional a => Rational -> a.
The Num class includes the method (*) :: forall a. Num a => a -> a -> a.
Now let's try to walk through your two examples slowly and carefully.
constant :: forall a. Num a => a
constant = 1.0 {- = fromRational (1 % 1) -}
The type of constant says: the caller chooses a type, shows that this type implements Num, and then the implementor must produce a value of that type. Now the implementor tries to play his own game by calling fromRational :: Fractional a => Rational -> a. He chooses the same type the caller did, and then makes an attempt to show that this type implements Fractional. Oops! He can't show that, because the only thing the caller proved to him was that a implements Num -- which doesn't guarantee that a also implements Fractional. Dang. So the implementor of constant isn't allowed to call fromRational at that type.
Now, let's look at f:
f :: forall a. Num a => a -> a
f x = 3*x {- = fromInteger 3 * x -}
The type of f says: the caller chooses a type, shows that the type implements Num, and chooses a value of that type. The implementor must then produce another value of that type. He is going to do this by playing his own game with (*) and fromInteger. In particular, he chooses the same type the caller did. But now fromInteger and (*) only demand that he prove that this type is an instance of Num -- so he passes off the proof the caller gave him of this and saves the day! Then he chooses the Integer 3 for the argument to fromInteger, and chooses the result of this and the value the caller handed him as the two arguments to (*). Everybody is satisfied, and the implementor gets to return a new value.
The point of this whole exposition is this: the Num constraint in both cases is enforcing exactly the same thing, namely, that whatever type we choose to instantiate a at must be a member of the Num class. It's just that in the definition constant = 1.0 being in Num isn't enough to do the operations we've written, whereas in f x = 3*x being in Num is enough to do the operations we've written. And since the operations we've chosen for the two things are so different, it should not be too surprising that one works and the other doesn't!
When you have a polymorphic value, the caller chooses which concrete type to use. The Haskell report defines the type of numeric literals, namely:
integer and floating literals have the typings (Num a) => a and
(Fractional a) => a, respectively
3 is an integer literal so has type Num a => a and (*) has type Num a => a -> a -> a so f has type Num a => a -> a.
In contrast, 3.0 has type Fractional a => a. Since Fractional is a subclass of Num, your type signature for constant is invalid: the caller could choose a type for a which is Num but not Fractional, e.g. Int or Integer.
They don't mean the opposite - they mean exactly the same ("as general as possible"). A typeclass gives you all the guarantees that you can rely upon - if typeclass T provides function f, you can use it for all instances of T, but even if some of these instances are members of G (providing g) as well, requiring the T typeclass is not sufficient to call g.
In your case this means:
Members of Num are guaranteed to provide conversion from integers (i.e. the default type for integral values, like 1 or 1000) - via the fromInteger function.
However, they are not guaranteed to provide conversion from rational numbers (like 1.0) - the Fractional typeclass does provide this via the fromRational function, but that doesn't help here, as you only required Num.
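You can see the two literal typings directly in GHCi (a quick sketch; older GHC versions name the type variable a rather than p):
Prelude> :t 1
1 :: Num p => p
Prelude> :t 1.0
1.0 :: Fractional p => p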
