Assigning "bare" numbers to newtypes - Haskell

Note the second line in this GHCi session. What is it about the Latitude type that allows me to use a "bare" number as a value, instead of having to invoke a constructor? I would like to do something similar with some of my own types.
λ> :m + Data.Geo.GPX.Type.Latitude
λ> let t = 45 :: Latitude
λ> t
45.0
I've examined the source code for the Latitude type, but I had trouble figuring it out at first. Eventually I found the answer, so I thought I'd document it here. See my answer below.

What makes this work is that the type is an instance of Num. The easiest way to get that is to derive Num, which requires the GeneralizedNewtypeDeriving language pragma. So I can create a type like the following,
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype Seconds = Seconds Double deriving (Eq, Ord, Enum, Num, Fractional, Floating, Real, RealFrac, RealFloat, Show)
And then in GHCi,
λ> let s = 5 :: Seconds
λ> s
Seconds 5.0
Alternatively, I could explicitly implement Num.
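For example, dropping Num from the deriving list and writing the instance by hand might look like this (a minimal sketch; fromInteger is the method that makes bare literals work):
newtype Seconds = Seconds Double deriving (Eq, Ord, Show)

instance Num Seconds where
  Seconds a + Seconds b = Seconds (a + b)
  Seconds a - Seconds b = Seconds (a - b)
  Seconds a * Seconds b = Seconds (a * b)
  negate (Seconds a)    = Seconds (negate a)
  abs (Seconds a)       = Seconds (abs a)
  signum (Seconds a)    = Seconds (signum a)
  fromInteger n         = Seconds (fromInteger n)  -- called for every integer literal at type Seconds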

According to the Haskell 98 Report, numeric literals are actually calls to fromInteger and fromRational. This allows them to be converted to any type that implements those functions (fromInteger is in the Prelude.Num typeclass and fromRational is in the Prelude.Fractional typeclass).
The syntax of numeric literals is given in Section 2.5. An integer
literal represents the application of the function fromInteger to the
appropriate value of type Integer. Similarly, a floating literal
stands for an application of fromRational to a value of type Rational
(that is, Ratio Integer). Given the typings:
fromInteger :: (Num a) => Integer -> a
fromRational :: (Fractional a) => Rational -> a
integer and floating literals have the typings (Num a) => a and
(Fractional a) => a, respectively. Numeric literals are defined in
this indirect way so that they may be interpreted as values of any
appropriate numeric type. See Section 4.3.4 for a discussion of
overloading ambiguity.
http://www.haskell.org/onlinereport/basic.html#numeric-literals
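So, with the Seconds newtype above, the literal in 5 :: Seconds elaborates to a fromInteger call. A sketch of what the compiler effectively generates:
-- what `let s = 5 :: Seconds` amounts to
s :: Seconds
s = fromInteger (5 :: Integer)  -- dispatches to the Num instance of Seconds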

Related

The type-specification operator is like down-casting in Object-Oriented languages?

I was going through the book Haskell Programming from First Principles and came across the following code snippet.
Prelude> fifteen = 15
Prelude> :t fifteen
fifteen :: Num a => a
Prelude> fifteenInt = fifteen :: Int
Prelude> fifteenDouble = fifteen :: Double
Prelude> :t fifteenInt
fifteenInt :: Int
Prelude> :t fifteenDouble
fifteenDouble :: Double
Here, Num is the type-class that is like the base class in OO languages. What I mean is when I write a polymorphic function, I take a type variable that is constrained by Num type class. However, as seen above, casting fifteen as Int or Double works. Isn't it equivalent to down-casting in OO languages?
Wouldn't some more information (a bunch of Double type specific functions in this case) be required for me to be able to do that?
Thanks for helping me out.
No, it's not equivalent. Downcasting in OO is a runtime operation: you have a value whose concrete type you don't know, and you assert that it has some particular type – which is an error if it actually has a different concrete type.
In Haskell, :: isn't really an operator at all. It just adds extra information to the typechecker at compile-time. I.e. if it compiles at all, you can always be sure that it will actually work at runtime.
The reason it works at all is that fifteen has no concrete type; it's like a template / generic in OO languages. So when you add the :: Double constraint, the compiler can pick what type is instantiated for a. Double is fine because it is a member of the Num typeclass. But don't confuse a typeclass with an OO class: an OO class specifies one concrete type (which may have subtypes), whereas Haskell has no subtypes and a typeclass is more like an interface in OO languages. You can also think of a typeclass as a set of types: fifteen can potentially take any of the types in the Num class, and a signature chooses which one is actually used.
Downcasting is not a good analogy. Rather, compare to generic functions.
Very roughly, you can pretend that your fifteen is a generic function
// pseudo code in OOP
A fifteen<A>() where A : Num
When you use fifteen :: Double in Haskell, you tell the compiler that the result of the above function is Double, and that enables the compiler to "call" the above OOP function as fifteen<Double>(), inferring the generic argument.
With the TypeApplications extension enabled, GHC Haskell has a more direct way to choose the generic parameter, namely the type application fifteen @Double.
There is a difference between the two ways in that ... :: Double specifies the return type, while @Double specifies the generic argument. In the fifteen case they are the same, but this is not always so. For instance:
> list = [(15, True)]
> :t list
list :: Num a => [(a, Bool)]
Here, to choose a = Double, we need to write either list :: [(Double, Bool)] or list @Double.
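A self-contained sketch of both spellings, in file form (assuming the TypeApplications extension):
{-# LANGUAGE TypeApplications #-}

list :: Num a => [(a, Bool)]
list = [(15, True)]

viaSignature :: [(Double, Bool)]
viaSignature = list :: [(Double, Bool)]  -- choose a via the result type

viaApplication :: [(Double, Bool)]
viaApplication = list @Double            -- choose a directly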
In the type forall a. Num a => a†, the forall a and Num a are parameters specified by the “caller”, that is, the place where the definition (fifteen) is used. The type parameter is implicitly filled in with a type argument by GHC during type inference; the Num constraint becomes an extra parameter, a “dictionary” comprising a record of functions ((+), (-), abs, &c.) for a particular Num instance, and which Num dictionary to pass in is determined from the type. The type argument exists only at compile time, and the dictionary is then typically inlined to specialise the function and enable further optimisations, so neither of these parameters typically has any runtime representation.
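Roughly, you can picture the dictionary passing like this (a hand-rolled sketch, not GHC's actual internals; NumDict and its field names are made up, and only two of the Num methods are included):
-- a made-up record standing in for GHC's Num dictionary
data NumDict a = NumDict
  { plus         :: a -> a -> a
  , fromInteger' :: Integer -> a
  }

-- `fifteen :: Num a => a` roughly becomes a function of the dictionary
fifteen :: NumDict a -> a
fifteen d = fromInteger' d 15

-- the dictionary for Double, built from the real operations
numDictDouble :: NumDict Double
numDictDouble = NumDict (+) fromInteger

-- `fifteen :: Double` then amounts to
fifteenDouble :: Double
fifteenDouble = fifteen numDictDouble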
So in fifteen :: Double, the compiler deduces that a must be equal to Double, giving (a ~ Double, Num a) => a, which is simplified first to Num Double => Double, then to simply Double, because the constraint Num Double is satisfied by the existence of an instance Num Double definition. There is no subtyping or runtime downcasting going on, only the solution of equality constraints, statically.
The type argument can also be specified explicitly with the TypeApplications syntax fifteen @Double, comparable to writing fifteen<Double> in OO languages.
The inferred type of fifteen includes a Num constraint because the literal 15 is implicitly a call to something like fromInteger (15 :: Integer)‡. fromInteger has the type Num a => Integer -> a and is a method of the Num typeclass, so you can think of a literal as “partially applying” the Integer argument while leaving the Num a argument unspecified, then the caller decides which concrete type to supply for a, and the compiler inserts a call to the fromInteger function in the Num dictionary passed in for that type.
† forall quantifiers are typically implicit, but can be written explicitly with various extensions, such as ExplicitForAll, ScopedTypeVariables, and RankNTypes.
‡ I say “something like” because this abuses the notation 15 :: Integer to denote a literal Integer, not circularly defined in terms of fromInteger again. (Else it would loop: fromInteger 15 = fromInteger (fromInteger 15) = fromInteger (fromInteger (fromInteger 15))…) This desugaring can be “magic” because it’s a part of the language itself, not something defined within the language.

Why can I call a function from a typeclass instance directly in the REPL (like compare from Ord)?

When I am in a REPL like GHCi with Prelude, and I write
*> compare 5 7
LT
Why can I call that function (compare) like that directly in the REPL?
I know that compare is defined in typeclass Ord. The typeclass definition for Ord of course shows that it is a subclass of Eq.
Here is my line of reasoning:
5 has type Num a => a, and the Num typeclass is not a subclass of Eq.
Also,
Prelude> :t (compare 5)
(compare 5) :: (Num a, Ord a) => a -> Ordering
So, there is an additional constraint imposed here when I apply a numeric type argument. When I call compare 5 7, the types of the arguments are narrowed to something that does have an instance of Ord. I think the narrowing happens to the default concrete type associated with the typeclass: in the case of Num, this is Integer, which is an instance of Real and hence of Ord.
However, coming from a non-functional programming background, I would have imagined that I would have to call compare on one of the numbers (like calling it on an object in OOP). If 5 is Integer, which does implement Ord, then why do I call compare in the REPL itself? This is obviously a question related to a paradigm shift for me and I still didn't get it. Hopefully someone can explain.
Type defaulting comes into play here. The interpreter can derive that 5 and 7 need to be of the same type, and members of both the Ord and Num typeclasses. The default for Num is Integer, and since Integer is an instance of Ord as well, Integer can be used.
The interpreter thus considers 5 and 7 to be Integers in that case, and can therefore evaluate the function and obtain LT.
GHCi has some additional defaulting rules, described in the GHCi documentation.
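Making the defaulted type explicit gives the same result (any type with both Num and Ord instances would do):
Prelude> compare (5 :: Integer) (7 :: Integer)
LT
Prelude> compare (5 :: Double) (7 :: Double)
LT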
Methods like compare are associated with types, not particular values. The compiler needs to be able to deduce the type in order to select the correct typeclass instance, but that doesn't require any special assistance.
The type of compare is
compare :: (Ord a) => a -> a -> Ordering
Thus any of its arguments (of type a) can be used to look up the Ord instance.
As you correctly assumed, in the compare 5 7 example, the types of 5 and 7 default to Integer. Thus a in the compare type is deduced to be Integer and the Ord Integer instance is selected.
This selection does not necessarily go through a function argument. Consider e.g.
read :: (Read a) => String -> a
Here it is the result type that drives instance selection, but the type checker is just fine with it:
> read "(2, 3)" :: (Int, Int)
(2,3)
(What would the OO equivalent be? "(2, 3)".read()?)
In fact, methods don't even have to be functions:
maxBound :: (Bounded a) => a
This is a polymorphic value, not a function:
> maxBound :: Int
9223372036854775807
Class instances are uniquely connected to types, so as long as the type checker has enough information to figure out what that type variable represents, everything works out. That is, in
someMethod :: (SomeClass foo) => ...
foo has to appear somewhere in the type signature ... so the type checker can resolve SomeClass foo from the way someMethod is used at any given point (at least in the absence of certain language extensions).
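Here is a small self-contained sketch (the class and instances are made up for illustration) of a method whose instance is chosen purely by the result type:
class Default a where
  def :: a   -- no arguments at all; the instance is selected by the result type

instance Default Int where
  def = 0

instance Default Bool where
  def = False

x = def :: Int    -- 0
b = def :: Bool   -- False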

How does fromIntegral work?

The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
...
fromInteger i = I# (integerToInt i)
and
instance Num Integer where
...
fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int I don't see how either of these generate results that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Thanks.
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead when each individual type is declared to be an instance of Num a way of doing that for that particular type must be provided (by implementing fromInteger).
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what types each expression should have in order to fit into the bigger expression it's being used in. And it also goes bottom-up, figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.
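Concretely, using only standard Prelude functions:
n :: Int
n = 42

-- fromIntegral = fromInteger . toInteger:
-- toInteger uses the Integral instance of Int,
-- fromInteger uses the Num instance of Float
f :: Float
f = fromIntegral n  -- 42.0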

Why can't I add Integer to Double in Haskell?

Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCi and the order in which it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is that GHCi is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCi isn't really "guessing"; it's using the very specific type defaulting rules defined in the Haskell Report. You can read a little more about that here.
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0 you really have fromInteger 1 + fromRational 2.0, and in the absence of other constraints the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint; therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type it would be recomputed every time it is used.
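A quick GHCi sketch of the fix mentioned above (note that recent GHCi versions already disable the restriction by default, so this mainly matters on older versions or in compiled modules):
Prelude> :set -XNoMonomorphismRestriction
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> a + b
3.0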
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)

Haskell Error: Couldn't match expected type `Integer' against inferred type `Int'

I have a Haskell function that calculates the size of the list of finite Ints. I need the output type to be an Integer because the value will actually be larger than the maximum bound of Int (the result will be -1 to be exact if the output type is an Int)
size :: a -> Integer
size a = (maxBound::Int) - (minBound::Int)
I understand the difference between Ints (bounded) and Integers (unbounded), but I'd like to make an Integer from an Int. I was wondering if there is a function like fromInteger that will allow me to convert an Int to an Integer.
You'll need to convert the values to Integers, which can be done by the fromIntegral function (numeric casting for Haskell):
fromIntegral :: (Integral a, Num b) => a -> b
It converts any type in the Integral class to any type in the (larger) Num class. E.g.
fromIntegral (maxBound::Int) - fromIntegral (minBound::Int)
However, I would not really trust the approach you're taking -- it seems very fragile. The behaviour in the presence of types that admit wraparound is pretty suspect.
What do you really mean by: "the size of the list of finite Ints". What is the size in this sense, if it isn't the length of the list?
I believe you are looking for:
fromIntegral :: (Integral a, Num b) => a -> b
which will convert an Int to an Integer (and, more generally, any Integral type to any Num type)
Perhaps you were assuming that Haskell, like many mainstream languages such as C and (to a certain extent) Java, has implicit numeric coercions. It doesn't: Int and Integer are totally unrelated types, and there is a special function for converting between them: fromIntegral. It is defined as fromInteger . toInteger, and fromInteger belongs to the Num typeclass. Look at the documentation: essentially, fromInteger is a generic "construct the representation of an arbitrary integral number", i.e. if you're implementing some kind of numbers and instantiating Num, you must provide a way to construct integral numbers of your type. For example, in the Num instance for complex numbers, fromInteger creates a complex number with a zero imaginary part and an integral real part.
The only sense in which Haskell has implicit numeric coercions is that integer literals are overloaded, and when you write 42, the compiler implicitly interprets it as fromInteger (42 :: Integer), so you can use integer literals in whatever context a Num type is required.
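For instance, with the Num instance from Data.Complex (its fromInteger builds a number with a zero imaginary part):
import Data.Complex

z :: Complex Double
z = 42  -- elaborates to fromInteger 42, i.e. 42.0 :+ 0.0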
