I'm just starting Learn You a Haskell for Great Good, and I'm having a bit of trouble with type classes. I would like to create a function that takes any number type and forces it to be a double.
My first thought was to define
numToDouble :: Num -> Double
But I don't think that worked because Num isn't a type, it's a typeclass (which seems to me to be a set of types). So looking at read, its type shows (Read a) => String -> a. I'm reading that as "read takes a string, and returns a thing of type a which is specified by the user". So I wrote the following
numToDouble :: (Num n) => n -> Double
numToDouble i = ((i) :: Double)
Which looks to me like "take a thing of type n (which must be in the Num typeclass), and convert it to a Double". This seems reasonable because I can do 20::Double
This produces the following output
Could not deduce (n ~ Double)
from the context (Num n)
bound by the type signature for numToDouble :: Num n => n -> Double
I have no idea what I'm reading. Based on what I can find, it seems like this has something to do with polymorphism?
Edit:
To be clear, my question is: Why isn't this working?
The reason you can say "20::Double" is that in Haskell an integer literal has type "Num a => a", meaning it can be any numeric type you like.
You are correct that a typeclass is a set of types. To be precise, it is the set of types that implement the functions in the "where" clause of the typeclass. Your type signature for numToDouble correctly expresses what you want to do.
All you know about a value of type "n" in your function is that it implements the Num interface. This consists of +, -, *, negate, abs, signum and fromInteger. The last is the only one that does type conversion, but it's not any use for what you want.
Bear in mind that Complex is also an instance of Num. What should numToDouble do with that? The Right Thing is not obvious, which is part of the reason you are having problems.
However lower down the type hierarchy you have the Real typeclass, which has instances for all the more straightforward numerical types you probably want to work with, like floats, doubles and the various types of integers. That includes a function "toRational" which converts any real value into a ratio, from which you can convert it to a Double using "fromRational", which is a function of the "Fractional" typeclass.
So try:
toDouble :: (Real n) => n -> Double
toDouble = fromRational . toRational
But of course this is actually too specific. GHCI says:
Prelude> :type fromRational . toRational
fromRational . toRational :: (Fractional c, Real a) => a -> c
So it converts any real type to any Fractional type (the latter covers anything that can do division, including things that are not instances of Real, like Complex). When messing around with numeric types I keep finding myself using it as a kind of generic numerical coercion.
Edit: as leftaroundabout says,
realToFrac = fromRational . toRational
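For example, in GHCi:
Prelude> realToFrac (1 :: Int) :: Double
1.0
Prelude> realToFrac (pi :: Double) :: Float
3.1415927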
You can't "convert" anything per se in Haskell. Between specific types, there may be the possibility to convert – with dedicated functions.
In your particular example, it certainly shouldn't work. Num is the class of all types that can be treated as numerical types, and that have numerical values in them (at least integer ones, so here's one such conversion function: fromInteger).
But these types can apart from that have any other stuff in them, which oftentimes is not in the reals and can thus not be approximated by Double. The most obvious example is Complex.
The particular class that has only real numbers in it is, surprise, called Real. What is indeed a bit strange is that its method is a conversion toRational, since the rationals don't quite cover the reals... but they're dense within them, so it's kind of ok. At any rate, you can use that function to implement your desired conversion:
realToDouble :: Real n => n -> Double
realToDouble i = fromRational $ toRational i
Incidentally, that combination fromRational . toRational is already a standard function: realToFrac, a bit more general.
Calling type classes "sets of types" is kind of ok, much like you can often get away with calling any kind of collection in maths a set – but it's not really correct. The most problematic thing is, you can't really say some type is not in a particular class: type classes are open, so at any place in a project you could declare an instance of a given class for some type.
Just to be 100% clear, the problem is
(i) :: Double
This does not convert i to a Double, it demands that i already is a Double. That isn't what you mean at all.
The type signature for your function is correct. (Or at least, it means exactly what you think it means.) But your function's implementation is wrong.
If you want to convert one type of data to another, you have to actually call a function of some sort.
Unfortunately, Num itself only allows you to convert an Integer to any Num instance. You're trying to convert something that isn't necessarily an Integer, so this doesn't help. As others have said, you probably want fromRational or similar...
There is no such thing as numeric casts in Haskell. When you write i :: Double, what that means isn't "cast i to Double"; it's just an assertion that i's type is Double. In your case, however, your function's signature also asserts that i's type is Num n => n, i.e., any type n (chosen by the caller) that implements Num; so for example, n could be Integer. Those two assertions cannot be simultaneously true, hence you get an error.
The confusing thing is that you can say 1 :: Double. But that's because in Haskell, a numeric literal like 1 has the same meaning as fromInteger one, where one :: Integer is the Integer whose value is one.
But that only works for numeric literals. This is one of the surprising things if you come to Haskell from almost any other language. In most languages you can use expressions of mixed numeric types rather freely and rely on implicit coercions to "do what I mean"; in Haskell, on the other hand, you have to use functions like fromIntegral or fromRational all the time. And while most statically typed languages have a syntax for casting from one numeric type to another, in Haskell you just use a function.
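For example, here's a typical place this bites newcomers (a minimal sketch; mean is just an illustrative name). length returns an Int, so it has to be converted explicitly before the division:

-- average of a list of Doubles; length returns an Int, so convert it
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)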
Related
I was going through the book Haskell Programming from First Principles and came across following code-snippet.
Prelude> fifteen = 15
Prelude> :t fifteen
fifteen :: Num a => a
Prelude> fifteenInt = fifteen :: Int
Prelude> fifteenDouble = fifteen :: Double
Prelude> :t fifteenInt
fifteenInt :: Int
Prelude> :t fifteenDouble
fifteenDouble :: Double
Here, Num is the type-class that is like the base class in OO languages. What I mean is when I write a polymorphic function, I take a type variable that is constrained by Num type class. However, as seen above, casting fifteen as Int or Double works. Isn't it equivalent to down-casting in OO languages?
Wouldn't some more information (a bunch of Double type specific functions in this case) be required for me to be able to do that?
Thanks for helping me out.
No, it's not equivalent. Downcasting in OO is a runtime operation: you have a value whose concrete type you don't know, and you basically assert that it has some particular type – which is an error if it actually happens to be a different concrete type.
In Haskell, :: isn't really an operator at all. It just adds extra information to the typechecker at compile-time. I.e. if it compiles at all, you can always be sure that it will actually work at runtime.
The reason it works at all is that fifteen has no concrete type. It's like a template / generic in OO languages. So when you add the :: Double constraint, the compiler can then pick what type is instantiated for a. And Double is ok because it is a member of the Num typeclass, but don't confuse a typeclass with an OO class: an OO class specifies one concrete type, which may however have subtypes. In Haskell, subtypes don't exist, and a class is more like an interface in OO languages. You can also think of a typeclass as a set of types, and fifteen has potentially all of the types in the Num class; which one of these is actually used can be chosen with a signature.
Downcasting is not a good analogy. Rather, compare to generic functions.
Very roughly, you can pretend that your fifteen is a generic function
// pseudo code in OOP
A fifteen<A>() where A : Num
When you use fifteen :: Double in Haskell, you tell the compiler that the result of the above function is Double, and that enables the compiler to "call" the above OOP function as fifteen<Double>(), inferring the generic argument.
With the TypeApplications extension turned on, GHC Haskell has a more direct way to choose the generic parameter, namely the type application fifteen @Double.
There is a difference between the two ways in that ... :: Double specifies the return type, while @Double specifies the generic argument. In this fifteen case they are the same, but this is not always the case. For instance:
> list = [(15, True)]
> :t list
list :: Num a => [(a, Bool)]
Here, to choose a = Double, we need to write either list :: [(Double, Bool)] or list @Double.
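A quick GHCi check (note the explicit signature: visible type application only works on "specified" type variables, i.e. ones written in a signature, so the bare binding above would need one first):

Prelude> :set -XTypeApplications
Prelude> list :: Num a => [(a, Bool)]; list = [(15, True)]
Prelude> list @Double
[(15.0,True)]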
In the type forall a. Num a => a†, the forall a and Num a are parameters specified by the “caller”, that is, the place where the definition (fifteen) is used. The type parameter is implicitly filled in with a type argument by GHC during type inference; the Num constraint becomes an extra parameter, a “dictionary” comprising a record of functions ((+), (-), abs, &c.) for a particular Num instance, and which Num dictionary to pass in is determined from the type. The type argument exists only at compile time, and the dictionary is then typically inlined to specialise the function and enable further optimisations, so neither of these parameters typically has any runtime representation.
So in fifteen :: Double, the compiler deduces that a must be equal to Double, giving (a ~ Double, Num a) => a, which is simplified first to Num Double => Double, then to simply Double, because the constraint Num Double is satisfied by the existence of an instance Num Double definition. There is no subtyping or runtime downcasting going on, only the solution of equality constraints, statically.
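As a rough sketch of that dictionary translation (NumDict here is a hypothetical hand-rolled type, not GHC's actual internals):

-- a hand-rolled "dictionary" holding the methods we need
data NumDict a = NumDict
  { dFromInteger :: Integer -> a
  , dPlus        :: a -> a -> a
  }

-- "fifteen :: Num a => a" roughly becomes a function of the dictionary
fifteen' :: NumDict a -> a
fifteen' d = dFromInteger d 15

-- "fifteen :: Double" then amounts to passing in the Double dictionary
numDictDouble :: NumDict Double
numDictDouble = NumDict fromInteger (+)

fifteenDouble :: Double
fifteenDouble = fifteen' numDictDouble   -- 15.0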
The type argument can also be specified explicitly with the TypeApplications syntax fifteen @Double, typically written as fifteen<Double> in OO languages.
The inferred type of fifteen includes a Num constraint because the literal 15 is implicitly a call to something like fromInteger (15 :: Integer)‡. fromInteger has the type Num a => Integer -> a and is a method of the Num typeclass, so you can think of a literal as “partially applying” the Integer argument while leaving the Num a argument unspecified; then the caller decides which concrete type to supply for a, and the compiler inserts a call to the fromInteger function in the Num dictionary passed in for that type.
† forall quantifiers are typically implicit, but can be written explicitly with various extensions, such as ExplicitForAll, ScopedTypeVariables, and RankNTypes.
‡ I say “something like” because this abuses the notation 15 :: Integer to denote a literal Integer, not circularly defined in terms of fromInteger again. (Else it would loop: fromInteger 15 = fromInteger (fromInteger 15) = fromInteger (fromInteger (fromInteger 15))…) This desugaring can be “magic” because it’s a part of the language itself, not something defined within the language.
The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
    ...
    fromInteger i = I# (integerToInt i)

and

instance Num Integer where
    ...
    fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int, I don't see how either of these generate results that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Thanks.
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead, when each individual type is declared to be an instance of Num, a way of doing that for that particular type must be provided (by implementing fromInteger).
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what types each expression should have in order to fit into the bigger expression it's being used in. And it also goes bottom-up, figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.
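A small illustration; the only difference between these two bindings is the expected result type:

x :: Float
x = fromInteger 42    -- picks fromInteger from the Num Float instance

y :: Integer
y = fromInteger 42    -- picks fromInteger from the Num Integer instance (identity)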
I wanted to write a simple recursive function to help me check some values of a maths problem, however I can't seem to give it any valid input.
Here's the function in question
fun 1 = 1
fun n = fun (ceiling n) + 3
It appears to be of type (Integral a1, Num a, RealFrac a1) => a1 -> a
Giving fun any number as input yields the following error:
Could not deduce (Integral a10) arising from a use of ‘fun’
from the context (Num a)
bound by the inferred type of it :: Num a =>
Look at the inferred type signature: you have Integral a1 and RealFrac a1 as constraints, and the function overall just returns a Num a. What you're saying is that this function takes a type that is both an Integral and a RealFrac (a contradiction for all the standard types, though technically expressible in Haskell) and returns any numeric type whatsoever. This comes from your use of ceiling, which has the type (Integral b, RealFrac a) => a -> b, whose result you then pass to fun again. So n must be an Integral, but since you pass the result of ceiling n to fun, it must also be a RealFrac. The second problem is that you simply haven't given the compiler enough information to know exactly which types you want to use.
My first suggestion is to give fun the type signature you think it should have, which I'm guessing is probably Double -> Int. If you do this you'll get a type error of No instance for (Integral Double) arising from use of ceiling ..., meaning that you're trying to use a Double as an Integral, which Double doesn't implement for obvious reasons. This tells you precisely which part of your definition is suspect. You can convert the result of ceiling back to Double using fromIntegral or fromInteger.
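For example, one way to make it typecheck (though, as noted below, it still fails to terminate for anything but 1):

fun :: Double -> Int
fun 1 = 1
fun n = fun (fromIntegral (ceiling n :: Int)) + 3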
Beyond all this, this function will never terminate unless it's called with the value 1. It will essentially just build up a giant thunk of adding 3 over and over again and will just eat up RAM and CPU.
That's because ceiling returns an Integral, and you're recursively calling fun with that value, so it deduces that fun must accept Integral values, but ceiling takes RealFrac values, and you're passing to ceiling the argument you got from the application of fun, so it deduces that fun must also take RealFrac values. Hence the constraints.
In situations like this, it's best to limit the options Haskell considers during type inference by annotating your function with the type you think it should have: the error message will then be more concrete and more localized to the code in question; otherwise, type inference will analyze your entire program and consider too many options.
(also the function itself is ill-defined as the other answer points out, but that is not causing the type error, obviously)
I'm trying to wrap my head around Haskell type coercion. Meaning, when can one pass a value into a function without casting, and how does that work? Here is a specific example, but I am looking for a more general explanation I can use going forward to try and understand what is going on:
Prelude> 3 * 20 / 4
15.0
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> 3 * c / 4
<interactive>:94:7:
No instance for (Fractional Integer)
arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the expression: 3 * c / 4
In an equation for `it': it = 3 * c / 4
The type of (/) is Fractional a => a -> a -> a. So, I'm guessing that when I do "3 * 20" using literals, Haskell somehow assumes that the result of that expression is a Fractional. However, when a variable is used, it's type is predefined to be Integer based on the assignment.
My first question is how to fix this. Do I need to cast the expression or convert it somehow?
My second question is that this seems really weird to me that you can't do basic math without having to worry so much about int/float types. I mean there's an obvious way to convert automatically between these, why am I forced to think about this and deal with it? Am I doing something wrong to begin with?
I am basically looking for a way to easily write simple arithmetic expressions without having to worry about the nitty-gritty details, keeping the code nice and clean. In most top-level languages the compiler works for me -- not the other way around.
If you just want the solution, look at the end.
You nearly answered your own question already. Literals in Haskell are overloaded:
Prelude> :t 3
3 :: Num a => a
Since (*) also has a Num constraint
Prelude> :t (*)
(*) :: Num a => a -> a -> a
this extends to the product:
Prelude> :t 3 * 20
3 * 20 :: Num a => a
So, depending on context, this can be specialized to be of type Int, Integer, Float, Double, Rational and more, as needed. In particular, as Fractional is a subclass of Num, it can be used without problems in a division, but then the constraint will become stronger and be for class Fractional:
Prelude> :t 3 * 20 / 4
3 * 20 / 4 :: Fractional a => a
The big difference is that the identifier c is an Integer. The reason why a simple let-binding at the GHCi prompt isn't assigned an overloaded type is the dreaded monomorphism restriction. In short: if you define a value that doesn't have any explicit arguments, then it cannot have an overloaded type unless you provide an explicit type signature. Numeric types are then defaulted to Integer.
Once c is an Integer, the result of the multiplication is Integer, too:
Prelude> :t 3 * c
3 * c :: Integer
And Integer is not in the Fractional class.
There are two solutions to this problem.
1. Make sure your identifiers have an overloaded type, too. In this case, it would be as simple as saying
Prelude> let c :: Num a => a; c = 20
Prelude> :t c
c :: Num a => a
2. Use fromIntegral to cast an integral value to an arbitrary numeric value:
Prelude> :t fromIntegral
fromIntegral :: (Integral a, Num b) => a -> b
Prelude> let c = 20
Prelude> :t c
c :: Integer
Prelude> :t fromIntegral c
fromIntegral c :: Num b => b
Prelude> 3 * fromIntegral c / 4
15.0
Haskell will never automatically convert one type into another when you pass it to a function. Either it's compatible with the expected type already, in which case no coercion is necessary, or the program fails to compile.
If you write a whole program and compile it, things generally "just work" without you having to think too much about int/float types; so long as you're consistent (i.e. you don't try to treat something as an Int in one place and a Float in another) the constraints just flow through the program and figure out the types for you.
For example, if I put this in a source file and compile it:
main = do
let c = 20
let it = 3 * c / 4
print it
Then everything's fine, and running the program prints 15.0. You can see from the .0 that GHC successfully figured out that c must be some kind of fractional number, and made everything work, without me having to give any explicit type signatures.
c can't be an integer because the / operator is for mathematical division, which isn't defined on integers. The operation of integer division is represented by the div function (usable in operator fashion as x `div` y). I suspect this might be what is tripping you up in your whole program. It's unfortunately just one of those things you have to learn the hard way, if you're used to the situation in many other languages where / is sometimes mathematical division and sometimes integer division.
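For example:

Prelude> 7 `div` 2
3
Prelude> 7 / 2
3.5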
It's when you're playing around in the interpreter that things get messy, because there you tend to bind values with no context whatsoever. In the interpreter, GHCi has to execute let c = 20 on its own, because you haven't entered 3 * c / 4 yet. It has no way of knowing whether you intend that 20 to be an Int, Integer, Float, Double, Rational, etc.
Haskell will pick a default type for numeric values; otherwise if you never use any functions that only work on one particular type of number you'd always get an error about ambiguous type variables. This normally works fine, because these default rules are applied while reading the whole module and so take into account all the other constraints on the type (like whether you've ever used it with /). But here there are no other constraints it can see, so the type defaulting picks the first cab off the rank and makes c an Integer.
Then, when you ask GHCi to evaluate 3 * c / 4, it's too late. c is an Integer, so must 3 * c be, and Integers don't support /.
So in the interpreter, yes, sometimes if you don't give an explicit type to a let binding GHC will pick an incorrect type, especially with numeric types. After that, you're stuck with whatever operations are supported by the concrete type GHCi picked, but when you get this kind of error you can always rebind the variable; e.g. let c = 20.0.
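For example:

Prelude> let c = 20.0
Prelude> 3 * c / 4
15.0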
However I suspect in your real program the problem is simply that the operation you wanted was actually div rather than /.
Haskell is a bit unusual in this way. Yes, you can't divide two integers together, but it's rarely a problem.
The reason is that if you look at the Num typeclass, there's a function fromInteger; this allows literals to be converted into the appropriate type. This, together with type inference, alleviates 99% of the cases where it'd be a problem. Quick example:
newtype Foo = Foo Integer
  deriving (Show, Eq)

instance Num Foo where
  fromInteger _ = Foo 0
  negate = undefined
  abs = undefined
  (+) = undefined
  (-) = undefined
  (*) = undefined
  signum = undefined
Now if we load this into GHCi
*> 0 :: Foo
Foo 0
*> 1 :: Foo
Foo 0
So you see we are able to do some pretty cool things with how GHC interprets a raw integer literal. This has a lot of practical uses in DSLs that we won't talk about here.
Next question was how to get from a Double to an Integer or vice versa. There's a function for that.
In the case of going from an Integer to a Double, we'd use fromInteger as well. Why?
Well the type signature for it is
(Num a) => Integer -> a
and since we can use (+) with Doubles we know they're a Num instance. And from there it's easy.
*> 0 :: Double
0.0
Last piece of the puzzle is Double -> Integer. Well a brief search on Hoogle shows
truncate
floor
round
-- etc ...
I'll leave that to you to search.
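They differ in how they treat the fractional part, e.g.:

Prelude> (truncate (-2.7), floor (-2.7), round (-2.7)) :: (Int, Int, Int)
(-2,-3,-3)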
Type coercion in Haskell isn't automatic (or rather, it doesn't actually exist). When you write the literal 20 it's inferred to be of type Num a => a (conceptually anyway; I don't think it works quite like that) and will, depending on the context in which it is used (i.e. what functions you pass it to), be instantiated with an appropriate type (I believe if no further constraints are applied, this will default to Integer when you need a concrete type at some point). If you need a different kind of Num, you need to convert the numbers, e.g. (3 * fromIntegral c / 4) in your example.
The type of (/) is Fractional a => a -> a -> a.
To divide Integers, use div instead of (/). Note that the type of div is
div :: Integral a => a -> a -> a
In most top-level languages the compiler works for me -- not the other way around.
I argue that the Haskell compiler works for you just as much, if not more so, than those of other languages you have used. Haskell is a very different language than the traditional imperative languages (such as C, C++, Java, etc.) you are probably used to. This means that the compiler works differently as well.
As others have stated, Haskell will never automatically coerce from one type to another. If you have an Integer which needs to be used as a Float, you need to do the conversion explicitly with fromInteger.
I have a Haskell function that calculates the size of the list of finite Ints. I need the output type to be an Integer because the value will actually be larger than the maximum bound of Int (the result will be -1, to be exact, if the output type is an Int)
size :: a -> Integer
size a = (maxBound::Int) - (minBound::Int)
I understand the difference between Ints (bounded) and Integers (unbounded) but I'd like to make an Integer from an Int. I was wondering if there was a function like fromInteger that would allow me to convert an Int to an Integer.
You'll need to convert the values to Integers, which can be done by the fromIntegral function (numeric casting for Haskell):
fromIntegral :: (Integral a, Num b) => a -> b
It converts any type in the Integral class to any type in the (larger) Num class. E.g.
fromIntegral (maxBound::Int) - fromIntegral (minBound::Int)
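Plugged back into the original definition (keeping its shape; the argument is unused):

size :: a -> Integer
size _ = fromIntegral (maxBound :: Int) - fromIntegral (minBound :: Int)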
However, I would not really trust the approach you're taking -- it seems very fragile. The behaviour in the presence of types that admit wraparound is pretty suspect.
What do you really mean by: "the size of the list of finite Ints". What is the size in this sense, if it isn't the length of the list?
I believe you are looking for:
fromIntegral :: (Integral a, Num b) => a -> b
which will convert an Int to an Integer (among many other conversions)
Perhaps you were assuming that Haskell, like many mainstream languages such as C and (to a certain extent) Java, has implicit numeric coercions. It doesn't: Int and Integer are totally unrelated types, and there is a special function for converting between them: fromIntegral. It is defined in terms of fromInteger, which belongs to the Num typeclass. Look at the documentation: essentially, fromInteger is a generic "construct the representation of an arbitrary integral number", i.e. if you're implementing some kind of numbers and instantiating Num, you must provide a way to construct integral numbers of your type. For example, in the Num instance for complex numbers, fromInteger creates a complex number with a zero imaginary part and an integral real part.
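For example, in GHCi:

Prelude> import Data.Complex
Prelude Data.Complex> fromInteger 3 :: Complex Double
3.0 :+ 0.0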
The only sense in which Haskell has implicit numeric coercions is that integer literals are overloaded, and when you write 42, the compiler implicitly interprets it as "fromInteger (42::Integer)", so you can use integers in whatever contexts require a Num type.