Haskell Error: Couldn't match expected type `Integer' against inferred type `Int'

I have a Haskell function that calculates the size of the list of finite Ints. I need the output type to be an Integer because the value will actually be larger than the maximum bound of Int (the result will be -1, to be exact, if the output type is an Int).
size :: a -> Integer
size a = (maxBound::Int) - (minBound::Int)
I understand the difference between Ints (bounded) and Integers (unbounded) but I'd like to make an Integer from an Int. I was wondering if there was a function like fromInteger, that will allow me to convert an Int to an Integer type.

You'll need to convert the values to Integers, which can be done by the fromIntegral function (numeric casting for Haskell):
fromIntegral :: (Integral a, Num b) => a -> b
It converts any type in the Integral class to any type in the (larger) Num class. E.g.
fromIntegral (maxBound::Int) - fromIntegral (minBound::Int)
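Putting that back into the function from the question, a minimal sketch (keeping the unused argument, and assuming the intent really is maxBound minus minBound computed at Integer) might look like:
size :: a -> Integer
size _ = fromIntegral (maxBound :: Int) - fromIntegral (minBound :: Int)
The Integer result type forces both fromIntegral calls to target Integer, so the subtraction no longer wraps around.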
However, I would not really trust the approach you're taking -- it seems very fragile. The behaviour in the presence of types that admit wraparound is pretty suspect.
What do you really mean by: "the size of the list of finite Ints". What is the size in this sense, if it isn't the length of the list?

I believe you are looking for:
fromIntegral :: (Integral a, Num b) => a -> b
which will convert your Int to an Integer (and, more generally, any Integral type to any Num type)

Perhaps you were assuming that Haskell, like many mainstream languages such as C and (to a certain extent) Java, has implicit numeric coercions. It doesn't: Int and Integer are totally unrelated types, and there is a dedicated function for converting between them: fromIntegral. It is built on fromInteger, which belongs to the Num typeclass. Look at the documentation: fromInteger is really a generic "construct the representation of an arbitrary integral number", i.e. if you're implementing some kind of numbers and instantiating Num, you must provide a way to construct integral numbers of your type. For example, in the Num instance for complex numbers, fromInteger creates a complex number with a zero imaginary part and an integral real part.
The only sense in which Haskell has implicit numeric coercions is that integer literals are overloaded: when you write 42, the compiler implicitly interprets it as "fromInteger (42::Integer)", so you can use integer literals in whatever context a Num type is required.
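To illustrate that last point, here is a sketch using a hypothetical two-component number type (purely for illustration, not the real Data.Complex instance) showing how each Num instance decides what fromInteger builds:
data Pair = Pair Double Double deriving Show

instance Num Pair where
  Pair a b + Pair c d = Pair (a + c) (b + d)
  Pair a b * Pair c d = Pair (a*c - b*d) (a*d + b*c)
  negate (Pair a b)   = Pair (negate a) (negate b)
  abs (Pair a b)      = Pair (abs a) (abs b)
  signum (Pair a b)   = Pair (signum a) (signum b)
  -- The literal 42 at type Pair elaborates to fromInteger 42, i.e. Pair 42.0 0.0.
  fromInteger n       = Pair (fromInteger n) 0
With that instance in scope, 42 :: Pair evaluates to Pair 42.0 0.0 without any explicit coercion.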

Related

What is the correct way to use fromIntegral to convert an Integer to real-fractional types?

I am trying to use fromIntegral to convert from Integer to a real-fractional type. The idea is to have a helper method to later be used for the comparison between two instances of a Fraction (whether they are the same).
I have read some of the documentation related to:
fromIntegral :: (Integral a, Num b) => a -> b
realToFrac :: (Real a, Fractional b) => a -> b
Where I am having trouble is taking the concept and making an implementation of the helper method that takes a Num data type with fractions (numerator and denominator) and returns what I think is a real-fractional type value. Here is what I have been able to do so far:
data Num = Fraction {numerator :: Integer, denominator :: Integer}
helper :: Num -> Fractional
helper (Fraction num denom) = realToFrac(num/denom)
You need to learn about the difference between types and type classes. In OO languages, both are kind of the same concept, but in Haskell they're not.
A type contains concrete values. E.g. the type Bool contains the value True.
A class contains types. E.g. the Ord class doesn't contain any values, but it does contain the types which contain values that can be compared.
In case of numbers in Haskell it's a bit confusing that you can't really tell from the name whether you're dealing with a type or a class. Fractional is a class, whereas Rational is a type (which is an instance of Fractional, but so is e.g. Float).
In your example... first let's give that type a better name
data MyRational = Fraction {numerator :: Integer, denominator :: Integer}
...you have two possibilities for what helper could actually do: convert to a concrete Rational value
helper' :: MyRational -> Rational
or a generic Fractional-type one
helper'' :: Fractional r => MyRational -> r
The latter is strictly more general, because Rational is an instance of Fractional (i.e. you can in fact use helper'' as a MyRational -> Rational function, but also as a MyRational -> Double function).
In either case,
helper (Fraction num denom) = realToFrac(num/denom)
does not work, because you're trying to carry out the division on integer values and only then convert the result. Instead, you need to convert the integers to something fractional first and then carry out the division in that type.
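A minimal sketch of that fix, reusing the MyRational type defined above and assuming the generic Fractional version is the one wanted:
helper'' :: Fractional r => MyRational -> r
helper'' (Fraction num denom) = fromInteger num / fromInteger denom
Now helper'' (Fraction 1 2) :: Rational gives 1 % 2, while the same expression at type Double gives 0.5.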

How does fromIntegral work?

The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is given under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
...
fromInteger i = I# (integerToInt i)
and
instance Num Integer where
...
fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int, I don't see how either of these generates a result that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead when each individual type is declared to be an instance of Num a way of doing that for that particular type must be provided (by implementing fromInteger).
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what type each expression should have in order to fit into the bigger expression it's being used in. And it also goes "bottom-up", figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.
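As a small sketch of that dispatch (standard Prelude only), the very same call picks a different instance depending on the type the context demands:
-- fromIntegral (5 :: Int) unfolds to fromInteger (toInteger (5 :: Int));
-- which fromInteger runs is decided entirely by the result type asked for.
asFloat :: Float
asFloat = fromIntegral (5 :: Int)    -- uses fromInteger from the Num Float instance

asInteger :: Integer
asInteger = fromIntegral (5 :: Int)  -- uses fromInteger from Num Integer (the identity)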

Why does (-) fail to typecheck when I place a Double Matrix on the left and a Double on the right?

Since hmatrix provides an instance of Num for Matrix types, I can express element-wise subtraction like:
m = (2><2) [1..] :: Matrix Double
m' = m - 3
That works great, as 3 is a Num, and results in a matrix created by subtracting 3 from each element of m.
Why does this not also work:
m' = m - (3::Double)
The error I'm getting is:
Couldn't match expected type ‘Matrix Double’
with actual type ‘Double’
In the second argument of ‘(-)’, namely ‘(3 :: Double)’
In the expression: m - (3 :: Double)
I expected the compiler to understand that a Double is also a Num. Why is that seemingly not the case?
What happens when you do m - 3 with m :: Matrix Double is that 3 :: Matrix Double. The fact that Matrix Double is an instance of Num means that the compiler knows how to translate the literal 3. However, when you do m - (3 :: Double), you get a type error because (-) :: (Num a) => a -> a -> a, so the two things you subtract must be instances of Num and must have the same type. Hence you can subtract two Doubles, or two Matrix Doubles, but not a Matrix Double and a Double.
After all, this seems fairly logical to me: it doesn't make sense to subtract a matrix and a scalar.
This is a common misunderstanding of people new to Haskell's style of typeclass based overloading, especially those who are used to the subclass-based overloading used in popular OO languages.
The subtraction operator has type Num a => a -> a -> a; so it takes two arguments of any type that is in the type class Num. It seems like what is happening when you do m - 3 is that the subtraction operator is accepting a Matrix Double on the left and some simple numeric type on the right. But that is actually incorrect.
When a type signature like Num a => a -> a -> a uses the same type variable multiple times, you can pick any type you like (subject to the constraints before the =>: Num a in this case) to use for a, but it has to be the exact same type everywhere that a appears. Matrix Double -> Double -> ??? is not a valid instantiation of the type Num a => a -> a -> a (and if it were, how would you know what it returned?).
The reason m - 3 works is that because both arguments have to be the same type, and m is definitely of type Matrix Double, the compiler sees that 3 must also be of type Matrix Double. So instead of using the 3 appearing in the source text to build a Double (or Integer, or one of many other numeric types), it uses the source text 3 to build a Matrix Double. Effectively the type inference has changed the way the compiler reads the source code text 3.
But if you use m' = m - (3::Double) then you're not letting it just figure out what type 3 must have to make the use of the subtraction operator valid; you're telling it that this 3 is specifically a Double. There's no way for both facts to be true (your :: Double assertion and the requirement that the subtraction operator gets two arguments of the same type), so you get a type error.
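The same "both arguments of (-) must be one type" rule can be seen without hmatrix at all; here is a hedged sketch with a toy wrapper type:
newtype Box = Box Integer deriving Show

instance Num Box where
  Box a + Box b  = Box (a + b)
  Box a - Box b  = Box (a - b)
  Box a * Box b  = Box (a * b)
  negate (Box a) = Box (negate a)
  abs (Box a)    = Box (abs a)
  signum (Box a) = Box (signum a)
  fromInteger    = Box

ok :: Box
ok = Box 5 - 3                    -- fine: the literal 3 is read as fromInteger 3 :: Box
-- bad = Box 5 - (3 :: Integer)   -- rejected: both arguments of (-) must have the same type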

Can't input values into function, because type cannot be deduced

I wanted to write a simple recursive function to help me check some values of a maths problem, however I can't seem to add any valid inputs.
Here's the function in question
fun 1 = 1
fun n = fun (ceiling n) + 3
It appears to be of type (Integral a1, Num a, RealFrac a1) => a1 -> a
Giving fun any number as input yields the following error:
Could not deduce (Integral a10) arising from a use of ‘fun’
from the context (Num a)
bound by the inferred type of it :: Num a =>
Look at the inferred type signature: you have Integral a1 and RealFrac a1 as constraints, and the function overall just returns a Num a. What you're saying is that this function takes a type that is both an Integral and a RealFrac (already a logical contradiction, though technically possible to express in Haskell) and returns any numeric type whatsoever. This comes from your use of ceiling, which has the type (Integral b, RealFrac a) => a -> b: applying ceiling to n means n must be a RealFrac, and passing the result of ceiling n back to fun means fun's argument must also be an Integral. The second problem is that you simply haven't given the compiler enough information to know exactly which types you're wanting to use.
My first suggestion is to give fun the type signature you think it should have, which I'm guessing is probably Double -> Int. If you do this you'll get a type error of No instance for (Integral Double) arising from use of ceiling ..., meaning that you're trying to use a Double as an Integral when Double doesn't implement Integral for obvious reasons. This tells you precisely which part of your definition is suspect. You can choose to cast the result of ceiling to Double using fromIntegral or fromInteger.
Beyond all this, this function will never terminate unless it's called with the value 1. It will essentially just build up a giant thunk of adding 3 over and over again and will just eat up RAM and CPU.
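Here is a hedged sketch of the type-fix half of that advice, assuming the intended signature really is Double -> Int and using fromIntegral to bring ceiling's result back to Double (the non-termination problem described above is left untouched):
fun :: Double -> Int
fun 1 = 1
fun n = fun (fromIntegral (ceiling n :: Int)) + 3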
That's because ceiling returns an Integral, and you recursively call fun with that value, so the compiler deduces that fun must accept Integral values. But ceiling takes RealFrac values, and you pass fun's own argument straight to ceiling, so it also deduces that fun must take RealFrac values. Hence the constraints.
In situations like this, it's best to limit the options Haskell considers during type inference by annotating your function with the type you think it should have; the error message will then be more concrete and more localized to the code in question. Otherwise, type inference will analyze your entire program and consider too many options.
(also the function itself is ill-defined as the other answer points out, but that is not causing the type error, obviously)

Type signature of num to double?

I'm just starting Learn You a Haskell for Great Good, and I'm having a bit of trouble with type classes. I would like to create a function that takes any number type and forces it to be a double.
My first thought was to define
numToDouble :: Num -> Double
But I don't think that worked, because Num isn't a type, it's a typeclass (which seems to me to be a set of types). So looking at read, its type shows (Read a) => String -> a. I'm reading that as "read takes a string, and returns a thing of type a which is specified by the user". So I wrote the following
numToDouble :: (Num n) => n -> Double
numToDouble i = ((i) :: Double)
Which looks to me like "take a thing of type n (which must be in the Num typeclass) and convert it to a Double". This seems reasonable because I can do 20::Double.
This produces the following output
Could not deduce (n ~ Double)
from the context (Num n)
bound by the type signature for numToDouble :: Num n => n -> Double
I have no idea what I'm reading. Based on what I can find, it seems like this has something to do with polymorphism?
Edit:
To be clear, my question is: Why isn't this working?
The reason you can say "20::Double" is that in Haskell an integer literal has type "Num a => a", meaning it can be any numeric type you like.
You are correct that a typeclass is a set of types. To be precise, it is the set of types that implement the functions in the "where" clause of the typeclass. Your type signature for your numToDouble correctly expresses what you want to do.
All you know about a value of type "n" in your function is that it implements the Num interface. This consists of +, -, *, negate, abs, signum and fromInteger. The last is the only one that does type conversion, but it's not any use for what you want.
Bear in mind that Complex is also an instance of Num. What should numToDouble do with that? The Right Thing is not obvious, which is part of the reason you are having problems.
However, lower down the type hierarchy you have the Real typeclass, which has instances for all the more straightforward numerical types you probably want to work with, like floats, doubles and the various kinds of integers. It includes a function "toRational" which converts any real value into a ratio; you can then convert that to a Double using "fromRational", a function of the "Fractional" typeclass.
So try:
toDouble :: (Real n) => n -> Double
toDouble = fromRational . toRational
But of course this is actually too specific. GHCI says:
Prelude> :type fromRational . toRational
fromRational . toRational :: (Fractional c, Real a) => a -> c
So it converts any real type to any Fractional type (the latter covers anything that can do division, including things that are not instances of Real, like Complex). When messing around with numeric types I keep finding myself using it as a kind of generic numerical coercion.
Edit: as leftaroundabout says,
realToFrac = fromRational . toRational
You can't "convert" anything per se in Haskell. Between specific types, there may be the possibility to convert – with dedicated functions.
In your particular example, it certainly shouldn't work. Num is the class[1] of all types that can be treated as numerical types, and that have numerical values in them (at least integer ones, so here's one such conversion function: fromInteger).
But apart from that, these types can have any other stuff in them, which oftentimes is not in the reals and thus cannot be approximated by Double. The most obvious example is Complex.
The particular class that has only real numbers in it is, surprise, called Real. What is indeed a bit strange is that its method is a conversion toRational, since the rationals don't quite cover the reals... but they're dense within them, so it's kind of ok. At any rate, you can use that function to implement your desired conversion:
realToDouble :: Real n => n -> Double
realToDouble i = fromRational $ toRational i
Incidentally, that combination fromRational . toRational is already a standard function: realToFrac, a bit more general.
[1] Calling type classes "sets of types" is kind of ok, much like you can often get away with calling any kind of collection in maths a set, but it's not really correct. The most problematic thing is, you can't really say some type is not in a particular class: type classes are open, so at any point in a project you could declare an instance of a given class for some type.
Just to be 100% clear, the problem is
(i) :: Double
This does not convert i to a Double, it demands that i already is a Double. That isn't what you mean at all.
The type signature for your function is correct. (Or at least, it means exactly what you think it means.) But your function's implementation is wrong.
If you want to convert one type of data to another, you have to actually call a function of some sort.
Unfortunately, Num itself only allows you to convert an Integer to any Num instance. You're trying to convert something that isn't necessarily an Integer, so this doesn't help. As others have said, you probably want fromRational or similar...
There is no such thing as numeric casts in Haskell. When you write i :: Double, what that means isn't "cast i to Double"; it's just an assertion that i's type is Double. In your case, however, your function's signature also asserts that i's type is Num n => n, i.e., any type n (chosen by the caller) that implements Num; so for example, n could be Integer. Those two assertions cannot be simultaneously true, hence you get an error.
The confusing thing is that you can say 1 :: Double. But that's because in Haskell, a numeric literal like 1 has the same meaning as fromInteger one, where one :: Integer is the Integer whose value is one.
But that only works for numeric literals. This is one of the surprising things if you come to Haskell from almost any other language. In most languages you can use expressions of mixed numeric types rather freely and rely on implicit coercions to "do what I mean"; in Haskell, on the other hand, you have to use functions like fromIntegral or fromRational all the time. And while most statically typed languages have a syntax for casting from one numeric type to another, in Haskell you just use a function.
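For example, a minimal sketch of what that looks like in practice (explicit conversion where many other languages would coerce silently):
x :: Int
x = 3

y :: Double
y = 2.5

-- z = x + y               -- rejected: Int and Double never mix implicitly
z :: Double
z = fromIntegral x + y     -- the conversion has to be spelled out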
