Can't input values into function, because type cannot be deduced - haskell

I wanted to write a simple recursive function to help me check some values of a maths problem, but I can't seem to call it with any valid input.
Here's the function in question:
fun 1 = 1
fun n = fun (ceiling n) + 3
It appears to be of type (Integral a1, Num a, RealFrac a1) => a1 -> a
Giving fun any number as input yields the following error:
Could not deduce (Integral a10) arising from a use of ‘fun’
from the context (Num a)
bound by the inferred type of it :: Num a =>

Look at the inferred type signature: you have Integral a1 and RealFrac a1 as constraints, and the function overall just returns a Num a. What you're saying is that this function takes a type that is both an Integral and a RealFrac (a logical contradiction, though technically possible to implement in Haskell) and returns any numeric type whatsoever. This comes from your use of ceiling, which has the type (Integral b, RealFrac a) => a -> b, and whose result you then pass to fun again. So n must be a RealFrac, because you apply ceiling to it; but since you pass the result of ceiling n back to fun, n must also be an Integral. The second problem is that you simply haven't given the compiler enough information to know which types you actually want to use.
My first suggestion is to give fun the type signature you think it should have, which I'm guessing is probably Double -> Int. If you do this you'll get a type error of No instance for (Integral Double) arising from a use of ‘ceiling’ ..., meaning that you're trying to use a Double as an Integral, and Double doesn't implement Integral for obvious reasons. This tells you precisely which part of your definition is suspect. You can then convert the result of ceiling back to Double using fromIntegral (or fromInteger).
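For concreteness, here's a version with that signature that type-checks (just a sketch; as noted below, the recursion still fails to terminate for most inputs, so this only fixes the type error):

fun :: Double -> Int
fun 1 = 1
fun n = fun (fromIntegral (ceiling n :: Int)) + 3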
Beyond all this, this function will never terminate unless it's called with the value 1. It will essentially just build up a giant thunk of adding 3 over and over again and will just eat up RAM and CPU.

That's because ceiling returns an Integral, and you're recursively calling fun with that value, so the compiler deduces that fun must accept Integral values; but ceiling takes RealFrac values, and you're applying it to fun's own argument, so it also deduces that fun must take RealFrac values. Hence the constraints.
In situations like this, it's best to limit the options Haskell considers during type inference by annotating your function with the type you think it should have; the error message will then be more concrete and more localized to the code in question. Otherwise, type inference will analyze your entire program and consider too many options.
(also the function itself is ill-defined as the other answer points out, but that is not causing the type error, obviously)

Related

How does fromIntegral work?

The type of fromIntegral is (Num b, Integral a) => a -> b. I'd like to understand how that's possible, what the code is that can convert any Integral number to any number type as needed.
The actual code for fromIntegral is listed as
fromIntegral = fromInteger . toInteger
The code for fromInteger is found under instance Num Int and instance Num Integer. They are, respectively:
instance Num Int where
    ...
    fromInteger i = I# (integerToInt i)
and
instance Num Integer where
    ...
    fromInteger x = x
Assuming I# calls a C program that converts an Integer to an Int, I don't see how either of these generates a result that could be, say, added to a Float. How do they go from Int or Integer to something else?
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Thanks.
Because fromInteger is part of the Num class, every instance will have its own implementation. Neither of the two implementations (for Int and Integer) knows how to make a Float, but they aren't called when you're using fromInteger (or fromIntegral) to make a Float; that's what the Float instance of Num is for.
And so on for all other types. There is no one place that knows how to turn integers into any Num type; that would be impossible, since it would have to support user-defined Num instances that don't exist yet. Instead when each individual type is declared to be an instance of Num a way of doing that for that particular type must be provided (by implementing fromInteger).
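To make this concrete, here's a toy type (purely hypothetical, for illustration) that supplies its own fromInteger, which is exactly what fromIntegral ends up calling when you convert to it:

newtype Mod7 = Mod7 Integer deriving Show

instance Num Mod7 where
  Mod7 a + Mod7 b = Mod7 ((a + b) `mod` 7)
  Mod7 a * Mod7 b = Mod7 ((a * b) `mod` 7)
  negate (Mod7 a) = Mod7 (negate a `mod` 7)
  abs             = id
  signum (Mod7 0) = Mod7 0
  signum _        = Mod7 1
  fromInteger i   = Mod7 (i `mod` 7)

-- fromIntegral (10 :: Int) :: Mod7  evaluates to  Mod7 3:
-- toInteger turns the Int 10 into an Integer, and Mod7's own
-- fromInteger reduces it modulo 7.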
fromInteger will be embedded in an expression which requires that it produce a certain type. It can't know what the required type will be? So what happens?
Actually, knowing what type it's expected to return from the expression the call is embedded in is exactly how it works.
Type checking/inference in Haskell works in two "directions" at once. It goes top-down, figuring out what type each expression should have in order to fit into the bigger expression it's being used in. And it also goes bottom-up, figuring out what type each expression should have from the smaller sub-expressions it's built out of. When it finds a place where those don't match, you get a type error (that's exactly where the "expected type" and "actual type" you see in type error messages come from).
But because the compiler has that top-down knowledge (the "expected type") for every expression, it's perfectly able to figure out that a call of fromInteger is being used where a Float is expected, and so use the Float instance for Num in that call.
One aspect that distinguishes type classes from OOP interfaces is that type classes can dispatch on the result type of a method, not only on the type of its parameters. The classic example is the read :: Read a => String -> a function.
fromInteger has type fromInteger :: Num a => Integer -> a. The implementation is selected depending on the type of a. If the typechecker knows that a is a Float, the Num instance of Float will be used, not the one of Int or Integer.
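A tiny demonstration of that result-type dispatch (a sketch; the type annotations are what select the instance):

main :: IO ()
main = do
  print (fromInteger 3 :: Float)    -- uses the Num Float instance
  print (fromInteger 3 :: Integer)  -- uses the Num Integer instance (identity)
  print (read "42" :: Int)          -- read dispatches on the result type too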

Polymorphism: a constant versus a function

I'm new to Haskell and came across a slightly puzzling example in the Haskell Programming from First Principles book. At the end of Chapter 6 it suddenly occurred to me that the following doesn't work:
constant :: (Num a) => a
constant = 1.0
However, the following works fine:
f :: (Num a) => a -> a
f x = 3*x
I can input any numerical value for x into the function f and nothing will break. It's not constrained to taking integers. This makes sense to me intuitively. But the example with the constant is totally confusing to me.
Over on a reddit thread for the book it was explained (paraphrasing) that the reason why the constant example doesn't work is that the type declaration forces the value of constant to only be things which aren't more specific than Num. So trying to assign a value to it which is from a subclass of Num like Fractional isn't kosher.
If that explanation is correct, then am I wrong in thinking that these two examples are complete opposites of each other? In one case, the type declaration forces the value to be as general as possible. In the other case, the accepted values for the function can be anything that implements Num.
Can anyone set me straight on this?
It can sometimes help to read types as a game played between two actors, the implementor of the type and the user of the type. To do a good job of explaining this perspective, we have to introduce something that Haskell hides from you by default: we will add binders for all type variables. So your types would actually become:
constant :: forall a. Num a => a
f :: forall a. Num a => a -> a
Now, we will read type formation rules thusly:
forall a. t means: the caller chooses a type a, and the game continues as t
c => t means: the caller shows that constraint c holds, and the game continues as t
t -> t' means: the caller chooses a value of type t, and the game continues as t'
t (where t is a monomorphic type such as a bare variable or Integer or similar) means: the implementor produces a value of type t
We will need a few other details to truly understand things here, so I will quickly say them here:
When we write a number with no decimal points, the compiler implicitly converts this to a call to fromInteger applied to the Integer produced by parsing that number. We have fromInteger :: forall a. Num a => Integer -> a.
When we write a number with decimal points, the compiler implicitly converts this to a call to fromRational applied to the Rational produced by parsing that number. We have fromRational :: forall a. Fractional a => Rational -> a.
The Num class includes the method (*) :: forall a. Num a => a -> a -> a.
Now let's try to walk through your two examples slowly and carefully.
constant :: forall a. Num a => a
constant = 1.0 {- = fromRational (1 % 1) -}
The type of constant says: the caller chooses a type, shows that this type implements Num, and then the implementor must produce a value of that type. Now the implementor tries to play his own game by calling fromRational :: Fractional a => Rational -> a. He chooses the same type the caller did, and then makes an attempt to show that this type implements Fractional. Oops! He can't show that, because the only thing the caller proved to him was that a implements Num -- which doesn't guarantee that a also implements Fractional. Dang. So the implementor of constant isn't allowed to call fromRational at that type.
Now, let's look at f:
f :: forall a. Num a => a -> a
f x = 3*x {- = fromInteger 3 * x -}
The type of f says: the caller chooses a type, shows that the type implements Num, and chooses a value of that type. The implementor must then produce another value of that type. He is going to do this by playing his own game with (*) and fromInteger. In particular, he chooses the same type the caller did. But now fromInteger and (*) only demand that he prove that this type is an instance of Num -- so he passes off the proof the caller gave him of this and saves the day! Then he chooses the Integer 3 for the argument to fromInteger, and chooses the result of this and the value the caller handed him as the two arguments to (*). Everybody is satisfied, and the implementor gets to return a new value.
The point of this whole exposition is this: the Num constraint in both cases is enforcing exactly the same thing, namely, that whatever type we choose to instantiate a at must be a member of the Num class. It's just that in the definition constant = 1.0 being in Num isn't enough to do the operations we've written, whereas in f x = 3*x being in Num is enough to do the operations we've written. And since the operations we've chosen for the two things are so different, it should not be too surprising that one works and the other doesn't!
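You can see the same point in code by relaxing the constraint (a sketch): with Fractional, the caller hands the implementor exactly the evidence fromRational demands, so both definitions go through:

constant :: Fractional a => a
constant = 1.0    -- fromRational now has the proof it needs

f :: Num a => a -> a
f x = 3 * x       -- fromInteger and (*) only ever needed Num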
When you have a polymorphic value, the caller chooses which concrete type to use. The Haskell report defines the type of numeric literals, namely:
integer and floating literals have the typings (Num a) => a and
(Fractional a) => a, respectively
3 is an integer literal so has type Num a => a and (*) has type Num a => a -> a -> a so f has type Num a => a -> a.
In contrast, 3.0 has type Fractional a => a. Since Fractional is a subclass of Num, your type signature for constant is invalid: the caller could choose a type for a which is Num but not Fractional, e.g. Int or Integer.
They don't mean the opposite - they mean exactly the same ("as general as possible"). Typeclass gives you all guarantees that you can rely upon - if typeclass T provides function f, you can use it for all instances of T, but even if some of these instances are members of G (providing g) as well, requiring to be of T typeclass is not sufficient to call g.
In your case this means:
Members of Num are guaranteed to provide conversion from integers (i.e. default type for integral values, like 1 or 1000) - with fromInteger function.
However, they are not guaranteed to provide conversion from rational numbers (like 1.0) - Fractional typeclass does provide this as fromRational function, but it doesn't really matter, as you use only Num.
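Those desugarings can be written out explicitly (a sketch; (%) builds a Rational and comes from Data.Ratio):

import Data.Ratio ((%))

one :: Num a => a
one = fromInteger 1                  -- what the literal 1 means

onePointZero :: Fractional a => a
onePointZero = fromRational (1 % 1)  -- what the literal 1.0 means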

Why may the result of the (/) function also be Integral?

Consider:
> :t (/)
(/) :: Fractional a => a -> a -> a
and
> :t (round 10 / 100)
(round 10 / 100) :: (Fractional a, Integral a) => a
How could (/) result in Integral?
I believe you are misreading the result type
result :: (Fractional a, Integral a) => a
This type is a "contract" between the user of result and the implementor of result. Let's take it apart:
The user must first choose a type a
The user must then prove that the chosen a belongs to classes Fractional and Integral. Roughly, this means that the user has to provide a definition for the methods of such classes.
Finally, the implementor will provide a value of type a.
Part 2 is the crucial step. As we can see, the type of result does not promise to construct some value whose type is both a fractional and an integral. Quite the opposite: it requires that whoever wants to use that result value has to find such a type.
Concretely, this means that result is unusable. GHC does not raise a type error because it has no deep knowledge about the intended meaning of the type classes. Indeed, from a purely theoretical point of view, one could define a custom type and provide fractional/integral instances, e.g.
data A = A0 | A1 | A2
instance Num A where
    ...
instance Fractional A where
    ...
instance Integral A where
    ...
with some weird implementation, such as performing fractional operations modulo 3, but not integral operations.
Anyway, since something like A could be defined, GHC cannot reject the type above.
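If what was actually intended was "round, then divide", the usual fix (an assumption about the intent here) is to convert the integral result back into a fractional type before dividing:

ratio :: Double
ratio = fromIntegral (round (10 :: Double) :: Integer) / 100  -- 0.1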

Type signature of num to double?

I'm just starting Learn You a Haskell for Great Good, and I'm having a bit of trouble with type classes. I would like to create a function that takes any number type and forces it to be a double.
My first thought was to define
numToDouble :: Num -> Double
But I don't think that worked because Num isn't a type, it's a typeclass (which seems to me to be a set of types). So looking at read, its type is (Read a) => String -> a. I'm reading that as "read takes a string, and returns a thing of type a which is specified by the user". So I wrote the following:
numToDouble :: (Num n) => n -> Double
numToDouble i = ((i) :: Double)
Which looks to me like "take a thing of type n (which must be in the Num typeclass) and convert it to a Double". This seems reasonable because I can do 20::Double.
This produces the following output
Could not deduce (n ~ Double)
from the context (Num n)
bound by the type signature for numToDouble :: Num n => n -> Double
I have no idea what I'm reading. Based on what I can find, it seems like this has something to do with polymorphism?
Edit:
To be clear, my question is: Why isn't this working?
The reason you can say "20::Double" is that in Haskell an integer literal has type "Num a => a", meaning it can be any numeric type you like.
You are correct that a typeclass is a set of types. To be precise, it is the set of types that implement the functions in the "where" clause of the typeclass. Your type signature for your numToDouble correctly expresses what you want to do.
All you know about a value of type "n" in your function is that it implements the Num interface. This consists of +, -, *, negate, abs, signum and fromInteger. The last is the only one that does type conversion, but it's not any use for what you want.
Bear in mind that Complex is also an instance of Num. What should numToDouble do with that? The Right Thing is not obvious, which is part of the reason you are having problems.
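For example (just to illustrate why there's no obvious answer):

import Data.Complex

z :: Complex Double
z = 3 :+ 4   -- a perfectly good Num value; should numToDouble return 3,
             -- the magnitude 5, or fail? None of these is clearly Right.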
However lower down the type hierarchy you have the Real typeclass, which has instances for all the more straightforward numerical types you probably want to work with, like floats, doubles and the various types of integers. That includes a function "toRational" which converts any real value into a ratio, from which you can convert it to a Double using "fromRational", which is a function of the "Fractional" typeclass.
So try:
toDouble :: (Real n) => n -> Double
toDouble = fromRational . toRational
But of course this is actually too specific. GHCI says:
Prelude> :type fromRational . toRational
fromRational . toRational :: (Fractional c, Real a) => a -> c
So it converts any real type to any Fractional type (the latter covers anything that can do division, including things that are not instances of Real, like Complex). When messing around with numeric types I keep finding myself using it as a kind of generic numeric coercion.
Edit: as leftaroundabout says,
realToFrac = fromRational . toRational
You can't "convert" anything per se in Haskell. Between specific types, there may be the possibility to convert – with dedicated functions.
In your particular example, it certainly shouldn't work. Num is the class¹ of all types that can be treated as numerical types, and that have numerical values in them (at least integer ones, so here's one such conversion function: fromInteger).
But these types can apart from that have any other stuff in them, which oftentimes is not in the reals and can thus not be approximated by Double. The most obvious example is Complex.
The particular class that has only real numbers in it is, surprise, called Real. What is indeed a bit strange is that its method is a conversion toRational, since the rationals don't quite cover the reals... but they're dense within them, so it's kind of ok. At any rate, you can use that function to implement your desired conversion:
realToDouble :: Real n => n -> Double
realToDouble i = fromRational $ toRational i
Incidentally, that combination fromRational . toRational is already a standard function: realToFrac, a bit more general.
¹ Calling type classes "sets of types" is kind of ok, much like you can often get away without calling any kind of collection in maths a set – but it's not really correct. The most problematic thing is, you can't really say some type is not in a particular class: type classes are open, so at any place in a project you could declare an instance for some type to a given class.
Just to be 100% clear, the problem is
(i) :: Double
This does not convert i to a Double, it demands that i already is a Double. That isn't what you mean at all.
The type signature for your function is correct. (Or at least, it means exactly what you think it means.) But your function's implementation is wrong.
If you want to convert one type of data to another, you have to actually call a function of some sort.
Unfortunately, Num itself only allows you to convert an Integer to any Num instance. You're trying to convert something that isn't necessarily an Integer, so this doesn't help. As others have said, you probably want fromRational or similar...
There is no such thing as numeric casts in Haskell. When you write i :: Double, what that means isn't "cast i to Double"; it's just an assertion that i's type is Double. In your case, however, your function's signature also asserts that i's type is Num n => n, i.e., any type n (chosen by the caller) that implements Num; so for example, n could be Integer. Those two assertions cannot be simultaneously true, hence you get an error.
The confusing thing is that you can say 1 :: Double. But that's because in Haskell, a numeric literal like 1 has the same meaning as fromInteger one, where one :: Integer is the Integer whose value is one.
But that only works for numeric literals. This is one of the surprising things if you come to Haskell from almost any other language: in most languages you can use expressions of mixed numeric types rather freely and rely on implicit coercions to "do what I mean"; in Haskell, on the other hand, you have to use functions like fromIntegral or fromRational all the time. And while most statically typed languages have a syntax for casting from one numeric type to another, in Haskell you just use a function.
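A typical example of spelling the conversions out (a sketch, assuming you want the mean of a list of Ints as a Double):

average :: [Int] -> Double
average xs = fromIntegral (sum xs) / fromIntegral (length xs)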

Haskell Error: Couldn't match expected type `Integer' against inferred type `Int'

I have a Haskell function that calculates the size of the list of finite Ints. I need the output type to be an Integer because the value will actually be larger than the maximum bound of Int (the result will be -1, to be exact, if the output type is an Int).
size :: a -> Integer
size a = (maxBound::Int) - (minBound::Int)
I understand the difference between Ints (bounded) and Integers (unbounded) but I'd like to make an Integer from an Int. I was wondering if there was a function like fromInteger, that will allow me to convert an Int to an Integer type.
You'll need to convert the values to Integers, which can be done by the fromIntegral function (numeric casting for Haskell):
fromIntegral :: (Integral a, Num b) => a -> b
It converts any type in the Integral class to any type in the (larger) Num class. E.g.
fromIntegral (maxBound::Int) - fromIntegral (minBound::Int)
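Applied to your definition (keeping your unused parameter), that gives:

size :: a -> Integer
size _ = fromIntegral (maxBound :: Int) - fromIntegral (minBound :: Int)
-- on a 64-bit GHC this is 2^64 - 1, well past maxBound :: Int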
However, I would not really trust the approach you're taking -- it seems very fragile. The behaviour in the presence of types that admit wraparound is pretty suspect.
What do you really mean by: "the size of the list of finite Ints". What is the size in this sense, if it isn't the length of the list?
I believe you are looking for:
fromIntegral :: (Integral a, Num b) => a -> b
which will convert an Int to an Integer
Perhaps you were assuming that Haskell, like many mainstream languages such as C and (to a certain extent) Java, has implicit numeric coercions. It doesn't: Int and Integer are totally unrelated types, and there is a special function for converting between them: fromIntegral. It is built on fromInteger, a method of the Num typeclass. Look at the documentation: essentially, fromInteger is a generic "construct the representation of an arbitrary integral number", i.e. if you're implementing some kind of numbers and instantiating Num, you must provide a way to construct integral numbers of your type. For example, in the Num instance for complex numbers, fromInteger creates a complex number with a zero imaginary part and an integral real part.
The only sense in which Haskell has implicit numeric coercions is that integer literals are overloaded: when you write 42, the compiler implicitly interprets it as fromInteger (42 :: Integer), so you can use integer literals in whatever context a Num type is required.
