What is '(Floating a, Num (a -> a))' in Haskell?

In Haskell, I just know that
:type ((+)(1))
((+)(1)) :: Num a => a -> a
((+)(1)) 2
3
But how about
:type abs(sqrt)
abs(sqrt) :: (Floating a, Num (a -> a)) => a -> a
Actually, I have tried many times but failed to use the function 'abs(sqrt)'. So I have a few questions. What is the type (class?) '(Floating a, Num (a -> a))'? Is it possible to use the function 'abs(sqrt)'? If so, how?

A type class is a way to generalize functions so that they can be polymorphic and others can implement those functions for their own types. Take as an example the type class Show, which in a simplified form looks like
class Show a where
  show :: a -> String
This says that any type that implements the Show typeclass can be converted to a String (there's some more complication for more realistic constraints, but the point of having Show is to be able to convert values to Strings).
In this case, the function show has the full type Show a => a -> String.
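For example, here is a minimal sketch of what implementing that class for your own type looks like (Colour is a made-up type, purely for illustration):
data Colour = Red | Green | Blue

instance Show Colour where
  show Red   = "Red"
  show Green = "Green"
  show Blue  = "Blue"
Now show Green evaluates to "Green", and Colour can be used anywhere a Show constraint is required.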
If we examine the function sqrt, its type is
> :type sqrt
sqrt :: Floating a => a -> a
And for abs:
> :type abs
abs :: Num b => b -> b
If you ask GHCi what the types are it will use the type variable a in both cases, but I've used b in the type signature for abs to make it clear that these are different type variables of the same name, and it will help avoid confusion in the next step.
These type signatures mean that sqrt takes a value whose type implements the Floating typeclass (use :info Floating to see all the members) and returns a value of that same type, and that the abs function takes a value whose type implements the Num typeclass and returns a value of that same type.
The expression abs(sqrt) is parsed as abs sqrt, meaning that sqrt is the first and only argument passed to abs. However, we just said that abs takes a value of a Num type, but sqrt is a function, not a number. Why does Haskell accept this instead of complaining? The reason can be seen a little more clearly when we perform substitution with the type signatures. Since sqrt's type is Floating a => a -> a, this must match the argument b in abs's type signature, so by substituting b with Floating a => a -> a we get that abs sqrt :: (Floating a, Num (a -> a)) => a -> a.
Haskell actually allows the function type to implement the Num typeclass; you could write such an instance yourself, although it would likely be nonsensical. Even if something wouldn't seem to make sense to us, GHC will allow it so long as the types can be cleanly solved.
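To see that this really is possible, here is a sketch of such an instance (it is not from any library; it lifts the Num operations pointwise and needs the FlexibleInstances extension):
{-# LANGUAGE FlexibleInstances #-}

-- a made-up, pointwise Num instance for functions, purely for illustration
instance Num a => Num (a -> a) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  f - g         = \x -> f x - g x
  abs f         = \x -> abs (f x)
  signum f      = \x -> signum (f x)
  negate f      = \x -> negate (f x)
  fromInteger n = \_ -> fromInteger n
With this instance in scope, abs sqrt would simply be \x -> abs (sqrt x), but in practice nobody defines such an instance, as the rest of the answer explains.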
You can't really use this function as it stands; it just doesn't make sense. There is no built-in instance of Num (a -> a) for any a, so you'd have to define your own (as in the sketch above). You can, however, compose the functions abs and sqrt using the composition operator (.):
> :type abs . sqrt
abs . sqrt :: Floating c => c -> c
And this does make sense. This function is equivalent to
myfunc x = abs (sqrt x)
Note here that x is first applied to sqrt, and then the result of that computation is passed to abs, rather than passing the function sqrt to abs.

When you see Num (a -> a) it generally means you made a mistake somewhere.
Perhaps you really wanted: abs . sqrt, which has type Floating c => c -> c, i.e. a function from a Floating type (e.g. Float, Double) to the same Floating type.

It is probably not possible to use this function.
What's likely happening here is that the type is saying that abs(sqrt) has the constraints that a must be of type class Floating and (a -> a) must be of type class Num. In other words, the sqrt function needs to be able to be treated as if it was a number.
Unfortunately, sqrt is not an instance of the Num type class, so there won't be any input that will work here (not that it would make sense anyway). However, some versions of GHCi still let you ask for the type of the expression as if using it were possible.
Have a look at Haskell type length + 1 for a similar type problem.
As ErikR has said, perhaps you meant to write abs . sqrt instead.

Related

How to check an instance of the Fractional class for a zero value in Haskell?

If I have a value which is restricted to be Fractional, is there a way I can check it either for being a zero value or for some neutral value? I'm trying to implement safe division with a signature like this:
safe_div :: (Fractional q) => q -> q -> Maybe q
I've checked on Hoogle whether there is some method in the minimal definition which can help, but to no avail.
Thanks in advance
Update: Since the question caused some confusion, I want to avoid changing the constraints on q, and pattern matching still requires Eq (I guess implicitly).
For following definition:
safe_div :: (Fractional q) => q -> q -> Result q
safe_div a 0 = Err ["division by zero"]
safe_div a b = Ok (a / b)
following error is raised:
    * Could not deduce (Eq q) arising from the literal `0'
      from the context: Fractional q
        bound by the type signature for:
                   safe_div :: forall q. Fractional q => q -> q -> Result q
        at AST2.hs:11:1-48
      Possible fix:
        add (Eq q) to the context of
          the type signature for:
            safe_div :: forall q. Fractional q => q -> q -> Result q
    * In the pattern: 0
      In an equation for `safe_div':
          safe_div a 0 = Err ["division by zero"]
With the signature safe_div :: (Fractional q) => q -> q -> Result q, you can only use methods from Fractional or its superclasses on your q values.
(Or other predefined functions that impose no more constraints than Fractional, but those will ultimately have to be implemented by the class methods. So they can't do anything you couldn't do directly with the methods yourself.)
From Fractional itself that gives us:
(/) :: Fractional a => a -> a -> a
recip :: Fractional a => a -> a
fromRational :: Fractional a => Rational -> a
Well none of those look terribly helpful. But Num is a superclass of Fractional, so we have those methods too:
(+) :: Num a => a -> a -> a
(-) :: Num a => a -> a -> a
(*) :: Num a => a -> a -> a
negate :: Num a => a -> a
abs :: Num a => a -> a
signum :: Num a => a -> a
fromInteger :: Num a => Integer -> a
These also aren't going to help us. abs and signum at first seem like they might, since their "purpose" is telling us certain properties about the number; signum even says this:
For real numbers, the signum is either -1 (negative), 0 (zero) or 1 (positive).
Which sounds exactly the kind of thing we want! The only trouble is that signum communicates the result of its inspection as a value of the same type. If we couldn't tell if our number x is equal to zero, how are we going to tell whether signum x is equal to zero? We're right back at the problem we started with.
The fact is every single method of both Fractional and Num only ever returns a value of the (unknown) type implementing the class. That basically means if you don't know what the type actually is, it's impossible to get any information out of them; the only thing you can do with a value of an unknown Fractional type is pass it to another Fractional (or Num) method, which will also only give you back a value of the same unknown type. There's no way to compare it to anything (which would require something returning a Bool, Ordering, or at least a Maybe, Either, etc). There's no way to convert it to text so it can be shown to a user (which would require something returning a String, Text, etc). You can do further calculations, but the only thing we can ever learn is that the calculation didn't error out, and only by trying it and hoping (which is exactly what you're trying to avoid!).
The only way you can implement your desired function is to add more constraints. Eq is exactly the class of types which can be compared, and you want to compare values of your type, so it just makes sense that you will have to constrain your function to operate within this class of types.
However, anyone calling this function polymorphically is in the same boat. It's very useful for intermediate functions (like this one) to work with unknown Fractional types, so that they can be called with any fractional type. But at the outermost level where someone first decided to call one of these functions, it's only ever useful to call these with a concrete type they actually know something about. Nobody wants to do calculations on numbers where they can't inspect the result in any way! This means that even though your safe_div function (as it is currently written) cannot assume any details of any particular type (such as whether it can be compared for equality), it in fact will only ever realistically be called with specific types like Double, Float, etc, all of which do support Eq. So in practice adding the Eq constraint is hardly limiting who can call it.
I imagine the reason you don't want to change the constraint is that you've already coded up functions where you use this, and they only have Fractional constraints (meaning they can't call safe_div :: (Eq a, Fractional a) => a -> a -> Result a). Unfortunately they'll have to be updated to add the Eq constraint too. The fact is that the interface of just Fractional gives only the ability to do basic arithmetic. To do comparisons and branching calculations you need more. So all your functions that want to do more than basic arithmetic (and that includes any that call anything that does more than basic arithmetic, not just ones that do the comparison and branching themselves) need more constraints than just Fractional. Fortunately the same reasoning as above applies: it is extremely unlikely that you would ever need to call any of these functions with a type that doesn't support Eq, so there really is very little point in resisting the additional constraint.
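To make the suggested fix concrete, here is a minimal sketch, assuming a Result type with Err and Ok constructors like the one in the question:
data Result a = Err [String] | Ok a
  deriving Show

safe_div :: (Eq q, Fractional q) => q -> q -> Result q
safe_div _ 0 = Err ["division by zero"]
safe_div a b = Ok (a / b)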

How can an instance of the Num type class be coerced to Fractional implicitly?

I tested the numeric coercion by using GHCI:
>> let c = 1 :: Integer
>> 1 / 2
0.5
>> c / 2
<interactive>:15:1: error:
    • No instance for (Fractional Integer) arising from a use of ‘/’
    • In the expression: c / 2
      In an equation for ‘it’: it = c / 2
>> :t (/)
(/) :: Fractional a => a -> a -> a -- (/) needs Fractional type
>> (fromInteger c) / 2
0.5
>> :t fromInteger
fromInteger :: Num a => Integer -> a -- Just convert the Integer to Num not to Fractional
I can use the fromInteger function to convert an Integer to some Num type (fromInteger has the type fromInteger :: Num a => Integer -> a), but I cannot understand how the type Num can be converted to Fractional implicitly.
I know that if a type is an instance of Fractional it must also be an instance of Num (class Num a => Fractional a where), but does that necessarily mean that if a type is an instance of Num it can be used as an instance of Fractional?
#mnoronha Thanks for your detailed reply. There is only one question that confuses me. I know the reason that c cannot be used in the function (/) is that its type is Integer, which is not an instance of the type class Fractional (the function (/) requires that the type of its arguments be an instance of Fractional). What I don't understand is that even after calling fromInteger to convert the Integer to a type that is an instance of Num, that does not mean the type is an instance of Fractional (because the Fractional type class is more constrained than the Num type class, so a type may not implement some functions required by the Fractional type class). If a type does not fully satisfy the conditions the Fractional type class requires, how can it be used in the function (/), which requires its argument types to be instances of Fractional? Sorry, I'm not a native speaker, and really thanks for your patience!
I tested that if a type only fits the parent type class, it cannot be used in a function which requires the more constrained type class.
{-# LANGUAGE OverloadedStrings #-}
module Main where

class ParentAPI a where
  printPar :: int -> a -> String

class (ParentAPI a) => SubAPI a where
  printSub :: a -> String

data ParentDT = ParentDT Int

instance ParentAPI ParentDT where
  printPar i p = "par"

testF :: (SubAPI a) => a -> String
testF a = printSub a

main = do
  let m = testF $ ParentDT 10000
  return ()
====
test-typeclass.hs:19:11: error:
    • No instance for (SubAPI ParentDT) arising from a use of ‘testF’
    • In the expression: testF $ ParentDT 10000
      In an equation for ‘m’: m = testF $ ParentDT 10000
      In the expression:
        do { let m = testF $ ParentDT 10000;
             return () }
I have found a doc explaining the numeric overloading ambiguity very clearly and may help others with the same confusion.
https://www.haskell.org/tutorial/numbers.html
First, note that both Fractional and Num are not types, but type classes. You can read more about them in the documentation or elsewhere, but the basic idea is that they define behaviors for types. Num is the most inclusive numeric typeclass, defining functions like (+) and negate, which are common to pretty much all "numeric types." Fractional is a more constrained type class that describes "fractional numbers, supporting real division."
If we look at the type class definition for Fractional, we see that it is actually defined as a subclass of Num. That is, for a type a to have a Fractional instance, it must first be a member of the typeclass Num:
class Num a => Fractional a where
Let's consider some type that is constrained by Fractional. We know it implements the basic behaviors common to all members of Num. However, we can't expect it to implement behaviors from other type classes unless multiple constraints are specified (ex. (Num a, Ord a) => a). Take, for example, the function div :: Integral a => a -> a -> a (integral division). If we try to apply the function to an argument that is only constrained by the typeclass Fractional (ex. 1.2 :: Fractional t => t), we encounter an error. Type classes restrict the sort of values a function deals with, allowing us to write more specific and useful functions for types that share behaviors.
Now let's look at the more general typeclass, Num. If we have a type variable a that is only constrained by Num a => a, we know that it will implement the (few) basic behaviors included in the Num type class definition, but we'd need more context to know more. What does this mean practically? We know from our Fractional class declaration that functions defined in the Fractional type class are applied to Num types. However, these Num types are a subset of all possible Num types.
The importance of all this, ultimately, has to do with the ground types (where type class constraints are most commonly seen in functions). a represents a type, with the notation Num a => a telling us that a is a type that has an instance of the type class Num. a could be any of the types that have such an instance (ex. Int, Natural). Thus, if we give a value the general type Num a => a, we know it can be used at any type that has a Num instance. For example:
ghci>> let a = 3 :: (Num a => a)
ghci>> a / 2
1.5
Whereas if we'd defined a as a specific type or in terms of a more constrained type class, we would have not been able to expect the same results:
ghci>> let a = 3 :: Integral a => a
ghci>> a / 2
-- Error: ambiguous type variable
or
ghci>> let a = 3 :: Integer
ghci>> a / 2
-- Error: No instance for (Fractional Integer) arising from a use of ‘/’
(Edit responding to followup question)
This is definitely not the most concrete explanation, so readers feel free to suggest something more rigorous.
Suppose we have a function a that is just a type class constrained version of the id function:
a :: Num a => a -> a
a = id
Let's look at type signatures for some applications of the function:
ghci>> :t (a 3)
(a 3) :: Num a => a
ghci>> :t (a 3.2)
(a 3.2) :: Fractional a => a
While our function had the general type signature, the type of each application is more restricted, reflecting the argument it was applied to.
Now, let's look at the function fromIntegral :: (Num b, Integral a) => a -> b. Here, the return type is the general Num b, and this will be true regardless of the input. I think the best way to think of this difference is in terms of precision. fromIntegral takes a more constrained type and makes it less constrained, so we always expect the result to be constrained only by the type class from the signature. However, if we give an input constraint, the actual input could be more restricted than the constraint, and the resulting type would reflect that.
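A short GHCi check of that difference (session output assumed): the concrete input type is forgotten, and the result is only constrained by Num, so the caller is free to pick any Num type for it.
ghci>> :t fromIntegral (3 :: Int)
fromIntegral (3 :: Int) :: Num b => b
ghci>> fromIntegral (3 :: Int) / 2
1.5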
The reason why this works comes down to the way universal quantification works. To help explain this I am going to add explicit foralls to the type signatures (which you can do yourself if you enable -XExplicitForAll or any other forall-related extension), but if you just removed them (forall a. ... becomes just ...), everything will work fine.
The thing to remember is that when a function involves a type constrained by a typeclass, then what that means is that you can input/output ANY type within that typeclass, so it's actually better to have a less constrained typeclass.
So:
fromInteger :: forall a. Num a => Integer -> a
fromInteger 5 :: forall a. Num a => a
Means that you have a value that is of EVERY Num type. So not only can you use it in a function taking it in a Fractional, you could use it in a function that only takes in MyWeirdTypeclass a => ... as long as there is one single type that implements both Num and MyWeirdTypeclass. Hence why you can get the following just fine:
fromInteger 5 / 2 :: forall a. Fractional a => a
Of course, once you decide to divide by 2, the output type now has to be Fractional, and thus 5 and 2 will be interpreted as some Fractional type, so we won't run into issues where we try to divide Int values; trying to make the above have type Int will fail to type check.
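A quick GHCi check of that (output assumed; Rational is just one example of a Fractional type the caller could pick):
ghci> fromInteger 5 / 2
2.5
ghci> fromInteger 5 / 2 :: Rational
5 % 2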
This is really powerful and awesome, but very much unfamiliar, as generally other languages either don't support this, or only support it for input arguments (e.g. print in most languages can take in any printable type).
Now you may be curious about when the whole superclass/subclass stuff comes into play. When you are defining a function that takes in something of type Num a => a, then because a user can pass in ANY Num type, you are correct that in this situation you cannot use functions defined on some subclass of Num, only things that work on ALL Num values, like (*):
double :: forall a. Num a => a -> a
double n = n * 2 -- in here `n` really has type `exists a. Num a => a`
So the following does not type check, and it wouldn't type check in any language, because you don't know that the argument is a Fractional.
halve :: Num a => a -> a
halve n = n / 2 -- in here `n` really has type `exists a. Num a => a`
What we have up above with fromInteger 5 / 2 is more equivalent to the following higher-rank function; note that the forall within the parentheses is required, and you need to use -XRankNTypes:
halve :: forall b. Fractional b => (forall a. Num a => a) -> b
halve n = n / 2 -- in here `n` has type `forall a. Num a => a`
This time you are taking in EVERY Num type (just like the fromInteger 5 you were dealing with before), not just ANY Num type. The downside of this function (and one reason why no one wants it) is that you really do have to pass in something that is of EVERY Num type:
halve (2 :: Int) -- does not work
halve (3 :: Integer) -- does not work
halve (1 :: Double) -- does not work
halve (4 :: Num a => a) -- works!
halve (fromInteger 5) -- also works!
I hope that clears things up a little. All you need for the fromInteger 5 / 2 to work is that there exists ONE single type that is both a Num and a Fractional, or in other words just a Fractional, since Fractional implies Num. Type defaulting doesn't help much with clearing up this confusion, as what you may not realize is that GHC is just arbitrarily picking Double, it could have picked any Fractional.

Polymorphism: a constant versus a function

I'm new to Haskell and came across an example in the Haskell Programming from First Principles book that slightly puzzles me. At the end of Chapter 6 it suddenly occurred to me that the following doesn't work:
constant :: (Num a) => a
constant = 1.0
However, the following works fine:
f :: (Num a) => a -> a
f x = 3*x
I can input any numerical value for x into the function f and nothing will break. It's not constrained to taking integers. This makes sense to me intuitively. But the example with the constant is totally confusing to me.
Over on a reddit thread for the book it was explained (paraphrasing) that the reason why the constant example doesn't work is that the type declaration forces the value of constant to only be things which aren't more specific than Num. So trying to assign a value to it which is from a subclass of Num like Fractional isn't kosher.
If that explanation is correct, then am I wrong in thinking that these two examples seem completely opposites of each other? In one case, the type declaration forces the value to be as general as possible. In the other case, the accepted values for the function can be anything that implements Num.
Can anyone set me straight on this?
It can sometimes help to read types as a game played between two actors, the implementor of the type and the user of the type. To do a good job of explaining this perspective, we have to introduce something that Haskell hides from you by default: we will add binders for all type variables. So your types would actually become:
constant :: forall a. Num a => a
f :: forall a. Num a => a -> a
Now, we will read type formation rules thusly:
forall a. t means: the caller chooses a type a, and the game continues as t
c => t means: the caller shows that constraint c holds, and the game continues as t
t -> t' means: the caller chooses a value of type t, and the game continues as t'
t (where t is a monomorphic type such as a bare variable or Integer or similar) means: the implementor produces a value of type t
We will need a few other details to truly understand things here, so I will quickly say them here:
When we write a number with no decimal points, the compiler implicitly converts this to a call to fromInteger applied to the Integer produced by parsing that number. We have fromInteger :: forall a. Num a => Integer -> a.
When we write a number with decimal points, the compiler implicitly converts this to a call to fromRational applied to the Rational produced by parsing that number. We have fromRational :: forall a. Fractional a => Rational -> a.
The Num class includes the method (*) :: forall a. Num a => a -> a -> a.
Now let's try to walk through your two examples slowly and carefully.
constant :: forall a. Num a => a
constant = 1.0 {- = fromRational (1 % 1) -}
The type of constant says: the caller chooses a type, shows that this type implements Num, and then the implementor must produce a value of that type. Now the implementor tries to play his own game by calling fromRational :: Fractional a => Rational -> a. He chooses the same type the caller did, and then makes an attempt to show that this type implements Fractional. Oops! He can't show that, because the only thing the caller proved to him was that a implements Num -- which doesn't guarantee that a also implements Fractional. Dang. So the implementor of constant isn't allowed to call fromRational at that type.
Now, let's look at f:
f :: forall a. Num a => a -> a
f x = 3*x {- = fromInteger 3 * x -}
The type of f says: the caller chooses a type, shows that the type implements Num, and chooses a value of that type. The implementor must then produce another value of that type. He is going to do this by playing his own game with (*) and fromInteger. In particular, he chooses the same type the caller did. But now fromInteger and (*) only demand that he prove that this type is an instance of Num -- so he passes off the proof the caller gave him of this and saves the day! Then he chooses the Integer 3 for the argument to fromInteger, and chooses the result of this and the value the caller handed him as the two arguments to (*). Everybody is satisfied, and the implementor gets to return a new value.
The point of this whole exposition is this: the Num constraint in both cases is enforcing exactly the same thing, namely, that whatever type we choose to instantiate a at must be a member of the Num class. It's just that in the definition constant = 1.0 being in Num isn't enough to do the operations we've written, whereas in f x = 3*x being in Num is enough to do the operations we've written. And since the operations we've chosen for the two things are so different, it should not be too surprising that one works and the other doesn't!
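To make that concrete, here is a small sketch (the names constantF and constantN are made up) showing which combinations typecheck, with the operation each body desugars to noted in the comments:
constantF :: Fractional a => a   -- 1.0 is fromRational (1 % 1); needs Fractional
constantF = 1.0

constantN :: Num a => a          -- 1 is fromInteger 1; Num is enough
constantN = 1

f :: Num a => a -> a             -- (*) and fromInteger only need Num
f x = 3 * x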
When you have a polymorphic value, the caller chooses which concrete type to use. The Haskell report defines the type of numeric literals, namely:
integer and floating literals have the typings (Num a) => a and
(Fractional a) => a, respectively
3 is an integer literal, so it has type Num a => a, and (*) has type Num a => a -> a -> a, so f has type Num a => a -> a.
In contrast, 3.0 has type Fractional a => a. Since Fractional is a subclass of Num, your type signature for constant is invalid: the caller could choose a type for a which is Num but not Fractional, e.g. Int or Integer.
They don't mean the opposite; they mean exactly the same thing ("as general as possible"). A typeclass gives you all the guarantees that you can rely upon: if typeclass T provides function f, you can use it for all instances of T, but even if some of these instances are members of G (which provides g) as well, requiring only the T typeclass is not sufficient to call g.
In your case this means:
Members of Num are guaranteed to provide conversion from integers (i.e. the default type for integral literals, like 1 or 1000) via the fromInteger function.
However, they are not guaranteed to provide conversion from rational numbers (like 1.0); the Fractional typeclass does provide this as the fromRational function, but that doesn't help here, as your signature only requires Num.

How can an arbitrary Num contain any other numeric type?

I'm just starting with Haskell, and I thought I'd start by making a random image generator. I looked around a bit and found JuicyPixels, which offers a neat function called generateImage. The example that they give doesn't seem to work out of the box.
Their example:
imageCreator :: String -> IO ()
imageCreator path = writePng path $ generateImage pixelRenderer 250 300
  where pixelRenderer x y = PixelRGB8 x y 128
When I try this, I get that generateImage expects an Int -> Int -> PixelRGB8 whereas pixelRenderer is of type Pixel8 -> Pixel8 -> PixelRGB8. PixelRGB8 is of type Pixel8 -> Pixel8 -> Pixel8 -> PixelRGB8, so it makes sense that pixelRenderer is doing some type inference to determine that x and y are of type Pixel8. If I define a type signature that asserts that they are of type Int (so the function gets accepted by generateImage), PixelRGB8 complains that it needs Pixel8s, not Ints.
Pixel8 is just a type alias for Word8. After some hair pulling, I discovered that the way to convert an Int to a Word8 is by using fromIntegral.
The type signature for fromIntegral is (Integral a, Num b) => a -> b. It seems to me that the function doesn't actually know what you want to convert it to, so it converts to the very generic Num class. So theoretically, the output of this is a variable of any type that fits the type class Num (correct me if I'm mistaken here--as I understand it, classes are kind of like "interfaces" where types are more like classes/primitives in OOP). If I assign a variable
let n = fromIntegral 5
:t n -- n :: Num b => b
So I'm wondering... what is 'b'? I can use this variable as anything, and it will implicitly cast to any numeric type, as it seems. Not only will it implicitly cast to a Word8, it will implicitly cast to a Pixel8, meaning fromIntegral effectively gets turned from (as I understood it) (Integral a, Num b) => a -> b to (Integral a) => a -> Pixel8 depending on context.
Can someone please clarify exactly what's happening here? Why can I use a generic Num as any type that fits Num, both mechanically and "ethically"? I don't understand how the implicit conversion is implemented (if I were to create my own class, I feel like I would need to add explicit conversion functions). I also don't really know why this works; here I can use a pretty unsafe type and convert it implicitly to anything else. (for example, fromIntegral 50000 gets translated to 80 if I implicitly convert it to a Word8)
A common implementation of type classes such as Num is dictionary-passing. Roughly, when the compiler sees something like
f :: Num a => a -> a
f x = x + 2
it transforms it into something like
f :: (Integer -> a, a -> a -> a) -> a -> a
-- ^-- the "dictionary"
f (dictFromInteger, dictPlus) x = dictPlus x (dictFromInteger 2)
The latter basically says: "pass me an implementation for these methods of class Num for your type a, and I will use them to produce a function a -> a for you".
Values such as your n :: Num b => b are no different. They are compiled into things such as
n :: (Integer -> b) -> b
n dictFromInteger = dictFromInteger 5 -- roughly
As you can see, this turns innocent-looking integer literals into functions, which can (and does) impact performance. However, in many circumstances the compiler can realize that the full polymorphic version is not actually needed, and remove all the dictionaries.
For instance, if you write f 3 but f expects Int, the "polymorphic" 3 can be converted at compile time. So type inference can aid the optimization phase (and user-written type annotation can greatly help here). Further, some other optimizations can be triggered manually, e.g. using the GHC SPECIALIZE pragma. Finally, the dreaded monomorphism restriction tries hard to force non-functions to remain non-functions after translation, at the cost of some loss of polymorphism. However, the MR is now being regarded as harmful, since it can cause puzzling type errors in some contexts.
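If it helps, here is a hand-rolled sketch of the same dictionary-passing idea in ordinary Haskell (NumDict and its fields are invented names; GHC builds and passes its real dictionaries automatically):
import Data.Word (Word8)

-- an explicit "dictionary" holding the two Num methods the function needs
data NumDict a = NumDict
  { dictFromInteger :: Integer -> a
  , dictPlus        :: a -> a -> a
  }

-- the Word8 dictionary just packages up the real instance's methods
numDictWord8 :: NumDict Word8
numDictWord8 = NumDict fromInteger (+)

-- the "compiled" form of f receives the dictionary as an ordinary argument
fDict :: NumDict a -> a -> a
fDict d x = dictPlus d x (dictFromInteger d 2)

-- e.g. fDict numDictWord8 40 evaluates to 42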

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...
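For contrast, a minimal sketch of a definition that does satisfy the original polymorphic signature: the body has to produce something the caller can instantiate at any Integral type, which an integer literal (i.e. fromInteger) can do.
f :: Integral b => a -> b
f _ = 3   -- the literal is fromInteger 3, so the caller may pick Int, Integer, Word64, ...

-- e.g. f () :: Int       evaluates to 3
--      f "x" :: Integer  evaluates to 3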

Resources