I defined a function as follows in Haskell
func x | x > 0     = 4
       | otherwise = error "Non positive number"
Its inferred type is
func :: (Ord a, Num a, Num t) => a -> t
Why can't its type be
func :: (Ord a, Num a) => a -> a
The type func :: (Ord a, Num a) => a -> a means that the returned type must match the one you passed in. But since you are returning a constant that doesn't depend on the input, its type can be any type in the Num class. Thus, the compiler infers func :: (Ord a, Num a, Num t) => a -> t.
The inferred type is correct. Let's see how it gets inferred. We will only look at the returned value first:
func x | … = 4
       | … = error "…"
The latter term, error "…" :: a, is unconstrained. However, 4 has the type Num a => a, therefore your function will have type
func :: (Num result) => ????? -> result
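You can check these building blocks in GHCi (a quick sanity check; the exact type-variable names depend on your GHC version):
Prelude> :t 4
4 :: Num p => p
Prelude> :t error "Non positive number"
error "Non positive number" :: a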
Now let's have a look at x:
func x | x > 0 = …
       | …
Due to the use of (>) :: Ord a => a -> a -> Bool and 0 :: Num a => a, x has to be an instance of both Num and Ord. So from that perspective, we have
func :: (Num a, Ord a) => a -> ????
The important part is that 4 and x don't interact. Therefore func has the type
func :: (Num a, Ord a, Num result) => a -> result
Of course, result and a could be the same type. By the way, as soon as you connect x and 4, the type will get more specific:
func x | x > 0 = 4 + x - x
       | …
will have type (Num a, Ord a) => a -> a.
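If you want to see this for yourself, a GHCi session along these lines should reproduce both inferred types (funcB is just a name I made up for the second variant; the constraint order and type-variable names may differ between GHC versions):
Prelude> func x | x > 0 = 4 | otherwise = error "Non positive number"
Prelude> :t func
func :: (Ord a, Num a, Num t) => a -> t
Prelude> funcB x | x > 0 = 4 + x - x | otherwise = error "Non positive number"
Prelude> :t funcB
funcB :: (Ord a, Num a) => a -> a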
As you may or may not be aware, Haskell’s numeric literals are polymorphic. That means that a number literal, like 0 or 4, has the type Num a => a. You seem to already understand this, since the type signature you expect includes a Num constraint.
And indeed, the type signature you have provided is a valid one. If you included it, it would be accepted by the compiler, and if it’s the type you actually want, by all means include it. However, the reason it is different from the inferred type is that Haskell will attempt to infer the most general type possible, and the inferred type is more general than the one you’ve written.
The question is, why is the inferred type permitted? Well, consider this similar but distinct function:
func' x | x > 0 = "ok"
        | otherwise = error "Non positive number"
This function’s type is (Ord a, Num a) => a -> String. Notably, the input of the function has absolutely no bearing on the output, aside from whether or not the result is ⊥. For this reason, the result can be absolutely anything, including a string.
However, consider yet another function, again similar but distinct:
func'' x | x > 0 = x - 1
         | otherwise = error "Non positive number"
This function must have the type you described, since it uses x to produce a new value.
However, look again at the original func definition. Note how the literal on the right hand side, the 4, does not have anything to do with x. That definition is actually quite a bit closer to func' than to func'', so the numeric literal has the type Num t => t and never needs to be specialized. Therefore the inferred type allows the two numeric types to be separate, and you end up with the more general of the two types shown in your question.
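As a small illustration of what that extra generality buys you (assuming func exactly as defined in the question), the argument and result types can be instantiated differently at the call site:
Prelude> func (1 :: Int) :: Double
4.0
With the narrower signature (Ord a, Num a) => a -> a, that same call would be rejected, because the argument and the result would be forced to have the same type.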
Related
I'm trying to define a Complex datatype, and I want the constructors to be able to take any kind of number, so I would like to use a generic type as long as it has a Num instance.
I'm using GADTs to do so, since to my understanding the DataTypeContexts language extension was a "misfeature", even though I think it would have been useful in this case...
In any case this is my code:
data Complex where
  Real      :: ( Num num, Show num ) => num -> Complex
  Imaginary :: ( Num num, Show num ) => num -> Complex
  Complex   :: ( Num num, Show num ) => num -> num -> Complex
real :: ( Num num, Show num ) => Complex -> num
real (Real r) = r
real (Imaginary _i ) = 0
real (Complex r _i ) = r
Here the implementation of real gives the following error:
• Couldn't match expected type ‘num’ with actual type ‘num1’
  ‘num1’ is a rigid type variable bound by
    a pattern with constructor:
      Real :: forall num. (Num num, Show num) => num -> Complex,
    in an equation for ‘real’
    at <...>/Complex.hs:29:7-12
  ‘num’ is a rigid type variable bound by
    the type signature for:
      real :: forall num. (Num num, Show num) => Complex -> num
    at <...>/Complex.hs:28:1-47
• In the expression: r
  In an equation for ‘real’: real (Real r) = r
• Relevant bindings include
    r :: num1
      (bound at <...>/Complex.hs:29:12)
    real :: Complex -> num
      (bound at <...>/Complex.hs:29:1)
which to my understanding is due to the return type being interpreted as a different type...
so I tried removing the type signature and letting GHC work its magic with the inference, but it turns out the inferred signature was the same...
can anyone please explain to me what is wrong here?
Problem is, these definitions allow you to choose different types when (1) constructing a Complex value and when (2) applying the real function. These two situations are not connected to each other in any way, so there is nothing to force the type to be the same between them. For example:
c :: Complex
c = Real (42 :: Int)
d :: Double
d = real c
The definition of d requires the real function to return a Double, but there is no Double wrapped inside of c, there is only Int.
As for solutions, there are two possible ones: (1) establish a connection between these two points, forcing the type to be the same, and (2) allow the type inside to be converted to any other numeric type.
To establish a type-level connection between two points, we need to use a type that is present at both points. What type would that be? Quite obviously, that's the type of c. So we need to make the type of c somehow convey what's wrapped inside it:
data Complex num = Real num | Imaginary num | Complex num num
real :: Complex num -> num
real = ...
-- Usage:
c :: Complex Int
c = Real 42
d :: Int
d = real c
Note that I don't actually need GADTs for this.
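For completeness, here is a small self-contained sketch of that first option (the imaginary accessor and the deriving clause are my own additions, just for illustration; note that real now needs a Num constraint because of the literal 0):
data Complex num = Real num | Imaginary num | Complex num num
  deriving Show

real :: Num num => Complex num -> num
real (Real r)       = r
real (Imaginary _i) = 0
real (Complex r _i) = r

imaginary :: Num num => Complex num -> num
imaginary (Real _r)      = 0
imaginary (Imaginary i)  = i
imaginary (Complex _r i) = i

-- Usage: real (Complex 3 4 :: Complex Int) evaluates to 3.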
To allow type conversion, you'll need to require another type class for the num type. The class Num has a way to convert from any integral type, but there is no way to convert to any such type, because it doesn't make sense: 3.1415 can't be meaningfully converted to an integral type.
So you'll have to come up with your own way to convert, and implement it for all allowed types too:
class Convert a where
  toNum :: Num n => a -> n
data Complex where
  Real :: ( Num num, Show num, Convert num ) => num -> Complex
  ...
real :: Num num => Complex -> num
real (Real r) = toNum r
...
Just to be clear, I consider the second option quite insane. I only provided it for illustration. Don't do it. Go with option 1.
I searched the web trying to find out how Haskell is able to resolve types for a guarded equation where no type signature is specified.
For example, in the following function definition, GHCi is able to resolve the type and tell me exactly what it is.
fun a b c
  | a == c = b
  | b < 0 = a+b
  | otherwise = c
How does it do this? I know how this works for if-then-else constructs (basically: start with generic version, add constraints) but I am wondering which extra steps are needed here?
fun has three arguments and a result. So initially the compiler assumes they might each be a different type:
fun :: alpha -> beta -> gamma -> delta -- the type vars are placeholders
Ignoring the guards, look at the rhs result of the equations: they each must be type delta. So set up a series of type-level equations from the term-level equations. Use ~ between types to say they must be the same.
The first equation has result b, so b's type must be same as the result, i.e. beta ~ delta.
The third equation has result c, so c's type must be the same as the result, gamma ~ delta.
The second equation's rhs uses the operator (+) :: Num a => a -> a -> a. (We'll come back to the Num a =>.) This says + takes two arguments and produces a result, all of the same type. That type is the result of fun's rhs, so it must be delta. The second argument to + is b, so b's type must be the same as the result, beta ~ delta. We already had that from the first equation, so this just confirms consistency.
The first argument to + is a, so a's type must be the same again, alpha ~ delta.
We have alpha ~ beta ~ gamma ~ delta. So go back to the initial signature (which was as general as possible), and substitute equals for equals:
fun :: (constraints) => alpha -> alpha -> alpha -> alpha
Constraints
Pick those up on the fly from the operators.
We already saw Num because of operator +.
The first equation's a == c gives us Eq.
The second equation's b < 0 gives us Ord.
Actually, the appearance of 0 gives us another Num, because 0 :: Num a => a, and (<) :: Ord a => a -> a -> Bool; in other words, the arguments to < must have the same type and the same constraints.
So pile up those constraints at the front of fun's type, eliminating duplicates
fun :: (Num alpha, Eq alpha, Ord alpha) => alpha -> alpha -> alpha -> alpha
Is that what your favourite compiler is telling you? It's probably using the type variable a rather than alpha, and it's probably not showing Eq alpha.
Eliminating unnecessary superclass constraints
Main> :i Ord
...
class Eq a => Ord a where
...
The Eq a => is telling us that every Ord type must already come with an Eq. So in giving the signature for fun, the compiler drops the redundant Eq constraint.
fun :: (Num a, Ord a) => a -> a -> a -> a
QED
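If you want to double-check, loading the definition into GHCi and asking for the type should give something along these lines (modulo the order of the constraints and the name of the type variable):
Prelude> fun a b c | a == c = b | b < 0 = a+b | otherwise = c
Prelude> :t fun
fun :: (Ord a, Num a) => a -> a -> a -> a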
fun a b c
  | a == c = b
  | b < 0 = a+b
  | otherwise = c
Is it clear how this translates to if/else?
fun a b c
  = if a == c
      then b
      else if b < 0
        then a+b
        else c
And if we add some human-readable annotations:
fun a b c           -- Start reasoning with types of `a`, `b`, `c`, and `result :: r`
  = if a == c       -- types `a` and `c` are equal, aka `a ~ c`. Also `Eq a, Eq c`.
      then b        -- types `b ~ r`
      else if b < 0 -- `Num b, Ord b`
        then a+b    -- `a ~ b ~ r` and `Num a, Num b`
        else c      -- `c ~ r`
If you take all these facts together the type boils down pretty quickly.
a ~ b ~ r and c ~ r
So we know there's actually only one type, which we'll just call a, and we rename all the other type variables in the facts accordingly.
Num a, Eq a, Ord a
As a small cognitive saving, we know Ord implies Eq, so we end up with the constraints Num a, Ord a.
All of that swept the mechanics (leveraging implications such as (==) t1 t2 ~~> t1 = t2) under the rug, but hopefully in an acceptable fashion.
In Haskell, when we talk about type declarations, I've seen both -> and =>.
As an example: I can make my own type declaration.
addMe :: Int -> Int -> Int
addMe x y = x + y
And it works just fine.
But if we take a look at :t sqrt we get:
sqrt :: Floating a => a -> a
At what point do we use => and when do we use ->?
When do we use "fat arrow" and when do we use "thin arrow"?
-> is for explicit functions. I.e. when f is something that can be written in an expression of the form f x, the signature must have one of these arrows in it†. Specifically, the type of x (the argument) must appear to the left of a -> arrow.
It's best to not think of => as a function arrow at all, at least at first‡. It's an implication arrow in the logical sense: if a is a type with the property Floating a, then it follows that the signature of sqrt is a -> a.
For your addMe example, which is a function with two arguments, the signature must always have the form x -> y -> z. Possibly there can also be a q => in front of that; that doesn't influence the function-ishness, but may have some say in which particular types are allowed. Generally, such constraints are not needed if the types are already fixed and concrete. For instance, you could in principle impose a constraint on Int:
addMe :: Num Int => Int -> Int -> Int
addMe x y = x + y
...but that doesn't really accomplish anything, because everybody knows that the particular type Int is an instance of the Num class. Where you need such constraints is when the type is not fixed but a type variable (i.e. lowercase), i.e. if the function is polymorphic. You can't just write
addMe' :: a -> a -> a
addMe' x y = x + y
because that signature would suggest the function works for any type a whatsoever, but it can't work for all types (how would you add, for example, two strings? ok perhaps not the best example, but how would you multiply two strings?)
Hence you need the constraint
addMe' :: Num a => a -> a -> a
addMe' x y = x + y
This means you don't care what exact type a is, but you do require it to be a numerical type. Anybody can use the function with their own type MyNumType, but they need to ensure that Num MyNumType is fulfilled: then it follows that addMe' can have the signature MyNumType -> MyNumType -> MyNumType.
The way to ensure this is to either use a standard type which you know to be numerical, for instance addMe' 5.9 3.7 :: Double would work, or give an instance declaration for your custom type and the Num class. Only do the latter if you're sure it's a good idea; usually the standard num types are all you'll need.
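For illustration, a minimal sketch of that second route (MyNumType is a made-up example type; a real instance should of course also make arithmetic sense):
newtype MyNumType = MyNumType Int deriving Show

instance Num MyNumType where
  MyNumType x + MyNumType y = MyNumType (x + y)
  MyNumType x * MyNumType y = MyNumType (x * y)
  negate (MyNumType x)      = MyNumType (negate x)
  abs    (MyNumType x)      = MyNumType (abs x)
  signum (MyNumType x)      = MyNumType (signum x)
  fromInteger               = MyNumType . fromInteger

-- Now addMe' (MyNumType 2) (MyNumType 3) evaluates to MyNumType 5.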
†Note that the arrow may not be visible in the signature: it's possible to have a type synonym for a function type, for example with type IntEndofunc = Int -> Int, then f :: IntEndofunc; f x = x+x is ok. But you can think of the synonym as essentially just a syntactic wrapper; it's still the same type and does have the arrow in it.
‡It so happens that logical implication and function application can be seen as two aspects of the same mathematical concept. Furthermore, GHC actually implements class constraints as function arguments, so-called dictionaries. But all this happens behind the scenes, so if anything they're implicit functions. In standard Haskell, you will never see the LHS of a => type as the type of some actual argument the function is applied to.
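To give a flavour of that dictionary idea (a hand-written analogy only, not GHC's actual machinery): a constraint like Num a roughly corresponds to an extra record argument whose fields are the class methods.
-- A record standing in for the Num "dictionary": class methods become fields.
data NumDict a = NumDict
  { plus    :: a -> a -> a
  , fromInt :: Integer -> a
  }

-- Num a => a -> a -> a then corresponds to a function taking the dictionary
-- as an ordinary (but, in real Haskell, invisible) first argument.
addMeDict :: NumDict a -> a -> a -> a
addMeDict d x y = plus d x y

-- The "instance" for Int is simply a value of the record type.
intNumDict :: NumDict Int
intNumDict = NumDict { plus = (+), fromInt = fromInteger }

-- addMeDict intNumDict 2 3 evaluates to 5.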
The "thin arrow" is used for function types (t1 -> t2 being the type of a function that takes a value of type t1 and produces a value of type t2).
The "fat arrow" is used for type constraints. It separates the list of type constraints on a polymorphic function from the rest of the type. So given Floating a => a -> a, we have the function type a -> a, the type of a function that can take arguments of any type a and produces a result of that same type, with the added constraint Floating a, meaning that the function can in fact only be used with types that implement the Floating type class.
The -> is the constructor of function types, and the => is used for constraints; a constraint refers to a sort of "interface" in Haskell called a type class.
A little example:
sum :: Int -> Int -> Int
sum x y = x + y
That function only allows Int arguments, but what if you want a very large integer (or a very small one)? Then you probably want Integer too, so how do you tell it to accept both?
sum2 :: Integral a => a -> a -> a
sum2 x y = x + y
now if you try to do:
sum2 3 1.5
it will give you an error
Also, if you want to know whether two values are equal, you want:
equals :: Eq a => a -> a -> Bool
equals x y = x == y
now if you do:
3 == 4
that's ok
but if you create:
data T = A | B
equals A B
it will give you:
error:
    • No instance for (Eq T) arising from a use of ‘equals’
    • In the expression: equals A B
      In an equation for ‘it’: it = equals A B
If you want that to work, you just need to do:
data T = A | B deriving Eq
equals A B
False
This seems to apply to both GHCi and GHC. I'll show an example with GHCi first.
Given that the type of i has been inferred as follows:
Prelude> i = 1
Prelude> :t i
i :: Num p => p
Given that succ is a function defined on Enum:
Prelude> :i Enum
class Enum a where
  succ :: a -> a
  pred :: a -> a
  -- …OMITTED…
and that Num is not a 'subclass' (if I can use that term) of Enum:
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  -- …OMITTED…
why does succ i not produce an error?
Prelude> succ i
2 -- works, no error
I would expect :type i to report something like:
Prelude> i = 1
Prelude> :type i
i :: (Enum p, Num p) => p
(I'm using 'GHC v. 8.6.3')
ADDITION:
After reading @RobinZigmond's comment and @AlexeyRomanov's answer, I have noticed that 1 could be interpreted as one of many types and one of many classes.
Thanks to @AlexeyRomanov's answer I understand much more about the defaulting rules used to decide which type to use for ambiguous expressions.
However, I don't feel that Alexey's answer addresses exactly my question. My question is about the type of i, not about the type of succ i.
It's about the mismatch between succ's argument type (an Enum a) and the apparent type of i (a Num a).
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
GHC also seems to be inferring Enum a in addition to what was explicitly declared.
x :: Num a => a
x = 1
y = succ x -- works
However it is not adding Enum a when the type variable appears as a function:
my_succ :: Num a => a -> a
my_succ z = succ z -- fails compilation
To me it seems that the type constraints attached to a function are stricter than the ones applied to a variable.
GHC is saying my_succ :: forall a. Num a => a -> a, and given that forall a doesn't appear in the type signature of either i or x, I thought that meant GHC was not going to infer any more classes for my_succ's types.
But this seems wrong again: I've checked this idea with the following (the first time I've used RankNTypes) and apparently GHC still infers Enum a:
{-# LANGUAGE RankNTypes #-}
x :: forall a. Num a => a
x = 1
y = succ x
So it seems that inference rules for functions are stricter than the ones for variables?
Yes, succ i's type is inferred as you expect:
Prelude> :t succ i
succ i :: (Enum a, Num a) => a
This type is ambiguous, but it satisfies the conditions in the defaulting rules for GHCi:
Find all the unsolved constraints. Then:
Find those that are of form (C a) where a is a type variable, and partition those constraints into groups that share a common type variable a.
In this case, there's only one group: (Enum a, Num a).
Keep only the groups in which at least one of the classes is an interactive class (defined below).
This group is kept, because Num is an interactive class.
Now, for each remaining group G, try each type ty from the default-type list in turn; if setting a = ty would allow the constraints in G to be completely solved. If so, default a to ty.
The unit type () and the list type [] are added to the start of the standard list of types which are tried when doing type defaulting.
The default default-type list (sic) is (with the additions from the last clause) default ((), [], Integer, Double).
So when you do Prelude> succ i to actually evaluate this expression (note that :t doesn't evaluate the expression it's given), a is set to Integer (the first type in this list satisfying the constraints), and the result is printed as 2.
You can see that this is the reason by changing the default:
Prelude> default (Double)
Prelude> succ 1
2.0
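You can also resolve the ambiguity yourself with an explicit annotation instead of relying on defaulting (still assuming i = 1 as above):
Prelude> succ i :: Int
2
Prelude> succ i :: Double
2.0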
For the updated question:
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
i can be nothing else (i.e. nothing that doesn't fit this type), but it can be used with less general (more specific) types: Integer, Int. Even with many of them in an expression at once:
Prelude> (i :: Double) ^ (i :: Integer)
1.0
And these uses don't affect the type of i itself: it's already defined and its type fixed. OK so far?
Well, adding constraints also makes the type more specific, so (Num a, Enum a) => a is more specific than (Num a) => a:
Prelude> i :: (Num a, Enum a) => a
1
Because of course any type a that satisfies both constraints in (Num a, Enum a) satisfies just Num a.
However it is not adding Enum a when the type variable appears as a function:
That's because you specified a signature which doesn't allow it to. If you don't give a signature, there's no reason to infer a Num constraint at all, since succ alone only needs Enum. But e.g.
Prelude> f x = succ x + 1
will infer the type with both constraints:
Prelude> :t f
f :: (Num a, Enum a) => a -> a
So it seems that inference rules for functions are stricter than the ones for variables?
It's actually the other way around, due to the monomorphism restriction (which is off by default in GHCi). You've actually been a bit lucky not to run into it here, but the answer is already long enough; searching for the term should give you full explanations.
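That said, a minimal sketch of the restriction in a compiled module, where it is on by default (the comments describe the behaviour I'd expect from a standard GHC with default settings):
-- Without a signature, this argument-less binding falls under the
-- monomorphism restriction: the Num constraint is not generalised but
-- defaulted, so GHC gives i the monomorphic type Integer.
i = 1

-- An explicit polymorphic signature keeps the general type.
j :: Num a => a
j = 1

-- A binding with arguments is a function binding and is not restricted:
-- GHC infers k :: Num a => a -> a.
k x = x + 1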
GHC is saying my_succ :: forall a. Num a => a -> a, and the forall a doesn't appear in the type signature of either i or x.
That's a red herring. I am not sure why it's shown in one case and not the other, but all of them have that forall a behind the scenes:
Haskell type signatures are implicitly quantified. When the language option ExplicitForAll is used, the keyword forall allows us to say exactly what this means. For example:
g :: b -> b
means this:
g :: forall b. (b -> b)
(Also, you just need ExplicitForAll and not RankNTypes to write down forall a. Num a => a.)
Super basic question - but I can't seem to get a clear answer. The below function won't compile:
randomfunc :: a -> a -> b
randomfunc e1 e2
  | e1 > 2 && e2 > 2 = "Both greater"
  | otherwise = "Not both greater"

main = do
  let x = randomfunc 2 1
  putStrLn $ show x
I'm confused as to why this won't work. Both parameters are type 'a' (Ints) and the return parameter is type 'b' (String)?
Error:
"Couldn't match expected type ‘b’ with actual type ‘[Char]’"
Not quite. Your function signature indicates: for all types a and b, randomfunc will return something of type b if given two things of type a.
However, randomfunc returns a String ([Char]). And since you compare e1 and e2 with 2, you cannot use all a's, only those that can be used with >:
(>) :: Ord a => a -> a -> Bool
Note that e1 > 2 also needs a way to create such an a from the literal 2:
(> 2) :: (Num a, Ord a) => a -> Bool
So either use a specific type, or make sure that you handle all those constraints correctly:
randomfunc :: Int -> Int -> String
randomfunc :: (Ord a, Num a) => a -> a -> String
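Putting this together, a complete version that should compile (a sketch using the concrete signature; the constrained variant above works just as well):
randomfunc :: Int -> Int -> String
randomfunc e1 e2
  | e1 > 2 && e2 > 2 = "Both greater"
  | otherwise        = "Not both greater"

main :: IO ()
main = do
  let x = randomfunc 2 1
  putStrLn $ show x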
Both parameters are type 'a' (Ints) and the return parameter is type 'b' (String)?
In a Haskell type signature, when you write names that begin with a lowercase letter such as a, the compiler implicitly adds forall a. to the beginning of the type. So, this is what the compiler actually sees:
randomfunc :: forall a b. a -> a -> b
The type signature claims that your function will work for whatever ("for all") types a and b the caller throws at you. But this is not true for your function, since it only works on Int and String respectively.
You need to make your type more specific:
randomfunc :: Int -> Int -> String
On the other hand, perhaps you intended to ask the compiler to fill in a and b for you automatically, rather than to claim that the function will work for all a and b. In that case, what you are really looking for are partial type signatures. With the PartialTypeSignatures and NamedWildCards extensions enabled, identifiers starting with an underscore become named wildcards that GHC fills in for you, and the extra-constraints wildcard _ => lets it add whatever constraints it needs (here Ord and Num):
{-# LANGUAGE PartialTypeSignatures, NamedWildCards #-}

randomfunc :: _ => _a -> _a -> _b