The case of the disappearing constraint: Oddities of a higher-rank type

All the experiments described below were done with GHC 8.0.1.
This question is a follow-up to RankNTypes with type aliases confusion. The issue there boiled down to the types of functions like this one...
{-# LANGUAGE RankNTypes #-}
sleight1 :: a -> (Num a => [a]) -> a
sleight1 x (y:_) = x + y
... which are rejected by the type checker...
ThinAir.hs:4:13: error:
* No instance for (Num a) arising from a pattern
Possible fix:
add (Num a) to the context of
the type signature for:
sleight1 :: a -> (Num a => [a]) -> a
* In the pattern: y : _
In an equation for `sleight1': sleight1 x (y : _) = x + y
... because the higher-rank constraint Num a cannot be moved outside of the type of the second argument (as would be possible if we had a -> a -> (Num a => [a]) instead). That being so, we end up trying to add a higher-rank constraint to a variable already quantified over the whole thing, that is:
sleight1 :: forall a. a -> (Num a => [a]) -> a
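For contrast, a quick sketch: with the constraint lifted to the outer context, so that it no longer sits under the second argument's arrow, the analogous rank-1 definition is accepted as usual:
sleight1' :: Num a => a -> [a] -> a
sleight1' x (y:_) = x + y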
With this recapitulation done, we might try to simplify the example a bit. Let's replace (+) with something that doesn't require Num, and uncouple the type of the problematic argument from that of the result:
sleight2 :: a -> (Num b => b) -> a
sleight2 x y = const x y
Just like before, this doesn't work (save for a slight change in the error message):
ThinAir.hs:7:24: error:
* No instance for (Num b) arising from a use of `y'
Possible fix:
add (Num b) to the context of
the type signature for:
sleight2 :: a -> (Num b => b) -> a
* In the second argument of `const', namely `y'
In the expression: const x y
In an equation for `sleight2': sleight2 x y = const x y
Failed, modules loaded: none.
Using const here, however, is perhaps unnecessary, so we might try writing the implementation ourselves:
sleight3 :: a -> (Num b => b) -> a
sleight3 x y = x
Surprisingly, this actually works!
Prelude> :r
[1 of 1] Compiling Main ( ThinAir.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t sleight3
sleight3 :: a -> (Num b => b) -> a
*Main> sleight3 1 2
1
Even more bizarrely, there seems to be no actual Num constraint on the second argument:
*Main> sleight3 1 "wat"
1
I'm not quite sure how to make that intelligible. Perhaps we might say that, just like we can juggle undefined as long as we never evaluate it, an unsatisfiable constraint can stick around in a type just fine as long as it is not used for unification anywhere in the right-hand side. That, however, feels like a pretty weak analogy, especially given that non-strictness as we usually understand it is a notion involving values, and not types. Furthermore, that leaves us no closer to grasping how in the world String unifies with Num b => b -- assuming that such a thing actually happens, something which I'm not at all sure of. What, then, is an accurate description of what is going on when a constraint seemingly vanishes in this manner?

Oh, it gets even weirder:
Prelude> sleight3 1 ("wat"+"man")
1
Prelude Data.Void> sleight3 1 (37 :: Void)
1
See, there is an actual Num constraint on that argument. Only, because (as chi already commented) the b is in a covariant position, this is not a constraint you have to provide when calling sleight3. Rather, you can just pick any type b, then whatever it is, sleight3 will provide a Num instance for it!
Well, clearly that's bogus. sleight3 can't provide such a Num instance for strings, and most definitely not for Void. But it also doesn't actually need to because, quite like you said, the argument for which that constraint would apply is never evaluated. Recall that a constrained-polymorphic value is essentially just a function of a dictionary argument. sleight3 simply promises to provide such a dictionary before it actually gets to use y, but then it doesn't use y in any way, so it's fine.
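To make the dictionary-passing picture concrete, here is a rough sketch of the elaboration. NumDict and its fields are made-up stand-ins for GHC's internal dictionary machinery, not a real API:
-- A Num dictionary, reified as an ordinary record (illustrative only):
data NumDict b = NumDict
  { dictPlus        :: b -> b -> b
  , dictFromInteger :: Integer -> b
  }

-- sleight3 :: a -> (Num b => b) -> a elaborates to roughly:
sleight3D :: a -> (NumDict b -> b) -> a
sleight3D x _y = x  -- _y is never applied to a dictionary, so none is ever needed

-- The call sleight3 1 "wat" elaborates to passing a function that
-- ignores the dictionary it will never receive:
demo :: Integer
demo = sleight3D 1 (\_dict -> "wat")  -- evaluates to 1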
It's basically the same as with a function like this:
defiant :: (Void -> Int) -> String
defiant f = "Haha"
Again, the argument function clearly cannot possibly yield an Int, because there doesn't exist a Void value to evaluate it with. But this isn't needed either, because f is simply ignored!
By contrast, sleight2 x y = const x y does kinda sorta use y: the second argument to const is just a rank-0 type, so the compiler needs to resolve any needed dictionaries at that point. Even if const ultimately also throws y away, it still “forces” enough of this value to make it evident that it's not well-typed.
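For comparison, a sketch of the rank-1 variant: with Num b in the outer context the caller supplies the dictionary up front, so the use of y inside const resolves without trouble:
sleight2' :: Num b => a -> b -> a
sleight2' x y = const x y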

Related

Haskell syb Data.Generics not working as expected

At a ghci prompt, everywhere (mkT (\x -> 2 * x)) (8.7, 21, "word") evaluates to (8.7, 42, "word").
I expected the 8.7 to be doubled as well. Why am I wrong?
This is the result of mkT monomorphizing its argument in this particular case, but it turns out there's no broader way to address the issue. mkT isn't doing anything wrong.
It's worth looking first at why everywhere (* 2) doesn't type-check.
ghci> :t everywhere
everywhere
:: (forall a. Data a => a -> a) -> forall a. Data a => a -> a
ghci> :t (* 2)
(* 2) :: Num a => a -> a
ghci> :t everywhere (* 2)
<interactive>:1:13: error:
• Could not deduce (Num a) arising from a use of ‘*’
from the context: Data a
bound by a type expected by the context:
forall a. Data a => a -> a
at <interactive>:1:12-16
Possible fix:
add (Num a) to the context of
a type expected by the context:
forall a. Data a => a -> a
• In the expression: (*)
In the first argument of ‘everywhere’, namely ‘(* 2)’
In the expression: everywhere (* 2)
everywhere has a higher-rank type - the first forall a. is inside the parentheses. I kind of dislike documenting the type that way - it uses a as a type variable in two completely separate ways. But there are two different scopes, and that matters. What it's saying is that any function passed to it must be polymorphic over all instances of Data.
But the type of (* 2) doesn't match up there. It won't work with any instance of Data. It requires more - it requires that it be provided an instance of Num. So the error message dutifully reports that it can't deduce (Num a) from the context Data a. So this isn't going to work. The pieces don't fit together.
This is where mkT comes into play:
ghci> :t mkT
mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a
Its type is a bit funny. It looks almost like it does nothing at all, but Typeable is a funny class. mkT actually compares a and b for type equality, using those Typeable constraints. If they're the same, it applies the function you provided. Otherwise, it just acts as the identity function.
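That comparison can be sketched with Data.Typeable.cast; the real mkT in Data.Generics.Aliases is defined essentially this way:
import Data.Typeable (Typeable, cast)

mkT' :: (Typeable a, Typeable b) => (b -> b) -> a -> a
mkT' f = case cast f of
  Just g  -> g   -- a and b turned out to be the same type: apply the function
  Nothing -> id  -- different types: behave as the identity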
What it does when it's applied to a function is where things are going wrong for you:
ghci> :t mkT (* 2)
mkT (* 2) :: Typeable a => a -> a
It's still polymorphic in a, but the b it used to have has vanished. It had to pick a specific type b to work against, and it did that by defaulting to Integer. (See ghc's extended defaulting rules for details on how that works in ghci.) So...
ghci> mkT (* 2) 3.5
3.5
ghci> mkT (* 2) 7
14
ghci> mkT (* 2) (7 :: Int)
7
At the type level, mkT has to monomorphize its argument. That's the only way it can make use of the Typeable constraint when used in a context where a relevant variable no longer appears in its type.
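As a sanity check (a hypothetical session), annotating the literal pins b explicitly instead of leaving it to defaulting:
ghci> :t mkT (* (2 :: Double))
mkT (* (2 :: Double)) :: Typeable a => a -> a
ghci> mkT (* (2 :: Double)) 3.5
7.0
ghci> mkT (* (2 :: Double)) (7 :: Int)
7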
(To tie the loop back to everywhere, the reason mkT (* 2) works as an argument to everywhere is because Data is a subclass of Typeable. The Data constraint implies that the Typeable requirement will be satisfied.)
So what can you do about this? Well, it's impossible to write it truly generically because of Haskell's open world assumption. Anywhere in the program, any type might be declared an instance of Num with arbitrary implementations of (*) and fromInteger. In order to work with everywhere, there would need to be some mechanism to go from knowing something is an instance of Data to looking up its Num instance. This just isn't possible at run time. Types have been erased. There may be some residues like Typeable dictionaries being carried around, but they don't provide any means to look up other instance dictionaries. And while you might be able to envision a language where that sort of lookup is possible, it actually would be very harmful to allow it in Haskell. It would invalidate the ability to reason about types parametrically, which would be a giant loss.
The best you can do is write transformation functions that work on multiple types:
ghci> let f = mkT (* (2 :: Int)) . mkT (* (2 :: Double)) . mkT (* (2 :: Integer))
ghci> f 5
10
ghci> f 2.7
5.4
ghci> f (9 :: Int)
18
ghci> f "hello"
"hello"
It's verbose and you can probably write something better by hand if you so desire. But it at least works, at least to some extent. And it doesn't require breaking foundational assumptions in the language design, which is always a bonus.
Here is a simplification of your case that doesn't use any Data stuff.
module MyModule where
dbl x = 2 * x
myId :: (a->a) -> a -> a
myId f = f
myDbl = myId dbl
Don't type this at the ghci prompt; rather, create a .hs file and load it.
Now check what type myDbl has.
Prelude> :l MyModule
[1 of 1] Compiling MyModule ( MyModule.hs, interpreted )
Ok, one module loaded.
*MyModule> :t MyModule.myDbl
MyModule.myDbl :: Integer -> Integer
Surprise! Why is it compiling at all? And why the weird types?
Because of the defaulting rules. (Basically, "if you don't know what to do with Num a, just use Integer"). Since myId cannot deal with dbl :: Num a => a -> a, Haskell allows it to take the Integer version.
Disable defaulting by adding default () at the top, and this module no longer compiles.
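A sketch of the module with defaulting disabled (the exact error wording may vary):
module MyModule where

default ()  -- empty default list: no type gets tried for the ambiguous Num constraint

dbl x = 2 * x

myId :: (a -> a) -> a -> a
myId f = f

myDbl = myId dbl  -- now rejected: nothing resolves the ambiguous Num constraint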
mkT is no different from myId in this respect.

Why is `succ i` valid where `i :: Num a => a` (and not an `Enum a`)?

This seems to apply to both GHCi and GHC. I'll show an example with GHCi first.
Given that i's type has been inferred as follows:
Prelude> i = 1
Prelude> :t i
i :: Num p => p
Given that succ is a function defined on Enum:
Prelude> :i Enum
class Enum a where
  succ :: a -> a
  pred :: a -> a
  -- …OMITTED…
and that Num is not a 'subclass' (if I can use that term) of Enum:
class Num a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  -- …OMITTED…
why does succ i not return an error?
Prelude> succ i
2 -- works, no error
I would expect the type of i to be inferred as something like:
Prelude> i = 1
Prelude> :type i
i :: (Enum p, Num p) => p
(I'm using 'GHC v. 8.6.3')
ADDITION:
After reading @RobinZigmond's comment and @AlexeyRomanov's answer I have noticed that 1 could be interpreted as one of many types and one of many classes.
Thanks to @AlexeyRomanov's answer I understand much more about the defaulting rules used to decide what type to use for ambiguous expressions.
However I don't feel that Alexey's answer addresses exactly my question. My question is about the type of i. It's not about the type of succ i.
It's about the mismatch between succ's argument type (an Enum a) and the apparent type of i (a Num a).
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
GHC also seems to be inferring Enum a in addition to what was explicitly declared.
x :: Num a => a
x = 1
y = succ x -- works
However it does not add Enum a when the same type variable appears in a function signature:
my_succ :: Num a => a -> a
my_succ z = succ z -- fails compilation
To me it seems that the type constraints attached to a function are stricter than the ones applied to a variable.
GHC is saying my_succ :: forall a. Num a => a -> a, and given that
forall a appears in the type signature of neither i nor x, I thought that meant GHC was not going to infer any more classes for my_succ's types.
But this seems wrong again: I've checked this idea with the following (my first time using RankNTypes) and apparently GHC still infers Enum a:
{-# LANGUAGE RankNTypes #-}
x :: forall a. Num a => a
x = 1
y = succ x
So it seems that inference rules for functions are stricter than the ones for variables?
Yes, succ i's type is inferred as you expect:
Prelude> :t succ i
succ i :: (Enum a, Num a) => a
This type is ambiguous, but it satisfies the conditions in the defaulting rules for GHCi:
Find all the unsolved constraints. Then:
Find those that are of form (C a) where a is a type variable, and partition those constraints into groups that share a common type variable a.
In this case, there's only one group: (Enum a, Num a).
Keep only the groups in which at least one of the classes is an interactive class (defined below).
This group is kept, because Num is an interactive class.
Now, for each remaining group G, try each type ty from the default-type list in turn, checking whether setting a = ty would allow the constraints in G to be completely solved. If so, default a to ty.
The unit type () and the list type [] are added to the start of the standard list of types which are tried when doing type defaulting.
The default default-type list (sic) is (with the additions from the last clause) default ((), [], Integer, Double).
So when you do Prelude> succ i to actually evaluate this expression (note :t doesn't evaluate the expression it gets), a is set to Integer (first of this list satisfying the constraints), and the result is printed as 2.
You can see it's the reason by changing the default:
Prelude> default (Double)
Prelude> succ 1
2.0
For the updated question:
I'm now starting to realise that my question must stem from a wrong assumption: 'that once i is inferred to be i :: Num a => a, then i can be nothing else'. Hence I was puzzled to see succ i was evaluated without errors.
i can be nothing else (i.e. nothing that doesn't fit this type), but it can be used with less general (more specific) types: Integer, Int. Even with many of them in an expression at once:
Prelude> (i :: Double) ^ (i :: Integer)
1.0
And these uses don't affect the type of i itself: it's already defined and its type fixed. OK so far?
Well, adding constraints also makes the type more specific, so (Num a, Enum a) => a is more specific than (Num a) => a:
Prelude> i :: (Num a, Enum a) => a
1
Because of course any type a that satisfies both constraints in (Num a, Enum a) satisfies just Num a.
However it is not adding Enum a when the type variable appears as a function:
That's because you specified a signature which doesn't allow it to. If you don't give a signature, there's no reason to infer the Num constraint.
Prelude> f x = succ x + 1
will infer the type with both constraints:
Prelude> :t f
f :: (Num a, Enum a) => a -> a
So it seems that inference rules for functions are stricter than the ones for variables?
It's actually the other way around due to the monomorphism restriction (not in GHCi, by default). You've actually been a bit lucky not to run into it here, but the answer is already long enough. Searching for the term should give you explanations.
GHC is saying my_succ :: forall a. Num a => a -> a and given that forall a appears in the type signature of neither i nor x.
That's a red herring. I am not sure why it's shown in one case and not the other, but all of them have that forall a behind the scenes:
Haskell type signatures are implicitly quantified. When the language option ExplicitForAll is used, the keyword forall allows us to say exactly what this means. For example:
g :: b -> b
means this:
g :: forall b. (b -> b)
(Also, you just need ExplicitForAll and not RankNTypes to write down forall a. Num a => a.)

Problems With Type Inference on (^)

So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}

class Semigroup s where
  infixl 7 *
  (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
  one :: m

class (Ring a) => Numeric a where
  fromIntegral :: (Integral i) => i -> a
  fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
  toInteger :: i -> Integer
  quot :: i -> i -> i
  quot a b = let (q,r) = (quotRem a b) in q
  rem :: i -> i -> i
  rem a b = let (q,r) = (quotRem a b) in r
  quotRem :: i -> i -> (i, i)
  quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
  | i == 0 = one
  | True =
      let (d, m) = (divMod i 2)
          rec = (x*x) ^ d
      in if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3
<interactive>:16:1: error:
* Could not deduce (Integral i0) arising from a use of `^'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Integral Integer -- Defined at Numbers.hs:190:10
instance Integral Int -- Defined at Numbers.hs:207:10
* In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
<interactive>:16:3: error:
* Could not deduce (Numeric i0) arising from the literal `3'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Numeric Integer -- Defined at Numbers.hs:294:10
instance Numeric Complex -- Defined at Numbers.hs:110:10
instance Numeric Rational -- Defined at Numbers.hs:306:10
...plus four others
(use -fprint-potential-instances to see them all)
* In the second argument of `(^)', namely `3'
In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why is it that in the normal Prelude the following works just fine?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the results will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication; so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
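For concreteness, a sketch of ^ under the suggested signature, written with the question's own vocabulary (one and *; it assumes divMod, == and Int literals behave as usual in the custom Prelude):
infixr 8 ^
(^) :: Monoid m => m -> Int -> m
x ^ i
  | i == 0    = one
  | otherwise =
      let (d, m) = divMod i 2
          rec = (x*x) ^ d
      in if m == 1 then x*rec else rec

-- A concrete Integer exponent then needs the conversion at the call site,
-- e.g.  x ^ fromIntegral (n :: Integer)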
As a general rule, I try to avoid† any function arguments that are more polymorphic than the results. Haskell's polymorphism works best “backwards”, i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation have to be linked together in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.

When are type signatures necessary in Haskell?

Many introductory texts will tell you that in Haskell type signatures are "almost always" optional. Can anybody quantify the "almost" part?
As far as I can tell, the only time you need an explicit signature is to disambiguate type classes. (The canonical example being read . show.) Are there other cases I haven't thought of, or is this it?
(I'm aware that if you go beyond Haskell 2010 there are plenty for exceptions. For example, GHC will never infer rank-N types. But rank-N types are a language extension, not part of the official standard [yet].)
Polymorphic recursion needs type annotations, in general.
f :: (a -> a) -> (a -> b) -> Int -> a -> b
f f1 g n x =
  if n == (0 :: Int)
    then g x
    else f f1 (\z h -> g (h z)) (n-1) x f1
(Credit: Patrick Cousot)
Note how the recursive call looks badly typed (!): it calls itself with five arguments, despite f having only four! Then remember that b can be instantiated with c -> d, which causes an extra argument to appear.
The above contrived example computes
f f1 g n x = g (f1 (f1 (f1 ... (f1 x))))
where f1 is applied n times. Of course, there is a much simpler way to write an equivalent program.
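For instance, a sketch of one such equivalent program, with no polymorphic recursion:
f' :: (a -> a) -> (a -> b) -> Int -> a -> b
f' f1 g n x = g (iterate f1 x !! n)  -- apply f1 n times, then g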
Monomorphism restriction
If you have MonomorphismRestriction enabled, then sometimes you will need to add a type signature to get the most general type:
{-# LANGUAGE MonomorphismRestriction #-}

-- myPrint :: Show a => a -> IO ()
myPrint = print

main = do
  myPrint ()
  myPrint "hello"
This will fail because myPrint is monomorphic. You would need to uncomment the type signature to make it work, or disable MonomorphismRestriction.
Phantom constraints
When you put a polymorphic value with a constraint into a tuple, the tuple itself becomes polymorphic and has the same constraint:
myValue :: Read a => a
myValue = read "0"
myTuple :: Read a => (a, String)
myTuple = (myValue, "hello")
We know that the constraint affects the first part of the tuple but does not affect the second part. The type system doesn't know that, unfortunately, and will complain if you try to do this:
myString = snd myTuple
Even though intuitively one would expect myString to be just a String, the type checker needs to specialize the type variable a and figure out whether the constraint is actually satisfied. In order to make this expression work, one would need to annotate the type of either snd or myTuple:
myString = snd (myTuple :: ((), String))
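Equivalently, the annotation can go on snd instead (a sketch; any first-component type with a Read instance, such as (), does the job):
myString' = (snd :: ((), String) -> String) myTuple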
In Haskell, as I'm sure you know, types are inferred. In other words, the compiler works out what type you want.
However, in Haskell, there are also polymorphic typeclasses, with functions that act in different ways depending on the return type. Here's an example of the Monad class, though I haven't defined everything:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  fail :: String -> m a
We're given a lot of functions with just type signatures. Our job is to make instance declarations for different types that can be treated as Monads, like Maybe t or [t].
Have a look at this code - it won't work in the way we might expect:
return 7
That's a function from the Monad class, but because there's more than one Monad, we have to specify what return value/type we want, or it automatically becomes an IO Monad. So:
return 7 :: Maybe Int
-- Will return...
Just 7
return 6 :: [Int]
-- Will return...
[6]
This is because [t] and Maybe both have instances of the Monad type class.
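For reference, a sketch of such an instance declaration for Maybe, written against the simplified class shown above (the real Prelude instance nowadays also goes through Functor and Applicative):
instance Monad Maybe where
  return = Just
  Nothing >>= _ = Nothing
  Just x  >>= f = f x
  fail _ = Nothing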
Here's another example, this time from the Random typeclass. This code throws an error:
random (mkStdGen 100)
Because random returns something in the Random class, we'll have to define what type we want to return, with a StdGen object tupled with whatever value we want:
random (mkStdGen 100) :: (Int, StdGen)
-- Returns...
(-3650871090684229393,693699796 2103410263)
random (mkStdGen 100) :: (Bool, StdGen)
-- Returns...
(True,4041414 40692)
This can all be found in Learn You a Haskell online, though you'll have to do some long reading. This, I'm pretty much 100% certain, is the only time when type signatures are necessary.

Invalid use of function in Haskell with no type error

http://i.imgur.com/NGKpHbJ.png
That's the image of the output ^.
the declarations are here:
let add1 x = x + 1
let multi2 x = x * 2
let wtf x = ((add1 multi2) x)
(wtf 3)
<interactive>:8:1:
No instance for (Num (a0 -> a0)) arising from a use of `it'
In a stmt of an interactive GHCi command: print it
?>
Can anyone explain to me why Haskell says that the type of the invalid expression is Num and why it won't print the number?
I can't understand what is going on in the type system.
add1 multi2 applies add1 to a function, but add1 expects a number. So you might expect this to be an error because functions aren't numbers, but the thing is that they could be. In Haskell a number is a value of a type that's an instance of the Num type class and you can add instances whenever you want.
That is, you can write instance Num (a -> a) where ... and then functions will be numbers. So now multi2 + 1 will do something that produces a new function of the same type as multi2 (what exactly that will be depends on how you defined + in the instance, of course), so add1 multi2 produces a function of type Num a => a -> a (the same type as multi2), and applying that function to x gives you a value of the same type as x.
So what the type wtf :: (Num (a -> a), Num a) => a -> a is telling you is: "under the condition that a is a numeric type and you define an instance for Num (a -> a), wtf will take a number and produce a number of the same type". And when you then actually try to use the function, you get an error because you did not define an instance for Num (a -> a).
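To see that this is not purely hypothetical, here is a sketch of one such instance (a pointwise definition; it needs FlexibleInstances and is not part of base) under which wtf actually computes something:
{-# LANGUAGE FlexibleInstances #-}

-- An illustrative Num instance for functions:
instance Num a => Num (a -> a) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

add1 x = x + 1
multi2 x = x * 2
wtf x = (add1 multi2) x

-- add1 multi2 is now multi2 + fromInteger 1, i.e. \x -> 2*x + 1,
-- so wtf 3 evaluates to 7 instead of being a type error.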
(Re-written somewhat in response to comment)
Your line of code:
((add1 multi2) x)
means: apply the add1 function to the argument multi2, then apply the resulting function to the argument x. Since adding 1 to a function doesn't make sense, this won't work, so we get a compile-time type error.
The error is explaining that the compiler cannot find a typeclass instance to make functions work like numbers. Numbers must be part of the Num typeclass so they can be added, multiplied etc.
No instance for (Num (a0 -> a0))
In other words, the type a0-> a0 (which is a function type) doesn't have a Num typeclass instance, so adding 1 to it fails. This is a compile-time error; the code is never executed, so GHCi cannot print any output from your function.
The type of your wtf function is:
wtf :: (Num (a -> a), Num a) => a -> a
which says:
Given that a is a numeric type
and a -> a (function) is a numeric type
then wtf will take a number and return a number
The second condition fails at compile time because there's no defined way to treat a function as a number.
