Wrapping / Unwrapping Universally Quantified Types - haskell

I have imported a data type, X, defined as
data X a = X a
Locally, I have defined a universally quantified data type, Y
type Y = forall a. X a
Now I need to define two functions, toY and fromY. For fromY, this definition works fine:
fromY :: Y -> X a
fromY a = a
but if I try the same thing for toY, I get an error
Couldn't match type 'a' with 'a0'
'a' is a rigid type variable bound by the type signature for 'toY :: X a -> y'
'a0' is a rigid type variable bound by the type signature for 'toY :: X a -> X a0'
Expected type: X a0
Actual type: X a
If I understand correctly, the type signature for toY expands to forall a. X a -> (forall a0. X a0), because Y is defined as a synonym rather than a newtype, and so the two variables a and a0 in the expansion don't match up.
But if this is the case, then why does fromY type-check successfully? And is there any way around this problem other than using unsafeCoerce?

What you need here is an existential type, but that is not what you have defined.
type Y = forall a. X a
defines a universally quantified type. For something to have type Y, it must have type X a for every a. To make an existentially quantified type, you always need to use data, and I find the GADT syntax easier to understand than the traditional existential one.
data Y where
  Y :: forall a. X a -> Y
The forall there is actually optional, but I think it clarifies things.
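With this definition, toY is just the data constructor itself; a minimal sketch:

toY :: X a -> Y
toY = Y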
I'm too sleepy right now to work out your other questions, but I'll try again tomorrow if no one else does.

Remark:
This is more like a comment but I could not really put it there as it would have been unreadable; please forgive me this one time.
Aside from what dfeuer already told you, you might see (when you use his answer) that toY is now really easy to define, but you might have trouble defining fromY, because you basically lose the type information; so this will not work:
{-# LANGUAGE GADTs #-}
module ExTypes where

data X a = X a

data Y where
  Y :: X a -> Y

fromY :: Y -> X a
fromY (Y a) = a
as here you have two different a's – one from the constructor Y and one from the X a in fromY's signature. Indeed, if you strip the type signature and try to compile fromY (Y a) = a, the compiler will tell you that the type a escapes:
Couldn't match expected type `t' with actual type `X a'
  because type variable `a' would escape its scope
This (rigid, skolem) type variable is bound by
  a pattern with constructor
    Y :: forall a. X a -> Y,
  in an equation for `fromY'
I think the only thing you have left now is something like this (this needs RankNTypes):
useY :: (forall a. X a -> b) -> Y -> b
useY f (Y x) = f x
but this might prove not to be too useful.
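For instance, the function you pass in must work uniformly for every a, so about all it can do is ignore the wrapped value; a small usage sketch (unitY is my name):

-- We can only extract information that does not depend on a:
unitY :: Y -> ()
unitY = useY (\(X _) -> ())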
The thing is that you normally should constrain the forall a there a bit more (with type classes) to get any meaningful behavior – but of course I cannot guess your use case.
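A minimal sketch of that pattern, reusing X from above (the Show constraint and the names YS and showY are mine, just for illustration):

{-# LANGUAGE GADTs #-}

data YS where
  YS :: Show a => X a -> YS

showY :: YS -> String
showY (YS (X a)) = show a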
This wiki article might be interesting for you on the details.

Related

Multiple types for f in this picture?

https://youtu.be/brE_dyedGm0?t=1362
data T a where
  T1 :: Bool -> T Bool
  T2 :: T a

f x y = case x of
  T1 x -> True
  T2   -> y
Simon is saying that f could be typed as T a -> a -> a, but I would think the return value MUST be a Bool since that is an explicit result in a branch of the case expression. This is in regards to Haskell GADTs. Why is this the case?
This is kind of the whole point of GADTs. Matching on a constructor can cause additional type information to come into scope.
Let's look at what happens when GHC checks the following definition:
f :: T a -> a -> a
f x y = case x of
  T1 _ -> True
  T2   -> y
Let's look at the T2 case first, just to get it out of the way. y and the result have the same type, and T2 is polymorphic, so its type can be declared to be T a as well. All good.
Then comes the trickier case. Note that I removed the binding of the name x inside the case as the shadowing might be confusing, the inner value isn't used, and it doesn't change anything in the explanation. When you match on a GADT constructor like T1, one that explicitly sets a type variable, it introduces additional constraints inside that branch which add that type information. In this case, matching on T1 introduces a (a ~ Bool) constraint. This type equality says that a and Bool match each other. Therefore the literal True with type Bool matches the a written in the type signature. y isn't used, so the branch is consistent with T a -> a -> a.
So it all matches up. T a -> a -> a is a valid type for that definition. But as Simon is saying, it's ambiguous. T a -> Bool -> Bool is also a valid type for that definition. Neither one is more general than the other, so the definition doesn't have a principal type. So the definition is rejected unless a type is provided, because inference cannot pick a single most general type for it.
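Both candidate signatures can be checked side by side; a quick demonstration (the names f1 and f2 are mine, reusing T from above):

f1 :: T a -> a -> a
f1 x y = case x of
  T1 _ -> True
  T2   -> y

f2 :: T a -> Bool -> Bool
f2 x y = case x of
  T1 _ -> True
  T2   -> y

GHC accepts both, which is exactly why the unannotated definition is ambiguous.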
A value of type T a where a is different from Bool can never have the form T1 x (since T1 x can only have type T Bool).
Hence, in such a case, the T1 x branch of the case becomes inaccessible and can be ignored during type checking/inference.
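For instance, once f has been given the type T a -> a -> a (as below), instantiating it at a ~ Char makes the T1 branch unreachable; a small usage sketch (ex is my name):

ex :: Char
ex = f T2 'x'  -- only T2 can match a value of type T Char; evaluates to 'x'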
More concretely: GADTs allow the type checker to assume type-level equations during pattern matching, and exploit such equations later on. When checking
f :: T a -> a -> a
f x y = case x of
  T1 x -> True
  T2   -> y
the type checker performs the following reasoning:
f :: T a -> a -> a
f x y = case x of
  T1 x -> -- assume: a ~ Bool
          True  -- has type Bool, hence it also has type a
  T2   -> -- assume: a ~ a (pointless)
          y     -- has type a
Thanks to GADTs, both branches of the case have type a, hence the whole case expression has type a and the function definition type checks.
More generally, when x :: T A and the GADT constructor was defined as K :: ... -> T B, then, when type checking, we can make the following assumption:
case x of
  K ... -> -- assume: A ~ B
Note that A and B can be types involving type variables (as in a ~ Bool above), so this allows one to obtain useful information about them and exploit it later on.
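A tiny made-up instance of this rule (G, MkG, and h are hypothetical names):

{-# LANGUAGE GADTs #-}

data G b where
  MkG :: Int -> G [Int]

h :: G b -> b
h g = case g of
  MkG n -> replicate n 0  -- assume: b ~ [Int], so [Int] is also b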

Why does one function compile successfully while the second doesn't, when they differ only in the type declaration of an inner function? [duplicate]

I have a problem with Haskell's scoping in where definitions. When I have the following function f, where I want to pass x to the locally defined function f1 without explicitly using it as a parameter, I get an error saying that the type of x is incompatible with the one in the output of f1, although it should be the same:
f :: Eq a => a -> [a]
f x = f1 x
  where
    f1 :: Eq a => a -> [a]
    f1 y = [ x, y ]
The error is the following:
Couldn't match expected type `a1' against inferred type `a'
  `a1' is a rigid type variable bound by
       the type signature for `f1' at test.hs:4:11
  `a' is a rigid type variable bound by
      the type signature for `f' at test.hs:1:8
  In the expression: x
  In the expression: [x, y]
  In the definition of `f1': f1 y = [x, y]
Failed, modules loaded: none.
When I however pass the x as an additional parameter, as I did in the following code with the function g, it works fine:
g :: Eq a => a -> [a]
g x = g1 x x
  where
    g1 :: Eq a => a -> a -> [a]
    g1 x y = [ x, y ]
Is there a way to make the type a in f compatible to the type a (or a1) in f1?
Dave is right above. Another way to think of it is that even though both of your type signatures refer to the variable a, it's not actually the same type variable. Written with explicit quantifiers, both signatures read:
forall a . Eq a => a -> [a]
meaning that each function must accept an argument of any type whatsoever (within Eq). This is obviously not what you intend for f1. In standard Haskell 98, the only option is to forgo the type signature for f1. But GHC (and others?) supports lexically scoped type variables, so you could write:
{-# LANGUAGE ScopedTypeVariables #-}

f :: forall a. Eq a => a -> [a]
f x = f1 x
  where
    f1 :: a -> [a]
    f1 y = [ x, y ]
and that would work fine.
The problem with your code is the locally scoped type signature on f1. It specifies that f1 can take any type:
f1 :: Eq a => a -> [a]
Even though this is a local function, you've generalized it to accept types that can never occur within f. Whatever this function receives HAS to come from f, so the type signature is unnecessary.
Just remove the f1 type signature.
Edit: Reading my post back to myself, it's a bit unclear. The a in f1 is a type parameter that can stand for anything, but the arguments passed to f1 are already bound in f. So this function can only receive what its parent function receives; the type signature you're giving it breaks that rule. Hope that's a little clearer.
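Concretely, the signature-free version that both answers point at (the same f, with f1's local signature dropped):

f :: Eq a => a -> [a]
f x = f1 x
  where
    f1 y = [ x, y ]  -- f1's type is inferred, sharing f's a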

Code unexpectedly accepted by GHC/GHCi

I don't understand why this code should pass type-checking:
foo :: (Maybe a, Maybe b)
foo = let x = Nothing in (x,x)
Since each component is bound to the same variable x, I would expect the most general type for this expression to be (Maybe a, Maybe a). I get the same results if I use a where instead of a let. Am I missing something?
Briefly put, the type of x gets generalized by let. This is a key step in the Hindley-Milner type inference algorithm.
Concretely, let x = Nothing initially assigns x the type Maybe t, where t is a fresh type variable. Then, the type gets generalized, universally quantifying all its type variables (technically: except those in use elsewhere, but here we only have t). This causes x :: forall t. Maybe t. Note that this is exactly the same type as Nothing :: forall t. Maybe t.
Hence, each time we use x in our code, that refers to a potentially different type Maybe t, much like Nothing. Using (x, x) gets the same type as (Nothing, Nothing) for this reason.
Instead, lambdas do not feature the same generalization step. By comparison, (\x -> (x, x)) Nothing "only" has type forall t. (Maybe t, Maybe t), where both components are forced to have the same type. Here x is again assigned type Maybe t, with t fresh, but it is not generalized. Then (x, x) is assigned type (Maybe t, Maybe t). Only at the top level do we generalize, adding the forall t, but at that point it is too late to obtain a heterogeneous pair.
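A small sketch contrasting the two (pairLet and pairLam are my names):

pairLet :: (Maybe a, Maybe b)
pairLet = let x = Nothing in (x, x)  -- let-bound x is generalized: OK

pairLam :: (Maybe a, Maybe a)        -- (Maybe a, Maybe b) would be rejected
pairLam = (\x -> (x, x)) Nothing     -- lambda-bound x stays monomorphic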

Why is a function type required to be "wrapped" for the type checker to be satisfied?

The following program type-checks:
{-# LANGUAGE RankNTypes #-}

import Numeric.AD (grad)

newtype Fun = Fun (forall a. Num a => [a] -> a)

test1 [u, v] = (v - (u * u * u))
test2 [u, v] = ((u * u) + (v * v) - 1)

main = print $ fmap (\(Fun f) -> grad f [1,1]) [Fun test1, Fun test2]
But this program fails:
main = print $ fmap (\f -> grad f [1,1]) [test1, test2]
With the type error:
Grad.hs:13:33: error:
    • Couldn't match type ‘Integer’
                     with ‘Numeric.AD.Internal.Reverse.Reverse s Integer’
      Expected type: [Numeric.AD.Internal.Reverse.Reverse s Integer]
                     -> Numeric.AD.Internal.Reverse.Reverse s Integer
        Actual type: [Integer] -> Integer
    • In the first argument of ‘grad’, namely ‘f’
      In the expression: grad f [1, 1]
      In the first argument of ‘fmap’, namely ‘(\ f -> grad f [1, 1])’
Intuitively, the latter program looks correct. After all, the following, seemingly equivalent program does work:
main = print $ [grad test1 [1,1], grad test2 [1,1]]
It looks like a limitation in GHC's type system. I would like to know what causes the failure, why this limitation exists, and any possible workarounds besides wrapping the function (per Fun above).
(Note: this is not caused by the monomorphism restriction; compiling with NoMonomorphismRestriction does not help.)
This is an issue with GHC's type system. (It really is GHC's type system, by the way: the original type system for Haskell/ML-like languages doesn't support higher-rank polymorphism, let alone the impredicative polymorphism we are using here.)
The issue is that in order to type check this, we would need to support foralls at any position in a type, not just bunched all the way at the front (the normal restriction, which is what makes type inference possible). Once you leave this area, type inference becomes undecidable in general (for rank-n polymorphism and beyond). In our case, the type of [test1, test2] would need to be [forall a. Num a => [a] -> a], which doesn't fit into the scheme discussed above. It would require impredicative polymorphism, so called because a quantified type variable may be instantiated with a polymorphic type, i.e. a type that itself contains foralls.
Therefore, there are going to be some cases that misbehave, simply because the problem is not fully solvable. GHC does have some support for rank-n polymorphism and a bit of support for impredicative polymorphism, but it's generally better to just use newtype wrappers to get reliable behavior. To the best of my knowledge, GHC also discourages relying on this feature, precisely because it's so hard to figure out exactly what the type inference algorithm can handle.
In summary, the math says there will be flaky cases, and newtype wrappers are the best, if somewhat dissatisfying, way to cope with them.
The type inference algorithm will not infer higher rank types (those with forall at the left of ->). If I remember correctly, it becomes undecidable. Anyway, consider this code
foo f = (f True, f 'a')
what should its type be? We could have
foo :: (forall a. a -> a) -> (Bool, Char)
but we could also have
foo :: (forall a. a -> Int) -> (Int, Int)
or, for any type constructor F :: * -> *
foo :: (forall a. a -> F a) -> (F Bool, F Char)
Here, as far as I can see, we cannot find a principal type: a type which is the most general type we can assign to foo.
If a principal type does not exist, the type inference machinery can only pick a suboptimal type for foo, which can cause type errors later on. This is bad. Instead, GHC relies on a Hindley-Milner style type inference engine, greatly extended to cover more advanced Haskell types. This mechanism, unlike plain Hindley-Milner, will assign f a polymorphic type provided the user explicitly requires that, e.g. by giving foo a signature.
Using a wrapper newtype like Fun also instructs GHC in a similar way, providing the polymorphic type for f.
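For instance, giving foo the first of the candidate signatures from above makes GHC accept it (a sketch; RankNTypes is needed):

{-# LANGUAGE RankNTypes #-}

foo :: (forall a. a -> a) -> (Bool, Char)
foo f = (f True, f 'a')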

The case of the disappearing constraint: Oddities of a higher-rank type

All the experiments described below were done with GHC 8.0.1.
This question is a follow-up to RankNTypes with type aliases confusion. The issue there boiled down to the types of functions like this one...
{-# LANGUAGE RankNTypes #-}
sleight1 :: a -> (Num a => [a]) -> a
sleight1 x (y:_) = x + y
... which are rejected by the type checker...
ThinAir.hs:4:13: error:
    * No instance for (Num a) arising from a pattern
      Possible fix:
        add (Num a) to the context of
          the type signature for:
            sleight1 :: a -> (Num a => [a]) -> a
    * In the pattern: y : _
      In an equation for `sleight1': sleight1 x (y : _) = x + y
... because the higher-rank constraint Num a cannot be moved outside of the type of the second argument (as would be possible if we had a -> a -> (Num a => [a]) instead). That being so, we end up trying to add a higher-rank constraint to a variable already quantified over the whole thing, that is:
sleight1 :: forall a. a -> (Num a => [a]) -> a
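As an aside, floating a constraint out of a covariant position really is harmless; GHC accepts both of the following, and they are interchangeable (out1 and out2 are my names):

{-# LANGUAGE RankNTypes #-}

out1 :: a -> a -> (Num a => [a])
out1 x y = [x, y]

out2 :: Num a => a -> a -> [a]
out2 x y = out1 x y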
With this recapitulation done, we might try to simplify the example a bit. Let's replace (+) with something that doesn't require Num, and uncouple the type of the problematic argument from that of the result:
sleight2 :: a -> (Num b => b) -> a
sleight2 x y = const x y
This doesn't work just like before (save for a slight change in the error message):
ThinAir.hs:7:24: error:
    * No instance for (Num b) arising from a use of `y'
      Possible fix:
        add (Num b) to the context of
          the type signature for:
            sleight2 :: a -> (Num b => b) -> a
    * In the second argument of `const', namely `y'
      In the expression: const x y
      In an equation for `sleight2': sleight2 x y = const x y
Failed, modules loaded: none.
Using const here, however, is perhaps unnecessary, so we might try writing the implementation ourselves:
sleight3 :: a -> (Num b => b) -> a
sleight3 x y = x
Surprisingly, this actually works!
Prelude> :r
[1 of 1] Compiling Main ( ThinAir.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t sleight3
sleight3 :: a -> (Num b => b) -> a
*Main> sleight3 1 2
1
Even more bizarrely, there seems to be no actual Num constraint on the second argument:
*Main> sleight3 1 "wat"
1
I'm not quite sure how to make that intelligible. Perhaps we might say that, just as we can juggle undefined around as long as we never evaluate it, an unsatisfiable constraint can stick around in a type just fine as long as it is not used for unification anywhere in the right-hand side. That, however, feels like a pretty weak analogy, especially given that non-strictness as we usually understand it is a notion involving values, not types. Furthermore, it leaves us no closer to grasping how in the world String unifies with Num b => b – assuming such a thing actually happens, which I'm not at all sure of. What, then, is an accurate description of what is going on when a constraint seemingly vanishes in this manner?
Oh, it gets even weirder:
Prelude> sleight3 1 ("wat"+"man")
1
Prelude Data.Void> sleight3 1 (37 :: Void)
1
See, there is an actual Num constraint on that argument. Only, because (as chi already commented) the b is in a covariant position, this is not a constraint you have to provide when calling sleight3. Rather, you can just pick any type b, then whatever it is, sleight3 will provide a Num instance for it!
Well, clearly that's bogus. sleight3 can't provide such a Num instance for strings, and most definitely not for Void. But it also doesn't actually need to, because, quite like you said, the argument to which that constraint applies is never evaluated. Recall that a constrained-polymorphic value is essentially just a function of a dictionary argument. sleight3 simply promises to provide such a dictionary before it actually gets to use y, but then it doesn't use y in any way, so it's fine.
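To make the dictionary-passing picture concrete, here is a hand-rolled sketch of roughly what the elaborated program looks like (NumDict and its fields are made-up names, not GHC's internals):

-- A value of type (Num b => b) elaborates to a function from a Num
-- dictionary to a b:
data NumDict b = NumDict
  { dictPlus        :: b -> b -> b
  , dictFromInteger :: Integer -> b
  }

sleight3' :: a -> (NumDict b -> b) -> a
sleight3' x y = x  -- y is never applied to a dictionary, so none is ever built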
It's basically the same as with a function like this:
import Data.Void (Void)

defiant :: (Void -> Int) -> String
defiant f = "Haha"
Again, the argument function clearly can not possibly yield an Int because there doesn't exist a Void value to evaluate it with. But this isn't needed either, because f is simply ignored!
By contrast, sleight2 x y = const x y does kinda-sorta use y: the second argument of const has a plain rank-0 type, so the compiler needs to resolve any required dictionaries at that point. Even if const ultimately throws y away as well, it still “forces” enough of this value to make it evident that it is not well-typed.
