At a ghci prompt, everywhere (mkT (\x -> 2 * x)) (8.7, 21, "word") evaluates to (8.7, 42, "word").
I expected the 8.7 to be doubled as well. Why am I wrong?
This is the result of mkT monomorphizing its argument in this particular case, but it turns out there's no broader way to address the issue. mkT isn't doing anything wrong.
It's worth looking first at why everywhere (* 2) doesn't type-check.
ghci> :t everywhere
everywhere
:: (forall a. Data a => a -> a) -> forall a. Data a => a -> a
ghci> :t (* 2)
(* 2) :: Num a => a -> a
ghci> :t everywhere (* 2)
<interactive>:1:13: error:
• Could not deduce (Num a) arising from a use of ‘*’
from the context: Data a
bound by a type expected by the context:
forall a. Data a => a -> a
at <interactive>:1:12-16
Possible fix:
add (Num a) to the context of
a type expected by the context:
forall a. Data a => a -> a
• In the expression: (*)
In the first argument of ‘everywhere’, namely ‘(* 2)’
In the expression: everywhere (* 2)
everywhere has a higher-rank type - the first forall a. is inside the parentheses. I kind of dislike documenting the type that way - it uses a as a type variable in two completely separate ways. But there are two different scopes, and that matters. What it's saying is that any function passed to it must be polymorphic over all instances of Data.
But the type of (* 2) doesn't match up there. It won't work with any instance of Data. It requires more - it requires that it be provided an instance of Num. So the error message dutifully reports that it can't deduce (Num a) from the context Data a. So this isn't going to work. The pieces don't fit together.
This is where mkT comes into play:
ghci> :t mkT
mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a
Its type is a bit funny. It looks almost like it does nothing at all, but Typeable is a funny class. mkT actually compares a and b for type equality, using those Typeable constraints. If they're the same, it applies the function you provided. Otherwise, it just acts as the identity function.
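A sketch of roughly how that works, written with cast from Data.Typeable (the real mkT in Data.Generics.Aliases is, as far as I know, defined equivalently):

import Data.Typeable (Typeable, cast)

-- mkT' behaves like mkT: apply f when the types line up, otherwise do nothing.
mkT' :: (Typeable a, Typeable b) => (b -> b) -> a -> a
mkT' f x = case cast f of
    Just g  -> g x   -- a and b turned out to be the same type: apply the function
    Nothing -> x     -- different types: act as the identity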
What it does when it's applied to a function is where things are going wrong for you:
ghci> :t mkT (* 2)
mkT (* 2) :: Typeable a => a -> a
It's still polymorphic in a, but the b it used to have has vanished. It had to pick a specific type b to work against, and it did that by defaulting to Integer. (See ghc's extended defaulting rules for details on how that works in ghci.) So...
ghci> mkT (* 2) 3.5
3.5
ghci> mkT (* 2) 7
14
ghci> mkT (* 2) (7 :: Int)
7
At the type level, mkT has to monomorphize its argument. That's the only way it can make use of the Typeable constraint when used in a context where a relevant variable no longer appears in its type.
(To tie the loop back to everywhere, the reason mkT (* 2) works as an argument to everywhere is because Data is a subclass of Typeable. The Data constraint implies that the Typeable requirement will be satisfied.)
So what can you do about this? Well, it's impossible to write it truly generically because of Haskell's open world assumption. Anywhere in the program, any type might be declared an instance of Num with arbitrary implementations of (*) and fromInteger. In order to work with everywhere, there would need to be some mechanism to go from knowing something is an instance of Data to looking up its Num instance. This just isn't possible at run time. Types have been erased. There may be some residues like Typeable dictionaries being carried around, but they don't provide any means to look up other instance dictionaries. And while you might be able to envision a language where that sort of lookup is possible, it actually would be very harmful to allow it in Haskell. It would invalidate the ability to reason about types parametrically, which would be a giant loss.
The best you can do is write transformation functions that work on multiple types:
ghci> let f = mkT (* (2 :: Int)) . mkT (* (2 :: Double)) . mkT (* (2 :: Integer))
ghci> f 5
10
ghci> f 2.7
5.4
ghci> f (9 :: Int)
18
ghci> f "hello"
"hello"
It's verbose and you can probably write something better by hand if you so desire. But it at least works, at least to some extent. And it doesn't require breaking foundational assumptions in the language design, which is always a bonus.
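Tying that back to the original example, here is a sketch (untested here) of how the composed transform slots into everywhere, doubling both the Double and the Integer in the tuple:

import Data.Generics (everywhere, mkT)

doubleNums :: (Double, Integer, String) -> (Double, Integer, String)
doubleNums = everywhere (mkT (* (2 :: Integer)) . mkT (* (2 :: Double)))

-- ghci> doubleNums (8.7, 21, "word")
-- (17.4,42,"word")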
Here is a simplification of your case that doesn't use any Data stuff.
module MyModule where
dbl x = 2 * x
myId :: (a->a) -> a -> a
myId f = f
myDbl = myId dbl
Don't type this at the ghci prompt; instead, create a .hs file and load it.
Now check what type myDbl has.
Prelude > :l MyModule
[1 of 1] Compiling MyModule ( MyModule.hs, interpreted )
Ok, one module loaded.
*MyModule > :t MyModule.myDbl
MyModule.myDbl :: Integer -> Integer
Surprise! Why is it compiling at all? And why the weird types?
Because of the defaulting rules. (Basically, "if you don't know what to do with Num a, just use Integer"). Since myId cannot deal with dbl :: Num a => a -> a, Haskell allows it to take the Integer version.
Disable defaulting by adding default () at the top, and this module no longer compiles.
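For reference, a sketch of what that change looks like (same module as above, with defaulting switched off):

module MyModule where

default ()            -- turn off numeric defaulting for this module

dbl x = 2 * x

myId :: (a -> a) -> a -> a
myId f = f

myDbl = myId dbl      -- rejected now: there is no default type to pick for the Num constraint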
mkT is no different from myId in this respect.
Related
For example:
map (+1) 2
in ghci yields
<interactive>:23:1: error:
* Non type-variable argument in the constraint: Num [b]
(Use FlexibleContexts to permit this)
* When checking the inferred type
it :: forall b. (Num b, Num [b]) => [b]
I've seen many questions similar to mine, but all seem only to answer what we can deduce from this (that the type of the second argument to map is wrong), and how to fix it - but not what the error actually means. Where exactly do things go wrong?
The error arises during the type-deduction of your statement.
Since
(+1) is of type Num a => a -> a
2 is of type Num a => a
map is of type (a -> b) -> [a] -> [b]
We know that map (+1) has to be of type (Num b) => [b] -> [b], and therefore map (+1) 2 has to be of type (Num b, Num [b]) => [b]. But [b] is not just a type variable: it's a list of some type variable, where the list type [] is a type constructor. In an alternative version of Haskell with no syntactic sugar for lists, we might write (Num b, Num (List b)).
This is a problem because by default, Haskell does not support non type-variable arguments for constraints. So the precise nature of the problem is not that Haskell doesn't know how to map over numbers - it's that it doesn't allow values of the type that our function call produces.
But that rule isn't strictly needed. By passing -XFlexibleContexts to ghci, types of the sort our expression produces are allowed. The reason is that the literal 2 in Haskell doesn't directly represent a number - it represents a value of type Num a => a, which is built from the Integer literal 2 using fromInteger. So the expression map (+1) 2 is equivalent to map (+1) (fromInteger (2 :: Integer)). This means the literal 2 can represent anything, given the proper instantiation - including lists.
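To make that concrete, here is a sketch (my own; the name example is hypothetical) of the type the expression elaborates to. The signature needs FlexibleContexts but is otherwise accepted, even though no Num [b] instance exists yet:

{-# LANGUAGE FlexibleContexts #-}

-- The literal 2 elaborates to fromInteger 2, whose type is forced to [b] here.
example :: (Num b, Num [b]) => [b]
example = map (+ 1) (fromInteger 2)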
2 has the type Num a => a; we haven't specified what a is, except that it has to have a Num instance.
map (+1) has the type Num b => [b] -> [b]; we haven't specified what b is, except that it has to have a Num instance.
When we determine the type of map (+1) 2, we are basically unifying a with [b], which turns the Num a constraint into Num [b].
2 :: Num a => a
map (+1) :: Num b => [b] -> [b]
map (+1) 2 :: (Num b, Num [b]) => [b]
And this is the problem. Without FlexibleContexts, Num requires a plain type variable like a or b as its argument, not an applied type like [b].
Here’s an actual answer:
GHC tries to infer the types by itself, and failed, or in GHC lingo:
When checking the inferred type …
Basically the conflict is:
• GHC is happy to have any value as a parameter for map (+1), as long as it is compatible with its type. And map (+1) wants a list of numbers, since + wants two numbers and 1 already is one number, and map always wants a list of values compatible with that.
• And here’s the kicker: GHC is happy to have 2 be any type! Because to save you the hassle of always having to specify whether a literal 2 is of a certain type, GHC just treats a numeric literal as an Integer and replaces it with fromInteger 2. And since fromInteger is from the class Num, 2 can be anything that implements that class! So 2 could be a list too!
So GHC is stuck trying to fit those things together, and tells you:
“It needs to be some value, that is a number. … And where the list of those values is also a number!”
Or in Haskell:
it :: forall b. (Num b, Num [b]) => [b]
But there is no instance that makes lists numbers!
Or in GHC lingo:
Non type-variable argument in the constraint: Num [b]
That’s why you nowadays get this admittedly badly designed and confusing error message.
(Almost all GHC error messages are a cruel joke. The best thing you can do to understand them is to read them bottom to top! And to know all of Haskell’s type system freedoms, and the GHC extensions that force GHC not to make the assumptions which would otherwise turn these into a much simpler class of errors.)
Obviously, you’d normally just replace 2 with an actual list of some sort. But we’re not going to do that here. Since GHC wants crazy, GHC shall get crazy:
Silly resolution
So since it leaves us the option of making lists (of numbers) numbers, that’s what we shall do:
instance Num b => Num [b] where
    (+) = zipWith (+)
    (*) = zipWith (*)
    abs = map abs
    signum = map signum
    fromInteger i = [fromInteger i]
    -- or: repeat (fromInteger i), if we’re evil. ;)
    negate = map negate
So now, it works perfectly fine:
> map (+1) 2
[3]
Since 2 becomes fromInteger 2 :: Num a => [a], which puts 2 in a list ([2]), map (+1) is happy and turns it into [2+1], which evaluates to [3].
This possibility is why the error message wasn’t a much simpler
“Error: 2 is not a list, but map expects a list!”.
So, I'm trying to write my own replacement for Prelude, and I have (^) implemented as such:
{-# LANGUAGE RebindableSyntax #-}
class Semigroup s where
    infixl 7 *
    (*) :: s -> s -> s

class (Semigroup m) => Monoid m where
    one :: m

class (Ring a) => Numeric a where
    fromIntegral :: (Integral i) => i -> a
    fromFloating :: (Floating f) => f -> a

class (EuclideanDomain i, Numeric i, Enum i, Ord i) => Integral i where
    toInteger :: i -> Integer
    quot :: i -> i -> i
    quot a b = let (q,r) = (quotRem a b) in q
    rem :: i -> i -> i
    rem a b = let (q,r) = (quotRem a b) in r
    quotRem :: i -> i -> (i, i)
    quotRem a b = let q = quot a b; r = rem a b in (q, r)

-- . . .

infixr 8 ^
(^) :: (Monoid m, Integral i) => m -> i -> m
(^) x i
    | i == 0 = one
    | True   = let (d, m) = (divMod i 2)
                   rec = (x*x) ^ d in
               if m == one then x*rec else rec
(Note that the Integral used here is one I defined, not the one in Prelude, although it is similar. Also, one is a polymorphic constant that's the identity under the monoidal operation.)
Numeric types are monoids, so I can try to do, say 2^3, but then the typechecker gives me:
*AlgebraicPrelude> 2^3
<interactive>:16:1: error:
* Could not deduce (Integral i0) arising from a use of `^'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Integral Integer -- Defined at Numbers.hs:190:10
instance Integral Int -- Defined at Numbers.hs:207:10
* In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
<interactive>:16:3: error:
* Could not deduce (Numeric i0) arising from the literal `3'
from the context: Numeric m
bound by the inferred type of it :: Numeric m => m
at <interactive>:16:1-3
The type variable `i0' is ambiguous
These potential instances exist:
instance Numeric Integer -- Defined at Numbers.hs:294:10
instance Numeric Complex -- Defined at Numbers.hs:110:10
instance Numeric Rational -- Defined at Numbers.hs:306:10
...plus four others
(use -fprint-potential-instances to see them all)
* In the second argument of `(^)', namely `3'
In the expression: 2 ^ 3
In an equation for `it': it = 2 ^ 3
I get that this arises because Int and Integer are both Integral types, but then why can I do this just fine in the normal Prelude?
Prelude> :t (2^)
(2^) :: (Num a, Integral b) => b -> a
Prelude> :t 3
3 :: Num p => p
Prelude> 2^3
8
Even though the signatures for partial application in mine look identical?
*AlgebraicPrelude> :t (2^)
(2^) :: (Numeric m, Integral i) => i -> m
*AlgebraicPrelude> :t 3
3 :: Numeric a => a
How would I make it so that 2^3 would in fact work, and thus give 8?
A Hindley-Milner type system doesn't really like having to default anything. In such a system, you want types to be either properly fixed (rigid, skolem) or properly polymorphic, but the concept of “this is, like, an integer... but if you prefer, I can also cast it to something else” as many other languages have doesn't really work out.
Consequently, Haskell sucks at defaulting. It doesn't have first-class support for that, only a pretty hacky ad-hoc, hard-coded mechanism which mainly deals with built-in number types, but fails at anything more involved.
You therefore should try to not rely on defaulting. My opinion is that the standard signature for ^ is unreasonable; a better signature would be
(^) :: Num a => a -> Int -> a
The Int is probably controversial – of course Integer would be safer in a sense; however, an exponent too big to fit in Int generally means the result will be totally off the scale anyway and couldn't feasibly be calculated by iterated multiplication; so this kind of expresses the intent pretty well. And it gives the best performance for the extremely common situation where you just write x^2 or similar, which is something where you very definitely don't want to have to put an extra signature in the exponent.
In the rather fewer cases where you have a concrete e.g. Integer number and want to use it in the exponent, you can always shove in an explicit fromIntegral. That's not nice, but rather less of an inconvenience.
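A minimal sketch of that suggested shape, written against the standard Prelude classes (pow is a hypothetical name, just to avoid clashing with the Prelude's (^)):

-- Exponentiation by squaring, with a monomorphic Int exponent.
pow :: Num a => a -> Int -> a
pow x i
    | i == 0    = 1
    | otherwise = let (d, r) = divMod i 2
                      half   = pow (x * x) d
                  in if r == 1 then x * half else half

Because the exponent is a fixed Int, a call like pow 2 3 needs no extra annotations; only the result type remains polymorphic.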
As a general rule, I try to avoid† any function arguments that are more polymorphic than the result. Haskell's polymorphism works best “backwards”, i.e. the opposite way from dynamic languages: the caller requests what type the result should be, and the compiler figures out from this what the arguments should be. This works pretty much always, because as soon as the result is somehow used in the main program, the types in the whole computation are linked together in a tree structure.
OTOH, inferring the type of the result is often problematic: arguments may be optional, may themselves be linked only to the result, or given as polymorphic constants like Haskell number literals. So, if i doesn't turn up in the result of ^, avoid letting it occur in the arguments either.
†“Avoid” doesn't mean I don't ever write them, I just don't do so unless there's a good reason.
All the experiments described below were done with GHC 8.0.1.
This question is a follow-up to RankNTypes with type aliases confusion. The issue there boiled down to the types of functions like this one...
{-# LANGUAGE RankNTypes #-}
sleight1 :: a -> (Num a => [a]) -> a
sleight1 x (y:_) = x + y
... which are rejected by the type checker...
ThinAir.hs:4:13: error:
* No instance for (Num a) arising from a pattern
Possible fix:
add (Num a) to the context of
the type signature for:
sleight1 :: a -> (Num a => [a]) -> a
* In the pattern: y : _
In an equation for `sleight1': sleight1 x (y : _) = x + y
... because the higher-rank constraint Num a cannot be moved outside of the type of the second argument (as would be possible if we had a -> a -> (Num a => [a]) instead). That being so, we end up trying to add a higher-rank constraint to a variable already quantified over the whole thing, that is:
sleight1 :: forall a. a -> (Num a => [a]) -> a
With this recapitulation done, we might try to simplify the example a bit. Let's replace (+) with something that doesn't require Num, and uncouple the type of the problematic argument from that of the result:
sleight2 :: a -> (Num b => b) -> a
sleight2 x y = const x y
This doesn't work just like before (save for a slight change in the error message):
ThinAir.hs:7:24: error:
* No instance for (Num b) arising from a use of `y'
Possible fix:
add (Num b) to the context of
the type signature for:
sleight2 :: a -> (Num b => b) -> a
* In the second argument of `const', namely `y'
In the expression: const x y
In an equation for `sleight2': sleight2 x y = const x y
Failed, modules loaded: none.
Using const here, however, is perhaps unnecessary, so we might try writing the implementation ourselves:
sleight3 :: a -> (Num b => b) -> a
sleight3 x y = x
Surprisingly, this actually works!
Prelude> :r
[1 of 1] Compiling Main ( ThinAir.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t sleight3
sleight3 :: a -> (Num b => b) -> a
*Main> sleight3 1 2
1
Even more bizarrely, there seems to be no actual Num constraint on the second argument:
*Main> sleight3 1 "wat"
1
I'm not quite sure how to make that intelligible. Perhaps we might say that, just like we can juggle undefined as long as we never evaluate it, an unsatisfiable constraint can stick around in a type just fine as long as it is not used for unification anywhere in the right-hand side. That, however, feels like a pretty weak analogy, especially given that non-strictness as we usually understand it is a notion involving values, and not types. Furthermore, that leaves us no closer to grasping how in the world String unifies with Num b => b -- assuming that such a thing actually happens, something which I'm not at all sure of. What, then, is an accurate description of what is going on when a constraint seemingly vanishes in this manner?
Oh, it gets even weirder:
Prelude> sleight3 1 ("wat"+"man")
1
Prelude Data.Void> sleight3 1 (37 :: Void)
1
See, there is an actual Num constraint on that argument. Only, because (as chi already commented) the b is in a covariant position, this is not a constraint you have to provide when calling sleight3. Rather, you can just pick any type b, then whatever it is, sleight3 will provide a Num instance for it!
Well, clearly that's bogus. sleight3 can't provide such a Num instance for strings, and most definitely not for Void. But it also doesn't actually need to because, quite as you said, the argument to which that constraint would apply is never evaluated. Recall that a constrained-polymorphic value is essentially just a function of a dictionary argument. sleight3 simply promises to provide such a dictionary before it actually gets to use y, but then it doesn't use y in any way, so it's fine.
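A rough dictionary-passing sketch of that idea (NumDict is a hypothetical record standing in for GHC's real Num dictionary, just for illustration):

-- A cut-down stand-in for the Num dictionary.
data NumDict b = NumDict { plus :: b -> b -> b, fromInt :: Integer -> b }

-- Num b => b desugars to roughly NumDict b -> b, so sleight3 becomes:
sleight3Desugared :: a -> (NumDict b -> b) -> a
sleight3Desugared x _y = x   -- _y is never applied, so no dictionary ever has to exist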
It's basically the same as with a function like this:
defiant :: (Void -> Int) -> String
defiant f = "Haha"
Again, the argument function clearly can not possibly yield an Int because there doesn't exist a Void value to evaluate it with. But this isn't needed either, because f is simply ignored!
By contrast, sleight2 x y = const x y does kinda sorta use y: the second argument to const is just a rank-0 type, so the compiler needs to resolve any needed dictionaries at that point. Even if const ultimately also throws y away, it still “forces” enough of this value to make it evident that it's not well-typed.
Many introductory texts will tell you that in Haskell type signatures are "almost always" optional. Can anybody quantify the "almost" part?
As far as I can tell, the only time you need an explicit signature is to disambiguate type classes. (The canonical example being read . show.) Are there other cases I haven't thought of, or is this it?
(I'm aware that if you go beyond Haskell 2010 there are plenty of exceptions. For example, GHC will never infer rank-N types. But rank-N types are a language extension, not part of the official standard [yet].)
Polymorphic recursion needs type annotations, in general.
f :: (a -> a) -> (a -> b) -> Int -> a -> b
f f1 g n x =
    if n == (0 :: Int)
    then g x
    else f f1 (\z h -> g (h z)) (n-1) x f1
(Credit: Patrick Cousot)
Note how the recursive call looks badly typed (!): it calls itself with five arguments, despite f having only four! Then remember that b can be instantiated with c -> d, which causes an extra argument to appear.
The above contrived example computes
f f1 g n x = g (f1 (f1 (f1 ... (f1 x))))
where f1 is applied n times. Of course, there is a much simpler way to write an equivalent program.
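For instance, one possible rendering of that simpler equivalent (a sketch of my own, not necessarily what the author had in mind) needs no polymorphic recursion:

-- Apply f1 to x a total of n times, then apply g.
f' :: (a -> a) -> (a -> b) -> Int -> a -> b
f' f1 g n x = g (iterate f1 x !! n)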
Monomorphism restriction
If you have MonomorphismRestriction enabled, then sometimes you will need to add a type signature to get the most general type:
{-# LANGUAGE MonomorphismRestriction #-}

-- myPrint :: Show a => a -> IO ()
myPrint = print

main = do
    myPrint ()
    myPrint "hello"
This will fail because myPrint is monomorphic. You would need to uncomment the type signature to make it work, or disable MonomorphismRestriction.
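For completeness, a sketch of the other fix mentioned above: leave the signature commented out and disable the restriction instead.

{-# LANGUAGE NoMonomorphismRestriction #-}

-- With the restriction off, myPrint is generalised to Show a => a -> IO ().
myPrint = print

main = do
    myPrint ()
    myPrint "hello"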
Phantom constraints
When you put a polymorphic value with a constraint into a tuple, the tuple itself becomes polymorphic and has the same constraint:
myValue :: Read a => a
myValue = read "0"
myTuple :: Read a => (a, String)
myTuple = (myValue, "hello")
We know that the constraint affects the first part of the tuple but does not affect the second part. The type system doesn't know that, unfortunately, and will complain if you try to do this:
myString = snd myTuple
Even though intuitively one would expect myString to be just a String, the type checker needs to specialize the type variable a and figure out whether the constraint is actually satisfied. In order to make this expression work, one would need to annotate the type of either snd or myTuple:
myString = snd (myTuple :: ((), String))
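The other option mentioned above, annotating snd instead, looks like this (a sketch; myString' is a fresh name, and () is just an arbitrary type with a Read instance):

myString' = (snd :: ((), String) -> String) myTuple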
In Haskell, as I'm sure you know, types are inferred. In other words, the compiler works out what type you want.
However, in Haskell, there are also polymorphic typeclasses, with functions that act in different ways depending on the return type. Here's an example of the Monad class, though I haven't defined everything:
class Monad m where
    return :: a -> m a
    (>>=) :: m a -> (a -> m b) -> m b
    fail :: String -> m a
We're given a lot of functions with just type signatures. Our job is to make instance declarations for different types that can be treated as Monads, like Maybe t or [t].
Have a look at this code - it won't work in the way we might expect:
return 7
That's a function from the Monad class, but because there's more than one Monad, we have to specify what return value/type we want, or in ghci it defaults to the IO monad. So:
return 7 :: Maybe Int
-- Will return...
Just 7
return 6 :: [Int]
-- Will return...
[6]
This is because [t] and Maybe have both been made instances of the Monad type class.
Here's another example, this time using the Random typeclass. This code throws an error:
random (mkStdGen 100)
Because random returns something in the Random class, we'll have to specify what type we want to return, with a StdGen object tupled with whatever value we want:
random (mkStdGen 100) :: (Int, StdGen)
-- Returns...
(-3650871090684229393,693699796 2103410263)
random (mkStdGen 100) :: (Bool, StdGen)
-- Returns...
(True,4041414 40692)
This can all be found in Learn You a Haskell online, though you'll have to do some long reading. This, I'm pretty much 100% certain, is the only time when types are necessary.
Here's my question:
This works perfectly:
type Asdf = [Integer]
type ListOfAsdf = [Asdf]
Now I want to do the same but with the Integral class restriction:
type Asdf2 a = (Integral a) => [a]
type ListOfAsdf2 = (Integral a) => [Asdf2 a]
I got this error:
Illegal polymorphic or qualified type: Asdf2 a
Perhaps you intended to use -XImpredicativeTypes
In the type synonym declaration for `ListOfAsdf2'
I have tried a lot of things but I am still not able to create a type with a class restriction as described above.
Thanks in advance!!! =)
Dak
Ranting Against the Anti-Existentialists
I always dislike the anti-existential-type talk in Haskell, as I often find existentials useful. For example, in some QuickCheck tests I have code similar to (ironically, untested code follows):
data TestOp = forall a. Testable a => T String a

tests :: [TestOp]
tests = [T "propOne:" someProp1
        ,T "propTwo:" someProp2
        ]

runTests = mapM runTest tests

runTest (T s a) = putStr s >> quickCheck a
And even in a corner of some production code I found it handy to make a list of types I'd need random values of:
type R a = Gen -> (a, Gen)

data RGen = forall a. (Serialize a, Random a) => RGen (R a)

list = [(b1, str1, RGen (random :: R Type1))
       ,(b2, str2, RGen (random :: R Type2))
       ]
Answering Your Question
{-# LANGUAGE ExistentialQuantification #-}
data SomeWrapper = forall a. Integral a => SW a
If you need a context, the easiest way would be to use a data declaration:
{-# LANGUAGE DatatypeContexts #-}   -- deprecated, but required on modern GHC for a context on a data declaration

data (Integral a) => IntegralData a = ID [a]
type ListOfIntegralData a = [IntegralData a]
*Main> :t [ ID [1234,1234]]
[ID [1234,1234]] :: Integral a => [IntegralData a]
This has the (sole) effect of making sure an Integral context is added to every function that uses the IntegralData data type.
sumID :: Integral a => IntegralData a -> a
sumID (ID xs) = sum xs
The main reason a type synonym isn't working for you is that type synonyms are designed as
just that - something that replaces a type, not a type signature.
But if you want to go existential the best way is with a GADT, because it handles all the quantification issues for you:
{-# LANGUAGE GADTs #-}
data IntegralGADT where
    IG :: Integral a => [a] -> IntegralGADT
type ListOfIG = [ IntegralGADT ]
Because this is essentially an existential type, you can mix them up:
*Main> :t [IG [1,1,1::Int], IG [234,234::Integer]]
[IG [1,1,1::Int],IG [234,234::Integer]] :: [ IntegralGADT ]
Which you might find quite handy, depending on your application.
The main advantage of a GADT over a data declaration is that when you pattern match, you implicitly get the Integral context:
showPointZero :: IntegralGADT -> String
showPointZero (IG xs) = show $ (map fromIntegral xs :: [Double])
*Main> showPointZero (IG [1,2,3])
"[1.0,2.0,3.0]"
But existential quantification is sometimes used for the wrong reasons (e.g. wanting to mix all your data up in one list because that's what you're used to from dynamically typed languages, and you haven't got used to static typing and its advantages yet).
Here I think it's more trouble than it's worth, unless you need to mix different
Integral types together without converting them. I can't see a reason
why this would help, because you'll have to convert them when you use them.
For example, you can't define
unIG (IG xs) = xs
because it doesn't even type check. Rule of thumb: you can't do stuff that mentions the type a on the right hand side.
However, this is OK because we convert the type a:
unIG :: Num b => IntegralGADT -> [b]
unIG (IG xs) = map fromIntegral xs
Here existential quantification has forced you to convert your data, when I think your original plan was to not have to!
You may as well convert everything to Integer instead of this.
If you want things simple, keep them simple. The data declaration is the simplest way of ensuring you don't put data in your data type unless it's already a member of some type class.