I've been playing with the uncurry function in GHCi and I've run into something I can't quite understand. When I apply uncurry to the (+) function and bind the result to a variable, as in the code below, the compiler infers its type to be specific to Integer:
Prelude> let add = uncurry (+)
Prelude> :t add
add :: (Integer, Integer) -> Integer
However, when I ask for the type of the following expression, I get (what I expect to be) the correct result:
Prelude> :t uncurry (+)
uncurry (+) :: (Num a) => (a, a) -> a
What would cause that? Is it particular to GHCi?
The same applies to let add' = (+).
NOTE: I could not reproduce that using a compiled file.
This has nothing to do with ghci. This is the monomorphism restriction being irritating. If you try to compile the following file:
add = uncurry (+)
main = do
  print $ add (1,2 :: Int)
  print $ add (1,2 :: Double)
You will get an error. If you expand:
main = do
  print $ uncurry (+) (1,2 :: Int)
  print $ uncurry (+) (1,2 :: Double)
Everything is fine, as expected. The monomorphism restriction refuses to make something that "looks like a value" (i.e. defined with no arguments on the left-hand side of the equals) typeclass polymorphic, because that would defeat the caching that would normally occur. E.g.
foo :: Integer
foo = expensive computation
bar :: (Num a) => a
bar = expensive computation
foo is guaranteed only to be computed once (well, in GHC at least), whereas bar will be computed every time it is mentioned. The monomorphism restriction seeks to save you from the latter case by defaulting to the former when it looks like that's what you wanted.
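Roughly speaking (and ignoring optimisations), here is a sketch of the difference, with a stand-in for the expensive computation:
-- foo is a plain value (a CAF): GHC computes the sum once and reuses it.
foo :: Integer
foo = sum (replicate 10000000 1)

-- bar is overloaded: under the hood it takes a Num dictionary as an argument,
-- so each use recomputes the sum.
bar :: Num a => a
bar = sum (replicate 10000000 1)

main :: IO ()
main = do
  print foo                -- computed here...
  print foo                -- ...and reused here
  print (bar :: Integer)   -- computed here...
  print (bar :: Double)    -- ...and computed again here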
If you only use the function once (or always at the same type), type inference will take care of inferring the right type for you. In that case ghci is doing something slightly different by guessing sooner. But using it at two different types shows what is going on.
When in doubt, use a type signature (or turn off the wretched thing with {-# LANGUAGE NoMonomorphismRestriction #-}).
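For example, giving the binding from the failing file above an explicit polymorphic signature makes it compile:
add :: Num a => (a, a) -> a
add = uncurry (+)

main :: IO ()
main = do
  print $ add (1,2 :: Int)      -- 3
  print $ add (1,2 :: Double)   -- 3.0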
There's magic involved in extended defaulting rules with ghci. Basically, among other things, Num constraints get defaulted to Integer and Floating constraints to Double, when otherwise there would be an error (in this case, due to the evil monomorphism restriction).
Related
In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
$ ghci
GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help
λ>
λ> let { myEqualP :: Num a => a -> a -> Bool ; myEqualP x y = x==y ; }
<interactive>:6:60: error:
• Could not deduce (Eq a) arising from a use of ‘==’
from the context: Num a
bound by the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
at <interactive>:6:7-41
Possible fix:
add (Eq a) to the context of
the type signature for:
myEqualP :: forall a. Num a => a -> a -> Bool
• In the expression: x == y
In an equation for ‘myEqualP’: myEqualP x y = x == y
λ>
It seems this is because, for example, Num instances can be defined for some functional types.
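For instance, a pointwise instance along these lines is legal Haskell (sketched by hand here; similar instances exist in some libraries), and functions have no decidable equality:
{-# LANGUAGE FlexibleInstances #-}   -- harmless even if not strictly required

-- Lift the numeric operations pointwise to functions.
instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)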
Furthermore, if we prevent ghci from overguessing the type of integer literals, they have just the Num type constraint:
λ>
λ> :set -XNoMonomorphismRestriction
λ>
λ> x=42
λ> :type x
x :: Num p => p
λ>
Hence, terms like x or 42 above have no reason to be comparable.
But still, they happen to be:
λ>
λ> y=43
λ> x == y
False
λ>
Can somebody explain this apparent paradox?
You're right that values constrained only by Num can't be compared without using Eq. But that's not what is happening here.
In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; not sure about GHC 8.8.4) x = 42 results in a variable x of type forall p. Num p => p.1
Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2
Then you enter x == y, and GHCi has to evaluate in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (which has to be the same). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3
However each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses change the type of x or y; we're just using them each time at one of the (many) types they support.
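A small sketch of that, with bindings mirroring x and y from the question:
x, y :: Num p => p
x = 42
y = 43

main :: IO ()
main = do
  print ((x :: Int) == y)      -- uses the Eq Int instance; prints False
  print (x == (y :: Double))   -- uses the Eq Double instance; prints False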
Without the concept of defaulting, a bare x == y would just produce an Ambiguous type variable error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).
If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.
2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.
3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc). Basic types like Int, Integer, Double, Char, Bool, each have their own instance, and each of those instances has its own code for ==.
4 In the underlying system, a type like forall p. Num p => p is in fact much like a function; one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that could be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.
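A purely hypothetical, hand-written picture of that dictionary passing (GHC's real Core uses its own types and names):
-- A hand-rolled stand-in for the Num dictionary, with just two methods.
data NumDict a = NumDict
  { fromIntegerD :: Integer -> a
  , plusD        :: a -> a -> a
  }

-- A binding like "x :: forall p. Num p => p" behaves like a function
-- that takes the instance (dictionary) as an argument:
xDesugared :: NumDict a -> a
xDesugared dict = fromIntegerD dict 42

-- The Num Int instance, packaged by hand:
numIntDict :: NumDict Int
numIntDict = NumDict fromInteger (+)

-- Only after supplying a dictionary do we get a concrete, comparable value:
concreteX :: Int
concreteX = xDesugared numIntDict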
5 The gist of the default rules is that if the constraints on an ambiguous type variable are:
all built-in classes
at least one of them is a numeric class (Num, Floating, etc)
then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work then it reports an error.
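For example, in a plain module (a small sketch):
-- Both literals carry only Num (plus Show from show), so the ambiguous
-- type defaults to Integer and this is "5".
viaInteger :: String
viaInteger = show (2 + 3)

-- The Fractional constraint from (/) rules out Integer, so the compiler
-- falls back to Double here.
viaDouble :: String
viaDouble = show (2 / 3)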
You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.
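For example, a module-level declaration like this changes the candidate types (a small sketch; on a 64-bit GHC the first result wraps around instead of being exact):
module Example where

-- Try Int first, then Double, instead of the "default default".
default (Int, Double)

wrapped :: String
wrapped = show (2 ^ 64)              -- defaults to Int, so this overflows

exact :: String
exact = show (2 ^ 64 :: Integer)     -- explicitly Integer, so no overflow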
GHCi however comes with extended default rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with the ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation means that switching either's default rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job at achieving its goals in compiled code but is almost always wrong in interpreter sessions.)
6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
λ :t x
x :: Num p => p
λ :t y
y :: Num p => p
λ (x `asTypeOf` _) == y
<interactive>:19:15: error:
• Found hole: _ :: Integer
• In the second argument of ‘asTypeOf’, namely ‘_’
In the first argument of ‘(==)’, namely ‘(x `asTypeOf` _)’
In the expression: (x `asTypeOf` _) == y
• Relevant bindings include
it :: Bool (bound at <interactive>:19:1)
Valid hole fits include
x :: forall p. Num p => p
with x
(defined at <interactive>:1:1)
it :: forall p. Num p => p
with it
(defined at <interactive>:10:1)
y :: forall p. Num p => p
with y
(defined at <interactive>:12:1)
You can see it tells us nice and simply Found hole: _ :: Integer, before proceeding with all the extra information it likes to give us about errors.
A typed hole (in its simplest form) just means writing _ in place of an expression. The compiler errors out on such an expression, but it tries to give you information about what you could use to "fill in the blank" in order to get it to compile; most helpfully, it tells you the type of something that would be valid in that position.
foo `asTypeOf` bar is an old pattern for adding a bit of type information. It returns foo but it restricts (this particular usage of) it to be the same type as bar (the actual value of bar is totally unused). So if you already have a variable d with type Double, x `asTypeOf` d will be the value of x as a Double.
Here I'm using asTypeOf "backwards"; instead of using the thing on the right to constrain the type of the thing on the left, I'm putting a hole on the right (which could have any type), but asTypeOf conveniently makes sure it's the same type as x without otherwise changing how x is used in the overall expression (so the same type inference still applies, including defaulting, which isn't always the case if you lift a small part of a larger expression out to ask GHCi for its type with :t; in particular :t x won't tell us Integer, but Num p => p).
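For example, the ordinary "forwards" use looks like this (a small sketch; asTypeOf is just const with a more restrictive type, asTypeOf :: a -> a -> a):
d :: Double
d = 0

half :: Double
half = (1 `asTypeOf` d) / 2   -- the literal is forced to d's type, so this is 0.5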
When I ask the type of the + operator it is as you would expect
Prelude> :t (+)
(+) :: Num a => a -> a -> a
When I assign the operator to a variable then the type signature changes
Prelude> let x = (+)
Prelude> :t x
x :: Integer -> Integer -> Integer
Why would the type of an operator change when it is assigned?
This is the "dreaded monomorphism restriction". Essentially, when you define a
new top-level name, that
doesn't look like a function definition
then, by default, Haskell tries to be smart and pick a less-than-entirely general type for it. The reasons were originally to make Haskell easier to use (without it it's perhaps easier to write programs with ambiguous types), but more recently it seems to just trip everyone up since it's very unexpected behavior.
The resolution?
1. Use GHC 7.8 or later; since GHC 7.8.1, GHCi sessions turn the monomorphism restriction off automatically, or
2. Use -XNoMonomorphismRestriction, which turns off this behavior, or
3. Provide a type annotation like
let { x :: Num a => a -> a -> a; x = (+) }
In normal Haskell code, method (3) is most highly recommended. When using GHCi, (1) and (2) are more convenient.
I want to be able to type the following in ghci:
map showTypeSignature [(+),(-),show]
I want ghci to return the following list of Strings:
["(+) :: Num a => a -> a -> a","(-) :: Num a => a -> a -> a","show :: Show a => a -> String"]
Naturally, the first place that I run into trouble is that I cannot construct the first list, as the type signatures of the functions don't match. What can I do to construct such a list? How does ghci accomplish the printing of type signatures? Where is the ghci command :t defined (its source)?
What you're asking for isn't really possible. You cannot easily determine the type signature of a Haskell term from within Haskell. At run-time, there's hardly any type information available. The GHCi command :t is a GHCi command, not an interpreted Haskell function, for a reason.
To do something that comes close to what you want you'll have to use GHC itself, as a library. GHC offers the GHC API for such purposes. But then you'll not be able to use arbitrary Haskell terms, but will have to start with a String representation of your terms. Also, invoking the compiler at run-time will necessarily produce an IO output.
kosmikus is right: this doesn't really work out. And it shouldn't; the static type system is one of Haskell's most distinguishing features!
However, you can emulate this for monomorphic functions quite well using the Dynamic existential:
import Data.Dynamic (Dynamic, dynTypeRep)

showTypeSignature :: Dynamic -> String
showTypeSignature = show . dynTypeRep
Prelude Data.Dynamic> map showTypeSignature [toDyn (+), toDyn (-), toDyn (show)]
["Integer -> Integer -> Integer","Integer -> Integer -> Integer","() -> [Char]"]
As you see, ghci had to boil the functions down to monomorphic types here for this to work, which particularly for show is patently useless.
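If you annotate the functions with monomorphic types yourself before wrapping them, the reported types become a little more meaningful (a small sketch):
import Data.Dynamic (dynTypeRep, toDyn)

main :: IO ()
main = mapM_ (print . dynTypeRep)
  [ toDyn ((+)  :: Int -> Int -> Int)
  , toDyn (show :: Bool -> String)
  ]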
The answers you have about why you can't do this are very good, but there may be another option. If you don't care about getting a Haskell list, and just want to see the types of a bunch of things, you can define a custom GHCi command, say :ts, that shows you the types of a list of things; to wit,
Prelude> :ts (+) (-) show
(+) :: Num a => a -> a -> a
(-) :: Num a => a -> a -> a
show :: Show a => a -> String
To do this, we use :def; :def NAME EXPR, where NAME is an identifier and EXPR is a Haskell expression of type String -> IO String, defines the GHCi command :NAME. Running :NAME ARGS evaluates EXPR ARGS to produce a string, and then runs the resulting string in GHCi. This is less confusing than it sounds. Here's what we do:
Prelude> :def ts return . unlines . map (":type " ++) . words
Prelude> :ts (+) (-) show
(+) :: Num a => a -> a -> a
(-) :: Num a => a -> a -> a
show :: Show a => a -> String
What's going on? This defines :ts to evaluate return . unlines . map (":type " ++) . words, which does the following:
words: Takes a string and splits it on whitespace; e.g., "(+) (-) show" becomes ["(+)", "(-)", "show"].
map (":type " ++): Takes each of the words from before and prepends ":type "; e.g., ["(+)", "(-)", "show"] becomes [":type (+)", ":type (-)", ":type show"]. Notice that we now have a list of GHCi commands.
unlines: Takes a list of strings and puts newlines after each one; e.g., [":type (+)", ":type (-)", ":type show"] becomes ":type (+)\n:type (-)\n:type show\n". Notice that if we pasted this string into GHCi, it would produce the type signatures we want.
return: Lifts a String to an IO String, because that's the type we need.
Thus, :ts name₁ name₂ ... nameₙ will evaluate :type name₁, :type name₂, …, :type nameₙ in succession and print out the results. Again, you can't get a real list this way, but if you just want to see the types, this will work.
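If you want :ts available in every session, you can put the same definition in your ~/.ghci file, which GHCi runs at startup:
:def ts return . unlines . map (":type " ++) . words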
I'm trying to learn haskell and I've been going over chapter 6 and 7 of Learn you a Haskell. Why don't the following two function definitions give the same result? I thought (f . g) x = f (g (x))?
Def 1
let{ t :: Eq x => [x] -> Int; t xs = length( nub xs)}
t [1]
1
Def 2
let t = length . nub
t [1]
<interactive>:78:4:
No instance for (Num ()) arising from the literal `1'
Possible fix: add an instance declaration for (Num ())
In the expression: 1
In the first argument of `t', namely `[1]'
In the expression: t [1]
The problem is the interaction between your type signatures and the dreaded monomorphism restriction. You have a type signature on your first version but not on your second; ironically, it's the second version that actually needs one (the first looks like a function definition, so the restriction doesn't apply to it).
Try this:
λ>let t :: Eq x => [x] -> Int; t = length . nub
λ>t [1]
1
The monomorphism restriction forces things that don't look like functions to have a monomorphic type unless they have an explicit type signature. The type you want for t is polymorphic: note the type variable x. However, with the monomorphism restriction, x gets "defaulted" to (). Check this out:
λ>let t = length . nub
λ>:t t
t :: [()] -> Int
This is very different from the version with the type signature above!
The compiler chooses () for the monomorphic type because of defaulting. Defaulting is just the process Haskell uses to choose a type from a typeclass. All this really means is that, in the repl, Haskell will try using the () type if it encounters an ambiguous type variable in the Show, Eq or Ord classes. Yes, this is basically arbitrary, but it's pretty handy for playing around without having to write type signatures everywhere! Also, the defaulting rules are more conservative in files, so this is basically just something that happens in GHCi.
In fact, defaulting to () seems to mostly be a hack to make printf work correctly in GHCi! It's an obscure Haskell curio, but I'd ignore it in practice.
Apart from including a type signature, you could also just turn the monomorphism restriction off in the repl:
λ>:set -XNoMonomorphismRestriction
This is fine in GHCi, but I would not use it in real modules--instead, make sure to always include a type signature for top-level definitions inside files.
EDIT: Ever since GHC 7.8.1, the monomorphism restriction is turned off by default in GHCi. This means that all this code would work fine with a recent version of GHCi and you do not need to set the flag explicitly. It can still be an issue for values defined in a file with no type signature, however.
This is another instance of the "Dreaded" Monomorphism Restriction which leads GHCi to infer a monomorphic type for the composed function. You can disable it in GHCi with
> :set -XNoMonomorphismRestriction
Why is it that I can do:
1 + 2.0
but when I try:
let a = 1
let b = 2.0
a + b
<interactive>:1:5:
Couldn't match expected type `Integer' with actual type `Double'
In the second argument of `(+)', namely `b'
In the expression: a + b
In an equation for `it': it = a + b
This seems just plain weird! Does it ever trip you up?
P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!
The type signature of (+) is defined as Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
The problem here is with GHCi and the order it establishes types, not Haskell itself. If you were to put either of your examples in a file (using do for the let expressions) it would compile and run fine, because GHC would use the whole function as the context to determine the types of the literals 1 and 2.0.
All that's happening in the first case is GHCi is guessing the types of the numbers you're entering. The most precise is a Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expression, it only has one number to work off of, so it decides 1 is an Integer and 2.0 is a Double.
EDIT: GHCi isn't really "guessing", it's using very specific type defaulting rules that are defined in the Haskell Report. You can read a little more about that here.
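For instance, putting the bindings in a file as suggested above compiles fine (a small sketch):
main :: IO ()
main = do
  let a = 1
  let b = 2.0
  print (a + b)   -- a and b are both inferred as Double, so this prints 3.0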
The first works because numeric literals are polymorphic (they are interpreted as fromInteger literal resp. fromRational literal), so in 1 + 2.0, you really have fromInteger 1 + fromRational 2.0; in the absence of other constraints, the result type defaults to Double.
The second does not work because of the monomorphism restriction. If you bind something without a type signature and with a simple pattern binding (name = expression), that entity gets assigned a monomorphic type. For the literal 1, we have a Num constraint, therefore, according to the defaulting rules, its type is defaulted to Integer in the binding let a = 1. Similarly, the fractional literal's type is defaulted to Double.
It will work, by the way, if you :set -XNoMonomorphismRestriction in ghci.
The reason for the monomorphism restriction is to prevent loss of sharing: if you see a value that looks like a constant, you don't expect it to be calculated more than once, but if it had a polymorphic type, it would be recomputed every time it is used.
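Alternatively, explicit polymorphic signatures keep the bindings general (a small sketch):
a :: Num t => t
a = 1

b :: Fractional t => t
b = 2.0

c :: Double
c = a + b   -- both a and b are used at Double here, so c is 3.0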
You can use GHCI to learn a little more about this. Use the command :t to get the type of an expression.
Prelude> :t 1
1 :: Num a => a
So 1 is a constant which can be any numeric type (Double, Integer, etc.)
Prelude> let a = 1
Prelude> :t a
a :: Integer
So in this case, Haskell inferred the concrete type for a is Integer. Similarly, if you write let b = 2.0 then Haskell infers the type Double. Using let made Haskell infer a more specific type than (perhaps) was necessary, and that leads to your problem. (Someone with more experience than me can perhaps comment as to why this is the case.) Since (+) has type Num a => a -> a -> a, the two arguments need to have the same type.
You can fix this with the fromIntegral function:
Prelude> :t fromIntegral
fromIntegral :: (Num b, Integral a) => a -> b
This function converts integer types to other numeric types. For example:
Prelude> let a = 1
Prelude> let b = 2.0
Prelude> (fromIntegral a) + b
3.0
Others have addressed many aspects of this question quite well. I'd like to say a word about the rationale behind why + has the type signature Num a => a -> a -> a.
Firstly, the Num typeclass has no way to convert one arbitrary instance of Num into another. Suppose I have a data type for imaginary numbers; they are still numbers, but you really can't properly convert them into just an Int.
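A toy sketch of such a type (purely illustrative; the abs and signum choices are arbitrary):
-- Gaussian integers: numbers with a real and an imaginary part.
data Imag = Imag Integer Integer
  deriving (Show, Eq)

instance Num Imag where
  Imag a b + Imag c d = Imag (a + c) (b + d)
  Imag a b - Imag c d = Imag (a - c) (b - d)
  Imag a b * Imag c d = Imag (a*c - b*d) (a*d + b*c)
  negate (Imag a b)   = Imag (negate a) (negate b)
  abs    (Imag a b)   = Imag (abs a) (abs b)        -- arbitrary choice, just to be total
  signum (Imag a b)   = Imag (signum a) (signum b)  -- likewise
  fromInteger n       = Imag n 0

-- There is no sensible toInt :: Imag -> Int, which is why Num offers
-- no conversion between arbitrary instances.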
Secondly, which type signature would you prefer?
(+) :: (Num a, Num b) => a -> b -> a
(+) :: (Num a, Num b) => a -> b -> b
(+) :: (Num a, Num b, Num c) => a -> b -> c
After considering the other options, you realize that a -> a -> a is the simplest choice. Polymorphic results (as in the third suggestion above) are cool, but can sometimes be too generic to be used conveniently.
Thirdly, Haskell is not Blub. Most, though arguably not all, design decisions about Haskell do not take into account the conventions and expectations of popular languages. I frequently enjoy saying that the first step to learning Haskell is to unlearn everything you think you know about programming first. I'm sure most, if not all, experienced Haskellers have been tripped up by the Num typeclass, and various other curiosities of Haskell, because most have learned a more "mainstream" language first. But be patient, you will eventually reach Haskell nirvana. :)