How can I set up Haskell's GHCi to interactively evaluate functions to their signature (type) instead of getting errors?

To see the function's signature in Haskell GHCi, I have to prefix it with :t:
Prelude> f = \x -> x+1
Prelude> :t f
f :: Num a => a -> a
But typing that prefix every time quickly grows old. If I leave it out, I get an error:
Prelude> f
<interactive>:5:1: error:
• No instance for (Show (a0 -> a0)) arising from a use of ‘print’
(maybe you haven't applied a function to enough arguments?)
• In the first argument of ‘print’, namely ‘it’
In a stmt of an interactive GHCi command: print it
Instead of getting this error message, I would like to see some useful information about my function f, similar to what I get with :t f (possibly even more information about f).
How can I set up GHCi to achieve this, i.e. get the function's info upon entering it, without :t?

You probably can't do this today. I've opened a feature request to see about adding options to control what GHCi reports about type errors at the prompt.

GHCi will already happily show you the type of anything you type into the prompt, with the option :set +t. The only issue is that show is called on the result, there is no sensible way to show a function, and the type is only printed for expressions that can actually be shown. However, you can get around this quite easily:
>newtype ShowType a = ShowType a
>instance Show (ShowType a) where show _ = "The type is"
>:set +t
>ShowType const
The type is
it :: ShowType (a -> b -> a)
Unfortunately, this creates quite a lot of syntactic noise. My preferred solution is to add the following to the .ghci file:
:set +t
instance {-# OVERLAPS #-} Show a where show _ = "The type is"
Adding such a Show instance to any real Haskell module would be a grave mistake, but within the .ghci module, it only scopes over expressions typed into the prompt, so it seems okay to me. With this, you get:
>const
The type is
it :: a -> b -> a
>show
The type is
it :: Show a => a -> String
Conveniently, when you have a function whose type is 'technically' valid but has unsatisfiable constraints, you still get a type error:
>:t \x -> x `div` (x / x)
\x -> x `div` (x / x) :: (Integral a, Fractional a) => a -> a
^^^^^^^^^^^^^^^^^^^^^^^^^^
>\x -> x `div` (x / x)
<interactive>:12:1: error:
* Ambiguous type variable `a0' arising from a use of `it'
prevents the constraint `(Fractional a0)' from being solved.
However, the absolute simplest solution is to :set +t and to give a let statement when your expression is non-Showable:
>let _f = show
_f :: Show a => a -> String
Unfortunately, if the left-hand side is the wildcard - let _ = show - then the type is not printed.

Related

Haskell syb Data.Generics not working as expected

On a ghci prompt everywhere (mkT (\x -> 2 * x)) (8.7, 21, "word") evaluates to (8.7, 42, "word").
I expected the 8.7 to be doubled as well. Why am I wrong?
This is the result of mkT monomorphizing its argument in this particular case, but it turns out there's no broader way to address the issue. mkT isn't doing anything wrong.
It's worth looking first at why everywhere (* 2) doesn't type-check.
ghci> :t everywhere
everywhere
:: (forall a. Data a => a -> a) -> forall a. Data a => a -> a
ghci> :t (* 2)
(* 2) :: Num a => a -> a
ghci> :t everywhere (* 2)
<interactive>:1:13: error:
• Could not deduce (Num a) arising from a use of ‘*’
from the context: Data a
bound by a type expected by the context:
forall a. Data a => a -> a
at <interactive>:1:12-16
Possible fix:
add (Num a) to the context of
a type expected by the context:
forall a. Data a => a -> a
• In the expression: (*)
In the first argument of ‘everywhere’, namely ‘(* 2)’
In the expression: everywhere (* 2)
everywhere has a higher-rank type - the first forall a. is inside the parentheses. I kind of dislike documenting the type that way - it uses a as a type variable in two completely separate ways. But there are two different scopes, and that matters. What it's saying is that any function passed to it must be polymorphic over all instances of Data.
But the type of (* 2) doesn't match up there. It won't work with any instance of Data. It requires more - it requires that it be provided an instance of Num. So the error message dutifully reports that it can't deduce (Num a) from the context Data a. So this isn't going to work. The pieces don't fit together.
This is where mkT comes into play:
ghci> :t mkT
mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a
Its type is a bit funny. It looks almost like it does nothing at all, but Typeable is a funny class. mkT actually compares a and b for type equality, using those Typeable constraints. If they're the same, it applies the function you provided. Otherwise, it just acts as the identity function.
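For intuition, mkT behaves roughly like the following (a sketch, not the exact library source):
import Data.Typeable (Typeable, cast)

mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a
mkT f = case cast f of   -- does the runtime type of f match a -> a?
  Just g  -> g           -- yes: apply the supplied transformation
  Nothing -> id          -- no: behave as the identity function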
What it does when it's applied to a function is where things are going wrong for you:
ghci> :t mkT (* 2)
mkT (* 2) :: Typeable a => a -> a
It's still polymorphic in a, but the b it used to have has vanished. It had to pick a specific type b to work against, and it did that by defaulting to Integer. (See ghc's extended defaulting rules for details on how that works in ghci.) So...
ghci> mkT (* 2) 3.5
3.5
ghci> mkT (* 2) 7
14
ghci> mkT (* 2) (7 :: Int)
7
At the type level, mkT has to monomorphize its argument. That's the only way it can make use of the Typeable constraint when used in a context where a relevant variable no longer appears in its type.
(To tie the loop back to everywhere, the reason mkT (* 2) works as an argument to everywhere is because Data is a subclass of Typeable. The Data constraint implies that the Typeable requirement will be satisfied.)
So what can you do about this? Well, it's impossible to write it truly generically because of Haskell's open world assumption. Anywhere in the program, any type might be declared an instance of Num with arbitrary implementations of (*) and fromInteger. In order to work with everywhere, there would need to be some mechanism to go from knowing something is an instance of Data to looking up its Num instance. This just isn't possible at run time. Types have been erased. There may be some residues like Typeable dictionaries being carried around, but they don't provide any means to look up other instance dictionaries. And while you might be able to envision a language where that sort of lookup is possible, it actually would be very harmful to allow it in Haskell. It would invalidate the ability to reason about types parametrically, which would be a giant loss.
The best you can do is write transformation functions that work on multiple types:
ghci> let f = mkT (* (2 :: Int)) . mkT (* (2 :: Double)) . mkT (* (2 :: Integer))
ghci> f 5
10
ghci> f 2.7
5.4
ghci> f (9 :: Int)
18
ghci> f "hello"
"hello"
It's verbose and you can probably write something better by hand if you so desire. But it at least works, at least to some extent. And it doesn't require breaking foundational assumptions in the language design, which is always a bonus.
Here is a simplification of your case that doesn't use any Data stuff.
module MyModule where
dbl x = 2 * x
myId :: (a->a) -> a -> a
myId f = f
myDbl = myId dbl
Don't type this to the ghci prompt, rather, create a .hs file and load it.
Now check what type myDbl has.
Prelude> :l MyModule
[1 of 1] Compiling MyModule ( MyModule.hs, interpreted )
Ok, one module loaded.
*MyModule> :t MyModule.myDbl
MyModule.myDbl :: Integer -> Integer
Surprise! Why is it compiling at all? And why the weird types?
Because of the defaulting rules. (Basically, "if you don't know what to do with Num a, just use Integer"). Since myId cannot deal with dbl :: Num a => a -> a, Haskell allows it to take the Integer version.
Disable defaulting by adding default () at the top, and this module no longer compiles.
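Concretely, that change looks like this (a sketch of the same module):
module MyModule where

default ()   -- empty default list: no type defaulting in this module

dbl x = 2 * x

myId :: (a->a) -> a -> a
myId f = f

myDbl = myId dbl   -- rejected now: the Num constraint can no longer be defaulted away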
mkT is no different from myId in this respect.

Misunderstandment of map parameters in Haskell

I'm trying to create a function, which will multiply each element in a list called s by a parameter x.
First, I experimented around in ghci and found that fn1 s = map (* 2) s works. Then I tried to make the function more general by including the factor x as a parameter: fn2 x s = map (* x) s. However this leads to an error when I call the function:
<interactive>:12:1: error:
• Non type-variable argument in the constraint: Num [a]
(Use FlexibleContexts to permit this)
• When checking the inferred type
it :: forall a. (Num a, Num [a]) => [[a]] -> [[a]]
After some additional experimentation I found that I can solve the problem by surrounding the * operator with ()
fn3 x s = map ((*) x) s
What I need help with is why the latter piece of code works while the previous does not.
Any help is appreciated.
This would happen if you forget to provide the x parameter when calling fn2, for example:
> fn2 [1,2,3]
The compiler sees [1,2,3] where x should be, and it also sees (* x) in the body of the function, and it reckons that [1,2,3] must be a valid argument for operator *. And since operator * is defined in type class Num, the compiler infers that there must be an instance Num [a] - which is exactly what it says in the error message.
The result of such call would be another function, which still "expects" the missing parameter s and once given it, will return a list of the same type as s. Since it's clear from the provided arguments that x :: [a], and you're using map to transform x to the same type, the compiler infers that s :: [[a]], and so the result of calling fn2 like that is [[a]] -> [[a]], which is what the error message says.
Now, the requirement of an instance Num [a] in itself is not a big deal. In fact, if you enable the FlexibleContexts extension (as the error message tells you), this particular error goes away, and you will get another one, complaining that there is no instance Num [a] for any a. And that is the real problem. There is no instance Num [a], because, well, lists are not numbers.
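For comparison, once fn2 is given both of its arguments, the inferred type and the result are exactly what you wanted (a quick sketch):
fn2 :: Num a => a -> [a] -> [a]
fn2 x s = map (* x) s

-- fn2 2 [1,2,3]  evaluates to [2,4,6]
-- fn2 10 [1,2,3] evaluates to [10,20,30]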

Haskell function composition -- putStrLn . show [duplicate]

I'm puzzled by how the Haskell compiler sometimes infers types that are less
polymorphic than what I'd expect, for example when using point-free definitions.
It seems like the issue is the "monomorphism restriction", which is on by default on
older versions of the compiler.
Consider the following Haskell program:
{-# LANGUAGE MonomorphismRestriction #-}
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
If I compile this with ghc I obtain no errors and the output of the executable is:
3.0
3.0
[1,2,3]
If I change the main body to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ sort [3, 1, 2]
I get no compile time errors and the output becomes:
3.0
3
[1,2,3]
as expected. However if I try to change it to:
main = do
  print $ plus' 1.0 2.0
  print $ plus (1 :: Int) 2
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
I get a type error:
test.hs:13:16:
No instance for (Fractional Int) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the second argument of ‘($)’, namely ‘plus 1.0 2.0’
In a stmt of a 'do' block: print $ plus 1.0 2.0
The same happens when trying to call sort twice with different types:
main = do
  print $ plus' 1.0 2.0
  print $ plus 1.0 2.0
  print $ sort [3, 1, 2]
  print $ sort "cba"
produces the following error:
test.hs:14:17:
No instance for (Num Char) arising from the literal ‘3’
In the expression: 3
In the first argument of ‘sort’, namely ‘[3, 1, 2]’
In the second argument of ‘($)’, namely ‘sort [3, 1, 2]’
Why does ghc suddenly think that plus isn't polymorphic and requires an Int argument?
The only reference to Int is in an application of plus, how can that matter
when the definition is clearly polymorphic?
Why does ghc suddenly think that sort requires a Num Char instance?
Moreover if I try to place the function definitions into their own module, as in:
{-# LANGUAGE MonomorphismRestriction #-}
module TestMono where
import Data.List(sortBy)
plus = (+)
plus' x = (+ x)
sort = sortBy compare
I get the following error when compiling:
TestMono.hs:10:15:
No instance for (Ord a0) arising from a use of ‘compare’
The type variable ‘a0’ is ambiguous
Relevant bindings include
sort :: [a0] -> [a0] (bound at TestMono.hs:10:1)
Note: there are several potential instances:
instance Integral a => Ord (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
instance Ord () -- Defined in ‘GHC.Classes’
instance (Ord a, Ord b) => Ord (a, b) -- Defined in ‘GHC.Classes’
...plus 23 others
In the first argument of ‘sortBy’, namely ‘compare’
In the expression: sortBy compare
In an equation for ‘sort’: sort = sortBy compare
Why isn't ghc able to use the polymorphic type Ord a => [a] -> [a] for sort?
And why does ghc treat plus and plus' differently? plus should have the
polymorphic type Num a => a -> a -> a and I don't really see how this is different
from the type of sort and yet only sort raises an error.
Last thing: if I comment the definition of sort the file compiles. However
if I try to load it into ghci and check the types I get:
*TestMono> :t plus
plus :: Integer -> Integer -> Integer
*TestMono> :t plus'
plus' :: Num a => a -> a -> a
Why isn't the type for plus polymorphic?
This is the canonical question about monomorphism restriction in Haskell
as discussed in [the meta question](https://meta.stackoverflow.com/questions/294053/can-we-provide-a-canonical-questionanswer-for-haskells-monomorphism-restrictio).
What is the monomorphism restriction?
The monomorphism restriction as stated by the Haskell wiki is:
a counter-intuitive rule in Haskell type inference.
If you forget to provide a type signature, sometimes this rule will fill
the free type variables with specific types using "type defaulting" rules.
What this means is that, in some circumstances, if your type is ambiguous (i.e. polymorphic)
the compiler will choose to instantiate that type to something not ambiguous.
How do I fix it?
First of all you can always explicitly provide a type signature and this will
avoid the triggering of the restriction:
plus :: Num a => a -> a -> a
plus = (+) -- Okay!
-- Runs as:
Prelude> plus 1.0 1
2.0
Note that only normal type signatures on variables count for this purpose, not expression type signatures. For example, writing this would still result in the restriction being triggered:
plus = (+) :: Num a => a -> a -> a
Alternatively, if you are defining a function, you can avoid point-free style,
and for example write:
plus x y = x + y
Turning it off
It is possible to simply turn off the restriction so that you don't have to do
anything to your code to fix it. The behaviour is controlled by two extensions:
MonomorphismRestriction will enable it (which is the default) while
NoMonomorphismRestriction will disable it.
You can put the following line at the very top of your file:
{-# LANGUAGE NoMonomorphismRestriction #-}
If you are using GHCi you can enable the extension using the :set command:
Prelude> :set -XNoMonomorphismRestriction
You can also tell ghc to enable the extension from the command line:
ghc ... -XNoMonomorphismRestriction
Note: You should really prefer the first option over choosing the extension via command-line options.
Refer to GHC's page for an explanation of this and other extensions.
A complete explanation
I'll try to summarize below everything you need to know to understand what the
monomorphism restriction is, why it was introduced and how it behaves.
An example
Take the following trivial definition:
plus = (+)
you'd think you'd be able to replace every occurrence of + with plus. In particular, since (+) :: Num a => a -> a -> a, you'd expect to also have plus :: Num a => a -> a -> a.
Unfortunately this is not the case. For example if we try the following in GHCi:
Prelude> let plus = (+)
Prelude> plus 1.0 1
We get the following output:
<interactive>:4:6:
No instance for (Fractional Integer) arising from the literal ‘1.0’
In the first argument of ‘plus’, namely ‘1.0’
In the expression: plus 1.0 1
In an equation for ‘it’: it = plus 1.0 1
You may need to :set -XMonomorphismRestriction in newer GHCi versions.
And in fact we can see that the type of plus is not what we would expect:
Prelude> :t plus
plus :: Integer -> Integer -> Integer
What happened is that the compiler saw that plus had type Num a => a -> a -> a, a polymorphic type.
Moreover it happens that the above definition falls under the rules that I'll explain later, and so
the compiler decided to make the type monomorphic by defaulting the type variable a.
The default is Integer as we can see.
Note that if you try to compile the above code using ghc you won't get any errors.
This is due to how ghci handles (and must handle) the interactive definitions.
Basically every statement entered in ghci must be completely type checked before
the following is considered; in other words it's as if every statement was in a separate
module. Later I'll explain why this matters.
Some other example
Consider the following definitions:
f1 x = show x
f2 = \x -> show x
f3 :: (Show a) => a -> String
f3 = \x -> show x
f4 = show
f5 :: (Show a) => a -> String
f5 = show
We'd expect all these functions to behave in the same way and have the same type,
i.e. the type of show: Show a => a -> String.
Yet when compiling the above definitions we obtain the following errors:
test.hs:3:12:
No instance for (Show a1) arising from a use of ‘show’
The type variable ‘a1’ is ambiguous
Relevant bindings include
x :: a1 (bound at blah.hs:3:7)
f2 :: a1 -> String (bound at blah.hs:3:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show x
In the expression: \ x -> show x
In an equation for ‘f2’: f2 = \ x -> show x
test.hs:8:6:
No instance for (Show a0) arising from a use of ‘show’
The type variable ‘a0’ is ambiguous
Relevant bindings include f4 :: a0 -> String (bound at blah.hs:8:1)
Note: there are several potential instances:
instance Show Double -- Defined in ‘GHC.Float’
instance Show Float -- Defined in ‘GHC.Float’
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in ‘GHC.Real’
...plus 24 others
In the expression: show
In an equation for ‘f4’: f4 = show
So f2 and f4 don't compile. Moreover, when trying to define these functions
in GHCi we get no errors, but the type for f2 and f4 is () -> String!
The monomorphism restriction is what makes f2 and f4 require a monomorphic
type, and the different behaviour between ghc and ghci is due to different
defaulting rules.
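For instance, in a GHCi session with the restriction switched on (a sketch; newer GHCi versions have it off at the prompt by default), the () -> String type mentioned above shows up directly:
Prelude> :set -XMonomorphismRestriction
Prelude> f4 = show
Prelude> :t f4
f4 :: () -> String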
When does it happen?
In Haskell, as defined by the report, there are two distinct types of bindings:
function bindings and pattern bindings. A function binding is nothing other than
the definition of a function:
f x = x + 1
Note that their syntax is:
<identifier> arg1 arg2 ... argn = expr
where there must be at least one argument. (Modulo guards and where declarations, but they don't really matter here.)
A pattern binding is a declaration of the form:
<pattern> = expr
Again, modulo guards.
Note that variables are patterns, so the binding:
plus = (+)
is a pattern binding. It's binding the pattern plus (a variable) to the expression (+).
When a pattern binding consists of only a variable name it's called a
simple pattern binding.
The monomorphism restriction applies to simple pattern bindings!
Well, formally we should say that:
A declaration group is a minimal set of mutually dependent bindings.
Section 4.5.1 of the report.
And then (Section 4.5.5 of the report):
a given declaration group is unrestricted if and only if:
every variable in the group is bound by a function binding (e.g. f x = x)
or a simple pattern binding (e.g. plus = (+); Section 4.4.3.2 ), and
an explicit type signature is given for every variable in the group that
is bound by simple pattern binding. (e.g. plus :: Num a => a -> a -> a; plus = (+)).
Examples added by me.
So a restricted declaration group is a group where, either there are
non-simple pattern bindings (e.g. (x:xs) = f something or (f, g) = ((+), (-))) or
there is some simple pattern binding without a type signature (as in plus = (+)).
The monomorphism restriction affects restricted declaration groups.
Most of the time you don't define mutual recursive functions and hence a declaration
group becomes just a binding.
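To make the distinction concrete, here is a small sketch (the names are illustrative):
-- Unrestricted: a function binding ...
f x = x + 1

-- ... or a simple pattern binding with an explicit type signature.
plus :: Num a => a -> a -> a
plus = (+)

-- Restricted: a simple pattern binding without a signature ...
plus' = (+)

-- ... or a non-simple pattern binding.
(x:xs) = [1, 2, 3]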
What does it do?
The monomorphism restriction is described by two rules in Section
4.5.5 of the report.
First rule
The usual Hindley-Milner restriction on polymorphism is that only type
variables that do not occur free in the environment may be generalized.
In addition, the constrained type variables of a restricted declaration
group may not be generalized in the generalization step for that group.
(Recall that a type variable is constrained if it must belong to some
type class; see Section 4.5.2 .)
The second sentence ("In addition, the constrained type variables ... may not be generalized") is what the monomorphism restriction introduces.
It says that if the type is polymorphic (i.e. it contains some type variable)
and that type variable is constrained (i.e. it has a class constraint on it:
e.g. the type Num a => a -> a -> a is polymorphic because it contains a and
also constrained because the a has the constraint Num over it)
then it cannot be generalized.
In simple words not generalizing means that the uses of the function plus may change its type.
If you had the definitions:
plus = (+)
x :: Integer
x = plus 1 2
y :: Double
y = plus 1.0 2
then you'd get a type error. Because when the compiler sees that plus is
called over an Integer in the declaration of x it will unify the type
variable a with Integer and hence the type of plus becomes:
Integer -> Integer -> Integer
but then, when it will type check the definition of y, it will see that plus
is applied to a Double argument, and the types don't match.
Note that you can still use plus without getting an error:
plus = (+)
x = plus 1.0 2
In this case the type of plus is first inferred to be Num a => a -> a -> a
but then its use in the definition of x, where 1.0 requires a Fractional
constraint, will change it to Fractional a => a -> a -> a.
Rationale
The report says:
Rule 1 is required for two reasons, both of which are fairly subtle.
Rule 1 prevents computations from being unexpectedly repeated.
For example, genericLength is a standard function (in library Data.List)
whose type is given by
genericLength :: Num a => [b] -> a
Now consider the following expression:
let len = genericLength xs
in (len, len)
It looks as if len should be computed only once, but without Rule 1 it
might be computed twice, once at each of two different overloadings.
If the programmer does actually wish the computation to be repeated,
an explicit type signature may be added:
let len :: Num a => a
    len = genericLength xs
in (len, len)
For this point the example from the wiki is, I believe, clearer.
Consider the function:
f xs = (len, len)
  where
    len = genericLength xs
If len was polymorphic the type of f would be:
f :: (Num a, Num b) => [c] -> (a, b)
So the two elements of the tuple (len, len) could actually be
different values! But this means that the computation done by genericLength
must be repeated to obtain the two different values.
The rationale here is: the code contains one function call, but not introducing
this rule could produce two hidden function calls, which is counter intuitive.
With the monomorphism restriction the type of f becomes:
f :: Num a => [b] -> (a, a)
In this way there is no need to perform the computation multiple times.
Rule 1 prevents ambiguity. For example, consider the declaration group
[(n,s)] = reads t
Recall that reads is a standard function whose type is given by the signature
reads :: (Read a) => String -> [(a,String)]
Without Rule 1, n would be assigned the type ∀ a. Read a ⇒ a and s
the type ∀ a. Read a ⇒ String.
The latter is an invalid type, because it is inherently ambiguous.
It is not possible to determine at what overloading to use s,
nor can this be solved by adding a type signature for s.
Hence, when non-simple pattern bindings are used (Section 4.4.3.2 ),
the types inferred are always monomorphic in their constrained type variables,
irrespective of whether a type signature is provided.
In this case, both n and s are monomorphic in a.
Well, I believe this example is self-explanatory. There are situations when not
applying the rule results in type ambiguity.
If you disable the restriction as suggested above you will get a type error when
trying to compile the above declaration. However this isn't really a problem:
you already know that when using read you have to somehow tell the compiler
which type it should try to parse...
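For example, with the restriction disabled, one way to resolve the ambiguity is an explicit annotation on the call to reads (a sketch; in the report's example t is assumed to be a String already in scope):
t :: String
t = "42 rest"

-- Annotating reads fixes the Read instance, so the pattern binding
-- type checks even with NoMonomorphismRestriction enabled.
[(n, s)] = reads t :: [(Int, String)]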
Second rule
Any monomorphic type variables that remain when type inference for an
entire module is complete, are considered ambiguous, and are resolved
to particular types using the defaulting rules (Section 4.3.4 ).
This means that if you have your usual definition:
plus = (+)
This will have a type Num a => a -> a -> a where a is a
monomorphic type variable due to rule 1 described above. Once the whole module
is inferred the compiler will simply choose a type that will replace that a
according to the defaulting rules.
The final result is: plus :: Integer -> Integer -> Integer.
Note that this is done after the whole module is inferred.
This means that if you have the following declarations:
plus = (+)
x = plus 1.0 2.0
inside a module, before type defaulting the type of plus will be:
Fractional a => a -> a -> a (see rule 1 for why this happens).
At this point, following the defaulting rules, a will be replaced by Double
and so we will have plus :: Double -> Double -> Double and x :: Double.
Defaulting
As stated before there exist some defaulting rules, described in Section 4.3.4 of the Report,
that the inferencer can adopt and that will replace a polymorphic type with a monomorphic one.
This happens whenever a type is ambiguous.
For example in the expression:
let x = read "<something>" in show x
here the expression is ambiguous because the types for show and read are:
show :: Show a => a -> String
read :: Read a => String -> a
So the x has type Read a => a. But this constraint is satisfied by a lot of types:
Int, Double or () for example. Which one to choose? There's nothing that can tell us.
In this case we can resolve the ambiguity by telling the compiler which type we want,
adding a type signature:
let x = read "<something>" :: Int in show x
Now the problem is: since Haskell uses the Num type class to handle numbers,
there are a lot of cases where numerical expressions contain ambiguities.
Consider:
show 1
What should the result be?
As before 1 has type Num a => a and there are many types of numbers that could be used.
Which one to choose?
Having a compiler error almost every time we use a number isn't a good thing,
and hence the defaulting rules were introduced. The rules can be controlled
using a default declaration. By specifying default (T1, T2, T3) we can change
how the inferencer defaults the different types.
An ambiguous type variable v is defaultable if:
v appears only in constraints of the kind C v where C is a class
(i.e. if it appears as in: Monad (m v) then it is not defaultable).
at least one of these classes is Num or a subclass of Num.
all of these classes are defined in the Prelude or a standard library.
A defaultable type variable is replaced by the first type in the default list
that is an instance of all the ambiguous variable’s classes.
The default default declaration is default (Integer, Double).
For example:
plus = (+)
minus = (-)
x = plus 1.0 1
y = minus 2 1
The types inferred would be:
plus :: Fractional a => a -> a -> a
minus :: Num a => a -> a -> a
which, by defaulting rules, become:
plus :: Double -> Double -> Double
minus :: Integer -> Integer -> Integer
Note that this explains why in the example in the question only the sort
definition raises an error. The type Ord a => [a] -> [a] cannot be defaulted
because Ord isn't a numeric class.
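As an illustration of a custom default declaration (a sketch; the module name is made up):
module DefaultDemo where

default (Int, Double)   -- replaces the implicit default (Integer, Double)

plus = (+)   -- restricted binding, no signature

-- After rule 2 the constrained variable is defaulted to the first matching
-- type in the list, so the inferred type is:
--   plus :: Int -> Int -> Int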
Extended defaulting
Note that GHCi comes with extended defaulting rules,
which can be enabled in files as well using the ExtendedDefaultRules extension.
Under those rules the defaultable type variables need not appear only in constraints
where all the classes are standard, and there must be at least one class that is among
Eq, Ord, Show or Num and its subclasses.
Moreover the default default declaration is default ((), Integer, Double).
This may produce odd results. Taking the example from the question:
Prelude> :set -XMonomorphismRestriction
Prelude> import Data.List(sortBy)
Prelude Data.List> let sort = sortBy compare
Prelude Data.List> :t sort
sort :: [()] -> [()]
in ghci we don't get a type error but the Ord a constraint results in
a default of () which is pretty much useless.
Useful links
There are a lot of resources and discussions about the monomorphism restriction.
Here are some links that I find useful and that may help you understand or dig deeper into the topic:
Haskell's wiki page: Monomorphism Restriction
The report
An accessible and nice blog post
Sections 6.2 and 6.3 of A History Of Haskell: Being Lazy With Class deals with the monomorphism restriction and type defaulting

RankNTypes with type aliases confusion [duplicate]

I'm trying to understand how type constraints work with type aliases. First, let's assume I have the following type alias:
type NumList a = Num a => [a]
And I have the following function:
addFirst :: a -> NumList a -> a
addFirst x (y:_) = x + y
This function fails with the following error:
Type.hs:9:13: error:
• No instance for (Num a) arising from a pattern
Possible fix:
add (Num a) to the context of
the type signature for:
addFirst :: a -> NumList a -> a
• In the pattern: y : _
In an equation for ‘addFirst’: addFirst x (y : _) = x + y
Which is obvious. This problem already described here:
Understanding a rank 2 type alias with a class constraint
And I understand why we need {-# LANGUAGE RankNTypes #-} for such type aliases to work and why the previous example doesn't work. But what I don't understand is why the following example compiles fine (on ghc 8):
prepend :: a -> NumList a -> NumList a
prepend = (:)
Of course it fails in ghci if I try to pass wrong value:
λ: prepend 1 []
[1]
λ: prepend "xx" []
<interactive>:3:1: error:
• No instance for (Num [Char]) arising from a use of ‘prepend’
• When instantiating ‘it’, initially inferred to have
this overly-general type:
NumList [Char]
NB: This instantiation can be caused by the monomorphism restriction.
Seems like type checking is delayed until runtime :(
Moreover, a simple piece of code that seems to be the same doesn't compile:
first :: NumList a -> a
first = head
And produces the following error:
Type.hs:12:9: error:
• No instance for (Num a)
Possible fix:
add (Num a) to the context of
the type signature for:
first :: NumList a -> a
• In the expression: head
In an equation for ‘first’: first = head
Can somebody explain what is going on here? I expect some consistency in whether function type checks or not.
Seems like type checking is delayed until runtime :(
Not really. Here it may be a bit surprising because you get the type error in ghci after having loaded the file. However it can be explained: the file itself is perfectly fine, but that does not mean that all the expressions you can build up using the functions defined in it will be well-typed.
Higher-rank polymorphism has nothing to do with it. (+) for instance is defined in the prelude but if you try to evaluate 2 + "argh" in ghci, you'll get a type-error too:
No instance for (Num [Char]) arising from a use of ‘+’
In the expression: 2 + "argh"
In an equation for ‘it’: it = 2 + "argh"
Now, let's see what the problem is with first: it claims that given a NumList a, it can produce an a, no questions asked. But we know how to build a NumList a out of thin air! Indeed the Num a constraint means that 0 is an a and makes [0] a perfectly valid NumList a. Which means that if first were accepted then all the types would be inhabited:
first :: NumList a -> a
first = head
elt :: a
elt = first [0]
In particular Void would be too:
argh :: Void
argh = elt
Argh indeed!

Error while compiling print Either value

I'm trying to compile a simple code snippet.
main = (putStrLn . show) (Right 3.423)
Compiling results in the following error:
No instance for (Show a0) arising from a use of `show'
The type variable `a0' is ambiguous
Possible fix: add a type signature that fixes these type variable(s)
Note: there are several potential instances:
instance Show Double -- Defined in `GHC.Float'
instance Show Float -- Defined in `GHC.Float'
instance (Integral a, Show a) => Show (GHC.Real.Ratio a)
-- Defined in `GHC.Real'
...plus 42 others
In the second argument of `(.)', namely `show'
In the expression: putStrLn . show
In the expression: (putStrLn . show) (Right 3.423)
When I execute the same snippet from ghci, everything works as expected.
Prelude> let main = (putStrLn . show) (Right 3.423)
Prelude> main
Right 3.423
So the question is what is going on?
The problem is that GHC can't determine the full type of Right 3.423; it can only determine that it has the type Either a Double, and the Show instance for Either looks like instance (Show a, Show b) => Show (Either a b). Without knowing what a is (and hence which Show instance to use for it), GHC doesn't know how to print the value.
The reason why it works in interactive mode is because of the dreaded monomorphism restriction, which makes GHCi more aggressive in the defaults it chooses. This can be disabled with :set -XNoMonomorphismRestriction, and that is going to become the default in future versions of GHC since it causes a lot of problems for beginners.
The solution to this problem is to put a type signature on Right 3.423 in your source code, like
main = (putStrLn . show) (Right 3.423 :: Either () Double)
Here I've just used () for a, since we don't care about it anyway and it's the "simplest" type that can be shown. You could put String or Int or Double or whatever you want there, so long as it implements Show.
A tip, putStrLn . show is exactly the definition of print, so you can just do
main = print (Right 3.423 :: Either () Double)
As @ØrjanJohansen points out, this is not the monomorphism restriction, but rather the ExtendedDefaultRules extension that GHCi uses, which essentially does exactly what I did above and shoves () into type variables to make things work in the interactive session.
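If you prefer, you can get the same behaviour in a compiled file by enabling that extension there too (a sketch):
{-# LANGUAGE ExtendedDefaultRules #-}

-- The ambiguous 'a' in 'Either a Double' is now defaulted to (),
-- so this compiles without an explicit annotation:
main :: IO ()
main = (putStrLn . show) (Right 3.423)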

Resources