Evaluation, let and where in Haskell

I'm currently learning Haskell and trying to understand how typeclasses are evaluated, and how let and where work. This code runs fine:
{-# LANGUAGE FlexibleInstances #-}
class Expr a where
  literal :: Integer -> a
instance Expr Integer where
  literal = id
instance Expr [Integer] where
  literal i = [i]
coerceInteger :: Integer -> Integer
coerceInteger = id
main = print $ coerceInteger (literal 100) : literal 100 -- Prints [100,100]
but changing the main function to
main = print $ coerceInteger expr : expr
  where expr = literal 200
causes a compiler error:
Couldn't match expected type `[Integer]' with actual type `Integer'
    In the second argument of `(:)', namely `expr'
    In the second argument of `($)', namely `coerceInteger expr : expr'
    In the expression: print $ coerceInteger expr : expr
I'm guessing this is because in the first main function the literal 100 appears twice, so each occurrence can be typed independently, whereas in the second example literal 200 appears only once and the compiler is forced to choose a single type for it.
How can I factor out that code to avoid repeating myself, without causing this error? I tried using let expr = literal 300 in ... but ran into the same issue.

The problem is that in your first example the two occurrences of literal 100 are interpreted differently in their two different contexts. Think of it as
((:) :: a -> [a] -> [a])
((coerceInteger :: Integer -> Integer) (literal 100 :: Expr a => a))
(literal 100 :: Expr a => a)
Just based on the types, the compiler determines that the first literal 100 must have type Integer, because it's being passed to coerceInteger, which takes a value of type Integer. This also fixes the type of (:) to Integer -> [Integer] -> [Integer], implying that the last literal 100 has to have type [Integer].
In the second example, you're saying that both occurrences have the same value, and therefore the same type, which is impossible: the first must be an Integer (it's passed to coerceInteger), while the second must be a list for (:) to type check.
This actually occurs because of the dreaded monomorphism restriction. You can fix the problem in two ways: turn off the monomorphism restriction with {-# LANGUAGE NoMonomorphismRestriction #-}, or give expr an explicit type signature that keeps it generalized:
main :: IO ()
main = print $ coerceInteger expr : expr
  where
    expr :: Expr a => a
    expr = literal 100
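The same signature also works with let, which is what the question originally tried. A sketch, assuming the Expr class, its instances and coerceInteger from the question are in scope:
main :: IO ()
main =
  let expr :: Expr a => a
      expr = literal 200
  in  print (coerceInteger expr : expr)   -- prints [200,200]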
Either of these approaches works; whichever you choose, I would recommend always providing type signatures to help avoid these problems.
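For reference, the other fix, turning off the restriction, would look roughly like this as a complete module. This is a sketch; it prints [200,200]:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE NoMonomorphismRestriction #-}

class Expr a where
  literal :: Integer -> a

instance Expr Integer where
  literal = id

instance Expr [Integer] where
  literal i = [i]

coerceInteger :: Integer -> Integer
coerceInteger = id

-- With the restriction disabled, expr is generalised to Expr a => a,
-- so each use site may pick its own instance.
main :: IO ()
main = print $ coerceInteger expr : expr
  where expr = literal 200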
In fact, once you add the type signature you can even do things like
main :: IO ()
main = print $ coerceInteger expr : expr : expr : expr : expr : expr
  where
    expr :: Expr a => a
    expr = literal 100
without any problems; this will print [100,100,100,100,100,100]. The initial coerceInteger is still needed, though, because otherwise the compiler won't know which type to instantiate expr at, and therefore won't be able to pick a Show instance for print.
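If you'd rather drop coerceInteger, another option (a sketch, reusing the definitions from the question) is to pin the element type with a single annotation on the whole expression, which also gives print its Show instance:
main :: IO ()
main = print (expr : expr : expr :: [Integer])   -- prints [100,100,100]
  where
    expr :: Expr a => a
    expr = literal 100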

Related

Why does ghc warn that ^2 requires "defaulting the constraint to type 'Integer'"?

If I compile the following source file with ghc -Wall:
main = putStr . show $ squareOfSum 5
squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]
I get:
powerTypes.hs:4:18: warning: [-Wtype-defaults]
    • Defaulting the following constraints to type ‘Integer’
        (Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
        (Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
    • In the expression: (^ 2)
      In the expression: (^ 2) $ sum [1 .. n]
      In an equation for ‘squareOfSum’:
          squareOfSum n = (^ 2) $ sum [1 .. n]
  |
4 | squareOfSum n = (^2) $ sum [1..n]
  |                 ^^
I understand that the type of (^) is:
Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a
which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:
Num --> Integral --> Int or Integer
where --> denotes "includes" and the first two are typeclasses while the last two are types.
Why does ghc not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is ghc defaulting anything? Is replacing 2 with (2 :: Int) a good way to resolve this warning?
In Haskell, numeric literals have a polymorphic type
2 :: Num a => a
This means that the expression 2 can be used to generate a value in any numeric type. For instance, all these expression type-check:
2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance
Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,
foo :: Int -> Int
foo x = x + 2
Here, x is an Int because of the type annotation, and + requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type
(+) :: Num a => a -> a -> a
Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:
class C a where bar :: a -> String
instance C Int where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x
test :: String
test = bar 2
What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we cannot find an instance for C T.
This code is inherently ambiguous. In such a case, Haskell mandates that numeric types that cannot be deduced are defaulted to Integer (the programmer can change this default to another type, but that's not relevant now). Hence we have test = "Integer: 2".
While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.
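If Int is the behaviour the programmer actually wanted, an explicit annotation at the use site selects it. A minimal sketch, reusing the class C example above:
test :: String
test = bar (2 :: Int)   -- unambiguous: uses the C Int instance, so test == "Int: 2"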
In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2 :: Int) and x ^ (2 :: Integer) could lead to different results. We know this is not the case, since we know the semantics of (^), but to the compiler (^) is just some function with that type, which could behave differently on Int and Integer. Consider, e.g.,
a ^ n = if n + 3000000000 < 0 then 0 else 1
When n = 2, if we use n :: Int the guard could be true on a 32-bit system because of overflow. This is not the case when using n :: Integer, which never overflows.
The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
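Applied to the code in the question, a sketch that compiles cleanly with -Wall (the Integer annotation on the call is only there to avoid a second defaulting warning at the use site):
squareOfSum :: Integral a => a -> a
squareOfSum n = (^ (2 :: Int)) $ sum [1..n]

main :: IO ()
main = putStr . show $ squareOfSum (5 :: Integer)   -- prints 225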

Type is inferred differently in a lexical binding in GHCi [duplicate]

Numeric literals have a polymorphic type:
*Main> :t 3
3 :: (Num t) => t
But if I bind a variable to such a literal, the polymorphism is lost:
x = 3
...
*Main> :t x
x :: Integer
If I define a function, on the other hand, it is of course polymorphic:
f x = 3
...
*Main> :t f
f :: (Num t1) => t -> t1
I could provide a type signature to ensure the x remains polymorphic:
x :: Num a => a
x = 3
...
*Main> :t x
x :: (Num a) => a
But why is this necessary? Why isn't the polymorphic type inferred?
It's the monomorphism restriction, which says that all values that are defined without parameters and don't have an explicit type annotation should have a monomorphic type. This restriction can be disabled in ghc and ghci using -XNoMonomorphismRestriction.
The reason for the restriction is that, without it, longCalculation 42 would be evaluated twice, while most people would probably expect/want it to be evaluated only once:
longCalculation :: Num a => a -> a
longCalculation = ...
x = longCalculation 42
main = print $ x + x
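A concrete way to watch this happen, using Debug.Trace. This is only a sketch: the squaring body and the trace message are illustrative stand-ins for a real long calculation.
{-# LANGUAGE NoMonomorphismRestriction #-}
import Debug.Trace (trace)

longCalculation :: Num a => a -> a
longCalculation n = trace "longCalculation runs" (n * n)

-- With the restriction disabled, x is generalised to Num a => a,
-- i.e. a function of a Num dictionary, so each use can redo the work.
x = longCalculation 42

main :: IO ()
main = print (x + x)   -- compiled without -O, the trace message typically appears twice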
To expand on sepp2k's answer a bit: if you try to compile the following (or load it into GHCi), you get an error:
import Data.List (sort)
f = head . sort
This is a violation of the monomorphism restriction because we have a class constraint (introduced by sort) but no explicit arguments: we're (somewhat mysteriously) told that we have an Ambiguous type variable in the constraint Ord a.
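Either of the usual cures works here; a sketch (the name smallest is only for illustration):
import Data.List (sort)

-- Fix 1: an explicit polymorphic signature, so the restriction no longer applies.
smallest :: Ord a => [a] -> a
smallest = head . sort

-- Fix 2: eta-expansion gives the binding an argument, which also
-- takes it out of the restriction's scope.
f xs = head (sort xs)

main :: IO ()
main = print (smallest [3, 1, 2 :: Int], f "haskell")   -- prints (1,'a')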
Your example (let x = 3) has a similarly ambiguous type variable, but it doesn't give the same error, because it's saved by Haskell's "defaulting" rules:
Any monomorphic type variables that remain when type inference for an entire module is complete, are considered ambiguous, and are resolved to particular types using the defaulting rules (Section 4.3.4).
See this answer for more information about the defaulting rules—the important point is that they only work for certain numeric classes, so x = 3 is fine while f = head . sort isn't.
As a side note: if you'd prefer that x = 3 end up being an Int instead of an Integer, and y = 3.0 be a Rational instead of a Double, you can use a "default declaration" to override the usual defaulting rules:
default (Int, Rational)
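In a module that would look something like this (a sketch; printing the pair is just there to force the defaulting to happen):
default (Int, Rational)

x = 3      -- defaults to Int instead of Integer
y = 3.0    -- defaults to Rational instead of Double

main :: IO ()
main = print (x, y)   -- (3,3 % 1)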

ambiguity error with `reads` in ghc-7.8

I am testing the code for Write yourself a Scheme in 48 hours with GHC-7.8.2, which gives me an error about ambiguity that I don't recall encountering in previous versions of GHC.
The excerpt is below, with the problem line marked:
data LispVal = Atom String
             | List [LispVal]
             | DottedList [LispVal] LispVal
             | Number Integer
             | String String
             | Bool Bool
unpackNum :: LispVal -> Integer
unpackNum (Number n) = n
unpackNum (String n) = let parsed = reads n in   -- problem line
                           if null parsed
                              then 0
                              else fst $ parsed !! 0
unpackNum (List [n]) = unpackNum n
unpackNum _ = 0
and the error says:
No instance for (Read a0) arising from a use of ‘parsed’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
  instance Read a => Read (Control.Applicative.ZipList a)
    -- Defined in ‘Control.Applicative’
  instance Read () -- Defined in ‘GHC.Read’
  instance (Read a, Read b) => Read (a, b) -- Defined in ‘GHC.Read’
  ...plus 26 others
If I change the problem line to
unpackNum (String n) = let parsed = reads n :: [(Integer, String)] in
then everything works fine.
I don't see why GHC failed to infer the type for ReadS from the signature of unpackNum. Can someone please explain what triggered the error?
(
-- EDIT --
Just some follow-up. From what I understand, the signature unpackNum :: LispVal -> Integer and the fact that fst $ parsed !! 0 is one of its return values tell us that parsed has type [(Integer, b)], and from type ReadS a = String -> [(a, String)], parsed should have type [(a, String)]. Shouldn't these two types unify to [(Integer, String)] and fix the type of parsed?
Can someone please explain why NoMonomorphismRestriction would break the above reasoning?
-- EDIT2 --
From the answers, I can understand how NoMonomorphismRestriction could cause the issue here. Still, what I don't understand is how this "two types for the same expression" behavior is consistent with laziness in Haskell. In the example, parsed (i.e. reads n) is a single expression in one block and should be evaluated only once. How can it have type a the first time it is evaluated and type Integer the second time?
)
Thanks,
This is triggered if NoMonomorphismRestriction is active, which, by the way, is now the case by default in GHCi since 7.8 (see the release notes, Section 1.5.2.3).
If the monomorphism restriction is disabled, the definition of parsed gets a polymorphic type, namely
parsed :: Read a => [(a, String)]
and then the first use in null parsed doesn't have sufficient contextual information to resolve what a is.
This happens to be one of the few cases where the monomorphism restriction actually does some good: with the polymorphic type, even if both use sites had sufficient type information to resolve the class constraint, the actual parsing would happen twice.
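To make the double-parsing point concrete, and to address the EDIT2 above: a constrained binding like parsed :: Read a => [(a, String)] is compiled roughly as a function that takes the Read dictionary as an argument, so it isn't a single shared value at all; each use can instantiate it at a different type, and each instantiation redoes the work. A hand-rolled sketch of that translation (ReadDict, parsedWith and unpackNum' are illustrative names, not GHC's actual internals):
-- An explicit "dictionary" standing in for the Read class.
newtype ReadDict a = ReadDict { readsVia :: String -> [(a, String)] }

integerDict :: ReadDict Integer
integerDict = ReadDict reads

-- A binding like  parsed :: Read a => [(a, String)]  behaves roughly like
-- a function of the dictionary, so every use re-runs the parse.
parsedWith :: ReadDict a -> String -> [(a, String)]
parsedWith d n = readsVia d n

unpackNum' :: String -> Integer
unpackNum' n =
  if null (parsedWith integerDict n)            -- parses once here...
    then 0
    else fst (head (parsedWith integerDict n))  -- ...and again here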
The best solution is still to use pattern matching as suggested in acomar's answer.
The types should unify but don't in the presence of NoMonomorphismRestriction (as noted in the comments by @FedorGogolev and @kosmikus). However, the following more idiomatic approach removes the need for the type annotation in any case:
data LispVal = Atom String
             | List [LispVal]
             | DottedList [LispVal] LispVal
             | Number Integer
             | String String
             | Bool Bool
unpackNum :: LispVal -> Integer
unpackNum (Number n) = n
unpackNum (String n) = case reads n of
    []          -> 0
    ((x, _):xs) -> x
unpackNum (List [n]) = unpackNum n
unpackNum _ = 0
The Difference Between Case and Null
It boils down to the fact that null is a function whereas case is straight syntax.
null :: [a] -> Bool
So with -XNoMonomorphismRestriction enabled, parsed is kept as polymorphic as possible, and applying null to it doesn't constrain its element type in any way; at that use site the compiler has no way to determine the return type of reads, so the type is ambiguous and we get the error. With the case expression, the compiler types the whole expression at once: the branches must produce an Integer, and the pattern match connects that result back to the scrutinee, pinning down the return type of reads.
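Another option, if you want to keep the null test, is to give parsed a monomorphic signature in a where clause, which is the same cure as in the monomorphism-restriction discussion above. A sketch of just the String clause, assuming the LispVal definitions from the question:
unpackNum (String n) = if null parsed then 0 else fst (head parsed)
  where
    parsed :: [(Integer, String)]
    parsed = reads n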
