Type inference subtleties - Haskell

I'm having some difficulty understanding why the inferred type signature differs from what I would expect. Here's an example (I tried to make it as short as possible):
import Control.Applicative
import Data.Word
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Token
import Text.Parsec.Language (emptyDef)
import Text.Parsec.Prim
import Data.Functor.Identity
--parseUInt' :: Num b => ParsecT String u Identity b
parseUInt' = fromInteger <$> decimal (makeTokenParser emptyDef)
--parseUInt1 = fromInteger <$> decimal (makeTokenParser emptyDef)
--parseUInt2 = fromInteger <$> decimal (makeTokenParser emptyDef)
parsePairOfInts = do
  x <- parseUInt'
  char ','
  y <- parseUInt'
  return (x, y)
parseLine :: String -> Either ParseError (Word32, Word8)
parseLine = parse parsePairOfInts "(error)"
main = print . show $ parseLine "1,2"
This code does NOT compile:
test.hs:21:19:
    Couldn't match type ‘Word32’ with ‘Word8’
    Expected type: Parsec String () (Word32, Word8)
      Actual type: ParsecT String () Identity (Word32, Word32)
    In the first argument of ‘parse’, namely ‘parsePairOfInts’
    In the expression: parse parsePairOfInts "(error)"
Failed, modules loaded: none.
But if I uncomment the type signature of parseUInt' it compiles just fine.
At the same time, if I query type information in GHCi, it looks like this:
λ>:t (fromInteger <$> decimal (makeTokenParser emptyDef))
(fromInteger <$> decimal (makeTokenParser emptyDef))
:: Num b => ParsecT String u Identity b
But if I do NOT specify the type signature explicitly, the 'b' type somehow gets fixed to Word32.
If I replace parseUInt' with two different functions parseUInt1 and parseUInt2 (with identical implementations), the code compiles too.
I thought that if I don't specify a function's type, the inferred type would be the least restrictive (Num b => ...), but that's somehow not the case.
What am I missing here?

I think this is the dreaded monomorphism restriction in action. If you don't provide a type signature, GHC infers a single monomorphic type for the binding as soon as it is instantiated at a concrete type somewhere in the code. GHC sees you use the function to parse a Word32 in the first line of parsePairOfInts, fixes parseUInt' to that type, and then reaches the second usage two lines down. That produces the type error: the type has already been instantiated to Word32, but now it needs to be Word8.

Looks like the monomorphism restriction again. You defined something that doesn't "look" like a polymorphic value, so the compiler inferred a monomorphic type for it.
This turns out not to be the type you wanted, so you'll have to be clear that you intend the polymorphism, by adding a type signature.
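To make this concrete, either of the following changes lets the original snippet compile; a minimal sketch showing only the lines that change:
-- Option 1: give parseUInt' the polymorphic signature explicitly
parseUInt' :: Num b => ParsecT String u Identity b
parseUInt' = fromInteger <$> decimal (makeTokenParser emptyDef)
-- Option 2: alternatively, disable the restriction at the top of the module
{-# LANGUAGE NoMonomorphismRestriction #-}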

Related

Haskell ad-hoc polymorphism on values, calculating the length of list of ad-hoc polymorphism

I am trying to understand a phenomenon in my code below:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Arrow
import Control.Monad
import Data.List
import qualified Data.Map as M
import Data.Function
import Data.Ratio
class (Show a, Eq a) => Bits a where
  zer :: a
  one :: a
instance Bits Int where
  zer = 0
  one = 1
instance Bits Bool where
  zer = False
  one = True
instance Bits Char where
  zer = '0'
  one = '1'
When I try this:
b = zer:[]
It works perfectly, but when I try:
len = length b
I get this error:
<interactive>:78:8: error:
    • Ambiguous type variable ‘a0’ arising from a use of ‘b’
      prevents the constraint ‘(Bits a0)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a0’ should be.
      These potential instances exist:
        instance [safe] Bits Bool -- Defined at main.hs:18:10
        instance [safe] Bits Char -- Defined at main.hs:22:10
        instance [safe] Bits Int -- Defined at main.hs:14:10
    • In the first argument of ‘length’, namely ‘b’
      In the expression: length b
      In an equation for ‘it’: it = length b
Can someone explain why it is possible to create a list from the values zer and one, but calculating the length of that list gives an error?
It's perhaps a little easier to understand the meaning of this error in the following example:
roundTrip :: String -> String
roundTrip = show . read
So roundTrip reads a String, and then shows it back into a (presumably identical) String.
But read is a polymorphic function: it parses the input string in a manner that depends on the output type. Parsing an Int is a rather different ask than parsing a Bool!
The elaborator decides which concrete implementation of read to use by looking at the inferred return type of read. But in the expression show . read, the intermediate type could be any type a which implements both Show and Read. How is the compiler supposed to choose an implementation?
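One way out is to pin down the intermediate type yourself; a minimal sketch, arbitrarily settling on Int:
roundTripInt :: String -> String
roundTripInt = show . (read :: String -> Int)  -- intermediate type is now fixed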
You might argue that in your example it doesn't matter because length :: [a] -> Int treats its type argument uniformly. length [zer] is always 1, no matter which instance of Bits you're going through. That sort of situation is difficult for a compiler to detect in general, though, so it's simpler and more predictable to just always reject ambiguous types.
You can fix the issue by giving a concrete type annotation.
> length ([zer] :: [Bool])
1

Explicit type conversion?

This is an example function:
import qualified Data.ByteString.Lazy as LAZ
import qualified Data.ByteString.Lazy.Char8 as CHA
import Control.Lens ((^.))  -- the ^. operator comes from lens, not wreq
import Network.Wreq
makeRequest :: IO (Network.Wreq.Response LAZ.ByteString)
makeRequest = do
  res <- get "http://www.example.com"
  let resBody = res ^. responseBody :: CHA.ByteString
  --Do stuff....
  return res
I'm struggling to understand the exact purpose of CHA.ByteString in this line:
let resBody = res ^. responseBody :: CHA.ByteString
Is this explicitly stating the type must be CHA.ByteString? Or does it serve another role?
Yes, this is just explicitly stating that the type must be CHA.ByteString. By itself this does not incur any conversion; it's just a hint for the compiler (and/or the reader) that this expression must have this type.
These kinds of local annotations are needed when a value is both produced by a function with a polymorphic result and consumed only by functions with polymorphic arguments. A simple example:
f :: Int -> Int
f = fromEnum . toEnum
Here, toEnum converts an integer to an arbitrary enumerable type – it could for instance be Char. Whatever type you chose, fromEnum would be able to convert it back... the trouble is, there is no way to decide which type should be used for the intermediate result!
No instance for (Enum a0) arising from a use of ‘fromEnum’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
  instance Integral a => Enum (GHC.Real.Ratio a)
    -- Defined in ‘GHC.Real’
  instance Enum Ordering -- Defined in ‘GHC.Enum’
  instance Enum Integer -- Defined in ‘GHC.Enum’
  ...plus 7 others
In the first argument of ‘(.)’, namely ‘fromEnum’
In the expression: fromEnum . toEnum
In an equation for ‘f’: f = fromEnum . toEnum
For some simple numeric classes, Haskell has defaulting rules, so e.g. fromIntegral . round will automatically use Integer. But there are no defaults for types like ByteString, so with a polymorphic-result function like responseBody, you either need to pass the result to a monomorphic function that accepts only CHA.ByteString, or you need an explicit annotation saying this should be the type.
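For instance, the ambiguous f above typechecks once the intermediate type is pinned; Char is an arbitrary choice here:
f :: Int -> Int
f = fromEnum . (toEnum :: Int -> Char)  -- round-trips through Char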
The notation x :: T reads: expression x has type T.
Such an annotation may be necessary in the presence of type classes and higher-rank types to enable the compiler to type-check the program. For example:
main = print . show . read $ "1234"
is ambiguous, since the compiler cannot know which of the overloaded read functions to use.
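An annotation on the result of read resolves the ambiguity, for example:
main = print . show $ (read "1234" :: Int)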
In addition, it is possible to narrow the type the compiler would infer. Example:
1 :: Int
Finally, a type signature like this is often used to make the program more readable.

haskell - "Ambiguous type variable" after qualified import

I have a little problem understanding an error message in Haskell.
For instance:
import qualified Data.Map as M
test = M.empty
This code runs as it should, without any error message.
The output looks like:
*Main> test
fromList []
But if I try something like this
import qualified Data.Map as M
test = do print M.empty
I get an error message like this
Ambiguous type variable `k0' in the constraint:
  (Show k0) arising from a use of `print'
Probable fix: add a type signature that fixes these type variable(s)
In a stmt of a 'do' block: print M.empty
In the expression: do { print M.empty }
In an equation for `test': test = do { print M.empty }
So I think it has something to do with the print statement.
But if I try it in the console (ghci)
Prelude Data.Map> print empty
fromList []
everything works fine.
So I hope someone can explain to me where the problem is.
Thanks in advance.
This code runs as it should, without any error message.
In a source file, it shouldn't.
import qualified Data.Map as M
test = M.empty
The inferred type of test is Ord k => Map k a, a polymorphic type with a constrained type variable. Since test is not syntactically a function and has no type signature, the monomorphism restriction demands that its type be made monomorphic by resolving the constrained type variable to a default type. Since the only constraint here is Ord, the defaulting rules forbid defaulting that type variable (there must be at least one numeric constraint for defaulting to be allowed).
Thus, compilation is required to fail by the language standard.
In ghci, however, there are extended defaulting rules that allow the type to be defaulted. If you print test, a further Show constraint is introduced on both type variables, and ghci defaults the type of test to Map () () when asked to print it.
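In a source file, pinning the map's type makes the print version compile; Int and String are arbitrary choices here:
test :: IO ()
test = print (M.empty :: M.Map Int String)  -- prints: fromList []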
This is because Data.Map.empty has the type Map k a: a map from keys of type k to values of type a.
print, on the other hand, has the type print :: Show a => a -> IO (), which means it can only display types that are instances of Show. M.empty, with its type Map k a, carries no such constraint: k and a can be any types at all, and nothing requires them to be showable.
So, basically, print doesn't know what type it's being asked to display.
As for why it works in ghci, I'm not entirely sure. Maybe one of the resident Haskell wizards can shed some light on that.

Haskell rank two polymorphism compile error

Given the following definitions:
import Control.Monad.ST
import Data.STRef
fourty_two = do
  x <- newSTRef (42::Int)
  readSTRef x
The following compiles under GHC:
main = (print . runST) fourty_two -- (1)
But this does not:
main = (print . runST) $ fourty_two -- (2)
But then as bdonlan points out in a comment, this does compile:
main = ((print . runST) $) fourty_two -- (3)
But, this does not compile
main = (($) (print . runST)) fourty_two -- (4)
Which seems to indicate that (3) only compiles due to special treatment of infix $; however, that still doesn't explain why (1) compiles.
Questions:
1) I've read the following two questions (first, second), and I've been led to believe $ can only be instantiated with monomorphic types. But I would similarly assume . can only be instantiated with monomorphic types, and as a result (1) would similarly fail.
Why does the first code succeed but the second code does not? (e.g. is there a special rule GHC has for the first case that it can't apply in the second?)
2) Is there a current GHC extension that compiles the second code? (Perhaps ImpredicativePolymorphism did this at some point, but it seems deprecated; has anything replaced it?)
3) Is there any way to define say `my_dollar` using GHC extensions to do what $ does, but is also able to handle polymorphic types, so (print . runST) `my_dollar` fourty_two compiles?
Edit: Proposed Answer:
Also, the following fails to compile:
main = ((.) print runST) fourty_two -- (5)
This is the same as (1), except not using the infix version of ..
As a result, it seems GHC has special rules for both $ and ., but only their infix versions.
I'm not sure I understand why the second doesn't work. We can look at the type of print . runST and observe that it is sufficiently polymorphic, so the blame doesn't lie with (.). I suspect that the special rule that GHC has for infix ($) just isn't quite sufficient. SPJ and friends might be open to re-examining it if you propose this fragment as a bug on their tracker.
As for why the third example works, well, that's just because again the type of ((print . runST) $) is sufficiently polymorphic; in fact, it's equal to the type of print . runST.
Nothing has replaced ImpredicativePolymorphism, because the GHC folks haven't seen any use cases where the extra programmer convenience outweighed the extra potential for compiler bugs. (I don't think they'd see this as compelling, either, though of course I'm not the authority.)
We can define a slightly less polymorphic ($$):
{-# LANGUAGE RankNTypes #-}
infixl 0 $$
($$) :: ((forall s. f s a) -> b) -> ((forall s. f s a) -> b)
f $$ x = f x
Then your example typechecks okay with this new operator:
*Main> (print . runST) $$ fourty_two
42
I can't say with too much authority on this subject, but here's what I think may be happening:
Consider what the typechecker has to do in each of these cases. (print . runST) has type Show t => (forall s. ST s t) -> IO (). fourty_two has type ST x Int.
The forall here is a universal quantifier: it means the argument passed in has to be polymorphic in s. That is, you must pass in a value that works for every possible s. If you don't explicitly write forall, Haskell puts it at the outermost level of the type definition. This means that fourty_two :: forall x. ST x Int and (print . runST) :: forall t. Show t => (forall s. ST s t) -> IO ().
Now, we can match forall x. ST x Int against forall s. ST s t by letting t = Int and x = s. So the direct call case works. What happens if we use $, though?
$ has type ($) :: forall a b. (a -> b) -> a -> b. Since the type of $ has no nested quantifier like the one above, the x of fourty_two gets lifted out to the outermost scope, and we would need ($) :: forall x t. ((forall s. ST s t) -> IO ()) -> ST x t -> IO (). At this point the typechecker tries to unify the two instantiations of a, namely (forall s. ST s t) and ST x t, and fails.
If you instead write ((print . runST) $) fourty_two, then the compiler first resolves the type of ((print . runST) $). It resolves the type of ($) to forall t. ((forall s. ST s t) -> IO ()) -> (forall s. ST s t) -> IO (); note that since the second occurrence of a isn't yet matched against anything, we don't have that pesky type variable leaking out to the outermost scope! So the match succeeds, the function is partially applied, and the overall type of the expression is forall t. (forall s. ST s t) -> IO (), which is right back where we started, and so it succeeds.
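For completeness: the simplest practical workaround is to apply runST directly, so no polymorphic argument ever has to pass through ($) at all:
main = print (runST fourty_two)  -- fourty_two stays polymorphic in s until runST consumes it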

How do I resolve this compile error: Ambiguous type variable `a1' in the constraint

One could think of this case as follows:
The application dynamically loads a module, or there is a list of functions from which the user chooses, etc. We have a mechanism for determining whether a certain type will successfully work with a function in that module. So now we want to call into that function, and we need to force the call through. The function could take a concrete type or a polymorphic one; it's the latter case, with just a type class constraint, where I'm running into problems.
The following code results in the errors below. I think it could be resolved by specifying concrete types but I do not want to do that. The code is intended to work with any type that is an instance of the class. Specifying a concrete type defeats the purpose.
This is simulating one part of a program that does not know about the other and does not know the types of what it's dealing with. I have a separate mechanism that allows me to be sure that the types do match up properly, that the value sent in really is an instance of the type class. That's why in this case, I don't mind using unsafeCoerce. But basically I need a way to tell the compiler that I really do know it's ok and do it anyway even though it doesn't know enough to type check.
{-# LANGUAGE ExistentialQuantification, RankNTypes, TypeSynonymInstances #-}
module Main where
import Unsafe.Coerce
main = do
  --doTest1 $ Hider "blue"
  doTest2 $ Hider "blue"
doTest1 :: Hider -> IO ()
doTest1 hh@(Hider h) =
  test $ unsafeCoerce h
doTest2 :: Hider -> IO ()
doTest2 hh@(Hider h) =
  test2 hh
test :: HasString a => a -> IO ()
test x = print $ toString x
test2 :: Hider -> IO ()
test2 (Hider x) = print $ toString (unsafeCoerce x)
data Hider = forall a. Hider a
class HasString a where
  toString :: a -> String
instance HasString String where
  toString = id
Running doTest1
[1 of 1] Compiling Main ( Test.hs, Test.o )
Test.hs:12:3:
    Ambiguous type variable `a1' in the constraint:
      (HasString a1) arising from a use of `test'
    Probable fix: add a type signature that fixes these type variable(s)
    In the expression: test
    In the expression: test $ unsafeCoerce h
    In an equation for `doTest1':
        doTest1 hh@(Hider h) = test $ unsafeCoerce h
Running doTest2
[1 of 1] Compiling Main ( Test.hs, Test.o )
Test.hs:12:3:
    Ambiguous type variable `a1' in the constraint:
      (HasString a1) arising from a use of `test'
    Probable fix: add a type signature that fixes these type variable(s)
    In the expression: test
    In the expression: test $ unsafeCoerce h
    In an equation for `doTest1':
        doTest1 hh@(Hider h) = test $ unsafeCoerce h
I think it could be resolved by specifying concrete types but I do not want to do that.
There's no way around it with unsafeCoerce alone, though. In this particular case, the compiler can't infer the type of unsafeCoerce, because test is still too polymorphic. Even though there is just one instance of HasString, the type system won't use that fact to infer the type.
I don't have enough information about your particular application of this pattern, but I'm relatively sure that you need to rethink the way you use the type system in your program. But if you really want to do this, you might want to look into Data.Typeable instead of unsafeCoerce.
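To illustrate the Data.Typeable route: a minimal sketch, where HiderT and tryToString are hypothetical names invented for this example, replacing unsafeCoerce with a checked cast:
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)
data HiderT = forall a. Typeable a => HiderT a
-- Safely recover a String if that is what's hidden; Nothing otherwise.
tryToString :: HiderT -> Maybe String
tryToString (HiderT x) = cast x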
Modify your data type slightly:
data Hider = forall a. HasString a => Hider a
Make it an instance of the type class in the obvious way:
instance HasString Hider where
  toString (Hider x) = toString x
Then this should work, without use of unsafeCoerce:
doTest3 :: Hider -> IO ()
doTest3 hh = print $ toString hh
This does mean that you can no longer place a value into a Hider if it doesn't implement HasString, but that's probably a good thing.
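Usage then looks like this (note that on newer GHCs the HasString String instance may also need FlexibleInstances):
main :: IO ()
main = doTest3 (Hider "blue")  -- prints "blue" (show adds the quotes)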
There's probably a name for this pattern, but I can't think what it is off the top of my head.
