Explicit type conversion? - haskell

This is an example function:
import qualified Data.ByteString.Lazy as LAZ
import qualified Data.ByteString.Lazy.Char8 as CHA
import Control.Lens            -- for the (^.) operator
import Network.Wreq

makeRequest :: IO (Network.Wreq.Response LAZ.ByteString)
makeRequest = do
  res <- get "http://www.example.com"
  let resBody = res ^. responseBody :: CHA.ByteString
  -- Do stuff....
  return res
I'm struggling to understand the exact purpose of CHA.ByteString in this line:
let resBody = res ^. responseBody :: CHA.ByteString
Is this explicitly stating the type must be CHA.ByteString? Or does it serve another role?

Yes, this is just explicitly stating that the type must be CHA.ByteString. This does not (by itself) incur any sort of conversion; it's just a hint for the compiler (and/or the reader) that resBody must have this type.
These kinds of local annotations are needed when a value is both produced from a function with polymorphic result, and only consumed by functions with polymorphic argument. A simple example:
f :: Int -> Int
f = fromEnum . toEnum
Here, toEnum converts an integer to an arbitrary enumerable type – it could for instance be Char. Whatever type you chose, fromEnum would be able to convert it back... the trouble is, there is no way to decide which type should be used for the intermediate result!
No instance for (Enum a0) arising from a use of ‘fromEnum’
The type variable ‘a0’ is ambiguous
Note: there are several potential instances:
  instance Integral a => Enum (GHC.Real.Ratio a)
    -- Defined in ‘GHC.Real’
  instance Enum Ordering -- Defined in ‘GHC.Enum’
  instance Enum Integer -- Defined in ‘GHC.Enum’
  ...plus 7 others
In the first argument of ‘(.)’, namely ‘fromEnum’
In the expression: fromEnum . toEnum
In an equation for ‘f’: f = fromEnum . toEnum
For some simple number classes, Haskell has defaults, so e.g. fromIntegral . round will automatically use Integer. But there are no defaults for types like ByteString, so with a polymorphic-result function like responseBody, you either need to pass the result to a monomorphic function that can only accept CHA.ByteString, or you need to add an explicit annotation that this should be the type.
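For instance, one way to make the f above compile is to pin down the intermediate type with an annotation; Char here is only an arbitrary choice to illustrate the idea:
f :: Int -> Int
f = fromEnum . (toEnum :: Int -> Char)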

The notation x :: T reads as "the expression x has the type T".
This may be necessary in the presence of type classes and higher-rank types to enable the compiler to type-check the program. For example:
main = print . show . read $ "1234"
is ambiguous, since the compiler cannot know which of the overloaded read functions to use.
In addition, it is possible to narrow the type the compiler would infer. Example:
1 :: Int
Finally, a type signature like this is often used to make the program more readable.
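A minimal sketch of both uses, based on the read example above: the annotation on read resolves the ambiguity, and the annotation on the literal narrows its type:
main :: IO ()
main = do
  print . show $ (read "1234" :: Int)   -- pick the Int parser explicitly
  print (1 :: Int)                      -- narrow the literal to Int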

Related

Haskell ad-hoc polymorphism on values, calculating the length of a list of ad-hoc polymorphic values

I am trying to understand one phenomenon from my code below:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Arrow
import Control.Monad
import Data.List
import qualified Data.Map as M
import Data.Function
import Data.Ratio

class (Show a, Eq a) => Bits a where
  zer :: a
  one :: a

instance Bits Int where
  zer = 0
  one = 1

instance Bits Bool where
  zer = False
  one = True

instance Bits Char where
  zer = '0'
  one = '1'
When I try this:
b = zer:[]
It works perfectly, but when I try:
len = length b
I get this error:
<interactive>:78:8: error:
    • Ambiguous type variable ‘a0’ arising from a use of ‘b’
      prevents the constraint ‘(Bits a0)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a0’ should be.
      These potential instances exist:
        instance [safe] Bits Bool -- Defined at main.hs:18:10
        instance [safe] Bits Char -- Defined at main.hs:22:10
        instance [safe] Bits Int -- Defined at main.hs:14:10
    • In the first argument of ‘length’, namely ‘b’
      In the expression: length b
      In an equation for ‘it’: it = length b
Can someone explain to me why it is possible to create a list from the values zer and one, but when I want to calculate the length of that list I get an error?
It's perhaps a little easier to understand the meaning of this error in the following example:
roundTrip :: String -> String
roundTrip = show . read
So roundTrip reads a String, and then shows it back into a (presumably identical) String.
But read is a polymorphic function: it parses the input string in a manner that depends on the output type. Parsing an Int is a rather different ask than parsing a Bool!
The elaborator decides which concrete implementation of read to use by looking at the inferred return type of read. But in the expression show . read, the intermediate type could be any type a which implements both Show and Read. How is the compiler supposed to choose an implementation?
You might argue that in your example it doesn't matter because length :: [a] -> Int treats its type argument uniformly. length [zer] is always 1, no matter which instance of Bits you're going through. That sort of situation is difficult for a compiler to detect in general, though, so it's simpler and more predictable to just always reject ambiguous types.
You can fix the issue by giving a concrete type annotation.
> length ([zer] :: [Bool])
1
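Equivalently (a small sketch assuming the Bits class and instances from the question), the annotation can be put on the binding itself, after which length is fine:
b :: [Bool]
b = zer : []

len :: Int
len = length b   -- no ambiguity: the elements are known to be Bool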

How fromIntegral or read works

The fromIntegral returns a Num data type. But it seems that Num is able to coerce to Double or Integer with no issue. Similarly, the read function is able to return whatever is required to fit its type signature. How does this work? And if I do need to make a similar function, how do I do it?
The type checker is able to infer not only the types of function arguments but also the return type. Actually, there is nothing special going on here. If you store the result of fromIntegral or read in an Integer, the version for that type will get called. You can create your own function in the same way.
For example:
{-# LANGUAGE FlexibleInstances #-}   -- needed for the Foo String instance

class Foo a where
  foo :: a

instance Foo Integer where
  foo = 7

instance Foo String where
  foo = "Hello"

x :: Integer
x = 3

main = do
  putStrLn foo
  print (foo + x)
Because putStrLn has type String -> IO (), the type checker finds that the type of foo in putStrLn foo must be String for the program to compile. The type of foo is Foo a => a, so it deduces that a == String and searches for a Foo String instance. Such an instance exists, and it causes the foo :: Foo String => String value to be selected there.
Similar reasoning happens in print (foo + x). x is known to be Integer, and because the type of (+) is Num a => a -> a -> a, the type checker deduces that the left argument of the addition operator must also be Integer, so it searches for a Foo Integer instance and substitutes the Integer variant.
Unlike in C++, type information does not flow only from function arguments to the (possible) return type. The type checker may even deduce function arguments based on knowledge of what the function is expected to return. Another example:
twice :: a -> [a]
twice a = [a,a]
y :: [Integer]
y = twice foo
The function argument here is of type Foo a => a which is not enough to decide which foo to use. But because the result is already known to be [Integer] the type checker finds that it has to provide value of type Integer to twice and does so by using the appropriate instance of foo.
read is a member of a typeclass which means that its implementation can depend on a type parameter.
Also, numeric literals like 1, 42, etc. are syntactic sugar for the function calls fromInteger 1, fromInteger 42, etc., and fromInteger itself is a member of the Num typeclass.
Thus, the literal 42 (or fromInteger 42) can return an Int or Double or any other Num instance depending on the context in which it is called.
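As a small illustration of that desugaring (the bindings i and d are hypothetical):
i :: Int
i = 42       -- elaborated as fromInteger (42 :: Integer) at type Int

d :: Double
d = 42       -- the very same literal, elaborated at type Double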
I'm being a little particular about terminology here, because I think the words you're using betray some misunderstandings about how Haskell works.
The fromIntegral returns a Num data type.
More precisely, fromIntegral takes as input any type that has an instance of class Integral, and can return any type that is an instance of class Num. The prelude defines it like this:
fromIntegral :: (Integral a, Num b) => a -> b
fromIntegral = fromInteger . toInteger
Types with an Integral instance implement the toInteger function, and all types with a Num instance implement fromInteger. fromIntegral uses the toInteger from the Integral a instance to turn a value of type a into an Integer. Then it uses the fromInteger from the Num b instance to convert that Integer into a value of type b.
But it seems that Num is able to coerce to Double or Integer with no issue.
Every time a Haskell function is used, it is "expanded": its definition is substituted for the function call, and the arguments used are substituted into the definition. Each time it is expanded in a different context, its type variables can be instantiated to different types. So each expansion takes certain concrete types and returns a certain concrete type; it doesn't return a typeless thing that gets coerced at some later time.
Similarly, the read function is able to return whatever that is required to fit its type signature.
read :: Read a => String -> a
read takes a String as input, and can be used in any context that returns values of a type for which an instance of class Read exists. When it is expanded during execution of a program, this type is known. It uses the particular definitions in the Read a instance to parse the string into the correct type.
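For example (a minimal sketch, not from the original answer), the surrounding type annotation decides which Read instance does the parsing:
n :: Int
n = read "42"        -- uses the Read Int instance

d :: Double
d = read "42"        -- same call, but the Read Double instance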

How to work around issue with ambiguity when monomorphic restriction turned *on*?

So, learning Haskell, I came across the dreaded monomorphic restriction, soon enough, with the following (in ghci):
Prelude> let f = print.show
Prelude> f 5
<interactive>:3:3:
No instance for (Num ()) arising from the literal `5'
Possible fix: add an instance declaration for (Num ())
In the first argument of `f', namely `5'
In the expression: f 5
In an equation for `it': it = f 5
So there's a bunch of material about this, e.g. here, and it is not so hard to work around.
I can either add an explicit type signature for f, or I can turn off the monomorphic restriction (with ":set -XNoMonomorphismRestriction" directly in ghci, or in a .ghci file).
There's some discussion about the monomorphic restriction, but it seems like the general advice is that it is ok to turn this off (and I was told that this is actually off by default in newer versions of ghci).
So I turned this off.
But then I came across another issue:
Prelude> :set -XNoMonomorphismRestriction
Prelude> let (a,g) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
<interactive>:4:5:
    No instance for (System.Random.Random t0)
      arising from the ambiguity check for `g'
    The type variable `t0' is ambiguous
    Possible fix: add a type signature that fixes these type variable(s)
    Note: there are several potential instances:
      instance System.Random.Random Bool -- Defined in `System.Random'
      instance System.Random.Random Foreign.C.Types.CChar
        -- Defined in `System.Random'
      instance System.Random.Random Foreign.C.Types.CDouble
        -- Defined in `System.Random'
      ...plus 33 others
    When checking that `g' has the inferred type `System.Random.StdGen'
    Probable cause: the inferred type is ambiguous
    In the expression:
      let (a, g) = System.Random.random (System.Random.mkStdGen 4)
      in a :: Int
    In an equation for `it':
        it = let (a, g) = System.Random.random (System.Random.mkStdGen 4)
             in a :: Int
This is actually simplified from example code in the 'Real World Haskell' book, which wasn't working for me, and which you can find on this page: http://book.realworldhaskell.org/read/monads.html (it's the Monads chapter, and the getRandom example function, search for 'getRandom' on that page).
If I leave the monomorphic restriction on (or turn it on) then the code works. It also works (with the monomorphic restriction on) if I change it to:
Prelude> let (a,_) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
-106546976
or if I specify the type of 'a' earlier:
Prelude> let (a::Int,g) = System.Random.random (System.Random.mkStdGen 4) in a :: Int
-106546976
but, for this second workaround, I have to turn on the 'scoped type variables' extension (with ":set -XScopedTypeVariables").
The problem is that in this case (problems when the monomorphic restriction is on) neither of the workarounds seems generally applicable.
For example, maybe I want to write a function that does something like this and works with arbitrary (or multiple) types, and of course in this case I most probably do want to hold on to the new generator state (in 'g').
The question is then: How do I work around this kind of issue, in general, and without specifying the exact type directly?
And, it would also be great (as a Haskell novice) to get more of an idea about exactly what is going on here, and why these issues occur..
When you define
(a,g) = random (mkStdGen 4)
then even if g itself is always of type StdGen, the value of g depends on the type of a, because different types can differ in how much they use the random number generator.
Moreover, when you (hypothetically) use g later, as long as a was polymorphic originally, there is no way to decide which type of a you want to use for calculating g.
So, taken alone, as a polymorphic definition, the above has to be disallowed because g actually is extremely ambiguous and this ambiguity cannot be fixed at the use site.
This is a general kind of problem with let/where bindings that bind several variables in a pattern, and it is probably the reason why the ordinary monomorphism restriction treats them even more strictly than single-variable equations: with a pattern, you cannot even disable the MR by giving a polymorphic type signature.
When you use _ instead, presumably GHC doesn't worry about this ambiguity as long as it doesn't affect the calculation of a. Possibly it could have detected that g is unused in the former version, and treated it similarly, but apparently it doesn't.
As for workarounds without giving unnecessary explicit types, you might instead try replacing let/where by one of the binding methods in Haskell which are always monomorphic. The following all work:
case random (mkStdGen 4) of
  (a, g) -> a :: Int

(\(a, g) -> a :: Int) (random (mkStdGen 4))

do (a, g) <- return $ random (mkStdGen 4)
   return (a :: Int) -- The result here gets wrapped in the Monad
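Applied to the Real World Haskell getRandom mentioned in the question, the case workaround might look like this (a sketch assuming the State monad from mtl's Control.Monad.State):
import Control.Monad.State (State, get, put)
import System.Random (Random, StdGen, random)

getRandom :: Random a => State StdGen a
getRandom = do
  gen <- get
  case random gen of
    (val, gen') -> do
      put gen'
      return val   -- val and gen' are bound monomorphically here, so g is never ambiguous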

Type inference subtleties

I'm having some difficulty with understanding why the inferred type signature is different from what I would expect. Let's have an example (I tried to make it as short as possible):
import Control.Applicative
import Data.Word
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Token
import Text.Parsec.Language (emptyDef)
import Text.Parsec.Prim
import Data.Functor.Identity

--parseUInt' :: Num b => ParsecT String u Identity b
parseUInt' = fromInteger <$> decimal (makeTokenParser emptyDef)
--parseUInt1 = fromInteger <$> decimal (makeTokenParser emptyDef)
--parseUInt2 = fromInteger <$> decimal (makeTokenParser emptyDef)

parsePairOfInts = do
  x <- parseUInt'
  char ','
  y <- parseUInt'
  return $ (x, y)

parseLine :: String -> Either ParseError (Word32, Word8)
parseLine = parse parsePairOfInts "(error)"

main = print . show $ parseLine "1,2"
This code does NOT compile:
test.hs:21:19:
    Couldn't match type ‘Word32’ with ‘Word8’
    Expected type: Parsec String () (Word32, Word8)
      Actual type: ParsecT String () Identity (Word32, Word32)
    In the first argument of ‘parse’, namely ‘parsePairOfInts’
    In the expression: parse parsePairOfInts "(error)"
Failed, modules loaded: none.
But if I uncomment the type signature of parseUInt' it compiles just fine.
At the same time, if I query type information in GHCi, it looks like this:
λ>:t (fromInteger <$> decimal (makeTokenParser emptyDef))
(fromInteger <$> decimal (makeTokenParser emptyDef))
:: Num b => ParsecT String u Identity b
But if I do NOT specify the type signature explicitly, the 'b' type is fixed to Word32 somehow.
If I replace parseUInt' with two different functions parseUInt1 and parseUInt2 (with the same implementation), the code compiles too.
I thought that if I don't specify a function's type, the inferred type must be the least restrictive (Num b => ...), but that's somehow not the case.
What am I really missing here?
I think this is the dreaded MonomorphismRestriction in action. If you don't provide a type signature, then GHC tries to infer a concrete type for the function once it is instantiated to a concrete type elsewhere in the code. GHC sees that the function is used to parse a Word32 on the first line of parsePairOfInts, and fixes parseUInt' to that type before reaching the second use of parseUInt' two lines down. This then leads to a type error, because the type has already been instantiated to Word32 and now it needs to be Word8.
Looks like the monomorphism restriction again. You defined something that doesn't "look" like a polymorphic value, so the compiler inferred a monomorphic type for it.
This turns out not to be the type you wanted, so you'll have to be clear that you intend the polymorphism, by adding a type signature.
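Concretely (assuming the imports from the question's code), the fix is just the signature that was commented out, which keeps parseUInt' polymorphic so it can be used at both Word32 and Word8:
parseUInt' :: Num b => ParsecT String u Identity b
parseUInt' = fromInteger <$> decimal (makeTokenParser emptyDef)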

How do I resolve this compile error: Ambiguous type variable `a1' in the constraint

One could think of this case as follows:
The application dynamically loads a module, or there is a list of functions from which the user chooses, etc. We have a mechanism for determining whether a certain type will successfully work with a function in that module. So now we want to call into that function, and we need to force the call to happen. The function could take a concrete type, or a polymorphic one; it's the latter case below, with just a type class constraint, that I'm running into problems with.
The following code results in the errors below. I think it could be resolved by specifying concrete types but I do not want to do that. The code is intended to work with any type that is an instance of the class. Specifying a concrete type defeats the purpose.
This is simulating one part of a program that does not know about the other and does not know the types of what it's dealing with. I have a separate mechanism that allows me to be sure that the types do match up properly, that the value sent in really is an instance of the type class. That's why in this case, I don't mind using unsafeCoerce. But basically I need a way to tell the compiler that I really do know it's ok and do it anyway even though it doesn't know enough to type check.
{-# LANGUAGE ExistentialQuantification, RankNTypes, TypeSynonymInstances, FlexibleInstances #-}
module Main where

import Unsafe.Coerce

main = do
  --doTest1 $ Hider "blue"
  doTest2 $ Hider "blue"

doTest1 :: Hider -> IO ()
doTest1 hh@(Hider h) =
  test $ unsafeCoerce h

doTest2 :: Hider -> IO ()
doTest2 hh@(Hider h) =
  test2 hh

test :: HasString a => a -> IO ()
test x = print $ toString x

test2 :: Hider -> IO ()
test2 (Hider x) = print $ toString (unsafeCoerce x)

data Hider = forall a. Hider a

class HasString a where
  toString :: a -> String

instance HasString String where
  toString = id
Running doTest1
[1 of 1] Compiling Main ( Test.hs, Test.o )
Test.hs:12:3:
Ambiguous type variable `a1' in the constraint:
(HasString a1) arising from a use of `test'
Probable fix: add a type signature that fixes these type variable(s)
In the expression: test
In the expression: test $ unsafeCoerce h
In an equation for `doTest1':
doTest1 hh#(Hider h) = test $ unsafeCoerce h
Running doTest2
[1 of 1] Compiling Main ( Test.hs, Test.o )
Test.hs:12:3:
Ambiguous type variable `a1' in the constraint:
(HasString a1) arising from a use of `test'
Probable fix: add a type signature that fixes these type variable(s)
In the expression: test
In the expression: test $ unsafeCoerce h
In an equation for `doTest1':
doTest1 hh#(Hider h) = test $ unsafeCoerce h
I think it could be resolved by specifying concrete types but I do not want to do that.
There's no way around it, though, with unsafeCoerce. In this particular case, the compiler can't infer the type of unsafeCoerce, because test is still too polymorphic. Even though there is just one instance of HasString, the type system won't use that fact to infer the type.
I don't have enough information about your particular application of this pattern, but I'm relatively sure that you need to rethink the way you use the type system in your program. But if you really want to do this, you might want to look into Data.Typeable instead of unsafeCoerce.
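For illustration, a hedged sketch of the Data.Typeable route mentioned above (HiderT and tryToString are hypothetical names, not part of the question's code): cast recovers the String safely instead of coercing blindly:
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

data HiderT = forall a. Typeable a => HiderT a

tryToString :: HiderT -> Maybe String
tryToString (HiderT x) = cast x   -- Just "blue" for HiderT "blue", Nothing otherwise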
Modify your data type slightly:
data Hider = forall a. HasString a => Hider a
Make it an instance of the type class in the obvious way:
instance HasString Hider where
  toString (Hider x) = toString x
Then this should work, without use of unsafeCoerce:
doTest3 :: Hider -> IO ()
doTest3 hh = print $ toString hh
This does mean that you can no longer place a value into a Hider if it doesn't implement HasString, but that's probably a good thing.
There's probably a name for this pattern, but I can't think what it is off the top of my head.
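A usage sketch of doTest3 (the main here is just illustrative):
main :: IO ()
main = doTest3 (Hider "blue")   -- prints the String stored inside the Hider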

Resources