How to tell GHC what fromIntegral should do - haskell

data N_ary = N_ary Int String Int deriving Eq
stores numbers in various bases. For example, 15 in bases 2, 10, and 16 is N_ary 1 "1111" 2, N_ary 1 "15" 10, and N_ary 1 "F" 16 respectively. (The first field is -1, 0, or 1, as a sign.)
I defined an operator infixl 5 ~> for converting things into N_ary objects and a class for convertible types.
class N_aryAble a where
  (~>) :: a -> Int -> N_ary
I had no problems with instance N_aryAble Integer or instance N_aryAble N_ary (to change one base to another), but I ran into a problem with
instance N_aryAble Int where
  int ~> base = fromIntegral int ~> base
Ambiguous type variable ‘a0’ arising from a use of ‘fromIntegral’
prevents the constraint ‘(Num a0)’ from being solved.
...
Ambiguous type variable ‘a0’ arising from a use of ‘~>’
prevents the constraint ‘(N_aryAble a0)’ from being solved.
...
Type signatures within instance declarations are not allowed without a special setting.
instance N_aryAble Int where
  (~>) :: Int -> Int -> N_ary
  int ~> base = fromIntegral int ~> base
Illegal type signature in instance declaration:
(~>) :: Int -> Int -> N_ary
(Use InstanceSigs to allow this)
The following works.
instance N_aryAble Int where
  int ~> base = fromIntegral int + (0 :: Integer) ~> base
> (5::Int) ~> 2 ==> N_ary 1 "101" 2
But that seems ugly and ad hoc. Is there a better way?
Thanks.

You can provide a type annotation on the fromIntegral call to make it unambiguous.
instance N_aryAble Int where
  int ~> base = (fromIntegral int :: Integer) ~> base

The problem is not that the compiler can't deduce what type to convert from. That could be fixed with a signature for that particular instance method, which incidentally you could write like
instance N_aryAble Int where
  (~>) = (~~>)
    where (~~>) :: Int -> Int -> N_ary
          int ~~> base = fromIntegral int ~> base
But that information is already clear from the class method signature, so it won't help you.
No, the problem is that you don't specify what type to convert to, and because the argument to ~> is again polymorphic the compiler has nothing else to infer it from. This could as well convert Int to Int, causing an infinite recursion loop because you end up with the same ~> instantiation you're trying to define!
You can either clarify this with a signature on the result of fromIntegral, as shown by chi, or you can simply use toInteger, the conversion function that is monomorphic with an Integer result:
instance N_aryAble Int where
  int ~> base = toInteger int ~> base
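To see the fix in context, here is a minimal self-contained sketch. The digit-conversion helper toBase is a simplified stand-in for the asker's real implementation (it only handles non-negative magnitudes and bases up to 16, and produces lowercase hex digits); only the two instances correspond to code shown in the question.

```haskell
import Data.Char (intToDigit)

data N_ary = N_ary Int String Int deriving (Eq, Show)

-- Simplified digit conversion for non-negative Integers (bases 2..16).
toBase :: Integer -> Int -> String
toBase 0 _ = "0"
toBase n b = go n ""
  where
    go 0 acc = acc
    go m acc = go (m `div` toInteger b)
                  (intToDigit (fromIntegral (m `mod` toInteger b)) : acc)

infixl 5 ~>
class N_aryAble a where
  (~>) :: a -> Int -> N_ary

instance N_aryAble Integer where
  n ~> base = N_ary (fromIntegral (signum n)) (toBase (abs n) base) base

-- The fix: toInteger is monomorphic, so there is no ambiguity.
instance N_aryAble Int where
  int ~> base = toInteger int ~> base

main :: IO ()
main = print ((5 :: Int) ~> 2)  -- N_ary 1 "101" 2
```

Because toInteger :: Integral a => a -> Integer fixes the result type, the compiler knows to dispatch to the N_aryAble Integer instance, with no annotation needed.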

Related

Why are floatRange, floatRadix and floatDigits functions?

According to Hackage, these functions of the RealFloat class are
... constant function[s] ...
If they always remain at the same value, no matter the argument, as suggested by this description, why not simply use:
class (RealFrac a, Floating a) => RealFloat a where
  floatRadix :: Integer
  floatDigits :: Int
  floatRange :: (Int, Int)
...
Your proposed non-function methods would have type
floatRadix' :: RealFloat a => Integer
floatDigits' :: RealFloat a => Int
...
Those are ambiguous types: there is an a type variable, but it doesn't actually appear to the right of the => and thus can't be inferred from the context. And in standard Haskell, that is really the only way such a type variable can be inferred: local type signatures can also only constrain the signature head, not the constraint. So whether you write (floatDigits' :: Int) or (floatDigits' :: RealFloat Double => Int), it won't actually work – the compiler can't infer that you mean the instance RealFloat Double version of the method.
class RealFloat' a where
  floatDigits' :: Int

instance RealFloat' Double where
  floatDigits' = floatDigits (0 :: Double)
*Main> floatDigits' :: Int

<interactive>:3:1: error:
    • No instance for (RealFloat' a0)
        arising from a use of ‘floatDigits'’
    • In the expression: floatDigits' :: Int
      In an equation for ‘it’: it = floatDigits' :: Int

*Main> floatDigits' :: RealFloat Double => Int

<interactive>:4:1: error:
    • Could not deduce (RealFloat' a0)
        arising from a use of ‘floatDigits'’
      from the context: RealFloat Double
        bound by an expression type signature:
                   RealFloat Double => Int
          at :4:17-39
      The type variable ‘a0’ is ambiguous
    • In the expression: floatDigits' :: RealFloat Double => Int
      In an equation for ‘it’:
          it = floatDigits' :: RealFloat Double => Int
For this reason, Haskell does not allow you to write methods with ambiguous type in the first place. Actually trying to compile the class as I wrote it above gives this error message:
    • Could not deduce (RealFloat' a0)
      from the context: RealFloat' a
        bound by the type signature for:
                   floatDigits' :: forall a. RealFloat' a => Int
          at /tmp/wtmpf-file3738.hs:2:3-21
      The type variable ‘a0’ is ambiguous
    • In the ambiguity check for ‘floatDigits'’
      To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
      When checking the class method:
        floatDigits' :: forall a. RealFloat' a => Int
      In the class declaration for ‘RealFloat'’
The highlighted line, however, cites a GHC extension that says “it's ok, I know what I'm doing”. So if you add {-# LANGUAGE AllowAmbiguousTypes #-} to the top of the file containing the class RealFloat', the compiler will accept it.
What's the point though, when the instance can't be resolved at the use site? Well, it can actually be resolved, but only using another pretty new GHC extension:
*Main> :set -XTypeApplications
*Main> floatDigits' @Double
53
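Putting the two extensions together, a complete file version of this might look as follows (adding a Float instance, which is not in the original, to show the disambiguation at work):

```haskell
{-# LANGUAGE AllowAmbiguousTypes #-}
{-# LANGUAGE TypeApplications #-}

-- A class method whose type never mentions `a` on the right of `=>`:
-- only AllowAmbiguousTypes lets us declare it at all.
class RealFloat' a where
  floatDigits' :: Int

instance RealFloat' Double where
  floatDigits' = floatDigits (0 :: Double)

instance RealFloat' Float where
  floatDigits' = floatDigits (0 :: Float)

main :: IO ()
main = do
  -- A visible type application picks the instance at the call site.
  print (floatDigits' @Double)  -- 53
  print (floatDigits' @Float)   -- 24
```

The type application instantiates the invisible forall a in floatDigits' :: forall a. RealFloat' a => Int, which is exactly the information ordinary inference could not recover.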
The problem is that you would define these functions in multiple instances, for example:
instance RealFloat Float where
  -- ...
  floatRadix = 2
  floatDigits = 24
  floatRange = (-125, 128)

instance RealFloat Double where
  -- ...
  floatRadix = 2
  floatDigits = 53
  floatRange = (-1021, 1024)
But now a problem arises when you query, for example, floatDigits: which instance should we take? The one for Float, or the one for Double (or another type)? All of these are valid candidates.
By taking an argument of type a, we can disambiguate, for example:
Prelude> floatDigits (0 :: Float)
24
Prelude> floatDigits (0 :: Double)
53
but the value of the argument does not matter, only its type, for example:
Prelude> floatDigits (undefined :: Float)
24
Prelude> floatDigits (undefined :: Double)
53
The RealFloat class is very old. Back when it was designed, nobody had worked out really good ways to pass extra type information to a function. At that time it was common to take an argument of the relevant type and expect the user to pass undefined at that type. As leftaroundabout explained, GHC now has extensions that do this pretty nicely most of the time. But before TypeApplications, two other techniques were invented to do this job more cleanly.
To disambiguate without GHC extensions, you can use either proxy passing or newtype-based tagging. I believe both techniques were given their final forms by Edward Kmett with a last polymorphic spin by Shachaf Ben-Kiki (see Who invented proxy passing and when?). Proxy passing tends to give an easy-to-use API, while the newtype approach can be more efficient under certain circumstances. Here's the proxy-passing approach. This requires you to pass an argument of some type. Traditionally, the caller will use Data.Proxy.Proxy, which is defined
data Proxy a = Proxy
Here's how the class would look with proxy passing:
class (RealFrac a, Floating a) => RealFloat a where
  floatRadix :: proxy a -> Integer
  floatDigits :: proxy a -> Int
  ...
And here's how it would be used. Note that there's no need to pass in a value of the type you're talking about; you just pass the proxy constructor.
foo :: Int
foo = floatDigits (Proxy :: Proxy Double)
To avoid passing a runtime argument at all, you can use tagging. This is often done with the tagged package, but it's quite easy to roll your own too. You could even reuse Control.Applicative.Const, but that doesn't communicate the intention so well.
newtype Tagged t a = Tagged { unTagged :: a }
Here's how the class would look:
class (RealFrac a, Floating a) => RealFloat a where
  floatRadix :: Tagged a Integer
  floatDigits :: Tagged a Int
  ...
And here's how you'd use it:
foo :: Int
foo = unTagged (floatDigits :: Tagged Double Int)
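A runnable sketch of the tagging technique, using a fresh class name (FloatInfo, an assumed name for this illustration) so it does not clash with the Prelude's real RealFloat:

```haskell
-- The phantom parameter t carries the type information; no value of
-- type t is ever needed at runtime.
newtype Tagged t a = Tagged { unTagged :: a }

class FloatInfo a where
  digitsOf :: Tagged a Int

instance FloatInfo Double where
  digitsOf = Tagged (floatDigits (0 :: Double))

instance FloatInfo Float where
  digitsOf = Tagged (floatDigits (0 :: Float))

main :: IO ()
main = do
  -- The annotation on digitsOf selects the instance; no proxy value
  -- is passed at all.
  print (unTagged (digitsOf :: Tagged Double Int))  -- 53
  print (unTagged (digitsOf :: Tagged Float Int))   -- 24
```

Unlike the proxy version, there is no runtime argument here: the tag type in the signature alone determines the instance, which is what makes this approach potentially more efficient.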

Weird type inference in Haskell [duplicate]

I have the following function:
f :: (Int -> Int) -> Int
f = undefined
Now I want to call f with 5 (which is incorrect):
f 5
Obviously, this should not compile, because 5 is not a function from Int to Int.
So I would expect an error message like Couldn't match expected type Int -> Int with Int.
But instead I get:
No instance for (Num (Int -> Int)) arising from the literal `5'
In the first argument of `f', namely `5'
In the expression: f 5
In an equation for `it': it = f 5
Why did Num appear here?
The literal 5 can have any type in the type class Num. These types include Int, Double, Integer, etc.
Functions are not in type class Num by default. Yet a Num instance for functions might be added by the user, e.g. by defining the sum of two functions pointwise. In that case, the literal 5 can stand for the constant-five function.
Technically, the literal stands for fromInteger 5, where 5 is an Integer constant. The call f 5 is therefore actually f (fromInteger 5), which tries to convert five into an Int -> Int. This requires an instance of Num (Int -> Int).
Hence GHC does not state in its error that 5 cannot be a function (since it could be, if the user declared it such, providing a suitable fromInteger). It just states, correctly, that no Num instance can be found for integer functions.
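To make this concrete, here is a hypothetical pointwise Num instance for functions, of the kind the answer alludes to. With it in scope, a numeric literal really does denote a constant function, and a call like f 5 would typecheck:

```haskell
-- Hypothetical instance: lift all Num operations pointwise to
-- functions. Not something the Prelude provides.
instance Num b => Num (a -> b) where
  fromInteger = const . fromInteger  -- a literal becomes a constant function
  f + g       = \x -> f x + g x
  f - g       = \x -> f x - g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f

five :: Int -> Int
five = 5            -- desugars to fromInteger 5 = const 5

main :: IO ()
main = print (five 42)  -- prints 5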

No instance for (Ord int) arising from a use of `>', Haskell

Other questions and problems, although similar, are not quite like this one. In this specific case, GHC won't compile the following code, for the following reason, which I don't understand at all – the code is pretty straightforward.
--factorial
fact :: int -> int
fact 0 = 1
fact n | n > 0 = n * fact(n - 1)
main = print (fact 10)
(error:)
No instance for (Ord int) arising from a use of `>'
Possible fix:
  add (Ord int) to the context of
    the type signature for fact :: int -> int
In the expression: n > 0
In a stmt of a pattern guard for
  an equation for `fact':
    n > 0
In an equation for `fact': fact n | n > 0 = n * fact (n - 1)
Can you explain the problem to me?
Int is what you want:
fact :: int -> int
-->
fact :: Int -> Int
since in Haskell, type names must begin with a capital letter.
Edit: Thanks to Yuras for pointing this out in the comments:
Or if you want you could use a type class:
fact :: Integral a => a -> a
And you can name the type variable whichever you like, including int. Also, Num might fit your purpose better if you want to define factorial over general numbers.
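The corrected program in full, using the polymorphic Integral signature from the edit:

```haskell
-- Factorial over any Integral type; the Ord requirement for the
-- guard `n > 0` is implied by Integral (via Real and Ord).
fact :: Integral a => a -> a
fact 0 = 1
fact n | n > 0 = n * fact (n - 1)

main :: IO ()
main = print (fact (10 :: Integer))  -- 3628800
```

Note that with the lowercase signature fact :: int -> int, the name int was a type variable, so the compiler (correctly) demanded an Ord constraint on it; capitalizing to Int, or constraining with Integral, both resolve this.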

Ambiguous type variables with Haskell functional dependencies

I was playing around with the FunctionalDependencies-Extension of Haskell, along with MultiParamTypeClasses. I defined the following:
class Add a b c | a b -> c where
  (~+) :: a -> b -> c
  (~-) :: a -> b -> c
  neg :: a -> a
  zero :: a
which works fine (I've tried with instances for Int and Double with the ultimate goal of being able to add Int and Doubles without explicit conversion).
When I try to define default implementations for neg or (~-) like so:
class Add ...
  ...
  neg n = zero ~- n
GHCi (7.0.4) tells me the following:
Ambiguous type variables `a0', `b0', `c0' in the constraint:
  (Add a0 b0 c0) arising from a use of `zero'
Probable fix: add a type signature that fixes these type variable(s)
In the first argument of `(~-)', namely `zero'
In the expression: zero ~- n
In an equation for `neg': neg n = zero ~- n

Ambiguous type variable `a0' in the constraint:
  (Add a0 a a) arising from a use of `~-'
Probable fix: add a type signature that fixes these type variable(s)
In the expression: zero ~- n
In an equation for `neg': neg n = zero ~- n
I think I do understand the problem here. GHC does not know which zero to use, since it could be any zero, yielding anything, which in turn is fed into a ~-, about which we only know that it takes an a as its right argument and yields an a.
So how can I specify that it should be the zero from the very same instance, i.e. how can I express something like:
neg n = (zero :: Add a b c) ~- n
I think the a, b and c here are not the a, b and c from the surrounding class, but fresh variables, so how can I express a type that refers to the class's own type variables?
Pull neg and zero out into a superclass that only uses the one type:
class Zero a where
  neg :: a -> a
  zero :: a

class Zero a => Add a b c | a b -> c where
  (~+) :: a -> b -> c
  (~-) :: a -> b -> c
The point is that your way, zero :: Int could be the zero from Add Int Int Int, or the zero from Add Int Double Double, and there is no way to disambiguate between the two, regardless of whether you're referring to it from inside a default implementation or an instance declaration or normal code.
(You may object that the zero from Add Int Int Int and the zero from Add Int Double Double will have the same value, but the compiler can't know that someone isn't going to define Add Int Char Bool in a different module and give zero a different value there.)
By splitting the typeclass into two, we remove the ambiguity.
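A minimal compilable sketch of the split hierarchy, with Int instances added for illustration (the original question did not show its instances):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

-- Single-parameter superclass: zero and neg mention only `a`,
-- so they are never ambiguous.
class Zero a where
  neg  :: a -> a
  zero :: a

class Zero a => Add a b c | a b -> c where
  (~+) :: a -> b -> c
  (~-) :: a -> b -> c

instance Zero Int where
  neg  = negate
  zero = 0

instance Add Int Int Int where
  (~+) = (+)
  (~-) = (-)

main :: IO ()
main = print (neg (5 :: Int), (2 :: Int) ~+ (3 :: Int))  -- (-5,5)
```

Now zero :: Int refers unambiguously to the Zero Int instance, regardless of how many Add Int b c instances exist.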
You cannot express the zero function as part of the Add class. All type variables in the class head must occur in the type signature of each class method; otherwise Haskell cannot decide which instance to use, because it is given too few constraints.
In other words, zero is not a property of the class you are modelling. You are basically saying: "For any three types a, b, c, there must exist a zero value for the type a", which makes no sense; you could pick any b and c and it would solve the problem, so b and c are completely unconstrained. If you had both an Add Int Int Int and an Add Int (Maybe String) Boat, Haskell would not know which instance to prefer. You need to separate the properties of negation and "zeroness" into separate classes of types:
class Invertible a where
  invert :: a -> a

neg :: Invertible a => a -> a
neg = invert

class Zero a where
  zero :: a

class Add a b c | a b -> c where
  (~+) :: a -> b -> c
  (~-) :: a -> b -> c
I don't see why you would then even need the Invertible and Zero constraints in Add; you can always add numbers without knowing their zero value, can you not? And why require neg for ~+? Some numbers should be addable without being negatable (natural numbers, for instance). Just keep the class concerns separate.

Convert type family instances to Int

I have this code:
type family Id obj :: *
type instance Id Box = Int
And I want to make it so I can always get an Int from the Id type family. I recognize that a conversion will be required.
I thought maybe creating a class would work:
class IdToInt a where
  idToInt :: Id a -> Int

instance IdToInt Box where
  idToInt s = s
And that actually compiles. But when I try to use it:
testFunc :: Id a -> Int
testFunc x = idToInt x
I get error:
src/Snowfall/Spatial.hs:29:22:
    Couldn't match type `Id a0' with `Id a'
    NB: `Id' is a type function, and may not be injective
    In the first argument of `idToInt', namely `x'
    In the expression: idToInt x
    In an equation for `testFunc': testFunc x = idToInt x
So, how can I create a conversion for a type family Id to get an Int?
Based on the answer by ehird, I tried the following but it doesn't work either:
class IdStuff a where
  type Id a :: *
  idToInt :: Id a -> Int

instance IdStuff Box where
  type Id Box = Int
  idToInt s = s

testFunc :: (IdStuff a) => Id a -> Int
testFunc x = idToInt x
It gives error:
src/Snowfall/Spatial.hs:45:22:
    Could not deduce (Id a0 ~ Id a)
    from the context (IdStuff a)
      bound by the type signature for
                 testFunc :: IdStuff a => Id a -> Int
      at src/Snowfall/Spatial.hs:45:1-22
    NB: `Id' is a type function, and may not be injective
    In the first argument of `idToInt', namely `x'
    In the expression: idToInt x
    In an equation for `testFunc': testFunc x = idToInt x
You can't. You'll need testFunc :: (IdToInt a) => Id a -> Int. Type families are open, so anyone can declare
type instance Id Blah = ()
at any time, and offer no conversion function. The best thing to do is to put the type family in the class:
class HasId a where
  type Id a
  idToInt :: Id a -> Int

instance HasId Box where
  type Id Box = Int
  idToInt s = s
You'll still need the context, though.
You cannot use a function of type IdToInt a => Id a -> Int because there is no way to determine what type a is. The following example demonstrates this.
type family Id a :: *
type instance Id () = Int
type instance Id Char = Int
class IdToInt a where idToInt :: Id a -> Int
instance IdToInt () where idToInt x = x + 1
instance IdToInt Char where idToInt x = x - 1
main = print $ idToInt 1
Because Id () = Id Char = Int, the type of idToInt in the above context is Int -> Int, which is equal to Id () -> Int and Id Char -> Int. Remember that overloaded methods are chosen based on type. Both class instances define idToInt functions that have type Int -> Int, so the type checker cannot decide which one to use.
You should use a data family instead of a type family, and declare newtype instances.
data family Id a :: *
newtype instance Id () = IdUnit Int
newtype instance Id Char = IdChar Int
With a newtype instance, Id () and Id Char both wrap an Int, but they have different types. The type of an Id tells the type checker which overloaded function to use.
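A complete sketch of the data-family version, using a hypothetical Box type standing in for the asker's (its definition is not shown in the question):

```haskell
{-# LANGUAGE TypeFamilies #-}

data Box = Box  -- hypothetical stand-in for the asker's Box type

-- Data families are injective: Id Box determines Box, unlike a
-- type family instance.
data family Id a

newtype instance Id Box = IdBox Int

class HasId a where
  idToInt :: Id a -> Int

instance HasId Box where
  idToInt (IdBox n) = n

main :: IO ()
main = print (idToInt (IdBox 42))  -- 42
```

Because IdBox 42 has the distinct type Id Box, the checker can resolve the HasId instance from the argument alone, which is exactly what failed with the non-injective type family.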
As others have pointed out, the problem is that the compiler can't figure out which a to use. Data families are one solution, but an alternative that's sometimes easier to work with is to use a type witness.
Change your class to
class IdToInt a where
  idToInt :: a -> Id a -> Int

instance IdToInt Box where
  idToInt _ s = s
-- if you use this a lot, it's sometimes useful to create type witnesses to use
box = undefined :: Box
-- you can use it like
idToInt box someId
-- or
idToInt someBox (getId someBox)
The question you need to answer is: for any given Id, is there only one type a it should appear with? That is, is there a one-to-one correspondence between as and Id as? If so, data families are the correct approach. If not, you may prefer a witness.
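The witness approach in a complete file, again with a hypothetical Box type for illustration:

```haskell
{-# LANGUAGE TypeFamilies #-}

data Box = Box  -- hypothetical stand-in for the asker's Box type

type family Id a
type instance Id Box = Int

-- The extra `a` argument is a type witness: its type, not its value,
-- pins down the instance that `Id a` alone could not.
class IdToInt a where
  idToInt :: a -> Id a -> Int

instance IdToInt Box where
  idToInt _ s = s

-- The witness value is never evaluated, so undefined is fine.
box :: Box
box = undefined

main :: IO ()
main = print (idToInt box (7 :: Int))  -- 7
```

Note the witness is only inspected by the type checker; at runtime the first argument is ignored, so passing undefined is safe here.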
