Overloading operators :: HaskellNewbie

I'm making a new type so I can calculate big numbers with some precision. Basically it's a Double and an Integer used to represent a number as Double * 10 ^ Integer. Now I began to write the program and it was going "ok" until I tried to make my new number an instance of Num, so I could simply use + to add my new numbers up. This makes them easier to use in my existing programs. But I'm just getting errors along the lines of "Could not deduce" from my code (I'll post an example below). I somewhat understand the error, but I can't seem to get around the problem. If you wish to compile the code, comment out lines 4 and 5.
I've been working on this for hours and it's "killing" me.
newtype Sci f p = Sci (f ,p) deriving (Eq,Show)
instance (Floating a,Integral b) => Num (Sci a b) where
  Sci (a,b) * Sci (c,d) = fixSci( Sci(a*c,b*d) )
mulSci :: Sci Double Integer -> Sci Double Integer -> Sci Double Integer
mulSci (Sci(a,b)) (Sci(c,d)) = fixSci (Sci(a*c,b*d))
mkSci :: Double -> Sci Double Integer
mkSci 0 = Sci(0, 0)
mkSci n = let lg = (floor ((log10 . abs) n)) in Sci((n/(10**(fromIntegral lg))), if lg > 0 then lg else 0)
fixSci :: Sci Double Integer -> Sci Double Integer
fixSci (Sci(a,b)) = let n = mkSci a in (\(Sci(c,d)) -> Sci(c,b+d)) n
fromSci (Sci(a,b)) = a*10**(fromIntegral b)
showSci (Sci(a,b)) = (show a)++"e"++(show b)
lx :: Double
lx = log 10
log10 :: Double -> Double
log10 y = log y / lx
-- ~ main = putStrLn $ showSci $ fixSci $ Sci(95,0)
main = putStrLn $ showSci $ mkSci 95
Here is an example error:
sci.hs:5:40:
Could not deduce (a ~ Double)
from the context (Floating a, Integral b)
bound by the instance declaration at sci.hs:4:10-49
`a' is a rigid type variable bound by
the instance declaration at sci.hs:4:20
In the first argument of `(*)', namely `a'
In the expression: a * c
In the first argument of `Sci', namely `(a * c, b * d)'
sci.hs:5:44:
Could not deduce (b ~ Integer)
from the context (Floating a, Integral b)
bound by the instance declaration at sci.hs:4:10-49
`b' is a rigid type variable bound by
the instance declaration at sci.hs:4:31
In the first argument of `(*)', namely `b'
In the expression: b * d
In the first argument of `Sci', namely `(a * c, b * d)'
Any help is much appreciated!

Admittedly, reading and understanding GHC error messages is not easy. Therefore, your question is valid.
Obviously, the messages are referring to your definition of the * operator. We can tell this from the line number mentioned in sci.hs:5:40 and from the fact that (*) is the only one you defined in your Num instance.
Remember the general type of (*); it is
(*) :: Num n => n -> n -> n
which means: for all types n that have a Num instance, if you give two values of that type to (*), you will get back another value of the same type. Or, more simply: (*) will work for any numeric type, as long as both factors are of the same type, and the result will have that same type as well.
Needless to say, your implementation of * must fulfill this contract. And because your Sci type itself is polymorphic, your incarnation has the type:
(*) :: (Floating a, Integral b) => Sci a b -> Sci a b -> Sci a b
So you are claiming that your multiplication will work for type Sci a b for any types a and b, as long as a is a Floating type and b is an Integral type. That is, it should work for Sci Float Int as well as for Sci Double Integer.
By now, you should already know what the error is: Your implementation doesn't live up to that promise. In fact, it only works for Sci Double Integer, because your helper function fixSci can only work with Sci Double Integer.
The error message tries to tell you just that, in a more technical form. It lists:
- the type conflict, a vs. Double ("You promised to give me anything, but all I got was a lousy Double!"),
- the constraints actually in scope ("You said a was an instance of Floating") and where they were introduced,
- the expression(s) that led the compiler to infer Double.
Unfortunately, the compiler stops short (for brevity, as it is known that newbies never read error messages anyway :) before the interesting part. It could have included something like
in the expression fixSci (Sci (a*c, b*d)),
and since fixSci takes an argument of type Sci Double Integer,
I concluded that Sci (a*c, b*d) must be Sci Double Integer,
and hence (a*c) must be Double.

Not really related to your question, but to your code:
Should it really be mulSci (Sci(a,b)) (Sci(c,d)) = fixSci (Sci(a*c,b*d))?
Shouldn't it be b+d instead of b*d?
Why don't you just use data Sci = Sci Double Integer? (A newtype can only wrap a single field, so the two-field version has to be a plain data declaration.)
Then you would have
instance Num Sci where
  (Sci m1 e1) * (Sci m2 e2) = fixSci (Sci (m1*m2) (e1+e2))
  (Sci m1 e1) + (Sci m2 e2) = <some rather complicated expression>
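For concreteness, here is one way that monomorphic version could be fleshed out. This is only a sketch, not the answer's code: mkSci' and fixSci' are hypothetical adaptations of the question's mkSci and fixSci to the two-field representation, and details such as normalisation of negative exponents are not tuned.
data Sci = Sci Double Integer deriving Eq

instance Show Sci where
  show (Sci m e) = show m ++ "e" ++ show e

-- normalise a Double into mantissa/exponent form
mkSci' :: Double -> Sci
mkSci' 0 = Sci 0 0
mkSci' n = let e = floor (logBase 10 (abs n))
           in Sci (n / 10 ^^ e) e

-- re-normalise the mantissa, keeping the existing exponent as an offset
fixSci' :: Sci -> Sci
fixSci' (Sci m e) = let Sci m' e' = mkSci' m in Sci m' (e + e')

instance Num Sci where
  (Sci m1 e1) * (Sci m2 e2) = fixSci' (Sci (m1 * m2) (e1 + e2))
  (Sci m1 e1) + (Sci m2 e2)                  -- align exponents, then add mantissas
    | e1 >= e2  = fixSci' (Sci (m1 + m2 / 10 ^^ (e1 - e2)) e1)
    | otherwise = fixSci' (Sci (m2 + m1 / 10 ^^ (e2 - e1)) e2)
  negate (Sci m e) = Sci (negate m) e
  abs (Sci m e)    = Sci (abs m) e
  signum (Sci m _) = Sci (signum m) 0
  fromInteger n    = mkSci' (fromInteger n)
With this, Sci 9.5 1 * Sci 9.5 1 shows as 9.025e3 and Sci 9.5 1 + Sci 5.0 0 as 1.0e2.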

Not your original question, but your showSci function bothers me:
Don't derive Show, write your own show function:
instance Show Sci where
  show (Sci (a,b)) = (show a) ++ "e" ++ (show b)
Then main becomes:
main = print $ mkSci 95

Related

Why does GHC warn that ^2 requires "defaulting the constraint to type 'Integer'"?

If I compile the following source file with ghc -Wall:
main = putStr . show $ squareOfSum 5
squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]
I get:
powerTypes.hs:4:18: warning: [-Wtype-defaults]
• Defaulting the following constraints to type ‘Integer’
(Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
(Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
• In the expression: (^ 2)
In the expression: (^ 2) $ sum [1 .. n]
In an equation for ‘squareOfSum’:
squareOfSum n = (^ 2) $ sum [1 .. n]
|
4 | squareOfSum n = (^2) $ sum [1..n]
| ^^
I understand that the type of (^) is:
Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a
which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:
Num --> Integral --> Int or Integer
where --> denotes "includes" and the first two are typeclasses while the last two are types.
Why does GHC not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is GHC defaulting anything? Is replacing 2 with 2 :: Int a good way to resolve this warning?
In Haskell, numeric literals have a polymorphic type
2 :: Num a => a
This means that the expression 2 can be used to generate a value in any numeric type. For instance, all these expressions type-check:
2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance
Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,
foo :: Int -> Int
foo x = x + 2
Here, x is an Int because of the type annotation, and + requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type
(+) :: Num a => a -> a -> a
Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:
class C a where bar :: a -> String
instance C Int where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x
test :: String
test = bar 2
What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we can not find an instance for C T.
This code is inherently ambiguous. In such a case, Haskell mandates that numeric types that can not be deduced are defaulted to Integer (the programmer can change this default to another type, but it's not relevant now). Hence we have test = "Integer: 2".
While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.
In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2::Int) and x ^ (2::Integer) could lead to different results. We know this is not the case since we know the semantics of (^), but for the compiler (^) is only a random function with that type, which could behave differently on Int and Integer. Consider, e.g.,
a ^ n = if n + 3000000000 < 0 then 0 else 1
When n = 2, if we use n :: Int the if guard could be true on a 32 bit system. This is not the case when using n :: Integer which never overflows.
The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
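Applied to the question's function, that standard fix is just (a sketch; pinning the exponent's type is what removes the ambiguity the warning is about):
squareOfSum :: Integral a => a -> a
squareOfSum n = (^ (2 :: Int)) $ sum [1 .. n]
The Integral a constraint on squareOfSum itself is untouched; only the exponent, which was otherwise unconstrained, gets a concrete type.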

Why is this snippet valid with an explicit value, but invalid as a function?

I'm trying to work on a problem where I need to calculate the "small" divisors of an integer. I'm just brute-forcing through all numbers up to the square root of the given number, so to get the divisors of 10 I'd write:
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
This seems to work well. But as soon as I put this into a function
f n=[k|k<-[1...floor(sqrt n)],rem n k<1]
and actually call this function, I get an error:
f 10
No instance for (Floating t0) arising from a use of `it'
The type variable `t0' is ambiguous
Note: there are several potential instances:
instance Floating Double -- Defined in `GHC.Float'
instance Floating Float -- Defined in `GHC.Float'
In the first argument of `print', namely `it'
In a stmt of an interactive GHCi command: print it
As far as I understand, the actual print function that prints the result to the console is causing trouble, but I cannot figure out what is wrong. It says the type is ambiguous, but the function can clearly only return a list of integers. Then again, I checked the type, and the (inferred) type of f is
f :: (Floating t, Integral t, RealFrac t) => t -> [t]
I can understand that f should be able to accept any real numerical value, but can anyone explain why the return type should be anything other than Integral or Int?
[k|k<-[1...floor(sqrt 10)],rem 10 k<1]
This works because the first 10 is not the same as the latter one. To see this, we need the type signatures of the functions involved:
sqrt :: Floating a => a -> a
rem :: Integral a => a -> a -> a
So the first one means that sqrt works for anything that has a floating-point representation - e.g. Float, Double, ... - and the second one works for Int, Integer (arbitrary precision), Word8 (unsigned 8-bit integers), ...
So for the 10 in sqrt 10, the compiler says "ahh, this is a floating-point number, no problemo", and for the 10 in rem 10 k, "ahh, this is an integer-like number, no problemo" as well.
But when you bundle them up in a function, you are saying that n has to be both a floating-point and an integral number. The compiler knows of no such type, so it complains.
So what do we do to fix that? (Side note: ranges in Haskell are written with .., not ...!) Let us start by taking a concrete solution and generalizing it.
f :: Int -> [Int]
f n = [k|k <- [1..n'],rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n
The necessary part was converting the Int to a floating-point number before taking the square root. But if you put that in a library, all your users have to stick with Int, which is okay but far from ideal - so how do we generalize (as promised)? We let GHCi do it for us; using a lazy language, we ourselves tend to be lazy as well.
We start by commenting out the type signature:
-- f :: Int -> [Int]
f n = [k|k <- [1..n'],rem n k < 1]
  where n' = floor $ sqrt $ fromIntegral n
$> ghci MyLib.hs
....
MyLib > :type f
f :: Integral a => a -> [a]
Then we can take this signature, put it into the library, and if someone works with Word8 or Integer, that will work as well.
Another solution would be to use rem (floor n) k < 1 and have
f :: (Floating a, Integral b) => a -> [b]
as the type, but that would be kind of awkward.
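To see why it would be awkward, here is a rough sketch of that variant (my illustration, not code from the answer; note that floor drags in an extra RealFrac constraint on top of the Floating one, and callers now have to pass a floating-point argument and choose an integral result type):
f :: (Floating a, RealFrac a, Integral b) => a -> [b]
f n = [k | k <- [1 .. floor (sqrt n)], rem (floor n) k < 1]

-- e.g. f (10 :: Double) :: [Integer]  evaluates to [1,2]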

How do I cast from Integer to Fractional

Let's say I have the following Haskell type description:
divide_by_hundred :: Integer -> IO()
divide_by_hundred n = print(n/100)
Why is it that when I attempt to run this through ghc I get:
No instance for (Fractional Integer) arising from a use of `/'
Possible fix: add an instance declaration for (Fractional Integer)
In the first argument of `print', namely `(n / 100)'
In the expression: print (n / 100)
In an equation for `divide_by_hundred':
divide_by_hundred n = print (n / 100)
By running :t (/)
I get:
(/) :: Fractional a => a -> a -> a
which, to me, suggests that (/) can take any Num that can be expressed as fractional (which I was under the impression should include Integer, though I am unsure how to verify this), as long as both inputs to / are of the same type.
This is clearly not accurate. Why? And how would I write a simple function to divide an Integer by 100?
Haskell likes to keep to the mathematically accepted meaning of operators. / should be the inverse of multiplication, but e.g. 5 / 4 * 4 couldn't possibly yield 5 for a Fractional Integer instance¹.
So if you actually mean to do truncated integer division, the language forces you² to make that explicit by using div or quot. OTOH, if you actually want the result as a fraction, you can use / fine, but you first need to convert to a type with a Fractional instance. For instance,
Prelude> let x = 5
Prelude> :t x
x :: Integer
Prelude> let y = fromIntegral x / 100
Prelude> y
5.0e-2
Prelude> :t y
y :: Double
Note that GHCi has selected the Double instance here because that's the simplest default; you could also do
Prelude> let y' = fromIntegral x / 100 :: Rational
Prelude> y'
1 % 20
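And if truncated integer division is what you actually want, div and quot stay entirely within Integer; they differ on negative operands (div rounds toward negative infinity, quot toward zero):
Prelude> 5 `div` 100
0
Prelude> (-5) `div` 100
-1
Prelude> (-5) `quot` 100
0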
¹ Strictly speaking, this inverse identity doesn't quite hold for the Double instance either because of floating-point glitches, but there it's true at least approximately.
² Actually, not the language but the standard libraries. You could define
instance Fractional Integer where
  (/) = div
yourself, then your original code would work just fine. Only, it's a bad idea!
You can use div for integer division:
div :: Integral a => a -> a -> a
Or you can convert your integers to fractionals using fromIntegral:
fromIntegral :: (Integral a, Num b) => a -> b
So in essence:
divide_by_hundred :: Integer -> IO()
divide_by_hundred n = print $ fromIntegral n / 100
Integers do not implement Fractional, which you can see in the manual.
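You can also check this from GHCi rather than the manual; in an abridged :info listing (exact output depends on the GHC version), no Fractional Integer instance shows up:
Prelude> :info Integer
data Integer = ...
instance Eq Integer
instance Num Integer
instance Ord Integer
instance Real Integer
instance Enum Integer
instance Integral Integer
...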

Unintuitive type signature in Haskell

I made this (what I thought to be) fairly straightforward code to calculate the third side of a triangle:
toRadians :: Int -> Double
toRadians d = let deg = mod d 360
              in deg/180 * pi
lawOfCosines :: Int -> Int -> Int -> Double
lawOfCosines a b gamma = sqrt $ a*a + b*b - 2*a*b*(cos (toRadians gamma))
However, when I tried to load it into GHCi, I got the following errors:
[1 of 1] Compiling Main ( law_of_cosines.hs, interpreted )
law_of_cosines.hs:3:18:
Couldn't match expected type `Double' with actual type `Int'
In the first argument of `(/)', namely `deg'
In the first argument of `(*)', namely `deg / 180'
In the expression: deg / 180 * pi
law_of_cosines.hs:6:26:
No instance for (Floating Int)
arising from a use of `sqrt'
Possible fix: add an instance declaration for (Floating Int)
In the expression: sqrt
In the expression:
sqrt $ a * a + b * b - 2 * a * b * (cos (toRadians gamma))
In an equation for `lawOfCosines':
lawOfCosines a b gamma
= sqrt $ a * a + b * b - 2 * a * b * (cos (toRadians gamma))
law_of_cosines.hs:6:57:
Couldn't match expected type `Int' with actual type `Double'
In the return type of a call of `toRadians'
In the first argument of `cos', namely `(toRadians gamma)'
In the second argument of `(*)', namely `(cos (toRadians gamma))'
It turns out the fix was to remove my type signatures, upon which it worked fine.
toRadians d = let deg = mod d 360
              in deg/180 * pi
lawOfCosines a b gamma = sqrt $ a*a + b*b - 2*a*b*(cos (toRadians gamma))
And when I query the type of toRadians and lawOfCosines:
*Main> :t toRadians
toRadians :: (Floating a, Integral a) => a -> a
*Main> :t lawOfCosines
lawOfCosines :: (Floating a, Integral a) => a -> a -> a -> a
*Main>
Can someone explain to me what's going on here? Why were the "intuitive" type signatures I had written in fact incorrect?
The problem is in toRadians: mod has the type Integral a => a -> a -> a; therefore, deg has an Integral type (here Int, because of your signature).
You then try and use / on deg, but / doesn't take integral numbers (divide integrals with div):
(/) :: Fractional a => a -> a -> a
The solution is to simply use fromIntegral :: (Integral a, Num b) => a -> b:
toRadians :: Int -> Double
toRadians d = let deg = mod d 360
              in (fromIntegral deg)/180 * pi
Seeing Floating a and Integral a in a type signature together always sets off my internal alarm bells, as these classes are supposed to be mutually exclusive - at least, there are no standard numeric types that are instances of both classes. GHCi tells me (along with a lot of other stuff):
> :info Integral
...
instance Integral Integer -- Defined in `GHC.Real'
instance Integral Int -- Defined in `GHC.Real'
> :info Floating
...
instance Floating Float -- Defined in `GHC.Float'
instance Floating Double -- Defined in `GHC.Float'
To see why these classes are mutually exclusive, let's have a look at some of the methods in both classes (this is going to be a bit handwavy). toInteger in Integral converts an Integral number to an Integer, without loss of precision. In a way, Integral captures the essence of being (a subset of) the mathematical integers.
On the other hand, Floating contains methods such as pi and exp, which have a pronounced 'real number' flavour.
If there were a type that was both Floating and Integral, you could write toInteger pi and have an integer that was equal to 3.14159... - and that's not possible :-)
That said, you should change all your type signatures to use Double instead of Int; after all, not all triangles have integer sides, or angles that are an integral number of degrees!
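A sketch of what that all-Double version could look like (my illustration, not from the answer; note that Prelude mod only works on Integral types, so the wrap-around uses mod' from Data.Fixed):
import Data.Fixed (mod')

toRadians :: Double -> Double
toRadians d = let deg = d `mod'` 360   -- mod' is the Real-number analogue of mod
              in deg / 180 * pi

lawOfCosines :: Double -> Double -> Double -> Double
lawOfCosines a b gamma = sqrt $ a*a + b*b - 2*a*b * cos (toRadians gamma)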
If you absolutely don't want that for whatever reason, you also need to convert the sides (the a and b arguments) in lawOfCosines to Double. That's possible via
lawOfCosines aInt bInt gamma = sqrt $ a*a + b*b - 2*a*b*(cos (toRadians gamma)) where
  a = fromIntegral aInt
  b = fromIntegral bInt
The type signature for toRadians says it takes an Int but returns a Double. In some programming languages, the conversion from one to the other (but not back) happens automatically. Haskell is not such a language; you must manually request conversion, using fromIntegral.
The errors you are seeing are all coming from various operations which don't work on Int, or from trying to add Int to Double, or similar. (E.g., / doesn't work for Int, pi doesn't work for Int, sqrt doesn't work for Int...)

Is it square check

I am trying to write a function to check whether the argument is the square of an integer:
isSquare :: Int -> Bool
isSquare x = truncate(sqrt(x)) * truncate(sqrt(x)) == x
When loading the function I get this error:
Prelude> :load "some.hs"
[1 of 1] Compiling Main ( some.hs, interpreted )
some.hs:2:13:
No instance for (RealFrac Int)
arising from a use of `truncate' at some.hs:2:13-29
Possible fix: add an instance declaration for (RealFrac Int)
In the first argument of `(*)', namely `truncate (sqrt (x))'
In the first argument of `(==)', namely
`truncate (sqrt (x)) * truncate (sqrt (x))'
In the expression: truncate (sqrt (x)) * truncate (sqrt (x)) == x
some.hs:2:22:
No instance for (Floating Int)
arising from a use of `sqrt' at some.hs:2:22-28
Possible fix: add an instance declaration for (Floating Int)
In the first argument of `truncate', namely `(sqrt (x))'
In the first argument of `(*)', namely `truncate (sqrt (x))'
In the first argument of `(==)', namely
`truncate (sqrt (x)) * truncate (sqrt (x))'
Failed, modules loaded: none.
But if I try to execute:
Prelude> truncate(sqrt(9))*truncate(sqrt(9))==9
True
all is fine.
Why do I get the error, and how do I fix it?
You're getting the errors because of type mismatches. The type of sqrt is sqrt :: Floating a => a -> a, and the type of truncate is truncate :: (RealFrac a, Integral b) => a -> b. The former says that sqrt takes any floating-point number as input, and returns one of the same type as output; the latter says it can truncate any real fractional number¹ into any integral number.
However, you assert that x is an Int, and an Int isn't a floating-point number. Thus, the second error: "No instance for (Floating Int) arising from a use of `sqrt'". This says that because of sqrt x, it wanted Int to be a floating-point number, but there's no definition for that. Your first error is similar: since sqrt :: Floating a => a -> a, its output is the same as its input, so you're trying to call truncate on an integer. This of course makes no sense, since Int is not a RealFrac, and that's why you get the first error. Fixing this is easy:
isSquare :: Int -> Bool
isSquare x = let x' = truncate $ sqrt (fromIntegral x :: Double) in x'*x' == x
The fromIntegral function has the type fromIntegral :: (Integral a, Num b) => a -> b; it can convert any integral number into any number at all. This is why we need to tell Haskell that we want it to produce a Double; it'd default to that anyway, but it's nice to be clear (though not necessary). Double is an instance of both Floating and RealFrac, so you can sqrt and truncate it.
I also rearranged your code a little; the way it is up there is how I'd write it, since this way we only compute the truncation and sqrt once. Also, note that if you remove the type signature, Haskell will infer the more general type isSquare :: Integral a => a -> Bool, since you never assume that x is precisely an Int.
The reason that truncate(sqrt(9))*truncate(sqrt(9))==9 successfully returned True is because of the type of 9. You can ask GHCi to tell you this:
Prelude> :t 9
9 :: (Num t) => t
In Haskell, all integral numeric literals have the type Num t => t (9.0, or any number with a decimal point, has the type Fractional t => t). This means that they can be any kind of number at all, which is a good thing. Otherwise, 9 would have to be just an Int or Integer, and defining new number types - or even using both Int and Integer!² - would be a royal pain.
Thus, when you write truncate(sqrt(9)), GHCi determines that 9 must be an instance of Floating (from sqrt) and RealFrac (from truncate), which it defaults to Double, making everything work. This defaulting is standard behavior for numeric types (it's why you could leave out the :: Double in my definition of isSquare), though not for anything else (except in GHCi, which extends it for convenience). Since 9 isn't just an Int, but x is, you don't need to convert 9, but you do need to convert x.
¹ The difference between Floating and RealFrac is that, for instance, Complex Double is an instance of Floating but not RealFrac, and Rational is an instance of RealFrac but not Floating. Float and Double are instances of both.
² In case you haven't come across this, the difference is that Int is finite-precision, and Integer is arbitrary-precision.
You're treating integers as floats. Hence, the types don't match.
Use fromIntegral:
isSquare :: Int -> Bool
isSquare n = truncate(sqrt(x)) * truncate(sqrt(x)) == n
  where x = fromIntegral n
Not all that efficient but a cute way of determining if a number is a square, using integer arithmetic only:
isSquare x = x == head (dropWhile (< x) squares)
  where squares = scanl1 (+) [1,3..]
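This relies on the identity n^2 = 1 + 3 + 5 + ... + (2n - 1): scanl1 (+) [1,3..] produces the running sums 1, 4, 9, 16, ..., i.e. exactly the positive perfect squares. An illustrative GHCi session with the definition above loaded:
*Main> take 6 (scanl1 (+) [1,3..])
[1,4,9,16,25,36]
*Main> map isSquare [15,16,17]
[False,True,False]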
