I'm new to Haskell and just stumbled across this problem. I'm trying to figure out an explanation, but I don't have enough experience with Haskell types to be sure.
The function:
mystery :: Int -> Int -> Float -> Bool
mystery x y z = not ((x==y) && ((fromIntegral y) == z ))
behaves as it seems like it would: it checks that the values are NOT all equal, doing a type conversion on the Integral y so it can be compared with z.
If this is true, then why does:
case1 = do
  if mystery 1 1 1.00000001 -- a very small number
    then putStrLn "True"
    else putStrLn "False"
print False (i.e. the values are all equal, so 1 == 1 == 1.00000001), whereas:
case2 = do
  if mystery 1 1 1.0000001 -- a larger number
    then putStrLn "True"
    else putStrLn "False"
prints True? (i.e. the values are not all equal)
I know it likely has something to do with precision, but I don't get it. Any help is greatly appreciated.
Floating point operations are generally approximate, and == is not one of the exceptions to that rule. Single-precision floating point (Float) runs out of precision pretty quickly, while the more-generally-useful double-precision floating point (Double) has some more. In either case, your decimal fraction will be converted approximately to binary floating point, and then the equality test will also be approximate. General rule: floating point representations are not numbers, and they are not even legitimate instances of the Eq class. If you want to use them, you need to pay attention to their limitations.
In this case, you need to think about when you want to consider the integer equal to the floating point representation. You may or may not want to rely directly on the built-in comparison and rounding operations.
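For example, one option (just a sketch of mine, not the answer's code; the name mysteryApprox and the explicit tolerance argument are assumptions) is to compare against the converted integer within a tolerance instead of using == directly:

-- a sketch: treat the Int and the Float as "equal" when they are
-- within an explicit tolerance of each other
mysteryApprox :: Float -> Int -> Int -> Float -> Bool
mysteryApprox tol x y z = not (x == y && abs (fromIntegral y - z) <= tol)

With that definition, mysteryApprox 1e-6 1 1 1.00000001 is False (the values count as equal), while mysteryApprox 1e-6 1 1 1.5 is True.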
For some of the details you'll have to think about, check out the classic What Every Computer Scientist Should Know About Floating-Point Arithmetic, and don't skip the corrections and updates in the footnotes.
Your code can be simplified to:
> (1.00000001 :: Float) == 1
True
Looks like Float simply doesn't have enough precision to store the extra bits of 1.00000001, so the literal gets rounded to plain 1.0.
1/10^n can't be represented exactly in base-2 floating point (IEEE 754), so the value gets rounded to the nearest representable Float.
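For instance, a quick GHCi check (mine, not part of the original answer):

> 1.00000001 :: Float
1.0
> 0.1 + 0.2 :: Double
0.30000000000000004

The first literal is rounded to the nearest representable Float, which is exactly 1.0; the second shows that even Double can't represent decimal fractions like 0.1 exactly.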
Semantically, for integer comparison it's probably more accurate to truncate the floating point value.
mystery :: Int -> Int -> Float -> Bool
mystery x y z = not (x == y && y == truncate z)
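Assuming the truncate-based definition above, GHCi behaves like this (my own examples):

> mystery 1 1 1.00000001
False
> mystery 1 1 1.9
False
> mystery 1 2 2.0
True

Note that with truncate, any z in [1, 2) counts as "equal" to 1; if that isn't the semantics you want, use round or a tolerance-based comparison instead.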
I googled "Haskell machine precision" and didn't find any information about how to get the machine precision in Haskell. Is there a built-in way to get it?
Otherwise I implemented it as follows (I translated some C++ code found on a serious site):
machinePrecision :: Double
machinePrecision = until isSmall half 1.0
  where
    isSmall :: Double -> Bool
    isSmall x = 1.0 + x / 2.0 == 1.0
    half :: Double -> Double
    half x = x / 2.0
Is it a correct way to get the machine precision of double numbers in Haskell?
Some packages, including ieee754 and numeric-limits, define a value epsilon that is the smallest representable x such that 1 and 1 + x can be distinguished. That appears to be the machinePrecision value you're trying to calculate.
You can use these packages if you want, but you can also just use the two-line definition:
epsilon :: Double
epsilon = 2.2204460492503131e-16
It will never be anything else, notwithstanding any documentation in the Prelude about hypothetical Doubles that aren't IEEE754 doubles.
If you need an equivalent value for Float the definition is:
epsilon :: Float
epsilon = 1.19209290e-07
These definitions are exactly the same ones used in the ieee754 package, except in that package they're instances of a class method.
The values in Lennart Augustsson's numeric-limits package are calculated on the fly using encodeFloat and decodeFloat, a principled but "somewhat excessive" approach.
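If you like that idea, a minimal sketch of the same computation (my own code, not the actual numeric-limits source) needs nothing beyond the RealFloat class:

epsilonOf :: RealFloat a => a
epsilonOf = eps
  where
    -- 2 ^ (1 - number of mantissa digits), i.e. the gap just above 1.0
    eps = encodeFloat 1 (1 - floatDigits eps)

> epsilonOf :: Double
2.220446049250313e-16
> epsilonOf :: Float
1.1920929e-7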
You can also use the following alternative definitions, which are equivalent:
epsilon :: Double
epsilon = 2**(-52)
epsilon :: Float
epsilon = 2**(-23)
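And as a sanity check (a GHCi session of mine, assuming a standard IEEE 754 Double), the machinePrecision loop from the question lands on exactly this value:

> until (\x -> 1.0 + x / 2.0 == 1.0) (/ 2) 1.0 :: Double
2.220446049250313e-16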
If you are worried about portability to other Haskell implementations, start by patting yourself on the back for being one of the eight people on the planet writing Haskell for non-GHC targets, and then take comfort in the fact that there are no existing Haskell implementations whose Double is something other than an IEEE double.
Haskell distinguishes negative zero:
ghci> (isNegativeZero (0 :: Float), isNegativeZero (-0 :: Float))
(False,True)
JSON also allows for distinguishing them, since both "0" and "-0" are valid, syntactically.
But Aeson throws away the sign bit:
ghci> isNegativeZero <$> eitherDecode "-0"
Right False
Why? How can I decode a JSON document while distinguishing non-negative and negative zero?
It looks like in Data.Aeson the floating point number is constructed using Data.Scientific.scientific:
scientific :: Integer -> Int -> Scientific
scientific c e constructs a scientific number which corresponds to the Fractional number: fromInteger c * 10 ^^ e.
Since the mantissa is an Integer, for which 0 == -0, it cannot construct a negative zero. Not the best API for constructing special floating-point values, it seems.
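You can see this concretely with a small sketch against the scientific package (my own example, not from aeson's code):

import Data.Scientific (scientific, toRealFloat)

main :: IO ()
main = do
  -- the sign of zero is already lost at the Scientific level,
  -- before any Double is ever produced
  let z = scientific 0 0                             -- i.e. 0 * 10^0
  print (isNegativeZero (toRealFloat z :: Double))   -- prints False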
Perhaps you should file a bug for aeson, asking for a workaround in the parser.
I'm relatively new to Haskell, and know that we can use negation to negate a list. The negation does not negate the number zero in a regular list of type float, int, and integer. But what if you had a list of a different data type? If I negate a different data type, then the number zero in that list will also be negated. Is there a way to not negate numbers like 0 and 0.0 in the list?
You say
The negation does not negate the number zero in a regular list of type float
but this assertion is incorrect. See:
> negate 0 :: Float
-0.0
> negate 0 :: Double
-0.0
> map negate [0] :: [Float]
[-0.0]
The behavior of the rest of your code follows directly from this fact. For further reading I highly recommend What Every Computer Scientist Should Know About Floating-Point Arithmetic, which includes an in-depth discussion of why floating point must have a negative zero distinct from zero. But the short version is this sentence from page 201:
If zero did not have a sign, then the relation (1/(1/x)) = x would fail to hold when x = ±∞.
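If you really do want to leave zeros untouched, one option (my own sketch, not part of the answer above) is to special-case them explicitly:

-- negate everything except (positive or negative) zero
negateNonZero :: (Eq a, Num a) => a -> a
negateNonZero x = if x == 0 then x else negate x

> map negateNonZero [0, 1, -2.5] :: [Float]
[0.0,-1.0,2.5]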
Why, in Haskell, is 0^0 == 1? Why not 0^0 == 0? Or maybe it should raise some error...
*Main> 0^0
1
*Main> 0**0
1.0
Thanks in advance
GHCi, version 7.10.3
It makes a bit of sense when you look at the signatures.
(^) :: (Num a, Integral b) => a -> b -> a
This one is designed to work for nonnegative integer exponents. It's likely implemented recursively, so it behaves like repeated multiplication. Thus, it makes sense that "anything to the zero power is one" is the rule that takes precedence, since we're really talking about repeated multiplication.
(^^) :: (Fractional a, Integral b) => a -> b -> a
This one is a lot like the previous one, except that it works on negative exponents too, since its base is fractional. Still, it behaves like repeated multiplication or repeated division (if the exponent is positive or negative, respectively), so again, it makes sense that repeating either of those operations zero times should result in 1, the multiplicative identity.
(**) :: Floating a => a -> a -> a
In Haskell, the floating point types generally conform to the IEEE standard, and IEEE specifically defines pow(0.0, 0.0) as 1.0. So Haskell, I imagine, is simply conforming to a standard so that it behaves consistently with other languages in this case.
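For what it's worth, all three operators agree at a zero exponent; a quick GHCi check (mine, not from the answer):

> (0 :: Int) ^ (0 :: Int)
1
> (0 :: Double) ^^ (0 :: Int)
1.0
> (0 :: Double) ** (0 :: Double)
1.0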
Haskell does it that way because mathematics defines it that way. Math does it that way because 0⁰ = 1·0⁰, i.e. 1 multiplied by 0 zero times, which is just 1 not multiplied by anything. Mathematicians figure it makes more sense to stick to the rule that anything to the zeroth power is 1 (the nullary product) than the rule that zero to any power is zero.
This makes a lot of sense when you try to define exponents in terms of multiplications and divisions. For example, if you were trying to define ^ in Haskell, you might come up with:
(^) a b = product $ replicate b a
This is equivalent to:
(^) a b = foldr (*) 1 (replicate b a)
A list containing zero numbers is empty. The product of an empty list is 1, or else a lot of things would break, like product (xs++[]) not being equal to (product xs) * (product []).
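A quick GHCi check of that identity (my session):

> product ([] :: [Int])
1
> product ([2,3] ++ []) == product [2,3] * product ([] :: [Int])
True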
Or if you wrote the simplest possible recursive solution:
(^) _ 0 = 1
(^) a b = a*(a^(b-1))
You would then need a special case, in addition to the base and recursive cases, to define 0⁰ as anything other than 1.
PS
As #leftroundabout points out, my answer assumes we’re using discrete math. Computer scientists almost always are, and Haskell was designed by academic computer scientists.
If we are working with continuous functions on a computer, we’re necessarily doing numeric approximations. In that case, the most efficient implementation will be the one that uses the FPU of the machine we’re running on. In 2017, that will follow the IEEE standard, which says that pow( 0.0, 0.0 ) = 1.0.
It’s just slightly simpler to write and prove statements about an exponent function that follows the convention.
This is simply a law of mathematics. Any positive number raised to the 0 power is equal to 1. There should be no error.
Haskell is functional programming language. Functional languages use λ-calculus in their basis. Number literals are encoded using Church encoding in λ-calculus. So if you encode 0^0 by Church and then normalize λ-term using β-reductions you will get 1 like this:
0^0 = (λn.λm.λs.λz.m n s z) (λs.λz.z) (λs.λz.z) = λs.λz.s z = 1
I think this should explain why Haskell decided to follow the chosen model.
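To make that argument concrete, here is a small runnable sketch of Church-numeral exponentiation in Haskell itself (my own code; the names Church, zeroC, powC, and toInt are not from the answer):

{-# LANGUAGE RankNTypes #-}

-- a Church numeral applies a function n times to an argument
newtype Church = Church (forall a. (a -> a) -> a -> a)

zeroC :: Church
zeroC = Church (\_ z -> z)

-- exponentiation: base ^ exponent is the exponent applied to the base
powC :: Church -> Church -> Church
powC (Church base) (Church expo) = Church (expo base)

-- count the applications to get back an ordinary Int
toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0

main :: IO ()
main = print (toInt (powC zeroC zeroC))  -- prints 1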
My professor assigned me a pretty basic lab that is mostly done. Essentially, what it should do resembles divMod: it should output the quotient and the remainder using a recursive function. Below is the code. I am not quite sure what is going on syntax-wise, and I'd appreciate it if someone could explain what might go in the "Fill this in" parts. I understand that a < b is the simple case, meaning the quotient is zero and the remainder is a, so q = 0 and r = a. This will eventually be achieved by repeatedly subtracting b from a. Let 17 be a and 5 be b; then 17-5=12, 12-5=7, and 7-5=2, which means the quotient is 3 and the remainder is 2. So I understand what's going on, I just cannot write it in Haskell. Thanks for any help. Sorry for the super lengthy question.
divalg :: Int -> Int -> (Int, Int)
divalg a b | a < b     = --Fill this in--
           | otherwise = let (q, r) = divalg (a - b) b
                         in --Fill this in--
From the type signature, you can see that divalg takes two Ints and returns a pair of Ints, which you correctly identified as the quotient and remainder. Thus in the base case (where a < b), you should do that: return a tuple containing the quotient and remainder.
In the recursive case, the recursive call is already written. When thinking about recursion, assume the recursive call "does the right thing". In this case, the "right thing" is to return the quotient and remainder of (a-b)/b. I'll leave the math to you, but the basic idea is that you need to modify the tuple (q,r) to get a new tuple containing the quotient/remainder for a/b. How do I know this is the right thing to do? Because the type signature told me so.
In short, your code will look something like this:
  | a < b     = (___, ___)
  | otherwise = let ...
                in (___, ___)