2022 Update: This bug was filed as a GHC ticket and is now fixed: https://gitlab.haskell.org/ghc/ghc/issues/17231 so this is no longer an issue.
Using ghci 8.6.5
I want to calculate the square root of an Integer input, then round it to the bottom and return an Integer.
square :: Integer -> Integer
square m = floor $ sqrt $ fromInteger m
It works.
The problem is, for this specific big number as input:
4141414141414141*4141414141414141
I get a wrong result.
Putting my function aside, I test the case in ghci:
> sqrt $ fromInteger $ 4141414141414141*4141414141414141
4.1414141414141405e15
wrong... right?
BUT SIMPLY
> sqrt $ 4141414141414141*4141414141414141
4.141414141414141e15
which is more like what I expect from the calculation...
In my function I have to make some type conversion, and I reckon fromIntegral is the way to go. So, using that, my function gives a wrong result for the 4141...41 input.
I can't figure out what ghci does implicitly in terms of type conversion, right before running sqrt. Because ghci's conversion allows for a correct calculation.
Why I say this is an anomaly: the problem does not occur with other numbers like 5151515151515151 or 3131313131313131 or 4242424242424242 ...
Is this a Haskell bug?
TLDR
It comes down to how one converts an Integer value to a Double that is not exactly representable. Note that this can happen not just because Integer is too big (or too small), but Float and Double values by design "skip around" integral values as their magnitude gets larger. So, not every integral value in the range is exactly representable either. In this case, an implementation has to pick a value based on the rounding-mode. Unfortunately, there are multiple candidates; and what you are observing is that the candidate picked by Haskell gives you a worse numeric result.
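To see that "skipping" concretely: beyond 2^53 a Double can no longer distinguish consecutive integers, since the mantissa has only 53 bits. A quick GHCi check (easy to reproduce):
> (fromInteger (2^53) :: Double) == fromInteger (2^53 + 1)
True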
Expected Result
Most languages, including Python, use what's known as the "round-to-nearest-ties-to-even" rounding mode, which is the default IEEE 754 rounding mode and is typically what you get on a compliant processor unless you explicitly set a different rounding mode for a floating-point instruction. Using Python as the "reference" here, we get:
>>> float(long(4141414141414141)*long(4141414141414141))
1.7151311090705027e+31
I haven't tried in other languages that support so called big-integers, but I'd expect most of them would give you this result.
How Haskell converts Integer to Double
Haskell, however, uses what's known as truncation, or round-towards-zero. So you get:
*Main> (fromIntegral $ 4141414141414141*4141414141414141) :: Double
1.7151311090705025e31
Turns out this is a "worse" approximation in this case (cf. the Python-produced value above), and you get the unexpected result in your original example.
The call to sqrt is really a red herring at this point.
Show me the code
It all originates from this code: (https://hackage.haskell.org/package/integer-gmp-1.0.2.0/docs/src/GHC.Integer.Type.html#doubleFromInteger)
doubleFromInteger :: Integer -> Double#
doubleFromInteger (S# m#) = int2Double# m#
doubleFromInteger (Jp# bn@(BN# bn#))
    = c_mpn_get_d bn# (sizeofBigNat# bn) 0#
doubleFromInteger (Jn# bn@(BN# bn#))
    = c_mpn_get_d bn# (negateInt# (sizeofBigNat# bn)) 0#
which in turn calls: (https://github.com/ghc/ghc/blob/master/libraries/integer-gmp/cbits/wrappers.c#L183-L190):
/* Convert bignum to a `double`, truncating if necessary
* (i.e. rounding towards zero).
*
* sign of mp_size_t argument controls sign of converted double
*/
HsDouble
integer_gmp_mpn_get_d (const mp_limb_t sp[], const mp_size_t sn,
const HsInt exponent)
{
...
which purposefully says the conversion is done by rounding toward zero.
So, that explains the behavior you get.
Why does Haskell do this?
None of this explains why Haskell uses round-towards-zero for the Integer-to-Double conversion. I'd strongly argue that it should use the default rounding mode, i.e., round-to-nearest-ties-to-even. I can't find any mention of whether this was a conscious choice, and it at least disagrees with what Python does. (Not that I'd consider Python the gold standard, but it does tend to get these things right.)
My best guess is it was just coded that way, without a conscious choice; but perhaps other people familiar with the history of numeric programming in Haskell can remember better.
What to do
Interestingly, I found the following discussion dating all the way back to 2008 as a Python bug: https://bugs.python.org/issue3166. Apparently, Python used to do the wrong thing here as well, but they fixed the behavior. It's hard to track the exact history, but it appears Haskell and Python both made the same mistake; Python recovered, but it went unnoticed in Haskell. If this was a conscious choice, I'd like to know why.
So, that's where it stands. I'd recommend opening a GHC ticket so it can be at least documented properly that this is the "chosen" behavior; or better, fix it so that it uses the default rounding mode instead.
Update:
GHC ticket opened: https://gitlab.haskell.org/ghc/ghc/issues/17231
2022 Update:
This is now fixed in GHC, at least as of GHC 9.2.2 (possibly earlier):
GHCi, version 9.2.2: https://www.haskell.org/ghc/ :? for help
Prelude> (fromIntegral $ 4141414141414141*4141414141414141) :: Double
1.7151311090705027e31
Not all Integers are exactly representable as Doubles. For those that aren't, fromInteger is in the bad position of needing to make a choice: which Double should it return? I can't find anything in the Report which discusses what to do here, wow!
One obvious solution would be to return the Double that is closest to the original integer, i.e., the one with the smallest absolute difference from it out of all the Doubles that exist. Unfortunately this appears not to be the decision made by GHC's fromInteger.
Instead, GHC's choice is to return the Double with the largest magnitude that does not exceed the magnitude of the original number. So:
> 17151311090705026844052714160127 :: Double
1.7151311090705025e31
> 17151311090705026844052714160128 :: Double
1.7151311090705027e31
(Don't be fooled by how short the displayed number is in the second one: the Double there is the exact representation of the integer on the line above it; the digits stop there because there are enough to uniquely identify a single Double.)
Why does this matter for you? Well, the true answer to 4141414141414141*4141414141414141 is:
> 4141414141414141*4141414141414141
17151311090705026668707274767881
If fromInteger converted this to the nearest Double, as in plan (1) above, it would choose 1.7151311090705027e31. But since it returns the largest Double less than the input as in plan (2) above, and 17151311090705026844052714160128 is technically bigger, it returns the less accurate representation 1.7151311090705025e31.
Meanwhile, 4141414141414141 itself is exactly representable as a Double, so if you first convert to Double, then square, you get Double's semantics of choosing the representation that is closest to the correct answer, hence plan (1) instead of plan (2).
This explains the discrepancy in sqrt's output: doing your computations in Integer first and getting an exact answer, then converting to Double at the last second, paradoxically is less accurate than converting to Double immediately and doing your computations with rounding the whole way, because of how fromInteger does its conversion! Ouch.
I suspect a patch to modify fromInteger to do something better would be looked on favorably by GHCHQ; in any case I know I would look favorably on it!
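As a side note: if what you ultimately want is floor (sqrt n) for Integers, as in the original question, you can sidestep Double entirely with an exact integer square root. Here is a minimal Newton-iteration sketch (isqrt is my own name, not a library function):
isqrt :: Integer -> Integer
isqrt n
  | n < 0     = error "isqrt: negative input"
  | n == 0    = 0
  | otherwise = go n
  where
    -- Integer Newton iteration: strictly decreasing while above the
    -- floor of the square root, and it stops as soon as it stops decreasing.
    go x
      | y >= x    = x
      | otherwise = go y
      where y = (x + n `div` x) `div` 2
With this, isqrt (4141414141414141*4141414141414141) is exactly 4141414141414141, with no rounding anywhere.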
Related
I'm working with linear problems over rationals in Z3. To use Z3, I go through SBV.
An example of a problem I pose is:
import Data.SBV
solution1 = do
  x <- sRational "x"
  w <- sRational "w"
  constrain $ x .< w
  constrain $ x + 2*w .>= 0 .|| x .== 1
My question is:
Are these kinds of problems decidable?
I couldn't find a list of decidable theories or a way to tell if a theory is decidable.
The closest I found is this. The theory of the reals is decidable, but is the same true for the rationals? Intuition tells me that it is, but I have not found anything that would let me say so for sure.
Thanks in advance
SBV models rationals using the standard "two integers" idea; that is, it represents the numerator and the denominator separately as integers. This means that if you add two symbolic rationals, you'll have a non-linear term over the integers. So, in theory, the problem will be in the semi-decidable fragment. That is, even if you restrict your multiplications to concrete scalars, addition of symbolic rationals will give rise to non-linear terms over integers.
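To make that concrete: with x = a/b and y = c/d, where the numerators and denominators are symbolic integers, x + y becomes (a*d + c*b) / (b*d); the products a*d, c*b, and b*d each multiply two symbolic integers, which is precisely what makes the encoding non-linear.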
Having said that, I had good luck using rationals; where z3 was able to decide most problems of interest without much difficulty. If it proves to be an issue, you should switch to SReal type (i.e., algebraic reals), for which z3 has a decision procedure. But of course, the models you get can now include algebraic reals, such as square-root-of-2, etc. (i.e., the roots of any polynomial with integer coefficients.)
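For reference, here's a sketch of what the same constraints might look like over SReal (the solution2 name and the explicit sat wrapper are my additions, not from the question):
import Data.SBV

solution2 :: IO SatResult
solution2 = sat $ do
  x <- sReal "x"
  w <- sReal "w"
  constrain $ x .< w
  return $ x + 2*w .>= 0 .|| x .== 1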
Side note If your problem allows for delta-sat (i.e., satisfiability with perturbations), you should look into dReal (http://dreal.github.io), which SBV also supports as a backend solver. But perhaps that's not what you had in mind.
Theoretical note
Strictly speaking, linear arithmetic over rationals is decidable; see Section 3 of https://www.cs.ox.ac.uk/people/james.worrell/lecture15-2015.pdf for a proof. However, SMT solvers do not support rationals out-of-the-box; and SBV (as I mentioned above) uses two symbolic integers to represent rationals. So, adding two rationals will give rise to multiplication of two symbolic integers, taking you out of the decidable fragment. Of course, in practice, the solvers are quite adept at coming up with solutions even in the presence of non-linear terms; it's just that you're not always guaranteed an answer. So, a stricter answer to your question is that while linear arithmetic over rationals is decidable, the translation used by SBV puts the problem into the non-linear integer arithmetic domain, and hence decidability is not guaranteed. In any case, SMTLib does not come with a theory of rationals, so you're kind of out of luck when it comes to first-class support for them.
I guess a rational solution will exist iff an integer solution exists to a suitably scaled collection of constraints. For example, x=1/2(=5/10), w=3/5(=6/10) is a solution to your example problem. Scaling your problem by 10, we have the equivalent constraint set:
10*x < 10*w
(10*x + 20*w >= 0) || (10*x == 10)
Writing x'=10*x and w'=10*w, this means that x'=5, w'=6 is an integer solution to:
x' < w'
(x' + 2*w' >= 0) || (x' == 10)
Presburger famously showed that first-order logic plus integers and addition is decidable. (Multiplication by a constant is also allowed, since it can be expanded to an addition -- e.g. 3*x is x+x+x.)
I guess the only trick left is to show that it's possible to choose what scaling to use without having solved the problem yet. Nothing obvious occurs to me off the top of my head, but it seems reasonable that this should be doable. For example, perhaps if you take the product of all the nonzero numerators and denominators in your constraint set, you can show that the set of rationals with that product as their denominator is indistinguishable from the full set of rationals. (If so, you could look through the proof to see if it still works with a smaller denominator.)
I'm not a z3 expert, so I can't talk about how this translates to whether that tool specifically is suitable, but it seems likely to me that it is possible to create a suitable tool.
Disclaimer: I'm a total Isabelle beginner.
I'm trying to export the "sqrt" function, or rather functions and definitions that use "sqrt", to Haskell. My first try was just:
theory Scratch
imports Complex_Main
begin
definition val :: "real" where "val = sqrt 4"
export_code val in Haskell
end
Which resulted in the following error:
Wellsortedness error
(in code equation root ?n ?x ≡
if equal_nat_inst.equal_nat ?n zero_nat_inst.zero_nat then zero_real_inst.zero_real
else the_inv_into top_set_inst.top_set
(λy. times_real_inst.times_real (sgn_real_inst.sgn_real y)
(abs_real_inst.abs_real y ^ ?n))
?x,
with dependency "val" -> "sqrt" -> "root"):
Type real not of sort {enum,equal}
No type arity real :: enum
So I tried to replace "sqrt" with Haskell's "Prelude.sqrt":
code_printing
constant sqrt ⇀ (Haskell) "Prelude.sqrt _"
export_code val in Haskell
This still resulted in the same error, which seems rather odd to me, because replacing "plus" with some arbitrary function "f" seems to be fine:
definition val' :: "nat" where "val' = plus 49 1"
code_printing
constant plus ⇀ (Haskell) "_ `f` _"
export_code val' in Haskell
How do I resolve this issue?
I'm not sure about the code_printing issue, but what do you expect to happen here? Wellsortedness error during code generation usually means that what you're trying to export is simply not computable (or at least Isabelle doesn't know how).
What do you expect something like sqrt 2 to compile to in Haskell? What about sqrt pi? You cannot hope to generate executable code for all real numbers. Isabelle's default implementation restricts itself to rational numbers.
Doing code-printing to replace Isabelle's sqrt with Haskell's sqrt is only going to give you a type error, since Haskell's sqrt works on floating point numbers, and not on Isabelle's exported real type.
There is a file in ~~/src/HOL/Library/Code_Real_Approx_By_Float that maps Isabelle's operations on real numbers to floating point approximations in Standard ML and OCaml, but this is for experimentation only, since you lose all correctness guarantees if you do that sort of thing.
Lastly, there is an entry in the Archive of Formal Proofs that provides exact executable algebraic real numbers, so that you can do at least some operations with square root etc., but this is a big piece of work and the performance can be pretty bad in some cases.
There is also a sqrt operation on natural numbers in Isabelle (i.e. it rounds down) in ~~/src/HOL/Library/Discrete, and that can easily be exported to Haskell.
In the AFP there also is an entry Sqrt_Babylonian, which contains algorithms to compute sqrt up to a given precision epsilon > 0, without any floating point rounding errors.
Regarding the complexity of algebraic numbers that Manuel mentioned, it really depends on your input. If you use nested square-roots or combine different square-roots (like sqrt 2 + ... + sqrt 50), then the performance will degrade soonish. However, if you rarely use square-roots or always use the same square-root in multiple locations, then algebraic numbers might be fast enough.
When I tried to convert a very long integer to Int, I was surprised that no error was thrown:
Prelude> read "123456789012345678901234567890" :: Int
-4362896299872285998
readMaybe from Text.Read module gives the same result.
Two questions:
Which function should I call to perform a safe conversion?
How can the most type safe language on Earth allow such unsafe things?
Update 1:
This is my attempt to write a version of read that checks bounds:
{-# LANGUAGE ScopedTypeVariables #-}
parseIntegral :: forall a . (Integral a, Bounded a) => String -> Maybe a
parseIntegral s = integerToIntegral (read s :: Integer) where
  integerToIntegral n | n < fromIntegral (minBound :: a) = Nothing
  integerToIntegral n | n > fromIntegral (maxBound :: a) = Nothing
  integerToIntegral n = Just $ fromInteger n
Is it the best I can do?
Background: why unchecked overflow is actually wonderful
Haskell 98 leaves overflow behavior explicitly unspecified, which is good for implementers and bad for everyone else. Haskell 2010 discusses it in two sections—in the section inherited from Haskell 98, it's left explicitly unspecified, whereas in the sections on Data.Int and Data.Word, it is specified. This inconsistency will hopefully be resolved eventually.
GHC is kind enough to specify it explicitly:
All arithmetic is performed modulo 2^n, where n is the number of bits in the type.
This is an extremely useful specification. In particular, it guarantees that Int, Word, Int64, Word32, etc., form rings, and even principal ideal rings, under addition and multiplication. This means that arithmetic will always work right—you can transform expressions and equations in lots of different ways without breaking things. Throwing exceptions on overflow would break all these properties, making it much more difficult to write and reason about programs. The only times you really need to be careful are when you use comparison operators like < and compare—fixed width integers do not form ordered groups, so these operators are a bit touchy.
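A quick GHCi illustration of the wrap-around, assuming a 64-bit Int:
> (maxBound :: Int) + 1 == (minBound :: Int)
True
> 2^64 :: Int
0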
Why it makes sense not to check reads
Reading an integer involves many multiplications and additions. It also needs to be fast. Checking to make sure the read is "valid" is not so easy to do quickly. In particular, while it's easy to find out whether an addition has overflowed, it is not easy to find out whether a multiplication has. The only sensible ways I can think of to perform a checked read for Int are:
1. Read as an Integer, check, then convert (a sketch follows below). Integer arithmetic is significantly more expensive than Int arithmetic. For smaller things, like Int16, the read can be done with Int, checking for Int16 overflow, then narrowed. This is cheaper, but still not free.
2. Compare the number in decimal to maxBound (or, for a negative number, minBound) while reading. This seems more likely to be reasonably efficient, but there will still be some cost. As the first section of this answer explains, there is nothing inherently wrong with overflow, so it's not clear that throwing an error is actually better than giving an answer modulo 2^n.
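If you do want option (1), here is a small sketch (readBounded is my own name; toIntegralSized is available in Data.Bits since base-4.8 and returns Nothing when the value does not fit in the target type):
import Data.Bits (toIntegralSized)
import Text.Read (readMaybe)

readBounded :: String -> Maybe Int
readBounded s = toIntegralSized =<< (readMaybe s :: Maybe Integer)
With this, readBounded "42" is Just 42, while readBounded "123456789012345678901234567890" is Nothing instead of the wrapped value shown in the question.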
It isn't "unsafe", in that the behaviour of the program isn't undefined. (It's perfectly defined, just probably not what you wanted.) For example, unsafeWrite on a mutable array is unsafe, in that if you make a mistake with it, it writes data into arbitrary memory locations, either causing your application to segfault, or merely corrupting its own memory, causing it to behave in arbitrary undefined ways.
So much for splitting hairs. If you want to deal with such huge numbers, Integer is the only way to go. But you probably knew that already.
As for why there's no overflow check... Sometimes you actually want a number to overflow. (E.g., you might convert to Word8 without explicitly ANDing out the bottom 8 bits.) At any rate, every possible arithmetic operation can potentially overflow (e.g., maxBound + 1 = minBound, and that's just normal addition.) Do you really want every single arithmetic operation to have an overflow check, slowing your program down at every step?
You get the exact same behaviour in C, C++ or C#. I guess the difference is, in C# we have the checked keyword, which allows you to automatically check for overflow. Maybe somebody has a Haskell package for doing checked arithmetic… For now, it's probably simpler to just implement this one check yourself.
I can't understand the following behavior of ranges in Haskell. Enumerating 1 to 1 gives me a list containing only 1, and 2 to 2 gives me a list containing only 2, as given below.
Prelude> [1..1]
[1]
Prelude> [2..2]
[2]
But enumerating infinity to infinity gives me a list which is infinite in length and all the elements are infinity, as shown below.
Prelude> [1/0..1/0]
[Infinity,Infinity,Infinity,Infinity,Infinity,Infinity,Interrupted.
I am aware that Infinity is a concept not to be treated as a number, but what justifies that behavior?
Haskell's Double (the default you get when using /) follows the IEEE 754 standard for floating point numbers, which defines how Infinity behaves. This is why 1/0 is Infinity.
By this standard (and, to be fair, by logic), Infinity + 1 == Infinity, and the Enum instance for Double just adds 1 each time.
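You can see this directly; the enumeration is essentially doing one +1 at a time:
Prelude> (1/0 :: Double) + 1 == 1/0
True
Prelude> take 4 (iterate (+1) (1/0 :: Double))
[Infinity,Infinity,Infinity,Infinity]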
This is another sign that the Enum instance for Double is not entirely well-formed and does not conform to the expectations we usually have for Enum instances. So, as a rule of thumb, you should probably avoid using .. for floating point numbers: even if you understand how it works, it will be confusing for others reading your code. As another example of confusing behavior, consider:
Prelude> [0.1..1]
[0.1,1.1]
If you want to get into a lot more details about the Enum instance for Double, you can read through this pretty lengthy Haskell-cafe thread.
As Luis Wasserman and Tikhon Jelvis have indicated, the basic problem is that the Num and Enum instances for Float and Double are weird, and really should not exist at all. In fact, the Enum class itself is pretty weird, as it attempts to serve several different purposes at once, none of them well—it is probably best considered a historical accident and a convenience rather than a good example of what a typeclass should look like. The same is true, to a significant extent, of the Num and Integral classes. When using any of these classes, you must pay close attention to the specific types you are operating on.
The enumFromTo methods for floating point are based on the following function:
numericEnumFromTo :: (Ord a, Fractional a) => a -> a -> [a]
numericEnumFromTo n m = takeWhile (<= m + 1/2) (numericEnumFrom n)
So 2 <= 1+1/2 is false, but because of the weirdness of floating point, infinity <= infinity + 1/2 is true.
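Both comparisons are easy to check in GHCi:
Prelude> 2 <= 1 + 1/2
False
Prelude> let inf = 1/0 :: Double
Prelude> inf <= inf + 1/2
True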
As Tikhon Jelvis indicates, it's generally best not to use any Enum methods, including numeric ranges, with floating point.
I believe the implementation of [a..b] continues incrementing a by one until it is greater than b. This is never going to happen with infinity, so it goes forever.
I believe your code will probably default to Double the way you've written it, which has well-defined semantics for infinity. IIRC, Haskell follows http://en.wikipedia.org/wiki/IEEE_floating_point.
In Eiffel one is allowed to use an expanded class, which doesn't allocate from the heap. From a developer's perspective one rarely has to think about conversion from Int to Float, as it is automatic. My question is this: why did Haskell not choose a similar approach to modeling Num? Specifically, let's consider the Int instance. Here is the rationale for my question:
[1..3] = [1,2,3]
[1..3.5] = [1.0,2.0,3.0,4.0] -- rounds up
The second list was something I was not expecting, because there are infinitely many floating point numbers between any two integers. Of course, once we test the sequence, it is clear that it enumerates whole numbers, with the upper bound effectively rounded up. One of the reasons these conversions are needed is to allow us to compute, for example, the mean of a set of Integers.
In Eiffel the numeric type hierarchy is a bit more programmer friendly and conversions happen as needed: for example, a sequence can still be a set of Ints and yet yield a floating point mean. This has a readability advantage.
Is there a reason that expanded classes were not implemented in Haskell? Any references will greatly help.
@ony: on the point about parallel strategies: won't we face the same issue when using primitives? The manual does discourage using primitives, and that makes sense to me: in general, wherever we could use primitives we should probably use the abstract type instead. The issue I faced when trying to take the mean of some numbers is the missing Fractional Int instance, and why 5/3 does not promote to a floating point value instead of my having to create a floating point array to achieve the same result. There must be a reason why a Fractional instance for Int and Integer is not defined; knowing it could help me understand the rationale better.
@leftroundabout: the question is not about expanded classes per se but about the convenience such a feature can offer, although that feature alone is not sufficient to handle the type promotion from Int to Float, for example, as mentioned in my response to @ony. Let's take the classic example of a mean and try to define it as:
-- mean :: [Int] -> Double
let mean xs = sum xs / length xs    -- not valid Haskell code
-- mean :: [Int] -> Double
let mean xs = fromIntegral (sum xs) / fromIntegral (length xs)
I would have liked it if I did not have to call fromIntegral to get the mean function to work, and that ties in to the missing Fractional Int. Although the explanation seems to make sense (it has to), what I don't understand is: if I am clear that I expect a Double and I state it in my type signature, is that not sufficient to do the appropriate conversion?
[a..b] is shorthand for enumFromTo a b, a method of the Enum typeclass. It begins at a and succs until the first time b is exceeded.
[a,b..c] is shorthand for enumFromThenTo a b c, which is similar to enumFromTo except that instead of succing it adds the difference b-a each time. By default this difference is computed by round-tripping through Int, so fractional differences may or may not be respected. That said, Double works as you'd expect:
Prelude> [0.0, 0.5.. 10]
[0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0,5.5,6.0,6.5,7.0,7.5,8.0,8.5,9.0,9.5,10.0]
[a..] is shorthand for enumFrom a which just succs forever.
[a,b..] is shorthand for enumFromThen a b which just adds (b-a) forever.
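For example, taking a prefix of the infinite form with Doubles:
Prelude> take 5 [1.0, 1.5 ..]
[1.0,1.5,2.0,2.5,3.0]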
As for the behaviour, @J.Abrahamson has already replied: that's the definition of enumFromThenTo.
As for design...
Actually, GHC has Float#, which is an unboxed type (it can be allocated anywhere, but its value is strict).
Since Haskell is a lazy language, it assumes that most values are not required initially, until they are actually demanded by a primitive with strict arguments.
Consider length [2..10]. In this case, without optimization, Haskell may even avoid generating the numbers and simply build up the list (without the values). A probably more useful example is takeWhile (<100) [x*(x-1) | x <- [2..]].
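For instance, the second example only ever evaluates as far as the predicate allows:
Prelude> takeWhile (<100) [x*(x-1) | x <- [2..]]
[2,6,12,20,30,42,56,72,90]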
But you shouldn't assume there is overhead here, since you are writing in a language that abstracts all of that away (apart from strictness annotations). The Haskell compiler has to take this on as its own job: when it can tell that all elements of a list will be referenced (reduced to normal form) and it decides to process them within one chain of stack returns, it can allocate the list on the stack.
Also, with such an approach you can get more out of your code by using multiple CPU cores. Imagine that, using Strategies, your list is processed on different cores; they then have to share the common data on the heap (not on the stack).