I've been working with System.Numerics.Complex recently, and I've started to notice the typical floating-point "drift" where the value stored gets calculated a tenth of a millionth off or something like that, which is well-known and common with the float type and even the double type. I looked into the Complex struct, and sure enough, it used double variables. Why does it use double values to store its data and not decimal values, which are designed to prevent this? How do I work around this?
To answer your question:
doubles are much faster (often an order of magnitude or more), as their operations are done in hardware, while decimal arithmetic is implemented in software
base-2 floats can actually be more accurate over long computations, as there is less "wobble" in relative error when values shift up and down through exponents: the wobble factor equals the base, so it is 2 rather than 10 (1 bit of precision is a much finer step than 1 decimal digit). Moreover, base-2 formats can use an implicit leading bit, which buys an extra bit of precision for the same storage.
complex numbers are typically used for scientific/engineering applications, where small relative errors of approximately 10^-16 are outweighed by other sources of error (e.g. due to measurement or the model).
decimals, on the other hand, are typically used for "accounting"-type operations, where the operations involved introduce little or no round-off in decimal (addition of amounts with a few decimal places, multiplication by integers, etc.)
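As a concrete illustration of the drift you describe and of the usual workarounds, here is a minimal C# sketch (the loop, tolerance, and formatting choices are arbitrary examples, not anything prescribed by System.Numerics.Complex): compare with a tolerance instead of exact equality, or round only when formatting for display.

using System;
using System.Numerics;

class ComplexDrift
{
    static void Main()
    {
        // Repeated double arithmetic accumulates tiny relative errors.
        Complex z = new Complex(0.1, 0.2);
        Complex sum = Complex.Zero;
        for (int i = 0; i < 10; i++) sum += z;

        Complex expected = new Complex(1.0, 2.0);
        Console.WriteLine(sum == expected);                    // very likely False (drift ~1e-16)

        // Workaround 1: compare with a tolerance instead of exact equality.
        double tol = 1e-9;                                     // arbitrary tolerance for this example
        Console.WriteLine(Complex.Abs(sum - expected) < tol);  // True

        // Workaround 2: round only when formatting for display.
        Console.WriteLine($"{sum.Real:F12} + {sum.Imaginary:F12}i");
    }
}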
Related
I wrote an application in Haskell that calls the Z3 solver to solve constraints involving some complex formulas. Thanks to Haskell, I can quickly switch the data type I'm working with.
When using SBV's AlgReal type for computations, I get results in sensible time; however, switching to the Float or Double types makes Z3 consume ~2 GB of RAM and fail to produce a result even after 30 minutes.
Is it expected that producing floating-point solutions requires much more time, or is this a mistake on my side?
As with any question regarding solver performance, it is impossible to make generalizations. Christoph Wintersteiger (https://stackoverflow.com/users/869966/christoph-wintersteiger) would be the expert on this to opine, but I'm not sure how closely he follows this group. Chris: If you're reading this, I'd love to hear your thoughts!
There's also the risk of comparing apples-to-oranges here: Reals and floats are two completely different logics, with different decision procedures, heuristics, algorithms, etc. I'm sure you can find problems where one outperforms the other, with no clear "performance" winner.
Having said all that, here are some things that make floating-point (FP) tricky:
Rewriting is almost impossible with FP, since rules like associativity simply don't hold for FP addition/multiplication. So, there are fewer opportunities for simplification before bit-blasting.
Similarly, a * 1/a == 1 doesn't hold for floats. Neither does x + 1 /= x, nor (x + a == x) -> (a == 0), nor many other "obvious" simplifications that you'd love to be able to make (a small numeric sketch after this list shows a few of these failing). All of this complicates reasoning.
The existence of NaN values makes equality non-reflexive: nothing compares equal to NaN, not even NaN itself. So substitution of equals-for-equals is also problematic and requires side conditions.
Likewise, the existence of +0 and -0, which compare equal but behave differently due to rounding, complicates matters. The typical example is that x == 0 -> fma(a, b, x) == a * b doesn't hold (where fma is fused multiply-add), because depending on the sign of the zero these two expressions can produce different values under different rounding modes.
Combining floats with integers and reals introduces non-linearity, which is always a soft spot for solvers. So, if you're using FP, it is advisable not to mix it with other theories, as the combination itself creates extra complexity.
Operations like multiplication, division, and remainder (yes, there's a floating-point remainder operation!) are inherently very complex and lead to extremely large SAT formulas. In particular, multiplication is an operation known to challenge most SAT/BDD engines, due to the lack of any good variable-ordering and splitting heuristics. Solvers end up bit-blasting fairly quickly, resulting in extremely large state spaces. I have observed that solvers have a hard time dealing with FP division and remainder operations even when the input is completely concrete; imagine what happens when they are fully symbolic!
The logic of reals has a decision procedure with double-exponential complexity. However, techniques like Fourier-Motzkin elimination (https://en.wikipedia.org/wiki/Fourier%E2%80%93Motzkin_elimination), while remaining exponential, perform really well in practice.
FP solvers are relatively new, and it's a niche area with nascent research. So existing solvers tend to be quite conservative and quickly bit-blast and reduce the problem to bit-vector logic. I would expect them to improve over time, just like all the other theories did.
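To make those failed identities concrete, here is a small, plain C# sketch (nothing SMT-specific; the particular constants are arbitrary examples chosen to trigger each effect):

using System;

class FpIdentities
{
    static void Main()
    {
        // Associativity fails: (a + b) + c can differ from a + (b + c).
        double a = 1e16, b = -1e16, c = 1.0;
        Console.WriteLine((a + b) + c);            // 1
        Console.WriteLine(a + (b + c));            // 0

        // Absorption: x + 1 == x once x is large enough.
        double x = 1e16;
        Console.WriteLine(x + 1.0 == x);           // True

        // a * (1 / a) == 1 fails for some values.
        double d = 49.0;
        Console.WriteLine(d * (1.0 / d) == 1.0);   // False for this value

        // NaN breaks reflexivity of equality.
        double nan = double.NaN;
        Console.WriteLine(nan == nan);             // False
    }
}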
Again, I want to emphasize comparing solver performance on these two different logics is misguided as they are totally different beasts. But hopefully, the above points illustrate why floating-point is tricky in practice.
A great paper to read about the treatment of IEEE754 floats in SMT solvers is: http://smtlib.cs.uiowa.edu/papers/BTRW14.pdf. You can see the myriad of operations it supports and get a sense of the complexity.
The code below would result in moneyDouble = 365.24567874299998 and I need it to be exactly 365.245678743
I wouldn't mind having to set a precision and getting some extra zeros to the right.
This number is used to calculate money transactions, so it needs to be exact.
std::string money ("365.245678743");
std::string::size_type sz; // alias of size_t
double moneyDouble = std::stod (money,&sz);
Floating-point numbers and exact precision don't mix, period [link]. For this reason, monetary calculations should never be done in floating-point [link]. Use a fixed-point arithmetic library, or just use integer arithmetic and interpret it as whatever fractional part you need. Since your precision requirements seem to be very high (lots of decimals), a big number library might be necessary.
While library recommendations are off-topic on Stack Overflow, this old question seems to offer a good set of links to libraries you might find useful.
Your erroneous output of moneyDouble happens because moneyDouble is a floating-point number, and floating-point numbers cannot express tenths, hundredths, thousandths, etc. exactly.
Furthermore, floating-point numbers are expressed in binary form, meaning that only numbers with a finite binary representation can be stored exactly. On top of that, they have finite precision, so they can store only a limited number of significant digits (including those after the decimal point).
Your best bet is to use fixed-point arithmetic or integer arithmetic, or to implement a rational-number class; you may also need a big-number library, since you may have to deal with very large numbers at very high precision.
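To make the suggestion concrete: the question is C++, but the scaled-integer idea is language-agnostic, so here is a minimal sketch in C# (it assumes at most nine fractional digits, no sign handling, and no overflow or input validation, all of which a real implementation would need):

using System;
using System.Globalization;

class FixedPointMoney
{
    // Scale chosen for this example: nine fractional digits ("nano-units").
    const int FractionDigits = 9;
    const long Scale = 1_000_000_000;

    // Parse a decimal string into an integer count of nano-units.
    static long ParseToNanoUnits(string s)
    {
        string[] parts = s.Split('.');
        long whole = long.Parse(parts[0], CultureInfo.InvariantCulture);
        string frac = (parts.Length > 1 ? parts[1] : "").PadRight(FractionDigits, '0');
        return whole * Scale + long.Parse(frac, CultureInfo.InvariantCulture);
    }

    static void Main()
    {
        long money = ParseToNanoUnits("365.245678743");
        Console.WriteLine(money);                                   // 365245678743, exact

        // Integer arithmetic stays exact, e.g. summing 1000 such amounts:
        long total = money * 1000;
        Console.WriteLine($"{total / Scale}.{total % Scale:D9}");   // 365245.678743000
    }
}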
See Is floating point math broken? for more information about the unexpected results of floating-point accuracy.
This comparison prints '0'b, and I don't understand why. As far as I know, strings are converted automatically to float in PL/I when needed.
put skip list('-2.34e-1'=-2.34e-1);
I have tested this in our environment (Enterprise PL/I V4.5 on z/OS) and found the same behaviour - under certain compile-options.
Using the option FLOAT(NODFP) (i.e. do not use native support for decimal floating point; I think the option was introduced with Enterprise PL/I V4.4), the following happens:
the literal -2.34e-1 is converted to its internal representation as bin float(6), i.e. short binary floating point
the literal '-2.34e-1' is compared with a bin float(6) value, so it has to be converted to a bin float as well
since -0.234 does not have an exact representation as a binary fraction it seems the compiler converts it to a bin float(54), i.e. an extended binary floating point value, to get maximum precision.
So, since -0.234 has an infinite number of digits after the radix point in its binary representation, but the two converted values preserve a different number of digits, the values do not compare equal.
Under FLOAT(DFP) (i.e. when using the machine's DFP support):
the internal representation of the literal -2.34e-1 is an actual decimal floating point and thus exact
as is the representation of '-2.34e-1'
so under this compile-option both compare equal and the output of your program is '1'b
So your problem is a combination of the compiler's different choices of data representation and the resulting rounding errors from using binary floating point of different precisions.
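Not PL/I, but the same effect (the same decimal literal rounded at two different binary precisions failing to compare equal) is easy to reproduce elsewhere; a tiny C# analogue:

using System;

class MixedPrecisionCompare
{
    static void Main()
    {
        float  asSingle = 0.234f;   // 0.234 rounded to a 24-bit significand
        double asDouble = 0.234;    // 0.234 rounded to a 53-bit significand

        // The float is widened to double for the comparison, but it has already
        // lost bits, so the two roundings of the same decimal literal differ.
        Console.WriteLine(asSingle == asDouble);                // False
        Console.WriteLine($"{(double)asSingle:R} vs {asDouble:R}");
    }
}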
I know Haskell has native data types which allow you to have really big integers so things like
>> let x = 131242358045284502395482305
>> x
131242358045284502395482305
work as expected. I was wondering if there was a similar "large precision float" native structure I could be using, so things like
>> let x = 5.0000000000000000000000001
>> x
5.0000000000000000000000001
could be possible. If I enter this in Haskell, it truncates down to 5 if I go beyond 15 decimal places (double precision).
Depending on exactly what you are looking for:
Float and Double - pretty much what you know and "love" from Floats and Doubles in all other languages.
Rational which is a Ratio of Integers
FixedPoint - This package provides arbitrary sized fixed point values. For example, if you want a number that is represented by 64 integral bits and 64 fractional bits you can use FixedPoint6464. If you want a number that is 1024 integral bits and 8 fractional bits then use $(mkFixedPoint 1024 8) to generate type FixedPoint1024_8.
EDIT: And yes, I just learned about the numbers package mentioned in another answer - very cool.
Haskell does not have high-precision floating-point numbers natively.
For a package/module/library for this purpose, I'd refer to this answer to another post. There's also an example which shows how to use this package, called numbers.
If you need high-precision, fast floating-point calculations, you may need to use the FFI and long doubles, as a native Haskell long double type is not implemented yet (see https://ghc.haskell.org/trac/ghc/ticket/3353).
I believe the standard package for arbitrary precision floating point numbers is now https://hackage.haskell.org/package/scientific
I would like to know whether I can assume that the same operations on the same 64-bit floating-point numbers give exactly the same results on any modern PC and in most common programming languages (C++, Java, C#, etc.). We can assume that we are operating on numbers and the result is also a number (no NaNs, INFs and so on).
I know there are two very similar standards of computation using floating-point numbers (IEEE 854-1987 and IEEE 754-2008). However, I don't know how it is in practice.
Modern processors that implement 64-bit floating-point typically implement something that is close to the IEEE 754-1985 standard, recently superseded by the 754-2008 standard.
The 754 standard specifies what result you should get from certain basic operations, notably addition, subtraction, multiplication, division, square root, and negation. In most cases, the numeric result is specified precisely: the result must be the representable number that is closest to the exact mathematical result in the direction specified by the rounding mode (to nearest, toward positive infinity, toward negative infinity, or toward zero). In "to nearest" mode, the standard also specifies how ties are broken.
Because of this, operations that do not involve exception conditions such as overflow will get the same results on different processors that conform to the standard.
However, there are several issues that interfere with getting identical results on different processors. One of them is that the compiler is often free to implement sequences of floating-point operations in a variety of ways. For example, if you write "a = b*c + d" in C, where all variables are declared double, the compiler is free to compute "b*c" in either double-precision arithmetic or something with more range or precision. If, for example, the processor has registers capable of holding extended-precision floating-point numbers and doing arithmetic with extended precision does not take any more CPU time than doing arithmetic with double precision, a compiler is likely to generate code using extended precision. On such a processor, you might not get the same results as you would on another processor. Even if the compiler does this regularly, it might not in some circumstances because the registers are full during a complicated sequence, so it stores the intermediate results in memory temporarily. When it does that, it might write just the 64-bit double rather than the extended-precision number. So a routine containing floating-point arithmetic might give different results just because it was compiled into different code, perhaps inlined in one place, and the compiler needed registers for something else.
Some processors have instructions to compute a multiply and an add in one instruction, so "b*c + d" might be computed with no intermediate rounding and get a more accurate result than on a processor that first computes b*c and then adds d.
Your compiler might have switches to control behavior like this.
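A small illustration of that fused multiply-add point in C# (Math.FusedMultiplyAdd is available from .NET Core 3.0 onward; the inputs are contrived so that the single-rounding difference is visible):

using System;

class FmaDemo
{
    static void Main()
    {
        double a = 1.0 + 1.0 / (1L << 27);      // 1 + 2^-27, exactly representable
        double c = -(1.0 + 1.0 / (1L << 26));   // -(1 + 2^-26), exactly representable

        // a*a is exactly 1 + 2^-26 + 2^-54; the separate multiply rounds away the 2^-54.
        double separate = a * a + c;                       // 0
        double fused    = Math.FusedMultiplyAdd(a, a, c);  // 2^-54, about 5.55e-17

        Console.WriteLine(separate);
        Console.WriteLine(fused);
    }
}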
There are some places where the 754-1985 standard does not require a unique result. For example, when determining whether underflow has occurred (a result is too small to be represented accurately), the standard allows an implementation to make the determination either before or after it rounds the significand (the fraction bits) to the target precision. So some implementations will tell you underflow has occurred when other implementations will not.
A common feature in processors is to have an "almost IEEE 754" mode that eliminates the difficulty of dealing with underflow by substituting zero instead of returning the very small number that the standard requires. Naturally, you will get different numbers when executing in such a mode than when executing in the more compliant mode. The non-compliant mode may be the default set by your compiler and/or operating system, for reasons of performance.
Note that an IEEE 754 implementation is typically not provided just by hardware but by a combination of hardware and software. The processor may do the bulk of the work but rely on the software to handle certain exceptions, set certain modes, and so on.
When you move beyond the basic arithmetic operations to things like sine and cosine, you are very dependent on the library you use. Transcendental functions are generally calculated with carefully engineered approximations. The implementations are developed independently by various engineers and get different results from each other. On one system, the sin function may give results accurate within an ULP (unit of least precision) for small arguments (less than pi or so) but larger errors for large arguments. On another system, the sin function might give results accurate within several ULP for all arguments. No current math library is known to produce correctly rounded results for all inputs. There is a project, crlibm (Correctly Rounded Libm), that has done some good work toward this goal, and they have developed implementations for significant parts of the math library that are correctly rounded and have good performance, but not all of the math library yet.
In summary, if you have a manageable set of calculations, understand your compiler implementation, and are very careful, you can rely on identical results on different processors. Otherwise, getting completely identical results is not something you can rely on.
If you mean getting exactly the same result, then the answer is no.
You might even get different results for debug (non-optimized) builds vs. release builds (optimized) on the same machine in some cases, so don't even assume that the results might be always identical on different machines.
(This can happen, e.g., on a computer with an Intel processor, if the optimizer keeps an intermediate result in a register that the unoptimized build would store to memory. Since Intel x87 FPU registers are 80 bits wide and double variables are 64 bits, the intermediate result is held with greater precision in the optimized build, causing different values in later results.)
In practice, however, you may often get the same results, but you shouldn't rely on it.
Modern FPUs all implement IEEE754 floats in single and double formats, and some in extended format. A certain set of operations are supported (pretty much anything in math.h), with some special instructions floating around out there.
Assuming you are talking about applying multiple operations, I do not think you will get exact numbers. CPU architecture, the compiler used, and optimization settings will change the results of your computations.
If you mean the exact order of operations (at the assembly level), I think you will still get variations. For example, Intel chips use extended precision (80 bits) internally, which may not be the case for other CPUs. (I do not think extended precision is mandated.)
The same C# program can produce different numerical results on the same PC when compiled once in debug mode without optimization and a second time in release mode with optimization enabled. That's my personal experience. We did not account for this when we set up an automatic regression test suite for one of our programs for the first time, and were completely surprised that a lot of our tests failed without any apparent reason.
For C# on x86, 80-bit FP registers are used.
The C# standard says that the processor must operate at the same precision as, or greater than, the type itself (i.e. 64-bit in the case of a 'double'). Promotions are allowed, except for storage. That means that locals and parameters could be at greater than 64-bit precision.
In other words, assigning a member variable to a local variable could (and in fact will under certain circumstances) be enough to give an inequality.
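A sketch of what that can look like; whether the comparison actually comes out unequal depends entirely on the JIT and hardware (the legacy 32-bit x86 JIT used the 80-bit x87 FPU and could keep the local at extended precision, while modern SSE2-based JITs keep doubles at 64 bits throughout, so on current runtimes this will typically print True):

using System;

class PrecisionSpill
{
    static double field;

    static void Main()
    {
        // Some arithmetic whose result is not exactly representable.
        double local = Compute(0.1, 3);

        // Storing to a field forces the value down to 64 bits; on an x87-based
        // JIT the local might still hold an 80-bit intermediate.
        field = local;

        Console.WriteLine(field == local);   // typically True today; could be False on x87
    }

    static double Compute(double x, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++) sum += x;   // 0.1 + 0.1 + 0.1
        return sum / n;
    }
}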
See also: Float/double precision in debug/release modes
For the 64-bit data type, I only know of "double precision" / "binary64" from the IEEE 754 (1985 and 2008 don't differ much here for common cases) being used.
Note: IEEE 854-1987 (the radix-independent standard) has been superseded by IEEE 754-2008 anyway.