Can I change the default truncating behaviour of round() in Python? - rounding

I just learned about the IEEE 754 floating-point standard. It says that if the least significant bit retained in the halfway case would be odd, add one; if even, truncate.
So +23.5 and +24.5 both round to +24, while -23.5 and -24.5 both round to -24.
Can I override the built-in round() behaviour so that it always rounds up when the float is exactly {number}.5?
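For illustration, Python 3's built-in round() does use this round-half-to-even rule, and the standard decimal module can be used for half-up behaviour instead. A minimal sketch, assuming the helper name round_half_up is just an illustrative choice and not a standard function:

from decimal import Decimal, ROUND_HALF_UP

print(round(23.5), round(24.5))    # 24 24   (ties go to the even neighbour)
print(round(-23.5), round(-24.5))  # -24 -24

def round_half_up(x):
    # illustrative helper: round to the nearest integer, ties away from zero
    return int(Decimal(x).quantize(Decimal('1'), rounding=ROUND_HALF_UP))

print(round_half_up(23.5), round_half_up(24.5))  # 24 25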

Related


Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 * 2^-49
Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
Seeing the Data
First, a few functions to see the components that make up a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:    # single precision
        int_pack = 'I'
        float_pack = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:  # double precision. all python floats are this
        int_pack = 'Q'
        float_pack = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    # reinterpret the float's bytes as an unsigned integer, then as a zero-padded bit string
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    # split the bit string into sign, exponent and mantissa fields
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits - 1) - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 x 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Which we can then convert from binary to decimal:
1.1499999999999999 x 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
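As a cross-check, the three components can be reassembled in Python. This is a minimal sketch that reuses the float_to_bin_parts function from above; the variable names are illustrative:

sign_bits, exponent_bits, mantissa_bits = float_to_bin_parts(9.2)

sign = -1 if sign_bits == '1' else 1
exponent = int(exponent_bits, 2) - 1023          # 1026 - 1023 = 3
mantissa = 1 + int(mantissa_bits, 2) / 2 ** 52   # 1 + 0.1499999999999999... (inexact)

value = sign * mantissa * 2 ** exponent
print(format(value, '.17g'))   # 9.1999999999999993
print(value == 9.2)            # True: this is exactly the double Python stores for 9.2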
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Shift mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 x 10^(11-110100)
Convert to decimal:
5179139571476070 x 2^(3-52)
Subtract the exponent:
5179139571476070 x 2^-49
Turn negative exponent into division:
5179139571476070 / 2^49
Multiply exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 x 10^11
Shift the decimal point:
10011 x 10^(11-100)
Subtract the exponent:
10011 x 10^-1
Binary to decimal:
19 x 2^-1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
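Python can confirm both of these fractions directly: float.as_integer_ratio() returns the exact numerator and denominator of the stored double, reduced to lowest terms (so for 9.2 it returns the pair above with both halved). A quick check using only the standard library:

from fractions import Fraction

print((9.2).as_integer_ratio())   # (2589569785738035, 281474976710656) - same ratio, lowest terms
print(Fraction(*(9.2).as_integer_ratio()) == Fraction(5179139571476070, 562949953421312))   # True
print((9.5).as_integer_ratio())   # (19, 2) - 9.5 really is stored exactly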
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.111...₃
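A small sketch makes the base dependence easy to play with: long division emits the fractional digits of any ratio in any base. The function name expand is just an illustrative choice:

def expand(numerator, denominator, base, places=8):
    # long division: the first `places` fractional digits of numerator/denominator in `base`
    digits = []
    remainder = numerator % denominator
    for _ in range(places):
        remainder *= base
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(expand(2, 3, 10))   # [6, 6, 6, 6, 6, 6, 6, 6]  -> 0.666... in base 10
print(expand(2, 3, 3))    # [2, 0, 0, 0, 0, 0, 0, 0]  -> exactly 0.2 in base 3
print(expand(1, 2, 3))    # [1, 1, 1, 1, 1, 1, 1, 1]  -> 0.111... repeating in base 3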
Why are floating point numbers inaccurate?
Because often-times, they are approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and in general they are approximating real (possibly irrational) numbers which may not be representable in finitely many digits in any base.
While all of the other answers are good, there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
If you limit your math needs to rational numbers only, though, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions, just like in high school math (e.g. a/b * c/d = ac/bd).
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
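Python ships exactly this kind of rational arithmetic in the standard library's fractions module; a quick illustration:

from fractions import Fraction

a = Fraction(92, 10)          # the exact value 9.2, stored as 46/5 in lowest terms
print(a)                      # 46/5
print(a * 10 == 92)           # True - no rounding error anywhere

# a/b * c/d = ac/bd, done exactly
print(Fraction(1, 3) * Fraction(3, 7))                        # 1/7
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True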
TL;DR
For hardware-accelerated arithmetic, only a limited set of rational numbers can be represented. Every non-representable number is approximated. Some numbers (e.g. the irrationals) can never be represented, no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is finite (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits only allow you to distinguish among 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. Those that can be represented are of the form m·2^e for some integers m and e.
You might come up with a different numeration system, base 10 for instance, where 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-point numbers are extremely accurate. They can represent any number in a very wide range with as many as 15 exact digits. For daily-life computations, 4 or 5 digits are more than enough. You will never really need those 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b·5^c).
On the other hand, the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
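This divisibility rule is easy to test in code. A minimal sketch (the function name terminates is illustrative): strip every prime factor shared with the base out of the reduced denominator; the expansion is finite exactly when nothing is left.

from math import gcd

def terminates(numerator, denominator, base):
    # does numerator/denominator have a finite digit expansion in this base?
    d = denominator // gcd(numerator, denominator)   # reduce to lowest terms
    g = gcd(d, base)
    while g > 1:
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1

print(terminates(92, 10, 10))   # True  - 9.2 is finite in base 10
print(terminates(92, 10, 2))    # False - 9.2 repeats forever in base 2
print(terminates(1, 2, 3))      # False - 1/2 repeats in base 3
print(terminates(2, 3, 3))      # True  - 2/3 is 0.2 in base 3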
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However, the big problem is that such a format is a pain to do calculations with, for two reasons:
If you want to have exactly one representation of each number, then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest-common-divisor calculation.
If after your calculation you end up with an unrepresentable result, because the numerator or denominator is too large, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually they do it in combination with arbitrary precision. This avoids needing to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
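Python's fractions module shows the blow-up nicely; adding just a handful of fractions with coprime denominators makes the exact denominator the product of all of them:

from fractions import Fraction

total = Fraction(0)
for p in (3, 7, 11, 13, 17, 19, 23, 29):
    total += Fraction(1, p)

# eight additions already give a nine-digit denominator: 3*7*11*13*17*19*23*29
print(total.denominator)   # 646969323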
Some languages also offer decimal floating-point types. These are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). They are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.
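Python's decimal module is one example of such a decimal floating-point type, implemented in software; it gives the human-style results that binary floats cannot:

from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                   # False - binary floats
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True  - decimal arithmetic
print(Decimal('1.30') + Decimal('1.20'))                  # 2.50  - trailing zeros kept, as in bookkeeping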

Does the default memory allocated for a datatype play a role in rounding? In what manner is a float rounded if it exceeds the allocated memory?

Having a file test2.py with the following contents:
print(2.0000000000000003)
print(2.0000000000000002)
I get this output:
$ python3 test2.py
2.0000000000000004
2.0
I thought a lack of memory allocated for float might be causing this, but 2.0000000000000003 and 2.0000000000000002 need the same amount of memory.
IEEE 754 64-bit binary floating point always uses 64 bits to store a number. It can exactly represent a finite subset of the binary fractions. Looking only at the normal numbers: if N is a power of two in its range, it can represent any number of the form (in binary) 1.s × N, where s is a string of 52 zeros and ones.
All the 32 bit binary integers, including 2, are exactly representable.
The smallest exactly representable number greater than 2 is 2.000000000000000444089209850062616169452667236328125. It is twice the binary fraction 1.0000000000000000000000000000000000000000000000000001.
2.0000000000000003 is closer to 2.000000000000000444089209850062616169452667236328125 than to 2, so it rounds up and prints as 2.0000000000000004.
2.0000000000000002 is closer to 2.0, so it rounds down to 2.0.
To store numbers between 2.0 and 2.000000000000000444089209850062616169452667236328125 would require a different floating point format likely to take more than 64 bits for each number.
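You can see both effects from Python itself: math.nextafter (available since Python 3.9) gives the neighbouring double, and converting a float to Decimal shows its exact stored value:

import math
from decimal import Decimal

print(math.nextafter(2.0, 3.0))           # 2.0000000000000004 - the smallest double above 2.0
print(Decimal(math.nextafter(2.0, 3.0)))  # 2.000000000000000444089209850062616169452667236328125

print(2.0000000000000003 == math.nextafter(2.0, 3.0))   # True - this literal rounds up to that neighbour
print(2.0000000000000002 == 2.0)                        # True - this literal rounds down to 2.0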
Floats are not stored as integers are, with each bit signaling a yes/no term of 1, 2, 4, 8, 16, 32, ... value that you add up to get the complete number. They are stored as sign + mantissa + exponent in base 2. Several combinations have special meaning (NaN, +/-inf, -0, ...). Positive and negative numbers are identical in mantissa and exponent; only the sign differs.
At any given time they have a specific bit-length they are "put into". They cannot overflow into more bits.
They do, however, have a limited accuracy; if you try to fit numbers into them that would need greater accuracy, you get rounding errors - that's what you see in your example.
More on floats and storage (with example):
http://stupidpythonideas.blogspot.de/2015/01/ieee-floats-and-python.html
(which links to a more technical https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)
More on accuracy of floats:
- Floating Point Arithmetic: Issues and Limitations

Python Decimal division giving unexpected answer

I don't understand the logic behind Decimal division:
from decimal import *
getcontext().prec = 2
This works perfectly:
>>> Decimal("11") / 2
Decimal('5.5')
But this one does not:
>>> Decimal("21") / 2
Decimal('10')
Why is it not "10.5"?...
In your case this happened because of insufficient precision.
The value was rounded according to the "rounding" property of the Decimal context.
As described in the Python docs, the decimal context precision comes into play during arithmetic operations.
The precision determines the accuracy of calculations. This means that you may get different results depending on the value of the precision.
Quote from the Python docs:
Widely differing results indicate insufficient precision, rounding mode issues, ill-conditioned inputs, or a numerically unstable algorithm.
So the only thing you have to do is increase the value of the decimal precision. By default it equals 28.
Decimal's context prec counts all digits in the number, not just the digits after the decimal point.
So 5.5 has 2 digits, as does 10. If you want to see 10.5 you have to set the precision to at least 3 digits.
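A quick check of that against the question's own example, using the same decimal setup:

from decimal import Decimal, getcontext

getcontext().prec = 2
print(Decimal("21") / 2)   # 10   - only two significant digits survive

getcontext().prec = 3
print(Decimal("21") / 2)   # 10.5 - three significant digits are enough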

How COBOL handles division rounding by default?

In COBOL what is the result of
COMPUTE RESULT1 = 97 / 100
COMPUTE RESULT2 = 50 / 100
COMPUTE RESULT3 = 32 / 100
COMPUTE RESULT4 = -97 / 100
COMPUTE RESULT5 = -50 / 100
COMPUTE RESULT6 = -32 / 100
When RESULT1/2/3 are:
PIC S9(4)
PIC S9(4)V9
PIC S9(4)V99
Or, in other words, what is the default rounding mode for COBOL divisions?
Edit: What happens with negative values?
Even "discard" is sort of a rounding mode, is it equivalent to rounding towards negative infinity or towards zero?
COBOL does no rounding, unless you tell it to.
What it does do, if you don't tell it to do rounding, is low-order truncation. Some people may prefer to term that something else, it doesn't really matter, the effect is the same. Truncation.
Negative values are dealt with in the same way as positive values: retain a significant digit beyond what is required for the final result, and add one to that position when rounding is needed (also see the later explanation). -0.009 would, to two decimal places, round to -0.01; -0.004 would round to -0.00.
If you specify no decimal places for a field, any fractional part will be simply discarded.
So, when all the targets of your COMPUTEs are 9(4), they will all contain zero, including the negative values.
When all the targets of your COMPUTEs are 9(4)V9, without rounding, they will contain 0.9, 0.5 and 0.3 respectively with the low-order (from the second decimal digit) decimal part being truncated.
And when all the targets of your COMPUTEs are 9(4)V99, they will contain 0.97, 0.50 and 0.32 with the low-order decimal part beyond that being truncated.
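Not COBOL, but the same default truncation is easy to reproduce with Python's decimal module if you want to experiment (ROUND_DOWN truncates toward zero, like the COBOL default):

from decimal import Decimal, ROUND_DOWN

for value in (Decimal(97) / 100, Decimal(-97) / 100):
    print(value.quantize(Decimal('1'), rounding=ROUND_DOWN),      # 0    and -0    (like 9(4))
          value.quantize(Decimal('0.1'), rounding=ROUND_DOWN),    # 0.9  and -0.9  (like 9(4)V9)
          value.quantize(Decimal('0.01'), rounding=ROUND_DOWN))   # 0.97 and -0.97 (like 9(4)V99)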
You do rounding in the language by using the ROUNDED phrase for the result of any arithmetic verb (ADD, SUBTRACT, MULTIPLY, DIVIDE, COMPUTE).
ADD some-name some-other-name GIVING some-result ROUNDED
COMPUTE some-result ROUNDED = some-name + some-other-name
The above are equivalent to each other.
To the 1985 Standard, ROUNDED looks at the final result with one extra decimal place and, when that extra digit is 5 or more, adjusts the actual field with the defined decimal places by adding "one" at the smallest possible unit (for V99 it will add one hundredth, at V999 it will add one thousandth, at no decimal places it will add one, and with any scaling amount (see PICture character P) it will add one).
You can consider the addition of one to be made to an absolute value, with the result retaining the original sign. Or you can consider it as done in any other way which achieves the same result. The Standard describes the rounding, the implementation meets the Standard in whatever way it likes. My description is a description for human understanding. No compiler need implement it in the way I have described, but logically the results will be the same.
Don't get hung up on how it is implemented.
The 2014 Standard, superseding the 2002 Standard, has further options for rounding, which for the 85 Standard you'd have to code for (very easy use of REDEFINES).
ROUNDED MODE IS may be one of: AWAY-FROM-ZERO, NEAREST-AWAY-FROM-ZERO, NEAREST-EVEN, NEAREST-TOWARD-ZERO, PROHIBITED, TOWARD-GREATER, TOWARD-LESSER, TRUNCATION.
Only one mode may be specified at a time, and the default if MODE IS is not specified is TRUNCATION, establishing backward compatibility (and satisfying those who feel that everything is a rounding of some type).
The PROHIBITED option is interesting. If the result field has, for instance, two decimal places, then PROHIBITED requires that the calculated result has only two high-order decimal places and that all lower-order values are zero.
It is important to note with a COMPUTE that only the final result is rounded, intermediate results are not. If you need intermediate rounding, you need one COMPUTE (or other arithmetic verb) per rounded result.
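If you want to experiment with those modes without a 2014-level COBOL compiler, Python's decimal rounding constants line up closely; this is a rough correspondence, not an official mapping: AWAY-FROM-ZERO ~ ROUND_UP, NEAREST-AWAY-FROM-ZERO ~ ROUND_HALF_UP, NEAREST-EVEN ~ ROUND_HALF_EVEN, NEAREST-TOWARD-ZERO ~ ROUND_HALF_DOWN, TOWARD-GREATER ~ ROUND_CEILING, TOWARD-LESSER ~ ROUND_FLOOR, TRUNCATION ~ ROUND_DOWN.

from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_HALF_EVEN

value = Decimal('-0.045')
cents = Decimal('0.01')

print(value.quantize(cents, rounding=ROUND_DOWN))       # -0.04 (like the COBOL default: truncation)
print(value.quantize(cents, rounding=ROUND_HALF_UP))    # -0.05 (like 85-Standard ROUNDED: half away from zero)
print(value.quantize(cents, rounding=ROUND_HALF_EVEN))  # -0.04 (like NEAREST-EVEN)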

fortran intrinsic function sqrt() behaviour

I have a scenario where I'm trying to execute a complex application on AIX and Linux.
During the execution the code makes use of the intrinsic function sqrt() for computation, but the results obtained differ on the two machines.
Does anyone know the reason for this behavior? Is there any way to overcome this?
P.S. Some values are equal on both machines, but the majority of them are different.
Processors that follow the IEEE 754 specification must return the exact result for square root (or correctly rounded when exact cannot be represented). For the same input values, floating point format, and rounding mode, different IEEE 754 compliant processors must return an identical result. No variation is allowed. Possible reasons for seeing different results:
One of the processors does not follow the IEEE 754 floating point specification.
The values are really the same, but a print related bug or difference makes them appear different.
The rounding mode or precision control is not set the same on both systems.
One system attempts to follow the IEEE 754 specification but has an imperfection in its square root function.
Did you compare binary output to eliminate the possibility of a print formatting bug or difference?
Most processors today support IEEE 754 floating point. An example where IEEE 754 accuracy is not guaranteed is with the OpenCL native_sqrt function. OpenCL defines native_sqrt (in addition to IEEE 754 compliant sqrt) so that speed can be traded for accuracy if desired.
Bugs in IEEE 754 sqrt implementations are not too common today. A difficult case for an IEEE 754 sqrt function is when the rounding mode is set to nearest and the actual result is very near the midway point between two floating point representations. A method for generating these difficult square root arguments can be found in a paper by William Kahan, How to Test Whether SQRT is Rounded Correctly.
There may be slight differences in the numeric representation of the hardware on the two computers or in the algorithm used for the sqrt function of the two compilers. Finite-precision arithmetic is not the same as the arithmetic of real numbers, and slight differences in calculations should be expected. To judge whether the differences are unusual, you should state the numeric type that you are using (as asked by ChuckCottrill) and give examples. What is the relative difference? For values of order unity, 1E-9 is an expected difference for single-precision floating point.
Check the floating point formats available for each cpu. Are you using single-precision or double-precision floating point? You need to use a floating point format with similar precision on both machines if you want comparable/similar answers.
Floating point is an approximation. A single-precision float uses 24 bits of significand precision (23 stored bits plus an implied leading 1), 8 bits for the exponent, and a sign bit. This allows for about 7 decimal digits of precision. Double-precision floating point uses a 53-bit significand, allowing for much more precision.
Lacking detail about the binary values of the differing floating-point numbers on the two systems, and the printed representations of these values, all that can be said is that you have rounding or representation differences.
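For reference, you can see the two precisions side by side in Python by forcing a value through 32-bit storage with the struct module:

import struct

x = 9.2
single = struct.unpack('f', struct.pack('f', x))[0]   # round-trip through a 32-bit float

print(single)                 # 9.199999809265137 - about 7 significant digits survive
print(x)                      # 9.2               - the 64-bit double is much closer
print(abs(single - x) / x)    # relative error around 2e-08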
