Round a decimal number in python - python-3.x

How can I round this number 838062.5 to 838063 instead of to 838062, using the round() function in Python?

Use math.ceil to round up:
import math
print(math.ceil(838062.5)) # prints 838063
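Note that math.ceil rounds every fractional value up, not just exact halves:
import math
print(math.ceil(838062.1))  # also prints 838063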

The Python documentation for the round() function explains this interesting phenomenon:
If two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
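You can see this tie-to-even behavior directly:
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2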
While you could use floor and ceiling operations, matching the various cases may be a pain. So, I recommend using the decimal module, which lets you pick a rounding mode such as ROUND_HALF_UP. One gotcha: calling round() on a Decimal with no second argument always uses round-half-even and ignores the context, so use to_integral_value(), which does respect the context's rounding mode:
>>> import decimal
>>> decimal.getcontext().rounding = decimal.ROUND_HALF_UP
>>> n = decimal.Decimal('838062.5')
>>> n
Decimal('838062.5')
>>> n.to_integral_value()
Decimal('838063')
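If you'd rather not touch the global context, you can request the rounding mode per call with quantize; a minimal sketch:
from decimal import Decimal, ROUND_HALF_UP

n = Decimal('838062.5')
# quantize to zero decimal places, rounding ties away from zero
print(n.quantize(Decimal('1'), rounding=ROUND_HALF_UP))  # 838063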

All I had to do was add 0.5 and take the floor value:
import math
x = 838062.5 + 0.5
print(math.floor(x))
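One caveat worth knowing: adding 0.5 and flooring rounds halves toward positive infinity, which matches round-half-up only for non-negative inputs. A quick illustration (the helper name is just for this sketch):
import math

def round_half_toward_inf(x):
    # halves go toward +infinity: 838062.5 -> 838063, but -838062.5 -> -838062
    return math.floor(x + 0.5)

print(round_half_toward_inf(838062.5))   # 838063
print(round_half_toward_inf(-838062.5))  # -838062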

Related

Why does this n choose r python code not work?

These two variations of n-choose-r code give different answers, although both follow the definition.
I saw that this code works:
import math
def nCr(n, r):
    f = math.factorial
    return f(n) // f(r) // f(n-r)
But mine did not:
import math
def nCr(n, r):
    f = math.factorial
    return int(f(n) / (f(r) * f(n-r)))
The test case nCr(80, 20) shows the difference in results. Please advise why they differ in Python 3, thank you!
No error message. The right answer should be 3535316142212174320, but mine got 3535316142212174336.
That's because int(a / b) isn't the same as a // b.
int(a / b) evaluates a / b first, which is floating-point division, and floating-point numbers are prone to roundoff error; for example, .1 + .2 evaluates to 0.30000000000000004. At some point your code divides really big numbers, and because floats are of fixed size, they cannot represent the results exactly.
a // b is integer division, which is a different thing. Python's integers can be arbitrarily large, and their division doesn't introduce roundoff error, so you get the correct result.
Speaking of floating-point numbers being of fixed size, take a look at this:
>>> import math
>>> f = math.factorial
>>> f(20) * f(80-20)
20244146256600469630315959326642192021057078172611285900283370710785170642770591744000000000000000000
>>> f(80) / _
3.5353161422121743e+18
The float 3.5353161422121743e+18 carries no information about any digits beyond the last 3 shown; there's nowhere to store them. But int(3.5353161422121743e+18) must produce those digits somehow. It returns the exact integer value of the float, 3535316142212174336, so that float(int(3.5353161422121743e+18)) == 3.5353161422121743e+18.
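As an aside, if you're on Python 3.8 or newer, math.comb performs this computation in exact integer arithmetic for you:
import math
print(math.comb(80, 20))  # 3535316142212174320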

Python floating point precision sum

I have the following list in Python:
n = [565387674.45, 321772103.48, 321772103.48,
     214514735.66, 214514735.65, 357524559.41]
If I sum all these elements, I get this:
sum(n)
1995485912.1300004
But the sum should be:
1995485912.13
So I know about floating-point "error". I have already used numpy's isclose() function to check against the expected value, but how large can this error get? Is there any way to reduce it?
The main issue is that the error propagates to other operations; for example, I need the assertion below to hold:
assert (sum(n) - 1995485911) ** 100 - (1995485912.13 - 1995485911) ** 100 == 0
This is a problem with floating-point numbers. One solution is to keep the values in string form and use the decimal module:
n = ['565387674.45', '321772103.48', '321772103.48',
     '214514735.66', '214514735.65', '357524559.41']
from decimal import Decimal
s = sum(Decimal(i) for i in n)
print(s)
Prints:
1995485912.13
You could use the round(num, n) function, which rounds the number to n decimal places. In your example you would use round(sum(n), 2).
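If the inputs must stay as floats, another option is math.fsum, which tracks intermediate partial sums exactly and rounds only once at the end:
import math

n = [565387674.45, 321772103.48, 321772103.48,
     214514735.66, 214514735.65, 357524559.41]

# unlike sum(), fsum does not accumulate rounding error across additions;
# the result is the float closest to the exact sum of the inputs
print(math.fsum(n))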

Accuracy of python decimals

How do I make the decimal in Python more accurate, so that I can calculate up to
0.0000000000001 * 0.0000000000000000001 = ?
I need to add decimals like 0.0000000145 and 0.00000000000000012314 and also multiply them and get the exact result. Is there a needed code, or is there a module? Thanks in advance.
I need something that is more accurate than decimal.Decimal.
decimal.Decimal represents numbers using floating point in base 10. Since it isn't implemented directly in hardware, you can control the level of precision (which defaults to 28 places):
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
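For the numbers in the question, plain Decimal is already exact as long as you construct values from strings; a quick sketch:
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of significant digits
print(Decimal('0.0000000000001') * Decimal('0.0000000000000000001'))  # 1E-32
print(Decimal('0.0000000145') + Decimal('0.00000000000000012314'))    # 1.450000012314E-8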
However, you may prefer to use the mpmath module instead, which supports arbitrary precision real and complex floating point calculations:
>>> from mpmath import mp
>>> mp.dps = 50
>>> print(mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2)
3.1415926535897932384626433832795028841971693993751
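For ordinary arithmetic on the question's values, a minimal mpmath sketch (mpf accepts strings, which avoids converting through a float first):
from mpmath import mp, mpf

mp.dps = 50  # work with 50 significant decimal digits
a = mpf('0.0000000145')
b = mpf('0.00000000000000012314')
print(a + b)
print(a * b)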
Maybe do something like:
format(0.0000000000001 * 0.0000000000000000001, '.40f')
The '.40f' can be changed to display more digits, e.g. '.70f' for 70 digits after the decimal point. Note that this only controls how many digits are printed; the underlying float still carries only about 15-17 significant digits, so the extra digits are not more accurate.

Setting the length of a float or number

I've been looking around for a way to set the length of floats or decimals to 2 places; I'm doing this for a piece of coursework. I have tried getcontext, but it does nothing.
from decimal import *
getcontext().prec = 2
price = "22.5"
# I would like this to be 22.50, but it comes from a list and I use float a bit,
# so I have to convert it to Decimal (?)
price = Decimal(price)
print (price)
But the output is:
22.5
If anyone knows a better way to set the length of a decimal to 2 decimal places (I'm using it for money), or can tell me where I'm going wrong, it would be helpful.
"float" is short for "floating point". Read about floating point on https://en.wikipedia.org/wiki/Floating-point_arithmetic and then never ever ever use it to represent money.
You're on the right track with Decimal. You just need to watch out for the distinction between the precision of the representation and the display.
The prec attribute of the context controls the precision of the representation of values that result from different operations. It does not control the precision of explicitly constructed values. And it does not control the precision of the display.
Consider:
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("3")
Decimal('0.33')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("2")
Decimal('0.5')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("0.12345")
Decimal('0.12345')
>>>
To specify the precision for display purposes of a Decimal, you just have to take more control over the display code. Don't rely on str(Decimal(...)).
One option is to normalize the decimal for display:
>>> getcontext().prec = 2
>>> Decimal("0.12345").normalize()
Decimal('0.12')
This respects the prec setting from the context.
Another option is to quantize it to a specific precision:
>>> Decimal("0.12345").quantize(Decimal("1.00"))
Decimal('0.12')
This is independent of the prec setting from the context.
Decimals can also be rounded:
>>> round(Decimal("123.4567"), 2)
Decimal('123.46')
In Python 3 the two-argument form returns a Decimal (it quantizes using the context's rounding mode); note that the one-argument form, round(Decimal(...)), returns an int.
You can also format a Decimal directly into a string:
>>> "{:.2f}".format(Decimal("1.234"))
'1.23'
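Applied to the question's price, a quantize-based version might look like this sketch:
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("22.5")
# fix the exponent at two decimal places, as is conventional for money
price = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(price)  # 22.50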
Try this:
print("{:.2f}".format(price))
This way there is no need to set the precision globally.

Distinguishing large integers from near integers in python

I want to avoid my code mistaking a near integer for an integer. For example, 58106601358565889 has as its square root 241,053,109.00000001659385359763188, but when I used the following boolean test, 58106601358565889 fooled me into thinking it was a perfect square:
import math

a = 58106601358565889
b = math.sqrt(a)
print(b == int(b))  # True, misleadingly
The precision isn't necessarily the problem, because if I re-check, I get the proper (False) conclusion:
print(a == b**2)
What would be a better way to test for a true versus a near integer? The math.sqrt is buried in another definition in my code, and I would like to avoid having to insert a check of a squared square root, if possible. I apologize if this is not a good question; I'm new to Python.
import numpy as np
import math
from decimal import *
a = 58106601358565889
b = np.sqrt(a)
c = math.sqrt(a)
d = Decimal(58106601358565889).sqrt()
print(d)
print(int(d))
print(c)
print(int(c))
print(b)
print(int(b))
Output:
241053109.0000000165938535976
241053109
241053109.0
241053109
241053109.0
241053109
I would say use decimal.
Expected code:
from decimal import *
d = Decimal(58106601358565889).sqrt()
print(d == int(d))
Output:
False
This isn't a matter of distinguishing integers from non-integers, because b really is an integer*. The precision of a Python float isn't enough to represent the square root of a to enough digits to get any of its fractional component. The second check you did:
print(a == b**2)
only prints False because, while b is an integer, b**2 still isn't equal to a.
If you want to test whether very large integers are exact squares, consider implementing an integer square root algorithm yourself (or see the sketch below).
*as in 0 fractional part, not as in isinstance(b, int).
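On Python 3.8 and newer, the standard library's math.isqrt computes exact integer square roots, so a perfect-square test can avoid floats entirely (the helper name is just for this sketch):
import math

def is_perfect_square(n):
    # math.isqrt returns the floor of the exact integer square root
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

print(is_perfect_square(58106601358565889))  # False
print(is_perfect_square(241053109 ** 2))     # True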
It's not the precision of the int that is the problem - it's the limited precision of floats
>>> import math
>>> math.sqrt(58106601358565889)
241053109.0
>>> math.sqrt(58106601358565889) - 241053109
0.0
I think the double check (a == b**2) would be the obvious solution.
You could also look at the gmpy2 library. It has a function for calculating the integer square root and also the integer square root plus remainder. There are no precision constraints.
>>> import gmpy2
>>> gmpy2.isqrt(58106601358565889)
mpz(241053109)
>>> gmpy2.isqrt_rem(58106601358565889)
(mpz(241053109), mpz(8))
Disclaimer: I maintain gmpy2.
