Setting the length of a float or number - python-3.x

I've been looking around for a way to fix the length of floats or Decimals at 2 decimal places. I'm doing this for a piece of coursework. I have tried getcontext, but it does nothing:
from decimal import *
getcontext().prec = 2
price = "22.5"
#I would like this to be 22.50. It comes from a list, and I use float a bit elsewhere, so I have to convert it to Decimal (?)
price = Decimal(price)
print (price)
But the output is:
22.5
If anyone knows a better way to set the length of a decimal to 2 decimal places (I'm using it for money), or can tell me where I'm going wrong, that would be helpful.

"float" is short for "floating point". Read about floating point on https://en.wikipedia.org/wiki/Floating-point_arithmetic and then never ever ever use it to represent money.
You're on the right track with Decimal. You just need to watch out for the distinction between the precision of the representation and the display.
The prec attribute of the context controls the precision of the representation of values that result from different operations. It does not control the precision of explicitly constructed values. And it does not control the precision of the display.
Consider:
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("3")
Decimal('0.33')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("2")
Decimal('0.5')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("0.12345")
Decimal('0.12345')
>>>
To specify the precision for display purposes of a Decimal, you just have to take more control over the display code. Don't rely on str(Decimal(...)).
One option is to normalize the decimal for display:
>>> getcontext().prec = 2
>>> Decimal("0.12345").normalize()
Decimal('0.12')
This respects the prec setting from the context.
Another option is to quantize it to a specific precision:
>>> Decimal("0.12345").quantize(Decimal("1.00"))
Decimal('0.12')
This is independent of the prec setting from the context.
Decimals can also be rounded:
>>> round(Decimal("123.4567"), 2)
Decimal('123.46')
In Python 3, round() applied to a Decimal returns another Decimal. (In Python 2 it returned a float, so be careful with this there.)
You can also format a Decimal directly into a string:
>>> "{:.2f}".format(Decimal("1.234"))
'1.23'
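Putting these pieces together for the money case in the question, here is a minimal sketch (ROUND_HALF_UP is chosen here as the conventional rounding for currency; adjust if your coursework specifies otherwise):

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("22.5")
# quantize pins the value to exactly two decimal places
display = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(display)                 # 22.50
# Decimal also understands the standard format mini-language
print("{:.2f}".format(price))  # 22.50
```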

Try this:
print("{:.2f}".format(price))
This way there is no need to set the precision globally.

Related

Round a decimal number in python

How can I round this number 838062.5 to 838063 instead of to 838062, using the round() function in Python?
Use math.ceil to round up:
import math
print(math.ceil(838062.5)) # prints 838063
The Python documentation for the round() function explains this interesting phenomenon:
If two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
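A quick check of that documented behaviour:

```python
# Python 3's round() rounds halves to the nearest even integer
# ("banker's rounding"), so successive .5 values alternate direction
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2 -- not 3, because 2 is the even neighbour
print(round(3.5))  # 4
```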
While you could use floor and ceiling operations, matching the various cases may be a pain. So, I recommend using the decimal module, which allows you to always round half-values up.
>>> import decimal
>>> decimal.getcontext().rounding = decimal.ROUND_HALF_UP
>>> n = decimal.Decimal(838062.5)
>>> n
Decimal('838062.5')
>>> n.quantize(Decimal('1'))
Decimal('838063')
Note that the built-in round(n) with no second argument always rounds ties to even for a Decimal, ignoring the context, so use quantize (or to_integral_value), which respects the context's rounding mode.
All I had to do was add 0.5 and take the floor value:
import math
x = 838062.5 + 0.5
print(math.floor(x))
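That trick deserves a caveat: for negative ties it rounds toward positive infinity rather than away from zero. A small sketch:

```python
import math

def round_half_up(x):
    # add 0.5 and floor: rounds .5 cases upward (toward +infinity)
    return math.floor(x + 0.5)

print(round_half_up(838062.5))  # 838063
print(round_half_up(-2.5))      # -2, i.e. toward +infinity, not -3
```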

Accuracy of python decimals

How do I make the decimal in Python more accurate, so that I can calculate up to
0.0000000000001 * 0.0000000000000000001 = ?
I need to add decimals like 0.0000000145 and 0.00000000000000012314 and also multiply them and get the exact result. Is there a needed code, or is there a module? Thanks in advance.
I need something that is more accurate than decimal.Decimal.
Not sure why you're getting downvoted.
decimal.Decimal represents numbers using floating point in base 10. Since it isn't implemented directly in hardware, you can control the level of precision (which defaults to 28 places):
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
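For the exact sums and products asked about in the question, it is enough to construct the Decimals from strings and give the context sufficient precision, for example:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # generous precision for these operands

a = Decimal("0.0000000000001")        # 1e-13
b = Decimal("0.0000000000000000001")  # 1e-19
print(a * b)  # 1E-32, exact

c = Decimal("0.0000000145") + Decimal("0.00000000000000012314")
print(c)      # every digit of the sum is preserved
```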
However, you may prefer to use the mpmath module instead, which supports arbitrary precision real and complex floating point calculations:
>>> from mpmath import mp
>>> mp.dps = 50
>>> print(mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2)
3.1415926535897932384626433832795028841971693993751
Maybe do something like
format(0.0000000000001 * 0.0000000000000000001, '.40f')
The '.40f' can be changed to display more digits, e.g. '.70f' for 70 digits after the decimal point. Keep in mind, though, that this only changes how many digits of the float result are displayed; it does not make the underlying calculation any more accurate.
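Note what those extra digits actually show. A float carries only about 15-17 significant decimal digits, so a wide format spec just prints out the binary approximation that was stored:

```python
# 0.1 cannot be represented exactly in binary floating point;
# asking for 25 decimal places exposes the stored approximation
print(format(0.1, '.25f'))  # 0.1000000000000000055511151

# the product below is computed in float arithmetic, so the long
# format displays the float result, which may differ from the exact
# answer in its trailing digits
print(format(0.0000000000001 * 0.0000000000000000001, '.40f'))
```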

math.sqrt function python gives same result for two different values [duplicate]

Why does the math module return the wrong result?
First test
from math import sqrt

A = 12345678917
print('A =', A)
B = sqrt(A**2)
print('B =', int(B))
Result
A = 12345678917
B = 12345678917
Here, the result is correct.
Second test
A = 123456758365483459347856
print('A =', A)
B = sqrt(A**2)
print('B =', int(B))
Result
A = 123456758365483459347856
B = 123456758365483467538432
Here the result is incorrect.
Why is that the case?
Because math.sqrt(..) first converts the number to a floating-point value, and floating-point values have a limited mantissa: they can only represent part of the number exactly. So float(A**2) is not equal to A**2. Next it calculates math.sqrt of that, which is also only approximately correct.
Most functions working with floating points will never be fully correct to their integer counterparts. Floating point calculations are almost inherently approximative.
If one calculates A**2 one gets:
>>> 12345678917**2
152415787921658292889
Now if one converts it to a float(..), one gets:
>>> float(12345678917**2)
1.5241578792165828e+20
But if you now ask whether the two are equal:
>>> float(12345678917**2) == 12345678917**2
False
So information has been lost while converting it to a float.
You can read more about how floats work and why these are approximative in the Wikipedia article about IEEE-754, the formal definition on how floating points work.
The documentation for the math module states "It provides access to the mathematical functions defined by the C standard." It also states "Except when explicitly noted otherwise, all return values are floats."
Those together mean that the parameter to the square root function is a float value. In most systems that means a floating point value that fits into 8 bytes, which is called "double" in the C language. Your code converts your integer value into such a value before calculating the square root, then returns such a value.
However, the 8-byte floating point value can store at most 15 to 17 significant decimal digits. That is what you are getting in your results.
If you want better precision in your square roots, use a function that is guaranteed to give full precision for an integer argument. Just do a web search and you will find several. These usually use a variation of the Newton-Raphson method to iterate until they reach the correct answer. Be aware that this is significantly slower than the math module's sqrt function.
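As of Python 3.8, the standard library already ships such a function, math.isqrt, which is exact for arbitrarily large integers:

```python
import math

a = 123456758365483459347856
# math.isqrt works on the exact integer, never converting to float,
# so no precision is lost even for very large values
print(math.isqrt(a**2) == a)  # True
print(math.isqrt(a**2 - 1))   # floor of the true square root, i.e. a - 1
```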
Here is a routine that I modified from the internet. I can't cite the source right now. This version also works for non-integer arguments but just returns the integer part of the square root.
def isqrt(x):
    """Return the integer part of the square root of x, even for very
    large values."""
    if x < 0:
        raise ValueError('square root not defined for negative numbers')
    n = int(x)
    if n == 0:
        return 0
    a, b = divmod(n.bit_length(), 2)
    x = (1 << (a+b)) - 1
    while True:
        y = (x + n//x) // 2
        if y >= x:
            return x
        x = y
If you want to calculate sqrt of really large numbers and you need exact results, you can use sympy:
import sympy
num = sympy.Integer(123456758365483459347856)
print(int(num) == int(sympy.sqrt(num**2)))
The way floating-point numbers are stored in memory makes calculations with them prone to slight errors that can nevertheless be significant when exact results are needed. As mentioned in one of the comments, the decimal library can help you here:
>>> from decimal import Decimal
>>> A = Decimal(123456758365483459347856)
>>> A
Decimal('123456758365483459347856')
>>> B = A.sqrt()**2
>>> B
Decimal('123456758365483459347856.0000')
>>> A == B
True
>>> int(B)
123456758365483459347856
I use version 3.6, where integers have no hardcoded size limit. (In 2.7, int(B) would simply produce a long rather than overflow.) Either way, decimal is incredibly useful here.

Distinguishing large integers from near integers in python

I want to avoid my code mistaking a near integer for an integer. For example, 58106601358565889 has as its square root 241,053,109.00000001659385359763188, but when I used the following boolean test, 58106601358565889 fooled me into thinking it was a perfect square:
a = 58106601358565889
b = math.sqrt(a)
print(b == int(b))
The precision isn't necessarily the problem, because if I re-check, I get the proper (False) conclusion:
print(a == b**2)
What would be a better way to test for a true versus a near integer? The math.sqrt is buried in another definition in my code, and I would like to avoid having to insert a check of a squared square root, if possible. I apologize if this is not a good question; I'm new to python.
import numpy as np
import math
from decimal import *
a = 58106601358565889
b = np.sqrt(a)
c = math.sqrt(a)
d = Decimal(58106601358565889).sqrt()
print(d)
print(int(d))
print(c)
print(int(c))
print(b)
print(int(b))
Output:
241053109.0000000165938535976
241053109
241053109.0
241053109
241053109.0
241053109
I would say use decimal.
Suggested code:
from decimal import *
d = Decimal(58106601358565889).sqrt()
print(d == int(d))
Output:
False
This isn't a matter of distinguishing integers from non-integers, because b really is an integer*. The precision of a Python float isn't enough to represent the square root of a to enough digits to get any of its fractional component. The second check you did:
print(a == b**2)
only prints False because while b is an integer, b**2 still isn't a.
If you want to test whether very large integers are exact squares, consider implementing a square root algorithm yourself.
*as in 0 fractional part, not as in isinstance(b, int).
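On Python 3.8 or later, a clean version of such a test uses math.isqrt, which works on the exact integer and avoids floats entirely:

```python
import math

def is_perfect_square(n):
    # math.isqrt returns the floor of the exact integer square root,
    # so squaring it back tells us whether n was a perfect square
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

print(is_perfect_square(58106601358565889))  # False -- the near miss
print(is_perfect_square(241053109 ** 2))     # True
```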
It's not the precision of the int that is the problem - it's the limited precision of floats
>>> import math
>>> math.sqrt(58106601358565889)
241053109.0
>>> math.sqrt(58106601358565889) - 241053109
0.0
I think the double check would be the obvious solution
You could also look at the gmpy2 library. It has a function for calculating the integer square root and also the integer square root plus remainder. There are no precision constraints.
>>> import gmpy2
>>> gmpy2.isqrt(58106601358565889)
mpz(241053109)
>>> gmpy2.isqrt_rem(58106601358565889)
(mpz(241053109), mpz(8))
>>>
Disclaimer: I maintain gmpy2.

Limiting floats to a varying number (decided by the end-user) of decimal points in Python

So, I've learned quite a few ways to control the precision when I'm dealing with floats.
Here is an example of 3 different techniques:
somefloat=0.0123456789
print("{0:.10f}".format(somefloat))
print("%.5f" % somefloat)
print(Decimal(somefloat).quantize(Decimal(".01")))
This will print:
0.0123456789
0.01235
0.01
In all of the above examples, the precision itself is a fixed value. How could I turn the precision into a variable that can be entered by the end-user?
I mean, the fixed precision values are inside the quotation marks, and I can't seem to find a way to put a variable there. Is there a way?
I'm on Python 3.
Using format:
somefloat=0.0123456789
precision = 5
print("{0:.{1}f}".format(somefloat, precision))
# 0.01235
Using old-style string interpolation:
print("%.*f" % (precision, somefloat))
# 0.01235
Using decimal:
import decimal
D = decimal.Decimal
q = D(10) ** -precision
print(D(somefloat).quantize(q))
# 0.01235
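On Python 3.6+, the same nested-field trick also works in an f-string, which is often the most readable option:

```python
somefloat = 0.0123456789
precision = 5
# the inner {precision} is substituted first, producing the spec ".5f"
print(f"{somefloat:.{precision}f}")  # 0.01235
```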
