Python 3.4: limiting floats to two decimal points - python-3.x

I am using Python 3.4 and I want to limit a float number to two decimal points:
round(1.2377, 2)
format(1.2377, '.2f')
These two would give me 1.24, but I don't want 1.24, I need 1.23. How do I do it?

You can convert to a string, slice it, then convert back to float:
>>> num=1.2377
>>> float(str(num)[:-2])
1.23
Read more about this in Floating Point Arithmetic: Issues and Limitations.
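That slicing trick assumes the value prints with exactly four digits after the decimal point. A more general sketch (my generalization, not part of the answer) truncates arithmetically instead:
import math

def truncate(num, places=2):
    # Drop, rather than round, everything past `places` decimal digits
    factor = 10 ** places
    return math.trunc(num * factor) / factor

print(truncate(1.2377))  # 1.23
The usual binary float caveats apply: a value stored slightly below its displayed decimal form can truncate one digit lower than you expect.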

Related

Pandas: Precision error when converting string to float

Using pandas to deal with timestamps, I am concatenating two columns and then converting the result to a float. When I display the two columns I observe two different results. How can the conversion from string to float affect the value? Thanks for your help.
Here is the content of the data.csv file
epoch_day,epoch_ns
1533081601,224423000
Here is my test program:
import pandas as pd
pd.options.display.float_format = '{:.10f}'.format
df_mid = pd.read_csv("data.csv")
df_mid['result_1'] = df_mid['epoch_day'].astype(str).str.cat(df_mid['epoch_ns'].astype(str), sep=".")
df_mid['result_2'] = df_mid['epoch_day'].astype(str).str.cat(df_mid['epoch_ns'].astype(str), sep=".").astype(float)
print(df_mid)
The result is:
epoch_day epoch_ns result_1 result_2
0 1533081601 224423000 1533081601.224423000 1533081601.2244229317
Thanks for your help
FX
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. Most decimal fractions cannot be represented exactly as binary fractions.
When you convert your string, Python creates the float that is the closest binary fraction to your input.
You can see which decimal number this corresponds to by running the following:
from decimal import Decimal
Decimal(1533081601.224423000)
OUTPUT: Decimal('1533081601.224422931671142578125')
See the Python documentation for more info: https://docs.python.org/2/tutorial/floatingpoint.html
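If you need to keep every digit, one workaround (a sketch of mine, not from the answer) is to skip float entirely and keep the concatenated value as Decimal objects:
import pandas as pd
from decimal import Decimal

df_mid = pd.read_csv("data.csv")
# Build the exact value from strings and keep it as a Decimal;
# the column then holds Python objects rather than float64 values
df_mid['result_exact'] = (
    df_mid['epoch_day'].astype(str)
    .str.cat(df_mid['epoch_ns'].astype(str), sep=".")
    .apply(Decimal)
)
print(df_mid['result_exact'][0])  # 1533081601.224423000
The trade-off is that an object column loses vectorized float performance, but no precision is lost.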

Accuracy of Python decimals

How do I make decimals in Python more accurate, so that I can calculate
0.0000000000001 * 0.0000000000000000001 = ?
I need to add decimals like 0.0000000145 and 0.00000000000000012314, and also multiply them, and get the exact result. Is there code for this, or a module? Thanks in advance.
I need something that is more accurate than decimal.Decimal.
Not sure why you're getting downvoted.
decimal.Decimal represents numbers using floating point in base 10. Since it isn't implemented directly in hardware, you can control the level of precision (which defaults to 28 places):
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
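For the exact numbers in the question, Decimal already gives exact answers as long as you construct the values from strings rather than floats (a worked sketch, not part of the original answer):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50
>>> Decimal('0.0000000000001') * Decimal('0.0000000000000000001')
Decimal('1E-32')
>>> Decimal('0.0000000145') + Decimal('0.00000000000000012314')
Decimal('1.450000012314E-8')
Constructing from floats instead, e.g. Decimal(0.0000000145), would bake in the binary rounding error before Decimal ever sees the value.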
However, you may prefer to use the mpmath module instead, which supports arbitrary precision real and complex floating point calculations:
>>> from mpmath import mp
>>> mp.dps = 50
>>> print(mp.quad(lambda x: mp.exp(-x**2), [-mp.inf, mp.inf]) ** 2)
3.1415926535897932384626433832795028841971693993751
Maybe do something like:
format(0.0000000000001 * 0.0000000000000000001, '.40f')
The '.40f' can be changed to display more digits, e.g. '.70f' for 70 digits after the decimal point. Note that this only changes how many digits are printed: the product is still an ordinary binary float, so it does not add any real precision.

Setting the length of a float or number

I've been looking around trying to set the length of floats or decimals to 2 places; I'm doing this for a piece of coursework. I have tried getcontext but it does nothing.
from decimal import *
getcontext().prec = 2
price = ("22.5")
#I would like this to be 22.50, but as it comes form a list and I use float a bit, so I have to convert it to decimal (?)
price = Decimal(price)
print (price)
But the output is:
22.5
If anyone knows a better way to set the length of a decimal to 2 decimal places (I'm using it for money), or can see where I'm going wrong, it would be helpful.
"float" is short for "floating point". Read about floating point on https://en.wikipedia.org/wiki/Floating-point_arithmetic and then never ever ever use it to represent money.
You're on the right track with Decimal. You just need to watch out for the distinction between the precision of the representation and the display.
The prec attribute of the context controls the precision of the representation of values that result from different operations. It does not control the precision of explicitly constructed values. And it does not control the precision of the display.
Consider:
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("3")
Decimal('0.33')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("1") / Decimal("2")
Decimal('0.5')
>>>
vs
>>> getcontext().prec = 2
>>> Decimal("0.12345")
Decimal('0.12345')
>>>
To specify the precision for display purposes of a Decimal, you just have to take more control over the display code. Don't rely on str(Decimal(...)).
One option is to normalize the decimal for display:
>>> getcontext().prec = 2
>>> Decimal("0.12345").normalize()
Decimal('0.12')
This respects the prec setting from the context.
Another option is to quantize it to a specific precision:
>>> Decimal("0.12345").quantize(Decimal("1.00"))
Decimal('0.12')
This is independent of the prec setting from the context.
Decimals can also be rounded:
>>> round(Decimal("123.4567"), 2)
Decimal('123.46')
In Python 3, round() on a Decimal returns another Decimal; in Python 2 it returns a float, so be careful with it there.
You can also format a Decimal directly into a string:
>>> "{:.2f}".format(Decimal("1.234"))
'1.23'
Try this:
print("{:.2f}".format(price))
This way there is no need to set the precision globally.

Python - input decimal to fraction

When working in Python, I was able to convert a fraction to a decimal: the user would input a numerator, then a denominator, and then n/d = the result (fairly simple). But I can't work out how to convert a decimal into a fraction. I want the user to input any decimal (i.e. 0.5) and then find its simplest form (1/2). Any help would be greatly appreciated. Thanks.
Use the fractions module.
from fractions import Fraction
f = Fraction(14, 8)
print(f)         # Output: 7/4
print(float(f))  # Output: 1.75
f = Fraction(1.75)
print(f)         # Output: 7/4
print(float(f))  # Output: 1.75
It accepts both a numerator/denominator pair and a float to construct a Fraction object.
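One caveat the answer doesn't mention: floats that are not exactly representable in binary, such as 0.1, produce enormous exact fractions. For user input it is safer to construct the Fraction from the string, or to call limit_denominator() (a sketch under those assumptions):
from fractions import Fraction

# Constructing from the string avoids binary rounding entirely
print(Fraction("0.5"))                    # 1/2

# Constructing from a float exposes the binary representation error...
print(Fraction(0.1))                      # 3602879701896397/36028797018963968
# ...which limit_denominator() cleans up
print(Fraction(0.1).limit_denominator())  # 1/10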

Limiting floats to a varying number (decided by the end-user) of decimal points in Python

So, I've learned quite a few ways to control the precision when I'm dealing with floats.
Here is an example of 3 different techniques:
from decimal import Decimal

somefloat = 0.0123456789
print("{0:.10f}".format(somefloat))
print("%.5f" % somefloat)
print(Decimal(somefloat).quantize(Decimal(".01")))
This will print:
0.0123456789
0.01235
0.01
In all of the above examples the precision itself is a fixed value, but how could I turn the precision into a variable entered by the end-user?
I mean, the fixed precision values now sit inside quotation marks, and I can't seem to find a way to put a variable there. Is there a way?
I'm on Python 3.
Using format:
somefloat = 0.0123456789
precision = 5
print("{0:.{1}f}".format(somefloat, precision))
# 0.01235
Using old-style string interpolation:
print("%.*f" % (precision, somefloat))
# 0.01235
Using decimal:
import decimal
D = decimal.Decimal
q = D(10) ** -precision  # e.g. Decimal('0.00001') for precision = 5
print(D(somefloat).quantize(q))
# 0.01235
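To make the precision genuinely end-user controlled, a minimal sketch (assuming a console prompt, which the question doesn't specify) just reads an int and plugs it into any of the three techniques:
somefloat = 0.0123456789
precision = int(input("How many decimal places? "))  # e.g. the user types 5
print("{0:.{1}f}".format(somefloat, precision))
# 0.01235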
