How can I keep python from truncating large numbers after division - python-3.3

I am trying to do division with very large numbers. I know that Python can handle them before the division, but is there a way to keep Python from truncating the answer?
An example follows:
s =
68729682406644277238837486231747530924247154108646671752192618583088487405790957964732883069102561043436779663935595172042357306594916344606074564712868078287608055203024658359439017580883910978666185875717415541084494926500475167381168505927378181899753839260609452265365274850901879881203714
M =
2047
s/(2*M) = 1.6787904837968803e+289
Python can store the 292-digit number s exactly, but when it divides the large number the result gets truncated.
Is there any way that I can get an exact answer?
Thanks

If you are only concerned with the integer part of the answer, you can use // which is the integer division operator:
s // (2*M)
It looks like your s is a multiple of M, so this is probably what you are looking for.
In Python 3, the / operator always performs floating-point division, while // is the floor (integer) division operator. In Python 2, / did different things depending on whether or not both operands were integers. This was confusing, so the // operator was introduced and, in Python 3, / was redefined to always be floating point.
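For example (a small sketch; the huge s from the question behaves exactly the same way as the stand-in value here):
s = 10**80 + 12345      # any arbitrarily large Python int
M = 2047
print(s / (2 * M))      # float division: result is a rounded float
print(s // (2 * M))     # floor division: exact integer quotient
print(s % (2 * M))      # remainder, so no information is lost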

Related

Why doesn't precision in Mathematica work consistently, or sometimes not at all?

Consider the following terminating decimal numbers.
3.1^2 = 9.61
3.1^4 = 92.3521
3.1^8 = 8528.91037441
The following shows how Mathematica treats these expressions
In[1]:= 3.1^2
Out[1]= 9.61
In[2]:= 3.1^4
Out[2]= 92.352
So far so good, but
In[3]:= 3.1^8
Out[3]= 8528.91
doesn't provide enough precision.
So let's try N[], NumberForm[], and DecimalForm[] with a precision of 12
In[4]:= N[3.1^8,12]
Out[4]= 8528.91
In[5]:= NumberForm[3.1^8,12]
Out[5]= 8528.91037441
In[6]:= DecimalForm[3.1^8,12]
Out[6]= 8528.91037441
In this case DecimalForm[] and NumberForm[] work as expected, but N[] only provided the default precision of 6, even though I asked for 12. So DecimalForm[] or NumberForm[] seem to be the way to go if you want exact results when the inputs are terminating decimals.
Next consider rational numbers with infinite repeating decimals like 1/3.
In[7]:= N[1/3,20]
Out[7]= 0.33333333333333333333
In[9]:= NumberForm[1/3, 20]
Out[9]=
1/3
In[9]:= DecimalForm[1/3, 20]
Out[9]=
1/3
Unlike the previous case, N[] seems to be the proper way to go here, whereas NumberForm[] and DecimalForm[] do not respect precisions.
Finally consider irrational numbers like Sqrt[2] and Pi.
In[10]:= N[Sqrt[2],20]
Out[10]= 1.4142135623730950488
In[11]:= NumberForm[Sqrt[2], 20]
Out[11]=
sqrt(2)
In[12]:= DecimalForm[Sqrt[2], 20]
Out[12]=
sqrt(2)
In[13]:= N[π^12,30]
Out[13]= 924269.181523374186222579170358
In[14]:= NumberForm[Pi^12,30]
Out[14]=
π^12
In[15]:= DecimalForm[Pi^12,30]
Out[15]=
π^12
In these cases N[] works, but NumberForm[] and DecimalForm[] do not. However, note that N[] switches to scientific notation at π^13, even with a larger precision. Is there a way to avoid this switch?
In[16]:= N[π^13,40]
Out[16]= 2.903677270613283404988596199487803130470*10^6
So there doesn't seem to be a consistent way of formulating how to get decimal numbers with requested precisions while avoiding scientific notation. Sometimes N[] works, other times DecimalForm[] or NumberForm[] works, and at other times nothing seems to work.
Have I missed something or are there bugs in the system?
It isn't a bug; it is purposely designed to behave this way. Precision is limited by the precision of your machine, your configuration of Mathematica, and the algorithm and performance constraints of the calculation.
The documentation for N[expr, n] states it attempts to give a result with n‐digit precision. When it cannot give the requested precision it gets as close as it can. DecimalForm and NumberForm work the same way.
https://reference.wolfram.com/language/ref/N.html explains the various cases behind this:
Unless numbers in expr are exact, or of sufficiently high precision, N[expr,n] may not be able to give results with n‐digit precision.
N[expr,n] may internally do computations to more than n digits of precision.
$MaxExtraPrecision specifies the maximum number of extra digits of precision that will ever be used internally.
The precision n is given in decimal digits; it need not be an integer.
n must lie between $MinPrecision and $MaxPrecision. $MaxPrecision can be set to Infinity.
n can be smaller than $MachinePrecision.
N[expr] gives a machine‐precision number, so long as its magnitude is between $MinMachineNumber and $MaxMachineNumber.
N[expr] is equivalent to N[expr,MachinePrecision].
N[0] gives the number 0. with machine precision.
N converts all nonzero numbers to Real or Complex form.
N converts each successive argument of any function it encounters to numerical form, unless the head of the function has an attribute such as NHoldAll.
You can define numerical values of functions using N[f[args]]:=value and N[f[args],n]:=value.
N[expr,{p,a}] attempts to generate a result with precision at most p and accuracy at most a.
N[expr,{Infinity,a}] attempts to generate a result with accuracy a.
N[expr,{Infinity,1}] attempts to find a numerical approximation to the integer part of expr.

python 3 complex mathematics wrong answer

So I was trying to get e^(pi*I) = -1, but Python 3 gives me a different, strange result:
print(cmath.exp(cmath.pi * cmath.sqrt(-1)))
Result:
(-1+1.2246467991473532e-16j)
This should in theory return -1, no?
(Partial answer to the revised question.)
In theory, the result should be -1, but in practice the theory is slightly wrong.
The cmath module uses floating-point values to do its calculations: one float for the real part of a complex number and another float for the imaginary part. Therefore the module is subject to the limitations of floating-point math. For more on those limitations, see the canonical question Is floating point math broken?.
In brief, floating-point values are usually mere approximations of real values. The value of cmath.pi is not actually pi; it is just the best approximation that fits in the floating-point format used by most computers. So you are not really calculating e^(pi*I), just an approximation of it. The returned value has the exact, correct real part, -1, which is somewhat surprising to me. The imaginary part "should be" zero, but the actual result, about 1.2e-16, agrees with zero to better than 15 decimal places. That is the usual precision for floating point.
If you require exact answers, you should not be working with floating point values. Perhaps you should try an algebraic solution, such as the sympy module.
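For instance, a minimal sketch with the sympy module (assuming it is installed):
import sympy
# sympy works symbolically, so exp(I*pi) simplifies to the exact integer -1
result = sympy.exp(sympy.I * sympy.pi)
print(result)   # -1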
(The following was my original answer, which applied to the previous version of the question, where the result was an error message.)
The error message shows that you did not type what you thought you typed. Instead of cmath.exp on the outside of the expression, you typed math.exp. The math version of the exponential function expects a float value. You gave it a complex value (cmath.pi * cmath.sqrt(-1)), so Python tried to convert that complex value to a float and raised a TypeError.
When I type the expression you give at the top of your question, with the cmath properly typed, I get the result
(-1+1.2246467991473532e-16j)
which is very close to the desired value of -1.
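(For reference, a small sketch reproducing the mix-up described above; the exact original code is an assumption on my part:)
import math
import cmath
try:
    math.exp(cmath.pi * cmath.sqrt(-1))   # math.exp only accepts real numbers
except TypeError as err:
    print(err)                            # the complex argument is rejected with a TypeError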
Found the answer.
First of all, Python 3's floating-point arithmetic cannot represent irrational numbers like pi exactly, so e^(pi*I) will not come out as exactly -1, as per the answer above.
Secondly, Python represents any complex number as a Cartesian pair (real part + imaginary part).
The fix was to extract the real part of the number:
print(cmath.exp(cmath.pi * cmath.sqrt(-1)).real)
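(Not part of the original answer, but if you only want to discard an imaginary part that is pure rounding noise, it may be safer to check that it really is negligible first. A sketch:)
import cmath
z = cmath.exp(cmath.pi * 1j)    # 1j is the imaginary unit, same as cmath.sqrt(-1)
if abs(z.imag) < 1e-12:         # treat a tiny imaginary part as rounding error
    print(z.real)               # -1.0
else:
    print(z)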

Are Python's decimal module calculations actually done with integers?

I'm using Python 3's decimal module. Is the underlying arithmetic done using the processor's floating-point types, or does it use integers? The notion that the results are 'exact' and of arbitrary precision suggests to me that integer maths is used below the surface.
Indeed it is integer math, not float math. Roughly speaking, each Decimal value is stored as a sign, an integer coefficient (the sequence of decimal digits), and an exponent, rather than as a binary fraction. The arithmetic is carried out on the integer coefficient, so results are not silently rounded in binary and stay precise even if you sum a very large value with a very small fraction (up to the context precision, which you can raise as needed).
This comes at a price: the number of operations is significantly larger, and such precision is not always necessary. That is why most calculations are done using float arithmetic, which may lose precision when there are many arithmetic operations on floats or when there are significant differences between the values (e.g. a ratio of 10^10 or more). There is a separate field of computer science, numerical analysis (also called numerical methods), that studies clever ways to get the most of the speed of float calculations while maintaining the highest precision possible.
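You can see the integer-based representation directly; a small illustration (not from the original answer):
from decimal import Decimal, getcontext
d = Decimal('123.45')
print(d.as_tuple())             # DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5), exponent=-2)
getcontext().prec = 50          # the working precision is user-settable
print(Decimal(1) / Decimal(3))  # 1/3 to 50 significant digits, computed on integer coefficients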

python division result not true and different results

I am trying to solve fractional knapsack problem.
I have to find items with maximum calories per weight. I will fill my bag up to defined/limited weight with maximum calories.
Though the algorithm is correct, I can't get the correct result because of Python's division weirdness.
When I try to find the items with max calories per weight (Python 3):
print ((calories_list[i]/weight_list[i])*10)
# calories_list[i] is 500 and weight_list[i] is 30 (they're integers)
166.66666666666669
on the other hand, I opened terminal and typed python3
>>> 500/30
16.666666666666668
# when multiplied by 10, it should be 166.66666666666668,
# not 166.66666666666669
as you see, it gives different results
Most of all, the important thing is that the real answer is
500/30 = 16.6666666667...
I got stuck here two days ago, please help me.
Thank you.
As explained in the Python FAQ:
The float type in CPython uses a C double for storage. A float object’s value is stored in binary floating-point with a fixed precision (typically 53 bits) and Python uses C operations, which in turn rely on the hardware implementation in the processor, to perform floating-point operations. This means that as far as floating-point operations are concerned, Python behaves like many popular languages including C and Java.
You could use the decimal module as an alternative:
>>> from decimal import Decimal
>>> Decimal(500)/Decimal(30)
Decimal('16.66666666666666666666666667')
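The two printouts in the question differ because 500/30 is itself already a rounded binary approximation, and multiplying that float by 10 adds another rounding step, so the last printed digit can shift. If you want to keep the ratio exact until the very end, the fractions module is another option (a sketch, not part of the quoted FAQ):
from fractions import Fraction
ratio = Fraction(500, 30)   # stored exactly as the rational number 50/3
per_ten = ratio * 10        # still exact: 500/3, no intermediate rounding
print(per_ten)              # 500/3
print(float(per_ten))       # convert to float only once, at the very end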

Suppress scientific notation without knowing length of number?

In Python, how could I go about suppressing scientific notation with complete precision WITHOUT knowing the length of the number?
I need Python to be able to dynamically return the number in normal form with exact precision, no matter how large the number is, and to do it without any trailing zeros. The numbers will always be integers, but they will be getting very large, and I need them to be completely accurate. Even a single digit being rounded or changed would mess up my program.
Any ideas?
Use the decimal class.
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
From https://docs.python.org/library/decimal.html
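For example (a sketch; if your values are true Python ints, plain print() already shows every digit, so this mainly matters once a value has become a Decimal in exponent form):
from decimal import Decimal
n = Decimal('1.2345E+30')               # a value stored in scientific notation
print(format(n, 'f'))                   # fixed-point form: every digit, no exponent
print(format(Decimal(2 ** 100), 'f'))   # an exact 31-digit integer, printed in full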
