Why does this n choose r python code not work? - python-3.x

These two variations of n-choose-r code give different answers, although both follow the correct definition.
I saw that this code works:
import math
def nCr(n, r):
    f = math.factorial
    return f(n) // f(r) // f(n-r)
But mine did not:
import math
def nCr(n, r):
    f = math.factorial
    return int(f(n) / (f(r) * f(n-r)))
The test case nCr(80, 20) shows the difference in result. Please advise why they differ in Python 3, thank you!
There is no error message. The right answer should be 3535316142212174320, but mine gives 3535316142212174336.

That's because int(a / b) isn't the same as a // b.
int(a / b) evaluates a / b first, which is floating-point division. Floating-point numbers are prone to inaccuracies and roundoff errors, as in .1 + .2 == 0.30000000000000004. So at some point your code divides really big numbers, and that division rounds off, since floating-point numbers are of fixed size and thus cannot be infinitely precise.
a // b is integer division, which is a different thing. Python's integers can be arbitrarily huge, and their division doesn't cause roundoff errors, so you get the correct result.
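You can see the two behaviours side by side with the question's own test case (outputs taken from the question):
>>> import math
>>> f = math.factorial
>>> f(80) // f(20) // f(80-20)
3535316142212174320
>>> int(f(80) / (f(20) * f(80-20)))
3535316142212174336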
Speaking of floating-point numbers being of fixed size, take a look at this:
>>> import math
>>> f = math.factorial
>>> f(20) * f(80-20)
20244146256600469630315959326642192021057078172611285900283370710785170642770591744000000000000000000
>>> f(80) / _
3.5353161422121743e+18
The number 3.5353161422121743e+18 carries exactly the information shown here: nothing is stored about the digits after the last 3 in 53...43, because there is nowhere to store them. But int(3.5353161422121743e+18) must put something there! It doesn't have enough information, so it puts whatever digits make float(int(3.5353161422121743e+18)) == 3.5353161422121743e+18 hold.
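Converting that float back to an integer reproduces the asker's wrong answer exactly:
>>> int(3.5353161422121743e+18)
3535316142212174336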

Related

Why am I getting a complex number for (-24)**0.8? [duplicate]

In math, you are allowed to take cube roots of negative numbers, because a negative number multiplied by two other negative numbers results in a negative number. Raising something to the fractional power 1/n is the same as taking the nth root of it. Therefore, the cube root of -27, or (-27)**(1.0/3.0), comes out to -3.
But in Python 2, when I type in (-27)**(1.0/3.0), it gives me an error:
Traceback (most recent call last):
File "python", line 1, in <module>
ValueError: negative number cannot be raised to a fractional power
Python 3 doesn't produce an exception, but it gives a complex number that doesn't look anything like -3:
>>> (-27)**(1.0/3.0)
(1.5000000000000004+2.598076211353316j)
Why don't I get the result that makes mathematical sense? And is there a workaround for this?
-27 has a real cube root (and two non-real cube roots), but (-27)**(1.0/3.0) does not mean "take the real cube root of -27".
First, 1.0/3.0 doesn't evaluate to exactly one third, due to the limits of floating-point representation. It evaluates to exactly
0.333333333333333314829616256247390992939472198486328125
though by default, Python won't print the exact value.
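You can inspect the exact stored value with the decimal module, which converts the binary float without rounding:
>>> from decimal import Decimal
>>> Decimal(1.0/3.0)
Decimal('0.333333333333333314829616256247390992939472198486328125')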
Second, ** is not a root-finding operation, whether real roots or principal roots or some other choice. It is the exponentiation operator. General exponentiation of negative numbers to arbitrary real powers is messy, and the usual definitions don't match with real nth roots; for example, the usual definition of (-27)^(1/3) would give you the principal root, a complex number, not -3.
Python 2 decides that it's probably better to raise an error for stuff like this unless you make your intentions explicit, for example by exponentiating the absolute value and then applying the sign:
def real_nth_root(x, n):
    # approximate
    # if n is even, x must be non-negative, and we'll pick the non-negative root
    if n % 2 == 0 and x < 0:
        raise ValueError("No real root.")
    return (abs(x) ** (1.0/n)) * (-1 if x < 0 else 1)
or by using complex exp and log to take the principal root:
import cmath

def principal_nth_root(x, n):
    # still approximate
    return cmath.exp(cmath.log(x)/n)
or by just casting to complex for complex exponentiation (equivalent to the exp-log thing up to rounding error):
>>> complex(-27)**(1.0/3.0)
(1.5000000000000004+2.598076211353316j)
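For what it's worth, a quick check of the two helper functions above (the exact last digits may vary by platform, since both are floating-point approximations):
print(real_nth_root(-27, 3))       # approximately -3.0
print(principal_nth_root(-27, 3))  # approximately (1.5+2.598076211353316j)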
Python 3 uses complex exponentiation when a negative number is raised to a non-integer power, which gives the principal nth root when the exponent is 1.0/n:
>>> (-27)**(1/3) # Python 3
(1.5000000000000004+2.598076211353316j)
The type coercion rules documented for the built-in pow apply here, since you're using a float for the exponent.
Just make sure that either the base or the exponent is a complex instance and it works:
>>> (-27+0j)**(1.0/3.0)
(1.5000000000000004+2.598076211353316j)
>>> (-27)**(complex(1.0/3.0))
(1.5000000000000004+2.598076211353316j)
To find all three roots, consider numpy:
>>> import numpy as np
>>> np.roots([1, 0, 0, 27])
array([-3.0+0.j , 1.5+2.59807621j, 1.5-2.59807621j])
The list [1, 0, 0, 27] here gives the coefficients of the polynomial 1x³ + 0x² + 0x + 27.
I do not think Python, or your version of it, supports this function. I pasted the same expression into my Python interpreter (IDLE) and it evaluated with no errors. I am using Python 3.2.

why is np.exp(x) not equal to np.exp(1)**x

Why is np.exp(x) not equal to np.exp(1)**x?
For example:
>>> np.exp(400)
5.221469689764144e+173
>>> np.exp(1)**400
5.221469689764033e+173
>>> np.exp(400) - np.exp(1)**400
1.1093513018771065e+160
This difference is raised by an optimisation in numpy.
Indeed, you have to understand how Euler's number is defined in math:
e = (1 + 1/n)**n as n → ∞.
I think numpy stops at a certain order: the numpy exp documentation (here) is not very clear about how Euler's number is calculated.
Because this order is not infinity, you get a small difference between the two calculations.
Indeed, the value np.exp(400) would then be calculated using something like (1 + 400/n)**n:
>>> n = 1000000000000
>>> (1 + 400/n)**n
5.221642085428121e+173
>>> numpy.exp(400)
5.221469689764144e+173
Here n = 1000000000000, which is still quite small and already raises a difference around 1e-5.
Indeed there is no exact value of Euler's number; like pi, you can only have an approximate value.
It looks like a rounding issue. In the first case it's internally using a very precise value of e, while in the second you start from a less precise value of e, and after multiplying it by itself 400 times the precision issues become more apparent.
The actual result when using the Windows calculator is 5.2214696897641439505887630066496e+173, so you can see your first outcome is fine, while the second is not.
5.2214696897641439505887630066496e+173 // calculator
5.221469689764144e+173 // exp(400)
5.221469689764033e+173 // exp(1)**400
Starting from your result, it looks like it's using a value of e with about 15 digits of precision.
2.7182818284590452353602874713527 // e
2.7182818284590450909589085441968 // 400th root of the 2nd result
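A back-of-the-envelope check of that explanation (my own sketch, not from either answer): a relative error eps in the base grows to roughly 400*eps after raising to the 400th power, since (e*(1+eps))**400 ≈ e**400 * (1 + 400*eps). A float stores e with a relative error of about 1e-16, so an error around 400 * 1e-16 ≈ 4e-14 in the final result is expected, which matches the order observed:
import numpy as np

a = np.exp(400)        # exp evaluated directly on 400
b = np.exp(1) ** 400   # a slightly-off float e, amplified 400 times
print(abs(a - b) / a)  # relative difference on the order of 1e-14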

Python floating point precision sum

I have the following array in python
n = [565387674.45, 321772103.48, 321772103.48, 214514735.66, 214514735.65, 357524559.41]
if I sum all these elements, I get this:
sum(n)
1995485912.1300004
But, this sum should be:
1995485912.13
So I know about floating-point "error". I already used the isclose() function from numpy to check the result, but how large can this error get? Is there any way to reduce it?
The main issue here is that the error propagates to other operations, for example, the below assertion must be true:
assert (sum(n) - 1995485911) ** 100 - (1995485912.13 - 1995485911) ** 100 == 0.
This is a problem with floating-point numbers. One solution is to represent them in string form and use the decimal module:
n = ['565387674.45', '321772103.48', '321772103.48', '214514735.66', '214514735.65', '357524559.41']
from decimal import Decimal
s = sum(Decimal(i) for i in n)
print(s)
Prints:
1995485912.13
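If you don't need exact decimal arithmetic, math.fsum is another option: it tracks the intermediate rounding errors and returns the correctly rounded float sum, which for this data should print 1995485912.13:
import math

n = [565387674.45, 321772103.48, 321772103.48, 214514735.66, 214514735.65, 357524559.41]
print(math.fsum(n))  # correctly rounded sum of the float values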
You could use the round(num, n) function, which rounds the number to n decimal places. So in your example you would use round(sum(n), 2).
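Applied to the question's data:
>>> round(sum(n), 2)
1995485912.13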

math.sqrt function python gives same result for two different values [duplicate]

Why does the math module return the wrong result?
First test
A = 12345678917
print 'A =',A
B = sqrt(A**2)
print 'B =',int(B)
Result
A = 12345678917
B = 12345678917
Here, the result is correct.
Second test
A = 123456758365483459347856
print 'A =',A
B = sqrt(A**2)
print 'B =',int(B)
Result
A = 123456758365483459347856
B = 123456758365483467538432
Here the result is incorrect.
Why is that the case?
Because math.sqrt(..) first converts the number to a float, and floats have a limited mantissa: they can only represent part of the number correctly. So float(A**2) is not equal to A**2. Next it calculates math.sqrt of that, which is also only approximately correct.
Most functions that work with floats will never exactly match their integer counterparts. Floating-point calculations are almost inherently approximate.
If one calculates A**2 one gets:
>>> 12345678917**2
152415787921658292889L
Now if one converts it to a float(..), one gets:
>>> float(12345678917**2)
1.5241578792165828e+20
But if you now ask whether the two are equal:
>>> float(12345678917**2) == 12345678917**2
False
So information has been lost while converting it to a float.
You can read more about how floats work and why these are approximative in the Wikipedia article about IEEE-754, the formal definition on how floating points work.
The documentation for the math module states "It provides access to the mathematical functions defined by the C standard." It also states "Except when explicitly noted otherwise, all return values are floats."
Those together mean that the parameter to the square root function is a float value. In most systems that means a floating point value that fits into 8 bytes, which is called "double" in the C language. Your code converts your integer value into such a value before calculating the square root, then returns such a value.
However, the 8-byte floating point value can store at most 15 to 17 significant decimal digits. That is what you are getting in your results.
If you want better precision in your square roots, use a function that is guaranteed to give full precision for an integer argument. Just do a web search and you will find several. Those usually do a variation of the Newton-Raphson method to iterate and eventually end at the correct answer. Be aware that this is significantly slower than the math module's sqrt function.
Here is a routine that I modified from the internet. I can't cite the source right now. This version also works for non-integer arguments but just returns the integer part of the square root.
def isqrt(x):
    """Return the integer part of the square root of x, even for very
    large values."""
    if x < 0:
        raise ValueError('square root not defined for negative numbers')
    n = int(x)
    if n == 0:
        return 0
    a, b = divmod(n.bit_length(), 2)
    x = (1 << (a+b)) - 1
    while True:
        y = (x + n//x) // 2
        if y >= x:
            return x
        x = y
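Applied to the question's failing case, this returns the exact root (and in Python 3.8+ the standard library ships an equivalent math.isqrt):
>>> isqrt(123456758365483459347856**2)
123456758365483459347856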
If you want to calculate sqrt of really large numbers and you need exact results, you can use sympy:
import sympy
num = sympy.Integer(123456758365483459347856)
print(int(num) == int(sympy.sqrt(num**2)))
The way floating-point numbers are stored in memory makes calculations with them prone to slight errors that can nevertheless be significant when exact results are needed. As mentioned in one of the comments, the decimal library can help you here:
>>> A = Decimal(123456758365483459347856)
>>> A
Decimal('123456758365483459347856')
>>> B = A.sqrt()**2
>>> B
Decimal('123456758365483459347856.0000')
>>> A == B
True
>>> int(B)
123456758365483459347856
I use version 3.6, which has no hardcoded limit on the size of integers. I don't know if, in 2.7, casting B as an int would cause overflow, but decimal is incredibly useful regardless.

Distinguishing large integers from near integers in python

I want to avoid my code mistaking a near integer for an integer. For example, 58106601358565889 has as its square root 241,053,109.00000001659385359763188, but when I used the following boolean test, 58106601358565889 fooled me into thinking it was a perfect square:
a = 58106601358565889
b = math.sqrt(a)
print(b == int(b))
The precision isn't necessarily the problem, because if I re-check, I get the proper (False) conclusion:
print(a == b**2)
What would be a better way to test for a true versus a near integer? The math.sqrt is buried in another definition in my code, and I would like to avoid having to insert a check of a squared square root, if possible. I apologize if this is not a good question; I'm new to python.
import numpy as np
import math
from decimal import *
a = 58106601358565889
b = np.sqrt(a)
c = math.sqrt(a)
d = Decimal(58106601358565889).sqrt()
print(d)
print(int(d))
print(c)
print(int(c))
print(b)
print(int(b))
Output:
241053109.0000000165938535976
241053109
241053109.0
241053109
241053109.0
241053109
I would say use decimal.
Expected code:
from decimal import *
d = Decimal(58106601358565889).sqrt()
print(d == int(d))
Output:
False
This isn't a matter of distinguishing integers from non-integers, because b really is an integer*. The precision of a Python float isn't enough to represent the square root of a to enough digits to get any of its fractional component. The second check you did:
print(a == b**2)
only prints False because, while b is an integer, b**2 still isn't equal to a.
If you want to test whether very large integers are exact squares, consider implementing a square root algorithm yourself.
*as in 0 fractional part, not as in isinstance(b, int).
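One minimal sketch of the suggested exact check, assuming Python 3.8+ so that math.isqrt (an exact integer square root) is available:
import math

def is_perfect_square(n):
    # Exact integer arithmetic throughout: no float ever touches n.
    if n < 0:
        return False
    r = math.isqrt(n)
    return r * r == n

print(is_perfect_square(58106601358565889))  # False: it is 241053109**2 + 8
print(is_perfect_square(241053109**2))       # True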
It's not the precision of the int that is the problem - it's the limited precision of floats
>>> import math
>>> math.sqrt(58106601358565889)
241053109.0
>>> math.sqrt(58106601358565889) - 241053109
0.0
I think the double check would be the obvious solution
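Spelled out against the question's numbers, the double check catches it:
>>> a = 58106601358565889
>>> b = math.sqrt(a)
>>> int(b) ** 2 == a
False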
You could also look at the gmpy2 library. It has a function for calculating the integer square root and also the integer square root plus remainder. There are no precision constraints.
>>> import gmpy2
>>> gmpy2.isqrt(58106601358565889)
mpz(241053109)
>>> gmpy2.isqrt_rem(58106601358565889)
(mpz(241053109), mpz(8))
>>>
Disclaimer: I maintain gmpy2.
