Approximating Pi within error function - python-3.x

So the question I'm having problems with is:
The mathematical
constant π (pi) is an irrational number with value approximately
3.1415926... The precise value of π is equal to the following infinite sum: π = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + ... We can get a good
approximation of π by computing the sum of the first few terms. Write
a function approxPi(error) that takes as a parameter a floating point value
error and approximates the constant π within error by computing the
above sum, term by term, until the absolute value of the difference
between the current sum and the previous sum (with one fewer terms) is
no greater than error. Once the function finds that the difference is
less than error, it should return the new sum.
Example of correct input: approxPi(0.01) should return 3.1465677471829556
My code:
def approxPi(error):
    i = 5  # counter for list appending
    lst = [4, (-4/3)]  # list values will be appended to
    currentSum = sum(lst)  # calcs sum of the list
    previousSum = sum(lst[:len(lst)-1])  # calcs sum of list with one less term
    while abs(currentSum - previousSum) > error:  # while loop for error value
        if len(lst) % 2 == 0:  # alternates appending positive or negative value
            lst.append((4/i))  # appends to list
        else:
            lst.append((-4/i))
        i += 2  # increments counter
    return currentSum  # returns sum of the whole list once error value has been reached
When I run the code I get stuck in the loop until memory usage reaches 100% and my system locks up. Any tips on what I am doing wrong? This is a homework problem so please don't just post an answer.
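For readers tracing the bug: currentSum and previousSum are computed once, before the loop, so the loop condition never changes and the list grows until memory runs out. Below is a minimal sketch of a repaired loop, with illustrative variable names rather than the assignment's exact solution; the key point is that both sums must be recomputed inside the loop:

import math  # only used for the sanity check at the end

def approx_pi(error):
    """Sketch: update both running sums on every iteration."""
    terms = [4.0, -4.0 / 3]
    i = 5
    current = sum(terms)
    previous = terms[0]
    while abs(current - previous) > error:
        sign = 1 if len(terms) % 2 == 0 else -1  # same alternation as the question's code
        terms.append(sign * 4.0 / i)
        i += 2
        previous = current      # the old sum becomes the previous one
        current = sum(terms)    # recompute with the new term included
    return current

print(approx_pi(0.01))          # ~3.1465677471829556, the expected value
print(abs(approx_pi(0.01) - math.pi) < 0.01)  # True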

Related

why is np.exp(x) not equal to np.exp(1)**x

Why is np.exp(x) not equal to np.exp(1)**x?
For example:
>>> np.exp(400)
5.221469689764144e+173
>>> np.exp(1)**400
5.221469689764033e+173
>>> np.exp(400) - np.exp(1)**400
1.1093513018771065e+160
This difference comes from how numpy computes the exponential. Recall how Euler's number is defined in math: e = (1 + 1/n)**n as n goes to infinity. numpy has to stop at some finite order; the numpy exp documentation here is not very clear about how Euler's number is calculated. Because that order is not infinite, you get this small difference between the two calculations.
Indeed, the value np.exp(400) is calculated using something like (1 + 400/n)**n:
>>> n = 1000000000000
>>> (1 + 400/n)**n
5.221642085428121e+173
>>> numpy.exp(400)
5.221469689764144e+173
Here n = 1000000000000, which is still far from infinity, and that produces a relative difference on the order of 1e-5.
Indeed, there is no exact value of Euler's number. Like Pi, you can only have an approximate value.
It looks like a rounding issue. In the first case numpy internally uses a very precise value of e, while in the second you start from a less precise value of e, and when it is multiplied by itself 400 times the precision issues become more apparent.
The actual result when using the Windows calculator is 5.2214696897641439505887630066496e+173, so you can see your first outcome is fine, while the second is not.
5.2214696897641439505887630066496e+173 // calculator
5.221469689764144e+173 // exp(400)
5.221469689764033e+173 // exp(1)**400
Starting from your result, it looks like it's using a value with 15 digits of precision.
2.7182818284590452353602874713527 // e
2.7182818284590450909589085441968 // 400th root of the 2nd result
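A short sketch of that amplification effect, assuming standard IEEE-754 doubles: a relative error δ in the stored value of e grows to roughly 400·δ after raising to the 400th power, since (e·(1+δ))**400 ≈ e**400 · (1 + 400·δ):

import numpy as np

a = np.exp(400)       # computed directly, accurate to about 1 ulp
b = np.exp(1) ** 400  # the rounding error of e is amplified ~400-fold

rel_diff = (a - b) / a
print(rel_diff)                    # observed relative difference, ~2e-14
print(400 * np.finfo(float).eps)   # predicted order of magnitude, ~9e-14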

How to understand this efficient implementation of PageRank calculation

For reference, I'm using this page. I understand the original pagerank equation
but I'm failing to understand why the sparse-matrix implementation is correct. Below is their code reproduced:
def compute_PageRank(G, beta=0.85, epsilon=10**-4):
    '''
    Efficient computation of the PageRank values using a sparse adjacency
    matrix and the iterative power method.

    Parameters
    ----------
    G : boolean adjacency matrix. np.bool8
        If the element j,i is True, means that there is a link from i to j.
    beta: 1-teleportation probability.
    epsilon: stop condition. Minimum allowed amount of change in the PageRanks
        between iterations.

    Returns
    -------
    output : tuple
        PageRank array normalized top one.
        Number of iterations.
    '''
    #Test adjacency matrix is OK
    n, _ = G.shape
    assert(G.shape == (n, n))
    #Constants Speed-UP
    deg_out_beta = G.sum(axis=0).T / beta  #vector
    #Initialize
    ranks = np.ones((n, 1)) / n  #vector
    time = 0
    flag = True
    while flag:
        time += 1
        with np.errstate(divide='ignore'):  # Ignore division by 0 on ranks/deg_out_beta
            new_ranks = G.dot((ranks / deg_out_beta))  #vector
        #Leaked PageRank
        new_ranks += (1 - new_ranks.sum()) / n
        #Stop condition
        if np.linalg.norm(ranks - new_ranks, ord=1) <= epsilon:
            flag = False
        ranks = new_ranks
    return(ranks, time)
To start, I'm trying to trace the code and understand how it relates to the PageRank equation. For the line under the with statement (new_ranks = G.dot((ranks/deg_out_beta))), this looks like the first part of the equation (the beta times M) BUT it seems to be ignoring all divide by zeros. I'm confused by this because the PageRank algorithm requires us to replace zero columns with ones (except along the diagonal). I'm not sure how this is accounted for here.
The next line new_ranks += (1-new_ranks.sum())/n is what I presume to be the second part of the equation. I can understand what this does, but I can't see how this translates to the original equation. I would've thought we would do something like new_ranks += (1-beta)*ranks.sum()/n.
This works because of the row sums: e.T * M * r = e.T * r, by the column-sum construction of M (every column of M sums to one). The convex combination with coefficient beta then has the effect that the sum over the new r vector is again 1. Now, what the algorithm does is take the first matrix-vector product b = beta*M*r and then find a constant c so that r_new = b + c*e has row sum one. In theory this should be the same as what the formula says, but in floating-point practice this approach corrects for and prevents the accumulation of floating-point error in the sum of r.
Computing it this way also makes it possible to ignore zero columns, as the compensation for them is computed automatically.
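Here is a minimal numeric sketch of that renormalization step on a hypothetical 3-node graph with one dangling node; the names M, r, b, and c mirror the answer's notation:

import numpy as np

beta = 0.85
# Column-stochastic link matrix; the last column (a dangling node) is all zeros.
M = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
r = np.ones(3) / 3          # start from the uniform distribution

b = beta * (M @ r)          # first matrix-vector product: beta * M * r
c = (1.0 - b.sum()) / 3     # leaked mass, spread uniformly over the 3 nodes
r_new = b + c               # r_new = b + c*e, with e the all-ones vector

print(r_new, r_new.sum())   # sums to 1 (up to rounding); the dangling column
                            # and the teleportation term are both absorbed into c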

Python infinite recursion with formula

### Run the code below and understand the error messages
### Fix the code to sum integers from 1 up to k
###
def f(k):
    return f(k-1) + k

print(f(10))
I am confused about how to fix this code while still using recursion; I keep getting the error messages
[Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded
Is there a simple way to fix this without using any while loops or creating more than 1 variable?
A recursion should have a termination condition, i.e. the base case. When your variable attains that value there are no more recursive function calls.
e.g. in your code,
def f(k):
    if k == 1:
        return k
    return f(k-1) + k

print(f(10))
Here we define the base case as 1, since you want to take the sum of values from n down to 1. You can put any other number, positive or negative, there if you want the sum to extend down to that number; e.g. if you want to take the sum from n to -3, then the base case would be k == -3.
Python doesn't have optimized tail recursion. Your f function calls itself k times; if k is a very big number, Python throws a RecursionError. You can see what the recursion limit is via sys.getrecursionlimit and change it via sys.setrecursionlimit, but changing the limit is not a good idea. Instead, change your code's logic or pattern.
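For reference, a quick sketch of inspecting that limit (the default of 1000 is consistent with the "995 more times" message above):

import sys

print(sys.getrecursionlimit())  # typically 1000 by default
# sys.setrecursionlimit(5000)   # possible, but it only delays the error when
#                               # the recursion has no base case at all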
Your recursion never terminates. You could try:
def f(k):
    return k if k < 2 else f(k-1) + k

print(f(10))
You are working out the sum of all numbers from 1 to 10, which in essence returns the 10th triangular number, e.g. the number of black circles in each triangle.
Using the formula on OEIS gives you this as your code.
def f(k):
    return int(k*(k+1)/2)

print(f(10))
How do we know int() doesn't break this? k and k + 1 are adjacent integers, so one of them must have a factor of two; this formula will therefore always return an integer if given an integer.
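As a minor variant not in the original answer, Python's floor division avoids the float round-trip entirely, which matters for very large k where float arithmetic would lose precision:

def f(k):
    # // keeps everything in exact integer arithmetic
    return k * (k + 1) // 2

print(f(10))       # 55
print(f(10**20))   # exact, whereas int(k*(k+1)/2) would lose precision here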

Keep Getting ZeroDivisionError Whenever Using Modulo

So I am working on a problem which needs me to get the factors of a certain number, so as always I am using the modulo operator % to see if a number is divisible by a certain other number, i.e. the remainder equals zero. But whenever I try to do this I keep getting a ZeroDivisionError. I tried adding a block of code like this so Python does not start counting from zero and instead starts counting from one: for potenial in range(number + 1): But this does not seem to work. Below is the rest of my code; any help will be appreciated.
def Factors(number):
    factors = []
    for potenial in range(number + 1):
        if number % potenial == 0:
            factors.append(potenial)
        return factors
In your for loop you are iterating from 0 (range() assumes the starting number to be 0 if only one argument is given) up to number. You get a ZeroDivisionError because you try to calculate number modulo 0 (number % 0) at the start of the loop: when calculating the modulo, Python tries to divide number by 0, causing the ZeroDivisionError. Here is the corrected code (with the indentation fixed):
def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors
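For example, calling the corrected function:

print(get_factors(15))  # [1, 3, 5, 15]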
However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then compute "factor pairs": e.g. if 3 is a factor of 15, then 15/3 = 5 is also a factor of 15.
I encourage you to try to implement this more efficient algorithm.
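One possible sketch of that square-root approach (math.isqrt is the integer square root, available in Python 3.8+; the function name is illustrative):

import math

def get_factors_fast(number):
    """Collect factor pairs, iterating only up to sqrt(number)."""
    factors = set()
    for d in range(1, math.isqrt(number) + 1):
        if number % d == 0:
            factors.add(d)
            factors.add(number // d)  # the paired factor, e.g. 3 -> 15 // 3 == 5
    return sorted(factors)

print(get_factors_fast(15))  # [1, 3, 5, 15]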
Stylistic note: According to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.

Precision error in Python

So I am ultimately trying to use Horner's rule (http://mathworld.wolfram.com/HornersRule.html) to evaluate polynomials, and I am creating a function to evaluate the polynomial. My problem is with how I wrote the function; it works for easy polynomials like 3x^2 + 2x^1 + 5 and so on. But once you get to evaluating a polynomial with a floating-point number (something crazy like 1.8953343e-20, etc.), it loses its precision.
Because I am using this function to evaluate roots of a polynomial using Newton's Method (http://tutorial.math.lamar.edu/Classes/CalcI/NewtonsMethod.aspx), I need this to be precise, so it doesn't lose its value through a small rounding error and whatnot.
I have already worked through this with two other people, and the problem lies within the evaluatePoly() function, not my other functions that evaluate Newton's Method. Also, I originally evaluated the polynomial normally (multiplying x to the degree, multiplying by the constant, etc.) and it produced the correct answer. However, the assignment requires one to use Horner's rule for easier calculation.
This is my following code:
def evaluatePoly(poly, x_):
    """Evaluates the polynomial at x = x_ and returns the result as a floating
    point number using Horner's rule"""
    #http://mathworld.wolfram.com/HornersRule.html
    total = 0.
    polyRev = poly[::-1]
    for nn in polyRev:
        total = total * x_
        total = total + nn
    return total
Note: I have already tried setting nn, x_, (total * x_) as floats using float().
This is the output I am receiving:
Polynomial: 5040x^0 + 1602x^1 + 1127x^2 - 214x^3 - 75x^4 + 4x^5 + 1x^6
Derivative: 1602x^0 + 2254x^1 - 642x^2 - 300x^3 + 20x^4 + 6x^5
(6.9027369297630505, False)
Starting at 100.00, no root found, last estimate was 6.90, giving value f(6.90) = -6.366463e-12
(6.9027369297630505, False)
Starting at 10.00, no root found, last estimate was 6.90, giving value f(6.90) = -6.366463e-12
(-2.6575456505038764, False)
Starting at 0.00, no root found, last estimate was -2.66, giving value f(-2.66) = 8.839758e+03
(-8.106973924480215, False)
Starting at -10.00, no root found, last estimate was -8.11, giving value f(-8.11) = -1.364242e-11
(-8.106973924480215, False)
Starting at -100.00, no root found, last estimate was -8.11, giving value f(-8.11) = -1.364242e-11
This is the output I need:
Polynomial: 5040x^0 + 1602x^1 + 1127x^2 - 214x^3 - 75x^4 + 4x^5 + 1x^6
Derivative: 1602x^0 + 2254x^1 - 642x^2 - 300x^3 + 20x^4 + 6x^5
(6.9027369297630505, False)
Starting at 100.00, no root found, last estimate was 6.90, giving value f(6.90) = -2.91038e-11
(6.9027369297630505, False)
Starting at 10.00, no root found, last estimate was 6.90, giving value f(6.90) = -2.91038e-11
(-2.657545650503874, False)
Starting at 0.00, no root found, last estimate was -2.66, giving value f(-2.66) = 8.83976e+03
(-8.106973924480215, True)
Starting at -10.00, root found at x = -8.11, giving value f(-8.11) = 0.000000e+00
(-8.106973924480215, True)
Starting at -100.00, root found at x = -8.11, giving value f(-8.11) = 0.000000e+00
Note: Please ignore the tuples in the output above; they are the result of my Newton's Method, where the first element is the root estimate and the second indicates whether it is a root or not.
Try this:
def evaluatePoly(poly, x_):
    '''Evaluate the polynomial poly at x = x_ by direct term-by-term summation
    and return the result as a floating-point number'''
    total = 0
    degree = 0
    for coef in poly:
        total += (x_**degree) * coef
        degree += 1
    return total
Evaluation of a polynomial close to a root requires, by construction of the problem, that large terms cancel to yield a small result. The size of the intermediate terms can be bounded in a worst-case sense by evaluating the polynomial that has all its coefficients set to the absolute values of the coefficients of the original polynomial. For the first root, that gives:
In [6]: x0=6.9027369297630505
In [7]: evaluatePoly(poly,x0)
Out[7]: -6.366462912410498e-12
In [8]: evaluatePoly([abs(c) for c in poly],abs(x0))
Out[8]: 481315.82997756737
This value is a first estimate of the magnification factor for the floating-point errors. Multiplied by the machine epsilon 2.22e-16, it gives a bound of about 1.07e-10 on the accumulated floating-point error of any evaluation method, and indeed both evaluation methods give values comfortably below this bound, indicating that the root was found to within the capabilities of the floating-point format.
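For readers who want to reproduce the bound, here is a self-contained sketch; the coefficient list is read off the polynomial printed in the question (ascending order of degree), and the Horner evaluator mirrors the question's function:

import sys

def evaluate_poly(poly, x):
    """Horner's rule; poly holds coefficients in ascending order of degree."""
    total = 0.0
    for coef in reversed(poly):
        total = total * x + coef
    return total

# 5040 + 1602x + 1127x^2 - 214x^3 - 75x^4 + 4x^5 + x^6
poly = [5040, 1602, 1127, -214, -75, 4, 1]
x0 = 6.9027369297630505

value = evaluate_poly(poly, x0)
abs_value = evaluate_poly([abs(c) for c in poly], abs(x0))
bound = abs_value * sys.float_info.epsilon  # worst-case rounding-error bound

print(value)  # ~ -6.37e-12
print(bound)  # ~ 1.07e-10, so the residual is within the attainable accuracy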
Looking at the graph of the evaluation around x0, one sees that the basic assumption of a smooth curve fails at that magnification; the x-axis is crossed in a jump, so no better value for x0 can be found.
