Numerical evaluation of the exponential integral function using trapezoidal rule - python-3.x

I want to evaluate the exponential integral function numerically using the trapezoidal rule. This function is defined (for x > 0, as a Cauchy principal value) as
Ei(x) = -PV ∫_{-x}^{∞} e^{-t}/t dt.
The reference is available here.
This function is already available in some libraries, for example scipy.special. For some reasons I do not want to use these libraries. Instead, I need to evaluate this function directly by the trapezoidal rule. I wrote the trapezoidal rule and checked it to make sure it works fine.
Then I used it for the numerical evaluation of the Ei function. Unfortunately the result is not correct; for example, Ei(1) is equal to 1.89511, but the code I have written returns infinity, which is wrong. Here is the code:
import numpy as np

# Integration using the trapezoidal rule
def trapezoidal(f, a, b, n):
    h = float(b - a) / n
    s = 0.0
    s += f(a)/2.0
    for i in range(1, n):
        s += f(a + i*h)
    s += f(b)/2.0
    return s * h

# Define integrand
def Ei(t):
    return - np.exp(-t) / t

# Evaluate Ei(1)
A = trapezoidal(Ei, -1, np.inf, 20)
print(A)

# Check Ei(1)
from scipy.special import expi
print(expi(1))
Do you know how I can modify the above code so I can get correct results?
Thanks!

1) You cannot define a range ending with +inf and divide it into 20 parts.
Instead, you can choose an arbitrary finite right limit and increase it until the difference abs(integral(limit(i+1)) - integral(limit(i))) becomes negligible.
2) Consider the function evaluation at the zero point (if it occurs): it causes a division by zero.
If an argument is too close to zero, try to shift it a bit.
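For illustration, here is one way to combine both suggestions (a minimal sketch, not the asker's code; folding the principal-value part around t = 0 into 2*sinh(t)/t and the parameter names n, tiny and tol are my own choices):

import numpy as np

def trapezoidal(f, a, b, n):
    h = float(b - a) / n
    s = 0.5*(f(a) + f(b))
    for i in range(1, n):
        s += f(a + i*h)
    return s*h

def Ei(x, n=2000, tiny=1e-12, tol=1e-8):
    # Fold the principal value around the pole at t = 0:
    #   PV int_{-x}^{x} -exp(-t)/t dt = int_0^x 2*sinh(t)/t dt,
    # whose integrand is smooth; 'tiny' just keeps the lower limit off zero.
    sym = trapezoidal(lambda t: 2.0*np.sinh(t)/t, tiny, x, n)
    # Tail int_x^inf -exp(-t)/t dt: use a finite upper limit and keep
    # doubling it until two successive results agree to within 'tol'.
    upper, prev, tail = 2.0*x, np.inf, 0.0
    while abs(tail - prev) > tol:
        prev = tail
        tail = trapezoidal(lambda t: -np.exp(-t)/t, x, upper, n)
        upper *= 2.0
    return sym + tail

print(Ei(1.0))   # about 1.8951; scipy.special.expi(1) = 1.8951178...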

Related

Implementing alternative Fibonacci sequence

So I'm struggling with Question 3. I think the representation of L would be a function that goes something like this:
import numpy as np

def L(a, b):
    # L as a 2x2 matrix, that is [[0, 1], [1, 1]], applied to the vector (a, b)
    return np.dot([[0, 1], [1, 1]], [a, b])

def fibPow(n):
    if n == 1:
        return L(0, 1)
    if n % 2 == 0:
        return np.dot(fibPow(n/2), fibPow(n/2))
    else:
        return np.dot(L(0, 1), np.dot(fibPow(n//2), fibPow(n//2)))
Given b I'm pretty sure I'm wrong. What should I be doing? Any help would be appreciated. I don't think I'm supposed to use the golden ratio property of the Fibonacci series. What should my a and b be?
EDIT: I've updated my code, but for some reason it doesn't work. L will give me the right answer, but my exponentiation seems to be wrong. Can someone tell me what I'm doing wrong?
With the edited code you are almost there. Just don't cram everything into one function; that leads to subtle mistakes, which I think you may enjoy finding.
Now, L is not a function. As I said before, it is a matrix, and the core of the problem is to compute its nth power. Consider:
L = [[0, 1], [1, 1]]

def nth_power(matrix, n):
    if n == 1:
        return matrix
    if n % 2 == 0:
        temp = nth_power(matrix, n // 2)
        return np.dot(temp, temp)
    else:
        temp = nth_power(matrix, n // 2)
        return np.dot(matrix, np.dot(temp, temp))

def fibPow(n):
    Ln = nth_power(L, n)
    return np.dot(Ln, [0, 1])[1]  # L^n applied to (0, 1) gives (F(n), F(n+1))
The nth_power is almost identical to your approach, with some trivial optimization. You may optimize it further by eliminating recursion.
First things first: there is no L(n, a, b). There is just L(a, b), a well-defined linear operator which transforms a vector (a, b) into a vector (b, a+b).
Now a huge hint: a linear operator is a matrix (in this case 2x2, and very simple). Can you spell it out?
Applying this matrix n times in a row to an initial vector (in this case (0, 1)) is, by matrix magic, equivalent to applying the nth power of L once to the initial vector. This is what Question 2 is about.
Once you determine what this matrix looks like, fibPow reduces to computing its nth power and multiplying the result by (0, 1). To get O(log n) complexity, check out exponentiation by squaring.
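Regarding eliminating the recursion mentioned above: here is a minimal iterative sketch of exponentiation by squaring for the same matrix (the name nth_power_iterative and the object-dtype array for exact integer arithmetic are my own choices, not part of the question):

import numpy as np

L = [[0, 1], [1, 1]]

def nth_power_iterative(matrix, n):
    # Scan the bits of n: square the base at every step and multiply it
    # into the result whenever the current bit is set.
    result = np.array([[1, 0], [0, 1]], dtype=object)  # identity, exact ints
    base = np.array(matrix, dtype=object)
    while n > 0:
        if n & 1:
            result = np.dot(result, base)
        base = np.dot(base, base)
        n >>= 1
    return result

print(np.dot(nth_power_iterative(L, 10), [0, 1]))  # [55 89], i.e. F(10) and F(11)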

Unexpected solution using JiTCDDE

I'm trying to investigate the behavior of the following Delayed Differential Equation using Python:
y''(t) = -y(t)/τ^2 - 2y'(t)/τ - Nd*f(y(t-T))/τ^2,
where f is a cut-off function which is essentially equal to the identity when the absolute value of its argument is between 1 and 10 and otherwise is equal to 0 (see figure 1), and Nd, τ and T are constants.
For this I'm using the package JiTCDDE. This provides a reasonable solution to the above equation. Nevertheless, when I try to add a noise term on the right-hand side of the equation, I obtain a solution which stabilizes to a non-zero constant after a few oscillations. This is not a mathematical solution of the equation (the only possible constant solution being equal to zero). I don't understand why this problem arises or whether it is possible to solve it.
I reproduce my code below. Here, for the sake of simplicity, I substituted the noise with a high-frequency cosine, which is introduced into the system of equations as the initial condition of a dummy variable (the cosine could have been introduced directly into the system, but for a general noise this doesn't seem possible). To simplify the problem further, I also removed the term involving the f function, as the problem arises without it too. Figure 2 shows the plot of the function produced by the code.
from jitcdde import jitcdde, y, t
import numpy as np
from matplotlib import pyplot as plt
import math
import symengine
from chspy import CubicHermiteSpline

# Definition of the cut-off function f (Bmin and Bmax are the cut-off bounds, not shown here):
def functionf(x):
    return x/4*(1+symengine.erf(x**2-Bmin**2))*(1-symengine.erf(x**2-Bmax**2))

# Parameters:
τ = 42.9
T = 35.33
Nd = 8.32

# Definition of the initial conditions:
dt = .01             # Time step.
totT = 10000.        # Total time.
Nmax = int(totT / dt)             # Number of time steps.
Vt = np.linspace(0., totT, Nmax)  # Vector of times.

# Definition of the "noise"
X = np.zeros(Nmax)
for i in range(Nmax):
    X[i] = math.cos(Vt[i])

past = CubicHermiteSpline(n=3)
for time, datum in zip(Vt, X):
    regular_past = [10., 0.]
    past.append((
        time - totT,
        np.hstack((regular_past, datum)),
        np.zeros(3)
    ))

noise = lambda t: y(2, t - totT)

# Integration of the DDE
g = [
    y(1),
    -y(0)/τ**2 - 2*y(1)/τ + 0.008*noise(t)
]
g.append(0)

DDE = jitcdde(g)
DDE.add_past_points(past)
DDE.adjust_diff()

data = []
for time in np.arange(DDE.t, DDE.t + totT, 1):
    data.append(DDE.integrate(time)[0])

plt.plot(data)
plt.show()
Incidentally, I noticed that even without the noise the solution seems to be discontinuous at time zero (y is set equal to zero for negative times), and I don't understand why.
As the comments revealed, your problem eventually boiled down to this:
step_on_discontinuities assumes delays that are small with respect to the integration time and places its steps at those times where the delayed components point to the integration start (0 in your case). This way initial discontinuities are handled.
However, implementing an input via a delayed dummy variable introduces a large delay into the system, totT in your case.
The respective step for step_on_discontinuities would be at totT itself, i.e., after the desired integration time.
Thus, when you reach for time in np.arange(DDE.t, DDE.t+totT, 1): in your code, DDE.t is already totT.
Therefore you have made a big step before you actually start integrating and observing, which may look like a discontinuity and lead to weird results. In particular, you do not see the effect of your input, because it has already "ended" at that point.
To avoid this, use adjust_diff or integrate_blindly instead of step_on_discontinuities.
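In code terms, the fix amounts to something like the following sketch (reusing the objects from the question; the integrate_blindly line is only an illustration of the alternative):

DDE = jitcdde(g)
DDE.add_past_points(past)

# step_on_discontinuities would step all the way to t = totT here, because
# the dummy input introduces a delay of totT. Instead, smooth out the
# initial discontinuity directly ...
DDE.adjust_diff()
# ... or integrate up to the physical delay T with fixed steps:
# DDE.integrate_blindly(T)

data = [DDE.integrate(time)[0] for time in np.arange(DDE.t, DDE.t + totT, 1)]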

How do I call a list of numpy functions without a for loop?

I'm doing data analysis that involves minimizing the least-square-error between a set of points and a corresponding set of orthogonal functions. In other words, I'm taking a set of y-values and a set of functions, and trying to zero in on the x-value that gets all of the functions closest to their corresponding y-value. Everything is being done in a 'data_set' class. The functions that I'm comparing to are all stored in one list, and I'm using a class method to calculate the total lsq-error for all of them:
self.fits = [np.poly1d(np.polyfit(self.x_data, self.y_data[n], 10))
             for n in range(self.num_points)]

def error(self, x, y_set):
    arr = [(y_set[n] - self.fits[n](x))**2 for n in range(self.num_points)]
    return np.sum(arr)
This was fine when I had significantly more time than data, but now I'm taking thousands of x-values, each with a thousand y-values, and that for loop is unacceptably slow. I've been trying to use np.vectorize:
# global scope
def func(f, x):
    return f(x)

vfunc = np.vectorize(func, excluded=['x'])

…
…

# within the data_set class
def error(self, x, y_set):
    arr = (y_set - vfunc(self.fits, x))**2
    return np.sum(arr)
func(self.fits[n], x) works fine as long as n is valid, and as far as I can tell from the docs, vfunc(self.fits, x) should be equivalent to
[self.fits[n](x) for n in range(self.num_points)]
but instead it throws:
ValueError: cannot copy sequence with size 10 to array axis with dimension 11
10 is the degree of the polynomial fit, and 11 is (by definition) the number of terms in it, but I have no idea why they're showing up here. If I change the fit order, the error message reflects the change. It seems like np.vectorize is taking each element of self.fits as a list rather than a np.poly1d function.
Anyway, if someone could either help me understand np.vectorize better, or suggest another way to eliminate that loop, that would be swell.
As the functions in question all have a very similar structure, we can "manually" vectorize once we've extracted the polynomial coefficients. In fact, the evaluation then becomes a quite simple one-liner, eval_many below:
import numpy as np

def poly_vec(list_of_polys):
    O = max(p.order for p in list_of_polys) + 1
    C = np.zeros((len(list_of_polys), O))
    for p, c in zip(list_of_polys, C):
        c[len(c)-p.order-1:] = p.coeffs
    return C

def eval_many(x, C):
    return C @ np.vander(x, C.shape[-1]).T

# make example
list_of_polys = [np.poly1d(v) for v in np.random.random((1000, 11))]
x = np.random.random((2000,))

# put all coeffs in one master matrix
C = poly_vec(list_of_polys)

# test
assert np.allclose(eval_many(x, C), [p(x) for p in list_of_polys])

from timeit import timeit
print('vectorized', timeit(lambda: eval_many(x, C), number=100)*10)
print('loopy     ', timeit(lambda: [p(x) for p in list_of_polys], number=10)*100)
Sample run:
vectorized 6.817315469961613
loopy 56.35076989419758
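As a side note, numpy's polynomial module can do the batched evaluation too (my own addition, not part of the answer above): numpy.polynomial.polynomial.polyval accepts a 2-D coefficient array and evaluates one polynomial per column.

import numpy as np
from numpy.polynomial import polynomial as P

list_of_polys = [np.poly1d(v) for v in np.random.random((1000, 11))]
x = np.random.random((2000,))

# poly1d stores coefficients highest power first; polyval wants lowest power
# first, one polynomial per column.
C_asc = np.array([p.coeffs[::-1] for p in list_of_polys]).T   # shape (11, 1000)
vals = P.polyval(x, C_asc)                                    # shape (1000, 2000)

assert np.allclose(vals, [p(x) for p in list_of_polys])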

Scipy.integrate gives odd results; are there best practices?

I am still struggling with scipy.integrate.quad.
Sparing all the details, I have an integral to evaluate. The function is of the form of the integral of a product of functions in x, like so:
Z(k) = ∫ f(x) g(k/x) / abs(x) dx
I know for certain the range of integration is between two positive numbers. Oddly, when I pick a wide range that I know must contain all the relevant positive values of x - like integrating from 1 to 10,000,000 - it integrates fast and gives an answer which looks right. But when I figure out the exact limits - which I know, since f(x) is zero over a lot of the real line - and use those, I get another answer that is different. They aren't very different, though I know the second is more accurate.
After much fiddling I got it to work OK, but then I needed to add in an exponentiation (which is needed). Before that I was at least getting a 'smooth' answer for the computed function Z; now the function that gets generated becomes more and more oscillatory and peculiar.
Any idea what is happening here? I know this code comes from an old Fortran library, so there must be some known issues, but I can't find references.
Here is the core code:
import numpy as np
from math import exp
from scipy.integrate import quad

# getP, z_min, z_max, lb, ub, MU, SIGMA and X come from the surrounding code (not shown).

def normal(x, mu, sigma):
    return (1.0/((2.0*3.14159*sigma**2)**0.5)
            * exp(-(x - mu)**2/(2*sigma**2)))

def integrand(x, z, mu, sigma, f):
    return np.exp(normal(z/x, mu, sigma)) * getP(x, f._x, f._y) / abs(x)

z, p = [], []
for _z in range(int(z_min), int(z_max) + 1, 1000):
    z.append(_z)
    pResult = quad(integrand, lb, ub,
                   args=(float(_z), MU - SIGMA**2/2, SIGMA, X),
                   points=[100000.0],
                   epsabs=1, epsrel=.01)
    p.append(pResult[0])  # drop error estimate of tuple
By the way, getP() returns a linearly interpolated, piecewise continuous, but non-smooth function to give the integrator values that smoothly fit between the discrete 'buckets' of the histogram.
As with many numerical methods, quad can be very sensitive to asymptotes, zeros, and the like. The only real option is to keep giving it hints, if it will accept them.
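For illustration, one concrete way to pass such a hint is quad's points argument, which lists locations where the integrand is known to be non-smooth (the integrand below is a hypothetical stand-in, not the asker's; points only works with finite limits):

from scipy.integrate import quad

def integrand(x):
    # hypothetical piecewise integrand with a kink at x = 2.0
    return (x if x < 2.0 else 4.0 - x) / abs(x)

# Tell quad where the kink is and allow more subintervals.
val, err = quad(integrand, 0.5, 3.5, points=[2.0], limit=200)
print(val, err)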

My Python trapezoidal code is saying x not defined

I am trying to find the integral of sin^2(x)/x^2 using the trapezoidal rule, but while running it says x is not defined. Can anyone suggest what is wrong?
import math
c = math.sin

def trapezoidal(f, a, b, N):
    if x != 0:
        h = (b - a)/N
        s = 0.0
        s += f(a)/2.0
        for i in range(1, N):
            s += f(a + i*h)
        s += f(b)/2.0
        Y = s*h
    else:
        y = 1
    return Y

for n in range(1, 11):
    N = 2**n
    result = trapezoidal(lambda x: ((c(x)*c(x))/(x**2)), 0, 1000, N)
    print(repr(n).rjust(10), repr(result).rjust(30))
You never define x inside trapezoidal; x only exists as the parameter of the lambda you pass in as f, so the line if x != 0: raises the NameError. What you actually want is to guard against evaluating f at zero for each sample point inside the loop.
If you don't want to roll your own, have a look at numpy and scipy. These packages offer optimised functions for basic operations such as integration and matrix calculations. Have a look at np.trapz (https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html) and sp.integrate (https://docs.scipy.org/doc/scipy-0.18.1/reference/tutorial/integrate.html).
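For reference, here is a minimal sketch of the same integral with np.trapz (my own example; note that sin(x)^2/x^2 has a removable singularity at x = 0, where its value is 1):

import numpy as np

x = np.linspace(0.0, 1000.0, 2_000_001)
y = np.ones_like(x)                     # value at the removable singularity x = 0
nz = x != 0
y[nz] = np.sin(x[nz])**2 / x[nz]**2
print(np.trapz(y, x))                   # close to pi/2, the value of the integral on [0, inf)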
