Determining initial parameters for a 2D Gaussian fit - python-3.x

I'm working on some code that needs to perform a 2D Gaussian fit. I mostly based my code on the following question: Fitting a 2D Gaussian function using scipy.optimize.curve_fit - ValueError and minpack.error. The problem is that I don't really have an initial guess for the different parameters that need to be used.
I've tried this:
def twoD_Gaussian(x_data_tuple, amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
    (x, y) = x_data_tuple
    xo = float(xo)
    yo = float(yo)
    a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
    b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
    c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
    g = offset + amplitude*np.exp(-(a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
                                    + c*((y-yo)**2)))
    return g.ravel()
The data.reshape(201,201) is just something I took from the aforementioned question.
mean_gauss_x = sum(x * data.reshape(201,201)) / sum(data.reshape(201,201))
sigma_gauss_x = np.sqrt(sum(data.reshape(201,201) * (x - mean_gauss_x)**2) / sum(data.reshape(201,201)))
mean_gauss_y = sum(y * data.reshape(201,201)) / sum(data.reshape(201,201))
sigma_gauss_y = np.sqrt(sum(data.reshape(201,201) * (y - mean_gauss_y)**2) / sum(data.reshape(201,201)))
initial_guess = (np.max(data), mean_gauss_x, mean_gauss_y, sigma_gauss_x, sigma_gauss_y,0,10)
popt, pcov = curve_fit(twoD_Gaussian, (x, y), data, p0=initial_guess)
data_fitted = twoD_Gaussian((x, y), *popt)
If I try this, I get the following error message: ValueError: setting an array element with a sequence.
Is the reasoning about the initial parameters correct, and why do I get this error?

If I use the runnable code from the linked question and substitute your definition of initial_guess:
mean_gauss_x = sum(x * data.reshape(201,201)) / sum(data.reshape(201,201))
sigma_gauss_x = np.sqrt(sum(data.reshape(201,201) * (x - mean_gauss_x)**2) / sum(data.reshape(201,201)))
mean_gauss_y = sum(y * data.reshape(201,201)) / sum(data.reshape(201,201))
sigma_gauss_y = np.sqrt(sum(data.reshape(201,201) * (y - mean_gauss_y)**2) / sum(data.reshape(201,201)))
initial_guess = (np.max(data), mean_gauss_x, mean_gauss_y, sigma_gauss_x, sigma_gauss_y,0,10)
Then
print(initial_guess)
yields
(13.0, array([...]), array([...]), array([...]), array([...]), 0, 10)
Notice that some of the values in initial_guess are arrays. The optimize.curve_fit function expects initial_guess to be a tuple of scalars. This is the source of the problem.
The error message
ValueError: setting an array element with a sequence
often arises when an array-like is supplied when a scalar value is expected. It is a hint that the source of the problem may have to do with an array having the wrong number of dimensions. For example, it might arise if you pass a 1D array to a function that expects a scalar.
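For example, here is a minimal snippet (my own, not from the original post) that reproduces the error; the exact wording can vary between NumPy versions:

import numpy as np

arr = np.zeros(3)          # an array of scalar slots
arr[0] = np.array([1, 2])  # trying to store a sequence in a scalar slot
# ValueError: setting an array element with a sequence.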
Let's look at this piece of code taken from the linked question:
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
X, Y = np.meshgrid(x, y)
x and y are 1D arrays, while X and Y are 2D arrays. (I've capitalized all 2D arrays to help distinguish them from 1D arrays).
Now notice that Python sum and NumPy's sum method behave differently when applied to 2D arrays:
In [146]: sum(X)
Out[146]:
array([ 0., 201., 402., 603., 804., 1005., 1206., 1407.,
1608., 1809., 2010., 2211., 2412., 2613., 2814., 3015.,
...
38592., 38793., 38994., 39195., 39396., 39597., 39798., 39999.,
40200.])
In [147]: X.sum()
Out[147]: 4040100.0
The Python sum function is equivalent to
total = 0
for item in X:
    total += item
Since X is a 2D array, the loop for item in X is iterating over the rows of X. Each item is therefore a 1D array representing a row of X. Thus, total ends up being a 1D array.
In contrast, X.sum() sums all the elements in X and returns a scalar.
Since initial_guess should be a tuple of scalars, everywhere you use sum you should instead use the NumPy sum method. For example, replace
mean_gauss_x = sum(x * data) / sum(data)
with
mean_gauss_x = (X * DATA).sum() / (DATA.sum())
import numpy as np
import scipy.optimize as optimize
import matplotlib.pyplot as plt

# define model function and pass independent variables x and y as a list
def twoD_Gaussian(data, amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
    X, Y = data
    xo = float(xo)
    yo = float(yo)
    a = (np.cos(theta) ** 2) / (2 * sigma_x ** 2) + (np.sin(theta) ** 2) / (
        2 * sigma_y ** 2
    )
    b = -(np.sin(2 * theta)) / (4 * sigma_x ** 2) + (np.sin(2 * theta)) / (
        4 * sigma_y ** 2
    )
    c = (np.sin(theta) ** 2) / (2 * sigma_x ** 2) + (np.cos(theta) ** 2) / (
        2 * sigma_y ** 2
    )
    g = offset + amplitude * np.exp(
        -(a * ((X - xo) ** 2) + 2 * b * (X - xo) * (Y - yo) + c * ((Y - yo) ** 2))
    )
    return g.ravel()

# Create x and y indices
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
X, Y = np.meshgrid(x, y)

# Create the data, add some noise, and try to fit the noisy data
data = twoD_Gaussian((X, Y), 3, 100, 100, 20, 40, 0, 10)
data_noisy = data + 0.2 * np.random.normal(size=data.shape)
DATA = data.reshape(201, 201)

# Estimate initial parameters with NumPy's sum method (scalars, not arrays)
mean_gauss_x = (X * DATA).sum() / (DATA.sum())
sigma_gauss_x = np.sqrt((DATA * (X - mean_gauss_x) ** 2).sum() / (DATA.sum()))
mean_gauss_y = (Y * DATA).sum() / (DATA.sum())
sigma_gauss_y = np.sqrt((DATA * (Y - mean_gauss_y) ** 2).sum() / (DATA.sum()))

initial_guess = (
    np.max(data),
    mean_gauss_x,
    mean_gauss_y,
    sigma_gauss_x,
    sigma_gauss_y,
    0,
    10,
)
print(initial_guess)
# (13.0, 100.00000000000001, 100.00000000000001, 57.106515650488404, 57.43620227324201, 0, 10)
# initial_guess = (3,100,100,20,40,0,10)

popt, pcov = optimize.curve_fit(twoD_Gaussian, (X, Y), data_noisy, p0=initial_guess)
data_fitted = twoD_Gaussian((X, Y), *popt)

fig, ax = plt.subplots(1, 1)
ax.imshow(
    data_noisy.reshape(201, 201),
    cmap=plt.cm.jet,
    origin="lower",  # "bottom" is not a valid origin in current matplotlib
    extent=(X.min(), X.max(), Y.min(), Y.max()),
)
ax.contour(X, Y, data_fitted.reshape(201, 201), 8, colors="w")
plt.show()

Related

Why can't I get this Runge-Kutta solver to converge as the time step decreases?

For reasons, I need to implement the Runge-Kutta4 method in PyTorch (so no, I'm not going to use scipy.odeint). I tried and I get weird results on the simplest test case, solving x'=x with x(0)=1 (analytical solution: x=exp(t)). Basically, as I reduce the time step, I cannot get the numerical error to go down. I'm able to do it with a simpler Euler method, but not with the Runge-Kutta 4 method, which makes me suspect some floating point issue here (maybe I'm missing some hidden conversion from double precision to single)?
import torch
import numpy as np
import matplotlib.pyplot as plt
def Euler(f, IC, time_grid):
    y0 = torch.tensor([IC])
    time_grid = time_grid.to(y0[0])
    values = y0
    for i in range(0, time_grid.shape[0] - 1):
        t_i = time_grid[i]
        t_next = time_grid[i+1]
        y_i = values[i]
        dt = t_next - t_i
        dy = f(t_i, y_i) * dt
        y_next = y_i + dy
        y_next = y_next.unsqueeze(0)
        values = torch.cat((values, y_next), dim=0)
    return values

def RungeKutta4(f, IC, time_grid):
    y0 = torch.tensor([IC])
    time_grid = time_grid.to(y0[0])
    values = y0
    for i in range(0, time_grid.shape[0] - 1):
        t_i = time_grid[i]
        t_next = time_grid[i+1]
        y_i = values[i]
        dt = t_next - t_i
        dtd2 = 0.5 * dt
        f1 = f(t_i, y_i)
        f2 = f(t_i + dtd2, y_i + dtd2 * f1)
        f3 = f(t_i + dtd2, y_i + dtd2 * f2)
        f4 = f(t_next, y_i + dt * f3)
        dy = 1/6 * dt * (f1 + 2 * (f2 + f3) + f4)
        y_next = y_i + dy
        y_next = y_next.unsqueeze(0)
        values = torch.cat((values, y_next), dim=0)
    return values

# differential equation
def f(T, X):
    return X

# initial condition
IC = 1.

# integration interval
def integration_interval(steps, ND=1):
    return torch.linspace(0, ND, steps)

# analytical solution
def analytical_solution(t_range):
    return np.exp(t_range)

# test a numerical method
def test_method(method, t_range, analytical_solution):
    numerical_solution = method(f, IC, t_range)
    L_inf_err = torch.dist(numerical_solution, analytical_solution, float('inf'))
    return L_inf_err

if __name__ == '__main__':
    Euler_error = np.array([0.,0.,0.])
    RungeKutta4_error = np.array([0.,0.,0.])
    indices = np.arange(1, Euler_error.shape[0]+1)
    n_steps = np.power(10, indices)
    for i, n in np.ndenumerate(n_steps):
        t_range = integration_interval(steps=n)
        solution = analytical_solution(t_range)
        Euler_error[i] = test_method(Euler, t_range, solution).numpy()
        RungeKutta4_error[i] = test_method(RungeKutta4, t_range, solution).numpy()
    plots_path = "./plots"
    a = plt.figure()
    plt.xscale('log')
    plt.yscale('log')
    plt.plot(n_steps, Euler_error, label="Euler error", linestyle='-')
    plt.plot(n_steps, RungeKutta4_error, label="RungeKutta 4 error", linestyle='-.')
    plt.legend()
    plt.savefig(plots_path + "/errors.png")
The result:
As you can see, the Euler method converges (slowly, as expected of a first order method). However, the Runge-Kutta4 method does not converge as the time step gets smaller and smaller. The error goes down initially, and then up again. What's the issue here?
The reason is indeed a floating point precision issue. torch defaults to single precision, so once the truncation error becomes small enough, the total error is basically determined by the roundoff error, and reducing the truncation error further by increasing the number of steps (i.e., decreasing the time step) doesn't lead to any decrease in the total error.
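To see the scale of the problem, here is a quick check (a minimal sketch of my own) of the machine epsilon in both precisions:

import torch

# machine epsilon: the smallest eps such that 1 + eps != 1
print(torch.finfo(torch.float32).eps)  # ~1.19e-07
print(torch.finfo(torch.float64).eps)  # ~2.22e-16

# in float32, a small per-step increment can be lost entirely to roundoff
x = torch.tensor(1.0, dtype=torch.float32)
print(x + 1e-8 == x)  # tensor(True): the update vanishes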
To fix this, we need to enforce double-precision 64-bit floats for all floating point torch tensors and numpy arrays. Note that the right way to do this is to use torch.float64 and np.float64 respectively, rather than, e.g., torch.double and np.double, because the former are fixed-size float values (always 64-bit), while the latter depend on the machine and/or compiler. Here's the fixed code:
import torch
import numpy as np
import matplotlib.pyplot as plt
def Euler(f, IC, time_grid):
    y0 = torch.tensor([IC], dtype=torch.float64)
    time_grid = time_grid.to(y0[0])
    values = y0
    for i in range(0, time_grid.shape[0] - 1):
        t_i = time_grid[i]
        t_next = time_grid[i+1]
        y_i = values[i]
        dt = t_next - t_i
        dy = f(t_i, y_i) * dt
        y_next = y_i + dy
        y_next = y_next.unsqueeze(0)
        values = torch.cat((values, y_next), dim=0)
    return values

def RungeKutta4(f, IC, time_grid):
    y0 = torch.tensor([IC], dtype=torch.float64)
    time_grid = time_grid.to(y0[0])
    values = y0
    for i in range(0, time_grid.shape[0] - 1):
        t_i = time_grid[i]
        t_next = time_grid[i+1]
        y_i = values[i]
        dt = t_next - t_i
        dtd2 = 0.5 * dt
        f1 = f(t_i, y_i)
        f2 = f(t_i + dtd2, y_i + dtd2 * f1)
        f3 = f(t_i + dtd2, y_i + dtd2 * f2)
        f4 = f(t_next, y_i + dt * f3)
        dy = 1/6 * dt * (f1 + 2 * (f2 + f3) + f4)
        y_next = y_i + dy
        y_next = y_next.unsqueeze(0)
        values = torch.cat((values, y_next), dim=0)
    return values

# differential equation
def f(T, X):
    return X

# initial condition
IC = 1.

# integration interval
def integration_interval(steps, ND=1):
    return torch.linspace(0, ND, steps, dtype=torch.float64)

# analytical solution
def analytical_solution(t_range):
    return np.exp(t_range, dtype=np.float64)

# test a numerical method
def test_method(method, t_range, analytical_solution):
    numerical_solution = method(f, IC, t_range)
    L_inf_err = torch.dist(numerical_solution, analytical_solution, float('inf'))
    return L_inf_err

if __name__ == '__main__':
    Euler_error = np.array([0.,0.,0.], dtype=np.float64)
    RungeKutta4_error = np.array([0.,0.,0.], dtype=np.float64)
    indices = np.arange(1, Euler_error.shape[0]+1)
    n_steps = np.power(10, indices)
    for i, n in np.ndenumerate(n_steps):
        t_range = integration_interval(steps=n)
        solution = analytical_solution(t_range)
        Euler_error[i] = test_method(Euler, t_range, solution).numpy()
        RungeKutta4_error[i] = test_method(RungeKutta4, t_range, solution).numpy()
    plots_path = "./plots"
    a = plt.figure()
    plt.xscale('log')
    plt.yscale('log')
    plt.plot(n_steps, Euler_error, label="Euler error", linestyle='-')
    plt.plot(n_steps, RungeKutta4_error, label="RungeKutta 4 error", linestyle='-.')
    plt.legend()
    plt.savefig(plots_path + "/errors.png")
Result:
Now, as we decrease the time step, the error of the RungeKutta4 approximation decreases with the correct rate.
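To confirm the rate, a quick sanity check (a sketch that assumes the arrays from the script above are in scope) is to compare successive errors: for a fourth-order method, multiplying the number of steps by 10 should divide the error by roughly 10^4 until roundoff takes over:

import numpy as np

# observed convergence order per decade of step refinement
orders = np.log10(RungeKutta4_error[:-1] / RungeKutta4_error[1:])
print(orders)  # entries should be close to 4 while truncation error dominates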

Function to generate more than one random point in a given circle

So I have defined a function to generate a random point in a given circle:
import math
import random

def rand_gen(R, c):
    # random angle
    alpha = 2 * math.pi * random.random()
    # random radius (sqrt keeps the sampling uniform over the disc)
    r = R * math.sqrt(random.random())
    # calculating coordinates
    x = r * math.cos(alpha) + c[0]
    y = r * math.sin(alpha) + c[1]
    return (x, y)
Now I want to add a parameter n (rand_gen(R, c, n)) such that we can get n such points instead of one.
You can also achieve that goal using a generator function:
import random
import math

def rand_gen(R, c, n):
    while n:
        # random angle
        alpha = 2 * math.pi * random.random()
        # random radius
        r = R * math.sqrt(random.random())
        # calculating coordinates
        x = r * math.cos(alpha) + c[0]
        y = r * math.sin(alpha) + c[1]
        n -= 1
        yield (x, y)

points_gen = rand_gen(10, (3, 4), 4)
for point in points_gen:
    print(point)

points = [*rand_gen(10, (3, 4), 4)]
print(points)
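A nice property of the generator version is that points are produced lazily, so you can consume only as many as you need. For example (a small usage sketch):

from itertools import islice

# take just the first two points without generating the rest
first_two = list(islice(rand_gen(10, (3, 4), 100), 2))
print(first_two)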
If you want to generate n random points, you can extend your function with a parameter and for-loop:
import math
import random

def rand_gen(R, c, n):
    out = []
    for i in range(n):
        # random angle
        alpha = 2 * math.pi * random.random()
        # random radius
        r = R * math.sqrt(random.random())
        # calculating coordinates
        x = r * math.cos(alpha) + c[0]
        y = r * math.sin(alpha) + c[1]
        out.append((x, y))
    return out

print(rand_gen(10, (3, 4), 3))
Prints (for example):
[(-4.700562169626218, 5.62666979720004), (6.481518730246707, -1.849892172014873), (0.41713910134636345, -1.9065302305716285)]
But a better approach in my opinion would be to leave the function as is and generate n points using a list comprehension:
lst = [rand_gen(10, (3, 4)) for _ in range(3)]
print(lst)
Prints:
[(-1.891474340814674, -2.922205399732765), (12.557063558442614, 1.9323688240857821), (-5.450160078420653, 7.974550456763403)]
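If n gets large, a vectorized NumPy variant (my own sketch, not part of the answers above) avoids the Python-level loop entirely:

import numpy as np

def rand_gen_np(R, c, n):
    rng = np.random.default_rng()
    # draw all angles and radii at once; sqrt keeps the sampling uniform over the disc
    alpha = 2 * np.pi * rng.random(n)
    r = R * np.sqrt(rng.random(n))
    return np.column_stack((r * np.cos(alpha) + c[0],
                            r * np.sin(alpha) + c[1]))

print(rand_gen_np(10, (3, 4), 3))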

Different shape arrays operations

A bit of background:
I want to calculate the array factor of an MxN antenna array, which is given by the following equation:
AF(theta, phi) = sum_i w_i * exp(-j*k*(x_i*sin(theta)*cos(phi) + y_i*sin(theta)*sin(phi) + z_i*cos(theta)))
where w_i is the complex weight of the i-th element, (x_i, y_i, z_i) is the position of the i-th element, k is the wave number, theta and phi are the elevation and azimuth respectively, and i ranges from 0 to MxN-1.
In the code I have:
- theta and phi are np.mgrid arrays with shape (200, 200) each,
- w_i and (x, y, z)_i are np.array with shape (NxM,) each,
so AF is an np.array with shape (200, 200) (sum over i). There is no problem so far, and I can get AF easily doing:
af = zeros([theta.shape[0], phi.shape[0]])
for i in range(self.size[0]*self.size[1]):
    af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]*sin(theta)*sin(phi) + k * z_pos[i]*cos(theta))) )
Now, each w_i depends on frequency, so AF too, and now I have w_i with shape (NxM,1000) (I have 1000 samples of each w_i in frequency). I tried to use the above code changing
af = zeros([1000,theta.shape[0],phi.shape[0]])
but I get 'operands could not be broadcast together'. I can solve this by using a for loop over the 1000 values, but it is slow and a bit ugly. So, what is the correct way to do the summation, or to properly define w_i and AF?
Any help would be appreciated. Thanks.
Edit:
The code with the new dimension I'm trying to add is the following:
from numpy import *

class AntennaArray:
    def __init__(self, f, asize=None, tipo=None, dx=None, dy=None):
        self.Lambda = 299792458 / f
        self.k = 2*pi/self.Lambda
        self.size = asize
        self.type = tipo
        self._AF_DATA_SIZE = 200
        self.theta, self.phi = mgrid[0:pi:self._AF_DATA_SIZE*1j, 0:2*pi:self._AF_DATA_SIZE*1j]
        self.element_pos = None
        self.element_amp = None
        self.element_pha = None
        if dx == None:
            self.dx = self.Lambda/2
        else:
            self.dx = dx
        if dy == None:
            self.dy = self.Lambda/2
        else:
            self.dy = dy
        self.generate_array()

    def generate_array(self):
        M = self.size[0]
        N = self.size[1]
        dx = self.dx
        dy = self.dy
        x_pos = arange(0, dx*N, dx)
        y_pos = arange(0, dy*M, dy)
        z_pos = 0
        ele = zeros([N*M, 3])
        for i in range(M):
            ele[i*N:(i+1)*N, 0] = x_pos[:]
        for i in range(M):
            ele[i*N:(i+1)*N, 1] = y_pos[i]
        self.element_pos = ele
        #self.array_factor = self.calculate_array_factor()

    def calculate_array_factor(self):
        theta, phi = self.theta, self.phi
        k = self.k
        x_pos = self.element_pos[:, 0]
        y_pos = self.element_pos[:, 1]
        z_pos = self.element_pos[:, 2]
        w = self.element_amp*exp(1j*self.element_pha)
        if len(self.element_pha.shape) > 1:
            # I have f_size samples of w_i(f)
            f_size = self.element_pha.shape[1]
            af = zeros([f_size, theta.shape[0], phi.shape[0]])
        else:
            # I only have w_i
            af = zeros([theta.shape[0], phi.shape[0]])
        # this for loop does the summation over i
        for i in range(self.size[0]*self.size[1]):
            af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]*sin(theta)*sin(phi) + k * z_pos[i]*cos(theta))) )
        return af
I tried to test it with the following main:
from numpy import *
f_points = 10
M = 2
N = 2
a = AntennaArray(5.8e9,[M,N])
a.element_amp = ones([M*N,f_points])
a.element_pha = zeros([M*N,f_points])
af = a.calculate_array_factor()
But I get
ValueError: 'operands could not be broadcast together with shapes (10,) (200,200) '
Note that if I set
a.element_amp = ones([M*N])
a.element_pha = zeros([M*N])
This works well.
Thanks.
I had a look at the code, and I think this for loop:
af = zeros([theta.shape[0], phi.shape[0]])
for i in range(self.size[0]*self.size[1]):
    af = af + ( w[i]*e**(-1j*(k * x_pos[i]*sin(theta)*cos(phi) + k * y_pos[i]*sin(theta)*sin(phi) + k * z_pos[i]*cos(theta))) )
mixes dimensions and cannot work as written: once w carries a frequency axis, w[i] is a 1D array of frequency samples, which cannot broadcast directly against the (200, 200) angle grids.
And by the way, to make full use of numpy's efficiency, avoid looping over arrays; it slows down execution significantly.
I tried to rework that part.
First, I advise you not to use from numpy import *; it's bad practice (see here). Use import numpy as np. I reintroduced the np abbreviation, so you can see what comes from numpy.
Frequency independent case
This first snippet assumes that w is a 1D array of length 4: I am neglecting the frequency dependency of w, to show you how you can get what you already obtained without the for loop and using instead the power of numpy.
af_points = w[:, np.newaxis, np.newaxis] * np.e**(-1j*(
    k * x_pos[:, np.newaxis, np.newaxis] * np.sin(theta) * np.cos(phi) +
    k * y_pos[:, np.newaxis, np.newaxis] * np.sin(theta) * np.sin(phi) +
    k * z_pos[:, np.newaxis, np.newaxis] * np.cos(theta)
))
af = np.sum(af_points, axis=0)
I am using numpy broadcasting to obtain a 3D array named af_points, whose shape is (4, 200, 200). To do it, I use np.newaxis to extend the number of axis of an array in order to use broadcasting correctly. More here on np.newaxis.
So, w[:,np.newaxis,np.newaxis] is an array of shape (4, 1, 1). Similarly for x_pos[:,np.newaxis,np.newaxis], y_pos[:,np.newaxis,np.newaxis] and z_pos[:,np.newaxis,np.newaxis]. Since the angles have shape (200, 200), broadcasting can be done, and af_points has shape (4, 200, 200).
Finally the sum is done by np.sum, summing over the first axis to obtain a (200, 200) array.
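To make the broadcasting concrete, here is a tiny sketch with stand-in shapes (4 elements, a 3x3 angle grid) showing how the axes line up:

import numpy as np

w = np.arange(4)        # shape (4,), stand-in weights
grid = np.ones((3, 3))  # stand-in for sin(theta)*cos(phi), shape (3, 3)

per_element = w[:, np.newaxis, np.newaxis] * grid  # (4, 1, 1) * (3, 3) -> (4, 3, 3)
print(per_element.shape)              # (4, 3, 3)
print(per_element.sum(axis=0).shape)  # (3, 3), the sum over elements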
Frequency dependent case
Now w has shape (4, 10), where 10 are the frequency points. The idea is the same, just consider that the frequency is an additional dimension in your numpy arrays: now af_points will be an array of shape (4, 10, 200, 200) where 10 are the f_points you have defined.
To keep it understandable, I've split the calculation:
# exp_points is only the exponential factor, frequency independent: a (4, 200, 200) array
exp_points = np.e**(-1j*(
    k * x_pos[:, np.newaxis, np.newaxis] * np.sin(theta) * np.cos(phi) +
    k * y_pos[:, np.newaxis, np.newaxis] * np.sin(theta) * np.sin(phi) +
    k * z_pos[:, np.newaxis, np.newaxis] * np.cos(theta)
))
af_points = w[:, :, np.newaxis, np.newaxis] * exp_points[:, np.newaxis, :, :]
af = np.sum(af_points, axis=0)
And now af has shape (10, 200, 200).
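Equivalently (my own variant, not part of the answer above), np.einsum expresses the weighted sum over elements in a single call and avoids materializing the full (4, 10, 200, 200) intermediate:

import numpy as np

n_elements, n_freqs = 4, 10
w = np.ones((n_elements, n_freqs), dtype=complex)            # stand-in weights
exp_points = np.ones((n_elements, 200, 200), dtype=complex)  # stand-in exponentials

# sum over the element axis i, keeping frequency f and the angle grid (j, k)
af = np.einsum('if,ijk->fjk', w, exp_points)
print(af.shape)  # (10, 200, 200)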

Numpy tensor implementation slower than loop

I have two functions that compute the same metric. One ends up using a list comprehension to cycle through a calculation, the other uses only numpy tensor operations. The functions take in a (N, 3) array, where N is the number of points in 3D space. When N <~ 3000 the tensor function is faster; when N >~ 3000 the list comprehension is faster. Both seem to have linear time complexity in terms of N, i.e. the two time-vs-N lines cross at N ≈ 3000.
def approximate_area_loop(section, num_area_divisions):
    n_a_d = num_area_divisions
    interp_vectors = get_section_interp_(section)
    a1 = section[:-1]
    b1 = section[1:]
    a2 = interp_vectors[:-1]
    b2 = interp_vectors[1:]
    c = lambda u: (1 - u) * a1 + u * a2
    d = lambda u: (1 - u) * b1 + u * b2
    x = lambda u, v: (1 - v) * c(u) + v * d(u)
    area = np.sum([np.linalg.norm(np.cross((x((i + 1)/n_a_d, j/n_a_d) - x(i/n_a_d, j/n_a_d)),
                                           (x(i/n_a_d, (j + 1)/n_a_d) - x(i/n_a_d, j/n_a_d))), axis=1)
                   for i in range(n_a_d) for j in range(n_a_d)])
    Dt = section[-1, 0] - section[0, 0]
    return area, Dt

def approximate_area_tensor(section, num_area_divisions):
    divisors = np.linspace(0, 1, num_area_divisions + 1)
    interp_vectors = get_section_interp_(section)
    a1 = section[:-1]
    b1 = section[1:]
    a2 = interp_vectors[:-1]
    b2 = interp_vectors[1:]
    c = np.multiply.outer(a1, (1 - divisors)) + np.multiply.outer(a2, divisors)  # c_areas_vecs_divs
    d = np.multiply.outer(b1, (1 - divisors)) + np.multiply.outer(b2, divisors)  # d_areas_vecs_divs
    x = np.multiply.outer(c, (1 - divisors)) + np.multiply.outer(d, divisors)    # x_areas_vecs_Divs_divs
    u = x[:, :, 1:, :-1] - x[:, :, :-1, :-1]  # u_areas_vecs_Divs_divs
    v = x[:, :, :-1, 1:] - x[:, :, :-1, :-1]  # v_areas_vecs_Divs_divs
    sub_area_norm_vecs = np.cross(u, v, axis=1)  # areas_crosses_Divs_divs
    sub_areas = np.linalg.norm(sub_area_norm_vecs, axis=1)  # areas_Divs_divs (values are now sub areas)
    area = np.sum(sub_areas)
    Dt = section[-1, 0] - section[0, 0]
    return area, Dt
Why does the list comprehension version work faster at large N? Surely the tensor version should be faster? I'm wondering if it's something to do with the size of the calculations meaning it's too big to be done in cache? Please ask if I haven't included enough information, I'd really like to get to the bottom of this.
The bottleneck in the fully vectorized function was indeed in np.linalg.norm, as hpaulj's comment suggested.
Norm was used only to get the magnitude of all the vectors contained in axis 1. A much simpler and faster method was to just:
sub_areas = np.sqrt((sub_area_norm_vecs*sub_area_norm_vecs).sum(axis = 1))
This gives exactly the same results and made the code up to 25 times faster than the loop implementation (even when the loop doesn't use linalg.norm either).
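A minimal sketch (my own, with made-up shapes) to verify both the equivalence and the speedup on your machine:

import numpy as np
from timeit import timeit

vecs = np.random.rand(1000, 3, 40, 40)  # stand-in for sub_area_norm_vecs, vectors on axis 1

norm_a = np.linalg.norm(vecs, axis=1)
norm_b = np.sqrt((vecs * vecs).sum(axis=1))
print(np.allclose(norm_a, norm_b))  # True

print(timeit(lambda: np.linalg.norm(vecs, axis=1), number=10))
print(timeit(lambda: np.sqrt((vecs * vecs).sum(axis=1)), number=10))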

Tensor("pow:0", ...) must be from the same graph as Tensor("Cast_2:0", ...)

I am trying to model something which needs to perform a definite integration. The code is shown below:
import tensorflow as tf
from numpy import pi, inf
from tensorflow import log, sqrt, exp, pow
from scipy.integrate import quad # for integration
def risk_neutral_pdf(phi, a, S, K, r, sigma, Mt, p_dict):
    phii = tf.complex(0., phi)
    A = tf.cast(0., tf.complex64)
    B = tf.cast(0., tf.complex64)
    p_dict['gamma'] = p_dict['gamma'] + p_dict['lamda'] + .5
    p_dict['lamda'] = -.5
    for t in range(Mt-1, -1, -1):
        temp = 1. - 2. * p_dict['alpha'] * B
        A = A + (phii + a) * r + p_dict['omega'] * B - .5 * log(temp)
        B = B * p_dict['beta'] + (phii + a) * (p_dict['lamda'] + p_dict['gamma']) - \
            .5 * p_dict['gamma']**2. + (.5*((phii + a) - p_dict['gamma'])**2. / temp)
    return tf.real(S**a * (S/K)**phii * exp(A + B * sigma**2.) / phii)

p_dict = {'lamda': 0.205, 'omega': 5.02e-6, 'beta': 0.589, 'gamma': 421.39, 'alpha': 1.32e-6}
S = 100.
K = 100.
r = 0.
Mt = 0
sq_ht = sqrt(.15**2/252.)
sigma = sq_ht

P1 = tf.py_func(lambda z: quad(risk_neutral_pdf, z, inf, args=(1., S, K, r, sigma, Mt, p_dict))[0],
                [0.], tf.float64)

with tf.Session() as sess:
    res = sess.run(P1)
    print(res)
The result returns "InvalidArgumentError (see above for traceback): ValueError: Tensor("pow:0", shape=(), dtype=float32) must be from the same graph as Tensor("Cast_2:0", shape=(), dtype=complex64)." However, no matter how I change the code or follow the solution in "ValueError: Tensor A must be from the same graph as Tensor B", it does not work. I am wondering whether I placed tf.reset_default_graph() wrongly at the top, or whether the code needs other changes.
Thank you. (TensorFlow version: 1.6.0)
Update:
I found that the sigma variable is passed through sqrt before entering the risk_neutral_pdf function and then squared again on return, which is unnecessary. So I modified the return to return tf.real(S**a * (S/K)**phii * exp(A + B * sigma) / phii) and sq_ht to .15**2/252.. The error then changes to "TypeError: a float is required", which I think is caused by quad receiving a Tensor. Any ideas how to solve this?
Many thanks.
