Errors when trying to solve large system of PDEs - python-3.x

I have a system of 18 (I'm assuming linear) PDEs. However, they are very awkwardly shaped PDEs. Every time I try to use FiPy to solve the system of coupled PDEs I get an error. I have fixed most of them on my own, but I can't seem to get past this one.
First, here is the list of equations that is used later in the program:
import time
from fipy import Variable, FaceVariable, CellVariable, Grid1D, Grid2D, ExplicitDiffusionTerm, TransientTerm, DiffusionTerm, Viewer,PowerLawConvectionTerm, ImplicitSourceTerm
from fipy.tools import numerix
import math
import numpy as np
import sympy as sp
Temperature = 500.0 #Temperature in Celsius
Temp_K = Temperature + 273.15 #Temperature in Kelvin
Ea = [342, 7, 42, 45, 34] #Activation energies, in order from the list in the Excel/Word file
R = 8.314 #Gas constant (assumed value; units must be consistent with Ea)
#Frequency factor ko (initial k)
k_0 = [5.9 * (10**15), 1.3 * (10**13), 1.0 * (10**12), 5.0 * (10**11), 1.2 * (10**13)]
fk1 = [math.exp((-1.0 * x)/(R*Temp_K)) for x in Ea] #Determines the k value at the given temperature (exp(-Ea/(R*T)))
final_k = [fk1*k_0 for fk1, k_0 in zip(fk1, k_0)] #Multiplies by the initial k value to determine the k value at the given temperature (ko * exp(-Ea/(R*T)))
final_kcm = [x for x in final_k]
def rhs(eq):
    eq_list = [-1.0*final_kcm[0]*EDC - final_kcm[1]*EDC*R1 - final_kcm[2]*EDC*R2 - final_kcm[3]*EDC*R4 - final_kcm[4]*EDC*R5 - final_kcm[5]*EDC*R6,
               final_kcm[2]*EDC*R2 - final_kcm[8]*EC*R1 + final_kcm[13]*VCM*R2,
               final_kcm[1]*EDC*R1 + final_kcm[6]*R2*R1 + final_kcm[7]*R1*R3 + final_kcm[8]*R1*EC + final_kcm[9]*R1*C11 + final_kcm[10]*R1*C112 + final_kcm[12]*R1*VCM + 2.0*final_kcm[20]*R1*C2H2,
               2.0*final_kcm[20]*R2*C2H2,
               final_kcm[15]*R5*VCM]
    return eq_list[eq]
Here is the rest of the program I use to generate the system of PDEs.
EDC = CellVariable(mesh=mesh, hasOld=True, value=10)
EC = CellVariable(mesh=mesh, hasOld=True, value=0)
HCl = CellVariable(mesh=mesh, hasOld=True, value=0)
Coke = CellVariable(mesh=mesh, hasOld=True, value=0)
CP = CellVariable(mesh=mesh, hasOld=True, value=0)
EDC.constrain(10, mesh.facesLeft)
EC.constrain(0., mesh.facesLeft)
HCl.constrain(0., mesh.facesLeft)
Coke.constrain(0., mesh.facesLeft)
CP.constrain(0., mesh.facesLeft)
nsp = 18
u_x = [[ [0,]*nsp for n in range(nsp) ]]
for z in range(nsp):
    u_x[0][z][z] = 1.0
eq0 = TransientTerm(var = EDC) == PowerLawConvectionTerm(coeff = u_x, var = EDC) + ImplicitSourceTerm(rhs(0),var = EDC)
eq1 = TransientTerm(var = EC) == PowerLawConvectionTerm(coeff = u_x, var = EC) + ImplicitSourceTerm(rhs(1),var = (EC))
eq2 = TransientTerm(var = HCl) == PowerLawConvectionTerm(coeff = u_x, var = HCl) + ImplicitSourceTerm(rhs(2),var = (HCl))
eq3 = TransientTerm(var = Coke) == PowerLawConvectionTerm(coeff = u_x, var = Coke) + ImplicitSourceTerm(rhs(3),var = (Coke))
eq4 = TransientTerm(var = CP) == PowerLawConvectionTerm(coeff = u_x, var = CP) + ImplicitSourceTerm(rhs(4),var = (CP))
eqn = eq0 & eq1 & eq2 & eq3 & eq4
if __name__ == '__main__':
    viewer = Viewer(vars=(EDC, EC, HCl, Coke, CP))
    viewer.plot()
for t in range(1):
    EDC.updateOld()
    EC.updateOld()
    HCl.updateOld()
    Coke.updateOld()
    CP.updateOld()
    eqn.solve(dt=1.e-3)
Sorry for the long code, but I can't really show it any other way. Anyway, this is the error it returns:
File "C:\Users\tjcze\Anaconda3\lib\site-packages\fipy\matrices\scipyMatrix.py", line 218, in addAt
    assert(len(id1) == len(id2) == len(vector))
AssertionError
What should I do to get this system to work correctly?

The error is because of whatever you're trying to do with u_x. u_x should be a rank-1 FaceVariable.
Its shape should be #dims x #faces (or #dims x 1); it should not be (#eqns x #eqns).
Setting u_x = [1.] gets rid of the AssertionError.
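For a 1D mesh, a valid convection coefficient can be spelled either way (a sketch; mesh is the question's mesh):

from fipy import FaceVariable
u_x = FaceVariable(mesh=mesh, rank=1, value=(1.,)) # one component per mesh dimension
# or simply a 1-element sequence:
u_x = [1.]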
You will then get a series of warnings:
UserWarning: sweep() or solve() are likely to produce erroneous results when `var` does not contain floats.
Fix this by initializing all of your CellVariables with floats, e.g.,
EDC = CellVariable(mesh=mesh, hasOld=True, value=10.)
instead of
EDC = CellVariable(mesh=mesh, hasOld=True, value=10)
With those changes, the code runs. It doesn't do anything interesting, but that's hardly surprising, as it's way too complicated at this stage. 18 equations only obfuscates things. I strongly recommend you troubleshoot this problem with <= 2 equations.
At this point, your equations aren't coupled at all (eq0 only implicitly depends on EDC, eq1 only on EC, etc.). This isn't wrong, but it's not terribly useful. There's certainly no point in the eqn = eq0 & eq1 & ... syntax. EDC is the only variable with a chance of evolving, and it's constant, so it won't evolve, either.
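For comparison, a minimal coupled pair might look something like the sketch below (with a made-up rate constant k, not your chemistry): species A decays into B, so B's equation carries a source term in A, and that cross-variable term is what the & coupling actually exploits.

from fipy import CellVariable, Grid1D, TransientTerm, PowerLawConvectionTerm, ImplicitSourceTerm

mesh = Grid1D(nx=100, dx=1.)
A = CellVariable(mesh=mesh, hasOld=True, value=10.) # note the float initial values
B = CellVariable(mesh=mesh, hasOld=True, value=0.)
A.constrain(10., mesh.facesLeft)
B.constrain(0., mesh.facesLeft)

u_x = [1.] # rank-1 convection coefficient
k = 0.1    # made-up rate constant

eqA = TransientTerm(var=A) == PowerLawConvectionTerm(coeff=u_x, var=A) + ImplicitSourceTerm(coeff=-k, var=A)
eqB = TransientTerm(var=B) == PowerLawConvectionTerm(coeff=u_x, var=B) + ImplicitSourceTerm(coeff=k, var=A)
eqn = eqA & eqB

for t in range(10):
    A.updateOld()
    B.updateOld()
    eqn.solve(dt=1.e-3)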
In future, please provide examples that actually run (up to the point of the error, anyway).

Related

Optimizing asymmetrically reweighted penalized least squares smoothing (from matlab to python)

I'm trying to apply a method for baselining vibrational spectra, which was announced as an improvement over asymmetric and iteratively reweighted least-squares algorithms in a 2015 paper (doi:10.1039/c4an01061b), where the following MATLAB code was provided:
function z = baseline(y, lambda, ratio)
% Estimate baseline with arPLS in Matlab
N = length(y);
D = diff(speye(N), 2);
H = lambda*D'*D;
w = ones(N, 1);
while true
    W = spdiags(w, 0, N, N);
    % Cholesky decomposition
    C = chol(W + H);
    z = C \ (C' \ (w.*y));
    d = y - z;
    % make d-, and get w^t with m and s
    dn = d(d<0);
    m = mean(dn);
    s = std(dn);
    wt = 1./ (1 + exp( 2* (d-(2*s-m))/s ) );
    % check exit condition and backup
    if norm(w-wt)/norm(w) < ratio, break; end
    w = wt;
end
that I rewrote into python:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)
    while True:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
    return z
Except for the input vector y, the method requires the parameters lam and ratio. It runs OK for values lam < 1e+07 and ratio > 1e-01, but outputs poor results. When the values are changed outside this range, for example lam=1e+07, ratio=1e-02, the CPU starts heating up and the job never finishes (I interrupted it after 1 min). Also, in both cases the following warning shows up:
/usr/local/lib/python3.9/site-packages/scipy/sparse/linalg/dsolve/linsolve.py:144: SparseEfficiencyWarning: spsolve requires A to be CSC or CSR matrix format
  warn('spsolve requires A to be CSC or CSR format',
although I added the recommended format='csr' option to the spdiags call.
And here's some synthetic data (similar to that in the paper) for testing purposes. The noise was added along with a 3rd-degree polynomial baseline. The method works well for parameters bl_1 and fails to converge for bl_2:
import numpy
from matplotlib import pyplot
from scipy.sparse import spdiags, diags, identity
from scipy.sparse.linalg import spsolve
from numpy.linalg import cholesky, norm
import sys
x = numpy.arange(0, 1000)
noise = numpy.random.uniform(low=0, high = 10, size=len(x))
poly_3rd_degree = numpy.poly1d([1.2e-06, -1.23e-03, .36, -4.e-04])
poly_baseline = poly_3rd_degree(x)
y = 100 * numpy.exp(-((x-300)/15)**2)+\
200 * numpy.exp(-((x-750)/30)**2)+ \
100 * numpy.exp(-((x-800)/15)**2) + noise + poly_baseline
bl_1 = baseline_arPLS(y, 1e+07, 1e-01)
bl_2 = baseline_arPLS(y, 1e+07, 1e-02)
pyplot.figure(1)
pyplot.plot(x, y, 'C0')
pyplot.plot(x, poly_baseline, 'C1')
pyplot.plot(x, bl_1, 'k')
pyplot.show()
sys.exit(0)
All this is telling me that I'm doing something very non-optimal in my python implementation. Since I'm not knowledgeable enough about the intricacies of scipy computations, I'm kindly asking for suggestions on how to achieve convergence in these calculations.
(I encountered an issue running the "straight" MATLAB version of the code, because the line D = diff(speye(N), 2); truncates the last two rows of the matrix, creating a dimension mismatch later in the function. Following the description of matrix D's appearance, I substituted this line by directly creating a tridiagonal matrix using the diags function.)
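For reference, the (N-2) x N second-difference matrix that MATLAB's D = diff(speye(N), 2) produces can also be built directly in scipy (a sketch):

from scipy.sparse import diags
D_rect = diags([1., -2., 1.], [0, 1, 2], shape=(N-2, N)) # each row is [1, -2, 1]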
Guided by the comment @hpaulj made, and suspecting that the loop exit wasn't coded properly, I revisited the paper and found out that the authors actually implemented an exit condition that was not featured in their MATLAB script. Changing the while loop condition provides an exit for any set of parameters; my understanding is that the algorithm is not guaranteed to converge in all cases, which is why this condition is necessary but was omitted by error. Here's the edited version of my python code:
def baseline_arPLS(y, lam, ratio):
    # Estimate baseline with arPLS
    N = len(y)
    k = [numpy.ones(N), -2*numpy.ones(N-1), numpy.ones(N-2)]
    offset = [0, 1, 2]
    D = diags(k, offset).toarray()
    H = lam * numpy.matmul(D.T, D)
    w_ = numpy.ones(N)
    i = 0
    N_iterations = 100
    while i < N_iterations:
        W = spdiags(w_, 0, N, N, format='csr')
        # Cholesky decomposition
        C = cholesky(W + H)
        z_ = spsolve(C.T, w_ * y)
        z = spsolve(C, z_)
        d = y - z
        # make d- and get w^t with m and s
        dn = d[d<0]
        m = numpy.mean(dn)
        s = numpy.std(dn)
        wt = 1. / (1 + numpy.exp(2 * (d - (2*s-m)) / s))
        # check exit condition and backup
        norm_wt, norm_w = norm(w_-wt), norm(w_)
        if (norm_wt / norm_w) < ratio:
            break
        w_ = wt
        i += 1
    return z

I want my Python functions to accept numpy ndarrays

I'm writing a script to plot a 3D representation of some thermodynamic property, Z = f(pr, Tr). pr and Tr are created with numpy.arange() and then mapped with numpy.meshgrid() the following way:
Tr = arange(1.0, 2.6, 0.10)
pr = arange(0.5, 9.0, 0.25)
Xpr, YTr = meshgrid(pr, Tr)
Xpr and YTr are then passed to the function that calculates the aforementioned property:
Z = function(Xpr, YTr)
("function" is just a generic name that is later replaced by the actual function name).
The values stored in Z are finally plotted:
fig = plt.figure(1, figsize=(7, 6))
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(Xpr, YTr, Z, cmap=cm.jet, linewidth=0, antialiased=False)
Everything works fine when "function" is something quite straightforward like:
def zshell(pr_, Tr_):
    A = -0.101 - 0.36*Tr_ + 1.3868*sqrt(Tr_ - 0.919)
    B = 0.021 + 0.04275/(Tr_ - 0.65)
    E = 0.6222 - 0.224*Tr_
    F = 0.0657/(Tr_ - 0.85) - 0.037
    G = 0.32*exp(-19.53*(Tr_ - 1.0))
    D = 0.122*exp(-11.3*(Tr_ - 1.0))
    C = pr_*(E + F*pr_ + G*pr_**4)
    z = A + B*pr_ + (1.0 - A)*exp(-C) - D*(pr_/10.0)**4
    return z
But it fails when the function is something like this:
def zvdw(pr_, Tr_):
    A = 0.421875*pr_/Tr_**2 # 0.421875 = 27.0/64.0
    B = 0.125*pr_/Tr_ # 0.125 = 1.0/8.0
    z = 9.5e-01
    erro = 1.0
    while erro >= 1.0e-06:
        c2 = -(B + 1.0)
        c1 = A
        c0 = -A*B
        f = z**3 + c2*z**2 + c1*z + c0
        df = 3.0e0*z**2 + 2.0e0*c2*z + c1
        zf = z - f/df
        erro = abs((zf - z)/z)
        z = zf
    return z
I strongly suspect that the failure is caused by the iterative method inside zvdw(pr_, Tr_) (zvdw has been previously tested and works perfectly well when float arguments are passed to it). This is the error message I get:
Traceback (most recent call last):
  File "/home/fausto/.../TestMesh.py", line 81, in <module>
    Z = zvdw(Xpr, YTr)
  File "/home/fausto/.../TestMesh.py", line 63, in zvdw
    while erro >= 1.0e-06:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Yet the error message doesn't seem (to me) to be directly related to the while statement.
Any ideas?
Tl;dr
There is a numpy functionality for that. You just replace
def zvdw(pr_, Tr_):
by
@np.vectorize
def zvdw(pr_, Tr_):
And it works.
Get it faster
Unfortunately, the resulting picture looked ugly because your mesh is too sparse, so I reduced your step sizes in Tr and pr. Unfortunately, here we run into the limitation of np.vectorize. From the numpy documentation:
The vectorize function is provided primarily for convenience, not for
performance. The implementation is essentially a for loop.
I.e., it is painfully slow. Already at step sizes of 0.0010 and 0.0025 it took 17.5 seconds, so it is not realistic to go another factor of 10 smaller, since that would take ~100 times longer. Fortunately, your code is simple enough that I can use @numba.vectorize, which on my machine was a factor of ~23 faster.
Notice that this only works for some python code: numba compiles the python code to LLVM code so it can run fast, but it is unable to do that for arbitrary python code. See https://numba.pydata.org/numba-doc/dev/user/vectorize.html
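For this function, the numba variant only needs an explicit ufunc signature in place of np.vectorize (a sketch, assuming numba is installed; the body is unchanged):

import numba

@numba.vectorize(["float64(float64, float64)"]) # compile to a ufunc over float64 inputs
def zvdw(pr_, Tr_):
    A = 0.421875*pr_/Tr_**2
    B = 0.125*pr_/Tr_
    z = 9.5e-01
    erro = 1.0
    while erro >= 1.0e-06:
        c2 = -(B + 1.0)
        c1 = A
        c0 = -A*B
        f = z**3 + c2*z**2 + c1*z + c0
        df = 3.0e0*z**2 + 2.0e0*c2*z + c1
        zf = z - f/df
        erro = abs((zf - z)/z)
        z = zf
    return z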
And here is your picture
Code seems not to work with np.vectorize/numba.vectorize
That is very odd. Here is a literal copy-paste of the code that works on my machine:
import numpy as np
import matplotlib.pyplot as plt
Tr = np.arange(1.0, 2.6, 0.10)
pr = np.arange(0.5, 9.0, 0.25)
Xpr, YTr = np.meshgrid(pr, Tr)
def zshell(pr_, Tr_):
    A = -0.101 - 0.36*Tr_ + 1.3868*np.sqrt(Tr_ - 0.919)
    B = 0.021 + 0.04275/(Tr_ - 0.65)
    E = 0.6222 - 0.224*Tr_
    F = 0.0657/(Tr_ - 0.85) - 0.037
    G = 0.32*np.exp(-19.53*(Tr_ - 1.0))
    D = 0.122*np.exp(-11.3*(Tr_ - 1.0))
    C = pr_*(E + F*pr_ + G*pr_**4)
    z = A + B*pr_ + (1.0 - A)*np.exp(-C) - D*(pr_/10.0)**4
    return z

@np.vectorize
def zvdw(pr_, Tr_):
    A = 0.421875*pr_/Tr_**2 # 0.421875 = 27.0/64.0
    B = 0.125*pr_/Tr_ # 0.125 = 1.0/8.0
    z = 9.5e-01
    erro = 1.0
    while erro >= 1.0e-06:
        c2 = -(B + 1.0)
        c1 = A
        c0 = -A*B
        f = z**3 + c2*z**2 + c1*z + c0
        df = 3.0e0*z**2 + 2.0e0*c2*z + c1
        zf = z - f/df
        erro = abs((zf - z)/z)
        z = zf
    return z
Z = zvdw(Xpr, YTr)
fig = plt.figure(1, figsize=(7, 6))
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(Xpr, YTr, Z, cmap=plt.cm.jet, linewidth=0, antialiased=False)

Monte Carlo simulation of a system of polymer chains

I want to perform a Monte Carlo simulation of particles which interact via a Lennard-Jones potential plus a FENE potential. I'm getting negative values inside the log in the FENE potential, which raises "RuntimeWarning: invalid value encountered in log" at the line return (-0.5 * K * R**2 * np.log(1-((np.sqrt(rij2) - r0) / R)**2)). The FENE potential is given by:
import numpy as np

def gen_chain(N, R0):
    x = np.linspace(1, (N-1)*0.8*R0, num=N)
    y = np.zeros(N)
    z = np.zeros(N)
    return np.column_stack((x, y, z))

def lj(rij2):
    sig_by_r6 = np.power(sigma/rij2, 3)
    sig_by_r12 = np.power(sig_by_r6, 2)
    lje = 4.0 * epsilon * (sig_by_r12 - sig_by_r6)
    return lje

def fene(rij2):
    return (-0.5 * K * R**2 * np.log(1-((np.sqrt(rij2) - r0) / R)**2))

def total_energy(coord):
    # Non-bonded
    e_nb = 0
    for i in range(N):
        for j in range(i-1):
            ri = coord[i]
            rj = coord[j]
            rij = ri - rj
            rij2 = np.dot(rij, rij)
            if (np.sqrt(rij2) < rcutoff):
                e_nb += lj(rij2)
    # Bonded
    e_bond = 0
    for i in range(1, N):
        ri = coord[i]
        rj = coord[i-1]
        rij = ri - rj
        rij2 = np.dot(rij, rij)
        e_bond += fene(rij2)
    return e_nb + e_bond

def move(coord):
    trial = np.ndarray.copy(coord)
    for i in range(N):
        delta = (2.0 * np.random.rand(3) - 1) * max_delta
        trial[i] += delta
    return trial

def accept(delta_e):
    beta = 1.0/T
    if delta_e <= 0.0:
        return True
    random_number = np.random.rand(1)
    p_acc = np.exp(-beta*delta_e)
    if random_number < p_acc:
        return True
    return False

if __name__ == "__main__":
    # FENE parameters
    K = 40
    R = 0.3
    r0 = 0.7
    # LJ parameters
    sigma = r0/0.33
    epsilon = 1.0
    # MC parameters
    N = 50 # number of particles
    rcutoff = 2.5*sigma
    max_delta = 0.01
    n_steps = 10000000
    T = 0.5
    coord = gen_chain(N, R)
    energy_current = total_energy(coord)
    traj = open('traj.xyz', 'w')
    for step in range(n_steps):
        if step % 1000 == 0:
            traj.write(str(N) + '\n\n')
            for i in range(N):
                traj.write("C %10.5f %10.5f %10.5f\n" % (coord[i][0], coord[i][1], coord[i][2]))
            print(step, energy_current)
        coord_trial = move(coord)
        energy_trial = total_energy(coord_trial)
        delta_e = energy_trial - energy_current
        if accept(delta_e):
            coord = coord_trial
            energy_current = energy_trial
    traj.close()
The problem is that rij2 = np.dot(rij, rij), as calculated in total_energy with the constant values you use, is always a very small number. Looking at the expression inside the log used to calculate the FENE energy, np.log(1-((np.sqrt(rij2) - r0) / R)**2), I first noticed that you're taking the square root of rij2, which is not consistent with the formula you provided.
Secondly, notice that ((np.sqrt(rij2) - r0) / R)**2 is the same as ((r0 - np.sqrt(rij2)) / R)**2, since the sign gets lost when squaring. Because rij2 is very small (already in the first iteration -- I checked by printing the values), this will be more or less equal to ((r0 - 0.05)/R)**2, which is a number bigger than 1. Once you subtract this value from 1 in the log expression, the argument of the log becomes negative, and np.log of a negative number is np.nan (standing for "Not a Number"). This will propagate through all the function calls (for example, calling energy_trial = total_energy(coord_trial) will effectively set energy_trial to np.nan), until an error is raised by some function.
Maybe you could do something with an np.isnan() check, documented here. Moreover, you should check how you iterate through coord (there are some inconsistencies throughout the code); I suggest you ask on the Code Review community as well.
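One quick way to fail early instead of silently propagating np.nan is to guard the argument of the log (a sketch; fene_checked is a hypothetical replacement for the fene above):

def fene_checked(rij2):
    rij = np.sqrt(rij2)
    arg = 1.0 - ((rij - r0) / R)**2
    if arg <= 0.0:
        # the bond is stretched beyond the FENE limit; np.log(arg) would be nan
        raise ValueError("FENE bond overstretched: r = %.4f" % rij)
    return -0.5 * K * R**2 * np.log(arg)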

Simultaneous trigonometric equations in python

My task is to simulate a 3-DOF robotic arm system in python. The aim is to trace out a 3D Gaussian surface. The input data is the coordinates of the end effector, obtained from a 3D Gaussian equation. The code is supposed to solve a system of trigonometric equations to find the angles theta1, theta2 & theta3. I used solve([eqn1,eqn2,eqn3],theta1,theta2,theta3) to solve the system of equations, but Python takes too long to respond.
I tried solving with fsolve, but the results were not comparable to the solution from MATLAB. Btw, I got a very good simulation in MATLAB using solve.
I am using Python 3.7.3 through spyder under Anaconda 2019.03 environment in Windows 10.
Generating the data set (Gaussian surface), the P matrix:
import numpy as np

a = 8
x0 = 0
y0 = 0
sigmax = 3
sigmay = 3
P_x = np.arange(7,-8,-1)
P_y = np.arange(7,-8,-1)
xx , yy = np.meshgrid(P_x,P_y)
s_x,s_y = np.subtract(xx,x0),np.subtract(yy,y0)
sq_x,sq_y = np.square(s_x),np.square(s_y)
X,Y = np.divide(sq_x,2*(sigmax**2)),np.divide(sq_y,2*(sigmay**2))
E = np.exp(-X-Y)
zz = np.multiply(E,a)
xx,yy,zz = xx.flatten(),yy.flatten(),zz.flatten()
P = np.vstack([yy,xx,zz]).T
from sympy import *
from sympy import Symbol, solve, Eq

solution = np.zeros((len(P), 3)) # (assumed) preallocate the solution matrix; R1, R2, R3 are the link lengths, defined elsewhere
for j in range(len(P)):
    theta1 = Symbol('theta1', real=True)
    theta2 = Symbol('theta2', real=True)
    theta3 = Symbol('theta3', real=True)
    x, y, z = P[j,0], P[j,1], P[j,2]
    eq1 = Eq(R1*cos(theta1) + R2*cos(theta1)*cos(theta2) +
             R3*cos(theta1)*cos(theta3) - x)
    eq2 = Eq(R1*sin(theta1) + R2*cos(theta2)*sin(theta1) +
             R3*cos(theta3)*sin(theta1) - y)
    eq3 = Eq(R2*sin(theta2) + R3*sin(theta3) - z)
    solve([eq1, eq2, eq3], theta1, theta2, theta3)
    solution[j,0] = degrees(theta1)
    solution[j,1] = degrees(theta3)
    solution[j,2] = degrees(theta2)
Expected result: Solution matrix with values of theta1, theta2 & theta3.
Python takes too much time to respond.
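A numeric alternative within sympy itself is nsolve, which avoids the symbolic solve at the cost of needing an initial guess. A sketch with made-up link lengths R1, R2, R3 and one reachable target point:

from sympy import symbols, sin, cos, nsolve

theta1, theta2, theta3 = symbols('theta1 theta2 theta3', real=True)
R1, R2, R3 = 5.0, 4.0, 3.0 # made-up link lengths
x, y, z = 9.5, 5.2, 3.3    # one target point near the initial guess
eq1 = R1*cos(theta1) + R2*cos(theta1)*cos(theta2) + R3*cos(theta1)*cos(theta3) - x
eq2 = R1*sin(theta1) + R2*cos(theta2)*sin(theta1) + R3*cos(theta3)*sin(theta1) - y
eq3 = R2*sin(theta2) + R3*sin(theta3) - z
sol = nsolve((eq1, eq2, eq3), (theta1, theta2, theta3), (0.5, 0.5, 0.5))
print(sol)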

PyMC3- Custom theano Op to do numerical integration

I am using PyMC3 for parameter estimation with a particular likelihood function which has to be defined. I googled it and found out that I should use the DensityDist method for implementing user-defined likelihood functions, but it is not working. How do I incorporate a user-defined likelihood function in PyMC3 and find the maximum a posteriori (MAP) estimate for my model? My code is given below. Here L is the analytic form of my likelihood function. I have some observational data for the radial velocity (vr) and position (r) of some objects, which is imported from an Excel file.
import numpy as np
import pandas
import math as m
import random as rm
import decimal as dm
import theano
import theano.tensor as tt
import pymc3 as pm
from scipy.integrate import quad

data_ = np.array(pandas.read_excel('aaa.xlsx', header=None))
gamma = 3.77
G = 4.302*10**-6
rmin = 3.0
R = 95.7
vr = data_[:,1]
r = data_[:,0]
h = np.pi
class integrateOut(theano.Op):
    def __init__(self, f, t, t0, tf, *args, **kwargs):
        super(integrateOut, self).__init__()
        self.f = f
        self.t = t
        self.t0 = t0
        self.tf = tf
    def make_node(self, *inputs):
        self.fvars = list(inputs)
        try:
            self.gradF = tt.grad(self.f, self.fvars)
        except:
            self.gradF = None
        return theano.Apply(self, self.fvars, [tt.dscalar().type()])
    def perform(self, node, inputs, output_storage):
        args = tuple(inputs)
        f = theano.function([self.t] + self.fvars, self.f)
        output_storage[0][0] = quad(f, self.t0, self.tf, args=args)[0]
    def grad(self, inputs, grads):
        return [integrateOut(g, self.t, self.t0, self.tf)(*inputs)*grads[0] \
                for g in self.gradF]
basic_model = pm.Model()
with basic_model:
    M = []
    beta = []
    interval = 0.01*10**12
    M = pm.Uniform('M', lower=0.5*10**12, upper=3.50*10**12, transform='interval')
    beta = pm.Uniform('beta', lower=2.001, upper=2.999, transform='interval')
    gamma = 3.77
    logp = []
    arr = []
    vnew = []
    rnew = []
    theta = tt.scalar('theta')
    beta = tt.scalar('beta')
    z = tt.cos(theta)**(2*( (gamma/(beta - 2)) - 3/2) + 3)
    intZ = integrateOut(z, theta, -(np.pi)/2, (np.pi)/2)(beta)
    gradIntZ = tt.grad(intZ, [beta])
    funcIntZ = theano.function([beta], intZ)
    funcGradIntZ = theano.function([beta], gradIntZ)
    for j in np.arange(0, 59, 1):
        vnew.append(vr[j] + (0.05*vr[j]*float(dm.Decimal(rm.randrange(1, 20))/10)))
        rnew.append(r[j] + (0.05*r[j]*float(dm.Decimal(rm.randrange(1, 20))/10)))
    vn = np.array(vnew)
    rn = np.array(rnew)
    for beta in np.arange(2.01, 2.99, 0.01):
        for M in np.arange(0.5, 2.50, 0.01):
            i = np.arange(0, 59, 1)
            q = (gamma/(beta - 2)) - 3/2
            B = (G*M*10**12)/((beta - 2)*(R**(3 - beta)))
            K = (gamma - 3)/((rmin**(3 - gamma))*funcIntZ(beta)*m.sqrt(2*B))
            logp = -np.log(K*((1 - ((1/(2*B))*((vn[i]**2)*rn[i]**(beta - 2))))**(q+1))*(rn[i]**(1 - gamma + (beta/2))))
            arr.append(logp.sum())
    def logp_func(rn, vn):
        return min(np.array(arr))
    logpvar = pm.DensityDist("logpvar", logp_func, observed={"rn": rn, "vn": vn})
    start = pm.find_MAP(model=basic_model)
    step = pm.Metropolis()
    basicmodeltrace = pm.sample(10000, step=step, start=start, random_seed=1, progressbar=True)
    print(pm.summary(basicmodeltrace))
map_estimate = pm.find_MAP(model=basic_model)
print(map_estimate)
I am getting the following error message:
ValueError: Cannot compute test value: input 0 (theta) of Op Elemwise{cos,no_inplace}(theta) missing default value.
Backtrace when that variable is created:
I am unable to get the output since the numerical integration is not working. I have used a custom theano Op for the numerical integration, which I got from Custom Theano Op to do numerical integration. The integration works if I run it separately, inputting a particular value of beta, but not within the model.
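For reference, the standalone check described here amounts to something like this sketch (compiling the Op outside any PyMC3 model and evaluating the integral at one value of beta):

theta = tt.scalar('theta')
beta = tt.scalar('beta')
z = tt.cos(theta)**(2*((gamma/(beta - 2)) - 3/2) + 3)
intZ = integrateOut(z, theta, -np.pi/2, np.pi/2)(beta)
funcIntZ = theano.function([beta], intZ)
print(funcIntZ(2.5)) # evaluates the integral for beta = 2.5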
I made a few changes to your code; this still does not work, but I hope it is closer to a solution. Please check this thread, as someone is trying to solve essentially the same problem.
class integrateOut(theano.Op):
    def __init__(self, f, t, t0, tf, *args, **kwargs):
        super(integrateOut, self).__init__()
        self.f = f
        self.t = t
        self.t0 = t0
        self.tf = tf
    def make_node(self, *inputs):
        self.fvars = list(inputs)
        try:
            self.gradF = tt.grad(self.f, self.fvars)
        except:
            self.gradF = None
        return theano.Apply(self, self.fvars, [tt.dscalar().type()])
    def perform(self, node, inputs, output_storage):
        args = tuple(inputs)
        f = theano.function([self.t] + self.fvars, self.f)
        output_storage[0][0] = quad(f, self.t0, self.tf, args=args)[0]
    def grad(self, inputs, grads):
        return [integrateOut(g, self.t, self.t0, self.tf)(*inputs)*grads[0] \
                for g in self.gradF]
gamma = 3.77
G = 4.302E-6
rmin = 3.0
R = 95.7
vr = data[:,1]
r = data[:,0]
h = np.pi
interval = 1E10
vnew = []
rnew = []
for j in np.arange(0, 59, 1):
    vnew.append(vr[j] + (0.05*vr[j] * float(dm.Decimal(rm.randrange(1, 20))/10)))
    rnew.append(r[j] + (0.05*r[j] * float(dm.Decimal(rm.randrange(1, 20))/10)))
vn = np.array(vnew)
rn = np.array(rnew)

def integ(gamma, beta, theta):
    z = tt.cos(theta)**(2*((gamma/(beta - 2)) - 3/2) + 3)
    return integrateOut(z, theta, -(np.pi)/2, (np.pi)/2)(beta)

with pm.Model() as basic_model:
    M = pm.Uniform('M', lower=0.5*10**12, upper=3.50*10**12)
    beta = pm.Uniform('beta', lower=2.001, upper=2.999)
    theta = pm.Normal('theta', 0, 10**2)

    def logp_func(rn, vn):
        q = (gamma/(beta - 2)) - 3/2
        B = (G*M*1E12) / ((beta - 2)*(R**(3 - beta)))
        K = (gamma - 3) / ((rmin**(3 - gamma)) * integ(gamma, beta, theta) * (2*B)**0.5)
        logp = -np.log(K*((1 - ((1/(2*B))*((vn**2)*rn**(beta - 2))))**(q+1))*(rn**(1 - gamma + (beta/2))))
        return logp.sum()

    logpvar = pm.DensityDist("logpvar", logp_func, observed={"rn": rn, "vn": vn})
    start = pm.find_MAP()
    #basicmodeltrace = pm.sample()
    print(start)
