I have this monstrous function
$f(x,p,t) = \int_{-\infty}^{\infty} \operatorname{sech}^{2}(x + y/2)\,\operatorname{sech}^{2}(x - y/2)\left[2\sinh(x + y/2)\sinh(x - y/2) + \sqrt{2}\,\sinh(x - y/2)e^{i3t/2} + \sqrt{2}\,\sinh(x + y/2)e^{-i3t/2} + 1\right]e^{-ipy}\,dy$
This is essentially a Fourier transform, but with a shift involved. In any case, I know scipy.integrate can handle this integral. But my goal is to plug tensors into this function W so that I can use the autograd module to compute partial derivatives. Is there some way in PyTorch I can approximate this integral? I can write out a Simpson's rule formula, but I'm wondering if there is a better approximation available in PyTorch before I write my own substandard approximation.
Thank you very much for your help.
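A minimal sketch of one way to do this in PyTorch (not a drop-in for scipy.integrate): truncate the infinite range to a finite window and apply the trapezoidal rule with torch.trapz, so autograd can differentiate through the quadrature nodes. The window L = 20, the grid size N = 2001, and the sample values of x, p, t below are arbitrary choices, and a reasonably recent PyTorch with complex-tensor support is assumed.

import torch

def f(x, p, t, L=20.0, N=2001):
    # Truncate the integral to [-L, L]; sech^2 decays fast, so a modest window suffices.
    y = torch.linspace(-L, L, N, dtype=torch.float64)
    sech2 = lambda z: 1.0 / torch.cosh(z) ** 2
    a, b = x + y / 2, x - y / 2
    root2 = 2.0 ** 0.5
    bracket = (2 * torch.sinh(a) * torch.sinh(b)
               + root2 * torch.sinh(b) * torch.exp(1j * 1.5 * t)
               + root2 * torch.sinh(a) * torch.exp(-1j * 1.5 * t)
               + 1)
    integrand = sech2(a) * sech2(b) * bracket * torch.exp(-1j * p * y)
    return torch.trapz(integrand, y)   # trapezoidal rule, differentiable

x = torch.tensor(0.3, dtype=torch.float64, requires_grad=True)
p = torch.tensor(1.0, dtype=torch.float64, requires_grad=True)
t = torch.tensor(0.5, dtype=torch.float64)
val = f(x, p, t)
val.real.backward()        # gradients of the real part with respect to x and p
print(val.item(), x.grad.item(), p.grad.item())

A Simpson's rule version can be built the same way if higher accuracy per node is needed; the key point is that the quadrature is just a weighted sum of tensor operations, so it stays differentiable.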
I have three tensors with shapes a = (B,12,512,512), b = (B,12,512,512), c = (B,2,512,512).
I want to compute a * c[:,0,:,:] + b * c[:,1,:,:].
As far as I know, gradient calculation does not support indexing operations, so I need to implement this computation without using indexing. How can I implement it in a vectorized way with PyTorch?
Thanks.
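As a side note, indexing such as c[:, 0, :, :] is differentiable in PyTorch; but if you want to avoid explicit indexing, here is a minimal sketch (B = 2 and the random tensors are placeholders for the real data) that unbinds c along dim 1 and broadcasts over the 12 channels:

import torch

B = 2
a = torch.randn(B, 12, 512, 512, requires_grad=True)
b = torch.randn(B, 12, 512, 512, requires_grad=True)
c = torch.randn(B, 2, 512, 512, requires_grad=True)

c0, c1 = c.unbind(dim=1)                         # each (B, 512, 512)
out = a * c0.unsqueeze(1) + b * c1.unsqueeze(1)  # broadcasts to (B, 12, 512, 512)

out.sum().backward()                             # gradients reach a, b and c
print(out.shape, c.grad.shape)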
I want to minimize a function, all of whose elements are tensors.
f=alpha + (vnorm/2) #Equation to minimize
where vnorm = norm(v)*norm(v),
v is an n×1 tensor, and alpha is a 1×1 tensor.
Now I need to minimize f subject to a constraint, that is:
(A @ v) + alpha <= 0 #Constraint involved in the minimization
where A is a 2×n tensor.
How should I formulate the above objective and the constraint to minimize f in PyTorch? I was able to do this with scipy, but I want to do it in PyTorch so that I can make the minimization process faster by taking advantage of tensors.
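One possibility, sketched below, is to fold the constraint into the objective as a quadratic penalty and run a standard PyTorch optimizer; this is only an illustration of the mechanics, not a replacement for scipy's constrained solvers, and n, the random A, the penalty weight rho, the learning rate, and the iteration count are all made-up values.

import torch

n = 5
A = torch.randn(2, n)                              # placeholder for the real 2×n tensor

v = torch.randn(n, 1, requires_grad=True)
alpha = torch.zeros(1, 1, requires_grad=True)
opt = torch.optim.Adam([v, alpha], lr=0.01)
rho = 10.0                                         # penalty weight for constraint violation

for step in range(2000):
    opt.zero_grad()
    f = (alpha + 0.5 * v.norm() ** 2).squeeze()    # objective: alpha + ||v||^2 / 2
    violation = torch.clamp(A @ v + alpha, min=0)  # positive part of (A v + alpha)
    loss = f + rho * (violation ** 2).sum()        # penalized objective
    loss.backward()
    opt.step()

print(f.item(), (A @ v + alpha).flatten())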
I have a model, say
y[i] ~ dnorm(mu[i], sigma^2)
mu[i] <- x[i,1]*theta1 + x[i,2]*theta2 + b0
I would like to put a multivariate prior on theta1 and theta2, say
c(theta1,theta2) ~ dmnorm(Mean, Sigma), where Mean could be the vector (0,0) and Sigma is a covariance matrix. But JAGS does not allow me to do this... Does anyone know how I can give a multivariate prior to theta1 and theta2?
Thanks!
You would need to have a multivariate node. For example,
theta[1:2] ~ dmnorm(Mean[1:2], Sigma[1:2,1:2])
mu[i] <- x[i,1]*theta[1] + x[i,2]*theta[2] + b0
I have a multivariate quadratic sum
x0 + x1*y1 + x0*y0 + y1
The variables in this case are
{x0,x1,y0,y1,x0*y0,x1*y1,x0*y1,x1*y0}
So, giving the previous quadratic sum as input, how can I get back the coefficient vector [1,0,0,1,1,1,0,0] with respect to the previous basis?
I'm working in SageMath.
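A minimal sketch of one way to do this inside Sage (where PolynomialRing and QQ are available); monomial_coefficient reads off the coefficient of each basis monomial in turn:

# Build a polynomial ring containing the expression, then read off the
# coefficient of each basis monomial in order.
R = PolynomialRing(QQ, ['x0', 'x1', 'y0', 'y1'])
x0, x1, y0, y1 = R.gens()

expr = x0 + x1*y1 + x0*y0 + y1
basis = [x0, x1, y0, y1, x0*y0, x1*y1, x0*y1, x1*y0]

coeffs = [expr.monomial_coefficient(m) for m in basis]
print(coeffs)   # [1, 0, 0, 1, 1, 1, 0, 0]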
I need an MLlib expert to help explain the linear regression code. In LeastSquaresGradient.compute:
override def compute(
data: Vector,
label: Double,
weights: Vector,
cumGradient: Vector): Double = {
val diff = dot(data, weights) - label
axpy(diff, data, cumGradient)
diff * diff / 2.0
}
cumGradient is computed using axpy, which is simply y += a * x, or here
cumGradient += diff * data
I thought about it for a long time but cannot make the connection to the gradient calculation as defined in the gradient descent documentation. In theory the gradient is the slope of the loss with respect to a change in one particular weight parameter. I don't see anything in this axpy implementation that remotely resembles that.
Can someone shed some light?
It is not really a programming question, but to give you some idea of what is going on: the cost function for least-squares regression is defined as
$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left(\theta^{T} x^{(i)} - y^{(i)}\right)^{2}$
where theta is the weights vector.
The partial derivatives of the above cost function are:
$\frac{\partial J(\theta)}{\partial \theta_{j}} = \sum_{i=1}^{m} \left(\theta^{T} x^{(i)} - y^{(i)}\right) x_{j}^{(i)}$
and, if computed over all theta:
$\nabla_{\theta} J(\theta) = \sum_{i=1}^{m} \left(\theta^{T} x^{(i)} - y^{(i)}\right) x^{(i)}$
It should be obvious that the above is equivalent to cumGradient += diff * data computed for all data points, and, to quote Wikipedia,
in a rectangular coordinate system, the gradient is the vector field whose components are the partial derivatives of f
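As a quick numerical sanity check (synthetic NumPy data, not the actual MLlib code) that accumulating diff * data over all points really does give the gradient of the summed squared error, compare it against finite differences:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 data points, 3 features
y = rng.normal(size=100)
w = rng.normal(size=3)                 # the weights ("theta")

# "cumGradient += diff * data" applied to every data point
cum_gradient = np.zeros(3)
for x_i, y_i in zip(X, y):
    diff = x_i @ w - y_i
    cum_gradient += diff * x_i         # the axpy step

# central finite differences of J(w) = 0.5 * sum((X w - y)^2) for comparison
def J(w):
    return 0.5 * np.sum((X @ w - y) ** 2)

eps = 1e-6
numeric = np.array([(J(w + eps * e) - J(w - eps * e)) / (2 * eps) for e in np.eye(3)])
print(np.allclose(cum_gradient, numeric))   # True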