I have a model, say
y[i] ~ dnorm(mu[i], sigma^2)
mu[i] <- x[i,1]*theta1 + x[i,2]*theta2 + b0
I would like to put a multivariate prior on theta1 and theta2, say
c(theta1, theta2) ~ dmnorm(Mean, Sigma), where Mean could be the vector (0,0) and Sigma is a covariance matrix. But JAGS does not let me do this... Does anyone know how I can give a multivariate prior to theta1 and theta2?
Thanks!
You would need to use a multivariate node. For example,
theta[1:2] ~ dmnorm(Mean[1:2], Sigma[1:2,1:2])
mu[i] <- x[i,1]*theta[1] + x[i,2]*theta[2] + b0
Note that dmnorm in JAGS is parameterized by a precision matrix, so Sigma here should be the inverse of the covariance matrix you have in mind.
I want to minimize a function whose elements are all tensors.
f = alpha + (vnorm / 2)   # objective to minimize
where vnorm = norm(v) * norm(v),
v is an n*1 tensor and alpha is a 1*1 tensor.
Now I need to minimize f subject to a constraint, namely
(A @ v) + alpha <= 0   # constraint involved in the minimization
where A is a 2*n tensor.
How should I formulate the objective and the constraint so I can minimize f in PyTorch? I was successful in doing this with scipy, but I want to do it in PyTorch so that I can make the minimization faster by taking advantage of tensors.
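PyTorch itself has no built-in constrained optimizer, but one common workaround is to fold the constraint into the objective as a penalty term and run a standard optimizer. Below is a minimal sketch of that idea; the shapes, random data, penalty weight and learning rate are illustrative assumptions, not values from the question, and the penalty only approximates the hard constraint.

import torch

n = 5
A = torch.randn(2, n)                       # assumed example constraint matrix
v = torch.randn(n, 1, requires_grad=True)   # decision variable
alpha = torch.randn(1, 1, requires_grad=True)

penalty_weight = 100.0                      # assumed penalty strength
opt = torch.optim.Adam([v, alpha], lr=0.01)

for step in range(2000):
    opt.zero_grad()
    f = alpha + 0.5 * v.norm() ** 2                  # alpha + ||v||^2 / 2
    violation = torch.clamp(A @ v + alpha, min=0)    # positive part of A @ v + alpha
    loss = (f + penalty_weight * violation.sum()).sum()
    loss.backward()
    opt.step()

If the constraint has to hold exactly, you would need to grow penalty_weight over the iterations or wrap this loop in an augmented-Lagrangian scheme.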
I have this monstrous function
$f(x,p,t) = \int_{-\infty}^{\infty} \operatorname{sech}^{2}(x + y/2)\,\operatorname{sech}^{2}(x - y/2) \times \left[ 2\sinh(x + y/2)\sinh(x - y/2) + \sqrt{2}\,\sinh(x - y/2)e^{i3t/2} + \sqrt{2}\,\sinh(x + y/2)e^{-i3t/2} + 1 \right] e^{-ipy}\,dy$
This is essentially a Fourier transform, but there is a shift involved. In any case, I know scipy.integrate can handle this integral. But my goal is to plug tensors into this function W so that I can use the autograd module to compute partial derivatives. Is there some way in PyTorch to approximate this integral? I can write out a Simpson's rule formula, but I am wondering if there is a better approximation available in PyTorch before I write my own substandard one.
Thank you very much for your help.
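PyTorch does not ship a general quadrature routine, but because the integrand decays like sech^2, you can truncate the domain, evaluate it on a fixed grid, and apply the trapezoidal rule; every step is an ordinary tensor op, so autograd differentiates through it. A sketch under those assumptions (the truncation limit y_max, the grid size, and the sample values of x, p, t are all illustrative, and it assumes a PyTorch version with complex-tensor autograd):

import torch

def W(x, p, t, y_max=20.0, n_pts=4001):
    # Truncate the infinite integral to [-y_max, y_max] and use the
    # trapezoidal rule on a uniform grid.
    y = torch.linspace(-y_max, y_max, n_pts)
    a, b = x + y / 2, x - y / 2
    sech2 = lambda z: 1.0 / torch.cosh(z) ** 2
    sqrt2 = 2.0 ** 0.5
    bracket = (2 * torch.sinh(a) * torch.sinh(b)
               + sqrt2 * torch.sinh(b) * torch.exp(1j * 3 * t / 2)
               + sqrt2 * torch.sinh(a) * torch.exp(-1j * 3 * t / 2)
               + 1)
    integrand = sech2(a) * sech2(b) * bracket * torch.exp(-1j * p * y)
    dy = y[1] - y[0]
    return (integrand[:-1] + integrand[1:]).sum() * dy / 2   # trapezoidal sum

x = torch.tensor(0.3, requires_grad=True)
p = torch.tensor(1.0, requires_grad=True)
t = torch.tensor(0.5)
out = W(x, p, t)
out.real.backward()       # partial derivatives of Re(W) w.r.t. x and p
print(x.grad, p.grad)

torch.trapz (renamed torch.trapezoid in newer releases) provides the same rule as a library call if it supports your dtype, and a Simpson-weighted sum on the same grid would improve accuracy without changing how autograd flows.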
I trained a simple quadratic SVM using sklearn.svm.SVC on 3 features. In other words, X is n×3, Y has length n, and I simply ran the following code with no problem:
svc = SVC(kernel='poly', degree = 2)
svc.fit(X,Y)
As my goal is to plot this boundary in 3D, I am trying to figure out which features each of the resulting coefficients corresponds to. Naturally, a quadratic function of 3 features will result in an intercept term and 10 coefficients, where each coefficient corresponds to:
x1^2, x2^2, x3^2, x1x2, x1x3, x2x3, x1x2x3, x1, x2, x3
However, svc.dual_coef_ returns an array of 10 coefficients, and I do not know which of them corresponds to which of the 10 terms. Is there a way to figure this out?
Thanks!
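One point worth noting: dual_coef_ holds one dual coefficient per support vector, not one coefficient per polynomial term, so it cannot be mapped onto the monomials directly. If explicit per-term coefficients are what you need, a workable substitute is to expand the features yourself with PolynomialFeatures and fit a linear-kernel SVM on the expanded data. A sketch of that idea follows (the random data is made up, and the result matches the poly kernel only when the kernel's gamma/coef0 settings correspond to the plain expansion):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # n x 3, as in the question
Y = (X[:, 0] ** 2 + X[:, 1] * X[:, 2] > 0).astype(int)

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)                   # x1, x2, x3, x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2

svc = SVC(kernel='linear')                       # linear kernel on the expanded features
svc.fit(X_poly, Y)

# Each entry of coef_ now lines up with a named polynomial term.
for name, w in zip(poly.get_feature_names_out(['x1', 'x2', 'x3']), svc.coef_[0]):
    print(name, w)
print('intercept:', svc.intercept_[0])

(On older scikit-learn releases the method is get_feature_names rather than get_feature_names_out.)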
Given the following problem: compute the signed angle between two 2D vectors P0 and P1.
I have 2 solutions:
The first is to calculate the difference of the absolute angles and then renormalize the result. Bad idea: two atan2() calls are slow, and the renormalization is inefficient.
angle = clamp_to_range( atan2(P1.y, P1.x) - atan2(P0.y, P0.x));
The second is to calculate the dot product, normalize, and take arccos(). Also a bad idea, because the sign of the angle will be wrong (acos only returns values in [0, π]).
angle = acos( dot(P0, P1) / sqrt( dot(P0,P0) * dot(P1, P1) ) );
I feel that there should be a simpler formula. How can I solve this problem efficiently?
It is possible to use only one atan2() call by combining the cross product and the dot product of the vectors:
angle = atan2(Cross(P0, P1), Dot(P0, P1));
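In 2D the "cross product" reduces to the scalar P0.x*P1.y - P0.y*P1.x, so the whole computation is one atan2 plus a few multiplies and adds. A small Python sketch of the same formula (the function name and tuple-based vectors are just for illustration):

import math

def signed_angle(p0, p1):
    # Signed angle from p0 to p1 using a single atan2 call.
    cross = p0[0] * p1[1] - p0[1] * p1[0]   # z-component of the 3D cross product
    dot = p0[0] * p1[0] + p0[1] * p1[1]
    return math.atan2(cross, dot)

print(signed_angle((1.0, 0.0), (0.0, 1.0)))   # pi/2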
Do you really need the angle in radians / degrees, instead of as a unit vector or rotation matrix?
An xy unit vector can represent angle instead of absolute direction; the angle is the angle between the vertical (or horizontal) axis and the unit vector. Trig functions are very slow compared to simple multiply / add / subtract, and still slow compared to div / sqrt, so representing angles as vectors is usually a good thing.
You can calculate its components using Cross(P0, P1) and Dot(P0, P1), but then normalize them into an xy unit vector instead of calling atan2 on them.
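A short Python sketch of that idea (the helper name is illustrative): since Dot(P0, P1) = |P0||P1|cos(angle) and Cross(P0, P1) = |P0||P1|sin(angle), normalizing the pair (dot, cross) yields (cos(angle), sin(angle)) with one sqrt and no trig calls.

import math

def rotation_as_unit_vector(p0, p1):
    # Represent the rotation from p0 to p1 as (cos(angle), sin(angle)).
    cross = p0[0] * p1[1] - p0[1] * p1[0]
    dot = p0[0] * p1[0] + p0[1] * p1[1]
    inv_len = 1.0 / math.sqrt(dot * dot + cross * cross)
    return (dot * inv_len, cross * inv_len)

In a real hot loop you would do this with SIMD (see below) rather than scalar Python, but the arithmetic is the same.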
See also Rotate Object Towards Direction in 2D on gamedev.SE, and Is it better to track rotation with a vector or a float?
This is easy to vectorize with SIMD, much more so than a SIMD atan2. rsqrtps exists mostly to speed up x *= 1.0 / sqrt(foo) (reusing the same multiplier for a SIMD vector of y values) for normalization. But rsqrtps has very low accuracy, so you often need a Newton-Raphson iteration to refine it. The most recent CPUs (e.g. Skylake) have good FP sqrt / div throughput, so you could just normalize the naive way with _mm_sqrt_ps and leave optimization for later. See Fast vectorized rsqrt and reciprocal with SSE/AVX depending on precision.
I need an MLlib expert to help explain the linear regression code in LeastSquaresGradient.compute:
override def compute(
    data: Vector,
    label: Double,
    weights: Vector,
    cumGradient: Vector): Double = {
  val diff = dot(data, weights) - label
  axpy(diff, data, cumGradient)
  diff * diff / 2.0
}
cumGradient is computed using axpy, which is simply y += a * x, or here
cumGradient += diff * data
I thought about it for a long time, but I cannot make the connection to the gradient calculation as defined in the gradient descent documentation. In theory the gradient is the slope of the loss with respect to a change in one particular weight parameter. I don't see anything in this axpy implementation that remotely resembles that.
Can someone shed some light?
It is not really a programming question, but to give you some idea of what is going on: the cost function for least squares regression is defined as

$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( \theta^{T} x^{(i)} - y^{(i)} \right)^{2}$

where $\theta$ is the weights vector.

The partial derivatives of the above cost function are:

$\frac{\partial J(\theta)}{\partial \theta_j} = \sum_{i=1}^{m} \left( \theta^{T} x^{(i)} - y^{(i)} \right) x_j^{(i)}$

and if computed over all $\theta$:

$\nabla_\theta J(\theta) = \sum_{i=1}^{m} \left( \theta^{T} x^{(i)} - y^{(i)} \right) x^{(i)}$
It should be obvious that the above is equivalent to cumGradient += diff * data computed over all data points, and to quote Wikipedia:
in a rectangular coordinate system, the gradient is the vector field whose components are the partial derivatives of f
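To make the equivalence concrete, here is a small NumPy sketch (illustrative only, not Spark code, with made-up data) that mirrors the per-point accumulation in compute and checks it against the closed-form gradient $X^{T}(X\theta - y)$:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # data points as rows
y = rng.normal(size=100)             # labels
theta = rng.normal(size=3)           # weights

# Per-point accumulation, mirroring LeastSquaresGradient.compute:
cum_gradient = np.zeros(3)
total_loss = 0.0
for x_i, y_i in zip(X, y):
    diff = x_i @ theta - y_i         # dot(data, weights) - label
    cum_gradient += diff * x_i       # axpy(diff, data, cumGradient)
    total_loss += diff * diff / 2.0

# Closed-form gradient of 0.5 * ||X @ theta - y||^2:
print(np.allclose(cum_gradient, X.T @ (X @ theta - y)))   # True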