I need to render a 2D Gaussian and still be able to differentiate with respect to its 2D mean, which is of type float. The standard deviation of the Gaussian can be constant, and so can the size of the generated matrix.
Any idea how to do this in tensorflow?
CLARIFICATION:
I need a function draw2dGaussian(mean2d) which returns a 2D matrix M. The matrix M shows a discretized 2D Gaussian centered at the location mean2d, where mean2d is a pair of floats. M is 0 at points far enough from mean2d.
The requirement of this function draw2dGaussian is that it has to be differentiable with respect to mean2d.
I think openDR http://files.is.tue.mpg.de/black/papers/OpenDR.pdf might be able to offer such a function, but I was wondering if somebody had a simpler solution.
You are looking for the reparametrization trick. A sample from a one-dimensional Gaussian N(mean, var) can be written as mean + sqrt(var) * N(0, 1). A similar construction applies to 2D Gaussians, but with a factor of the covariance matrix instead of a scalar variance.
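For concreteness, here is a minimal TensorFlow 2.x sketch of the trick for the 2D case, assuming a constant covariance specified by its Cholesky factor (the variable names are mine, not from the question):

import tensorflow as tf

mean = tf.Variable([1.0, 2.0])                   # the 2d mean we differentiate w.r.t.
chol = tf.constant([[1.0, 0.0],
                    [0.5, 1.0]])                 # Cholesky factor L of the covariance L*L^T
with tf.GradientTape() as tape:
    eps = tf.random.normal([2])                  # draw from N(0, I); no gradient flows through this
    sample = mean + tf.linalg.matvec(chol, eps)  # reparametrized sample from N(mean, L*L^T)
    loss = tf.reduce_sum(sample ** 2)            # any downstream loss
grad = tape.gradient(loss, mean)                 # well-defined gradient w.r.t. mean

Because the randomness is confined to eps, the sample is a deterministic, differentiable function of mean.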
Given a vector such as
x = torch.rand(10000)
What is the most efficient way to compute the symmetric matrix x * x^T (the outer product of x with itself) using PyTorch? I can only think of doing matrix multiplications like
torch.matmul(x.view(-1,1), x.view(1,-1))
or
torch.matmul(x.unsqueeze(1),x.unsqueeze(0))
but I wonder if there are more efficient ways which make use of the symmetry of the resultant matrix to calculate it.
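For reference, a short sketch of two equivalent ways to write the same outer product (torch.outer is the dedicated routine in recent PyTorch releases); I am not claiming anything here about relative speed:

import torch

x = torch.rand(10000)
M1 = torch.outer(x, x)                  # dedicated outer-product routine
M2 = x.unsqueeze(1) * x.unsqueeze(0)    # broadcasting gives the same 10000 x 10000 matrix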
I am trying to implement polynomial regression using the least squares method. There is a problem when plotting the third graph: it is not displayed.
I think the issue is in my implementation of the formula y = a*x + b.
In my case, I first generated noisy experimental data and fitted it using the built-in functions polyfit and polyval.
x = 0:0.1:5;
y = 3*x + 2;
y1 = y + randn(size(y));    % noisy samples of the line
k = 1;                      % polynomial degree
X1 = 0:0.01:10;
B = polyfit(x, y1, k);      % least-squares fit of degree k
Y1 = polyval(B, X1);        % evaluate the fit on the denser grid X1
After that, I use a linear model to solve the polynomial regression by the method of least squares.
Y2 = Y1'*x + B';    % <-- this is the problem formula
subplot(3,2,3);
plot(x,Y1,'-b',X1,y1,'LineWidth');
title('y1=ax+b');
xlabel('x');
ylabel('y');
grid on;
As a result, no graph is drawn.
Check the sizes of the vectors: x and Y1 are not the same length, and the same goes for X1 and y1. Also, the 'LineWidth' property must be followed by a value.
You probably want to plot as:
plot(x, y1, '-b', X1, Y1, 'LineWidth', 1);
I have a series of corresponding points in two 3D coordinate systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between the two systems. However, for my project I have to "disable" the shear in the transform. Is there a way to decompose the matrix into the basic transformations? I have found out how to do so for translation and scaling, but I don't know how to separate rotation and shear.
If not, is there a way to calculate a transformation matrix from the points that doesn't include shear?
I can only use numpy or tensorflow to solve this problem btw.
I'm not sure I understand what you're asking.
Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of squared distances between (R*P + T) and Q is minimized.
You can of course combine R and T into a 4x4 matrix (of rotation and translation only, without shear or scale).
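A minimal NumPy sketch of the Kabsch algorithm, assuming P and Q are N x 3 arrays of corresponding points (the function and variable names are illustrative):

import numpy as np

def kabsch(P, Q):
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance of the centered points
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # detect and undo a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation minimizing sum ||R*p + T - q||^2
    T = q_mean - R @ p_mean
    return R, T

The 4x4 matrix is then R in the upper-left 3x3 block, T in the last column, and (0, 0, 0, 1) as the bottom row.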
I want to check whether the linear system derived from a radiosity problem is convergent.
Is there any book/paper giving a proof of the convergence of the radiosity problem?
Thanks.
I assume you're solving (I - rho*F) B = E (based on the Wikipedia article).
Gauss-Seidel and Jacobi iteration methods are both guaranteed to converge if the matrix is diagonally dominant (Gauss-Seidel is also guaranteed to converge if the matrix is symmetric and positive definite).
The rows of the F matrix (view factors) sum to 1, so if rho (reflectivity) is < 1, which physically it should be, the matrix will be diagonally dominant: each diagonal entry is 1 - rho*F[i,i], while the off-diagonal entries in that row sum to rho*(1 - F[i,i]), which is strictly smaller whenever rho < 1.
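As a sanity check, here is a small NumPy sketch of Jacobi iteration on (I - rho*F) B = E with made-up illustrative data (F, rho and E are not from the question):

import numpy as np

n = 4
F = np.random.rand(n, n)
F /= F.sum(axis=1, keepdims=True)    # rows of the view-factor matrix sum to 1
rho, E = 0.8, np.ones(n)             # reflectivity < 1 and an arbitrary emission term
A = np.eye(n) - rho * F              # diagonally dominant system matrix
D = np.diag(A)                       # its diagonal, as a vector

B = np.zeros(n)
for _ in range(200):                 # Jacobi: B <- D^-1 * (E - offdiag(A) * B)
    B = (E - (A - np.diag(D)) @ B) / D

print(np.allclose(A @ B, E))         # True once the iteration has converged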
Given a general planar 3D polygon, is there a general way to find an orthonormal basis for its plane?
The most straightforward approach is to take the first 3 points of the polygon, form two edge vectors from them, and orthonormalize those two vectors (e.g. by Gram-Schmidt) to get the basis. The problem with this approach is that the 3 points may lie on the same line, in which case we get only one independent vector instead of two.
Another approach is to loop through the polygon and find another point that forms a second vector sufficiently different from the first one, but this approach is susceptible to numerical errors (e.g., what if the best second vector is still almost parallel to the first? The numerical errors can be significant).
Is there any other better approach?
You can use the cross product of two edge vectors formed from the polygon's vertices; its direction is the plane normal. If the cross product's magnitude is too small, you're in degenerate territory.
You can also take the centroid (the average of the vertices, which is guaranteed to lie in the same plane) and pick the largest of the cross products of the vectors from the centroid to any two vertices; this gives the most accurate normal (see the sketch below). Note that if even the largest cross product is small, the normal may still be inaccurate.
If you can't find any cross product that isn't close to 0, your original polygon is degenerate and a normal will be hard to find. You could use arbitrary-precision or adaptive-precision arithmetic in this case, but the round-off error is already significant in the source data, so this may not help. If possible, remove degenerate polygons first, and if you have to, sew the mesh back up :).
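A small NumPy sketch of the centroid idea (names illustrative; pts is an N x 3 array of vertices):

import numpy as np

def polygon_normal(pts):
    c = pts.mean(axis=0)                     # centroid lies in the polygon's plane
    v = pts - c                              # vectors from the centroid to each vertex
    best, best_norm = None, 0.0
    for i in range(len(v)):
        for j in range(i + 1, len(v)):
            n = np.cross(v[i], v[j])
            m = np.linalg.norm(n)
            if m > best_norm:                # keep the largest (most reliable) cross product
                best, best_norm = n, m
    if best_norm < 1e-12:                    # every cross product ~ 0: degenerate polygon
        raise ValueError("degenerate polygon")
    return best / best_norm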
It's a bit over the top, but one way would be to compute the covariance matrix of the points and then diagonalise it. If the points are indeed planar, then one of the eigenvalues of the covariance matrix will be zero (or rather very small, due to finite-precision arithmetic) and the corresponding eigenvector will be a normal to the plane; the other two eigenvectors will span the plane of the polygon.
If you have N points, and the i'th coordinate of the k'th point is p[k,i], then the mean (vector) and (3x3) covariance matrix can be computed by
m[i]   = Sum{ k | p[k,i] } / N                              (i = 1..3)
C[i,j] = Sum{ k | (p[k,i] - m[i]) * (p[k,j] - m[j]) } / N   (i,j = 1..3)
Note that C is symmetric, so to find out how to diagonalise it you might want to look up the "symmetric eigenvalue problem".
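In NumPy the whole recipe is a few lines; np.linalg.eigh handles the symmetric eigenvalue problem and returns eigenvalues in ascending order (pts below is an illustrative N x 3 array):

import numpy as np

pts = np.array([[0., 0., 0.],
                [1., 0., 0.],
                [1., 1., 0.],
                [0., 1., 0.]])      # vertices of a planar polygon
m = pts.mean(axis=0)                # the mean vector m[i]
d = pts - m                         # centered points
C = d.T @ d / len(pts)              # the 3x3 covariance C[i,j]
w, V = np.linalg.eigh(C)            # symmetric eigendecomposition, eigenvalues ascending
normal = V[:, 0]                    # eigenvector of the ~zero eigenvalue: plane normal
u, v = V[:, 1], V[:, 2]             # the other two eigenvectors span the polygon's plane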