Drawing from a singular multivariate truncated normal (Gaussian)

Is there any algorithm for generating draws from a multivariate truncated normal with a singular covariance matrix?
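Not from the thread, but for illustration, a minimal sketch of one common approach, assuming the truncation is a box constraint [lower, upper]: factor the singular covariance with an eigendecomposition, sample in its full-rank subspace, and reject draws that fall outside the box. The function name and the tolerance tol are illustrative, and rejection becomes very inefficient (or never terminates) if the box has little or no probability under the degenerate distribution.

```python
import numpy as np

def sample_singular_truncated_mvn(mean, cov, lower, upper, n, tol=1e-10, rng=None):
    """Rejection sampler for a box-truncated MVN with singular covariance.

    Draws z in the full-rank eigen-subspace of cov, maps back with
    x = mean + V * sqrt(lambda) * z, and keeps x only if lower <= x <= upper.
    """
    rng = np.random.default_rng() if rng is None else rng
    vals, vecs = np.linalg.eigh(cov)            # cov = V diag(vals) V^T
    keep = vals > tol                           # drop the null space
    A = vecs[:, keep] * np.sqrt(vals[keep])     # x = mean + A @ z, z ~ N(0, I_r)
    r = keep.sum()
    out = []
    while len(out) < n:
        z = rng.standard_normal((256, r))       # batch of subspace draws
        x = mean + z @ A.T
        ok = np.all((x >= lower) & (x <= upper), axis=1)
        out.extend(x[ok])                       # keep only draws inside the box
    return np.array(out[:n])
```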

Related

Intersection check of a hyperplane and a hyper-rectangle

Given an N-dimensional hyperplane and an N-dimensional hyper-rectangle represented by its min and max points, how can I determine whether they intersect?
A naive method that tests the intersection of each edge of the hyper-rectangle against the hyperplane has O(2^N) time complexity.
Can this be done more efficiently?
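One standard O(N) test (a common technique, not quoted from the thread): evaluate the plane function at the two box corners that minimize and maximize the dot product with the plane normal; the plane meets the box iff zero lies between those two values. A minimal numpy sketch, assuming the plane is given as {x : normal · x + offset = 0} and the box by its min/max points:

```python
import numpy as np

def plane_intersects_box(normal, offset, box_min, box_max):
    """O(N) test: the hyperplane {x : normal . x + offset = 0} meets the box
    iff the plane function changes sign (or hits zero) between the two
    corners that minimize / maximize normal . x."""
    normal = np.asarray(normal, dtype=float)
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    # Per axis, pick the corner coordinate that minimizes / maximizes normal . x
    near = np.where(normal >= 0, box_min, box_max)
    far = np.where(normal >= 0, box_max, box_min)
    lo = normal @ near + offset   # minimum of the plane function over the box
    hi = normal @ far + offset    # maximum of the plane function over the box
    return lo <= 0.0 <= hi
```

For example, plane_intersects_box([1, 1], -1, [0, 0], [1, 1]) returns True for the line x + y = 1 crossing the unit square.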

Render 2d gaussian - take gradient with respect to the mean

I need to render a 2D Gaussian and still be able to differentiate with respect to the 2D mean, which has type float. The standard deviation of the Gaussian can be constant, and so can the size of the generated matrix.
Any idea how to do this in TensorFlow?
CLARIFICATION:
I need a function draw2dGaussian(mean2d) which returns a 2D matrix M. The matrix M shows a discretized 2D Gaussian centered at the location mean2d; note that mean2d is a pair of two floats. M is 0 at points far enough from mean2d.
The requirement is that draw2dGaussian must be differentiable with respect to mean2d.
I think OpenDR (http://files.is.tue.mpg.de/black/papers/OpenDR.pdf) might be able to offer such a function, but I was wondering if somebody had a simpler solution.
You are looking for the reparametrization trick. A one-dimensional Gaussian N(mean, var) can be written as mean + sqrt(var) * N(0, 1). The same construction applies to 2D Gaussians, with a Cholesky factor of the covariance matrix taking the place of sqrt(var).
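A minimal TensorFlow 2 sketch of that construction (the values and the downstream loss are illustrative, not from the question):

```python
import tensorflow as tf

# Reparametrization: a draw from N(mean, cov) is written as mean + L @ eps
# with eps ~ N(0, I) and cov = L @ L^T, so the sample stays differentiable
# with respect to `mean` (and `cov`).
mean = tf.Variable([1.0, -0.5])                    # 2D mean we want gradients for
cov = tf.constant([[0.5, 0.1], [0.1, 0.3]])        # fixed covariance

with tf.GradientTape() as tape:
    L = tf.linalg.cholesky(cov)                    # cov = L @ L^T
    eps = tf.random.normal([2])                    # eps ~ N(0, I), independent of mean
    sample = mean + tf.linalg.matvec(L, eps)       # reparametrized draw
    loss = tf.reduce_sum(sample ** 2)              # any downstream loss

grad = tape.gradient(loss, mean)                   # well-defined gradient w.r.t. the mean
```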

How to decompose affine matrix?

I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, for my project I have to "disable" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for translation and scaling, but I don't know how to separate rotation and shear.
If not, is there a way to calculate a transformation matrix from the points that doesn't include shear?
I can only use numpy or tensorflow to solve this problem, by the way.
I'm not sure I understand what you're asking.
Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of squared distances between (RP + T) and Q is minimized.
You can of course combine R and T into a 4x4 matrix (rotation and translation only, without shear or scale).
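For illustration, a minimal numpy sketch of that Kabsch fit (the function name and shapes are assumptions, not the asker's code); P and Q are (n, 3) arrays of paired points:

```python
import numpy as np

def kabsch(P, Q):
    """Find R (rotation) and T (translation) minimizing sum ||R @ p + T - q||^2
    over paired rows of P and Q, and pack them into a 4x4 matrix."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cQ - R @ cP
    M = np.eye(4)                                  # rotation + translation only
    M[:3, :3], M[:3, 3] = R, T
    return M
```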

How do I decide the angle when regressing values corresponding to a rotated rectangle?

I am unable to interpret the following paragraph from a paper (the last sentence is the important part for my problem):
The simplest model we explore is a direct regression from the raw RGB-D image to grasp coordinates. The raw image is given to the model which uses convolutional layers to extract features from the image. The fully connected layers terminate in an output layer with six output neurons corresponding to the coordinates of a grasp. Four of the neurons correspond to location and height. Grasp angles are two-fold rotationally symmetric so we parameterize by using the two additional coordinates: the sine and cosine of twice the angle.
What does that last sentence mean?
To elaborate:
First, why twice the angle?
Second, what is two-fold rotational symmetry?
Third, why can't I just regress the angle directly?
This paragraph is from this paper (page 2, right column, section B).
The second question is answered here: http://mathstat.slu.edu/escher/index.php/Rotational_Symmetry. As I understand it, two-fold rotational symmetry means that within a full 360-degree turn the object can be rotated into two positions that look exactly the same. Since the subject of the paper is robotic grasping, a grasp rectangle rotated by 45 degrees appears the same as one rotated by (180 + 45) degrees.
First and third questions: the network could in principle regress the angle directly, but because of the symmetry an angle is only meaningful modulo 180 degrees, so θ and θ + 180° describe the same grasp and would give the regressor two conflicting targets. Doubling the angle stretches that 180-degree range over a full 360 degrees, and encoding it as the sine and cosine of 2θ gives the output neurons a unique, continuous target with no jump at the wrap-around; it is essentially a form of data conditioning.
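For illustration (not code from the paper), a small numpy sketch of that parameterization: angles 180 degrees apart encode to the same target, and decoding recovers the angle modulo 180 degrees, which is all a two-fold-symmetric grasp rectangle needs.

```python
import numpy as np

def encode_angle(theta):
    """Angle in radians -> regression target (sin 2θ, cos 2θ)."""
    return np.sin(2 * theta), np.cos(2 * theta)

def decode_angle(s, c):
    """Regression output (sin 2θ, cos 2θ) -> angle in [0, π)."""
    return (np.arctan2(s, c) / 2) % np.pi

theta = np.deg2rad(45)
# A rectangle at 45° and one at 225° produce the same target:
assert np.allclose(encode_angle(theta), encode_angle(theta + np.pi))
print(np.rad2deg(decode_angle(*encode_angle(theta))))   # ~45.0
```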

How is a 3D plane's normal vector related to its rotation?

What I am trying to do: http://www.youtube.com/watch?v=CaTI2d0tQME (3:15).
In my 3D API there is quad.rotation[x,y,z], quad[x,y,z] (the quad's center), and a width/height. I understand that the vertices are calculated from all of these, and that the normal can be calculated from the vertices, but I have a feeling I should be able to get it just from the rotation?
Yes, you can!
Your quad must be axis-oriented in its local space (its local normal is the X, Y, or Z axis). Multiply this vector by the quad's rotation matrix and you will have your new, nice and shiny normal vector in world space!
A little warning: if the quad's transformation matrix is generated by a 3D engine, it may contain scaling factors that will mess the normal vector up. In that case, the classical solution is to transform the normal with the inverse transpose of the matrix, or to build your own transformation matrix from just the quad's rotation values.
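For illustration, a small numpy sketch of that idea, assuming the quad's local normal is +Z and that the engine applies the Euler rotations in Rz · Ry · Rx order (both assumptions must match your 3D API):

```python
import numpy as np

def quad_normal(rx, ry, rz, local_normal=(0.0, 0.0, 1.0)):
    """Rotate the quad's local normal (assumed +Z here) by the quad's
    Euler rotation in radians, applied as Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    n = Rz @ Ry @ Rx @ np.asarray(local_normal)
    return n / np.linalg.norm(n)   # pure rotations preserve length; normalize anyway
```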
