If we have a point in NDC space and we want to transform it to view space using inverse perspective projection, why do we need to apply perspective divide at the end? Shouldn't it be perspective multiplication, since applying the forward perspective projection from view to NDC space has the perspective divide?
I found it here: http://www.cse.chalmers.se/edu/course/TDA362/tutorials/ssao.html
float fragmentDepth = texture(depthTexture, texCoord).r;
// Normalized Device Coordinates (clip space)
vec4 ndc = vec4(texCoord.x * 2.0 - 1.0, texCoord.y * 2.0 - 1.0,
fragmentDepth * 2.0 - 1.0, 1.0);
// Transform to view space
vec3 vs_pos = homogenize(inverseProjectionMatrix * ndc);
where
vec3 homogenize(vec4 v) { return vec3((1.0 / v.w) * v); }
If you want to convert a homogeneous coordinate to a Cartesian coordinate, you have to scale it so that the w component is 1:
(x, y, z, w) -> (x', y', z', 1)
Therefore you have to divide all 4 components of the coordinate by its w component:
(x, y, z, w) -> (x/w, y/w, z/w, w/w)
When transforming from clip space to normalized device space, this operation is called the perspective divide.
Note that when transforming from normalized device space back to view space, the coordinate is transformed by the inverse projection matrix, not by the projection matrix. You still have to divide by the w component after the inverse projection.
Alternatively, you could multiply the normalized device coordinate (x', y', z', 1) by a value chosen so that the w component is 1 after the transformation with the inverse projection matrix. However, there is no good reason to go through the effort of finding this value before transforming.
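To see why the divide is still needed, here is a minimal numpy sketch (fov/aspect/near/far values are made up; the matrix layout is the classic OpenGL perspective matrix): it projects a view-space point forward, homogenizes to NDC, then runs the NDC point back through the inverse projection. The w that comes out is not 1, and dividing by it is exactly what recovers the view-space position.

import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # classic OpenGL perspective matrix (column vectors, NDC in [-1,1])
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0]])

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
view_pos = np.array([1.0, 2.0, -5.0, 1.0])   # some view-space point
clip = proj @ view_pos                       # forward: view -> clip space
ndc = clip / clip[3]                         # the perspective divide -> NDC
back = np.linalg.inv(proj) @ ndc             # inverse projection of the NDC point
print(back[3])                               # w is NOT 1 here (0.2 for this point)
print(back[:3] / back[3])                    # dividing again recovers [1, 2, -5]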
I have cone->p (vertex of the cone), cone->orient (axis vector), cone->k (half-angle tangent), cone->minm and cone->maxm (2 height values, for cone caps). Also I have point intersection which is on the cone. How do I find the cone (side surface) normal vector at intersection point using only these parameters?
Came up with a simpler method:
Find the distance Dis from intersection point I to the vertex P.
Make a vector along the axis of length
D = Dis * sqrt(1+k^2)
and find the point on the axis at this distance:
A = P + Normalized(Orient) * D
Now
Normal = I - A
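A minimal numpy sketch of this method, assuming the question's cone->p, cone->orient and cone->k passed in as arrays/scalars (the test values are made up):

import numpy as np

def cone_normal(p, orient, k, inter):
    # walk D = Dis*sqrt(1+k^2) along the axis from the vertex; the normal
    # then points from that axis point A to the intersection point
    dis = np.linalg.norm(inter - p)               # Dis: vertex -> intersection
    a = p + orient / np.linalg.norm(orient) * (dis * np.sqrt(1.0 + k * k))
    n = inter - a
    return n / np.linalg.norm(n)

# example cone: vertex at origin, axis +Z, half-angle tangent k = 0.5;
# the point (1,0,2) lies on it since radius = k*height = 1 at height 2
print(cone_normal(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 0.5,
                  np.array([1.0, 0.0, 2.0])))     # ~[0.894, 0, -0.447]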
Old answer:
Make an orthogonal projection of point I (intersection) onto the cone axis using the vector IP = I - P and the scalar (dot) product:
AxProj = P + Orient * dot(IP, Orient) / dot(Orient, Orient)
Vector from AxProj to I (perpendicular to the axis):
AxPerp = I - AxProj
Vector tangent to the cone surface, using the vector (cross) product:
T = IP x AxPerp
Vector normal to the cone surface:
N = T x IP
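For comparison, the old construction as a numpy sketch (same example cone as in the sketch above):

import numpy as np

p = np.array([0.0, 0.0, 0.0]); orient = np.array([0.0, 0.0, 1.0])
inter = np.array([1.0, 0.0, 2.0])

ip = inter - p
ax_proj = p + orient * np.dot(ip, orient) / np.dot(orient, orient)
ax_perp = inter - ax_proj                  # perpendicular from the axis to I
t = np.cross(ip, ax_perp)                  # tangent to the cone surface
n = np.cross(t, ip)                        # normal to the cone surface
print(n / np.linalg.norm(n))               # ~[0.894, 0, -0.447], same result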
If I is the intersection point on the cone's surface and you know its coordinates, and P is the vertex of the cone, whose coordinates you also know, then this is enough:
Normal = (axis x PI) x PI
Normal = Normal / norm(Normal)
where axis is the vector aligned with the axis of the cone and PI = I - P.
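As a numpy sketch, using the same example cone as in the sketches above (vertex at the origin, axis +Z):

import numpy as np

p = np.array([0.0, 0.0, 0.0])                       # cone vertex
axis = np.array([0.0, 0.0, 1.0])                    # axis direction
inter = np.array([1.0, 0.0, 2.0])                   # intersection point on the surface

pi_vec = inter - p
normal = np.cross(np.cross(axis, pi_vec), pi_vec)   # (axis x PI) x PI
normal /= np.linalg.norm(normal)
print(normal)                                       # ~[0.894, 0, -0.447]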
I have three non-colinear 3D points, let's say pt1, pt2, pt3. I've computed the plane P using the sympy.Plane. How can I find the orientation of this plane(P) i.e. RPY(euler angles) or in quaternion?
I have never used sympy, but you should be able to find a function to get the angle between two vectors (your normal vector and the world Y axis):
theta = yaxis.angle_between(P.normal_vector)
then get the rotation axis, which is the normalized cross product of those same vectors.
axis = yaxis.cross(P.normal_vector).normal()
Then construct a quaternion from the axis and angle
q = Quaternion.from_axis_angle(axis, theta)
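If the sympy route turns out clumsy, the same construction is easy to do by hand. A minimal numpy sketch (pt1, pt2, pt3 as float arrays; the function name and the (w, x, y, z) quaternion layout are my own choices):

import numpy as np

def plane_quaternion(pt1, pt2, pt3, up=np.array([0.0, 1.0, 0.0])):
    # plane normal from the three points (sign depends on their winding)
    n = np.cross(pt2 - pt1, pt3 - pt1)
    n /= np.linalg.norm(n)
    # rotation axis: cross of world up and the normal
    axis = np.cross(up, n)
    s = np.linalg.norm(axis)
    if s < 1e-9:                               # normal parallel to up: no rotation
        return np.array([1.0, 0.0, 0.0, 0.0])  # (the opposite case needs a 180 turn)
    axis /= s
    # angle between up and the normal, then quaternion from axis-angle
    theta = np.arccos(np.clip(np.dot(up, n), -1.0, 1.0))
    return np.concatenate(([np.cos(theta / 2)], axis * np.sin(theta / 2)))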
I am trying to superimpose two 3D triangles for a molecular modeling problem. It seemed simple enough. I translated the first point of each triangle to the origin (0,0,0). I then calculated the angle I would have to rotate around the z axis to put the second point on the x axis. Using the Rz(theta) rotation formula, this would be the angle where y' = 0:
y' = x*sin(theta) + y*cos(theta) = 0, and rearranging, tan(theta) = -y/x
The angle would be arctan(-y/x). But plugging this value for the angle back into the original equation above does not give zero, except in the case where x = y and the tangent is one. It seems like simple algebra - why doesn't this work?
Thanks for any help.
As the other comments suggested, you most likely got confused between projections and goniometrics. There is also a safer way without goniometrics, using vector math (linear algebra):
1. Create a transform matrix m0 representing a plane aligned to the first triangle t0.
By aligned I mean one of the edges of the triangle should lie along one of the plane's basis vectors. That is simple: just set one basis vector to the edge in question, set the origin to one of its endpoints, and exploit the cross product to get the remaining vectors.
So if our triangle has points p0,p1,p2 and our basis vectors are x,y,z with origin o, then:
x = p1-p0; x /= |x|;
y = p2-p0;
z = cross(x,y); z /= |z|;
y = cross(z,x); y /= |y|;
o = p0
Just feed those into the transform matrix (see the link at the bottom of this answer).
2. Create a transform matrix m1 representing a plane aligned to the second triangle t1.
It is built the same way as in step #1.
3. Compute the final transform matrix m converting t1 to t0.
That is simple:
m = Inverse(m1)*m0
Now any point from t1 can be aligned to t0 simply by multiplying the m matrix by the point. Do not forget to use homogeneous coordinates, so the point is (x,y,z,1).
Here is a small C++/OpenGL example:
//---------------------------------------------------------------------------
double t0[3][3]= // 1st triangle
{
-0.5,-0.5,-1.2,
+0.5,-0.5,-0.8,
0.0,+0.5,-1.0,
};
double t1[3][3]= // 2nd triangle
{
+0.5,-0.6,-2.1,
+1.5,-0.5,-2.3,
+1.2,+0.3,-2.2,
};
double arot=0.0; // animation angle
//---------------------------------------------------------------------------
void gl_draw() // main rendering code
{
int i;
double m0[16],m1[16],m[16],x[3],y[3],z[3],t2[3][3];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(0.0,0.0,-10.0);
glRotatef(arot,0.0,1.0,0.0);
// render original triangles
glBegin(GL_TRIANGLES);
glColor3f(1.0,0.0,0.0); for (i=0;i<3;i++) glVertex3dv(t0[i]);
glColor3f(0.0,0.0,1.0); for (i=0;i<3;i++) glVertex3dv(t1[i]);
glEnd();
// x,y,z = t0 plane basis vectors
vector_sub(x,t0[1],t0[0]); // x is first edge
vector_one(x,x); // normalized
vector_sub(y,t0[2],t0[0]); // y is last edge
vector_mul(z,x,y); // z = cross(x,y) ... perpendicular vector to x,y
vector_one(z,z);
vector_mul(y,z,x); // y = cross(z,x) ... perpendicular vector to z,x
vector_one(y,y);
// m0 = transform matrix representing t0 plane
m0[ 3]=0.0; for (i=0;i<3;i++) m0[ 0+i]=x[i];
m0[ 7]=0.0; for (i=0;i<3;i++) m0[ 4+i]=y[i];
m0[11]=0.0; for (i=0;i<3;i++) m0[ 8+i]=z[i];
m0[15]=1.0; for (i=0;i<3;i++) m0[12+i]=t0[0][i];
// x,y,z = t1 plane basis vectors
vector_sub(x,t1[1],t1[0]); // x is first edge
vector_one(x,x); // normalized
vector_sub(y,t1[2],t1[0]); // y is last edge
vector_mul(z,x,y); // z = cross(x,y) ... perpendicular vector to x,y
vector_one(z,z);
vector_mul(y,z,x); // y = cross(z,x) ... perpendicular vector to z,x
vector_one(y,y);
// m1 = transform matrix representing t1 plane
m1[ 3]=0.0; for (i=0;i<3;i++) m1[ 0+i]=x[i];
m1[ 7]=0.0; for (i=0;i<3;i++) m1[ 4+i]=y[i];
m1[11]=0.0; for (i=0;i<3;i++) m1[ 8+i]=z[i];
m1[15]=1.0; for (i=0;i<3;i++) m1[12+i]=t1[0][i];
// m = transform t1 -> t0 = Inverse(m1)*m0
matrix_inv(m,m1);
matrix_mul(m,m,m0);
// t2 = transformed t1
for (i=0;i<3;i++) matrix_mul_vector(t2[i],m,t1[i]);
// render transformed triangle
glLineWidth(2.0);
glBegin(GL_LINE_LOOP);
glColor3f(0.0,1.0,0.0); for (i=0;i<3;i++) glVertex3dv(t2[i]);
glEnd();
glLineWidth(1.0); // restore width; glLineWidth is not allowed inside glBegin/glEnd
glFlush();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
I used my own matrix and vector math; I hope the comments are enough. If not, see:
Understanding 4x4 homogenous transform matrices
for info about the matrices; you will also find the sources and equations for the math used here. Here is a preview of my test case:
Red is the t0 triangle, blue is the t1 triangle, and green is the m*t1 transformed triangle. As you can see, there is no need for goniometrics/Euler angles at all. I rotate the scene by arot just to visually check that the green triangle really aligns with the blue one, to prove I did not make a silly mistake.
Now it is unclear how exactly you want to align, so for example if you want maximal coverage or something, either try all 3 combinations and keep the best, or align to the closest or largest edges of both triangles, etc.
I have a shape made out of several triangles which is positioned somewhere in world space with scale, rotate, translate. I also have a plane on which I would like to project (orthogonal) the shape.
I could multiply every vertex of every triangle in the shape by the object's transformation matrix to find out where it is located in world coordinates, and then project this point onto the plane.
But I don't need to draw the projection; instead I would like to transform the plane with the inverse transformation matrix of the shape, and then project all the vertices onto the (inverse-transformed) plane, since that only requires me to transform the plane once and not every vertex.
My plane has a normal (xyz) and a distance (d). How do I multiply it with a 4x4 transformation matrix so that it turns out ok?
Can you create a vec4 as xyzd and multiply that? Or maybe create a vector xyz1 and then what to do with d?
You need to convert your plane to a different representation: one where N is the normal and O is any point on the plane. The normal you already know; it's your (xyz). A point on the plane is also easy; it's your normal N times your distance d.
Transform O by the 4x4 matrix in the normal way, this becomes your new O. You will need a Vector4 to multiply with a 4x4 matrix, set the W component to 1 (x, y, z, 1).
Also transform N by the 4x4 matrix, but set the W component to 0 (x, y, z, 0). Setting the W component to 0 means that your normals won't get translated. If your matrix is composed of more than just translation and rotation, then this step isn't so simple: instead of multiplying by your transformation matrix, you have to multiply by the transpose of the inverse of the matrix, i.e. Matrix4.Transpose(Matrix4.Invert(Transform)); there's a good explanation of why here.
You now have a new normal vector N and a new position vector O. However, I suppose you want it in xyzd form again? No problem. As before, xyz is your normal N; all that's left is to calculate d. d is the distance of the plane from the origin along the normal vector, hence it is simply the dot product of O and N.
There you have it! If you tell me what language you're doing this in, I'd happily type it up in code as well.
EDIT, In pseudocode:
The plane is vector3 xyz and number d, the matrix is a matrix4x4 M
vector4 O = (xyz * d, 1)
vector4 N = (xyz, 0)
O = M * O
N = transpose(invert(M)) * N
xyz = N.xyz
d = dot(O.xyz, N.xyz)
xyz and d represent the new plane
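For example, a numpy sketch of the pseudocode above (column-vector convention; plane given as unit normal xyz plus distance d, with one extra re-normalization of the transformed normal, which matters when M scales):

import numpy as np

def transform_plane(xyz, d, M):
    O = M @ np.append(xyz * d, 1.0)               # point on the plane, w = 1
    N = np.linalg.inv(M).T @ np.append(xyz, 0.0)  # normal, w = 0 (no translation)
    n = N[:3] / np.linalg.norm(N[:3])             # re-normalize under scaling
    return n, np.dot(O[:3], n)                    # new xyz and new d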
This question is a bit old but I would like to correct the accepted answer.
You do not need to convert your plane representation.
Any homogeneous point X = (x, y, z, 1) lies on the plane p = (a, b, c, d) if
a*x + b*y + c*z + d = 0
It can be written as a dot product: dot(p, X) = 0, i.e. transpose(p) * X = 0.
You are looking for the plane p' transformed by your 4x4 matrix M, i.e. the plane containing every transformed point X' = M * X.
For the same reason, you must have transpose(p') * X' = 0 for every such point, i.e. transpose(p') * M * X = 0.
So transpose(p') * M = transpose(p), and with some rearrangements p' = transpose(inverse(M)) * p.
TLDR: if p = (a,b,c,d), p' = transpose(inverse(M)) * p
Notation:
n is a normal represented as a (1x3) row-vector
n' is the transformed normal of n according to transform matrix T
(n|d) is a plane represented as a (1x4) row-vector (with n the plane's normal and d the plane's distance to the origin)
(n'|d') is the transformed plane of (n|d) according to transform matrix T
T is a (4x4) (affine) column-major transformation matrix (i.e. transforming a column-vector t is defined as t' = T t).
Transforming a normal n:
n' = n adj(T)
Transforming a plane (n|d):
(n'|d') = (n|d) adj(T)
Here, adj is the adjugate of a matrix which is defined as follows in terms of the inverse and determinant of a matrix:
T^-1 = adj(T)/det(T)
Note:
The adjugate is generally not equal to the inverse of a transformation matrix T. If T includes a reflection, det(T) = -1, reversing the winding order!
Re-normalizing n' is mathematically not required (but maybe numerically, depending on the implementation) since scaling is taken care of by the determinant. Thanks to Adrian Leonhard.
You can directly transform the plane without first decomposing and recomposing a plane (normal and point).
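The same transform as a numpy sketch, in the row-vector convention used here (the plane is a 1x4 row (n|d) multiplying adj(T) from the left):

import numpy as np

def adjugate(T):
    # adj(T) = det(T) * inverse(T); this shortcut is valid whenever T is
    # invertible, which transform matrices always are
    return np.linalg.det(T) * np.linalg.inv(T)

def transform_plane(plane, T):
    # plane = [nx, ny, nz, d] as a row vector; (n'|d') = (n|d) adj(T)
    # no re-normalization needed: a plane is only defined up to scale
    return plane @ adjugate(T)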
I am rendering textured quads from an orthographic perspective and would like to simulate 'depth' by modifying UVs and the vertex positions of the quads four points (top left, top right, bottom left, bottom right).
I've found that if I make the top left and bottom right corners' y positions the same, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (which makes up half the quad) seems to get squashed while the bottom triangle's texture looks normal.
I can change the UVs and any of the four points on the quad (but only in 2D space; it's an orthographic projection anyway, so 3D space won't matter much). So basically I'm trying to simulate perspective on a two-dimensional quad in orthographic projection. Any ideas? Is it even mathematically possible/feasible?
Ideally, what I'd like is a situation where I can set an x/y rotation as well as a virtual z 'position' (which simulates z depth) through a function and have it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms can be applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something; I'm trying to crunch the math but not making much progress.
here's what I mean:
The top left is just the card, the center is the card with a y rotation of X degrees, and the rightmost is a card with x and y rotations of different degrees.
To compute the 2D coordinates of the corners, just choose the coordinates in 3D and apply the 3D perspective equations:
Original card corner (x,y,z)
Apply a rotation (by matrix multiplication); you get (x',y',z')
Apply a perspective projection (choose some camera origin, direction and field of view)
For the simplest case it's:
x'' = x' / z'
y'' = y' / z'
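A numpy sketch of those three steps for the four corners of a card (the card size, rotation angle and camera distance are made up):

import numpy as np

def rot_y(a):
    # rotation matrix around the Y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

corners = np.array([[-1, 1.5, 0], [1, 1.5, 0],
                    [1, -1.5, 0], [-1, -1.5, 0]], float)
rotated = corners @ rot_y(np.radians(30)).T   # rotate each corner
rotated[:, 2] += 5.0                          # push the card in front of the camera
projected = rotated[:, :2] / rotated[:, 2:3]  # x'' = x'/z', y'' = y'/z'
print(projected)                              # the four 2D screen corners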
The bigger problem now is the texturing, i.e. how to get the texture coordinates from the pixel coordinates:
The correct way for you is to use a homographic transformation of the form:
U(x,y) = ( ax + cy + e ) / (gx + hy + 1)
V(x,y) = ( bx + dy + f ) / (gx + hy + 1)
which in fact is the result of the perspective equations applied to a plane.
a,b,c,d,e,f,g,h are computed so that (with U,V in [0..1]):
(U,V)(top'',left'') = (0,0)
(U,V)(top'',right'') = (0,1)
(U,V)(bottom'',left'') = (1,0)
(U,V)(bottom'',right'') = (1,1)
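A numpy sketch of solving for a..h from those four constraints (corners given as the projected 2D points, uv as the target pairs above):

import numpy as np

def homography_coeffs(corners, uv):
    # each corner (x, y) with target (U, V) yields two linear equations in
    # the unknowns (a, b, c, d, e, f, g, h):
    #   a*x + c*y + e - U*(g*x + h*y) = U
    #   b*x + d*y + f - V*(g*x + h*y) = V
    A, rhs = [], []
    for (x, y), (U, V) in zip(corners, uv):
        A.append([x, 0, y, 0, 1, 0, -U * x, -U * y]); rhs.append(U)
        A.append([0, x, 0, y, 0, 1, -V * x, -V * y]); rhs.append(V)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))  # a..h

Rendering then amounts to evaluating U(x,y) and V(x,y) with these coefficients for each pixel inside the quad.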
But your 2D rendering framework probably uses a bilinear interpolation instead:
U( x , y ) = a + b * x + c * y + d * ( x * y )
V( x , y ) = e + f * x + g * y + h * ( x * y )
In that case you get a bad looking result.
And it is even worse if the renderer splits the quad into two triangles!
So I see only two options :
use a 3D renderer
compute the texturing yourself if you only need a few images and not a real-time animation.