Can someone check if my logic is correct? - visual-c++

I am just starting to study OpenGL. I am going to post a segment of code and explain my understanding; could you follow my explanation and point out any problems?
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); //start with an identity matrix
glPushMatrix();
//push the current identity matrix onto the stack, and start with a new identity matrix as the transformation matrix
glPushMatrix();
//copy the current matrix (which is the identity) as the new transformation matrix, and push the current transformation matrix onto the stack
glScalef(10, 10, 1.0);
**Question 1**
//I feel like the order in which the image is built is kind of reversed
//It's like drawSquare happens first, then color, then scale
//can anyone clarify?
//Second, drawSquare defines the 4 vertices around the origin (+/-0.5, +/-0.5)
//is the origin located at the center of the window by default?
//what happens when it is scaled? does point (0.5, 0.5) get scaled to (5, 5)?
glColor3f(0.0, 1.0, 0.0);
drawSquare(1.0);
glPopMatrix();
//forget the current transformation matrix, pop the one off the top of the stack
//which is the identity matrix
//In the code below:
//my understanding is that the 4 vertices are defined around the origin, but is this the same origin?
//then the unit square is moved just below the X-axis
//and the 4 vertices are scaled one by one?
//ex: (0.5,0) -> (1,0), (0.5,-1) -> (1,-2)
glScalef(2, 2, 1.0);
glTranslatef(0.0, -0.5, 0.0);
glColor3f(1.0, 0.0, 0.0);
drawSquare(1.0);
//last question: if I want to make the second square rotate about a point around the z-axis
//do I have to specify it here?
//for example: add glRotatef(rotate_degree, 0.0, 0.0, 1.0); above the glScalef(2, 2, 1.0);
//so that later I can change the value of rotate_degree?
glPopMatrix(); //forget about the current transformation matrix, pop out the top matrix on the stack.

That the order of operations seems inverted comes from the fact that matrix multiplication is non-commutative and right associative when applied to column vectors. Say you have a position column vector ↑p in model space. To bring it into world space you multiply it with the matrix M, i.e.
↑p_world = M · ↑p
Note that you cannot change the order of operations! Column vectors match like a key into matrices, and the key fits into the matrix-lock from the right.
In the next step you want to transform into view space, using matrix V so you write
↑p_view = V · ↑p_world
but this you can substitute with
↑p_view = V · M · ↑p
But of course if you have a lot of ↑p-s you'd want to save on computations, so you contract those two matrices M and V into a single matrix you call modelview. And when you build modelview with OpenGL you build it like this:
MV = 1
MV = MV · V
MV = MV · M
Due to the right associativity of column order matrix multiplication the first transformation applied to a vector is the last one multiplied onto the stack.
Note that by using row order matrix math, things become left associative, i.e. things happen in the order you write them. But column order right associativity is incredibly useful, as it makes building branching transformation hierarchies much, much easier.
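A minimal sketch in plain C++ with GLM (an assumption, since the question uses the fixed-function API) that makes the composition order explicit: glm::scale and glm::translate post-multiply just like glScalef and glTranslatef, so the call issued last is the one applied to the vertex first. It uses the corner (0.5, 0) of the question's second square.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 mv(1.0f);                                     // glLoadIdentity()
    mv = glm::scale(mv, glm::vec3(2.0f, 2.0f, 1.0f));       // glScalef(2, 2, 1)
    mv = glm::translate(mv, glm::vec3(0.0f, -0.5f, 0.0f));  // glTranslatef(0, -0.5, 0)

    glm::vec4 corner(0.5f, 0.0f, 0.0f, 1.0f);  // a corner of the unit square
    glm::vec4 p = mv * corner;                 // = Scale * (Translate * corner)
    std::printf("(%.1f, %.1f)\n", p.x, p.y);   // prints (1.0, -1.0): translated first, then scaled
    return 0;
}

So the vertex is moved below the x-axis first and only then scaled, even though glScalef appears first in the source.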

Related

What's the difference between using the modelView matrix directly and using the normalMatrix instead? [duplicate]

I am working on some shaders, and I need to transform normals.
I read in a few tutorials that the way you transform normals is to multiply them by the transpose of the inverse of the modelview matrix. But I can't find an explanation of why that is so, and what the logic behind it is.
It flows from the definition of a normal.
Suppose you have the normal, N, and a vector, V, a tangent vector at the same position on the object as the normal. Then by definition N·V = 0.
Tangent vectors run in the same direction as the surface of an object. So if your surface is planar then the tangent is the difference between two identifiable points on the object. So if V = Q - R where Q and R are points on the surface then if you transform the object by B:
V' = BQ - BR
= B(Q - R)
= BV
The same logic applies for non-planar surfaces by considering limits.
In this case suppose you intend to transform the model by the matrix B. So B will be applied to the geometry. Then to figure out what to do to the normals you need to solve for the matrix A such that:
(AN)·(BV) = 0
Turning that into a row versus column thing to eliminate the explicit dot product:
[transpose(AN)](BV) = 0
Pull the transpose outside, eliminate the brackets:
transpose(N)*transpose(A)*B*V = 0
So that's "the transpose of the normal" [product with] "the transpose of the known transformation matrix" [product with] "the transformation we're solving for" [product with] "the vector on the surface of the model" = 0
But we started by stating that transpose(N)*V = 0, since that's the same as saying that N·V = 0. So to satisfy our constraints we need the middle part of the expression — transpose(A)*B — to go away.
Hence we can conclude that:
transpose(A)*B = identity
=> transpose(A) = identity*inverse(B)
=> transpose(A) = inverse(B)
=> A = transpose(inverse(B))
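In code, this is the familiar normal-matrix construction. A minimal sketch with GLM (the function name is hypothetical; only the upper-left 3x3 of the modelview matrix matters for normals):

#include <glm/glm.hpp>

glm::vec3 transformNormal(const glm::mat4& modelView, const glm::vec3& n) {
    // A = transpose(inverse(B)), with B the 3x3 linear part of the modelview matrix
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));
    return glm::normalize(normalMatrix * n);   // re-normalize, since scaling changes the length
}

In a GLSL shader the same expression is typically precomputed on the CPU and passed in as a uniform, since computing an inverse per vertex is wasteful.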
My favorite proof is the following, where N is the normal and V is a tangent vector. Since they are perpendicular, their dot product is zero. M is any 3x3 invertible transformation (inverse(M) * M = I). N' and V' are the vectors transformed so that they stay perpendicular:
N·V = transpose(N)*V = transpose(N)*inverse(M)*M*V = transpose(transpose(inverse(M))*N) * (M*V) = N'·V' = 0
So the tangent transforms as V' = M*V, while the normal must transform as N' = transpose(inverse(M))*N.
To get some intuition, consider a shear transformation: the sheared tangent still lies in the surface, but a normal multiplied by M directly would no longer be perpendicular to it; only the inverse transpose keeps the angle at 90 degrees.
Note that the inverse-transpose rule applies only to normals, not to tangent vectors.
Take a look at this tutorial:
https://paroj.github.io/gltut/Illumination/Tut09%20Normal%20Transformation.html
You can imagine that when the surface of a sphere stretches (so the sphere is scaled along one axis or something similar) the normals of that surface will all 'bend' towards each other. It turns out you need to invert the scale applied to the normals to achieve this. This is the same as transforming with the Inverse Transpose Matrix. The link above shows how to derive the inverse transpose matrix from this.
Also note that when the scale is uniform, you can simply pass the original matrix as normal matrix. Imagine the same sphere being scaled uniformly along all axes, the surface will not stretch or bend, nor will the normals.
If the model matrix is made of translation, rotation and scale, you don't need the inverse transpose to calculate the normal matrix. Simply divide the normal by the squared scale, multiply by the model matrix, and we are done. You can extend that to any matrix with perpendicular axes; just calculate the squared scale for each axis of the matrix you are using instead.
I wrote the details in my blog: https://lxjk.github.io/2017/10/01/Stop-Using-Normal-Matrix.html
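A minimal sketch of that trick in C++ with GLM (assumptions: the model matrix contains only translation, rotation and non-zero scale, so its columns are perpendicular; the function name is hypothetical):

#include <glm/glm.hpp>

glm::vec3 fastTransformNormal(const glm::mat4& model, const glm::vec3& n) {
    glm::mat3 m(model);                       // linear (non-translation) part
    glm::vec3 scaleSq(glm::dot(m[0], m[0]),   // squared scale of each axis = squared column length
                      glm::dot(m[1], m[1]),
                      glm::dot(m[2], m[2]));
    return glm::normalize(m * (n / scaleSq)); // divide by squared scale, then multiply by the model matrix
}

For a matrix M = R*S (rotation times scale) this yields a vector along R*diag(1/s)*n, the same direction as transpose(inverse(M))*n, without computing an inverse.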
I don't understand why you just don't zero out the 4th element of the direction vector before multiplying with the model matrix. No inverse or transpose needed. Think of the direction vector as the difference between two points. Move the two points with the rest of the model; they are still in the same relative position to the model. Take the difference between the two points to get the new direction, and the 4th element cancels out to zero. A lot cheaper.

Homogeneous coordinates and perspective-correctness?

Does the technique that Vulkan (and, I assume, other graphics libraries) uses to interpolate vertex attributes in a perspective-correct manner require that the vertex shader normalize the homogeneous camera-space vertex position (i.e. divide through by the w-coordinate such that the w-coordinate is 1.0) prior to multiplication by a typical projection matrix of the form...
g/s 0 0 0
0 g 0 n
0 0 f/(f-n) -nf/(f-n)
0 0 1 0
...in order for perspective-correctness to work properly?
Or, will perspective-correctness continue to work on any homogeneous vertex position in camera-space (with a w-coordinate other than 1.0)?
(I didn't completely follow the perspective-correctness math, so it is unclear to me which is the case.)
Update:
In order to clarify terminology:
vec4 modelCoordinates = vec4(x_in, y_in, z_in, 1);
mat4 modelToWorld = ...;
vec4 worldCoordinates = modelToWorld * modelCoordinates;
mat4 worldToCamera = ...;
vec4 cameraCoordinates = worldToCamera * worldCoordinates;
mat4 cameraToProjection = ...;
vec4 clipCoordinates = cameraToProjection * cameraCoordinates;
output(clipCoordinates);
cameraToProjection is a matrix like the one shown in the question
The question is does cameraCoordinates.w have to be 1.0?
And consequently, does the last row of both the modelToWorld and worldToCamera matrices have to be 0 0 0 1?
You have this exactly backwards. Doing the perspective divide in the shader is what prevents perspective-correct interpolation. The rasterizer needs the perspective information provided by the W component to do its job. With a W of 1, the interpolation is done in window space, without any regard to perspective.
Provide a clip-space coordinate to the output of your vertex processing stage, and let the system do what it exists to do.
the vertex shader must normalize the homogenous camera-space vertex position (ie: divide through by the w-coordinate such that the w-coordinate is 1.0) prior to multiplication by a typical projection matrix of the form...
If your camera-space vertex position does not have a W of 1.0, then one of two things has happened:
You are deliberately operating in a post-projection world space or some similar construct. This is a perfectly valid thing to do, and the math for a camera space can be perfectly reasonable.
Your code is broken somewhere. That is, you intend for your world and camera space to be a normal, Euclidean, non-homogeneous space, but somehow the math didn't work out. Obviously, this is not a perfectly valid thing to do.
In both cases, dividing by W is the wrong thing to do. If your world space that you're placing a camera into is post-projection (such as in this example), dividing by W will break your perspective-correct interpolation, as outlined above. If your code is broken, dividing by W will merely mask the actual problem; better to fix your code than to hide the bug, as it may crop up elsewhere.
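A minimal sketch of that advice in C++ with GLM (the matrix names follow the question's pseudocode; the function name is hypothetical): build the clip-space position and hand it to the pipeline unmodified.

#include <glm/glm.hpp>

glm::vec4 toClipSpace(const glm::mat4& cameraToProjection,
                      const glm::mat4& worldToCamera,
                      const glm::mat4& modelToWorld,
                      const glm::vec3& modelPosition) {
    glm::vec4 clip = cameraToProjection * worldToCamera * modelToWorld
                   * glm::vec4(modelPosition, 1.0f);
    return clip;   // no division by clip.w here; the fixed-function stage uses w for interpolation
}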
To see whether or not the camera coordinates need to be in normal form, let's represent the camera coordinates as multiples of w, so they are (wx,wy,wz,w).
Multiplying through by the given projection matrix, we get the clip coordinates (wxg/s, wyg, fwz/(f-n) - nfw/(f-n), wz).
Calculating the x-y framebuffer coordinates as per the fixed Vulkan formula we get (P_x * xg/sz + O_x, P_y * Hgy/z + O_y). Notice this does not depend on w, so the position of a polygon's vertices in the framebuffer doesn't require the camera coordinates to be in normal form.
Likewise, calculation of the barycentric coordinates of fragments within a polygon depends only on x, y in framebuffer coordinates, and so is also independent of w.
However, perspective-correct interpolation of fragment attributes does depend on W_clip of the vertices, as this is used in the formula given in the Vulkan spec. As shown above, W_clip is wz, which does depend on w and scales with it, so we can conclude that the camera coordinates must be in normal form (their w must be 1.0).
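For reference, the interpolation rule the answer leans on has this shape (paraphrased from memory, so treat the notation as an approximation of the spec's): a fragment attribute a is interpolated perspective-correctly as
a = (λ_0*a_0/w_0 + λ_1*a_1/w_1 + λ_2*a_2/w_2) / (λ_0/w_0 + λ_1/w_1 + λ_2/w_2)
where λ_i are the screen-space barycentric coordinates and w_i are the clip-space w components of the three vertices. The division by each w_i is exactly where the real perspective information enters, which is why W_clip must be left intact.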

How to calculate correct plane-frustum intersection?

Question:
I need to calculate the intersection shape (purple) of a plane defined by Ax + By + Cz + D = 0 and a frustum defined by 4 rays emitting from the corners of a rectangle (red arrows). The result should be a quadrilateral (4 points), and an important requirement is that the result shape must be in the plane's local space. The plane is created with a transformation matrix T (the plane's normal is vec3(0, 0, 1) in T's space).
Explanation:
This is the perspective form of my rectangle projection to another space (transformation / matrix / node). I am able to calculate the intersection shape of any rectangle without perspective rays (all rays parallel) using a plane-line intersection algorithm (pseudocode):
Definitions:
// Plane defined by normal (A, B, C) and D
struct Plane { vec3 n; float d; };
// Line defined by 2 points
struct Line { vec3 a, b; };
Intersection:
vec3 PlaneLineIntersection(Plane plane, Line line) {
    vec3 ba = normalize(line.b - line.a);
    float dotA = dot(plane.n, line.a);
    float dotBA = dot(plane.n, ba);
    float t = (plane.d - dotA) / dotBA;
    return line.a + ba * t;
}
The perspective form comes with some problems, because some of the rays can be parallel to the plane (the intersection point is at infinity) or the final shape is self-intersecting. It works in some cases, but it's not enough for an arbitrary transformation. How do I get the correct intersection part of the plane with a perspective frustum?
Simply put, I need to get the visible part of an arbitrary plane as seen by an arbitrary perspective "camera".
Thank you for suggestions.
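As a side note on the parallel-ray problem described above, here is a guarded variant of the question's pseudocode, sketched in C++ with GLM (the epsilon threshold and the bool return convention are assumptions, not part of the question):

#include <cmath>
#include <glm/glm.hpp>

struct Plane { glm::vec3 n; float d; };   // points p on the plane satisfy dot(n, p) == d
struct Line  { glm::vec3 a, b; };

// Returns false when the line is (numerically) parallel to the plane, so there is no
// finite intersection point to report.
bool planeLineIntersection(const Plane& plane, const Line& line, glm::vec3& out) {
    glm::vec3 ba = glm::normalize(line.b - line.a);
    float denom = glm::dot(plane.n, ba);
    if (std::fabs(denom) < 1e-6f)
        return false;
    float t = (plane.d - glm::dot(plane.n, line.a)) / denom;
    out = line.a + ba * t;
    return true;
}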
Intersection between a plane (one Ax + By + Cz + D equation) and a line (two plane equations) is a matter of solving the 3x3 system for x, y, z.
Doing all calculations in T-space (origin at the top of the pyramid) is easier, as some of A, B, C are 0.
What I don't know is whether you are aware that perspective is a kind of projection that distorts the z ("depth", far from the origin). So if the plane that contains the rectangle is not perpendicular to the axis of the frustum (the z-axis), then it's not a rectangle when projected onto the plane, but a trapezoid.
Anyhow, using the projection perspective matrix you can get projected coordinates for the four rectangle corners.
To tell if a point is on one side of a plane or the other, just put the point's coordinates into the plane equation and check the sign, as shown here
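A one-line sketch of that sign test, using the plane convention from the question's pseudocode (dot(n, p) == d on the plane; the function name is hypothetical):

#include <glm/glm.hpp>

// > 0: on the normal's side, < 0: on the other side, == 0: on the plane
float sideOfPlane(const glm::vec3& n, float d, const glm::vec3& p) {
    return glm::dot(n, p) - d;
}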
Your question seems inherently mathematical, so excuse my mathematical solution on StackOverflow. If your four arrows emit from a single point and the formed side planes share a common angle, then you are looking for a solution to the frustum projection problem. Your requirements simplify the problem quite a bit because you define the plane with a normal, not two bounded vectors, thus if you agree to the definitions...
then I can provide you with the mathematical solution here (Internet Explorer .mht file, possibly requiring modern Windows OS). If you are thinking about an actual implementation then I can only direct you to a very similar frustum projection implementation that I have implemented/uploaded here (Lua): https://github.com/quiret/mta_lua_3d_math
The roadmap for the implementation could be as follows: creation of condition container classes for all sub-problems (0 < k1*a1 + k2, etc.) plus the and/or chains, writing algorithms for the comparisons across and-chains as well as normal-form creation, and optimization of object construction/memory allocation. Since each check for frustum intersection requires just a fixed amount of algebraic objects, you can implement an efficient cache.

Finding most distant point in circle from point

I'm trying to find the best way to get the most distant point of a circle from a specified point in 2D space. What I have found so far is how to get the distance between the point and the circle position, but I'm not entirely sure how to expand this to find the most distant point of the circle.
The known variables are:
Point a
Point b (circle position)
Radius r (circle radius)
To find the distance between the point and the circle position, I have found this:
xd = x2 - x1
yd = y2 - y1
Distance = SquareRoot(xd * xd + yd * yd)
It seems to me, this is part of the solution. How would this be expanded to get the position of Point x in the below image?
As an additional but optional part of the question: I have read in some places that it is possible to get the distance portion without using the square root, which is very performance intensive and should be avoided if fast code is necessary. In my case, I would be doing this calculation quite often; any comments on this within the context of the main question would be welcome too.
What about this?
1. Calculate A-B.
   We now have a vector pointing from the center of the circle towards A (if B is the origin, skip this and just consider point A a vector).
2. Normalize.
   Now we have a well-defined length (the length is 1).
3. If the circle is not of unit radius, multiply by the radius. If it is unit radius, skip this.
   Now we have the correct length.
4. Invert the sign (can be done in one step with 3., just multiply with the negative radius).
   Now our vector points in the correct direction.
5. Add B (if B is the origin, skip this).
   Now our vector is offset correctly, so its endpoint is the point we want.
(Alternatively, you could calculate B-A to save the negation, but then you have to do one more operation to offset the origin correctly.)
By the way, it works the same in 3D, except the circle would be a sphere, and the vectors would have 3 components (or 4 if you use homogeneous coords; in that case remember, for correctness, to set w to 0 when "turning points into vectors" and to 1 at the end when making a point from the vector).
EDIT:
(in reply of pseudocode)
Assuming you have a vec2 class, which is a struct of two floats with operators for vector subtraction and scalar multiplication (pretty trivial, around a dozen lines of code), and a function normalize, which needs to be no more than a shorthand for multiplying by inv_sqrt(x*x + y*y), the pseudocode (my pseudocode here is something like a C++/GLSL mix) could look something like this:
vec2 most_distant_on_circle(vec2 const& B, float r, vec2 const& A)
{
vec2 P(A - B);
P = normalize(P);
return -r * P + B;
}
Most math libraries that you'd use should have all of these functions and types built in. HLSL and GLSL have them as first-class primitives and intrinsic functions. Some GPUs even have a dedicated normalize instruction.
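For completeness, the same function with GLM's vec2 plus a tiny usage example (the library choice and the sample numbers are assumptions, just to show the call):

#include <cstdio>
#include <glm/glm.hpp>

glm::vec2 most_distant_on_circle(const glm::vec2& B, float r, const glm::vec2& A) {
    glm::vec2 P = glm::normalize(A - B);
    return B - r * P;
}

int main() {
    // circle centred at (3, 0) with radius 2, query point at the origin
    glm::vec2 x = most_distant_on_circle(glm::vec2(3.0f, 0.0f), 2.0f, glm::vec2(0.0f, 0.0f));
    std::printf("(%.1f, %.1f)\n", x.x, x.y);   // prints (5.0, 0.0)
    return 0;
}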

Transforming a 3D plane using a 4x4 matrix

I have a shape made out of several triangles which is positioned somewhere in world space with scale, rotate, translate. I also have a plane on which I would like to project (orthogonal) the shape.
I could multiply every vertex of every triangle in the shape with the objects transformation matrix to find out where it is located in world coordinates, and then project this point onto the plane.
But I don't need to draw the projection, so instead I would like to transform the plane with the inverse transformation matrix of the shape, and then project all the vertices onto the (inverse-transformed) plane, since that only requires me to transform the plane once and not every vertex.
My plane has a normal (xyz) and a distance (d). How do I multiply it with a 4x4 transformation matrix so that it turns out ok?
Can you create a vec4 as xyzd and multiply that? Or maybe create a vector xyz1, and then what do you do with d?
You need to convert your plane to a different representation. One where N is the normal, and O is any point on the plane. The normal you already know, it's your (xyz). A point on the plane is also easy, it's your normal N times your distance d.
Transform O by the 4x4 matrix in the normal way, this becomes your new O. You will need a Vector4 to multiply with a 4x4 matrix, set the W component to 1 (x, y, z, 1).
Also transform N by the 4x4 matrix, but set the W component to 0 (x, y, z, 0). Setting the W component to 0 means that your normals won't get translated. If your matrix is composed of more than just translating and rotating, then this step isn't so simple. Instead of multiplying by your transformation matrix, you have to multiply by the transpose of the inverse of the matrix, i.e. Matrix4.Transpose(Matrix4.Invert(Transform)); there's a good explanation on why here.
You now have a new normal vector N and a new position vector O. However I suppose you want it in xyzd form again? No problem. As before, xyz is your normal N all that's left is to calculate d. d is the distance of the plane from the origin, along the normal vector. Hence, it is simply the dot product of O and N.
There you have it! If you tell me what language you're doing this in, I'd happily type it up in code as well.
EDIT, In pseudocode:
The plane is vector3 xyz and number d, the matrix is a matrix4x4 M
vector4 O = (xyz * d, 1)
vector4 N = (xyz, 0)
O = M * O
N = transpose(invert(M)) * N
xyz = N.xyz
d = dot(O.xyz, N.xyz)
xyz and d represent the new plane
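A sketch of that procedure in C++ with GLM (the function name is hypothetical; the plane is (xyz, d) with points satisfying dot(xyz, p) == d, matching the answer's construction of O = xyz * d):

#include <glm/glm.hpp>

// Returns the transformed plane as (new xyz, new d).
glm::vec4 transformPlaneViaPoint(const glm::mat4& M, const glm::vec3& xyz, float d) {
    glm::vec4 O = M * glm::vec4(xyz * d, 1.0f);                            // a point on the plane, w = 1
    glm::vec4 N = glm::transpose(glm::inverse(M)) * glm::vec4(xyz, 0.0f);  // the normal, w = 0
    glm::vec3 n = glm::normalize(glm::vec3(N));   // re-normalize so d stays a true distance
    float newD = glm::dot(glm::vec3(O), n);
    return glm::vec4(n, newD);
}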
This question is a bit old, but I would like to correct the accepted answer.
You do not need to convert your plane representation.
Writing the plane as a 4-component vector p = (a, b, c, d), any point x = (x, y, z, 1) lies on the plane if a*x + b*y + c*z + d = 0.
It can be written as a dot product: dot(p, x) = 0, i.e. transpose(p)*x = 0.
You are looking for the plane p' transformed by your 4x4 matrix M, i.e. the plane containing the transformed points x' = M*x.
For the same reason, you must have transpose(p')*x' = 0 for every such point, i.e. transpose(p')*M*x = 0 whenever transpose(p)*x = 0.
So transpose(p')*M = transpose(p), and with some rearrangement p' = transpose(inverse(M))*p.
TLDR: if p = (a, b, c, d), then p' = transpose(inverse(M))*p
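A minimal sketch of that TLDR in C++ with GLM (the function name is hypothetical; the plane is packed as (a, b, c, d) with the convention dot(vec3(a, b, c), point) + d == 0):

#include <glm/glm.hpp>

glm::vec4 transformPlane(const glm::mat4& M, const glm::vec4& plane) {
    // p' = transpose(inverse(M)) * p
    return glm::transpose(glm::inverse(M)) * plane;
}

If the transformed normal needs unit length, divide the whole 4-component result by the length of its xyz part afterwards; that rescaling does not change which points satisfy the plane equation.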
Notation:
n is a normal represented as a (1x3) row-vector
n' is the transformed normal of n according to transform matrix T
(n|d) is a plane represented as a (1x4) row-vector (with n the plane's normal and d the plane's distance to the origin)
(n'|d') is the transformed plane of (n|d) according to transform matrix T
T is a (4x4) (affine) column-major transformation matrix (i.e. transforming a column-vector t is defined as t' = T t).
Transforming a normal n:
n' = n adj(T)
Transforming a plane (n|d):
(n'|d') = (n|d) adj(T)
Here, adj is the adjugate of a matrix which is defined as follows in terms of the inverse and determinant of a matrix:
T^-1 = adj(T)/det(T)
Note:
The adjugate is generally not equal to the inverse of a transformation matrix T. If T includes a reflection, det(T) = -1, reversing the winding order!
Re-normalizing n' is mathematically not required (though it may be numerically, depending on the implementation), since scaling is taken care of by the determinant. Thanks to Adrian Leonhard.
You can directly transform the plane without first decomposing and recomposing a plane (normal and point).
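A sketch of the adjugate form in C++ with GLM (core GLM has no adjugate helper, so it is built here from inverse and determinant, which assumes T is invertible; the function name is hypothetical):

#include <glm/glm.hpp>

glm::vec4 transformPlaneAdj(const glm::mat4& T, const glm::vec4& plane /* (n|d) as a row vector */) {
    glm::mat4 adjT = glm::inverse(T) * glm::determinant(T);   // adj(T) = det(T) * T^-1
    return plane * adjT;   // row-vector times matrix, as in (n'|d') = (n|d) adj(T)
}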
