Regarding the graphics pipeline

In the graphics pipeline, after the vertex shader come primitive assembly -> clipping to the view frustum -> normalized device coordinates -> viewport transformation.
Now, in the vertex shader we multiply the object coordinates by the modelview and projection matrices. "The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of sides 2 x 2 x 2, centered at the origin and aligned with the three coordinate axes). Typically, this will be either an orthographic projection or a perspective projection. This transform includes multiplication by the projection transformation matrix followed by a normalization of each vertex, calculated by dividing each vertex by its own w coordinate."
Now, if this is done in the vertex shader, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader? If not, what is the output of the projection matrix multiplied by the vertex coordinates?

I'm not sure I understand your question, but after you multiply your points by the modelview and projection matrices in the vertex shader, your points are in clip coordinates. This is done because the graphics hardware can then determine which primitives are visible and which are not. This is called clipping, and it is a separate step after the vertex shader. After this, it performs the perspective division (dividing the x, y, z coordinates by the homogeneous coordinate w; this is hard-wired in the GPU) to get normalized device coordinates in [-1, 1].
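To make the hand-off concrete, here is a small CPU-side sketch, assuming a simple row-major Mat4/Vec4 and an OpenGL-style perspective matrix (none of this is any particular library's API). The vertex shader's responsibility ends at the clip-space result; clipping and the divide by w are performed by the fixed-function stages that follow.

    // Minimal CPU-side sketch of what happens around the vertex shader stage.
    #include <cmath>
    #include <cstdio>

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[4][4]; };   // row-major

    Vec4 mul(const Mat4& a, const Vec4& v) {
        float in[4] = { v.x, v.y, v.z, v.w }, out[4] = {};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                out[i] += a.m[i][j] * in[j];
        return { out[0], out[1], out[2], out[3] };
    }

    // A standard OpenGL-style perspective projection matrix.
    Mat4 perspective(float fovY, float aspect, float zNear, float zFar) {
        const float f = 1.0f / std::tan(fovY * 0.5f);
        Mat4 p = {};
        p.m[0][0] = f / aspect;
        p.m[1][1] = f;
        p.m[2][2] = (zFar + zNear) / (zNear - zFar);
        p.m[2][3] = 2.0f * zFar * zNear / (zNear - zFar);
        p.m[3][2] = -1.0f;
        return p;
    }

    int main() {
        Mat4 proj = perspective(1.0f, 16.0f / 9.0f, 0.1f, 100.0f);

        // What the vertex shader outputs: clip coordinates (w is generally != 1).
        Vec4 viewPos = { 1.0f, 2.0f, -5.0f, 1.0f };
        Vec4 clip = mul(proj, viewPos);

        // What the fixed-function hardware does afterwards: clipping against
        // -w <= x, y, z <= w, then the perspective divide that yields NDC in [-1, 1].
        Vec4 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
        std::printf("clip = (%.3f %.3f %.3f %.3f), ndc = (%.3f %.3f %.3f)\n",
                    clip.x, clip.y, clip.z, clip.w, ndc.x, ndc.y, ndc.z);
    }

In a GLSL vertex shader you would stop at writing the clip-space position to gl_Position; the divide and the viewport transform happen automatically afterwards.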

Related

How can I calculate the Z value of triangles?

I am trying to implement a Z-buffer (depth buffer) for a polygon rasterization algorithm. All of my polygons are triangles, and I understand that the three points (x, y, z) that make up a triangle also form a plane. If I have the (x, y, z) values of the vertices, how would I calculate the depth of every position on the face of the triangle?
In OpenGL or WebGL the z-buffer is applied just after rasterization, i.e. per pixel, not per vertex of a triangle. In this case you store a z-value for each pixel and keep the fragment that passes the depth test (by default, the nearest one). This is done automatically in the pipeline.
If you want to calculate a z-value just for vertices, you need your own algorithm, for example taking the maximum z-value of a triangle's vertices and sorting the triangles by that value.
Also check this link for more info.
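If you do want the depth at every covered (x, y) yourself, as the question asks, one common approach is to take the plane through the three vertices and solve it for z at each position. A rough sketch, with a made-up Vec3 type:

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    // Depth of the triangle's plane at position (x, y).
    // The plane through v0, v1, v2 is n . (p - v0) = 0, so
    // z = v0.z - (n.x * (x - v0.x) + n.y * (y - v0.y)) / n.z.
    float depthAt(const Vec3& v0, const Vec3& v1, const Vec3& v2, float x, float y) {
        Vec3 e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
        Vec3 e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
        Vec3 n = cross(e1, e2);
        // If n.z is ~0 the triangle is edge-on and covers no pixels anyway.
        return v0.z - (n.x * (x - v0.x) + n.y * (y - v0.y)) / n.z;
    }

    int main() {
        Vec3 a = {0, 0, 1}, b = {10, 0, 2}, c = {0, 10, 3};
        std::printf("z at (2, 2) = %f\n", depthAt(a, b, c, 2.0f, 2.0f));
    }

Note that with a perspective projection this is normally done with the post-projection (screen-space) vertices, since the stored depth value varies linearly across the triangle in that space.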

What are polygon "sleeves" called, and how are they calculated?

When a polygon gets rotated, it skips all the possible in-between rotations between the current and the desired orientation. Here are 3 images illustrating what I mean:
This is the current polygon:
Rotating it 45 degrees clockwise would result in:
The current polygon rotated by 45 degrees clockwise, with all possible in-between situations, would result in:
What are these "sleeves" (in-between situations) actually called, and how are these "complete polygons" calculated/approximated from the current polygon and the desired angle of rotation?
In the CAD industry, we would call this operation a 2d sweep, or a 2d planar sweep; in this case a 2d planar rotational sweep. (Not to be confused with a sweep line algorithm.) The resulting area would be the 2d swept area or 2d swept face, and the outline called the 2d swept boundary.
A couple of papers on the topic can be found here:
Polygonal boundary approximation for a 2D general sweep based on envelope and boolean operations (2000).
Approximate General Sweep Boundary of a 2D Curved Object (1993).
Your case of a 2d rotational sweep is not as general as the cases considered in these papers. If you think about sweeping just a single curve -- a line or an arc, say -- then the boundary curves of the swept area in 2d will be as follows. Imagine sweeping the curve in 3d, where the curve is simultaneously extruded along the axis of rotation as it is rotated around the axis. In that case, the boundaries of your 2d sweep would be the boundaries of the 3d swept surface projected back to 2d, plus the 3d silhouettes of the swept surface projected back into 2d, taking the axis of rotation as the view vector for silhouette creation.
Computing silhouettes of general surfaces is nontrivial, but for a rotational sweep + extrusion along the axis of rotation the silhouettes will be traced out by points where the tangent of the swept curve is parallel to the direction of rotation -- i.e., perpendicular to the radius vector drawn out from the center of rotation. Thus an algorithm to compute your 2d area might look like:
1. For each edge segment of the area to be rotationally swept:
   - Split the edge where the tangent becomes parallel to the direction of rotation (for the straight-edge case, see the sketch after this list).
   - Exclude any degenerate curves -- arcs that are coaxial with the axis of rotation.
2. For each split edge segment, form a 2d area composed of the start position of the curve, the end position of the curve, and the start and end points connected by arcs. Since we split at the silhouette points, there should be no self-intersections.
3. Do a 2d boolean of the 2d area in its start position, its end position, and the swept areas created in the first steps.
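As a small illustration of the splitting step for the simplest case, a straight edge: the tangent of a segment is constant, so the split point (if any) is where the radius vector from the rotation center is perpendicular to the segment direction, i.e. the closest point of the segment's line to the center, clamped to the segment. A hedged sketch (the Vec2 type and function name are made up for the example):

    #include <algorithm>
    #include <cstdio>

    struct Vec2 { double x, y; };

    // Returns the parameter t in [0, 1] along segment a->b where the tangent is
    // perpendicular to the radius vector from 'center' (i.e. parallel to the
    // instantaneous direction of rotation). If t lands strictly inside (0, 1),
    // the edge should be split there before sweeping.
    double silhouetteParam(const Vec2& a, const Vec2& b, const Vec2& center) {
        Vec2 d = { b.x - a.x, b.y - a.y };            // constant tangent of the segment
        Vec2 r = { center.x - a.x, center.y - a.y };
        double len2 = d.x * d.x + d.y * d.y;
        double t = (r.x * d.x + r.y * d.y) / len2;    // projection of center onto the line
        return std::clamp(t, 0.0, 1.0);
    }

    int main() {
        Vec2 a = {1.0, -1.0}, b = {1.0, 1.0}, center = {0.0, 0.0};
        double t = silhouetteParam(a, b, center);
        std::printf("split at t = %f -> (%f, %f)\n",
                    t, a.x + t * (b.x - a.x), a.y + t * (b.y - a.y));
    }

For an arc, the same condition can hold at up to two points (the near and far points of its circle relative to the rotation center).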

What happened in rasterizer stage?

I want to use Direct3D 11 to blend several images from multiple views into one texture, so I do multiple projections in the Vertex Shader and Geometry Shader stages; one projection's result is stored in SV_Position, the others in POSITION0, POSITION1 and so on. These positions are then used to sample the images.
Then, at the Pixel Shader stage, the value in SV_Position is typically something like (307.5, 87.5), because it is in screen space. As the size of the render target is 500x500, the uv for sampling is (0.615, 0.175), which is correct. But a value in POSITION0 is something like (0.1312, 0.370); it is vertically flipped and offset, so I have to do (0.5 + x, 0.5 - y). The projection is distorted and only roughly matches.
What does the rasterizer stage do to SV_Position?
The rasterizer stage expects the coordinates in SV_Position to be normalized device coordinates. In this space X and Y values between -1.0 and +1.0 cover the whole output target, with Y going "up". That way you do not have to care about the exact output resolution in the shaders.
So, as you realized, before a pixel is written to the target another transformation is performed: one that inverts the Y axis, scales X and Y, and moves the origin to the top-left corner.
In Direct3D11 the parameters for this transformation can be controlled through the ID3D11DeviceContext::RSSetViewports method.
If you need pixel coordinates in the pixel shader, you have to do that transformation yourself. To access the output resolution in the shader, bind it as a shader constant, for example.
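To make that concrete, here is a rough sketch of the transform the rasterizer applies to SV_Position after clipping, and the manual equivalent you would apply to a clip-space value passed through POSITION0. The Viewport struct just mirrors the fields of D3D11_VIEWPORT; this illustrates the math, it is not the driver's actual code.

    #include <cstdio>

    struct Viewport {           // same fields as D3D11_VIEWPORT
        float topLeftX, topLeftY, width, height, minDepth, maxDepth;
    };

    struct Float4 { float x, y, z, w; };

    // What the rasterizer does to SV_Position after clipping:
    // perspective divide, then NDC -> window coordinates (Y flipped, origin top-left).
    Float4 toWindow(const Float4& clip, const Viewport& vp) {
        float x = clip.x / clip.w, y = clip.y / clip.w, z = clip.z / clip.w;
        return { vp.topLeftX + (x * 0.5f + 0.5f) * vp.width,
                 vp.topLeftY + (0.5f - y * 0.5f) * vp.height,
                 vp.minDepth + z * (vp.maxDepth - vp.minDepth),
                 clip.w };
    }

    // The same idea applied by hand to a clip-space value passed through POSITION0,
    // producing a [0, 1] uv: this is the "(0.5 + x, 0.5 - y)" correction from the
    // question, plus the divide by w.
    void clipToUv(const Float4& clip, float& u, float& v) {
        u = clip.x / clip.w * 0.5f + 0.5f;
        v = 0.5f - clip.y / clip.w * 0.5f;
    }

    int main() {
        Viewport vp = {0, 0, 500, 500, 0, 1};
        Float4 clip = {0.23f, 0.65f, 0.5f, 1.0f};
        Float4 win = toWindow(clip, vp);
        float u, v; clipToUv(clip, u, v);
        std::printf("window = (%f, %f), uv = (%f, %f)\n", win.x, win.y, u, v);
    }

With the 500x500 viewport from the question, a clip-space (0.23, 0.65) maps to the (307.5, 87.5) you observed in SV_Position, and the manual path gives the (0.615, 0.175) uv directly.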

How is 3D plane normal vector related to its rotation

What I am trying to do: http://www.youtube.com/watch?v=CaTI2d0tQME (at 3:15).
In my 3D API there is quad.rotation[x,y,z], quad[x,y,z] (which is its center), and width/height. I understand that the vertices are calculated from all of the above, and the normal can be calculated from the vertices, but I have a feeling I should be able to get it just from the rotation?
Yes, you can!
Your quad is axis-oriented in its local space (along the X, Y or Z axis), so that axis is its local normal vector. Transform this vector by the quad's rotation matrix and you will have your new, nice and shiny normal vector in world space!
A little warning: if the quad's transformation matrix is generated by a 3D engine, it could contain scaling factors that will mess the normal vector up. In this case, the classical solution is to transform the normal by the inverse transpose of the matrix, or to build your own transformation matrix from the quad's rotation values.
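For example, a small sketch, assuming quad.rotation holds Euler angles in radians applied in X, then Y, then Z order (check what your API actually uses) and that the quad's local normal is the +Z axis:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Applies a rotation built from Euler angles (radians) in X, then Y, then Z
    // order to a vector. The rotation order is an assumption.
    Vec3 rotate(const Vec3& v, float rx, float ry, float rz) {
        // Rotate around X
        Vec3 a = { v.x,
                   v.y * std::cos(rx) - v.z * std::sin(rx),
                   v.y * std::sin(rx) + v.z * std::cos(rx) };
        // Rotate around Y
        Vec3 b = { a.x * std::cos(ry) + a.z * std::sin(ry),
                   a.y,
                  -a.x * std::sin(ry) + a.z * std::cos(ry) };
        // Rotate around Z
        return { b.x * std::cos(rz) - b.y * std::sin(rz),
                 b.x * std::sin(rz) + b.y * std::cos(rz),
                 b.z };
    }

    int main() {
        // Local-space normal of an axis-oriented quad lying in the XY plane.
        Vec3 localNormal = {0.0f, 0.0f, 1.0f};
        // quad.rotation, assumed to be Euler angles in radians.
        float rx = 0.5f, ry = 0.25f, rz = 0.0f;
        Vec3 n = rotate(localNormal, rx, ry, rz);
        std::printf("world normal = (%f, %f, %f)\n", n.x, n.y, n.z);
    }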

How to map points in a 3D-plane into screen plane

I have been given an assignment to project an object in 3D space onto a 2D plane using simple graphics in C. A cube is placed at a fixed position in 3D space, and there is a camera placed at a position whose coordinates are x, y, z; the camera is looking at the origin, i.e. (0,0,0). Now we have to project the cube's vertices onto the camera plane.
I am proceeding with the following steps:
Step 1: I find the equation of the plane aX + bY + cZ + d = 0 which is perpendicular to the line drawn from the camera position to the origin.
Step 2: I find the projection of each vertex of the cube onto the plane obtained in the step above.
Now I want to map those vertex positions, which I got in step 2 by projecting onto the plane aX + bY + cZ + d = 0, onto my screen plane. I don't think that simply setting the z coordinate to zero will give me the actual mapping, so any help figuring this out would be appreciated.
Thanks.
You can do that in two simple steps:
1. Translate the cube's coordinates into the camera's coordinate system (using a rotation), such that the camera's own coordinates in that system are x = y = z = 0 and the cube's translated z values are > 0.
2. Project the translated cube's coordinates onto a 2d plane by dividing its x's and y's by their respective z's (you may need to apply a constant scaling factor here for the coordinates to be reasonable for the screen, e.g. not too small and within +/- half the screen's height in pixels; see the sketch at the end of this answer). This will create the perspective effect. You can now draw pixels using these divided x's and y's on the screen, assuming x = y = 0 is the center of it.
This is pretty much how it is done in 3d games. If you use cube vertex coordinates, then you get projections of its sides onto the screen. You may then solid-fill the resultant 2d shapes or texture-map them. But for that you'll have to first figure out which sides are not obscured by others (unless, of course, you use a technique called z-buffering). You don't need that for a simple wire-frame demo, though, just draw straight lines between the projected vertices.
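A rough sketch of those two steps, assuming a world up vector of (0, 1, 0) for building the camera basis (this breaks if the camera sits on the Y axis) and an arbitrary 'focal' scaling factor and screen size chosen for the example:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x }; }
    Vec3 normalize(Vec3 a)     { double l = std::sqrt(dot(a, a));
                                 return { a.x / l, a.y / l, a.z / l }; }

    // Projects a world-space point onto the screen of a camera placed at 'cam'
    // and looking at the origin.
    void project(Vec3 p, Vec3 cam, double focal, double screenW, double screenH,
                 double& sx, double& sy) {
        // Step 1: build the camera basis and move the point into camera space.
        Vec3 forward = normalize(sub({0, 0, 0}, cam));       // camera -> origin
        Vec3 right   = normalize(cross({0, 1, 0}, forward)); // world up assumed (0, 1, 0)
        Vec3 up      = cross(forward, right);
        Vec3 d = sub(p, cam);
        double xc = dot(d, right), yc = dot(d, up), zc = dot(d, forward); // zc > 0 in front

        // Step 2: perspective divide and mapping to the screen, y flipped so that
        // +y in camera space points up on the screen.
        sx = screenW * 0.5 + focal * xc / zc;
        sy = screenH * 0.5 - focal * yc / zc;
    }

    int main() {
        Vec3 cam = {4, 3, 5};
        // One vertex of a unit cube centered at the origin.
        Vec3 v = {0.5, 0.5, 0.5};
        double sx, sy;
        project(v, cam, 400.0, 640.0, 480.0, sx, sy);
        std::printf("screen position: (%f, %f)\n", sx, sy);
    }

Run this for each of the cube's 8 vertices and draw straight lines between the projected pairs that share an edge to get the wire-frame.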
