I am a beginner in polygon based computer graphics. Whatever I read I always come across the term vertex shading.
What is it? As far as I know, vertices are the points where two edges of a polygon meet. So how do you shade a vertex (it's just a point)?
Please explain
Good question; the term "vertex shader" is indeed a misnomer. The term "shader" is used for any program that typically runs on the GPU (as opposed to the CPU). The first incarnations of these were pixel shaders, also known as fragment shaders, where the name still made sense.
Then vertex shaders were invented, but they don't actually shade anything; they have the ability to transform a vertex's position in space, and they can pass down per-vertex data to the pixel shader. "Vertex program" would be a better name, but the word "shader" apparently stuck.
I am very new to shaders and to programming in Direct3D 11 (C++) and HLSL. However, I have been given a task to:
Implement cube mapping of a static environment onto a complex model (not a cube). Cube mapping allows an object to reflect the scene around it.
There aren't many resources online, so can anyone please tell me the steps to follow to achieve correct cube mapping? I'm most concerned about the calculations to do on the HLSL side.
For a very basic environment mapping, all you need to do is:
Compute the position and surface normal of the current pixel (in the pixel shader) in world space
Compute the (normalised) view direction (world space pixel position - world space camera position)
Compute the reflection vector from the view direction and surface normal (there is a built-in HLSL function to do that, if you don't want to do the math yourself)
Sample the cube map with that reflection vector, and return that color.
This then works like a mirror: the reflection vector is the direction in which your line of sight would be reflected if the surface of your mesh were a perfect mirror, and you then ask the cube map what color lies in that direction (i.e., what reflection you're seeing). How simple or complex your mesh shape is doesn't matter here, because you're only ever looking at one pixel of that (rasterized) mesh at a time, using that pixel's surface normal as a guide.
More advanced environment-mapping techniques will then blur the reflection based on surface roughness (usually by sampling different mip levels of your cube map), merge the color with the other light/color computations for that pixel, add indirect environment-map lighting (which requires sampling a different, specially pre-computed cube map directly with the surface normal), and so on. That's where all the papers come into play, but the basic concept of environment mapping is just a few lines of code and is very straightforward.
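For the basic version described above, the core arithmetic is tiny. Here is a hedged C++ sketch of just the reflection-vector math (the Vec3 type and function names are illustrative, not from any real API); in actual HLSL this is the reflect() intrinsic, after which you sample your cube map with the resulting direction (the HLSL variable names in the closing comment are likewise only placeholders):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// viewDir points from the camera toward the pixel's world-space position;
// both inputs are assumed to be normalized.
static Vec3 reflectDir(const Vec3& viewDir, const Vec3& normal) {
    float d = dot(viewDir, normal);
    return { viewDir.x - 2.0f * d * normal.x,
             viewDir.y - 2.0f * d * normal.y,
             viewDir.z - 2.0f * d * normal.z };
}

// In the pixel shader, the equivalent HLSL is roughly:
//   float3 v = normalize(worldPos - cameraPos);
//   float3 r = reflect(v, normalize(worldNormal));
//   return cubeMap.Sample(cubeSampler, r);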
So a polygon mesh is defined as the following:
class Triangle {
    int vertices[3];   // vertex indices
    float nx, ny, nz;  // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes because the face plane normal can be easily converted to a point in the middle of the face. I read somewhere that normals don't have positions?
For 2 I said a ball; more gentle angles
And 3 a box; steeper angles.
I don't know, I don't think I really understand what the normal vector is.
1. Mostly yes. From the point of view of geometry computations this is fine; from the rendering side, however, keeping triangles in index-only form can sometimes be problematic (it depends on the rendering engine, hardware, etc.). It is often faster to store the triangle's points directly as vectors instead of just indices; sometimes a triangle stores both, but that wastes space.
2. It depends on how you classify what is OK and what is not. Smooth objects like a sphere will look visibly faceted, while flat-sided meshes like a cube will render without visible distortion in shape (but with flat-shaded colors only, so the lighting will be corrupted). So the answer depends on what you want to achieve: less lighting error, better shape recognition, or something else. Basically, using one normal per face turns Gouraud shading into flat shading. Lighting can be improved by dividing big flat surfaces into more triangles.
3. This is unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what "good" and "bad" mean...
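To make the contrast concrete, here is a minimal C++ sketch of the two layouts (the type names are illustrative, not from the book this question quotes). The first stores one normal per face, which forces flat shading; the second stores a normal per vertex, which is what Gouraud (and Phong) shading interpolate:

// Per-face normal: one normal shared by the whole triangle.
// Gouraud shading evaluated with this layout degenerates into flat shading,
// because all three corners of a face receive the same normal.
struct FlatTriangle {
    int   vertices[3];   // vertex indices
    float nx, ny, nz;    // single face-plane normal
};

// Per-vertex normals: each corner carries its own (typically averaged) normal,
// so Gouraud/Phong shading can interpolate smoothly across the surface.
struct SmoothVertex {
    float px, py, pz;    // position
    float nx, ny, nz;    // normal averaged from the adjacent faces
};

struct SmoothTriangle {
    int vertices[3];     // indices into an array of SmoothVertex
};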
Background
Using gluTess to build a triangle list in Direct3D9 from a GDI+ DrawString(..) path:
A pixel shader (v3.0) is then used to fill in the shape. When painting with opaque values, everything looks fine:
The problem
At certain font sizes, if the color has an alpha component (i.e. ARGB #55FFFFFF), we begin to see these nasty tessellation artifacts where triangles may overlap ever so slightly:
At larger font sizes the problem is sometimes not present:
Using Intel's excellent GPA Frame Analyzer Pixel History tool, we can see in areas where the artifacts occur, the pixel has been "touched" 3 times from the single Erg.
I'm trying to figure out how I can stop my pixel shader from touching the same pixel more than once.
Other solutions relating to overdraw prevention seem to be all about z-buffer strategies; however, this problem has more to do with the painting of a single 2D triangle list within a single pixel-shader pass.
I'm at a bit of a loss trying to come up with a solution on this one. I was hoping that HLSL might have some sort of "touch each pixel only once" flag, but I've been unable to find anything like that. The closest I've found was to set the BLENDOP to MAX instead of ADD. But the output is not correct when blending over other colors in the scene.
I also have SRCBLEND = ONE, DSTBLEND = INVSRCALPHA; it's the only combination of flags that produces correct output (albeit with the overdraw artifacts).
I have played with SEPARATEALPHABLENDENABLE in the GPA Frame Analyzer, which sounded like almost exactly what I need here (set blending to MAX, but only on the alpha channel); however, from what I can determine, that setting (and the corresponding BLENDOPALPHA) has no effect at all.
One final thing I thought of was to bake the text as opaque onto a texture, and then repaint that texture into the scene with the appropriate alpha value applied. However, that doesn't actually work in this project, because I also support gradient brushes, where stop values may contain alpha: either the artifacts would still be seen, or the final output would be plain wrong if we stripped the alpha away from the stop values prior to baking to a texture. The whole endeavor would also be hideously expensive.
Any hints or pointers would be appreciated. Thanks for reading.
The problem you're seeing shouldn't happen.
If two of your triangles are overlapping it's because you've placed the vertices in such a way that when the adjacent triangles are drawn, they overlap. What's probably happening is that these two adjacent triangles share two vertices, but each triangle has its own copy of each vertex that's been calculated to be in a very, very slightly different position.
The solution to the problem isn't to try to make the pixel shader touch each pixel only once; it's to use an index buffer (if you aren't already) and have the shared vertices between adjacent triangles actually reference the same vertex, rather than one that's ever-so-slightly out of place relative to the one used by the adjacent triangle.
If you aren't in control of the tessellation algorithm being used, you may have to run a pass over the vertex buffer after it's been generated to detect and merge vertices that lie within some very small tolerance of one another. Even without an index buffer, a naive solution would be this:
For each vertex in the vertex buffer, compare its position to every other vertex in the rest of the vertex buffer.
If two vertices are within some small tolerance of one another, replace the second vertex's position with the position of the one you are comparing it against.
This should have the effect of pairing up the positions of two vertices if they are close enough that you deem them to be the same.
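A minimal C++ sketch of that brute-force weld (names and tolerance handling are illustrative; it is O(n^2), so for large meshes a spatial hash would be preferable):

#include <vector>

struct Vertex { float x, y, z; };

// Brute-force weld: snap every vertex that lies within `tolerance`
// of an earlier vertex onto that earlier vertex's position.
void weldVertices(std::vector<Vertex>& verts, float tolerance)
{
    const float tol2 = tolerance * tolerance;
    for (size_t i = 0; i < verts.size(); ++i) {
        for (size_t j = i + 1; j < verts.size(); ++j) {
            const float dx = verts[j].x - verts[i].x;
            const float dy = verts[j].y - verts[i].y;
            const float dz = verts[j].z - verts[i].z;
            if (dx * dx + dy * dy + dz * dz <= tol2) {
                verts[j] = verts[i];   // snap to the earlier vertex
            }
        }
    }
}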
You now shouldn't have any problem with overlapping triangles. In everyday rendering, two triangles share edges with each other all the time, and you won't ever get the effect where they appear to ever-so-slightly overlap. The hardware guarantees that a sample point is either on one side of the line or the other, but never both at the same time, no matter how close the point is to the line (even if it's mathematically on the line, it still falls on one side or the other).
I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching them in a scene with one directional light source and 125 spheres, and I cannot see any difference whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all the information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" also names an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as #RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
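To see the two interpolation schemes side by side, here is a hedged C++ sketch (all names are illustrative); u, v, w are the pixel's barycentric weights inside the triangle (summing to 1), and the "color" is reduced to a single Lambert diffuse intensity for brevity:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}
static Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c, float u, float v, float w) {
    return { u*a.x + v*b.x + w*c.x, u*a.y + v*b.y + w*c.y, u*a.z + v*b.z + w*c.z };
}

// Lambert diffuse term for a given normal and light direction.
static float lambert(const Vec3& n, const Vec3& l) {
    return std::max(0.0f, dot(normalize(n), normalize(l)));
}

// Gouraud: shade at the three vertices, then interpolate the resulting colors.
float gouraudPixel(const Vec3 n[3], const Vec3& l, float u, float v, float w) {
    return u * lambert(n[0], l) + v * lambert(n[1], l) + w * lambert(n[2], l);
}

// Phong: interpolate the normal, then shade once per pixel with it.
float phongPixel(const Vec3 n[3], const Vec3& l, float u, float v, float w) {
    return lambert(lerp3(n[0], n[1], n[2], u, v, w), l);
}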
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. (Gouraud shading; (generalized) Lambert illumination model)
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed at each fragment (pixel). (Phong shading; (generalized) Phong illumination model)
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js r.66
In computer graphics, it is very common to confuse the Phong reflection model with Phong shading. While the former is a model of the local illumination of points, like the Lambertian model, the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here's a list of detailed articles on each of these topics:
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look at the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than the generated program code.
An old Direct3D book says
"...you can achieve an acceptable frame
rate with hardware acceleration while
displaying between 2000 and 4000
polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want a polygon to be is a triangle, because if a polygon is not a triangle it's generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). A basic triangulation (tessellation) algorithm for a convex polygon is really simple: you keep the first vertex fixed and loop through the rest, turning each successive pair of vertices into a triangle with it (a triangle fan).
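As a minimal C++ sketch of that fan triangulation (the names are illustrative, and it assumes a convex polygon):

#include <vector>

struct Tri { int a, b, c; };   // indices into the polygon's vertex list

// Fan triangulation of a convex polygon with n vertices (n >= 3):
// keep vertex 0 fixed and pair it with each successive edge.
std::vector<Tri> triangulateFan(int n)
{
    std::vector<Tri> tris;
    for (int i = 1; i + 1 < n; ++i) {
        tris.push_back({ 0, i, i + 1 });
    }
    return tris;   // a quad (n = 4) yields {0,1,2} and {0,2,3}
}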
Here, a "polygon" refers to a triangle. However, as you point out, there are many more variables than just the number of triangles that determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can create much greater than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2,000-4,000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw-primitive call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
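As a rough, hedged illustration of the first three rules (an indexed triangle list, GPU-memory buffers, one large batch), a Direct3D 9 sketch might look like the following. It assumes an existing IDirect3DDevice9* and already-filled vertex/index arrays, and in real code the buffers would be created and filled once at load time and then reused every frame, not rebuilt per draw call as shown here for self-containment:

#include <d3d9.h>
#include <cstring>
#include <vector>

struct Vertex { float x, y, z; DWORD color; };
const DWORD kFvf = D3DFVF_XYZ | D3DFVF_DIFFUSE;

void drawBatch(IDirect3DDevice9* device,
               const std::vector<Vertex>& vertices,
               const std::vector<WORD>& indices)
{
    IDirect3DVertexBuffer9* vb = nullptr;
    IDirect3DIndexBuffer9*  ib = nullptr;

    // GPU-memory (D3DPOOL_DEFAULT), write-only static buffers.
    device->CreateVertexBuffer(UINT(vertices.size() * sizeof(Vertex)),
                               D3DUSAGE_WRITEONLY, kFvf, D3DPOOL_DEFAULT, &vb, nullptr);
    device->CreateIndexBuffer(UINT(indices.size() * sizeof(WORD)),
                              D3DUSAGE_WRITEONLY, D3DFMT_INDEX16, D3DPOOL_DEFAULT, &ib, nullptr);

    void* p = nullptr;
    vb->Lock(0, 0, &p, 0);
    std::memcpy(p, vertices.data(), vertices.size() * sizeof(Vertex));
    vb->Unlock();

    ib->Lock(0, 0, &p, 0);
    std::memcpy(p, indices.data(), indices.size() * sizeof(WORD));
    ib->Unlock();

    // One large indexed triangle-list batch.
    device->SetFVF(kFvf);
    device->SetStreamSource(0, vb, 0, sizeof(Vertex));
    device->SetIndices(ib);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 UINT(vertices.size()), 0,
                                 UINT(indices.size() / 3));

    ib->Release();
    vb->Release();
}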
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
Device device,
float length,
int sides
)
As others already said, polygons here means triangles.
The main advantage of triangles is that, since three points define a plane, a triangle is coplanar by definition. This means that every point within the triangle is exactly defined as a linear combination of its vertices. Four or more vertices aren't necessarily coplanar, and they don't define a unique surface.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also undeformable.
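As a small C++ sketch of those two facts (names illustrative): the cross product of two edges gives the triangle's unique plane normal, and any point of the triangle is a barycentric, i.e. linear, combination of its three vertices:

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// The (unnormalized) plane normal of a triangle: three points always fix a plane.
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    return cross(sub(b, a), sub(c, a));
}

// Any point of the triangle is a linear (barycentric) combination of its
// vertices, with non-negative weights u, v, w summing to 1.
Vec3 barycentricPoint(const Vec3& a, const Vec3& b, const Vec3& c,
                      float u, float v, float w) {
    return { u * a.x + v * b.x + w * c.x,
             u * a.y + v * b.y + w * c.y,
             u * a.z + v * b.z + w * c.z };
}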