I'm programming a simple model loader in WebGL (using GLSL shaders). I've implemented Phong shading in the fragment shader. However, when I load objects larger than a simple monkey or cube and turn the camera away from the light source, the meshes look strange (aliased?). Some of them are even lit although they should be dark (black).
The lit side is OK:
The other side is wrong:
I calculate normals for every vertex the same way, so the normals should be OK (when I turn the camera to the lit side of the car, everything looks right).
Thank you very much for your tips.
This looks like a single-sided vs. two-sided lighting issue to me. If your mesh consists of only a single "layer" of faces, those faces have normals pointing in only one direction. If single-sided lighting is used, the back face, i.e. the side the normal points away from, will look weird when the light is on that side.
There are three ways to overcome this:
Use two-sided illumination (see the GLSL sketch after this list)
Draw the object twice: first with back faces culled, then with the normals flipped and the front faces culled
Give the mesh thickness, so that there are two distinct sides (you should enable backface culling then)
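For the first option, here is a minimal two-sided diffuse sketch in GLSL ES 1.00 (as used by WebGL 1); the varying names are assumptions, not taken from your code:

precision mediump float;

varying vec3 vNormal;    // interpolated surface normal (eye space)
varying vec3 vLightDir;  // direction from the fragment toward the light

void main() {
    vec3 n = normalize(vNormal);
    // Flip the normal for back faces so both sides receive light.
    if (!gl_FrontFacing) {
        n = -n;
    }
    float diffuse = max(dot(n, normalize(vLightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}

The only point is that the lighting term always uses a normal facing the viewer; the rest of your Phong shader can stay as it is.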
I think I found a bug in my Collada parser: I don't respect the exported normals but calculate new ones instead, which inverts normals from time to time (the door mesh of this car, for example). Anyway, two-sided rendering has to be implemented too.
Thank you.
This is a question to understand the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia and Direct2D offer both CPU- and GPU-based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to have a policy of minimizing draw calls by trying to fit any geometry type into a single buffer, whereas Skia has some dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible (which will need a new draw call if the previous element you were rendering was a rectangle, since you change pipeline state).
D2D, on the other side, tends to tessellate on the CPU and push the result into a vertex buffer. It switches to a new draw call only if you change brush type (if you change from one solid color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or if you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small triangles that carry alpha). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers to use triangle strips instead of triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call some draw-text functions several times in a row, it will try to merge them into a single call as well.
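As for the question "are rounded rectangles drawn entirely by wild pixel shaders?": that is the other common approach, not what D2D does above, but worth seeing once. Here is a minimal sketch using a signed distance function evaluated per fragment over a single quad (GLSL ES 1.00; the uniform names are illustrative, not from Skia or Direct2D):

precision mediump float;

uniform vec2 uCenter;    // rectangle center in pixels
uniform vec2 uHalfSize;  // half width/height in pixels
uniform float uRadius;   // corner radius in pixels

// Signed distance from point p to a rounded rectangle centered at origin.
float roundedRectSDF(vec2 p, vec2 halfSize, float r) {
    vec2 q = abs(p) - halfSize + vec2(r);
    return length(max(q, 0.0)) + min(max(q.x, q.y), 0.0) - r;
}

void main() {
    float d = roundedRectSDF(gl_FragCoord.xy - uCenter, uHalfSize, uRadius);
    // Smooth the edge over ~1 pixel: anti-aliasing without MSAA.
    float alpha = 1.0 - smoothstep(-0.5, 0.5, d);
    gl_FragColor = vec4(vec3(1.0), alpha);
}

The smoothstep over the distance is the shader-side counterpart of D2D's small alpha-blended triangles.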
So a polygon mesh is defined as the following:
class Triangle {
    int vertices[3];   // vertex indices
    float nx, ny, nz;  // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for #1, I said yes, because the face-plane normal can easily be converted to a point in the middle of the face. But I read somewhere that normals don't have positions?
For #2 I said a ball: gentler angles. And for #3 a box: steeper angles.
I don't know, I don't think I really understand what the normal vector is.
1. Mostly yes.
From the geometry-computation side this is OK. However, from the rendering side, having triangles in index-only form can sometimes be problematic (it depends on the rendering engine, HW, etc.). It is usually faster to have the triangle points stored directly as vectors instead of just indices; sometimes a triangle contains both, but that wastes space.
2. Depends on how you classify what is OK and what is not.
Smooth objects like a sphere will look like this:
while flat-sided meshes like a cube will be rendered without visible distortion in shape (but with flat-shaded colors only, so the lighting will be wrong).
So the answer depends on what you want to achieve: less lighting error, better shape recognition, or something else. Basically, using one normal per face turns Gouraud shading into flat shading.
Lighting can be improved by dividing big flat surfaces into more triangles.
3. Is unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what good and bad mean...
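To make the Gouraud-becomes-flat point concrete, here is a hedged vertex-shader sketch (GLSL ES 1.00; attribute and uniform names are assumptions). If all three vertices of a triangle are fed the same face normal (the nx, ny, nz above), the per-vertex diffuse term is identical at all three corners, so the interpolated color is constant across the triangle, i.e. flat shading:

attribute vec3 aPosition;
attribute vec3 aNormal;       // per-vertex normal, or the shared face normal
uniform mat4 uModelViewProj;
uniform vec3 uLightDir;       // normalized direction toward the light
varying vec3 vColor;

void main() {
    // Gouraud: evaluate lighting here, once per vertex, then interpolate.
    float diffuse = max(dot(normalize(aNormal), uLightDir), 0.0);
    vColor = vec3(diffuse);
    gl_Position = uModelViewProj * vec4(aPosition, 1.0);
}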
There are some 3D applications which can cast a shadow or silhouette below 3D models. They render pretty fast and smoothly. I wonder what the standard procedure is for getting a 3D model's shadow/silhouette.
For example is there any C++ library like libigl or CGAL to get shadow/silhouette pretty fast? Or maybe GLSL shading is used? Any hint would be appreciated on the standard technology stack.
For rendering, it's trivial. Just project the vertices onto the surface (for the case of the XY plane, this just means setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering, that won't matter.
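A minimal vertex-shader sketch of that projection, assuming the ground is the XY plane at z = 0 as above (GLSL ES 1.00; names are illustrative):

attribute vec3 aPosition;
uniform mat4 uViewProj;

void main() {
    // Drop the model straight onto the z = 0 plane.
    vec3 flattened = vec3(aPosition.xy, 0.0);
    gl_Position = uViewProj * vec4(flattened, 1.0);
}

Drawing those triangles in a constant dark color gives the silhouette; replacing the simple flattening with a projection matrix built from the light position generalizes this to shadows from a point light.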
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
Computing shadows is a vast and difficult topic. In the real world, light sources are extended, so shadow edges are not sharp (there is penumbra). Then there are cast shadows, and even self-shadowing.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: if you place an observer at the light source, the faces it sees are illuminated by that light source. Conversely, the hidden surfaces are in shadow.
For correct rendering, the shadowed areas should be back-projected to the scene and painted black.
By nature, ray-tracing techniques make this process easy to implement.
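The same "observer at the light" principle is what shadow mapping rasterizes: render depth from the light's point of view, then compare against it per fragment. A hedged sketch (GLSL ES 1.00, with depth stored in a color texture; all names are illustrative):

precision mediump float;

uniform sampler2D uShadowMap;  // depth of the scene as seen from the light
varying vec4 vLightSpacePos;   // fragment position in the light's clip space

void main() {
    vec3 p = vLightSpacePos.xyz / vLightSpacePos.w;  // perspective divide
    p = p * 0.5 + 0.5;                               // to [0,1] range
    float nearest = texture2D(uShadowMap, p.xy).r;   // what the light sees
    float bias = 0.005;                              // avoids shadow acne
    float lit = (p.z - bias <= nearest) ? 1.0 : 0.0; // visible to the light?
    gl_FragColor = vec4(vec3(lit), 1.0);
}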
In computer graphics, why do we need to know that the back face and front face of a polygon are different?
There are several reasons why a triangle's face might be important.
Face Culling
If you draw a cube, you can only ever see at most three sides of it. The front three sides will block your view of the back three sides. And while depth testing will prevent drawing the fragments corresponding to the back sides... why bother? In order to do depth testing, you have to rasterize those triangles. That's a lot of work for triangles that won't be seen.
Therefore, we have a way to cull triangles based on their facing, before performing rasterization on them. While vertex processing will still be done on those triangles, they will be discarded before doing heavy-weight operations like rasterization.
Through face culling, you can eliminate approximately half of the triangles in a closed mesh. That's a pretty decent performance savings.
Two-Sided Rendering
A leaf is a thin object, so you might render it as one flat polygon, without face culling. However, a leaf does not look the same on both sides. The top side is usually quite a bit darker than the bottom side.
You can achieve this effect by sending two colors when rendering the leaf; one meant for the top side and one for the bottom. In your fragment shader, you can detect which side of the polygon that fragment was generated from, by looking at the built-in variable gl_FrontFacing. That boolean can be used to select which color to use.
It could even be used to select which texture to sample from, if you want to do more complex two-sided rendering.
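A minimal sketch of that leaf shader (GLSL ES 1.00; the two uniforms are assumptions):

precision mediump float;

uniform vec3 uTopColor;     // darker top side of the leaf
uniform vec3 uBottomColor;  // lighter underside

void main() {
    // gl_FrontFacing is true for fragments of front-facing triangles.
    vec3 color = gl_FrontFacing ? uTopColor : uBottomColor;
    gl_FragColor = vec4(color, 1.0);
}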
I'm trying to produce a shader to replicate a white plastic object with a colored light inside: either a shader that is translucent, so that if I put a light inside the object the light will show through, or a shader that fakes the effect of a light inside.
The effect I'm going for is kind of like a light shining through a lamp shade, similar to these pictures:
Ideally I would be able to control the strength and colour of the light, to get it to pulse and rotate through some nice bright fluoro colours.
Though I'm not sure where to start!
My question is: does anyone know which techniques I should research to produce such a shader, or does anyone have an example of the same/similar shader I could use as a starting point? Or even, if you like, provide a shader that might do the job.
You might want to do some research on Subsurface Scattering to get an idea of how to go about recreating this kind of effect. Subsurface scattering is important for rendering realistic skin but in that case you are generally dealing with a light in front of or behind a translucent object rather than inside it. The same basic principles apply but some of the tricks and hacks used for real time subsurface scattering approximations may not work for your case.
Nice photos. It looks like the type of translucent plastic you use can make a big difference. What I see is that the brightness of the plastic at each point is based on the angle between the ray from the light source to that point and the surface normal at that point. (The viewer angle is irrelevant.)
When the vector from internal light source to surface point is nearly parallel to the surface normal vector, the surface point is bright; when they're nearly perpendicular to each other, the surface point is dark. So try using the dot product of those two vectors. Don't forget to normalize.
In other words, it's basically diffuse reflection, except that you're adding the effect of internal light sources (transmitted) to the effect of external light sources (reflected). See Lambertian_reflectance as a starting point.
You may also want to add a little specular reflection on top of that.
The third image is more complex: I think it's showing the shadows of the inner surfaces on the outer ones.
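A hedged sketch of that dot-product idea (GLSL ES 1.00; the varying and uniform names are assumptions): the internal light contributes a transmitted Lambert term instead of a reflected one. Add a conventional specular term on top for the plastic sheen.

precision mediump float;

uniform vec3 uInnerLightPos;    // the light inside the object
uniform vec3 uInnerLightColor;
varying vec3 vWorldPos;         // surface point, world space
varying vec3 vNormal;           // outward surface normal

void main() {
    vec3 n = normalize(vNormal);
    vec3 toSurface = normalize(vWorldPos - uInnerLightPos);
    // Bright where the ray from the light is parallel to the normal,
    // dark where it grazes the surface. Plain Lambert, but transmitted.
    float transmitted = max(dot(n, toSurface), 0.0);
    gl_FragColor = vec4(uInnerLightColor * transmitted, 1.0);
}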
You can also fake this effect by transferring diffuse lighting from the back face to the front face. More specifically, you should mix the lighting on both sides using some transfer function. But this method is only applicable to thin-walled objects.
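A minimal version of that transfer, under the same hedges as above (GLSL ES 1.00, illustrative names): light the far side with plain Lambert and blend a fraction of it onto the front.

precision mediump float;

uniform vec3 uLightDir;   // normalized direction toward the light
uniform vec3 uLightColor;
uniform float uTransfer;  // fraction of back-face light leaking through
varying vec3 vNormal;

void main() {
    vec3 n = normalize(vNormal);
    float front = max(dot(n, uLightDir), 0.0);   // ordinary diffuse
    float back  = max(dot(-n, uLightDir), 0.0);  // diffuse on the far side
    vec3 color = uLightColor * (front + uTransfer * back);
    gl_FragColor = vec4(color, 1.0);
}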