Should opaque objects always be rendered before alpha test objects?

From the slides (page 32):
Ideal rendering order:
Opaque first, then alpha test
Why is that? Given an almost flat terrain with lots of grass on it, rendering the grass before the terrain could take advantage of Early Z to cull a lot of terrain pixels before shading them.

Scenes with transparent polygons should be rendered in Z-sorted order, from the farthest to the closest (with respect to the camera view direction).
The alpha channel is usually used to modulate transparency strength, in which case the statement is not true, since alpha-tested polygons may be transparent too.
In some cases the alpha channel is used as a mask (for sprites or fonts, for example) and the rendered polygons are not transparent at all. In that case the statement is true (up to a point, depending on your scene organization and rendering pipeline).
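As a rough illustration of that ordering, here is a minimal C++/OpenGL sketch: opaque geometry first, then alpha-tested geometry, then blended geometry sorted back-to-front. The drawOpaque/drawAlphaTested/drawTransparentSortedBackToFront helpers are hypothetical stand-ins for your own draw code, not any real API.

```cpp
#include <GL/gl.h>  // assumes your platform's OpenGL headers/loader

// Hypothetical per-bucket draw helpers (not part of any real API).
void drawOpaque();
void drawAlphaTested();
void drawTransparentSortedBackToFront();

void renderFrame()
{
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);

    // 1) Opaque geometry, ideally sorted front-to-back so Early Z can
    //    reject hidden fragments before they are shaded.
    drawOpaque();

    // 2) Alpha-tested geometry (e.g. grass cards). The fragment shader
    //    discards texels below the alpha threshold, which tends to limit
    //    Early Z optimizations on many GPUs -- hence drawing it after
    //    the opaque pass.
    drawAlphaTested();

    // 3) Truly transparent geometry: depth writes off, blending on,
    //    sorted back-to-front relative to the camera.
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawTransparentSortedBackToFront();

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```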
For more info see
OpenGL How to create Order Independent transparency?

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples that could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bézier path, Skia will try to use a tessellation shader if possible (which will need a new draw call if the previous element you were rendering was a rectangle), since you change pipeline state.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer, and it switches draw calls only if you change brush type (if you change from one solid-color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or if you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bézier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle it does the same; it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small alpha-blended triangles). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers to use triangle strips instead of triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single call as well.
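To make the CPU-side tessellation idea concrete, here is a rough C++ sketch that turns a rounded rectangle into a triangle list by sampling its outline and fanning from the centre. This only illustrates the general approach, it is not Direct2D's actual algorithm, and it omits the thin alpha-blended "feather" triangles mentioned above for anti-aliasing.

```cpp
#include <cmath>
#include <vector>

struct Vertex { float x, y; };

// Sketch: tessellate a rounded rectangle into a triangle list by sampling
// its outline (four quarter-circle arcs joined by straight edges) and
// fanning from the centre. segmentsPerCorner must be >= 1.
std::vector<Vertex> tessellateRoundedRect(float x, float y, float w, float h,
                                          float radius, int segmentsPerCorner)
{
    const float pi = 3.14159265358979f;
    // Corner centres, visited in outline order (screen coordinates, y down):
    // top-left, top-right, bottom-right, bottom-left.
    const Vertex centers[4] = {
        { x + radius,     y + radius     },
        { x + w - radius, y + radius     },
        { x + w - radius, y + h - radius },
        { x + radius,     y + h - radius },
    };
    const float startAngle[4] = { pi, 1.5f * pi, 0.0f, 0.5f * pi };

    // Sample the outline: each corner sweeps a quarter circle.
    std::vector<Vertex> outline;
    for (int c = 0; c < 4; ++c) {
        for (int s = 0; s <= segmentsPerCorner; ++s) {
            float a = startAngle[c] + (0.5f * pi) * s / segmentsPerCorner;
            outline.push_back({ centers[c].x + radius * std::cos(a),
                                centers[c].y + radius * std::sin(a) });
        }
    }

    // The outline is convex, so a fan from the rectangle centre covers it.
    Vertex center{ x + w * 0.5f, y + h * 0.5f };
    std::vector<Vertex> triangles;
    for (std::size_t i = 0; i < outline.size(); ++i) {
        triangles.push_back(center);
        triangles.push_back(outline[i]);
        triangles.push_back(outline[(i + 1) % outline.size()]);
    }
    return triangles;
}
```

The resulting vertices would then be pushed into a vertex buffer and drawn with whatever brush/shader state is current, which is roughly where the draw-call batching described above happens.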

In computer graphics, faces of a polygon

In computer graphics, why do we need to know that the back face and the front face of a polygon are different?
There are several reasons why a triangle's face might be important.
Face Culling
If you draw a cube, you can only ever see at most three sides of it. The front three sides will block your view of the back three sides. And while depth testing will prevent drawing the fragments corresponding to the back sides... why bother? In order to do depth testing, you have to rasterize those triangles. That's a lot of work for triangles that won't be seen.
Therefore, we have a way to cull triangles based on their facing, before performing rasterization on them. While vertex processing will still be done on those triangles, they will be discarded before doing heavy-weight operations like rasterization.
Through face culling, you can eliminate approximately half of the triangles in a closed mesh. That's a pretty decent performance savings.
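In OpenGL, for example, back-face culling is just a small amount of fixed-function state. A minimal sketch, assuming the usual counter-clockwise front-face convention:

```cpp
#include <GL/gl.h>

// Enable back-face culling: triangles whose projected winding is not
// counter-clockwise (the front-face convention set below) are discarded
// before rasterization, though their vertices are still processed.
void enableBackFaceCulling()
{
    glEnable(GL_CULL_FACE);
    glFrontFace(GL_CCW);   // counter-clockwise triangles count as "front"
    glCullFace(GL_BACK);   // discard the back-facing ones
}
```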
Two-Sided Rendering
A leaf is a thin object, so you might render it as one flat polygon, without face culling. However, a leaf does not look the same on both sides. The top side is usually quite a bit darker than the bottom side.
You can achieve this effect by sending two colors when rendering the leaf; one meant for the top side and one for the bottom. In your fragment shader, you can detect which side of the polygon that fragment was generated from, by looking at the built-in variable gl_FrontFacing. That boolean can be used to select which color to use.
It could even be used to select which texture to sample from, if you want to do more complex two-sided rendering.
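A minimal sketch of that selection, with the GLSL fragment shader embedded as a C++ string literal; the uniform names here are made up for illustration:

```cpp
// Fragment shader source (GLSL) that picks a colour based on which side of
// the leaf polygon generated the fragment. Uniform names are hypothetical.
const char* leafFragmentShaderSrc = R"(
#version 330 core
uniform vec4 topColor;     // colour for the front face (top of the leaf)
uniform vec4 bottomColor;  // colour for the back face (underside)
out vec4 fragColor;
void main()
{
    // gl_FrontFacing is true for fragments generated from the front face.
    fragColor = gl_FrontFacing ? topColor : bottomColor;
}
)";
```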

Inner and Outer Glow Implementation using Opengl ES 3.0

I want to implement an inner and outer glow for a rendered 3D object. Here the glow is to be applied only to the 3D models that have glow enabled, not to the entire scene.
There is one post on Stack Overflow that talks about implementing it by modifying the mesh, which in my opinion is difficult and expensive.
I was wondering if it can be achieved through multi-pass rendering: something like a bloom effect that is applied to specific objects in the scene, and only to their inner and outer boundaries.
I assume you want the glow only near the object's contours?
I did an outer glow using a multi-pass approach (after all "regular" drawing):
1. Draw the object to a texture (cleared to fully transparent) using a constant-output shader (using the glow color as output), marking the stencil buffer in the process. Use an EQUAL depth test if you only want a glow around the part where the object is actually visible on screen, obviously using the depth buffer from normal scene drawing.
2. Run a separated Gaussian blur on this texture.
3. Blend the result into the output buffer for all pixels that do not have the stencil buffer marked in step 1.
For an inner + outer glow, you could do an edge detection on the result of step 1, keeping only marked pixels near the boundary, followed by the blur and an unmasked blend.
You could also try to combine the edge detection and blurring by using a filter that scales its output based on the variance of all samples in its radius. It would be non-separable though...
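A rough C++ sketch of the state changes for the three steps above, against OpenGL ES 3.0. The bind/draw/blur helpers are hypothetical, and the glow pass and the final blend are assumed to share the scene's depth/stencil attachment so the stencil marks from step 1 are still available in step 3.

```cpp
#include <GLES3/gl3.h>

// Hypothetical helpers standing in for the application's own code.
void bindGlowFramebuffer();           // FBO: glow colour texture + the scene's depth/stencil
void drawObjectWithConstantColor();   // constant-output shader using the glow colour
void runSeparatedGaussianBlur();      // horizontal + vertical blur passes on the glow texture
void bindSceneFramebuffer();          // scene colour target, same depth/stencil attachment
void drawFullScreenQuadWithGlowTexture();

void renderOuterGlow()
{
    // Step 1: draw the object into the glow texture with a constant colour,
    // marking the stencil; EQUAL depth test limits it to the visible part.
    bindGlowFramebuffer();
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);       // fully transparent
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // mark covered, visible pixels with 1

    drawObjectWithConstantColor();

    // Step 2: separated Gaussian blur on the glow texture.
    runSeparatedGaussianBlur();

    // Step 3: blend the blurred glow into the output, but only where the
    // stencil was NOT marked in step 1 (i.e. outside the object).
    bindSceneFramebuffer();
    glDisable(GL_DEPTH_TEST);
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    drawFullScreenQuadWithGlowTexture();

    // Restore state for subsequent drawing.
    glDisable(GL_STENCIL_TEST);
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
}
```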

In 3D graphics, why is antialiasing not more often achieved using textures?

Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around on mobile devices with CSS3 3D lately and this trick does a fantastic job of obtaining high quality non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension, forming a transparent one-pixel-wide outline outside the border. Thanks to texture sampling interpolation, so long as the transformation does not put the camera too close to an edge, the effect is not unlike a pre-filtered antialiased rendering approach.
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case: whenever the geometry is not rectangular, it does nothing to reduce aliasing. The result with a transparent 1px outline border is only smooth for HTML5 with CSS3 because those elements are rectangular, so they rasterize neatly onto a pixel grid.
The trick you linked to doesn't seem to have to do with texture interpolation. The CSS added a border that is drawn as a line. The rasterizer in the browser is drawing polygons without antialiasing and is drawing lines with antialiasing.
To answer your question of why you wouldn't want to blend into transparency over a one-pixel border: transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or use opaque polygons whose occlusion you check with a depth buffer and multisampling.
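For concreteness, here is a small C++ sketch of the padding step described in the question: copy a tightly packed RGBA image into a buffer that is two pixels larger in each dimension, leaving a fully transparent one-pixel border. The function is an illustration, not any particular library's API.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Pad a tightly packed RGBA8 image with a one-pixel fully transparent
// border. When the result is sampled with bilinear filtering (and edge
// clamping), the quad's edges fade out over roughly one texel instead of
// ending in a hard aliased edge.
std::vector<std::uint8_t> padWithTransparentBorder(const std::uint8_t* rgba,
                                                   int width, int height)
{
    const int paddedW = width + 2;
    const int paddedH = height + 2;
    std::vector<std::uint8_t> padded(static_cast<std::size_t>(paddedW) * paddedH * 4, 0);

    // Copy each source row into the padded image, offset by (1, 1).
    for (int y = 0; y < height; ++y) {
        const std::uint8_t* srcRow = rgba + static_cast<std::size_t>(y) * width * 4;
        std::uint8_t* dstRow = padded.data()
                             + (static_cast<std::size_t>(y + 1) * paddedW + 1) * 4;
        std::memcpy(dstRow, srcRow, static_cast<std::size_t>(width) * 4);
    }
    return padded;
}
```

The quad's texture coordinates (or its size) then have to account for the extra border, and, as the answer notes, the now-transparent edge still has to be blended correctly against whatever is behind it.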

DirectX alpha blending (deferred rendering)

I'm having a major issue which has been bugging me for a while now.
My problem is that my game uses a deferred rendering engine, which makes it very difficult to do alpha blending.
The only way I can think of to solve this issue is to render the scene (including the depth map, normal map, and diffuse map) without any objects that have alpha.
Then, for each polygon whose texture has an alpha component, disable the Z buffer and render it out, including normals, depth, and colour, and wherever alpha is '0' output nothing to the depth, normal, and colour buffers. Perform the lighting calculations/other deferred effects on these two separate sets of textures, then combine the colour buffers, using the depth maps to check which pixel is visible.
This idea would be extremely costly (not to mention it has some severe shortcomings), so it should obviously be reserved for as few cases as possible, which makes rendering forest areas out of the question. However, if there is no better solution, I have one question.
When doing alpha blending with DirectX, is there a shader/device state I can set that lets me avoid writing to the depth/normal/colour buffer when I want to? The issue is that the pixel shader has to output to all of the render targets specified, so if it is set to output to the three render targets it must do so, which will overwrite the previous colour value for that texel in the texture.
If there is no blend state which allows me to do this, it would mean I would have to copy the normal, texture, and depth maps to keep the scene, then render to a new texture, depth map, and normal map, and then combine the two sets of textures based on the alpha and depth values.
I guess all I really want to know is whether there is a simple, sure-fire, and possibly cheap way to render alpha in a deferred renderer.
The usual approach to transparent geometry in a deferred renderer is simply to draw it in a separate pass, using ordinary forward rendering rather than deferred rendering.
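A minimal sketch of that split in Direct3D 11 terms (the render* helpers and the pre-created state objects are assumptions, not part of any real engine): the deferred G-buffer and lighting passes run as usual for opaque geometry, then transparent objects are drawn in a forward pass with alpha blending enabled and depth writes disabled, testing against the depth buffer produced by the opaque pass.

```cpp
#include <d3d11.h>

// Hypothetical application helpers; not part of D3D11 itself.
void renderGBufferOpaque(ID3D11DeviceContext* ctx);      // deferred: fill the G-buffer
void renderDeferredLighting(ID3D11DeviceContext* ctx);   // deferred: light into the back buffer
void renderTransparentForward(ID3D11DeviceContext* ctx); // forward-shaded, sorted back-to-front

void renderFrame(ID3D11DeviceContext* ctx,
                 ID3D11BlendState* alphaBlend,                // created with SRC_ALPHA / INV_SRC_ALPHA
                 ID3D11DepthStencilState* depthTestNoWrite)   // DepthEnable = TRUE, write mask ZERO
{
    // 1) Deferred path for everything opaque.
    renderGBufferOpaque(ctx);
    renderDeferredLighting(ctx);

    // 2) Forward path for transparent geometry: blend over the lit scene,
    //    test against the opaque depth buffer but do not write to it.
    ctx->OMSetBlendState(alphaBlend, nullptr, 0xFFFFFFFF);
    ctx->OMSetDepthStencilState(depthTestNoWrite, 0);
    renderTransparentForward(ctx);

    // 3) Restore default output-merger state for the next frame.
    ctx->OMSetBlendState(nullptr, nullptr, 0xFFFFFFFF);
    ctx->OMSetDepthStencilState(nullptr, 0);
}
```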
