How to draw shapes in the proper order when rendering? - graphics

I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of the triangles into window space, i.e. the 2-dimensional space that corresponds directly to position on the screen, I assign each point a depth value in addition to its x and y position, storing how far from the viewer that point was in 3d space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
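For reference, a minimal sketch of the sort I'm describing (Triangle and Vec3 are stand-ins for my actual types, not engine code):

```cpp
// Minimal sketch of the average-depth ("painter's algorithm") sort described
// above. Triangle and Vec3 are hypothetical stand-in types.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };          // z = depth after projection
struct Triangle { Vec3 p[3]; };

float averageDepth(const Triangle& t) {
    return (t.p[0].z + t.p[1].z + t.p[2].z) / 3.0f;
}

void sortBackToFront(std::vector<Triangle>& tris) {
    // Draw the farthest triangles first so nearer ones paint over them.
    std::sort(tris.begin(), tris.end(),
              [](const Triangle& a, const Triangle& b) {
                  return averageDepth(a) > averageDepth(b);
              });
}
```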
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?

The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also store its interpolated depth in the z (depth) buffer. Whenever you're about to paint the next pixel, you first check that z-buffer to verify that the new pixel is in front of the already painted pixel.
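A minimal sketch of that per-pixel test, written as the inner loop of a software rasterizer (the framebuffer layout and Color type are assumptions, not your engine's code):

```cpp
// Hedged sketch of the depth test a z-buffer performs for every pixel a
// triangle covers. The surrounding rasterizer is assumed to call plot() with
// the pixel's interpolated depth.
#include <cstdint>
#include <limits>
#include <vector>

struct Color { std::uint8_t r, g, b; };

struct Framebuffer {
    int width, height;
    std::vector<Color> color;
    std::vector<float> depth;   // one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    void plot(int x, int y, float z, Color c) {
        int i = y * width + x;
        if (z < depth[i]) {      // nearer than whatever is already there?
            depth[i] = z;        // record the new nearest depth
            color[i] = c;        // and overwrite the color
        }
        // otherwise this pixel is behind what was drawn before: discard it
    }
};
```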
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite (sort and draw back to front) in order to properly handle transparency or other "advanced" effects.

Related

Preventing pixel shader overdraw for a single ERG

Background
Using gluTess to build a triangle list in Direct3D9 from a GDI+ DrawString(..) path:
A pixel shader (v3.0) is then used to fill in the shape. When painting with opaque values, everything looks fine:
The problem
At certain font sizes, if the color has an alpha component (i.e. ARGB #55FFFFFF), we begin to see these nasty tessellation artifacts where triangles may overlap ever so slightly:
At larger font sizes the problem is sometimes not present:
Using Intel's excellent GPA Frame Analyzer Pixel History tool, we can see that in areas where the artifacts occur, the pixel has been "touched" 3 times by the single Erg.
I'm trying to figure out how I can stop my pixel shader from touching the same pixel more than once.
Other solutions relating to overdraw prevention seem to be all about z-buffer strategies; however, this problem has more to do with the painting of a single 2D triangle list within a single pixel shader pass.
I'm at a bit of a loss trying to come up with a solution on this one. I was hoping that HLSL might have some sort of "touch each pixel only once" flag, but I've been unable to find anything like that. The closest I've found was to set the BLENDOP to MAX instead of ADD, but then the output is not correct when blending over other colors in the scene.
I also have SRCBLEND = ONE, DSTBLEND = INVSRCALPHA, the only combination of flags that produces correct output (albeit with the overdraw artifacts).
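For reference, the blend setup I'm describing looks roughly like this (a sketch, not my exact code; device is the IDirect3DDevice9 pointer):

```cpp
// Sketch of the premultiplied-alpha style blend described above:
// SRCBLEND = ONE, DSTBLEND = INVSRCALPHA, BLENDOP = ADD.
#include <d3d9.h>

void setTextBlendStates(IDirect3DDevice9* device) {
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    // D3DBLENDOP_MAX avoids the double-blend on overlapping triangles but,
    // as noted above, gives wrong results over other colors in the scene.
    device->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
}
```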
I have played with SEPARATEALPHABLENDENABLE in the GPA Frame Analyzer, which sounded like almost exactly what I need here (set blending to MAX, but only on the alpha channel). However, from what I can determine, that setting (and the corresponding BLENDOPALPHA) has no effect at all.
One final thing I thought of was to bake the text as opaque onto a texture, and then repaint that texture into the scene with the appropriate alpha value applied. This doesn't actually work in this project, however, because I also support gradient brushes, where stop values may contain alpha. That means either the artifacts would still be seen, or the final output would be plain wrong if we stripped the alpha from the stop values prior to baking to a texture. Also, the whole endeavor would be hideously expensive.
Any hints or pointers would be appreciated. Thanks for reading.
The problem you're seeing shouldn't happen.
If two of your triangles are overlapping it's because you've placed the vertices in such a way that when the adjacent triangles are drawn, they overlap. What's probably happening is that these two adjacent triangles share two vertices, but each triangle has its own copy of each vertex that's been calculated to be in a very, very slightly different position.
The solution to the problem isn't to try to make the pixel shader touch each pixel only once; it's to use an index buffer (if you aren't already) and have the shared vertices of adjacent triangles actually reference the same vertex, rather than a copy that's ever so slightly out of place.
If you aren't in control of the tessellation algorithm being used, you may have to run a pass over the vertex buffer after it's been generated to detect and merge vertices that are within some very small tolerance of one another. Even without an index buffer, a naive solution would be this:
For each vertex in the vertex buffer, compare its position to every other vertex in the rest of the vertex buffer.
If two vertices are within some small tolerance of one another, replace the second vertex's position with the position of the one you are comparing it against.
This should have the effect of pairing up the positions of two vertices if they are close enough that you deem them to be the same.
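A naive version of that merge pass might look like this (Vertex is a stand-in for your actual vertex type; it's O(n^2), which is fine for small tessellated glyph meshes, but use a spatial hash for large ones):

```cpp
// Naive vertex-welding pass: any two vertices closer than a small tolerance
// are snapped to the same position.
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

void weldVertices(std::vector<Vertex>& verts, float tolerance = 1e-5f) {
    const float tol2 = tolerance * tolerance;
    for (std::size_t i = 0; i < verts.size(); ++i) {
        for (std::size_t j = i + 1; j < verts.size(); ++j) {
            float dx = verts[j].x - verts[i].x;
            float dy = verts[j].y - verts[i].y;
            float dz = verts[j].z - verts[i].z;
            if (dx * dx + dy * dy + dz * dz < tol2) {
                // Snap the later vertex onto the earlier one so adjacent
                // triangles rasterize against exactly the same edge.
                verts[j] = verts[i];
            }
        }
    }
}
```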
You now shouldn't have any problem with overlapping triangles. In everyday rendering, two triangles share edges with each other all the time, and you won't ever get the effect where they appear to ever so slightly overlap. The hardware guarantees that a sample point is either on one side of the line or the other, but never both at the same time, no matter how close the point is to the line (even if it's mathematically on the line, it still falls on one side or the other).

HLSL Set pixel position in pixel shader to control where the pixel will end up in the texture

How can I manually set where the pixel ends up in the texture in PixelShaderFunction HLSL? Ideally I want the GPU to follow the next logic:
Write pixels one by one, in no particular order. Meaning: whenever the first pixel comes out, write it into the top left corner of the texture. Write the second one to the right of the first, the third to the right of the second, and so on.
When you reach the end of the line - go to the next line.
When you reach the end of the texture - drop all the remaining pixels.
Thanks.
I feel like I could do it by manually computing the needed position for my pixel at the vertex shader level. If I understood better how the pixel positioning works, I might be able to pull it off. If I have a render target of 2000*4, how can I ensure at the vertex shader level that my pixel will end up in the second row?
What if my RenderTarget is a texture with height = 1: can I skip computing the positions, or do I risk losing data via pixel merging? I am planning to draw nothing but long lines across the screen, one by one, clearing the target in between.
Basically you can't do what you're describing.
After the vertex shader, the GPU has a collection of triangles to draw. It fills them, pixel-by-pixel, on the render target (possibly the backbuffer). As part of this filling process - to determine the colour of each pixel - your pixel shader gets called (like a function) for that specific pixel being filled. There is no capacity at this point for "moving" the output pixel.
What you can do is modulate the texture coordinate parameter to tex2D (MSDN) when sampling from a texture in your pixel shader. You can apply whatever functions make sense to achieve your desired result.
Or, if the transform is simple, you can simply set the texture coordinates appropriately either in the vertex data, or using a vertex shader.
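For instance, a rough sketch of the second option, with the mapping baked into pre-transformed vertex data (the QuadVertex layout and the 2000x4 target size are assumptions taken from the question, not a specific API):

```cpp
// The quad covers the 2000x4 render target mentioned above. Which texels end
// up in which row is decided entirely by the u/v values placed here and then
// interpolated into the pixel shader; the pixel shader itself cannot move
// its output pixel.
struct QuadVertex {
    float x, y, z, w;   // pre-transformed screen position
    float u, v;         // texture coordinate fed to tex2D in the shader
};

QuadVertex quad[4] = {
    //    x       y     z    w    u     v
    {    0.0f,   0.0f, 0.f, 1.f, 0.0f, 0.0f },   // top-left
    { 2000.0f,   0.0f, 0.f, 1.f, 1.0f, 0.0f },   // top-right
    {    0.0f,   4.0f, 0.f, 1.f, 0.0f, 1.0f },   // bottom-left
    { 2000.0f,   4.0f, 0.f, 1.f, 1.0f, 1.0f },   // bottom-right
};
```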

Smooth transitions between two intersecting polygons (interesting problem)

I have an interesting problem that I've been trying to solve for a while. There is no "right" solution to this, as there is no strict criteria for success. What I want to accomplish is a smooth transition between two simple polygons, from polygon A to polygon B. Polygon A is completely contained within polygon B.
My criteria for this transition are:
The transition is continuous in time and space
The area that is being "filled" from polygon A into polygon B should be filled in as if there was a liquid in A that was pouring out into the shape of B
It is important that this animation can be calculated either on the fly, or be defined by a set of parameters that require little space, say less than a few Kb.
Cheating is perfectly fine, any way to solve this so that it looks good is a possible solution.
Solutions I've considered, and mostly ruled out:
Pairing up vertices in A and B and simply interpolating. This will not look good and does not work in the case of concave polygons.
Dividing the area B-A into convex polygons, perhaps via a Voronoi diagram, calculating the discrete states of the polygon by doing a BFS on the smaller convex polygons, and then interpolating between the discrete states. Note: if polygon B-A is convex, the transition is fairly trivial. I didn't go with this solution because dividing B-A into equally sized small convex polygons was surprisingly difficult.
Simulation: subdivide polygon A. Move each vertex along the polygon's outward edge normal in discrete but small steps. For each step, check whether the vertex is still inside B; if not, move it back to its previous position. Repeat until A equals B. I don't like this solution because the check to see whether a vertex is inside a polygon is slow.
Does anybody have any different ideas?
If you want to keep this simple and somewhat fast, you could go ahead with your last idea, where you scale polygon A so that it gradually fills polygon B. You don't necessarily have to check whether the scaled-outward vertices are still inside polygon B. Depending on what your code environment and API are like, you could mask the pixels of the expanding polygon A with the outline of polygon B.
In modern OpenGL, you could do this inside a fragment shader. You would render polygon B to a texture, send that texture to the shader, and then use it to look up whether the current fragment being rendered maps to a texture value that was set by polygon B. If it does not, the fragment gets discarded. The texture would need to be as large as the screen; if it's not, you would need to include some camera calculations in your shaders so you can "render" the fragment-to-test into the texture in the same way you rendered polygon B into it.
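To make the masking idea concrete, here is a CPU-side sketch of the same test (Mask and putPixel are hypothetical helpers, not a specific API):

```cpp
// Polygon B has been rasterized once into a boolean mask. While filling the
// expanding polygon A, any pixel that falls outside the mask is skipped --
// the software equivalent of discarding the fragment.
#include <vector>

struct Mask {
    int width, height;
    std::vector<bool> inside;   // true where polygon B covered the pixel
    bool covers(int x, int y) const { return inside[y * width + x]; }
};

void fillMaskedPixel(const Mask& polygonB, int x, int y,
                     void (*putPixel)(int, int)) {
    if (!polygonB.covers(x, y))
        return;                 // outside B: drop the pixel ("discard")
    putPixel(x, y);             // inside B: let the expanding A show through
}
```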

Direct3D: Wireframe without Diagonals

When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across them because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issue, but would look odd, as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial either: although surfaces facing away from the camera can easily be removed, partially obscured surfaces would require manual clipping. As a final thought, you could do a two-pass approach: the first pass draws the filled polygons but writes only to the z-buffer, then the lines are drawn over the top with a suitable z bias. That would handle the partially obscured surface problem.
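A rough Direct3D 9 sketch of that two-pass idea (drawFaces and drawEdgeLines are hypothetical helpers standing in for your actual draw calls):

```cpp
#include <d3d9.h>

// Hypothetical helpers that issue the actual draw calls.
void drawFaces(IDirect3DDevice9* device);
void drawEdgeLines(IDirect3DDevice9* device);

void renderHiddenLineWireframe(IDirect3DDevice9* device) {
    // Pass 1: depth only. Disable color writes so the filled polygons are
    // invisible but still populate the z-buffer.
    device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    drawFaces(device);

    // Pass 2: re-enable color writes and draw the edge line list on top,
    // with a small depth bias so the lines don't z-fight with the faces.
    device->SetRenderState(D3DRS_COLORWRITEENABLE,
        D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
        D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    float bias = -0.0001f;   // tune for your depth range
    device->SetRenderState(D3DRS_DEPTHBIAS, *reinterpret_cast<DWORD*>(&bias));
    drawEdgeLines(device);

    float zero = 0.0f;       // restore the default bias afterwards
    device->SetRenderState(D3DRS_DEPTHBIAS, *reinterpret_cast<DWORD*>(&zero));
}
```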
The built-in wireframe mode renders edges of the primitives. As in D3D the primitives are triangles (or lines, or points - but not arbitrary polygons), that means the built-in way won't cut it.
I guess you have to look into some sort of "edge detection" algorithm. These could operate in image space, where you render the model into a texture, assigning a unique color to each logical polygon, and then do a post-processing pass with a pixel shader to detect any changes in color (where the color changes, output black; otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. Anyhow, there are lots of different options here.
I think you'll need to draw those lines manually; wireframe is a built-in mode, so I don't think you can modify it. You can get the list of vertices in your mesh and process it into the list of lines you need to draw.
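Something along these lines, assuming the model exposes its rectangular faces as quads (the Quad struct is an assumption about how your data is organized):

```cpp
// Build an explicit line list containing only the quad edges you want,
// skipping the diagonal that triangulation adds. The resulting index pairs
// are suitable for a D3DPT_LINELIST draw call.
#include <cstdint>
#include <vector>

struct Quad { std::uint32_t a, b, c, d; };   // corners in winding order

std::vector<std::uint32_t> buildEdgeLineList(const std::vector<Quad>& quads) {
    std::vector<std::uint32_t> lines;
    for (const Quad& q : quads) {
        std::uint32_t edges[8] = { q.a, q.b,  q.b, q.c,
                                   q.c, q.d,  q.d, q.a };   // no a-c diagonal
        lines.insert(lines.end(), edges, edges + 8);
    }
    return lines;
}
```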

How to produce Photoshop stroke effect?

I'm looking for a way to programmatically recreate the following effect:
Give an input image:
input http://www.shiny.co.il/shooshx/ConeCarv/q_input.png
I want to iteratively apply the "stroke" effect.
The first step looks like this:
step 1 http://www.shiny.co.il/shooshx/ConeCarv/q_step1.png
The second step like this:
step 2 http://www.shiny.co.il/shooshx/ConeCarv/q_step2.png
And so on.
I assume this will involve some kind of edge detection and then tracing the edge somehow.
Is there a known algorithm to do this in an efficient and robust way?
Basically, a custom algorithm would be, according to this thread:
Take the 3x3 neighborhood around a pixel, threshold the alpha channel, and then see if any of the 8 pixels around it has a different thresholded alpha value. If so, paint a circle of a given radius centered at the pixel. To do inside/outside strokes, modulate by the thresholded alpha channel (negate it to do the other side). You'll have to threshold a larger neighborhood if the circle radius is larger than a pixel (which it probably is).
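A direct sketch of that neighborhood test (Image is a hypothetical single-channel alpha raster, not a particular library type):

```cpp
// Threshold the alpha channel; wherever a pixel's thresholded value differs
// from any of its 8 neighbors, stamp a filled circle centered on that pixel
// into the stroke mask.
#include <vector>

struct Image {
    int w, h;
    std::vector<unsigned char> a;                       // alpha channel
    unsigned char at(int x, int y) const { return a[y * w + x]; }
};

void strokeMask(const Image& img, std::vector<unsigned char>& stroke,
                int radius, unsigned char threshold = 128) {
    stroke.assign(img.w * img.h, 0);
    for (int y = 1; y < img.h - 1; ++y)
        for (int x = 1; x < img.w - 1; ++x) {
            bool center = img.at(x, y) >= threshold;
            bool edge = false;
            for (int dy = -1; dy <= 1 && !edge; ++dy)
                for (int dx = -1; dx <= 1 && !edge; ++dx)
                    if ((img.at(x + dx, y + dy) >= threshold) != center)
                        edge = true;                    // a neighbor differs
            if (!edge) continue;
            // Stamp a filled circle of the given radius at the edge pixel.
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int px = x + dx, py = y + dy;
                    if (px < 0 || py < 0 || px >= img.w || py >= img.h) continue;
                    if (dx * dx + dy * dy <= radius * radius)
                        stroke[py * img.w + px] = 255;
                }
        }
}
```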
This is implemented using gray-scale morphological operations, the same technique used to expand/contract selections. Basically, to stroke the center of a selection (or an alpha channel), you would first make two separate copies of the selection. The first selection would be expanded (dilated) by the radius of the stroke, whereas the second would be contracted (eroded). The opacity of the stroke is then obtained by subtracting the second selection from the first.
In order to do inside and outside strokes you would contract/expand by twice the radius and subtract the parts that intersect with the original selection.
It should be noted that the most general morphological algorithm requires O(m*n) operations, where m is the number of pixels of the image and n is the number of elements in the "structuring element". However, for certain special cases, this can be optimized to O(m) operations (e.g. if the structuring element is a rectangle or a diamond).
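A sketch of that morphological variant, using a square structuring element for brevity (a disc gives rounder strokes); it reuses the hypothetical Image type from the sketch above:

```cpp
// Dilate and erode the selection (alpha mask) by the stroke radius, then
// subtract erosion from dilation to get a stroke centered on the edge.
// This is the naive O(m*n) version mentioned above.
#include <algorithm>
#include <vector>

std::vector<unsigned char> morph(const Image& img, int radius, bool dilate) {
    std::vector<unsigned char> out(img.w * img.h);
    for (int y = 0; y < img.h; ++y)
        for (int x = 0; x < img.w; ++x) {
            unsigned char v = dilate ? 0 : 255;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int px = std::clamp(x + dx, 0, img.w - 1);
                    int py = std::clamp(y + dy, 0, img.h - 1);
                    v = dilate ? std::max(v, img.at(px, py))
                               : std::min(v, img.at(px, py));
                }
            out[y * img.w + x] = v;
        }
    return out;
}

std::vector<unsigned char> centerStroke(const Image& img, int radius) {
    auto grown  = morph(img, radius, true);    // expanded selection
    auto shrunk = morph(img, radius, false);   // contracted selection
    std::vector<unsigned char> stroke(img.w * img.h);
    for (std::size_t i = 0; i < stroke.size(); ++i)
        stroke[i] = static_cast<unsigned char>(
            std::max(0, int(grown[i]) - int(shrunk[i])));  // dilation - erosion
    return stroke;
}
```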

Resources