I am trying to draw two triangles in a single draw call. The two triangles are parallel, and the forward direction of the camera is along their shared normal, so from the camera's view the two triangles overlap perfectly.
Alpha blending is enabled with the blend factors SrcAlpha and InvSrcAlpha. The color of the triangle in back is (0, 1, 0, 0.5), the color of the triangle in front is (1, 0, 0, 0.5), and the RT is cleared to black. The pixel shader simply outputs the triangle color.
Here is an image to show the scene; the vertices of the triangles are indexed as in the image.
What could the final color in the RT be? Could it be (0.5, 0.25, 0) everywhere? In the graphics pipeline, is it guaranteed that the pixels of the green triangle are output before those of the red triangle?
You do not have any guarantee on the pixel evaluation order; the red and green pixels here can be evaluated in any order. The blending, however, will be executed precisely in the order the triangles appear in the vertex/index buffer.
There is a feature named Rasterizer Ordered Views (documentation here). But, first, it depends on an optional hardware feature, and second, it only comes into play when you are writing through an unordered access view, which is not the case here when you simply use the output merger to write the samples.
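To make the arithmetic concrete, here is a minimal C++ sketch of the SrcAlpha/InvSrcAlpha formula (just the math, not the actual output merger), showing that the two possible submission orders give different results:

```cpp
// Minimal sketch (plain C++, not the real output merger) of the
// SrcAlpha / InvSrcAlpha blend formula, showing why the order matters.
struct Color { float r, g, b, a; };

// dst_new = src * src.a + dst * (1 - src.a)
Color blendOver(Color src, Color dst)
{
    float ia = 1.0f - src.a;
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a * src.a + dst.a * ia };   // alpha blended the same way here
}

// Cleared target, green triangle, red triangle as in the question:
// blendOver(red, blendOver(green, black))  -> (0.5, 0.25, 0, ...)
// blendOver(green, blendOver(red, black))  -> (0.25, 0.5, 0, ...)
```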
It looks like the DirectX pipeline does guarantee the order.
"DirectX rendering follows a strict set of rules that ensure triangles are always rendered in the order they are submitted: if two triangles are overlapping on the screen, the hardware guarantees that Triangle 1 will have its color result blended to the screen before Triangle 2 is processed and blended."
Here is the link: https://software.intel.com/en-us/gamedev/articles/rasterizer-order-views-101-a-primer, see the section "DirectX Pipeline and the limitations of UAVs".
Let's say I have an entity's global (world) coordinate v (QVector3D). Then I make a coordinate transformation:
pos = camera.projectionMatrix() * camera.viewMatrix() * v
where projectionMatrix() and viewMatrix() are QMatrix4x4 instances. What do I actually get, and how is this related to widget coordinates?
The following values are for OpenGL; they may differ in other graphics APIs.
You get clip space coordinates. Imagine a cube with side length 2 (i.e. -1 to 1, or more precisely -w to w, on all axes¹). You transform your world so that everything you see with your camera lies in this cube, so that the graphics card can discard everything outside of it (since you don't see it, it doesn't need to be rendered; this is done for performance reasons).
Going further, you (or rather your graphics API) would do a perspective divide. Then you are in normalized device space: basically here you go from 3D to 2D, such that you know where in your rendering canvas your pixels have to be colored with whatever lighting calculations you use. This canvas is a square spanning -1 to 1 on both axes.
Afterwards you stretch these normalized device coordinates by whatever width and height your widget has, so that you know where in your widget the colored pixels go (in OpenGL this is defined by your viewport).
What you see as Widget Coordinates are probably the coordinates of where your widget is on screen (usually the upper left corner is specified). Therefore, if your widget coordinate is (10, 10) and you have a rendered pixel in your Viewport transformation at (10, 10), then on screen your rendered pixel would be at (10+10, 10+10).
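To tie the steps together, here is a minimal sketch using Qt types (the worldToWidget helper, widgetW/widgetH and the flipped-y convention are illustrative assumptions, not part of the question's code):

```cpp
// Minimal sketch: world position -> clip space -> NDC -> widget-local pixel.
#include <QMatrix4x4>
#include <QVector3D>
#include <QVector4D>
#include <QPointF>

QPointF worldToWidget(const QMatrix4x4 &proj, const QMatrix4x4 &view,
                      const QVector3D &v, int widgetW, int widgetH)
{
    // Clip-space position: still homogeneous, -w..w on each axis is visible.
    QVector4D clip = proj * view * QVector4D(v, 1.0f);

    // Perspective divide -> normalized device coordinates in [-1, 1].
    QVector3D ndc = clip.toVector3DAffine();   // divides x, y, z by w

    // Viewport-style transform: NDC -> pixel coordinates inside the widget.
    // y is flipped because widget coordinates grow downwards.
    float px = (ndc.x() * 0.5f + 0.5f) * widgetW;
    float py = (1.0f - (ndc.y() * 0.5f + 0.5f)) * widgetH;
    return QPointF(px, py);
}
```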
¹ After having had a discussion with derhass (see comments): a lot of books on graphics programming speak of [-1, -1, -1] x [1, 1, 1] as the clipping volume. The OpenGL 4.6 core spec, however, states that it is actually [-w, -w, -w] x [w, w, w] (and according to derhass it is the same for other APIs; I have not checked this).
I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that directly correlates to position on the screen, in addition to an x and y position of each point, I also assign them a depth variable that stores how far away from the viewer that point was in 3d space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also store its interpolated depth in the z (depth) buffer. Whenever you're about to paint the next pixel, you first check the z-buffer to see whether the new pixel is in front of the already painted pixel, and only paint it if it is.
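As a rough illustration, a minimal software z-buffer might look like the following sketch (the Framebuffer type and plot function are made up for illustration; a real rasterizer interpolates z per pixel while filling each triangle):

```cpp
// Minimal sketch of a software z-buffer test (illustrative types and names).
#include <vector>
#include <limits>
#include <cstdint>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;  // packed RGBA per pixel
    std::vector<float>    depth;  // one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Called for every pixel a triangle covers, with its interpolated depth.
    void plot(int x, int y, float z, uint32_t rgba)
    {
        int i = y * width + x;
        if (z < depth[i]) {        // closer than what is already there?
            depth[i] = z;          // remember the new closest depth
            color[i] = rgba;       // and overwrite the color
        }                          // otherwise the pixel is hidden: discard it
    }
};
```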
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite (draw back to front) in order to properly handle transparency or other "advanced" effects.
I want to use Direct3D 11 to blend several images from multiple views into one texture, so I do multiple projections at the vertex shader and geometry shader stages; one projection's result is stored in SV_Position, the others in POSITION0, POSITION1 and so on. These positions are then used to sample the images.
At the pixel shader stage, the value in SV_Position is typically something like (307.5, 87.5), because it is in screen space. As the size of the render target is 500x500, the UV for sampling is (0.615, 0.175), which is correct. But the value in POSITION0 is something like (0.1312, 0.370); it is vertically flipped and offset, so I have to do (0.5 + x, 0.5 - y), and the projection is distorted and only roughly matches.
What does the rasterizer stage do to SV_Position?
The rasterizer stage expects the coordinates in SV_Position to be homogeneous clip-space coordinates; after the division by w they are normalized device coordinates, in which X and Y values between -1.0 and +1.0 cover the whole output target, with Y going "up". That way you do not have to care about the exact output resolution in the shaders.
So as you realized, before a pixel is written to the target another transformation is performed. One that inverts the Y axis, scales X and Y and moves the origin to the top left corner.
In Direct3D11 the parameters for this transformation can be controlled through the ID3D11DeviceContext::RSSetViewports method.
If you need pixel coordinates (or UVs) for the other positions in the pixel shader, you have to do the transformation yourself. To access the output resolution in the shader, bind it as a shader constant, for example.
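As a sketch of the math involved (plain C++ with illustrative names, not actual shader code): the first function is roughly the viewport transform the rasterizer applies to SV_Position, and the second is the kind of transform you would do yourself to turn one of the extra clip-space positions into a texture UV:

```cpp
struct Float2 { float x, y; };

// What the rasterizer does: NDC (-1..1, Y up) -> pixel coordinates (Y down).
Float2 ndcToPixel(Float2 ndc, float viewportW, float viewportH)
{
    return { (ndc.x * 0.5f + 0.5f) * viewportW,
             (1.0f - (ndc.y * 0.5f + 0.5f)) * viewportH };
}

// What you do yourself in the pixel shader for the extra positions:
// clip space -> UV in [0, 1], flipping Y because texture V grows downwards.
Float2 clipToUv(float x, float y, float w)
{
    float ndcX = x / w;                 // perspective divide
    float ndcY = y / w;
    return { ndcX * 0.5f + 0.5f,        // compare the 0.5 + x ...
             0.5f - ndcY * 0.5f };      // ... and 0.5 - y fix from the question
}
```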
How can I manually set where the pixel ends up in the texture in PixelShaderFunction HLSL? Ideally I want the GPU to follow the next logic:
Write pixels one by one in no particular order. Meaning: whenever the first pixel comes out, write it into the top left corner of the texture. Write the second one to the right of the first one, the third one to the right of the second one, and so on.
When you reach the end of the line - go to the next line.
When you reach the end of the texture - drop all the remaining pixels.
Thanks.
I feel like I could do it by manually computing the needed position for my pixel at the vertex shader level. If I understood better how the pixel positioning works, I might be able to pull it off. If I have a render target of 2000x4, how can I ensure at the vertex shader level that my pixel will end up in the second row?
What if my render target is a texture with height = 1, can I then not bother computing the positions? Or do I risk losing data via pixel merging? I am planning to draw nothing but long lines across the screen, one by one, clearing the target in between.
Basically you can't do what you're describing.
After the vertex shader, the GPU has a collection of triangles to draw. It fills them, pixel-by-pixel, on the render target (possibly the backbuffer). As part of this filling process - to determine the colour of each pixel - your pixel shader gets called (like a function) for that specific pixel being filled. There is no capacity at this point for "moving" the output pixel.
What you can do is modulate the texture coordinate parameter to tex2D (MSDN) when sampling from a texture in your pixel shader. You can apply whatever functions make sense to achieve your desired result.
Or, if the transform is simple, you can simply set the texture coordinates appropriately either in the vertex data, or using a vertex shader.
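For the follow-up about landing in a specific row of a 2000x4 target, here is a rough sketch (plain C++; makeRowQuad is an illustrative helper, assuming the usual D3D viewport mapping with NDC y = +1 at the top) of choosing vertex positions so a full-width quad covers exactly one pixel row:

```cpp
// Minimal sketch: vertex positions for a full-width quad that covers exactly
// one row of an H-pixel-tall render target.
struct Vertex { float x, y; };

void makeRowQuad(int row, int targetHeight, Vertex out[4])
{
    // NDC y at the top and bottom edges of the requested pixel row.
    float top    = 1.0f - 2.0f * row       / float(targetHeight);
    float bottom = 1.0f - 2.0f * (row + 1) / float(targetHeight);

    // Full-width quad (triangle-strip order): covers every pixel of that row.
    out[0] = { -1.0f, top    };
    out[1] = {  1.0f, top    };
    out[2] = { -1.0f, bottom };
    out[3] = {  1.0f, bottom };
}
```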
I'm looking for a way to programmatically recreate the following effect:
Given an input image:
input http://www.shiny.co.il/shooshx/ConeCarv/q_input.png
I want to iteratively apply the "stroke" effect.
The first step looks like this:
step 1 http://www.shiny.co.il/shooshx/ConeCarv/q_step1.png
The second step like this:
step 2 http://www.shiny.co.il/shooshx/ConeCarv/q_step2.png
And so on.
I assume this will involve some kind of edge detection and then tracing the edge somehow.
Is there a known algorithm to do this in an efficient and robust way?
Basically, a custom algorithm would be, according to this thread:
Take the 3x3 neighborhood around a pixel, threshold the alpha channel, and then see if any of the 8 pixels around the pixel has a different alpha value from it. If so, paint a circle of a given radius with center at the pixel. To do inside/outside, modulate by the thresholded alpha channel (negate to do the other side). You'll have to threshold a larger neighborhood if the circle radius is larger than a pixel (which it probably is).
This is implemented using gray-scale morphological operations. This is also the same technique used to expand/contract selections. Basically, to stroke the center of a selection (or an alpha channel), what one would do is to first make two separate copies of the selection. The first selection would be expanded by the radius of the stroke, whereas the second would be contracted. The opacity of the stroke would then be obtained by subtracting the second selection from the first.
In order to do inside and outside strokes you would contract/expand by twice the radius and subtract the parts that intersect with the original selection.
It should be noted that the most general morphological algorithm requires O(m*n) operations, where m is the number of pixels of the image and n is the number of elements in the "structuring element". However, for certain special cases, this can be optimized to O(m) operations (e.g. if the structuring element is a rectangle or a diamond).
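As a rough illustration of the expand/contract-and-subtract idea (a naive O(m*n) version; the Mask type and the morph and strokeOpacity names are made up for this sketch):

```cpp
// Minimal sketch, assuming an 8-bit single-channel alpha/selection mask.
#include <vector>
#include <algorithm>
#include <cstdint>

using Mask = std::vector<uint8_t>;   // row-major, width * height values

// Gray-scale dilation (take the max) or erosion (take the min) with a
// (2*radius+1) x (2*radius+1) square structuring element.
Mask morph(const Mask &src, int width, int height, int radius, bool dilate)
{
    Mask dst(src.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            uint8_t v = dilate ? 0 : 255;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = std::clamp(x + dx, 0, width - 1);
                    int ny = std::clamp(y + dy, 0, height - 1);
                    uint8_t s = src[ny * width + nx];
                    v = dilate ? std::max(v, s) : std::min(v, s);
                }
            dst[y * width + x] = v;
        }
    return dst;
}

// Centered stroke: expand and contract the selection, then subtract.
Mask strokeOpacity(const Mask &selection, int width, int height, int radius)
{
    Mask grown  = morph(selection, width, height, radius, true);
    Mask shrunk = morph(selection, width, height, radius, false);
    Mask stroke(selection.size());
    for (size_t i = 0; i < stroke.size(); ++i)
        stroke[i] = uint8_t(grown[i] - shrunk[i]);   // grown >= shrunk everywhere
    return stroke;
}
```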