In OpenGL we can set a line pattern using glEnable(GL_LINE_STIPPLE); glLineStipple(2, 0x00FF);.
And in Direct3D 9, we can draw a stippled line using ID3DXLine's SetPattern(0x00FF) method.
But there seems to be no such method for setting a stipple pattern in Direct3D 11. If that is true, I wonder if there is a smart way to draw stippled lines in D3D11?
You might look at this question. It asks how to do line stipple in non-deprecated modern OpenGL, which is similar in functionality to Direct3D 10+.
My answer basically was to use a combination of alpha testing and the geometry shader to do it:
Perhaps you could also use a 1D texture with the alpha (or red)
channel encoding the pattern as 0.0 (no line) or 1.0 (line), then
have the line's texture coordinate run from 0 to 1, and in the fragment
shader do a simple alpha test, discarding fragments with alpha
below some threshold. You can use the geometry shader to
generate your line's texCoords, as otherwise you would need different
vertices for every line. This way you can also make the texCoord
dependent on the screen-space length of the line.
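The texture-lookup-plus-alpha-test idea above can be sketched on the CPU. This is an illustrative Python model, not shader code: `sample_pattern` stands in for a 1D texture fetch, and the threshold comparison stands in for the alpha test / `discard`. All names here are my own, and the 16-bit LSB-first mask convention is borrowed from glLineStipple.

```python
PATTERN = 0x00FF  # 16-bit stipple mask, least significant bit first
PATTERN_LEN = 16

def sample_pattern(tex_coord):
    """Return 1.0 (line) or 0.0 (gap) for a texcoord in [0, 1)."""
    bit = int(tex_coord * PATTERN_LEN) % PATTERN_LEN
    return 1.0 if (PATTERN >> bit) & 1 else 0.0

def stippled_fragments(length_px, repeat=1.0):
    """Yield (x, visible) for each fragment along a horizontal line.

    In the real pipeline the geometry shader would generate the texcoord
    0..repeat from the screen-space length, so the pattern keeps a fixed
    on-screen period instead of stretching with the line.
    """
    for x in range(length_px):
        t = (x / length_px) * repeat % 1.0
        visible = sample_pattern(t) >= 0.5  # the "alpha test"
        yield x, visible

visible_count = sum(v for _, v in stippled_fragments(32))
print(visible_count)  # 16: half the fragments pass with the 0x00FF mask
```

With the 0x00FF mask the first half of the pattern is solid and the second half is a gap, so exactly half of the 32 fragments survive the test.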
The whole thing gets more difficult if you draw triangles (using
polygon mode GL_LINE). Then you have to do the triangle-to-line
transformation yourself in the geometry shader, taking in triangles
and putting out lines (that could also be a reason for deprecating
polygon mode in the future, if it hasn't happened already).
Although this question was about OpenGL, the basic principles are exactly the same, you just have to map the shaders from the answer to HLSL, which shouldn't be too difficult given their simplicity.
Related
This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bézier path, Skia will try to use a tessellation shader if possible, which requires a new draw call if the previous element you were rendering was a rectangle, since the pipeline state changes.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer, and only issues a new draw call when you change brush type (switching from one solid color brush to another keeps the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bézier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle, it does the same, just tessellating it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small triangles with alpha). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call some draw-text functions several times in a row, it will try to merge them into a single call as well.
I'm trying to implement a line stipple (dashed/dotted line) with OpenGL ES 2.0 and found many threads about this topic, but there weren't any examples. Has anyone done this before who can help me with the implementation?
There is no direct support for stippled lines in OpenGL ES, so the common approaches are:
Render multiple distinct line segments, each segment rendering one solid part of the stipple pattern.
Render a thin quad made out of two triangles and apply the line stipple effect using a transparent texture.
From a performance point of view I'd recommend the latter if you have a significant number of lines on screen.
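The second option (a thin quad made of two triangles) can be sketched in plain Python geometry. This is an illustrative helper of my own, not library code: it expands a 2D segment into six vertices carrying `(x, y, u, v)`, where the u coordinate runs over the line's length in pixels so a transparent stipple texture repeats rather than stretches.

```python
import math

def line_to_quad(p0, p1, width):
    """Expand a 2D segment into two triangles (6 vertices) as (x, y, u, v)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Unit normal to the segment, scaled to half the line width.
    nx, ny = -dy / length * width / 2, dx / length * width / 2
    a = (p0[0] + nx, p0[1] + ny, 0.0, 0.0)
    b = (p0[0] - nx, p0[1] - ny, 0.0, 1.0)
    c = (p1[0] + nx, p1[1] + ny, length, 0.0)  # u = length in pixels,
    d = (p1[0] - nx, p1[1] - ny, length, 1.0)  # so the pattern repeats
    return [a, b, c, b, d, c]                  # triangles abc and bdc

verts = line_to_quad((0, 0), (10, 0), 2)
print(len(verts))  # 6 vertices -> 2 triangles
```

The vertex buffer you upload would hold these six vertices per segment; the fragment shader then samples the stipple texture at the interpolated u coordinate and discards (or blends out) transparent texels.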
Option three, beyond those given by solidpixel, is to use a textured line; the fragment shader for a line can receive varyings and sample textures just like any other fragment shader. So supply a texturing coordinate as a varying running from x = 0 to x = k * (length of line), then sample your texture to produce a fragment colour.
The behaviour in OpenGL when rendering a line is different from that when rendering a quad: a thin quad on a diagonal can miss fragment centres, whereas a line will always paint a continuous run of fragments.
So it's more or less the difference between computing the stipple on the fly versus precomputing it and looking it up. If you can compute it as you go, as solidpixel advocates, that's likely preferable as it'll consume less bandwidth.
Using only a box function, what is the proper way to draw an annulus (wide circle) using Bresenham's algorithm? I assume that consecutive parallel lines could be drawn, though using an angled line instead of a point might be more feasible; that would also involve trigonometry.
I am using Python, but examples in any language appreciated.
You cannot fill all ring points with radial lines, because for R2 = 2*R1 the outer circumference contains twice as many points in its raster representation, and there will be gaps near the outer circle.
Graphics engines (DirectX, OpenGL and so on) often use triangle fans to fill the circles, ellipses, rings.
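One gap-free alternative that needs only a box/pixel function, sketched below, is to test each pixel of the bounding box against both radii instead of sweeping radial lines. This is a simple distance test rather than Bresenham proper (it visits O(R2²) pixels instead of walking the boundary), and the function names are my own:

```python
def draw_annulus(cx, cy, r_inner, r_outer, set_pixel):
    """Fill every pixel whose centre lies between the two radii."""
    r_in2, r_out2 = r_inner * r_inner, r_outer * r_outer
    for y in range(cy - r_outer, cy + r_outer + 1):
        for x in range(cx - r_outer, cx + r_outer + 1):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if r_in2 <= d2 <= r_out2:
                set_pixel(x, y)

pixels = set()
draw_annulus(0, 0, 2, 4, lambda x, y: pixels.add((x, y)))
print((0, 3) in pixels, (0, 0) in pixels)  # True False
```

Because every candidate pixel is tested directly, no point of the ring is missed, which is exactly the failure mode of the radial-line approach described above.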
I want to implement inner and outer glow for a rendered 3D object. Here the glow is to be applied only on the 3D models who have glow enabled and not for the entire scene.
There is one post on Stack Overflow that talks about implementing it by modifying the mesh, which in my opinion is difficult and computationally intensive.
Was wondering if it can be achieved through multi-pass rendering? Something like a bloom effect that's applied only to specific objects in the scene, and only to their inner and outer boundaries.
I assume you want the glow only near the object's contours?
I did an outer glow using a multi-pass approach (after all "regular" drawing):
1. Draw the object to a texture (cleared to fully transparent) using a constant-output shader (outputting the glow color), marking the stencil buffer in the process. Use an EQUAL depth test if you only want a glow around the part of the object actually visible on screen, obviously reusing the depth buffer from the normal scene drawing.
2. Run a separated Gaussian blur on this texture.
3. Blend the result into the output buffer for all pixels that do not have the stencil buffer marked in step 1.
For an inner + outer glow, you could do an edge detection on the result of step 1, keeping only marked pixels near the boundary, followed by the blur and an unmasked blend.
You could also try to combine the edge detection and blurring by using a filter that scales its output based on the variance of all samples in its radius. It would be non-separable though...
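The separated blur in step 2 can be illustrated with a small CPU sketch: one 1D pass over rows, then the same 1D pass over columns, which is mathematically equivalent to a full 2D Gaussian convolution but far cheaper. This is a pure-Python model of the idea (a GPU version would be two fragment-shader passes); the 3-tap binomial kernel is just a stand-in for a real Gaussian:

```python
def blur_1d(row, kernel):
    """Convolve one row with a 1D kernel, clamping at the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(row) - 1)  # clamp-to-edge
            acc += row[j] * w
        out.append(acc)
    return out

def separable_blur(image, kernel):
    # Horizontal pass over rows, then vertical pass over columns.
    h = [blur_1d(row, kernel) for row in image]
    cols = [blur_1d(list(col), kernel) for col in zip(*h)]
    return [list(row) for row in zip(*cols)]

kernel = [0.25, 0.5, 0.25]  # small binomial approximation of a Gaussian
glow = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # single lit "glow" pixel
result = separable_blur(glow, kernel)
print(result[1][1])  # 0.25 = 0.5 * 0.5, the centre weight applied twice
```

For a kernel of width N, the two 1D passes cost 2N taps per pixel instead of N² for the equivalent 2D pass, which is why step 2 is done separated.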
When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across them because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but would look odd as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial either: although surfaces facing away from the camera can easily be removed, partially obscured surfaces would require manual clipping. As a final thought, try a two-pass approach - the first pass draws the filled polygons but writes only to the z buffer, then draw the lines over the top with a suitable z bias. That would handle the partially obscured surface problem.
The built-in wireframe mode renders edges of the primitives. As in D3D the primitives are triangles (or lines, or points - but not arbitrary polygons), that means the built-in way won't cut it.
I guess you have to look up some sort of "edge detection" algorithm. These could operate in image space: render the model into a texture, assigning a unique color to each logical polygon, and then do a post-processing pass using a pixel shader that detects any change in color (color change = output black, otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. In short, there are lots of different options here.
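The line-list alternative mentioned above can be sketched quite simply: walk the model's logical quads, emit only their four boundary edges (never the triangulation diagonal), and deduplicate edges shared between neighbouring faces. A minimal sketch, assuming quads are given as index 4-tuples in winding order (all names here are illustrative):

```python
def quad_edge_list(quads):
    """Build a deduplicated boundary-edge list from logical quads.

    quads: iterable of 4-tuples of vertex indices in winding order.
    Returns a sorted list of unique (lo, hi) vertex-index pairs,
    suitable for rendering as a D3D line list.
    """
    edges = set()
    for a, b, c, d in quads:
        for u, v in ((a, b), (b, c), (c, d), (d, a)):
            edges.add((min(u, v), max(u, v)))  # order-independent key
    return sorted(edges)

# Two quads sharing the edge (1, 2): 8 emitted edges collapse to 7,
# and no diagonal (e.g. (0, 2)) ever appears.
quads = [(0, 1, 2, 3), (1, 4, 5, 2)]
edges = quad_edge_list(quads)
print(len(edges))  # 7
```

Each resulting index pair becomes one line segment in the vertex/index buffer; combined with the z-prepass-plus-bias trick described earlier, this also handles hidden-line removal.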
I think you'll need to draw those lines manually, as wireframe mode is a built-in mode, so I don't think you can modify it. You can get the list of vertices in your mesh and process them into the list of lines you need to draw.