Line stippling with OpenGL ES

I'm trying to implement a line stipple (dashed/dotted line) with OpenGL ES 2.0 and found many threads about this topic, but there weren't any examples. Has anyone done this before and can help me with the implementation?

There is no direct support for stippled lines in OpenGL ES, so the common approaches are:
1. Render multiple distinct line segments, each segment rendering one solid part of the stipple pattern.
2. Render a thin quad made out of two triangles and apply the line stipple effect using a transparent texture.
From a performance point of view I'd recommend the latter if you have a significant number of lines on screen.
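If you go the textured route (option 2 above), the pattern itself can live in a tiny 1 x 16 alpha texture built from a classic glLineStipple-style bit mask. Here is a minimal sketch for OpenGL ES 2.0 (the helper name is illustrative, not from any particular library):

    #include <GLES2/gl2.h>

    /* Build a 16 x 1 alpha texture from a glLineStipple-style 16-bit mask,
     * e.g. 0x00FF for a dashed line. Bind it with GL_REPEAT wrapping and
     * map it along the quad (or line) to get the stipple effect. */
    GLuint create_stipple_texture(unsigned short pattern)
    {
        unsigned char texels[16];
        for (int i = 0; i < 16; ++i)
            texels[i] = (pattern >> i) & 1 ? 255 : 0;   /* bit set = opaque */

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 16, 1, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, texels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        return tex;
    }

With GL_REPEAT wrapping, a texture coordinate proportional to the distance along the quad repeats the pattern much like the old fixed-function stipple did.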

Option three, beyond those given by solidpixel, is to use a textured line; the fragment shader for a line can receive varyings and sample textures just like any other fragment shader. So supply a texturing coordinate as a varying running from x = 0 to x = k * (length of line), then sample your texture to produce a fragment colour.
OpenGL's rasterisation behaviour for a line differs from that for a quad: a thin quad on a diagonal can miss fragment centres, whereas a line will always paint a continuous run of fragments.
So it's more or less the difference between computing the stipple on the fly versus precomputing it and looking it up. If you can compute it as you go, as solidpixel advocates, that's likely preferable as it'll consume less bandwidth.
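A minimal GLSL ES sketch of that textured-line idea, assuming the pattern texture from the earlier snippet (attribute and uniform names are illustrative): the application supplies a per-vertex coordinate equal to k times the distance along the line, and the fragment shader looks the pattern up.

    /* Vertex shader: a_texCoord holds k * (distance along the line),
     * supplied per vertex by the application. */
    static const char *stipple_vs =
        "attribute vec4 a_position;                        \n"
        "attribute float a_texCoord;                       \n"
        "uniform mat4 u_mvp;                               \n"
        "varying float v_texCoord;                         \n"
        "void main() {                                     \n"
        "    v_texCoord = a_texCoord;                      \n"
        "    gl_Position = u_mvp * a_position;             \n"
        "}                                                 \n";

    /* Fragment shader: sample the 1-pixel-high pattern texture (set to
     * GL_REPEAT) and use it to fade out the gaps in the pattern. */
    static const char *stipple_fs =
        "precision mediump float;                          \n"
        "uniform sampler2D u_pattern;                      \n"
        "uniform vec4 u_color;                             \n"
        "varying float v_texCoord;                         \n"
        "void main() {                                     \n"
        "    float on = texture2D(u_pattern, vec2(v_texCoord, 0.5)).a; \n"
        "    gl_FragColor = vec4(u_color.rgb, u_color.a * on);         \n"
        "}                                                 \n";

Compile and link these like any other GLES2 program, bind the pattern texture to u_pattern, and draw the line vertices with their accumulated-length attribute.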

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D there is no public source code, but since it uses Direct3D 10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bézier path, Skia will try to use a tessellation shader if possible, which requires a new draw call if the previous element you rendered was a rectangle, since the pipeline state changes.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer; it starts a new draw call only when you change brush type (switching from one solid colour brush to another keeps the same shaders, so no switch is needed), when the buffer is full, or when you switch from shapes to text (since it then needs to bind glyph texture atlases).
Note that when tessellating a Bézier path, D2D does a very good job of making the resulting geometry non-self-intersecting, so alpha blending works properly even on complex self-intersecting paths.
In the case of a rounded rectangle it does the same: it simply tessellates it into triangles.
This allows it to minimize draw calls to a good extent, and also to anti-alias on a non-MSAA surface (this is done at the mesh level, with small alpha-blended triangles along the edges). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing what is conceptually a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single call as well.
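As a small illustration of the "tessellate on the CPU into triangles" point, here is a purely illustrative sketch, not Skia's or D2D's actual code, of turning a rounded rectangle into a triangle fan; a real renderer would also emit the thin anti-aliasing fringe triangles mentioned above and use adaptive segment counts.

    #include <math.h>
    #include <stddef.h>

    typedef struct { float x, y; } Vec2;

    /* Tessellate a rounded rectangle (top-left corner x,y, size w x h, corner
     * radius r) into a triangle fan around its centre, with `segs` segments per
     * corner arc. `out` must hold at least 4 * (segs + 1) + 2 vertices.
     * Returns the number of vertices written; draw as a triangle fan, or expand
     * it into an indexed triangle list before pushing it to a vertex buffer. */
    size_t tessellate_round_rect(float x, float y, float w, float h,
                                 float r, int segs, Vec2 *out)
    {
        const float half_pi = 1.57079632679f;
        /* Corner arc centres, walked clockwise in screen coordinates (y down). */
        const Vec2 c[4] = {
            { x + w - r, y + r     },   /* top-right    */
            { x + w - r, y + h - r },   /* bottom-right */
            { x + r,     y + h - r },   /* bottom-left  */
            { x + r,     y + r     },   /* top-left     */
        };
        const float start[4] = { -half_pi, 0.0f, half_pi, 2.0f * half_pi };

        size_t n = 0;
        out[n++] = (Vec2){ x + w * 0.5f, y + h * 0.5f };   /* fan centre */
        for (int corner = 0; corner < 4; ++corner) {
            for (int i = 0; i <= segs; ++i) {
                float a = start[corner] + half_pi * (float)i / (float)segs;
                out[n++] = (Vec2){ c[corner].x + r * cosf(a),
                                   c[corner].y + r * sinf(a) };
            }
        }
        out[n++] = out[1];                                  /* close the fan */
        return n;
    }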

How is the GPU "instructed" to render an image?

If this question is off, please let me know as I don't want to clutter the platform with off-topic questions!
Anyways, I'm having a hard time finding information about what's actually going on when an image is rendered because of some code I've written.
Say I wanted to add the numbers 5 and 3. The CPU would write 5 to one register and 3 to another one. The ALU would take care of the calculation and output 8. That's fine, the CPU uses MOVE and ADD to produce a result.
What I can't find any information on, however, is what's going on when I want to draw a rectangle. There are importable frameworks for most programming languages which let you do this. In SpriteKit (Swift & Objective-C), for example, you would write something like
let node = SKSpriteNode(color: .white, size: CGSize(width: 200, height: 300))
and add node to an SKScene (just a scene containing childNodes), and a white rectangle would "magically" get rendered. What I would like to know is what goes on under the hood. Why does this exact framework let you draw a rectangle? What is the assembly code (say, for an Intel Core M) which makes the GPU calculate what this rectangle will look like? And how does SpriteKit build on the basics of Swift/Objective-C to actually do this (and could I do this myself)?
Maybe a weird question, but I feel like I have to know (yes, sometimes I'm too curious). Thank you.
P.S. I would love a really detailed answer, not "the CPU 'tells' the GPU to draw a rectangle" - CPUs can't talk!
There are many ways to render a convex polygon. The most widely used in the past was the scanline algorithm, where you rasterize all the edges of the outline into left/right edge buffers and then fill with horizontal spans, interpolating the other vertex attributes along the way (z, r, g, b, tx, ty, nx, ny, nz, ...). This was well suited to single-threaded, CPU-based software rendering.
With parallel hardware (like a GPU), a different approach became more popular. It renders only triangles (so you need to triangulate your polygons), roughly like this (a small software sketch of these steps follows below):
1. Compute the AABB: simply the min/max of the x, y coordinates of the triangle vertices.
2. Loop through the AABB: this is done in parallel and is handled by the GPU's interpolators. Each interpolated ("looped") pixel is called a fragment, as it usually contains more than just a colour.
3. For each fragment, compute its barycentric coordinates and decide from them whether the fragment is inside the triangle (s >= 0, t >= 0, s + t <= 1) or outside it; if it is inside, invoke the fragment shader.
All of this happens just before the fragment shader stage, and usually all of it (or the majority of it) is implemented in hardware, so there is no code to write.
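To make those steps concrete, here is a minimal single-threaded C sketch of the same barycentric rasterisation loop (purely illustrative; real hardware does the equivalent work as fixed-function logic, in parallel, with proper fill rules for shared edges):

    #include <math.h>

    typedef struct { float x, y; } Vec2;

    /* Twice the signed area of triangle (a, b, c); equivalently the 2D cross
     * product of (b - a) and (c - a). */
    static float edge(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    /* Rasterise triangle (a, b, c): compute its AABB, loop over the covered
     * pixels, compute barycentric coordinates (s, t) at each pixel centre and
     * call the "fragment shader" for pixels that are inside the triangle. */
    void rasterize_triangle(Vec2 a, Vec2 b, Vec2 c,
                            void (*fragment_shader)(int x, int y, float s, float t))
    {
        /* 1. AABB: min/max of the vertex coordinates. */
        int minx = (int)floorf(fminf(a.x, fminf(b.x, c.x)));
        int maxx = (int)ceilf (fmaxf(a.x, fmaxf(b.x, c.x)));
        int miny = (int)floorf(fminf(a.y, fminf(b.y, c.y)));
        int maxy = (int)ceilf (fmaxf(a.y, fmaxf(b.y, c.y)));

        float area = edge(a, b, c);
        if (area == 0.0f) return;                       /* degenerate triangle */

        /* 2. Loop over the AABB (the GPU runs this part in parallel). */
        for (int y = miny; y <= maxy; ++y) {
            for (int x = minx; x <= maxx; ++x) {
                Vec2 p = { x + 0.5f, y + 0.5f };        /* pixel centre */
                /* 3. Barycentric coordinates: p = a + s*(b-a) + t*(c-a). */
                float s = edge(a, p, c) / area;
                float t = edge(a, b, p) / area;
                if (s >= 0.0f && t >= 0.0f && s + t <= 1.0f)
                    fragment_shader(x, y, s, t);        /* inside: shade it */
            }
        }
    }

A real rasteriser also interpolates all the other vertex attributes (z, colour, texture coordinates, ...) using these same barycentric weights.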
Nowadays GPU rendering is done by passing geometry to the graphics driver itself. What the driver does under the hood is guesswork for us, but most likely it also just moves the geometry and configuration settings to the right places on the GPU (memory, registers, ...).

fast 2D texture line sample

Imagine you have a chessboard textured triangle shown in front of you.
Then imagine you move the camera so that you can see the triangle from one side, when it nearly looks as a line.
You will probably see the line as grey, because that is the average colour of the texels lying along the straight line from the camera across the triangle. The GPU does this all the time.
Now, how is this implemented? Should I sample every texel in a straight line and average the result to get the same output? Or is there another more efficient way to do this? Maybe using mipmaps?
It does not matter if you look at the object from the side, front, or back; the implementation remains exactly the same.
The exact implementation depends on the required results. A typical graphics API such as Direct3D has many different texture sampling techniques, which all have different properties. Have a look at the documentation for an explanation of the common sampling techniques.
If you start looking at objects from an oblique angle, the texture on the triangle might look distorted with most sampling techniques; anisotropic filtering is often used in these scenarios.
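To answer the "maybe using mipmaps?" part concretely: mipmapping is the usual prefiltering mechanism here, and anisotropic filtering adds extra samples along the direction of compression for oblique views. A hedged OpenGL ES 2.0 sketch of enabling both (the function name is illustrative; the anisotropy enums come from the EXT_texture_filter_anisotropic extension and should only be used when that extension is reported by the driver):

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Values from EXT_texture_filter_anisotropic, in case the headers lack them. */
    #ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
    #define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
    #define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
    #endif

    /* Assumes `tex` already holds the base level image (e.g. the chessboard). */
    void enable_filtering(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Build the mipmap chain: each level is a prefiltered, half-resolution
         * average of the previous one, so minified samples are cheap. */
        glGenerateMipmap(GL_TEXTURE_2D);

        /* Trilinear filtering: blend between the two nearest mip levels. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Anisotropic filtering (only if the extension is present): take several
         * samples along the direction of compression for oblique views. */
        GLfloat max_aniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
    }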

draw stipple line using vertex buffer in directx11

In OpenGL we can set a line pattern using glEnable(GL_LINE_STIPPLE); glLineStipple(2, 0x00FF);
And in DX9 we can draw a stippled line using ID3DXLine's method SetPattern(0x00FF).
But it seems there is no such method in DX11 for setting a stipple pattern. If that is true, I wonder whether there is a smart way to draw stippled lines in DX11?
You might look at this question. It asks how to do line stipple in non-deprecated modern OpenGL, which is similar in functionality to Direct3D 10+.
My answer basically was to use a combination of alpha testing and the geometry shader to do it:
Perhaps you could also use a 1D texture with the alpha (or red) channel encoding the pattern as 0.0 (no line) or 1.0 (line), have the line's texture coordinate run from 0 to 1, and in the fragment shader do a simple alpha test, discarding fragments with alpha below some threshold. You can use the geometry shader to generate your line's texCoords, as otherwise you would need different vertices for every line. This way you can also make the texCoord dependent on the screen-space length of the line.
The whole thing gets more difficult if you draw triangles (using polygon mode GL_LINE). Then you have to do the triangle-to-line transformation yourself in the geometry shader, putting in triangles and putting out lines (that could also be a reason for deprecating polygon mode in the future, if it hasn't been already).
Although this question was about OpenGL, the basic principles are exactly the same; you just have to translate the shaders from the answer to HLSL, which shouldn't be too difficult given their simplicity.
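For concreteness, the fragment-shader half of the quoted approach is only a few lines. Here is a hedged GLSL ES sketch (identifier names are illustrative); it ports to an HLSL pixel shader almost verbatim, using clip() in place of discard:

    /* GLSL ES fragment shader for the alpha-test idea above. */
    static const char *stipple_discard_fs =
        "precision mediump float;                                    \n"
        "uniform sampler2D u_pattern;  // 1D stipple mask            \n"
        "uniform vec4 u_color;                                       \n"
        "varying float v_texCoord;     // 0..1 along the line        \n"
        "void main() {                                               \n"
        "    float a = texture2D(u_pattern, vec2(v_texCoord, 0.5)).a;\n"
        "    if (a < 0.5) discard;     // gap in the pattern         \n"
        "    gl_FragColor = u_color;                                 \n"
        "}                                                           \n";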

Direct3D: Wireframe without Diagonals

When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across them because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to draw only triangles, and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but it would look odd because the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial: although surfaces facing away from the camera can easily be removed, partially obscured surfaces would require manual clipping. As a final thought, use a two-pass approach: the first pass draws the filled polygons but writes only to the z-buffer, then draw the lines over the top with a suitable z bias. That would handle the partially obscured surface problem.
The built-in wireframe mode renders the edges of the primitives. Since in D3D the primitives are triangles (or lines, or points, but not arbitrary polygons), the built-in way won't cut it.
I guess you have to look at some sort of "edge detection" algorithm. These could operate in image space: render the model into a texture, assigning a unique colour to each logical polygon, and then do a post-processing pass with a pixel shader that detects changes in colour (where the colour changes, output black; otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative would be to use geometry shaders in Direct3D 10. Anyway, there are lots of different options here.
I think you'll need to draw those lines manually, as wireframe mode is a built-in mode, so I don't think you can modify it. You can get the list of vertices in your mesh and process them into the list of lines you need to draw; see the sketch below.
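A hedged sketch of that edge-list idea, assuming the model is available as quads (so the diagonals introduced by triangulation never appear); a real implementation would deduplicate shared edges with a hash set rather than the simple linear search used here:

    #include <stddef.h>

    /* A quad face given as four vertex indices (the logical face, before it is
     * split into two triangles for the GPU). */
    typedef struct { unsigned v[4]; } Quad;

    /* An edge as a pair of vertex indices, stored with the smaller index first
     * so edges shared by neighbouring faces compare equal. */
    typedef struct { unsigned a, b; } Edge;

    static Edge make_edge(unsigned a, unsigned b) {
        Edge e = { a < b ? a : b, a < b ? b : a };
        return e;
    }

    /* Build a line list containing each boundary edge of each quad exactly once.
     * `edges` must hold at least 4 * quad_count entries. Returns the edge count.
     * Render the result with a line-list primitive topology. */
    size_t build_edge_list(const Quad *quads, size_t quad_count, Edge *edges)
    {
        size_t count = 0;
        for (size_t q = 0; q < quad_count; ++q) {
            for (int i = 0; i < 4; ++i) {
                Edge e = make_edge(quads[q].v[i], quads[q].v[(i + 1) % 4]);
                /* Skip edges already emitted (shared with a previous face). */
                int seen = 0;
                for (size_t k = 0; k < count && !seen; ++k)
                    seen = (edges[k].a == e.a && edges[k].b == e.b);
                if (!seen)
                    edges[count++] = e;
            }
        }
        return count;
    }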
