Imagine a chessboard-textured triangle directly in front of you.
Now imagine moving the camera so that you view the triangle almost edge-on, where it nearly looks like a line.
You will probably see the line as grey, because that is the average color of the texels that lie along each ray from the camera through the triangle. The GPU does this all the time.
Now, how is this implemented? Should I sample every texel in a straight line and average the result to get the same output? Or is there another more efficient way to do this? Maybe using mipmaps?
It does not matter if you look at the object from the side, front, or back; the implementation remains exactly the same.
The exact implementation depends on the required results. A typical graphics API such as Direct3D offers many different texture sampling techniques, each with different properties; have a look at the documentation for an overview of the common ones.
If you look at objects from an oblique angle, the texture on the triangle can look blurred or distorted with most sampling techniques, and anisotropic filtering is often used in these scenarios.
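To give a feel for the mipmap part of the question: the GPU does not average texels along the view ray per pixel. Instead it precomputes mipmaps (repeatedly box-filtered, half-size copies of the texture) and picks a level based on how large the pixel's footprint is in texture space. Here is a rough CPU sketch of that idea in Python; build_mips and sample_trilinear are made-up helper names, not any real API, and real hardware adds anisotropic sampling on top of this:

```python
import numpy as np

def build_mips(tex):
    """Build a mip chain by repeated 2x2 box filtering (tex: square, power-of-two float array)."""
    mips = [tex]
    while mips[-1].shape[0] > 1:
        t = mips[-1]
        mips.append(0.25 * (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]))
    return mips

def sample_trilinear(mips, u, v, dudx, dvdx, dudy, dvdy):
    """Pick a mip level from the pixel's texture-space footprint, then blend two levels."""
    size = mips[0].shape[0]
    # Footprint of one screen pixel in texels (larger of the x and y derivative lengths).
    rho = max(np.hypot(dudx, dvdx), np.hypot(dudy, dvdy)) * size
    lod = np.clip(np.log2(max(rho, 1e-8)), 0, len(mips) - 1)
    lo, hi = int(np.floor(lod)), min(int(np.floor(lod)) + 1, len(mips) - 1)
    frac = lod - np.floor(lod)

    def point_sample(level):
        n = mips[level].shape[0]
        x = min(int(u * n), n - 1)
        y = min(int(v * n), n - 1)
        return mips[level][y, x]

    return (1 - frac) * point_sample(lo) + frac * point_sample(hi)

# A 64x64 chessboard viewed almost edge-on: the footprint is huge, so a coarse
# (grey) mip level is chosen instead of averaging texels along the view ray.
checker = np.indices((64, 64)).sum(axis=0) % 2.0
mips = build_mips(checker)
print(sample_trilinear(mips, 0.5, 0.5, 0.3, 0.0, 0.0, 0.002))  # close to 0.5 (grey)
```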
This is a question about understanding the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bézier path, Skia will try to use a tessellation shader if possible, which requires a new draw call if the previous element you were rendering was a rectangle, since the pipeline state changes.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer. It only switches to a new draw call when you change brush type (switching from one solid-color brush to another can keep the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bézier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, and also allows anti-aliasing on a non-MSAA surface (this is done at the mesh level, with some small alpha-blended triangles along the edges). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single call as well.
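Not D2D's actual code, but a rough sketch of what CPU-side tessellation of a rounded rectangle into a triangle list can look like (flatten each corner arc into a few segments, then fan triangles from the centre, assuming y-up coordinates); the small anti-aliasing skirt triangles mentioned above are omitted:

```python
import math

def tessellate_rounded_rect(x, y, w, h, r, segments_per_corner=8):
    """Flatten the outline (corner arcs approximated by line segments), then fan
    triangles from the centre. Returns a flat triangle list [(p0, p1, p2), ...]."""
    corners = [  # (arc centre, start angle) for each corner, counter-clockwise
        ((x + w - r, y + r),     -0.5 * math.pi),  # bottom-right
        ((x + w - r, y + h - r),  0.0),            # top-right
        ((x + r,     y + h - r),  0.5 * math.pi),  # top-left
        ((x + r,     y + r),      math.pi),        # bottom-left
    ]
    outline = []
    for (cx, cy), start in corners:
        for i in range(segments_per_corner + 1):
            a = start + 0.5 * math.pi * i / segments_per_corner
            outline.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    centre = (x + w / 2, y + h / 2)
    return [(centre, outline[i], outline[(i + 1) % len(outline)])
            for i in range(len(outline))]

tris = tessellate_rounded_rect(0, 0, 100, 60, 10)
print(len(tris), "triangles")  # these would be pushed into one shared vertex buffer
```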
I enjoy computer graphics.
I was wondering what the fastest engine was with the following functionality:
Draws triangles with four color channels (RGBA) and supports point and directional lights.
Texturing would be a cool additional feature, but again, I am looking for the fastest engine, not the most functional. Camera animation and object animation are imperative.
Finally, there are really two answers to this question, one for general development and one for the web, but if you can only speak to one or the other, your contribution is still appreciated!
There are quite a lot of engines that do the job. One of the best known is Unity, which also gives you tons of other features with good performance.
But I think you are not really looking for an engine but an API. Examples are OpenGL or DirectX (already mentioned). OpenGL even has a web variant (WebGL).
There is one more problem: the triangles are semitransparent. What is missing from the other answer is the question of whether the triangles are already ordered. OpenGL, for example, is good at rendering objects where it does not matter which triangle is nearest to the viewer: it "searches" for that one on the fly and shows only the triangle that is visible. But with semitransparent triangles you can see different triangles overlapping each other, so it is not enough to know which triangle is in front; you also need to know which triangle comes directly after that, and so on. OpenGL offers blending for this, but you have to order the semitransparent triangles manually, back to front, before rendering. This is known as the painter's algorithm. Sorting objects is a complex problem, especially with a large number of them, so this can take quite a long time.
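A minimal sketch of that back-to-front step, assuming each triangle's depth can be approximated by its centroid (good enough when triangles don't interpenetrate):

```python
def painters_sort(triangles):
    """triangles: list of dicts with 'verts' = three (x, y, z) view-space points
    and a 'color'. Returns them ordered back-to-front for alpha blending."""
    def centroid_depth(tri):
        return sum(v[2] for v in tri['verts']) / 3.0
    # Larger z = farther from the camera in this convention; draw the far ones first.
    return sorted(triangles, key=centroid_depth, reverse=True)

tris = [
    {'verts': [(0, 0, 5), (1, 0, 5), (0, 1, 5)], 'color': (1, 0, 0, 0.5)},
    {'verts': [(0, 0, 2), (1, 0, 2), (0, 1, 2)], 'color': (0, 0, 1, 0.5)},
]
for t in painters_sort(tris):  # submit (or blend) in this order
    print(t['color'])
```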
For this there is another solution called "depth peeling". The idea is to render all triangles multiple times with OpenGL. The first pass gives you the triangles nearest to the viewer. Then you render all triangles again, but excluding those front-most ones, which gives you the second-nearest triangles. After that, all triangles are rendered again without the first two "peels", which gives the third-nearest triangles, and so on. This is expensive because everything has to be rendered multiple times, but when there is a very large number of triangles it can be faster than sorting (and more precise where triangles overlap). In most cases four peels are enough for good results. For further reading I suggest the following paper by Everitt: http://gamedevs.org/uploads/interactive-order-independent-transparency.pdf
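To make the peeling idea concrete, here is a tiny CPU simulation for a single pixel: each pass keeps only the nearest fragment that lies behind the previous peel, and the peels are composited front to back. The real technique does this on the GPU with an extra depth buffer, as described in Everitt's paper; this is just an illustration:

```python
def depth_peel_pixel(fragments, num_peels=4):
    """fragments: list of (depth, (r, g, b), alpha) covering one pixel, unsorted.
    Returns the front-to-back composited color after num_peels peeling passes."""
    color = [0.0, 0.0, 0.0]
    remaining = 1.0                 # how much light still reaches deeper layers
    last_peel_depth = -float('inf')
    for _ in range(num_peels):
        # "Render" pass: keep only fragments behind everything peeled so far,
        # and take the nearest of those (the job of the second depth test).
        candidates = [f for f in fragments if f[0] > last_peel_depth]
        if not candidates:
            break
        depth, (r, g, b), a = min(candidates, key=lambda f: f[0])
        # Composite front-to-back (the "under" operator).
        color[0] += remaining * a * r
        color[1] += remaining * a * g
        color[2] += remaining * a * b
        remaining *= (1.0 - a)
        last_peel_depth = depth
    return tuple(color), 1.0 - remaining

# Three overlapping 50%-transparent fragments, given in arbitrary order.
print(depth_peel_pixel([(3.0, (0, 0, 1), 0.5), (1.0, (1, 0, 0), 0.5), (2.0, (0, 1, 0), 0.5)]))
```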
Your best bet is probably OpenGL. In the case of the web, you could use WebGL and in the case of native desktop or mobile development you could directly use OpenGL.
Let's say I've got an RGBA texture and a polygon class whose constructor takes an array of vertex coordinates.
Is there some way to create a polygon from this texture, for example using the texture's alpha channel?
In 2D.
Absolutely, yes it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel generates a lot of geometry very quickly. Holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either fully-on or fully-off pixels, you probably want nice smooth edges. That's hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
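To illustrate that last idea, here is a very rough marching-squares sketch in Python: threshold the alpha, then collect boundary segments cell by cell. Joining the segments into closed polygons and simplifying them (the hard part) is left out:

```python
import numpy as np

def alpha_to_segments(alpha, threshold=0.5):
    """Collect boundary line segments from a 2D alpha array via marching squares.
    Vertices are placed at cell-edge midpoints (no interpolation); segments are unordered."""
    inside = alpha >= threshold
    segs = []
    h, w = inside.shape
    for y in range(h - 1):
        for x in range(w - 1):
            # Case from the four cell corners (tl, tr, bl, br).
            tl, tr = inside[y, x], inside[y, x + 1]
            bl, br = inside[y + 1, x], inside[y + 1, x + 1]
            # Midpoints of the cell's four edges.
            top, bottom = (x + 0.5, y), (x + 0.5, y + 1)
            left, right = (x, y + 0.5), (x + 1, y + 0.5)
            # Standard marching-squares lookup (ambiguous saddle cases handled naively).
            table = {
                (1, 0, 0, 0): [(left, top)],     (0, 1, 0, 0): [(top, right)],
                (0, 0, 1, 0): [(bottom, left)],  (0, 0, 0, 1): [(right, bottom)],
                (1, 1, 0, 0): [(left, right)],   (0, 0, 1, 1): [(left, right)],
                (1, 0, 1, 0): [(top, bottom)],   (0, 1, 0, 1): [(top, bottom)],
                (1, 1, 1, 0): [(right, bottom)], (1, 1, 0, 1): [(bottom, left)],
                (1, 0, 1, 1): [(top, right)],    (0, 1, 1, 1): [(left, top)],
                (1, 0, 0, 1): [(left, top), (right, bottom)],
                (0, 1, 1, 0): [(top, right), (bottom, left)],
            }
            segs += table.get((int(tl), int(tr), int(bl), int(br)), [])
    return segs

# A filled disc as a stand-in for the texture's alpha channel.
ys, xs = np.mgrid[0:32, 0:32]
alpha = ((xs - 16) ** 2 + (ys - 16) ** 2 < 100).astype(float)
print(len(alpha_to_segments(alpha)), "boundary segments")
```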
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite: a PolygonCollider2D. The script reference for it mentions the pathCount variable, which describes the number of polygons the collider contains, which in turn describes which indexes are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects covers a lot of use cases. That doesn't mean it's appropriate for your application.
Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines each occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends that cater for lines which abut, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution in your case), which will work with virtually any drawing library, is to supersample it: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and higher, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow things down) and then scale the result down with bilinear filtering. The main downside is that the offscreen bitmap uses a lot of memory.
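A minimal sketch of the scale-down step with numpy; the drawing itself would be done at 4x size by whatever 2D library you already use, and a box filter stands in here for the bilinear downscale:

```python
import numpy as np

FACTOR = 4  # render at 4x width and height, with 4-pixel-wide lines

def downsample(hi_res):
    """Average each FACTOR x FACTOR block of the offscreen bitmap down to one pixel."""
    h, w = hi_res.shape[0] // FACTOR, hi_res.shape[1] // FACTOR
    return hi_res[:h * FACTOR, :w * FACTOR].reshape(h, FACTOR, w, FACTOR).mean(axis=(1, 3))

# Stand-in for the offscreen bitmap: two aliased 4-pixel-wide "lines" crossing.
hi = np.zeros((64, 64))
hi[30:34, :] = 1.0          # horizontal line
hi[:, 30:34] = 1.0          # vertical line
lo = downsample(hi)
print(lo[7, 7], lo[7, 0])   # 0.75 where the lines cross, 0.5 over a single line
```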
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
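This is obviously not RenderMan, but the stochastic-sampling idea itself is easy to sketch: jitter a handful of sample points inside the pixel and count how many fall inside any primitive, so overlapping lines can never push the coverage past 100%:

```python
import random

def pixel_coverage(px, py, primitives, samples_per_pixel=16):
    """Estimate how much of pixel (px, py) is covered by any primitive.
    `primitives` is a list of functions point -> bool ("is this point inside me?")."""
    hits = 0
    for _ in range(samples_per_pixel):
        x = px + random.random()   # jittered sample position inside the pixel
        y = py + random.random()
        if any(inside(x, y) for inside in primitives):
            hits += 1
    return hits / samples_per_pixel

# A thousand primitives all covering the same lower half of the pixel still give
# roughly 50%, instead of saturating to 100% like repeated alpha blending would.
lower_half = [lambda x, y: y - int(y) < 0.5] * 1000
print(pixel_coverage(3, 7, lower_half))
```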
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question of knowing any approach that would work, you can consider a pixel to be a square. You can then approximate any shape that you draw as a polygon that intersects the pixel box. By clipping these polygons against the box of the pixel and against each other, you can get a very good estimate of the areas associated with each color that intersects the pixel for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
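A rough sketch of that for a single polygon and a single pixel: clip the polygon to the pixel square with Sutherland-Hodgman and take the area of the result as the coverage. Clipping the polygons against each other (needed when several colors overlap the pixel) is omitted:

```python
def clip_polygon(poly, inside, intersect):
    """One Sutherland-Hodgman pass: keep the part of `poly` satisfying `inside`."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def clip_to_pixel(poly, px, py):
    """Clip a polygon (list of (x, y) vertices) to the unit pixel square at (px, py)."""
    def x_cut(x0, keep_right):
        inside = (lambda p: p[0] >= x0) if keep_right else (lambda p: p[0] <= x0)
        def isect(a, b):
            t = (x0 - a[0]) / (b[0] - a[0])
            return (x0, a[1] + t * (b[1] - a[1]))
        return inside, isect
    def y_cut(y0, keep_above):
        inside = (lambda p: p[1] >= y0) if keep_above else (lambda p: p[1] <= y0)
        def isect(a, b):
            t = (y0 - a[1]) / (b[1] - a[1])
            return (a[0] + t * (b[0] - a[0]), y0)
        return inside, isect
    for inside, isect in (x_cut(px, True), x_cut(px + 1, False),
                          y_cut(py, True), y_cut(py + 1, False)):
        poly = clip_polygon(poly, inside, isect)
        if not poly:
            return []
    return poly

def area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                         for i, p in enumerate(poly)))

# A triangle covering roughly the lower-left half of pixel (0, 0).
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(area(clip_to_pixel(tri, 0, 0)))  # ~0.5 -> 50% coverage for that color
```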
Crayon Physics Deluxe is a commercial game that came out recently. Watch the video on the main link to get an idea of what I'm talking about.
It allows you to draw shapes and have them react with proper physics. The goal is to move a ball to a star across the screen using contraptions and shapes you build.
While the game is basically a wrapper for the popular Box2D physics engine, it does have one feature whose implementation I'm curious about.
Its drawing looks very much like a crayon. You can see the crayon's texture, and as it draws, the line varies in thickness and darkness just like an actual crayon drawing would.
(Screenshots from kloonigames.com.)
The background texture is freely available here.
What kind of algorithm would be used to render those lines in a way that looks like a crayon? Is it simply a texture applied with random thickness and darkness, or is there something more going on?
I remember reading (a long time ago) a short description of an algorithm to do so:
For the general shape of the line, split the segment in two at a random point and move this point slightly away from its original position (the variation depending on the point's distance to the extremities). Repeat recursively/randomly. This way your lines are not "perfect" straight lines (a sketch follows after this list).
For a given segment you can "overshoot" a little bit by extending one extremity or the other (or both), so the joints are not perfect. If I remember correctly, it was best to extend the original extremities, but you can also do this for the sub-segments if you want to visibly split them.
Draw the lines with a pattern/stamp.
There was also the (already mentioned) possibility of drawing with different starting and ending opacity, to mimic the tendency to release the pen at the end of a stroke.
You can use a different stamp size at the beginning and end of the line (again to mimic releasing the pen at the end of a stroke). For the same effect, you can also draw the line twice, with a small variation of one of the extremities (be careful with the alpha in this case, as the line is drawn twice).
Lastly, for a given line, you can apply the previous modifications several times (i.e. draw the line twice with different variations): humans tend to go over a line again if they make a mistake.
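Not the original algorithm, but a rough Python sketch of the first two points (recursive random subdivision with sideways displacement, plus a small overshoot of the extremities); the stamp and opacity tricks would then be applied along the resulting polyline:

```python
import math
import random

def wobble(p0, p1, depth=4, amount=0.08):
    """Recursively split a segment at a random point and nudge the split point
    sideways; the nudge shrinks with the segment length. Returns a polyline."""
    if depth == 0:
        return [p0, p1]
    t = random.uniform(0.4, 0.6)
    mid = (p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1]))
    length = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    # Offset perpendicular to the segment, proportional to its length.
    nx, ny = -(p1[1] - p0[1]) / length, (p1[0] - p0[0]) / length
    d = random.uniform(-amount, amount) * length
    mid = (mid[0] + d * nx, mid[1] + d * ny)
    return wobble(p0, mid, depth - 1, amount) + wobble(mid, p1, depth - 1, amount)[1:]

def overshoot(p0, p1, extra=0.03):
    """Extend both endpoints slightly past their true positions, so joints aren't perfect."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    a, b = random.uniform(0, extra), random.uniform(0, extra)
    return (p0[0] - a * dx, p0[1] - a * dy), (p1[0] + b * dx, p1[1] + b * dy)

p0, p1 = overshoot((0.0, 0.0), (100.0, 0.0))
polyline = wobble(p0, p1)
print(len(polyline), "points")  # stamp the crayon texture along these points
```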
If you blow the image up you can see a repeating stamp pattern; there's probably a small assortment of stamps that it uses as it moves from A to B, and it might even rotate them.
The wavering of the line can't be all that difficult to do: divide it into a bunch of random segments, pick positions slightly away from the direct route, and draw splines through them.
Here's a paper that uses a lot of math to simulate the deposition of wax on paper using a model of friction. But I think your best bet is to just use a repeating pattern, as another reader mentioned, and vary the opacity according to pressure.
For the imperfect line drawing parts, I have a blog entry describing how to do it using bezier curves.
You can base darkness on speed: measure the distance the cursor travelled between this frame and the last (remember the Pythagorean theorem), and when you draw that line segment on screen, adjust the alpha (opacity) according to the distance you measured.
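A minimal sketch of that, assuming you get one cursor position per frame (the constants are arbitrary):

```python
import math

def stroke_alpha(prev_pos, cur_pos, max_speed=40.0, min_alpha=0.25):
    """Fade the stroke as the cursor moves faster: distance this frame -> opacity."""
    dist = math.hypot(cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1])
    speed = min(dist / max_speed, 1.0)      # 0 = standing still, 1 = very fast
    return 1.0 - speed * (1.0 - min_alpha)  # slow strokes dark, fast strokes light

print(stroke_alpha((0, 0), (3, 4)))    # slow: almost fully opaque
print(stroke_alpha((0, 0), (30, 40)))  # fast: faded toward min_alpha
```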
There is a paper available called Mimicking Hand Drawn Pencil Lines which covers a bit of what you are after. Although it doesn't present a very detailed view of the algorithm, the authors do cover the basics of the steps that they used.
This paper includes high level descriptions of how they generated the lines, as well as how they generated the textures for the lines, and they get results which are similar to what you want.
This article on rendering chart series to look like XKCD comics has an algorithm for perturbing lines which may be relevant. It doesn't cover calculating the texture of a crayon drawn line, but it does offer an approach to making a straight line look imperfect in a human-like way.
I believe the easiest way would simply be to use a texture with random darkness (some gradients, maybe) throughout, and set size randomly.