Antialiased composition by coverage?

Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines would occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
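The arithmetic behind that 75% figure is worth spelling out: with standard "over" compositing, each new stroke blends against coverage that is already there, so coverage saturates instead of tracking the true geometry. A tiny illustration:

```python
# Two strokes that each cover the same half of a pixel, composited with the
# usual "over" rule a = a + c * (1 - a): the result is 0.75, not the
# geometrically correct 0.5, and further strokes push it toward 1.0.
alpha = 0.0
for coverage in (0.5, 0.5):
    alpha = alpha + coverage * (1.0 - alpha)
print(alpha)  # 0.75
```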
I know Anti-Grain Geometry has algorithms for calculating blends that cater for abutting lines, and that oversampling might work, but are there any other approaches?

Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution in your case), which will work with virtually any drawing library, is to supersample: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and higher, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow things down), then scale the result down with bilinear filtering. The main downside is that the offscreen bitmap uses a lot of memory.
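A minimal sketch of that supersampling approach, assuming Pillow purely for illustration (any drawing library with an offscreen bitmap and a downscale filter will do):

```python
from PIL import Image, ImageDraw

SCALE = 4  # supersampling factor

def draw_lines_supersampled(size, lines, color="black"):
    """Draw 1-pixel-wide lines at 4x resolution, then downscale.

    `lines` is a list of ((x0, y0), (x1, y1)) tuples in final-image coords.
    """
    w, h = size
    big = Image.new("RGB", (w * SCALE, h * SCALE), "white")
    d = ImageDraw.Draw(big)
    for (x0, y0), (x1, y1) in lines:
        # No antialiasing here: just width-SCALE aliased lines at 4x size.
        d.line([x0 * SCALE, y0 * SCALE, x1 * SCALE, y1 * SCALE],
               fill=color, width=SCALE)
    # Downscaling with a bilinear filter averages the subpixels, which is
    # what produces the antialiased coverage estimate.
    return big.resize(size, Image.BILINEAR)
```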

If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
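The stochastic-coverage idea is easy to sketch outside of any renderer: estimate each pixel's coverage by testing random sample points against the union of all primitives, so overlap can never inflate coverage past its true value. A toy version (the `point_segment_dist` helper and the distance-based line test are illustrative stand-ins for real primitive sampling):

```python
import random

def pixel_coverage(px, py, lines, samples=16, half_width=0.5):
    """Estimate how much of pixel (px, py) is covered by any line.

    `lines` are ((x0, y0), (x1, y1)) segments; a sample point counts as
    covered if it lies within `half_width` of any segment. Because each
    sample is a yes/no test against the union of all lines, a million
    overlapping lines still yield at most 100% coverage, never a clamp.
    """
    hits = 0
    for _ in range(samples):
        sx, sy = px + random.random(), py + random.random()
        if any(point_segment_dist(sx, sy, a, b) <= half_width
               for a, b in lines):
            hits += 1
    return hits / samples

def point_segment_dist(px, py, a, b):
    # Distance from point (px, py) to segment ab: project onto the segment,
    # clamp the parameter to [0, 1], then measure to the closest point.
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
```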

It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question about approaches that would work: you can consider a pixel to be a square. You can then approximate any shape you draw as a polygon that intersects the pixel box. By clipping these polygons against the pixel's box and against each other, you can get a very good estimate of the area associated with each color that intersects the pixel, for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
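For reference, a sketch of that pixel-box clipping for a single convex polygon, using Sutherland-Hodgman clipping and the shoelace area formula (handling many mutually overlapping shapes additionally needs the polygon-against-polygon clipping described above):

```python
def polygon_pixel_coverage(poly, px, py):
    """Fraction of the unit pixel at (px, py) covered by convex polygon
    `poly` (a list of (x, y) vertices): clip against the four pixel edges
    with Sutherland-Hodgman, then take the area via the shoelace formula."""
    def clip(points, keep, lerp):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]
            if keep(cur):
                if not keep(prev):
                    out.append(lerp(prev, cur))
                out.append(cur)
            elif keep(prev):
                out.append(lerp(prev, cur))
        return out

    def x_cross(a, b, x):  # intersection of segment ab with vertical line x
        t = (x - a[0]) / (b[0] - a[0])
        return (x, a[1] + t * (b[1] - a[1]))

    def y_cross(a, b, y):  # intersection of segment ab with horizontal line y
        t = (y - a[1]) / (b[1] - a[1])
        return (a[0] + t * (b[0] - a[0]), y)

    pts = poly
    for keep, lerp in [
        (lambda p: p[0] >= px,     lambda a, b: x_cross(a, b, px)),
        (lambda p: p[0] <= px + 1, lambda a, b: x_cross(a, b, px + 1)),
        (lambda p: p[1] >= py,     lambda a, b: y_cross(a, b, py)),
        (lambda p: p[1] <= py + 1, lambda a, b: y_cross(a, b, py + 1)),
    ]:
        if not pts:
            return 0.0
        pts = clip(pts, keep, lerp)

    # Shoelace formula: area of the clipped polygon = covered pixel area.
    area = sum(pts[i - 1][0] * pts[i][1] - pts[i][0] * pts[i - 1][1]
               for i in range(len(pts))) / 2.0
    return abs(area)
```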

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU-accelerated rendering of 2D vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia and Direct2D offer both CPU- and GPU-based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to have a policy of minimizing draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible (which needs a new draw call if the previous element you were rendering was a rectangle, since that changes pipeline state).
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer, and it switches draw calls only when you change brush type (if you change from one solid-color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on some complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing antialiasing on a non-MSAA surface (this is done at the mesh level, with some small alpha-blended triangles along the edges). The downside is that it doesn't use many hardware features, and the geometry emitted can be quite large, even for seemingly simple shapes.
Since D2D prefers triangle strips to triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call some draw-text functions several times in a row, it will try to merge them into a single call as well.

In computer graphics, faces of a polygon

In computer graphics, why do we need to distinguish between the back face and the front face of a polygon?
There are several reasons why a triangle's face might be important.
Face Culling
If you draw a cube, you can only ever see at most three of its six sides; the front three sides block your view of the back three. And while depth testing will prevent drawing the fragments corresponding to the back sides... why bother? In order to do depth testing, you have to rasterize those triangles. That's a lot of work for triangles that won't be seen.
Therefore, we have a way to cull triangles based on their facing, before performing rasterization on them. While vertex processing will still be done on those triangles, they will be discarded before doing heavy-weight operations like rasterization.
Through face culling, you can eliminate approximately half of the triangles in a closed mesh. That's a pretty decent performance savings.
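Facing itself is determined from the winding order of the triangle's screen-space vertices; a minimal sketch of the test, assuming the common OpenGL convention of counter-clockwise = front:

```python
def is_front_facing(p0, p1, p2):
    """True if the screen-space triangle (p0, p1, p2) winds counter-clockwise.

    The sign of the 2D cross product of two edge vectors gives the winding;
    a rasterizer discards the triangle before rasterization when this test
    fails (e.g. with GL_CULL_FACE enabled and the default GL_CCW front face).
    """
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p0[0], p2[1] - p0[1]
    return ax * by - ay * bx > 0.0
```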
Two-Sided Rendering
A leaf is a thin object, so you might render it as one flat polygon, without face culling. However, a leaf does not look the same on both sides. The top side is usually quite a bit darker than the bottom side.
You can achieve this effect by sending two colors when rendering the leaf; one meant for the top side and one for the bottom. In your fragment shader, you can detect which side of the polygon that fragment was generated from, by looking at the built-in variable gl_FrontFacing. That boolean can be used to select which color to use.
It could even be used to select which texture to sample from, if you want to do more complex two-sided rendering.
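A minimal sketch of such a fragment shader, kept here as GLSL source in a Python string so it can be compiled with any GL binding (the uniform names are illustrative):

```python
# GLSL for the two-sided leaf described above; compile with any GL binding.
LEAF_FRAGMENT_SHADER = """
#version 330 core
uniform vec4 topColor;     // color for the front (top) side
uniform vec4 bottomColor;  // color for the back (bottom) side
out vec4 fragColor;

void main() {
    // gl_FrontFacing is true for fragments generated from the front face.
    fragColor = gl_FrontFacing ? topColor : bottomColor;
}
"""
```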

In 3D graphics, why is antialiasing not more often achieved using textures?

Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around on mobile devices with CSS3 3D lately and this trick does a fantastic job of obtaining high quality non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension, forming a transparent one-pixel-wide outline outside the border. Due to texture sampling interpolation, so long as the transformation does not put the camera too close to an edge, the effect is not unlike a pre-filtered antialiased rendering approach.
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case: whenever the geometry is not rectangular, it does nothing to reduce aliasing. The result with a transparent 1px outline border is smooth for HTML5 with CSS3 only because those elements are rectangular and therefore rasterize neatly into the pixel grid.
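For concreteness, the padding step itself is only a few lines; a sketch assuming Pillow (the library choice is illustrative):

```python
from PIL import Image

def add_transparent_border(texture):
    """Return `texture` enlarged by one transparent pixel on every side.

    When the GPU samples this texture with bilinear filtering, texels at
    the quad's edge blend toward the transparent border, giving the
    pre-filtered antialiased edge described above.
    """
    w, h = texture.size
    padded = Image.new("RGBA", (w + 2, h + 2), (0, 0, 0, 0))
    padded.paste(texture.convert("RGBA"), (1, 1))
    return padded
```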
The trick you linked to doesn't seem to have anything to do with texture interpolation. The CSS added a border that is drawn as a line; the rasterizer in the browser draws polygons without antialiasing but draws lines with antialiasing.
As for why you wouldn't want to blend into transparency over a one-pixel border: transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or use opaque polygons whose occlusion you check with a depth buffer and multisampling.
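The pre-sorting mentioned is just the painter's algorithm; a minimal sketch (the `depth` attribute is a stand-in for whatever camera-space distance you track):

```python
def draw_transparent(polygons, draw):
    # Painter's algorithm: draw the farthest polygons first, so each "over"
    # blend happens against everything already behind it.
    for poly in sorted(polygons, key=lambda p: p.depth, reverse=True):
        draw(poly)
```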

SVG plot from point-value pairs

I need to write some code (for a web.py webapp with a straight-HTML/JS client) that will generate a visual representation of a set of point-values. Each point has an X and Y coordinate, and the value is an integer. If I can use SVG to do this, then I can scale the image client-side with no extra code. Can I actually do this? I am concerned about a couple of things:
The points don't necessarily have any relation to each other. They aren't necessarily in a grid, nor can we say how many points are nearby, etc.
Gradients are primarily one-directional, and multiple gradients on the same shape seem to be a foreign concept.
Fills require an existing image, at which point, I'd be better off generating the entire image server-side anyway.
Objects always have a layering, even if it isn't specified, which can change how the image is rendered.
If it helps, consider a situation where we have a point surrounded by five others, where one of them is a bit closer than the others (exact distances and sizes can be adjusted). All six of the points have different colors (red, green, blue, cyan, magenta, and yellow, with red in the center and yellow being slightly closer), and the outer five points are arranged roughly in a pentagon. Note that this situation is not the only option, just a theoretically possible situation.
Can I do this with SVG, or should I render an image server-side?
EDIT: The main difficulty isn't in drawing the points; it is in filling the space between the points so that there is no whitespace, and so that color transitions aren't harsh or unpredictable if you know the data.
I don't entirely understand the different issues you are having with wanting to use SVG. I am currently using the setup you are describing to render X-Y scatter plots and Gaussian curves, and I have found that it works great.
Regarding the last point about object layering: you have to be particularly careful when layering objects of different colors with less than 100% opacity. The way the colors "add" depends on the order in which you add the objects to your SVG drawing.
Thankfully, you can use filters to overlay the colors without blending them. Specifically, I am using the feComposite filter element. There is a good example of its usage here:
http://www.w3.org/TR/SVG/filters.html#feCompositeElement
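Independent of the filter question, emitting such an SVG from Python is straightforward; a sketch, with radial-gradient fills shown as one illustrative way to soften the transitions between points (not necessarily the feComposite setup described above):

```python
def points_to_svg(points, width=400, height=400, radius=80):
    """Emit an SVG string with one radial-gradient circle per (x, y, color)
    tuple; each gradient fades to transparent at its rim, so overlapping
    points blend smoothly instead of meeting at hard edges. Later elements
    paint on top, so emission order controls the layering."""
    defs, shapes = [], []
    for i, (x, y, color) in enumerate(points):
        defs.append(
            f'<radialGradient id="g{i}">'
            f'<stop offset="0%" stop-color="{color}" stop-opacity="1"/>'
            f'<stop offset="100%" stop-color="{color}" stop-opacity="0"/>'
            f'</radialGradient>')
        shapes.append(
            f'<circle cx="{x}" cy="{y}" r="{radius}" fill="url(#g{i})"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<defs>{"".join(defs)}</defs>{"".join(shapes)}</svg>')

# e.g. points_to_svg([(200, 200, "red"), (200, 80, "yellow")])
```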

circle drawing algorithm for n-pixel border

I know the Bresenham and related algorithms, and I found a good algorithm to draw a circle with a 1-pixel wide border. Is there any 'standard' algorithm to draw a circle with an n-pixel wide border, without resorting to drawing n circles?
Drawing each pixel along with the n^2 surrounding pixels might be a solution, but it draws many more pixels than needed.
I am writing a graphics library for an embedded system, so I am not looking for a way to do this using an existing library, although a library that does this function and is open source might be a lead.
Compute the points for a single octant for both radii at the same time and simultaneously replicate it eight ways, which is how Bresenham circles are usually drawn anyway. To avoid overdrawing (e.g., for XOR drawing), the second octant should be constrained to draw outside the first octant's x-extents.
Note that this approach breaks down if the line is very thick compared to the radius.
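A rough sketch of that two-radii octant walk (`set_pixel` stands in for whatever plotting primitive your library provides; this naive version still overdraws on the axes and the 45-degree diagonal, which the x-extent constraint mentioned above would eliminate, and `math.isqrt` truncates where a production version would round):

```python
import math

def draw_ring_octant(set_pixel, cx, cy, r_out, r_in):
    """Walk one octant (0 <= x <= y), fill each column between the inner and
    outer circles, and mirror every pixel eight ways."""
    for x in range(r_out + 1):
        y_out = math.isqrt(r_out * r_out - x * x)  # outer circle, this column
        if x > y_out:                              # past the 45-degree line
            break
        # Inner edge of the column: the inner circle if it reaches this
        # column, otherwise the octant boundary y = x.
        y_in = math.isqrt(r_in * r_in - x * x) if x <= r_in else x
        y_in = max(y_in, x)
        for y in range(y_in, y_out + 1):
            for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                           (x, -y), (y, -x), (-x, -y), (-y, -x)):
                set_pixel(cx + sx, cy + sy)
```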
Treat it as a rasterization problem:
Take the bounding box of your annulus.
Consider the image rows falling in the bounding box.
For each row, compute its intersections with the two circles (i.e. solve x^2 + y^2 = r^2, giving x = sqrt(r^2 - y^2) for each radius, with x, y relative to the circle centre).
Fill in the spans between the circles, then repeat for the next row (sketched in code below).
This approach generalizes to all sorts of shapes, can produce sub-pixel coordinates useful for anti-aliasing and scales better with increasing resolution than hacky solutions involving multiple shifted draws.
If the sqrt looks scary for an embedded system, bear in mind there are fast approximate algorithms which would probably be good enough, especially if you're rounding off to the nearest pixel.
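A sketch of this row-by-row approach (`set_pixel` again stands in for the library's plotting primitive; `math.isqrt` truncates, so a production version would want proper rounding for smoother edges):

```python
import math

def fill_annulus(set_pixel, cx, cy, r_out, r_in):
    """Scanline-fill the ring between r_in and r_out, centred at (cx, cy)."""
    for y in range(-r_out, r_out + 1):            # rows of the bounding box
        x_out = math.isqrt(r_out * r_out - y * y)  # outer circle intersection
        if abs(y) <= r_in:
            # Row crosses the hole: two spans, left and right of the inner
            # circle (when x_in is 0 the spans touch; dedupe if overdraw
            # matters, e.g. for XOR drawing).
            x_in = math.isqrt(r_in * r_in - y * y)
            spans = [(-x_out, -x_in), (x_in, x_out)]
        else:
            spans = [(-x_out, x_out)]             # row misses the hole
        for x0, x1 in spans:
            for x in range(x0, x1 + 1):
                set_pixel(cx + x, cy + y)
```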
