circle drawing algorithm for n-pixel border - graphics

I know the Bresenham and related algorithms, and I found a good algorithm to draw a circle with a 1-pixel wide border. Is there any 'standard' algorithm to draw a circle with an n-pixel wide border, without resorting to drawing n circles?
Drawing the pixel and n² surrounding pixels might be a solution, but it draws many more pixels than needed.
I am writing a graphics library for an embedded system, so I am not looking for a way to do this with an existing library, although an open-source library that implements this could be a useful lead.

Compute the points for a single octant for both radii at the same time and simultaneously replicate it eight ways, which is how Bresenham circles are usually drawn anyway. To avoid overdrawing (e.g., for XOR drawing), the second octant should be constrained to draw outside the first octant's x-extents.
Note that this approach breaks down if the line is very thick compared to the radius.
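A minimal sketch of this in C, assuming hypothetical span-fill primitives put_hspan/put_vspan (illustrative names, not a real API) and 1 <= r_in < r_out:

    void put_hspan(int x0, int x1, int y);  /* fill pixels x0..x1 of row y    */
    void put_vspan(int x, int y0, int y1);  /* fill pixels y0..y1 of column x */

    /* Walk one octant of both circles with the integer midpoint algorithm
       and fill the horizontal span between them, replicated eight ways. */
    void draw_ring(int cx, int cy, int r_in, int r_out)
    {
        int xo = r_out, xi = r_in, y = 0;
        int eo = 1 - r_out, ei = 1 - r_in;

        while (xo >= y) {
            int lo = (xi > y) ? xi : y;   /* inner bound, clamped to the diagonal */

            /* horizontal spans: octants 1 and 4 (5 and 8 mirrored in y) */
            put_hspan(cx + lo, cx + xo, cy + y);
            put_hspan(cx - xo, cx - lo, cy + y);
            if (y > 0) {
                put_hspan(cx + lo, cx + xo, cy - y);
                put_hspan(cx - xo, cx - lo, cy - y);
            }

            /* vertical spans: octants 2, 3, 6, 7, started one pixel past
               the diagonal so nothing is drawn twice */
            int v0 = (lo > y) ? lo : y + 1;
            if (v0 <= xo) {
                put_vspan(cx + y, cy + v0, cy + xo);
                put_vspan(cx + y, cy - xo, cy - v0);
                if (y > 0) {
                    put_vspan(cx - y, cy + v0, cy + xo);
                    put_vspan(cx - y, cy - xo, cy - v0);
                }
            }

            /* midpoint step for the outer circle ... */
            y++;
            if (eo < 0) eo += 2 * y + 1;
            else { xo--; eo += 2 * (y - xo) + 1; }

            /* ... and for the inner circle, while its octant lasts */
            if (xi >= y) {
                if (ei < 0) ei += 2 * y + 1;
                else { xi--; ei += 2 * (y - xi) + 1; }
            }
        }
    }

Because the vertical spans are constrained past the diagonal, every ring pixel is touched exactly once, so XOR drawing stays safe.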

Treat it as a rasterization problem:
Take the bounding box of your annulus.
Consider the image rows falling in the bounding box.
For each row, compute its intersections with the two circles (i.e. solve x^2 + y^2 = r^2 for x, giving x = ±sqrt(r^2 - y^2) for each circle, with x, y taken relative to the circle centre).
Fill in the spans. Repeat for next row.
This approach generalizes to all sorts of shapes, can produce sub-pixel coordinates useful for anti-aliasing and scales better with increasing resolution than hacky solutions involving multiple shifted draws.
If the sqrt looks scary for an embedded system, bear in mind there are fast approximate algorithms which would probably be good enough, especially if you're rounding off to the nearest pixel.
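For illustration, a minimal sketch of that loop in C (put_hspan is a placeholder span-fill primitive; floating-point sqrtf is used for clarity, a fixed-point approximation would substitute on an embedded target):

    #include <math.h>

    void put_hspan(int x0, int x1, int y);  /* assumed span-fill primitive */

    /* Row-by-row rasterization of an annulus centred at (cx,cy): intersect
       each row with the outer and inner circles and fill one or two spans. */
    void fill_annulus(int cx, int cy, float r_in, float r_out)
    {
        for (int y = (int)ceilf(-r_out); y <= (int)floorf(r_out); y++) {
            float xo = sqrtf(r_out * r_out - (float)(y * y)); /* outer half-width */
            int row = cy + y;

            if ((float)(y * y) >= r_in * r_in) {
                /* row misses the hole: one solid span */
                put_hspan(cx - (int)(xo + 0.5f), cx + (int)(xo + 0.5f), row);
            } else {
                /* row crosses the hole: a left and a right span */
                float xi = sqrtf(r_in * r_in - (float)(y * y));
                put_hspan(cx - (int)(xo + 0.5f), cx - (int)(xi + 0.5f), row);
                put_hspan(cx + (int)(xi + 0.5f), cx + (int)(xo + 0.5f), row);
            }
        }
    }

The exact edge rounding (round vs. ceil/floor) controls whether border pixels are biased inward or outward; matching it to your 1-pixel circle routine keeps nested circles consistent.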

Related

Circle triangulation

I am currently experimenting with OpenGL, and I'm drawing a lot of circles that I have to break down into triangles (triangulate a circle).
I've been calculating the vertices of the triangles by having an angle that is incremented, and using cos() and sin() to get the x and y values for one vertex.
I searched a bit on the internet for the best and most efficient way of doing this, and even though there isn't much information available, I realized that thin and long triangles (my approach) are not very good. A better approach would be to start with an equilateral triangle and then repeatedly add triangles that cover the largest possible area that is not yet covered.
[image: left - my method; right - new method]
I am wondering if this is the most efficient way of doing this, and if yes, how would that be implemented in actual code.
The website where I found the method: link
Both triangulations have their pros and cons.
The triangle fan has equally sized triangles, which sometimes looks better with textures (and other interpolated attributes), and the code to generate it is a simple for loop using the parametric circle equation, as sketched below.
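For reference, the fan case really is just one loop; a sketch in C (emit_vertex stands in for whatever the vertex-buffer API expects):

    #include <math.h>

    void emit_vertex(float x, float y);   /* placeholder for the vertex API */

    /* Triangle-fan circle: centre vertex first, then n+1 rim vertices
       (the last repeats the first to close the fan). */
    void circle_fan(float cx, float cy, float r, int n)
    {
        emit_vertex(cx, cy);
        for (int i = 0; i <= n; i++) {
            float a = 2.0f * 3.14159265f * (float)i / (float)n;
            emit_vertex(cx + r * cosf(a), cy + r * sinf(a));
        }
    }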
The increasing-detail mesh has fewer triangles and can easily support LOD, which might be faster. However, the number of points is not arbitrary (3, 6, 12, 24, 48, ...). The code is slightly more complicated; you could:
#1: start with an equilateral triangle, remembering the circumference edges:
add triangle (p0,p1,p2) to the mesh and edges (p0,p1),(p1,p2),(p2,p0) to the circumference.
#2: for each edge (p0,p1) of the circumference, compute:
p2 = 0.5*(p0+p1); // mid point
p2 = r*p2/|p2|;   // normalize it to the circle circumference, assuming (0,0) is the center
then add triangle (p0,p1,p2) to the mesh and replace edge (p0,p1) with edges (p0,p2),(p2,p1).
Note that r/|p2| is the same for all edges at the current detail level, so there is no need to compute the expensive |p2| over and over again.
#3: goto #2 until the triangulation is dense enough.
So: two for loops and a few dynamic lists (points, triangles, circumference edges, and some temporaries if not doing this in place). Note that this method does not need goniometric (trigonometric) functions at all (and it can be modified to generate the triangle fan too).
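A compact sketch of the scheme in C, under the stated assumption that (0,0) is the centre; it tracks only the rim points in circular order (consecutive points are the circumference edges) and marks where the mesh triangles would be appended:

    #include <math.h>
    #include <stdio.h>

    #define MAX_PTS 128            /* 3 * 2^5 = 96 rim points at 5 levels */

    typedef struct { float x, y; } vec2;

    void circle_mesh(float r, int levels)
    {
        vec2 pt[MAX_PTS];
        int n = 3;
        if (levels > 5) return;    /* keep within MAX_PTS */

        /* level 0: equilateral triangle inscribed in the circle (the only
           trig used; these three points could just as well be hard-coded) */
        for (int i = 0; i < 3; i++) {
            float a = 2.0f * 3.14159265f * (float)i / 3.0f;
            pt[i] = (vec2){ r * cosf(a), r * sinf(a) };
        }

        for (int l = 0; l < levels; l++) {
            vec2 next[MAX_PTS];
            /* r/|p2| is identical for every edge at this level, so compute
               the scale once from the first edge's midpoint (see note above) */
            vec2 m0 = { 0.5f * (pt[0].x + pt[1].x), 0.5f * (pt[0].y + pt[1].y) };
            float s = r / sqrtf(m0.x * m0.x + m0.y * m0.y);

            for (int i = 0; i < n; i++) {
                vec2 a = pt[i], b = pt[(i + 1) % n];
                vec2 m = { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y) };
                next[2 * i]     = a;
                next[2 * i + 1] = (vec2){ s * m.x, s * m.y };
                /* triangle (a, b, next[2*i+1]) would be appended to the mesh here */
            }
            n *= 2;
            for (int i = 0; i < n; i++) pt[i] = next[i];
        }

        for (int i = 0; i < n; i++)          /* dump the rim for inspection */
            printf("%f %f\n", pt[i].x, pt[i].y);
    }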
Here is similar stuff: sphere triangulation using the same technique.

How to clip a line using circle and triangle clipping windows?

While using the Cohen-Sutherland line clipping algorithm, the clipping window is a rectangle. Is it possible to clip a line against a triangular or circular window using a similar technique?
The idea can be reused for clipping against a triangle, with seven regions instead of nine. You can easily see which pairs of origin/destination regions result in no visibility or full visibility; the remaining cases require deeper analysis.
For circles the region coding is of less use, because two "outside" codes are not enough to decide. But clipping a segment against a circle is simple: write the parametric equation of the segment and find the values of the parameter that give a point inside the circle (this amounts to solving a quadratic equation).
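A sketch of that computation in C (the names are illustrative): substituting p(t) = p0 + t*(p1-p0) into |p - c|^2 = r^2 gives a quadratic in t, and the visible piece is the root interval intersected with [0,1]:

    #include <math.h>
    #include <stdbool.h>

    typedef struct { double x, y; } pt;

    /* Clip segment p0->p1 against the circle of radius r centred at c.
       On success, [*t0, *t1] is the visible parameter range within [0,1]. */
    bool clip_segment_circle(pt p0, pt p1, pt c, double r,
                             double *t0, double *t1)
    {
        double dx = p1.x - p0.x, dy = p1.y - p0.y;
        double fx = p0.x - c.x,  fy = p0.y - c.y;

        double qa = dx * dx + dy * dy;             /* quadratic coefficients */
        double qb = 2.0 * (fx * dx + fy * dy);
        double qc = fx * fx + fy * fy - r * r;

        double disc = qb * qb - 4.0 * qa * qc;
        if (qa == 0.0 || disc < 0.0) return false; /* degenerate or no hit */

        double s = sqrt(disc);
        double lo = (-qb - s) / (2.0 * qa);
        double hi = (-qb + s) / (2.0 * qa);

        /* intersect the inside-the-circle interval with the segment's [0,1] */
        *t0 = lo > 0.0 ? lo : 0.0;
        *t1 = hi < 1.0 ? hi : 1.0;
        return *t0 <= *t1;                         /* false: entirely outside */
    }

The clipped endpoints are then p0 + t0*(p1-p0) and p0 + t1*(p1-p0).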

Pixel overlap with polygon: efficient (scanline-type) algorithm

Problem specification:
I have a rectangular, uniformly spaced image of pixels with vertex coordinates (i,j), (i+1,j), (i,j+1), (i+1,j+1) [i=0,...,m-1; j=0,...,n-1] and a polygon P with vertex coordinates (x_1,y_1), ..., (x_n,y_n). Now I want to efficiently compute the percentage of every pixel that overlaps with P. P can be non-convex, or even self-intersecting.
Essentially, this is a "soft" generalization of the scan-line rasterization algorithms which check efficiently if the pixel centers lie inside / outside the polygon.
I can think of the following approaches:
(1) Upsample the image (e.g. by a factor 10*10), count how many subpixel centers lie inside the polygon, and divide by 100. Problems: time efficiency, memory efficiency, accuracy.
(2) Use the scan-line algorithm on a slightly bigger grid translated by (0.5, 0.5) to find the pixels that lie fully inside / outside, create a list of "borderline" pixels, walk counter-clockwise along the polygon edges and compute the intersection areas with all pixels along the way. Problems: requires subtle coding, easy to introduce bugs.
My question: Has anybody already encountered this problem, and do you know a third, superior approach? And if not, have you made better experiences with (1) or with (2)? I assume that this problem may arise in the context of antialiasing?
Doing the exact geometric analysis might not be too difficult.
Deal with those pixels that are partially covered by the polygon first: you can use a technique from ray-tracing to quickly find all pixels that intersect with the polygon edges. You can then use the Cohen-Sutherland algorithm to efficiently find the points of intersection between the edge and the pixel, and hence you can compute the area of coverage for that pixel.
Note that you can avoid one of the two clipping operations involved in Cohen-Sutherland as adjacent pixels will share a segment intersection point. For instance - if you have two adjacent pixels, A and B that intersect with a segment p->q at points a1, a2, b1 and b2, then a2 and b1 will be the same. Passing the segment a2->q into the routine when clipping against B should avoid repeating work.
You'll have to treat the pixels that contain the polygon vertices specially, but again it shouldn't be too tricky: Cohen-Sutherland will help here as well.
Self-intersecting polygons will also throw up some special cases to handle - pixels that intersect with two or more edges. I can easily imagine that handling these exactly in all cases might get tricky, so I'd be tempted to just do the upsampling approach here.
Once these edge pixels have been identified, you can do the standard scan-line thing to fill in the polygon's interior pixels.
edit: Actually, now that I think more about it, you can totally skip the Cohen-Sutherland step. The algorithm in the linked paper can be easily extended to return the intersection points between the segment and the pixel grid. The segment will leave a given pixel at min( tMaxX, tMaxY ). Keep track of the last exit point to re-use as the entry point for the next pixel.
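A sketch of that traversal in C (in the spirit of the Amanatides & Woo grid walk; the printf stands in for accumulating the clip geometry):

    #include <math.h>
    #include <stdio.h>

    /* March segment (x0,y0)->(x1,y1) across a unit pixel grid, reporting
       the parameter interval [tEnter, tExit] spent in each pixel. The exit
       point of one pixel is reused as the entry point of the next, so each
       boundary crossing is computed exactly once. */
    void walk_segment(double x0, double y0, double x1, double y1)
    {
        double dx = x1 - x0, dy = y1 - y0;
        int ix = (int)floor(x0), iy = (int)floor(y0);
        int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;

        /* t at which the segment crosses the next vertical/horizontal line */
        double tMaxX = dx != 0 ? ((ix + (dx > 0 ? 1 : 0)) - x0) / dx : INFINITY;
        double tMaxY = dy != 0 ? ((iy + (dy > 0 ? 1 : 0)) - y0) / dy : INFINITY;
        double tDeltaX = dx != 0 ? fabs(1.0 / dx) : INFINITY;
        double tDeltaY = dy != 0 ? fabs(1.0 / dy) : INFINITY;

        double tEnter = 0.0;
        for (;;) {
            double tExit = fmin(fmin(tMaxX, tMaxY), 1.0);
            printf("pixel (%d,%d): t in [%g, %g]\n", ix, iy, tEnter, tExit);
            if (tExit >= 1.0) break;              /* reached the endpoint */
            tEnter = tExit;                       /* exit becomes next entry */
            if (tMaxX < tMaxY) { ix += stepX; tMaxX += tDeltaX; }
            else               { iy += stepY; tMaxY += tDeltaY; }
        }
    }

The entry/exit coordinates themselves are just (x0 + t*dx, y0 + t*dy).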
I would do:
(1a) Upsample only where a pixel is partly overlapping: not the whole image, just the current pixel being checked (or all pixels in the current scan line, if that helps).
Then the memory argument disappears.
Speed? Up to 16x16 subsamples per pixel, I don't think speed is an issue.

Smooth transitions between two intersecting polygons (interesting problem)

I have an interesting problem that I've been trying to solve for a while. There is no "right" solution to this, as there are no strict criteria for success. What I want to accomplish is a smooth transition between two simple polygons, from polygon A to polygon B. Polygon A is completely contained within polygon B.
My criteria for this transition are:
The transition is continuous in time and space
The area that is being "filled" from polygon A into polygon B should be filled in as if there was a liquid in A that was pouring out into the shape of B
It is important that this animation can be calculated either on the fly, or be defined by a set of parameters that require little space, say less than a few Kb.
Cheating is perfectly fine, any way to solve this so that it looks good is a possible solution.
Solutions I've considered, and mostly ruled out:
Pairing up vertices in A and B and simply interpolating. This will not look good, and it does not work for concave polygons.
Dividing the area B-A into convex polygons, perhaps via a Voronoi diagram, and calculating discrete states of the polygon by doing a BFS over the smaller convex polygons, then interpolating between the discrete states. Note: if polygon B-A is convex, the transition is fairly trivial. I didn't go with this solution because dividing B-A into equally sized small convex polygons was surprisingly difficult.
Simulation: subdivide polygon A. Move each vertex along the polygon's outward normal in discrete but small steps. For each step, check whether the vertex is still inside B; if not, move it back to its previous position. Repeat until A equals B. I don't like this solution because the point-in-polygon check is slow.
Does anybody have any different ideas?
If you want to keep this simple and somewhat fast, you could go with your last idea, where polygon A is gradually grown to fill polygon B. You don't necessarily have to check whether the outward-moved vertices are still inside polygon B: depending on your code environment and API, you could mask the pixels of the expanding polygon A with the outline of polygon B.
In modern OpenGL, you could do this inside a fragment shader. Render polygon B to a texture, send that texture to the shader, and use it to look up whether the fragment currently being rendered maps to a texel that polygon B has set; if not, discard the fragment. The texture needs to be as large as the screen; if it isn't, you'll need some camera calculations in your shaders so the fragment-to-test is mapped into the texture the same way polygon B was rendered into it.
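As a rough sketch of such a shader (GLSL embedded as a C string; the uniform names and the alpha-test convention are assumptions, not a fixed API):

    /* Fragments of the expanding polygon A are discarded wherever the
       screen-sized mask texture (polygon B, rendered earlier) is empty. */
    const char *mask_fragment_shader =
        "#version 330 core\n"
        "uniform sampler2D uMask;      // screen-sized texture holding polygon B\n"
        "uniform vec2 uScreenSize;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec2 uv = gl_FragCoord.xy / uScreenSize;\n"
        "    if (texture(uMask, uv).a == 0.0)\n"
        "        discard;               // outside polygon B: masked away\n"
        "    fragColor = vec4(0.2, 0.5, 1.0, 1.0);\n"
        "}\n";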

Antialiased composition by coverage?

Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines would occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends that cater for abutting lines, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution for your case), which will work with virtually any drawing library, would be to supersample: draw to an offscreen bitmap at much higher resolution (e.g. 4 times wider and higher, with lines drawn 4 pixels wide; disable antialiasing here, as it will only slow things down), then scale the result down with bilinear filtering. The main downside is that the offscreen bitmap uses a lot of memory.
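The downscale step is a plain box filter; a sketch for an 8-bit single-channel buffer (the 4x factor and buffer layout are assumptions):

    #define FACTOR 4   /* supersampling factor in each axis */

    /* Average each FACTOR x FACTOR block of the supersampled buffer down to
       one output pixel. src is sw x sh, dst is dw x dh, with
       sw == dw * FACTOR and sh == dh * FACTOR. */
    void downscale(const unsigned char *src, int sw, int sh,
                   unsigned char *dst, int dw, int dh)
    {
        (void)sh;  /* implied by dh */
        for (int y = 0; y < dh; y++) {
            for (int x = 0; x < dw; x++) {
                unsigned sum = 0;
                for (int sy = 0; sy < FACTOR; sy++)
                    for (int sx = 0; sx < FACTOR; sx++)
                        sum += src[(y * FACTOR + sy) * sw + (x * FACTOR + sx)];
                dst[y * dw + x] = (unsigned char)(sum / (FACTOR * FACTOR));
            }
        }
    }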
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question about approaches that would work: you can consider a pixel to be a square. You can then approximate any shape you draw as a polygon that intersects the pixel box. By clipping these polygons against the pixel's box and against each other, you can get a very good estimate of the area associated with each color intersecting the pixel, for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
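For what it's worth, the per-pixel computation is compact when done with Sutherland-Hodgman clipping against the pixel square plus the shoelace formula; a sketch in C (self-intersecting input would need extra care):

    #include <math.h>

    typedef struct { double x, y; } pt;

    /* Clip polygon 'in' (n vertices) against the half-plane a*x + b*y <= c
       (one Sutherland-Hodgman pass); returns the new vertex count. */
    static int clip_half(const pt *in, int n, double a, double b, double c,
                         pt *out)
    {
        int m = 0;
        for (int i = 0; i < n; i++) {
            pt p = in[i], q = in[(i + 1) % n];
            double dp = a * p.x + b * p.y - c, dq = a * q.x + b * q.y - c;
            if (dp <= 0) out[m++] = p;                    /* p is inside */
            if ((dp <= 0) != (dq <= 0)) {                 /* edge crosses */
                double t = dp / (dp - dq);
                out[m++] = (pt){ p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
            }
        }
        return m;
    }

    /* Fraction of the unit pixel (px,py)..(px+1,py+1) covered by the polygon:
       clip against the four box edges, then take the shoelace area. */
    double pixel_coverage(const pt *poly, int n, int px, int py)
    {
        pt a[64], b[64];                                       /* ample for small n */
        int m = clip_half(poly, n, -1, 0, -(double)px,      a); /* x >= px   */
        m = clip_half(a, m,      1, 0,  (double)px + 1.0,  b);  /* x <= px+1 */
        m = clip_half(b, m,      0,-1, -(double)py,        a);  /* y >= py   */
        m = clip_half(a, m,      0, 1,  (double)py + 1.0,  b);  /* y <= py+1 */

        double area = 0.0;
        for (int i = 0; i < m; i++)
            area += b[i].x * b[(i + 1) % m].y - b[(i + 1) % m].x * b[i].y;
        return fabs(area) * 0.5;                               /* pixel area is 1 */
    }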
