Midpoint algorithm for diameter with an even number of pixels?

Using the midpoint circle algorithm you can draw symmetrical circles, visiting each pixel only once. By its nature, the algorithm can only draw circles with an odd diameter (2 * r + 1). Is it possible to extend it so that it can also draw circles whose diameter is an even number of pixels?
Some requirements for the algorithm:
Pixels must be drawn only once.
RAM is very expensive.
If the Midpoint circle algorithm indeed cannot be modified to handle this, then the following solution would be fine:
void DrawCircle(int x, int y, int diameter)
{
    if (diameter % 2 == 0)
        EvenWidthCircle(x, y, diameter / 2);
    else
        MidpointCircle(x, y, diameter / 2);
}

You can use the midpoint algorithm for even diameters, but it needs a small modification. For a diameter-8 circle, first draw a diameter-7 circle with the midpoint algorithm, then:
2 | 1
-----
3 | 4
For the octants in quadrant 1, do nothing. For quadrant 2, shift left by 1. For quadrant 3, shift left and down. For quadrant 4, shift down. Then fill in the missing pixels (or blocks, or whatever you're making circles with); a sketch follows below.
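Here is a minimal Python sketch of that quadrant-shift idea, assuming a hypothetical plot(x, y) callback that sets one pixel (none of these names come from the original post):

def even_diameter_circle(cx, cy, diameter, plot):
    # Draw the odd circle of diameter-1 with the midpoint algorithm,
    # shifting quadrants 2-4 outward by one pixel as described above.
    r = (diameter - 1) // 2
    x, y, err = r, 0, 1 - r
    while x >= y:
        # the set removes duplicate offsets at the octant seams, which
        # keeps the "each pixel drawn only once" requirement
        for dx, dy in {(x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)}:
            sx = -1 if dx < 0 else 0   # quadrants 2 and 3: shift left
            sy = 1 if dy < 0 else 0    # quadrants 3 and 4: shift down (screen y grows downward)
            plot(cx + dx + sx, cy - dy + sy)
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    # the shifts open one-pixel gaps where the quadrants meet on the axes;
    # those seam pixels still need to be filled in, as noted above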

A more general idea is to split every pixel into 2x2 subpixels, which doubles the diameter and makes it even.

Related

Pixel space depth offset in vertex shader

I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of the points are in pixels, but the position of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to how the z-buffer is non-linear, but I could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel-space radius into world-space units, since the z-buffer is still in world space even after transforming by the view-projection matrix. This can be done by converting pixels into a factor (factor = pixels / screen_size), then converting the factor into world-space units, which was a little more involved: I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
The reason I was getting different results at different distances was mainly because I was only offsetting the z component of the clip position by the result. It's also necessary to offset the w component to make the depth offset work at any distance (linearly). However, in order to offset the w component, you first have to divide xy by w, modify w as needed, then multiply xy by the new w. That made the math fairly involved, so I changed strategy and offset the vertex before clip space instead, which requires manually calculating the distance to the camera along its Z axis, but it honestly ended up being about the same amount of math either way.
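That pixel-to-world conversion can be sketched like this, assuming a standard perspective projection with a vertical field of view; every name here is a hypothetical stand-in for the globals used in the shaders below:

import math

def pixel_radius_to_world(pixel_radius, screen_height, fov_y, cam_z_dist):
    # pixels -> fraction of the screen
    factor = pixel_radius / screen_height
    # world-space height of the view volume at the vertex's camera-z distance
    view_height = 2.0 * cam_z_dist * math.tan(fov_y / 2.0)
    # fraction of the screen -> world units
    return factor * view_height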
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by the radius will only work with surfaces facing the camera, since we are moving toward the camera rather than away from the surface
    // + because of this, we also multiply by another 4 to compensate for nearby surface angles, but there is no scale that works for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera" and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate the radius of the point in clip space from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute the scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clip depth (part of clip_radius) to retain a constant size, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (part of clip_radius) to convert our input screen scale (e.g. 1->1024) into a clip scale (e.g. 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version, which offsets z & w instead of changing things in world space. After the edits above, this is probably the better solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera" and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance (w), which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces at high angles relative to the camera (since we are moving directly at it)
    // + notice we have to apply the perspective divide before modifying w, then multiply back by the new w afterwards, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate the radius of the point in clip space from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute the scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clip depth (part of clip_radius) to retain a constant size, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (part of clip_radius) to convert our input screen scale (e.g. 1->1024) into a clip scale (e.g. 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}

Is it accurate to conclude the radius of a circle given 4 Bézier curves in SVG?

I have used svg2paths2 and wanted to figure out the position and radius of a circle. I noticed the circle is constructed from 4 CubicBezier segments, as follows:
Path(CubicBezier(start=(127.773+90.5469j), control1=(127.773+85.7656j), control2=(123.898+81.8906j), end=(119.121+81.8906j)),
CubicBezier(start=(119.121+81.8906j), control1=(114.34+81.8906j), control2=(110.465+85.7656j), end=(110.465+90.5469j)),
CubicBezier(start=(110.465+90.5469j), control1=(110.465+95.3281j), control2=(114.34+99.1992j), end=(119.121+99.1992j)),
CubicBezier(start=(119.121+99.1992j), control1=(123.898+99.1992j), control2=(127.773+95.3281j), end=(127.773+90.5469j)))
I have read the standard approach is to divide the circle into four equal sections, and fit each section to a cubic Bézier curve.
So I was wondering, is it accurate to say the radius of the circle is
(q1.start.real - q3.start.real)/2
or
(q2.start.imag - q4.start.imag)/2
And the center of the circle is:
c_x = (q1.start.real + q1.end.real) / 2
c_y = (q1.start.imag + q1.end.imag) / 2
Thank you!
I'm assuming you are using the svg.path library in Python, or that svg2paths2 is related.
from svg.path import Path, Line, Arc, CubicBezier, QuadraticBezier, Close
path = Path(CubicBezier(start=(127.773+90.5469j), control1=(127.773+85.7656j), control2=(123.898+81.8906j), end=(119.121+81.8906j)),
CubicBezier(start=(119.121+81.8906j), control1=(114.34+81.8906j), control2=(110.465+85.7656j), end=(110.465+90.5469j)),
CubicBezier(start=(110.465+90.5469j), control1=(110.465+95.3281j), control2=(114.34+99.1992j), end=(119.121+99.1992j)),
CubicBezier(start=(119.121+99.1992j), control1=(123.898+99.1992j), control2=(127.773+95.3281j), end=(127.773+90.5469j)))
q1 = path[0]
q2 = path[1]
q3 = path[2]
q4 = path[3]
.real is the X coordinate
.imag is the Y coordinate
There's a very slight inaccuracy in the drawing program you are using, and it's not an issue at all unless you want extreme accuracy.
(q1.start.real - q3.start.real) / 2 # 8.6539 is the radius in this case.
(q4.start.imag - q2.start.imag)/2 # 8.6543 is also the radius.
(q1.start.real - q1.end.real) # 8.6539 is again also the radius.
This accesses a single segment, q1 of path, and I prefer it to the two ways above because it reads one property rather than two.
Below shown by green circle in diagram
c_x = (q1.start.real + q1.end.real) / 2 # 123.447 not center x
c_y = (q1.start.imag + q1.end.imag) / 2 # 86.21875 not center y
Below shown by red circle in diagram
c_x = q1.end.real # 119.121 this is center x
c_y = q1.start.imag # 90.5469 this is center y
To show how small the inaccuracy is: the pink circle uses radius 8.6543 and the green one below it 8.6539, a difference perhaps visible only at extreme zoom. But it does illustrate how much (or how little) the decimal places matter.
Consider using numbers under 100 and as few decimal places as possible, especially when working through a new idea. Shorter numbers vastly improve readability and understanding.
I often use just numbers below ten.
Note: you are drawing the circle counter-clockwise. Clockwise is the usual way.
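Putting the pieces above together, a small sketch (continuing from the path defined earlier, with q1..q4 as its four segments; svg.path points are plain Python complex numbers, so abs() gives distance):

cx = (q1.start.real + q3.start.real) / 2   # midpoints of opposite on-curve points
cy = (q2.start.imag + q4.start.imag) / 2
r  = (q1.start.real - q3.start.real) / 2
# sanity check: every on-curve endpoint should sit about r from the center,
# up to the slight drawing-program inaccuracy mentioned above
for q in (q1, q2, q3, q4):
    assert abs(abs(q.start - complex(cx, cy)) - r) < 0.01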

Reconstructing circles from Bézier curves

I am trying to reconstruct original graphics primitives from Postscript/SVG paths. Thus an original circle is rendered (in SVG markup) as:
<path stroke-width="0.5" d="M159.679 141.309
C159.679 141.793 159.286 142.186 158.801 142.186
C158.318 142.186 157.925 141.793 157.925 141.309
C157.925 140.825 158.318 140.432 158.801 140.432
C159.286 140.432 159.679 140.825 159.679 141.309" />
This is an approximation using 4 Bézier curves to create a circle. In other places circular arcs are approximated by linked Bézier curves.
My question is whether there is an algorithm I can use to recognize this construct and reconstruct the "best" circle. I don't mind small errors - they will be second-order at worst.
UPDATE: Note that I don't know a priori that this is a circle or an arc - it could be anything. And there could be 2, 3, 4 or possibly even more points on the curve. So I'd really like a function of the sort:
error = getCircleFromPath(path)
where error will give an early indication of whether this is likely to be a circle.
[I agree that if I know it's a circle it's an easier problem.]
UPDATE: #george goes some way towards answering my problem but I don't think it's the whole story.
After translation to the origin and normalization I appear to have the following four points on the curve:
point [0, 1] with control point at [+-d,1] // horizontal tangent
point [1, 0] with control point at [1,+-d] // vertical tangent
point [0, -1] with control point at [+-d,-1] // horizontal tangent
point [-1, 0] with control point at [-1,+-d] // vertical tangent
This guarantees that the tangent at each point is "parallel" to the path direction at the point. It also guarantees the symmetry (4-fold axis with reflection). But it does not guarantee a circle. For example a large value of d will give a rounded box and a small value a rounded diamond.
My value of d appears to be about 0.57. This might be 1/sqrt(3.) or it might be something else. It is this sort of relationship I am asking for.
#george gives the midpoint of the arc as:
{p1,(p1 + 3 (p2 + p3) + p4)/8,p4}
so in my example (for 1,0 to 0,1) this would be:
[[1,0]+3[1,d]+3[d,1]+[0,1]] / 8
i.e.
[0.5+3d/8, 3d/8+0.5]
and if d =0.57, this gives 0.71, so maybe d is
(sqrt(0.5)-0.5)*8./3.
This holds for a square diamond, but for circular arcs the formula must be more general, and I'd be grateful if anyone has it. For example, I am not familiar with Bézier math, so #george's formula was new to me.
Without doing all the math for you, this may help:
There are always 4 control points on a Bézier.
Your curve is 4 Béziers linked together, with points 1-4, 4-7, 7-10, and 10-13 the control points
for each part. Points 1, 4, 7 and 10 (and 13 == 1) lie exactly on the curve. To see if you have a nice circle, calculate:
center = ( p1+p7 )/2 =( {159.679, 141.309} + {157.925, 141.309} ) / 2
= {158.802, 141.309}
verify you get the same result using points 4+10 -> {158.801, 141.309}
Once you know the center you can sample points along the curve and see if you have a constant distance.
If you only have a single bezier arc with 4 points a useful formula is that the midpoint is at
(p1 + 3 (p2 + p3) + p4)/8. So you can find the circle passing through three points:
{p1,(p1 + 3 (p2 + p3) + p4)/8,p4}
and again sample other points on the curve to decide if you indeed have a near circular arc.
Edit
the Bézier formula is this:
x=(1-t)^3 p1 + 3 (1-t)^2 t p2 + 3 (1-t) t^2 p3 + t^3 p4 with parameter 0 < t < 1
so for example at t=1/4 you have
x=( 27 p1 + 27 p2 + 9 p3 + 1 p4 ) / 64
so once you find the center you can readily check a few points and calculate their distance.
I suspect if you only want to detect nearly exact circular arcs then checking two extra points with a tight tolerance will do the job. If you want to detect things that are approximately circular I would compute a bunch of points and use the average error as a criteria.
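For what it's worth, here is a sketch of the error = getCircleFromPath(path) function the question asks for, built from the sampling idea above (plain Python, points as (x, y) tuples; the function names are my own):

import math

def bezier_point(p1, p2, p3, p4, t):
    # the cubic Bezier formula quoted above
    u = 1.0 - t
    return tuple(u*u*u*a + 3*u*u*t*b + 3*u*t*t*c + t*t*t*d
                 for a, b, c, d in zip(p1, p2, p3, p4))

def get_circle_from_path(curves):
    # curves: list of (p1, p2, p3, p4) control-point tuples forming a closed loop.
    # Returns (center, radius, error); error is the relative spread of the
    # sampled distances and is near zero for a genuine circle.
    samples = [bezier_point(*c, t) for c in curves
               for t in (0.0, 0.25, 0.5, 0.75)]
    cx = sum(p[0] for p in samples) / len(samples)
    cy = sum(p[1] for p in samples) / len(samples)
    dists = [math.hypot(px - cx, py - cy) for px, py in samples]
    radius = sum(dists) / len(dists)
    error = (max(dists) - min(dists)) / radius
    return (cx, cy), radius, error

Note the centroid-of-samples center is only trustworthy for a closed, roughly circular path; for a single arc, use the three-point circle from the midpoint formula instead.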
If all your elements are circle-like then you can just get the dimensions through path.getBBox() and generate a circle from there. In this case I'm considering ellipses, but you can easily translate it to actual circle elements:
var xmlns = "http://www.w3.org/2000/svg"; // SVG namespace required by createElementNS (assumed; not in the original snippet)
var svg = document.querySelector("svg");  // the SVG root we append to (likewise assumed)
var path = document.getElementById("circle_path");
var bbox = path.getBBox();
var rx = bbox.width/2;
var ry = bbox.height/2;
var cx = bbox.x + rx;
var cy = bbox.y + ry;
var ellipse = document.createElementNS(xmlns, "ellipse");
ellipse.setAttribute("fill", "none");
ellipse.setAttribute("stroke", "red");
ellipse.setAttribute("stroke-width", 0.1);
ellipse.setAttribute("cx", cx);
ellipse.setAttribute("cy", cy);
ellipse.setAttribute("rx", rx);
ellipse.setAttribute("ry", ry);
svg.appendChild(ellipse);
You can see a demo here:
http://jsfiddle.net/nwHm6/
The endpoints of the Bézier curves are probably on the circle. If so, it's easy to reconstruct the original circle.
Another possibility is to take the barycenter of the control points as the center of the circle because the control points are probably laid out symmetrically around the center. From the center, you get the radius as the average distance of the four control points closest to the center.
One can define an ellipse as a unit circle centred on (0,0), translated (2 params), scaled (2 params), and rotated (1 param). So on each arc take five points (t=0 ¼ ½ ¾ 1) and solve for these five parameters. Next take the in-between four points (t=⅛ ⅜ ⅝ ⅞), and test whether these lie on the same transformed circle. If yes, whoopee!, this is (part of) a transformed circle.
Immediately before and after might be another arc or arcn. Are these the same ellipse? If yes, and the subtended angles touch, then join together your descriptions of the pieces.
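One way to run that test without explicitly recovering the five transform parameters is to fit a general conic ax² + bxy + cy² + dx + ey + f = 0 (also five degrees of freedom, up to scale) through the five sample points and then evaluate it at the in-between points. A numpy sketch, with all names my own:

import numpy as np

def fit_conic(points):
    # rows of [x^2, xy, y^2, x, y, 1]; the conic is the null space of this matrix
    A = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in points])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]          # coefficients (a, b, c, d, e, f), defined up to scale

def conic_residual(coeffs, points):
    A = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in points])
    return float(np.max(np.abs(A @ coeffs)))   # ~0 if the points lie on the conic

Fit on the t = 0, ¼, ½, ¾, 1 samples, then check the residual of the t = ⅛, ⅜, ⅝, ⅞ samples against a tolerance scaled to your coordinates.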

Finding internal angles of polygon

I have some lines whose intersections describe a polygon, like this:
I know the order of the lines, and their equations.
To find the internal angles, I computed each line's orientation. But I got confused, because subtracting two lines' orientations can give two different angles, even when I follow the order of the polygon's sides.
For example, in the following image, if I just subtract the orientation of the lines, I would get any of the following angles:
What confused me even more: when the polygon is not convex, there will be angles greater than 180°, and with my approach I don't get the correct angle at all:
And I found out that this way of approaching the problem is wrong.
So, what is the best way of finding the internal angles using just the lines? I know that for a convex polygon I could build vectors and find the angle between them, but even for P6 in my example the vector approach fails.
Anyway, I prefer a method that won't include a conditional case for solving that concavity problem.
Thanks.
With ordered lines it is possible to find the points of intersection (the polygon's vertices) in clockwise order. Then you can calculate the internal angles:
Angle[i] = Pi + ArcTan2(V[i] x V[i+1], V[i] * V[i+1])
(the cross product and dot product of the incoming and outgoing edge vectors at every vertex)
or
Angle[i] = Pi + ArcTan2( dx_in*dy_out - dx_out*dy_in, dx_in*dx_out + dy_in*dy_out )
Note: change plus sign after Pi to minus for anti-clockwise direction.
Edit:
Note that the cross product and dot product here are scalars, not vectors.
Example for your data:
dx1 = 5; dy1 = -15; dx2 = -15; dy2 = 5
Angle = Pi + ArcTan2(5*5 - 15*15, -5*15 - 5*15) = Pi - 2.214 radians ≈ 0.93 radians ≈ 53 degrees
Example for vectors:
(0,-1) (1,0) (L-curve)
Angle = Pi + ArcTan2(1, 0) = 270 degrees
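The same computation as a compact Python sketch (vertices ordered clockwise, as above; flip the sign after pi for anti-clockwise input; the names are mine):

import math

def internal_angles(vertices):
    # vertices: list of (x, y) tuples in clockwise order.
    # Returns the internal angle at each vertex in radians;
    # concave vertices correctly come out greater than pi.
    n = len(vertices)
    angles = []
    for i in range(n):
        x0, y0 = vertices[i - 1]          # previous vertex
        x1, y1 = vertices[i]              # this vertex
        x2, y2 = vertices[(i + 1) % n]    # next vertex
        dx_in,  dy_in  = x1 - x0, y1 - y0     # incoming edge vector
        dx_out, dy_out = x2 - x1, y2 - y1     # outgoing edge vector
        cross = dx_in * dy_out - dx_out * dy_in
        dot   = dx_in * dx_out + dy_in * dy_out
        angles.append(math.pi + math.atan2(cross, dot))
    return angles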

How can I translate an image with subpixel accuracy?

I have a system that requires moving an image on the screen. I am currently using a png and just placing it at the desired screen coordinates.
Because of a combination of the screen resolution and the required frame rate, some frames are identical because the image has not yet moved a full pixel. Unfortunately, the resolution of the screen is not negotiable.
I have a general understanding of how sub-pixel rendering works to smooth out edges, but I have been unable to find a resource (if one exists) on how I can use shading to translate an image by less than a single pixel.
Ideally, this would be usable with any image but if it was only possible with a simple shape like a circle or a ring, that would also be acceptable.
Sub-pixel interpolation is relatively simple. Typically you apply what amounts to an all-pass filter with a constant phase shift, where the phase shift corresponds to the required sub-pixel image shift. Depending on the required image quality you might use e.g. a 5 point Lanczos or other windowed sinc function and then apply this in one or both axes depending on whether you want an X shift or a Y shift or both.
E.g. for a 0.5 pixel shift the coefficients might be [ 0.06645, 0.18965, 0.27713, 0.27713, 0.18965 ]. (Note that the coefficients are normalised, i.e. their sum is equal to 1.0.)
To generate a horizontal shift you would convolve these coefficients with the pixels from x - 2 to x + 2, e.g.
const float kCoeffs[5] = { 0.06645f, 0.18965f, 0.27713f, 0.27713f, 0.18965f };

for (int y = 0; y < height; ++y)            // for each row
{
    for (int x = 2; x < width - 2; ++x)     // for each col (apart from a 2 pixel border)
    {
        float p = 0.0f;                     // convolve pixel with Lanczos coeffs
        for (int dx = -2; dx <= 2; ++dx)
            p += in[y][x + dx] * kCoeffs[dx + 2];
        out[y][x] = p;                      // store interpolated pixel
    }
}
Conceptually, the operation is very simple. First you scale up the image (using any method of interpolation, as you like), then you translate the result, and finally you subsample down to the original image size.
The scale factor depends on the precision of the sub-pixel translation you want. If you want to translate by 0.5 pixels, you scale up the original image by a factor of 2 and then translate the result by 1 pixel; if you want to translate by 0.25 pixels, you scale up by a factor of 4, and so on.
Note that this implementation is not efficient because when you scale up you end up calculating pixel values that you won't actually use because they're just dropped when you subsample back to the original image size. The implementation in Paul's answer is more efficient.
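A naive numpy sketch of that scale-translate-subsample pipeline for a horizontal shift (nearest-neighbour upscale and a box-filter subsample for brevity; a production version would interpolate, and the names are mine):

import numpy as np

def subpixel_shift_x(img, shift, scale=4):
    # shift a 2-D grayscale image right by `shift` pixels, where shift is a
    # multiple of 1/scale (e.g. 0.25 with scale=4); edges wrap around here
    up = np.repeat(img, scale, axis=1)                    # upscale columns by `scale`
    up = np.roll(up, int(round(shift * scale)), axis=1)   # whole-subpixel translate
    return up.reshape(img.shape[0], -1, scale).mean(axis=2)  # subsample back down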
