I have a rectangle R of side lengths lx (horizontal) and lz (vertical) and area A = lx*lz. In this rectangle, there are N randomly distributed points. I would like to split A into N sub-areas, where each sub-area is determined by circles of growing radius around each point of the point cloud. As soon as two growing circles intersect, the resulting radical line should mark the local border between the sub-areas of the two points. The borders of rectangle R additionally delimit the sub-areas.
The expected result is sketched in this figure:
[Figure: expected sub-areas in an example with 6 points]
The approximate procedure is artistically illustrated in this short YouTube movie:
https://www.youtube.com/watch?v=BXNvcQTNWXM&ab_channel=stopmotionkim
To find the sub-areas, I am looking for an approximate or exact algorithm that (first priority) is efficient and scales well with N (better than O(N^2), ideally O(N), although I doubt that this is possible) and (second priority), if approximate, is as accurate as possible.
Does anybody know how to do this or has a hint how one could start? Thanks a lot for your help!
What you are looking for is a clipped Voronoi diagram: since all circles grow at the same rate, the radical line between two equal-radius circles is simply the perpendicular bisector of their centres, which is exactly a Voronoi edge. It can be computed in O(n log n). I suggest you look at these papers/courses to get a better idea of the underlying theory and the computational methods:
Efficient Computation of 3D Clipped Voronoi Diagram
Computational Geometry
For a Python implementation, see this post
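As a concrete starting point, here is a minimal sketch using scipy.spatial.Voronoi. It relies on a standard mirroring trick (my choice, not from the references above): reflect every point across the four rectangle edges, so that the cells of the original points are bounded exactly by the rectangle. The function name and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import Voronoi

def clipped_voronoi_cells(points, lx, lz):
    """Voronoi cells of `points`, clipped to the rectangle [0,lx] x [0,lz].

    Mirror trick: reflecting every point across the four rectangle edges
    makes each original point's cell boundary coincide with the rectangle,
    so the cells of interior points are finite and correctly clipped.
    """
    pts = np.asarray(points, dtype=float)
    mirrored = np.vstack([
        pts,
        pts * [-1.0, 1.0],                                 # reflect across x = 0
        pts * [1.0, -1.0],                                 # reflect across z = 0
        np.column_stack([2 * lx - pts[:, 0], pts[:, 1]]),  # reflect across x = lx
        np.column_stack([pts[:, 0], 2 * lz - pts[:, 1]]),  # reflect across z = lz
    ])
    vor = Voronoi(mirrored)
    cells = []
    for i in range(len(pts)):                              # original points only
        region = vor.regions[vor.point_region[i]]
        cells.append(vor.vertices[region])                 # cell polygon vertices
    return cells
```

The Voronoi construction itself is O(n log n); mirroring multiplies the point count by five but does not change the asymptotics.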
I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from the measurement of positions of a road. This road has straight sections as well as curve sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are, as well, transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this:
[Figure: track with straight, transition, and circular sections]
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, this is, the transition parameters as well as the circle's center and radius.
I have written some code using a nonlinear optimization algorithm, where a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure:
[Figure: circle fitted to a selected arc section]
However, I can't find a suitable way to take the transitions into account. After giving it some thought, I came to think that this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where to consider that it "begins" and where it "ends"; moreover, it is even less clear where the entry transition, the circle proper, and the exit transition "begin" and "end".
Is there any work on this subject that I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there is no standard method to fit a clothoid-circle-clothoid sequence to a given set of points.
The basic facts are that two points define a straight line and three points define a unique circle.
The clothoid is far more complex, because you need: the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (angle with the X-axis) at that point.
These are five parameters you can use to find the solution.
Because clothoid coordinates are computed from series expansions of the Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation), followed by a translation and rotation, there is no easy way to fit this spiral to a given set of points.
When I've had to deal with your issue, what I've done is:
Calculate the radius for triplets of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc. (see the sketch after this list).
Observe the sequence of radii. Similar values indicate a circle, steadily increasing/decreasing values indicate a clothoid, and very large values indicate a straight.
For each basic element (line, circle), find the most probable characteristics (angles, vertices, radius) by hand or by some regression method. Often common sense works best.
For a spiral you may start with approximate values taken from the adjacent elements. These could be the initial angle and point, and the initial and final radii. Then you need to iterate, playing with the Fresnel integrals and the change of coordinates, until you find a "good" parameter A. Then repeat with small variations in the other values, those you took from the adjacent elements.
Apply whatever corrections seem sensible. For example, many values (A, radii) tend to be whole numbers, without decimals, simply because that was easier for the designer to type.
If you can build a small applet to do these steps, that's enough. Using typical road-design software helps, but it doesn't spare you the iteration process.
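As an illustration of the first step, here is a small sketch (names and tolerances are my choices, not from the answer) that computes the circumradius of each consecutive triplet via R = abc / (4 * area):

```python
import numpy as np

def triplet_radii(points):
    """Circumradius of each consecutive triplet p1p2p3, p2p3p4, ...

    Runs of similar radii suggest a circular arc, steadily increasing or
    decreasing radii suggest a clothoid, and very large radii a straight.
    """
    pts = np.asarray(points, dtype=float)
    radii = []
    for p1, p2, p3 in zip(pts, pts[1:], pts[2:]):
        a = np.linalg.norm(p2 - p3)                  # side lengths of the
        b = np.linalg.norm(p1 - p3)                  # triangle p1 p2 p3
        c = np.linalg.norm(p1 - p2)
        cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
                 - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        area = abs(cross) / 2.0                      # triangle area
        radii.append(np.inf if area < 1e-12 else a * b * c / (4.0 * area))
    return np.array(radii)
```

Plotting these radii (or their reciprocals, the curvatures) against arc length makes the straight/clothoid/circle segmentation visible.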
If the points are dense compared to the effective radii of curvature, estimate the local curvature by a least-squares fit of a circle to a small window of points, taking into account that the curvature is zero most of the time.
You will obtain a plot with constant values connected by ramps. You can use an estimate of the slope at the inflection points to figure out the transition points.
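One common way to do that local least-squares circle fit is the algebraic (Kåsa) fit; the sketch below is one possible implementation, not code from the answer:

```python
import numpy as np

def fit_circle_kasa(window):
    """Algebraic (Kasa) least-squares circle fit to a window of points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
    returns (center, radius); the local curvature estimate is 1/radius.
    """
    pts = np.asarray(window, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = float(np.sqrt(center @ center - F))
    return center, radius
```

Sliding this window along the track gives the curvature plot: flat plateaus are circles (or straights, where the curvature is near zero) and linear ramps are the clothoid transitions.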
Problem specification:
I have a rectangular, uniformly spaced image of pixels with vertex coordinates (i,j), (i+1,j), (i,j+1), (i+1,j+1) [i = 0,...,m-1; j = 0,...,n-1] and a polygon P with vertex coordinates (x_1,y_1), ..., (x_n,y_n). Now I want to efficiently compute the percentage of every pixel overlapping with P. P can be non-convex, or even self-intersecting.
Essentially, this is a "soft" generalization of the scan-line rasterization algorithms which check efficiently if the pixel centers lie inside / outside the polygon.
I can think of the following approaches:
(1) Upsample the image (e.g. by a factor 10*10), count how many subpixel centers lie inside the polygon, and divide by 100. Problems: time efficiency, memory efficiency, accuracy.
(2) Use the scan-line algorithm on a slightly bigger grid translated by (0.5, 0.5) to compute the pixels that lie fully inside/outside, create a list of "borderline" pixels, walk counter-clockwise along the edges, and compute the intersection areas with all pixels along the way. Problems: requires subtle coding; easy to introduce bugs.
My question: Has anybody already encountered this problem, and do you know a third, superior approach? If not, have you had better experiences with (1) or with (2)? I assume this problem may arise in the context of antialiasing.
Doing the exact geometric analysis might not be too difficult.
Deal with those pixels that are partially covered by the polygon first: you can use a technique from ray-tracing to quickly find all pixels that intersect with the polygon edges. You can then use the Cohen-Sutherland algorithm to efficiently find the points of intersection between the edge and the pixel, and hence you can compute the area of coverage for that pixel.
Note that you can avoid one of the two clipping operations involved in Cohen-Sutherland, as adjacent pixels share a segment intersection point. For instance, if two adjacent pixels A and B intersect a segment p->q at points a1, a2, b1 and b2, then a2 and b1 will be the same. Passing the segment a2->q into the routine when clipping against B avoids repeating work.
You'll have to treat the pixels that contain the polygon vertices specially, but again it shouldn't be too tricky: Cohen-Sutherland will help here as well.
Self-intersecting polygons will also throw up some special cases to handle - pixels that intersect with two or more edges. I can easily imagine that handling these exactly in all cases might get tricky, so I'd be tempted to just do the upsampling approach here.
Once these edge pixels have been identified, you can do the standard scan-line thing to fill in the polygon's interior pixels.
edit: Actually, now that I think more about it, you can totally skip the Cohen-Sutherland step. The algorithm in the linked paper can be easily extended to return the intersection points between the segment and the pixel grid. The segment will leave a given pixel at min( tMaxX, tMaxY ). Keep track of the last exit point to re-use as the entry point for the next pixel.
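For illustration, here is a minimal sketch of that grid walk in the spirit of Amanatides & Woo (all names are mine, and this is my reading of the step above, not code from the paper). It yields, for each pixel crossed by a segment, the parameter interval spent inside that pixel; the exit parameter min(tMaxX, tMaxY) is reused as the next pixel's entry:

```python
import math

def walk_segment(p, q):
    """Walk segment p -> q across a unit pixel grid, yielding
    ((i, j), t_enter, t_exit) per pixel; points along the segment are
    p + t * (q - p), so entry/exit coordinates follow directly."""
    (px, py), (qx, qy) = p, q
    dx, dy = qx - px, qy - py
    i, j = math.floor(px), math.floor(py)
    step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
    # t at which the segment first crosses the next vertical/horizontal grid line
    t_max_x = ((i + (dx > 0)) - px) / dx if dx else math.inf
    t_max_y = ((j + (dy > 0)) - py) / dy if dy else math.inf
    t_dx = abs(1.0 / dx) if dx else math.inf   # t-spacing of vertical lines
    t_dy = abs(1.0 / dy) if dy else math.inf   # t-spacing of horizontal lines
    t = 0.0
    while t < 1.0:
        t_exit = min(t_max_x, t_max_y, 1.0)
        yield (i, j), t, t_exit                # pixel and its parameter interval
        if t_max_x < t_max_y:
            i += step_x
            t_max_x += t_dx
        else:
            j += step_y
            t_max_y += t_dy
        t = t_exit
```

From the entry and exit points inside one pixel you get the chord the edge cuts through it, and the covered area follows from clipping the polygon boundary against that pixel.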
I would do
1a) Upsample when the pixel is partly overlapping,
but not the whole image: only the current pixel being checked, or all pixels in the current scan line if that helps.
Then the memory objection disappears.
Speed? Up to 16x16 subsampling, I don't think speed is an issue.
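A hedged sketch of that per-pixel supersampling (using an even-odd point-in-polygon test, so self-intersecting polygons are handled; names and the sample count are illustrative):

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule crossing test; `poly` is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if (y1 > y) != (y2 > y):                           # edge crosses scan line
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):  # crossing right of x
                inside = not inside
    return inside

def pixel_coverage(i, j, poly, s=16):
    """Approximate fraction of pixel (i, j) covered by poly,
    using an s x s grid of subpixel centres."""
    hits = sum(point_in_polygon(i + (a + 0.5) / s, j + (b + 0.5) / s, poly)
               for a in range(s) for b in range(s))
    return hits / (s * s)
```

At 16x16 the coverage is quantized to steps of 1/256, comparable to 8-bit alpha, which is usually fine for antialiasing.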
I know the Bresenham and related algorithms, and I found a good algorithm to draw a circle with a 1-pixel-wide border. Is there any 'standard' algorithm to draw a circle with an n-pixel-wide border, without resorting to drawing n circles?
Drawing each pixel along with its n² surrounding pixels might be a solution, but it draws many more pixels than needed.
I am writing a graphics library for an embedded system, so I am not looking for a way to do this using an existing library, although a library that does this function and is open source might be a lead.
Compute the points for a single octant for both radii at the same time and simultaneously replicate them eight ways, which is how Bresenham circles are usually drawn anyway. To avoid overdrawing (e.g., for XOR drawing), the second octant should be constrained to draw outside the first octant's x-extents.
Note that this approach breaks down if the line is very thick compared to the radius.
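Here is a rough sketch of the two-radii octant idea (using sqrt for clarity; a real Bresenham-style implementation would replace it with incremental error terms, and XOR drawing would additionally need the overdraw guard described above):

```python
import math

def thick_circle(cx, cy, r_outer, r_inner, set_pixel):
    """Walk one octant, filling the vertical run between the inner and
    outer radius for each column, and mirror the run eight ways.
    Note: the mirrors overdraw on the axes and the 45-degree diagonal."""
    for x in range(int(r_outer / math.sqrt(2)) + 1):
        y_out = round(math.sqrt(r_outer * r_outer - x * x))
        y_in = max(x, round(math.sqrt(max(r_inner * r_inner - x * x, 0))))
        for y in range(y_in, y_out + 1):
            for sx in (1, -1):
                for sy in (1, -1):
                    set_pixel(cx + sx * x, cy + sy * y)   # four quadrant mirrors,
                    set_pixel(cx + sx * y, cy + sy * x)   # both octants
```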
Treat it as a rasterization problem:
Take the bounding box of your annulus.
Consider the image rows falling in the bounding box.
For each row, compute the intersection with the two circles (i.e. solve x^2 + y^2 = r^2, so x = ±sqrt(r^2 - y^2) for each circle, with x, y relative to the circle centre).
Fill in the spans. Repeat for next row.
This approach generalizes to all sorts of shapes, can produce sub-pixel coordinates useful for anti-aliasing and scales better with increasing resolution than hacky solutions involving multiple shifted draws.
If the sqrt looks scary for an embedded system, bear in mind there are fast approximate algorithms which would probably be good enough, especially if you're rounding off to the nearest pixel.
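A sketch of that row-by-row span computation for the annulus (it returns sub-pixel floats, so you can round to pixels or antialias; names are my choices):

```python
import math

def annulus_spans(cx, cy, r_outer, r_inner):
    """Yield (row, spans) for every image row crossing the annulus.
    Each span is a horizontal interval in sub-pixel x coordinates."""
    for y in range(math.ceil(cy - r_outer), math.floor(cy + r_outer) + 1):
        dy = y - cy
        xo = math.sqrt(r_outer * r_outer - dy * dy)       # outer intersection
        xi2 = r_inner * r_inner - dy * dy
        if xi2 <= 0:
            yield y, [(cx - xo, cx + xo)]                 # row misses the hole
        else:
            xi = math.sqrt(xi2)                           # inner intersection
            yield y, [(cx - xo, cx - xi), (cx + xi, cx + xo)]
```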
I am given the coordinates of 1000 triangles on a plane (triangle number (T0001-T1000) and its coordinates (x1,y1), (x2,y2), (x3,y3)). Now, for a given point P(x,y), I need to find a triangle which contains the point P.
One option would be to check all the triangles and find one that contains P. But I am looking for a more efficient solution to this problem.
You are going to have to check every triangle at some point during the execution of your program. That's obvious, right? If you want to maximize the efficiency of this calculation, then you should create some kind of cache data structure. The details of the data structure depend on your application. For example: how often do the triangles change? How often do you need to calculate where a point is?
One way to make the cache would be this: Divide your plane in to a finite grid of boxes. For each box in the grid, store a list of the triangles that might intersect with the box.
Then when you need to find out which triangles your point is inside of, you would first figure out which box it is in (this would be O(1) time because you just look at the coordinates) and then look at the triangles in the triangle list for that box.
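A minimal sketch of that grid cache (the cell size, names, and the bounding-box test used for "might intersect" are my choices):

```python
from collections import defaultdict

def side(p, a, b):
    """Signed area test: which side of the directed edge a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_triangle(p, t):
    """True if p is inside triangle t (all edge tests have the same sign)."""
    d = [side(p, t[0], t[1]), side(p, t[1], t[2]), side(p, t[2], t[0])]
    return not (min(d) < 0 and max(d) > 0)

def build_grid(triangles, cell=1.0):
    """Bucket each triangle into every grid cell its bounding box touches."""
    grid = defaultdict(list)
    for idx, t in enumerate(triangles):
        xs, ys = [v[0] for v in t], [v[1] for v in t]
        for i in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
            for j in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
                grid[(i, j)].append(idx)
    return grid

def find_triangle(p, triangles, grid, cell=1.0):
    """O(1) cell lookup, then exact tests on the few candidates in that cell."""
    for idx in grid.get((int(p[0] // cell), int(p[1] // cell)), []):
        if point_in_triangle(p, triangles[idx]):
            return idx
    return None
```

Building the grid is O(N) for well-shaped triangles, and each query touches only the handful of triangles whose bounding boxes overlap the point's cell.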
There are several different ways you could search through your triangles. I would start by eliminating impossibilities.
Find the lower-left corner of each triangle's bounding box and eliminate any triangle whose corner lies above and/or to the right of your point. Continue the search with the remaining triangles; this should eliminate the vast majority of the original triangles.
Take what you have left and use the polar coordinate system to gather the rest of the needed information, based on the angles between the corners and the point (Java has some tools for this; I do not know about other languages).
Some things to look at would be the convex hull (different, but somewhat helpful), Bernoulli's triangles, and some methods for sorting would probably also be helpful.
What kind of algorithms would generate random "goo balls" like those in World of Goo? I'm using Processing, but any generic algorithm would do.
I guess it boils down to: how do you "randomly" make balls that are kind of round, but not perfectly round, and still look realistic?
Thanks in advance!
The thing that makes objects realistic in World of Goo is not their shape, but the fact that the behavior of the objects is a (more or less) realistic simulation of 2D physics, especially:
bending, stretching, compressing (elastic deformation)
breaking due to stress
and all of the above with proper simulation of dynamics, with no perceivable shortcuts
So, try to make the behavior of your objects realistic and that will make them look (feel) realistic.
Not sure if this is what you're looking for since I can't look at that site from work. :)
A circle is just a special case of an ellipse, where the major and minor axes are equal. A squished ball shape is an ellipse where one of the axes is longer than the other. You can generate different lengths for the axes and rotate the ellipse around to get these kinds of irregular shapes.
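For instance, a quick sketch of that idea (the ranges and names are arbitrary illustration, not from the answer):

```python
import math, random

def squished_ball(n=40):
    """Outline of a random ellipse: unequal axes plus a random rotation."""
    a = random.uniform(0.8, 1.2)        # semi-major axis
    b = random.uniform(0.5, a)          # semi-minor axis (shorter)
    phi = random.uniform(0.0, math.pi)  # orientation of the ellipse
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        x, y = a * math.cos(t), b * math.sin(t)
        pts.append((x * math.cos(phi) - y * math.sin(phi),
                    x * math.sin(phi) + y * math.cos(phi)))
    return pts
```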
Maybe Metaballs (wiki) are something to start from, but I'm not sure.
Otherwise I would suggest a particle approach, in which a ball is composed of many particles that stick together, giving it irregularity (mind that this needs a minimal physics engine to handle the spring bodies that keep all the particles together).
As Unreason said, World of Goo is not so much about shape, but physics simulation.
But an easy way to create ball-like irregular shapes would be to start with n vertices (points) V_1, V_2, ..., V_n on a circle and apply some random deformation to them. There are many ways to do that, ranging from simply moving some single vertices around to complex physical simulations.
Some ideas:
1) Choose a random vertex V_i and a random vector T, apply T as a translation (movement) to V_i, and apply T to all other vertices V_j too, but scaled down depending on the "distance" from V_i (where distance could be the absolute difference between j and i, or the actual geometric distance from V_j to V_i). For the scaling factor you can use any function f with f(0) = 1 that decreases with increasing distance (basically a radial basis function):
for each V_j:
    V_j = V_j + scalingFactor(distance(V_i, V_j)) * T
2) Move V_i as in 1), but now simulate spring-like connections between all neighbouring vertices and iteratively move all vertices based on the forces created by the stretched springs.
3) For more round shapes you can do 1) or 2) on the control points of a B-spline curve.
Beware of self-intersections when you move vertices too much.
Just some rough ideas, not tested...
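To make idea 1) concrete, here is a minimal sketch with an index-distance falloff (all constants, names, and the Gaussian falloff are illustrative choices of mine):

```python
import math, random

def goo_blob(n=24, radius=1.0, bumps=6, strength=0.3):
    """Idea 1): n vertices on a circle, deformed by a few random radial
    pushes whose strength falls off smoothly with ring distance from V_i."""
    verts = [[radius * math.cos(2 * math.pi * k / n),
              radius * math.sin(2 * math.pi * k / n)] for k in range(n)]
    for _ in range(bumps):
        i = random.randrange(n)                     # random vertex V_i
        amp = random.uniform(-strength, strength)   # push out or pull in
        for j, (x, y) in enumerate(verts):
            d = min(abs(j - i), n - abs(j - i))     # ring distance |j - i|
            w = math.exp(-(d / 3.0) ** 2)           # radial basis falloff, w(0) = 1
            r = math.hypot(x, y)
            verts[j] = [x + amp * w * x / r,        # move along the outward
                        y + amp * w * y / r]        # radial direction
    return verts
```

Keep `strength` well below `radius` to avoid the self-intersections mentioned above.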