How to generate irregular ball shapes? - graphics

What kind of algorithm would generate random "goo balls" like those in World of Goo? I'm using Processing, but any generic algorithm would do.
I guess it boils down to this: how do you "randomly" make balls that are kind of round, but not perfectly round, and still look realistic?
Thanks in advance!

The thing that makes objects realistic in World of Goo is not their shape, but the fact that the behavior of objects is a (more or less) realistic simulation of 2D physics, especially
bending, stretching, compressing (elastic deformation)
breaking due to stress
and all of the above with proper simulation of dynamics, with no perceivable shortcuts
So, try to make the behavior of your objects realistic and that will make them look (feel) realistic.

Not sure if this is what you're looking for since I can't look at that site from work. :)
A circle is just a special case of an ellipse, where the major and minor axes are equal. A squished ball shape is an ellipse where one of the axes is longer than the other. You can generate different lengths for the axes and rotate the ellipse around to get these kinds of irregular shapes.
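If it helps, here's a minimal sketch of that idea in Python (the axis ranges and point count are arbitrary choices of mine, not anything from the game):

    import math
    import random

    def random_squished_ball():
        """Random 'squished ball': two unequal axis lengths plus a rotation."""
        a = random.uniform(0.8, 1.2)          # semi-major axis (arbitrary range)
        b = a * random.uniform(0.6, 0.95)     # semi-minor axis, always shorter
        phi = random.uniform(0.0, math.pi)    # orientation of the major axis
        return a, b, phi

    def ellipse_outline(a, b, phi, n=32):
        """n points on the rotated ellipse, ready to draw as a closed polygon."""
        pts = []
        for i in range(n):
            t = 2.0 * math.pi * i / n
            x, y = a * math.cos(t), b * math.sin(t)       # axis-aligned ellipse
            pts.append((x * math.cos(phi) - y * math.sin(phi),
                        x * math.sin(phi) + y * math.cos(phi)))  # rotate by phi
        return pts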

Maybe Metaballs (wiki) are something to start from... but I'm not sure.
Otherwise I would suggest a particle approach, in which a ball is composed of many particles that stick together, giving it an irregular outline (mind that this needs a minimal physics engine to handle the spring forces that keep all the particles together).
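For the metaball route, a rough sketch of the classic field function in Python (the threshold and ball values below are just illustrative):

    def metaball_field(x, y, balls):
        """Classic metaball field: inverse-square contribution of each ball
        (cx, cy, r); the blob is the region where the field exceeds a threshold."""
        return sum(r * r / ((x - cx) ** 2 + (y - cy) ** 2 + 1e-9)
                   for cx, cy, r in balls)

    # Two overlapping balls merge into one irregular blob; sample the field
    # on a grid and keep the cells above the threshold to rasterize it.
    balls = [(0.0, 0.0, 1.0), (1.4, 0.3, 0.7)]
    print(metaball_field(0.7, 0.15, balls) > 1.0)   # True: this point is inside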

As Unreason said, World of Goo is not so much about shape, but physics simulation.
But an easy way to create ball-like irregular shapes could be to start with n vertices (points) V_1, V_2 ... V_n on a circle and apply some random deformation to it. There are many ways to do that, going from simply moving around some single vertices to complex physical simulations.
Some ideas:
1) Choose a random vertex V_i and a random vector T; apply that vector as a translation (movement) to V_i, and apply T to all other vertices V_j too, but scaled down depending on the "distance" from V_i (where distance could be the absolute difference between j and i, or the actual geometric distance from V_j to V_i). For the scaling factor you could use any function f with f(0) = 1 that decreases with increasing distance (basically a radial basis function); see the sketch after this list.
    for each V_j:
        V_j = scalingFactor(distance(V_i, V_j)) * translationVector + V_j
2) You move V_i as in 1), but now you simulate spring-like connections between all neighbouring vertices and iteratively move all vertices based on the forces created by the stretched springs.
3) For more round shapes you can do 1) or 2) on the control points of a B-spline curve.
Beware of self-intersections when you move vertices too much.
Just some rough ideas, not tested...
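To make idea 1) concrete, here's a small sketch in Python; the Gaussian falloff and the strength/sigma values are arbitrary choices of mine, kept small to dodge the self-intersection issue mentioned above:

    import math
    import random

    def deform(vertices, strength=0.2, sigma=2.0):
        """Idea 1: move one random vertex by a random vector T and drag the
        others along, attenuated by a Gaussian radial basis function of the
        index distance on the ring (f(0) = 1, decreasing)."""
        n = len(vertices)
        i = random.randrange(n)
        tx = random.uniform(-strength, strength)
        ty = random.uniform(-strength, strength)
        out = []
        for j, (x, y) in enumerate(vertices):
            d = min(abs(j - i), n - abs(j - i))    # wrap-around index distance
            f = math.exp(-(d / sigma) ** 2)        # scaling factor
            out.append((x + f * tx, y + f * ty))
        return out

    # start from n vertices on a circle and apply a few small deformations
    blob = [(math.cos(2 * math.pi * k / 24), math.sin(2 * math.pi * k / 24))
            for k in range(24)]
    for _ in range(5):
        blob = deform(blob)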

Related

Circle triangulation

I am currently experimenting with OpenGL, and I'm drawing a lot of circles that I have to break down into triangles (triangulate a circle).
I've been calculating the vertices of the triangles by having an angle that is incremented, and using cos() and sin() to get the x and y values for one vertex.
I searched a bit on the internet for the best and most efficient way of doing this, and even though there's not much information available, I realized that thin and long triangles (my approach) are not very good. A better approach would be to start with an equilateral triangle and then repeatedly add triangles that cover the largest possible area that's not yet covered.
left - my method; right - new method
I am wondering if this is the most efficient way of doing this, and if yes, how would that be implemented in actual code.
The website where I found the method: link
Both triangulations have their pros and cons.
The triangle fan has equally sized triangles, which sometimes looks better with textures (and other interpolated stuff), and the code to generate it is a simple for loop using the parametric circle equation.
The increasing-detail mesh has fewer triangles and can easily support LOD, which might be faster. However, the number of points is not arbitrary (3, 6, 12, 24, 48, ...). The code is slightly more complicated; you could:
1. Start with an equilateral triangle, remembering the circumference edges: add triangle (p0,p1,p2) to the mesh and the edges (p0,p1),(p1,p2),(p2,p0) to the circumference.
2. For each edge (p0,p1) of the circumference compute:
    p2 = 0.5*(p0+p1); // mid point
    p2 = r*p2/|p2|;   // normalize it to the circle circumference, assuming (0,0) is the center
then add triangle (p0,p1,p2) to the mesh and replace the (p0,p1) edge with the edges (p0,p2),(p2,p1). Note that r/|p2| is the same for all edges at the current detail level, so there is no need to compute the expensive |p2| over and over again.
3. Goto #2 until the triangulation is dense enough.
So: two for loops and a few dynamic lists (points, triangles, circumference edges, and some temporaries if you're not doing this in place). Also, this method does not need goniometric (trigonometric) functions at all (and it can be modified to generate the triangle fan too).
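A minimal Python sketch of that loop (assuming, like the answer, that the circle is centered at (0,0); list handling kept deliberately simple):

    import math

    def triangulate_circle(r, levels):
        """Increasing-detail triangulation: start from an inscribed equilateral
        triangle and split every circumference edge `levels` times."""
        pts = [(r * math.cos(a), r * math.sin(a))
               for a in (math.pi / 2 + k * 2.0 * math.pi / 3.0 for k in range(3))]
        tris = [(0, 1, 2)]
        edges = [(0, 1), (1, 2), (2, 0)]              # circumference edges

        for _ in range(levels):
            new_edges = []
            scale = None                              # r/|p2| is constant per level
            for i0, i1 in edges:
                (x0, y0), (x1, y1) = pts[i0], pts[i1]
                mx, my = 0.5 * (x0 + x1), 0.5 * (y0 + y1)   # edge midpoint
                if scale is None:                     # compute |p2| only once per level
                    scale = r / math.hypot(mx, my)
                pts.append((mx * scale, my * scale))  # push midpoint out onto the circle
                i2 = len(pts) - 1
                tris.append((i0, i1, i2))
                new_edges += [(i0, i2), (i2, i1)]     # midpoint splits the edge in two
            edges = new_edges
        return pts, tris

    pts, tris = triangulate_circle(1.0, 3)   # 3 -> 6 -> 12 -> 24 boundary points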
Here's similar stuff:
sphere triangulation using similar technique

Fitting a transition + circle + transition curve to a set of measured points

I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from measuring positions along a road. This road has straight sections as well as curved sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are also transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this:
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, that is, the transition parameters as well as each circle's center and radius.
I have written some code using a nonlinear optimization algorithm, where a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure:
However, I can't find a suitable way to take the transition into account. After giving it some thought, I came to think that this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where it "begins" and where it "ends", and, moreover, it is even less clear where the entry transition, the circle proper, and the exit transition "begin" and "end".
Is there any work on this subject that I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there's no method to fit a clothoid1-circle-clothoid2 sequence to a given set of points.
The basic facts are that two points define a straight line, and three points define a unique circle.
The clothoid is far more complex, because you need the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (angle with the X-axis) at that point.
These are five values you may use to find the solution.
Because clothoid coordinates are calculated by expanded Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation) followed by a translation & rotation, there is no easy way to fit this spiral to a set of given points.
When I've had to deal with your issue, what I've done is:
Calculate the radius for triplets of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc. (see the sketch after this list).
Observe the sequence of radii. Similar values mean a circle, increasing/decreasing values mean a clothoid, and very big values mean a straight line.
For each basic element (line, circle) find the most probable characteristics (angles, vertices, radius) by hand or by some regression method. Often, common sense is the best tool.
For a spiral you may start with approximate values taken from the adjacent elements. These may very well be the initial angle and point, plus the initial and final radii. Then you need to iterate, playing with the Fresnel integrals and the change of coordinates, until you find a "good" parameter A. Then repeat with small changes to the other values, the ones you took from the adjacent elements.
Make the adjustments you consider right. For example, many values (A, radius) tend to be integers, without decimals, simply because that was easier for the designer to type.
If you can make a small applet to do these steps, that's enough. Using typical road-design software helps, but doesn't spare you the iteration process.
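To illustrate the first two steps, a small Python sketch of my own, using the standard circumradius formula R = abc / (4 * area):

    import numpy as np

    def circumradius(p1, p2, p3):
        """Radius of the circle through three 2D points; np.inf when collinear
        (i.e. a straight section)."""
        a = np.linalg.norm(p2 - p3)
        b = np.linalg.norm(p1 - p3)
        c = np.linalg.norm(p1 - p2)
        cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
                 - (p2[1] - p1[1]) * (p3[0] - p1[0]))   # 2 * signed triangle area
        if abs(cross) < 1e-12:
            return np.inf
        return a * b * c / (2.0 * abs(cross))

    def radius_profile(points):
        """Circumradius of every triplet of consecutive measured points."""
        pts = np.asarray(points, dtype=float)
        return [circumradius(pts[i], pts[i + 1], pts[i + 2])
                for i in range(len(pts) - 2)]

Plotting the resulting sequence along the track is what reveals the constant (circle), ramping (clothoid), and very large (straight) stretches.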
If the points are dense compared to the effective radii of curvature, estimate the local curvature by least-squares fitting of a circle to a small number of points, taking into account that the curvature is zero most of the time.
You will obtain a plot with constant values and ramps that connect them. You can use an estimate of the slope at the inflection points to figure out where the transitions are.

Divide area into subareas based on circles of point cloud

I have a rectangle R with side lengths lx (horizontal) and lz (vertical) and area A = lx*lz. In this rectangle there are N randomly distributed points. I would like to split A into N sub-areas, where each sub-area is determined by circles of growing radius around each point of the point cloud. As soon as two growing circles intersect, the resulting radical line should mark the local border between the sub-areas of the two points. The borders of rectangle R additionally delimit the sub-areas.
The expected result is sketched in this Figure:
Expected sub-areas in an example with 6 points
The approximate procedure is artistically illustrated in this short YouTube movie:
https://www.youtube.com/watch?v=BXNvcQTNWXM&ab_channel=stopmotionkim
To find the sub-areas, I am looking for an approximate or exact algorithm that (first priority) is efficient and scales well with N (better than O[N^2], ideally O[N], although I doubt that this is possible), and (second priority), if approximate, is as accurate as possible.
Does anybody know how to do this or has a hint how one could start? Thanks a lot for your help!
What you are looking for is a clipped Voronoi diagram. It can be calculated in O(n log n). I suggest you look at these papers/courses to get a better idea of the underlying theory and the computational methods:
Efficient Computation of 3D Clipped Voronoi Diagram
Computational Geometry
For a Python implementation, see this post
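If SciPy is at hand, here is a minimal sketch (mine, not from the references above) that clips by mirroring the points across the four rectangle sides, a common trick for box-clipping a Voronoi diagram:

    import numpy as np
    from scipy.spatial import Voronoi

    def clipped_voronoi(points, lx, lz):
        """Voronoi cells of `points` clipped to the rectangle [0,lx] x [0,lz].
        Mirroring the points across all four sides makes the cells of the
        original points finite, ending exactly on the rectangle boundary."""
        pts = np.asarray(points, dtype=float)
        mirrored = np.vstack([
            pts,
            pts * [-1, 1],                  # reflected across x = 0
            pts * [-1, 1] + [2 * lx, 0],    # reflected across x = lx
            pts * [1, -1],                  # reflected across z = 0
            pts * [1, -1] + [0, 2 * lz],    # reflected across z = lz
        ])
        vor = Voronoi(mirrored)
        cells = []
        for i in range(len(pts)):           # only the original points' cells
            region = vor.regions[vor.point_region[i]]
            cells.append(vor.vertices[region])   # polygon vertices of cell i
        return cells

    cells = clipped_voronoi(np.random.rand(6, 2) * [4.0, 3.0], 4.0, 3.0)

Since this runs one Voronoi computation on 5N points, it stays O(N log N) overall.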

Point Cloud - Principal Axes - Use of Inertia

I have point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position and scaling. Furthermore, each of them is initialized with a unique set of parameters (e.g. height, radius, etc.), so their shapes can be quite different (some cones are tall, others are short and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (which doesn't necessarily have to be along the length of the object).
The only chance I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even python)?
Thanks a lot, Merlin.
PS: This is what I get if I only apply a PCA. Especially for cones this doesn't really work. Only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you've got cones and just need to rotate them all in the same direction?
If so, you can fit a triangle to them and point the peak (found e.g. with the perpendicular bisectors of the sides) along your main axis.
You have an interesting problem. Commonly used shape descriptors (VFH) that are invariant to shape but not pose (which is what you would want, really) would not be invariant to stretching of the shape.
I think to succeed at this you need to be clearer about the invariants that you are trying to maintain when a shape changes. Is it a topological invariant? If so, then here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick to simple PCA since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
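For reference, a bare-bones PCA on a point cloud looks like this (NumPy); note that each eigenvector is only defined up to sign, which is exactly why the cones can come out pointing in opposite directions:

    import numpy as np

    def principal_axes(points):
        """Principal axes of a point cloud: eigenvectors of the covariance
        matrix, sorted by decreasing variance. Each axis is only defined
        up to sign, so shapes may still 'flip' between runs."""
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        eigvals, eigvecs = np.linalg.eigh(cov)    # ascending for symmetric matrices
        order = np.argsort(eigvals)[::-1]         # largest variance first
        return eigvals[order], eigvecs[:, order]  # columns are the axes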

RayTracing: When to Normalize a vector?

I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I seem to have the issue regarding normals down pat, i.e. how you should multiply them by the inverse of the transpose of a transformation matrix.
What I'm confused about is when I should be normalizing my direction vectors?
I'm following a certain book and sometimes it'll explicitly state to Normalize my vector and other cases it doesn't and I find out that I needed to.
A normalized vector points in the same direction but has unit length 1, right? So I'm unclear on when normalization is actually necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that coordinate systems are defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as none of them are parallel. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really; it adds a magnitude computation and a division. Numbers between 0 and 1 are no different from numbers between 0 and x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
A normalized vector points in the same direction but has unit length 1, right? So
I'm unclear on when normalization is actually necessary.
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always strictly necessary, to use a normalized vector v in that representation.
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = v · u
Admittedly, this is a very uninteresting reflection model but it captures the high points of the discussion. A spot on a surface is bright if reflection points towards the light and dim if not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), then the equation evaluates to the cosine of the angle between the two vectors. However, if v is not of unit length, say because you didn't bother to normalize after reflecting the vector in the ray model above, then your lighting model has a problem. Spots on the surface computed with a longer v will be much brighter than spots that keep v at unit length.
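A tiny Python illustration of that failure mode (the vectors are made up): with an un-normalized input, the "cosine" comes out scaled by the vector's length:

    import numpy as np

    def normalize(v):
        """Scale v to unit length."""
        return v / np.linalg.norm(v)

    def lambert(normal, to_light):
        """Simple diffuse term: the dot product equals cos(theta) only when
        both vectors are unit length; clamp to zero for back-facing points."""
        return max(0.0, float(np.dot(normalize(normal), normalize(to_light))))

    n = np.array([0.0, 2.0, 0.0])       # un-normalized surface normal, length 2
    light = np.array([1.0, 1.0, 0.0])   # direction towards the light
    print(lambert(n, light))            # ~0.707; the raw dot product gives 2.0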
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe they are already normalized.
For example, when using an interpolated normal on a triangle. Common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter unless they incidentally all point in the same direction. This means that you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
Another example where a vector might or might not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors - the dot product then gives you the cosine of the angle between vectors, which is really useful. Some equations can still cope but become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it - for example, if you have a plane equation (with a unit normal) you can find the ray-plane intersection without normalizing the ray direction vector; the distance will then be in terms of the ray direction vector's length (see the sketch below). This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you want to compare with a different distance - calculated using a normalized ray direction - the distance values will not compare properly.
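A small sketch of that plane example in Python (the plane is given as a unit normal n and offset d, both values made up here); the returned t is only a true Euclidean distance once the direction is normalized:

    import numpy as np

    def ray_plane_t(origin, direction, plane_n, plane_d):
        """Parameter t where origin + t*direction meets the plane n.x = d
        (plane_n assumed unit length). Works for any direction length, but
        t measures distance only if `direction` is normalized."""
        denom = np.dot(plane_n, direction)
        if abs(denom) < 1e-12:
            return None                  # ray (nearly) parallel to the plane
        return (plane_d - np.dot(plane_n, origin)) / denom

    o = np.array([0.0, 0.0, 0.0])
    v = np.array([0.0, 0.0, 2.0])        # un-normalized direction, length 2
    t = ray_plane_t(o, v, np.array([0.0, 0.0, 1.0]), 4.0)
    print(t, t * np.linalg.norm(v))      # t = 2.0, actual distance = 4.0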
You might think about normalizing a direction vector AFTER doing some work that does not require it - maybe you have an acceleration structure that can be traversed without a normalized vector. But that isn't relevant either, because eventually the ray will hit something and you'll want to do a lighting/shading calculation with it. So you may as well normalize from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
Vectors are used to store two conceptually different elements: points in space and directions:
If you are storing a point in space (for example the position of the camera, the origin of the ray, the vertices of triangles) you don't want to normalize, because you would be modifying the value of the vector, and losing the specific position.
If you are storing a direction (for example the camera up vector, the ray direction, or the object normals) you want to normalize, because in this case you are interested not in the specific value of the point but in the direction it represents, so you don't need the magnitude. Normalization is useful here because it simplifies some operations, such as calculating the cosine of the angle between two vectors, which can be done with a dot product if both are normalized.
