I'm using a polygonal chain to approximate a curve. I want to approximate the average of a function of the curvature over all points that lie on the curve. One function of curvature that I need is, for example, the square of the curvature.
I can get near that by choosing some points on the chain, calculating the curvature at those points, applying the function to it (for example squaring it), and then averaging the calculated values.
I need both accuracy and speed. I appreciate both — fast, but approximate; as well as accurate, but slow solutions. I'm working in Java, but the answer doesn't need to be written in Java — it doesn't even need to contain any code at all.
Polygonal chain with uniform segment length
If the polygonal chain's segments all have equal length, I can just calculate the curvature at the vertices and then average those values. I see two ways to get the curvature at a vertex.
One way is to get the circle that goes through the selected vertex, the vertex before it, and the one after it. The curvature is then 1/radius of the circle.
The other way is to calculate the external angle φ (in radians) between the two segments connected at the selected vertex and then divide its absolute value by the length of a segment.
I am not sure if this method is correct, as I haven't mathematically derived it, but I've noticed through experimentation that it gives similar results to the above method.
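For concreteness, here is a minimal sketch of the two estimates described above, assuming 2D points (the bare double coordinates just stand in for a real point type):

```java
final class Curvature {
    // Method 1: reciprocal of the circumradius of the circle through
    // (a, b, c), computed as 4 * area / (|ab| * |bc| * |ca|).
    static double circumcircleCurvature(double ax, double ay, double bx, double by,
                                        double cx, double cy) {
        double area2 = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax); // doubled signed area
        double ab = Math.hypot(bx - ax, by - ay);
        double bc = Math.hypot(cx - bx, cy - by);
        double ca = Math.hypot(ax - cx, ay - cy);
        return Math.abs(2.0 * area2) / (ab * bc * ca);
    }

    // Method 2: |external angle| / segment length, for uniform segments of length l.
    static double angleCurvature(double ax, double ay, double bx, double by,
                                 double cx, double cy, double l) {
        double ext = Math.atan2(cy - by, cx - bx) - Math.atan2(by - ay, bx - ax);
        while (ext > Math.PI) ext -= 2 * Math.PI;   // wrap into (-pi, pi]
        while (ext <= -Math.PI) ext += 2 * Math.PI;
        return Math.abs(ext) / l;
    }
}
```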
Polygonal chain with non-uniform segment length
Unfortunately, though, there's no guarantee that the segments have uniform length.
If I try using the first of the above methods, vertices connected to longer segments give lower curvatures, even if they are visibly sharper. I tried substituting the previous and next vertices with a point x units before the selected vertex and a point x units after it. I don't know what to set the constant x to in order to get accurate results; all the values I've tried seemed to give inaccurate results.
If I try using the second method, I don't know what length to divide the angle by. If I don't divide by anything at all, I actually get pretty good results for comparing two curves and determining which one is curvier, but I need to be able to determine the actual curvature at a point.
With both of these methods there's also the problem that parts with shorter segments (where points are denser) will affect the average more.
Another possible solution would be to ignore the vertices and instead use an array of evenly spaced points on the chain: treat them as a new polygonal chain (connect the points with straight lines) and calculate curvatures on this new chain, using one of the methods I mentioned under the header titled "Polygonal chain with uniform segment length".
Finding such an array of points is not trivial, though, because I have to choose a segment length up front, and only after producing the points can I see whether the length of the resulting chain is divisible by the chosen segment length.
If you aren't short on space, the last solution you mentioned would be the best, because the "sphere" approximation, as you've perhaps realized, would give awful results in more extreme cases, especially if the curvature is large or changes sign quickly.
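For reference, a minimal sketch of that even-spacing resampling, assuming 2D points stored as double[]{x, y}. Snapping the step so it divides the total length exactly sidesteps the divisibility issue you raised:

```java
import java.util.ArrayList;
import java.util.List;

final class Resample {
    static List<double[]> evenlySpaced(List<double[]> chain, double targetStep) {
        double total = 0;
        for (int i = 1; i < chain.size(); i++) total += dist(chain.get(i - 1), chain.get(i));
        int n = Math.max(1, (int) Math.round(total / targetStep));
        double step = total / n; // snapped so that n * step == total exactly

        List<double[]> out = new ArrayList<>();
        out.add(chain.get(0).clone());
        int i = 1;
        double covered = 0;    // distance already consumed on the current segment
        double need = step;    // distance left until the next sample
        while (i < chain.size()) {
            double[] a = chain.get(i - 1), b = chain.get(i);
            double remaining = dist(a, b) - covered;
            if (remaining >= need) {
                covered += need;
                double t = covered / dist(a, b);
                out.add(new double[] { a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]) });
                need = step;
            } else {           // segment exhausted; move to the next one
                need -= remaining;
                covered = 0;
                i++;
            }
        }
        // Floating-point drift can drop the final sample; append the chain end if so.
        if (out.size() < n + 1) out.add(chain.get(chain.size() - 1).clone());
        return out;
    }

    private static double dist(double[] p, double[] q) {
        return Math.hypot(q[0] - p[0], q[1] - p[1]);
    }
}
```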
There are many ways to do interpolation, the simplest being quadratic and cubic splines. However, if you have more pre-processing time, Lagrange polynomials produce very good results: https://en.wikipedia.org/wiki/Lagrange_polynomial.
Side note on your angle division method: picture the two segments, each of length l, as consecutive chords of a circle of radius r that meet at external angle theta. (From simple geometry, the central angle subtended by each chord is also theta.) A chord of a circle satisfies

l = 2r sin(theta/2) ≈ r * theta    for small theta

So the curvature:

1/r ≈ theta / l
So your approximation is in fact correct for small curvatures.
An alternative is to use a local parabola approximation to estimate the curvature. Basically, to estimate the curvature at point P(i), you take P(i-1), P(i) and P(i+1) and construct a parabola from these 3 points. Then, you compute the curvature at P(i) from the parabola. Remember to use chord-length (or centripetal) parametrization when constructing the parabola.
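A minimal sketch of that estimate, assuming 2D points: interpolate the three points with a quadratic under chord-length parametrization, then evaluate the standard parametric curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) at the middle parameter value.

```java
final class ParabolaCurvature {
    // Curvature at (x1, y1) from the quadratic through the three points;
    // assumes the points are distinct so the chord lengths are nonzero.
    static double at(double x0, double y0, double x1, double y1, double x2, double y2) {
        double t0 = 0;                                   // chord-length parameters
        double t1 = Math.hypot(x1 - x0, y1 - y0);
        double t2 = t1 + Math.hypot(x2 - x1, y2 - y1);

        // Newton divided differences give the quadratic's derivatives directly.
        double dx01 = (x1 - x0) / (t1 - t0), dx12 = (x2 - x1) / (t2 - t1);
        double dy01 = (y1 - y0) / (t1 - t0), dy12 = (y2 - y1) / (t2 - t1);
        double ddx = (dx12 - dx01) / (t2 - t0), ddy = (dy12 - dy01) / (t2 - t0);

        double xp = dx01 + ddx * (t1 - t0);  // x'(t1)
        double yp = dy01 + ddy * (t1 - t0);  // y'(t1)
        double xpp = 2 * ddx, ypp = 2 * ddy; // constant second derivatives

        return Math.abs(xp * ypp - yp * xpp) / Math.pow(xp * xp + yp * yp, 1.5);
    }
}
```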
I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the edges of the triangle intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated for the hole where the mesh was cut.
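The core helpers of that step look roughly like this (a sketch, not my exact code; the plane is given by a unit normal n and a point p on it, vertices as double[3]):

```java
final class PlaneClip {
    // Positive above the plane, negative below.
    static double signedDistance(double[] v, double[] n, double[] p) {
        return (v[0] - p[0]) * n[0] + (v[1] - p[1]) * n[1] + (v[2] - p[2]) * n[2];
    }

    // Intersection of edge (a, b) with the plane; assumes the endpoint
    // distances have opposite signs, so t lands in (0, 1).
    static double[] edgePlaneIntersection(double[] a, double[] b, double[] n, double[] p) {
        double da = signedDistance(a, n, p);
        double db = signedDistance(b, n, p);
        double t = da / (da - db);
        return new double[] {
            a[0] + t * (b[0] - a[0]),
            a[1] + t * (b[1] - a[1]),
            a[2] + t * (b[2] - a[2])
        };
    }
}
```

A triangle whose three signed distances are all non-negative is kept, all non-positive is discarded, and mixed signs trigger the edge splits.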
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of limited width and height underneath the first box and perform the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes: http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the algorithm that slices one triangle against another, and the nightmare of implementing robust CSG lies in numerical precision. Once you involve objects far more complex than a cube, it's easy to run into cases where a slice is made just barely next to a vertex (at which point you face the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or almost coplanar), and so on.
So I suggest initially erring on the side of using very high-precision floating point numbers, possibly even higher than double precision, to focus on getting something working correctly and robustly. You can optimize later (the first pass should be to use an accelerator like an octree/kd-tree/BVH); you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're working on a raytracer rather than modeling software: with raytracers, all you have to do for this kind of arbitrary clipping is pretend that an object used to subtract from another has its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, find subtractions/additions/unions of the voxels, and then translate the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provides.
It's a really tough area to do extremely well and requires perseverance, hence the existence of libraries like this: http://carve-csg.com/about.
If someone is interested, there is now a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.
I'm using Unity, but the solution should be generic.
I will get user input from mouse clicks, which define the vertex list of a closed irregular polygon.
Those vertices will define the outer edges of a flat 3D mesh.
To procedurally generate a mesh in Unity, I have to specify all the vertices and how they are connected to form triangles.
So, for convex polygons it's trivial: I'd just make triangles with vertices 1,2,3 then 1,3,4 and so on, forming something like a peacock's tail.
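Generating those fan indices takes only a few lines (sketched in Java here; translating to C# for Unity's Mesh.triangles array is mechanical):

```java
final class FanTriangulation {
    // Index triples (0,1,2), (0,2,3), ... for a convex polygon with
    // vertexCount vertices, flattened the way a triangle list expects.
    static int[] fan(int vertexCount) {
        int[] tris = new int[(vertexCount - 2) * 3];
        for (int i = 0; i < vertexCount - 2; i++) {
            tris[3 * i] = 0;
            tris[3 * i + 1] = i + 1;
            tris[3 * i + 2] = i + 2;
        }
        return tris;
    }
}
```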
But for concave polygons it's not so simple.
Is there an efficient algorithm to find the internal triangles?
You could make use of a constrained Delaunay triangulation (which is not trivial to implement!). Good library implementations are available in Triangle and CGAL, both running in O(n*log(n)) time.
If the vertex set is small, the ear-clipping algorithm is also a possibility, although it won't necessarily give you a Delaunay triangulation (it will typically produce sub-optimal triangles) and it runs in O(n^2). It is pretty easy to implement yourself, though.
Since the input vertices exist on a flat plane in 3d space, you could obtain a 2d problem by projecting onto the plane, computing the triangulation in 2d and then applying the same mesh topology to your 3d vertex set.
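A sketch of that projection step, assuming you already have the polygon's unit normal n (written in Java for illustration; the same applies in C#):

```java
final class PlanarProjection {
    // Builds an orthonormal basis (u, w) in the plane and maps each 3D
    // vertex to the 2D coordinates (v . u, v . w).
    static double[][] project(double[][] verts, double[] n) {
        double[] seed = Math.abs(n[0]) < 0.9 ? new double[] { 1, 0, 0 }
                                             : new double[] { 0, 1, 0 };
        double[] u = normalize(cross(n, seed));
        double[] w = cross(n, u); // already unit length since n and u are
        double[][] out = new double[verts.length][2];
        for (int i = 0; i < verts.length; i++) {
            out[i][0] = dot(verts[i], u);
            out[i][1] = dot(verts[i], w);
        }
        return out;
    }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }
    static double[] normalize(double[] a) {
        double len = Math.sqrt(dot(a, a));
        return new double[] { a[0] / len, a[1] / len, a[2] / len };
    }
}
```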
I've implemented the ear clipping algorithm as follows:
1. Iterate over the vertices until a convex vertex v is found.
2. Check whether any point on the polygon lies within the triangle (v-1, v, v+1). If any do, you need to partition the polygon along the vertices v and the point farthest away from the line (v-1, v+1), and recursively evaluate both partitions.
3. If the triangle around vertex v contains no other vertices, add the triangle to your output list and remove vertex v. Repeat until done.
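Here is a compact sketch of the basic variant (without the recursive partitioning from step 2); it assumes a simple polygon with counterclockwise winding, with double[]{x, y} vertices standing in for a real point type:

```java
import java.util.ArrayList;
import java.util.List;

final class EarClip {
    // Returns triangles as index triples into poly.
    static List<int[]> triangulate(double[][] poly) {
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < poly.length; i++) idx.add(i);
        List<int[]> tris = new ArrayList<>();
        while (idx.size() > 3) {
            boolean clipped = false;
            for (int i = 0; i < idx.size() && !clipped; i++) {
                int ia = idx.get((i + idx.size() - 1) % idx.size());
                int ib = idx.get(i);
                int ic = idx.get((i + 1) % idx.size());
                if (isEar(poly, idx, ia, ib, ic)) {
                    tris.add(new int[] { ia, ib, ic });
                    idx.remove(i); // O(n) here; a linked list makes this O(1)
                    clipped = true;
                }
            }
            if (!clipped) // degenerate or self-intersecting input
                throw new IllegalStateException("no ear found");
        }
        tris.add(new int[] { idx.get(0), idx.get(1), idx.get(2) });
        return tris;
    }

    // Doubled signed area of (a, b, c); positive for counterclockwise.
    static double area2(double[] a, double[] b, double[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    static boolean isEar(double[][] p, List<Integer> idx, int ia, int ib, int ic) {
        if (area2(p[ia], p[ib], p[ic]) <= 0) return false; // reflex, not an ear
        for (int j : idx) {
            if (j == ia || j == ib || j == ic) continue;
            if (area2(p[ia], p[ib], p[j]) >= 0
                    && area2(p[ib], p[ic], p[j]) >= 0
                    && area2(p[ic], p[ia], p[j]) >= 0) return false;
        }
        return true;
    }
}
```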
Notes:
This is inherently a 2D operation, even when working on 3D faces. To consider the problem in 2D, simply ignore the coordinate in which the face's normal has the largest absolute value. (This is how you "project" the 3D face into 2D coordinates.) For example, if the face has normal (0,1,0), you would ignore the y coordinate and work in the x,z plane.
To determine which vertices are convex, you first need to know the polygon's winding. You can determine this by finding the leftmost (smallest x coordinate) vertex in the polygon (break ties by finding the smallest y). Such a vertex is always convex, so the winding of this vertex gives you the winding of the polygon.
You determine winding and/or convexity with the signed triangle area equation. See: http://softsurfer.com/Archive/algorithm_0101/algorithm_0101.htm. Depending on your polygon's winding, all convex triangles will either have positive area (counterclockwise winding) or negative area (clockwise winding).
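For instance, the winding test from this note might look like the following sketch (vertices as double[]{x, y}):

```java
final class Winding {
    // True if the polygon is wound counterclockwise.
    static boolean isCounterClockwise(double[][] p) {
        int m = 0; // leftmost (ties: lowest y) vertex -- always convex
        for (int i = 1; i < p.length; i++) {
            if (p[i][0] < p[m][0] || (p[i][0] == p[m][0] && p[i][1] < p[m][1])) m = i;
        }
        double[] prev = p[(m + p.length - 1) % p.length];
        double[] next = p[(m + 1) % p.length];
        // Sign of the doubled signed area of (prev, p[m], next).
        double cross = (p[m][0] - prev[0]) * (next[1] - p[m][1])
                     - (p[m][1] - prev[1]) * (next[0] - p[m][0]);
        return cross > 0;
    }
}
```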
The point-in-triangle formula is constructed from the signed-triangle-area formula. See: How to determine if a point is in a 2D triangle?.
In step 2, where you need to determine which vertex v is farthest away from the line, you can do so by forming the triangles (L0, v, L1) and checking which one has the largest area (absolute value, unless you're assuming a specific winding direction).
This algorithm is not well defined for self-intersecting polygons, and due to the nature of floating point precision, you will likely encounter such a case. Some safeguards can be implemented for stability:
- A point should not be considered to be inside your triangle unless it is a concave point. (Such a case indicates self-intersection, and you should not partition your set along this vertex.)
- You may encounter a situation where a partition is entirely concave (i.e. it's wound differently to the original polygon's winding). This partition should be discarded.
Because the algorithm is cyclic and involves partitioning the sets, it is highly efficient to use a doubly linked list structure with an array for storage. You can then partition the sets in O(1); however, the algorithm still has an average O(n^2) runtime. The best case is actually an input that needs to be partitioned many times, as this rapidly reduces the number of comparisons.
There is a community script for triangulating concave polygons but I've not personally used it. The author claims it works on 3D points as well as 2D.
One hack I've used in the past, if I want to constrain the problem to 2D, is to use principal component analysis to find the two axes of greatest change in my 3D data and make these my "X" and "Y".
I have a 3D volume given by a binary space partition tree. Usually these are made from polygon models, with the split polygons already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node has nothing but its cut plane (given by a normal and origin distance, for example). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualisation I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half spaces of the leaves to large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by the plane of every node it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface, you need to intersect the planes: each vertex of a polygon is generated by an intersection of three planes, and each edge by an intersection of two planes. But making this efficient and numerically stable is no trivial task, so I propose using qhalf, which is part of qhull. Documentation of the input and output of qhalf can be found here. Of course, you can use qhull (and the functionality from qhalf) as a library.
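For illustration, here is the vertex-recovery step sketched naively with Cramer's rule: each vertex solves the system n1.x = d1, n2.x = d2, n3.x = d3 for three planes. This is exactly the part that gets numerically fragile, which is why qhalf is worth using instead.

```java
final class PlaneIntersection {
    // Returns the common point of three planes, or null if they are
    // (nearly) parallel; det is the scalar triple product n1 . (n2 x n3).
    static double[] vertex(double[] n1, double d1, double[] n2, double d2,
                           double[] n3, double d3) {
        double[] c23 = cross(n2, n3), c31 = cross(n3, n1), c12 = cross(n1, n2);
        double det = n1[0] * c23[0] + n1[1] * c23[1] + n1[2] * c23[2];
        if (Math.abs(det) < 1e-12) return null;
        return new double[] {
            (d1 * c23[0] + d2 * c31[0] + d3 * c12[0]) / det,
            (d1 * c23[1] + d2 * c31[1] + d3 * c12[1]) / det,
            (d1 * c23[2] + d2 * c31[2] + d3 * c12[2]) / det
        };
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }
}
```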
I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I seem to have a good handle on the issue regarding normals and why you should multiply them by the inverse of the transpose of a transformation matrix.
What I'm confused about is when I should normalize my direction vectors.
I'm following a certain book, and sometimes it explicitly states to normalize a vector; in other cases it doesn't, and I find out later that I needed to.
A normalized vector has the same direction, just with unit length 1, right? So I'm unclear on when it is necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit vector, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that coordinate systems are defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as they are linearly independent. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really; it adds a magnitude computation and a division. Numbers between 0..1 are no different than numbers between 0..x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
A normalized vector has the same direction, just with unit length 1, right? So I'm unclear on when it is necessary.
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always strictly necessary, to use a normalized vector v in that representation.
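As a sketch of that convention (the names are illustrative, not from any particular book): normalize once when the ray is built, and t then measures true world-space distance along it.

```java
final class Ray {
    final double[] origin, dir; // dir is normalized by the constructor

    Ray(double[] origin, double[] dir) {
        double len = Math.sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        this.origin = origin;
        this.dir = new double[] { dir[0] / len, dir[1] / len, dir[2] / len };
    }

    // p(t) = p_0 + v * t; with |v| = 1, t is in world-space units.
    double[] pointAt(double t) {
        return new double[] {
            origin[0] + t * dir[0],
            origin[1] + t * dir[1],
            origin[2] + t * dir[2]
        };
    }
}
```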
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = v · u
Admittedly, this is a very uninteresting reflection model, but it captures the high points of the discussion: a spot on a surface is bright if the reflection points towards the light and dim if it does not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), then the equation evaluates to cos(theta) and depends only on the angle between the two vectors. However, if v is not of unit length, say because you didn't bother to normalize after reflecting the vector in the ray model above, now your lighting model has a problem: spots on the surface computed with a larger v will be much brighter than spots that are not.
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe that they are already unit length.
For example, when using an interpolated normal on a triangle. Common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter unless they incidentally all point in the same direction. This means that you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
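A tiny demonstration of the effect (the two normals here are arbitrary examples):

```java
final class InterpolatedNormalDemo {
    public static void main(String[] args) {
        double[] n1 = { 1, 0, 0 }, n2 = { 0, 1, 0 }; // two unit normals
        // Linear interpolation halfway between them.
        double[] mid = { 0.5 * (n1[0] + n2[0]), 0.5 * (n1[1] + n2[1]), 0.5 * (n1[2] + n2[2]) };
        double len = Math.sqrt(mid[0]*mid[0] + mid[1]*mid[1] + mid[2]*mid[2]);
        System.out.println(len); // ~0.7071, not 1 -- renormalize before shading
    }
}
```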
Another example where a vector may or may not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors; the dot product then gives you the cosine of the angle between vectors, which is really useful. Some equations can still cope but become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it; for example, if you have a plane equation (with a unit normal), you can find the ray-plane intersection without normalizing the ray direction vector. The distance will then be in terms of the ray direction vector's length. This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you want to compare with a different distance, calculated using a normalized ray direction, the distance values will not compare properly.
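A sketch of that ray-plane case, for a plane n . x = d with unit normal n (illustrative names):

```java
final class RayPlane {
    // Returns t with hit point = o + t * v, or NaN if the ray is parallel
    // to the plane. t is a true distance only when |v| == 1; otherwise it
    // is measured in multiples of |v|.
    static double intersect(double[] o, double[] v, double[] n, double d) {
        double denom = n[0]*v[0] + n[1]*v[1] + n[2]*v[2];
        if (Math.abs(denom) < 1e-12) return Double.NaN;
        return (d - (n[0]*o[0] + n[1]*o[1] + n[2]*o[2])) / denom;
    }
}
```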
You might think about normalizing a direction vector AFTER doing some work that does not require it; maybe you have an acceleration structure that can be traversed without a normalized vector. But that doesn't help much either, because eventually the ray will hit something and you're going to want to do a lighting/shading calculation with it. So you may as well normalize from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
Vectors are used to store two conceptually different things: points in space and directions.
If you are storing a point in space (for example the position of the camera, the origin of the ray, the vertices of triangles) you don't want to normalize, because you would be modifying the value of the vector, and losing the specific position.
If you are storing a direction (for example the camera up vector, the ray direction, or the object normals) you want to normalize, because in this case you are interested not in a specific position but in the direction it represents, so you don't need the magnitude. Normalization is useful here because it simplifies some operations, such as calculating the cosine of the angle between two vectors, which reduces to a single dot product when both are normalized.