A rotation vector represents a rotation by directly storing the axis of rotation and the angle magnitude.
Quaternions seem to be used much more to represent rotations. Why are quaternions preferred over rotation vectors in computer graphics?
Quaternions are much easier to compute with, for the computer of course (as a human you shouldn't bother with 3D rotations anyway):
What do you do when you want to concatenate two rotations held in vector representation? You have to convert them to quaternion or matrix form (using costly trigonometry) to do that, and maybe convert back again, whereas quaternions can be concatenated directly using ordinary quaternion multiplication.
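To make that concrete, here is a minimal sketch of quaternion concatenation in plain numpy (the helper name and the (w, x, y, z) storage order are just illustrative assumptions, not any particular library's API):

    import numpy as np

    def quat_multiply(q1, q2):
        # Hamilton product q1 * q2; quaternions stored as (w, x, y, z).
        # The product represents "apply the rotation of q2, then the rotation of q1".
        w1, x1, y1, z1 = q1
        w2, x2, y2, z2 = q2
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

Note that there is not a single trigonometric call in there: composing two rotations is just a handful of multiplications and additions.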
What do you do when you want to rotate a point/vector using a rotation in vector format, or send it to GL/D3D as a matrix? You convert it into a matrix (again using costly trigonometry). A quaternion, on the other hand, is converted into a matrix quite efficiently, since it already encodes the needed sines and cosines.
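And a sketch of that quaternion-to-matrix conversion, for a unit quaternion stored as (w, x, y, z); again, no sin/cos calls are needed because they are already baked into the components:

    import numpy as np

    def quat_to_matrix(q):
        # Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix.
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])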
So matrices and quaternions are much more appropriate rotational representations. Of those two, quaternions are more compact, and they are also quite easy to convert into an axis-angle representation and back again, though that conversion does use trigonometry. So if you need axis-angle information at the peripherals (it's only us humans who sometimes need an actual rotation axis and angle; the computer doesn't really care), you can still use it, but for internal representation and computation quaternions or matrices are a much better choice.
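For completeness, the axis-angle conversions mentioned above might look like this (small illustrative helpers; the trigonometry only appears here, at the "peripherals"):

    import numpy as np

    def axis_angle_to_quat(axis, angle):
        # Rotation axis + angle in radians -> unit quaternion (w, x, y, z).
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        half = 0.5 * angle
        return np.concatenate(([np.cos(half)], np.sin(half) * axis))

    def quat_to_axis_angle(q):
        # Unit quaternion (w, x, y, z) -> (unit axis, angle in radians).
        w, v = q[0], np.asarray(q[1:], dtype=float)
        angle = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
        s = np.linalg.norm(v)
        axis = v / s if s > 1e-12 else np.array([1.0, 0.0, 0.0])
        return axis, angle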
If quaternions seem a bit heavy at first with their "3-dimensional complex number" explanation, don't bother with their exact mathematical underpinnings. Just start to understand how they work and how to use them. Pragmatically they are just a kind of axis-angle representation, but with implicitly encoded sines and cosines, which are needed for efficient transformation and computation.
For a good explanation of potential reasons why quaternions are used and sometimes preferred over vectors, see this very interesting article. In this lengthy but insightful thread you will find opposing opinions on the usefulness of quaternions.
TL;DR - the author's view is that we don't really need quaternions, but because of their intricate and complex nature they seem to be very appealing to programmers. All operations expressed using quaternions can be expressed using vectors. This opinion is quite controversial, though.
Related
Say I had a point cloud with n points in 3D space (relatively densely packed together). What is the most efficient way to create a surface that contains every single point and lets me calculate values such as the normal and curvature at some point on the surface that was created? I also need to be able to create this surface as fast as possible (a few milliseconds, hopefully, working with Python), and it can be assumed that n < 1000.
There is no "most efficient and effective" way (this is true of any problem in any domain).
In the first place, the surface you have in mind is not mathematically defined uniquely.
A possible approach is by means of the so-called Alpha-shapes, implemented either from a Delaunay tetrahedrization, or by the ball-pivoting method. For other methods, lookup "mesh reconstruction" or "surface reconstruction".
On the other hand, normals and curvature can be computed locally, from the configuration of neighboring points, without reconstructing a surface (though there is an ambiguity in the orientation of the normals).
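As a rough sketch of that local approach (assuming numpy and scipy are acceptable; the function name is just illustrative): fit a covariance/PCA to the k nearest neighbors of each point, take the eigenvector of the smallest eigenvalue as the normal, and use the eigenvalue ratio as a crude "surface variation" curvature estimate. For n < 1000 this should run in the millisecond range.

    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals_and_curvature(points, k=15):
        # points: (n, 3) array. Returns per-point normals (orientation ambiguous)
        # and a curvature proxy lambda_min / (lambda_0 + lambda_1 + lambda_2).
        k = min(k, len(points))
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        normals = np.empty_like(points)
        curvature = np.empty(len(points))
        for i, neighbors in enumerate(idx):
            cov = np.cov(points[neighbors].T)        # 3x3 covariance of the neighborhood
            eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
            normals[i] = eigvec[:, 0]                # direction of least variance ~ normal
            curvature[i] = eigval[0] / eigval.sum()
        return normals, curvature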
I could suggest Nina Amenta's Power Crust algorithm (link to code), or the MeshLab suite, which can compute the curvatures too.
I was wondering if there was a direct way of computing the iteration matrix for the nth Linear Block Gauss-Seidel iteration within OpenMDAO?
thank you
If I understand you correctly, you are referring to the matrix-form of the Gauss Seidel algorithm where you take Ax=b, and break A up into the Diagonal (D), Lower (L) and Upper (U) parts, then use those parts to compute the next iterate.
Specifically, you compute [D-L]^-1. This, I believe, is what you are referring to as the "iteration matrix" (I am not familiar with that terminology, but based on the algorithm I'm comfortable making an educated guess).
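For reference, here is what that splitting looks like as a plain numpy sketch (this is the textbook matrix form, not anything OpenMDAO does internally):

    import numpy as np

    def gauss_seidel_iteration_matrix(A):
        # Write A = D - L - U with D the diagonal, L the (negated) strictly lower part
        # and U the (negated) strictly upper part. The Gauss-Seidel update is
        #   x_{k+1} = (D - L)^-1 (U x_k + b),
        # so the "iteration matrix" is M = (D - L)^-1 U.
        D = np.diag(np.diag(A))
        L = -np.tril(A, k=-1)
        U = -np.triu(A, k=1)
        return np.linalg.solve(D - L, U)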
This formulation of the algorithm is useful to think about and a simple way to implement it, but OpenMDAO takes a different approach. The LBGS algorithm implemented in OpenMDAO is set up to work in a matrix-free manner. That means it only interacts with the linear operator methods solve_linear and apply_linear and never explicitly assembles the A matrix at all. Hence there isn't an opportunity to split A up into D, L, U.
Depending on the way you constructed the model, the A matrix you would need might or might not be there at all because OpenMDAO is capable of working in a completely matrix free context. However, if all of your components use the compute_partials or linearize methods to provide partial derivatives then the data you would need for the A matrix does exist in memory.
You'll have to dig for it a bit, and ironically the best place to see how to do that is in the direct solver which does actually require the matrix be formed to compute a factorization.
Also, in that code you'll see a function that can iteratively call the linear operator to construct a dense matrix even if the underlying components don't provide their partials directly. Please note that this approach for assembling the matrix is extremely slow and is not recommended for normal operations.
I am sketching out a new simulation that will involve thousands of ships moving around on Earth's oceans and interacting over long periods of time. So, lots of "intersection detection" for sensor and communications ranges, as well as region detection for various environmental conditions. We'll assume a spherical earth, not WGS84. This is an event-step simulation that spits out metrics, not a real time game or anything like that.
One question is whether to use Cartesian coordinates (Earth-Centered, Earth-Fixed) or geodetic/polar coordinates. With polar coordinates a ship's track would be internally represented as a series of lat/lon waypoints with times and great-circle paths between them. With a Cartesian representation the waypoints would be connected with polyline renderings of the great circle between them.
The reason this is a question is I suspect that by sticking to a Cartesian data model it becomes possible to use various geometry libraries that are performance tuned, and even offer up SIMD/GPU performance advantages. The polar coordinates would probably be the more natural way to proceed if writing everything from scratch. But I suspect that by keeping things Cartesian I will have greater access to better and faster libraries. Is this an invalid line of thought? Another consideration is that I know polar coordinate calculations tend to get really screwy when near the poles.
Just curious if somebody with experience could save me a whole lot of time prototyping some scenarios both ways.
It often works well to represent directions as unit vectors instead of angles. Rotating a vector by an angle becomes a 2x2 or 3x3 matmul (efficient with SIMD, but still more expensive than an FP add of two numbers in radians), but you very rarely need sin/cos.
You may occasionally want atan2 to get an angle, but usually not inside tight loops.
Intersection-detection can be very efficient (with SIMD) for XYZ coordinates given another XYZ + range. I'm not sure how efficiently you could check which lat/lon pairs were within range of a given point, not a problem I've looked at.
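As an illustration of that point, a range query in ECEF coordinates is just a vectorized squared-distance test (a minimal numpy sketch; positions in meters, and this is straight-line/chord range rather than along-surface distance):

    import numpy as np

    def ships_in_range(positions, sensor_pos, sensor_range):
        # positions: (n, 3) ECEF coordinates, sensor_pos: (3,), sensor_range: scalar.
        # Compare squared distances to avoid a sqrt per ship; numpy vectorizes this,
        # and SIMD-friendly layouts make it even faster in lower-level languages.
        d = positions - sensor_pos
        return np.einsum('ij,ij->i', d, d) <= sensor_range * sensor_range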
IDK what kind of stuff you'd find in existing libraries, or what you'd want to do with it.
I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
It basically inspects all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle lies completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the edges of the triangle intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated for the hole at the place where the mesh was cut.
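For reference, a minimal sketch of that per-triangle classification step (one triangle against an infinite plane, cap generation omitted; the function name is just illustrative):

    import numpy as np

    def clip_triangle_to_plane(tri, plane_point, plane_normal):
        # tri: (3, 3) array of vertices. Returns the polygon kept on the positive
        # side of the plane: the whole triangle, nothing, or a 3/4-vertex polygon.
        d = (tri - plane_point) @ plane_normal        # signed distance of each vertex
        kept = []
        for i in range(3):
            j = (i + 1) % 3
            if d[i] >= 0.0:
                kept.append(tri[i])                   # vertex is on or above the plane
            if d[i] * d[j] < 0.0:                     # edge strictly crosses the plane
                t = d[i] / (d[i] - d[j])              # parameter of the crossing point
                kept.append(tri[i] + t * (tri[j] - tri[i]))
        return np.array(kept)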
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm to slice one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. It's easy when you involve objects far more complex than a cube to run into cases where a slice is made just barely next to a vertex (at which point you have the tough decision of merging the new split vertex or not prior to carrying out more splits), where polygons are coplanar (or almost), etc.
So I suggest initially erring on the side of using very high-precision floating point numbers, possibly even higher than double precision to focus on getting something working correctly and robustly. You can optimize later (first pass should be to use an accelerator like an octree/kd-tree/bvh), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're working on a raytracer rather than modeling software, for example. With raytracers, all you have to do to get this kind of arbitrary clipping is pretend that an object used to subtract from another has its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life so much easier if you can afford it is to voxelize your object, find subtractions/additions/unions of voxels, and then translate the voxels back into a mesh. This is so much easier to make robust, but harder to do efficiently and the voxel->polygon conversion can get quite involved if you want better results than what marching cubes provide.
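To illustrate why the voxel route is appealing: once both objects are voxelized onto a common grid, the boolean operations themselves become trivial array operations (the voxelization and the voxel-to-mesh step, which are the real work, are not shown):

    import numpy as np

    # Occupancy grids of the two objects on the same voxel grid (True = solid).
    a = np.zeros((64, 64, 64), dtype=bool)
    b = np.zeros((64, 64, 64), dtype=bool)
    a[10:40, 10:40, 10:40] = True      # one box
    b[25:55, 25:55, 25:55] = True      # an overlapping box

    union        = a | b
    intersection = a & b
    difference   = a & ~b              # "subtract b from a"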
It's a really tough area to do extremely well and requires perseverance, and thus the reason for the existence of things like this: http://carve-csg.com/about.
If someone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as a bounding volume. A usage example can be found here.
I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I seem to have the issue regarding normals down pat, and how you should multiply them by the inverse of the transpose of a transformation matrix.
What I'm confused about is when I should be normalizing my direction vectors?
I'm following a certain book, and sometimes it will explicitly state to normalize my vector, while in other cases it doesn't and I find out later that I needed to.
A normalized vector points in the same direction but has unit length 1, right? So I'm unclear on when it is necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that a coordinate system is defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as they are linearly independent. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really -- It adds a magnitude computation and a division. Numbers between 0..1 are no different than numbers between 0..x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
"A normalized vector points in the same direction but has unit length 1, right? So I'm unclear on when it is necessary."
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always obviously necessary, to use a normalized vector v in that representation.
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = v · u
Admittedly, this is a very uninteresting reflection model but it captures the high points of the discussion. A spot on a surface is bright if reflection points towards the light and dim if not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), then the equation evaluates to exactly the cosine of the angle between the two vectors. However, if v is not of unit length, say because you didn't bother to normalize after reflecting the vector in the ray model above, now your lighting model has a problem: spots on the surface shaded with a longer v will come out much brighter than spots that are not.
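A tiny numeric illustration of that failure mode (toy values, with the shading term reduced to a bare dot product):

    import numpy as np

    u = np.array([0.0, 1.0, 0.0])       # unit vector towards the light
    v_unit = np.array([0.0, 1.0, 0.0])  # properly normalized direction
    v_long = 3.0 * v_unit               # same direction, forgot to normalize

    print(np.dot(v_unit, u))            # 1.0 -> correct brightness
    print(np.dot(v_long, u))            # 3.0 -> spuriously three times brighter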
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe that they are already unit length.
For example, when using an interpolated normal on a triangle. Common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter unless they incidentally all point in the same direction. Which means that you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
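A quick way to convince yourself of this (hypothetical numbers): halfway between two unit normals that are 90 degrees apart, the linearly interpolated vector has a length of only about 0.707, so an unnormalized dot product darkens the shading by roughly 30%.

    import numpy as np

    n0 = np.array([1.0, 0.0, 0.0])                 # unit normal at vertex 0
    n1 = np.array([0.0, 1.0, 0.0])                 # unit normal at vertex 1

    n_mid = 0.5 * n0 + 0.5 * n1                    # linearly interpolated normal
    print(np.linalg.norm(n_mid))                   # ~0.7071, not 1.0

    n_mid = n_mid / np.linalg.norm(n_mid)          # renormalize before lighting
    print(np.linalg.norm(n_mid))                   # 1.0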
Another example where a vector might or might not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors - the dot product then gives you the cosine of the angle between the vectors, which is really useful. Some equations can still cope, but become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it - for example, if you have a plane equation (with a unit normal) you can find the ray-plane intersection without normalizing the ray direction vector. The distance will be in terms of the ray direction vector's length. This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you want to compare with a different distance - one calculated using a normalized ray direction - the distance values will not compare properly.
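A sketch of that ray-plane case (illustrative helper, not from any particular book): the intersection point comes out the same either way, but the returned t is only a true distance if the direction has unit length.

    import numpy as np

    def ray_plane_t(origin, direction, plane_point, plane_normal):
        # Returns t such that origin + t * direction lies on the plane, or None
        # if the ray is parallel to it. If `direction` is unit length, t is the
        # distance along the ray; otherwise t is measured in multiples of |direction|.
        denom = np.dot(direction, plane_normal)
        if abs(denom) < 1e-12:
            return None
        return np.dot(plane_point - origin, plane_normal) / denom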
You might think about normalizing a direction vector AFTER doing some work that does not require it - maybe you have an acceleration structure that can be traversed without a normalized vector. But that isn't relevant either because eventually the ray will hit something and you're going to want to do a lighting/shading calculation with it. So you may as well normalize them from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
Vectors are used to store two conceptually different things, points in space and directions:
If you are storing a point in space (for example the position of the camera, the origin of the ray, the vertices of triangles) you don't want to normalize, because you would be modifying the value of the vector, and losing the specific position.
If you are storing a direction (for example the camera up vector, the ray direction, or an object normal) you want to normalize, because in this case you are not interested in the specific value of a point but in the direction it represents, so you don't need the magnitude. Normalization is useful in this case because it simplifies some operations, such as calculating the cosine of the angle between two vectors, something that can be done with a dot product if both are normalized (see the small sketch below).
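A minimal illustration of that convention (hypothetical names, plain numpy): positions are stored as they are, while direction-like vectors are normalized at the moment the ray is built.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    camera_pos  = np.array([0.0, 1.0, 5.0])   # a point: keep its magnitude
    look_target = np.array([0.0, 0.0, 0.0])   # another point

    ray_origin    = camera_pos                             # a position, not normalized
    ray_direction = normalize(look_target - camera_pos)    # a direction, normalized

    # Because ray_direction is unit length, a dot product against another unit
    # vector directly gives the cosine of the angle between them.
    cos_angle = np.dot(ray_direction, np.array([0.0, 0.0, -1.0]))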