Consider a coordinate system (i, j, k) with its origin at (0, 0, 0).
Now I create another coordinate system inside the original space, defined by three vectors (t, u, v) expressed in the original space.
What is the rotation (yaw, pitch, roll) of this new basis relative to the original space?
Thanks
If the origin does not change in the new basis, then it's a linear transformation; otherwise it's an affine transformation, which combines rotation and translation. In both cases you can determine the transformation matrix, from which the rotation (yaw, pitch, roll) can easily be extracted. See here for more information. Also, just a friendly reminder: math.stackexchange.com is a better place to ask this kind of question.
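As a minimal sketch of that extraction (assuming t, u, v are orthonormal unit vectors given in the original frame, and a Z-Y-X yaw-pitch-roll convention; other conventions change the formulas):

```python
import numpy as np

def yaw_pitch_roll(t, u, v):
    """Angles of the basis (t, u, v) relative to the original frame.

    Assumes t, u, v are orthonormal unit vectors expressed in the
    original (i, j, k) space, and a Z-Y-X (yaw-pitch-roll) Euler
    convention, i.e. R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    """
    R = np.column_stack([t, u, v])  # new basis vectors as columns
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Example: a pure 90-degree yaw about k.
t = np.array([0.0, 1.0, 0.0])
u = np.array([-1.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0])
print(np.degrees(yaw_pitch_roll(t, u, v)))  # -> [90.  0.  0.]
```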
Say I have a point cloud with n points in 3D space (relatively densely packed together). What is the most efficient way to create a surface that contains every single point in it and lets me calculate values such as the normal and curvature at some point on that surface? I also need to be able to create this surface as fast as possible (a few milliseconds, hopefully, working with Python), and it can be assumed that n < 1000.
There is no "most efficient and effective" way (this is true of any problem in any domain).
In the first place, the surface you have in mind is not uniquely defined mathematically.
A possible approach is by means of so-called alpha shapes, implemented either from a Delaunay tetrahedrization or by the ball-pivoting method. For other methods, look up "mesh reconstruction" or "surface reconstruction".
On the other hand, normals and curvature can be computed locally, from neighbor configurations, without reconstructing a surface (though there is an ambiguity in the orientation of the normals).
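For example, here is a rough sketch of that local approach, estimating a normal per point from the covariance of its k nearest neighbors (assuming NumPy and SciPy; the sign ambiguity mentioned above remains):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate one normal per point from its k nearest neighbors.

    The normal is the eigenvector of the local scatter matrix with
    the smallest eigenvalue; its sign is ambiguous, as noted above.
    """
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)  # ascending eigenvalues
        normals[i] = vecs[:, 0]
    return normals

pts = np.random.rand(500, 3)
pts[:, 2] = 0.0  # a flat cloud: normals should come out near (0, 0, ±1)
print(estimate_normals(pts)[:3])
```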
I could suggest Nina Amenta's Power Crust algorithm (link to code), or also the MeshLab suite, which can compute the curvatures too.
The offset for CGContext.setShadow has to be specified in base-space:
offset - Specifies a translation in base-space.
(from https://developer.apple.com/documentation/coregraphics/cgcontext/1455205-setshadow)
What is this "base-space"?
Semi-related docs have this explanation:
The drawing (user) coordinate system. This coordinate system is used when you issue drawing commands.
The view coordinate system (base space). This coordinate system is a fixed coordinate system relative to the view.
The (physical) device coordinate system. This coordinate system represents pixels on the physical screen.
(from: https://developer.apple.com/library/archive/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html)
This makes sense. However, how do I get the transform of this base-space? There is CGContext.userSpaceToDeviceSpaceTransform, but it seems to be the transform from user->physical. How do I get from user->base, or base->physical?
I believe that the base-space is equivalent to the user-space with an identity transform matrix. In Apple's documentation, Figure 1-1, they show the base-space with its origin in the upper-left and an identity transform (the indicated pixel at (3, 5) is 3 to the right and 5 down, as expected).
Thus, the shadow offset is in untransformed units. This is probably convenient for you, as you likely want the shadow offset to stay the same regardless of the scale factor. (If you scale a piece of vector clip art in a PowerPoint presentation, you want the shadow to be offset the same no matter how big you expand the clip art.)
I am looking for an algorithm that, given two meshes, can clip one using the other.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the edges of the triangle intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated to close the hole where the mesh was cut.
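For reference, the per-triangle classification step might look like this minimal sketch (the function name and array layout are my own, not from the linked article):

```python
import numpy as np

def classify_triangle(tri, plane_point, plane_normal):
    """Classify one (3, 3) triangle against an infinite plane.

    Returns the triangle untouched if it lies above, None if below,
    or the edge/plane intersection points if it straddles the plane.
    A real implementation would rebuild triangles and the cap from
    these crossings, and handle vertices lying exactly on the plane.
    """
    d = (tri - plane_point) @ plane_normal  # signed vertex distances
    if np.all(d >= 0):
        return "above", tri
    if np.all(d <= 0):
        return "below", None
    crossings = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        if d[a] * d[b] < 0:  # this edge straddles the plane
            t = d[a] / (d[a] - d[b])  # where the distance hits zero
            crossings.append(tri[a] + t * (tri[b] - tri[a]))
    return "split", np.array(crossings)

tri = np.array([[0.0, 0.0, -1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(classify_triangle(tri, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```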
The problem is that the algorithm assumes the plane is unbounded, so whatever lies in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with two boxes on it. The boxes are adjacent (but not touching or stacked). The user defines a cutting plane of limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with one box on it, and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions build on this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm that slices one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. With objects even slightly more complex than a cube, it's easy to run into cases where a slice is made just barely next to a vertex (at which point you have the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or almost), and so on.
So I suggest initially erring on the side of very high-precision floating-point numbers, possibly even higher than double precision, to focus on getting something working correctly and robustly. You can optimize later (the first pass should be to use an accelerator like an octree/kd-tree/BVH), but you'll avoid many headaches this way in your first iteration.
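To illustrate why the precision matters, here is a small sketch using Python's exact rational arithmetic for a side-of-plane predicate; the cancellation example below misclassifies under naive double-precision evaluation:

```python
from fractions import Fraction

def side_of_plane(p, q, n):
    """Exact sign of dot(p - q, n) using rational arithmetic.

    Fraction converts each float to its exact binary value, so the
    subtractions, products, and sum below never round.
    """
    d = sum((Fraction(pi) - Fraction(qi)) * Fraction(ni)
            for pi, qi, ni in zip(p, q, n))
    return (d > 0) - (d < 0)  # 1 above, -1 below, 0 exactly on the plane

# Exact dot product is 1e16 + 1 - 1e16 = 1 (strictly above the plane),
# but naive left-to-right double evaluation rounds 1e16 + 1 to 1e16
# and returns 0, silently misclassifying the vertex.
print(side_of_plane((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (1e16, 1.0, -1e16)))
```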
This is vastly simpler to implement at render time if you're working on a raytracer rather than modeling software, for example. With raytracers, all you have to do for this kind of arbitrary clipping is pretend that an object used to subtract from another has its polygons flipped in the culling process. It's easy to solve robustly at the ray level, but quite a bit harder at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute subtractions/additions/unions of the voxels, and then translate the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel->polygon conversion can get quite involved if you want better results than what marching cubes provides.
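As a toy sketch of the voxel route (the grids below are my own construction; the final step back to a mesh could use something like skimage.measure.marching_cubes on the result):

```python
import numpy as np

# Voxelize two solids onto the same grid as boolean occupancy arrays:
# a box and a ball, on a 64^3 lattice.
grid = np.indices((64, 64, 64)).transpose(1, 2, 3, 0)
box = np.all((grid >= 8) & (grid < 40), axis=-1)
ball = np.linalg.norm(grid - 32, axis=-1) < 16

# CSG operations become elementwise boolean logic.
union = box | ball
intersection = box & ball
subtraction = box & ~ball  # box minus ball

print(union.sum(), intersection.sum(), subtraction.sum())
```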
It's a really tough area to do extremely well and requires perseverance, and thus the reason for the existence of things like this: http://carve-csg.com/about.
If someone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangle mesh using another mesh as a bounding volume. A usage example can be found here.
I have point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position, and scaling. Furthermore, each is initialized with a unique set of parameters (e.g. height, radius), so their shapes can be quite different (some cones are tall, others are short and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (which doesn't necessarily lie along the length of the object).
The only chance I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even Python)?
Thanks a lot, Merlin.
PS: This is what I get if I only apply PCA. Especially for cones this doesn't really work. Only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you have cones and just need to rotate them all in the same direction?
If so, you can fit a triangle to them and point the peak (found, e.g., with the perpendicular bisectors of the sides) along your main axis.
You have an interesting problem. Commonly used shape descriptors (e.g. VFH) that are invariant to shape but not pose (which is what you would want, really) would not be invariant to stretching of the shape.
I think to succeed at this you need to be clearer about the invariants that you are trying to maintain when a shape changes. Is it a topological invariant? If so, then here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick with simple PCA, since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
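For reference, plain PCA on a point cloud amounts to something like this minimal sketch (eigenvectors of the covariance matrix, largest variance first; the cone ambiguity discussed above remains):

```python
import numpy as np

def principal_axes(points):
    """Principal axes of an (n, 3) point cloud via PCA.

    Returns the eigenvectors of the covariance matrix as columns,
    sorted by decreasing variance. Axis signs stay arbitrary, which
    is exactly the orientation ambiguity discussed above.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return vecs[:, ::-1], vals[::-1]  # largest variance first

# Example: an elongated noisy cloud along x.
pts = np.random.randn(1000, 3) * np.array([5.0, 1.0, 0.5])
axes, variances = principal_axes(pts)
print(axes[:, 0])  # roughly (±1, 0, 0)
```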
I'd like to write a program that lets me arbitrarily distort a textured polygon by dragging its vertices. I want the texture to distort fluidly and without overlap, assuming the new polygon doesn't intersect itself. I should also be able to repeat the process with the new shape, and with a minimum amount of loss.
Are there any algorithms for doing this?
It sounds like you might want a variation on the Schwarz-Christoffel mapping. This is a type of conformal mapping that can be used to warp a polygon to and from a simpler region, like a disk; although I have not implemented it, apparently it is computationally tractable.
For your application, you would set up a map from the original polygon to the simpler region, and compute the inverse map to the modified polygon; combining the two should give you a nice conformal mapping from the original to the modified polygon.
Conformal mappings are nice and smooth, but they can sometimes behave in unintuitive ways; I can imagine that an animated version might yield some entertaining "slidy" effects. The conformal mapping will preserve local angles in the interior of the polygon; this means that the size distortion very near a modified vertex can be severe.
People have been working on solutions to this problem for the past decade or two, and the state of the art keeps getting better (but the math gets harder as well). A good place to start (and roughly where I stopped following it) is the work at http://www.cs.technion.ac.il/~weber/Publications/Complex-Coordinates/
Read the paper there, and look up the papers in the references. One of them should give you an algorithm that you're willing to implement.
The simplest method I can think of is to triangulate the input polygon (using an ear clipping method, or something similarly good) and then move the points. Then you can use a barycentric mapping from the original polygon to the new space.
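A minimal sketch of that barycentric mapping for a single triangle (2D, NumPy; a full implementation would first locate which triangle of the triangulation contains the point):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    m = np.column_stack([b - a, c - a])
    v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - v - w, v, w])

def map_point(p, src_tri, dst_tri):
    """Carry p from a source triangle into its deformed counterpart."""
    weights = barycentric(p, *src_tri)
    return weights @ np.asarray(dst_tri)

src = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
dst = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.5, 1.5])]
print(map_point(np.array([0.25, 0.25]), src, dst))  # -> [0.625 0.375]
```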
If you're looking for something more robust, you might look at mean value coordinates.