Rotate/Translate a plane from one normal to another - graphics

I have a plane with its normal as (0,1,0), i.e. it's an x-z plane. I have a new normal and distance. I want to convert my original plane to the new plane normal/distance.
To calculate the rotation, I simply took the cross product of the two normals and got the angle from their dot product. Then I rotated the plane. How do I move the plane along the new normal? If my original plane passes through (0,0,0), do I just translate it by (Nx*d, Ny*d, Nz*d) (where N is the new normal and d the distance from the origin)?

How do I move the plane along the new normal?
I think your proposal is correct.
Assume you represent the plane with a unit normal and a distance from the origin to the plane. Then you can apply any translation like this:
m_distance += m_normal.Dot(translation); // translation will be (Nx*d, Ny*d, Nz*d) in your case
// m_normal stays the same, since a translation does not change the plane's orientation.
Since your translation is d times the unit normal, this reduces to m_distance = m_distance + d.
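A minimal sketch of this idea, assuming the plane is stored as a unit normal plus a distance from the origin (the function and variable names are just illustrative):

```python
import numpy as np

def translate_plane(normal, distance, translation):
    """Translate a plane stored as (unit normal, distance from origin).

    Only the distance changes: the translation's component along the
    normal is added to it; the normal itself is unaffected.
    """
    return normal, distance + np.dot(normal, translation)

# Original x-z plane through the origin: normal (0, 1, 0), distance 0.
n = np.array([0.0, 1.0, 0.0])
d = 0.0

# Moving the plane by 2.5 along its own normal adds exactly 2.5 to the distance.
new_n, new_d = translate_plane(n, d, 2.5 * n)
```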

Related

In geometry nodes how can I align instances to both a curve and ground plane?

Given this simple setup:
Node Tree
Viewport Preview
How can I align the plane instances such that the y axis of each plane is parallel to the curve, and the x axis of each plane is parallel to the ground plane (the XY plane)?
I've tried various combinations with the "Align Euler to Vector" node, but as soon as the curve does not point along a specific axis, the planes get tilted and the alignment to the ground plane is lost.
Any suggestions?
So after some research I found a solution to my own question. I'm posting it in case anyone else needs to find this out.
Note that I'm not a mathematician and there might be a shorter solution (or specific nodes that I'm not aware of that can perform some of the steps). Also note that this is a solution for instances on a straight line, which is what I was aiming for. I didn't test this setup on a curved line, but my guess is that it will not work; for that you'll need to perform step 3 for every point, or something like that.
Ok here we go:
1. Generate instances on a line with the Instance on Points node.
2. Auto-orient the instances on the z axis with the Align Euler to Vector node, based on the normal of the line.
3. Calculate a vector between 2 points on the line (which points you pick is not important since the line is straight, but the order of the subtraction matters!). To get the vector from point 1 to point 2, subtract point 1 from point 2 (like so: point 2 - point 1).
4. Calculate the angle between the new vector and the ground plane's normal [0, 0, 1] using:
θ = arccos( dot(v1, v2) / ( length(v1) * length(v2) ) )
5. Calculate the complementary angle, which is 90 degrees - θ (convert 90 to radians, of course).
6. Rotate the instances on the x axis by the resulting value.
Node Tree
Result
If there is a shorter/easier solution, let me know.
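Steps 3 to 5 boil down to a small calculation; a sketch outside of the node tree (point values are just examples):

```python
import math

def tilt_correction(p1, p2):
    """Angle (radians) to rotate about X so instances stay level.

    p1, p2: two points on the straight line (order matters).
    Returns 90 degrees minus the angle between the line direction
    and the ground-plane normal [0, 0, 1].
    """
    v = [b - a for a, b in zip(p1, p2)]               # step 3: direction vector
    up = [0.0, 0.0, 1.0]                              # ground-plane normal (unit)
    dot = sum(a * b for a, b in zip(v, up))
    length_v = math.sqrt(sum(c * c for c in v))
    theta = math.acos(dot / length_v)                 # step 4
    return math.radians(90) - theta                   # step 5

# A line rising at 45 degrees: the correction is 45 degrees (pi/4 radians).
angle = tilt_correction((0, 0, 0), (0, 1, 1))
```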

Find all the planar surfaces in an rgbd image using depth and normal data

Many questions deal with generating normal from depth or depth from normal, but I want to ask about a simple way to generate all the planar surfaces given the depth and normal of an image.
I already have depth and normal of each pixel in the image. For each pixel (ui, vi), assume that we can get its 3D coordinates (xi, yi, zi) with zi as the depth and normal vector (nix, niy, niz). Thus, a unique tangent plane is defined by: nix(x - xi) + niy(y - yi) + niz(z - zi) = 0. Then, for each pixel we can define a unique planar surface by the above equation.
What is a common practice in finding the function f such that f(u, v) = (x, y, z) (from pixel to 3D coordinates)? Is pinhole model (plus the depth data) an effective and accurate one?
How does one generate all the planar surfaces efficiently? One way is to iterate through all the pixels in the image and find all the planes, but this seems inefficient.
If it is a pinhole model:
make sure your 3D data is not distorted by the projection.
group your points by normal
This is easy or hard depending on the accuracy of the points/normals. Simply sort the points by normal, which leads to O(n log n), where n is the number of points.
test/group by planes within a single normal group
The idea is to pick 3 points from a group, compute the plane from them, and test which points of the group belong to it. If the count is too low, you picked wrong points (not belonging to the same plane) and need to pick different ones. Also, if the picked points are too close to each other or on the same line, you cannot get a correct plane from them.
The math function for plane is:
x*nx + y*ny + z*nz + d = 0
where (nx,ny,nz) is your normal of the group (unit vector) and (x,y,z) is your point position. So you just compute d from a known point (one of the picked ones (x0,y0,z0) ) ...
d = -x0*nx -y0*ny -z0*nz
and then just test which points satisfy this condition:
threshold = 1e-20; // just an accuracy margin
fabs(x*nx + y*ny + z*nz + d) <= threshold
Now remove the matched points from the group (move them into the found plane object) and apply this step again to the remaining points, until their count is too low or no valid plane is found...
then test another group until no groups are left...
I think RANSAC can speed things up and avoid brute force in this case, but I have never used it myself, so search for it...
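For the f(u, v) = (x, y, z) part of the question, the standard pinhole back-projection plus the plane test from this answer can be sketched as follows (the intrinsics fx, fy, cx, cy are made-up calibration values for illustration):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z -> camera-space 3D.

    fx, fy are focal lengths in pixels, (cx, cy) the principal point;
    these intrinsics come from your camera calibration (assumed known).
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def on_plane(point, normal, d, tol=1e-6):
    """Test whether a point satisfies x*nx + y*ny + z*nz + d == 0 within tol."""
    return abs(np.dot(normal, point) + d) <= tol

# The pixel at the principal point back-projects straight down the optical axis.
p = backproject(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
n = np.array([0.0, 0.0, 1.0])
d = -np.dot(n, p)          # d from a known point, as in the answer
```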
A possible approach for the planes is to consider the set of normal vectors and perform clustering on them (for instance by k-means). Then every cluster can correspond to several parallel surfaces. By evaluating the distance from the origin (a scalar function), you can form sub-clusters which will separate those surfaces. Finally, points at constant distance can belong to different coplanar patches, which you can separate by connected component labelling.
It is likely that clustering on the normal vectors and distance simultaneously (hence in a 4D space) will yield better results and be simpler. Be sure to normalize the vectors. Another option is to represent the vectors by just two parameters (such as spherical angles), but this will lead to a quite non-uniform mapping, and create phase wrapping issues.
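A toy sketch of the 4D (normal + distance) clustering idea, using a tiny hand-rolled k-means with deterministic initialization as a stand-in for any off-the-shelf implementation (the data is synthetic):

```python
import numpy as np

def cluster_planes(normals, distances, k, iters=20):
    """Cluster (unit normal, distance) pairs in 4D with a tiny k-means.

    Each resulting label groups points that likely lie on the same
    (or a parallel, equidistant) plane; coplanar patches would still
    need to be separated afterwards, e.g. by connected components.
    """
    X = np.hstack([normals, distances[:, None]])    # 4D feature per point
    # Deterministic init: k evenly spaced samples as starting centers.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated planes: z = 1 (normal +Z) and x = 5 (normal +X).
normals = np.array([[0.0, 0.0, 1.0]] * 10 + [[1.0, 0.0, 0.0]] * 10)
dists = np.array([1.0] * 10 + [5.0] * 10)
labels = cluster_planes(normals, dists, k=2)
```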

Fast vertex computation for near skeletal animation

I am building a 3D flower mesh through a series of extrusions (Working in Unity 4 environment). For each flower there are splines that define each branch, petals, leaves etc. Around each spline there is a shape that is being extruded along that spline. And a thickness curve that defines at each point in the spline, how thick the shape could be.
I am trying to animate this mesh using parameters (i.e. look towards sun, bend with the wind).
I have been working on this for nearly two months, and my algorithm boiled down to basically reconstructing the flower 'every frame', by iterating over the shape for each spline point, transforming it into the current space and calculating new vertex positions from it, a version of parallel transport frames.
for each vertex:
v = p + q * e * w
where
p = point on the path
e = vertex position in the local space of shape being extruded
q = quaternion to transform into the local space of p (by its direction towards the next path point)
w = width at point p
I believe this is as small of a step as it gets to extrude the model. I need to do this once for each vertex in the model basically.
I have hit the point where this is too slow for my current scene. I need 5-6 flowers with a total of around 60k vertices, and I concluded that this is the bottleneck.
I can see that this is a similar problem to skeletal animation, where each joint would control a cross-section of the extruded shape. It is not a direct question, but I'm wondering whether someone can elaborate on whether I can borrow techniques from skeletal animation to speed up the process. From my current perspective, I just don't see how I can avoid at least one calculation per vertex, and in that case I would just keep rebuilding the mesh every frame.
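To make the per-vertex formula v = p + q * e * w concrete, here is a sketch (in numpy, not the author's actual Unity code; all names are illustrative) that transforms a whole cross-section ring per call rather than one vertex at a time, which is the same batching idea a skinning shader would push to the GPU:

```python
import numpy as np

def quat_rotate(q, vecs):
    """Rotate an array of 3-vectors by a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # v' = v + 2w (u x v) + 2 (u x (u x v))
    c = np.cross(u, vecs)
    return vecs + 2.0 * w * c + 2.0 * np.cross(u, c)

def extrude_ring(p, q, shape, width):
    """v = p + q * e * w for every vertex e of one cross-section at once.

    p: spline point, q: orientation quaternion at p, shape: (n, 3) local
    cross-section vertices, width: thickness at p. Processing whole rings
    per call (instead of a Python-level loop per vertex) is the first
    cheap speedup before moving the transform into a skinning shader.
    """
    return p + quat_rotate(q, shape * width)

# Identity orientation: the ring is just scaled by the width and offset to p.
ring = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0], [0, -1.0, 0]])
out = extrude_ring(np.array([0.0, 0, 5.0]), (1.0, 0, 0, 0), ring, 2.0)
```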

Create larger plane of cartesian points from a smaller plane with few representative points

I have a plane of cartesian points made up of a few points. I would like to use this small number of points to create a larger plane of points similar geometrically to the smaller plane. Is there an easy way to accomplish this?
1) Take three points that you know are coplanar, and calculate the normal vector of that polygon.
2) http://en.wikipedia.org/wiki/Coplanarity From your normal, you can solve the plane formula and randomly generate other points on the same plane.
(To decide with what distribution to randomly generate the points, you could, for example, compute the centroid of the existing points and their standard deviation from it.)
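A sketch of both steps, assuming three known coplanar points; the spread of the generated points is a parameter here, which you could set from the standard deviation of your existing points:

```python
import numpy as np

def sample_plane_points(p0, p1, p2, count, spread=1.0, seed=0):
    """Generate `count` random points on the plane through p0, p1, p2.

    Builds the normal from a cross product, derives two in-plane basis
    vectors, and scatters Gaussian offsets (std = spread) around p0.
    """
    u = p1 - p0
    n = np.cross(u, p2 - p0)           # step 1: normal of the polygon
    n = n / np.linalg.norm(n)
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)                 # second in-plane direction
    ab = np.random.default_rng(seed).normal(0.0, spread, (count, 2))
    return p0 + ab[:, :1] * u + ab[:, 1:] * v

# Three points on the plane z = 1: every generated point stays at z = 1.
pts = sample_plane_points(np.array([0.0, 0, 1]),
                          np.array([1.0, 0, 1]),
                          np.array([0.0, 1, 1]), count=100)
```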

How to map points in a 3D-plane into screen plane

I have been given an assignment to project an object in 3D space onto a 2D plane using simple graphics in C. A cube is placed at a fixed position in 3D space, and there is a camera placed at a position with coordinates x, y, z, looking at the origin, i.e. (0, 0, 0). Now we have to project the cube's vertices onto the camera plane.
I am proceeding with the following steps
Step 1: I find the equation of the plane aX+bY+cZ+d=0 which is perpendicular to the line drawn from the camera position to the origin.
Step 2: I find the projection of each vertex of the cube to the plane which is obtained in the above step.
Now I want to map those vertex position which i got by projection in step 2 in the plane aX+bY+cZ+d=0 into my screen plane.
thanks,
I don't think that simply setting the z co-ordinate to zero will give the actual mapping, so any help in figuring this out would be appreciated.
You can do that in two simple steps:
Transform the cube's coordinates into the camera's system (using a rotation), such that the camera's own coordinates in that system are x = y = z = 0 and the cube's transformed z's are > 0.
Project the transformed cube's coordinates onto a 2D plane by dividing their x's and y's by their respective z's (you may need to apply a constant scaling factor here for the coordinates to be reasonable for the screen, e.g. not too small and within +/- half the screen's height in pixels). This creates the perspective effect. You can now draw pixels on the screen using these divided x's and y's, with x = y = 0 as its center.
This is pretty much how it is done in 3d games. If you use cube vertex coordinates, then you get projections of its sides onto the screen. You may then solid-fill the resultant 2d shapes or texture-map them. But for that you'll have to first figure out which sides are not obscured by others (unless, of course, you use a technique called z-buffering). You don't need that for a simple wire-frame demo, though, just draw straight lines between the projected vertices.
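The two steps above can be sketched like this (Python instead of C for brevity; the world up vector (0, 1, 0) is an assumption and degenerates if the camera sits on the y axis, and the scale factor is arbitrary):

```python
import numpy as np

def look_at_origin(cam):
    """Orthonormal camera basis for a camera at `cam` looking at (0, 0, 0)."""
    fwd = -cam / np.linalg.norm(cam)          # viewing direction
    world_up = np.array([0.0, 1.0, 0.0])      # assumed up vector
    right = np.cross(fwd, world_up)
    right = right / np.linalg.norm(right)
    up = np.cross(right, fwd)
    return right, up, fwd

def project(point, cam, scale=100.0):
    """Step 1: rotate/translate into camera space. Step 2: divide by depth."""
    right, up, fwd = look_at_origin(cam)
    rel = point - cam                          # translate so the camera is at 0
    x, y, z = np.dot(right, rel), np.dot(up, rel), np.dot(fwd, rel)
    return scale * x / z, scale * y / z        # perspective divide

# The origin itself projects to the center of the screen, (0, 0).
sx, sy = project(np.array([0.0, 0, 0]), cam=np.array([0.0, 0, 10.0]))
```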
