Finding forward acceleration with data from a 3-axis accelerometer - trigonometry

I am using an accelerometer and would like to know if I can calculate the forward acceleration from the sensor data when the sensor is oriented so that no axis faces along the gravitational force, that is, none of the sensor's three axes reads 9.81.
Let a be the forward acceleration,
x” be the reading of the sensor’s x axis,
y” be the reading of the sensor’s y axis,
g be the gravitational acceleration.
A simple illustration is here.
Now g, x” and y” are known. Is there any way to calculate a, and the angle between g and y”, which I didn’t draw in the pic?
So far, what I have is the following:
x" = g cosθ - a sinθ ..... (1)
y" = g sinθ + a cosθ ..... (2)
After some calculation, I got:
g = y" sinθ + x" cosθ
Am I on the right track to find θ? Could you suggest a way to get the values of θ and the acceleration a?
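A sketch of how θ and a could then be recovered numerically, assuming the sign-consistent model x" = g cosθ - a sinθ, y" = g sinθ + a cosθ, i.e. the pair (x", y") is just (g, a) rotated by θ, so the magnitudes satisfy x"² + y"² = g² + a² (the helper name is ours, not from any library):

```python
import math

def solve_theta_a(xr, yr, g=9.81):
    """Recover tilt angle theta and forward acceleration a from the
    two axis readings, assuming x" = g*cos(t) - a*sin(t) and
    y" = g*sin(t) + a*cos(t), i.e. (x", y") is (g, a) rotated by t."""
    # Rotation preserves magnitude: x"^2 + y"^2 = g^2 + a^2
    a_sq = xr**2 + yr**2 - g**2
    if a_sq < 0:
        raise ValueError("readings inconsistent with |g| = %.2f" % g)
    a = math.sqrt(a_sq)  # the sign of a is ambiguous from the magnitude alone
    # Angle of the reading vector minus the angle of the unrotated (g, a)
    theta = math.atan2(yr, xr) - math.atan2(a, g)
    return theta, a
```

Note the sign ambiguity: the magnitude relation alone cannot distinguish braking (a < 0) from accelerating, so a second cue (e.g. the sign trend over time) would be needed in practice.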

Related

How do I rotate a point on the surface of a sphere given 3 degrees of rotation?

I have a point on a sphere that needs to be rotated. I have 3 different degrees of rotation (roll, pitch, yaw). Are there any formulas I could use to calculate where the point would end up after applying each rotation? For simplicity's sake, the sphere can be centered on the origin if that helps.
I've tried looking at different ways of rotation, but nothing quite matches what I am looking for. If I needed to just rotate the sphere, I could do that, but I need to know the position of a point based on the rotation of the sphere.
Using Unity as an example (though this is outside of Unity, in a separate project, so using their library is not possible):
If the original point is at (1, 0, 0)
And the sphere then gets rotated by [45, 30, 15]:
What is the new (x, y, z) of the point?
If you have a given rotation as a Quaternion q, then you can rotate your point (Vector3) p like this:
Vector3 pRotated = q * p;
And if you have your rotation in Euler Angles then you can always convert it to a Quaternion like this (where x, y and z are the rotations in degrees around those axes):
Quaternion q = Quaternion.Euler(x,y,z);
Note that Unity's Euler angles are defined so that the object is first rotated around the z axis, then around the x axis, and finally around the y axis - and these axes are all in the space of the parent transform, if any (not the object's local axes, which would move with each rotation).
So I suppose that the z axis would be roll, the x axis would be pitch and the y axis would be yaw. You might have to switch the signs on some axes to match the expected result - for example, a positive x rotation will tilt the object downwards (assuming that the object's notion of forward is its positive z direction and that up is its positive y direction).
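Outside Unity, that composition order can be sketched with plain numpy: the z rotation is applied first, then x, then y, all about the fixed world axes, so the y matrix ends up leftmost in the product. This is a right-handed sketch; Unity itself is left-handed, so some signs or axes may need flipping to reproduce its exact numbers, and the function names here are just illustrative:

```python
import math
import numpy as np

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotate_point(p, x_deg, y_deg, z_deg):
    """Apply z, then x, then y rotation about the fixed world axes,
    mirroring Unity's Euler order (right-handed sketch)."""
    x, y, z = map(math.radians, (x_deg, y_deg, z_deg))
    # Extrinsic z -> x -> y: the matrix applied last goes leftmost
    R = rot_y(y) @ rot_x(x) @ rot_z(z)
    return R @ np.asarray(p, dtype=float)
```

For the question's example, `rotate_point([1, 0, 0], 45, 30, 15)` gives the rotated point; since rotations are orthogonal, it stays on the unit sphere.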

How to find the orientation of a plane?

I have three non-collinear 3D points, let's say pt1, pt2, pt3. I've computed the plane P using sympy.Plane. How can I find the orientation of this plane (P), i.e. RPY (Euler angles) or a quaternion?
I never used sympy, but you should be able to find a function to get the angle between two vectors (your normal vector and the world Y axis):
theta = yaxis.angle_between(P.normal_vector)
then get the rotation axis, which is the normalized cross product of those same vectors.
axis = yaxis.cross(P.normal_vector).normal()
Then construct a quaternion from the axis and angle
q = Quaternion.from_axis_angle(axis, theta)
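If sympy turns out to be awkward, the same axis-angle construction can be sketched with plain numpy (the function names here are ours, not a library API). It builds the quaternion (w, x, y, z) that rotates the world Y axis onto the plane normal, with a fallback for the degenerate case where the normal is already parallel to Y:

```python
import numpy as np

def plane_quaternion(pt1, pt2, pt3, up=np.array([0.0, 1.0, 0.0])):
    # Plane normal from two in-plane edge vectors
    n = np.cross(pt2 - pt1, pt3 - pt1)
    n = n / np.linalg.norm(n)
    # Angle between the world "up" axis and the normal
    theta = np.arccos(np.clip(np.dot(up, n), -1.0, 1.0))
    # Rotation axis: perpendicular to both vectors
    axis = np.cross(up, n)
    norm = np.linalg.norm(axis)
    if norm < 1e-9:                      # normal (anti)parallel to up
        axis = np.array([1.0, 0.0, 0.0])
    else:
        axis = axis / norm
    # Quaternion (w, x, y, z) from axis-angle
    w = np.cos(theta / 2)
    return np.concatenate(([w], axis * np.sin(theta / 2)))

def quat_rotate(q, v):
    # v' = v + 2*u x (u x v + w*v), with q = (w, u)
    w, u = q[0], q[1:]
    return v + 2 * np.cross(u, np.cross(u, v) + w * v)
```

Converting that quaternion onward to RPY Euler angles is then a standard (if convention-sensitive) formula.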

Having trouble aligning 2D Lidar pointcloud to match the coordinate system of HTC Vive Controller

I have strapped an RPLidar A1 onto an HTC Vive Controller and have written a Python script that converts the lidar's pointcloud to XY coordinates and then transforms these points to match the rotation and movement of the Vive controller. The end goal is to be able to scan a 3D space using the controller for tracking.
Sadly, whatever I try (the native quaternion of the triad_openvr library, a transformation-matrix transform, even Euler angles), I simply cannot get the system to work on all possible movement/rotation axes.
# This converts the angle and distance measurement of lidar into x,y coordinates
# (divided by 1000 to convert from mm to m)
coord_x = (float(polar_point[0])/1000)*math.sin(math.radians(float(polar_point[1])))
coord_y = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))
# I then tried to use the transformation matrix of the
# vive controller on these points to no avail
matrix = vr.devices["controller_1"].get_pose_matrix()
x = (matrix[0][0]*coord_x+matrix[0][1]*coord_y+matrix[0][2]*coord_z+(pos_x-float(position_x)))
y = (matrix[1][0]*coord_x+matrix[1][1]*coord_y+matrix[1][2]*coord_z+(pos_y-float(position_y)))
z = (matrix[2][0]*coord_x+matrix[2][1]*coord_y+matrix[2][2]*coord_z+(pos_z-float(position_z)))
# I tried making quaternions using the euler angles and world axes
# and noticed that the math for getting euler angles does not correspond
# to the math included in the triad_openvr library, so I tried both to no avail

# my euler angles
angle_x = math.atan2(matrix[2][1], matrix[2][2])
angle_y = math.atan2(-matrix[2][0], math.sqrt(math.pow(matrix[2][1], 2) + math.pow(matrix[2][2], 2)))
angle_z = math.atan2(matrix[1][0], matrix[0][0])

euler = vr.devices["controller_1"].get_pose_euler()

# their euler angles (pose_mat = matrix)
yaw = math.pi * math.atan2(pose_mat[1][0], pose_mat[0][0])
pitch = math.pi * math.atan2(pose_mat[2][0], pose_mat[0][0])
roll = math.pi * math.atan2(pose_mat[2][1], pose_mat[2][2])

# the quaternion is just a generic conversion from the transformation matrix
# etc.
Expected results are a correctly oriented 2D slice in 3D space of data that, if appended, would eventually map the whole 3D space. Currently, I have only managed to scan successfully with a single axis (Z) and pitch rotation. I have tried a near-infinite number of combinations, some found in other posts, some based on raw linear algebra, and some simply random. What am I doing wrong?
Well, we figured it out by working with the Euler rotations and converting those to a quaternion.
We had to modify the whole definition of how triad_openvr calculates the Euler angles:
def convert_to_euler(pose_mat):
    pitch = 180 / math.pi * math.atan2(pose_mat[2][1], pose_mat[2][2])
    yaw = 180 / math.pi * math.asin(pose_mat[2][0])
    roll = 180 / math.pi * math.atan2(-pose_mat[1][0], pose_mat[0][0])
    x = pose_mat[0][3]
    y = pose_mat[1][3]
    z = pose_mat[2][3]
    return [x, y, z, yaw, pitch, roll]
And then we had to further rotate the Euler coordinates originating from the controller (roll corresponds to the X axis, yaw to Y, and pitch to Z, for some unknown reason):
r0 = np.array([math.radians(-180-euler[5]), math.radians(euler[3]), -math.radians(euler[4]+180)])
As well as pre-rotate our LiDAR points to correspond to the axis displacement of our real-world construction:
coord_x = (float(polar_point[0])/1000)*math.sin(math.radians(float(polar_point[1])))*(-1)
coord_y = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))*(math.sqrt(3))*(0.5)-0.125
coord_z = (float(polar_point[0])/1000)*math.cos(math.radians(float(polar_point[1])))*(-0.5)
It was finally a case of rebuilding the quaternions from the Euler angles (a workaround, we are aware) and doing the rotation & translation, in that order.
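The pipeline the answer ends with (Euler angles, converted to a quaternion, then rotation followed by translation) can be sketched like this; the intrinsic x-y-z composition order and the helper names are our assumptions, since triad_openvr's exact convention was the source of the trouble in the first place:

```python
import math
import numpy as np

def euler_to_quat(rx, ry, rz):
    """x-y-z Euler angles (radians) to a quaternion (w, x, y, z).
    The axis order is an assumption; triad_openvr may differ."""
    cx, sx = math.cos(rx / 2), math.sin(rx / 2)
    cy, sy = math.cos(ry / 2), math.sin(ry / 2)
    cz, sz = math.cos(rz / 2), math.sin(rz / 2)
    return np.array([
        cx * cy * cz - sx * sy * sz,
        sx * cy * cz + cx * sy * sz,
        cx * sy * cz - sx * cy * sz,
        cx * cy * sz + sx * sy * cz,
    ])

def quat_rotate(q, v):
    # v' = v + 2*u x (u x v + w*v), with q = (w, u)
    w, u = q[0], q[1:]
    return v + 2 * np.cross(u, np.cross(u, v) + w * v)

def lidar_point_to_world(p_lidar, q_controller, t_controller):
    # Rotate first, then translate: the order the answer ends on
    return quat_rotate(q_controller, p_lidar) + t_controller
```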

Different between rotating the camera vs rotating the scene point (only the point, not the entire scene)?

I think rotating the camera and taking a photo of a scene would yield the same result as keeping the camera still and rotating the scene the reverse way.
Assume that the original camera rotation matrix is R1. Rotating the camera means applying another rotation matrix R12 (so R2 = R12*R1 is the new rotation matrix). Assume that X is the real-world coordinate of a scene point. Rotating the scene point the reverse way means applying the inverse rotation matrix R12^-1 to X (this might be wrong).
So why (R12*R1)X != R1(R12^-1*X) ?
Can anyone explain to me where I'm wrong?
P.S. I'm not asking about programming or about the complexity of the two methods. I just want to know
(1) the mathematical equation for the action "rotating the scene"
(2) if my assumed equation for "rotating the scene" is correct, why the mathematical equations don't reflect the phenomenon in the real world as I described.
Edit 1: According to Spektre's answer, when I rotate the entire scene with the rotation matrix R, then the new camera rotation matrix is
R^-1*R1
In this case, I rotate the entire scene with the rotation matrix R12^-1, so the new camera rotation matrix is
(R12^-1)^-1*R1=R12*R1
However, what if I consider that rotating the camera is equivalent to rotating the scene point X (only the scene point X, not the entire scene)? In that case, the rotation matrix of the camera is still R1, but the scene point X becomes X', and the image coordinate of X' is R1*X'. What is the equation for X'? Note that
R1*X' = R12*R1*X
Of course, you can answer that
X'=R1^-1*R12*R1*X
But I think X' should be defined only by R12 and X (R1 doesn't need to be known to form X'). That's why I ask "what is the mathematical equation for rotating the scene point". X' is the result of the action "rotating X" by some rotation matrix related to R12.
I have another example, where the camera does not rotate but moves. Assume that I'm taking a photo of a model who is standing right in front of me. Her position is X. My position is C. In the first case, I move to the right (of me) and take the first photo. In the second case, I don't move, but the model moves to the left (of me) by the same amount, and I take the second photo. The position of the model in the two images must be identical. This is expressed by the mathematical equation
[R1 -R1*(C+d)]*X = [R1 -R1*C]*(X-d)
In the equation above (which I checked to be true), -R1*C is the translation vector, -R1*(C+d) is the translation vector when I move to my right, and (X-d) is the position of the model when she moves to my left.
In the above example, X' = X-d (so X' is defined through X and my movement d). In the case of rotating the camera, what is X'?
Edit 2: Since Spektre still doesn't understand my question, I need to emphasize that in the second case, I DO NOT rotate the ENTIRE world, I ONLY rotate the point X. (If I rotate the entire world, the world coordinate of X remains the same after its world rotates. But if I rotate only X, its world coordinate changes to X'.)
Just imagine the example of taking the photo of the model. In the first case, I rotate the camera and take the first photo of her (and her boyfriend standing next to her).
In the second case, I rotate ONLY the model in the reverse direction (her boyfriend stays put), then I take the second photo. When I compare the two photos, the position of the model is the same (the position of her boyfriend is different).
In both cases, the real-world position of her boyfriend is the same, but the real-world position of the model is changed in the second case, since I rotated her. My question is: what is the real-world position of the girl after I rotate her?
The answer to the title question is: mathematically both are almost the same (apart from inverting all the operations), but physically, rotating the camera means changing a single matrix, while rotating the scene means rotating all the objects in your world (which can be thousands and more), which is a lot slower...
But I assume the title and text are misleading, as the real question is about linear-algebra matrix equations.
Let R1, R2, R12 be square matrices of size N x N and X a vector of size N. If we ignore the vector orientation (1 x N vs N x 1), then for your convention:
R2 = R12.R1
R1 = Inverse(R12).R2
so:
R12.R1.X == R12.Inverse(R12).R2.X == R2.X
As you can see, the equation in your question is wrong because you changed the matrix multiplication order, and that matters because:
R1.R12 != R12.R1
If you want to know more about why, study linear algebra.
[Edit1] simple 1x1 example
let:
R1 = 1
R12= 2
R2 = R12.R1 = 2
so rewriting your wrong equation:
R12*R1*X != R1*Inverse(R12)*X
2* 1*X != 1* 0.5*X
2*X != 0.5*X
and using the correct one
R12*R1*X == R12*Inverse(R12)*R2*X == R2*X
2* 1*X == 2* 0.5* 2*X == 2*X
2*X == 2*X == 2*X
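The same identities can be checked numerically; here is a small sketch with 3D rotation matrices (3D, because planar rotations happen to commute, so a 2D example cannot show the order dependence):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

R1 = rot_z(0.4) @ rot_x(0.2)    # original camera rotation
R12 = rot_z(0.9) @ rot_x(0.7)   # extra rotation applied to the camera
R2 = R12 @ R1                   # new camera rotation
X = np.array([1.0, 2.0, 3.0])

# Correct chain: R12.R1.X == R12.Inverse(R12).R2.X == R2.X
correct = R12 @ np.linalg.inv(R12) @ R2 @ X
# Swapped order (the equation from the question) gives something else
wrong = R1 @ np.linalg.inv(R12) @ X
```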
[Edit2] simple 2D example
I see you are still confused, so here is a 2D example of the problem:
On the left you have a camera rotated by R1, so for rendering, a world point (x,y) is transformed into its local coordinates (x1,y1). On the right the situation is reversed: the camera coordinate system is axis-aligned (unit matrix) and the scene is rotated in reverse by Inverse(R1). That is how it works (where R1 is the relative matrix in this case).
Now if I try to port this to your matrix names and convention, with R12 as the relative matrix and R1 as the camera:
(R1.R12).(x,y) = (x1,y1)
Inverse(R1.R12).(x1,y1) = (x,y)

How to calculate Angles it would take to rotate from one vector to another?

I have two normalized vectors:
A) 0,0,-1
B) 0.559055,0.503937,0.653543
I want to know: what rotations about the axes would it take to move the vector (0,0,-1) to (0.559055, 0.503937, 0.653543)?
How would I calculate this? Something like: rotate 40 degrees about the X axis and 220 degrees about the Y axis (that's just an example; I don't know how to work it out).
Check this out (Google is a good thing).
This calculates the angle between two vectors.
If Vector A is (ax, ay, az) and
Vector B is (bx, by, bz), then
The cos of angle between them is:
(ax*bx + ay*by + az*bz)
--------------------------------------------------------
sqrt(ax*ax + ay*ay + az*az) * sqrt(bx*bx + by*by + bz*bz)
To calculate the angle between the two vectors as projected onto the x-y plane, just ignore the z-coordinates.
Cosine of Angle in x-y plane =
(ax*bx + ay*by)
--------------------------------------
sqrt(ax*ax + ay*ay) * sqrt(bx*bx + by*by)
Similarly, to calculate the angle between the projections of the two vectors in the x-z plane, ignore the y-coordinates.
It sounds like you're trying to convert from Cartesian coordinates (x,y,z) into spherical coordinates (rho,theta,psi).
Since they're both unit vectors, rho, the radius, will be 1. This means the magnitudes will also be 1, so you can skip the whole denominator and just use the dot product.
Rotating in the X/Y plane (about the Z axis) will be very difficult with your first example (0,0,-1) because it has no extension in X or Y. So there's nothing to rotate.
(0,0,-1) is 90 degrees from (1,0,0) or (0,1,0). If you take the x axis to be the zero angle for theta, then you can calculate phi (the rotation off the X/Y plane) by applying the inverse cosine to (x,y,z) and (x,y,0), and then skip dot products and get theta (the x/y rotation) with atan2(y,x).
Beware of gimbal lock which may cause problems.
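A sketch of that decomposition in Python, with theta measured from +x in the x-y plane and phi as elevation out of that plane. Note that theta is undefined at (0,0,-1) itself, which is exactly the gimbal-lock / no-X-Y-extension issue mentioned above; the function names are ours:

```python
import math

def to_spherical_angles(x, y, z):
    """Decompose a unit vector into theta (azimuth in the x-y plane,
    from the +x axis) and phi (elevation out of the x-y plane)."""
    theta = math.atan2(y, x)                  # undefined when x = y = 0
    phi = math.atan2(z, math.hypot(x, y))
    return theta, phi

def from_spherical_angles(theta, phi):
    """Rebuild the unit vector from the two angles."""
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))
```

Running `to_spherical_angles(0.559055, 0.503937, 0.653543)` gives the two angles the question asks about; the round trip recovers the (normalized) input vector.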
