I have been measuring the change in orientation of a 3D shape (a 3-space sensor) with different starting positions for a controlled end position.
I am using Euler angles: (P)itch, (R)oll and (Y)aw.
My measure of orientation change is:
Orientation change OC = |dP| + |dR| + |dY|
The three starting positions differ only in sensor roll (degrees):
1) 0
2) 45
3) 90
With each starting position, the sensor is tared and then elevated to 30 degrees along the roll axis.
The problem is that for 1) and 3) I get OC = 30 as expected, representing a pure pitch error and a pure yaw error of 30 degrees respectively. However, for 2) OC is significantly greater than 30, being a sum of non-zero pitch, roll and yaw.
Is this as expected? Assuming it is, is there a better measure of OC that is not sensitive to starting position?
The answer, supplied by Nico in the comments, was to use a quaternion distance: the dot product of the initial and final quaternions, which my sensor supplied as unit quaternions where x² + y² + z² + w² = 1.
The dot product was given by:
dot = xi*xf + yi*yf + zi*zf + wi*wf
Furthermore, I used the relative angle theta (in radians) between these quaternions, given by:
theta = 2 * acos(dot)
The sensors gave greatly improved results using theta, which improved further once a gradient-descent calibration was applied.
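In code, the measure looks something like this minimal sketch, assuming unit quaternions in (x, y, z, w) order; the clamp and the abs() (which treats q and -q as the same orientation) are safeguards added here, not part of the original recipe:
import numpy as np

def quaternion_angle(q_initial, q_final):
    # dot = xi*xf + yi*yf + zi*zf + wi*wf
    dot = float(np.dot(q_initial, q_final))
    dot = min(1.0, max(-1.0, abs(dot)))  # clamp against floating-point drift
    return 2.0 * np.arccos(dot)          # theta in radians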
I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space and each point's radius is in pixels, but the positions fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth-clipping issue. The points have constant on-screen size regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to the z-buffer being non-linear, but I could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel-space radius into world-space units, since the z-buffer is still in world units even after transforming by the view-projection matrix. This can be done by converting pixels into a factor (factor = pixels / screen_size), then converting the factor into world-space units, which was a little more involved: I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
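A rough standalone sketch of that conversion, assuming a symmetric perspective frustum (the names here are illustrative, not my engine's actual globals):
import math

def pixel_radius_to_world(radius_px, screen_width_px, fov_y_rad, aspect, cam_z_dist):
    factor = radius_px / screen_width_px           # pixels -> fraction of screen width
    # world-space width of the view at this camera-Z distance:
    # half-height = dist * tan(fov_y / 2), width = 2 * half-height * aspect
    world_width = 2.0 * cam_z_dist * math.tan(fov_y_rad / 2.0) * aspect
    return factor * world_width                    # fraction -> world units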
The reason I was getting different results at different distances was mainly that I was only offsetting the z component of the clip position. It's also necessary to offset the w component to make the depth offset work at any distance (i.e. linearly). However, in order to offset the w component, you first have to divide xy by w, modify w as needed, then re-multiply xy by the new w. This made the math fairly involved, so I changed strategy and offset the vertex before clip space, which requires manually calculating the distance to the camera along its Z axis, but it honestly ended up being about the same amount of math either way.
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by the radius will only work for surfaces facing the camera, since we are moving toward the camera rather than away from the surface
    // + because of this, we also multiply by another 4 to compensate for nearby surface angles, but no single scale works for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera" and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0 ) );
    // calculate the clip-space radius of the point from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clipping depth (part of clip_radius) to retain constant scale, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (also part of clip_radius, via radius_fac) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version, which offsets z & w instead of changing things in world space. After the edits above, this is probably the better solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera" and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0 ) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance (w), which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this up to compensate for surfaces at high angles relative to the camera (since we are moving directly at it)
    // + notice we have to apply the perspective divide before modifying w, then re-apply the scale after, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate the clip-space radius of the point from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clipping depth (part of clip_radius) to retain constant scale, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (also part of clip_radius, via radius_fac) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
I am trying to convert a quaternion to yaw-pitch-roll Euler angles. I am experiencing problems with gimbal lock. The first strange thing is that errors already start to appear when the pitch angle is in the neighbourhood of ±pi/2; I thought problems should only occur at exactly pi/2.
Secondly, my code shows an incorrect yaw angle of 180 degrees at a pitch of -90 degrees.
I tried the code from this post and from this site, but neither worked. I also tried the pyquaternion library, but it does not even attempt to compensate for gimbal lock. In the end I wrote the Python equivalent of this section of a handbook.
That worked best but still shows the issues above. It seems like a problem that must have been solved a thousand times, yet I can't pinpoint the issue.
For the quaternion: [ 0.86169383 0.02081877 -0.5058515 0.03412598] the code returns the correct yaw pitch roll angles: [0.15911653941132517, -60.832556785346696, -9.335093630495875]
For the quaternion: [ 0.81154224 0.01913839 -0.58165337 0.05207959] the code returns the incorrect yaw pitch roll angles: [-173.53260107524108, -71.09657335881491, 0.0]
Here is my code:
def yaw_pitch_roll(self):
    """Get the equivalent yaw-pitch-roll angles, aka intrinsic Tait-Bryan angles following the z-y'-x'' convention.

    Returns:
        yaw: rotation angle around the z-axis in radians, in the range [-pi, pi]
        pitch: rotation angle around the y'-axis in radians, in the range [-pi/2, pi/2]
        roll: rotation angle around the x''-axis in radians, in the range [-pi, pi]

    The resulting rotation matrix would be R = R_x(roll) R_y(pitch) R_z(yaw).

    Note:
        This only makes sense for a unit quaternion. Calling this method will implicitly normalise the Quaternion object to a unit quaternion if it is not already one.
    """
    self._normalise()
    qw = self.q[0]
    qx = self.q[1]
    qy = self.q[2]
    qz = self.q[3]
    print(2*(qx*qz - qw*qy), self.q)
    if 2*(qx*qz - qw*qy) >= 0.94:     # preventing gimbal lock for north pole
        yaw = np.arctan2(qx*qy - qw*qz, qx*qz + qw*qy)
        roll = 0
    elif 2*(qx*qz - qw*qy) <= -0.94:  # preventing gimbal lock for south pole
        yaw = -np.arctan2(qx*qy - qw*qz, qx*qz + qw*qy)
        roll = 0
    else:
        yaw = np.arctan2(qy*qz + qw*qx,
                         1/2 - (qx**2 + qy**2))
        roll = np.arctan2(qx*qy - qw*qz,
                          1/2 - (qy**2 + qz**2))
    pitch = np.arcsin(-2*(qx*qz - qw*qy))
    return yaw, pitch, roll
Given a quaternion q, you can calculate roll, pitch and yaw like this:
yaw = atan2(2.0*(qy*qz + qw*qx), qw*qw - qx*qx - qy*qy + qz*qz);
pitch = asin(-2.0*(qx*qz - qw*qy));
roll = atan2(2.0*(qx*qy + qw*qz), qw*qw + qx*qx - qy*qy - qz*qz);
This fits the intrinsic Tait-Bryan rotation of xyz order. For other rotation orders, extrinsic rotations, and proper Euler rotations, other conversions have to be used.
This works well for me in Autodesk Maya, where other solutions with pole exceptions had strange gimbal effects.
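For reference, a direct Python transcription of these three formulas might look like this (the clamp on the asin argument is a small numerical safeguard added here, not part of the original answer):
import math

def quat_to_ypr(qw, qx, qy, qz):
    # intrinsic Tait-Bryan rotation, xyz order, per the formulas above
    yaw   = math.atan2(2.0 * (qy*qz + qw*qx), qw*qw - qx*qx - qy*qy + qz*qz)
    pitch = math.asin(max(-1.0, min(1.0, -2.0 * (qx*qz - qw*qy))))
    roll  = math.atan2(2.0 * (qx*qy + qw*qz), qw*qw + qx*qx - qy*qy - qz*qz)
    return yaw, pitch, roll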
I have the plane equation describing the points belonging to a plane in 3D, and the origin of the normal (X, Y, Z). This should be enough to generate something like a 3D arrow. In PCL this is possible via the viewer, but I would like to actually store those 3D points inside the cloud. How do I generate them, then? A cylinder with a cone on top?
To generate a line perpendicular to the plane:
You have the plane equation. This gives you the direction of the normal to the plane. If you used PCL to get the plane, this is in ModelCoefficients. See the details here: SampleConsensusModelPerpendicularPlane
The first step is to make a line along the normal at the point you mention (X,Y,Z). Let (NORMAL_X,NORMAL_Y,NORMAL_Z) be the normal you got from your plane equation. Something like:
pcl::PointXYZ pnt_on_line;
for(double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT){
    pnt_on_line.x = X + distfromstart * NORMAL_X;
    pnt_on_line.y = Y + distfromstart * NORMAL_Y;
    pnt_on_line.z = Z + distfromstart * NORMAL_Z;
    my_cloud.points.push_back(pnt_on_line);
}
Now you want to put a hat on your arrow, and pnt_on_line conveniently ends the loop holding the end of the line, exactly where you want to put it. To make the cone, you could loop over angle and distance along the arrow, calculate local x, y, and z values from those, and convert them to points in point-cloud space. The z part is converted into your point cloud's frame of reference by multiplying with the normal vector as above, while the x and y parts are multiplied by vectors perpendicular to this normal vector. To get these, choose an arbitrary unit vector perpendicular to the normal vector (for your x axis) and take its cross product with the normal vector to find the y axis.
The second part of this explanation is fairly terse, but the first part may be the more important one.
Update
So possibly the best way to describe how to do the cone is to start with a cylinder, which is an extension of the line described above. In the case of the line, we have (part of) a one-dimensional manifold embedded in 3D space; that is, we have one variable to loop over while adding points. The cylinder is a two-dimensional object, so we have to loop over two dimensions: the angle and the distance. In the case of the line we already have the distance. So the above loop would now look like:
for(double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT){
    for(double angle = 0.0; angle < 2*M_PI; angle += M_PI/8){
        //calculate coordinates of point and add to cloud
    }
}
Now, in order to calculate the coordinates of the new point, we already have the point on the line; we just need to add a vector to it that moves it away from the line in the direction given by the angle. Let's say the radius of our cylinder is 0.1, and that perpendicular_1 and perpendicular_2 form an orthonormal basis perpendicular to the normal of the plane, which we have already calculated (we will see how later); that is, two vectors perpendicular to each other, of length 1, and both perpendicular to (NORMAL_X,NORMAL_Y,NORMAL_Z):
//calculate coordinates of point and add to cloud
pnt_on_cylinder.x = pnt_on_line.x + perpendicular_1.x * 0.1 * cos(angle) + perpendicular_2.x * 0.1 * sin(angle);
pnt_on_cylinder.y = pnt_on_line.y + perpendicular_1.y * 0.1 * cos(angle) + perpendicular_2.y * 0.1 * sin(angle);
pnt_on_cylinder.z = pnt_on_line.z + perpendicular_1.z * 0.1 * cos(angle) + perpendicular_2.z * 0.1 * sin(angle);
my_cloud.points.push_back(pnt_on_cylinder);
Actually, this is a vector summation; written in vector form, the operation looks like:
pnt_on_line + 0.1*(perpendicular_1*cos(angle) + perpendicular_2*sin(angle))
Now I said I would talk about how to calculate perpendicular_1 and perpendicular_2. Let K be any unit vector that is not parallel to (NORMAL_X,NORMAL_Y,NORMAL_Z) (this can be found by trying e.g. (1,0,0) then (0,1,0)).
Then
perpendicular_1 = K X (NORMAL_X,NORMAL_Y,NORMAL_Z)
perpendicular_2 = perpendicular_1 X (NORMAL_X,NORMAL_Y,NORMAL_Z)
Here X denotes the vector cross product, and the above are vector equations; normalise both results to unit length, since the cross product of two unit vectors is only unit length when they are perpendicular. Note also that the original calculation of pnt_on_line involved a scalar multiplication and a vector summation (I am just writing this for completeness of the exposition).
If you can manage this, then the cone is easy: just change a couple of things in the double loop. The radius shrinks along the cone's length until it reaches zero at the end of the loop, and distfromstart no longer starts at 0.
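Putting the pieces together, here is a rough numpy sketch of the whole arrow in plain Python rather than PCL (the origin, normal, lengths, and step sizes are all illustrative):
import numpy as np

origin = np.array([0.0, 0.0, 0.0])   # (X, Y, Z)
normal = np.array([0.0, 0.0, 1.0])   # (NORMAL_X, NORMAL_Y, NORMAL_Z), unit length

# orthonormal basis perpendicular to the normal
k = np.array([1.0, 0.0, 0.0])
if abs(np.dot(k, normal)) > 0.9:     # nearly parallel, pick another axis
    k = np.array([0.0, 1.0, 0.0])
perpendicular_1 = np.cross(k, normal)
perpendicular_1 /= np.linalg.norm(perpendicular_1)
perpendicular_2 = np.cross(perpendicular_1, normal)  # already unit length

points = []
line_length, cone_length, cone_radius = 1.0, 0.2, 0.1
# shaft: a line of points along the normal
for d in np.arange(0.0, line_length, 0.01):
    points.append(origin + d * normal)
# cone: the radius shrinks linearly to zero at the tip
cone_base = origin + line_length * normal
for d in np.arange(0.0, cone_length, 0.01):
    r = cone_radius * (1.0 - d / cone_length)
    for angle in np.arange(0.0, 2.0 * np.pi, np.pi / 8):
        points.append(cone_base + d * normal
                      + r * (np.cos(angle) * perpendicular_1
                             + np.sin(angle) * perpendicular_2))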
I have two normalized vectors:
A) 0,0,-1
B) 0.559055,0.503937,0.653543
I want to know: what rotations about the axes would it take to get from (0,0,-1) to (0.559055, 0.503937, 0.653543)?
How would I calculate this? Something like "rotate 40 degrees about the X axis and 220 about the Y axis" (that's just an example; I don't know how to work it out).
Check this out (Google is a good thing).
This calculates the angle between two vectors.
If Vector A is (ax, ay, az) and
Vector B is (bx, by, bz), then
The cosine of the angle between them is:
(ax*bx + ay*by + az*bz)
--------------------------------------------------------
sqrt(ax*ax + ay*ay + az*az) * sqrt(bx*bx + by*by + bz*bz)
To calculate the angle between the two vectors as projected onto the x-y plane, just ignore the z-coordinates.
Cosine of Angle in x-y plane =
(ax*bx + ay*by)
--------------------------------------
sqrt(ax*ax + ay*ay) * sqrt(bx*bx + by*by)
Similarly, to calculate the angle between the projections of the two vectors in the x-z plane, ignore the y-coordinates.
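As a small numpy sketch of the two calculations above (the example vectors are arbitrary; note the projected version breaks down when a projection is the zero vector, as it is for (0,0,-1)):
import numpy as np

def angle_between(a, b):
    # angle in radians between vectors a and b (any dimension)
    return np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([0.559055, 0.503937, 0.653543])
b = np.array([1.0, 0.0, 0.0])
theta_3d = angle_between(a, b)          # full 3D angle
theta_xy = angle_between(a[:2], b[:2])  # angle of the projections in the x-y plane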
It sounds like you're trying to convert from Cartesian coordinates (x,y,z) into spherical coordinates (rho,theta,phi).
Since they're both unit vectors, rho (the radius) will be 1. This means the magnitudes are also 1, so you can skip the whole denominator and just use the dot product.
Rotating in the X/Y plane (about the Z axis) will be very difficult with your first example (0,0,-1) because it has no extension in X or Y. So there's nothing to rotate.
(0,0,-1) is 90 degrees from (1,0,0) or (0,1,0). If you take the x-axis as the zero angle for theta, then you can calculate phi (the rotation off of the x-y plane) by applying the inverse cosine to (x,y,z) and (x,y,0), and you can skip the dot products and get theta (the x/y rotation) with atan2(y,x).
Beware of gimbal lock which may cause problems.
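A minimal sketch of that Cartesian-to-spherical conversion, assuming a unit vector; asin(z) used here is a signed equivalent of the inverse-cosine construction between (x,y,z) and (x,y,0) described above:
import math

def unit_vector_to_spherical(x, y, z):
    # rho is 1 for a unit vector; theta is the rotation in the x-y plane
    # measured from the x-axis; phi is the elevation off the x-y plane
    theta = math.atan2(y, x)
    phi = math.asin(z)
    return 1.0, theta, phi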
I have read similar topics trying to find a solution, but with no success.
What I'm trying to do is build the same tool found in CorelDraw, named "Pen Tool". I did it by connecting cubic Bezier curves, but one feature is still missing: dragging the curve itself (not a control point) in order to edit its shape.
I can successfully determine the t parameter on the curve where the drag begins, but I don't know how to recalculate the control points of that curve.
I want to highlight some things about CorelDraw's Pen Tool behaviour that may be used as constraints: I've noticed that when dragging the curve strictly vertically or horizontally, the control points of that Bezier curve move along their own verticals or horizontals, respectively.
So, how can I recalculate the positions of the control points while dragging the curve?
I've just looked into the Inkscape sources and found this code; maybe it helps you:
// Magic Bezier Drag Equations follow!
// "weight" describes how the influence of the drag should be distributed
// among the handles; 0 = front handle only, 1 = back handle only.
double weight, t = _t;
if (t <= 1.0 / 6.0) weight = 0;
else if (t <= 0.5) weight = (pow((6 * t - 1) / 2.0, 3)) / 2;
else if (t <= 5.0 / 6.0) weight = (1 - pow((6 * (1-t) - 1) / 2.0, 3)) / 2 + 0.5;
else weight = 1;
Geom::Point delta = new_pos - position();
Geom::Point offset0 = ((1-weight)/(3*t*(1-t)*(1-t))) * delta;
Geom::Point offset1 = (weight/(3*t*t*(1-t))) * delta;
first->front()->move(first->front()->position() + offset0);
second->back()->move(second->back()->position() + offset1);
In your case, first->front() and second->back() would be the two control points.
A Bezier curve is nothing more than two polynomials: X(t) and Y(t).
The cubic one:
x = ax*t^3 + bx*t^2 + cx*t + dx
y = ay*t^3 + by*t^2 + cy*t + dy
0 <= t <= 1
So if you have a curve, you have the polynomial coefficients. If you move your point and you know its t parameter, then you can simply recalculate the polynomial coefficients: it will be a system of 6 linear equations for the coefficients (for each of the points). The system splits into two subsystems (x and y) and can be solved exactly or using numerical methods, which are not hard either.
So your task now is to calculate the control points of your curve when you know its explicit equation.
This can also be reduced to a linear system. I don't know how to do it for a generalized Bezier curve, but it is not hard for cubic or quadratic curves.
The cubic curve via control points:
B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
All you have to do is expand it into the standard polynomial form (just open the brackets) and equate the coefficients. That provides the final system for the control points!
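For the cubic case, opening the brackets in B(t) and equating coefficients against x = ax*t^3 + bx*t^2 + cx*t + dx gives, per axis: d = P0, c = 3(P1 - P0), b = 3(P2 - 2*P1 + P0), a = P3 - 3*P2 + 3*P1 - P0. Inverting that is a direct formula rather than a full linear solve; a minimal Python sketch:
def coeffs_to_control_points(a, b, c, d):
    # invert the coefficient relations for one axis of a cubic Bezier
    p0 = d
    p1 = d + c / 3.0
    p2 = d + 2.0 * c / 3.0 + b / 3.0
    p3 = d + c + b + a
    return p0, p1, p2, p3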
When you click on the curve, you already know the position of the current control point, so you can calculate the X and Y offsets from that point to the mouse position. On mouse move, you can then recalculate the new control point using those X/Y offsets.
Sorry for my English.