Get orientation from magnetometer data

I have seen a lot of topics with the same title, but the answers differ.
I have a magnetometer in my phone which gives me the components of the magnetic field along the X, Y, and Z axes.
Which of the following angles can be determined from the magnetometer data?
Roll, pitch, yaw? And how?
Thank you,
Robert

With a magnetometer alone, convention would say you could calculate pitch and yaw, assuming the z axis of the magnetometer points north. If the device were pointed perpendicular to the north vector, then it would be able to calculate roll. As I understand it.
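To make the common case concrete, here is a minimal sketch (not from the original answer) of getting a yaw/heading angle from the horizontal magnetometer components. It assumes the phone is lying flat; the axis and sign convention varies between devices and is an assumption here, and any tilt would need to be compensated with accelerometer data first.

```python
import math

def heading_deg(mx, my):
    # Yaw/heading from the horizontal magnetic field components, valid only
    # when the device is flat. Axis/sign convention is an assumption and may
    # need swapping on a given device.
    return math.degrees(math.atan2(mx, my)) % 360.0
```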

Related

Find the north using Quaternions, Gyro and Compass

This question is about an algorithm, not a specific language.
I have a mobile device with the following axes:
When the device is pointed perpendicular to the ground (positive Z facing up), I get an accurate compass reading in degrees, the angle being measured between north and the device's positive Y.
So if I hold the device and point it west, I will get either the value 270 or the value -90 (not quite sure how to figure out which one I'll get).
When I hold the device with Y pointing towards the sky, the compass is no longer accurate, BUT I can use the device's accelerometer to figure out when that is the case AND I can use the gyro to figure out what the rotation of the device is.
So, whenever I get an accurate reading of the compass, I save a parameter called "lastAccurateAttitude", which is the quaternion from the gyro.
I also save "lastAccurateHeading" as the angle from north.
What I am attempting to do is use lastAccurateHeading, lastAccurateAttitude and the current attitude to calculate the angle between north and the negative Z axis of the device.
Right now I am stuck on the math, so I would appreciate some help.
If possible, I would love for this to also work when the device's X, -X, Y or -Y is pointing upwards. (North would still be calculated relative to -Z.)
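One way to set up the math, sketched here under explicit assumptions (this is not from the thread): quaternions are in [w, x, y, z] order and rotate device coordinates into the gyro's world frame, the heading at the accurate moment is measured clockwise from north to the device's +Y while +Z faces up, and an "up" direction expressed in the world frame is available (e.g. gravity from the accelerometer rotated by the attitude). The idea is to pin down where north lies in the gyro's world frame at the accurate moment, then at any later time project the current device -Z and that stored north onto the horizontal plane and take the signed angle between them.

```python
import numpy as np

def quat_mul(a, b):
    aw, ax, ay, az = a; bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def rotate(q, v):
    # Rotate vector v from the device frame into the gyro world frame
    # (assumes q is a unit quaternion mapping device -> world).
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def north_in_world(last_accurate_attitude, last_accurate_heading_deg):
    # At the accurate moment +Z faced up and the compass gave the angle from
    # north to the device's +Y, so build north in that device frame and lift
    # it into the world frame. The clockwise sign convention is an assumption.
    h = np.radians(last_accurate_heading_deg)
    north_device = np.array([np.sin(h), np.cos(h), 0.0])
    return rotate(last_accurate_attitude, north_device)

def heading_of_neg_z(current_attitude, north_world, up_world):
    # Angle between north and the device's -Z, measured in the horizontal
    # plane (degenerate if -Z points straight up or down).
    neg_z_world = rotate(current_attitude, np.array([0.0, 0.0, -1.0]))
    horiz = lambda v: v - np.dot(v, up_world) * up_world
    a, b = horiz(neg_z_world), horiz(north_world)
    ang = np.degrees(np.arctan2(np.dot(np.cross(b, a), up_world), np.dot(a, b)))
    return ang % 360.0
```

lastAccurateAttitude and lastAccurateHeading feed north_in_world once per accurate reading; heading_of_neg_z can then be called with every new attitude sample from the gyro.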

Triangulate camera position and orientation with respect to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, now I thought it would be nice to be able to quickly extract all these parameters, so when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions that made the math easier: the cameras are in the same YZ plane, so there is only a distance in X between them, and their tilt is only in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a bit of work to define accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances, which are easy enough to measure), that's no problem, though my calculations didn't give me any simple equations with that possibility.
I also read about homography in OpenCV, but it seems to apply to 2D space only, or does it?
Any help is appreciated!
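For reference, a minimal sketch of one standard approach (not from this thread): OpenCV's solvePnP recovers a camera's rotation and translation relative to known 3D points, so the four corners of the square can serve as the object points. The camera matrix (focal length, i.e. view angle) would normally come from cv2.calibrateCamera with a checkerboard; the side length, pixel coordinates and intrinsics below are placeholder values.

```python
import cv2
import numpy as np

# Known 3D corners of the square test pattern (metres); the pattern defines the world origin.
side = 0.20  # assumed side length of the square
object_points = np.array([[0, 0, 0], [side, 0, 0],
                          [side, side, 0], [0, side, 0]], dtype=np.float32)

# Pixel coordinates of the same four corners in one camera image (placeholder values).
image_points = np.array([[322, 241], [498, 238],
                         [501, 412], [319, 415]], dtype=np.float32)

# Intrinsics: fx/fy encode the view angle; ideally obtained once via cv2.calibrateCamera.
fx = fy = 800.0
cx, cy = 320.0, 240.0
camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)        # rotation of the pattern in camera coordinates
camera_position = -R.T @ tvec     # camera position expressed in the pattern's frame
print("camera position:", camera_position.ravel())
```

Running this once per camera gives each camera's position and rotation relative to the same square, and the difference between the two positions gives the stereo baseline that the original triangulation needed.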

Projected travelled distance from gyroscope / accelerometer measurements

I want to build an app that calculates the projected distance traveled by a kicked ball (the ball is kicked into a training net), based on the initial readings from the accelerometer/gyroscope and some assumptions about weight, air resistance, etc. The ball contains a triple-axis accelerometer and gyro. The measurements can either be raw accel/gyro x/y/z values or one of the following:
- Actual quaternion components in a [w, x, y, z] format
- Euler angles (in degrees) calculated from the quaternions
- Yaw/pitch/roll angles (in degrees) calculated from the quaternions
- Acceleration components with gravity removed. This acceleration reference frame is not compensated for orientation, so +X is always +X according to the sensor, just without the effects of gravity.
- Acceleration components with gravity removed and adjusted for the world frame of reference (yaw is relative to initial orientation)
I actually don't know what any of the above means.
Could someone suggest a formula to use in this scenario with this kind of data available? I went through this question, but it doesn't exactly have what I'm looking for.
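Not part of the original question, but as a rough illustration of the kind of formula involved: if you use the last option (world-frame, gravity-free acceleration), integrate it over the brief contact of the kick to get a launch velocity, and ignore air resistance and spin, the flat-ground range follows from basic projectile kinematics. The function names, sample rate and the choice of Z as "up" below are assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def launch_velocity(world_accel_samples, dt):
    # Integrate world-frame, gravity-free acceleration (m/s^2) over the short
    # contact window of the kick to estimate the launch velocity vector (m/s).
    return np.asarray(world_accel_samples).sum(axis=0) * dt

def projected_range(v_launch, up=np.array([0.0, 0.0, 1.0])):
    # Ideal no-drag projectile range on flat ground:
    # time of flight t = 2 * v_up / g, range = v_horizontal * t.
    v_up = float(np.dot(v_launch, up))
    v_horiz = np.linalg.norm(v_launch - v_up * up)
    return v_horiz * 2.0 * max(v_up, 0.0) / G

# Example: 50 samples at 1 kHz averaging ~300 m/s^2, split between forward and up.
samples = np.tile([210.0, 0.0, 210.0], (50, 1))
v0 = launch_velocity(samples, dt=0.001)   # -> [10.5, 0, 10.5] m/s
print(projected_range(v0))                # ~22.5 m; drag would reduce this
```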

Gyroscope sensor: what is the axis around which the device is rotating?

I am coding an app similar to "iHandy Level Free" on Google Play.
I am using the gyroscope sensor, but I don't know which axis the device is rotating around, because when I rotate or tilt the device, all three values x, y, z change.
Thanks.
Android follows the ENU (East, North, Up) convention, so you will get a larger value for the axis around which the device is being rotated. Have a look at this application note: http://www.st.com/st-web-ui/static/active/jp/resource/technical/document/application_note/DM00063297.pdf
It is not possible to get a zero value around any axis no matter how gently you move the device. You are bound to get some angular rate around the axis which you are assuming to be stationary.
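As a small illustration of the point above (not from the original answer), the practical way to decide which axis the rotation is "about" is to compare the magnitudes of the three angular rates and ignore anything below a noise threshold; the threshold value here is an assumption.

```python
def dominant_rotation_axis(gx, gy, gz, threshold=0.1):
    # gx, gy, gz: angular rates in rad/s from the gyroscope (device axes).
    # Returns the axis with the largest rate, or None if everything is just noise.
    rates = {"x": gx, "y": gy, "z": gz}
    axis = max(rates, key=lambda k: abs(rates[k]))
    return axis if abs(rates[axis]) > threshold else None

print(dominant_rotation_axis(0.02, 1.4, 0.07))  # -> "y"
```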

How to convert depth to a Z coordinate

I'm making a project in which I need to draw the user's feet in a rectangle of 640x480,
and I'm mapping the skeleton joint coordinates into depth coordinates to fit them in the box,
but the DepthImagePoint gives an x-y coordinate and a depth (in mm), whereas I want an x-z coordinate.
How can I get the z coordinate to fit in the 640x480 resolution?
Or can I somehow convert the skeleton joint coordinates to the proper resolution to fit the box?
I'm using the Microsoft Kinect SDK with C#.
Thanks in advance.
There are several functions in the CoordinateMapper that will map points between the different data streams, MapSkeletonPointToDepthPoint being the first to come to mind. Others in the mapper may also help you find the proper depth mapping you are looking for.
http://msdn.microsoft.com/en-us/library/jj883690.aspx
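If the goal is specifically a top-down (x-z) view, one simple option alongside the CoordinateMapper answer above is to scale the millimetre depth linearly into the 480-pixel dimension yourself; the near/far limits below are assumed values roughly matching the Kinect's usable range.

```python
def depth_mm_to_z_pixel(depth_mm, near_mm=800, far_mm=4000, height_px=480):
    # Clamp the depth to an assumed usable range, then map it linearly onto
    # the 0..height_px-1 pixel span so it can be drawn in the 640x480 box.
    depth_mm = min(max(depth_mm, near_mm), far_mm)
    return int(round((depth_mm - near_mm) / (far_mm - near_mm) * (height_px - 1)))

print(depth_mm_to_z_pixel(2400))  # -> 240, roughly the middle of the box
```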
