Projected travelled distance from gyroscope / accelerometer measurements - position

I want to build an app that calculates the projected distance traveled by a kicked ball (the ball is kicked into a training net) based on the initial readings from the accelerometer/gyroscope and some assumptions about weight, air resistance, etc. The ball contains a triple-axis accelerometer and gyro. The measurements can either be raw accel/gyro x/y/z values or one of the following:
Actual quaternion components in a [w, x, y, z] format
Euler angles (in degrees) calculated from the quaternions
yaw/pitch/roll angles (in degrees) calculated from the quaternions
acceleration components with gravity removed. This acceleration reference frame is not compensated for orientation, so +X is always +X according to the sensor, just without the effects of gravity.
acceleration components with gravity removed and adjusted for the world frame of reference (yaw is relative to initial orientation)
I actually don't know what any of the above means.
Could someone suggest a formula to use in this scenario with this kind of data available? I went through this question, but it doesn't exactly have what I'm looking for.
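For what it's worth, here is a very rough sketch of the kind of calculation involved, assuming you use the last option above (world-frame acceleration with gravity removed), a known sample rate, a Z-up world axis, and no air drag or spin; the function name is made up for illustration:

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def estimate_range(world_accel, dt):
    # world_accel: Nx3 array of world-frame, gravity-removed acceleration samples
    # (m/s^2) covering just the kick impulse, one sample every dt seconds.
    v0 = np.sum(world_accel, axis=0) * dt   # integrate a(t) to get the launch velocity
    v_horiz = np.hypot(v0[0], v0[1])        # horizontal launch speed
    v_up = max(v0[2], 0.0)                  # vertical launch speed (Z assumed up)
    t_flight = 2.0 * v_up / G               # time aloft, landing at launch height
    return v_horiz * t_flight               # drag-free projected distance

Real balls lose a lot of range to drag and spin, so treat this as an upper-bound estimate rather than a prediction.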


Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly to do with the limited range of brightness values available in 8-bit color images, but more likely it is that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable but flat-looking images, even when surface normals are included in the calculation.
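As a minimal sketch of that loop (sphere objects only; the names and structure are mine, not from any particular engine):

import numpy as np

def ray_sphere(origin, direction, centre, radius):
    # Nearest positive hit distance along a normalised ray, or None.
    # The second root is ignored, which is fine when the camera is outside the sphere.
    oc = origin - centre
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

def render_pixel(eye, pixel_dir, spheres, light_pos, light_color):
    # The steps listed above, given the per-pixel ray direction from the first step.
    best = None
    for centre, radius, colour in spheres:
        t = ray_sphere(eye, pixel_dir, centre, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, colour)
    if best is None:
        return np.zeros(3)                       # no collision: background colour
    t, obj_colour = best
    hit = eye + t * pixel_dir                    # collision point
    to_light = light_pos - hit                   # vector from the hit point to the light
    to_light = to_light / np.linalg.norm(to_light)   # (used for normal/shadow tests in fuller models)
    return obj_colour * np.asarray(light_color)  # light colour multiplied by object colour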
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form a boundary for the room, and I have placed them at a distance of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
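A small sketch of what that means in code, using a Lambertian surface and a point light (the structure and names are illustrative, not from any particular renderer):

import numpy as np

def shade(hit_point, normal, albedo, light_pos, light_power):
    # Inverse-square falloff uses only the light-to-surface distance; the
    # surface-to-camera distance never appears, so pulling the camera back
    # shrinks objects on screen without dimming them.
    to_light = light_pos - hit_point
    r2 = np.dot(to_light, to_light)               # squared distance to the light
    wi = to_light / np.sqrt(r2)                   # unit direction toward the light
    cos_term = max(np.dot(normal, wi), 0.0)       # Lambertian cosine term
    return albedo * light_power * cos_term / r2   # linear radiance value

Any mapping to 8-bit pixel values (a gamma encoding or tone-mapping curve) is then typically applied only at the very end, when the accumulated linear values are written to the image.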

Find the closest point on a curve to a given point

The curve is in fact the trajectory of a bus; it is represented by many (up to a few thousand) discrete points on the curve (the points were recorded by a GPS device installed on the bus).
Given a point P, I need to find the closest point on the curve to P. The point P is usually no more than 30m away from the trajectory of the bus. Note that the closest point isn't necessarily a point recorded by the GPS device; it could be a point somewhere between two recorded points.
First I need an algorithm to recover the trajectory from those recorded points. It would be great if the interpolated curve could show sharp turns made by the bus. Which kind of curve is best for such a task? Is a Bezier curve good enough? And finally I have to calculate the closest point on the curve; of course, that algorithm depends entirely on the kind of curve chosen.
I'm doing some research, and don't have much knowledge in curve interpolation, so any suggestions are welcome.
For computing the trajectory from recorded points, I recommend using the centripetal or chord-length Catmull-Rom splines. See link for more details. Catmull-Rom splines are in fact special cubic Hermite curves, which can easily be converted into cubic Bezier curves. Please note that the result from a Catmull-Rom spline is in general only a G1 curve. If you want the trajectory to have higher continuity (such as C2), you can go with natural cubic splines or general B-spline interpolation. Whatever approach you take, it is advised to keep the spline's degree no higher than 5. Degree 3 is a popular choice.
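For the uniform Catmull-Rom case, that conversion just places the two inner Bezier control points along the tangents; a small sketch (the centripetal and chord-length variants scale the tangents differently, so treat this as the simplest case):

import numpy as np

def catmull_rom_to_bezier(p0, p1, p2, p3):
    # Cubic Bezier control points for the uniform Catmull-Rom segment p1 -> p2,
    # where p0 and p3 are the neighbouring GPS samples that define the tangents.
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return (p1,
            p1 + (p2 - p0) / 6.0,
            p2 - (p3 - p1) / 6.0,
            p2)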
Once you have a mathematical representation of the trajectory, you can compute the minimum distance between a given point P and the trajectory. In general, the squared distance between point P and a curve C(t) is D(t) = |P-C(t)|^2. The minimum of D will occur where its first derivative is zero, which means we have to find the root of the following equation:
dD/dt = -2*(P-C(t)).C'(t) = 0
When C(t) is of degree 3, dD/dt will be of degree 5. This is the reason why it is recommended to use a low degree curve earlier.
There is a great deal of literature and online material on how to find the roots of a polynomial (of any degree) efficiently and robustly. Here is another SO post that might be useful.
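If you settle on cubic Bezier segments, one practical approach is to sample each segment coarsely to get a starting parameter and then polish it with Newton's method on the dot-product equation above; a sketch (the function names are mine):

import numpy as np

def bezier(c, t):
    # Bernstein-form evaluation of a cubic Bezier with control points c[0..3]
    u = 1.0 - t
    return u**3*c[0] + 3*u**2*t*c[1] + 3*u*t**2*c[2] + t**3*c[3]

def bezier_d1(c, t):
    u = 1.0 - t
    return 3*u**2*(c[1]-c[0]) + 6*u*t*(c[2]-c[1]) + 3*t**2*(c[3]-c[2])

def bezier_d2(c, t):
    u = 1.0 - t
    return 6*u*(c[2] - 2*c[1] + c[0]) + 6*t*(c[3] - 2*c[2] + c[1])

def closest_point_on_segment(c, P, coarse=16, iters=5):
    # Coarse sampling picks a starting t, then Newton iterations refine the root of
    # f(t) = (C(t) - P) . C'(t), clamped to the segment's [0, 1] parameter range.
    ts = np.linspace(0.0, 1.0, coarse)
    t = ts[np.argmin([np.sum((bezier(c, s) - P)**2) for s in ts])]
    for _ in range(iters):
        d = bezier(c, t) - P
        c1 = bezier_d1(c, t)
        f = np.dot(d, c1)
        fp = np.dot(c1, c1) + np.dot(d, bezier_d2(c, t))
        if abs(fp) < 1e-12:
            break
        t = min(1.0, max(0.0, t - f / fp))
    return t, bezier(c, t)

Run this per segment and keep the overall nearest hit; since P is within roughly 30m of the trajectory, a simple spatial index over the segments can prune most of them first.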

Raytracing the 'sunshape'

This is based on the question I asked here, but I think I might have asked the question in the wrong way. This is my problem:
I am writing a scientific ray tracer. I.e. not for graphics although the concepts are identical.
I am firing rays from a horizontal plane toward a parabolic dish with a focus distance of 100m (and perfect specular reflection). I have a Target at the focal point of the dish. The rays are not fired perpendicularly from the plane but are perturbed by a certain angle to emulate the fact that the sun is not a point source but a disc in the sky.
However, the flux coming from the sun is not radially constant across the sun disc. It's hotter in the middle than at the edges. If you have ever looked at the sun on a hazy day you'll see a ring around the sun.
Because of the parabolic dish, the reflected image on the Target should be the image of the sun, i.e. it should be brighter (hotter, more flux) in the middle than at the edges. This is given by a graph of intensity vs. radial distance from the center.
There are two ways I can simulate this.
First, uniform sampling: each ray is shot out from the plane with an equal (uniform) probability of taking any angle between zero and the angular size of the sun disc. I then scale the flux carried by the ray according to the corresponding flux value at that angle.
Second, arbitrary sampling: each ray is shot out from the plane according to the distribution of intensity vs. radial distance, so there will be fewer rays toward the outer edges than near the centre. This seems far more efficient to me, but I cannot get it to work. Any suggestions?
This is what I have done:
Uniform Sampling
phi = 2*pi*X_1
alpha = arccos (1-(1-cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)
Where X_1 and X_2 are uniform random numbers and theta is the subtended angle of the solar disk.
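As code, the uniform scheme above looks something like this (numpy, with the unperturbed ray direction along -z as in the formulas; the function name is mine):

import numpy as np

def sample_sun_direction_uniform(theta, rng):
    # Sample a direction within a cone of angular radius theta about -z,
    # uniform in solid angle, using the two uniform random numbers X_1 and X_2.
    x1, x2 = rng.random(2)
    phi = 2.0 * np.pi * x1
    alpha = np.arccos(1.0 - (1.0 - np.cos(theta)) * x2)
    return np.array([np.sin(alpha) * np.cos(phi),
                     np.sin(alpha) * np.sin(phi),
                     -np.cos(alpha)])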
Arbitrary Sampling
alpha = arccos (1-(1-cos(theta))*B_1)
Where B_1 is a random number generated from an arbitrary distribution using the algorithm on pg 27 here.
I am desperate to sort this out.
Your function drops to zero, and since the sun is not a smooth-surfaced object, that is probably wrong. Chances are there are photons being emitted from all parts of the sun in all directions.
But: what is your actual QUESTION?
You are looking for Monte-Carlo integration.
The key idea is: although you will sample fewer rays toward the outer part of the disc, you will weight those rays more, so they contribute to the sum with a higher importance.
With uniform sampling you just sum your intensity values; with non-uniform sampling you divide each intensity by the value of the probability density of the rays that are shot (e.g., for a uniform distribution this value is a constant and doesn't change anything).
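A compact way to see this, as a sketch (sample_alpha, pdf and intensity are placeholders for whatever sampler, probability density and limb profile you actually use):

import numpy as np

def estimate_flux(sample_alpha, pdf, intensity, n, rng):
    # Monte Carlo estimate of the total flux over the sun disc.
    # Each sample is weighted by intensity/pdf, so a uniform sampler and an
    # importance sampler converge to the same answer; the importance sampler
    # simply has lower variance for the same number of rays.
    total = 0.0
    for _ in range(n):
        a = sample_alpha(rng)            # draw an angle from the chosen distribution
        total += intensity(a) / pdf(a)   # weight by the density it was drawn from
    return total / n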

Great Circle distance plus altitude change

How exactly do you compute the distance traveled between two points at different altitudes on a spherical body? If the two points are at the same altitude it's a simple great-circle calculation, but what is the additional term to account precisely for a steady climb or descent? Say we're talking about a spaceplane that steadily climbs to a great height over a great distance after taking off.
Illustration:
http://i.imgur.com/nAp1S.png
The National Geodetic Survey (NGS) (a division of NOAA) has some information on this, and even sample Fortran code and working programs on their website and for a PC.
See:
http://www.ngs.noaa.gov/TOOLS/Inv_Fwd/Inv_Fwd.html
The program that you want is INVERS3D:
http://www.ngs.noaa.gov/PC_PROD/Inv_Fwd/
You will need to look through their code for specifics, but they calculate "ellipsoidal distance, the mark-to-mark distance, and the ellipsoid height difference" using lat/long/altitude.
From their website:
INVERS3D
Program INVERS3D is the three dimensional version of program INVERSE,
and is the tool for computing not just the geodetic azimuth and
ellipsoidal distance, but also the mark-to-mark distance, the
ellipsoid height difference, the DX, DY, DZ (differential X, Y, Z used
to express GPS vectors), and the DN, DE, DU (differential North, East,
Up using the FROM station as the origin of the NEU-coordinate system).
The program requires geodetic coordinates as input, expressed as
either: 1) latitude and longitude in degrees, minutes, and seconds or
decimal degrees along with the ellipsoid heights for both stations, or
2) rectangular coordinates (X, Y, Z in the Conventional Terrestrial
Reference System) for each station. The program works exclusively on
the GRS80 ellipsoid and the units are meters. Both types of
coordinates may be used in the same computation. The program reads
input geodetic positions with the default hemispheres for latitude and
longitude set at North and West.
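If only a rough figure is needed (spherical Earth, ground track following the great circle, altitude changing linearly with the central angle), the path length can also be approximated by numerical integration; a sketch under exactly those assumptions:

import numpy as np

R_EARTH = 6371000.0  # mean Earth radius, metres

def climb_path_length(central_angle, h0, h1, steps=10000):
    # The path stays in the plane of the great circle, so with r(theta) = R + h(theta)
    # the arc length is the integral of sqrt(r^2 + (dr/dtheta)^2) dtheta.
    theta = np.linspace(0.0, central_angle, steps)
    h = h0 + (h1 - h0) * theta / central_angle
    dh_dtheta = (h1 - h0) / central_angle
    integrand = np.sqrt((R_EARTH + h)**2 + dh_dtheta**2)
    return np.trapz(integrand, theta)

This gives the along-path distance the spaceplane actually travels, which is not the same thing as the straight-line mark-to-mark distance INVERS3D reports.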

Defining Up in the Direct3D View Matrix when Camera Is Constantly Moving

In my Direct3D application, the camera can be moved using the mouse or arrow keys. But if I hard code (0,1,0) as the up direction vector in LookAtLH, the frame goes blank at some orientations of the camera.
I just learned the hard way that when looking along the Y-axis, (0,1,0) no longer works as the Up direction (seems obvious?). I am thinking of switching my up direction to something else for each of these special cases. Is there a more graceful way to handle this?
Assuming you can calculate a vector pointing forward (what you are looking at minus your position) and a vector pointing right (always on the XZ-plane unless you can roll): normalize both of these vectors, then up is forward x right (where x is the cross product).
In general, you can plug in your yaw, pitch and roll into a rotation matrix and rotate the axis vectors to get right, up and forward, but I guess that's what you are using LookAtLH to avoid.
See http://en.wikipedia.org/wiki/Rotation_matrix#The_3-dimensional_rotation_matricies
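A small sketch of that recipe (numpy, assuming the Y-up, left-handed convention that LookAtLH uses; note it still degenerates when forward is exactly vertical, which is the case the question ran into):

import numpy as np

def camera_axes(eye, target):
    # forward = what you are looking at minus your position
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    # right stays on the XZ-plane (no roll): cross the world up with forward
    right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
    right = right / np.linalg.norm(right)
    # up = forward x right, as described above
    up = np.cross(forward, right)
    return right, up, forward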
The graceful way to handle this is to use unit quaternions. A quaternion is a vector of 4 values that encodes an orientation in 3D space (not a rotation, as some articles assert), and a unit quaternion is one where the vector length sqrt(x^2+y^2+z^2+w^2) is 1.0. There is a set of mathematical operations for working with quaternions that is analogous to using matrices to encode rotations, with the added bonus that a quaternion can never represent a degenerate orientation. You can freely convert quaternions to a 3x3 or 4x4 matrix when you need to feed the result to a GPU.
Your problem is that, while you are moving your camera, you will introduce a little twist into the camera's up direction. By forcing the camera to re-center itself on the (0,1,0) vector every iteration, you are in effect rotating the camera and then clamping the camera's orientation to remain on the surface of a sphere; but when your camera hits the pole of this sphere there is no good direction to call "up", and your matrix goes singular and gives you zero-sized polygons (hence the black screen). Quaternions can interpolate through these poles and come out the other side just fine, leaving you with a valid matrix at all times. All you have to do is control the "twist".
To measure this twist you should read Ken Shoemake's article "Fiber Bundle Twist Reduction" in the book Graphics Gems 4. He shows a good way to measure this accumulated twist and how to remove it when it is offensive.
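Without reproducing Shoemake's twist-reduction itself, here is a minimal sketch of the quaternion bookkeeping that replaces the hard-coded (0,1,0) up vector (quaternions in [w, x, y, z] order; the update function and axis choices are illustrative):

import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions in [w, x, y, z] order
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def quat_from_axis_angle(axis, angle):
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_to_matrix(q):
    # 3x3 rotation matrix from a unit quaternion; feed this to the GPU as usual
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def update_orientation(q, yaw_delta, pitch_delta):
    # Rotate about the camera's own Y axis, then its X axis, and renormalize so
    # accumulated floating-point error never produces a degenerate (singular) matrix.
    q = quat_mul(q, quat_from_axis_angle(np.array([0.0, 1.0, 0.0]), yaw_delta))
    q = quat_mul(q, quat_from_axis_angle(np.array([1.0, 0.0, 0.0]), pitch_delta))
    return q / np.linalg.norm(q)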
