How to estimate the velocity of a point between two frames given 3D coordinates, and the angle between two segments? - geometry

I am working with gait data that has 3D position coordinates for 15 joints per frame. I am trying to estimate the velocity of the ankle joint, but I don't know how to compute it across frames. The frame rate is 30 fps. I also want to find the angle between two segments, say the thigh and shank of the same leg.
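A minimal sketch of one common approach, assuming the recording can be loaded as a NumPy array of shape (n_frames, 15, 3): velocity is a finite difference between consecutive frames divided by the frame interval (1/30 s), and the segment angle comes from the dot product of the thigh (hip→knee) and shank (knee→ankle) vectors. The joint indices and units below are placeholders for whatever the data set actually uses.

```python
import numpy as np

FPS = 30.0
DT = 1.0 / FPS  # time between consecutive frames, in seconds

def joint_velocity(positions):
    """Finite-difference velocity for one joint.

    positions: (n_frames, 3) array of 3D coordinates (e.g. the ankle).
    Returns an (n_frames - 1, 3) array of velocity vectors in
    coordinate-units per second; take the norm for speed."""
    return np.diff(positions, axis=0) / DT

def segment_angle(hip, knee, ankle):
    """Angle in degrees between the thigh (hip->knee) and shank (knee->ankle) vectors."""
    thigh = knee - hip
    shank = ankle - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical usage, with made-up joint indices HIP, KNEE, ANKLE:
#   v = joint_velocity(frames[:, ANKLE])       # (n_frames - 1, 3)
#   speed = np.linalg.norm(v, axis=1)          # scalar speed per frame pair
#   theta = segment_angle(frames[t, HIP], frames[t, KNEE], frames[t, ANKLE])
```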

Related

How to calculate aspect ratio and tilt based on two groups of point clouds

I am collecting two 2D point sets in two different postures. How do I calculate the translation, rotation, aspect ratio, and tilt between them?

Compute points at a given geodesic distance on a mesh

Repeating this question for better visibility. I have a triangular mesh (assume a manifold mesh). I want to sample the corners of a square on the mesh in a way that is independent of the triangulation.
I am following these steps:
Sample a triangle (based on the areas of the triangles)
Sample a point uniformly on the triangle/face
Sample a pair of random perpendicular directions
I want to locate the three other corners of the square given an edge length. Since the corners can be on any other face, the output should be of the format (Face, barycentric coordinates on that face).
I am looking at libraries such as Polyscope or pygeodesic that use the heat method to compute the geodesic distance between two vertices of the mesh, but I am not sure how to get points at an arbitrary geodesic distance from another point.
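This doesn't answer the geodesic-tracing part, but here is a sketch of the first two sampling steps, assuming the mesh is available as NumPy arrays V (n×3 vertex positions) and F (m×3 vertex indices); walking out from the sampled point by a given geodesic distance would still have to come from one of the geodesic libraries mentioned.

```python
import numpy as np

def sample_point_on_mesh(V, F, rng=None):
    """Pick a face with probability proportional to its area, then a point
    uniformly inside that face.

    V: (n, 3) vertex positions, F: (m, 3) vertex indices.
    Returns (face_index, barycentric_coords)."""
    rng = rng or np.random.default_rng()

    # Step 1: area-weighted face sampling.
    a, b, c = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    face = rng.choice(len(F), p=areas / areas.sum())

    # Step 2: uniform barycentric sample (fold the unit square onto the triangle).
    u, v = rng.random(2)
    if u + v > 1.0:
        u, v = 1.0 - u, 1.0 - v
    return face, np.array([1.0 - u - v, u, v])
```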

How do I calculate a lon/lat from an existing coordinate set and offsets?

I have a problem I need some help with. I have 2 sensors (a weather sensor and a camera) mounted under a large balloon (basically). The weather sensor records pitch, roll, yaw, altitude, heading (0-360), lon and lat. I also have x, y, z values that represent the offset distance of the weather sensor from the camera. The camera does not have its own INS, so the values from the weather sensor are sent to the camera. However, since those values come from the weather sensor, which is not in the same position as the camera, they are not accurate. I need to perform calculations on the values before sending them to the camera, and that is where I need help.
For the record:
Both devices are facing the front of the craft
The X is the Front to Back axis
The Y is the Top to Bottom axis
The Z is the Left to Right axis
[coordinate planes diagram]
I know the formula to get coordinates for a target point given a starting point, bearing, and distance. I can get the distance by using the Pythagorean Theorem (using the measured X and Z values). Those are 10.58055" and 17.53322" respectively. We already have the starting point (it comes from the weather sensor).
First, am I on the right track here?
Second, how do I appropriately calculate bearing? I can use trig to get the angle that the weather sensor is offset from the camera, which I think is required. I also think I need to account for the orientation of the sensor relative to the camera (i.e. if the weather sensor is in front of the camera, it needs to "turn around" to get to the camera). This would mean that if the X value is negative and the Z value is positive, I would subtract my angle (let's call it theta) from 180. However, that would only work if I were heading north, so I believe you then need to add in the heading (which comes from the weather sensor).
I think I am close on this. I need some smarter people letting me know if I am approaching this correctly and then possibly little things like the appropriate way to handle the bearing measurement going above 360 (which I believe is to just subtract 360).
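You seem to be on the right track. Below is a hedged sketch of the whole chain, assuming +X means forward, +Z means right, that pitch and roll can be ignored for a horizontal offset of a few inches, and a spherical Earth for the destination-point formula; the modulo takes care of bearings going past 360. The sign conventions and the inches-to-metres conversion are assumptions to double-check against the actual rig.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; plenty accurate for inch-scale offsets

def offset_to_lonlat(lat_deg, lon_deg, heading_deg, dx_m, dz_m):
    """Shift a lat/lon by a body-frame offset.

    lat_deg, lon_deg: position reported by the weather sensor
    heading_deg:      craft heading, 0-360, 0 = north
    dx_m:             offset along the front/back (X) axis, + = forward (assumed)
    dz_m:             offset along the left/right (Z) axis, + = right (assumed)
    Returns (lat, lon) of the offset point (e.g. the camera)."""
    distance = math.hypot(dx_m, dz_m)   # Pythagorean theorem, as in the question

    # Bearing of the offset in the craft frame, rotated by the heading
    # and wrapped into 0-360.
    bearing = (math.degrees(math.atan2(dz_m, dx_m)) + heading_deg) % 360.0

    # Standard "destination point given start, bearing and distance" on a sphere.
    lat1, lon1, brg = map(math.radians, (lat_deg, lon_deg, bearing))
    d = distance / EARTH_RADIUS_M
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# With the measured offsets from the question (inches converted to metres):
#   cam_lat, cam_lon = offset_to_lonlat(lat, lon, heading,
#                                       dx_m=10.58055 * 0.0254,
#                                       dz_m=17.53322 * 0.0254)
```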

Build a geographical map from triangle points based on distance

I have 5 {x,y} points randomly placed on a grid.
Each point does not know the {x,y} coordinates of the other points.
Each point does know the distance of each of the other points from its own {x,y} position.
Each point exchanges this distance information with every other point,
so every point knows every distance between every pair of points.
Using this distance information every point can calculate (by finding the angles) triangles for every other point using itself as a reference point
For example, point 1 can calculate the following triangles:
1-2-3, 1-2-4, 1-2-5, 1-3-4, 1-3-5, 1-4-5,
and using the distance data received from the other points it can also calculate
2-3-4, 2-3-5, 2-4-5, 3-4-5
I would like to build a map of the location of every other point relative to a single point
How should I go about doing this? I am assuming it would be some kind of triangulation algorithm, but these mainly seem to compute the location of a point from three other known points, not the other way around, where the other points' {x,y} coordinates are discovered from only the distance information.
I have tried plotting the two possible triangles for every 3 points and then rotating them about a fixed known point to try and align them, but I think this avenue will end up with too many possibilities and errors.
Ultimately I would like every point to end up with {x,y} coordinates of every other point relative to itself
You know the distance from each point to every other point, dij. Thus, point 2 lies on a circle centered at point 1 with radius d12. Point 3 lies on a circle centered at point 1 with R = d13, and it also lies on another circle centered at point 2 with R = d23.
See this picture:
I've set point 2 on the X-axis for simplicity.
As you can see, point 3 is at the intersection of the two circles centered at P1 and P2. There is a second intersection, P3a. Let's choose the one that is upwards and continue.
For P4 we can use three circles, centered at P1, P2 and P3. Again we get two solutions.
The same process can be done with the rest of the points. For Pn you have n-1 circles.
I'm sure you can find the maths for circle-circle intersection.
Some remarks must be observed:
1) The construction is simpler if you first sort the points by distance to P1.
2) Not all distances generate a solution. For example, increase d13 and there's no intersection between the two circles for P3. Or increase d14 and now the three circles don't intersect in just the two expected points 4 and 4a.
3) This can be worked around by considering the average of the intersections and the distance from each solution to that average. You can set a tolerance on these distances and tell whether the average is a solution or whether some dij is wrong. Since two solutions are possible, you must consider two averages.
4) The two possible triangulations are symmetric, about the X-axis in the case I've drawn.
The real solution is obtained by a rotation around P1. To calculate the angle of rotation you need the {x,y} coordinates of another point.
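A sketch of this construction in Python (0-indexed, so P1 is pts[0]): a plain circle-circle intersection plus the "pick the candidate consistent with the distance to P3" disambiguation. The tolerance/averaging from remark 3 is left out, and the distance matrix d is assumed to be exact and symmetric.

```python
import math

def circle_circle_intersection(c1, r1, c2, r2):
    """Intersection points of two circles, or [] if they don't meet."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # distance from c1 along the c1->c2 line
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # half-distance between the two intersections
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]

def build_map(d):
    """d[i][j] = distance between points i and j.  Puts P1 at the origin,
    P2 on the +X axis, picks the 'upward' P3, and resolves every later
    point with the distance to P3 as the tie-breaker."""
    pts = [(0.0, 0.0), (d[0][1], 0.0)]
    for k in range(2, len(d)):
        cands = circle_circle_intersection(pts[0], d[0][k], pts[1], d[1][k])
        if not cands:
            raise ValueError(f"distances to point {k} are inconsistent")
        if k == 2:
            pts.append(max(cands, key=lambda p: p[1]))   # the upward intersection
        else:
            # keep the candidate whose distance to P3 best matches d[2][k]
            pts.append(min(cands, key=lambda p: abs(math.dist(p, pts[2]) - d[2][k])))
    return pts
```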

Converting angular velocities to orientation Wii Motionplus

I'm working on the Wii Motionplus and I've extracted the raw values using the WiimoteLib library. However, when I normalize them, I get random values that don't tally with what is actually happening.
This is how I'm normalizing:
Calibrate the Motionplus (i.e. find the raw value that corresponds to zero; I do this by holding it stationary for a period of time)
For every subsequent raw value read, I subtract the zero value from it to get the "relative" raw value.
Then, I scale this value using http://wiibrew.org/wiki/Wiimote/Extension_Controllers (checking for yaw_fast, pitch_fast, etc.), where the scale factor comes from the stated measure (a raw value of 8192 corresponds to 595 deg/s)
I sum up all these values over time (discrete integration) to get the angle of the wiimote wrt initial orientation.
However, when I calculate this and plot it out on a graph, a step change in one of the axes is NOT being reflected in the graph. I tried using a digital compass with it to compare, but while the compass reflects the values correctly, the wii values are completely different (even the pattern is not the same)
Can anyone tell me where I'm going wrong with the normalization?
Thanks!
The numbers that are being sent out are rotation rates about the x, y and z axes of the device itself. To relate these to fixed x, y, z coordinates you need a rotation matrix, and since the readings are not about fixed axes but depend on the current orientation, you need an Euler matrix to relate them to a fixed x, y, z coordinate frame.
In other words, you are receiving roll, yaw and pitch velocities, and you need an Euler matrix to relate them to Cartesian coordinates. Once you know your initial roll, pitch and yaw, you can add each subsequent roll, pitch and yaw reading, multiplied by the time interval that reading covers, to the running totals.
ROLL is Rotation about the y-axis
PITCH is Rotation about the x-axis
YAW is Rotation about the z-axis
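The per-axis summing is usually where this breaks down: the rates are about the Wiimote's own axes, which move with the device, so increments about different axes can't simply be accumulated once the device has rotated. Here is a sketch (in Python rather than the C# of WiimoteLib) of integrating body-frame rates into a rotation matrix instead; how the Motionplus yaw/pitch/roll channels map onto wx, wy, wz is an assumption to check against the axis conventions above.

```python
import numpy as np

def small_rotation(wx, wy, wz, dt):
    """Rotation matrix for body-frame angular rates (rad/s) applied over dt seconds.
    First-order approximation R = I + [w]_x * dt, re-orthonormalised afterwards."""
    skew = np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])
    R = np.eye(3) + skew * dt
    u, _, vt = np.linalg.svd(R)       # snap back to the nearest rotation matrix
    return u @ vt

def integrate_orientation(rates, dt):
    """rates: sequence of (wx, wy, wz) body-frame readings in rad/s at a fixed dt.
    Returns the orientation relative to the start as a body-to-initial-frame
    rotation matrix."""
    R = np.eye(3)
    for wx, wy, wz in rates:
        R = R @ small_rotation(wx, wy, wz, dt)   # compose in the body frame
    return R
```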
