I am new to this area - I have a background in gait and posture analysis.
I have a series of motion-capture files of timestamped coordinates (X, Y, and Z in mm) for a number of joints (30).
What would be the simplest way to extract the following from the motion observations? 1) The number of active features (i.e. active joints). 2) The average speed of motion.
Each file is in the format NxP, where P is the number of joints and N is the number of frame observations.
What I am looking for is some pointers into possible areas to explore.
Regards,
Dan
A couple of possibilities you might like to explore - both using completely free (and open source) software:
Python + NumPy/SciPy can easily read in your coordinate values and calculate the data you require - it is also possible to plot in 3D using matplotlib. (See the sketch below for both calculations.)
You could use your positional data to animate a stick figure in Blender - some of the test blends would provide a good starting point for this.
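For instance, a minimal NumPy sketch of both calculations - assuming each file loads as an N x (3*P) array of per-joint X, Y, Z columns at a known frame rate; the file name, frame rate, and "active" noise threshold are all illustrative, not part of your data:

    import numpy as np

    fps = 100.0                                     # assumed capture rate (frames/s)
    data = np.loadtxt("motion.csv", delimiter=",")  # shape (N, 3*P)
    xyz = data.reshape(len(data), -1, 3)            # shape (N, P, 3), in mm

    # per-joint displacement between consecutive frames, in mm
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=2)   # shape (N-1, P)
    speed = step * fps                                    # mm/s

    # 1) active joints: mean speed above an arbitrary 5 mm/s noise threshold
    print("active joints:", (speed.mean(axis=0) > 5.0).sum())

    # 2) average speed of motion over all joints and frames
    print("average speed (mm/s):", speed.mean())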
I have a personal project I'd like to achieve as one of my first complete Python projects, for learning and experience. The project is to make an application that will accept some input (or just change hard-coded input) for a time scale, and convert planetary orbits into sound waves.
We would need to transpose the timescales up quite a bit to be able to hear an orbital frequency, and I want to be able to change that transposition depending on what the user or I want.
Let's say I pull NASA JPL data, and it returns something like this:
A.D. 2019-Jan-01 00:00:00.0000, -4.714853635016740E+07, -4.782573126195183E+07, 4.173036297464184E+05,
How can I take this Date, X, Y, Z information (there would be more information for many days, years etc... my/user choice) and convert it into a sine wave (probably into a CSV) where I can then take the CSV and generate a .wav using Python?
I believe I'll need to scale the orbit information to lie between -1 and +1, and somehow figure out how to take the circular/elliptical data and convert it into sine waves (some X or Y data will need to invert from negative to positive? along an axis?).
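Here is the kind of thing I have in mind for the scaling and the .wav step - just a rough sketch with made-up numbers (the 440 Hz "orbit" is a placeholder for the real transposed data):

    import numpy as np
    from scipy.io import wavfile

    rate = 44100
    t = np.linspace(0, 1, rate)              # one second of samples
    y = 1.5e8 * np.sin(2 * np.pi * 440 * t)  # pretend orbit Y data, transposed to 440 Hz

    y = y / np.max(np.abs(y))                # scale into [-1, +1]
    wavfile.write("orbit.wav", rate, (y * 32767).astype(np.int16))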
Any help to get me on the right path would be greatly appreciated!
I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, now I would like a way to quickly extract all these parameters, so that when I change my setup or cameras I can quickly recalibrate.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them. Their tilt is also just in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z, if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances - that should be a camera constant, but in case I change cameras it is quite hard to determine accurately).
I started with the same trigonometric calculations, but I am always missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances, which are easy enough to measure), that is no problem (my calculations didn't give me any simple equations with that possibility, though).
I also read about homography in OpenCV, but it seems to apply to 2D space only - or not?
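In case it helps frame what I am after: from what I have read, OpenCV's solvePnP can apparently recover one camera's rotation and translation from the known 3D points of a test pattern and their pixel positions - roughly like this (untested, all numbers illustrative):

    import numpy as np
    import cv2

    # known 3D corners of the square test pattern, in mm (pattern plane at Z=0)
    obj_pts = np.array([[0, 0, 0], [100, 0, 0],
                        [100, 100, 0], [0, 100, 0]], dtype=np.float32)

    # where those corners appear in one camera's image, in pixels (example values)
    img_pts = np.array([[320, 240], [420, 242],
                        [418, 338], [322, 336]], dtype=np.float32)

    # rough intrinsics (focal length, principal point); distortion assumed zero
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    # rvec/tvec map pattern coordinates into this camera's frame;
    # cv2.Rodrigues(rvec)[0] gives the 3x3 rotation matrix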
Any help is appreciated!
For one of my classes, I made a 3D graphing application (using Visual Basic). It takes in a string (z=f(x,y)) as input, parses it into RPN notation, then evaluates and graphs the equation. While it did work, it took about 20 seconds to graph. I would have liked to add slide bars to rotate the graph vertically and horizontally, but it was definitely too slow to allow that.
Does anyone know what programming languages would be best for this type of thing? Ideally, I will be able to smoothly rotate the function once it is graphed.
Also, I’m trying to find a better way to rotate the function. Right now, I evaluate it at a bunch of points, and then plot the points to the screen. Every time the graph is rotated, the function must be re-evaluated and all the new points re-plotted. This takes just as long as the original graphing process, as it basically treats it as a completely new function.
Lastly, I need a better way to display the graph. Currently (using VB with Visual Studio) I plot 200,000 points to a chart, but this does not look great by any means. Eventually, I would like to be able to change color based on height, and apply other graphics manipulations to make it look better.
To be clear, I am not asking for someone to do any of this for me, but rather the means to go about coding this in an efficient way. I will greatly appreciate any advice anyone can give to help with any of these three concerns.
So I will explain how I would go about it using C++ and OpenGL. This doesn't mean those are the tools that you must use; it's just that those are standard graphics tools.
Your function's surface is essentially a 2D manifold, which has the nice property of having an intuitive mapping to 2D space - what is commonly referred to as UV mapping.
What you should do is pick the ranges for the rectangular domain you want to display (minimum x, maximum x, minimum y, maximum y) and make two nested for loops of the form:
// Pseudocode: sample the domain on a regular grid
for (x = min_x; x < max_x; x += step)
    for (y = min_y; y < max_y; y += step)
        point = (x, y, f(x, y))   // one 3D vertex per sample
Store all of these points in a container (std::vector works fine for C++) and this will be your "mesh".
This is done once, prior to rendering. You then render those points using, for example, GL_POINTS, and rotate your graph mesh using rotations on the GPU.
This will only show scattered points, not a surface.
If you also wish to show the surface of your function, and not just the points, you can triangulate that set of points fairly easily.
Group each 4 contiguous vertices (i.e. the vertices at indices <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1>) and create the 2 triangles:
(<x,y>, <x+1,y>, <x,y+1>), (<x+1,y>, <x+1,y+1>, <x,y+1>)
This will fully triangulate the surface of your mesh.
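As an illustration of that indexing, a quick sketch (in Python for brevity; it assumes the vertices were generated with x in the outer loop, so vertex <x,y> sits at index x*H + y in the flat list):

    W, H = 4, 3                          # samples along x and y (illustrative)
    triangles = []
    for x in range(W - 1):
        for y in range(H - 1):
            i = x * H + y                # index of vertex <x,y>
            # the two triangles covering this grid cell
            triangles.append((i, i + H, i + 1))          # <x,y>, <x+1,y>, <x,y+1>
            triangles.append((i + H, i + H + 1, i + 1))  # <x+1,y>, <x+1,y+1>, <x,y+1>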
Essentially you only need to build your mesh once, and this way rendering should be 60 fps for something with 20,000 vertices, regardless of whether you render only points or the triangles as well.
The programming language is mostly irrelevant, so VB itself is probably not the issue. You can have the same issues in Python, C#, C++, etc. Of course, you must master the programming language you choose.
One key aspect is using the right algorithms and data structures. Proper use of memory allocation and a memory layout that maximizes CPU (and GPU) cache usage are also key. Then you must take advantage of the platform and hardware capabilities (GPU and multithreading). For the last point you definitely need to use a graphics library such as OpenGL or Vulkan.
I am starting this thread to ask for your help with Excel.
The main goal is to determine the coordinates of the intersection point P=(x,y) between two curves (curve A, curve B) modeled by points.
The curves are non-linear, and each defining point is determined using complex equations (the equations depend on many parameters chosen by the user, and the user also chooses the number of points, which sets the accuracy of the curves). That is to say, each curve (curve A and curve B) changes in the XY plane (the Z coordinate is always zero; we are working in the XY plane) according to the input parameters, and the number of defining points also depends on the user's choice.
My first attempt was to determine the intersection point through the trend equations of each curve (I used the LINEST function to determine the coefficients of the polynomial equations) and by solving the resulting system. The problem is that Excel does not interpolate the curves very well because they are too wide, so the intersection point (the solution of the system) is very far from the real solution.
What I want to do, then, is shorten the ranges of points so that I can find two defining trend equations for the curves, cutting away the portions of the curves where the intersection cannot exist.
Today, in order to find the solution, I plot the curves in Siemens NX CAD using multi-segment splines of order 3, and then I can easily find the coordinates of the intersection point. Please note that I am using multi-segment splines to be more precise in approximating the functions curve A and curve B.
Since I want to avoid the CAD tool and stay entirely within Excel, is there a way to select a shorter range of the defining points close to the intersection point, in order to better approximate curve A and curve B with trend equations (LINEST function with 4 points and a 3rd-order spline), and then find the solution?
I attach a picture to give you an example of Curve A and Curve B on the plane:
https://postimg.cc/MfnKYqtk
At the following link you can find the Excel file with the coordinate points and the curve plot:
https://www.mediafire.com/file/jqph8jrnin0i7g1/intersection.xlsx/file
I hope to solve this problem with your help, thank you in advance!
kalo86
Your question gave me some days of thinking and research.
With the help of https://pomax.github.io/bezierinfo/ - § 27, Intersections (line-line intersections), and § 28, Curve/curve intersection - your problem can be solved in Excel.
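For reference, the straight-segment step from § 27 boils down to checking each segment of curve A against each segment of curve B; a compact sketch (in Python rather than Excel formulas just for readability, assuming curveA and curveB are lists of (x, y) points):

    def seg_intersect(p1, p2, p3, p4):
        # intersection of segments p1-p2 and p3-p4, or None if they miss
        (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if d == 0:
            return None                          # parallel segments
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
        u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / d
        if 0 <= t <= 1 and 0 <= u <= 1:          # lies on both segments
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        return None

    # check every segment of curve A against every segment of curve B
    for a0, a1 in zip(curveA, curveA[1:]):
        for b0, b1 in zip(curveB, curveB[1:]):
            hit = seg_intersect(a0, a1, b0, b1)
            if hit:
                print("intersection:", hit)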
About the mystery of Excel's smoothed lines, you can find details here:
https://blog.splitwise.com/2012/01/31/mystery-solved-the-secret-of-excel-curved-line-interpolation/
The author of this fit is Dr. Brian T. Murphy, PhD, PE, from www.xlrotor.com. You can find details here:
https://www.xlrotor.com/index.php/our-company/about-dr-murphy
https://www.xlrotor.com/index.php/knowledge-center/files
(see Smooth_curve_bezier_example_file.xls)
https://www.xlrotor.com/smooth_curve_bezier_example_file.zip
Knitting these together, you get the following results for the intersection of your given curves:
For the straight-line intersection:
(x = -1.02914127711195 / y = 23.2340949174492)
For the smooth-line intersection:
(x = -1.02947493047196 / y = 23.2370611219553)
For a full automation of your task you would need to add more details regarding the needed accuracy and what details you need for further processing (and that is actually beyond the scope of this website ;-).
(Screenshots: intersection of the straight lines; intersection of the smoothed lines; comparison charts; solution.)
Thank you very much for the answer - you captured my goal perfectly.
Your solution (for the smoothed lines) is very very close to what I determine in Siemens NX.
I'm going to read the documentation at the provided link, https://pomax.github.io/bezierinfo/, in order to better understand the math behind this topic.
So, to summarize my request: you were able to find the coordinates (x,y) of the intersection point between two curves with very good precision, without going through an advanced CAD system.
I am starting to study now, best regards!
kalo86
I have generated a .obj file from a scan from a 3D scanner. However, I am not sure how to interpret all this data. I have looked on Wikipedia and understood the general structure of the .obj file. My goal is to extract some information about the colour, and I am not sure how to do that. What do the numbers in the vt lines represent, and how can I use them to come up with a colour? My end objective is to scan a foot and cancel out the floor "portion" of the scan. When scanning the foot, the floor is also part of the scan, and I would like to disregard the floor and concentrate on the foot. Here is a small part of the .obj file:
Looks like the Wavefront OBJ ASCII file format... so google a bit and you will find tons of descriptions. In your example:
v x y z means a point coordinate (vertex) [x,y,z]
vn nx ny nz means the normal vector (nx,ny,nz) of the last point [x,y,z]
vt tx ty means a texture coordinate [tx,ty]
Vertices are the points of the polygonal mesh. Normals are used for lighting computations (shading), so if you do not use them you can skip them. The colour is stored in some texture image, and you pick it as the pixel at [tx,ty]; the range is tx,ty = <-1,+1> or <0,+1>, so you need to rescale to the image resolution.
So you need to read all this data into some table and then find the section with faces (lines starting with f):
f v1 v2 v3 means render a polygon with 3 vertices, where v1, v2, v3 are indices into the vertex table. Beware: the indexing starts at 1, so for C++-style arrays you need to decrement the indices by 1.
There are a lot of deviations, so without an example it is hard to elaborate further (your example shows only the start of the vertex table).
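For what it's worth, a minimal sketch of that reading step (Python for brevity; it assumes the common space-separated variant with v/vt face corners - just one of those deviations - and the file names are illustrative):

    from PIL import Image

    vertices, texcoords, faces = [], [], []
    with open("scan.obj") as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append(tuple(map(float, parts[1:4])))
            elif parts[0] == "vt":
                texcoords.append(tuple(map(float, parts[1:3])))
            elif parts[0] == "f":
                # each corner is "v/vt" (or "v/vt/vn"); indices are 1-based
                faces.append([tuple(int(i) - 1 for i in c.split("/")[:2])
                              for c in parts[1:]])

    # colour of the first corner of the first face, via its texture coordinate
    tex = Image.open("scan_texture.png")
    vi, ti = faces[0][0]
    tx, ty = texcoords[ti]
    colour = tex.getpixel((int(tx * (tex.width - 1)),
                           int((1 - ty) * (tex.height - 1))))  # vt origin is bottom-left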