Transition from a local spherical coordinate system to Cartesian coordinates - geometry

I have a set of data points in a local spherical coordinate system (it does not follow the geographic or the mathematical convention exactly), and I'm trying to convert them to a Cartesian system so I can preview the shape they describe in any plotting program.
The points are collected by a measuring device with a rotating laser head (so they are slightly noisy). The head rotates about two axes, giving the angles phi and theta, and measures the distance r.
where:
phi - left-right rotation (-90° to 90°)
theta - up-down rotation (-90° to 90°)
r - the measured distance
This can be seen in the figure below:
I tried to convert the data to Cartesian (xyz) according to the following formulas:
Unfortunately, every time I run the conversion something goes wrong and the picture I get is incorrect.
For the sample data set (the "sample" link) I get the following picture (seen from the top):
The expected result is a rectangular tub (open at the top). The first arc (at the point where the data have not yet been processed) is the so-called lens effect, which comes from the meter being close to the wall; what puzzles me is the other end of the plot, where the data are arranged in a completely unexpected way.
With this number of points it is hard for me to tell what causes the failure: a bad conversion of the data, or simply the meter measuring this way. I would be grateful for a check of my reasoning and any advice.
Thank you in advance.

I think I am late to answer this question.
I can't see the images; anyway, you can go through the link here.
It will give you a clear idea of how to convert spherical coordinate data into a Cartesian coordinate system.
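Since the conversion formulas from the question are not visible here, below is a minimal sketch of the standard conversion, under the assumption that phi is the left-right (azimuth) angle measured from the forward axis, theta is the up-down (elevation) angle measured from the horizontal, both in degrees, and r is the measured range; the axis assignment may need tweaking to match the meter's actual conventions.

```python
import numpy as np

def local_spherical_to_cartesian(phi_deg, theta_deg, r):
    """Convert the meter's (phi, theta, r) readings to Cartesian (x, y, z).

    Assumed conventions (adjust to the actual device):
      phi   - left-right rotation (azimuth) in degrees, 0 = straight ahead
      theta - up-down rotation (elevation) in degrees, 0 = horizontal
      r     - measured distance
    """
    phi = np.radians(phi_deg)
    theta = np.radians(theta_deg)
    x = r * np.cos(theta) * np.sin(phi)   # left-right
    y = r * np.cos(theta) * np.cos(phi)   # forward
    z = r * np.sin(theta)                 # up-down
    return x, y, z
```

If the cloud comes out skewed, the usual suspects are swapped angle roles, degrees versus radians, or an elevation measured from the vertical instead of the horizontal.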

Related

Triangulate camera position and orientation with respect to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, I now want a way to quickly extract all these parameters, so that when I change my setup or cameras I can calibrate it again quickly.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them. Their tilt is also just in the XY plane.
To reverse the triangulation I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite hard to determine accurately).
I started with the same trigonometric calculations, but I am always missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances, they are easy enough to measure), that's no problem (my calculations didn't give me any simple equations with that possibility, though).
I also read about homography in OpenCV, but it seems it applies only to 2D space, or does it?
Any help is appreciated!
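One possible direction (not mentioned in the question) is OpenCV's pose estimation: given the four test-pattern corners in world units and their pixel positions, cv2.solvePnP recovers each camera's rotation and translation relative to the pattern, from which the inter-camera distance and tilt follow. A rough sketch, where the pattern size, pixel coordinates and intrinsics are all made-up placeholder values and the intrinsics would have to be calibrated separately:

```python
import numpy as np
import cv2

# Hypothetical test pattern: a square of side 0.20 m in the Z=0 plane
# of the pattern's own coordinate system.
side = 0.20
object_points = np.array([[0, 0, 0],
                          [side, 0, 0],
                          [side, side, 0],
                          [0, side, 0]], dtype=np.float64)

# Pixel coordinates of the same four corners in one camera (made up here).
image_points = np.array([[312.0, 240.0],
                         [420.0, 236.0],
                         [424.0, 348.0],
                         [308.0, 352.0]], dtype=np.float64)

# Intrinsics must be known or estimated separately (e.g. via cv2.calibrateCamera).
fx = fy = 800.0          # assumed focal length in pixels
cx, cy = 320.0, 240.0    # assumed principal point
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)       # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)       # 3x3 rotation: pattern frame -> camera frame
camera_position = -R.T @ tvec    # camera centre expressed in the pattern frame
print(ok, camera_position.ravel())
```

Running the same call for the second camera gives both poses in the pattern's frame, so the baseline and relative orientation fall out as the difference of the two transforms.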

What is the reference point for measuring angles in OpenCV?

I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I have found this blog post indicating that the range of angles is 360°. That tells me that the angle measurement would go along the lines of the unit circle. I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is measured on a unit circle rotated 90° clockwise, centered at the image coordinate origin in the top-left corner. It would look like this.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and it seems to be an atan2 implementation.
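The convention can be pinned down directly by probing cartToPolar() with known vectors; a small sketch of my own (not from the post above). With fastAtan2 the angle is counted from the positive x axis toward positive y, and in image coordinates the +y axis points down the frame, so the 90° direction corresponds to downward motion on screen.

```python
import numpy as np
import cv2

# Probe cartToPolar() with unit vectors along the image-coordinate axes.
x = np.array([1.0, 0.0, -1.0, 0.0], dtype=np.float32)   # +x, 0, -x, 0
y = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)   # 0, +y (down), 0, -y (up)
mag, ang = cv2.cartToPolar(x, y, angleInDegrees=True)
print(ang.ravel())   # expected: approximately [0, 90, 180, 270]
```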

Orthographic projection of point to line in WGS84

I need to find points (from a rather small dataset) which are close enough to a polyline. All coordinates are WGS84.
I am thinking of some R-tree approach to reduce the data to just a few candidates, which then have to be checked in more detail.
While I managed to do this using "great circle" arithmetic, I am sure this is more precise than necessary, for the following reasons:
The segmentation of those polylines is quite high. A single segment of a polyline can be considered to be no longer than 10 km.
The points in question are not more than a few hundred meters away from segments.
The area in question is Europe, so the algorithm does not need to be valid for extreme (near-pole?) conditions. Again: points don't need to be checked against the whole polyline (which could be hundreds of kilometers long). Only the "nearby" segments need to be considered.
Do I need to transform the WGS84 coordinates to
a local Cartesian reference system, or
a Mercator system?
Or can I even just calculate with "angle differences"? I know this is only a matter of accuracy: I can accept an error below ~50 meters.
I highly appreciate your suggestions!
On how to measure the distance from a point to a polyline:
You have to measure the distances from all your points to all segments of the polyline.
See Distance from a point to a polygon.
You can do this without converting the coordinates to Cartesian (especially since the area is rather small, you don't mind a 50-meter error and you don't need exact distances, just relative ones). See https://en.wikipedia.org/wiki/Decimal_degrees.
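As an illustration of that approach (not part of the original answer), a common trick is an equirectangular approximation: scale longitude differences by cos(latitude), then use ordinary planar point-to-segment distance. Over segments of a few kilometres in Europe this stays well within a ~50 m error budget. The coordinates below are illustrative:

```python
import math

EARTH_RADIUS_M = 6371000.0

def point_segment_distance_m(p, a, b):
    """Approximate distance in meters from point p to segment a-b.

    p, a, b are (lat, lon) in decimal degrees. Uses an equirectangular
    approximation: longitudes are scaled by cos(latitude), which is fine
    for short segments far from the poles.
    """
    scale = math.cos(math.radians(p[0]))

    def to_xy(q):
        return (math.radians(q[1]) * scale, math.radians(q[0]))

    px, py = to_xy(p)
    ax, ay = to_xy(a)
    bx, by = to_xy(b)

    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment
        t = 0.0
    else:                                     # clamp projection onto the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) * EARTH_RADIUS_M

# Example: a point roughly 100 m east of a short north-south segment near Berlin.
print(point_segment_distance_m((52.5201, 13.4064),
                               (52.5190, 13.4050),
                               (52.5210, 13.4050)))
```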

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling that sphere to a sphere of radius 5 around the point (0, 0, 0). Since the points are guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) does not touch the coordinates of the input data; instead it just positions the camera to match the data. I ran into a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
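One possible way to generate them (a sketch of my own, not from the original answer) is to derive the clipping planes from the bounding sphere of the data and the current camera distance, keeping far/near as small as the data allows so the depth buffer retains enough precision; the resulting values would go into camera.near and camera.far, followed by camera.updateProjectionMatrix() in Three.js. The calculation itself, written here in Python for brevity:

```python
import math

def near_far_from_bounds(camera_pos, sphere_center, sphere_radius, margin=1.05):
    """Pick near/far clipping planes that tightly enclose a bounding sphere.

    camera_pos and sphere_center are (x, y, z) tuples; sphere_radius is the
    radius of the bounding sphere of all points. margin adds a little slack.
    """
    dist = math.dist(camera_pos, sphere_center)
    far = (dist + sphere_radius) * margin
    # Keep near as large as possible; depth precision degrades as far/near grows.
    near = max((dist - sphere_radius) / margin, far * 1e-4)
    return near, far

# Example: camera 50 units from a point cloud whose bounding sphere has radius 10.
print(near_far_from_bounds((0, 0, 50), (0, 0, 0), 10.0))
```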
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

How do I find the world coordinates of a pixel on the image plane?

A bit of background
I am writing a simple ray tracer in C++. I have most of the core complete but don't understand how to retrieve the world coordinate of a pixel on the image plane. I need this location so that I can cast the ray into the world.
Currently I have a Camera with a position (a.k.a. my perspective reference point) and a direction vector which is not normalized. The direction's length gives the distance to the center of the image plane, and its orientation gives the way the camera is facing.
There are other values associated with the camera, but they should not be relevant.
My image coordinates range from -1 to 1, and the perspective (focal length) changes based on the length of the direction vector associated with the camera.
What I need help with
I need to go from pixel coordinates (say [0, 256] in an image 256 pixels on each side) to my world coordinates.
I also want to program this so that no matter where the camera is placed and which way it is directed, I can find the pixel's world coordinates. (Currently the camera will almost always be centered at the origin, looking down the negative z axis, but I would like to program this with future changes in mind.) It is also important to know whether this code should be pushed down into my threaded code as well, or whether it should be calculated by the main thread and the resulting ray then used in the threaded code.
(source: in.tum.de)
I did not make this image and it is only there to give an idea of what I need.
Please leave comments if you need any additional info. Otherwise I would like a simple theory/code example of what to do.
Basically you have to do the inverse of the V * MVP transform, which maps a point into unit-cube coordinates. Look at the following URLs for programming help:
http://nehe.gamedev.net/article/using_gluunproject/16013/
https://sites.google.com/site/vamsikrishnav/gluunproject
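For the pinhole setup described in the question, the unprojection can also be written out directly instead of going through gluUnProject; a minimal sketch of my own (not from the answer above), assuming the direction vector's length is the focal distance, a world up vector of (0, 1, 0), and an image whose coordinates map to [-1, 1]:

```python
import numpy as np

def pixel_to_world(px, py, width, height, cam_pos, cam_dir, up=(0.0, 1.0, 0.0)):
    """Return the world-space point on the image plane for pixel (px, py).

    cam_pos  - camera position (the perspective reference point)
    cam_dir  - unnormalized direction; its length is the focal distance and
               its endpoint is the center of the image plane
    The image plane spans [-1, 1] in both axes.
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    cam_dir = np.asarray(cam_dir, dtype=float)

    forward = cam_dir / np.linalg.norm(cam_dir)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)

    # Map pixel indices to [-1, 1], sampling at pixel centers.
    u = (px + 0.5) / width * 2.0 - 1.0
    v = 1.0 - (py + 0.5) / height * 2.0     # flip so +v points up

    plane_center = cam_pos + cam_dir        # focal distance = |cam_dir|
    return plane_center + u * right + v * true_up

# Ray through pixel (128, 64) of a 256x256 image, camera at origin looking down -z.
p = pixel_to_world(128, 64, 256, 256, (0, 0, 0), (0, 0, -2))
ray_dir = p / np.linalg.norm(p)             # ray direction from the camera at the origin
print(p, ray_dir)
```

Since this is only a handful of vector operations per pixel, it should be cheap enough to compute inside the threaded code right where each ray is spawned.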
