Pyplot is too slow when plotting point by point - python-3.x

I have too many points (x, y) (about 280 million) to store in a list of x and a list of y and then feed to pyplot.scatter().
So I thought I could send the points one by one, but pyplot.scatter is very slow to process them that way.
Are there any alternatives? My end goal is to save the graph of points as an image.
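One common alternative for point counts in this range (a minimal sketch, not taken from the original thread) is to rasterize the points yourself with a 2D histogram and save that array straight to disk as an image, so no per-point drawing or interactive figure is needed:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the real coordinate arrays; replace with your data.
rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)

# Bin all points into a fixed-size grid instead of drawing one marker each.
counts, xedges, yedges = np.histogram2d(x, y, bins=2000)

# Log-scale so sparse regions stay visible, then write the raster to disk.
plt.imsave("points.png", np.log1p(counts.T), origin="lower", cmap="viridis")

The memory cost is then fixed by the grid size (2000 x 2000 here) rather than by the number of points, so the data can also be binned in chunks if it does not fit in RAM at once.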

Related

XV-11 Lidar read data with Raspberry pi 3B+

I have an XV-11 lidar sensor from an old vacuum cleaner and I want to use it for a robot project.
During my research I found a simple and interesting approach that uses Matplotlib to display all the distances as scatter points, e.g. https://udayankumar.com/2018/08/01/working-with-lidar/. When I run that Python code on the Raspberry Pi 3, a Matplotlib window does pop up with all the distances, but the refresh rate is far too slow to view in real time: the Matplotlib display falls a few dozen seconds behind the sensor readings.
My next idea was to write the display code myself with the following lines, but I get the same result: good readings, delayed a lot.
import numpy as np
import matplotlib.pyplot as plt

plt.ion()
points = []
# angle (degrees) and dist_mm come from the sensor read loop (not shown)
for angle, dist_mm in lidar_readings:
    angle_rad = np.radians(angle)
    x = dist_mm * np.cos(angle_rad)
    y = dist_mm * np.sin(angle_rad)
    points.append([x, y])
    if angle == 356:  # one full revolution collected
        pts = np.array(points)
        plt.scatter(pts[:, 0], pts[:, 1])
        plt.draw()
        plt.pause(0.0001)
        plt.clf()
        points = []
        print("-----------")
What am I doing wrong, or what can I improve in this case? My expectation is something like this Lidar animation (source: https://github.com/Hyun-je/pyrplidar), although that example uses a different lidar sensor.
You are clearing and re-creating the axes, background, etc. every frame. At the very least, limiting the drawing/re-drawing to only the relevant plot points will give a degree of improvement.
If you're not familiar with this, I'd start with the animation guidance (https://matplotlib.org/stable/api/animation_api.html), which introduces basics like updating only parts of the figure.
If you're still churning out too much data to update, then limiting the frequency with which you read your data, or more specifically the rate at which you redraw, might result in more stability too.
It's probably worth hunting down more general guidance on real-time plotting too, e.g. update frame in matplotlib with live camera preview.
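As a minimal sketch of the "update only parts of the figure" idea: create the scatter artist once and only update its offsets each frame. The data here is simulated; the real lidar readings would replace it.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set_xlim(-6000, 6000)
ax.set_ylim(-6000, 6000)
scat = ax.scatter([], [], s=2)  # created once; frames only update its data

def update(frame):
    # Simulated revolution of readings; swap in the real lidar data here.
    angles = np.radians(np.arange(0, 360, 4))
    dists = 3000 + 1000 * np.sin(3 * angles + frame / 5)
    xy = np.column_stack([dists * np.cos(angles), dists * np.sin(angles)])
    scat.set_offsets(xy)  # no clf(), no new artists
    return (scat,)

ani = FuncAnimation(fig, update, interval=50, blit=True)
plt.show()

With blit=True only the scatter artist is redrawn, not the axes and background, which is usually the difference between seconds-long lag and a smooth update on a Raspberry Pi.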

Triangulate camera position and orientation with regard to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math is simple once you know the camera distance and orientation. Now, however, I would like to be able to extract all these parameters quickly, so that when I change my setup or cameras I can calibrate it again quickly.
To calculate the object position I made some simplifying assumptions that made the math easier: the cameras are in the same YZ plane, so there is only a distance in X between them, and their tilt is only in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known mutual distances would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions into real-world distances; that should be a camera constant, but in case I change cameras it is quite hard to define accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering whether there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem, although my calculations didn't give me any simple equations with that possibility.
I also read about homography in OpenCV, but it seems to apply to 2D space only, or does it?
Any help is appreciated!
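One standard approach worth noting here (an editorial addition, not something from the original thread) is a perspective-n-point solve: given the known 3D layout of the test pattern and the pixel positions where a camera sees it, OpenCV's solvePnP recovers that camera's rotation and translation relative to the pattern, and running it per camera also yields their relative pose. A minimal sketch, with all numeric values illustrative:

import numpy as np
import cv2

# Known 3D layout of the square test pattern, in metres (pattern plane z = 0).
side = 0.10
object_points = np.array([[0, 0, 0],
                          [side, 0, 0],
                          [side, side, 0],
                          [0, side, 0]], dtype=np.float32)

# Pixel positions of those corners in one camera image (illustrative values;
# in practice they come from your corner/marker detection).
image_points = np.array([[612, 388],
                         [884, 392],
                         [880, 660],
                         [608, 655]], dtype=np.float32)

# Intrinsics (focal length, principal point) encode the "view angle";
# they are best measured once per camera with cv2.calibrateCamera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation of the camera
camera_position = -R.T @ tvec     # camera centre in pattern coordinates
print(ok, camera_position.ravel())

This also answers the homography question indirectly: a homography only maps plane to plane, but solvePnP uses the planar pattern to recover a full 3D pose.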

Transition from a spherical coordinate system (local) to Cartesian

I have this problem: there is a set of data points in a local spherical coordinate system (it does not faithfully follow the geographic or the mathematical convention), and I am trying to convert it to a Cartesian system so that I can preview it in any program and see the shape that should emerge from these points.
The points are collected by a meter with a rotating laser head (hence slightly noisy). The head rotates about two axes, giving the angles phi and theta and the distance r, where
phi - left-right rotation (-90 to 90)
theta - up-down rotation (-90 to 90)
r - the distance
This can be seen in the figure below:
I tried to convert the data to Cartesian (xyz) according to the following formulas:
Unfortunately, every time I run the conversion something goes wrong and the picture I get is incorrect.
For a sample collection (sample) I get the following picture (seen from the top):
The expected result is a rectangular tub (with an open top). The first arc (at the point where the scan has not yet passed) is the so-called lens effect, which results from the meter being close to the wall; but the end of the graph puzzles me, where the data are arranged in a completely unexpected way.
With this number of points it is hard for me to figure out what causes the failure: a bad conversion of the data, or simply the meter measuring this way. I would be grateful for verification of my way of thinking and any advice.
Thank you in advance.
I think I am late to answer this question.
I can't see the images; anyway, you can go through enter link description here.
It will give you a clear idea of how to convert spherical coordinate data into the Cartesian coordinate system.
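For the convention described in the question (phi = left-right azimuth, theta = up-down elevation, both in -90 to 90 degrees), the usual conversion is x = r*cos(theta)*cos(phi), y = r*cos(theta)*sin(phi), z = r*sin(theta). Note this mapping of the device's axes is an assumption, not something confirmed in the thread. A minimal sketch:

import numpy as np

def local_spherical_to_cartesian(phi_deg, theta_deg, r):
    """phi: left-right (azimuth), theta: up-down (elevation), in degrees."""
    phi = np.radians(phi_deg)
    theta = np.radians(theta_deg)
    x = r * np.cos(theta) * np.cos(phi)
    y = r * np.cos(theta) * np.sin(phi)
    z = r * np.sin(theta)
    return x, y, z

# Example: a point 2 m away, 30 degrees to the left, 10 degrees up.
print(local_spherical_to_cartesian(-30.0, 10.0, 2.0))

If the rendered shape is distorted rather than merely rotated, the likely culprits are swapped phi/theta, degrees passed where radians are expected, or a head geometry where the second axis rotates with the first (which needs an extra offset term rather than this ideal formula).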

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that region, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it requires either changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data and instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
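One way to derive those planes dynamically (a sketch under the assumption that the data's bounding sphere is already computed; shown in Python to match the rest of this page, though the arithmetic carries straight over to a Three.js PerspectiveCamera):

import math

def near_far_for_bounding_sphere(camera_pos, sphere_center, sphere_radius,
                                 margin=1.1):
    """Pick near/far planes that tightly enclose the data's bounding sphere.

    Keeping far/near small reduces depth-buffer quantization and z-fighting.
    """
    dist = math.dist(camera_pos, sphere_center)
    near = max(dist - sphere_radius * margin, 0.01)
    far = dist + sphere_radius * margin
    return near, far

# Example: camera 20 units from a radius-5 point cloud centred at the origin.
print(near_far_for_bounding_sphere((0, 0, 20), (0, 0, 0), 5.0))

The key property is that depth precision depends on the far/near ratio, so clamping both planes to the data keeps that ratio small regardless of the user's coordinate scale.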
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles: if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

Calculating X and Y speed

I am trying to calculate the X and Y speed of small objects that are attracted by the gravity of one big object. This is a video of how my calculation works. As you can see, after a while every small object converges to one point. I want to achieve motion like in this video. How can I calculate the X and Y speed? I have saved the coordinates of all the small objects in a list, and every game cycle I add the X speed to the X coordinate and the Y speed to the Y coordinate.
From what I can tell from the videos, in yours the speed is variable, while in the video you're aiming for the velocities are either -1 or 1? It seems that in both cases (more so with your version, admittedly) the small objects converge to a point.
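For reference, a minimal sketch of the update scheme the question describes (all names and constants are illustrative): each cycle, accelerate every small object toward the big one with an inverse-square force, then update the velocity before the position (semi-implicit Euler), which keeps orbits from collapsing as quickly as plain explicit updates do:

import math

G = 500.0                      # illustrative gravitational constant
big = (400.0, 300.0)           # position of the big object

def step(objects, dt=1.0):
    """objects: list of [x, y, vx, vy]; updated in place each game cycle."""
    for o in objects:
        dx = big[0] - o[0]
        dy = big[1] - o[1]
        r = math.hypot(dx, dy)
        r = max(r, 5.0)           # clamp to avoid huge forces near the centre
        a = G / (r * r)           # acceleration magnitude, 1/r^2 falloff
        o[2] += a * dx / r * dt   # update velocity first...
        o[3] += a * dy / r * dt
        o[0] += o[2] * dt         # ...then position (semi-implicit Euler)
        o[1] += o[3] * dt

objects = [[100.0, 100.0, 0.0, 1.5], [600.0, 200.0, -1.0, 0.0]]
for _ in range(3):
    step(objects)
print(objects)

Objects spiralling into one point usually means the initial sideways velocity is too small for a stable orbit, or the timestep is large enough that the integration loses energy toward the centre.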
