Constructing 3D image from center points and radius data - python-3.x

I have to construct a 3D image of spherical particles using Python array operations. The data I have are the center points and radii of spherical particles in (x, y, z, r) format, where x, y, z, r are arrays of length 55000. When I do a 3D plot of these coordinates using mpl_toolkits.mplot3d, the structure looks like the one shown in the figure.
Can you suggest a good way to build the 3D image using numpy or scipy.ndimage image-processing tools? If that is not possible, is there an alternative method to solve this issue? Thanks in advance.
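One way to do this (a minimal sketch, not from the original post) is to rasterize each sphere into a 3D boolean numpy array, which can then be processed with scipy.ndimage or inspected slice by slice. It assumes the x, y, z, r arrays are non-negative and already expressed in voxel units:
import numpy as np

# Assumed inputs: x, y, z, r are 1D arrays of length ~55000 (centre coordinates and radii),
# non-negative and already in voxel units.
shape = (int(np.ceil((x + r).max())) + 1,
         int(np.ceil((y + r).max())) + 1,
         int(np.ceil((z + r).max())) + 1)
volume = np.zeros(shape, dtype=bool)

for cx, cy, cz, cr in zip(x, y, z, r):
    # Rasterize only the small bounding box around each sphere to keep the loop fast.
    x0, x1 = int(max(cx - cr, 0)), int(min(cx + cr + 1, shape[0]))
    y0, y1 = int(max(cy - cr, 0)), int(min(cy + cr + 1, shape[1]))
    z0, z1 = int(max(cz - cr, 0)), int(min(cz + cr + 1, shape[2]))
    xi, yi, zi = np.ogrid[x0:x1, y0:y1, z0:z1]
    volume[x0:x1, y0:y1, z0:z1] |= ((xi - cx) ** 2 + (yi - cy) ** 2 + (zi - cz) ** 2 <= cr ** 2)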

Related

How to fit an ellipse to given points in Octave

I would like to know how to fit a bunch of (x,y) points to an ellipse. I've got two vectors, both of size [1001,1], but I don't know how to fit an ellipse to them. Every time I try least squares it ends in some different approximation. I was using trigonometric polynomials, by the way.
Here is an image of the plot.
I would appreciate any help :)
Image plot: https://i.stack.imgur.com/P4vnt.png
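One common approach (a sketch in Python rather than Octave, and not taken from the question) is an algebraic least-squares fit of the general conic A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1, which behaves well when the points really do lie near an ellipse:
import numpy as np

def fit_conic(x, y):
    # Algebraic least-squares fit of A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1.
    # x and y are 1D arrays of the same length (e.g. the two [1001,1] vectors, flattened).
    M = np.column_stack([x**2, x*y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs  # (A, B, C, D, E)

# Example with noisy points on an ellipse centred at (1, 2):
t = np.linspace(0, 2*np.pi, 1001)
x = 1 + 3*np.cos(t) + 0.05*np.random.randn(t.size)
y = 2 + 1.5*np.sin(t) + 0.05*np.random.randn(t.size)
print(fit_conic(x, y))
If the result must be guaranteed to be an ellipse (B^2 - 4AC < 0) rather than some other conic, Fitzgibbon's direct least-squares ellipse fit adds that constraint explicitly.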

Create a .stl file from a collection of points

So the software I am using accepts 3D objects in the form of contours or .stl files. The contours I have are along the z-plane (each plane has a unique z). I had to modify the contours for my experiment, and now the contours do not have a unique z for each plane (they are now slightly angled with respect to the z=0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices from Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".
So after seeking an answer for months and trying to use MeshLab and Blender, I finally stumbled across the answer using numpy-stl. Hopefully it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

# fin_list holds the collected triangles (one entry per triangle).
num_triangles = len(fin_list)
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)

for i in range(num_triangles):
    # I did not know how to use numpy arrays in this case. This was the major roadblock.
    # Assign the three vertex coordinates (v1, v2, v3) of triangle i to the mesh record.
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])

m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle in the mesh go in as a vector that defines the surface normal. I just collected three such vertices that form a triangle and wrote them into the mesh. Since I had a regular array of points, it was easy to collect the triangles:
group_a = []
group_b = []
for i in range(len(point_list) - 1):
    plane_a = []
    plane_b = []
    for j in range(len(point_list[i]) - 1):
        tri_a = []
        tri_b = []
        # series a triangles
        tri_a.append(point_list[i + 1][j])
        tri_a.append(point_list[i][j + 1])
        tri_a.append(point_list[i][j])
        # series b triangles
        tri_b.append(point_list[i + 1][j])
        tri_b.append(point_list[i + 1][j + 1])
        tri_b.append(point_list[i][j + 1])
        # load the triangles into the current plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
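The answer does not show how group_a and group_b become the fin_list used by the mesh-writing code above; one hedged guess is that the nested lists are simply flattened into a single list of triangles, e.g.:
# Hypothetical bridge (not in the original answer): flatten the nested groups into the
# flat list of triangles consumed by the mesh-writing code.
fin_list = [tri for group in (group_a, group_b) for plane in group for tri in plane]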
The rules for choosing triangles for creating a mesh are as follows:
The vertices must be arranged in a counter-clockwise direction.
Each triangle must share two vertices with adjacent triangles.
The surface normal must point out of the surface (a small check for this is sketched after the list).
There were two more rules that I did not follow, but it still worked in my case:
1. All coordinates must be positive (in the 1st quadrant only)
2. All triangles must be arranged in an increasing z-order.
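As a small illustration of the winding and normal rules (not part of the original answer), the normal implied by a triangle's vertex order can be checked with a cross product; the vertices below are placeholders:
import numpy as np

def triangle_normal(v1, v2, v3):
    # Normal implied by counter-clockwise vertex order (right-hand rule).
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    n = np.cross(v2 - v1, v3 - v1)
    return n / np.linalg.norm(n)

v1, v2, v3 = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(triangle_normal(v1, v2, v3))   # [0. 0. 1.]  counter-clockwise seen from +z
print(triangle_normal(v1, v3, v2))   # [0. 0. -1.] reversed winding flips the normal
numpy-stl can also recompute normals from the stored vertex order after the mesh is built (update_normals()).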
Note: there are two kinds of .STL file format, binary and ASCII; numpy-stl writes out the binary format. More info on STL files can be found here.
Hope this helps!

How to map 3D spherical point configuration (not the earth) onto 2D plane?

I have a 3D spherical point set of 10 points (2 layers of a pentagonal loudspeaker array) [2]. I want to obtain a 2D representation of this configuration (Mollweide, Mercator, cylindrical, or equirectangular projection?).
I will set the axes so that they give the corresponding elevation and azimuth angles. An example is given in [1] (taken from the ALLRADecoder VST plugin by IEM).
Is there a way to do this with a Python package like matplotlib, mayavi, or similar?
If you want to actually render something similar to the above, you might want to look at PyOpenGL; however, it might not be the most intuitive package to dive into.
If you just want the 2D set of points that this 3D object makes, you could do some projection maths; a package like pygame would then be ideal for plotting the 2D representation, and it is quite user-friendly if you're new to Python and graphics.
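As a minimal sketch of the "projection maths" route (the array name, example data, and equirectangular-style layout are assumptions, not from the answer): convert each Cartesian speaker position to azimuth and elevation and plot the result as a flat 2D scatter with matplotlib.
import numpy as np
import matplotlib.pyplot as plt

# points: (N, 3) array of x, y, z loudspeaker positions (assumed; e.g. the 10 speakers).
points = np.random.randn(10, 3)

x, y, z = points.T
r = np.sqrt(x**2 + y**2 + z**2)
azimuth = np.degrees(np.arctan2(y, x))     # -180..180 degrees
elevation = np.degrees(np.arcsin(z / r))   # -90..90 degrees

# Equirectangular-style plot: azimuth on the horizontal axis, elevation on the vertical axis.
plt.scatter(azimuth, elevation)
plt.xlabel('azimuth [deg]')
plt.ylabel('elevation [deg]')
plt.xlim(-180, 180)
plt.ylim(-90, 90)
plt.show()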

VTK - create 3D model

I'm trying to create a 3D mask model from the 3D coordinate points that are stored in a txt file. I use the Marching Cubes algorithm. It looks like it's not able to link the individual points, and therefore holes are created in the model.
Steps (following https://lorensen.github.io/VTKExamples/site/Cxx/Modelling/MarchingCubes/):
First, load the 3D points from the file as vtkPolyData.
Then, use vtkVoxelModeller.
Pass the voxelModeller output to the Marching Cubes algorithm and finally visualize.
Any ideas?
Thanks
The example takes a spherical mesh (i.e. a set of triangles forming a sealed 3D shape), converts it to a voxel representation (a 3D image where the voxels outside the mesh are black and those inside are not), then converts it back to a mesh using the Marching Cubes algorithm. In practice, the input and output of the example are very similar meshes.
In your case, you load the points and try to create a voxel representation of them. The problem is that your set of points is not sufficient to define a volume; they are not a sealed mesh, just a list of points.
In order to replicate the example you should do the following:
1) Build a 3D mesh from your points (you gave no information about what the points are/represent, so I can't help you much with this task). In other words, you need to say how these points are connected to each other to form a 3D shape (vtkPolyData). VTK can't guess how your points are connected; you have to tell it.
2) Once you have a mesh, if you need a voxel representation (vtkImageData) of it, you can use vtkVoxelModeller or vtkImplicitModeller. At this point you can use VTK filters that need a vtkImageData as input.
3) Finally, in order to convert the voxels back to a mesh (vtkPolyData), you can use vtkMarchingCubes (or better, vtkFlyingEdges3D, which is a very similar algorithm but much faster). A minimal sketch of steps 2 and 3 follows.
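A minimal Python sketch of steps 2 and 3 (the 'surface' mesh from step 1, the sample dimensions, and the iso-value are assumptions to adapt to your data):
import vtk

# 'surface' is assumed to be a sealed vtkPolyData mesh produced in step 1.
voxel_modeller = vtk.vtkVoxelModeller()
voxel_modeller.SetSampleDimensions(50, 50, 50)      # resolution of the voxel volume (assumed)
voxel_modeller.SetModelBounds(surface.GetBounds())
voxel_modeller.SetScalarTypeToFloat()
voxel_modeller.SetInputData(surface)

# Step 3: voxels (vtkImageData) back to a mesh (vtkPolyData).
contour = vtk.vtkFlyingEdges3D()                    # or vtk.vtkMarchingCubes()
contour.SetInputConnection(voxel_modeller.GetOutputPort())
contour.SetValue(0, 0.5)                            # iso-value (assumed)
contour.Update()

mesh = contour.GetOutput()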
Edit:
It is not clear what the shape you want should be, but you can try to use vtkImageOpenClose3D, so the steps are:
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Pass the voxelModeller output to the vtkImageOpenClose3D algorithm, then the vtkImageOpenClose3D output to Marching Cubes (change it to vtkFlyingEdges3D), and finally visualize.
Example for vtkImageOpenClose3D:
https://www.vtk.org/Wiki/VTK/Examples/Cxx/Images/ImageOpenClose3D
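Continuing the sketch above, vtkImageOpenClose3D would slot in between the voxel modeller and the contour filter; the open/close values and kernel size here are assumptions to tune for your data:
import vtk

open_close = vtk.vtkImageOpenClose3D()
open_close.SetInputConnection(voxel_modeller.GetOutputPort())
open_close.SetOpenValue(0)            # background value (assumed)
open_close.SetCloseValue(1)           # foreground value (assumed)
open_close.SetKernelSize(5, 5, 5)     # size of the morphological kernel (assumed)

contour = vtk.vtkFlyingEdges3D()
contour.SetInputConnection(open_close.GetOutputPort())
contour.SetValue(0, 0.5)
contour.Update()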

skimage project an image's 3D plane to fronto-parallel view

I'm working on implementing Ankush Gupta's synthetic data generation dataset (http://www.robots.ox.ac.uk/~vgg/data/scenetext/gupta16.pdf). In his work, he used a convolutional neural network to extract a point cloud from a 2-dimensional scenery image, segmented the point cloud to isolate different planes, used RANSAC to fit a 3D plane to the point cloud segments, and then warped the pixels for the segment, given the 3D plane, to a fronto-parallel view.
I'm stuck on this last part: warping my extracted 3D plane to a fronto-parallel view. I have X, Y, and Z vectors as well as a normal vector. I'm thinking that what I need to do is perform some type of perspective transform or rotation that would bring all the pixels on the plane to a constant Z of 0 while the X and Y remain the same. I could be wrong about this; it's been a long time since I've had any formal training in geometry or linear algebra.
It looks like skimage's perspective transform requires me to know the dimensions of the final segment coordinates in 2D space. It looks like AffineTransform requires me to know the rotation. All I have at this point is my X, Y, Z and normal vector, and the suspicion that I may know my destination plane by just setting the Z axis to all zeros. I'm not sure if my assumption is correct, but I need to be able to warp all the pixels in the segment of interest to fronto-parallel, fit a bounding box, place text inside it, then warp the final segment back to the original perspective in 3D space.
Any help with how to think about this or implement it would be massively useful.
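One way to think about it (a sketch of the rotation idea, not a verified implementation of the paper): build the rotation matrix that maps the fitted plane's unit normal onto the viewing axis [0, 0, 1] via Rodrigues' formula; after applying it, the plane's points have nearly constant Z, so you can drop Z, work in 2D, and apply the transpose to go back.
import numpy as np

def rotation_to_z(n):
    # Rotation matrix that maps the unit normal n onto [0, 0, 1] (Rodrigues' formula).
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    c = np.dot(n, z)
    if np.isclose(c, -1.0):                 # normal points exactly away from the camera
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Example normal (assumed); R rotates it onto the +Z axis.
n = np.array([0.2, -0.3, 0.93])
R = rotation_to_z(n)
print(R @ (n / np.linalg.norm(n)))          # ~[0, 0, 1]

# For an (N, 3) array of plane points pts (assumed), pts_rot = pts @ R.T has nearly
# constant Z, so pts_rot[:, :2] are fronto-parallel 2D coordinates; pts_rot @ R undoes it.
Once the fronto-parallel extent is known, a homography between the segment's corners in the image and their rotated 2D positions (e.g. skimage.transform.ProjectiveTransform with warp) could carry the actual pixels across.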
