Create a .stl file from a collection of points - python-3.x

So the software I am using accepts 3D objects in the form of contours or .stl files. The contours I have lie in planes of constant z (each contour has a unique z). I have had to modify the contours for my experiment, and now the contours no longer have a unique z for each plane (they are slightly angled with respect to the z = 0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices from Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".

So after seeking an answer for months and trying to use Meshlab and Blender, I finally stumbled across the answer using numpy-stl. Hopefully it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

num_triangles = len(fin_list)  # fin_list holds the triangles, each as three (x, y, z) vertices
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)

for i in range(num_triangles):
    # I did not know how to use numpy arrays in this case. This was the major roadblock.
    # v1x, v1y, ... are the co-ordinates of the i-th triangle's three vertices.
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])

m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle go into the mesh as one vector entry, and their order defines the surface normal. I simply collected three such vertices for each triangle and wrote them into the mesh. Since I had a regular array of points, it was easy to collect the triangles:
group_a = []
group_b = []
for i in range(len(point_list) - 1):
    plane_a = []
    plane_b = []
    for j in range(len(point_list[i]) - 1):
        tri_a = []
        tri_b = []
        # series a triangles
        tri_a.append(point_list[i + 1][j])
        tri_a.append(point_list[i][j + 1])
        tri_a.append(point_list[i][j])
        # series b triangles
        tri_b.append(point_list[i + 1][j])
        tri_b.append(point_list[i + 1][j + 1])
        tri_b.append(point_list[i][j + 1])
        # load to plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
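A minimal sketch of how the collected triangles might then be flattened into the fin_list used above and written out with numpy-stl (the flattening step is my assumption about how the pieces fit together, not part of the original answer):

import numpy as np
from stl import mesh

# group_a / group_b are lists of planes, each plane a list of triangles,
# each triangle a list of three (x, y, z) points.
fin_list = [tri for plane in group_a + group_b for tri in plane]

data = np.zeros(len(fin_list), dtype=mesh.Mesh.dtype)
for i, (v1, v2, v3) in enumerate(fin_list):
    data["vectors"][i] = np.array([v1, v2, v3], dtype=float)

mesh.Mesh(data).save('surface.stl')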
The rules for choosing triangles when creating a mesh are as follows:
The vertices must be arranged in counter-clockwise order.
Each triangle must share two vertices with its adjacent triangles.
The surface normal must point out of the surface.
There were two more rules that I did not follow, but it still worked in my case:
1. All coordinates must be positive (in the 1st quadrant only).
2. All triangles must be arranged in increasing z-order.
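As a quick sanity check on the winding rules above (a sketch, not part of the original recipe): the normal implied by a triangle's vertex order can be computed with a cross product, and swapping two vertices flips it.

import numpy as np

def triangle_normal(v1, v2, v3):
    # Normal implied by the vertex order (right-hand rule); not normalised.
    return np.cross(np.subtract(v2, v1), np.subtract(v3, v1))

# If the normal points into the surface instead of out of it,
# swap any two vertices of that triangle to flip the winding.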
Note: There are two kinds of .STL file format: binary and ASCII. numpy-stl writes out the binary format. More info on STL files can be found here.
Hope this helps!

Related

Extracting relevant information from .obj 3d file

I have generated a .obj file from a scan with a 3D scanner. However, I am not sure how to interpret all this data. I have looked on Wikipedia and understood the general structure of the .obj file. My goal is to extract some information about the colour, and I am not sure how to do that. What do the numbers in the vt line represent, and how can I use them to come up with a colour? My end objective is to scan a foot and cancel out the floor "portion" of the scan. When scanning the foot, the floor is also part of the scan, and I would like to disregard the floor and concentrate on the foot. Here is a small part of the .obj file:
Looks like the Wavefront OBJ ASCII file format ... so google a bit and you will find tons of descriptions. In your example:
v x y z means a point coordinate (vertex) [x,y,z]
vn nx ny nz means the normal vector (nx,ny,nz) of the last point [x,y,z]
vt tx ty means a texture coordinate [tx,ty]
Vertices are the points of the polygonal mesh. Normals are used for lighting computations (shading), so if you do not use them you can skip them. The colour is stored in some texture image, and you pick it as the pixel at [tx,ty]; the range is tx,ty = <-1,+1> or <0,+1>, so you need to rescale to the image resolution.
So you need to read all this data to some table and then find section with faces (starts with f):
f v1 v2 v3 means render a polygon with 3 vertices, where v1, v2, v3 are indexes of vertices in the table. Beware that the indexing starts from 1, so for C++-style arrays you need to decrement the indexes by 1.
There are a lot of deviations, so without an example it is hard to elaborate further (your example shows only the start of the vertex table).
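As a rough sketch of how such a file could be read in Python (the subset of tags handled is my own choice, not a full parser):

def read_obj(path):
    vertices, normals, texcoords, faces = [], [], [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'v':
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == 'vn':
                normals.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == 'vt':
                texcoords.append(tuple(float(x) for x in parts[1:3]))
            elif parts[0] == 'f':
                # entries look like v, v/vt or v/vt/vn; indices are 1-based
                faces.append([int(p.split('/')[0]) - 1 for p in parts[1:]])
    return vertices, normals, texcoords, faces

The (tx, ty) pair of a vertex can then be rescaled to the texture image's resolution to look up that vertex's colour.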

I have the country boundaries. How do I fill in the interior with dots?

I got my country lat/long boundaries from koordinates.com. Now I want to fill in the interior with dots.
Since the file I have is KML, I was thinking of converting the coordinates to cartesian using the NetTopologySuite.
I do not want a polygon overlay. I want to generate dots/coordinates for the polygons interior - ideally at a density of my choosing.
I have seen algorithms like this one, http://alienryderflex.com/polygon_fill/. Is there a library that will do this for me? Alternatively, can someone share code?
Ultimately, I will convert the dot coordinates back to lat/long and populate a globe like this one
http://code.google.com/p/webgl-globe/
I'm afraid GIS isn't my area of expertise, but I've got two ideas:
Generate a set of random points. You can use a point-in-polygon function to determine whether your points are in the right place.
You can use a rectangular grid of points and use a 'resolution' to determine how many points there will be and how close together. You can offset the grid positions to make them look more random if you need to. For each point inside the bounding rectangle of your polygon, you check whether it is inside the polygon or not.
Notice that the webgl-globe example uses a grid of points (similar to idea 2) converted to spherical coordinates.
Both ideas are quite similar; only the point distribution differs.
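The question mentions C# and NetTopologySuite, but as a rough illustration of idea 2 here is a sketch in Python with Shapely (the boundary coordinates and spacing are placeholders):

import numpy as np
from shapely.geometry import Point, Polygon

def fill_with_dots(boundary_coords, spacing):
    # Lay a regular grid over the polygon's bounding box and keep
    # only the grid points that pass the point-in-polygon test.
    poly = Polygon(boundary_coords)
    minx, miny, maxx, maxy = poly.bounds
    dots = []
    for x in np.arange(minx, maxx, spacing):
        for y in np.arange(miny, maxy, spacing):
            if poly.contains(Point(x, y)):
                dots.append((x, y))
    return dots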
You can find a roughly related implementation I did using actionscript here,
but I would also suggest asking on the GIS site.

Detecting arbitrary shapes

Greetings,
We have a set of points which represent an intersection of a 3d body and a horizontal plane. We would like to detect the 2D shapes that represent the cross sections of the body. There can be one or more such shapes. We found articles that discuss how to operate on images using Hough Transform, but we may have thousands of such points, so converting to an image is very wasteful. Is there a simpler way to do this?
Thank you
In converting your 3D model to a set of points, you have thrown away the information required to find the intersection shapes. Walk the edge-face connectivity graph of your 3D model to find the edge-plane intersection points in order.
Assuming you have, or can construct, the 3D model topology (some number of vertices, edges between vertices, faces bounded by edges):
Iterate through the edge list until you find one that intersects the test plane, add it to a list
Pick one of the faces that share this edge
Iterate through the other edges of that face to find the next intersection, add it to the list
Repeat for the other face that shares that edge until you arrive back at the starting edge
You've built an ordered list of edges that intersect the plane - it's trivial to linearly interpolate each edge to find the intersection points, in order, that form the intersection shape. Note that this process assumes that the face polygons are convex, which in your case they are.
If your volume is concave you'll have multiple discrete intersection shapes, and so you need to repeat this process until all edges have been examined.
There's some java code that does this here
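A rough Python sketch of that walk (not the linked Java code; edges, faces_of_edge, edges_of_face and edge_intersects_plane are assumed helper accessors for the mesh topology):

def intersection_loop(edges, plane):
    # Find a starting edge that crosses the plane, then walk face-to-face.
    start = next(e for e in edges if edge_intersects_plane(e, plane))
    loop = [start]
    face = faces_of_edge(start)[0]   # pick one of the two faces sharing it
    edge = start
    while True:
        # the other edge of this face that crosses the plane
        edge = next(e for e in edges_of_face(face)
                    if e is not edge and edge_intersects_plane(e, plane))
        if edge is start:            # back at the starting edge: loop closed
            break
        loop.append(edge)
        # step to the face on the other side of that edge
        face = next(f for f in faces_of_edge(edge) if f is not face)
    return loop  # ordered edges; interpolate each to get the contour points

Repeating this for any edges not yet visited handles the multiple-loop (concave) case mentioned above.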
The algorithm / code from the accepted answer does not work for complex special cases, when the plane intersects some vertices of a concave surface. In this case, "walking" the edge-face connectivity graph greedily could close some of the polygons prematurely.
What happens is that, because the plane intersects a vertex, at some point while walking the graph there are two possibilities for the next edge, and it does matter which one is chosen.
A possible solution is to implement a graph traversal algorithm (for instance, depth-first search) and choose the longest loop which contains the starting edge.
It looks like you wanted to combine intersection points back into connected figures using some detection or Hough Transform.
A much simpler and more robust way is to immediately get not just the intersection points but the contours of the 3D body where the plane cuts it.
To construct contours on a body given by a triangular mesh, define the value at each mesh vertex equal to the signed distance from the plane (positive on one side of the plane and negative on the other). The marching squares algorithm for isovalue = 0 can then be applied to extract the segments of the contours.
This algorithm works well even when the plane passes through a vertex or an edge of the mesh.
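A minimal sketch of that idea for a triangular mesh, interpolating each triangle edge whose endpoint distances change sign (the vertex-on-plane cases that marching squares handles are skipped here for brevity):

import numpy as np

def contour_segments(vertices, triangles, plane_point, plane_normal):
    # Segments where the plane cuts a triangular mesh, via signed distances.
    V = np.asarray(vertices, dtype=float)
    d = (V - plane_point) @ np.asarray(plane_normal, dtype=float)  # signed distance per vertex
    segments = []
    for i0, i1, i2 in triangles:                   # each triangle as three vertex indices
        crossings = []
        for a, b in ((i0, i1), (i1, i2), (i2, i0)):
            if d[a] * d[b] < 0:                    # this edge crosses the plane
                t = d[a] / (d[a] - d[b])           # interpolation parameter along the edge
                crossings.append(V[a] + t * (V[b] - V[a]))
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments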
To better understand what the result of a plane section looks like, please take a look at this short video. Following the links there, one can find the implementation as well.

Determine outer boundries of polygon from lat/lng point array

I have a large array of lat/lng points. Could be up to 20k points. I'm plotting them using KML. What I want to do is take only the outermost points and use them to draw a polygon instead. I already know how to draw a polygon in KML; I just need to figure out how to select only the outermost points of the group.
Any ideas? I'd like to have at least 5 points to the polygon but no more than 25 points total.
So far I've come up with checking for the topmost and bottommost points (basically creating a square) using < & > logic.
The points will be in America & Canada only, if that matters.
Thanks for any help.
EDIT: I've gotten the convex hull algorithm to work, but it isn't exactly what I need. I'm trying to map out zip codes. If a zip code has an L shape, then the polygon will be a triangle and not an L shape. Any ideas?
You need to use a Convex Hull algorithm. It's not too hard to implement yourself if it's not available in whatever software package you're using.
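If a library is an option, scipy's ConvexHull returns the outermost points directly; a minimal sketch (latlng_points is a placeholder for your array):

import numpy as np
from scipy.spatial import ConvexHull

points = np.asarray(latlng_points)     # shape (n, 2): one lat/lng pair per row
hull = ConvexHull(points)
outline = points[hull.vertices]        # hull vertices, in counter-clockwise order

Note that, as the edit points out, a convex hull only gives the outer convex outline, so it will smooth over an L-shaped boundary.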

Creating closed spatial polygons

I need to create a (large) set of spatial polygons for test purposes. Is there an algorithm that will create a randomly shaped polygon staying within a bounding envelope? I'm using OGC Simple stuff, so a routine to create the well-known text is the most useful. Language of choice is C#, but it's not that important.
Here you can find two examples of how to generate random convex polygons. They are both in Java, but it should be easy to rewrite them in C#:
Generate Polygon example from Sun
from JTS mailing list, post Minimum Area bounding box by Michael Bedward
Another possible approach is based on generating a set of random points and employing Delaunay tessellation.
Generally, the problem of generating proper random polygons is not trivial.
Do they really need to be random, or would some real WKT do? Because if it would, just go to http://koordinates.com/ and download a few layers.
What shape is your bounding envelope? If it's a rectangle, then generate your random polygon as a list of points within [0,1]x[0,1] and scale to the size of your rectangle.
If the envelope is not a rectangle, things get a little trickier. In this case you might get the best performance simply by generating points inside the unit square and rejecting any which lie in the part of the unit square that does not scale to the bounding envelope of your choice.
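Sketched in Python with Shapely rather than C#, just to illustrate the rejection idea (envelope stands for your bounding-envelope geometry):

import random
from shapely.geometry import Point

def random_points_in_envelope(envelope, n):
    # Rejection sampling: draw uniformly over the bounding rectangle,
    # keep only the points that land inside the envelope itself.
    minx, miny, maxx, maxy = envelope.bounds
    pts = []
    while len(pts) < n:
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if envelope.contains(p):
            pts.append(p)
    return pts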
HTH
Mark
Supplement
If you wanted only convex polygons you'd use one of the convex hull algorithms. Since you don't seem to want only convex polygons your suggestion of a circular sweep would work.
But you might find it simpler to sweep along a line parallel to either the x- or y-axis. Assume the x-axis.
Sort the points into x-order.
Select the leftmost (ie first) point. At the y-coordinate of this point draw an imaginary horizontal line across the unit square. Prepare to create a list of points along the boundary of the polygon above the imaginary line, and another list along the boundary below it.
Select the next point. Add it to the upper or lower boundary list as determined by its y-coordinate.
Continue until you're out of points.
This will generate convex and non-convex polygons, but the non-convexity will be of a fairly limited form: no inlets or twists and turns.
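As a literal sketch of that sweep on random points in the unit square (just mirroring the steps above, not a robust generator):

import random

def sweep_polygon(n):
    pts = sorted((random.random(), random.random()) for _ in range(n))  # x-order
    leftmost = pts[0]
    y0 = leftmost[1]                       # the imaginary horizontal line
    upper = [p for p in pts[1:] if p[1] >= y0]
    lower = [p for p in pts[1:] if p[1] < y0]
    # out along the upper boundary, back along the lower boundary
    return [leftmost] + upper + list(reversed(lower))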
Another Thought
To avoid edge crossings and to avoid a circular sweep after generating your random points inside the unit square you could:
Generate random points inside the unit circle in polar coordinates, ie (r, theta).
Sort the points in theta order.
Transform to cartesian coordinates.
Scale the unit circle to a bounding ellipse of your choice.
Off the top of my head, that seems to work OK
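A short sketch of that recipe (the ellipse centre and radii parameters are placeholders):

import math
import random

def random_polygon(n, cx=0.0, cy=0.0, rx=1.0, ry=1.0):
    # Random (r, theta) points in the unit circle, sorted by theta,
    # converted to cartesian and scaled onto an ellipse centred at (cx, cy).
    pts = sorted((random.uniform(0, 2 * math.pi), random.random()) for _ in range(n))
    return [(cx + r * rx * math.cos(theta), cy + r * ry * math.sin(theta))
            for theta, r in pts]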
