Extracting relevant information from .obj 3d file - graphics

I have generated a .obj file from a scan with a 3D scanner. However, I am not sure how to interpret all this data. I have looked on Wikipedia and understood the general structure of the .obj file. My goal is to extract some information about the colour, and I am not sure how to do that. What do the numbers in the vt line represent, and how can I use them to come up with a colour? My end objective is to scan a foot and cancel out the floor "portion" of the scan. When scanning the foot, the floor is also part of the scan, and I would like to disregard the floor and concentrate on the foot. Here is a small part of the .obj file:

Looks like the Wavefront OBJ ASCII file format ... so google a bit and you will find tons of descriptions. In your example:
v x y z means a point coordinate (vertex) [x,y,z]
vn nx ny nz means the normal vector (nx,ny,nz) of the last point [x,y,z]
vt tx ty means a texture coordinate [tx,ty]
Vertices are the points of the polygonal mesh. Normals are used for lighting computations (shading), so if you do not use them you can skip them. The colour is stored in a texture image and you pick it as a pixel at [tx,ty]; the range is tx,ty=<-1,+1> or <0,+1>, so you need to rescale to the image resolution.
So you need to read all this data into some table and then find the section with faces (it starts with f):
f v1 v2 v3 means render a polygon with 3 vertices, where v1,v2,v3 are indices of vertices from the table. Beware that the indexing starts from 1, so for C++-style arrays you need to decrement the indices by 1.
There are a lot of deviations, so without an example it is hard to elaborate further (your example shows only the start of the vertex table).
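A minimal Python sketch of reading the vertex, texture-coordinate and face tables and sampling the texture for a colour; the file names here are assumptions (the real texture name normally comes from the .mtl material file referenced by the OBJ):

from PIL import Image   # assumption: the texture is an ordinary image file

vertices, texcoords, faces = [], [], []
with open("scan.obj") as f:               # hypothetical file name
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append([float(c) for c in parts[1:4]])
        elif parts[0] == "vt":
            texcoords.append([float(c) for c in parts[1:3]])
        elif parts[0] == "f":
            # each element is "v", "v/vt" or "v/vt/vn"; OBJ indices are 1-based
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])

tex = Image.open("scan_texture.png")      # hypothetical texture image referenced by the material
w, h = tex.size
tx, ty = texcoords[0]                     # first texture coordinate, just as an example
colour = tex.getpixel((int(tx * (w - 1)), int((1.0 - ty) * (h - 1))))  # v axis is usually flipped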

Related

Create a .stl file from a collection of points

So the software I am using accepts 3D objects in the form of contours or .stl files. The contours I have are along the z-plane (each plane has a unique z). I have had to modify the contours for my experiment, and now the contours do not have a unique z for each plane (they are now slightly angled with respect to the z=0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices using Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".
So after seeking an answer for months and trying to use MeshLab and Blender, I finally stumbled across the answer using numpy-stl. Hopefully it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

# fin_list holds one triangle per entry (three [x, y, z] vertices each)
num_triangles = len(fin_list)
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)
for i in range(num_triangles):
    # I did not know how to use numpy arrays in this case. This was the major roadblock.
    # Assign the vertex coordinates of the i-th triangle (v1x ... v3z) to the mesh record.
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])
m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle in the mesh go in as vectors, and their order defines the surface normal. I just collected three such vertices that form a triangle and wrote them into the mesh. Since I had a regular array of points, it was easy to collect the triangles:
for i in range(len(point_list)-1):
    plane_a=[]
    plane_b=[]
    for j in range(len(point_list[i])-1):
        tri_a=[]
        tri_b=[]
        # series a triangles
        tri_a.append(point_list[i+1][j])
        tri_a.append(point_list[i][j+1])
        tri_a.append(point_list[i][j])
        # series b triangles
        tri_b.append(point_list[i+1][j])
        tri_b.append(point_list[i+1][j+1])
        tri_b.append(point_list[i][j+1])
        # load to plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
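A small sketch of how these per-plane groups can be flattened into the fin_list consumed by the mesh-writing loop above (assuming group_a and group_b were initialised to empty lists before the loop):

fin_list = []
for plane in group_a + group_b:
    fin_list.extend(plane)          # each entry is one triangle: three [x, y, z] points
# fin_list[i] then supplies the v1/v2/v3 coordinates written into data["vectors"][i]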
The rules for choosing triangles for creating a mesh are as follows:
The vertices must be arranged in a counter-clockwise direction.
Each triangle must share two vertices with adjacent triangles.
The normal direction must point out of the surface.
There were two more rules that I did not follow, but it still worked in my case:
1. All coordinates must be positive (in the first quadrant only)
2. All triangles must be arranged in an increasing z-order.
Note: There are two kinds of .STL file format: binary and ASCII. numpy-stl writes out the binary format. More info on STL files can be found here.
Hope this helps!

Determining the direction of face normals consistently?

I'm a newbie to computer graphics so I apologize if some of my language is inexact or the question misses something basic.
Is it possible to calculate face normals correctly, given a list of vertices, and a list of faces like this:
v1: x_1, y_1, z_1
v2: x_2, y_2, z_2
...
v_n: x_n, y_n, z_n
f1: v1,v2,v3
f2: v4,v2,v5
...
f_m: v_j, v_k, v_l
Each x_i, y_i, z_i specifies the vertex's position in 3D space (but isn't necessarily a vector)
Each f_i contains the indices of the three vertices specifying it.
I understand that you can use the cross product of two sides of a face to get a normal, but the direction of that normal depends on the order and choice of sides (from what I understand).
Given that this is the only data I have, is it possible to correctly determine the direction of the normals? Or is it possible to determine them consistently at least? (all normals may be pointing in the wrong direction?)
In general there is no way to assign normals "consistently" all over a set of 3D faces... consider as an example the famous Möbius strip...
You will notice that if you start walking on it, after one loop you get to the same point but on the opposite side. In other words this strip doesn't have two faces, but only one. If you build such a shape with a strip of triangles, of course there's no way to assign normals in a consistent way and you'll necessarily end up having two adjacent triangles with normals pointing in opposite directions.
That said, if your collection of triangles is indeed orientable (i.e. there actually exists a consistent normal assignment), a solution is to start from one triangle and then propagate to neighbors like in a flood-fill algorithm. For example in Python it would look something like:
active = [triangles[0]]
oriented = set([triangles[0]])
while active:
    next_active = []
    for tri in active:
        for other in neighbors(tri):
            if other not in oriented:
                if not agree(tri, other):
                    flip(other)
                oriented.add(other)
                next_active.append(other)
    active = next_active
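The neighbors(), agree() and flip() helpers are left abstract above. A rough sketch of what agree and flip could look like, assuming each triangle is a mutable list of three vertex indices (in that case the oriented set would have to track triangle ids rather than the lists themselves):

def agree(tri, other):
    # Consistently oriented neighbours traverse their shared edge in opposite
    # directions, so they disagree exactly when a directed edge appears in both.
    edges = {(tri[i], tri[(i + 1) % 3]) for i in range(3)}
    other_edges = {(other[i], other[(i + 1) % 3]) for i in range(3)}
    return not (edges & other_edges)

def flip(other):
    # Reverse the winding by swapping two vertex indices in place.
    other[1], other[2] = other[2], other[1]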
In CG it's done by the polygon winding rule. That means all the faces are defined so the points are in CW (or CCW) order when looking at the face directly. Then using the cross product will lead to consistent normals.
However, many meshes out there do not comply with the winding rule (some faces are CW, others CCW, not all the same), and for those it's a problem. There are two approaches I know of:
for simple shapes (not too concave)
the sign of the dot product of your face_normal and face_center-cube_center (cube_center being the center of your object) will tell you if the normal points inside or outside of the object:
if ( dot( face_normal , face_center-cube_center ) >= 0.0 ) normal_points_out
You can even use any point of the face instead of the face center. Anyway, for more complex concave shapes this will not work correctly.
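A small numpy sketch of that test, assuming the face is a triangle given as three numpy vertices and object_center is, say, the mean of all mesh vertices (the names are illustrative):

import numpy as np

def normal_points_out(v0, v1, v2, object_center):
    face_normal = np.cross(v1 - v0, v2 - v0)       # winding-dependent normal
    face_center = (v0 + v1 + v2) / 3.0
    return np.dot(face_normal, face_center - object_center) >= 0.0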
test if a point above the face is inside or not
simply displace the center of the face by some small distance (not too big) in the normal direction and then test if that point is inside the polygonal mesh or not:
if ( !inside( face_center+0.001*face_normal ) ) normal_points_out
To check if a point is inside or not you can use a hit test (e.g. ray casting).
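One possible way to do that hit test in Python is to cast a ray from the displaced point and count ray/triangle intersections (Möller-Trumbore); an odd count means the point is inside. A sketch, assuming triangles is a list of numpy vertex triples and point is a numpy array:

import numpy as np

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    # Möller-Trumbore ray/triangle intersection; True only for hits with t > 0.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                 # ray parallel to the triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps

def inside(point, triangles):
    # Cast a ray in +x and count crossings: odd = inside, even = outside.
    direction = np.array([1.0, 0.0, 0.0])
    hits = sum(ray_hits_triangle(point, direction, v0, v1, v2)
               for v0, v1, v2 in triangles)
    return hits % 2 == 1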
However, if the normal is used just for lighting computations, then its usage is usually inside a dot product. So we can use its absolute value instead, and that will solve all lighting problems regardless of which side the normal points. For example:
output_color = face_color * abs(dot(face_normal,light_direction))
Some gfx APIs have implemented this already (look for double-sided materials or normals; turning them on usually uses the abs value ...). For example in OpenGL:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Why is a normal vector necessary for STL files?

STL is the most popular 3D model file format for 3D printing. It records the triangular surfaces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format. Each triangle is represented by 12 floating-point numbers. The first 3 define the normal vector, and the next 9 define the three vertices. But here's one question: three vertices are sufficient to define a triangle. The normal vector can be computed by taking the cross product of two vectors (each pointing from one vertex to another).
I know that a normal vector can be useful in rendering, and by including a normal vector the program doesn't have to compute the normal vectors every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose? Would it produce wrong results in the rendering software?
On the other hand, 3 vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the size of the file by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why does the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and builds the triangles according to the vertices. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong direction of the surface normal, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and it creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.

How can I create an image morpher inside a graphics shader?

Image morphing is mostly a graphic-design special effect used to adapt one picture into another using some points decided by the artist, who has to match the eyes and some key zones of one portrait with the other; then some kind of algorithm adapts the entire picture to change from one to the other.
I would like to do something a bit similar with a shader, which can load any 2 graphics, automatically choose zones of the most similar colors in the same kinds of zones of the pictures, and automatically morph the two pictures in real-time processing. Perhaps a shader-based version would logically be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about a complete reply about the process; it would be great if you have some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
morph by interpolation
You have 2 data sets with similar properties (for example 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation (bilinear in space plus linear in the morph parameter) if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
for (y=0;y<img1.height;y++)
 for (x=0;x<img1.width;x++)
  img.pixel[x][y]=(1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y];
where img1,img2 are the input images and img is the output. Beware that t is a float, so you need to cast appropriately to avoid integer rounding problems, or use the scale t=<0,256> and correct the result by a bit shift right by 8 bits or by /256. For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
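For reference, the same per-pixel cross-dissolve written with numpy on the CPU (a sketch assuming img1 and img2 are equal-sized uint8 arrays):

import numpy as np

def crossfade(img1, img2, t):
    # Work in float to avoid the integer rounding problem mentioned above,
    # then convert back to 8-bit.
    out = (1.0 - t) * img1.astype(np.float32) + t * img2.astype(np.float32)
    return out.astype(np.uint8)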
All this can be done very easily in a fragment shader. Just bind img1,img2 to texture units 0,1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates, and on the main program side you just draw a single quad covering the final image output...
morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example something like this: Morph a cube to coil. This is suited for vector graphics; you just need to have point correspondence and then the interpolation is similar to #1.
for (i=0;i<points;i++)
 {
 p(i).x=(1.0-t)*p1(i).x + t*p2(i).x;
 p(i).y=(1.0-t)*p1(i).y + t*p2(i).y;
 }
where p1(i),p2(i) are the i-th points from each input geometry set and p(i) is the point in the final result...
To enhance the visual appearance, the linear interpolation is exchanged for a specific trajectory (like Bézier curves) so the morph looks cooler. For example see
Path generation for non-intersecting disc movement on a plane
To accomplish this you need to use a geometry shader (or maybe even a tessellation shader). You would need to pass both polygons as a single primitive; then the geometry shader should interpolate the actual polygon and pass it on down the pipeline.
morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both ... because there is a very slim chance you will get exact color matches, and in matching counts ... (the histograms would have to be the same), which is improbable.
hybrid morphing
It is any combination of #1,#2,#3
I am sure there are more methods for morphing; these are just the ones I know of. Also morphing can be done not only in the spatial domain...

Looking for a fast polygon rendering algorithm

I am working with a Microchip dsPIC33FJ128GP802. It's a small DSP-based microcontroller, and it doesn't have much power (40 million instructions per second). I'm looking for a way to render a convex (i.e. simple) polygon. I am only dealing with 2D shapes, integer math, and set or clear pixels (i.e. 1 bit per pixel.) I already have routines for drawing fast horizontal and vertical lines (writing up to 16 pixels in 88 cycles), so I would like to use a scanline algorithm.
However, all the algorithms I have found seem to depend on division (which takes 18 cycles on this processor) and floating-point math (which is emulated in software and so is very slow; it also takes up a lot of ROM), or assume that I have a large amount of memory. I only have 2K left; ~14K of my 16K is used for graphics RAM. So does anyone know of any good embedded-machine algorithms they can point me to, with a simple C or pseudocode implementation which I can implement in assembly? Preferably on the 'net; I don't live near any good bookstores with many programming books.
Thanks. :)
EDIT: Clarification, this is a polygon filling algorithm I'm looking for. I can implement a polygon outline algorithm using Bresenham's line drawing algorithm (as Marc B suggests.)
EDIT #2: I wanted to let everyone know I got a basic algorithm up in Python. Here's a link to the code. Public domain code.
http://dl.dropbox.com/u/1134084/bresenham_demos.py
How about Bresenham's Line algorithm? After some setup, it's pure integer math, and can be adapted to draw a polygon by simple iteration of starting points along the polygon edges.
comments followup:
I'll try to draw this in ASCII, but it'll probably look like crud. Bresenham's can be used to draw a filled polygon by picking a starting edge, and iteratively moving a Bresenham line across the canvas parallel to that edge.
Let's say you've got some points like this:
*(1)
*(3)
*(2)
*(4)
These are numbered in left-right sort priority, so you pick the left-most starting point (1) and decide if you want to go vertically (start 1,2) or horizontally (1,3). That'd probably depend on how your DSP does its display, but let's go with vertical.
So... You use the 1-2 line as your starting bresenham line. You calculate the starting points of your fill lines by using lines 1-3 and 2-4 as your start/end points. Start a bresenham calculation for each, and draw another Bresenham between those two points. Kinda like:
1.1 -> 2.1, then 1.2 -> 2.2, then 1.3 -> 2.3
etc... until you reach the end of either of those lines. In this case, that'd be when the lower starting point reaches (4). At that point, you start iterating up the 4,3 line, until you reach point 3 with both starting points, and you're done.
*-------
\\\\\\\\ *
\\\\\\\\
*-----\\
------- *
Where the dashes are the starting points you calculated along 1-3 and 2-4, and the slashes are the fill lines.
Of course, this only works if the points are properly sorted, and you've got a convex polygon. If it's concave, you'll have to be very careful to not let your fill lines cross over the border, or do some pre-processing and subdivide the original poly into two or more convex ones.
You may want to look at Michael Abrash's articles in Dr. Dobb's about polygon fill/raster/etc. They use fixed-point math.
Thomas, if you have a Bresenham line-drawing algorithm available, then use it as a basis for further enhancement: divide your polygon into sub-polygons with a horizontal cutting line through every vertex. Then start tracing the left and right sides of each of these sub-polys using Bresenham. This way you have the 2 end-points of each scan line in your polygon.
I would start by converting the polygon to a collection of triangles and render those, because triangles are easy to render by scanlines. Although even so there are some details.
Essentially, the draw-triangle sub-procedure will be given a raw triangle and proceed:
Reject degenerate triangles (where two of the three vertices overlap).
Sort the vertices in Y (since there are only three you can hardcode the sorting logic).
Now, at this point you should know that there will be three kinds of triangles: ones with a flat top, ones with a flat bottom, and "general" triangles. You want to handle a general triangle by essentially splitting it into one each of the flat types. This is because you don't want to have an if test every scanline to detect if the slope changed.
To render a flat triangle, you would run two Bresenham algorithms in parallel to iterate the pixels comprising the edges, and use the points they give you as the endpoints of each horizontal scanline.
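A rough Python sketch of filling one flat-bottom triangle this way; set_hline is a hypothetical stand-in for the fast horizontal-line routine, and a real embedded version would track the edge x positions with Bresenham-style integer error terms instead of dividing on every scanline:

def fill_flat_bottom_triangle(set_hline, apex, left, right):
    # apex = (xa, ya) is the top vertex; left/right share the same bottom y (yb > ya).
    xa, ya = apex
    xl, yb = left
    xr, _ = right
    height = yb - ya
    for y in range(ya, yb + 1):
        x_start = xa + (xl - xa) * (y - ya) // height   # left edge x at this scanline
        x_end = xa + (xr - xa) * (y - ya) // height     # right edge x at this scanline
        set_hline(y, x_start, x_end)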
It may be easier to break the problem into two parts. First, locate/write an algorithm that draws and fills a triangle. Second, write an algorithm that breaks up an arbitrary polygon into triangles (using different combinations of the vertices).
To draw/fill a triangle, use Bresenham's Line Algorithm to simultaneously draw a line between points 0 and 1, and between 1 and 2. For each input point x, draw the pixel if it is equal to or in between the y points generated by the two lines. When you reach one endpoint, continue by using the unfinished side and the side that has not yet been used.
Edit:
To break your convex polygon into triangles, arrange the points in order and call them P1, P2, ... PN. Let P1 be your "root" point, and build triangles using that point and combinations of adjacent points. For example, a pentagon would yield the three triangles P1-P2-P3, P1-P3-P4, and P1-P4-P5. In general, a convex polygon with N sides will decompose into N-2 triangles.
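A minimal sketch of that fan decomposition (plain Python, assuming points holds the convex polygon's vertices in order):

def fan_triangulate(points):
    root = points[0]
    return [(root, points[i], points[i + 1]) for i in range(1, len(points) - 1)]

# e.g. a pentagon [P1, P2, P3, P4, P5] yields (P1,P2,P3), (P1,P3,P4), (P1,P4,P5)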
