I am trying to write an .obj file with vertices. I am computing the vertex normals in my code and writing them to this file as well. When I render this file in MeshLab it reads the vertices correctly, but when I go to 'Render->Show vertex normals', it does not show the normals that I computed. Instead, MeshLab computes its own normals and shows those.
I am not sure how to visualize the normals that I computed and wrote to the file. I want to apply a MeshLab shader later based on my computed normals.
To test this I created a test .obj file:
vn 0.517350 0.517350 0.517350
v 0.500000 0.500000 0.500000
vn -0.333333 0.666667 0.666667
v -0.500000 0.500000 0.500000
vn 0.666667 -0.333333 0.666667
v 0.500000 -0.500000 0.500000
vn -0.666667 -0.666667 0.333333
v -0.500000 -0.500000 0.500000
f 1//1 2//2 3//3
f 4//4 3//3 2//2
This is just one square. If I change the normal values in this file, MeshLab still shows its own vertex normals when I select 'Render->Show vertex normals'.
How can I keep my own normals and apply a shader that works on them?
Thanks!
Not all OBJ importers respect normals. I found this old bug, which appears to still be open, about how MeshLab ignores normals in OBJs: http://sourceforge.net/p/meshlab/bugs/70/
You might be doing everything correctly; the issue may not be on your side.
Mesh interchange can get quite hairy because of the different levels of support in various software, so if you do it a lot it is handy to have multiple 3D applications to test your exported data against. Then you can more quickly figure out whether the problem is on your side or theirs.
One workaround, if you absolutely need the object to display correctly against a broken importer and can't use other formats, is to manually unweld (duplicate) the vertices to give you those sharp creases/hard edges. That won't give you as much freedom as arbitrarily specifying normals, but it will preserve the discontinuous boundaries where regions should not be smoothly interpolated and should instead have a crease.
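For what it's worth, that unwelding step is easy to do yourself. Here is a minimal Python sketch, assuming the mesh is held as a list of (x, y, z) vertices plus faces given as 0-based vertex indices (the names are illustrative, not from any particular library):

def unweld(vertices, faces):
    # duplicate ("unweld") vertices so each face gets its own copies,
    # which lets per-face normals produce hard edges even in naive viewers
    new_vertices = []
    new_faces = []
    for face in faces:
        new_face = []
        for idx in face:
            new_vertices.append(vertices[idx])        # copy the shared vertex
            new_face.append(len(new_vertices) - 1)    # index of the fresh copy
        new_faces.append(new_face)
    return new_vertices, new_faces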
It seems that today (2019) the problem is solved in MeshLab.
In the image below you can see your original .obj file (left) and a modified version with the orientation of one normal changed (right). The normal has changed as expected.
The software I am using accepts 3D objects in the form of contours or .stl files. The contours I have are along the z-plane (each plane has a unique z). I have had to modify the contours for my experiment, and now the contours no longer have a unique z for each plane (they are slightly angled with respect to the z=0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices using Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".
After seeking an answer for months and trying to use MeshLab and Blender, I finally stumbled across the answer using numpy-stl. Hopefully it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

num_triangles = len(fin_list)  # fin_list: one entry per triangle
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)
for i in range(num_triangles):
    # assign the vertex coordinates of the i-th triangle to the mesh;
    # this assumes fin_list[i] holds its three vertices as (x, y, z) triples
    v1x, v1y, v1z = fin_list[i][0]
    v2x, v2y, v2z = fin_list[i][1]
    v3x, v3y, v3z = fin_list[i][2]
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])
m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle in the mesh go in as a 3x3 array, and their order defines the surface normal. I just collected three such vertices that form a triangle and wrote them into the mesh. Since I had a regular grid of points, it was easy to collect the triangles:
group_a = []   # "series a" triangles, grouped per plane
group_b = []   # "series b" triangles, grouped per plane
for i in range(len(point_list) - 1):
    plane_a = []
    plane_b = []
    for j in range(len(point_list[i]) - 1):
        # series a triangle: one half of grid cell (i, j)
        tri_a = [point_list[i + 1][j],
                 point_list[i][j + 1],
                 point_list[i][j]]
        # series b triangle: the other half of the same cell
        tri_b = [point_list[i + 1][j],
                 point_list[i + 1][j + 1],
                 point_list[i][j + 1]]
        # load into the current plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
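One way to flatten those per-plane groups into the fin_list used when writing the mesh above (a sketch; the exact collection step was not shown in the original):

fin_list = []
for plane in group_a + group_b:
    for tri in plane:
        fin_list.append(tri)    # tri is a list of three (x, y, z) vertices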
The rules for choosing triangles when creating a mesh are as follows:
The vertices must be arranged in counter-clockwise order.
Each triangle must share two vertices with adjacent triangles.
The surface normal must point out of the surface (a small sketch checking this with a cross product follows this list).
There were two more rules that I did not follow, but it still worked in my case:
1. All coordinates must be positive (i.e. in the first octant only).
2. All triangles must be arranged in increasing z-order.
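For reference, the normal implied by a triangle's vertex order can be checked with a cross product; this is a small sketch, not part of the original code:

import numpy as np

def triangle_normal(v1, v2, v3):
    # right-hand rule: with counter-clockwise vertex order (seen from outside),
    # the cross product of the two edge vectors points out of the surface
    n = np.cross(np.asarray(v2) - np.asarray(v1), np.asarray(v3) - np.asarray(v1))
    return n / np.linalg.norm(n)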
Note: There are two kinds of .STL file format, binary and ASCII. numpy-stl writes the binary format by default. More info on STL files can be found here.
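If you ever need the ASCII variant instead, numpy-stl can, as far as I know, be told to write it via a mode flag; a quick sketch:

from stl import mesh, Mode

m = mesh.Mesh(data)
m.save('filename_ascii.stl', mode=Mode.ASCII)   # write ASCII STL instead of binary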
Hope this helps!
I have generated a .obj file from a scan with a 3D scanner. However, I am not sure how to interpret all this data. I have looked on Wikipedia and understood the general structure of the .obj file. My goal is to extract some information about the colour, and I am not sure how to do that. What do the numbers in the vt line represent, and how can I use them to come up with a colour? My end objective is to scan a foot and cancel out the floor "portion" of the scan. When scanning the foot, the floor is also part of the scan, and I would like to disregard the floor and concentrate on the foot. Here is a small part of the .obj file:
Looks like the Wavefront OBJ ASCII file format... so google a bit and you will find tons of descriptions. In your example:
v x y z means a point coordinate (vertex) [x,y,z]
vn nx ny nz means a normal vector (nx,ny,nz); it is linked to vertices through the face indices
vt tx ty means a texture coordinate [tx,ty]
Vertices are the points of the polygonal mesh. Normals are used for lighting computations (shading), so if you do not use them you can skip them. The colour is stored in some texture image and you pick it as the pixel at [tx,ty]; the range is usually tx,ty = <0,+1>, so you need to rescale to the image resolution.
So you need to read all this data into some table and then find the section with faces (lines starting with f):
f v1 v2 v3 means render a polygon with 3 vertices, where v1, v2, v3 are indices into the vertex table. Beware: the indexing starts from 1, so for C++-style arrays you need to decrement the indices by 1.
There are a lot of deviations, so without an example it is hard to elaborate further (your example shows only the start of the vertex table).
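Here is a rough Python sketch of pulling a colour out that way, assuming the scan references a texture image through its material; the file names and the vertical flip of the image are assumptions on my part:

from PIL import Image

tex = Image.open('texture.png')          # texture referenced by the .mtl (placeholder name)
w, h = tex.size

verts, uvs, faces = [], [], []           # faces: lists of (vertex_idx, uv_idx) per corner
with open('scan.obj') as f:              # placeholder file name
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'v':
            verts.append(tuple(map(float, parts[1:4])))
        elif parts[0] == 'vt':
            uvs.append((float(parts[1]), float(parts[2])))
        elif parts[0] == 'f':
            # each corner looks like v, v/vt or v/vt/vn; indices are 1-based
            face = []
            for corner in parts[1:]:
                ids = corner.split('/')
                if len(ids) > 1 and ids[1]:          # corner has a texture index
                    face.append((int(ids[0]) - 1, int(ids[1]) - 1))
            faces.append(face)

# colour of the first corner of the first face
v_idx, t_idx = faces[0][0]
u, v = uvs[t_idx]
colour = tex.getpixel((int(u * (w - 1)), int((1 - v) * (h - 1))))  # v flipped: image origin is top-left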
I wanted to use .obj format, but I noticed that it doesn't have representation for the type of material, i.e. opaque, transparent, reflective. Is there a common file format that includes that information as well, or should I just take the known .obj format and change it so that it'll include that info?
You might want to check MTL files. I haven't used them myself (yet), though ;)
http://people.sc.fsu.edu/~jburkardt/data/mtl/mtl.html
and
http://people.sc.fsu.edu/~jburkardt/data/obj/obj.html
Cheers
.obj can reference .mtl files, which can hold opaque, transparent, and reflective properties, colours, refractive index, and more.
The file is referenced by putting the following line at the top:
mtllib *fileName*.mtl
Then in the faces section of the .obj file you can add these:
usemtl *materialName*
Finally, in the MTL file you will want a few sections like this:
# declaration of new material
newmtl *materialName*
# shininess
Ns 0.000000
# ambient colour
Ka 0.200000 0.200000 0.200000
# diffuse colour
Kd 0.800000 0.800000 0.800000
# specular colour
Ks 1.000000 1.000000 1.000000
# refractive index
Ni 1.000000
# transparency
d 1.000000
# illumination model
illum 2
# texture
map_Kd texName.png
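Put together, a minimal .obj that uses such a material could look like this; the geometry is just an illustrative quad, and the file and material names are placeholders:

mtllib myMaterials.mtl
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 1.0 1.0
vt 0.0 1.0
usemtl materialName
f 1/1 2/2 3/3 4/4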
In working with textures, does "UVW mapping" mean the same thing as "UV mapping"?
If so why are there two terms, and what is the "W"?
If not, what's the difference between them?
[Wikipedia currently isn't illuminating on this question: http://en.wikipedia.org/wiki/Talk:UVW_mapping]
U and V are the coordinates for a 2D map. Adding the W component adds a third dimension.
It's tedious to say the least to actually hand generate a 3D texture map, but they are useful if you have a procedural way to generate texture data. E.g. if you wanted your object to look like it's a solid chunk of marble, it may be easiest to "model" the marble "texture" as a 3D procedural texture and then use 3D coordinates to draw data out of the procedural texture.
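As a toy illustration of that idea (not any particular renderer's API), a procedural "marble-ish" value can be sampled directly at a 3D point:

import math

def marble(u, v, w):
    # toy procedural 3D texture: a banded sine pattern through the volume
    value = math.sin(10.0 * u + 5.0 * math.sin(7.0 * v) + 3.0 * math.sin(4.0 * w))
    return 0.5 * (value + 1.0)   # grey level in [0, 1]

# shade any surface point by evaluating the texture at its 3D (UVW) coordinate
print(marble(0.2, 0.5, 0.8))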
UVW is to XYZ as XYZ is to world coordinates. Since XYZ was already being used to refer to world coordinates, UV is used to refer to the X and Y (2D) coordinates of a flat map. By extrapolation, the W is the Z in XYZ.
UVW implies a more complex 2D representation which is, in effect, the skin of the object that has been 'unwrapped' from the 3D object. Imagine a box 'unwrapped'. You now have a flat UVW map that you can paint on to your heart's content and then wrap back onto the six-sided box with no distortion. In short, the UVW map knows where to rewrap the x, y and z points to reform the box.
Now imagine a sphere 'unwrapped'. You might end up with something like a Mercator projection. The hitch with this approach is that when you wrap this 2D representation back onto the sphere, you will get some distortion.
The term UV mapping is very commonly used. I don't hear the term UVW as often except as described above.
The term procedural mapping can be misleading. Simply put, it means the computer is following some algorithms to paint a realistic representation of a material, like wood, onto the object, giving you the impression that the grain travels completely through the wood so it can be seen properly on both sides of the object. Procedural mapping can use images or not, or a combination of approaches...it all depends on the 'procedure'.
Lastly, there is no requirement to transform a '3D procedural texture' to 'UVW' first, since UVW and XYZ mean effectively the same thing; they refer either to the world, or to an unwrapped image of an object in the world, or for that matter to a 'chunk' of the world, as in the sky. The point is that UV or UVW refers to image/texture mapping.
I have an interesting problem coming up soon and I've started to think about the algorithm. The more I think about it, the more I get frightened, because I think it's going to scale horribly (O(n^4)) unless I can get smart. I'm having trouble getting smart about this one. Here's a simplified description of the problem.
I have N polygons (where N can be huge, >10,000,000) that are each stored as a list of M vertices (where M is on the order of 100). What I need to do is, for each polygon, create a list of any vertices that are shared with other polygons (think of the polygons as surrounding regions of interest; sometimes the regions butt up against each other). I envision output something like this:
Polygon i | Vertex | Polygon j | Vertex
    1     |   1    |     2     |   2
    1     |   2    |     2     |   3
    1     |   5    |     3     |   1
    1     |   6    |     3     |   2
    1     |   7    |     3     |   3
This means that vertex 1 in polygon 1 is the same point as vertex 2 in polygon 2, and vertex 2 in polygon 1 is the same point as vertex 3 in polygon 2. Likewise, vertex 5 in polygon 1 is the same as vertex 1 in polygon 3, and so on.
For simplicity, we can assume that polygons never overlap, the closest they get is touching at the edge, and that all the vertices are integers (to make the equality easy to test).
The only thing I can think of right now is, for each polygon, looping over all of the other polygons and their vertices, giving me a scaling of O(N^2*M^2), which is going to be very bad in my case. I can have very large files of polygons, so I can't even store it all in RAM, and that would mean multiple reads of the file.
Here's my pseudocode so far
for i = 1 to N
    Pi = Polygon(i)
    for j = i+1 to N
        Pj = Polygon(j)
        for ii = 1 to Pi.VertexCount()
            Vi = Pi.Vertex(ii)
            for jj = 1 to Pj.VertexCount()
                Vj = Pj.Vertex(jj)
                if (Vi == Vj) AddToList(i, ii, j, jj)
            end for
        end for
    end for
end for
I'm assuming that this has come up in the graphics community (I don't spend much time there, so I don't know the literature). Any Ideas?
This is a classic iteration-vs-memory problem. If you compare every polygon with every other polygon, you end up with an O(n^2) solution. If instead you build a table as you step through all the polygons, then march through the table afterwards, you get a nice O(n) solution (one pass to build, one pass to read). I ask a similar question during interviews.
Assuming you have the memory available, you want to create a multimap (one key, multiple entries) with each vertex as the key, and the polygon as the entry. Then you can walk each polygon exactly once, inserting the vertex and polygon into the map. If the vertex already exists, you add the polygon as an additional entry to that vertex key.
Once you've hit all the polygons, you walk the entire map once and do whatever you need to do with any vertex that has more than one polygon entry.
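A sketch of that multimap pass in Python, assuming each polygon is a list of integer (x, y, z) vertex tuples (the names here are illustrative):

from collections import defaultdict

vertex_map = defaultdict(list)          # (x, y, z) -> [(polygon index, vertex index), ...]
for i, polygon in enumerate(polygons):  # polygons: list of vertex-coordinate lists
    for ii, vertex in enumerate(polygon):
        vertex_map[vertex].append((i, ii))

# report every vertex that is used by more than one polygon
for vertex, users in vertex_map.items():
    if len(users) > 1:
        print(vertex, users)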
If you have indexed polygon/face data, you don't even need to look at the vertex coordinates.
Create an array of size M (where M is the number of verts).
Iterate over the polygons and increment the array entry for each vertex index.
This gives you an array that describes how many times each vertex is used.*
You can then do another pass over the polygons and check the entry for each vertex. If it's > 1 you know that vertex is shared by another polygon.
You can build upon this strategy further if you need to store/find other information. For example instead of a count you could store polygons directly in the array allowing you to get a list of all faces that use a given vertex index. At this point you're effectively creating a map where vertex indices are the key.
(*this example assumes you have no degenerate polygons, but those could easily be handled).
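In Python, that counting pass might look like this, assuming each polygon stores indices into a shared vertex array (illustrative names):

# counts[k] = how many polygons use vertex index k
counts = [0] * num_vertices              # num_vertices: size of the shared vertex array
for face in faces:                       # faces: lists of vertex indices, one list per polygon
    for idx in face:
        counts[idx] += 1

shared = [idx for idx, c in enumerate(counts) if c > 1]   # vertices used by more than one polygon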
Well, one simple optimization would be to make a map (hashtable, probably) that maps each distinct vertex (identified by its coordinates) to a list of all polygons of which it is a part. That cuts down your runtime to something like O(NM) - still large but I have my doubts that you could do better, since I can't imagine any way to avoid examining all the vertices.