I am looking for a lightweight C++ Wavefront OBJ parser. In addition, I need the following capabilities:
Join adjacent coplanar triangles to form polygons.
Read curves from the OBJ file as well.
I have considered GLM and Assimp, but they do not provide the above.
Could someone please suggest something?
You could try libg3d if you are using Linux.
Software like CATIA, SolidWorks, and the like can all visualize complex models during design.
Exporting such models to raster triangle meshes yields huge files that later need to be greatly simplified to be imported into 3D engines like Unreal Engine or equivalent.
My question is: how do they visualize such complex geometries without rasterization? How do they do it that fast?
GPUs can only deal with triangles, so these packages tessellate the geometry exactly as they do for STL export. The tessellation tolerance may differ between display and STL export, which affects the time required to compute it.
"Exporting such models to raster triangle meshes yields huge files"
Not entirely correct. When you ask SolidWorks for the mesh you also specify a quality setting that influences the number of triangles you receive; it can be millions, or just a dozen.
CAD packages operate on most bodies/shapes analytically: they have a formula for them. My guess is that any other 3D engine does the same; the difference is that the format of the analytical data each engine uses is not the same. So you need to convert from one to another using triangles, the format everybody understands.
I have a 3D model as a mesh structure or in .stl/.obj format, which I converted to voxels using the binvox voxelization tool. Using a Java program, I have done some processing on the voxel grid thus obtained. Now I wish to convert this voxelized model back into a "smooth" mesh structure (or any other format), which can later be exported to .stl or .obj format.
Can someone suggest how I can achieve the last part, i.e. converting the voxel grid into some format from which the "smooth" surfaces can be recovered? Any help, including pointers to existing tools or relevant theory in this direction, will be appreciated.
Give the Marching Cubes algorithm a try. See http://paulbourke.net/geometry/polygonise/ for more details.
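For a rough idea of the core loop, here is a sketch in C++. It assumes a binary voxel grid (as produced by binvox) and that the classic 256-entry edgeTable/triTable lookup arrays from the page above are supplied separately; the corner/edge numbering below must match the convention those tables use.

```cpp
// Sketch of the Marching Cubes inner loop for a binary voxel grid.
// edgeTable / triTable are assumed to come from the Paul Bourke page above.
#include <cstdint>
#include <vector>

extern const int edgeTable[256];      // which of the 12 cube edges are cut
extern const int triTable[256][16];   // up to 5 triangles per configuration

struct Vec3 { float x, y, z; };

// offsets of the 8 cube corners relative to the cell origin
static const int corner[8][3] = {
    {0,0,0},{1,0,0},{1,1,0},{0,1,0},{0,0,1},{1,0,1},{1,1,1},{0,1,1}};
// which two corners each of the 12 edges joins
static const int edge[12][2] = {
    {0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},{0,4},{1,5},{2,6},{3,7}};

void marchCell(int x, int y, int z,
               const std::vector<uint8_t>& voxels, int nx, int ny,
               std::vector<Vec3>& outTriangles)
{
    auto solid = [&](int i, int j, int k) {
        return voxels[(k * ny + j) * nx + i] != 0;
    };

    // build the 8-bit configuration index from the corner occupancies
    int cubeIndex = 0;
    for (int c = 0; c < 8; ++c)
        if (solid(x + corner[c][0], y + corner[c][1], z + corner[c][2]))
            cubeIndex |= 1 << c;

    if (edgeTable[cubeIndex] == 0) return;   // cell entirely inside or outside

    // for a binary grid the iso-vertex is simply the midpoint of each cut edge
    Vec3 edgeVert[12];
    for (int e = 0; e < 12; ++e)
        if (edgeTable[cubeIndex] & (1 << e)) {
            const int* a = corner[edge[e][0]];
            const int* b = corner[edge[e][1]];
            edgeVert[e] = { x + 0.5f * (a[0] + b[0]),
                            y + 0.5f * (a[1] + b[1]),
                            z + 0.5f * (a[2] + b[2]) };
        }

    // emit triangles according to the lookup table
    for (int t = 0; triTable[cubeIndex][t] != -1; t += 3) {
        outTriangles.push_back(edgeVert[triTable[cubeIndex][t]]);
        outTriangles.push_back(edgeVert[triTable[cubeIndex][t + 1]]);
        outTriangles.push_back(edgeVert[triTable[cubeIndex][t + 2]]);
    }
}
```

Run it over every cell of the grid and write outTriangles out as STL or OBJ to get the surface back.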
Is it possible to export or convert my 3D models into GLSL ES 2.0? Is there a converter or exporter tool/add-on for editors like Blender/3ds Max/Maya that creates GLSL ES 2.0 code?
I'd like to create my models conveniently in any of the above mentioned editors and then I'd like to export/convert them into GLSL ES 2.0.
I already have a template WebGL code that displays my shaders. I want to replace my fragment shader and vertex shader parts with the GLSL ES code created automatically by a converter or an exporter tool.
I'd like to do something like this (but for GLSL ES 2.0):
Blender to GLSL
You're comparing apples with cars here. OpenGL is a drawing API, GLSL is a programming language for implementing shader code.
3D models are neither of those. The sole question "how can I convert my 3D model to OpenGL?" makes no sense.
Is it possible?
No, because that's not the purpose of GLSL.
Choose a model file format (preferably one for which implementing a reading parser is straightforward), implement the parser, fill in appropriate data structures, and feed those into the right parts of OpenGL, making the right calls to draw them.
OpenGL itself doesn't deal with models, scenes or even files. GLSL is not even a file format, it's a language.
I'd start with OBJ or STL files. They're reasonably easy to read and interpret and match very closely the primitive types OpenGL uses.
Probably the hardest format to read is .blend files; effectively a .blend file is a dump of the Blender process memory image. It takes a fully featured Blender (or something very similar to it) to make sense of a .blend file.
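To give an idea of the effort involved, here is a minimal sketch of an OBJ reader in C++. It handles only "v" records and triangular "f" records (no normals, texture coordinates, negative indices, or faces with more than three vertices); real files need more care.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3>     positions;
    std::vector<unsigned> indices;   // triangle list, 3 indices per face
};

// Minimal OBJ reader: "v x y z" and triangular "f a b c" / "f a/b/c ..." only.
bool loadObj(const std::string& path, Mesh& mesh)
{
    std::ifstream in(path);
    if (!in) return false;

    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 p{};
            ls >> p.x >> p.y >> p.z;
            mesh.positions.push_back(p);
        } else if (tag == "f") {
            for (int i = 0; i < 3; ++i) {
                std::string vert;
                ls >> vert;                       // e.g. "12" or "12/5/7"
                unsigned idx = std::stoul(vert.substr(0, vert.find('/')));
                mesh.indices.push_back(idx - 1);  // OBJ indices are 1-based
            }
        }
    }
    return true;
}
```

The resulting arrays map directly onto vertex and element buffers (glBufferData) and one glDrawElements(GL_TRIANGLES, ...) call per mesh.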
Update due to comment:
Please, please carefully read what this exporter script you linked to does: it takes an object's material settings (not the model itself) and generates GLSL code that, when used in the right framework (i.e. appropriate uniform and attribute names, matrix setup, etc.), will result in shading operations that resemble those material settings as closely as possible. The script does not export a model!
You asked about exporting a 3D model. That would be the mesh of the model and its attributes to place it in the world. Materials are not what's stored in an OBJ or STL file. They're textures and, yes, shaders. But they're completely independent of the model data itself. It's perfectly possible to use the same material settings on multiple models, or to freely exchange a model's material (textures and shaders), as long as the model provides all the required vertex attributes to make that material work.
Update 2 due to comment:
Do you even understand what a shader does? If not, here's a short synopsis: You have vertex attribute data (in buffers). These indexed attributes are submitted to OpenGL. Using a call to glDrawElements or glDrawArrays the attributes are interpreted as primitives (points, lines or triangles (or quads on older OpenGL versions)). Each primitive is then subjected to a number of transformations.
Mandatory: The first step is the vertex shader, whose responsibility is to determine each vertex's final position in the viewport.
Optional: After vertex shading, the primitives formed by the vertices undergo tessellation shading. Tessellation is used to refine geometry, for example adding detail to terrain or making curved surfaces smoother.
Optional: Next comes geometry shading, which can replace a single vertex with a (small) number of vertices. A geometry shader may even change the primitive type, so a single point could be replaced with a triangle, for example (useful for rendering particle systems).
Mandatory: The last step is fragment shading the primitive. After a primitive's position in the viewport has been determined, each of the pixels it covers is processed as one or more fragments. The fragment shader is a program that determines the final color and translucency in the target framebuffer.
Each shading step is controlled by a user defined program. It is these programs, shaders they are called, that are written in GLSL. Not geometry, no models. Programs! And very simple programs at that. They don't produce geometry from nothing, they always process already existing geometry passed to OpenGL.
Shaders are not used for defining or storing models. They just modify them at rendering time.
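To make the distinction concrete, here is roughly what a complete GLSL ES 2.0 program looks like, together with the C++ calls that compile it. This is only a sketch: it assumes an existing OpenGL ES 2.0 (or WebGL-equivalent) context, the names u_mvp and a_position are just conventions chosen here, and error checking is omitted.

```cpp
#include <GLES2/gl2.h>   // assumes an OpenGL ES 2.0 context is already current

// The shaders transform and color whatever geometry *you* upload; they do not
// contain a model themselves.
static const char* kVertexSrc =
    "attribute vec3 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() { gl_Position = u_mvp * vec4(a_position, 1.0); }\n";

static const char* kFragmentSrc =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint compile(GLenum type, const char* src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);                 // check GL_COMPILE_STATUS in real code
    return s;
}

GLuint makeProgram()
{
    GLuint p = glCreateProgram();
    glAttachShader(p, compile(GL_VERTEX_SHADER,   kVertexSrc));
    glAttachShader(p, compile(GL_FRAGMENT_SHADER, kFragmentSrc));
    glLinkProgram(p);                   // check GL_LINK_STATUS in real code
    return p;
}
```

Note that the model data never appears here: it is uploaded separately into buffers and only meets these shaders at draw time.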
Have a look at http://www.inka3d.com which converts your Maya shaders to GLSL. For the models do you need WebGL or OpenGL ES 2.0?
I would like to do some recognition of odd geometric shapes, but I'm not sure how to go about it.
Here's what I have so far:
Convert RGB image to Monochrome.
Otsu Threshold
Hough Transform.
I'm not sure what to do next.
For geometric information, you could do a raster-to-vector conversion to turn your image into coordinate vectors (lines and points), and then finite element analysis to look for known shapes. Not easy, but libraries should be available for both.
Edit: Note that there are sometimes easier practical solutions, but they depend on the image and the types of errors: for example, removing perspective, identifying a 3D object from a 2D image, the significance of colour, etc. You often see registration markers added to the real-world object to overcome this and allow much easier identification. Looking up articles on feature extraction techniques might help.
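If OpenCV is available, a minimal sketch of the "vectorize, then look for known shapes" idea could look like the following; contour polygon approximation stands in here for the shape-matching step, and the thresholds are arbitrary.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    cv::Mat img = cv::imread(argv[1]);            // image path from the caller
    if (img.empty()) return 1;

    cv::Mat gray, binary;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);  // 1. monochrome
    cv::threshold(gray, binary, 0, 255,
                  cv::THRESH_BINARY | cv::THRESH_OTSU);   // 2. Otsu threshold

    std::vector<std::vector<cv::Point>> contours;          // 3. vectorize
    cv::findContours(binary, contours,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        // approximate each contour by a polygon; tolerance is 2% of perimeter
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        std::cout << "contour with " << approx.size() << " corners\n";
        // 3 corners -> triangle, 4 -> quadrilateral, many -> circle-like, etc.
    }
    return 0;
}
```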
From what I understand, taking a polygon and breaking it up into composite triangles is called "tessellation". What's the opposite process called, and can anyone link me to an algorithm for it?
Essentially, I have a list of 2D triangles and I need an algorithm to recombine them into a polygon.
Thanks!
I think you need to convert your triangles into a half-edge data structure, and then you should be able to easily find the half-edges that have no opposite.
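A minimal sketch of that idea, with plain edge counting standing in for a full half-edge structure (it assumes the triangles are indexed into a shared vertex list and tile a single simple polygon):

```cpp
#include <algorithm>
#include <array>
#include <map>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;   // pair of vertex indices

// Every interior edge of the triangulation is shared by exactly two triangles,
// so edges seen only once form the polygon outline.
std::vector<Edge> boundaryEdges(const std::vector<std::array<int, 3>>& tris)
{
    std::map<Edge, int> count;
    auto key = [](int a, int b) { return Edge{std::min(a, b), std::max(a, b)}; };

    for (const auto& t : tris)
        for (int i = 0; i < 3; ++i)
            count[key(t[i], t[(i + 1) % 3])]++;

    std::vector<Edge> boundary;
    for (const auto& t : tris)
        for (int i = 0; i < 3; ++i)
            if (count[key(t[i], t[(i + 1) % 3])] == 1)
                boundary.push_back({t[i], t[(i + 1) % 3]});  // keep winding
    return boundary;
}

// Chain the boundary edges into an ordered vertex loop (the polygon).
std::vector<int> chainLoop(std::vector<Edge> edges)
{
    std::vector<int> loop;
    if (edges.empty()) return loop;

    loop.push_back(edges.front().first);
    int current = edges.front().second;
    edges.erase(edges.begin());

    while (!edges.empty()) {
        loop.push_back(current);
        bool found = false;
        for (auto it = edges.begin(); it != edges.end(); ++it) {
            if (it->first == current || it->second == current) {
                current = (it->first == current) ? it->second : it->first;
                edges.erase(it);
                found = true;
                break;
            }
        }
        if (!found) break;   // edges did not form a single closed loop
    }
    return loop;
}
```

chainLoop(boundaryEdges(tris)) then yields the polygon's vertex indices in order.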
It's called mesh decimation. Here is some code I wrote to do this for a class. Tibur is correct that the half edge data structure makes this much more efficient.
http://www.cs.virginia.edu/~mjh7v/advgfx/proj1/
The thing that you are calling tessellation is actually called triangulation. The thing you are searching for is tessellation (you may have heard of it referred to as tiling).
If you are more specific about the problem you are trying to solve (e.g. do you know the shape of the final polygon?) I can try to recommend some more specific algorithms.