I am using ami.js for a brain research project. The objective is to load a brain model that is rendered with smooth shading, but here are the issues:
a) The original STL model that I have does not include normals, and it seems to be impossible to compute them, at least with the three.js version that ami.js uses.
original stl model
b) I have the same model with computed normals, but in PLY format, which unfortunately is not supported yet (PLYLoader).
ply with normals
Is there a way to load and smooth the STL model, or can I use the PLYLoader in a clean way?
P.S.: I can easily load the model using plain three.js (r93) + PLYLoader.
I've been working on creating my own ray tracer, and I implemented surface shading using the Phong illumination model. I want to make it look more realistic, so I'm looking at different models. Is Phong also what commercial renderers (e.g., Renderman, Arnold) use? Or are there others that are used more (Blinn-Phong, the Beckmann distribution, etc.)?
Renderman and friends are all programmable renderers, so the user can implement whatever shading model is desired, even using different shading models on different objects in a scene. I wrote one using the modulus operator to simulate a faceted surface where the underlying geometry was smooth, for example. Also, the shading model's operation is generally driven by multiple hand-painted or procedurally generated maps, so a map could, for example, specify a broader specular highlight in one area and a narrower one in another.
That said, Lambert diffuse shading with Phong specular shading is the basic starting point.
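As a rough illustration of that starting point, here is a minimal sketch of Lambert diffuse plus Phong specular for a single point light. The Vec3 type and the helpers are placeholders for whatever math types your ray tracer already has, so treat it as a sketch rather than a drop-in implementation.

    // Lambert diffuse + Phong specular for one light (sketch).
    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 operator*(const Vec3& v, double s)  { return {v.x*s, v.y*s, v.z*s}; }
    static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 mul(const Vec3& a, const Vec3& b)   { return {a.x*b.x, a.y*b.y, a.z*b.z}; }

    // Mirror direction of l about the normal n (both unit length).
    static Vec3 reflect(const Vec3& l, const Vec3& n) { return n * (2.0 * dot(n, l)) - l; }

    // n: surface normal, l: direction towards the light, v: direction towards
    // the eye (all unit length). kd/ks: diffuse/specular colors, shininess: Phong exponent.
    Vec3 Shade(const Vec3& n, const Vec3& l, const Vec3& v,
               const Vec3& kd, const Vec3& ks, double shininess,
               const Vec3& lightColor)
    {
        double diffuse  = std::max(0.0, dot(n, l));                       // Lambert term
        double specular = std::pow(std::max(0.0, dot(reflect(l, n), v)),  // Phong term
                                   shininess);
        return mul(kd * diffuse + ks * specular, lightColor);
    }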
I am learning to manipulate a model with Assimp in Visual Studio Express 2019. I load a model and it has 0 animations. I want to apply a transformation to one of its bones (say, a hand) and see the effect in the .obj file that I export. To see the effect in the .obj format I have to apply the changes to its vertices, so I have to apply the transformation through the bone hierarchy. But I don't understand how to implement all this.
I have seen oglDev tutorial 38 but can't understand it, so any help will be appreciated.
Here are the things I need help with:
1. How to apply a transformation to a bone and to all its children (consider a simple human model).
2. How to get the final location of a vertex affected by a particular bone. For a bone we only have transformation matrices, so how do I convert them to X, Y, Z coordinates for the vertex position?
I am not using OpenGL.
I want to use simple C++ code in Assimp for all this.
Please refer here for additional details on my progress and difficulties:
Wrong Bone Rotation in Assimp
The .obj format does not support skeletal animation (or any form of animation). You cannot rotate bones that do not exist. You need to use a model format that supports skeletal animation and a model that actually contains that data. Example formats include .fbx, .gltf, and .dae (COLLADA).
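Assuming a format that actually contains bone data (FBX, glTF, ...), a rough CPU-skinning sketch with Assimp could look like the one below. The file path, bone name, and rotation angle are placeholders. It illustrates the two points asked about: a node's global transform is the product of its ancestors' transforms, so editing one node's mTransformation automatically affects everything below it, and a skinned vertex position is the weighted sum over all influencing bones of (global bone transform × offset matrix) applied to the original vertex.

    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>
    #include <vector>

    // Global (model-space) transform of a node: the product of all ancestor
    // transforms. Because of this chain, changing one node's mTransformation
    // automatically affects every node (and bone) below it -- question 1.
    static aiMatrix4x4 GlobalTransform(const aiNode* node)
    {
        aiMatrix4x4 m = node->mTransformation;
        for (const aiNode* p = node->mParent; p != nullptr; p = p->mParent)
            m = p->mTransformation * m;
        return m;
    }

    int main()
    {
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile("model.fbx", aiProcess_Triangulate);
        if (!scene || scene->mNumMeshes == 0) return 1;

        // 1) Rotate one bone's node; all child bones inherit the change.
        //    "RightHand" and the angle are placeholders.
        if (aiNode* hand = scene->mRootNode->FindNode("RightHand")) {
            aiMatrix4x4 rot;
            aiMatrix4x4::RotationY(0.5f, rot);
            hand->mTransformation = hand->mTransformation * rot;
        }

        // 2) Linear blend skinning on the CPU: turn bone matrices back into
        //    X,Y,Z vertex positions (the values you would write to the .obj).
        aiMatrix4x4 globalInverse = scene->mRootNode->mTransformation;
        globalInverse.Inverse();

        const aiMesh* mesh = scene->mMeshes[0];
        std::vector<aiVector3D> skinned(mesh->mNumVertices, aiVector3D(0.f, 0.f, 0.f));
        std::vector<float> weightSum(mesh->mNumVertices, 0.f);

        for (unsigned b = 0; b < mesh->mNumBones; ++b) {
            const aiBone* bone = mesh->mBones[b];
            const aiNode* node = scene->mRootNode->FindNode(bone->mName);
            if (!node) continue;
            // Offset matrix: mesh space -> bone bind space.
            // Global transform: bone space -> posed model space.
            aiMatrix4x4 skin = globalInverse * GlobalTransform(node) * bone->mOffsetMatrix;
            for (unsigned w = 0; w < bone->mNumWeights; ++w) {
                const aiVertexWeight& vw = bone->mWeights[w];
                skinned[vw.mVertexId] += (skin * mesh->mVertices[vw.mVertexId]) * vw.mWeight;
                weightSum[vw.mVertexId] += vw.mWeight;
            }
        }
        // Vertices not influenced by any bone keep their original position.
        for (unsigned v = 0; v < mesh->mNumVertices; ++v)
            if (weightSum[v] == 0.f) skinned[v] = mesh->mVertices[v];

        // 'skinned' now holds the posed vertex positions to export.
        return 0;
    }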
Software like Catia, SolidWorks, or the like can all visualize complex models while designing.
Exporting such models to raster triangle meshes yields huge files that later need to be greatly simplified to be imported into 3D engines like Unreal Engine or equivalent.
My question is: how do they visualize such complex geometries without rasterization? How do they do it that fast?
GPUs can only deal with triangles, so these packages tessellate the geometry exactly as they do for STL export. The tessellation tolerance may differ between on-screen display and STL export, which affects the time required to compute it.
Exporting such models to raster triangle meshes yields huge files
Not entirely correct. When you ask SolidWorks for a mesh, you also specify a quality setting that influences the number of triangles you receive - it can be millions, or just a dozen.
CAD packages handle most bodies/shapes analytically - they have a formula for them. My guess is that other 3D engines do the same; the catch is that the format of the analytical data differs between engines. So you convert from one to another via triangles, a format that everybody understands.
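As a toy illustration (not how any particular CAD kernel is implemented), here is a sketch of tolerance-driven tessellation of an analytic surface, a sphere of radius r. The chordal deviation of an arc segment spanning angle t is r * (1 - cos(t/2)); keeping that below the tolerance decides how fine the triangle grid gets, which is why a display mesh and an export mesh of the same part can differ so much in size.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    std::vector<Vec3> TessellateSphere(double r, double tolerance)
    {
        const double kPi = 3.14159265358979323846;
        // Clamp the tolerance to a sane range, then find the largest segment
        // angle whose chord stays within that tolerance of the true arc.
        double tol = std::min(std::max(tolerance, 1e-6 * r), r);
        double maxAngle = 2.0 * std::acos(1.0 - tol / r);
        int slices = std::max(8, (int)std::ceil(2.0 * kPi / maxAngle));
        int stacks = std::max(4, (int)std::ceil(kPi / maxAngle));

        // Evaluate the analytic formula on a (stacks+1) x (slices+1) parameter
        // grid; a tighter tolerance means more grid points and more triangles.
        std::vector<Vec3> grid;
        for (int i = 0; i <= stacks; ++i) {
            double phi = kPi * i / stacks;                 // 0..pi
            for (int j = 0; j <= slices; ++j) {
                double theta = 2.0 * kPi * j / slices;     // 0..2pi
                grid.push_back({ r * std::sin(phi) * std::cos(theta),
                                 r * std::sin(phi) * std::sin(theta),
                                 r * std::cos(phi) });
            }
        }
        return grid;  // adjacent grid rows pair up into triangle strips
    }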
I am trying to write a script that converts the vertex colors of a scanned .ply model to a good UV texture map so that it can be 3D painted as well as re-sculpted in another program like Mudbox.
Right now I am unwrapping the model using Smart UV Project in Blender and then using Meshlab to convert the vertex colors to a texture. My approach mostly works: at first the texture seems to be converted with no issues, but when I try to use the smooth brush in Mudbox/Blender to smooth out some areas of the model after the texture conversion, small untextured polygons rise to the surface. Here is an image of the problem: https://www.dropbox.com/s/pmekzxvvi44umce/Image.png?dl=0
All of these small polygons seem to have their own UV shells, separate from the rest of the mesh; they seem to be invisible from the surface of the model before smoothing, and they are difficult or impossible to repaint in Mudbox/Blender.
I tried baking the texture in Blender as well but experienced similar problems. I'm pretty stumped so any solutions or suggestions would be greatly appreciated!
Doing some basic mesh cleanup in Meshlab (merging close vertices in particular) seems to have mostly solved the problem.
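For reference, "merging close vertices" amounts to something like the sketch below (a simplified stand-in, not the actual Meshlab filter): snap positions to an epsilon-sized grid, keep one representative vertex per cell, and remap the triangle indices. Duplicated, unwelded vertices are a plausible source of stray polygons that end up with their own UV shells.

    #include <array>
    #include <cmath>
    #include <map>
    #include <vector>

    struct Vec3 { float x, y, z; };

    void MergeCloseVertices(std::vector<Vec3>& verts,
                            std::vector<unsigned>& indices,
                            float epsilon)
    {
        std::map<std::array<long long, 3>, unsigned> cell;  // grid cell -> kept vertex index
        std::vector<unsigned> remap(verts.size());
        std::vector<Vec3> kept;

        for (unsigned i = 0; i < verts.size(); ++i) {
            std::array<long long, 3> key = { std::llround(verts[i].x / epsilon),
                                             std::llround(verts[i].y / epsilon),
                                             std::llround(verts[i].z / epsilon) };
            auto it = cell.find(key);
            if (it == cell.end()) {                  // first vertex in this cell: keep it
                cell[key] = (unsigned)kept.size();
                remap[i] = (unsigned)kept.size();
                kept.push_back(verts[i]);
            } else {
                remap[i] = it->second;               // near-duplicate: reuse the kept vertex
            }
        }
        for (unsigned& idx : indices) idx = remap[idx];  // faces now share welded vertices
        verts.swap(kept);
    }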
Is it possible to export or convert my 3D models into GLSL ES 2.0? Is there any converter or exporter tool/add-on for editor programs like Blender/3DS MAX/Maya that creates GLSL ES 2.0 code?
I'd like to create my models conveniently in any of the above-mentioned editors and then export/convert them into GLSL ES 2.0.
I already have a template WebGL code that displays my shaders. I want to replace my fragment shader and vertex shader parts with the GLSL ES code created automatically by a converter or an exporter tool.
I'd like to do something like this (but for GLSL ES 2.0):
Blender to GLSL
You're comparing apples with cars here. OpenGL is a drawing API, GLSL is a programming language for implementing shader code.
3D models are neither of those. The very question "how can I convert my 3D model to OpenGL?" makes no sense.
Is it possible?
No, because that's not the purpose of GLSL.
Choose a model file format (preferably one for which implementing a reading parser is straightforward), implement the parser, fill in the appropriate data structures, and feed those into the right parts of OpenGL, making the right calls to draw them.
OpenGL itself doesn't deal with models, scenes or even files. GLSL is not even a file format, it's a language.
I'd start with OBJ or STL files. They're reasonably easy to read and interpret and match very closely the primitive types OpenGL uses.
Probably the hardest format to read is .blend files; effectively a .blend file is a dump of the Blender process memory image. It takes a fully featured Blender (or something very similar to it) to make sense of a .blend file.
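To make the OBJ suggestion concrete, a bare-bones reader can be only a few dozen lines. The sketch below handles just 'v' and plain triangular 'f' lines (no normals, texture coordinates, or materials); the resulting arrays are what you would upload with glBufferData and draw with glDrawElements.

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct ObjMesh {
        std::vector<float> positions;   // x0,y0,z0, x1,y1,z1, ... one triple per vertex
        std::vector<unsigned> indices;  // three vertex indices per triangle
    };

    bool LoadObj(const std::string& path, ObjMesh& out)
    {
        std::ifstream in(path);
        if (!in) return false;

        std::string line;
        while (std::getline(in, line)) {
            std::istringstream s(line);
            std::string tag;
            s >> tag;
            if (tag == "v") {                       // vertex position
                float x, y, z;
                s >> x >> y >> z;
                out.positions.insert(out.positions.end(), {x, y, z});
            } else if (tag == "f") {                // face, assumed to be "f 1 2 3"
                unsigned a, b, c;                   // (no "1/1/1" style references)
                s >> a >> b >> c;                   // OBJ indices are 1-based
                out.indices.insert(out.indices.end(), {a - 1, b - 1, c - 1});
            }                                       // other tags (vn, vt, ...) are skipped
        }
        return true;
    }

    // The arrays then go to OpenGL as usual: glBufferData for both buffers,
    // glVertexAttribPointer for the positions, and glDrawElements to draw.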
Update due to comment:
Please, please carefully read what the exporter script you linked to does: it takes an object's material settings (not the model itself) and generates GLSL code that, when used in the right framework (i.e., appropriate uniform and attribute names, matrix setup, etc.), results in shading operations that resemble those material settings as closely as possible. The script does not export a model!
You asked about exporting a 3D model. That would be the mesh of the model and its attributes to place it in the world. Materials are not what's stored in an OBJ or STL file. They're textures and, yes, shaders. But they're completely independent of the model data itself. It's perfectly possible to use the same material settings on multiple models, or to freely exchange a model's material (textures and shaders), as long as the model provides all the vertex attributes required to make that material work.
Update 2 due to comment:
Do you even understand what a shader does? If not, here's a short synopsis: You have vertex attribute data (in buffers). These indexed attributes are submitted to OpenGL. With a call to glDrawElements or glDrawArrays, the attributes are interpreted as primitives (points, lines or triangles, or quads on older OpenGL versions). Each primitive is then subjected to a number of transformations.
Mandatory: The first step is the vertex shader, whose responsibility is to determine each vertex's final position in the viewport.
Optional: After vertex shading, the primitives formed by the vertices undergo tessellation shading. Tessellation is used to refine geometry, for example adding detail to terrain or making curved surfaces smoother.
Optional: Next comes geometry shading, which can replace a single vertex with a (small) number of vertices. A geometry shader may even change the primitive type, so a single point could be replaced with a triangle, for example (useful for rendering particle systems).
Mandatory: The last step is fragment shading the primitive. After a primitive's position in the viewport has been determined, each of the pixels it covers is processed as one or more fragments. The fragment shader is a program that determines the final color and translucency in the target framebuffer.
Each shading step is controlled by a user-defined program. It is these programs, called shaders, that are written in GLSL. Not geometry, not models. Programs! And very simple programs at that. They don't produce geometry from nothing; they always process already existing geometry passed to OpenGL.
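To make the distinction concrete, here is roughly what a minimal GLSL ES 2.0 shader pair looks like, embedded as strings and compiled through the usual GL calls (error reporting trimmed; the OpenGL ES 2.0 header is assumed to be available). Note that there is no model anywhere in here: the shaders only transform and shade whatever vertex data you upload separately.

    #include <GLES2/gl2.h>

    // Vertex shader: positions each vertex and passes the normal along.
    static const char* kVertexSrc =
        "attribute vec3 aPosition;\n"
        "attribute vec3 aNormal;\n"
        "uniform mat4 uMvp;\n"
        "varying vec3 vNormal;\n"
        "void main() {\n"
        "    vNormal = aNormal;\n"
        "    gl_Position = uMvp * vec4(aPosition, 1.0);\n"
        "}\n";

    // Fragment shader: simple headlight-style diffuse shading per pixel.
    static const char* kFragmentSrc =
        "precision mediump float;\n"
        "varying vec3 vNormal;\n"
        "void main() {\n"
        "    float d = max(dot(normalize(vNormal), vec3(0.0, 0.0, 1.0)), 0.0);\n"
        "    gl_FragColor = vec4(vec3(d), 1.0);\n"
        "}\n";

    // Compile one shader stage; returns 0 on failure.
    static GLuint Compile(GLenum type, const char* src)
    {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);
        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;
    }

    // Link a program from the two stages above (requires a current GL context).
    GLuint BuildProgram()
    {
        GLuint vs = Compile(GL_VERTEX_SHADER, kVertexSrc);
        GLuint fs = Compile(GL_FRAGMENT_SHADER, kFragmentSrc);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;
    }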
Shaders are not used for defining or storing models. They just modify them at rendering time.
Have a look at http://www.inka3d.com which converts your Maya shaders to GLSL. For the models do you need WebGL or OpenGL ES 2.0?