I am learning to manipulate a model with Assimp in Visual Studio Express 2019. I load a model and it has 0 animations. I want to apply some transformation to one of its bones (say, the hand) and see the effect in the OBJ file I export. Now, to see the effect in the OBJ format I have to apply the changes to the vertices, so I have to apply the bone transformations hierarchically. But I don't get how to implement all this.
I have seen the ogldev tutorial 38 but can't understand it, so any help will be appreciated.
Here are the things I need help with:
1. How to apply some transformation to a bone and to all its children (consider a simple human model).
2. How to get the final position of a vertex affected by a particular bone: for bones we only have transformation matrices, so how do I convert them into X, Y, Z coordinates that give the vertex position?
I am not using OpenGL.
I want to use plain C++ code with Assimp for all of this.
Please refer here for additional details on my progress and difficulties:
Wrong Bone Rotation in Assimp
The .obj format does not support skeletal animation (or any form of animation), so you cannot rotate bones that do not exist in the file. You need to use a model format that supports skeletal animation and a model that actually contains that data. Example formats include .fbx, .gltf and .dae (COLLADA).
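That said, the hierarchical part of the question can still be answered for the in-memory scene: a bone's final matrix is its node's global transform (the concatenation of all parent transforms) multiplied by the bone's offset matrix, and a vertex's new X, Y, Z position is the weighted sum of those matrices applied to the original vertex. Below is a minimal C++ sketch against the Assimp API; the node name "Hand", the extra rotation and the file name "model.dae" are made-up examples, and error handling is left out.

```cpp
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <map>
#include <string>
#include <vector>

// Walk the node hierarchy and store every node's global transform (parent * local).
void BuildGlobalTransforms(const aiNode* node, const aiMatrix4x4& parentGlobal,
                           std::map<std::string, aiMatrix4x4>& globals)
{
    aiMatrix4x4 local = node->mTransformation;

    // Example only: inject an extra rotation on a node named "Hand".
    // Replace the name with a bone/node name that exists in your model.
    if (std::string(node->mName.C_Str()) == "Hand") {
        aiMatrix4x4 rot;
        aiMatrix4x4::RotationX(0.5f, rot);   // ~28.6 degrees around the local X axis
        local = local * rot;
    }

    const aiMatrix4x4 global = parentGlobal * local;
    globals[node->mName.C_Str()] = global;

    for (unsigned i = 0; i < node->mNumChildren; ++i)
        BuildGlobalTransforms(node->mChildren[i], global, globals);
}

// CPU skinning: each vertex is moved by the weighted sum of its bones' final matrices.
void SkinMesh(const aiMesh* mesh, const std::map<std::string, aiMatrix4x4>& globals,
              std::vector<aiVector3D>& outPositions)
{
    std::vector<aiVector3D> skinned(mesh->mNumVertices, aiVector3D(0.f, 0.f, 0.f));
    std::vector<float> weightSum(mesh->mNumVertices, 0.f);

    for (unsigned b = 0; b < mesh->mNumBones; ++b) {
        const aiBone* bone = mesh->mBones[b];
        auto it = globals.find(bone->mName.C_Str());
        if (it == globals.end()) continue;

        // Final skinning matrix: node global transform * bone offset (mesh space -> bone space).
        const aiMatrix4x4 finalM = it->second * bone->mOffsetMatrix;

        for (unsigned w = 0; w < bone->mNumWeights; ++w) {
            const aiVertexWeight& vw = bone->mWeights[w];
            skinned[vw.mVertexId] += (finalM * mesh->mVertices[vw.mVertexId]) * vw.mWeight;
            weightSum[vw.mVertexId] += vw.mWeight;
        }
    }

    // Vertices not influenced by any bone keep their original position.
    outPositions.resize(mesh->mNumVertices);
    for (unsigned v = 0; v < mesh->mNumVertices; ++v)
        outPositions[v] = weightSum[v] > 0.f ? skinned[v] : mesh->mVertices[v];
}

int main()
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile("model.dae", aiProcess_Triangulate);
    if (!scene) return 1;

    std::map<std::string, aiMatrix4x4> globals;
    BuildGlobalTransforms(scene->mRootNode, aiMatrix4x4(), globals);

    std::vector<aiVector3D> posed;
    SkinMesh(scene->mMeshes[0], globals, posed);
    // ... write "posed" plus scene->mMeshes[0]->mFaces out as an .obj here ...
    return 0;
}
```

Writing the skinned positions together with the original faces to a plain .obj effectively bakes the pose into the mesh, which is as far as the .obj format can go.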
Plotting packages offer a variety of methods for displaying data. Write an interactive plotting application for two-dimensional curves. Your application should allow the user to choose the mode (line strip or polyline display of the data, bar chart or pie chart), colours, and line styles.
You should start with GUI editing like this:
Does anyone know of a low level (no frameworks) example of a drag & drop, re-order-able list?
and change it to your primitives (more points per primitive instead of one ... handle each point as a (sub)object so you can change its position later).
Then just add tools like add object, delete object, ... For the hand-drawing tool use piecewise cubic interpolation.
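A rough sketch of such piecewise cubic interpolation (Catmull-Rom through the captured mouse points; the point struct and the sampling density are just placeholders) could look like this:

```cpp
#include <vector>

struct P2 { double x, y; };

// Catmull-Rom evaluation between p1 and p2 for t in [0,1],
// using p0 and p3 as the neighbouring control points.
P2 catmullRom(const P2& p0, const P2& p1, const P2& p2, const P2& p3, double t)
{
    double t2 = t * t, t3 = t2 * t;
    auto coord = [&](double a, double b, double c, double d) {
        return 0.5 * ((2.0 * b) + (-a + c) * t
                      + (2.0 * a - 5.0 * b + 4.0 * c - d) * t2
                      + (-a + 3.0 * b - 3.0 * c + d) * t3);
    };
    return { coord(p0.x, p1.x, p2.x, p3.x), coord(p0.y, p1.y, p2.y, p3.y) };
}

// Turn a hand-drawn polyline into a denser, smooth polyline.
std::vector<P2> smoothStroke(const std::vector<P2>& pts, int samplesPerSegment = 8)
{
    std::vector<P2> out;
    if (pts.size() < 2) return pts;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        const P2& p0 = pts[i == 0 ? 0 : i - 1];                        // clamp at the ends
        const P2& p1 = pts[i];
        const P2& p2 = pts[i + 1];
        const P2& p3 = pts[i + 2 < pts.size() ? i + 2 : pts.size() - 1];
        for (int s = 0; s < samplesPerSegment; ++s)
            out.push_back(catmullRom(p0, p1, p2, p3, double(s) / samplesPerSegment));
    }
    out.push_back(pts.back());
    return out;
}
```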
The grid can be done like this:
How to draw dynamic 2D grid that adjusts according to camera zoom: OpenGL
Mouse zooming/panning is also important
Zooming graphics based on current mouse position
Putting all of the above together into a simple editor looks like this:
Using the GPU for curve rendering might give you a nice speed and functionality boost:
Is it possible to express "t" variable from Cubic Bezier Curve equation?
Mouse selection of objects might become a speed problem if your scene contains too many objects, so in that case it's best to use index buffers, where you can mouse-select with pixel-perfect precision almost for free in O(1):
OpenGL 3D-raypicking with high poly meshes
The example is for 3D; in 2D it is much simpler ...
Also do not forget to implement save/load functionality to some vector file format. I recommend SVG: it might be complicated to start with, but you can quickly check its contents in any SVG viewer or browser, or even in Notepad, as it is just a text file. If you use just the basic path elements and ignore the rest of the SVG features, you will see that parsing and creating SVG is not that hard (a minimal example of writing such a file is sketched after the links below). For example see these:
Get Vertices/Edges From BMP or SVG (C#)
Discrete probability distribution plot with given values
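Here is a hedged sketch of the SVG-writing side, emitting one basic <path> per polyline; the canvas size and styling are arbitrary:

```cpp
#include <fstream>
#include <vector>

struct Pt { double x, y; };

// Write one polyline per <path> element; only the "d" attribute's M/L commands are used.
void saveSvg(const char* filename, const std::vector<std::vector<Pt>>& polylines)
{
    std::ofstream f(filename);
    f << "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"800\" height=\"600\">\n";
    for (const auto& pl : polylines) {
        if (pl.empty()) continue;
        f << "  <path fill=\"none\" stroke=\"black\" d=\"";
        f << "M " << pl[0].x << " " << pl[0].y;
        for (size_t i = 1; i < pl.size(); ++i)
            f << " L " << pl[i].x << " " << pl[i].y;
        f << "\"/>\n";
    }
    f << "</svg>\n";
}
```

Loading is the reverse: find the path elements, read the d attribute and turn the M/L commands back into point lists, ignoring everything else.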
For really big datasets you might want to use spatial subdivision techniques (a bounding volume/area hierarchy, or a quadtree) to ease up the operations (a tiny sketch follows)...
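A tiny point quadtree (insert plus rectangular range query) might look like the sketch below; the node capacity and coordinate conventions are arbitrary, and a real editor would also need removal and a depth limit for coincident points:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Pt { double x, y; };

struct Rect {
    double x, y, w, h;  // axis-aligned, (x,y) = lower-left corner
    bool contains(const Pt& p) const { return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h; }
    bool intersects(const Rect& r) const {
        return !(r.x > x + w || r.x + r.w < x || r.y > y + h || r.y + r.h < y);
    }
};

struct QuadTree {
    static constexpr size_t Capacity = 8;
    Rect bounds;
    std::vector<Pt> pts;
    std::unique_ptr<QuadTree> child[4];

    explicit QuadTree(const Rect& b) : bounds(b) {}

    void insert(const Pt& p) {
        if (!bounds.contains(p)) return;                           // outside this node
        if (!child[0] && pts.size() < Capacity) { pts.push_back(p); return; }
        if (!child[0]) subdivide();
        for (auto& c : child) c->insert(p);                        // exactly one child takes it
    }

    void subdivide() {
        double hw = bounds.w / 2, hh = bounds.h / 2;
        child[0].reset(new QuadTree({bounds.x,      bounds.y,      hw, hh}));
        child[1].reset(new QuadTree({bounds.x + hw, bounds.y,      hw, hh}));
        child[2].reset(new QuadTree({bounds.x,      bounds.y + hh, hw, hh}));
        child[3].reset(new QuadTree({bounds.x + hw, bounds.y + hh, hw, hh}));
        for (const auto& p : pts) for (auto& c : child) c->insert(p);
        pts.clear();
    }

    // Collect every point inside the query rectangle.
    void query(const Rect& area, std::vector<Pt>& out) const {
        if (!bounds.intersects(area)) return;
        for (const auto& p : pts) if (area.contains(p)) out.push_back(p);
        if (child[0]) for (const auto& c : child) c->query(area, out);
    }
};
```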
More in-depth implementation details about 2D vector graphics editors depend on the language, OS, graphics API and GUI API you are using, and on the task you are aiming for ...
I have a 3D model as a mesh structure or in .stl/.obj format, which I converted to voxels using the binvox voxelization tool. Using a Java program, I have done some processing on the voxel grid thus obtained. Now I wish to convert this voxelized model back into a "smooth" mesh structure (or any other format), which can later be exported to .stl or .obj.
Can someone suggest how I can achieve the last part, i.e. converting the voxel grid into some format that recovers the "smooth" surfaces? Any help, including pointers to existing tools or relevant theory in this direction, will be appreciated.
Give the Marching Cubes algorithm a try. See http://paulbourke.net/geometry/polygonise/ for more details.
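As a hedged sketch of the per-cell step, assuming the edgeTable/triTable and the corner/edge numbering from that page are available (they are far too long to repeat here):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Lookup tables from http://paulbourke.net/geometry/polygonise/ (not repeated here).
extern const int edgeTable[256];
extern const int triTable[256][16];

// Corner indices (Bourke numbering) joined by each of the 12 cube edges.
static const int edgeCorners[12][2] = {
    {0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},{0,4},{1,5},{2,6},{3,7}
};

// Linear interpolation of the isosurface crossing along one edge.
static Vec3 lerpVertex(float iso, const Vec3& a, const Vec3& b, float va, float vb)
{
    float t = (std::fabs(vb - va) < 1e-6f) ? 0.5f : (iso - va) / (vb - va);
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Emit triangles (as flat vertex triples) for one cell with corner positions p[8]
// and corner densities val[8]; returns the number of triangles appended.
int polygoniseCell(const Vec3 p[8], const float val[8], float iso, std::vector<Vec3>& tris)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (val[i] < iso) cubeIndex |= (1 << i);

    if (edgeTable[cubeIndex] == 0) return 0;        // cell entirely inside or outside

    Vec3 vertOnEdge[12];
    for (int e = 0; e < 12; ++e)
        if (edgeTable[cubeIndex] & (1 << e)) {
            int a = edgeCorners[e][0], b = edgeCorners[e][1];
            vertOnEdge[e] = lerpVertex(iso, p[a], p[b], val[a], val[b]);
        }

    int count = 0;
    for (int i = 0; triTable[cubeIndex][i] != -1; i += 3) {
        tris.push_back(vertOnEdge[triTable[cubeIndex][i]]);
        tris.push_back(vertOnEdge[triTable[cubeIndex][i + 1]]);
        tris.push_back(vertOnEdge[triTable[cubeIndex][i + 2]]);
        ++count;
    }
    return count;
}
```

Run it over every 2x2x2 cell of the voxel grid, using the voxel occupancy (0 or 1) as the density and an isolevel of 0.5, then write the collected triangles to .obj or .stl; an optional Laplacian smoothing pass afterwards reduces the remaining staircase look.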
I have a program which is able to parse and interpret the OBJ file format in an OpenGL context.
I created a little project in Blender containing a simple sphere with 'Hair' particles on it.
After conversion (separating the particles from the sphere) my particles form a new mesh, so I have two meshes in my project (named 'Sphere' and 'Hair'). When I export the 'Sphere' mesh to an OBJ file (File/Export/Wavefront (.obj)) with 'Include Normals' selected, the file contains all the normal information (e.g. vn 0.5889 0.14501 0.45455, ...).
When I try to do the same thing with the particles, also selecting 'Include Normals', I don't get any normals in the OBJ file. (Before exporting I selected the right mesh.)
So I don't understand why normal properties are not exported for a mesh made from particles.
Above is the general Blender render of my hair particles. As you can see, all the particles react to the light, so Blender uses normal properties for those particles.
The picture above shows (in Blender 'Edit Mode', after conversion) that the particles are made of several lines. In my OpenGL program I use GL_LINES to render the same particles. I just want to have normal information so I can manage lighting on my particles.
Do you have an idea how to export normal properties for particle meshes?
Thanks in advance for your help.
You are trying to give normals to lines. Let's think about what that means.
When we talk about normal vectors on a surface, we mean vectors "pointing out of the surface".
For triangles, once we define one side to be the "front" face, there is exactly one normal. For lines, any vector perpendicular to the line counts as a normal - there are infinitely many, and any one will "do".
What are some reasons we care about normals in graphics?
Lighting: e.g. diffuse lighting is approximated by using the dot product of the normal with the incident light vector. This doesn't apply to hair though!
Getting a transformation matrix: for this you can pick any normal (do you want to transform into hair-space?)
In short: you can either pick any perpendicular vector as your normal (it's easy to calculate) or just not use normals at all for your hair. It depends on what you are trying to do.
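If you do decide to assign a perpendicular vector per line segment, a minimal sketch (the choice of reference axis is arbitrary) looks like this:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Any vector perpendicular to the segment direction works as a "normal" for a line.
// Cross the direction with whichever world axis it is least aligned with.
Vec3 lineNormal(const Vec3& a, const Vec3& b)
{
    Vec3 dir = normalize({ b.x - a.x, b.y - a.y, b.z - a.z });
    Vec3 axis = (std::fabs(dir.x) < 0.9f) ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    return normalize(cross(dir, axis));
}
```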
Is it possible to export or convert my 3D models into GLSL ES 2.0? Is there any converter or exporter tool/add-on for editor programs like Blender/3ds Max/Maya that creates GLSL ES 2.0 code?
I'd like to create my models conveniently in any of the above mentioned editors and then I'd like to export/convert them into GLSL ES 2.0.
I already have a template WebGL code that displays my shaders. I want to replace my fragment shader and vertex shader parts with the GLSL ES code created automatically by a converter or an exporter tool.
I'd like to do something like this (but for GLSL ES 2.0):
Blender to GLSL
You're comparing apples with cars here. OpenGL is a drawing API, GLSL is a programming language for implementing shader code.
3D models are neither of those. The sole question "how can I convert my 3D model to OpenGL?" makes no sense.
Is it possible?
No, because that's not the purpose of GLSL.
Choose a model file format (preferably one for which implementing a reading parser is straightforward), implement the parser, fill appropriate data structures and feed those into the right parts of OpenGL, making the right calls to draw them.
OpenGL itself doesn't deal with models, scenes or even files. GLSL is not even a file format, it's a language.
I'd start with OBJ or STL files. They're reasonably easy to read and interpret and match very closely the primitive types OpenGL uses.
Probably the hardest format to read is .blend files; effectively a .blend file is a dump of the Blender process memory image. It takes a fully featured Blender (or something very similar to it) to make sense of a .blend file.
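To make the OBJ suggestion concrete, here is a minimal, hedged sketch that reads only 'v' lines and triangulated 'f' lines (no normals, texture coordinates, negative indices or materials):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct ObjMesh {
    std::vector<float>    positions; // x, y, z per vertex
    std::vector<unsigned> indices;   // 3 per triangle, 0-based
};

// Minimal Wavefront OBJ reader: vertices and triangular faces only.
bool loadObj(const std::string& path, ObjMesh& mesh)
{
    std::ifstream file(path);
    if (!file) return false;

    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            float x, y, z;
            ss >> x >> y >> z;
            mesh.positions.insert(mesh.positions.end(), { x, y, z });
        } else if (tag == "f") {
            std::string corner;
            for (int i = 0; i < 3 && (ss >> corner); ++i) {
                // A face corner looks like "idx", "idx/uv" or "idx/uv/normal";
                // only the position index is needed (OBJ indices start at 1).
                unsigned idx = std::stoul(corner.substr(0, corner.find('/')));
                mesh.indices.push_back(idx - 1);
            }
        } // everything else (vn, vt, usemtl, ...) is ignored in this sketch
    }
    return true;
}
```

The positions and indices can then be uploaded with glBufferData and drawn with glDrawElements.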
Update due to comment:
Please, please carefully read what the exporter script you linked to does: it takes an object's material settings (not the model itself) and generates GLSL code that, when used in the right framework (i.e. appropriate uniform and attribute names, matrix setup, etc.), will result in shading operations that resemble those material settings as closely as possible. The script does not export a model!
You asked about exporting a 3D model. That would be the mesh of the model and its attributes to place it in the world. Materials are not what's stored in an OBJ or STL file. They're textures and, yes, shaders. But they're completely independent of the model data itself. It's perfectly possible to use the same material settings on multiple models, or to freely exchange a model's material (textures and shaders), as long as the model provides all the vertex attributes required to make that material work.
Update 2 due to comment:
Do you even understand what a shader does? If not, here's a short synopsis: you have vertex attribute data (in buffers). These indexed attributes are submitted to OpenGL. With a call to glDrawElements or glDrawArrays the attributes are interpreted as primitives (points, lines or triangles, or quads on older OpenGL versions). Each primitive is then subjected to a number of transformations.
Mandatory: The first step is the vertex shader, whose responsibility is to determine each vertex's final position in the viewport.
Optional: After vertex shading, the primitives formed by the vertices undergo tessellation shading. Tessellation is used to refine geometry, for example adding detail to terrain or making curved surfaces smoother.
Optional: Next comes geometry shading, which can replace a single vertex with a (small) number of vertices. A geometry shader may even change the primitive type, so a single point could be replaced with a triangle, for example (useful for rendering particle systems).
Mandatory: The last step is fragment shading the primitive. After a primitive's position in the viewport has been determined, each of the pixels it covers is processed as one or more fragments. The fragment shader is a program that determines the final color and translucency in the target framebuffer.
Each shading step is controlled by a user-defined program. It is these programs, called shaders, that are written in GLSL. Not geometry, not models. Programs! And very simple programs at that. They don't produce geometry from nothing; they always process already existing geometry passed to OpenGL.
Shaders are not used for defining or storing models. They just modify them at rendering time.
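To underline the point, here is a hedged sketch of what "having shaders" actually means in code: two tiny GLSL ES 2.0 programs compiled and linked at runtime, which only transform and colour geometry you have already uploaded yourself (a valid OpenGL ES 2.0 / EGL context is assumed, and error checking is omitted):

```cpp
// Compiles a minimal GLSL ES 2.0 program; the model itself is uploaded separately
// via vertex buffers and drawn with glDrawArrays/glDrawElements.
#include <GLES2/gl2.h>

static const char* kVertexSrc =
    "attribute vec3 aPosition;\n"
    "uniform mat4 uMvp;\n"
    "void main() { gl_Position = uMvp * vec4(aPosition, 1.0); }\n";

static const char* kFragmentSrc =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint compile(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);               // check GL_COMPILE_STATUS in real code
    return shader;
}

GLuint buildProgram()
{
    GLuint program = glCreateProgram();
    glAttachShader(program, compile(GL_VERTEX_SHADER, kVertexSrc));
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, kFragmentSrc));
    glLinkProgram(program);                // check GL_LINK_STATUS in real code
    return program;
}
```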
Have a look at http://www.inka3d.com which converts your Maya shaders to GLSL. For the models do you need WebGL or OpenGL ES 2.0?
I have a black and white 2D drawing of a silhouette (say, a chess piece) that I would like to rotate around an axis to create a 3D object.
Then I want to render that 3D object from multiple angles using some sort of raytracing software, saving each angle into a separate file.
What would be the easiest way to automatically (repeatedly) 1. get a vector path from the 2D drawing, 2. create the 3D model by rotating it, and 3. import it into the raytracer?
I haven't chosen a specific raytracer yet, but Sunflow has caught my eye.
Texturing/bump mapping would be nice but non-essential
The modeling feature you're looking for is a Lathe.
Sunflow can import 3ds files and blender files.
I've never used blender, but here's a tutorial for using the lathe to make a wine glass. You'd replace the silhouette of the wine glass with your shape:
http://www.blendermagz.com/2009/04/14/blender-3d-lathe-modeling-wine-glass/
Blender is FOSS; you can download it here:
www.blender.org/download/get-blender/ (can't post more than one link, so you'll have to type this one in yourself :-)
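If you would rather script the lathe step yourself, a hedged sketch of revolving a 2D profile around the Y axis looks like this (the profile points and segment count are placeholders):

```cpp
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };
struct Profile2 { double r, y; };   // profile point: radius from the axis and height

// Revolve a 2D profile (the silhouette's half outline) around the Y axis.
// Returns one ring of vertices per profile point; neighbouring rings are
// stitched into quads (or pairs of triangles) to form the surface.
std::vector<std::vector<V3>> lathe(const std::vector<Profile2>& profile, int segments = 64)
{
    const double pi = 3.14159265358979323846;
    std::vector<std::vector<V3>> rings;
    for (const Profile2& p : profile) {
        std::vector<V3> ring;
        for (int s = 0; s < segments; ++s) {
            double a = 2.0 * pi * s / segments;
            ring.push_back({ p.r * std::cos(a), p.y, p.r * std::sin(a) });
        }
        rings.push_back(ring);
    }
    return rings;
}
```

Connecting vertex s of ring i with vertex (s+1)%segments of rings i and i+1 gives two triangles per quad; export those as .obj or .3ds and feed the file to the raytracer.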
I found a pretty cool site where you can do this online, interactively:
http://www.fi.uu.nl/toepassingen/00182/toepassing_wisweb.en.html
The revolution it produces isn't very detailed, but maybe you can find the code and extend it to your needs.