How to export vertex normals (vn) in an OBJ file with Blender using a particle system

I have a program that is able to parse and interpret the OBJ file format in an OpenGL context.
I created a little project in Blender containing a simple sphere with 'Hair' particles on it.
After conversion (separating the particles from the sphere), my particles form a new mesh, so I have two meshes in my project (named 'Sphere' and 'Hair'). When I export the 'Sphere' mesh to an OBJ file (File/Export/Wavefront (.obj)) with 'Include Normals' selected, the exported file contains all the normal information (e.g. vn 0.5889 0.14501 0.45455, ...).
When I try to do the same thing with the particles, also selecting 'Include Normals', I get no normals in the OBJ file. (Before exporting, I selected the right mesh.)
So I don't understand why normal properties are not exported for a mesh that comes from particles.
Here is the general Blender render of my hair particles. As you can see, all the particles react to the light, so Blender does use normal properties for those particles.
The second picture shows (in Blender 'Edit Mode', after conversion) that the particles are made up of several line segments. In my OpenGL program I use GL_LINES to render the same particles. I just want normal information so I can manage lighting on my particles.
Do you have any idea how to export normal properties for particle meshes?
Thanks in advance for your help.

You are trying to give normals to lines. Let's think about what that means.
When we talk about normal vectors on a surface, we mean a vector "pointing out of the surface".
For triangles, once we define one side to be the "front" face, there is exactly one such normal. For lines, any vector perpendicular to the line counts as a normal: there are infinitely many, and any one will "do".
What are some reasons we care about normals in graphics?
Lighting: e.g. diffuse lighting is approximated by using the dot product of the normal with the incident light vector. This doesn't apply to hair though!
Getting a transformation matrix: for this you can pick any normal (do you want to transform into hair-space?)
In short: you can either pick any perpendicular vector as your normal (it's easy to calculate) or just not use normals at all for your hair. It depends on what you are trying to do.
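If you do want a per-vertex normal for the hair (for example to feed a simple diffuse term), one way to follow the advice above is to pick an arbitrary vector perpendicular to the segment direction. A minimal C++ sketch of that idea, with an illustrative Vec3 type and helper names, assuming the segment direction is non-zero:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Return an arbitrary unit vector perpendicular to the (non-zero) segment direction d.
    // For a line, any such vector is an equally valid "normal".
    static Vec3 anyPerpendicular(const Vec3& d) {
        // Cross with whichever world axis is least aligned with d to avoid a near-zero result.
        Vec3 axis = (std::fabs(d.x) < std::fabs(d.z)) ? Vec3{1.0f, 0.0f, 0.0f}
                                                      : Vec3{0.0f, 0.0f, 1.0f};
        return normalize(cross(d, axis));
    }

You could run something like this over each hair segment's direction and write your own vn lines, since the exporter doesn't emit them for the converted hair mesh.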

Related

How to do cube mapping of a static environment onto a complex model with Direct3D 11 and HLSL?

I am very new to shaders and to programming in Direct3D 11 (C++) and HLSL. However, I have been given a task to:
Implement cube mapping of a static environment onto a complex model (not a cube). Cube mapping allows an object to reflect the scene around it.
There aren't many resources online. Can anyone please tell me the steps to follow to achieve correct cube mapping? I'm mostly concerned about the calculations to do on the HLSL side.
For a very basic environment mapping, all you need to do is:
Compute the position and surface normal of the current pixel (in the pixel shader) in world space
Compute the (normalised) view direction (world space pixel position - world space camera position)
Compute the reflection vector from the view direction and the surface normal (there is a built-in HLSL function, reflect(), to do that if you don't want to do the math yourself; see the sketch after this list)
Sample the cube map with that reflection vector, and return that color.
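Since the question mentions Direct3D 11 in C++, here is a minimal CPU-side sketch of steps 2 and 3 using DirectXMath, just to make the vector math explicit; in the actual pixel shader the same thing is the HLSL normalize() and reflect() intrinsics followed by a cube-map sample with the result. The function and parameter names are made up for illustration:

    #include <DirectXMath.h>
    using namespace DirectX;

    // View direction and reflection vector, both in world space.
    XMVECTOR ReflectionDirection(XMFLOAT3 pixelPosWS, XMFLOAT3 cameraPosWS, XMFLOAT3 normalWS)
    {
        // Normalised view direction: from the camera towards the shaded point.
        XMVECTOR view = XMVector3Normalize(
            XMVectorSubtract(XMLoadFloat3(&pixelPosWS), XMLoadFloat3(&cameraPosWS)));
        XMVECTOR normal = XMVector3Normalize(XMLoadFloat3(&normalWS));

        // r = v - 2 * dot(v, n) * n  -- exactly what HLSL's reflect(v, n) computes.
        return XMVector3Reflect(view, normal);
    }

The returned direction is what you would use to sample the cube map (step 4).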
This then works like a mirror: the reflection vector is the direction in which your line of sight would be reflected if the surface of your mesh were a perfect mirror, and you then ask the cube map what color lies in that direction (i.e. what reflection you're seeing). How simple or complex your mesh shape is doesn't matter here, because you only ever look at one pixel of that (rasterized) mesh at a time, using that pixel's surface normal as a guide.
More advanced environment-mapping techniques then blur the reflection based on the surface roughness (usually by sampling different mip levels of your cube map), merge the color with the other light/color computations for that pixel, add indirect environment lighting (which requires sampling a different, specially pre-computed cube map directly with the surface normal), etc. That's where all the papers and such come into play, but the very basic concept of environment mapping is just a few lines of code and is very straightforward.

Mesh on the other side of light source looks strange

I'm programming a simple model loader in WebGL (using GLSL shaders). I've implemented Phong shading in the fragment shader. However, when I load objects larger than a simple monkey/cube and turn the camera away from the light source, the meshes look strange (aliased?). Some of them are even lit although they should be dark (black).
The lit side is OK:
The other side is wrong:
I calculate the normals for every vertex the same way, so the normals should be OK (when I turn the camera to the lit side of the car, everything looks right).
Thank you very much for your tips.
This looks like a single-sided vs. two-sided lighting issue to me. If your mesh consists of only a single "layer" of faces, those faces have normals that point in only one direction. If single-sided lighting is used, then the back face, i.e. the side the normal points away from, will look weird when the light is on that side.
There are three ways to overcome this:
Use two-sided illumination (see the sketch after this list)
Draw the object twice: first with back faces culled, then flip the normals and cull the front faces
Give the mesh thickness so that there are two sides (you should then enable backface culling)
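To make the first option concrete, here is a small C++-style sketch of a two-sided diffuse term; in a GLSL fragment shader you could do the same by flipping the normal based on gl_FrontFacing or on the view direction. The Vec3 type and names are just for illustration:

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Two-sided Lambert term. n, lightDir and viewDir are unit vectors; viewDir points
    // from the surface towards the camera. If the normal faces away from the viewer we
    // are seeing the back of a single-layer face, so flip the normal before lighting.
    static float twoSidedDiffuse(Vec3 n, const Vec3& lightDir, const Vec3& viewDir) {
        if (dot(n, viewDir) < 0.0f) {
            n = { -n.x, -n.y, -n.z };
        }
        return std::max(dot(n, lightDir), 0.0f);
    }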
I think I found a bug in my Collada parser where I don't respect the exported normals but calculate new ones instead. This causes inverted normals from time to time (the door mesh of this car, for example). Anyway, two-sided rendering has to be implemented too.
Thank you.

How to draw the heightmap onto the screen?

I'm using DirectX 10 to simulate a water surface, and I now have a height map, which is a 2D array of the heights (y) at the points (x, z). But to draw it on the screen, I must turn it into a mesh or have an index buffer to draw triangle topology.
The data is too large to do this manually. Are there any methods for drawing it on the screen? I hope it's easy to implement. If there is a function included in DirectX 10 that can do it, that would be best for me.
Create a mesh that forms a grid of squares (each made of two triangles) and set all the vertices' y to 0. In the vertex shader, sample the heightmap and add the stored value to the y of the vertex.
This might help you.
P.S.: If the area you want to cover is too big, you should take a look at terrain LOD techniques (they should work the same for water).
I'm sure you can make a mesh out of it. I doubt you can generate the heightmap for a water surface that is too large to "meshify".
Why are you looking at diamond-square? For a 512x512 heightmap, all you need to do is define a set of points and then generate the triangles for them. It's really very simple.
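To make that concrete, here is a minimal C++ sketch that builds the flat grid of vertices plus the triangle index list (two triangles per grid cell) for a width x height heightmap, matching the vertex-shader approach above where y stays 0 and the height is added in the shader; the names and the 32-bit index type are just illustrative:

    #include <cstdint>
    #include <vector>

    struct GridVertex { float x, y, z; };

    // Build a flat grid of (width x height) vertices at y = 0 plus the triangle index
    // list (two triangles per cell). The height is then added in the vertex shader,
    // or could be baked into y here directly from the heightmap array.
    void BuildGrid(int width, int height,
                   std::vector<GridVertex>& vertices,
                   std::vector<uint32_t>& indices)
    {
        vertices.clear();
        indices.clear();

        for (int z = 0; z < height; ++z)
            for (int x = 0; x < width; ++x)
                vertices.push_back({ static_cast<float>(x), 0.0f, static_cast<float>(z) });

        for (int z = 0; z < height - 1; ++z) {
            for (int x = 0; x < width - 1; ++x) {
                uint32_t i0 = static_cast<uint32_t>(z * width + x);
                uint32_t i1 = i0 + 1;                                // one step in x
                uint32_t i2 = i0 + static_cast<uint32_t>(width);     // one step in z
                uint32_t i3 = i2 + 1;

                // Two triangles per grid cell.
                indices.insert(indices.end(), { i0, i2, i1 });
                indices.insert(indices.end(), { i1, i2, i3 });
            }
        }
    }

You would then upload the vertices and indices to vertex/index buffers and draw them as an indexed triangle list.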

Fill 2D area bound by vertices in XNA

I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill it with a color, not a file-based texture.
As an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color, i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I also want to learn what should and shouldn't be done, on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Reimer's XNA Tutorials on pixel shaders is a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing the buffer to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.
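As a rough illustration of that last approach, here is a C++-style sketch of a vertex layout that carries a color with each vertex (in XNA the built-in VertexPositionColor type plays this role), so different triangles, or bands of triangles, can get different colors from a single buffer; the names and values are made up:

    struct ColoredVertex {
        float x, y, z;      // position on the plane (z constant to simulate 2D)
        float r, g, b, a;   // per-vertex color
    };

    // One quad split into two triangles with different colors. To get hard color
    // boundaries (e.g. vertical bands), duplicate the vertices along the boundary
    // so each band keeps its own color instead of interpolating across it.
    static const ColoredVertex kQuad[6] = {
        // first triangle: red
        { 0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f, 1.0f },
        { 0.0f, 1.0f, 0.0f,   1.0f, 0.0f, 0.0f, 1.0f },
        { 1.0f, 1.0f, 0.0f,   1.0f, 0.0f, 0.0f, 1.0f },
        // second triangle: green
        { 0.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f, 1.0f },
        { 1.0f, 1.0f, 0.0f,   0.0f, 1.0f, 0.0f, 1.0f },
        { 1.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f, 1.0f },
    };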

Direct3D: Wireframe without Diagonals

When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across them because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but would look odd as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial: although surfaces facing away from the camera can easily be removed, partially obscured surfaces would require manual clipping. As a final thought, try a two-pass approach: the first pass draws the filled polygons but writes only to the z-buffer, then draw the lines over the top with a suitable z bias. That would handle the partially obscured surface problem.
The built-in wireframe mode renders the edges of the primitives. Since in D3D the primitives are triangles (or lines, or points, but not arbitrary polygons), the built-in way won't cut it.
I guess you have to look into some sort of "edge detection" algorithm. It could operate in image space: render the model into a texture, assigning a unique color to each logical polygon, and then do a postprocessing pass with a pixel shader that detects changes in color (color change = output black, otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. So there are lots of different options here.
I think you'll need to draw those lines manually; wireframe mode is built in, so I don't think you can modify it. You can get the list of vertices in your mesh and process them into the list of lines that you need to draw.
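A rough C++ sketch of the "build your own line list" idea from the last two answers: for each rectangular face keep only its four outer edges, deduplicating edges shared between faces, and then draw the result as a line list (optionally over a depth-only pass with a z bias, as suggested above). The Quad type and function name are illustrative only:

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <set>
    #include <utility>
    #include <vector>

    // A rectangular face, given as four vertex indices in order around the quad.
    using Quad = std::array<uint32_t, 4>;

    // Build an indexed line list containing only the outer edges of each quad
    // (no diagonals), emitting each shared edge once.
    std::vector<uint32_t> BuildWireframeEdges(const std::vector<Quad>& quads)
    {
        std::set<std::pair<uint32_t, uint32_t>> seen;
        std::vector<uint32_t> lineIndices;

        for (const Quad& q : quads) {
            for (int i = 0; i < 4; ++i) {
                uint32_t a = q[i];
                uint32_t b = q[(i + 1) % 4];
                // Order the endpoints so the same edge from a neighbouring quad dedupes.
                std::pair<uint32_t, uint32_t> edge = std::minmax(a, b);
                if (seen.insert(edge).second) {
                    lineIndices.push_back(a);
                    lineIndices.push_back(b);
                }
            }
        }
        return lineIndices;  // draw with a line-list primitive topology
    }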
