I have an object in Blender that has sharp corners and easily distinguishable faces, exactly what I want. However, when I place it in Unity all of the vertices smooth out, and it is impossible to tell what you are looking at. How do I get the same object that I have in Blender to show up in Unity?
This is tackled here: blender-normal-smoothing-import-problem
You can also have the importer recalculate the normals via the 'Smoothing Angle' setting, which will edge-break (phong-break) the mesh based on the angle between adjacent faces.
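If you would rather bake the hard edges into the mesh on the Blender side before export, here is a rough sketch using Blender's Python API (bpy); the object name is a placeholder, and the 30° threshold plays the same role as the import smoothing angle.

```python
# A minimal sketch (Blender Python / bpy): bake hard edges into the mesh
# before export so Unity does not smooth them away.
import bpy
from math import radians

obj = bpy.data.objects["MyObject"]  # hypothetical object name
bpy.context.view_layer.objects.active = obj

# Flat shading makes every face use its own face normal.
bpy.ops.object.shade_flat()

# Alternatively, split edges sharper than a threshold angle, which mirrors
# what Unity's "Smoothing Angle" import setting does.
mod = obj.modifiers.new(name="EdgeSplit", type='EDGE_SPLIT')
mod.use_edge_angle = True
mod.split_angle = radians(30.0)  # edges sharper than 30 degrees keep hard normals
```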
There are some 3D applications which can cast a shadow or silhouette below 3D models. They render pretty fast and smooth. I wonder what the standard technique is for computing a 3D model's shadow/silhouette.
For example, is there a C++ library like libigl or CGAL that can compute the shadow/silhouette quickly? Or is GLSL shading used? Any hint on the standard technology stack would be appreciated.
For rendering, it's trivial. Just project the vertices to the surface (for the case of the XY plane, this just entails setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering that won't matter.
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
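For illustration, here is a rough sketch in Python using Shapely (an assumption; Shapely's union takes the place of a hand-written Vatti clipper): project each triangle onto the XY plane and union the results into one silhouette polygon.

```python
# Sketch: project triangles onto the XY plane and union them into a
# silhouette polygon. Shapely is assumed; its union acts as the clipper.
from shapely.geometry import Polygon
from shapely.ops import unary_union

def silhouette_xy(triangles):
    """triangles: iterable of three (x, y, z) vertices each."""
    projected = []
    for a, b, c in triangles:
        # Drop the Z coordinate to project onto the XY plane.
        tri = Polygon([(a[0], a[1]), (b[0], b[1]), (c[0], c[1])])
        if tri.is_valid and tri.area > 0.0:
            projected.append(tri)
    # The union of all projected triangles is the silhouette outline.
    return unary_union(projected)
```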
Computing shadows is a vast and difficult topic. In the real world, light sources are extended and shadow edges are not sharp (there is penumbra). Then there are cast shadows, and even self-shadows.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: if you place an observer at the light source, the faces the observer sees are illuminated by that light source. Conversely, the hidden surfaces are in shadow.
For correct rendering, the shadowed areas should be back-projected to the scene and painted black.
By nature, ray-tracing techniques make this process easy to implement.
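As a concrete illustration of that principle, here is a minimal NumPy sketch (an assumption, not tied to any particular engine) that classifies faces as lit or self-shadowed for a point light; cast shadows would additionally need a visibility test such as shadow mapping or ray casting.

```python
# Sketch: a face is lit by a point light if its normal points towards the
# light, i.e. it would be visible to an observer placed at the light.
import numpy as np

def facing_light(face_centers, face_normals, light_pos):
    """face_centers, face_normals: (N, 3) arrays; light_pos: (3,) array."""
    to_light = light_pos - face_centers                       # face -> light vectors
    to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
    # Positive dot product: the face is oriented towards the light source.
    return np.einsum('ij,ij->i', face_normals, to_light) > 0.0
```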
Let's say I've got an RGBA texture and a polygon class whose constructor takes a vector array of vertex coordinates.
Is there some way to create a polygon from this texture, for example using the texture's alpha channel?
This is in 2D.
Absolutely, yes it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out too much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel generates a lot of geometry very quickly. Holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either fully-on or fully-off pixels, you may be expecting nice smooth edges. That's going to be hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
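To make the marching-squares idea concrete, here is a rough sketch in Python using scikit-image and Shapely (both assumptions, not part of any particular engine): trace iso-contours of the alpha channel at a threshold and simplify them into polygons you could feed to a polygon class.

```python
# Sketch: marching squares over the alpha channel, then polygon simplification.
import numpy as np
from skimage import measure
from shapely.geometry import Polygon

def alpha_to_polygons(rgba, threshold=0.5, tolerance=1.0):
    """rgba: HxWx4 float array with alpha in [0, 1]."""
    alpha = rgba[..., 3]
    polygons = []
    for contour in measure.find_contours(alpha, threshold):
        # find_contours returns (row, col) points; swap to (x, y).
        poly = Polygon(contour[:, ::-1])
        if poly.is_valid and poly.area > 1.0:
            # Douglas-Peucker simplification keeps the vertex count sane.
            polygons.append(poly.simplify(tolerance))
    return polygons
```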
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite - a PolygonCollider2D. Its script reference mentions the pathCount variable, which gives the number of polygons it contains and in turn which indices are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But with the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects, it covers a lot of use cases. This doesn't mean it's appropriate for your application.
I am trying to write a script that converts the vertex colors of a scanned .ply model to a good UV texture map so that it can be 3D painted as well as re-sculpted in another program like Mudbox.
Right now I am unwrapping the model using smart projection in Blender, and then using Meshlab to convert the vertex colors to a texture. My approach is mostly working, and at first the texture seems to be converted with no issues, but when I try to use the smooth brush in Mudbox/Blender to smooth out some areas of the model after the texture conversion, small polygons rise to the surface that are untextured. Here is an image of the problem: https://www.dropbox.com/s/pmekzxvvi44umce/Image.png?dl=0
All of these small polygons seem to have their own UV shells separate from the rest of the mesh, they all seem to be invisible from the surface of the model before smoothing, and they are difficult or impossible to repaint in Mudbox/Blender.
I tried baking the texture in Blender as well but experienced similar problems. I'm pretty stumped so any solutions or suggestions would be greatly appreciated!
Doing some basic mesh cleanup in Meshlab (merging close vertices in particular) seems to have mostly solved the problem.
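For what it's worth, the same cleanup can be scripted. The sketch below uses Blender's Python API (bpy) rather than MeshLab, since 'Merge by Distance' (the mesh.remove_doubles operator) does essentially the same thing as MeshLab's 'Merge Close Vertices'; the object name and threshold are placeholders.

```python
# Sketch: merge vertices that are closer than a small threshold, the
# Blender equivalent of MeshLab's "Merge Close Vertices" filter.
import bpy

obj = bpy.data.objects["ScannedModel"]  # hypothetical object name
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0001)  # merging distance in scene units
bpy.ops.object.mode_set(mode='OBJECT')
```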
I'm programming a simple model loader in WebGL (using OpenGL shaders). I've implemented Phong shading in the fragment shader. However, when I load objects larger than a simple monkey/cube and turn the camera away from the light source, the meshes look strange (aliased?). Some of them are even lit although they should be hidden (black).
The lit side is OK:
The other side is wrong:
I calculate the normals for every vertex the same way, so the normals should be OK (when I turn the camera to the lit side of the car, everything looks right).
Thank you very much for your tips.
This looks like a single-sided vs. two-sided lighting issue to me. If your mesh consists of only a single "layer" of faces, those faces have normals that point in only one direction. With single-sided lighting, the back face, i.e. the case where the light is on the side the normal points away from, will look weird.
There are three ways to overcome this:
Use two-sided illumination (see the sketch after this list)
Draw the object twice: first with back faces culled, then flip the normals and cull the front faces
Give the mesh thickness so that there are two sides (you should then enable backface culling)
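Here is a minimal, API-agnostic sketch of option 1 in Python/NumPy (an assumption, since the question is about a GLSL fragment shader): flip the normal when it faces away from the viewer before computing the diffuse term. In GLSL the same decision is usually made with the gl_FrontFacing built-in.

```python
# Sketch of two-sided diffuse lighting: if the normal points away from the
# camera, flip it so the back face is lit like a front face.
import numpy as np

def two_sided_diffuse(normal, to_light, to_view):
    """normal, to_light, to_view: 3-vectors; to_light/to_view normalized."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    if np.dot(n, to_view) < 0.0:   # normal faces away from the camera
        n = -n                     # treat the back face as a front face
    return max(np.dot(n, to_light), 0.0)  # Lambertian diffuse factor
```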
I think I found a bug in my Collada parser: I do not respect the exported normals but calculate new ones instead. This causes inverted normals from time to time (the door mesh of this car, for example). In any case, two-sided rendering has to be implemented too.
Thank you.
I have a program which parses and interprets the OBJ file format in an OpenGL context.
I created a little project in Blender containing a simple sphere with 'Hair' particles on it.
After conversion (separating the particles from the sphere), my particles form a new mesh, so I have two meshes in my project (named 'Sphere' and 'Hair'). When I export the mesh 'Sphere' to an OBJ file (File/Export/Wavefront (.obj)) with 'Include Normals' selected, the exported file contains all the normal information (e.g. vn 0.5889 0.14501 0.45455, ...).
When I try to do the same thing with the particles, also selecting 'Include Normals', there are no normals in the OBJ file. (Before exporting I selected the right mesh.)
So I don't understand why normals are not exported for a mesh that comes from particles.
Above is the general Blender render of my hair particles. As you can see, all the particles react to the light, so Blender does use normal information for those particles.
The picture above shows (in Blender's Edit Mode, after conversion) that the particles are made of several lines. In my OpenGL program I use GL_LINES to render the same particles. I just want normal information so I can manage lighting on my particles.
Do you have an idea how to export normals for particle meshes?
Thanks in advance for your help.
You are trying to give normals to lines. Let's think about what that means.
When we talk about normal vectors on a surface, we mean vectors pointing out of the surface.
For triangles, once we define one side to be the "front" face, there is exactly one normal. For lines, any vector perpendicular to the line counts as a normal: there are infinitely many, and any one will "do".
What are some reasons we care about normals in graphics?
Lighting: e.g. diffuse lighting is approximated by using the dot product of the normal with the incident light vector. This doesn't apply to hair though!
Getting a transformation matrix: for this you can pick any normal (do you want to transform into hair-space?)
In short: you can either pick any perpendicular vector for your normal (it's easy to calculate one, see the sketch below) or just not use normals at all for your hair. It depends on what you are trying to do.
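If you do go with an arbitrary perpendicular, here is a small sketch in Python/NumPy (an assumption; the function name is made up). For a GL_LINES segment from p0 to p1 you would call it with the segment direction p1 - p0.

```python
# Sketch: pick one arbitrary vector perpendicular to a hair segment, since
# any perpendicular is a valid "normal" for a line.
import numpy as np

def any_perpendicular(direction):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Cross with whichever axis is least aligned with the segment,
    # so the cross product is never (close to) zero.
    axis = np.eye(3)[np.argmin(np.abs(d))]
    n = np.cross(d, axis)
    return n / np.linalg.norm(n)
```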