Standard Illumination Model for commercial renderers

I've been working on creating my own ray tracer and I implemented surface shading using the Phong illumination model. I'd like to make it look more realistic, so I'm looking at other models. Is Phong also what commercial renderers (e.g., RenderMan, Arnold) use? Or are there others that are used more (Blinn-Phong, the Beckmann distribution, etc.)?

RenderMan and friends are all programmable renderers, so the user can implement whatever shading model is desired, even using different shading models on different objects in a scene. I once wrote one using the modulus operator to simulate a faceted surface where the underlying geometry was smooth, for example. Also, the operation of the shading model is generally driven by multiple hand-painted or procedurally generated maps, so for example a map could specify a broader specular highlight in one area and a narrower one in another.
That said, Lambert diffuse shading with Phong specular shading is the basic starting point.
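For concreteness, here's a minimal sketch of that starting point, written in C++ (the vector type and parameter names are just illustrative, not taken from any particular renderer):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert diffuse + Phong specular at a single shading point.
// n: surface normal, l: direction to the light, v: direction to the viewer (all unit length).
Vec3 shade(Vec3 n, Vec3 l, Vec3 v,
           Vec3 ambient, Vec3 diffuseColor, Vec3 specularColor, double shininess)
{
    double ndotl = std::max(0.0, dot(n, l));        // Lambert term
    Vec3 r = n * (2.0 * dot(n, l)) + l * -1.0;      // reflection of l about n
    double spec = ndotl > 0.0
                ? std::pow(std::max(0.0, dot(r, v)), shininess)  // Phong specular lobe
                : 0.0;
    return ambient + diffuseColor * ndotl + specularColor * spec;
}
```

In a programmable renderer, the shininess, colours, and so on would typically come from painted or procedural maps rather than constants.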


Silhouette below 3D model

There are some 3D applications that can cast a shadow or silhouette below a 3D model. They render pretty fast and smoothly. I wonder what kind of technique is the standard way to get a 3D model's shadow/silhouette.
For example, is there any C++ library like libigl or CGAL that can compute the shadow/silhouette quickly? Or is GLSL shading used instead? Any hint on the standard technology stack would be appreciated.
For rendering, it's trivial. Just project the vertices to the surface (for the case of the XY plane, this just entails setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering that won't matter.
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
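A sketch of that projection step (the type names here are made up for the example):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Drop every triangle of the mesh straight down onto the XY plane (z = 0).
// Rendering the result filled, in a single colour, gives the silhouette.
std::vector<Triangle> projectToGround(const std::vector<Triangle>& mesh)
{
    std::vector<Triangle> flat = mesh;
    for (Triangle& t : flat) {
        t.a.z = 0.0f;
        t.b.z = 0.0f;
        t.c.z = 0.0f;
    }
    return flat;
}
```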
Computing shadows is a vast and difficult topic. In the real world, light sources are extended and the shadow edges are not sharp (there is penumbra). Then there are cast shadows, and even self-shadows.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: if you place an observer at the light source, the faces that observer sees are illuminated by that light source. Conversely, the hidden surfaces are in shadow.
For correct rendering, the shadowed areas should be back-projected to the scene and painted black.
By nature, ray-tracing techniques make this process easy to implement.
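In a ray tracer that principle boils down to a shadow-ray test at each shaded point. A sketch, assuming some scene-intersection routine exists elsewhere (the name intersectScene is hypothetical):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Assumed to exist elsewhere: returns true if a ray from 'origin' along the
// unit direction 'dir' hits anything closer than 'maxDist'.
bool intersectScene(const Vec3& origin, const Vec3& dir, double maxDist);

// A point is in shadow with respect to a point light if the segment from the
// point to the light is blocked by some other surface.
bool inShadow(const Vec3& point, const Vec3& lightPos)
{
    Vec3 toLight{lightPos.x - point.x, lightPos.y - point.y, lightPos.z - point.z};
    double dist = std::sqrt(toLight.x * toLight.x + toLight.y * toLight.y + toLight.z * toLight.z);
    Vec3 dir{toLight.x / dist, toLight.y / dist, toLight.z / dist};
    const double eps = 1e-4;  // offset to avoid self-intersection ("shadow acne")
    Vec3 origin{point.x + dir.x * eps, point.y + dir.y * eps, point.z + dir.z * eps};
    return intersectScene(origin, dir, dist - eps);
}
```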

What formula or algorithm can I use to draw a 3D Sphere without using OpenGL-like libs?

I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through a hidden-surface-removal algorithm.
Am I correct?
Be that as it may, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm from the University of Colorado.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques and not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data on to a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to cast a ray from the camera's origin through the point on the canvas whose colour needs to be determined. Determine which models the ray hits, pick the closest hit, and work out how that point is lit to compute its colour; the lighting is evaluated by tracing further rays from the hit point to the light sources in the scene. Notice that this approach eliminates the need for separate hidden-surface determination algorithms like back-face culling or the z-buffer, since choosing the closest hit is itself hidden-surface determination.
There are packages and libraries that help you do this, but it's also common for ray tracers to be written from scratch as a college-level project. This approach takes more time to render (not to code), but the results are generally more pleasing than rasterization's. It is the more popular choice for non-interactive visuals like movies.
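Since the question is about a sphere: for the ray-tracing route, the key formula is the ray-sphere intersection, which reduces to a quadratic equation. A sketch (the vector helpers are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the distance t along the ray if the ray (origin o, unit
// direction d) hits the sphere (centre c, radius r).
bool intersectSphere(Vec3 o, Vec3 d, Vec3 c, double r, double& t)
{
    Vec3 oc = o - c;
    double b = dot(oc, d);              // half of the usual quadratic 'b' term
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0) return false;       // ray misses the sphere
    double s = std::sqrt(disc);
    t = -b - s;                         // nearest of the two intersections
    if (t < 0.0) t = -b + s;            // ray starts inside the sphere
    return t >= 0.0;
}
```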
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover), and writes the results to a colour buffer. Hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the depth of each fragment, and the closer fragment wins when writing to the colour buffer.
Rasterization is the more popular approach, with cheap hardware support for it available on most modern computers thanks to the years of research and money that have gone into it. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing's, it's faster to render and thus is widely used in interactive, real-time rendering like games.
If you don't want to use those libraries, then you have to do what is commonly known as software rendering, i.e. you will end up doing yourself what these libraries do.
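The heart of such a software rasterizer is the depth-tested pixel write described above. A minimal sketch:

```cpp
#include <cstdint>
#include <vector>

// A colour buffer plus Z-buffer; the depth of every pixel starts out "far away".
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;  // packed RGBA, one per pixel
    std::vector<float>    depth;  // one depth value per pixel

    Framebuffer(int w, int h)
        : width(w), height(h), color(w * h, 0), depth(w * h, 1e30f) {}

    void writeFragment(int x, int y, float z, uint32_t rgba)
    {
        if (x < 0 || y < 0 || x >= width || y >= height) return;
        int i = y * width + x;
        if (z < depth[i]) {       // the closer fragment wins
            depth[i] = z;
            color[i] = rgba;
        }
    }
};
```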
What formula or algorithm can I use to draw a 3D Sphere?
That depends on which of the above you choose. If you simply rasterize a 3D sphere in 2D with an orthographic projection, all you have to do is draw a circle on the canvas.
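A sketch of that degenerate case: fill the pixels satisfying x^2 + y^2 <= r^2 (putpixel stands in for whatever pixel-plotting routine your canvas library provides; WinBGIm has one by that name, but treat the call as an assumption):

```cpp
// Assumed to be provided by the canvas library.
void putpixel(int x, int y, int color);

// The orthographic projection of a sphere centred on the view axis is a disc.
void drawSphereOrtho(int cx, int cy, int r, int color)
{
    for (int y = -r; y <= r; ++y)
        for (int x = -r; x <= r; ++x)
            if (x * x + y * y <= r * r)
                putpixel(cx + x, cy + y, color);
}
```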
If you are looking for hidden-line removal (drawing the edges rather than the inside of the faces), the solution is easy: back-face culling.
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it is facing the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face (which holds for a convex shape such as a sphere).
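A sketch of that test, assuming counter-clockwise vertex order and a parallel projection along viewDir:

```cpp
struct Vec3 { double x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A face (three vertices in counter-clockwise order) is a front face if its
// normal points against the direction of projection.
bool isFrontFace(Vec3 a, Vec3 b, Vec3 c, Vec3 viewDir)
{
    Vec3 normal = cross(b - a, c - a);
    return dot(normal, viewDir) < 0.0;
}

// An edge shared by two faces is drawn if at least one of them is a front face.
bool edgeVisible(bool face1Front, bool face2Front)
{
    return face1Front || face2Front;
}
```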
The usual discretization of the sphere is made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
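A sketch of generating such a parallels-and-meridians grid of points:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Sample the sphere on a latitude/longitude grid: 'parallels' rows from the
// south pole to the north pole, 'meridians' columns around the equator.
std::vector<Vec3> sphereGrid(double radius, int parallels, int meridians)
{
    const double pi = 3.14159265358979323846;
    std::vector<Vec3> points;
    for (int i = 0; i <= parallels; ++i) {
        double phi = pi * i / parallels - pi / 2.0;    // latitude: -pi/2 .. pi/2
        for (int j = 0; j < meridians; ++j) {
            double theta = 2.0 * pi * j / meridians;   // longitude: 0 .. 2*pi
            points.push_back({radius * std::cos(phi) * std::cos(theta),
                              radius * std::cos(phi) * std::sin(theta),
                              radius * std::sin(phi)});
        }
    }
    return points;
}
```

Neighbouring grid points are then connected into quads (or pairs of triangles) for wireframe or filled rendering.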

Three.js: What Is The Exact Difference Between Lambert and Phong?

I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching between them in a scene with one directional light source and 125 spheres, and I cannot see any difference whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all the information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" is also an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as @RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
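Expressed as code, the difference between the two schemes is simply what gets interpolated with the barycentric weights. A language-agnostic sketch (in C++ rather than GLSL; the stand-in lighting function here is just a plain Lambert term):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

// Stand-in illumination model: Lambert against a fixed light direction.
Vec3 lighting(Vec3 n)
{
    Vec3 lightDir = normalize({0.3, 0.4, 0.8});
    double d = std::max(0.0, dot(n, lightDir));
    return {d, d, d};
}

// Gouraud: light the three vertices, then interpolate the *colours*
// with the pixel's barycentric weights (w0, w1, w2).
Vec3 shadeGouraud(Vec3 n0, Vec3 n1, Vec3 n2, double w0, double w1, double w2)
{
    return lighting(n0) * w0 + lighting(n1) * w1 + lighting(n2) * w2;
}

// Phong: interpolate the *normals* with the same weights,
// then evaluate the lighting once per pixel.
Vec3 shadePhong(Vec3 n0, Vec3 n1, Vec3 n2, double w0, double w1, double w2)
{
    Vec3 n = normalize(n0 * w0 + n1 * w1 + n2 * w2);
    return lighting(n);
}
```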
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. ( Gouraud shading; (generalized) Lambert illumination model )
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed at each fragment. ( Phong shading; (generalized) Phong illumination model )
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js.r.66
In computer graphics, it is very common to confuse the Phong reflection model with Phong shading. While the former is a model of the local illumination of points, like the Lambertian model, the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here's a list of detailed articles on each of these topics.
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look to the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than looking at the full generated program code.

Is it possible to convert/export my 3D model (dae/blend/3ds/...) into GLSL ES 2.0?

Is it possible to export or convert my 3D models into GLSL ES 2.0? Is there any converter or exporter tool/add-on for editors like Blender/3ds Max/Maya that creates GLSL ES 2.0 code?
I'd like to create my models conveniently in any of the above mentioned editors and then I'd like to export/convert them into GLSL ES 2.0.
I already have a template WebGL code that displays my shaders. I want to replace my fragment shader and vertex shader parts with the GLSL ES code created automatically by a converter or an exporter tool.
I'd like to do something like this (but for GLSL ES 2.0):
Blender to GLSL
You're comparing apples with cars here. OpenGL is a drawing API; GLSL is a programming language for implementing shader code.
3D models are neither of those. The very question "how can I convert my 3D model to OpenGL?" makes no sense.
Is it possible?
No, because that's not the purpose of GLSL.
Choose a model file format (preferably one for which implementing a reading parser is straightforward), implement the parser, fill appropriate data structures, and feed those into the right parts of OpenGL, making the right calls to draw them.
OpenGL itself doesn't deal with models, scenes or even files. GLSL is not even a file format; it's a language.
I'd start with OBJ or STL files. They're reasonably easy to read and interpret and match very closely the primitive types OpenGL uses.
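As an illustration of how little is needed for the simplest case, here is a sketch of a minimal OBJ reader that handles only vertex positions and already-triangulated faces (real OBJ files need more care: negative indices, v/vt/vn syntax, materials, and so on):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Fills 'vertices' and 'indices' from a triangulated, positions-only OBJ file.
bool loadObj(const std::string& path,
             std::vector<Vec3>& vertices, std::vector<unsigned>& indices)
{
    std::ifstream in(path);
    if (!in) return false;

    std::string line;
    while (std::getline(in, line)) {
        std::istringstream s(line);
        std::string tag;
        s >> tag;
        if (tag == "v") {                          // vertex position
            Vec3 v;
            s >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {                   // triangular face, 1-based indices
            std::string tok;
            for (int i = 0; i < 3 && (s >> tok); ++i) {
                // keep only the position index before any '/'
                indices.push_back(static_cast<unsigned>(std::stoul(tok)) - 1);
            }
        }
    }
    return true;
}
```

The resulting arrays are exactly what gets uploaded to vertex/index buffers and drawn with glDrawElements.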
Probably the hardest format to read is .blend files; effectively a .blend file is a dump of the Blender process memory image. It takes a fully featured Blender (or something very similar to it) to make sense of a .blend file.
Update due to comment:
Please, please carefully read what the exporter script you linked to does: it takes an object's material settings (not the model itself) and generates GLSL code that, when used in the right framework (i.e. with appropriate uniform and attribute names, matrix setup, etc.), will result in shading operations that resemble those material settings as closely as possible. The script does not export a model!
You asked about exporting a 3D model. That would be the mesh of the model and its attributes that place it in the world. Materials are not what's stored in an OBJ or STL file; materials are textures and, yes, shaders. But they're completely independent of the model data itself. It's perfectly possible to use the same material settings on multiple models, or to freely exchange a model's material (textures and shaders), as long as the model provides all the vertex attributes required to make that material work.
Update 2 due to comment:
Do you even understand what a shader does? If not, here's a short synopsis: you have vertex attribute data (in buffers). These indexed attributes are submitted to OpenGL. With a call to glDrawElements or glDrawArrays, the attributes are interpreted as primitives: points, lines or triangles (or quads on older OpenGL versions). Each primitive is then subjected to a number of transformations.
Mandatory: The first step is the vertex shader, whose responsibility is to determine each vertex's final position in the viewport.
Optional: After vertex shading, the primitives formed by the vertices undergo tessellation shading. Tessellation is used to refine geometry, for example adding detail to terrain or making curved surfaces smoother.
Optional: Next comes geometry shading, which can replace a single input primitive with a (small) number of new primitives, and may even change the primitive type. So a single point could be replaced with a triangle, for example (useful for rendering particle systems).
Mandatory: The last step is fragment shading the primitive. After a primitive's position in the viewport has been determined, each of the pixels it covers is processed as one or more fragments. The fragment shader is a program that determines the final color and translucency written to the target framebuffer.
Each shading step is controlled by a user-defined program. It is these programs, called shaders, that are written in GLSL. Not geometry, not models. Programs! And very simple programs at that. They don't produce geometry from nothing; they always process already existing geometry passed to OpenGL.
Shaders are not used for defining or storing models. They just modify them at rendering time.
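To make that concrete, here is roughly what a minimal GLSL ES 2.0 shader pair looks like when embedded in application code as plain source strings (the attribute/uniform names are arbitrary); the application hands these strings to the driver via glShaderSource/glCompileShader and links them into a program:

```cpp
// The two mandatory stages, as GLSL ES 2.0 source. Note there is no geometry
// in here at all -- only instructions on how to transform and colour whatever
// geometry the application submits.
const char* vertexShaderSrc = R"(
    attribute vec3 aPosition;
    uniform mat4 uModelViewProjection;
    void main() {
        // The vertex shader's job: place the vertex in clip space.
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    }
)";

const char* fragmentShaderSrc = R"(
    precision mediump float;
    uniform vec4 uColor;
    void main() {
        // The fragment shader's job: decide the colour of each covered pixel.
        gl_FragColor = uColor;
    }
)";
```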
Have a look at http://www.inka3d.com, which converts your Maya shaders to GLSL. For the models, do you need WebGL or OpenGL ES 2.0?

Non-Affine image transformations in .NET

Are there any classes, methods in the .NET library, or any algorithms in general, to perform non-affine transformations? (i.e. transformations that involve more than just rotation, scale, translation and shear)
e.g.: (example image omitted; source: last100.com)
Is there another term for non-affine transformations?
I am not aware of anything integrated in .NET letting you do non-affine transforms.
I guess you are trying to do some sort of 3D texture mapping? If that's the case, you need a homogeneous (projective) transform, which is not available in .NET. I'm also not aware of any integrated way to do pixel-displacement transforms in .NET.
However, the currently voted solution might be good for what you are trying to do, just be aware that it won't do perspective correction out of the box.
For instance:
The picture on the left was generated using the single quad distort library provided by Neil N. The picture on the right was generated using a single quad (two triangles actually) in DirectX.
This may not have any impact on what you are trying to do, but this is something to keep in mind if you want to do 3D stuff, it will look very weird without perspective correct mapping.
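For reference, the perspective-correct version of a quad distortion is a projective mapping (a homography). A language-agnostic sketch, shown here in C++ rather than C#, of the standard square-to-quad construction (as given by Heckbert); applying it to every source pixel, or its inverse to every destination pixel, gives a perspective-correct warp:

```cpp
#include <cmath>

struct Point { double x, y; };

// (u, v) in [0,1]x[0,1] maps to a point inside the quad q[0]..q[3],
// given in the order corresponding to (0,0), (1,0), (1,1), (0,1).
struct Homography {
    double a, b, c, d, e, f, g, h;

    Point map(double u, double v) const {
        double w = g * u + h * v + 1.0;   // the projective divide
        return { (a * u + b * v + c) / w, (d * u + e * v + f) / w };
    }
};

Homography squareToQuad(const Point q[4])
{
    double dx1 = q[1].x - q[2].x, dx2 = q[3].x - q[2].x;
    double dy1 = q[1].y - q[2].y, dy2 = q[3].y - q[2].y;
    double dx3 = q[0].x - q[1].x + q[2].x - q[3].x;
    double dy3 = q[0].y - q[1].y + q[2].y - q[3].y;

    Homography m{};
    if (std::abs(dx3) < 1e-12 && std::abs(dy3) < 1e-12) {
        // The quad is a parallelogram: the mapping degenerates to an affine one.
        m = { q[1].x - q[0].x, q[2].x - q[1].x, q[0].x,
              q[1].y - q[0].y, q[2].y - q[1].y, q[0].y, 0.0, 0.0 };
    } else {
        double den = dx1 * dy2 - dx2 * dy1;
        m.g = (dx3 * dy2 - dx2 * dy3) / den;
        m.h = (dx1 * dy3 - dx3 * dy1) / den;
        m.a = q[1].x - q[0].x + m.g * q[1].x;
        m.b = q[3].x - q[0].x + m.h * q[3].x;
        m.c = q[0].x;
        m.d = q[1].y - q[0].y + m.g * q[1].y;
        m.e = q[3].y - q[0].y + m.h * q[3].y;
        m.f = q[0].y;
    }
    return m;
}
```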
All of the example images you posted can be done with a quadrilateral distortion, though I can't say for certain that a quad distort will cover ALL non-affine transforms.
Here's a link to a not-so-good implementation of it in C#... it works, but is slow. Poke around Wikipedia for the many different optimizations available for these kinds of calculations:
http://www.vcskicks.com/image-distortion.html
-Neil
You can do this in WPF using the Viewport3D control and a non-affine transform matrix. Rendering this back to a bitmap may be interesting... which I "fixed" by including an invisible <Image> control with the same image as on my textured plane. (Also, I had to work around the max texture size issues by splitting up the plane and cropping images...)
http://www.charlespetzold.com/blog/2007/08/060605.html
In my case I wanted the reverse of this (a transform such that arbitrary points on the warped image become the corners of my rectangular window), which just means using the inverse of the matrix to go the opposite way.
