Understanding the Oren–Nayar reflectance model

In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. (Wikipedia)
My understanding:
The law of reflection tells us that the angle of incidence is equal to the angle of reflection. So on a perfectly smooth surface we get specular reflection. On a rough surface, an incident beam is scattered in a number of directions (each ray obeying the law of reflection). To model this, a microfacet model is used, which defines grooves (facets) on the surface. Now what does a Lambertian facet mean? Are we assuming grooves on the already present grooves?

Lambertian BRDF is a theoretical model describing perfect diffuse reflection.
Microfacet BRDFs on the other hand are based on geometrical optics theory, where a surface is covered with microfacets and described by means of statistical distributions.
The Oren-Nayar BRDF is an improvement over the theoretical Lambertian model, applying a distribution of purely diffuse microfacets. So to answer your question: a Lambertian microfacet is an entity in itself, not subject to microfacet theory; it's not a "groove with grooves". It's a purely theoretical concept used to apply microfacet theory to matte surfaces. There are many different BRDF models, and only some of them follow geometrical optics the way microfacet models do, so it is useful to keep in mind that microfacet theory is not a universal theory.
For further reading, I recommend the overview of BRDF models by Montes and Ureña; their classification was very enlightening, to me at least.
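To make that contrast concrete, here is a minimal Python sketch comparing the Lambertian BRDF with the simplified (qualitative) Oren-Nayar model. The function and variable names are my own, and sigma is the roughness parameter (the standard deviation of the facet slope distribution). Setting sigma to zero recovers the Lambertian result, which is exactly the sense in which Oren-Nayar is built from Lambertian facets.

```python
import math

def lambert_brdf(albedo):
    # Perfect diffuse reflection: a constant BRDF, independent of view/light angles.
    return albedo / math.pi

def oren_nayar_brdf(albedo, sigma, theta_i, theta_r, phi_i, phi_r):
    """Simplified (qualitative) Oren-Nayar BRDF.

    theta_i/theta_r: polar angles of the incoming/outgoing directions from the normal.
    phi_i/phi_r: azimuthal angles. sigma: roughness in radians;
    sigma = 0 reduces to the Lambertian BRDF above.
    """
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / math.pi) * (
        a + b * max(0.0, math.cos(phi_i - phi_r)) * math.sin(alpha) * math.tan(beta)
    )
```

With sigma = 0 the second term vanishes and both functions return the same value; increasing sigma flattens the falloff and produces the characteristic "matte" look.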

Related

Standard Illumination Model for commercial renderers

I've been working on creating my own ray tracer and I implemented surface shading using the Phong illumination model. I'm looking to make it look more realistic, so I'm looking at different models. Is this also what is used in commercial renderers (e.g., Renderman, Arnold)? Or are there others that are used more (Blinn-Phong, Beckmann distribution, etc.)?
Renderman and friends are all programmable renderers, so the user can implement whatever shading model is desired, even using different shading models on different objects in a scene. I wrote one using the modulus operator to simulate a faceted surface where the underlying geometry was smooth, for example. Also, the operation of the rendering model is generally driven by multiple hand-painted or procedurally generated maps, so for example a map could specify a broader specular highlight in one area, and narrower in another.
That said, Lambert diffuse shading with Phong specular shading is the basic starting point.
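As a rough illustration of that starting point (not any particular renderer's code), a Lambert diffuse term plus a Phong specular term for a single light might look like this in Python:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert_phong(n, l, v, kd, ks, shininess, light_color):
    """Lambert diffuse + Phong specular for one light.

    n: surface normal, l: direction to the light, v: direction to the viewer
    (all unit vectors pointing away from the surface); kd/ks: diffuse/specular coefficients.
    """
    n, l, v = normalize(n), normalize(l), normalize(v)
    n_dot_l = max(0.0, float(np.dot(n, l)))
    diffuse = kd * n_dot_l
    # Phong specular: reflect l about n, then raise (r . v) to the shininess exponent.
    r = 2.0 * n_dot_l * n - l
    specular = ks * max(0.0, float(np.dot(r, v))) ** shininess if n_dot_l > 0 else 0.0
    return light_color * (diffuse + specular)
```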

Papers that Originally Introduced the Concept of Inverse Perspective Projection

I am studying the rasterization algorithm and am trying to make a list of papers that were seminal in this area. For example, "A Parallel Algorithm for Polygon Rasterization" would be one.
The paper, or group of papers, I am looking for at the moment is the one that introduced the concept of interpolating vertex attributes (RGB, n, st, etc.) across the surface of a triangle using the inverse projection method.
Basically, my goal is to get back to the source of the technique.
Any other fundamental/seminal papers you could recommend in this area would be helpful as well. Thanks.
To answer the question in part, the Wikipedia article on Gouraud shading mentions Gouraud's PhD thesis and apparently a follow-up paper as sources.
Gouraud, Henri (1971). Computer Display of Curved Surfaces. Doctoral thesis, University of Utah.
Gouraud, Henri (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers C–20 (6): 623–629. doi:10.1109/T-C.1971.223313.
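For context, the technique the question asks about, interpolating vertex attributes with a perspective-correct (inverse) divide, can be sketched as follows. This is an illustrative Python snippet with assumed inputs (screen-space barycentric weights and the clip-space w of each vertex), not code from any of the cited papers:

```python
def perspective_correct_interp(attrs, ws, bary):
    """Perspective-correct interpolation of one vertex attribute over a triangle.

    attrs: attribute value at each of the 3 vertices (e.g. a color or uv component).
    ws: clip-space w of each vertex. bary: screen-space barycentric weights.
    Interpolating attr/w and 1/w linearly in screen space and then dividing
    undoes the perspective distortion.
    """
    num = sum(b * (a / w) for a, b, w in zip(attrs, bary, ws))
    den = sum(b / w for b, w in zip(bary, ws))
    return num / den
```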

what's the difference between BRDF and Phong model?

I am wondering whether the Phong model is one kind of BRDF?
Does a BRDF only deal with opaque surfaces? If the surface is translucent, which model should be used?
There are two things called Phong:
a BRDF: I = Ia + Id + Is ... this is probably what you mean by the Phong model
a way to interpolate illumination between vertices
So yes, what you call the Phong model IS a BRDF, though of course not a physically based one.
And a BRDF is only for opaque surfaces, because it is a bidirectional REFLECTANCE distribution function.
Look at the BTDF (transmission) or even the BSDF (scattering).
In practice, a general BRDF is often based on microfacet theory, while the original Phong model is not.
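To make the "Phong IS a BRDF" point explicit, here is an illustrative Python sketch of Phong written as f_r(wi, wo); the (exponent + 2)/(2π) factor is the normalization used in the modified, energy-conserving variant, which the original Phong model omits:

```python
import numpy as np

def phong_brdf(wi, wo, n, kd, ks, exponent):
    """Phong written as a BRDF f_r(wi, wo): a diffuse lobe plus a specular lobe.

    wi: direction to the light, wo: direction to the viewer, n: surface normal
    (all unit vectors pointing away from the surface).
    """
    r = 2.0 * np.dot(n, wi) * n - wi           # mirror reflection of wi about n
    cos_alpha = max(0.0, float(np.dot(r, wo)))
    diffuse = kd / np.pi
    specular = ks * (exponent + 2.0) / (2.0 * np.pi) * cos_alpha ** exponent
    return diffuse + specular
```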

Three.js: What Is The Exact Difference Between Lambert and Phong?

I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching them in a scene with one directional light source and 125 spheres, and I cannot see any differences whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" is also an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as @RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
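In code terms, the two options boil down to something like the following Python sketch (purely illustrative; `shade` stands for whatever per-pixel lighting function you use, and the barycentric weights are assumed to come from the rasterizer):

```python
import numpy as np

def gouraud_pixel(bary, vertex_colors):
    # Gouraud: lighting was already evaluated at the three vertices;
    # just blend the resulting colors with the barycentric weights.
    return sum(b * c for b, c in zip(bary, vertex_colors))

def phong_pixel(bary, vertex_normals, shade):
    # Phong shading: blend the vertex normals instead, renormalize,
    # and evaluate the lighting model ("shade") per pixel with that normal.
    n = sum(b * vn for b, vn in zip(bary, vertex_normals))
    n = n / np.linalg.norm(n)
    return shade(n)
```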
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. ( Gouraud shading; (generalized) Lambert illumination model )
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed at each fragment. ( Phong shading; (generalized) Phong illumination model )
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js.r.66
In computer graphics, it is very common to confuse the Phong reflection model with Phong shading. While the former is a model of the local illumination of points, like the Lambertian model, the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here's a list of detailed articles on each of these topics.
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look to the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than looking at the generated program code.

Are there any rendering alternatives to rasterisation or ray tracing?

Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.
Aagh! These answers are very uninformed!
Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").
A classic (though now somewhat dated) categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.
These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3 or a combination. Various ray tracing hardware has been proposed, and sometimes built, but it never caught on; also, modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage compared to their hard-coded rasterization techniques. Other, more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.
Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (that which bounces between objects, as opposed to coming directly from lights).
"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).
People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. This is an orthogonal problem to ray tracing vs. rasterization. For example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize high-order parametric surfaces in scanline order (cf. Lane/Carpenter et al., CACM 1980).
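As a toy illustration of the image-order vs. object-order split described above (with `intersect`, `project`, `covered_pixels`, and `camera.ray_through` as hypothetical helpers), the two loops look like this in Python:

```python
def ray_cast(camera, pixels, objects):
    # Image order: for each pixel, ask "which object is behind this pixel?"
    image = {}
    for p in pixels:
        ray = camera.ray_through(p)                       # hypothetical camera helper
        hits = [(intersect(ray, obj), obj) for obj in objects]
        hits = [(t, obj) for t, obj in hits if t is not None]
        image[p] = min(hits, key=lambda h: h[0])[1] if hits else None
    return image

def rasterize_zbuffer(camera, pixels, objects):
    # Object order: for each object, ask "which pixels does this shape overlap?"
    zbuf = {p: float("inf") for p in pixels}
    image = {p: None for p in pixels}
    for obj in objects:
        for p, depth in covered_pixels(project(obj, camera)):  # hypothetical helpers
            if depth < zbuf[p]:           # keep the nearest surface seen so far
                zbuf[p] = depth
                image[p] = obj
    return image
```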
There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least that I know of) that provides truly realistic (i.e., all the laws of optics are obeyed) rendering if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing (given that it effectively does the opposite and simulates the paths taken by photons from the light sources to the camera), yet this is its only disadvantage. It's certainly an interesting algorithm, though you're not going to see it in wide-scale use until well after ray tracing (if ever).
The Rendering article on Wikipedia covers various techniques.
Intro paragraph:
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.

Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.

Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point-of-view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces; and ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.

Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
From those descriptions, only radiosity seems different in concept to me.