What's the difference between a BRDF and the Phong model? - graphics

I am wondering whether the Phong model is a kind of BRDF.
Does a BRDF only deal with opaque surfaces? If the surface is translucent, which model should I use?

There are two things called Phong:
a BRDF: I = Ia + Id + Is ... this is what you mean by the Phong model, I suppose (a small sketch of this decomposition follows below)
a way to interpolate illumination between vertices
So yes, what you call the Phong model IS a BRDF, though of course not a physically based one.
And a BRDF only covers opaque surfaces, because it is a bidirectional REFLECTANCE distribution function.
For translucent materials, look at the BTDF (transmission) or, more generally, the BSDF (scattering).
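To make the I = Ia + Id + Is decomposition concrete, here is a minimal sketch of the Phong reflection model as a single Python function. The function name, parameter names, and coefficient values are illustrative assumptions, not taken from any particular renderer.

```python
# Minimal sketch of the Phong reflection model, I = Ia + Id + Is.
# All names and coefficient values are illustrative assumptions.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir,
          ka=0.1, kd=0.6, ks=0.3, shininess=32.0, light_intensity=1.0):
    """Return scalar reflected intensity at a surface point."""
    n = normalize(normal)
    l = normalize(light_dir)   # points from the surface toward the light
    v = normalize(view_dir)    # points from the surface toward the viewer

    ambient = ka                                  # Ia: constant term
    diffuse = kd * max(np.dot(n, l), 0.0)         # Id: Lambertian term
    r = 2.0 * np.dot(n, l) * n - l                # mirror reflection of l about n
    specular = ks * max(np.dot(r, v), 0.0) ** shininess  # Is: power-law lobe

    return light_intensity * (ambient + diffuse + specular)

# Example: light and viewer both 45 degrees off the normal, on opposite sides.
print(phong(np.array([0.0, 0.0, 1.0]),
            np.array([1.0, 0.0, 1.0]),
            np.array([-1.0, 0.0, 1.0])))
```

In this example the mirror direction lines up exactly with the viewer, so the specular term contributes its full ks on top of the ambient and diffuse parts.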

In practice, most general-purpose BRDFs today are built on microfacet theory, while the original Phong model is not.

Related

Standard Illumination Model for commercial renderers

I've been working on creating my own ray tracer and implemented surface shading using the Phong illumination model. I'd like to make it look more realistic, so I'm considering different models. Is Phong also what commercial renderers (e.g., RenderMan, Arnold) use? Or are there other models that are used more (Blinn-Phong, the Beckmann distribution, etc.)?
RenderMan and friends are all programmable renderers, so the user can implement whatever shading model is desired, even using different shading models on different objects in a scene. I wrote one using the modulus operator to simulate a faceted surface where the underlying geometry was smooth, for example. Also, the operation of the shading model is generally driven by multiple hand-painted or procedurally generated maps, so for example a map could specify a broader specular highlight in one area and a narrower one in another.
That said, Lambert diffuse shading with Phong specular shading is the basic starting point.
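As a companion to that starting point, here is a minimal sketch of Lambert diffuse plus Blinn-Phong specular (the half-vector variant mentioned in the question), which many renderers use as a default; the function name, parameters, and coefficients are assumptions for illustration, not any renderer's actual API.

```python
# Sketch: Lambert diffuse + Blinn-Phong specular (half-vector form).
# Names and coefficient values are illustrative assumptions.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert_blinn_phong(normal, light_dir, view_dir,
                        kd=0.7, ks=0.3, shininess=64.0):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = kd * max(np.dot(n, l), 0.0)       # Lambert term
    h = normalize(l + v)                        # half-vector between light and view
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

print(lambert_blinn_phong(np.array([0.0, 0.0, 1.0]),
                          np.array([1.0, 0.0, 1.0]),
                          np.array([-1.0, 0.0, 1.0])))
```

The difference from plain Phong is only in the specular lobe: the half-vector dot(n, h) replaces the reflected-ray dot(r, v), which is cheaper to compute and gives a slightly broader highlight for the same exponent.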

Understanding Oren-Nayar reflectance model

In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. (Wikipedia)
My understanding :
The law of reflection tells us that the angle of incidence is equal to the angle of reflection. So on a perfectly smooth surface we get specular reflection. On a rough surface, an incident beam is scattered in a number of directions (each ray obeying the law of reflection). To model this, a microfacet model is used, which defines grooves (facets) on the surface. Now what does a Lambertian facet mean? Are we assuming grooves on the already present grooves?
The Lambertian BRDF is a theoretical model describing perfect diffuse reflection.
Microfacet BRDFs, on the other hand, are based on geometrical optics: a surface is assumed to be covered with microfacets described by statistical distributions.
The Oren-Nayar BRDF is an improvement over the theoretical Lambertian model: it applies a distribution of purely diffuse microfacets. So to answer your question: a Lambertian microfacet is an entity in itself, not subject to further microfacet theory; it is not a "groove with grooves". It is a purely theoretical concept used to apply microfacet theory to matte surfaces. There are many different BRDF models, and only some of them follow geometrical optics the way microfacet models do; it is useful to keep in mind that microfacet theory is not universal. (A small sketch of the standard Oren-Nayar approximation follows below.)
For further reading, I recommend the overview of BRDF models by Montes and Ureña, their classification was very enlightening to me at least.
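For reference, here is a sketch of the widely used "qualitative" Oren-Nayar approximation, in which a single roughness parameter sigma (the standard deviation of the facet-slope distribution) modulates a Lambertian base; the function name and argument conventions are illustrative assumptions.

```python
# Sketch of the "fast" Oren-Nayar approximation: a Lambertian base modulated
# by a roughness-dependent term. Angles are measured in the local surface
# frame: theta from the normal, phi around it. Illustrative names only.
import math

def oren_nayar_brdf(theta_i, phi_i, theta_r, phi_r, albedo=1.0, sigma=0.3):
    """Approximate BRDF value; sigma is the facet-slope roughness in radians."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    cos_dphi = max(0.0, math.cos(phi_r - phi_i))
    return (albedo / math.pi) * (A + B * cos_dphi * math.sin(alpha) * math.tan(beta))

# sigma = 0 collapses A to 1 and B to 0, recovering the plain Lambertian albedo/pi.
print(oren_nayar_brdf(0.5, 0.0, 0.5, math.pi, sigma=0.0))
```

Setting sigma to zero removes the roughness modulation and recovers the plain Lambertian BRDF, which is the sanity check printed at the end.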

BRDFs in 3D Rendering

I'm reading this famous article and I can't seem to understand the BRDF concept, especially the bold part:
The surface's response to light is quantified by a function called the BRDF (Bidirectional Reflectance Distribution Function), which we will denote as f(l, v). Each direction (incoming and outgoing) can be parameterized with two numbers (e.g. polar coordinates), so the overall dimensionality of the BRDF is four.
To which directions is the author referring? Also, if this is 3D, then how can a direction be parameterized with two numbers and not three?
A BRDF describes the light reflectance characteristics of a surface. For every pair of incoming (l) and outgoing (v) directions, the BRDF tells you how much light will be reflected along v. Because each direction is a unit vector in the hemisphere above the reflection point, two angles (e.g. spherical coordinates in the surface's local frame) are enough to specify it, so the two directions together need four numbers. (The original answer illustrated this with a hemisphere diagram from anu.edu.au.)
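A minimal sketch of that four-number parameterization: each unit direction is converted to two spherical angles in the surface's local frame, and f(l, v) becomes a function of four scalars. The brdf() here is a made-up constant (Lambertian) stand-in, and all names are assumptions.

```python
# Each unit direction in the hemisphere above the surface can be written as
# two angles (theta, phi) in the surface's local frame, so f(l, v) depends on
# four numbers in total. The brdf() below is a made-up Lambertian stand-in.
import math

def to_spherical(d):
    """Unit direction (x, y, z), z along the surface normal -> (theta, phi)."""
    x, y, z = d
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle from the normal
    phi = math.atan2(y, x)                      # angle around the normal
    return theta, phi

def brdf(theta_l, phi_l, theta_v, phi_v, albedo=0.8):
    return albedo / math.pi                     # constant: perfect diffuse

l = (0.0, 0.7071, 0.7071)   # incoming light direction (unit length)
v = (0.7071, 0.0, 0.7071)   # outgoing view direction (unit length)
print(brdf(*to_spherical(l), *to_spherical(v)))
```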

Three.js: What Is The Exact Difference Between Lambert and Phong?

I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching them on a scene with one directional light source and 125 spheres, and I cannot see any difference whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all the information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" is also an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as @RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
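Here is a minimal numeric sketch of that difference on a single triangle, using barycentric weights: Gouraud shades the vertices and interpolates the resulting colors, Phong interpolates the normals and shades afterwards. The simple Lambert shading function and the data are illustrative assumptions.

```python
# Sketch: Gouraud vs Phong shading at one interior point of a triangle,
# using barycentric weights. Shading model and data are illustrative.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(normal, light_dir):
    return max(np.dot(normalize(normal), normalize(light_dir)), 0.0)

light = np.array([0.0, 0.0, 1.0])
vertex_normals = [np.array([0.6, 0.0, 0.8]),
                  np.array([-0.6, 0.0, 0.8]),
                  np.array([0.0, 0.6, 0.8])]
weights = np.array([1/3, 1/3, 1/3])            # barycentric coords of the point

# Gouraud: shade the vertices, then interpolate the resulting colors.
gouraud = sum(w * lambert(n, light) for w, n in zip(weights, vertex_normals))

# Phong: interpolate the normals, then shade with the interpolated normal.
phong = lambert(sum(w * n for w, n in zip(weights, vertex_normals)), light)

print(gouraud, phong)
```

With these values Gouraud gives 0.8 at the centroid, while Phong gives about 0.97, because the interpolated (and renormalized) normal points closer to the light than any of the vertex normals; a specular term exaggerates this effect further.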
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. (Gouraud shading; (generalized) Lambert illumination model)
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed at each fragment (pixel). (Phong shading; (generalized) Phong illumination model)
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js.r.66
In computer graphics, it is very common to confuse the Phong reflection model with Phong shading. The former is a local illumination model for points, like the Lambertian model, while the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here's a list of detailed articles on each of these topics.
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look to the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than the generated program code.

Graphics-Related Question: Mesh and Geometry

What's the difference between a mesh and geometry? Aren't they the same, i.e., a collection of vertices that form triangles?
A point is geometry, but it is not a mesh. A curve is geometry, but it is not a mesh. An iso-surface is geometry, but it is not... well, you get the point by now.
Meshes are geometry, not the other way around.
Geometry in the context of computing is far more limited than geometry as a branch of mathematics. There are only a few types of geometry typically used in computer graphics: sprites are used when rendering points (particles), line segments are used when rendering curves, and meshes are used when rendering surface-like geometry.
A mesh is typically a collection of polygons/geometric objects, for instance triangles, quads, or a mixture of various polygons. A mesh is simply a more complex shape.
From Wikipedia:
Geometry is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space.
IMO a mesh meets that criterion.
In the context implied by your question:
A mesh is a collection of polygons arranged in such a way that each polygon shares at least one vertex with another polygon in that collection. You can reach any polygon in a mesh from any other polygon in that mesh by traversing the edges and vertices that define those polygons.
Geometry refers to any object in space whose properties may be described according to the principles of the branch of mathematics known as geometry.
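As a small illustration of the mesh definition above, a mesh can be stored as a shared vertex list plus index triples, and the connectivity (which polygon you can walk to from which) can be recovered from shared edges; the data and names below are illustrative assumptions.

```python
# Sketch: a mesh as shared vertices + index triples, with edge-based adjacency.
# Data structures and names are illustrative.
from collections import defaultdict

vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # shared vertex list
triangles = [(0, 1, 2), (0, 2, 3)]                        # indices into vertices

# Two triangles are neighbours if they share an edge (an unordered vertex pair).
edge_to_tris = defaultdict(list)
for t, (a, b, c) in enumerate(triangles):
    for edge in ((a, b), (b, c), (c, a)):
        edge_to_tris[frozenset(edge)].append(t)

neighbours = defaultdict(set)
for tris in edge_to_tris.values():
    for t in tris:
        neighbours[t].update(u for u in tris if u != t)

print(dict(neighbours))   # {0: {1}, 1: {0}} -- the two triangles share edge (0, 2)
```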
The term "geometry" has different meanings mathematically and in rendering. In rendering it usually denotes what is static in a scene (walls, etc.). What is widely called a "mesh" is a group of geometrical objects (basically triangles) that describe or form an "object" in the scene - pretty much as envalid said, but usually a mesh forms a single object or entity in a scene. Very often that is how rendering engines use the term: the geometrical data of each scene element (object, entity) composes that element's mesh.
Although this is tagged in "graphics", I think the answer connects with the interpretation from computational physics. There, we usually think of the geometry as an abstraction of the system that is to be represented/simulated, while the mesh is an approximation of the geometry - a compromise we usually have to make to be able to represent the spatial domain within the finite memory of the machine.
You can think of them basically as regular or unstructured sets of points "sprayed" on a surface or within a volume in space.
To be able to do visualization/simulation, it is also necessary to determine the neighbors of each point - for example using Delaunay triangulation, which allows you to group sets of points into elements (for which you can solve algebraic versions of the equations describing your system); see the short sketch below.
In the context of surface representation in computer graphics, I think all major APIs (e.g. OpenGL) have functions which can display these primitives (which can be triangles as given by Delaunay, quads or maybe some other elements).
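A short sketch of that point-to-elements step, assuming SciPy and NumPy are available: scipy.spatial.Delaunay groups a scattered 2D point set into triangles and also reports each triangle's neighbors.

```python
# Sketch: turning scattered points into mesh elements via Delaunay triangulation.
# Assumes SciPy and NumPy are installed.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((20, 2))          # 20 points "sprayed" in the unit square

tri = Delaunay(points)
print(tri.simplices[:5])              # first few triangles, as indices into points
print(tri.neighbors[:5])              # neighbouring triangle indices (-1 = boundary)
```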
