How to light an object, Phong model - graphics

I am trying to figure out how to scale a color by the lighting illumination using the Phong model. For example, given I = Ka*Ax, where Ka is the ambient coefficient and Ax is the ambient lighting intensity (x can be r, g, or b), I want to apply that to a surface with a texture color of, say, (1, 0, 1). I tried multiplying the individual RGB values by the illumination, (r*Ka*Ar, g*Ka*Ag, b*Ka*Ab), but that can completely change the color, which is not what I want.

OK, I see a few things:
ambient and diffuse light should not be multiplied together; use addition instead
you do not use any normal vector or light direction/position (at least I can't find it anywhere)
also, you should use the glColor parameter (unless you do not use it at all)
Try this for each color channel (.r, .g, .b) and a directional light like the Sun:
pixel_color.r = clamp_to_1 // clamp to <0.0,1.0>
(
  texture_color.r * glColor.r // pixel color without lighting
  * ( // apply lighting
    diffuse.r * dot(glNormal.xyz * NormalMatrix, light_dir.xyz) // diffuse * dot product of the transformed normal and the to-light direction; if you also need distance, multiply by a further attenuation term...
    + ambient.r // ambient light is additive !!!
  )
);
PS.
NormalMatrix is ModelView with the origin set to [0.0,0.0,0.0] (no position shift).
If you also want to add reflectance, do not forget to compute the reflected vector ... it is different from the one used for diffuse light (unless the skybox is infinite in size). Also, a cubemap with an environment skybox helps a lot.
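For concreteness, here is a minimal C++ sketch of the formula above, assuming the normal and to-light vectors are already normalized and expressed in the same coordinate system (all names here are illustrative, not part of any API):

#include <algorithm>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

// dot product of two 3D vectors
float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// per channel: texture * (diffuse * max(0, N.L) + ambient), clamped to <0.0,1.0>
Color shade(const Color &tex, const Color &diffuse, const Color &ambient,
            const Vec3 &normal, const Vec3 &to_light)
{
    float d = std::max(0.0f, dot(normal, to_light)); // negative = facing away
    Color out;
    out.r = std::min(1.0f, tex.r * (diffuse.r * d + ambient.r));
    out.g = std::min(1.0f, tex.g * (diffuse.g * d + ambient.g));
    out.b = std::min(1.0f, tex.b * (diffuse.b * d + ambient.b));
    return out;
}

Note that the ambient term is added inside the parentheses, so a texel is never darker than texture * ambient; only the diffuse part scales with the angle to the light.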


Flat shading coordinate system

Can someone please direct me to a link that would help me solve such a question? Seeing as this is an exam question, I would like to attempt it first before asking for a solution.
Consider a triangular face of three vertices A(0,2,-1), B(1,0,1) and the origin O, and
the normal vectors at the vertices are nA=(0,1,0), nB=(1,0,0) and nO=(0,0,1),
respectively. The incident light is white and directional in the direction L=(1,2,2), its intensity is 1, the background ambient light intensity is 0.1, and the diffuse reflection coefficients for (red, green, blue) are (0.6, 0.7, 0.8). No specular light contribution needs to be considered.
a) Find the (red, green, blue) intensity values in the face using flat shading at the centre of the face.
Thanks
BeyelerStudios' comment tells you everything you need to know. But I feel you are a complete rookie in the field, so here is some more info:
definitions
Let's have a triangle face defined by its 3 vertices (v0,v1,v2) and normals (n0,n1,n2). Let the light source be directional with the to-light vector light. The light has ambient and directional parts with (r,g,b) colors: col_dir=(1.0,1.0,1.0) and col_amb=(0.1,0.1,0.1). The reflectance of the surface is col_face=(0.6,0.7,0.8). You want to get the pixel color for the center point of the triangle.
compute the normal at the point of interest
To handle an arbitrary point of interest you can use barycentric coordinates (as you are computing this on paper, that is the better approach in such a case).
But in your case the point is the center, so the normal is just the average of the 3 normals:
n=(n0+n1+n2)/3.0
If I remember correctly, in the case of an arbitrary point given in barycentric coordinates (u, v, w = 1-u-v) it would be like this:
n=u*n0 + v*n1 + w*n2
compute cos(angle) between the normal and the to-light vector
That is easy: use the dot product for this (while both vectors are unit size ... normalized):
cos(angle) = (n.x*light.x)+(n.y*light.y)+(n.z*light.z)
If your vectors are not normalized, you need to divide the result by their sizes:
cos(angle) = ( (n.x*light.x)+(n.y*light.y)+(n.z*light.z) ) / (|n|*|light|)
compute the color
That is also easy:
color = col_face * ( col_dir*cos(ang) + col_amb )
Do not forget to handle negative cos(ang). The behavior depends on your implementation: sometimes max(0.0, cos(ang)) is used, other times |cos(ang)|.
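For the exam numbers above, here is a small C++ sketch of the whole computation, treating L = (1,2,2) as the to-light vector (if it is instead the direction the light travels, negate it first, otherwise the dot product comes out negative):

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double len(const Vec3 &a) { return std::sqrt(dot(a, a)); }

int main()
{
    // vertex normals and light direction from the question
    Vec3 n0{0,1,0}, n1{1,0,0}, n2{0,0,1};
    Vec3 light{1,2,2};

    // center of the face: the normal is the average of the vertex normals
    Vec3 n{(n0.x+n1.x+n2.x)/3, (n0.y+n1.y+n2.y)/3, (n0.z+n1.z+n2.z)/3};

    // cos(angle), dividing by the sizes since neither vector is normalized
    double c = dot(n, light) / (len(n) * len(light)); // ~0.962

    // color = col_face * (col_dir*cos(ang) + col_amb), col_dir = 1.0, col_amb = 0.1
    double col_face[3] = {0.6, 0.7, 0.8};
    for (int i = 0; i < 3; i++)
        std::printf("%f\n", col_face[i] * (1.0*c + 0.1)); // ~0.637 0.744 0.850
    return 0;
}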
[Notes]
If you are interested in how rendering engines handle the interpolations, see
how to rasterize rotated rectangle (in 2d by setpixel)

How to apply flat shading to RGB colors?

I am creating a small 3D rendering application. I decided to use simple flat shading for my triangles: just calculate the cosine of the angle between the face normal and the light direction and scale the light intensity by it.
But I'm not sure about how exactly should I apply that shading coefficient to my RGB colors.
For example, imagine some surface at a 60 degree angle to the light source: cos(60 degrees) = 0.5, so I should retain only half of the energy of the emitted light.
I could simply scale the RGB values by that coefficient, as in the following pseudocode:
double shade = cos(angle(normal, lightDir))
Color out = new Color(in.r * shade, in.g * shade, in.b * shade)
But the resulting colors get too dark even at smaller angles. After some thought, that seems logical: our eyes perceive the logarithm of light energy (that is why we can see both in bright daylight and at night), and RGB values already represent that log scale.
My next attempt was to use that linear/logarithmic insight. Theoretically:
output energy = lg(exp(input energy) * shade)
That can be simplified to:
output energy = lg(exp(input energy)) + lg(shade)
output energy = input energy + lg(shade)
So such shading would just amount to adding the logarithm of the shade coefficient (which is negative) to the RGB values:
double shade = lg(cos(angle(normal, lightDir)))
Color out = new Color(in.r + shade, in.g + shade, in.b + shade)
That seems to work, but is it correct? How is it done in real rendering pipelines?
The RGB color vector is multiplied by the shade coefficient
The cosine value, as you initially assumed. The logarithmic scaling is done by the target imaging device and the human eye.
If your colors get too dark, then the probable cause is one of these:
the cosine or angle value gets truncated to an integer
or your pipeline does not have linear-scale output (some gamma corrections can do that)
or you have a bug somewhere
or your angle and cosine use different units (radians/degrees)
you forgot to add the ambient light coefficient to the shade value
your vectors are opposite or wrong (check them visually; see the first link for how)
your vectors are not in the same coordinate system (the light is usually in the GCS and the normal vectors in the model LCS, so you need to convert at least one of them to the coordinate system of the other)
The cos(angle) itself is usually not computed with the cosine function
As you have all the data as vectors, just use the dot product:
double shade = dot(normal, lightDir) / (|normal| * |lightDir|)
If the vectors are unit size then you can discard the division by the sizes ... that is why normal and light vectors are normalized ...
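As a minimal sketch of that (plain C++ structs, no particular math library assumed):

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double len(const Vec3 &a) { return std::sqrt(dot(a, a)); }

// cos(angle) via the dot product; negative means the surface faces away
double shade(const Vec3 &normal, const Vec3 &lightDir)
{
    double s = dot(normal, lightDir) / (len(normal) * len(lightDir));
    return std::max(0.0, s);
}

// the final scaling would then be, e.g.: out = in * min(1.0, shade + ambient)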
Some related questions and notes
Normal shading - this may clear up a thing or two (for beginners)
Normal/Bump mapping - see the fragment shader and search for the dot product
mirrored light - see this for a slightly more complex lighting scheme
GCS/LCS means global/local coordinate system

Ray tracing - color mixing

I am writing a ray tracer. So far, I have diffuse and specular lighting, and I am planning to implement reflection and refraction, too.
So far I have used white lights, where I calculated the surface color like this: surface_color * light_intensity, divided by the proper distance^2 value, since I am using point light sources. For specular reflection, it's light_color * light_intensity. AFAIK, specular reflection doesn't change the light's color, so this should work with differently colored light sources, too.
How would I calculate the color reflected from a diffuse surface when the light source is not white? For example, a (0.7, 0.2, 0) light hits a (0.5, 0.5, 0.5) surface. Also, does distance factor in differently in this case?
Also, how would I add light contributions at a single point from different color light sources? For example, (1, 0.5, 1) surface is lit by (0.5, 0.5, 1) and (1, 0.7, 0.2) lights. Do I simply calculate both (distances included) and add them together?
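For what it is worth, the standard RGB treatment of both cases you describe is component-wise: multiply the light color by the surface color channel by channel, and sum the contributions of the individual lights. A sketch under those assumptions (the geometry reduced to a precomputed N.L and distance):

struct Color { double r, g, b; };

// colored light on a diffuse surface: component-wise product,
// scaled by N.L and the inverse-square distance falloff
Color diffuse(const Color &surface, const Color &light,
              double n_dot_l, double distance)
{
    double k = n_dot_l / (distance * distance);
    return { surface.r * light.r * k,
             surface.g * light.g * k,
             surface.b * light.b * k };
}

// several lights: evaluate each one (distance included) and add
Color add(const Color &a, const Color &b)
{
    return { a.r + b.r, a.g + b.g, a.b + b.b };
}

With your example numbers, the (0.7, 0.2, 0) light on the (0.5, 0.5, 0.5) surface yields (0.35, 0.1, 0) before the geometric terms; distance factors in exactly as in the white-light case.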
I've found that RGB is a poor color space to do lighting calculations in because you have to consider a bunch of special cases to get anything that looks realistic or behaves the way you would expect it to.
With that said, it may be conceptually easier to do your lighting calculations in HSL rather than RGB. Depending on the language and toolkit you're using, this should be part of the standard library/distribution or an available toolkit.
A more physically accurate alternative would be to implement spectral rendering, where instead of your tracing functions returning RGB values, they return a sampled spectral power distribution. SPDs are more accurate and easier to work with than keeping track of a whole bunch of RGB blending special cases, at the cost of a slight but noticeable performance hit (especially if left unoptimized). Specular highlights and colored lights are a natural consequence of this model and don't require any special handling in the general case.
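A minimal sketch of that idea, with an SPD stored as a fixed number of wavelength samples (the sample count and layout here are placeholders, not a recommendation):

#include <array>

constexpr int N = 16; // number of wavelength samples (illustrative)
using Spectrum = std::array<double, N>;

// light reflecting off a diffuse surface: sample-wise product
Spectrum reflect(const Spectrum &light, const Spectrum &reflectance)
{
    Spectrum out{};
    for (int i = 0; i < N; i++) out[i] = light[i] * reflectance[i];
    return out;
}

// several lights simply add, sample by sample
Spectrum add(const Spectrum &a, const Spectrum &b)
{
    Spectrum out{};
    for (int i = 0; i < N; i++) out[i] = a[i] + b[i];
    return out;
}

// at the end of tracing, the SPD is converted to XYZ with the CIE
// color-matching functions and then to the display's RGB (omitted here)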

Three.js: What Is The Exact Difference Between Lambert and Phong?

I understand the difference between Lambert and Phong in general computer graphics. I also understand how we can change and create our own materials using three.js. But I cannot work out the difference between MeshLambertMaterial and MeshPhongMaterial in their default states.
I have tried switching them in a scene with one directional light source and 125 spheres, and I cannot see any differences whatsoever. Three.js is being used in a chapter of my book, so I need to make sure all the information is accurate and precise.
Thanks,
Shane
Shane, it's not your fault that you're confused.
Lambert is an illumination model (with a physical basis) for the light reflected off a surface, expressed in terms of the incoming illumination's direction with respect to the surface normal at the point of incidence.
Phong is a more nuanced shading model (albeit a more hacky one) which says that light is composed of ambient + diffuse + specular components. It treats the ambient component as constant everywhere (hack!), the diffuse component using the Lambertian model above, and the specular component using a power-law falloff (which is a clever hack, roughly approximating actual BRDFs).
The word "Phong" also names an interpolation method (when used in the context of modern triangle-based rendering pipelines). When computing the illumination at a pixel in the interior of a triangle, you have two choices:
Gouraud shading: Compute the color at the three vertices and interpolate in the interior, using barycentric coordinates, or
Phong shading: Using the normal at the three vertices, interpolate the normal in the interior and compute the shading using this interpolated normal at each pixel.
This is why (as @RayToal pointed out), if your specular "highlight" falls in the interior of a triangle, none of the vertices will be bright, but Phong shading will interpolate the normal and there will be a bright spot in the interior of your rendered triangle.
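A sketch of the two strategies for one interior point with barycentric weights (u, v, w); the illumination model is reduced here to a plain Lambert term against a fixed, hypothetical light direction:

#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// stand-in for the illumination model (normals assumed unit length)
double lightAt(const Vec3 &n)
{
    const Vec3 to_light{0.577, 0.577, 0.577}; // hypothetical light direction
    return std::max(0.0, dot(n, to_light));
}

// Gouraud: light the three vertices once, interpolate the results
double gouraud(const Vec3 n[3], double u, double v, double w)
{
    return u*lightAt(n[0]) + v*lightAt(n[1]) + w*lightAt(n[2]);
}

// Phong shading: interpolate the normal, then light the interior point
double phong(const Vec3 n[3], double u, double v, double w)
{
    Vec3 ni{ u*n[0].x + v*n[1].x + w*n[2].x,
             u*n[0].y + v*n[1].y + w*n[2].y,
             u*n[0].z + v*n[1].z + w*n[2].z };
    double l = std::sqrt(dot(ni, ni)); // re-normalize after interpolation
    return lightAt({ni.x/l, ni.y/l, ni.z/l});
}

With a specular power term inside lightAt, phong() can produce a highlight in the triangle's interior even when all three per-vertex results are dark, which gouraud() by construction cannot.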
I am assuming you want the exact difference between MeshLambertMaterial and MeshPhongMaterial as implemented in three.js.
You have to differentiate between the shading model and the illumination model. Three.js does not implement 'pure' Phong or Lambert models.
For MeshLambertMaterial, the illumination calculation is performed at each vertex, and the resulting color is interpolated across the face of the polygon. ( Gouraud shading; (generalized) Lambert illumination model )
For MeshPhongMaterial, vertex normals are interpolated across the surface of the polygon, and the illumination calculation is performed at each texel. ( Phong shading; (generalized) Phong illumination model )
You will see a clear difference when you have a pointLight that is close to a face -- especially if the light's attenuation distance is less than the distance to the face's vertices.
For both materials, in the case of FlatShading, the face normal replaces each vertex normal.
three.js.r.66
In computer graphics, it is very common to confuse the Phong reflection model with Phong shading. While the former is a model of the local illumination of points, like the Lambertian model, the latter is an interpolation method, like Gouraud shading. In case you find it hard to differentiate between them, here is a list of detailed articles on each of these topics.
http://en.wikipedia.org/wiki/List_of_common_shading_algorithms
If you know a little GLSL, I think the best thing for you to do is to look at the vertex/fragment shaders generated in both cases and look for the differences. You can use http://benvanik.github.com/WebGL-Inspector/ to get the code of the programs, or put a console.log() at the right place in the three.js sources (look for buildProgram; you should output prefix_fragment + fragmentShader and prefix_vertex + vertexShader to see the program code).
Also, you can have a look at the building blocks used to create both shaders:
Lambert: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2036
Phong: https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLShaders.js#L2157
It may be more readable than looking at the generated program code.

The right model for shading in Ray tracing

I am wondering about the most accurate way to calculate the shading generated by several different light sources and ambient light.
Ambient light is light that exists in the entire 'world' with the same intensity and no particular direction, and diffuse lighting is the lighting that occurs due to direct illumination from a point light source.
Given that Ka is the coefficient of the surface's ambient reflectivity, Ia is the intensity of the ambient light, Kd is the surface's diffuse reflectivity, Ip1 is the intensity of the first point light source (and so on), N is the surface normal, and L1 is the light direction of the first source (and so on).
According to my reference material the intensity of the color at the spot should be:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))
where '.' is the dot product and '*' is ordinary multiplication.
But according to my understanding the real light intensity should do some sort of average between the light sources and not just add them up, so that if there are only two light sources the equation should look like:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/2
and if there are 3 light sources, but the third is blocked and doesn't light the surface directly, then:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/3
(so that a place where all 3 lights contribute would be lit brighter).
Am I right at my assumption?
Well, no, light shouldn't be averaged. Think about it: if you have just one powerful light source and you add another, very faint light, would the color of the object be diminished? For example, say the powerful light has intensity 10; the color (presuming the light direction is parallel to the normal, and no ambient light, for simplicity's sake) would be 10. Then after you add the second faint light, with say intensity 0.1, the color would be (10 + 0.1) / 2, which is 5.05. So adding more light would make the object seem darker. That doesn't make sense.
In the real world, light adds. It should in your ray tracer, too.
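In code the accumulation is just a sum over the lights, with no division by their count (a sketch; Light is a hypothetical struct holding a unit to-light vector and an intensity):

#include <algorithm>
#include <vector>

struct Vec3 { double x, y, z; };
struct Light { Vec3 to_light; double intensity; }; // hypothetical

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// I = Ka*Ia + Kd * sum_i( Ip_i * max(0, N.L_i) )  -- no averaging
double shade(double Ka, double Ia, double Kd,
             const Vec3 &N, const std::vector<Light> &lights)
{
    double sum = 0.0;
    for (const Light &l : lights)
        sum += l.intensity * std::max(0.0, dot(N, l.to_light));
    return Ka * Ia + Kd * sum;
}

A blocked light simply contributes nothing to the sum; the other terms are unaffected.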
Luminance is not a linear function of light intensity. In other words, two identical light sources aimed at one spot are not perceived as twice as "bright" as one light. ("Brightness" is an ambiguous term; luminance is a better term that means radiance weighted by human vision.)
What you can do as an approximation, knowing the linear intensities of the various pixels, is to correct the image for viewing on your monitor; this is called gamma correction.
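A common approximation, assuming linear intensities in <0.0,1.0> and a display gamma of 2.2:

#include <cmath>

// encode a linear intensity for display (simple gamma-2.2 approximation)
double gamma_correct(double linear)
{
    return std::pow(linear, 1.0 / 2.2);
}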
