Light intensity equation for shading - graphics

This is the information I have for computing the light intensity (color) for Phong shading:
I have a directional light coming from [1,1,1] in eye coordinates.
I have a normal (nx, ny, nz) for each vertex of the object, not in eye coordinates.
I have [0.1,0.1,0.2] as the color for ambient and diffuse.
I have [0.8,0.8,0.8] as the color for specular.
How do I compute the light intensity (color) using this information alone, without using OpenGL (it's for an assignment)?

There's a good description of Phong Shading here:
http://en.wikipedia.org/wiki/Phong_shading
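Since the link alone may not be enough for an assignment, here is a minimal sketch in Python of the three Phong terms using the values from the question. The shininess exponent, the eye-space position of the shaded point, and the example normal are assumptions (the question does not give them), and the vertex normals would first have to be transformed into eye coordinates.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_color(normal_eye, position_eye,
                light_dir=(1.0, 1.0, 1.0),     # directional light, eye coordinates
                amb_dif=(0.1, 0.1, 0.2),       # ambient and diffuse color
                spec=(0.8, 0.8, 0.8),          # specular color
                shininess=32.0):               # assumed; not given in the question
    n = normalize(normal_eye)                  # normal must already be in eye coordinates
    l = normalize(light_dir)
    v = normalize([-c for c in position_eye])  # direction to the viewer (eye at origin)
    ndl = dot(n, l)
    diffuse = max(0.0, ndl)
    # reflect the light direction about the normal: r = 2(n.l)n - l
    r = [2.0 * ndl * n[i] - l[i] for i in range(3)]
    specular = max(0.0, dot(r, v)) ** shininess if diffuse > 0.0 else 0.0
    return [min(1.0, amb_dif[i] + amb_dif[i] * diffuse + spec[i] * specular)
            for i in range(3)]

print(phong_color(normal_eye=(0.0, 0.0, 1.0), position_eye=(0.0, 0.0, -2.0)))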

Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model, and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly to do with the limited range of brightness values available with 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I recently developed an interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this (a rough code sketch follows below):
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
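For reference, here is a rough sketch in Python of the five steps listed above, for a single sphere and a single point light; the image size, the scene values, and all the names are made up for illustration.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None (direction is unit length)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

WIDTH, HEIGHT = 64, 48
eye = [0.0, 0.0, 0.0]
sphere_center, sphere_radius = [0.0, 0.0, -4.0], 1.0
sphere_color = [0.2, 0.4, 0.8]
light_pos, light_color = [5.0, 5.0, 0.0], [1.0, 1.0, 1.0]

image = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # step 1: ray from the eye through the pixel
        px = (2.0 * (x + 0.5) / WIDTH - 1.0) * WIDTH / HEIGHT
        py = 1.0 - 2.0 * (y + 0.5) / HEIGHT
        d = normalize([px, py, -1.0])
        # steps 2 and 3: collision test, record the object color at the hit point
        t = hit_sphere(eye, d, sphere_center, sphere_radius)
        if t is None:
            row.append([0.0, 0.0, 0.0])
            continue
        hit = [eye[i] + t * d[i] for i in range(3)]
        # step 4: vector from the hit point towards the light
        to_light = normalize(sub(light_pos, hit))   # computed but unused by this bare model
        # step 5: light color times object color (no N.L or distance term, hence the flat look)
        row.append([light_color[i] * sphere_color[i] for i in range(3)])
    image.append(row)

print(image[HEIGHT // 2][WIDTH // 2])   # center pixel: the sphere's color times the light color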
Model Extensions
My interest was in trying to extend this model by including distance in the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
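As a tiny sketch (in Python, with made-up names), the falloff described above is just a division by the squared distance:

def received_intensity(source_intensity, d):
    # doubling d divides the received intensity by four (inverse square law)
    return source_intensity / (d * d)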
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which make a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights at a distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is at distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
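Here is a short sketch in Python of where that leaves the inverse-square factor (the function name and parameters are made up): it is applied once, on the leg from the point light to the surface, and nothing extra is attenuated on the leg from the surface back to the camera.

import math

def direct_light(point, normal, light_pos, light_intensity):
    """Diffuse contribution of a point light at the hit point (normal is unit length)."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    d2 = sum(c * c for c in to_light)                       # squared distance to the light
    L = [c / math.sqrt(d2) for c in to_light]
    ndl = max(0.0, sum(n * l for n, l in zip(normal, L)))
    return light_intensity * ndl / d2                       # attenuate only the light -> surface leg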

Phong illumination produces black

I guess I am somehow stuck on a basic question where I just don't get the correct answer.
The Phong illumination model contains an ambient, diffuse and specular part.
Each part contains a multiplication of the color of the light (ambient or source) with a coefficient (ambient, diffuse, specular): I * coe
The light and the coefficients consist of the r,g,b color channels:
I_r * coe_r
I_g * coe_g
I_b * coe_b
Assuming the light is green (0,1,0) and the coefficient (it doesn't matter which one) is blue (0,0,1), the result would be black (0,0,0).
How does this make any sense?
A blue object only reflects blue light. If you light it using white light, which contains all colors, it reflects only the blue light, so that is why it appears blue to the viewer. If you shine a light that has no blue component on a blue object, no light will be reflected.
In real life, lights and pigments are never "pure", and an object will not appear completely black in these situations. However, in the world of computer graphics, this can happen easily.
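A tiny sketch in Python of the per-channel multiplication described above: a pure green light on a pure blue reflector gives black, while slightly impure colors still leave a little reflected light.

def reflect_color(light, coeff):
    return [l * c for l, c in zip(light, coeff)]

print(reflect_color((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))   # [0.0, 0.0, 0.0] - black
print(reflect_color((0.1, 0.9, 0.1), (0.1, 0.1, 0.9)))   # dim, but not black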

Flat shading coordinate system

Can someone please direct me to a link that would help me solve such a question? Seeing as this is an exam question, I would like to attempt it first before asking for a solution.
Consider a triangular face of three vertices A(0,2,-1), B(1,0,1) and the origin O, and
the normal vectors at the vertices are nA=(0,1,0), nB=(1,0,0) and nO=(0,0,1),
respectively. The incident light is white and directional in the direction of L=(1,2,2), its
intensity is 1, the background ambient light intensity is 0.1, and the diffuse reflection
coefficients for (red, green, blue) are (0.6,0.7,0.8). No specular light contribution
needs to be considered.
a) Find the (red, green, blue) intensity values in the face using flat shading at the centre of the face.
Thanks
BeyelerStudios' comment tells you everything you need to know. But I feel you are a complete rookie in the field, so here is some more info:
definitions
Let's have a triangle face defined by its 3 vertices (v0,v1,v2) and normals (n0,n1,n2). Let the light source be directional with a to-light vector light. The light has ambient and directional parts with (r,g,b) colors: col_dir=(1.0,1.0,1.0) and col_amb=(0.1,0.1,0.1). The reflectance of the surface is col_face=(0.6,0.7,0.8). You want to get the pixel color for the center point of the triangle.
compute the normal at the point of interest
To map an arbitrary point of interest you can use barycentric coordinates (as you are computing this on paper, that is better in such a case).
But in your case the point is the center, so the normal is just the average of the 3 normals (renormalize it afterwards so it stays unit length):
n=(n0+n1+n2)/3.0
If I remember correctly, in the case of an arbitrary point given in barycentric coordinates (u,v,w=1-u-v) it would be like this:
n=u*n0 + v*n1 + w*n2
compute cos(angle) between the normal and the to-light vector
That is easy, just use the dot product for this (while both vectors are unit in size ... normalized):
cos(angle) = (n.x*light.x)+(n.y*light.y)+(n.z*light.z)
If your vectors are not normalized you need to divide the result by their sizes:
cos(angle) = ( (n.x*light.x)+(n.y*light.y)+(n.z*light.z) ) / (|n|*|light|)
compute the color
That is also easy:
color = col_face * ( col_dir*cos(ang) + col_amb )
Do not forget to handle negative cos(ang). The behavior depends on your implementation; sometimes max(0.0,cos(ang)) is used, other times |cos(ang)|.
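As a check, here is a small sketch in Python that runs the steps above on the exam's numbers, assuming L=(1,2,2) is the to-light vector and renormalizing the averaged normal before taking the dot product:

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

nA, nB, nO = (0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
col_face = (0.6, 0.7, 0.8)       # diffuse reflection coefficients
col_dir = (1.0, 1.0, 1.0)        # white directional light, intensity 1
col_amb = (0.1, 0.1, 0.1)        # ambient intensity 0.1

# normal at the center = average of the vertex normals (then renormalized)
n = normalize([(a + b + o) / 3.0 for a, b, o in zip(nA, nB, nO)])
light = normalize((1.0, 2.0, 2.0))

cos_ang = max(0.0, sum(ni * li for ni, li in zip(n, light)))
color = [cf * (cd * cos_ang + ca)
         for cf, cd, ca in zip(col_face, col_dir, col_amb)]
print(color)    # roughly [0.64, 0.74, 0.85]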
[Notes]
If you are interested in how rendering engines handle the interpolations, see
how to rasterize rotated rectangle (in 2d by setpixel)

How to light an object, Phong model

I am trying to figure out how to scale a color by the lighting illumination using the Phong model. For example, given I = Ka*Ax, where Ka is the ambient coefficient and Ax is the ambient lighting intensity (where x can be r, g or b), I want to apply that to a surface with a texture color of (1,0,1), for example. I tried multiplying the individual rgb values by the illumination, (r*Ka*Ar, g*Ka*Ag, b*Ka*Ab), but alas, it can completely change the color, which is not what I want.
OK, I see a few things:
ambient and diffuse lights should not be multiplied together, use addition instead
you do not use any normal vector or light direction/position (at least I can't find it anywhere)
also you should use the glColor parameter (unless you do not use it at all)
Try this for each color channel (.r,.g,.b) and a directional light like the Sun:
pixel_color.r=clamp_to_1 // clamp to <0.0,1.0>
(
texture_color.r*glColor.r // pixel color without lighting
*( // apply lighting
diffuse.r*dot(NormalMatrix*glNormal.xyz,light_dir.xyz) // diffuse * dot product of the (transformed) normal and the to-light direction. If you also need to consider distance, just multiply by a further attenuation term...
+ambient.r // ambient light is additive !!!
)
);
PS.
NormalMatrix is ModelView with origin set to [0.0,0.0,0.0] (no position shift)
If you also want to add reflections, then do not forget to compute the reflection vector ... it is different from the one used for diffuse light (unless the skybox is infinite in size). Also a cubemap with an environment skybox helps a lot.
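For completeness, here is a minimal non-OpenGL sketch in Python of the same idea (the values in the example call are made up): the texture/base color is scaled per channel by (diffuse * N.L + ambient) and then clamped.

def lit_color(texture_color, normal, light_dir, diffuse, ambient):
    # normal and light_dir are unit-length; N.L is clamped to zero for back-facing light
    ndl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [min(1.0, t * (d * ndl + a))
            for t, d, a in zip(texture_color, diffuse, ambient)]

print(lit_color((1.0, 0.0, 1.0),                   # magenta texel, as in the question
                (0.0, 0.0, 1.0), (0.0, 0.0, 1.0),  # normal facing straight at the light
                (0.8, 0.8, 0.8), (0.2, 0.2, 0.2)))

With a neutral gray light the hue of the texture is preserved; a tinted light will shift it, which is physically expected.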

The right model for shading in ray tracing

I am wondering about the most accurate way to calculate the shadow generated from several different light sources and ambient light.
Ambient light is light that exists in the entire 'world' with the same intensity and no particular direction, and diffuse lighting is the lighting that occurs due to direct lighting from a point light source.
Given that Ka is the coefficient for the surface ambient reflectivity, Ia is the intensity of the ambient light, Kd is the surface diffuse reflectivity, Ip1 is the intensity of the first point light source (and so on), N is the surface normal, and L1 is the light direction (of the first source, accordingly).
According to my reference material the intensity of the color at the spot should be:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))
where '.' is the dot product.
But according to my understanding the real light intensity should do some sort of average between the light sources and not just add them up, so that if there are only two light sources the equation should look like:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))/2
and if there are 3 light sources, but the third is blocked and doesn't light the surface directly then:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))/3
(so that if there is a place where all 3 lights contribute, it would be lit brighter).
Am I right at my assumption?
Well, no, light shouldn't be averaged. Think about it. If you have just one powerful light source, and you add another, very faint light, would the color of the object be diminished? For example, say the powerful light has intensity 10; the color (presuming the light direction is parallel to the normal, and no ambient light, for simplicity's sake) would be 10. Then after you add the second faint light, with say intensity 0.1, the color would be (10 + 0.1) / 2, which is 5.05. So adding more light would make the object seem darker. That doesn't make sense.
In the real world, light adds. It should in your ray tracer, too.
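A small sketch in Python of the additive version (names are made up): the contribution of each light is summed, and only the final value is clamped (or, better, tone-mapped) for display.

def shade(Ka, Ia, Kd, lights, N):
    """lights is a list of (intensity, unit to-light vector); N is the unit surface normal."""
    total = Ka * Ia                                        # ambient term
    for Ip, L in lights:
        ndl = max(0.0, sum(n * l for n, l in zip(N, L)))
        total += Kd * Ip * ndl                             # each light adds its own contribution
    return min(1.0, total)                                 # clamp only at the end, for display

# adding a faint second light can only brighten the result, never darken it
print(shade(0.1, 1.0, 0.7,
            [(10.0, (0.0, 0.0, 1.0)), (0.1, (0.0, 0.0, 1.0))],
            (0.0, 0.0, 1.0)))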
Perceived brightness is not a linear function of light intensity. In other words, two identical light sources aimed at one spot are not perceived as twice as "bright" as one light. (Brightness is an ambiguous term; luminance is a better-defined term meaning radiance weighted by human vision, but even luminance is not perceived linearly.)
What you can do, as an approximation, to correct the image for viewing on your monitor, given the computed intensities of the various pixels, is called gamma correction.
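A rough sketch in Python of simple gamma correction from a linear intensity in [0,1] to an 8-bit pixel value, using the common display gamma of 2.2:

def to_pixel(linear, gamma=2.2):
    linear = max(0.0, min(1.0, linear))            # clamp to [0, 1] first
    return round(255.0 * linear ** (1.0 / gamma))

print(to_pixel(0.01), to_pixel(0.5), to_pixel(1.0))   # 31 186 255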
