Can someone please direct me to a link that would help me solve a question like the one below? Since this is an exam question, I would like to attempt it myself before asking for a full solution.
Consider a triangular face of three vertices A(0,2,-1), B(1,0,1) and the origin O, and
the normal vectors at the vertices are nA=(0,1,0), nB=(1,0,0) and nO=(0,0,1),
respectively. The incident light is white and directional in the direction of L=(1,2,2) with
intensity 1, the background ambient light intensity is 0.1, and the diffuse reflection
coefficients for (red, green, blue) are (0.6, 0.7, 0.8). No specular light contribution
needs to be considered.
a) Find the (red, green, blue) intensity values at the centre of the face using flat shading.
Thanks
BeyelerStudios' comment tells you everything you need to know, but I feel you are a complete rookie in the field, so here is some more info:
definitions
Let's have a triangle face defined by its 3 vertices (v0,v1,v2) and their normals (n0,n1,n2). Let the light source be directional, with the to-light vector light. The light has ambient and directional parts with (r,g,b) colors: col_dir=(1.0,1.0,1.0) and col_amb=(0.1,0.1,0.1). The reflectance of the surface is col_face=(0.6,0.7,0.8). You want the pixel color at the center point of the triangle.
compute the normal at the point of interest
For an arbitrary point of interest you can use barycentric coordinates (the better option when computing this on paper).
But in your case the point is the center, so the normal is just the average of the 3 normals:
n=(n0+n1+n2)/3.0
If I remember correctly, for an arbitrary point given in barycentric coordinates (u, v, w = 1-u-v) it would be:
n=u*n0 + v*n1 + w*n2
compute cos(angle) between the normal and the to-light vector
That is easy: use the dot product (assuming both vectors are unit size, i.e. normalized):
cos(angle) = (n.x*light.x)+(n.y*light.y)+(n.z*light.z)
If your vectors are not normalized, you need to divide the result by their sizes:
cos(angle) = ( (n.x*light.x)+(n.y*light.y)+(n.z*light.z) ) / (|n|*|light|)
compute the color
That is also easy:
color = col_face * ( col_dir*cos(ang) + col_amb )
Do not forget to handle negative cos(ang). The behavior depends on your implementation: sometimes max(0.0, cos(ang)) is used, other times |cos(ang)|.
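To make the steps concrete, here is a minimal sketch in Java for the numbers in the question above. It assumes L=(1,2,2) is the to-light vector, uses the max(0, cos(ang)) convention, and treats the white light as a single scalar per term; it is an illustration, not the only way to set it up.

// Minimal sketch for the triangle in the question: center normal as the average
// of the vertex normals, white directional light of intensity 1, ambient 0.1,
// reflectance (0.6, 0.7, 0.8). max(0, cos) clamping is assumed.
public class FlatShadeCenter {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[]{v[0]/len, v[1]/len, v[2]/len};
    }
    public static void main(String[] args) {
        double[] n0 = {0, 1, 0}, n1 = {1, 0, 0}, n2 = {0, 0, 1};   // vertex normals
        double[] light = normalize(new double[]{1, 2, 2});         // to-light vector (assumed)
        double[] colFace = {0.6, 0.7, 0.8};                        // diffuse reflectance
        double colDir = 1.0, colAmb = 0.1;                         // white light, so one scalar per term

        // center of the face: barycentric (1/3, 1/3, 1/3), i.e. the average of the normals
        double[] n = normalize(new double[]{
            (n0[0] + n1[0] + n2[0]) / 3.0,
            (n0[1] + n1[1] + n2[1]) / 3.0,
            (n0[2] + n1[2] + n2[2]) / 3.0});

        double cosAng = Math.max(0.0, n[0]*light[0] + n[1]*light[1] + n[2]*light[2]);
        double[] rgb = new double[3];
        for (int k = 0; k < 3; k++) {
            rgb[k] = Math.min(1.0, colFace[k] * (colDir * cosAng + colAmb));
        }
        System.out.printf("cos(ang) = %.3f, rgb = (%.3f, %.3f, %.3f)%n",
                          cosAng, rgb[0], rgb[1], rgb[2]);
    }
}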
[Notes]
If you are interested in how rendering engines handle the interpolations, see:
how to rasterize rotated rectangle (in 2d by setpixel)
Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly due to the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this (a rough sketch in code follows the list):
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If there is a collision, record the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
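Here is a rough, self-contained sketch of that loop. The single sphere, the fixed point light and the z = -1 image plane are illustrative assumptions, not part of any particular engine.

// Minimal sketch of the loop described in the list above.
public class TinyTrace {
    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] norm(double[] a) { double l = Math.sqrt(dot(a, a)); return new double[]{a[0]/l, a[1]/l, a[2]/l}; }

    // ray/sphere intersection: returns the nearest positive hit distance, or -1 if there is no hit
    static double hitSphere(double[] origin, double[] dir, double[] center, double radius) {
        double[] oc = sub(origin, center);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * c;               // direction is unit length, so a = 1
        if (disc < 0) return -1;
        double t = (-b - Math.sqrt(disc)) / 2.0;
        return t > 0 ? t : -1;
    }

    public static void main(String[] args) {
        int w = 24, h = 12;
        double[] eye = {0, 0, 0}, lightPos = {5, 5, -2};
        double[] sphereC = {0, 0, -5};
        double sphereR = 1.0;
        double[] objColor = {0.6, 0.7, 0.8}, lightColor = {1, 1, 1};
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // 1. ray from the eye through the pixel on a z = -1 image plane
                double[] dir = norm(new double[]{(x + 0.5) / w - 0.5, 0.5 - (y + 0.5) / h, -1});
                // 2. collision test against the scene (here: one sphere)
                double t = hitSphere(eye, dir, sphereC, sphereR);
                double[] rgb = {0, 0, 0};
                if (t > 0) {
                    // 3. + 4. record the hit point and build a vector toward the light
                    double[] p = {eye[0] + t*dir[0], eye[1] + t*dir[1], eye[2] + t*dir[2]};
                    double[] n = norm(sub(p, sphereC));
                    double[] toLight = norm(sub(lightPos, p));
                    double diff = Math.max(0, dot(n, toLight));
                    // 5. light color times object color (scaled by the diffuse term)
                    for (int k = 0; k < 3; k++) rgb[k] = lightColor[k] * objColor[k] * diff;
                }
                System.out.print(t > 0 ? '#' : '.');   // crude ASCII preview; rgb would go to a frame buffer
            }
            System.out.println();
        }
    }
}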
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
Model Extensions
My interest was in trying to extend this model by including distance in the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights at a distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but radiant flux. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law (radiant flux falling off with the square of the distance) applies only to the light traveling from the point source to a surface, not to the light that is then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
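Here is a minimal sketch of where the 1/d^2 factor belongs. The function and parameter names are placeholders, not taken from the poster's code.

// Sketch: inverse-square attenuation applies only to the light-to-surface leg.
// The distance from the surface to the camera does not enter the calculation.
class AttenuationSketch {
    static double[] shadePoint(double[] lightPos, double[] hitPoint, double[] normal,
                               double[] surfColor, double lightIntensity) {
        double[] toLight = {lightPos[0] - hitPoint[0],
                            lightPos[1] - hitPoint[1],
                            lightPos[2] - hitPoint[2]};
        double d2 = toLight[0]*toLight[0] + toLight[1]*toLight[1] + toLight[2]*toLight[2];
        double d = Math.sqrt(d2);
        double[] l = {toLight[0]/d, toLight[1]/d, toLight[2]/d};
        double nDotL = Math.max(0.0, normal[0]*l[0] + normal[1]*l[1] + normal[2]*l[2]);
        double arriving = lightIntensity / d2;          // inverse-square falloff: light -> surface
        double[] rgb = new double[3];
        for (int k = 0; k < 3; k++) {
            rgb[k] = surfColor[k] * arriving * nDotL;   // no further falloff: surface -> camera
        }
        return rgb;
    }
}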
Short version: When a color described in XYZ or xyY coordinates has a luminance Y=1, what are the physical units of that? Does that mean 1 candela, or 1 lumen? Is there any way to translate between this conceptual space and physical brightness?
Long version: I want to simulate how the sky looks in different directions, at different times of day, and (eventually) under different cloudiness and air pollution conditions. I've learned enough to figure out how to translate a given spectrum into a chromaticity, for example xyz coordinates. But almost everything I've read on color theory in graphical display is focused on relative color, so the luminance is always 1. Non-programming color theory describes the units of luminance, so I can translate from a spectrum in watts/square meter/steradian to candelas or lumens, but I have found nothing that describes the units of luminance in programming. What are the units of luminance in XYZ coordinates? I understand that the actual brightness of a patch would depend on monitor settings, but I'm really not finding any hints as to how to proceed.
Below is an example of what I'm coming across. The base color, at relative luminance of 1, was calculated from first principles. All the other colors are generated by increasing or decreasing the luminance. Most of them are plausible colors for mid-day sky. For the parameters I've chosen, I believe the total intensity in the visible range is 6.5 W/m2/sr = 4434 cd/m2, which seems to be in the right ballpark according to Wiki: Orders of Magnitude. Which color would I choose to represent that patch of sky?
Without further qualification, luminance is usually expressed in candelas per square meter (cd/m2), and the Y component of CIE XYZ is a luminance in cd/m2 if the convention used is "absolute XYZ", which is rare. (The link is to an article I wrote which contains more detailed information.) More commonly, XYZ colors are normalized such that the white point (such as the D65 or D50 white point) has Y = 1 (or Y = 100).
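As a tiny illustration of the normalization convention (whiteLuminance is an assumed reference value, e.g. the luminance you assign to the white point in cd/m2):

// Sketch: converting an absolute luminance in cd/m^2 to the relative Y used by
// most XYZ pipelines, where the chosen white point gets Y = 1.
class LuminanceSketch {
    static double relativeY(double absoluteLuminance, double whiteLuminance) {
        return absoluteLuminance / whiteLuminance;   // Y = 1 at the white point
    }
}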
I am creating a small 3d rendering application. I decided to use simple flat shading for my triangles - just calculate the cosine of angle between face normal and light source and scale light intensity by it.
But I'm not sure exactly how I should apply that shading coefficient to my RGB colors.
For example, imagine some surface at a 60 degree angle to the light source. cos(60 degrees) = 0.5, so I should retain only half of the energy in the emitted light.
I could simply scale RGB values by that coefficient, as in following pseudocode:
double shade = cos(angle(normal, lightDir))
Color out = new Color(in.r * shade, in.g * shade, in.b * shade)
But the resulting colors get too dark even at smaller angles. After some thought, that seems logical: our eyes perceive roughly the logarithm of light energy (that's why we can see both in bright daylight and at night), and RGB values already represent that log scale.
My next attempt was to use that linear/logarithmic insight. Theoretically:
output energy = lg(exp(input energy) * shade)
That can be simplified to:
output energy = lg(exp(input energy)) + lg(shade)
output energy = input energy + lg(shade)
So such shading just amounts to adding the logarithm of the shade coefficient (which is negative) to the RGB values:
double shade = lg(cos(angle(normal, lightDir)))
Color out = new Color(in.r + shade, in.g + shade, in.b + shade)
That seems to work, but is it correct? How is it done in real rendering pipelines?
The RGB color vector is multiplied by the shade coefficient
The shade coefficient is the cosine value, as you initially assumed. The logarithmic scaling is done by the target imaging device and by the human eye.
If your colors get too dark, the probable cause is one of the following:
the cosine or angle value gets truncated to an integer
your pipeline does not have a linear-scale output (some gamma corrections can do that)
you have a bug somewhere
your angle and cosine use different units (radians vs. degrees)
you forgot to add the ambient light coefficient to the shade value
your vectors are opposite or wrong (check them visually; see the first link below on how)
your vectors are not in the same coordinate system (the light is usually in the GCS and the normal vectors in the model LCS, so you need to convert at least one of them into the coordinate system of the other)
The cos(angle) itself is not usually computed with the cosine function
As you have all the data as vectors, just use the dot product:
double shade = dot(normal, lightDir) / (|normal| * |lightDir|)
If the vectors are unit size then you can discard the division by the sizes ... that is why normal and light vectors are normalized.
Some related questions and notes
Normal shading: this may clear up a thing or two (for beginners)
Normal/Bump mapping: see the fragment shader and search for the dot product
mirrored light: see this for a slightly more complex lighting scheme
GCS/LCS mean global/local coordinate system
I am trying to figure out how to scale a color by the lighting illumination using the Phong model. For example, given I = Ka*Ax, where Ka is the ambient coefficient and Ax is the ambient lighting intensity (where x can be r, g or b), I want to apply that to a surface with a texture color of, say, (1,0,1). I tried multiplying the individual RGB values by the illumination, (r*Ka*Ar, g*Ka*Ag, b*Ka*Ab), but alas, it can completely change the color, which is not what I want.
OK, I see a few things:
ambient and diffuse light should not be multiplied together; use addition instead
you do not use any normal vector or light direction/position (at least I can't find one anywhere)
also you should use the glColor parameter (unless you do not use it at all)
Try this for each color channel (.r, .g, .b) and a directional light like the Sun:
pixel_color.r = clamp_to_1                // clamp to <0.0,1.0>
(
    texture_color.r * glColor.r           // pixel color without lighting
    * (                                   // apply lighting
        diffuse.r * dot(glNormal.xyz * NormalMatrix, light_dir.xyz)   // diffuse * dot product of the normal and light direction vectors; if you also need distance attenuation, multiply this term by it
        + ambient.r                       // ambient light is additive !!!
    )
);
PS.
NormalMatrix is the ModelView matrix with its origin set to [0.0,0.0,0.0] (no position shift)
If you also want to add reflectance, then do not forget to compute the reflected vector ... it is different from the one used for diffuse light (unless the skybox is infinite in size). Also, a cubemap with an environment skybox helps a lot.
I am wondering about the most accurate way to calculate the shading generated by several different light sources and ambient light.
Ambient light is light that exists in the entire 'world' with the same intensity and no particular direction, and diffuse lighting is the lighting that occurs due to direct lighting from a point light source.
Given that Ka is the coefficient for the surface's ambient reflectivity, Ia is the intensity of the ambient light, Kd is the surface's diffuse reflectivity, Ip1 is the intensity of the first point light source, N is the surface normal, and L1 is the light direction of the first source (and similarly for the other sources).
According to my reference material the intensity of the color at the spot should be:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))
where '.' is the dot product.
But according to my understanding, the real light intensity should be some sort of average over the light sources rather than just their sum, so that if there are only two light sources the equation should look like:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))/2
and if there are 3 light sources, but the third is blocked and doesn't light the surface directly then:
I=Ka.Ia+Kd(Ip1(N.L1)+Ip2(N.L2))/3
(so that if there is a place where all 3 lights contribute, it would be lit brighter).
Am I right in my assumption?
Well, no, light shouldn't be averaged. Think about it: if you have just one powerful light source and you add another, very faint light, would the color of the object be diminished? For example, say the powerful light has intensity 10; the color (presuming the light direction is perpendicular to the surface, i.e. along the normal, and no ambient light, for simplicity's sake) would be 10. Then after you add the second, faint light with, say, intensity 0.1, the color would be (10 + 0.1) / 2, which is 5.05. So adding more light would make the object seem darker. That doesn't make sense.
In the real world, light adds. It should in your ray tracer, too.
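A minimal sketch of that accumulation, using the question's notation (Ka, Ia, Kd, and per-light Ip and N.L), might look like this:

// Sketch: contributions from several point lights are summed, not averaged.
// ip[i] is the intensity of light i; nDotL[i] is N.Li for that light.
class MultiLightSketch {
    static double intensity(double ka, double ia, double kd, double[] ip, double[] nDotL) {
        double sum = 0.0;
        for (int i = 0; i < ip.length; i++) {
            sum += ip[i] * Math.max(0.0, nDotL[i]);   // a blocked/shadowed light simply contributes 0
        }
        return ka * ia + kd * sum;
    }
}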
Perceived brightness is not a linear function of light intensity. In other words, two identical light sources aimed at one spot are not perceived as twice as "bright" as one light. (Brightness is an ambiguous term; luminance is a better term, meaning radiance weighted by human vision.)
What you can do, as an approximation, to correct the image for viewing on your monitor, given the intensities of the various pixels, is called gamma correction.
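A minimal sketch of such a correction step, assuming a simple gamma of 2.2 rather than the exact sRGB transfer curve:

// Sketch: map a linear light intensity in [0,1] to an 8-bit display value
// using simple gamma encoding.
class GammaSketch {
    static int toDisplay(double linear) {
        double clamped = Math.max(0.0, Math.min(1.0, linear));
        return (int) Math.round(255.0 * Math.pow(clamped, 1.0 / 2.2));
    }
}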