The right model for shading in Ray tracing - graphics

I am wondering about the most accurate way to calculate the shading generated from several different light sources and ambient light.
Ambient light is light that exists throughout the entire 'world' with the same intensity and no particular direction, and diffuse lighting is the lighting that occurs due to direct illumination from a point light source.
Given that Ka is the coefficient for the surface ambient reflectivity, Ia is the intensity of the ambient light, Kd is the surface diffuse reflectivity, Ip1 is the intensity of the first point light source (and Ip2 the second, accordingly), N is the surface normal, and L1 is the direction to the first light source (L2 for the second).
According to my reference material the intensity of the color at the spot should be:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))
where '.' is the dot product.
But according to my understanding the real light intensity should do some sort of average between the light sources and not just add them up, so that if there are only two light sources the equation should look like:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/2
and if there are 3 light sources, but the third is blocked and doesn't light the surface directly, then:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/3
(so that a spot where all 3 lights contribute directly would be lit more brightly).
Am I right in my assumption?

Well, no, light shouldn't be averaged. Think about it: if you have just one powerful light source and you add another, very faint light, should the color of the object be diminished? For example, say the powerful light has intensity 10; the color (presuming the light direction is aligned with the surface normal, so N.L = 1, and ignoring ambient light for simplicity's sake) would be 10. Then after you add the second, faint light with intensity 0.1, the averaged color would be (10 + 0.1) / 2 = 5.05. So adding more light would make the object appear darker. That doesn't make sense.

In the real world, light adds. It should in your ray tracer, too.
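As a minimal sketch of that summing behavior, assuming simple Lambertian diffuse shading (the function name and the example intensities below are hypothetical, chosen to mirror the 10 and 0.1 lights above):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def diffuse_intensity(Ka, Ia, Kd, lights, N):
    # Sum the diffuse contribution of every light; never average them.
    N = normalize(N)
    I = Ka * Ia
    for Ip, L in lights:
        I += Kd * Ip * max(0.0, np.dot(N, normalize(L)))  # clamp back-facing lights to zero
    return I

N = np.array([0.0, 1.0, 0.0])
strong = (10.0, np.array([0.0, 1.0, 0.0]))  # powerful light along the normal
faint = (0.1, np.array([0.0, 1.0, 0.0]))    # very faint light, same direction

print(diffuse_intensity(0.0, 0.0, 1.0, [strong], N))          # 10.0
print(diffuse_intensity(0.0, 0.0, 1.0, [strong, faint], N))   # 10.1, not 5.05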

Perceived brightness is not a linear function of light intensity. In other words, two identical light sources aimed at one spot are not perceived as twice as "bright" as one light. ("Brightness" is an ambiguous term; luminance is a better-defined quantity, meaning radiance weighted by the human visual response, but our perception of it is nonlinear.)
As an approximation, what you can do to correct the computed pixel intensities so the image looks right on your monitor is called gamma correction.
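For instance, a minimal gamma-encoding sketch (assuming linear intensities already normalized to [0, 1] and a typical display gamma of 2.2; the function name is mine):

def gamma_encode(intensity, gamma=2.2):
    # Map a linear light intensity in [0, 1] to an 8-bit display value.
    intensity = min(max(intensity, 0.0), 1.0)        # clamp to the valid range
    return round(255 * intensity ** (1.0 / gamma))   # perceptual (gamma) encoding

print(gamma_encode(0.5))    # ~186, noticeably brighter than the linear 128
print(gamma_encode(0.01))   # ~31, dim values keep more distinguishable steps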


Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a raytracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I find unintuitive. I think this is partly due to the limited range of brightness values available with 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this:
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable but flat-looking images, even when surface normals are included in the calculation.
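To make the steps concrete, here is a minimal sketch of that loop for a single sphere and a single point light (all names, positions and colors are hypothetical, and it implements only the simple model above, with no shadows, distance falloff or gamma correction):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    # Return the nearest positive ray parameter t, or None if the ray misses.
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=64, height=64):
    eye = np.array([0.0, 0.0, 0.0])
    sphere_center, sphere_radius = np.array([0.0, 0.0, -3.0]), 1.0
    sphere_color = np.array([1.0, 0.2, 0.2])
    light_pos, light_color = np.array([2.0, 2.0, 0.0]), np.array([1.0, 1.0, 1.0])
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            # 1. ray from the eye through the pixel, on a virtual screen at z = -1
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            direction = normalize(np.array([px, py, -1.0]))
            # 2./3. collision test; record where the ray hits
            t = hit_sphere(eye, direction, sphere_center, sphere_radius)
            if t is None:
                continue  # background stays black
            point = eye + t * direction
            normal = normalize(point - sphere_center)
            # 4. vector from the collision point to the light
            to_light = normalize(light_pos - point)
            # 5. multiply light color by object color, weighted by the normal term
            image[y, x] = np.clip(sphere_color * light_color * max(0.0, np.dot(normal, to_light)), 0.0, 1.0)
    return image

img = render()
print(img.shape, img.max())  # 64x64x3 array of linear color values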
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which make a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this:
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity but irradiance: the radiant flux arriving per unit surface area, which for a point source falls off with the square of the distance. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - irradiance falling off with the square of the distance - applies only to the light travelling from the point source to a surface, not to the light that is then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
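A sketch of that idea for a single point light (scalar light power and albedo, names of my choosing): the squared distance appears only between the light and the surface point, never between the surface and the camera.

import numpy as np

def direct_light(point, normal, light_pos, light_power, albedo):
    # Irradiance-style falloff applied only over the light-to-surface path.
    to_light = light_pos - point
    d2 = np.dot(to_light, to_light)            # squared distance to the light
    to_light = to_light / np.sqrt(d2)
    cos_theta = max(0.0, np.dot(normal, to_light))
    return albedo * light_power * cos_theta / d2   # inverse square falloff here only

# The camera position never enters the calculation, so pulling the camera back
# changes how large a wall looks in the image, not how bright it is.
wall_point = np.array([0.0, 0.0, 0.0])
wall_normal = np.array([0.0, 0.0, 1.0])
print(direct_light(wall_point, wall_normal, np.array([0.0, 0.0, 10.0]), 400.0, 0.8))  # 3.2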

What is the minimum distance between two colors to be easily distinguishable?

I need to draw a Sierpinski Triangle for some homework.
The task includes a random color picker for the triangle. As the background is white this may lead to problems with visibility.
My question is: what is the minimum distance between two RGB values for them to still be easily distinguishable?
I am aware that this is rather subjective and depends on monitor, ambient light and the definition of "easily distinguishable" but a rough estimate would suffice. Web searches were mostly concerned with physical distance.

Phong illumination produces black

I guess I am somehow stuck with a basic question where I just don't get the correct answer.
The Phong illumination model contains an ambient, diffuse and specular part.
Each part contains a multiplication of the color of the light (ambient or source) with a coefficient (ambient, diffuse, or specular): I * coe
The light and the coefficients consist of the r, g, b color channels:
I_r * coe_r
I_g * coe_g
I_b * coe_b
Assuming the light is green (0,1,0) and the coefficient (it doesn't matter which one) is blue (0,0,1), the result would be black (0,0,0).
How does this make any sense?
A blue object only reflects blue light. If you light it using white light, which contains all colors, it reflects only the blue light, so that is why it appears blue to the viewer. If you shine a light that has no blue component on a blue object, no light will be reflected.
In real life, lights and pigments are never "pure", and an object will not appear completely black in these situations. However, in the world of computer graphics, this can happen easily.
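A tiny sketch of the per-channel multiplication (hypothetical values matching the example above):

light = (0.0, 1.0, 0.0)        # pure green light
coefficient = (0.0, 0.0, 1.0)  # material that reflects only blue
reflected = tuple(I * k for I, k in zip(light, coefficient))
print(reflected)  # (0.0, 0.0, 0.0) -> black: there is no blue in the light to reflect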

Flat shading coordinate system

Can someone please point me to a reference that would help me solve this kind of question? Since this is an exam question, I would like to attempt it myself before asking for a full solution.
Consider a triangular face of three vertices A(0,2,-1), B(1,0,1) and the origin O, and
the normal vectors at the vertices are nA=(0,1,0), nB=(1,0,0) and nO=(0,0,1),
respectively. The incident light is white and directional in direction of L=(1,2,2) and the
intensity is 1, the background ambient light intensity is 0.1, and the diffuse reflection
coefficients for (red, green, blue) are (0.6,0.7,0.8). No specular light contribution
needs to be considered.
a) Find the (red, green, blue) intensity values in the face using flat shading at the centre of the face.
Thanks
BeyelerStudios' comment tells you everything you need to know, but since you seem to be a complete rookie in the field, here is some more info:
definitions
Let's have a triangular face defined by its 3 vertices (v0,v1,v2) and normals (n0,n1,n2). Let the light source be directional, with to-light vector light. The light has ambient and directional parts with (r,g,b) colors col_dir=(1.0,1.0,1.0) and col_amb=(0.1,0.1,0.1). The reflectance of the surface is col_face=(0.6,0.7,0.8). You want to get the pixel color for the center point of the triangle.
compute the normal at the point of interest
For an arbitrary point of interest you can use barycentric coordinates (which are convenient when you are computing this on paper).
But in your case the point is the center, so the normal is just the average of the 3 normals:
n=(n0+n1+n2)/3.0
If I remember correctly, for an arbitrary point given in barycentric coordinates (u, v, w = 1-u-v) it would be:
n=u*n0 + v*n1 + w*n2
compute cos(angle) between the normal and the to-light vector
That is easy: use the dot product (assuming both vectors are unit length, i.e. normalized):
cos(angle) = (n.x*light.x)+(n.y*light.y)+(n.z*light.z)
If your vectors are not normalized, you need to divide the result by their magnitudes:
cos(angle) = ( (n.x*light.x)+(n.y*light.y)+(n.z*light.z) ) / (|n|*|light|)
compute the color
That is also easy:
color = col_face * ( col_dir*cos(ang) + col_amb )
Do not forget to handle negative cos(ang). The behavior depends on your implementation: sometimes max(0.0, cos(ang)) is used, other times |cos(ang)|.
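As a sketch of the whole computation with the numbers from the question (treating L=(1,2,2) as the to-light direction, as above; this is my own check, not an official solution):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

n0, n1, n2 = np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
light = normalize(np.array([1.0, 2.0, 2.0]))
col_dir = np.array([1.0, 1.0, 1.0])     # directional light intensity 1 per channel
col_amb = np.array([0.1, 0.1, 0.1])     # ambient intensity 0.1 per channel
col_face = np.array([0.6, 0.7, 0.8])    # diffuse reflection coefficients (r,g,b)

n = normalize((n0 + n1 + n2) / 3.0)     # average normal for flat shading at the centre
cos_ang = max(0.0, np.dot(n, light))    # clamp a back-facing result to zero
color = col_face * (col_dir * cos_ang + col_amb)
print(cos_ang)  # ~0.962
print(color)    # ~[0.637, 0.744, 0.850]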
[Notes]
If you are interested in how rendering engines handle the interpolation, see:
how to rasterize rotated rectangle (in 2d by setpixel)

Question on Specular reflection behaviour?

Why is specularly reflected light bright (usually white), while other parts of the object reflect only the perceived color wavelength?
From a physical perspective, this is because:
specular reflection results from light bouncing off the surface of the material
diffuse reflection results from light bouncing around inside the material
Say you have a piece of red plastic with a smooth surface. The plastic is red because it contains a red dye or pigment. Incoming light that enters the plastic tends to be reflected if red, or absorbed if it is not; this red light bounces around inside the plastic and makes it back out in a more or less random direction (which is why this component is called "diffuse").
On the other hand, some of the incoming light never makes it into the plastic to begin with: it bounces off the surface, instead. Because the surface of the plastic is smooth, its direction is not randomized: it reflects off in a direction based on the mirror reflection angle (which is why it is called "specular"). Since it never hits any of the colorant in the plastic, its color is not changed by selective absorption like the diffuse component; this is why specular reflection is usually white.
I should add that the above is a highly simplified version of reality: there are plenty of cases that are not covered by these two possibilities. However, they are common enough and generally applicable enough for computer graphics work: the diffuse+specular model can give a good visible approximation of many surfaces, especially when combined with other cheap approximations like bump mapping, etc.
Edit: a reference in response to Ayappa's comment -- the mechanism that generally gives rise to specular highlights is called Fresnel reflection. It is a classical phenomenon, depending solely on the refractive index of the material.
If the surface of the material is optically smooth (e.g., a high-quality glass window), the Fresnel reflection will produce a true mirror-like image. If the material is only partly smooth (like semigloss paint) you will get a specular highlight, which may be narrow or wide based on how smooth it is at the microscopic level. If the material is completely rough (either at a microscopic level or at some larger scale which is smaller than your image resolution), then the Fresnel reflection becomes effectively diffuse, and cannot be readily distinguished from other forms of diffuse reflection.
It's a question of wavelength absorption vs. reflection.
First, specular reflections, as a separate kind of lighting, do not exist in the real world. Everything you see is mostly reflected light (the rest being emissive or otherwise), including diffuse lighting. Realistically, there is no real difference between diffuse and specular lighting: it is all reflection. Also keep in mind that real-world lighting is not clamped to the 0-1 range the way pixels are.
Diffusion of light reflected off of a surface is caused by the microscopic roughness of the surface (microfacets). Imagine a surface is made up of millions of microscopic mirrors. If they are all aligned, you get a perfect polished mirror. If they are all randomly oriented, light is scattered in every direction and the resulting reflection is "blurred". Many formulas in computer graphics try to model this microscopic surface roughness, like Oren–Nayar, but usually the simple Lambert model is used because it is computationally cheap.
Colors are a result of wavelength absorption vs. reflection. When light energy hits a material, some of that energy is absorbed by the material; not all wavelengths are absorbed at the same rate, however. If white light bounces off of a surface which absorbs red wavelengths, you will see a green-blue color. The more a surface absorbs light, the darker its color will appear, as less and less light energy is returned. Most of the absorbed light energy is converted to thermal energy, which is why black materials heat up in the sun faster than white materials.
Specular in computer graphics is meant to simulate a strong direct light source reflecting off of a surface as it would in the real world. Realistically though, you would have to reflect the entire scene in high-dynamic-range lighting and color depth, and specular would simply be the result of light sources being much brighter than the rest of the reflected scene, returning a much higher amount of light energy after one or more reflections than the rest of the light in the scene. That would be quite computationally painful, though! Not feasible for realtime graphics just yet. Lighting with HDR environment maps was an attempt to properly simulate this.
Additional references and explanations:
Specular Reflections:
Specular reflections only differ from diffuse reflections by the roughness of a reflective surface. There is no inherent difference between them, both terms refer to reflected light. Also note that diffusion in this context simply means the scattering of light, and diffuse reflection should not be confused with other forms of light diffusion such as subsurface diffusion (commonly called subsurface scattering or SSS). Specular and diffuse reflections could be replaced with terms like "sharp" reflections and "blurry" reflections of light.
Electromagnetic Energy Absorption by Atoms:
Atoms seek a balanced energy state, so if you add energy to an atom, it will seek to discharge it. When energy such as light is passed to an atom, some of the energy is absorbed, which excites the atom and causes a gain in thermal energy (heat); the rest is reflected or transmitted (passes "through"). Atoms absorb energy at different wavelengths at different rates, and the reflected light, with its intensity modified per wavelength, is what gives color. How much energy an atom can absorb depends on its current energy state and atomic structure.
So, in a very, very simple model, ignoring angle of incidence and other factors, say I shine RGB(1,1,1) on a surface which absorbs RGB(0.5,0,0.75); assuming no transmittance is occurring, your reflected light value is RGB(0.5,1.0,0.25).
Now say you shine a light of RGB(2,2,2) on the same surface. The surface's properties have not changed. The reflected light is RGB(1.5,2.0,1.25). If the sensor receiving this reflected light clamps at 1.0, then the perceived light is RGB(1,1,1), or white, even though the material is colored.
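A tiny sketch of that arithmetic, using the same hypothetical absorption values and a sensor that clamps at 1.0:

def reflect(incoming, absorbed):
    # Reflected light = incoming light minus the absorbed portion (the simple model above).
    return tuple(i - a for i, a in zip(incoming, absorbed))

def clamp(color, limit=1.0):
    return tuple(min(c, limit) for c in color)

absorbed = (0.5, 0.0, 0.75)
print(reflect((1.0, 1.0, 1.0), absorbed))         # (0.5, 1.0, 0.25): the surface looks colored
print(clamp(reflect((2.0, 2.0, 2.0), absorbed)))  # (1.0, 1.0, 1.0): clamped to white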
Some references:
Page at www.physicsclassroom.com
Page on Ask a Scientist
Wikipedia: Atoms
Wikipedia: Energy Levels
