Why is specularly reflected light usually bright and white, while the rest of the object reflects only the wavelengths of its perceived color?
From a physical perspective, this is because:
specular reflection results from light bouncing off the surface of material
diffuse reflection results from light bouncing around inside the material
Say you have a piece of red plastic with a smooth surface. The plastic is red because it contains a red dye or pigment. Incoming light that enters the plastic tends to be reflected if red, or absorbed if it is not; this red light bounces around inside the plastic and makes it back out in a more or less random direction (which is why this component is called "diffuse").
On the other hand, some of the incoming light never makes it into the plastic to begin with: it bounces off the surface, instead. Because the surface of the plastic is smooth, its direction is not randomized: it reflects off in a direction based on the mirror reflection angle (which is why it is called "specular"). Since it never hits any of the colorant in the plastic, its color is not changed by selective absorption like the diffuse component; this is why specular reflection is usually white.
I should add that the above is a highly simplified version of reality: there are plenty of cases that are not covered by these two possibilities. However, they are common enough and generally applicable enough for computer graphics work: the diffuse+specular model can give a good visual approximation to many surfaces, especially when combined with other cheap approximations like bump mapping.
Edit: a reference in response to Ayappa's comment -- the mechanism that generally gives rise to specular highlights is called Fresnel reflection. It is a classical phenomenon, depending solely on the refractive index of the material.
If the surface of the material is optically smooth (e.g., a high-quality glass window), the Fresnel reflection will produce a true mirror-like image. If the material is only partly smooth (like semigloss paint) you will get a specular highlight, which may be narrow or wide based on how smooth it is at the microscopic level. If the material is completely rough (either at a microscopic level or at some larger scale which is smaller than your image resolution), then the Fresnel reflection becomes effectively diffuse, and cannot be readily distinguished from other forms of diffuse reflection.
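To make the angle dependence concrete, here is a minimal sketch of the classical (unpolarized) Fresnel equations for a dielectric; the function name and the default index of 1.5 (a typical value for glass or clear plastic) are illustrative assumptions, not taken from the text above:

```python
import math

def fresnel_reflectance(cos_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance at a dielectric interface.

    cos_i: cosine of the angle between the incoming ray and the surface normal.
    n1, n2: refractive indices of the outside medium and the material
            (1.5 is a typical value for glass or clear plastic).
    """
    cos_i = abs(cos_i)
    sin_t = (n1 / n2) * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))  # Snell's law
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2  # s-polarized
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2  # p-polarized
    return 0.5 * (r_s + r_p)  # average the two for unpolarized light

print(fresnel_reflectance(1.0))   # normal incidence: ~0.04 (about 4% reflected)
print(fresnel_reflectance(0.1))   # near grazing: much higher, approaching 1.0
```

At normal incidence only about 4% of the light is reflected at the surface, untinted by the pigment, which is why the specular component reads as a white highlight on top of the diffuse color rather than replacing it.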
It's a question of wavelength absorption vs. reflection.
First, "specular" reflections do not exist as a separate phenomenon in the real world. Everything you see is mostly reflected light (the rest being emissive or otherwise), including diffuse lighting. Realistically, there is no real difference between diffuse and specular lighting: it's all reflection. Also keep in mind that real-world lighting is not clamped to the 0-1 range the way pixels are.
Diffusion of light reflected off of a surface is caused by the microscopic roughness of the surface (microfacets). Imagine a surface is made up of millions of microscopic mirrors. If they are all aligned, you get a perfect polished mirror. If they are all randomly oriented, light is scattered in every direction and the resulting reflection is "blurred". Many formulas in computer graphics try to model this microscopic surface roughness, like Oren–Nayar, but usually the simple Lambert model is used because it is computationally cheap.
Colors are a result of wavelength absorption vs. reflection. When light energy hits a material, some of that energy is absorbed. Not all wavelengths are absorbed at the same rate, however. If white light bounces off of a surface which absorbs red wavelengths, you will see a green-blue color. The more a surface absorbs light, the darker it appears, as less and less light energy is returned. Most of the absorbed light energy is converted to thermal energy, which is why black materials heat up in the sun faster than white materials.
Specular in computer graphics is meant to simulate a strong direct light source reflecting off of a surface, as it would in the real world. Realistically, though, you would have to reflect the entire scene in high-range lighting and color depth, and specular would simply be the result of light sources being much brighter than the rest of the reflected scene, returning a much higher amount of light energy after one or more reflections. That would be quite computationally painful, and is not feasible for real-time graphics just yet. Lighting with HDR environment maps was an attempt to properly simulate this.
Additional references and explanations:
Specular Reflections:
Specular reflections only differ from diffuse reflections by the roughness of a reflective surface. There is no inherent difference between them, both terms refer to reflected light. Also note that diffusion in this context simply means the scattering of light, and diffuse reflection should not be confused with other forms of light diffusion such as subsurface diffusion (commonly called subsurface scattering or SSS). Specular and diffuse reflections could be replaced with terms like "sharp" reflections and "blurry" reflections of light.
Electromagnetic Energy Absorption by Atoms:
Atoms seek a balanced energy state, so if you add energy to an atom, it will seek to discharge it. When energy like light is passed to an atom, some of the energy is absorbed, which excites the atom and causes a gain in thermal energy (heat); the rest is reflected or transmitted (passes "through"). Atoms absorb energy at different wavelengths at different rates, and the reflected light, with its intensity modified per wavelength, is what gives color. How much energy an atom can absorb depends on its current energy state and atomic structure.
So, in a very, very simple model, ignoring angle of incidence and other factors: say I shine RGB(1,1,1) on a surface which absorbs RGB(0.5, 0, 0.75). Assuming no transmittance is occurring, the reflected light value is RGB(0.5, 1.0, 0.25).
Now say you shine a light of RGB(2,2,2) on the same surface. The surface's properties have not changed. The reflected light is RGB(1.5, 2.0, 1.25). If the sensor receiving this reflected light clamps at 1.0, then the perceived light is RGB(1,1,1), or white, even though the material is colored.
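To make that arithmetic concrete, here is a minimal sketch of the same toy model; the subtractive "absorption" and the clamp-at-1.0 sensor simply mirror the numbers above and are illustrative simplifications, not a physical model:

```python
def reflect(light_rgb, absorb_rgb):
    """Reflected light = incident light minus the energy absorbed per channel."""
    return tuple(max(0.0, l - a) for l, a in zip(light_rgb, absorb_rgb))

def clamp(rgb, limit=1.0):
    """A sensor (or 8-bit pixel) that saturates at `limit`."""
    return tuple(min(limit, c) for c in rgb)

absorb = (0.5, 0.0, 0.75)   # absorbs half the red and three-quarters of the blue

print(reflect((1.0, 1.0, 1.0), absorb))          # (0.5, 1.0, 0.25) - looks colored
print(clamp(reflect((2.0, 2.0, 2.0), absorb)))   # (1.0, 1.0, 1.0)  - clips to white
```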
Some references:
A page at www.physicsclassroom.com
A page on Ask a Scientist
Wikipedia: Atoms
Wikipedia: Energy levels
Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I find unintuitive. I think this is partly to do with the limited range of brightness values available with 8-bit color images, but more likely because I should not be using a linear map between light intensity and pixel color.
Background
I recently developed an interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat looking images, even when surface normals are included in the calculation.
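For reference, here is a minimal, self-contained sketch of those steps for a single hard-coded sphere and point light; the scene values and helper names are illustrative assumptions, and the ASCII ramp simply stands in for writing pixel colors:

```python
import math

# Illustrative scene: one sphere and one point light; camera at the origin
# looking down -z with roughly a 90 degree field of view.
SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
SPHERE_COLOR  = (1.0, 0.8, 0.6)
LIGHT_POS     = (2.0, 2.0, 0.0)
LIGHT_COLOR   = (1.0, 1.0, 1.0)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0] / length, v[1] / length, v[2] / length)

def intersect_sphere(origin, direction):
    """Distance along the ray to the nearest hit, or None if the ray misses."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS * SPHERE_RADIUS
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width=60, height=30):
    ramp = " .:-=+*#%@"
    for y in range(height):
        row = ""
        for x in range(width):
            # 1. Ray from the eye through this pixel.
            px = (2.0 * (x + 0.5) / width - 1.0) * (width / height) * 0.5
            py = 1.0 - 2.0 * (y + 0.5) / height
            ray = normalize((px, py, -1.0))
            # 2. Collision test against the scene (just one sphere here).
            t = intersect_sphere((0.0, 0.0, 0.0), ray)
            if t is None:
                row += " "
                continue
            # 3./4. Record the hit point and build a vector toward the light.
            hit = (ray[0] * t, ray[1] * t, ray[2] * t)
            normal = normalize(sub(hit, SPHERE_CENTER))
            to_light = normalize(sub(LIGHT_POS, hit))
            # 5. Multiply light color by object color, scaled by the normal term.
            shade = max(0.0, dot(normal, to_light))
            color = [sc * lc * shade for sc, lc in zip(SPHERE_COLOR, LIGHT_COLOR)]
            brightness = sum(color) / 3.0
            row += ramp[1 + min(8, int(brightness * 9))]
        print(row)

render()
```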
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which form the boundary of the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights, distance of order 10 from the camera.
What I have found is this
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera, it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity in the radiometric sense, but irradiance: the radiant flux arriving per unit area of the surface. For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - irradiance falling off with the square of the distance - applies only to the light travelling from the point source to a surface, not to the light that is then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
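As a sketch of that distinction (the function and variable names are illustrative), note that the 1/d^2 factor below uses only the surface-to-light distance; the camera position never appears:

```python
import math

def point_light_contribution(hit_point, normal, light_pos, light_intensity):
    """Lambertian contribution of a point light, with inverse-square falloff
    applied to the light-to-surface distance only. The distance from the
    surface to the camera does not appear anywhere in this calculation."""
    to_light = tuple(l - p for l, p in zip(light_pos, hit_point))
    d2 = sum(c * c for c in to_light)                      # squared distance
    d = math.sqrt(d2)
    to_light = tuple(c / d for c in to_light)              # normalize
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return light_intensity * n_dot_l / d2                  # inverse-square law

# Same surface point and light, viewed from cameras 10 or 100 units away:
# the returned value is identical, because the camera is not an input.
print(point_light_contribution((0, 0, 0), (0, 0, 1), (0, 0, 5), 100.0))  # 4.0
```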
In reading academic papers on rendering, graphics processing, lighting, etc., I am finding a wide variety of units mentioned and used.
For example, Bruneton's atmospheric scattering paper seems to use candelas per square meter (cd/m^2), representing luminance. However, Jensen's night sky model uses watts per square meter (W/m^2), implying irradiance. Other papers mention irradiance, luminance, illuminance, etc., with seemingly no common representation of the lighting calculations used. How, then, can one be sure that in implementing all of these papers the calculations will "play well" together?
To add to the confusion, most papers on adaptive tonemapping seem to forgo units altogether, merely recommending that pixels (referred to as luminance) be rendered in a log format (decibels). A decibel is useless without a reference intensity/power.
This begs the question: what unit does a single pixel represent? When I calculate the "average luminance" of a scene by averaging the log-brightness of the pixels, what exactly am I calculating? The term "luminance" itself implies an area being illuminated and a solid angle for the source. This leads to two more questions: what is the solid angle of the point source? What is the area of a pixel?
My question is thus,
What units should lighting in a 3d graphics engine be represented in to allow for proper, calibrated brightness control across a wide variety of light sources, from faint starlight to bright sunlight, and how does this unit relate to the brightness of individual pixels?
Briefly: luminance, measured in candela per square meter (cd/m^2), is the appropriate unit.
Less briefly: computer graphics is usually concerned with what things should look like to people. The units that describe this are:
"luminous flux" is measured in lumens, or lm, which are defined proportional to the total radiated power (in watts) of light at a particular wavelength.
"luminous intensity" is measured in candela, or cd, which can be defined as lumens per steradian (lm/sr).
Intuitively, when you spread the same amount of energy over a larger area, it becomes proportionately less bright. This yields two more quantities:
"irradiance" is the luminous flux per unit area. It is measured in lm/m^2, and is proportional to W/m^2.
"radiance" is the luminous intensity per unit area. It is measured in cd/m^2, or lm/(sr.m^2).
Now, to answer your questions:
Each pixel represents a finite amount of solid angle as seen from the camera, measured in steradians. In the context of your question, the relevant area is the area of the object being rendered.
The luminance (measured in cd/m^2) represents surface brightness, and has the unique property that it is invariant along any unobstructed path of observation (which makes it the most appropriate quantity for a rendering engine). The color of each pixel represents the average luminance over the solid angle occupied by that pixel.
Note that, by definition, a point source doesn't occupy any solid angle; its luminance is technically infinite. The illuminance it produces is finite, though, and technically it should contribute only a finite (if potentially large) amount to a given pixel value. In any case, if you want to render point sources directly, you will need to treat them differently from area sources, and deal with the fact that quantities other than luminance are not invariant along a given ray...
When Jensen et al's paper "A Physically-Based Night Sky Model" uses an irradiance-related W/m^2 in a table of various sources of illumination, I would guess that their intention was to describe their relative contribution as averaged over the entire night sky, as abstracted from any visual details.
Finally, note that truly physically based models need to integrate over the observable spectrum in order to evaluate brightness and color. Thus, such models must start out with a distribution of watts over visible wavelengths, and use the standard human colorimetric model to evaluate lumens.
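As a rough sketch of that last step, the luminance of a spectral radiance distribution is Y = 683 * integral of L(lambda) * V(lambda) d(lambda); the sample wavelengths, the coarse V(lambda) table, and the flat test spectrum below are illustrative approximations:

```python
# Luminance (cd/m^2) from spectral radiance (W / (sr * m^2 * nm)) via the
# photopic luminosity function V(lambda). The table below is a coarse
# approximation of the CIE 1931 photopic curve, sampled every 50 nm.
V = {450: 0.038, 500: 0.323, 550: 0.995, 600: 0.631, 650: 0.107, 700: 0.004}

def luminance(spectral_radiance, step_nm=50.0):
    """Simple rectangle-rule integration over the sample wavelengths."""
    return 683.0 * sum(spectral_radiance(lam) * v * step_nm for lam, v in V.items())

# A hypothetical flat spectrum of 0.001 W/(sr*m^2*nm) across the visible band:
print(luminance(lambda lam: 0.001))   # roughly 72 cd/m^2 with these coarse samples
```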
The SI unit for luminance ("brightness") is the candela per square metre, so if you want to represent actual physical quantities it would be hard to argue against using that. As for how this unit relates to the brightness of an individual pixel: that would be a function of the brightness of the part of the illumination source represented in the pixel's viewing area, combined with contributions from elsewhere in the scene as calculated by the engine - presumably this would vary considerably depending on the renderer.
I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single color polygons to the screen and move the camera around the "scene". I even have a Z buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple, and ambient light seems simple enough, directional light should be fairly simple too. But I really want point lighting with the ability to move the light source around and cast very primitive shadows ( mostly I don't want light shining through walls ).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles, and if these rays intersect a polygon it will light that polygon and stop moving forward. However, when I think about a scene with multiple light sources and multiple polygons with all those rays, I imagine it will get very slow. I also don't know how to handle the case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, then there should be no point within that distance where any two rays are far enough apart to miss a polygon, but that only increases my problem with the number of calculations to perform.
My question to you is: is there some trick to point light sources to speed them up, or just to organize it better? I'm afraid I'll just end up with a nightmare of nested for loops. I can't use OpenGL or Direct3D or any other cheats, because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3d applications is (or rather, has in the past generally been) done by very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3d engines have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games - simply because they place such a heavy burden on the rendering system - and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per-pixel) you would calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals & colour info are all rendered to intermediate buffers - which is then used to calculate lighting contributions. This way, the lighting calculations are not dependent on the geometry count. It does however, have a lot of other overhead.
There are a lot of options. Implementing anything much more complex than some of the basic models that dedicated graphics cards have used over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only the triangles that the light would reach? Ray-triangle intersections for every light against every polygon are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light; you would do this just before you start rasterizing, since you have to pass through all polygons in any case.
You can calculate the lighting using any illumination model, something very simple like Lambertian reflectance, which shades the surface based on the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are: if your surface normal is in world space, be sure to calculate the world-space light vector. There are advantages to calculating lighting in certain spaces, which you can look at later on; for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong - this is the shading model graphics cards used for many years.
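Here is a minimal sketch of that per-light loop, with everything kept in world space; the vector helpers, material values, and the example scene are illustrative assumptions rather than code from the book:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def shade_vertex(position, normal, view_pos, lights,
                 diffuse_color=(1.0, 1.0, 1.0), shininess=32.0):
    """Sum Lambert diffuse + Blinn-Phong specular over all lights.
    position, normal, view_pos and the light positions must all be in the
    same space (world space here) - mixing spaces gives strange results."""
    normal = normalize(normal)
    to_view = normalize(tuple(v - p for v, p in zip(view_pos, position)))
    result = [0.0, 0.0, 0.0]
    for light_pos, light_color in lights:
        to_light = normalize(tuple(l - p for l, p in zip(light_pos, position)))
        n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
        # Blinn-Phong: specular from the half vector between light and view dirs.
        half = normalize(tuple(a + b for a, b in zip(to_light, to_view)))
        n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half)))
        spec = n_dot_h ** shininess if n_dot_l > 0.0 else 0.0
        for i in range(3):
            result[i] += light_color[i] * (diffuse_color[i] * n_dot_l + spec)
    return tuple(result)

# One white light straight above a horizontal surface, camera off to the side:
print(shade_vertex((0, 0, 0), (0, 1, 0), (3, 3, 0), [((0, 5, 0), (1, 1, 1))]))
```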
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization.
This takes you through every step of programming a 3d engine. I am not sure what the black art book covers.
I am wondering about the most accurate way to calculate the shadow generated from several different light sources and ambient light.
Ambient light is light that exists throughout the entire 'world' with the same intensity and no particular direction, and diffuse lighting is the lighting that occurs due to direct illumination from a point light source.
Given that Ka is the coefficient for the surface's ambient reflectivity, Ia is the intensity of the ambient light, Kd is the surface's diffuse reflectivity, Ip1 is the intensity of the first point light source, N is the surface normal, and L1 is the direction of the first light source (and Ip2, L2 likewise for the second).
According to my reference material, the intensity of the color at the spot should be:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))
where N.L is the dot product of the surface normal and the light direction.
But according to my understanding, the real light intensity should be some sort of average of the light sources rather than just their sum, so that if there are only two light sources the equation should look like:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/2
and if there are 3 light sources, but the third is blocked and doesn't light the surface directly, then:
I = Ka*Ia + Kd*(Ip1*(N.L1) + Ip2*(N.L2))/3
(so that a spot where all 3 lights contribute would be lit brighter).
Am I right in my assumption?
Well, no, light shouldn't be averaged. Think about it: if you have just one powerful light source and you add another, very faint light, should the color of the object be diminished? Say the powerful light has intensity 10; the color (presuming the light direction is aligned with the surface normal, and no ambient light, for simplicity's sake) would be 10. Then after you add the second, faint light, with say intensity 0.1, the averaged color would be (10 + 0.1) / 2, which is 5.05. So adding more light would make the object appear darker. That doesn't make sense.
In the real world, light adds. It should in your ray tracer, too.
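A tiny sketch using the intensities from the example above:

```python
strong, faint = 10.0, 0.1

added    = strong + faint            # 10.1 - adding a light can only brighten
averaged = (strong + faint) / 2.0    # 5.05 - "averaging" darkens the scene

print(added, averaged)
```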
Perceived brightness is not a linear function of light intensity. In other words, two identical light sources aimed at one spot are not perceived as twice as "bright" as one light. (Brightness is an ambiguous term; luminance is a better term, meaning radiance weighted by the human visual response, but even luminance is not perceived linearly.)
What you can do, as an approximation, to correct the image for viewing on your monitor given the intensities of the various pixels, is called gamma correction.
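As a minimal sketch, assuming a simple power-law display gamma of 2.2 rather than the exact sRGB curve: accumulate the lights in linear space, then encode once for display.

```python
def gamma_encode(linear, gamma=2.2):
    """Map a linear light value in [0, 1] to a display value in [0, 1]."""
    return max(0.0, min(1.0, linear)) ** (1.0 / gamma)

def to_8bit(linear):
    return round(255 * gamma_encode(linear))

# Do all the adding of lights in linear space, then encode once for display:
linear_pixel = 0.18                  # mid-grey reflectance under unit light
print(to_8bit(linear_pixel))         # ~117, not the 46 a linear mapping gives
```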
I'm trying to produce a shader to replicate a white plastic object with a colored light inside - either a shader that is translucent, so that if I put a light inside the object the light will show through, or a shader that fakes the effect of a light inside.
The effect I'm going for is kind of like light going through a lamp shade, similar to these pictures:
Ideally I would be able to control the strength and colour of the light, to get it to pulse and rotate through some nice bright fluorescent colours.
Though I'm not sure where to start!
My question is: does anyone know which techniques I should be researching to produce such a shader, or have an example of the same or a similar shader I can use as a starting point? Or, if you want, even provide a shader that might do the job.
You might want to do some research on Subsurface Scattering to get an idea of how to go about recreating this kind of effect. Subsurface scattering is important for rendering realistic skin but in that case you are generally dealing with a light in front of or behind a translucent object rather than inside it. The same basic principles apply but some of the tricks and hacks used for real time subsurface scattering approximations may not work for your case.
Nice photos. It looks like the type of translucent plastic you use can make a big difference. What I see is that the brightness of the plastic at each point is based on the angle between the ray from the light source to that point, and the surface normal at that point. (The viewer angle is irrelevant.)
When the vector from internal light source to surface point is nearly parallel to the surface normal vector, the surface point is bright; when they're nearly perpendicular to each other, the surface point is dark. So try using the dot product of those two vectors. Don't forget to normalize.
In other words, it's basically diffuse reflection, except that you're adding the effect of internal light sources (transmitted) to the effect of external light sources (reflected). See Lambertian_reflectance as a starting point.
You may also want to add a little specular reflection on top of that.
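Here is a minimal sketch of that idea; the function name, the choice to ignore distance falloff, and the example values are illustrative assumptions (and the specular term mentioned above is omitted):

```python
import math

def internal_glow(surface_point, surface_normal, light_pos, light_color,
                  strength=1.0):
    """Brightness of a translucent shell lit from the inside: based on the
    angle between the internal-light-to-surface direction and the outward
    surface normal. The viewer direction is deliberately not used."""
    to_surface = tuple(s - l for s, l in zip(surface_point, light_pos))
    d = math.sqrt(sum(c * c for c in to_surface)) or 1.0
    to_surface = tuple(c / d for c in to_surface)          # don't forget to normalize
    n_dot_l = max(0.0, sum(n * t for n, t in zip(surface_normal, to_surface)))
    return tuple(strength * n_dot_l * c for c in light_color)

# Point on a unit sphere directly "above" an off-center inner light:
print(internal_glow((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.3, 0.0), (1.0, 0.2, 0.8)))
```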
The third image is more complex: I think it's showing the shadows of the inner surfaces on the outer ones.
You can also fake this effect by transferring diffuse lighting from the back face to the front face. More specifically, you mix the lighting computed on both sides using some transfer function. This method is only applicable to thin-walled objects, though.
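A minimal sketch of that fake, assuming you already have the front-face and flipped-normal (back-face) diffuse terms per fragment; the simple linear mix and the "thinness" parameter are illustrative choices:

```python
def fake_translucency(n_dot_l_front, n_dot_l_back, thinness=0.6):
    """Blend front-face diffuse with diffuse computed for the flipped normal,
    so light "behind" (or inside) a thin wall bleeds through to the front.
    `thinness` is the transfer amount: 0 = opaque, 1 = fully transmissive."""
    front = max(0.0, n_dot_l_front)
    back = max(0.0, n_dot_l_back)      # same dot product taken with -normal
    return front + thinness * back

# A surface facing away from the light (front term 0) still receives light:
print(fake_translucency(n_dot_l_front=-0.8, n_dot_l_back=0.8))   # 0.48
```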