Considering the two types of light in an illumination model, can I understand an area light source as an infinite number of point lights distributed over an area? Or is the number finite, with the area composed of physical light sources such as bulbs?
In reading academic papers on rendering, graphics processing, lighting, and so on, I am finding a wide variety of units mentioned and used.
For example, Bruneton's Atmospheric Scattering paper seems to use candelas per square meter (cd/m^2), representing luminance. However, Jensen's night sky model uses watts per square meter (W/m^2), implying irradiance. Other papers mention irradiance, luminance, illuminance, etc., with seemingly no common representation of the lighting calculations used. How, then, can one even be sure that in implementing all of these papers, the calculations will "play well" together?
To add to the confusion, most papers on adaptive tonemapping seem to forgo units altogether, merely recommending that pixel values (referred to as luminance) be expressed on a log scale (decibels). A decibel is meaningless without a reference intensity/power.
This raises the question: what unit does a single pixel represent? When I calculate the "average luminance" of a scene by averaging the log-brightness of the pixels, what exactly am I calculating? The term "luminance" itself implies an area being illuminated and a solid angle for the source. This leads to two more questions: "What is the solid angle of the point source?" "What is the area of a pixel?"
My question is thus:
What units should lighting in a 3d graphics engine be represented in to allow for proper, calibrated brightness control across a wide variety of light sources, from faint starlight to bright sunlight, and how does this unit relate to the brightness of individual pixels?
Briefly: luminance, measured in candela per square meter (cd/m^2), is the appropriate unit.
Less briefly: computer graphics is usually concerned with what things should look like to people. The units that describe this are:
"luminous flux" is measured in lumens, or lm, which are defined proportional to the total radiated power (in watts) of light at a particular wavelength.
"luminous intensity" is measured in candela, or cd, which can be defined as lumens per steradian (lm/sr).
Intuitively, when you spread the same amount of energy over a larger area, it becomes proportionately less bright. This yields two more quantities:
"irradiance" is the luminous flux per unit area. It is measured in lm/m^2, and is proportional to W/m^2.
"radiance" is the luminous intensity per unit area. It is measured in cd/m^2, or lm/(sr.m^2).
Now, to answer your questions:
Each pixel subtends a finite solid angle as seen from the camera, measured in steradians. In the context of your question, the relevant area is the area of the surface being rendered.
Luminance (measured in cd/m^2) represents surface brightness, and has the useful property of being invariant along any unobstructed path of observation, which makes it the most appropriate quantity for a rendering engine. The color of each pixel represents the average luminance over the solid angle occupied by that pixel.
Note that, by definition, a point source doesn't occupy any solid angle, so its luminance is technically infinite. The illuminance it produces is finite, though, and it should only contribute a finite (if potentially large) amount to a given pixel value. In any case, if you want to render point sources directly, you will need to treat them differently from area sources, and deal with the fact that quantities other than luminance are not invariant along a given ray.
When Jensen et al.'s paper "A Physically-Based Night Sky Model" uses an irradiance-related W/m^2 in a table of various sources of illumination, I would guess that their intention was to describe each source's relative contribution averaged over the entire night sky, abstracted from any visual detail.
Finally, note that truly physically based models need to integrate over the observable spectrum in order to evaluate brightness and color. Thus, such models must start out with a distribution of watts over visible wavelengths, and use the standard human colorimetric model to evaluate lumens.
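As a rough illustration of that last step, here is a minimal Java sketch that integrates a spectral power distribution against a luminosity curve to obtain lumens. The Gaussian used for V(lambda) below is only a crude placeholder of my own (the real curve should come from tabulated CIE data); the 683 lm/W factor is the defined peak luminous efficacy.

public class Photometry {

    // Crude Gaussian stand-in for the CIE photopic luminosity function V(lambda):
    // peaks at 555 nm; the 50 nm width is an assumption, not the real CIE curve.
    static double luminosity(double lambdaNm) {
        double x = (lambdaNm - 555.0) / 50.0;
        return Math.exp(-0.5 * x * x);
    }

    // Integrate spectral power (W per nm, sampled every stepNm starting at startNm)
    // against V(lambda); 683 lm/W converts the result to lumens.
    static double lumens(double[] wattsPerNm, double startNm, double stepNm) {
        double sum = 0.0;
        for (int i = 0; i < wattsPerNm.length; i++) {
            double lambda = startNm + i * stepNm;
            sum += wattsPerNm[i] * luminosity(lambda) * stepNm;
        }
        return 683.0 * sum;
    }

    public static void main(String[] args) {
        // 1 W spread evenly over 380-780 nm, sampled every 10 nm.
        double[] flat = new double[40];
        java.util.Arrays.fill(flat, 1.0 / 400.0);   // W per nm
        System.out.printf("approx. %.0f lm for a flat 1 W visible spectrum%n",
                lumens(flat, 380.0, 10.0));
    }
}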
The SI unit for brightness is the candela per square metre, so if you want to represent actual physical quantities it would be hard to argue against using that. As for how this unit relates to the brightness of an individual pixel: that is a function of the brightness of the part of the illumination source that falls within the pixel's viewing area, combined with contributions from elsewhere in the scene as calculated by the engine. Presumably this will vary considerably depending on the renderer.
I'm not really sure whether this fits here or would be better in a computer science or math forum, but since I'm searching for a concrete algorithm...
I have a 3D model which is defined either by a mesh or as an algebraic variety, and I want to remesh/approximate it using only a fixed, chosen type of congruent tile, e.g. isosceles triangles with a certain ratio of side length to base length. Is there an algorithm for that, or does anyone know the right name for this problem? I found some algorithms that come close to what I need, but they all mesh with some tolerance on edge lengths and allow tiles of different sizes.
For freeform shapes, tiling is achieved via a rather complicated algorithm. In real-world architecture there is a method for tiling a shape with as many identical tiles as possible while still approximating the shape, but it involves angle tolerances and all sorts of other tolerances that you can manipulate. Check out "paneling of freeform shapes".
I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single-color polygons to the screen and move the camera around the "scene". I even have a Z-buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple: ambient light seems simple enough, and directional light should be fairly simple too. But I really want point lighting, with the ability to move the light source around and cast very primitive shadows (mostly I don't want light shining through walls).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles; if a ray intersects a polygon, it lights that polygon and stops moving forward. However, when I think about a scene with multiple light sources and multiple polygons, with all those rays, I imagine it will get very slow. I also don't know how to handle the case where a polygon is far enough away from a light source that it falls between two rays. I would give each light source a maximum distance, and if I gave it enough rays, then there should be no point within that distance where any two rays are far enough apart to miss a polygon, but that only compounds my problem with the number of calculations to perform.
My question to you is: is there some trick to point light sources to speed them up, or just to organize them better? I'm afraid I'll just end up with a nightmare of nested for loops. I can't use OpenGL or Direct3D or any other cheats because I want to write my own.
If you want to see my results so far, here is a YouTube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has in the past generally been) done with very simple approximations; see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3D engines have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system, and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per-pixel) you would calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals & colour info are all rendered to intermediate buffers - which is then used to calculate lighting contributions. This way, the lighting calculations are not dependent on the geometry count. It does however, have a lot of other overhead.
There are a lot of options. Implementing anything much more complex than the basic models that dedicated graphics cards have used over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles which the light would reach? Ray-triangle intersections for every light with every poly is going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light; you would do this just before you start rasterizing, as you have to pass through all polys in any case.
You can calculate the lighting using any illumination model, even something very simple like Lambertian reflectance, which shades the surface based on the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are: if your surface normal is in world space, be sure to calculate the light vector in world space as well. There are advantages to calculating lighting in particular spaces; you can look into that later on, for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong; this is the shading model graphics cards used for many years.
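For example, a minimal Java sketch of that per-face, per-light loop with a Lambert term and simple inverse-square falloff (the Vector3 and PointLight helpers here are my own, not from the book):

final class Vector3 {
    final double x, y, z;
    Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vector3 sub(Vector3 o) { return new Vector3(x - o.x, y - o.y, z - o.z); }
    double  dot(Vector3 o) { return x * o.x + y * o.y + z * o.z; }
    Vector3 normalized()   { double l = Math.sqrt(dot(this)); return new Vector3(x / l, y / l, z / l); }
}

final class PointLight {
    final Vector3 position;
    final double  intensity;   // arbitrary engine units
    PointLight(Vector3 position, double intensity) { this.position = position; this.intensity = intensity; }
}

final class Lighting {
    // Diffuse (Lambert) term for one face: N.L clamped to zero, scaled by the
    // light's intensity and attenuated by the squared distance to the light.
    // All vectors must be in the same space (e.g. world space).
    static double lambert(Vector3 faceCenter, Vector3 faceNormal, PointLight light) {
        Vector3 toLight = light.position.sub(faceCenter);
        double  distSq  = Math.max(toLight.dot(toLight), 1e-6);   // avoid divide-by-zero
        double  nDotL   = faceNormal.normalized().dot(toLight.normalized());
        if (nDotL <= 0.0) return 0.0;                             // facing away from the light
        return light.intensity * nDotL / distSq;
    }

    // Total lighting for one face: ambient plus the sum over all lights
    // (light is additive), done once per face just before rasterizing.
    static double shade(Vector3 faceCenter, Vector3 faceNormal,
                        double ambient, PointLight[] lights) {
        double total = ambient;
        for (PointLight light : lights) {
            total += lambert(faceCenter, faceNormal, light);
        }
        return total;
    }
}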
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization.
It takes you through every step of programming a 3D engine. I am not sure what the Black Art book covers.
As I understand it, shadow mapping is done by rendering the scene from the perspective of the light to create a depth map. Then you re-render the scene from the POV of the camera, and for each point (fragment, in GLSL) in the scene you calculate the distance from there to the light source; if it matches what you have in your shadow map, then it's in the light, otherwise it's in shadow.
I was just reading through this tutorial to get an idea of how to do shadow mapping with a point/omnidirectional light.
Under section 12.2.2 it says:
We use a single shadow map for all light sources
And then under 12.3.6 it says:
1) Calculate the squared distance from the current pixel to the light source.
...
4) Compare the calculated distance value with the fetched shadow map value to determine whether or not we're in shadow.
Which is roughly what I stated above.
What I don't get is this: if we've baked all our lights into one shadow map, then which light do we compare the distance to? The distance baked into the map shouldn't correspond to any single light, because it's a blend of all the lights, isn't it?
I'm sure I'm missing something, but hopefully someone can explain this to me.
Also, if we are using a single shadow map, how do we blend it for all the light sources?
For a single light source the shadow map just stores the distance of the closest object to the light (i.e., a depth map), but for multiple light sources, what would it contain?
You've cut the sentence short:
We use a single shadow map for all light sources, creating an image with multipass rendering and performing one pass for each light source.
So the shadow map only ever contains the data for a single light source; they can get away with one map because they render one light at a time.
I think this flows into your second question as well: light is additive, so you combine the results from multiple lights simply by adding them together. In GPU Gems' case, they add the contributions together directly in the frame buffer, no doubt because of the relatively limited number of texture samplers available on GPUs at the time. Nowadays you would probably use a combination of accumulating in the frame buffer and accumulating directly in the fragment shader.
You also generally apply the test "the pixel is lit if its distance is less than or equal to the distance in the shadow buffer plus a little bit", rather than testing for exact equality, due to floating-point rounding error.
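Putting those two points together, here is a minimal CPU-side sketch of the per-light shadow test with a small bias, accumulating the lit contributions additively. The ShadowMap lookup is a hypothetical helper standing in for the per-light depth pass, which is omitted here.

interface ShadowMap {
    // Squared distance from the light to the nearest occluder in the
    // direction of the given world-space point (filled by the depth pass).
    double closestOccluderDistSq(double px, double py, double pz);
}

final class ShadowTest {
    static final double BIAS = 1e-3;   // "plus a little bit" to absorb rounding error

    // Returns true if the point can see the light (i.e. it is not in shadow).
    static boolean lit(double px, double py, double pz,
                       double lx, double ly, double lz, ShadowMap map) {
        double dx = px - lx, dy = py - ly, dz = pz - lz;
        double distSq = dx * dx + dy * dy + dz * dz;
        return distSq <= map.closestOccluderDistSq(px, py, pz) + BIAS;
    }

    // Lighting is additive: one shadow map and one pass per light, with each
    // lit contribution accumulated into the running total for the pixel.
    static double accumulate(double px, double py, double pz,
                             double[][] lightPositions, double[] lightContribs,
                             ShadowMap[] mapsPerLight) {
        double total = 0.0;
        for (int i = 0; i < lightPositions.length; i++) {
            double[] l = lightPositions[i];
            if (lit(px, py, pz, l[0], l[1], l[2], mapsPerLight[i])) {
                total += lightContribs[i];
            }
        }
        return total;
    }
}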
An old Direct3D book says:
"...you can achieve an acceptable frame rate with hardware acceleration while displaying between 2000 and 4000 polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want it to be is a triangle, because if a polygon is not a triangle it's generally tessellated into triangles anyway (e.g. a quad becomes two triangles, et cetera). A basic triangulation (tessellation) algorithm for a convex polygon is really simple: anchor the first vertex and turn each successive pair of vertices into a triangle with it (a triangle fan).
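For instance, a minimal sketch of fan triangulation for a convex polygon (the index-array representation is just my assumption for illustration):

final class Triangulate {
    // Returns n - 2 triangles for an n-sided convex polygon, each as three
    // vertex indices into the mesh's vertex list (so a quad becomes two triangles).
    static int[][] fan(int[] polygonIndices) {
        int n = polygonIndices.length;
        if (n < 3) throw new IllegalArgumentException("need at least 3 vertices");
        int[][] triangles = new int[n - 2][3];
        for (int i = 1; i < n - 1; i++) {
            triangles[i - 1][0] = polygonIndices[0];      // shared anchor vertex
            triangles[i - 1][1] = polygonIndices[i];
            triangles[i - 1][2] = polygonIndices[i + 1];
        }
        return triangles;
    }
}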
Here, a "polygon" refers to a triangle. All . However, as you point out, there are many more variables than just the number of triangles which determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can account for far more than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2,000-4,000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Furthermore, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw primitives call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
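As a rough illustration of the first three points, here is a minimal Java sketch using OpenGL through LWJGL (my own choice of binding, not something from this answer); the GL context, vertex attribute setup and shaders are assumed to exist elsewhere and are omitted.

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

// Indexed triangle list stored in GPU-memory buffers, drawn as one large batch.
public class BatchedMesh {
    private final int vbo;          // vertex buffer object (GPU memory)
    private final int ibo;          // index buffer object
    private final int indexCount;

    public BatchedMesh(float[] vertices, int[] indices) {
        indexCount = indices.length;

        vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW);

        ibo = glGenBuffers();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);
    }

    // One draw-primitives call for the whole batch (thousands of triangles).
    public void draw() {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0L);
    }
}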
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
Device device,
float length,
int sides
)
As others have already said, "polygons" here means triangles.
The main advantage of triangles is that, since three points define a plane, a triangle is planar by definition. This means that every point within the triangle is exactly defined as a (barycentric) combination of its vertices. A polygon with more vertices isn't necessarily planar, and its vertices don't define a unique surface.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also rigid (undeformable).
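To make that concrete (this is the standard barycentric form, not anything specific to Direct3D): any point P on a triangle with vertices A, B and C can be written as P = a*A + b*B + c*C with a + b + c = 1 and a, b, c >= 0, which is also what makes per-vertex attributes so straightforward to interpolate across a triangle.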