Why is the BRDF a quotient of two differentials?

I am not sure if this is the right place to ask this question, but I cannot think of a better one. Can somebody explain to me why the BRDF is a quotient of two differentials and not a quotient of the undifferentiated quantities, outgoing radiance and irradiance? Ideally with some scribbles. The only explanation I could find on this topic is the one from Wikipedia:
"The reason the function is defined as a quotient of two differentials and not directly as a quotient between the undifferentiated quantities, is because other irradiating light than dEi(ωi), which are of no interest for fr(ωi, ωr), might illuminate the surface which would unintentionally affect Lr(ωr), whereas dLr(ωr) is only affected by dEi(ωi)." (https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function) I do not understand this explanation.
I hope some of you can help me, and I am very thankful for every try.
P.S.: I hope I do not sound like a six-year-old, but English is not my mother tongue.

The BRDF describes the ratio of outgoing light to incoming light, but for one pair of directions at a time. To isolate a single incoming direction you have to consider a differential element of the incoming light: the irradiance delivered through an infinitesimally small cone of directions around ωi, written dEi(ωi), and the extra outgoing radiance dLr(ωr) that this light alone contributes along ωr. The surface is usually lit from many other directions as well, and all of them affect the total Lr(ωr), but only the contribution dLr(ωr) can be attributed to dEi(ωi). That is why the BRDF is defined as the quotient of the two differentials: the ratio stays finite and depends only on the pair of directions, even though both quantities individually shrink to zero.
I hope this sheds some light on the BRDF function.
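Written out in the notation of the Wikipedia quote above (this is the standard definition, nothing specific to that article; θi is the angle between ωi and the surface normal):

$$ f_r(\omega_i, \omega_r) = \frac{\mathrm{d}L_r(\omega_r)}{\mathrm{d}E_i(\omega_i)} = \frac{\mathrm{d}L_r(\omega_r)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i} $$

Conversely, the total outgoing radiance is recovered by integrating the contributions of all incoming directions over the hemisphere:

$$ L_r(\omega_r) = \int_{\Omega} f_r(\omega_i, \omega_r)\, L_i(\omega_i)\, \cos\theta_i\, \mathrm{d}\omega_i $$

The second equation is the point of the Wikipedia sentence: Lr(ωr) depends on all the light hitting the surface, while dLr(ωr) is the part caused by dEi(ωi) alone, so only the ratio of the differentials is a property of the pair of directions rather than of the particular lighting environment.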
Some extended discussions on the topic:
StackOverflow: Can an infinite be undifferentiated?
Github: Approximating BRDF

Related

BRDFs in 3D Rendering

I'm reading this famous article and I can't seem to understand the BRDF concept, especially the bold part:
The surface’s response to light is quantified by a function called the BRDF (Bidirectional Reflectance Distribution Function), which we will denote as f(l, v). Each direction (incoming and outgoing) can be parameterized with two numbers (e.g. polar coordinates), so the overall dimensionality of the BRDF is four.
To which directions is the author referring? Also, if this is 3d, then how can a direction be parameterized with two numbers and not three?
A BRDF describes the light reflectance characteristics of a surface. For every pair of incoming (l) and outgoing (v) directions, the BRDF tells you how much of the light arriving along l is reflected along v; those are the two directions the author is referring to. Each direction is a unit vector on the hemisphere above the reflection point, so two spherical angles (an elevation and an azimuth) are enough to describe it; the third Cartesian component carries no extra information because the vector has unit length. (An image from anu.edu.au illustrates this hemisphere parameterization.)
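A minimal sketch of the two-angle parameterization (not from the article; function names are just for illustration): a direction can be stored as two angles and converted to a 3D vector on demand, because the unit-length constraint removes one degree of freedom.

    import math

    def angles_to_dir(theta, phi):
        # theta: polar angle from the surface normal, phi: azimuth around it.
        # Returns a unit vector in the local frame where the normal is +Z.
        sin_t = math.sin(theta)
        return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

    def dir_to_angles(direction):
        # Recover (theta, phi) from a unit direction (x, y, z).
        x, y, z = direction
        return (math.acos(z), math.atan2(y, x))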

In computer graphics, why is everything an approximation?

While going through some computer graphics exam questions, I found this one and I was wondering what everyone thinks the answer is:
Given the statement: “In Computer Graphics, everything is an approximation”,
explain what you understand by this, in terms of modelling and rendering.
We can't have 100% quality in our models or lighting because the computational power and memory required to store and render that data would be huge.
Just imagine simulating photon-per-photon lighting in a scene with literally billions of particles, and how these photons would be colliding with and absorbed by atoms and molecules - in real time. It would basically be impossible because of the computational power needed.
In addition to what @Tokfrans said, think about mesh structures: they are all discrete versions of a surface, in which the surface is approximated by a set of triangles (polygons). Curves are the same story, as we use a polyline to represent them: even if the curve is infinitely smooth, we usually represent and render it as a set of connected line segments.
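As a small illustration of that last point, a minimal sketch (not from the answers above): a perfect circle stored as a polyline only converges to the true shape as you spend more segments.

    import math

    def circle_polyline(radius, segments):
        # Approximate a circle with a closed polyline of `segments` vertices.
        return [(radius * math.cos(2 * math.pi * k / segments),
                 radius * math.sin(2 * math.pi * k / segments))
                for k in range(segments)]

    def max_deviation(radius, segments):
        # Largest gap between the true circle and the polyline (at chord midpoints).
        return radius * (1.0 - math.cos(math.pi / segments))

    # Doubling the segment count roughly quarters the error:
    # max_deviation(1.0, 16) ~ 0.019, max_deviation(1.0, 32) ~ 0.0048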

"ray through vertex" special case when detecting point in polygon

To detect whether a point is in a polygon, you cast a ray from the point to infinity and count how many of the polygon's sides it crosses... simple enough. My problem is that if the ray passes through one of the polygon's vertices, it is counted as intersecting two segments, and the point is considered outside the polygon. I altered my function to count only one of the segments when the ray passes through a vertex, but there are cases where a ray could pass through a vertex while the point really is outside. Take this image as an example:
If you assume the point in the top left is "infinity" and cast a ray to either of the other points, both rays pass through a vertex of the polygon and would count the same number of crossings, even though one point is inside and one is outside.
Is there a way to compensate for that, or do I just have to assume that those fringe cases won't pop up?
If the ray crosses a side exactly on a vertex, only count that side if the other vertex is above the ray. That will fix your corner case.
For example in the picture you posted, the lower ray crosses two sides of the square at the top-left vertex, but one side is above the ray and the other below, so that contributes 1 and the target point is found to be inside. The upper ray crosses two sides at the top-right vertex, both sides are below the ray, so they contribute 0 to the count and the target point is found to be outside.
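A minimal sketch of that rule (not the original poster's code; a horizontal ray toward +x is assumed and the function name is just for illustration): a vertex lying exactly on the ray is treated as "not above", so a side touching the ray at a vertex is counted only when its other vertex is strictly above.

    def point_in_polygon(point, polygon):
        # Even-odd test with a horizontal ray from `point` toward +x.
        px, py = point
        inside = False
        n = len(polygon)
        for i in range(n):
            ax, ay = polygon[i]
            bx, by = polygon[(i + 1) % n]
            # Count the edge only if exactly one endpoint is strictly above the
            # ray. A vertex lying exactly on the ray counts as "not above", so a
            # side touching the ray at a vertex is counted only if its other
            # vertex is above the ray.
            if (ay > py) != (by > py):
                # x coordinate where the edge crosses the ray's height
                x_cross = ax + (py - ay) * (bx - ax) / (by - ay)
                if x_cross > px:
                    inside = not inside
        return inside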
Update:
I remembered reading an article which describes a technique for dealing with singular cases in general. Please read my other answer if interested.
While my first answer should do the trick for this simple problem, I can't help but mention that there exist general techniques for dealing with these kinds of special cases.
This article describes a technique for dealing with these kinds of issues in general. And one of the first examples they provide happens to be the algorithm you ask about!
The idea is to apply automatic differentiation, a.k.a. dual numbers, to compute symbolic perturbations.
By the way the same technique can also be used to avoid handling 0/0 as a special case in programs!
Here is the blog post I originally learned this from, it gives some great background to the technique, and the author blogs a lot about automatic differentiation (AD).
Despite appearances, AD is a very practical technique, especially in languages with good support for operator overloading (e.g. C++, Haskell, Python, ...), and I have used it in "real life" (industrial applications in C++).
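A minimal sketch of the perturbation idea (an illustration of the dual-number trick, not code from the linked article or blog post): represent a coordinate as real + eps·ε with ε an infinitesimal, and let comparisons fall back to the infinitesimal part only on exact ties, so a ray can never pass exactly through a vertex.

    from dataclasses import dataclass
    from functools import total_ordering

    @total_ordering
    @dataclass(frozen=True)
    class Dual:
        # A number real + eps * epsilon, where epsilon is an infinitesimal.
        real: float
        eps: float = 0.0

        def __sub__(self, other):
            return Dual(self.real - other.real, self.eps - other.eps)

        def __lt__(self, other):
            # Compare real parts first; use the infinitesimal parts only to
            # break exact ties, so degenerate inputs resolve consistently.
            if self.real != other.real:
                return self.real < other.real
            return self.eps < other.eps

    # Perturb each vertex y-coordinate by a distinct infinitesimal amount,
    # y_k -> Dual(y_k, k + 1): the comparisons against the ray's height in the
    # crossing test then never see an exact tie.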
Send the ray in another direction.
If you try n+1 different directions (where n is the number of polygon vertices), at least one of them surely will not pass through any vertex.
This simplifies the code compared to handling the corner cases explicitly.
The worst case becomes O(n) * CheckComplexity(n), which is likely O(n^2). If that is not acceptable, you can instead sort all vertices by their direction from the point and pick a ray direction in the middle of some interval between consecutive directions. This gives O(n log n).
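A sketch of that O(n log n) variant (the function name is hypothetical): sort the angles from the query point to every vertex and aim the ray halfway between two consecutive distinct angles, so it cannot hit any vertex.

    import math

    def safe_ray_angle(point, polygon):
        # Return a ray angle (radians) from `point` that misses every vertex.
        px, py = point
        angles = sorted({math.atan2(vy - py, vx - px) for vx, vy in polygon})
        if len(angles) < 2:
            return angles[0] + math.pi / 2.0  # all vertices in one direction
        # The midpoint of a gap between two consecutive distinct angles cannot
        # coincide with any vertex direction.
        return (angles[0] + angles[1]) / 2.0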

Are there any rendering alternatives to rasterisation or ray tracing?

Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.
Aagh! These answers are very uninformed!
Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").
A classic (though now somewhat dated) categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.
These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3 or a combination. Various ray tracing hardware designs have been proposed, and sometimes built, but never caught on; meanwhile modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage compared to their hard-coded rasterization techniques. Other more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.
Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (that which bounces between objects, as opposed to coming directly from lights).
"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).
People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. This is an orthogonal problem to ray trace vs rasterization. For example, you can ray trace nurbs directly, or you can dice the nurbs into triangles and trace them. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize high-order parametric surfaces in scanline order (c.f. Lane/Carpenter/etc CACM 1980).
There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least of which I know) that provides truly realistic (i.e. all the laws of optics are obeyed) rendering if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing (given that it effectively does the opposite and simulates the paths taken by photons from the light sources to the camera) - yet this is its only disadvantage. It's certainly an interesting algorithm, though you're not going to see it in wide-scale use until well after ray tracing (if ever).
The Rendering article on Wikipedia covers various techniques.
Intro paragraph:
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point-of-view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces; and ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
From those descriptions, only radiosity seems different in concept to me.

Volumetric particles

I'm toying with the idea of volumetric particles. By 'volumetric' I don't mean an actual 3D model per particle - that is usually more expensive and harder to blend with other particles. What I mean is 2D particles that look as close to volumetric as possible.
Right now what I/we have tried is particles with an additional local Z texture (spherical, for example), and we compute the alpha transparency from the combination of the texture's alpha value and the depth proximity to the scene, which is improved by the fact that the particle does not have a single planar Z.
I think a cool addition would be interaction with lighting (and shadows as well), but the question is what the lighting formula would look like (taking transparency into account; let's assume we are talking about smoke and dust/clouds, not additive blending) - any suggestions are welcome.
I also thought about adding normals, so I could squeeze everything into two textures:
Diffuse & Alpha texture.
Normal & 256-level-precision Z channel texture.
I'm asking this to see what other directions might be worth exploring and to get your ideas on a proper lighting equation.
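To pin down the setup described above, a rough sketch of the depth-based fade (a reading of the description, not the asker's actual shader; all names are hypothetical): the per-pixel particle depth is reconstructed from the local Z texture, and the alpha is faded as that depth approaches the scene depth.

    def soft_particle_alpha(tex_alpha, particle_depth, z_offset, scene_depth,
                            fade_distance):
        # particle_depth - z_offset: per-pixel depth reconstructed from the
        # particle's local Z texture (the non-planar Z mentioned above).
        pixel_depth = particle_depth - z_offset
        # Fade out where the particle gets close to (or behind) scene geometry.
        fade = (scene_depth - pixel_depth) / fade_distance
        fade = min(max(fade, 0.0), 1.0)
        return tex_alpha * fade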
It sounds like you are asking for information on techniques for the simulation of participating media: "Participating media may absorb, emit and/or scatter light. The simplest participating medium only absorbs light. That means that light passing through the medium is attenuated depending on the density of the medium."
Here are some links to example images and to Frisvad, Christensen, and Jensen's SIGGRAPH 2007 paper (including the PDF).
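For the simple absorbing case in that quote, the standard relation is the Beer-Lambert law (a general participating-media formula, offered as a starting point rather than anything taken from the linked paper): along a view ray of length s through a medium with extinction coefficient σt, the transmitted fraction of radiance is

$$ T(s) = \exp\!\left(-\int_0^s \sigma_t\bigl(\mathbf{x}(t)\bigr)\,\mathrm{d}t\right) $$

For a particle slice of roughly constant density this reduces to T = e^(-σt·d), with d the thickness along the view ray (which the local Z texture approximates), and the slice's alpha becomes α = 1 - T. The same factor attenuates the light on its way in, so a simple single-scattering shading term looks like

$$ L \approx L_{\text{light}}\; e^{-\sigma_t\, d_{\text{light}}}\; p(\theta) $$

with d_light the thickness toward the light and p(θ) a phase function (a constant 1/4π for isotropic scattering).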
A nice paper on using spherical billboards to represent volumetric effects:
http://www.iit.bme.hu/~szirmay/firesmoke_link.htm
It doesn't handle participating media, though.
See Volume Rendering and Voxel.
